
A REPORT ON ADVANCED SOFTWARE ENGINEERING

PRODUCT LINE ARCHITECTURE

PRESENTED TO: Sir Touseef Tahir

DEPT. OF COMPUTER SCIENCE, CIIT, LAHORE

PREPARED BY:
1. Muhammad Bilal Baber
2. Muhammad Salman Aziz
3. Areeb Gillani
4. Hafiz Muhammad Adeel Bin Yameen
5. Adnan Maqbool
6. Iqbal Ahmad
7. Muhammad Imran
8. Hafiz Naseeb Ahmad
9. Tahir Iqbal
10. Tanseef Fahad
11. Ali Shoaib (sp10-bcs-113)

Registration numbers: sp10-bcs-057, sp10-bcs-065, sp10-bcs-037, sp10-bcs-025, sp10-bcs-005, sp10-bcs-035, sp10-bcs-061, sp10-bcs-109, sp10-bcs-095, sp10-bcs-097

DATED: 28th May, 2012

TABLE OF CONTENTS

1. Letter of Transmittal
2. Minutes of Meeting
3. Introduction & Components
4. Planning in Product Line Architecture
5. Life Cycle of Product Line Architecture
6. Role of Product Line Architecture in Industries
7. Survey of Product Line Architecture
8. Product Line Architecture Designs
9. Measuring Product Line Architecture
10. Testing of a Software Product Line Architecture
11. Product Family Engineering
12. Software Product Family Evaluation

May 28th, 2012

Sir Touseef Tahir,
Dept. of Computer Science,
CIIT, Lahore

Dear Sir,

We are submitting to you the report, due May 8, 2012, that you assigned to us. The report is entitled "Advanced Software Engineering: Product Line Architecture". The purpose of the report is to present our research and findings on Product Line Architecture. It covers the life cycle, strategies, design, specification, importance, and planning of Product Line Architecture, along with an analysis of its use around the globe. The material of this report is taken from the books and websites listed in the references and is the combined work of our group members. Should you have any questions concerning the report and its material, please feel free to contact M. Bilal Baber at bilal_baber619@hotmail.com.

Sincerely,
M. Bilal Baber

Minutes of the Meeting

SR # | GROUP MEMBER NAME               | PART IN THE REPORT
-----|---------------------------------|--------------------------------------------------------
1    | Muhammad Bilal Baber            | Examples & Testing of Product Line Architecture
2    | Muhammad Salman Aziz            | Product Family Engineering & Evaluation
3    | Hafiz Muhammad Adeel Bin Yameen | Designs of Product Line Architecture
4    | Areeb Gillani                   | Survey of Product Line Architecture
5    | Adnan Maqbool                   | Introduction & Components of Product Line Architecture
6    | Muhammad Imran                  | Life Cycle of Product Line Architecture
7    | Tanseef Fahad                   | Role of Product Line Architecture in Industries
8    | Tahir Iqbal                     | Evaluation of Product Line Architecture
9    | Hafiz Naseeb                    | Measuring Product Line Architecture
10   | Iqbal Ahmad                     | Planning of Product Line Architecture
11   | Ali Shoaib                      | Agile Product Line Architecture

An INTRODUCTION
Abstract

Today's software design methodologies are aimed at one-of-a-kind applications, designs are expressed in terms of objects and classes, and software must be coded manually. We argue that future software development will be very different and will center around product-line architectures (i.e., designs for families of


related applications), refinements (a generalization of today's components), and software plug-and-play (a codeless form of programming).

Overview

A software product line (SPL) is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way.

Software product lines are emerging as a viable and important development paradigm, allowing companies to realize order-of-magnitude improvements in time to market, cost, productivity, quality, and other business drivers. Software product line engineering can also enable rapid market entry and flexible response, and provide a capability for mass customization. We are working to make software product line practice a dependable, low-risk, high-payoff practice that combines the necessary business and technical approaches to achieve success.

WHAT IS A SOFTWARE PRODUCT LINE?

A software product line is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way.

This definition is consistent with the definition traditionally given for any product line. But it adds more: it puts constraints on the way in which the systems in a software product line are developed. Why? Because substantial production economies can be achieved when the systems in a software product line are developed from a common set of assets in a prescribed way, in contrast to being developed separately, from scratch, or in an arbitrary fashion. It is exactly these production economies that make the software product line approach attractive.

How is production made more economical?

Each product is formed by taking applicable components from the base of common assets, tailoring them as necessary through preplanned variation mechanisms such as parameterization or inheritance, adding any new components that may be necessary, and assembling the collection according to the rules of a common, product-line-wide architecture. Building a new product (system) becomes more a matter of assembly or generation than one of creation; the predominant activity is integration rather than programming. For each software product line, there is a predefined guide or plan that specifies the exact product-building approach.

Certainly the desire for production economies is not a new business goal, and neither is a product line solution. But a software product line is a relatively new idea, and it should seem clear from our description that software product lines require a different technical tack. The more subtle consequence is that software product lines require much more than new technical practices.

The common set of assets and the plan for how they are used to build products don't just materialize without planning, and they certainly don't come free. They require organizational foresight, investment, planning, and direction. They require strategic thinking that looks beyond a single product. The disciplined use of the common assets to build products doesn't just happen either. Management must direct, track, and enforce the use of the assets. Software product lines are as much about business practices as they are about technical practices.

Software product lines give economies of scope, which means that you take economic advantage of the fact that many of your products are very similar, not by accident, but because you planned it that way. You make deliberate, strategic decisions and are systematic in effecting those decisions.

HISTORY OF PLA:
In the 1500s, it was common and obvious knowledge that the Earth was the center of the Universe; all heavenly bodies (moon, sun, planets, stars) acknowledged the Earth's dominance by revolving around the Earth. For the most part, geocentricity provided an adequate model of the Universe. Predictions of lunar eclipses were amazingly accurate. (Geocentricity was correct.) So too were predictions of the positions


of fixed stars. (They didn't move.) But the motion of planets was problematic because they did not traverse the sky in simple ways; planets would periodically move backwards against the background of stars before continuing their forward motion. (Today we call this retrograde motion.) Scientists of that era proposed inscrutable models that utilized rotating nested spheres to explain retrogrades, but ultimately these models failed to predict planetary motions accurately.

In 1543, Copernicus proposed a radically different explanation of the Universe by recognizing the heliocentric nature of our solar system. Not only did heliocentricity explain retrograde motions in a simple and elegant manner, it laid the foundation for today's understanding of planetary systems. Copernicus's result is an extreme, but clear, illustration of how science progresses: negating commonly held truths yields models of the Universe that are not only consistent with known facts, but are more powerful and lead to deeper understandings and results that simply could not be obtained otherwise. The scientific results that I and many others have made in our careers are more common examples of this paradigm, i.e., results that led to incremental advances.

Today we live in a universe of software. Software is elegantly explained in terms of objects that are instances of classes, classes are related to other classes via inheritance, and webs of interconnected objects accurately model run-time executions of applications. Object-orientation has revolutionized our understanding of software. We have abandoned a function-centric view of software, where functional decomposition guided our understanding of appropriate encapsulations, for a data-centric view where object/class encapsulations reign supreme. The OO view of software is indeed very powerful and will remain current for some time. But years from now, will we have an alternative view of software? The answer, of course, is


yes. But what will it be? What will replace the abstractions of objects and classes? How will we produce and specify software? Our clairvoyance is guided by negating some obvious contemporary truths and seeing if a consistent explanation of our software universe remains. Consider the following three truths: contemporary OO design methodologies are aimed at producing one-of-a-kind applications, application designs are expressed in terms of objects and classes, and we manually code our implementations given such designs. Each of these points belabors the obvious.

[Invited Presentation, Smalltalk und Java in Industrie und Ausbildung (Smalltalk and Java in Industry and Practical Training), Erfurt, Germany, October 1998.]

At the same time, it is not difficult to envision change. Future design methodologies will not focus on unique applications but rather on families of related applications, called product-line architectures (PLAs). Designs of PLAs will not be expressed purely in terms of objects and classes, but rather in terms of components. Actually, expressing software designs in terms of components is already a contemporary phenomenon; so in this paper we anticipate the next step beyond today's components, called refinements. And finally, application development will exploit codeless programming. Both industry and academia are moving toward software plug-and-play, i.e., the ability to assemble applications of a product line quickly and cheaply merely through component composition; no source code has to be written. These ideas are further motivated and clarified in the following sections.

Product-Line Architectures


A product-line architecture (PLA) is a blueprint for creating families of related applications. PLAs acknowledge the fact that companies don't build individual products, but instead create families of closely related products. The importance of PLAs is evident: software complexity is growing at an alarming rate, and the costs of software development and maintenance must be restrained. PLAs enable companies to amortize the effort of software design and development over multiple products, thereby substantially reducing costs.

Recognizing the need for PLAs is ancient history: McIlroy motivated PLAs in 1969 [McI69] and Parnas described the benefits of software families in the mid-1970s [Par79]. What has changed since these pioneering predictions is that proven methodologies for building and understanding PLAs are now available. In fact, the first steps in evolving contemporary one-of-a-kind OO design methodologies toward PLAs have occurred. Jacobson, Griss, and Jonsson [Jac97], for example, advocate variation points, i.e., one or more locations at which variations will occur within a class, type, or use case. Different application instances will utilize different variations, which is clearly the beginning of a product-line architecture.

Generalizing Components to Refinements

Today's newest object-oriented methodologies are not based purely on objects and classes, but on components. A component is an encapsulated suite of interrelated classes. Its purpose is to scale the unit of design and software construction from an individual class to that of a package or OO framework. The most recent design methodologies from Catalysis [DS98] and Rational [Rat98], for example, are explicitly named Component-Based Software Designs, where components could be OO packages, COM or CORBA components,

Java Beans, and so on.

To give readers a perspective of where work on software componentry is headed, let me share my experiences of working for fifteen years on component-based product-line architecture methodologies. I have encountered many domains where components simply cannot be implemented as OO packages or as COM or CORBA components. The reason is one of performance: applications that were constructed with these components were so horrendously inefficient that no sane person would ever use them. Does this mean that components can't exist for these domains? Certainly not; if anything, building applications from components is a goal that we want to achieve for all domains. What it actually means is that the components of these domains must be implemented differently. Implementations must break component encapsulations for domain-specific optimizations. Instead of equating components with concrete source code that would be statically or dynamically linked into an application, a more appropriate implementation might be as a metaprogram, i.e., a program that generates the source the application is to execute by composing prewritten code fragments.[1] Or, if metaprograms are not sophisticated enough to produce efficient source, a component might be implemented as a set of rules in a program transformation system [Par83, Bax92]. (A program transformation system is a technology by which program specifications are transformed into efficient source by applying semantically correct program rewrite rules; code motion and complex optimizations are examples.)

Given this observation, I realized that today's notions of components are simply too implementation-oriented or implementation-specific. We need to separate a component's abstraction from its possible implementations, where OO packages, COM, metaprograms, and program transformation systems are merely


different component implementation technologies, each with their own competing strengths and weaknesses. The component abstraction that we seek, one that unifies this wide spectrum of technologies, is that of a (data) refinement.

What is a refinement? I'll give an informal example now and a more precise definition later. Have you ever added a new feature to an existing application? What you discover is that changes aren't localized: multiple classes of an application must be simultaneously updated (e.g., adding new data members and methods, replacing existing methods with new methods, etc.). All of these modifications must be made consistently and simultaneously if the resulting application is to work correctly. It is this collection of modifications, called a refinement, that defines a component or building block for that feature. By analogy, removing this component/feature from an application requires all of its modifications to be simultaneously and consistently removed. (How such refinements are encapsulated and realized is considered in Section 4.2.)

This line of reasoning led me to conclude that refinements are central to a general theory of product-line architectures. Abstracting away component implementation details reveals a common set of concepts that underlie the building blocks of domains and product-line architectures. The benefit in doing so is a significant conceptual economy: there is one way in which to conceptualize the building blocks of product-line architectures, yet there are multiple ways in which blocks can be implemented. The choice of implementation is ultimately PLA/domain-specific.

Software Plug-and-Play
Programming with today's components is analogous to the old-fashioned way circuit boards were created

decades ago: one collects a set of chips, plugs them into a board, and wire-wraps connections between chip pins to implement a particular functionality. In essence, this is the classical library paradigm: one collects a set of software modules and writes glue code to interweave calls to different modules to implement a particular functionality. This has always been a very successful and largely manual process for creating customized hardware and software applications.

It is obvious to most people that software construction allows for much greater degrees of automation. It should be possible to drop a component into an existing system and, by imbuing the component with domain-specific expertise, have it wire-wrap its connections and thereby automate a tedious coding process that experts would otherwise have to do manually. Hardware plug-and-play is a practical realization of this idea. Today we can customize PC configurations merely by connecting components. Although the connections are simple for us, a myriad of low-level connections are being made via standardized hardware interfaces. Hardware plug-and-play makes PC extensions and reconfigurations almost trivial and has empowered novices to do the work once reserved for high-paid experts. We need the same for building and extending software applications by plugging and unplugging components with standardized interfaces.

[1] Metaprograms can be implemented in OO languages. My point is that a metaprogram is not application source code, but is considerably more abstract and fundamentally different: it is a generator of customized application source.

Recap
Software development will inevitably evolve toward product-line architectures, distinguishing component


abstractions (refinements) from their implementations, and software plug-and-play. The challenge is to achieve these goals. In the following sections, I'll outline a particular way to do so.

Understanding Product-Line Architectures

There are many results that are relevant to this view of the future. Work on extensible or open systems is the first step toward creating PLAs [Obe98]. Research on domain-specific software architectures was to develop PLAs for a variety of military domains [Met92]. Aspect-Oriented Programming, while not specifically aimed at PLAs, certainly has much in common with the basic mechanisms that are needed [Kic97]. Subject-Oriented Programming, where different applications are constructed by composing different subjects, clearly deals with PLAs [Har83]. Feature-Oriented Programming extends OO methodologies to product lines [Coh98]. There are many other relevant efforts (see [Cza97]). Please note that all of these approaches (including that of Section 3) are not identical in their technical details (one shouldn't expect them to be), but the essential problems that they address are remarkably similar.

Common to most models of product-line architectures are three ideas: (1) identifying the set of features that arise in a family of applications, (2) implementing each feature in one or more ways, and (3) defining specific applications of a PLA by the set of features they support plus the particular implementation of each feature.

In the early 1990s, I encountered a classic example of a PLA. What attracted me to this example was the unscalability of its design. The Booch Components have undergone a long evolutionary history of improvement [Boo87-93]. The original version was in Ada, containing over 400 different data structure
Page 14 of 255

templates/generics. For example, there were 18 varieties of dequeues (i.e., double-ended queues). How did the number 18 arise? Booch proposed a PLA where dequeue data structures had three features: concurrency, memory allocation, and ordering. The concurrency feature had three implementations (users had to choose one): sequential (meaning there was no concurrency), guarded (programmers had to call waits and signals explicitly), and synchronized (waits and signals would be called automatically). The memory allocation feature also had three implementations (choose one): bounded (meaning that elements were stored in an array), unbounded (elements were stored on a list), and dynamic (elements were stored on a list with memory management). The ordering feature had two implementations (choose one): elements were maintained in key order, or they were unordered. Because feature implementations were orthogonal, there were 18 = 3 × 3 × 2 distinct variations of dequeues.

This approach had glaring problems: what happens when a new feature is added, such as persistence? The consequence is that every data structure/dequeue in transient memory must be replicated to exist in persistent memory. That is, the library doubles in size! The problem is actually worse: if one examines even contemporary data structure libraries for C++ (e.g., STL) and Java, one discovers that the data structures offered are elementary and simplistic. The data structures found in, for example, operating systems, compilers, database systems, etc. are much more complicated. What this means is that people are constantly adding new features. This led me to conclude that no conventional library could ever encompass the enormous spectrum of data structures (or more generally, applications of a product line) that will be encountered in practice. Clearly, there had to be a better way to build PLAs.

The general problem that the Booch Components exhibited was the lack of library scalability. Given n


optional features, one has a product line of O(2^n) distinct applications. Or, more generally, if each feature has m different implementations, the product line is of size O((1+m)^n). What this tells us is that libraries of PLAs shouldn't contain components that implement combinations of features. Instead, scalable libraries contain components that implement individual and largely orthogonal features. A scalable library is quite small, on the order of O(m·n) components, but the number of applications that can be assembled from component compositions is very large, e.g., O((1+m)^n) [Bat93, Big94].

To explore the possible impact of this approach, Singhal reengineered a C++ version (v1.47) of the Booch Components (see Table 1 and [Sin96]). For that part of the library to which these ideas applied, he reduced the number of components from 82 to 22 and the number of lines of source from 11K to under 3K, and increased the number of distinct data structures (applications) that could be created from 169 to 208 (i.e., there were applications in Booch's product line that were unimplemented). When benchmarks were run on corresponding data structures, Singhal's data structures were more efficient. More importantly, it was very easy to add new features (i.e., components) to Singhal's library that would substantially enlarge the number of data structures that could be created; this couldn't be done with the Booch design. Clearly, this was a big win.

The question then became: can these results be replicated for more complicated domains? The answer is yes. Scalable PLA libraries have been built for domains as varied as database systems (where applications are individual DBMSs that can exceed 80K LOC), protocols, compilers, avionics, hand-held radios, and audio-signal processing [Cog93, Bat93, Hei93, Hut91, Ren96]. Generally the PLAs for these domains were created in isolation from each other, which means that researchers are reinventing a common set of


ideas. In the following section, we review these ideas, which we have collectively called GenVoca; the name stems from the first known PLAs based on this approach, namely Genesis and Avoca [Bat92].

GenVoca

The obvious way in which to create plug-compatible components is to define standardized interfaces. GenVoca takes the idea of components that export and import standardized interfaces to its logical conclusion.

Virtual Machines. Every domain/PLA of applications has a small set of fundamental abstractions. Standardized interfaces, or virtual machines, can be defined for each abstraction. A virtual machine is a set of classes, their objects, and methods that work cooperatively to implement some functionality. Clients of a virtual machine do not know how this functionality is implemented.

Components and Realms. A component or layer is an implementation of a virtual machine. The set of all components that implement the same virtual machine forms a realm; effectively, a realm is a library of plug-compatible components. In Figure 1a, realms S and T each have three components, whereas realm W has four. All components of realm S, for example, are plug-compatible because they implement the same interface. The same holds for realms T and W.

Implementations. A GenVoca model says nothing about when components/refinements are to be composed (the options are dynamically at run-time or statically at compile-time) or how components/refinements are to be implemented (OO packages, COM components, metaprograms, program transformation systems, etc.). The bindings of these implementation decisions are made after the model is created and are


largely determined by the domain and the efficiency of constructed applications. Generally, OO and COM implementations offer no possibilities of static optimizations. Metaprogramming implementations automate a wide range of common and simple static domain-specific optimizations; program transformation systems offer unlimited static optimization possibilities. Table 2 tallies the distribution of GenVoca PLAs according to their implementations. Most use a uniform component implementation and binding-time strategy. Others, like hand-held radios, optimize components that are composed statically, and otherwise perform no optimizations for components that are composed dynamically.

Experience

GenVoca PLAs have been very successful. Performance of synthesized applications is comparable to, or substantially better than, that of expert-coded software. Productivity increases range from a factor of 4 (where new components have to be written) to several orders of magnitude (where all components are available). Further, an 8-fold reduction in errors has been reported. See [Bat97b] for details.

There are problems and limitations with every approach, and GenVoca is no exception. Both technical and nontechnical issues abound. Experience has revealed no technical showstoppers; to be sure, there are plenty of interesting technical challenges, but these are solvable. The hardest problems are nontechnical.

Technical Problems

Testing and Verification. The most challenging open problem today is testing. We can synthesize high-performance, customized applications quickly and cheaply, but questions about the validity of the generated source remain. It is still necessary to subject synthesized applications to a battery of regression tests to gain


a level of confidence that each is (sufficiently) correct. The ultimate goal of PLAs is to shrink the release time for new products; it is not yet clear how PLA organizations can reduce testing (see [Edw98]). There is hope from verification research. Formal approaches to verified software are often based on (data) refinements (e.g., [Sri96]). The Ensemble/Horus projects at Cornell, for example, are GenVoca-like PLAs for building verified distributed applications from symmetric components [Ren96, Hay98]. Individual components have been verified, so high-assurance statements can be made about their compositions. Börger's Evolving Algebra (EV) addresses the problem of scaling proofs for individual applications to families of related applications [Boe96]. EV is clearly based on layered refinements.

Refinements. There has been a tremendous amount of work, both theoretical (e.g., [Bro86, Sri96, Boe96]) and pragmatic (e.g., [Nei84, Bax92]), on refinements. My work on GenVoca has admittedly evolved largely in isolation from this work. The primary reason was not lack of interest, but project objectives. Most of the fodder that I used to develop GenVoca stems from my implementation projects and those of others whose goals were to explore and develop software plug-and-play PLAs. It was not in the purview or interest of these individual projects to generalize the idea of layering to other domains. However, doing so raises the connection with refinements. It is an open problem to unify theoretical results with experimental findings.

Design Wizards. Common problems that users of GenVoca PLAs encounter are not knowing what components to use or what combinations of components satisfy application requirements. Until a certain level of expertise develops, it is not difficult for users to specify type equations that are semantically correct but are


not appropriate (e.g., performance-wise) for the target application. An expert in the domain and PLAs would critique a proposed design by saying "Don't use this combination of components, but this combination instead, for the following reasons." This is the kind of expertise that needs to be automated. A key to this problem is that application designs are expressed as equations. Expert knowledge of what components to use and when to use them can be captured as algebraic rewrite rules (i.e., replace expression x with y under condition z because of reason r). By collecting such rules and using standard rule-based optimizers, a tool called a design wizard can be developed that will automatically (a) optimize equations (application designs) given workload specifications and (b) critique equations to avoid blunders. We have developed a design wizard for one domain [Bat98]. However, it is an open problem to show that the approach is general enough to work for other domains.

Standardized Interfaces. Applications of a product line rarely have the same programming interface. How then can applications with variable interfaces be constructed from components with standardized interfaces? In truth, standardized interfaces do not mean cast-in-concrete interfaces. It is possible to insert a component deep into the bowels of an application and have the exported interface of the application change. It is only recently that we have found a general solution to this problem [Sma98]. (See the next topic.)

Bridging Communities. Challenging technical problems often arise due to the inability of others to understand the concept of refinements in their own terms. The reason is that refinements often require unusual juxtapositions of ideas, and this, in turn, leads to interesting technological advances.


As a case in point, it took us several years to understand a fundamental connection between refinements and OO, that is, how are refinements expressed as OO concepts? The answer is rather simple. A refinement of a class may involve the addition of new data members, the addition of new methods, and the overriding of existing methods. Such refinements are easily expressed via subclassing: a subclass (that encapsulates the refinement's modifications) is declared with the original class as its superclass. GenVoca layers encapsulate suites of interrelated classes. Figure 2a shows a (terminal) shaded layer that encapsulates three classes. Figure 2b shows a darker layer that also encapsulates three classes; when it is stacked on top of the shaded layer, one class becomes a subclass of the leftmost shaded class, while the others begin new inheritance hierarchies. Figure 2c shows a white layer that encapsulates four classes. When stacked upon the darker layer, each class becomes a subclass of an existing class. Lastly, Figure 2d shows the effect of adding a black layer, which encapsulates two classes. The application (which is defined by the resulting layer stack) instantiates the most refined classes (i.e., the terminal classes) of these hierarchies. These classes are circled in Figure 2d; the non-terminal classes represent intermediate derivations of the terminal classes.

Thus, when GenVoca components are composed, a forest of inheritance hierarchies is created. Adding a new component (stacking a new layer) causes the forest to get progressively broader and deeper [Sma98]. Although straightforward, these ideas are neither obvious nor common. It is through the use of inheritance that new operations/methods can be added to multiple application classes merely by plugging in a component.

It is possible to express the ideas of Figure 2 using mixins. (A mixin is a class whose superclass is specified by a parameter.) We wanted a clean expression of these ideas in Java.[3] Unfortunately, neither Java nor Pizza


[Ode97] (a dialect of Java that supports parametric polymorphism) supports parameterized inheritance. What we really needed was an extensible Java language to which it was possible to add features for expressing refinements.

[3] We chose Java because of the language's simplicity, and also to show more clearly which concepts programming languages lack for expressing refinements.

This led us to develop the Jakarta Tool Suite (JTS), which is a PLA of Java dialects [Bat98]. JTS is, in fact, a GenVoca generator by which dialects of Java are assembled by composing symmetric components. Presently, JTS has components that extend Java with features that include Lisp backquote/comma (to specify and manipulate code fragments), hygienic macros (to avoid the inadvertent-capture problem), parameterized inheritance, and a domain-specific language for container data structures. JTS is bootstrapped: JTS itself is written in a dialect of Java that was produced by JTS.

Conclusions

Heliocentricity was advanced in 1543, yet 60 years later it had made no impact. One reason was that most people didn't care about retrograde motions. Even open-minded academics, such as Jean Bodin, were skeptical [Boo83]: "No one in his senses, or imbued with the slightest knowledge of physics, will ever think that the earth staggers up and down around its own center and that of the sun... For if the earth moved, we would see cities, fortresses, and mountains thrown down... Arrows shot straight up or stones dropped from towers would not fall perpendicularly, but either ahead or behind..." How then did heliocentricity take hold? Acceptance was gradual, as volumes of evidence from telescopic


observations of the heavens accumulated. (The telescope was invented in the early 1600s.) Heliocentricity was consistent with other theories, such as those on earth tides. But certainly a contributing factor was its simplicity and elegance in addressing practical scientific problems that were otherwise difficult or impossible to solve. In this paper, I have tried to motivate future directions of software technology. There is no doubt that product-line architectures, refinements as generalizations of components, and the codeless programming of software plug-and-play will come to pass; the only debatable points are how and when. I have offered GenVoca as a way in which all three can be achieved. Still, it is questionable whether GenVoca will take hold. However, there are three reasons to be optimistic. First, there is a considerable amount of experimental evidence for its correctness and value (and I would expect there to be much more in the future). Second, the ideas are constantly being reinvented by others (after all, the idea of plug-compatible components isn't exactly novel and is quite appealing in its simplicity). Third, it addresses a critical need in software: that of reducing complexity. It is well known that one of the great advantages of OO is its ability to manage and control complexity through class abstractions. Certainly, anyone who has ever written an OO application understands precisely this point. It is not difficult to recognize that standardizing the abstractions of a domain/PLA is a very powerful way of managing and controlling the complexity of software in a family of applications. It is this latter point on which the success or failure of GenVoca may rest.

[Figure 2: Component Composition and Inheritance Hierarchies. Panels (a)-(d) each show a layer stack and the resulting inheritance hierarchies; application classes are circled.]


References

D. Batory and S. O'Malley, The Design and Implementation of Hierarchical Software Systems with Reusable Components, ACM TOSEM, October 1992, 355-398.
D. Batory, V. Singhal, M. Sirkin, and J. Thomas, Scalable Software Libraries, ACM SIGSOFT 1993.
D. Batory and B.J. Geraci, Composition Validation and Subjectivity in GenVoca Generators, IEEE Transactions on Software Engineering, February 1997, 67-82.
D. Batory, Intelligent Components and Software Generators, Software Quality Institute Symposium on Software Reliability, Austin, Texas, April 1997.
D. Batory, G. Chen, E. Robertson, and T. Wang, Design Wizards and Visual Programming Environments for Generators, Int. Conference on Software Reuse, June 1998.
I. Baxter, Design Maintenance Systems, CACM, April 1992, 73-89.
T. Biggerstaff, The Library Scaling Problem and the Limits of Component Reuse, Int. Conference on Software Reuse, November 1994.
E. Boerger and I. Durdanovic, Correctness of Compiling Occam to Transputer Code, The Computer Journal, Vol. 39, No. 1.

Planning in Product Line Architecture


Product Line Essential Activities:
At its essence, fielding of a product line involves core asset development and product development using the core assets, both under the aegis of technical and organizational management. Core asset development and product development from the core assets can occur in either order: new products are built from core assets, or core assets are extracted from existing products. Often, products and core assets are built in concert with each other. The following figure illustrates this triad of essential activities.


Each rotating circle represents one of the essential activities. All three are linked together and in perpetual motion, showing that all three are essential, are inextricably linked, can occur in any order, and are highly iterative. The rotating arrows indicate not only that core assets are used to develop products, but also that revisions of existing core assets, or even new core assets, might (and most often do) evolve out of product development. The diagram in the above figure is neutral in regard to which part of the effort is launched first. In some contexts, already existing products are mined for generic assets (perhaps a requirements specification, an architecture, or software components), which are then migrated into the product line's core asset base. In other cases, the core assets may be developed or procured for later use in the production of products.

There is a strong feedback loop between the core assets and the products. Core assets are refreshed as new products are developed. Use of core assets is tracked, and the results are fed back to the core asset development activity. In addition, the value of the core assets is realized through the products that are developed from them. As a result, the core assets are made more generic by considering potential new products on the horizon.

There is a constant need for strong, visionary management to invest resources in the development and sustainment of the core assets. Management must also precipitate the cultural change to view new products in the context of the available core assets. Either new products must align with the existing core assets, or the core assets must be updated to reflect the new products that are being marketed. Iteration is inherent in product line activities, that is, in turning out core assets, in turning out products, and in the coordination of the two.

Core Asset Development:


The goal of the core asset development activity is to establish a production capability for products. The following figure illustrates the core asset development activity along with


its outputs and necessary inputs.

This activity, like its counterparts, is iterative. The rotating arrows suggest that there is no one-way causal relationship from inputs to outputs; the inputs and outputs of this activity affect each other. For example, slightly expanding the product line scope (one of the outputs) may admit whole new classes of systems to examine as possible sources of legacy assets (one of the inputs). Similarly, an input production constraint (such as mandating the use of a particular middleware product) may lead to restrictions on the architectural patterns (other inputs) that will be considered for the product line as a whole (such as the message-passing distributed object pattern). This restriction, in turn, will determine which preexisting assets are candidates for reuse or mining (still other inputs).

Three things are required for a production capability to develop products, and these three things are the outputs of the core asset development activity.

Product line scope:

The product line scope is a description of the products that will constitute the product line or that the product line is capable of including. At its simplest, scope may consist of an enumerated list of product names. More typically, this description is cast in terms of the things that the products all have in common, and the ways in which they vary from one another. These might include features or operations they provide, performance or other quality attributes they exhibit, platforms on which they run, and so on. Defining the product line scope is often referred to as scoping. For a product line to be successful, its scope must be defined carefully. If the scope is too large and product members vary too widely, then the core assets will be strained beyond their ability to accommodate the variability, economies of production will be lost, and the product line will collapse into the old-style one-at-a-time product


development effort. If the scope is too small, then the core assets might not be built in a generic enough fashion to accommodate future growth, and the product line will stagnate: economies of scope will never be realized, and the full potential return on investment will never materialize. The scope of the product line must target the right products, as determined by knowledge of similar products or systems, the prevailing or predicted market factors, the nature of competing efforts, and the organization's business goals for embarking on a product line approach (such as merging a set of similar but currently independent product development projects). The scope definition of a product line is itself a core asset, evolved and maintained over the product line's lifetime.

Core assets:

Core assets are the basis for production of products in the product line. As we have already described, these core assets almost certainly include an architecture that the products in the product line will share, as well as software components that are developed for systematic reuse across the product line. Any real-time performance models or other architecture evaluation results associated with the product line architecture are core assets. Software components may also bring with them test plans, test cases, and all manner of design documentation. Requirements specifications and domain models are core assets, as is the statement of the product line's scope. Commercial off-the-shelf (COTS) software, if adopted, also constitutes a core asset. So do management artifacts such as schedules, budgets, and plans. Also, any production infrastructure, such as domain-specific languages, tools, generators, and environments, is a core asset as well.

Among the core assets, the architecture warrants special treatment. A product line architecture is a software architecture that will satisfy the needs of the product line in general and of the individual products in particular by explicitly admitting a set of variation points required to support the spectrum of products within the scope. The product line architecture plays a special role among the other core assets. It specifies the structure of the products in the product line and provides interface specifications for the components that will be in the core asset base.

Each core asset should have associated with it an attached process that specifies how it will be used in the development of actual products. For example, the attached process for a set of product line requirements would give the process to follow when expressing the requirements for an individual product.
This process might simply say:

1. Use the product line requirements as the baseline requirements.
2. Specify the variation requirement for any allowed variation point.
3. Add any requirements outside the set of specified product line requirements.
4. Validate that the variations and extensions can be supported by the architecture.

The process might also specify the automated tool support for accomplishing these steps. These attached processes are themselves core assets that get folded into what becomes the production plan for the product line. The following figure illustrates this concept of attached processes and how they are incorporated into the production plan.
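The allowed variation points referred to in such a process are ultimately realized in the core assets themselves, often via inheritance or parameterization. The following is a minimal, hypothetical Java sketch (all class and method names here are invented for illustration) of a core asset component that documents its variation point as an abstract method, which each product fills by subclassing:

```java
// Hypothetical core asset: a report generator whose output format is
// an explicit, documented variation point of the product line.
abstract class ReportGenerator {
    // Variation point: each product supplies its own formatting rule.
    protected abstract String format(String body);

    // Common behavior, shared unchanged across all products.
    public String generate(String body) {
        return format(body);
    }
}

// One product's instantiation: it binds the variation point by
// subclassing rather than by modifying the shared base class.
class PlainTextReportGenerator extends ReportGenerator {
    @Override
    protected String format(String body) {
        return "REPORT: " + body;
    }
}
```

A product builder following the attached process would select or write such a subclass for each allowed variation point, leaving the shared core asset untouched.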


There are also core assets at a less technical level, namely: the training specific to the product line, the business case for using a product line approach for this particular set of products, the technical management process definitions associated with the product line, and the set of identified risks for building products in the product line. Although not every core asset will necessarily be used in every product in the product line, all will be used in enough of the products to make their coordinated development, maintenance, and evolution pay off.

Production plan:

A production plan prescribes how the products are produced from the core assets. As noted above, each core asset should have an attached process that defines how it will be used in product development. The production plan is essentially the set of these attached processes plus the necessary glue. It describes the overall scheme for how these individual processes can be fitted together to build a product. It is, in effect, the reuser's guide to product development within the product line. Each product in the product line will vary in ways consistent with predefined variation points. How these variation points can be accommodated will vary from product line to product line. For example, variation could be achieved by selecting from an assortment of components to provide a given feature, by adding or deleting components, or by tailoring one or more components via inheritance or parameterization. It could also be the case that products are generated automatically. The exact vehicle to be used to provide the requisite variation among products is described in the production plan. Without the production plan, the product builder would not know the linkage


among the core assets or how to utilize them effectively and within the constraints of the product line. To develop a production plan, you need to understand who will be building the products: the audience for the production plan. Knowing who the audience is will give you a better idea of how to format the production plan. Production plans can range from a detailed process model to a much more informal guidebook. The degree of specificity required in the production plan depends on the background of the intended product builders, the structure of the organization, the culture of the organization, and the concept of operations for the product line. It will be useful to have at least a preliminary definition of the product line organization before developing the production plan. The production plan should describe how specific tools are to be applied in order to use, tailor, and evolve the core assets. The production plan should also incorporate any metrics defined to measure organizational improvement as a result of the product line (or other process improvement) practices, along with the plan for collecting the data to feed those metrics.

The inputs to the core asset development activity are as follows.

Product constraints:
What are the commonalities and variations among the products that will constitute the product line? What behavioral features do they provide? What features do the market and technology forecasts say will be beneficial in the future? What commercial, military, or company-specific standards apply to the products? What performance limits must they observe? With what external systems must they interface? What physical constraints must be observed? What quality requirements (such as availability and security) are imposed? The core assets must capitalize on the commonalities and accommodate the envisioned variation with minimal tradeoff to product quality drivers such as security, reliability, usability, and so on. These constraints may be derived from a set of pre-existing products that will form the basis for the product line, they may be generated anew, or they may come from some combination of the two.

Production constraints:

Must a new product be brought to market in a year, a month, or a day? What production capability must be given to engineers in the field? Answering these and similar questions will drive decisions about, for example, whether to invest in a generator environment or to rely on manual coding. This in turn will drive decisions about what kinds of variability mechanisms to provide in the core assets, and what form the overall production plan will take.

Production strategy:

The production strategy is the overall approach for realizing the core assets and products. Will the product line be built proactively (starting with a set of core assets and spinning products off of them), reactively (starting with a set of products and generalizing their components to produce the product line core assets), or using some combination of the two? What will the transfer pricing strategy be, that is, how will the cost of producing the generic components be divided among the cost centers for the products? Will generic components be produced internally or purchased on the open market?


Will products be automatically generated from the assets, or will they be assembled? How will the production of core assets be managed? The production strategy dictates the genesis of the architecture and its associated components, and the path for their growth.

Inventory of preexisting assets:

Legacy systems embody an organization's domain expertise and/or define its market presence. The product line architecture, or at least pieces of it, may borrow heavily from proven structures of related legacy systems. Components may be mined from legacy systems. Such components may represent key intellectual property of the organization in relevant domains and therefore become prime candidates for components in the core asset base. What software and organizational assets are available at the outset of the product line effort? Are there libraries, frameworks, algorithms, tools, and components that can be utilized? Are there technical management processes, funding models, and training resources that can be easily adapted for the product line? The inventory includes all potential preexisting assets. Through careful analysis, an organization determines what is most appropriate to utilize. But preexisting assets are not limited to assets that were built by the product line organization. COTS and open-source products, as well as standards, patterns, and frameworks, are prime examples of preexisting assets that can be imported from outside the organization and used to good advantage.

Product Development
The product development activity depends on the three outputs described above (the product line scope, the core assets, and the production plan), plus the requirements for each individual product. The following figure illustrates these relationships.


Once more, the rotating arrows indicate iteration and intricate relationships. For example, the existence and availability of a particular product may well affect the requirements for a subsequent product. As another example, building a product that has previously unrecognized commonality with another product already in the product line will create pressure to update the core assets and provide a basis for exploiting that commonality for future products.

The inputs for the product development activity are as follows:


- the requirements for a particular product, often expressed as a delta or variation from some generic product description contained in the product line scope (such a generic description is itself a core asset) or as a delta from the set of product line requirements (themselves a core asset)
- the product line scope, which indicates whether or not the product under consideration can be feasibly included in the product line
- the core assets from which the product is built
- the production plan, which details how the core assets are to be used to build the product
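The second input, the scope, can be thought of as an inclusion test: a candidate product is feasible only if everything it asks for falls within what the core assets can support. A toy Java sketch of that idea (all names invented for illustration):

```java
import java.util.Set;

// Hypothetical sketch: the product line scope as a predicate over
// requested features. A candidate product is in scope only if every
// feature it requests is one the core assets can cover.
class ProductLineScope {
    private final Set<String> supportedFeatures;

    ProductLineScope(Set<String> supportedFeatures) {
        this.supportedFeatures = supportedFeatures;
    }

    boolean includes(Set<String> requestedFeatures) {
        return supportedFeatures.containsAll(requestedFeatures);
    }
}
```

In practice the scope is a far richer description (commonalities, quality attributes, platforms), but the feasibility question it answers has this shape.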

A software product line is, fundamentally, a set of related products, but how they come into existence can vary greatly depending on the core assets, the production plan, and the organizational context. From a very simple view, requirements for a product that is in the product line scope are received, and the production plan is followed so that the core assets can be properly used to develop the product. If the production plan is a more informal document, the product


builders will need to build a product development plan that follows the guidance given. If the production plan is documented as a generic process description, the product builders will instantiate the production plan, recognizing the variation points being selected for the given product. However, the process is rarely, if ever, so linear. The creation of products may have a strong feedback effect on the product line scope, the core assets, the production plan, and even the requirements for specific products. The ability to turn out a particular member of the product line quickly (perhaps a member that was not originally envisioned by the people responsible for defining the scope) will in turn affect the product line scope definition. Each new product may have similarities with other products that can be exploited by creating new core assets. As more products enter the field, efficiencies of production may dictate new system generation procedures, causing the production plan to be updated.
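Instantiating a generic production plan, as described above, amounts to binding each variation point to the variant selected for the given product. A deliberately simplified Java sketch (the variation point names and variants here are invented for illustration):

```java
import java.util.Map;

// Hypothetical sketch: a product is derived by recording, for each
// variation point named in the production plan, the variant selected
// for this particular product.
class ProductInstance {
    private final Map<String, String> selections;  // variation point -> variant

    ProductInstance(Map<String, String> selections) {
        this.selections = selections;
    }

    String describe() {
        return "ui=" + selections.get("ui")
             + ", persistence=" + selections.get("persistence");
    }
}
```

Building `new ProductInstance(Map.of("ui", "web", "persistence", "jdbc"))` yields one member of the product line; a sibling product differs only in its selections, while the core assets behind each variant are shared.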

Management
Management plays a critical role in the successful fielding of a product line. Activities must be given resources, coordinated, and supervised. Management at both the technical (or project) and organizational levels must be strongly committed to the software product line effort. That commitment manifests itself in a number of ways that feed the product line effort and keep it healthy and vital.

Technical management oversees the core asset development and product development activities by ensuring that the groups who build core assets and the groups who build products are engaged in the required activities, follow the processes defined for the product line, and collect data sufficient to track progress.

Organizational management must set in place the proper organizational structure that makes sense for the enterprise, and must make sure that the organizational units receive the right resources (for example, well-trained personnel) in sufficient amounts. We define organizational management as the authority that is responsible for the ultimate success or failure of the product line effort. Organizational management determines a funding model that will ensure the evolution of the core assets and then provides the funds accordingly. Organizational management also orchestrates the technical activities in, and the iteration between, the essential activities of core asset development and product development. Management should ensure that these operations and the communication paths of the product line effort are documented in an operational concept. Management mitigates those risks at the organizational level that threaten the success of the product line.

The organization's external interfaces also need careful management. Product lines tend to engender different relationships with an organization's customers and suppliers, and these new relationships must be introduced, nurtured, and strengthened.
One of the most important things that management must do is create an adoption plan that describes the desired state of the organization (that is, routinely producing products in the product line) and a strategy for achieving that state. Both technical and organizational management also contribute to the core asset base by making available for reuse those management artifacts (especially schedules and budgets) used in developing products in the product line. Finally, someone should be designated as the product line manager, and that person must either act as, or find and empower, a product line champion. This person must be a strong, visionary leader who can keep the organization squarely pointed toward the product line goals, especially when the going gets rough in the early stages. Leadership is required for software product line success; management and leadership are not always synonymous.


Each of the three activities (core asset development, product development, and management) is individually essential, and a careful blending of all three, a blend of technology and business practices, is essential as well. Different organizations may take different paths through the three activities. The path they take is a manifestation of their production strategy, as described in "Core Asset Development."

Many organizations begin a software product line by developing the core assets first. These organizations take a proactive approach. They define their product line scope to delimit the set (more often, a space) of systems that will constitute their product line. This scope definition provides a kind of mission statement for designing the product line architecture, components, and other core assets with the right built-in variation points to cover the scope. Producing any product within that scope becomes a matter of exercising the variation points of the components and architecture (that is, configuring), and then assembling and testing the system.

Other organizations begin with one or a small number of products they already have and from these generate the product line core assets and future products. They take a reactive approach.

Both of these approaches may be pursued iteratively. For example, a proactive approach may begin with the production of only the most important core assets, rather than all of them. Early products use those core assets. Subsequent products are built using more core assets as they are added to the collection. Eventually, the full core asset base is fielded; earlier products may or may not be reengineered to use the full collection. An iterative reactive approach works similarly: the core asset base is populated sparsely at first, using existing products as the source, and more core assets are added as time and resources permit. The proactive approach has obvious advantages: products come to market extremely quickly with a minimum of code writing.
But there are also disadvantages. It requires a significant up-front investment to produce the architecture and the components that are generic (and reliable) across the entire product space. It also requires copious up-front predictive knowledge, something that is not always available. In organizations that have long been developing products in a particular application domain, this is not a tremendous disadvantage. For a greenfield effort, where there is no experience and there are no existing products, it is an enormous risk. The reactive approach has the advantage of a much lower cost of entry to software product lines because the core asset base is not built up front. However, for the product line to be successful, the architecture and other core assets must be robust, extensible, and appropriate to future product line needs. If the core assets are not built beyond the ability to satisfy the specific set of products already in the works, extending them for future products may prove too costly.

Practice Areas

The Product Line Essential Activities section of this framework introduced three essential activities that are involved in developing a software product line: (1) core asset development, (2) product development, and (3) management. This section defines in more detail what an organization must do to perform those broad essential activities. We do this by defining practice areas. A practice area is a body of work or a collection of activities that an organization must master to successfully carry out the essential work of a product line. Practice areas help to make the essential activities more achievable by defining activities that are smaller and more tractable than a broad imperative such as "Develop core assets." Practice areas provide starting points from which organizations can make (and measure) progress in adopting a product line approach for software. So, to achieve a software product line, you must carry out the three essential activities; to be able to carry out the essential activities, you must master the practice areas relevant to each. By "mastering," we mean an ability to achieve repeatable, not just one-time, success with the work. Almost all of the practice areas describe activities that are essential for any successful software development, not just software product lines. However, they all either take on particular significance or must be carried out in a unique way in a product line context. Those aspects that are specifically relevant to software product lines, as opposed to single-system development, will be emphasized.

Describing the Practice Areas:


For each practice area we present the following information:

- An introductory overview of the practice area that summarizes what it's about. You will not find a definitive discourse on the practice area here, since in most cases there is overlap with what can be found in traditional software engineering and management reference books. We provide a few basic references if you need a refresher.
- Those aspects of the practice area that apply especially to a product line, as opposed to a single system. Here you will learn in what ways traditional software and management practice areas need to be refocused or tailored to support a product line approach.
- How the practice area is applied to core asset development and product development, respectively. We separate these two essential activities; although in most cases a given practice area applies to both of these broad areas, the lens that you look through to focus changes when you are building products versus developing core assets.
- A description of any specific practices that are known to apply to the practice area. A specific practice describes a particular way of accomplishing the work associated with a practice area. Specific practices are not meant to be end-to-end methodological solutions to carrying out a practice area but approaches to the problem that have been used in practice to build product lines. Whether or not a specific practice will work for your organization depends on context.
- Known risks associated with the practice area. These are ways in which a practice area can go wrong, to the detriment of the overall product line effort. Our understanding of these risks is borne out of the pitfalls of others in their product line efforts.
- A list of references for further reading, to support your investigation in areas where you desire more depth.

When planning to carry out the practice area, be sure to keep the following in mind:

- For each practice area, make a work plan for carrying it out. The work plan should specify the plan owner, specific tasks, who is responsible for doing them, what resources those people will be given, and when the results are due. More information about planning for product lines can be found in the "Technical Planning" and "Organizational Planning" practice areas.
- For each practice area, define metrics associated with tracking its completion and measuring its success. These metrics will help an organization identify where the practice areas are (or are not) being executed in a way that is meeting the organization's goals. More information about planning for measurement can be found in the "Data Collection, Metrics, and Tracking" practice area.
- Many practice areas produce tangible artifacts. For each practice area that does so, make a plan for keeping its produced artifacts up to date, and identify the set of stakeholders who hold a vested interest in the artifacts produced. Collect organizational plans for artifact evolution and sustainment, and stakeholder definitions, in your product line's operational concept.
- Many practice areas lead to the creation of core assets of some sort. For those that do, define and document an attached process that tells how the core assets are used (modified, instantiated, and so on) to build products. These attached processes together form the production plan for the product line. The "Process Definition" practice area describes the essential ingredients for defining these (and other) processes. The "Operations" and "Architecture Definition" practice areas describe documents for containing some of them.

Organizing the Practice Areas


Since there are so many practice areas, we need a way of organizing them for easier access and reference. We divide them loosely into three categories:
- Software engineering practice areas are those necessary to apply the appropriate technology to create and evolve both core assets and products.
- Technical management practice areas are those management practices necessary to engineer the creation and evolution of the core assets and the products.
- Organizational management practice areas are those necessary for the orchestration of the entire software product line effort.

Each of these categories appeals to a different body of knowledge and requires a different skill set for the people needed to carry them out. The categories represent disciplines rather than job titles. There is no way to divide cleanly into practice areas the knowledge necessary to achieve a software product line. Some overlap is inevitable. We have chosen what we hope is a reasonable scheme and have identified practice area overlap where possible. The description of practice areas that follows is an encyclopedia; neither the ordering nor the categorization constitutes a method or an order for application. Other works describe how to select and combine the practice areas for a particular organization's context and goals.

Software Engineering Practice Areas:


Software engineering practice areas are those practice areas necessary for applying the appropriate technology to the creation and evolution of both core assets and products. They are carried out in the technical activities represented by the top two circles in the following figure.


These are the major practices:


- Architecture
- Architecture Evaluation
- Component Development
- COTS Utilization
- Mining Existing Assets
- Requirements Engineering
- Software System Integration
- Testing
- Understanding Relevant Domains

All of these practice areas should sound familiar, because all are part of every well-engineered software system. But all take on special meaning when the software is a product line, as we will see. How do they relate to each other in a software product line context?

The following figure sketches the story.


Domain understanding feeds requirements, which drive an architecture, which specifies components. Components may be made in-house, bought on the open market, mined from legacy assets, or commissioned under contract. This choice depends on the availability of in-house talent and resources, open-market components, an exploitable legacy base, and able contractors. The existence (or nonexistence) of these things can affect the requirements and architecture for the product line. Once available, the components must be integrated, and they and the system must be tested. This is a quick trip through an iterative growth cycle, and it oversimplifies the story shamelessly but shows a good approximation of how the software engineering practice areas come into play.

Architecture:
This practice area describes the activities that must be performed to define a software architecture. By software architecture, we mean the following:
The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them. By "externally visible" properties, we are referring to those assumptions other elements can make of an element, such as its provided services, performance characteristics, fault handling, shared resource usage, and so on.


By making "externally visible properties" of elements part of the definition, we intentionally and explicitly include elements' interfaces and behaviors as part of the architecture. We will return to this point later. By contrast, design decisions or implementation choices that do not have systemwide ramifications or visibility are not architectural. Architecture is key to the success of any software project, not just a software product line. Architecture is the first design artifact that begins to place requirements into a solution space. Quality attributes of a system (such as performance, modifiability, and availability) are in large part permitted or precluded by its architecture; if the architecture is not suitable from the beginning for these qualities, don't expect to achieve them by some miracle later. The architecture determines the structure and management of the development project as well as of the resulting system, since teams are formed and resources allocated around architectural elements. For anyone seeking to learn how the system works, the architecture is the place where understanding begins. The right architecture is absolutely essential for smooth sailing. The wrong one is a recipe for disaster.

Architectural requirements:
For an architecture to be successful, its constraints must be known and articulated. And contrary to standard software engineering waterfall models, an architecture's constraints go far beyond implementing the required behavior of the system that is specified in a requirements document. Other architectural drivers that a seasoned architect knows to take into account include:
- the quality attributes (as mentioned above) that are required for each product that will be built from the architecture
- whether or not the system will have to interact with other systems
- the business goals that the developing organization has for the system. These might include ambitions to use the architecture as the basis for other systems (or even other software product lines). Or perhaps the organization wishes to develop a particular competence in an area such as Web-based database access. Consequently, the architecture will be strongly influenced by that desire.
- the best sources for components. A software architecture will, when it is completed, call for a set of components to be defined, implemented, and integrated.

The architecture creation process:


Create Scenarios
Inputs:
- Domain decision model
- Generic work products (e.g., functional and non-functional requirements, use cases, feature models)
Outputs:
- Generic scenarios in the form of activity models

The process begins with the creation of scenarios that refine the functional and non-functional requirements from the domain analysis phase. Other domain analysis work products (e.g., feature models) are considered as well.

Functional Requirements:

Functional requirements are often derived from the business processes to be supported by a system. They typically relate user actions to desired system responses. In the scenario creation step we describe these business processes completely and in detail. For this purpose, activity models with swim lanes are used because they provide a standardized way of concretely describing the workflow of business processes. Swim lanes are used to assign the particular organizational units or roles involved in a workflow to activities. Alternatively


or additionally, interaction scripts can be used as a textual artifact [12]. The scripts are written in the form of tables where the columns are the textual equivalents of the swim lanes in activity diagrams. The rows typically provide one-sentence descriptions of the actor and system activities. In the first set of activity models created, we typically have two main roles: a user and the system. The activities in the user's swim lane identify the interface that the system must support. The activities in the system's swim lane identify internal operations and components required by the system.

Non-functional Requirements

Non-functional requirements play an important role and may even be the dominant drivers of the architecture creation process. In this case, the architects deal with the non-functional properties of the system first by applying appropriate architectural approaches. In this way, architectural fragments are defined that are then matched with the functional requirements.

Select Scenarios and Plan Next Iteration


Inputs:
- Generic scenarios
- Scope definition
Outputs:
- Architecture creation plan

Once the scenarios have been created, they are prioritized and grouped to define the architecture creation plan, which defines the order in which the groups are processed with respect to their priority. Prioritizing scenarios should follow a simple, basic rule: the bigger the impact of a scenario on the system architecture, the higher the scenario's priority. However simple in theory, evaluating the architectural impact of a scenario from its description is a difficult, non-trivial task that requires much experience. Based on implicit experience, scenarios may be weighted through expert voting. Voting can be supported or replaced by specific evaluation criteria. Below, we list some criteria that can be used for prioritizing scenarios and constructing the architecture creation plan. These criteria can be used as an indication of the expected architectural impact of scenarios:
- Economic value: the value, from an economic point of view, that will be added to the product line when a scenario is realized. The product line scope can deliver this information. It is clear that a high economic value is a hint of high scenario priority.
- Typicality/criticality: a scenario is of high typicality when it reflects routine operations, whereas critical scenarios occur sporadically and when the user does not expect them to occur. Typical scenarios should therefore be implemented first.
- Future-proofing: a scenario is future-proof when it considers possible evolution points. Future-proof scenarios are likely to have a big impact on the architecture and should therefore be assigned high priorities.
- Effort: an estimate of the effort required to realize a scenario. Since the architecture creation plan also assigns responsibilities to personnel, the effort estimate is surely helpful.
- Non-functional requirements: these suggest the use of well-known architectural approaches. The architectural impact of a scenario that has non-functional attributes can therefore be high.
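The expert-voting-with-criteria idea above can be sketched as a weighted scoring scheme. This is a hypothetical illustration only: the criteria weights, the 1-to-5 scores, and the scenario names are all invented for the example, not prescribed by the method.

```python
# Hypothetical sketch: prioritize scenarios by a weighted sum of the
# criteria discussed above. Weights and scores are invented examples.
CRITERIA_WEIGHTS = {
    "economic_value": 0.35,
    "typicality": 0.20,
    "future_proof": 0.25,
    "non_functional_impact": 0.20,
}

def priority(scores: dict) -> float:
    """Weighted sum of per-criterion scores (1-5); higher = schedule earlier."""
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0) for c in CRITERIA_WEIGHTS)

scenarios = {
    "process order": {"economic_value": 5, "typicality": 5,
                      "future_proof": 2, "non_functional_impact": 3},
    "fail over to backup": {"economic_value": 3, "typicality": 1,
                            "future_proof": 4, "non_functional_impact": 5},
    "export report": {"economic_value": 2, "typicality": 4,
                      "future_proof": 1, "non_functional_impact": 1},
}

# The architecture creation plan processes scenarios in descending priority.
plan = sorted(scenarios, key=lambda s: priority(scenarios[s]), reverse=True)
print(plan)
```

In practice the weights themselves would come from expert voting or from the product line scope, and ties would still need human judgment.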


Component interfaces:
An architecture includes the interfaces of its components. It is therefore incumbent on the architect to specify those interfaces (or, if a component is developed externally, to ensure that its interface is adequately specified by others). By "interface" we mean something far more complete than the simple functional signatures one finds in header files. Signatures simply name the programs and specify the numbers and types of their parameters, but they tell nothing about the semantics of the operations, the resources consumed, the exceptions raised, or the externally visible behavior.
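The signature-versus-interface distinction can be made concrete with a small sketch. Everything here (the sensor class, method, exception, and the stated resource behavior) is an invented example; the point is that the documented semantics, not just the parameter types, are part of the architectural interface.

```python
# Hypothetical component interface: the signature alone says little;
# the documented semantics, exceptions, and resource usage complete it.

class SensorTimeout(Exception):
    """Raised when the sensor does not respond within the deadline."""

class SensorReader:
    def read(self, channel: int, deadline_ms: int = 100) -> float:
        """Return the latest sample on `channel` in volts.

        Interface commitments beyond the signature:
        - blocks at most `deadline_ms`; raises SensorTimeout afterwards
        - raises ValueError for a negative channel number
        - thread-safe; consumes one shared DMA buffer per call
        """
        if channel < 0:
            raise ValueError("channel must be non-negative")
        return 0.0  # stub value; a real implementation would query hardware

reader = SensorReader()
print(reader.read(0))
```

Without the docstring's commitments, two components with identical signatures could still be architecturally incompatible.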

Connecting components:
Applications are constructed by connecting components together to enable communication and coordination. In simple systems that run on a single processor, the venerable procedure call is the oldest and most widely used mechanism for component interaction. In modern distributed systems, however, something more sophisticated is desirable. There are several competing technologies, discussed below, for providing these connections as well as other infrastructure services. Among the services provided by the infrastructures are remote procedure calls (allowing components to be deployed on different processors transparently), communication protocols, object persistence, the creation of standard methods, and "naming services" that allow one component to find another via the component's registered name. These infrastructures are purchased as commercial packages; they are components themselves that facilitate connection among other components. These infrastructure packages are called middleware and, like patterns, represent another class of already solved problems (highly functional component interactions for distributed object-based systems) that the architect need not reinvent.

Architecture documentation and views:
Documenting the architecture is essential for it to achieve its effectiveness. Here, architectural views come into play. A view is a representation of a set of system elements and the relationships among them. A view can be thought of as a projection of the architecture that includes certain kinds of information and suppresses other kinds. For example, a module decomposition view will show how the software for the system is hierarchically decomposed into smaller units of implementation. A communicating-processes view will show the processes in the software and how they communicate or synchronize with each other, but it will not show how the software is divided into layers (if indeed it is). The layered view will show this, but will not show the processes.
A deployment view shows how software is assigned to hardware elements in the system. There are many views of an architecture; choosing which ones to document is a matter of what information you wish to convey. Each view has a particular usefulness to one or more segments of the stakeholder community and should be chosen and engineered with that in mind.

Aspects Peculiar to Product Lines:


All architectures are abstractions that admit a plurality of instances; a great source of their conceptual value is, after all, the fact that they allow us to concentrate on design while admitting a number of implementations. But a product line architecture goes beyond this simple dichotomy between design and code; it is concerned with identifying and providing mechanisms to achieve a set of explicitly allowed variations (because, when exercised, these become products), whereas with a conventional architecture almost any instance will do as long as the (single) system's behavioral and quality goals are met. But products in a software product line exist simultaneously and may vary from each other in terms of their behavior, quality attributes, platform, network, physical configuration, middleware, scale factors, and in a multitude of other ways.


In a conventional architecture, the mechanism for achieving different instances almost always comes down to modifying the code. But in a software product line, support for variation can take many forms. Mechanisms to achieve variation are discussed under "Specific Practices." Integration may assume a greater role for software product lines than for one-off systems simply because of the number of times it's performed. A product line with a large number of products and upgrades requires a smooth and easy integration process for each product. Therefore, it pays to select a variation mechanism that allows for reliable and efficient integration when new products are turned out. This means some degree of automation. For example, if the variation mechanism chosen for the architecture is component selection and deselection, you will want an integration tool that carries out your wishes by selecting the right components and feeding them to the compiler or code generator. If the variation mechanism is parameterization or conditional compilation, you will want an integration tool that checks the parameter values for consistency and compatibility, and then feeds those values to the compilation step. Hence, the variation mechanism chosen for the architecture goes hand in hand with the integration approach. For many other system qualities, such as performance, availability, functionality, usability, and testability, there are no major peculiarities that distinguish architecture for product lines from architecture for one-of-a-kind systems.

Application to Core Asset Development:


The product line architecture is an early and prominent member in the collection of core assets. The architecture is expected to persist over the life of the product line and to change relatively little and relatively slowly over time. The architecture defines the set of software components (and hence their supporting assets such as documentation and test artifacts) that populate the core asset base. The architecture also spawns its attached process, which is itself an important core asset for sustaining the product line.

Application to Product Development:


Once it has been placed in the product line core asset base, the architecture is used to create instance architectures for each new product according to its attached process. If product builders discover a variation point or a needed mode of variation that is not permitted by the architecture, it should be brought to the architect's attention; if the variation is within scope (or deemed desirable to add to the scope), the architecture may be enhanced to accommodate it. The "Operations" practice area deals with setting up this feedback loop in the organization.

Specific Practices:
- Architectural patterns
- Architecture definition and architecture-based development
- Attribute-Driven Design
- Quality Attribute Workshops
- Aspect-oriented programming
- Product builder's guide
  o Introduction
  o Sources of other information
  o Basic concepts
  o Service component catalogue
  o Building an application
  o Performance engineering

Mechanisms for achieving variability in a product line architecture:


The list includes the following mechanisms:

Inheritance:
In object-oriented systems, inheritance is used when a method needs to be implemented differently (or perhaps extended) for each product in the product line.
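A minimal sketch of this mechanism follows. The billing domain, class names, and the chosen variation point are all invented for illustration; the pattern is simply that the core asset fixes the stable behavior and each product overrides only what varies.

```python
# Sketch: a variation point realized by inheritance. The base class is a
# core asset; each product overrides only the method that varies.

class Billing:                                       # shared core asset
    def invoice(self, amount: float) -> str:
        return f"Invoice: {self.format_amount(amount)}"

    def format_amount(self, amount: float) -> str:   # variation point
        return f"{amount:.2f}"

class EuroBilling(Billing):                          # product-specific variant
    def format_amount(self, amount: float) -> str:
        return f"EUR {amount:.2f}"

print(EuroBilling().invoice(10))   # the product exercises the variation
```

The trade-off, as with all inheritance-based variation, is that the base class's method decomposition must anticipate where variation is allowed.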

Extensions and extension points:


Used when parts of a component can be augmented with additional behavior or functionality.

Parameterization:
Used when a component's behavior can be characterized abstractly by a placeholder that is then defined at build time. Macros and templates are forms of parameterization.
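The following sketch shows the idea of a build-time placeholder in plain Python (in C or C++ this would typically be a macro or template). The parameter names and the login example are invented for illustration.

```python
# Sketch: parameterization. The component's behavior is characterized by
# placeholders whose values are bound when the product is built.

BUILD_PARAMS = {"max_users": 50, "locale": "en_US"}   # fixed per product

def make_login_component(params: dict):
    max_users = params["max_users"]      # placeholder bound at build time
    def login(active_users: int) -> bool:
        """Admit a new user only while under the product's configured limit."""
        return active_users < max_users
    return login

login = make_login_component(BUILD_PARAMS)
print(login(10), login(50))
```

Each product in the line supplies its own parameter values; the component source itself is shared unchanged.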

Configuration and module interconnection languages:


Used to define the build-time structure of a system, including selecting (or deselecting) whole components.
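A toy module-interconnection description might look like the sketch below. The catalog, product names, and component names are invented; the point is that each product is defined by the set of whole components it selects, and an integration tool checks the selection before the build.

```python
# Sketch: component selection/deselection driven by a per-product
# configuration, with the consistency check an integration tool would do.

CATALOG = {"core", "gui", "cli", "reporting", "diagnostics"}

PRODUCTS = {
    "lite": {"core", "cli"},
    "pro": {"core", "gui", "reporting"},
}

def build(product: str) -> list:
    selected = PRODUCTS[product]
    unknown = selected - CATALOG
    if unknown:                        # integration-time consistency check
        raise ValueError(f"unknown components: {unknown}")
    return sorted(selected)            # hand these to the compiler/packager

print(build("lite"))
```

Real module-interconnection languages add dependency and compatibility constraints, but the selection idea is the same.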

Generation:
Used when there is a higher-level language that can be used to define a component's desired properties.

Compile-time selection of different implementations:
Used when variability in a component can be realized by choosing among different implementations at compile time.

Code-based mechanisms:
Used to achieve variability within individual components.

Architecture Evaluation:
The architecture of a system represents a coherent set of the earliest design decisions, which are the most difficult to change and the most critical to get right. It is the first design artifact that addresses the quality goals of the system such as security, reliability, usability, modifiability, and real-time performance. The architecture describes the system structure and serves as a common communication vehicle among the system stakeholders: developers, managers, maintainers, users, customers, testers, marketers, and anyone else who has a vested interest in the development or use of the system.


With the advent of repeatable, cost-effective architecture evaluation methods, it is now feasible to make architecture evaluation a standard part of the development cycle. And because so much rides on the architecture, and because it is available early in the life cycle, it makes the utmost sense to evaluate the architecture early, when there is still time for midcourse correction. In any nontrivial project there are competing requirements, and architectural decisions must be made to resolve them. It is best to air and evaluate those decisions, and to document the basis for making them, before the decisions are cast into code. Architecture evaluation is a form of artifact validation, just as software testing is a form of code validation. In the "Testing" practice area, we will discuss validation of artifacts in general (and, in fact, prescribe a validation step for all of the product line's core assets), but the architecture for the product line is so foundational that we give its validation its own special practice area. The evaluation can be done at a variety of stages during the design process. For example, the evaluation can occur when the architecture is still on the drawing board and candidate structures are being weighed. It can also be done later, after preliminary architectural decisions have been made but before detailed design has begun. It can even be done after the entire system has been built (such as in the case of a reengineering or mining operation). The outputs will depend on the stage at which the evaluation is performed. Enough design decisions must have been made so that the achievement of the requirements and quality-attribute goals can be analyzed. The more architectural decisions that have been made, the more precise the evaluation can be. On the other hand, the more decisions that have been made, the more difficult it is to change them.
An organization's business goals for a system lead to particular behavioral requirements and quality-attribute goals. The architecture is evaluated with respect to those requirements and goals. Therefore, before an evaluation can proceed, the behavioral and quality-attribute goals against which an architecture is to be evaluated must be made explicit. These quality-attribute goals support the business goals. For example, if a business goal is that the system should be long-lived, modifiability becomes an important quality-attribute goal. Quality-attribute goals, by themselves, are not definitive enough either for design or for evaluation. They must be made more concrete. Using modifiability as an example, if a product line can be adapted easily to have different user interfaces, but is dependent on a particular operating system, is it modifiable? The answer is yes with respect to the user interface, but no with respect to porting to a new operating system. Whether this architecture is suitably modifiable or not depends on what modifications to the product line are expected over its lifetime. That is, the abstract quality goal of modifiability must be made concrete: modifiable with respect to what kinds of changes, exactly? The same is true for other attributes. The evaluation method that you use must include a way to concretize the quality and behavioral goals for the architecture being evaluated.

Define Evaluation Criteria


Inputs:
- Architecture creation plan
- Quality scenarios
Outputs:
- Architecture evaluation plan, consisting of:
  o Consistency rules
  o Quality assurance documentation

In this step, the criteria to be met by the architecture are defined. The basic criterion requires the support of all scenarios in all variants. Additionally, the architect defines further necessary criteria. Here is a list of categories of additional criteria that may be considered:

Responses:
Here we can add the desired responses to the stimuli of the quality requirements.


Style-specific rules:
If styles or patterns are used, we must make sure that the corresponding assumptions and rules are met.

Metrics:
Metrics such as coupling and cohesion.
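As one concrete example of such a criterion, coupling can be measured directly from a component dependency graph. The components and dependencies below are invented; the two measures sketched (efferent and afferent coupling) are standard structural metrics.

```python
# Sketch: coupling metrics over an invented component dependency graph.
# Efferent coupling (Ce) = how many components this one depends on.
# Afferent coupling (Ca) = how many components depend on this one.

DEPENDS_ON = {
    "ui": {"core", "reporting"},
    "reporting": {"core"},
    "core": set(),
}

def efferent_coupling(component: str) -> int:
    return len(DEPENDS_ON[component])

def afferent_coupling(component: str) -> int:
    return sum(1 for deps in DEPENDS_ON.values() if component in deps)

print(efferent_coupling("ui"), afferent_coupling("core"))
```

An evaluation plan might set thresholds on such values, e.g. flagging any component whose efferent coupling exceeds an agreed limit.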

Application to Product Development:


An architecture evaluation should be performed on an instance or variation of the architecture that will be used to build one or more of the products in the product line. The extent to which this is a separate, dedicated evaluation depends on the extent to which the product architecture differs in quality-attribute-affecting ways from the product line architecture. If it doesn't, then these product architecture evaluations can be abbreviated, since many of the issues that would normally be raised in a single product evaluation will have been dealt with in the evaluation of the product line architecture. In fact, just as the product architecture is a variation of the product line architecture, the product architecture evaluation is a variation of the product line architecture evaluation. Therefore, depending on the architecture evaluation method used, the evaluation artifacts (scenarios, checklists, and so on) will certainly have reuse potential, and you should create them with that in mind. Document a short attached process for the architecture evaluation of the product line or product architectures. This process description would include the method used, what artifacts can be reused, and what issues to focus on. The results of architecture evaluation for product architectures often provide useful feedback to the architect(s) of the product line architecture and fuel improvements in the product line architecture. Finally, when a new product is proposed that falls outside the scope of the original product line (for which the architecture was presumably evaluated), the product line architecture can be reevaluated to see if it will suffice for this new product. If it will, the product line's scope is expanded to include the new product. If it will not, the evaluation can be used to determine how the architecture would have to be modified to accommodate the new product.

Specific Practices:

ATAM:
The Architecture Tradeoff Analysis Method (ATAM) is a scenario-based architecture evaluation method that focuses on a system's quality goals.

SPE:
Software performance engineering (SPE) is a method for making sure that a design will allow a system to meet its performance goals before it has been built. SPE involves articulating the specific performance goals, building coarse-grained models to get early ideas about whether the design is problematic, and refining those models along well-defined lines as more information becomes available.

ARID:


Active Reviews for Intermediate Designs (ARID) is a hybrid design review method that combines the active design review philosophy of ADRs with the scenario-based analysis of the ATAM and SAAM.

Active design reviews:
An Active Design Review (ADR) is a technique that can be used to evaluate an architecture still under construction.

Component Development:
One of the tasks of the software architect is to produce the list of components that will populate the architecture. A software component is a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be deployed independently and is subject to composition by third parties. By component development, we mean the production of components that implement specific functionality within the context of a software architecture. The functionality is encapsulated and packaged, then integrated with other components using an interconnection method.
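One way to express a "contractually specified interface" is as a structural protocol, so that any independently developed component satisfying the contract can be composed in. The logger example below is invented; it only illustrates that the composing code depends on the contract, not on any concrete component.

```python
# Sketch: a contractually specified interface as a structural protocol.
# Any third-party component providing log() can be composed in.

from typing import Protocol

class Logger(Protocol):
    def log(self, message: str) -> None: ...

class ListLogger:                       # one concrete, composable component
    def __init__(self) -> None:
        self.messages: list[str] = []
    def log(self, message: str) -> None:
        self.messages.append(message)

def run_job(logger: Logger) -> None:
    logger.log("job started")           # depends only on the contract
    logger.log("job finished")

logger = ListLogger()
run_job(logger)
print(logger.messages)
```

Swapping in a different component (say, one that writes to a file) requires no change to `run_job`, which is the point of contractual composition.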

Mining Existing Assets:


Mining existing assets refers to resurrecting and rehabilitating a piece of an old system to serve in a new system for which it was not originally intended. Often it simply refers to finding useful legacy code from an organization's existing systems portfolio and reusing it within a new application. However, the code-only view completely misses the big picture. We have known for years that in the grand scheme of things, code plays a small role in the cost of a system. Coding is simply not what's difficult about system/software development. Rich candidates for mining include a wide range of assets besides code assets that will pay lucrative dividends. Business models, rule bases, requirements specifications, schedules, budgets, test plans, test cases, coding standards, algorithms, process definitions, performance models, and the like are all wonderful assets for reuse. The only reason so-called "code reuse" pays at all is because of the designs and algorithms and interfaces that come along with the code.

Application to Core Asset Development:


The process of mining existing assets is largely about finding suitable candidates for core assets of the product line. Software assets that are well structured and well documented and have been used effectively over long periods of time can sometimes be included as product line core assets with little or no change. Software assets that can be wrapped to satisfy new interoperability requirements are also desirable. On the other hand, assets that don't satisfy these requirements are undesirable and may have higher maintenance costs over the long term. Depending on the legacy inventory and its quality, an assortment of candidate assets is possible, from architectures to small pieces of code. An existing architecture should be analyzed carefully before being accepted as the pivotal core asset: the product line architecture. See the "Architecture Evaluation" practice area for a discussion of what that analysis should entail. Candidate software assets must align with the product line architecture, meet specified component behavior requirements, and accommodate any specified variation points. In some cases, a mined component may represent a potentially valuable core asset but won't fit directly into the product line architecture. Usually, the component will need to be changed to accommodate the constraints of the architecture. Sometimes a change in the architecture might be easier, but of course this will


have implications for other components, for the satisfaction of quality goals, and for the support of the products in the product line.

Once in the product line core asset base, mined assets are treated in the same way as newly developed assets.

Application to Product Development:


It is possible and reasonable to use mined assets for components that are unique to a single product in the product line, but in this case the mining activity will become indistinguishable from mining in the non-product-line case. The same issues discussed above (paying attention to quality attributes, architecture, cost, and time-to-market) will still apply. And it will be worth taking a long, hard look at whether the mined component really is unique to a single product or could be used in other products as well, thus making the cost of its rehabilitation more palatable. In that case, the team responsible for mining would be wise to look for places where variability could be installed in the future, should the asset in question ever turn out to be useful in a group of products.

- Establish mining context
- Inventory components
- Analyze candidate components
- Analyze mining options
- Select mining option

Mining Architectures:
In some cases the software architecture of an existing system can become the product line architecture. Mining Architectures for Product Lines (MAP) is a method that determines whether the architectures of existing systems are similar and whether the corresponding systems have the potential of becoming a software product line. The MAP method combines techniques for architecture reconstruction and product line analysis to analyze the architectural patterns and attributes of a set of systems. This analysis determines if there are similar components and connections between the components within these systems and examines their commonalities and variabilities. MAP has been used in the development of a prototype product line architecture for a sunroof system. MAP and OAR can also be used together, where MAP supports decision-making on reusing architectures, while OAR supports decision-making on identifying components that fit within the constraints of the architecture.

Requirements Reuse and Feature Interaction Management:


Developers realize that complex applications are often best built from a number of different components, each performing a specialized set of services. But the components, each embodying different requirements in different service domains, can interact in unpredictable ways. How to design components that minimize, or at least manage, interaction is a current issue. The problem becomes even more significant when reusing requirements, because interactions must be detected and resolved in the absence of a specific implementation. One conceptual process framework for formulating and reusing requirements classifies reusable requirements into three levels of abstraction: domain-specific requirements, generic requirements, and domain requirements frameworks. This classification is used as the basis for a reusability plan that supports the view that interaction management is important.

Wrapping:
Wrapping involves changing the interface of a component to comply with a new architecture, but not making other changes in the component's internals. In fact, pure wrapping involves no change whatsoever in the component, but only interposing a new thin layer of software between the original component and its clients. That thin layer provides the new interface by translating to and from the old. There are enormous advantages to reusing existing assets with little or no internal modification through wrapping.
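A thin wrapper of the kind described above can be sketched as follows. This is an illustrative example only; the sensor component, its interfaces, and the unit conversion are invented, not taken from any product line discussed here:

```python
# Hypothetical sketch of wrapping: a thin adapter gives a legacy
# component the interface a new architecture expects, without
# touching the component's internals. All names are illustrative.

class LegacyTempSensor:
    """Existing component: reports temperature in Fahrenheit via read()."""
    def read(self):
        return 72.5  # stands in for real hardware access

class TemperatureSource:
    """Interface required by the new architecture (Celsius, get_celsius())."""
    def get_celsius(self):
        raise NotImplementedError

class TempSensorWrapper(TemperatureSource):
    """Thin layer translating the new interface to the old one."""
    def __init__(self, legacy):
        self._legacy = legacy  # the unmodified legacy component

    def get_celsius(self):
        # Translate: delegate to the old interface, then convert units.
        return (self._legacy.read() - 32.0) * 5.0 / 9.0

sensor = TempSensorWrapper(LegacyTempSensor())
print(round(sensor.get_celsius(), 2))  # 22.5
```

Note that `LegacyTempSensor` is never modified; only the new thin layer knows about both interfaces, which is what keeps pure wrapping cheap.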

References:

[Bass 98] Bass, Len; Clements, Paul; & Kazman, Rick. Software Architecture in Practice. Boston, MA: Addison-Wesley, 1998.
[Batory 97] Batory, Don. Intelligent Components and Software Generators (Technical Report 97-06). Austin, TX: Department of Computer Sciences, University of Texas at Austin, 1997.
[Boehm 81] Boehm, Barry. Software Engineering Economics. Englewood Cliffs, NJ: Prentice-Hall, 1981.
[Clements 02a] Clements, Paul & Northrop, Linda. Software Product Lines: Practices and Patterns. Boston, MA: Addison-Wesley, 2002.
[Clements 02b] Clements, Paul; Kazman, Rick; & Klein, Mark. Evaluating Software Architectures: Methods and Case Studies. Boston, MA: Addison-Wesley, 2002.
[Cohen 99] Cohen, Sholom. Guidelines for Developing a Product Line Concept of Operations (CMU/SEI-99-TR-008, ADA367714). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, 1999. <http://www.sei.cmu.edu/publications/documents/99.reports/99tr008/99tr008abstract.html>.
[Hax 87] Hax, Arnoldo. "Aggregate Production Planning." Production Handbook.
[Russ 00] Russ, Melissa L. & McGregor, John D. "A Software Development Process for Small Projects." IEEE Software 17, 5 (September-October 2000): 96-101.
[Weiss 99] Weiss, David M. & Lai, Chi Tau Robert. Software Product-Line Engineering. Reading, MA: Addison-Wesley, 1999.

[1] Weiss, D. M. & Lai, C. T. R. Software Product-Line Engineering: A Family-Based Software Engineering Process. Addison-Wesley, 1999.
[2] Kang, K.; Cohen, S.; Hess, J.; Novak, W.; & Peterson, S. Feature-Oriented Domain Analysis (FODA) (CMU/SEI-90-TR-21). Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, November 1990.
[3] Bayer, J.; Flege, O.; Knauber, P.; Laqua, R.; Muthig, D.; Schmid, K.; Widen, T.; & DeBaud, J.-M. "PuLSE: A Methodology to Develop Software Product Lines." Proceedings of the Symposium on Software Reuse (SSR'99), May 1999.
[4] Bosch, J. Design and Use of Software Architectures. Addison-Wesley, 2000.
[5] Atkinson, C.; Bayer, J.; Bunse, C.; Kamsties, E.; Laitenberger, O.; Laqua, R.; Muthig, D.; Paech, B.; Wüst, J.; & Zettel, J. Component-Based Product-Line Engineering with UML. Addison-Wesley, 2001.
[6] Anastasopoulos, M.; Bayer, J.; Flege, O.; & Gacek, C. A Process for Product Line Architecture Creation and Evaluation: PuLSE-DSSA V2.0. Technical Report, Fraunhofer IESE, 2000.
[7] Szyperski, C. Component Software: Beyond Object-Oriented Programming. Addison-Wesley, 1999.
[8] Executive Overview: Model Driven Architecture. Object Management Group, 2001. http://www.omg.org/mda
[9] Unified Modeling Language Specification, Version 1.4. Object Management Group, 2000.
[10] Buschmann, F.; Meunier, R.; Rohnert, H.; Sommerlad, P.; & Stal, M. Pattern-Oriented Software Architecture: A System of Patterns. John Wiley & Sons, July 1996.

Life cycle of product line architecture

A Framework for Software Product Line Practice


This practice area describes the activities that must be performed to define a software architecture. By software architecture, we mean the following: The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them. "Externally visible" properties are those assumptions other elements can make of an element, such as its provided services, performance characteristics, fault handling, shared resource usage, and so on [Bass 2003a]. By making "externally visible properties" of elements part of the definition, we intentionally and explicitly include elements' interfaces and behaviors as part of the architecture. We will return to this point later. By contrast, design decisions or implementation choices that do not have system-wide ramifications or visibility are not architectural.

Architecture is key to the success of any software project. It is the first design artifact that begins to place requirements into a solution space. The quality attributes of a system (such as performance, modifiability, and availability) are, in large part, permitted or precluded by its architecture: if the architecture is not suitable from the beginning for these qualities, don't expect to achieve them by some miracle later. The architecture determines the structure and management of the development project as well as the resulting system, since teams are formed and resources allocated around architectural components. For anyone seeking to learn how the system works, the architecture is the place where understanding begins. The right architecture is absolutely essential for smooth sailing; the wrong one is a recipe for disaster.

Architectural requirements: For an architecture to be successful, its constraints must be known and articulated. And contrary to standard software engineering waterfall models, an architecture's constraints go far beyond implementing the required behavior specified in a requirements document [Clements 2002c, p. 57]. Other architectural drivers that a seasoned architect knows to take into account include

- the quality attributes (as mentioned above) that are required for each product to be built from the architecture
- whether the system will have to interact with other systems
- the business goals that the developing organization has for the system. These might include ambitions to use the architecture as the basis for other systems (or even other software product lines). Or perhaps the organization wishes to develop a particular competence in an area such as Web-based database access; consequently, the architecture will be strongly influenced by that desire.
- the best sources for components. A software architecture will call for a set of components to be defined, implemented, and integrated. Those components may be implemented in-house (see the "Component Development" practice area), purchased or licensed from the commercial marketplace (see the "Using Externally Available Software" practice area), contracted to third-party developers (see the "Developing an Acquisition Strategy" practice area), or excavated from the organization's own legacy vaults (see the "Mining Existing Assets" practice area). The availability of preexisting components (commercial, open source, third-party, or legacy) may influence the architecture considerably and cause the architect to carve out a place where a preexisting component can fit, if doing so will save time or money or play into the organization's long-term strategies.

Component interfaces: As we said earlier in this section, an architecture includes the interfaces of its components. It is therefore incumbent on the architect to specify those interfaces (or, if a component is developed externally, ensure that its interface is specified adequately by others). By interface we mean something far more complete than the simple functional signatures one finds in header files. Signatures simply name the programs and specify the numbers and types of their parameters; they tell nothing about the semantics of the operations, the resources consumed, the exceptions raised, or the externally visible behavior. As Parnas wrote in 1972, an interface consists of the set of assumptions that users of the component may safely make about it, nothing more and nothing less [Parnas 1972a]. Approaches for specifying component interfaces are discussed in "Example Practices."
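The contractual view of an interface, as a set of assumptions rather than a bare signature, can be sketched as follows. The buffer component and its conditions are invented for illustration; the point is that preconditions, postconditions, and an invariant are written down and checkable at the interface boundary:

```python
# Illustrative sketch: an interface documented as a contract, with
# preconditions, postconditions, and an invariant checked at the
# boundary, rather than as a bare signature. Names are invented.

class BoundedBuffer:
    """Contract-documented component.

    Invariant: 0 <= len(items) <= capacity.
    """
    def __init__(self, capacity):
        assert capacity > 0            # precondition on construction
        self.capacity = capacity
        self.items = []

    def put(self, item):
        """Precondition: buffer not full. Postcondition: size grows by one."""
        assert len(self.items) < self.capacity, "precondition violated: full"
        before = len(self.items)
        self.items.append(item)
        assert len(self.items) == before + 1            # postcondition
        assert 0 <= len(self.items) <= self.capacity    # invariant

    def get(self):
        """Precondition: buffer not empty. Postcondition: size shrinks by one."""
        assert self.items, "precondition violated: empty"
        item = self.items.pop(0)
        assert 0 <= len(self.items) <= self.capacity    # invariant
        return item

buf = BoundedBuffer(2)
buf.put("a")
buf.put("b")
print(buf.get())  # a
```

A signature alone would say only that `put` takes one argument; the contract additionally tells integrators when the call is legal and what they may rely on afterward.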

Connecting components: Applications are constructed by connecting components to enable communication and coordination. In simple systems that run on a single processor, the venerable procedure call is the oldest and most widely used mechanism for component interaction. In modern distributed systems, however, something more sophisticated is desirable. There are several competing technologies for providing these connections as well as other infrastructure services. Among the services provided by the infrastructures are

- remote procedure calls (allowing components to be deployed on different processors transparently)
- communication protocols
- object persistence
- the creation of standard methods, such as "naming services" or "service discovery," that allow one component to find another via the component's registered name and/or the services it provides

These infrastructures, which are purchased as commercial packages, are components themselves that facilitate connection among other components. These infrastructure packages, like patterns, represent another class of already solved problems (highly functional component interactions for distributed systems) that the architect need not reinvent. Market contenders are Sun Microsystems' Java 2 Enterprise Edition (J2EE), including Enterprise Java Beans (EJB) (http://java.sun.com/j2ee), and Microsoft's .NET (http://www.microsoft.com/net).
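The naming-service idea can be illustrated with a minimal sketch. This is not how J2EE or .NET implement their registries; the classes and names below are invented to show only the lookup-by-name mechanism:

```python
# Minimal sketch of a "naming service": components register under a
# name, and clients locate them at runtime without compile-time links.
# Real infrastructures add remoting, persistence, and much more.

class NamingService:
    def __init__(self):
        self._registry = {}

    def bind(self, name, component):
        # A component (or its proxy) registers under a well-known name.
        self._registry[name] = component

    def lookup(self, name):
        # Clients find components by name, not by hard-wired references.
        if name not in self._registry:
            raise KeyError(f"no component registered as {name!r}")
        return self._registry[name]

class SpellChecker:
    def check(self, word):
        return word.isalpha()

naming = NamingService()
naming.bind("spell-checker", SpellChecker())

service = naming.lookup("spell-checker")
print(service.check("architecture"))  # True
```

Because the client holds only the name "spell-checker", the bound component can be replaced without recompiling the client, which is exactly the decoupling these infrastructures sell.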

Architecture documentation and views: For an architecture to be effective, it must be documented. Here, architectural views come into play. A view is a representation of a set of system elements and the relationships among them [Clements 2002a]. A view can be thought of as a projection of the architecture that includes certain kinds of information and suppresses other kinds; for example:

- A module decomposition view shows how the software for the system is hierarchically decomposed into smaller units of implementation.
- A communicating-processes view shows the processes in the software and how they communicate or synchronize with each other, but not how the software is divided into layers (if it is).
- A layered view shows how the software is divided into layers but does not show the processes involved.
- A deployment view shows how software is assigned to hardware elements in the system.

There are many views of an architecture; choosing which ones to document is a matter of what information you wish to convey. Each view has a particular usefulness to one or more segments of the stakeholder community [IEEE 2000a] and should be chosen and engineered with that in mind.


Aspects Peculiar to Product Lines:


All architectures are abstractions that admit a plurality of instances; a great source of their conceptual value is, after all, the fact that they allow us to concentrate on design while admitting a number of implementations. But a product line architecture goes beyond this simple dichotomy between design and code: it is concerned with identifying, and providing mechanisms to achieve, a set of explicitly allowed variations (because, when exercised, these variations become products). Choosing appropriate variation mechanisms may be among the product line architect's most important tasks. The variation mechanisms chosen must support

- the variations reflected in the products. The product constraints (see Core Asset Development) and the results of a scoping exercise (see the "Scoping" practice area) provide information about envisioned variations in the products of the product line, variations that will need to be supported by the architecture. These variations often manifest as different quality attributes. For example, a product line may include both a high-performance product with enhanced security features and a low-end version of the same product.
- the production strategy and production constraints (as described in Core Asset Development). The variation mechanisms provided by the architecture should be chosen carefully so that they support the way the organization plans to build products.
- efficient integration. Integration may assume a greater role for software product lines than for one-off systems simply because of the number of times it is performed. A product line with a large number of products and upgrades requires a smooth and easy process for each product. Therefore, it pays to select variation mechanisms that allow for reliable and efficient integration when new products are turned out. This need for reliability and efficiency implies some degree of automation. For example, if the variation mechanism chosen for the architecture is component selection and deselection, you will want an integration tool that selects the right components and feeds them to the compiler or code generator. If the variation mechanism is parameterization or conditional compilation, you will want an integration tool that checks the parameter values for consistency and compatibility before feeding them to the compilation step. Hence, the variation mechanism chosen for the architecture goes hand in hand with the integration approach (see the "Software System Integration" practice area).
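The kind of automated consistency checking an integration tool might perform can be sketched as follows. All component names, constraint rules, and parameter ranges below are invented for illustration; a real product line would derive them from its architecture and scope definition:

```python
# Hedged sketch of an integration-time checker: before a product is
# built, validate its component selections and parameter values against
# rules that the product line architecture allows. Names are invented.

ARCHITECTURE_RULES = {
    "components": {"engine", "basic_ui", "premium_ui", "encryption"},
    "requires": {"encryption": {"engine"}},        # encryption needs engine
    "excludes": {("basic_ui", "premium_ui")},      # pick at most one UI
    "parameters": {"max_users": range(1, 10001)},  # allowed value range
}

def validate_product(selection, parameters, rules=ARCHITECTURE_RULES):
    """Return a list of rule violations (empty list means buildable)."""
    errors = []
    for comp in selection:
        if comp not in rules["components"]:
            errors.append(f"unknown component: {comp}")
    for comp, needed in rules["requires"].items():
        if comp in selection and not needed <= selection:
            errors.append(f"{comp} requires {sorted(needed - selection)}")
    for a, b in rules["excludes"]:
        if a in selection and b in selection:
            errors.append(f"{a} and {b} are mutually exclusive")
    for name, allowed in rules["parameters"].items():
        if name in parameters and parameters[name] not in allowed:
            errors.append(f"parameter {name}={parameters[name]} out of range")
    return errors

# A low-end product: engine + basic UI, 100 users -> no violations.
print(validate_product({"engine", "basic_ui"}, {"max_users": 100}))  # []
```

Running such a check before each build is what turns "select and deselect components" from an error-prone manual step into the reliable, repeatable integration the text calls for.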

Support for variation can take many forms (and be exercised many times [Clements 2002c, p. 64]). Mechanisms to achieve variation in the architecture are discussed under "Example Practices." Products in a software product line exist simultaneously and may vary from each other in terms of their behavior, quality attributes, platform, network, physical configuration, middleware, and scale factors, and in a multitude of other ways. Each product may well have its own architecture, which is an instance of the product line architecture achieved by exercising the variation mechanisms. Hence, unlike an organization engaged in single-system development, a product line organization will have to manage many related architectures simultaneously.

There must be documentation for the product line architecture as it resides in the core asset base and for each product's architecture (to the extent that it varies from the product line architecture). For the product line architecture, the views need to show the variations that are possible and must describe the variation mechanisms chosen, with the rationale for the variation. Furthermore, a description (the attached process) is required that explains how to exercise the mechanisms to create a specific product. The views of the product architectures, on the other hand, have to show how those variation mechanisms have been used to create this product's architecture. As with all core assets, the attached process becomes the part of the production plan that deals with the architecture.

Application to Core Asset Development:

The product line architecture is an early and prominent member of the collection of core assets. The architecture is expected to persist over the life of the product line and to change relatively little and slowly over time. The architecture defines the set of software components (and hence their supporting assets, such as documentation and test artifacts) that populates the core asset base. The product line architecture, together with the production plan, provides the prescription (harkening to the "in a prescribed way" in the definition of a software product line) for how products are built from core assets.
Application to Product Development:

Once it has been placed in the core asset base for the product line, the architecture is used to create product architectures for each new product, according to the architecture's attached process. If the product builders discover a variation point or a needed mode of variation that is not permitted by the architecture, they should bring it to the architect's attention; if the variation is within the product line's scope (or deemed desirable to add to the scope), the architecture may be enhanced to accommodate it. The "Operations" practice area deals with setting up this feedback loop in the organization.

Example Practices:

We categorize example practices for architecture definition into those concerned with understanding the requirements for the architecture, designing the architecture, and communicating or documenting the architecture.


Understanding the Requirements for the Architecture


Quality Attribute Workshops: Prerequisite to designing an architecture is understanding the behavioral and quality attribute requirements that it must satisfy. One way to elicit these requirements from the architecture's stakeholders is with an SEI Quality Attribute Workshop (QAW) [SEI 2007g]. QAWs provide a method for identifying the quality attributes that are critical to a system architecture, attributes such as availability, performance, security, interoperability, and modifiability. In the QAW, an external team facilitates meetings between stakeholders during which scenarios representing the quality attribute requirements are generated, prioritized, and refined (i.e., adding details such as the participants and assets involved, the sequence of activities, and questions about quality attribute requirements). The refined scenarios can be used in different ways, for example, as seed scenarios for an evaluation exercise or as test cases in an acquisition effort.

Use of the production strategy: The choice of variation mechanisms is strongly influenced by the organization's production strategy, which describes how the organization plans to build specific products from the core assets. For example, an organization may decide that an integration team will assemble each product by selecting from existing components. This strategy forces the architecture team to focus on component substitution as a variation mechanism; mechanisms that require additional coding may not be appropriate in this setting.

Planning for architectural variation: Nokia has used a "requirements definition hierarchy" as a way to understand which variations are important to particular products [Kuusela 2000a]. The hierarchy consists of design objectives (goals or wishes) and design decisions (solutions adopted to meet the corresponding goals). For example, a design objective might be "the system shall be highly reliable."
One way to meet that objective is to decree that "the system shall be a duplicated system." That, in turn, might mean that "the system shall have duplicated hardware" and/or "the system shall duplicate communication links." Another way to meet the reliability objective is to decree that "the system shall have a self-diagnostic capacity," which can be met in several ways. Each box in the hierarchy is tagged with a vector, each element of which corresponds to a product in the product line. The value of an element is the priority or importance given to that objective, or to the endorsement of that design decision, by the particular product. For example, if an overall goal for a product line is high reliability, being a duplicated system might be very important to Product 2 and Product 3 but not at all important to Product 1 (a single-chip system). The requirements definition hierarchy is a tool that the architect can use as a bridge between the product line's scope (see the "Scoping" practice area), which will tell what variations the architecture will have to support, and the architecture, which may support the variation in a number of ways. It is also useful to see how widely used a new feature or variation will be: should it be incorporated into the architecture for many products to use, or is it a one-of-a-kind requirement best left to the devices of the product that spawned it? The hierarchy is a way for the architect to capture the rationale behind such decisions.
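The hierarchy-with-priority-vectors idea can be sketched in a few lines. The node texts follow the reliability example above, but the priority values and the query function are invented for illustration, not taken from Nokia's actual tooling:

```python
# Hypothetical sketch of a requirements definition hierarchy: each node
# is a design objective or decision, tagged with a vector of per-product
# priorities (index i = product i). Priority values are invented.

class Node:
    def __init__(self, text, priorities, children=()):
        self.text = text
        self.priorities = priorities  # importance per product, 0 = irrelevant
        self.children = list(children)

# Products: [Product 1 (single-chip), Product 2, Product 3]
hierarchy = Node("system shall be highly reliable", [3, 9, 9], [
    Node("system shall be a duplicated system", [0, 8, 7], [
        Node("system shall have duplicated hardware", [0, 8, 6]),
        Node("system shall duplicate communication links", [0, 5, 7]),
    ]),
    Node("system shall have a self-diagnostic capacity", [3, 4, 4]),
])

def relevant_objectives(node, product_index, threshold=1):
    """Collect the objectives/decisions that matter to one product."""
    found = []
    if node.priorities[product_index] >= threshold:
        found.append(node.text)
    for child in node.children:
        found.extend(relevant_objectives(child, product_index, threshold))
    return found

# Product 1 (single-chip) endorses reliability and self-diagnosis,
# while the duplication branch is irrelevant to it:
print(relevant_objectives(hierarchy, 0))
```

Queries like this make it cheap to ask, per feature, "how widely will this variation be used?", which is exactly the scoping-to-architecture bridge described above.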

Designing the Architecture


Architecture definition and architecture-based development: As the field of software architecture has grown and matured, methods of creating, defining, and using architecture have proliferated. Many example practices related to architecture definition are described in widely available works [Kruchten 1998a, Jacobson 1997a, Hofmeister 2000a, Bachmann 2000a]. The Rational Unified Process (RUP) is a method used for object-oriented systems; a good resource is the book The Rational Unified Process: An Introduction [Kruchten 2004a].

Attribute-Driven Design (ADD): The SEI Attribute-Driven Design (ADD) method [SEI 2007a] is a method for designing the software architecture of a product line to ensure that the resulting products have the desired qualities. ADD is a recursive decomposition method that starts by gathering the architectural drivers: the combination of quality, functional, and business requirements that "shape" the architecture. The steps at each stage of the decomposition are
1. Choose architectural drivers: The architectural drivers are the combination of quality, business, and functional goals that "shape" the architecture.
2. Choose patterns and children component types to satisfy drivers: There are known patterns to achieve various qualities. Choose the solutions that are most appropriate for the high-priority qualities.
3. Instantiate children design elements and allocate functionality from use cases using multiple views: The functionality to be achieved by the product family is allocated to the component types.
4. Identify commonalities across component instances: These commonalities are what define the product line, as opposed to individual products.
5. Validate that the quality and functional requirements and any constraints have not been precluded from being satisfied by the decomposition.
6. Refine use cases and quality scenarios as constraints to children design elements: Because ADD is a decomposition method, the inputs for the next stage of decomposition must be prepared.


Architectural patterns: Architectures are seldom built from scratch but rather evolve from solutions previously applied to similar problems. Architectural patterns represent a current approach to reusing architectural design solutions. An architecture pattern is a description of component types and a pattern of their runtime control and/or data transfer [Shaw 1996a]. Architectural patterns are becoming a de facto design language for software architectures. People speak of pipe-and-filter, n-tier, client-server, or agent-based architectures, and these phrases immediately convey complex and sophisticated design information. Architectural pattern catalogs exist that explain the properties of a particular pattern, including how well suited each one is for achieving specific quality attributes such as security or high performance; Buschmann, Schmidt, and colleagues provide examples of such catalogs [Buschmann 1996a, Schmidt 2000a]. Using a previously cataloged pattern shortens the architecture definition process, because patterns come with pedigrees: what applications they work well for, what their performance properties are, where they can easily accommodate variation points, and so forth. Product line architects should be familiar with well-known architectural patterns as well as with patterns (well known or not) used in systems similar to the ones they are building.

Service-oriented architectures: One architectural pattern that is very popular now is the service-oriented architecture, in which the components are services. A service is a reusable, self-contained, distributed component with a published interface that stresses interoperability, is usually thread-safe, and is discoverable and dynamically bound. Service-oriented systems work by "stringing together" services that have specific, well-defined functionality into chains that do sophisticated, useful work. Services can be locally developed or (in theory) "discovered" on a company's intranet or even on the World Wide Web and bound at runtime. Standards exist for services communicating with each other via messaging based on the Extensible Markup Language (XML), for specifying what services do, and for the quality-of-service contracts necessary to ensure that a service provides the level of functionality and quality attributes required. A service-oriented architecture's basic variation mechanism is component replacement; that is, choosing different services or stringing together services in a different way to meet the needs of individual products.

Aspect-oriented software development (AOSD): AOSD is an approach to program development that makes it possible to modularize systemic properties of a program, such as synchronization, error handling, security, persistence, resource sharing, distribution, memory management, and replication, that would otherwise be distributed widely across the system, making it hard to change. An aspect is a special kind of module that implements one of these specific properties, which would otherwise cut across other modules. As that property varies, the effects "ripple" through the entire program automatically. For example, an AOSD program might define "the public methods of a given package" as a crosscutting structure and then say that all those methods should do a certain kind of error handling; this aspect would be coded in a few lines of well-modularized code. AOSD is an architectural approach because it provides a means of separating concerns that would otherwise affect a multitude of components constructed to separate a different, orthogonal set of concerns. AOSD is appealing for product lines because the variations can often be represented as aspects. A good starting point for understanding AOSD is provided at http://aosd.net.

Mechanisms for achieving variability in a product line architecture (1): Svahnberg and Bosch created a list of variability mechanisms for product lines that includes mechanisms for building variability into components. Svahnberg and Bosch also include these architectural mechanisms [Svahnberg 2000a]:

- configuration and module interconnection languages: used to define the build-time structure of a system, including selecting (or deselecting) whole components
- generation: used when there is a higher level language that can be used to define a component's desired properties
- compile-time selection of different implementations: #ifdefs can be used when variability in a component can be realized by choosing among different implementations
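Two of the mechanisms above can be illustrated together in a short sketch. Python has no `#ifdef`, so build-time implementation choice is emulated here with a per-product build description that names which whole components to "link"; all component and product names are invented:

```python
# Hedged sketch emulating two listed mechanisms in Python: a tiny
# "module interconnection" description selecting whole components per
# product, standing in for build-time configuration / compile-time
# selection of implementations. All names are illustrative.

class StandardCodec:
    def encode(self, text):
        return text.encode("ascii", errors="replace")

class SecureCodec:
    def encode(self, text):
        # Stand-in for an encrypting implementation.
        return bytes(b ^ 0x2A for b in text.encode("ascii", errors="replace"))

# "Module interconnection language" analog: per-product build
# descriptions naming which component implementations to assemble.
BUILD_CONFIG = {
    "low_end":  {"codec": StandardCodec},
    "high_end": {"codec": SecureCodec},
}

def build_product(product_name):
    """Select (or deselect) whole components per the build description."""
    parts = BUILD_CONFIG[product_name]
    return {role: component() for role, component in parts.items()}

low = build_product("low_end")
high = build_product("high_end")
print(low["codec"].encode("hi") != high["codec"].encode("hi"))  # True
```

The architecture fixes the roles (here, "codec") and their interfaces; the build description varies only which implementation fills each role, which is the essence of these selection-based mechanisms.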

Code-based mechanisms used to achieve variability within individual components are discussed in the "Component Development" practice area.

Mechanisms for achieving variability in a product line architecture (2): Philips Research Laboratories uses service component frameworks to achieve diversity in its product line of medical imaging systems [Wijnstra 2000a]. Goals for that family include extensibility over time and support for different functions at the same time. A framework is a skeleton of an application that can be customized to yield a product. White-box frameworks rely heavily on inheritance and dynamic binding; knowledge of the framework's internals is necessary in order to use it. Black-box frameworks define interfaces for components that can be plugged in via composition tools. A service component framework is a type of black-box framework that supports a variable number of plug-in components. Each plug-in is a container for one or more services, which provide the necessary functionality. All services support the framework's defined interface but exhibit different behaviors. Clients use the functionality provided by the component framework and the services as a whole; the assemblage is, itself, a component in the products' architecture. Conversely, units in the product line architecture may consist of or contain one or more component frameworks.

Mechanisms for achieving variability in a product line architecture (3): Bachmann and Clements sum up the current approaches for variation mechanisms in product line architectures and sketch an economics-based approach for choosing them [Bachmann 2005a].


Communicating and Documenting the Architecture


Architecture documentation: Recently, in the software engineering community, more attention has been paid to writing down a software architecture so that others can understand it, use it to build systems, and sustain it. The Unified Modeling Language (UML) is the most often used formal notation for software architectures, although it lacks many architecture-centric concepts. The SEI developed the Views and Beyond approach to documentation [Clements 2002a], which holds that documenting a software architecture is a matter of choosing the relevant views based on projected stakeholder needs, documenting those views, and then documenting the information that applies across all of them. Examples of cross-view information include how the views relate to each other and stakeholder-centric roadmaps through the documentation that let people with different interests find relevant information quickly and efficiently. The approach includes a three-step method for choosing the best views to engineer and document for any architecture, and the overall approach produces a result compliant with the Institute of Electrical and Electronics Engineers' (IEEE's) recommended best practice on documenting architectures of software-intensive systems [IEEE 2000a].

Specifying component interfaces: Interfaces are often specified using a contractual approach. Contracts state pre- and postconditions for each service and define invariants that express constraints on the interactions of services within the component. The contract approach is static and does not address the dynamic aspects of a component-based system, or even the dynamic aspects of a single component's behavior. Additional techniques such as state machines [Harel 1998a] and interval temporal logic [Moszkowski 1986a] can be used to specify constraints that deal with the ordering of events and the timing between events.
For example, a service may create a thread and assign it work that will not be completed within the service's execution window. A postcondition for that service would include the logical clause "eventually this work is accomplished."

A complete contract should include information about what will be both provided and required. The typical component interface specification describes the services that a component provides. To fully document a component so that it can be integrated easily with other components, the specification should also document the resources that the component requires. In addition, this documentation provides a basis for determining whether there are possible conflicts between the resources needed by the set of components that make up the application.

A component's interface provides only a specification of how individual services respond when invoked. As components are integrated, additional information is needed. The interactions between two components needed to achieve a specific objective can be
described as a protocol. A protocol groups together a set of messages from both components and specifies the order in which they are to occur.

Each component exhibits a number of externally visible attributes that are important to its use but are often (incorrectly) omitted from its interface specification. Performance (throughput) and reliability are two such attributes. The standard technique for documenting the performance of a component is the computational complexity of the dominant algorithms. Although this technique is platform independent, it is difficult to use when reasoning about satisfying requirements in real-time systems, because it fails to yield an actual time measure. Worse, it uses information that will change when algorithms (presumably encapsulated within the component) change. A better approach is to document performance bounds, setting an upper bound on the time consumed. The documentation remains true when the software is ported to a platform at least as fast as the current one, a safe assumption in today's environment. Cases in which the stated bounds are not fast enough can be resolved on a case-by-case basis. If the product can indeed meet the more stringent requirement on that product's platform, that fact can be revealed. If it cannot, either remedial action must be taken or the requirement must be relaxed.
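As a rough sketch of the contractual style described above, the following bounded buffer checks preconditions, postconditions and an invariant at run time. The Buffer component, its services and the capacity invariant are invented for illustration and are not from the report.

```cpp
// Hypothetical sketch of a contractual interface specification.
#include <cassert>
#include <cstddef>
#include <vector>

class Buffer {
public:
    explicit Buffer(std::size_t capacity) : capacity_(capacity) {
        assert(invariant());
    }

    // Contract: precondition !full(); postcondition: size grows by one.
    void put(int value) {
        assert(!full());                       // precondition
        std::size_t old_size = data_.size();
        data_.push_back(value);
        assert(data_.size() == old_size + 1);  // postcondition
        assert(invariant());
    }

    // Contract: precondition !empty(); returns the oldest element.
    int get() {
        assert(!empty());                      // precondition
        int value = data_.front();
        data_.erase(data_.begin());
        assert(invariant());
        return value;
    }

    bool empty() const { return data_.empty(); }
    bool full() const { return data_.size() == capacity_; }

private:
    // Invariant: the component never holds more elements than its capacity.
    bool invariant() const { return data_.size() <= capacity_; }

    std::size_t capacity_;
    std::vector<int> data_;
};
```

Note that these checks are static, per-service promises; constraints on the ordering and timing of calls across services (the protocol) are exactly what they cannot express, which is why the text turns to state machines and temporal logic.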

Practice Risks
The biggest risk associated with this practice area is failing to have a suitable product line architecture. An unsuitable product line architecture will result in

- components that do not fit together or interact properly
- products that do not meet their behavioral, performance, or other quality attribute goals
- products that should be within scope but that cannot be produced from the core assets at hand
- a tedious and ad hoc product-building process

These effects, in turn, will lead to extensive and time-consuming rework, poor system quality, and an inability to realize the product line's full benefits. If product teams do not find the architecture suitable for their products and easy to understand and use, they may bypass it, resulting in the eventual degradation of the entire product line concept. Unsuitable architectures could result from

- the lack of a skilled architect: A product line architect must be skilled in current and promising technologies, the nuances of the application domains at hand, modern design techniques and tool support, and professional practices such as the use of architectural patterns. The architect must know all the sources of requirements and constraints on the architecture, including those not traditionally specified in a requirements specification (such as organizational goals) [Clements 2002c, p. 58].

- the lack of sound input: The product line scope and production strategy must be well-defined and stable. The requirements for products must be articulated clearly and completely enough for architectural decisions to be reliably based on them. Forthcoming technology, which the architecture must be poised to accept, must be forecast accurately. Relevant domains must be understood so that their architectural lessons are learned. To the extent that the architect is compelled to make guesses, the architecture poses a risk.
- poor communication: The best architecture is useless if it is documented and communicated in ways that its consumers (for example, the product builders) cannot understand. An architecture whose documentation is chronically out of date is effectively the same as an undocumented architecture. Clear and open two-way communication channels must exist between the architect and the organizations using the architecture. Architecture documentation that is appropriate for architects and developers may not be good enough for other stakeholders, who, for example, may not understand UML diagrams.
- a lack of supportive management and culture: Management must support the creation and use of the product line architecture, especially if the architecture group is separate from the product development group. Failing this, product groups may "go renegade" and make unilateral changes to the architecture, or decline to use it at all, when turning out their systems. There are additional risks if management does not support the strong integration of system and software engineering.
- architecture in a vacuum: The exploration and definition of software architecture cannot take place in a vacuum separate from system architecture.
- poor tools: There are precious few tools for this practice area, especially ones that help with designing, specifying, or exercising an architecture's variation mechanisms, a fundamental part of a product line architecture. Tools for testing the compliance of products to an architecture are virtually nonexistent.
- poor timing: Declaring that an architecture is ready for production before it really is leads to stagnation, while declaring it too late may allow unwanted variation. Discretion is needed when deciding when and how firmly to freeze the architecture. The time required to fully develop the architecture may also be too long. If product development is curtailed while the product line architecture is being completed, developers may lose patience, management may lose resolve, and salespeople may lose market share.

Unsuitable architectures are characterized by

- inappropriate parameterization: Overparameterization can make a system unwieldy and difficult to understand. Underparameterization can eliminate some of the necessary system customizations. The early binding of parameters can also preclude easy customization, while the late binding of parameters can lead to inefficiencies.
- inadequate specifications: Components may not integrate properly if their specifications are sketchy or limited to static descriptions of individual services.

- decomposition flaws: Without an appropriate decomposition of the required system functionality, a component may not provide the functionality needed to implement the system correctly.
- wrong level of specificity: A component may not be reusable if it is too specific or too general. If the component is made so general that it encompasses multiple domain concepts, it may require complex configuration information to make it fit a specific situation and therefore be inherently difficult to reuse. The excessive generality may also tax performance and other quality attributes to an unacceptable point. If the component is too specific, there will be few situations in which it is the correct choice.
- excessive inter-component dependencies: A component may become less reusable if it has excessive dependencies on other components.
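The parameterization and binding-time trade-offs above can be made concrete with a minimal sketch; the retry-count variation point and both class names are hypothetical, not from the report.

```cpp
// Two bindings for the same variation point (a retry count). Early binding is
// efficient but customization requires a rebuild; late binding allows easy
// per-product customization at a small run-time cost.
#include <cassert>

// Early (compile-time) binding of the parameter.
template <int Retries>
class EarlyBoundSender {
public:
    int retries() const { return Retries; }
};

// Late (run-time) binding of the same parameter.
class LateBoundSender {
public:
    explicit LateBoundSender(int retries) : retries_(retries) {}
    int retries() const { return retries_; }

private:
    int retries_;
};
```

Over- and underparameterization are the two failure modes around such a design: exposing every internal constant this way makes the component unwieldy, while exposing none of them forces a new implementation per product.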

References:
http://www.sei.cmu.edu/productlines/frame_report/arch_def.htm
http://en.wikipedia.org/wiki/Product_family_engineering
http://homepages.inf.ed.ac.uk/perdita/product-line.html
http://dl.acm.org/citation.cfm?id=2019152

The Role of Product Line Architecture in Industries

Introduction
Product-line architectures have received attention in research, but especially in industry. Many companies have moved away from developing software from scratch for each product, focusing instead on the commonalities between the different products and capturing those in a product-line architecture and an associated set of reusable assets. This development is, especially in Swedish industry, a logical one, since software is an increasingly large part of products and often defines the competitive advantage. When software moves from a marginal to a major part of products, the effort required for software development also becomes a major issue, and industry searches for ways to increase reuse of existing software, to minimize product-specific development, and to increase the quality of software. In this report, we also present a case study of product-line architectures at two Swedish software development organizations, Axis Communications AB and Securitas Larm AB.

Since the beginning of the 1990s, both organizations have moved towards product-line-architecture-based software development, especially through the use of object-oriented frameworks as reusable assets. Since these organizations have considerable experience with this approach, we report on their way of organizing software development, the experiences obtained and the problems identified. The contribution of this report is, we believe, that it provides exemplars of industrial organizations in the software industry that can be used for comparison or as inspiration. In addition, the experiences and problems provide at least part of a research agenda for the software architecture reuse community and make the relations to other research communities more explicit.

1- Axis Communications AB

This company develops and sells network-based products, such as printer-, scanner-, camera- and storage-servers.

2- Securitas Larm AB.

This company produces security- and safety-related products such as fire-alarm, intruder-alarm and passage-control systems.

Case 1: Axis Communications AB
Axis Communications started its business in 1984 with the development of a printer server product that allowed IBM mainframes to print on non-IBM printers. Up to then, IBM had maintained a monopoly on printers for its computers, with consequent price settings. The first product was a major success that established the base of the company. In 1986, the company developed the first version of its proprietary RISC CPU, which allowed for better performance and cost-efficiency than standard processors for its data-communication-oriented products. Today, the company develops and introduces new products on a regular basis.

Since the beginning of the 1990s, object-oriented frameworks have been used in the company, and since then a base of reusable assets has been maintained from which most products are developed. Axis develops IBM-specific and general printer servers, CD-ROM and storage servers, network cameras and scanner servers. Especially the latter three products are built using a common product-line architecture and reusable assets. In figure 1, an overview of the product-line and product architectures is shown. The organization is more complicated than the standard case with one product-line architecture (PLA) and several products below this product line. In the Axis case, there is a hierarchical organization of PLAs, i.e., the top product-line architecture and the product-group architectures, e.g., the storage-server architecture.

Below these, there are product architectures, but since several product variations generally exist, each variation has its own adapted product architecture, because of which the product architecture could itself be called a product-line architecture. However, in this paper we use the term product-line architecture for the top level (or the two top levels, in the case of the storage- and printer-server architectures) and product architecture for the lower levels. The focus of the study is on the marked area in the figure, although the other parts are discussed briefly as well.

Figure 1. Product-Line and Product Software Architectures in Axis Communications

Orthogonal to the products, Axis maintains a product-line architecture and a set of reusable assets that are used for product construction. The main assets are a framework providing file-system functionality and a framework providing a common interface to a considerable set of network protocols, but smaller frameworks are used as well, such as a data-chunk framework, a smart-pointer framework, a toolkit framework providing domain-independent classes, and a kernel system for the proprietary processor providing, among other things, memory management and a job scheduler. In figure 2, the organization of the main frameworks and a simplified representation of the product-line architecture is shown.

Figure 2. Overview of the main frameworks used in Axis products and a simplified version of the product-line architecture
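The organization shown in figure 2, a small abstract framework with larger concrete specializations, can be sketched roughly as follows. The class and method names are invented for illustration, since the actual Axis code is not public.

```cpp
// Sketch of an abstract framework with per-standard specializations.
#include <cassert>
#include <string>

// The abstract design defines the common interface (the small core).
class FileSystem {
public:
    virtual ~FileSystem() = default;
    virtual std::string standardName() const = 0;  // hot spot to specialize
    virtual bool supportsLongNames() const = 0;
};

// Each concrete specialization implements one file-system standard.
class Iso9660FileSystem : public FileSystem {
public:
    std::string standardName() const override { return "ISO 9660"; }
    bool supportsLongNames() const override { return false; }
};

class FatFileSystem : public FileSystem {
public:
    std::string standardName() const override { return "FAT"; }
    bool supportsLongNames() const override { return true; }
};
```

Products would program against the abstract interface, so each additional standard only adds a specialization rather than changing clients; this matches the size profile described below, where the abstract core stays small while each specialization adds a comparable amount of code.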

The size of the frameworks including their specializations is considerable, whereas the abstract frameworks are rather small. The abstract design of the file-system framework is about 3500 lines of code (LOC). However, each specialization of the framework, implementing one file-system standard, is also about 3500 LOC, and since the framework currently supports 7 standards, the total size is about 28 KLOC. In the protocol framework, the concrete specializations are even larger. The abstract protocol framework is about 2500 LOC. The framework contains three major specializations, i.e., Netware, Microsoft SMB and TCP/IP, and a few smaller specializations operating on top of the mentioned protocols. The total size of the framework is about 200 KLOC, due to the large size of the concrete specializations. For example, the implementation of the Netware protocol is about 80 KLOC. In addition to these frameworks and the PLA, the other, smaller frameworks are part of most products, and each product contains a substantial amount of product-specific code. A product can, consequently, contain up to 500 KLOC of C++ code.

Axis makes considerable use of software engineering methods and techniques. As mentioned, the object-oriented paradigm is used throughout the organization, including more advanced concepts such as object-oriented frameworks and design patterns. The company also makes use of peer review of software, collects test metrics, performs project follow-ups and has started to put effort into root-cause analysis of problems identified after products have been put into operation in the field.

Systems development at Axis was reorganized into business units about a year ago. Each business unit has responsibility for a product or product category, e.g., storage servers. Earlier, all engineers had been part of a single development department. The reorganization was caused, among other things, by the identified need to increase the focus on individual products. The product-line architecture and its associated assets, however, are shared between the business units, and responsibility for each asset is assigned to guide its evolution.

Evolution of the products, the PLA and the reusable assets is a major challenge. The hardware of products evolves at a rate of 1-2 times per year. Software, being more flexible, has more frequent updates, i.e., 2-4
major updates per year depending on the product. Since the products are equipped with flash memory, customers can, after having obtained the product, upgrade (for free) by downloading and installing a new version of the software. The evolution is driven by changing and new requirements. These requirements originate from customers and from future needs predicted by the business unit. The decision process involves all stakeholders and uses both formal and informal communication, but the final decision is taken by the business unit manager. The high level of involvement of, especially, the engineers is very important due to the extreme pressure on time-to-market of product features; if the engineers did not commit to this, it might be hard to meet the deadlines.

The evolution of the product-line architecture and the reusable assets is driven by new product features. When a business unit identifies a need for asset evolution, it will, after communicating with the other business units and the person responsible for the asset, basically proceed to extend the asset, test it in its own context and publish it so that other business units can benefit from the extension as well. Obviously, this process creates a number of problems, as discussed later in the paper, but these have, so far, proven to be manageable.

Case 2: Securitas Larm AB


Securitas Larm AB, earlier TeleLarm AB, develops, sells, installs and maintains safety and security systems such as fire-alarm systems, intruder-alarm systems, passage-control systems and video-surveillance systems. The company's focus is especially on larger buildings and complexes, which require integration between the aforementioned systems. Therefore, Securitas has a fifth product unit developing integrated solutions for customers that include all or a subset of the aforementioned systems. In figure 3, an overview of the products is presented.

Figure 3. Securitas Larm Product Overview

Securitas uses a product-line architecture only in the fire-alarm products, in practice only the EBL 512 product, and traditional approaches in the other products. However, due to the success in the fire-alarm domain, the intention is to expand the PLA in the near future to include the intruder-alarm and passage-control products as well. Different from most other approaches, where the product-line architecture contains only the functionality that is shared between various products, the fire-alarm PLA aims at encompassing the functionality of all fire-alarm product instantiations. A powerful configuration tool, Win512, is associated with the EBL 512 product; it allows product instantiations to be configured easily and supports troubleshooting.

The products of Securitas are rather different from the products mass-produced by Axis. A fire-alarm system, for example, requires
considerable effort in installation, testing, troubleshooting and maintenance, and the acquisition of such a system generally involves a long-term relationship between the customer and Securitas. Consequently, the number of products for Securitas is on the order of magnitude of hundreds per year, whereas for Axis the order is in the tens of thousands per month.

The development at Securitas is organized in a single development department. A few years ago, the engineers were located in the business units organized around the product categories. However, due to the small size of the engineering group in each business unit, generally a handful, and the fact that much similar work was performed in the business units, it was decided to reorganize development into a development department that acts as an internal supplier to business units responsible for marketing, installation and maintenance of the products.

The development department uses a number of software engineering techniques and methods. Since the beginning of the 1990s, the object-oriented paradigm has been adopted and, consequently, concepts such as object-oriented frameworks and design patterns are used extensively. Peer and team reviews are used for all major revisions of software and for all critical parts of the systems. Since the organization is ISO 9000 certified, the decision and development processes are documented and enforced. System errors that appear after systems have been put into operation are logged, and the countermeasures are documented as well.

Some of the problems the development department is concerned with are the following. No suitable tools for automated testing of updated software have been found, although there is a considerable need. In general, the engineers identify a lack of tool support for embedded systems, such as compilers for the chosen programming language targeting the chosen microprocessor. It has proven notoriously hard to accurately predict the memory requirements of the software for products. Since hardware and software are co-designed, the supported memory size has to be predicted early in the project. To minimize cost, one wants to minimize the maximum amount of memory supported by the hardware; however, on several occasions, early predictions have proven to be far too optimistic. Finally, since each product area has an associated organizational product unit and the development department acts as an internal supplier to these product units, benefiting from the commonalities between the different products has proven nearly impossible, despite the considerable potential.

a- Problems while using product line architecture

a-1 Background knowledge

Problem.
Software engineers developing or maintaining products based on a product-line architecture require considerable knowledge of the rationale and concepts underlying the product line and of the concrete structure of the reusable assets that are part of the PLA. This is generally true for reuse-based software engineering, but when using PLAs the amount of required knowledge seems to be even larger. Rather than having knowledge of a component's interface, software engineers need to know about the architecture for which the asset was defined, the semantics of the behavior of the component and the quality attributes for which the component was optimized.

Example.
New engineers starting at Axis generally require several months to gain a still superficial overview of the PLA and its assets. Only a few engineers in the organization have a deep understanding of the complete PLA, and it has been observed that the learning process basically never stops. Understanding the philosophy behind the PLA is important because new engineers should develop their software in compliance with the architecture. Although architecture erosion can never be avoided completely, it should at least be minimized.

Causes.
Today's software products are often large and complex. The complexity of software is due both to the inherent complexity of the problem domain and to less-than-optimal designs, resulting in, e.g., insufficient modularization and overly wide interfaces between components. Secondly, it is generally harder to understand abstractions than concrete entities. Thirdly, the lack of documentation and of proven documentation techniques is another cause (see also section 4.6). Finally, standard solutions, such as those available for compiler construction, are lacking in the domains in which Axis and Securitas operate. If such standards are present, education programs often incorporate these solutions, requiring considerably less effort from new engineers to understand new systems since they already have a context.

Solutions.
Although there are no solutions that solve this problem completely, some approaches will reduce it. First, a first-class, explicit representation of the product-line architecture and of the architecture of the large assets should be available, so that all software can be placed in a conceptual context. Second, all design and redesign of the PLA and the assets should aim at minimizing the interfaces between components. Finally, although optimal documentation techniques are not available, using today's documentation techniques to provide solid documentation is useful support.

Research issues.
A number of research issues can be identified. First, both for representations and for programming languages, one can identify a lack of support for high-level abstractions that capture the relevant aspects while leaving out unnecessary details. Secondly, the design and acceptance of standard solutions for domains should be stressed. It is not relevant whether a standard is formal or de facto, but whether it becomes part of computer science and software engineering education programs. Finally, novel approaches to documentation are required, as well as experimentation with and evaluation of existing approaches to identify their strengths and weaknesses.

a-2 Information distribution


Problem.
The software engineers constructing software based on or related to the product-line architecture need to be informed about new versions, extensions and other relevant information in order to function optimally. However, since so many people are involved in the process, it proves, in practice, to be very hard to organize this information distribution. If engineers are not properly informed, several problems can result, such as duplicated work, software depending on outdated interfaces, etc.

Example.
This problem was primarily identified at Axis, and it may be related to the organizational structure, i.e., the business units. Since potentially all business units may generate new versions of the reusable assets, software engineers have a hard time figuring out the functionality of the latest version and its differences from the most recent version they worked with. Although information about an asset extension is broadcast once the new version is available, other business units are unaware of it during development. This has led to conflicts on a number of occasions.

Causes.
The problems associated with information distribution can be attributed to a number of causes. First, with increasing size and the organization into business units, informal communication channels become clogged and more formalized communication channels are required. Secondly, a defined and enforced process for asset evolution is required, so that software engineers know when to distribute and when to expect information. Thirdly, the business unit structure shifts the focus from commonalities to differences between products, since software engineers only work with a single
product category instead of multiple ones. Finally, there are no visible differences between versions of assets, unlike the unique interface identifiers in Microsoft COM [Szyperski 97], where an updated interface leads to a new interface identifier.
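The COM-style visible versioning mentioned above can be sketched as follows; the string identifiers stand in for COM's 128-bit interface identifiers, and the Component type is an illustrative invention.

```cpp
// Sketch: each interface revision gets a distinct identifier, so a client
// requesting an outdated interface fails visibly at lookup time instead of
// silently relying on stale behavior.
#include <cassert>
#include <set>
#include <string>

using InterfaceId = std::string;  // real COM uses 128-bit GUIDs

struct Component {
    std::set<InterfaceId> exported;  // identifiers of the interfaces provided

    bool provides(const InterfaceId& id) const {
        return exported.count(id) != 0;
    }
};
```

A client written against "IFileSystem.v1" would then notice, when it asks the component, that an updated asset exports only "IFileSystem.v2".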

Solutions.
The interviewed companies do not use separate domain engineering units and are very hesitant about their usefulness (see section 5.1 for a detailed discussion). However, establishing separate organizational units responsible for reusable assets and their evolution would address several of the aforementioned causes. In either case, defining and, especially, enforcing explicit processes around asset evolution would solve some of the problems.

Research issues.
The primary research issue concerns the processes surrounding asset evolution. More case studies and experimentation are required to gather evidence of working and failing processes and of mandatory and optional steps. A second research issue is the visibility of versions in software: although the strict Microsoft COM model has clear advantages, it does not fit traditional object models (since interfaces and objects are decoupled through a forwarding interface), and there are other disadvantages associated with the approach as well.

a-3 Multiple versions of assets

Problem.
The reusable assets that are part of the product line are generally optimized for particular quality attributes, e.g., performance or code size. Different products in the product line, even though they require the same functionality, may have conflicting quality
requirements. These requirements may have such high priority that no single component can fulfil both. The reusability of the affected asset is then restricted to only one or a few of the products, while other products require another implementation of the same functionality.

Example.
At Axis, the printer server product was left out of the product-line architecture (although it can be considered a PLA of its own, with more than 10 major variations) because minimizing the binary code size is the driving quality attribute for the printer server, whereas performance and time-to-market are the driving quality attributes for the other network-server products. One can even argue that the printer server is a much more mature product that has come considerably further in its lifecycle than the storage, camera and scanner products. The driving quality attributes of a product tend to change during its lifecycle from feature- and time-to-market-driven to cost- and efficiency-driven.

Causes.
The main cause of this problem is incompatible quality requirements for a particular asset. For example, it may be impossible to incorporate both the performance and the code-size requirements in a single component because they conflict with each other. A second cause is that domain functionality and quality-attribute-related functionality (as well as the structure of the asset) become heavily intertwined early in the design process, thus not allowing for, e.g., a component with conditional code. Finally, since business units focus on their own quality attributes and design for achieving those during asset extension, multiple versions of assets may be created even though a unified solution may exist.

Solutions.
A solution aiming at minimizing the number of implementations of assets is to relax the quality requirements for one or more of the product categories, thereby allowing all requirements to be incorporated in one version of the asset. In addition, a separate domain
engineering unit may, because its focus is shifted from products to reusable assets, find unified solutions where product engineering units may not.

Research issues.
An important research issue is to find approaches that allow for late composition of domain functionality and quality-attribute-related functionality. Examples of this can be found in aspect-oriented programming [Kiczales et al. 97] and in the layered object model [Bosch 98a, Bosch 98b]. In addition, evaluation techniques for assessing the effects of extensions and changes on the quality attributes of an asset early in the design process would help identify potential conflicts.
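In the spirit of the cited approaches, one way to compose domain functionality with quality-attribute-related functionality late is a policy-based sketch; all names here are hypothetical and the "asset" is deliberately trivial.

```cpp
// Domain code and quality-attribute code kept separate until composition.
#include <cassert>
#include <map>

// Domain functionality, free of quality-attribute decisions.
struct SquareComputer {
    int compute(int x) const { return x * x; }
};

// Policy 1: minimal code size, no caching.
struct NoCache {
    template <typename F>
    int apply(const F& f, int x) { return f.compute(x); }
};

// Policy 2: trade memory for speed by memoizing results.
struct MemoCache {
    std::map<int, int> memo;
    template <typename F>
    int apply(const F& f, int x) {
        auto it = memo.find(x);
        if (it != memo.end()) return it->second;
        return memo[x] = f.compute(x);
    }
};

// The asset composes the two dimensions at instantiation time.
template <typename CachePolicy>
struct LookupAsset {
    SquareComputer domain;
    CachePolicy policy;
    int get(int x) { return policy.apply(domain, x); }
};
```

A code-size-driven product would instantiate LookupAsset&lt;NoCache&gt; and a performance-driven one LookupAsset&lt;MemoCache&gt;, without duplicating the domain code, which is the conflict described in the causes above.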

a-4 Documentation

Problem.
Although most software is documented for maintenance purposes, documentation techniques explaining how to reuse software are still considerably less mature (see [Mattson 96] for a detailed discussion). This problem is complicated by the low priority of asset documentation in most organizations and by the backlog of most documentation, leaving the software engineer uncertain about whether the documentation is valid for the latest version of the reusable asset. One interviewed software engineer suggested requiring executable code in the documentation, so that one could check the correctness of part of the documentation by compiling the associated example code.
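The executable-documentation idea can be sketched as follows; SmartPointer is an invented stand-in for a documented reusable asset (the report mentions a smart-pointer framework at Axis), and the example function is the part that would live in the documentation.

```cpp
// If the documented usage example is real code that the build compiles and
// runs, stale documentation fails the build instead of misleading readers.
#include <cassert>

template <typename T>
class SmartPointer {
public:
    explicit SmartPointer(T* p) : p_(p) {}
    ~SmartPointer() { delete p_; }
    T& operator*() const { return *p_; }
    SmartPointer(const SmartPointer&) = delete;
    SmartPointer& operator=(const SmartPointer&) = delete;

private:
    T* p_;
};

// --- documentation example: kept compilable and checked by the build ---
int documentedExample() {
    SmartPointer<int> p(new int(42));
    return *p;  // the asset dereferences like a raw pointer
}
```

If the asset's interface later changes, this documentation example stops compiling, which surfaces the outdated documentation immediately.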

Example.
At Axis, both the protocol framework and the file-system framework have evolved considerably recently. One product,
CD servers, a product in the storage-server category, is still using an old version of the file-system framework. When investigating how to upgrade their software to the latest versions of all assets, the engineers identified the aforementioned documentation problems.

Causes.
First, documentation generally has a low priority compared to other tasks. This is reinforced by the availability of experienced engineers who know the assets well enough to answer the questions normally answered by documentation. Obviously, this approach, although it works in small development organizations, easily fails in larger departments. Secondly, because a documentation backlog exists, the most relevant version, i.e., the latest one, is never documented.

Solutions.
Defining documentation as an explicit part of the asset evolution process, not allowing engineers to proceed without delivering updated documentation as well, might alleviate the situation. Secondly, documentation as an activity has to receive higher status and more support from management. Thirdly, several approaches to documenting reusable assets exist, such as example applications, recipes, cookbooks, pattern languages, interface and interaction contracts, design patterns, framework overviews and reference manuals. Although none of these is perfect, documentation using one or more of these techniques is certainly preferable to not documenting at all.

b- Some issues while implementing product line architecture in industries

Different from problems, issues represent fundamental choices for the development organization related to organizational issues, process issues or software design issues. On some issues, the two organizations made the same decisions, whereas on other issues they are on different sides.

b-1 When to split off products from the product line

Another difficult issue to decide upon is when to separate a product from the product line, or when to merge a product with the product line. In the case of Axis, the printer server software was kept out of the network-server product line for three reasons: the printer server product contains considerable amounts of software specific to printer servers; traditionally the printer server software was written in C, whereas the product-line software was written in C++, i.e., a programming-language mismatch; and, thirdly, the quality requirements for the printer server were different from the quality requirements for the other network products. In the printer server, code size was the primary requirement, with time-to-market as a secondary requirement, whereas in the other network-server products, performance and time-to-market were both primary requirements. The difference in quality requirements called for a different organization of the software assets, optimizing their usability for the other network-server products. Deciding to include or exclude a product in the product line is a complex decision to make, involving many aspects. Guidelines or methods for making more objective decisions would be valuable to technical managers.

B-2

Related Work

The authors identify four elements of product-line development, i.e., process-driven, domain-specific, technology support and architecture-centric. The lessons learned during the project are discussed and a set of guidelines is presented. That study concerned the introduction of product-line based development, whereas we investigated the problems of product-line based development after its introduction. A second difference between our studies is that the companies studied in this paper use a product-line architecture as part of their main business and are critically dependent on it for their success and survival. Finally, the types of business domains of the companies in the studies are fundamentally different.

Conclusions on the role in industries


Product-line architectures have received attention, especially in industry, since they provide a means to exploit the commonalities between related products and thereby reduce development cost. We have presented a case study involving two Swedish companies, Axis Communications AB and Securitas Larm AB, that use product-line architectures in their product development.

In conclusion, product-line architectures can be, and are, successfully applied in small and medium-sized enterprises. These organizations are struggling with a number of difficult problems and challenging issues, but the general consensus is that a product-line architecture approach is beneficial, if not crucial, for the continued success of the interviewed organizations.

References
These are the references from which we collected the data on the role of product-line architecture in industries:

http://www.janbosch.com/Articles/PLA-casestudy.PDF

http://c2.com/cgi/wiki?SoftwareProductLine

http://www.mendeley.com/research/architecturebased-evolutionmanagement-method-software-product-line/.


Survey of product line architecture


The architecture of a software system defines that system in terms of computational components and connections among those components [SG96, p. 1]. A software product line, in turn, is a set of systems which share a common software architecture and set of reusable components [Bos00, p. 2]. According to Jazayeri et al., a product family software architecture (a product-line architecture) defines the concepts, structure, and texture necessary to achieve variation in features of variant products while achieving maximum sharing of parts in the implementation [JRvdL00, p. 27]. Concerning software architectures in general, different architectures may have different styles, and different architectures support different quality attributes. Architectural styles are closely related to patterns in that a certain style may be best suited to a particular type of problem. To make sure that an architecture fulfills its quality requirements, the architecture must be analyzed and assessed against these requirements. Creating an architecture is the first step in creating a system. An architecture description is needed when moving from an architectural design to a code framework. For explicit description, there are architectural description languages. Referring especially to product-line architectures, an important task is to analyze the domain and to identify the commonalities and variabilities of the objects and operations of that domain. Product-line architectures can also be found in existing systems by analyzing their commonalities. This is called architecture recovery. After using the products of a product line, new requirements usually arise for these products. These requirements may suggest modifications also to the product-line architecture. Thus, architectures can evolve. This report provides an overview of product-line architectures considering the aspects described above. The following areas concerning the topic are discussed: domain analysis and domain engineering, variation management, design and styles,
modeling and description, analysis and assessment, development and evolution, recovery and reengineering, reuse, and testing. The areas considered in the report are not separate. For example, variation among the products of the same family (Section 3) is analyzed during domain engineering (Section 2), and variation among components (Section 3) enables reusing those components (Section 9). In addition, architectural recovery (Section 8) is associated with domain engineering (Section 2) because recovery exploits the commonality analysis that belongs to domain engineering. Thus, these areas are connected to each other, although they are considered in different sections. Besides the subjects considered in this report, there are areas that are related to product-line architectures but not discussed here in detail. Such are, for example: components, patterns, frameworks,
architectural methods.

Domain analysis and domain engineering


Domain analysis considers the scope of the domain, covering the objects, operations, and relationships of a certain area. The result of domain analysis is a domain model which describes the identified objects, operations, and relationships. Domain analysis is an early step in any programming project, not only in product-line architectures. Domain engineering, in turn, is associated with product-line architectures. It considers both the commonalities and variabilities within the product family. According to Ardis et al., the first stage of domain engineering is domain analysis, in which domain experts collect and document their knowledge of the product family [ADH+00]. Besides domain analysis and domain engineering, there is a related term, scoping, meaning determination of the boundaries of the product line.
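
A domain model of this kind can be sketched as a simple record of objects, operations, and relationships; the structure and example entries below are assumptions for illustration, not a notation from the cited sources.

```python
from dataclasses import dataclass, field

# Minimal sketch of a domain model as produced by domain analysis.
@dataclass
class DomainModel:
    objects: set = field(default_factory=set)
    operations: set = field(default_factory=set)
    relationships: set = field(default_factory=set)  # (object, relation, object)

model = DomainModel()
model.objects |= {"Document", "Printer"}
model.operations |= {"print", "queue"}
model.relationships.add(("Document", "printed-on", "Printer"))
```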

Domain and domain analysis


The term domain can be used in several senses [Sch00]: business area,
collection of problems (problem domain), collection of applications (solution domain), area of knowledge with common terminology. Domain analysis can apply different approaches. It can concentrate on describing what is inside the domain, what the boundary of the domain is, or what is outside the domain [Sch00]. The first case describes the items that constitute the domain, or it identifies other domains that together form the actual domain (domains can have sub-domains). The second case describes the rules of inclusion and exclusion. In addition, structure and context diagrams can be produced both to describe the boundary of the domain and to show the relation of the domain to the outside. Domain analysis is concerned with product-line architectures, but it can be exploited in other contexts, too. It can be used in considering legacy systems and in exploring how to transform legacy systems into a common architecture as follows [BCC+99, pp. 31-32]: Legacy products are analyzed to consider whether they are appropriate to be used in product lines. This includes comparing the functional capabilities
of the products to those needed in product lines. The general product concept can be analyzed for feasibility. Domain analysis is used to give understanding of the structure and state of the domain for which a product line should be constructed. Legacy products represent the history of a domain while the product line to be built represents its future, and the former informs the latter. Domain requirements can be analyzed in order to maintain them according to changing market needs and technologies. When the product line evolves, the domain model must evolve as well. Moreover, the domain model must accommodate new product requirements.

Domain engineering
Domain engineering is associated with product-line architectures. It studies how the products of the same family share a common basis and how they differ from each other [ADH+00]. According to Bergey et al., domain engineering means development and acquisition of the core assets of the product line [BFG+00, p. 5]. Domain engineering is a part of the Family-Oriented Abstraction, Specification,
and Translation (FAST) process [ADD+00, ADH+00, CHW98, WL99]. The FAST process is a product-line development process covering all the phases of producing families of architectures. Figure 1 introduces the FAST process, which is divided into two phases: domain engineering and application engineering. The purpose of domain engineering is to understand the relationships among the products of the product family: both their commonalities and variabilities. This understanding is translated into technology such as a common set of subroutines or a domain-specific language. This technology is called the application engineering environment. Application engineering uses this environment to produce the members of the product family. Feedback from application engineering suggests modifications to the application engineering environment. The modifications are made after considering their impact on the original domain analysis effort. Domain engineering (and the FAST process) involves cost estimation of the product-line approach. It should be decided whether there are sufficient potential family members to justify the investment in domain engineering. It should also be considered to what extent it pays to generate family members. At the early initiation
stage, the product-line approach incurs higher costs and provides less benefit than producing a single product. However, further deployment of products from the product line will be more efficient.

[Figure 1: Domain engineering according to the FAST process [ADH+00]]

Domain engineering is very close to scoping, which defines which products and features are included in the product line and which ones are excluded. There are three different levels of scoping: product line, domain and asset base [Sch00]: Product-line scoping identifies the specific requirements and individual products that should be part of the product line. Domain scoping identifies appropriate boundaries for conceptual groups of functionality that are relevant to the domain. Asset scoping identifies the various elements that should be made reusable.
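
The three scoping levels can be illustrated with toy data; the products, features, and domain groups below are invented for the example.

```python
# Product-line scoping: which products exist and which features each one has.
product_line = {
    "camera_server": {"networking", "streaming"},
    "print_server":  {"networking", "spooling"},
}

# Domain scoping: conceptual groups of functionality relevant to the domain.
domains = {"connectivity": {"networking"}, "output": {"streaming", "spooling"}}

# Asset scoping: features needed by every product are reuse candidates.
shared_assets = set.intersection(*product_line.values())
print(shared_assets)  # {'networking'}
```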

Commonality analysis
As mentioned, domain engineering considers the similarities and differences between the members of the product family. This kind of analysis is called commonality analysis, although it also covers the variabilities. Commonality analysis identifies and makes useful the abstractions that are common to all family members [Wei98]. There are two main sources of abstraction: terminology and commonalities. Terms concerning the product-line architecture make communication among developers easier and more precise. As another source of abstraction, commonalities are actually assumptions that are true for
all family members. Besides commonalities, it is important to consider variabilities among family members. Variabilities provide a way to prepare for potential changes by pinpointing those decisions concerning family members that are likely to alter over the lifetime of the family. The result of commonality analysis is a commonality document consisting of the following sections [ADH+00]: Overview describes the domain and its relation to other domains. Definitions provide a standard set of technical terms. Commonalities consist of a structured list of assumptions that are true for every member of the family. Variabilities consist of a structured list of assumptions about how family members differ. Parameters of variation consist of a list of parameters that refine the variabilities, adding a value range and binding time for each. Issues
form a record of important decisions and alternatives. Scenarios are examples used in describing commonalities and variabilities. Commonality analysis can be used for several purposes [Wei98]. It can be used in the further phases of the FAST process, for example, in designing a domain-specific language and then in generating code and documentation from the language specification for each product. It serves as a basis for a family architecture and as reference documentation. Commonality analysis can also be exploited in reengineering the members of a product family. It can be used as a training aid for new project members. In addition, a plan for the evolution of the family can be derived from commonality analysis.
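
The structure of a commonality document, and in particular the parameters of variation with their value ranges and binding times, can be sketched as data. The field names follow the sections listed above; the concrete entries are invented for illustration.

```python
from dataclasses import dataclass

# One parameter of variation: refines a variability with a value
# range and a binding time, as described above.
@dataclass
class ParameterOfVariation:
    variability: str
    value_range: tuple
    binding_time: str  # e.g. "compile-time" or "run-time"

commonality_document = {
    "commonalities": ["every family member logs its errors"],
    "variabilities": ["members differ in the number of supported channels"],
    "parameters_of_variation": [
        ParameterOfVariation("supported channels", (1, 16), "compile-time"),
    ],
}
```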

Variation management is an important topic in software product lines. Although it is a part of domain analysis, it is considered in a section of its own (Section 3).

Variation management
The products belonging to the same product family have much in common. However,
there is also variation among the products of the same family. Variation is provided according to different users or different design and implementation requirements. Variation can be identified during domain analysis. It is important to take variation into account in the early phases of designing a product-line architecture and to handle variation at the architecture level instead of the code level [CAJ98]. Variation is associated with reusability. To enable reuse, components should be applicable in different contexts, which suggests that there must be variation among the components. Adjusting a component is achieved in two ways: via component variation and via component adaptation [Bos00, pp. 224]. Variation is possible at particular variation points at which the behavior of the component can be changed. These variation points should be decided during component design. Adaptation is needed when the variability of a component is not sufficient. Variation is also associated with architecture evolution. Evolution affects how variability is handled in software product lines [SB00]. For example, variability can be handled by selecting component implementations. In addition, component
interfaces may evolve, affecting the way they can be used in variant products.

3.1 Variation categories

The requirements or feature properties of product lines can be divided into the following categories [CAJ98, KCH+90, LM97, TCY93, Tra95]: mandatory requirements are supported in all systems in a domain, optional requirements are only required in some systems, alternative requirements are alternatives for each other, prerequisite requirements are needed for other requirements. From another point of view, variability can be divided into three categories called axes of variability [DMNS97, MHM98]: Feature variability means variation in the definition and implementation of a specific feature, or additional features. Such are, for example, variation in checking the duplication of messages, or providing a choice of pleasing alerts in addition to a standard alert. Hardware platform variability means variation in the type of microcontroller, memory, and devices that need to be supported.
Performances and attributes variability means variation in the required performances, such as the number of back-to-back messages to be received, and in attributes such as failure handling and concurrency support.

3.2 Variation levels

Variability can occur at different levels in the design [SB00]. These levels are: product line level, product level, component level, sub-component level, code level. Variability at product line level defines how the different products in the product line vary. Components for different products are selected, and the product-specific code to be used is selected or generated. Variability at product level defines the architecture and choice of components for a particular product. The components are fitted together to form a product architecture, and the product-specific code is customized for the particular product variation. At this level, it is also considered how to cope with evolving interfaces.
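
The requirement categories of Section 3.1 (mandatory, optional, alternative, prerequisite) can be sketched as a validity check over a product's feature selection; the feature names below are hypothetical.

```python
# Check a feature selection against the requirement categories of
# Section 3.1. Optional features need no check: they may or may not appear.
def valid_selection(selected, mandatory, alternatives, prerequisites):
    if not mandatory <= selected:                  # every mandatory feature present
        return False
    for group in alternatives:                     # exactly one per alternative group
        if len(selected & group) != 1:
            return False
    for feature, needed in prerequisites.items():  # prerequisites satisfied
        if feature in selected and not needed <= selected:
            return False
    return True

ok = valid_selection(
    selected={"core", "gui", "alerts"},
    mandatory={"core"},
    alternatives=[{"gui", "cli"}],      # pick exactly one user interface
    prerequisites={"alerts": {"gui"}},  # alerts require a gui
)
print(ok)  # True
```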


Variability at component level defines the component implementations to be selected into the product. A component can be considered as an abstract object-oriented framework with several framework implementations. At this level, the set of framework implementations is selected. It is taken into account how to enable the addition and use of several component implementations, and how to design the component interface to adapt to the addition of more concrete implementations. Variability at sub-component level defines the features that form a component for a particular product. Not all features of a component are needed in all products. Thus, to avoid dead code, unnecessary features should be removed. However, removing features may affect other components, and their implementations may require modifications, too. As a consequence, variability at this level concerns removing and adding parts of a component. The actual evolution and variability described above is implemented at code level. If variability has been considered properly at the upper levels, the code level calls only for checking that the provided class interfaces match the required interfaces.
However, when the components and classes evolve, their interfaces may change as well. Several components and component implementations use these interfaces. In addition, different products may use different versions of a component implementation and its interface. All these variabilities must be considered at code level. As shown, variability occurs at different levels. However, it is important to consider variability already at the higher architectural levels, not only at code level [CAJ98]. It is more intuitive to think about variations at a higher level before implementing them at a lower level. Business goals and constraints can be expressed more naturally at a higher level, when implementation details need not be considered yet. Different types of variation can be implemented by customizing the architecture at the design level. Moreover, variation at the architecture level may be equivalent to variations at the code level. Thus, less work is required at the code level if variation is considered already at higher levels.

3.3 Variation mechanisms

There are different mechanisms to enable variation [Bos00, SB00]: Inheritance
Inheritance can be used if the component is implemented as a class in an object-oriented language. Through inheritance and late binding, each component can be specialized from its superclass for each specific context. Extensions Extension means the kind of variation in which the user selects one of several behavioral variants. There is typically a stable core functionality, and each variable functionality is modeled as an independent entity. The user can select an existing variant entity or introduce a new one. For example, the strategy design pattern [GHJV95] uses extensions. Configuration Configuration allows variation where all variants are present at all variation points. The user may select appropriate files and set parameters to connect modules and components to each other. Parametrization, templates, and macros These variability techniques are used when parameters or macro expressions can be introduced and later instantiated with the actual parameter or by expanding the macro. In template instantiation, components are configured
with application-specific types. This variation can be applied in list or queue implementations for different element types. Generation A generator requires as its input a specification written in some domain-specific or component-specific language. The generator then translates this specification into a source-code-level component which can be attached to the product or application. For example, graphical user interfaces can be generated from graphical or textual specifications. Compiler directives Compiler directives (like #ifdef in C++) can be used at compile time to select between different implementations in the code. Different variation mechanisms can be applied at different variation levels. Configuration is mainly used at product-line level, product level, and component level. At the product-line level, it can be applied to select the components and the product-specific code. At the product level, the selected components are connected together. At the component level, the actual concrete implementations are selected for inclusion in the product. Configuration may also be used at sub-component level, if components have been designed as a collection of disjoint
sub-components. In this case, configuration management can be used to select the specific parts of the components. The same three levels (product-line level, product level, and component level) may also apply compiler directives and parametrization. At the product-line level, compiler directives can be used to remove unnecessary product-specific code. At the product level, both of the techniques can be used to connect components to each other. However, they allow only a static way of connecting components. Compiler directives and parametrization at sub-component level are not recommended because they may lead to dead code and complex code. Inheritance and extensions can be used at all the levels of variability. However, they are especially important at class level (code level). They provide a way to divide the source code into several files. Instead of inheritance, templates can usually be chosen. However, at sub-component level, templates are not always suitable. Usually more than one extension is allowed to be present in a system, and this is not technically possible when using templates.
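
Two of the mechanisms above, inheritance and extensions, can be sketched in a few lines. The class names echo the message-checking and alert examples used earlier; the design is illustrative only, in the spirit of the strategy pattern.

```python
# Inheritance: a component specialized from its superclass.
class Alert:                       # stable base functionality
    def ring(self):
        return "standard alert"

class PleasingAlert(Alert):        # specialized variant via inheritance
    def ring(self):
        return "pleasing alert"

# Extension: the user selects one behavioral variant at a variation point.
class MessageChecker:
    def __init__(self, alert):     # the variation point: any Alert variant
        self.alert = alert
    def on_duplicate(self):
        return self.alert.ring()

print(MessageChecker(PleasingAlert()).on_duplicate())  # pleasing alert
```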


Generation is best suited for the product level. At that level, code needs to be instrumented with product-specific code, and the components need to be connected to each other. Karhinen and Kuusela introduce a different division of variation mechanisms and their applicable levels [KK98]: Implementation configuration provides one design for all products. The design is very simple because variation is managed at the implementation level. At that level, conditional compilation and source code configuration management are applied. However, the more products the product line comprises, the more complex the implementation becomes. Customization supports variation by one universal product that can be customized, for example, for different customers. All possible components must be present in the design and implementation of the product. The active set of components is selected for each product. Modularization places variation in the structural elements of the design. The common part of
the design is a framework, and variants are produced by selecting existing and specifying new components. Design configuration handles variation at the design level. Each product has a different design. The management of different designs depends on the tool support for handling the dependencies between the products of the family.
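
The generation mechanism described above can also be illustrated with a toy generator that translates a small specification into source code for a component; the specification format and names are invented for the example.

```python
# Toy generator: turns a declarative specification into Python source
# for a component. The specification format is invented for the example.
def generate_component(spec):
    lines = [f"class {spec['name']}:"]
    for field_name in spec["fields"]:
        lines.append(f"    {field_name} = None")
    return "\n".join(lines)

source = generate_component({"name": "PrinterQueue", "fields": ["jobs", "owner"]})
print(source)
```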

Design and styles


Architectures consist of components and connections between these components. However, there are different alternatives for relating these components and connections to each other. These alternatives can be called architectural styles. Styles are associated with architectures in general, not especially with product-line architectures. However, architectural styles show the common aspects between different architectures. They can also be used to guide the design of an architecture. When designing an architecture, the existing architectural styles can be taken into account, and it can be considered which of them best suits the current situation.

Architectural styles


Architectural styles can also be called system patterns (cf. design patterns) [BCK98, p. 93] or architectural patterns [BMR+96]. An architectural style represents a collection of design decisions that have already been made and can be reused. It consists of a few key features and rules for combining those features so that architectural integrity is preserved. An architectural software style is determined by the following [BCK98, p. 94]: a set of component types performing some function at run-time (such as data repository, process, and procedure), a topological layout of the components according to their relationships at run-time, a set of semantic constraints (for example, a data repository is not allowed to change the values stored in it), a set of connectors providing communication, coordination, or cooperation among components (such as subroutine calls and data streams). Actually, a style defines a class of architectures. In other words, it is an abstraction for a set of actual architectures applying that style. We cannot usually find clear occurrences of particular styles in system designs; instead, the styles appear in slightly different forms. However, style catalogs are important because
they reveal when two identified styles are similar. They also inform about the situations in which a particular style can be applied. A small catalog of architectural styles is shown below [BCK98, SG96]: Data-centered architectures emphasize the integrability of data. They are appropriate for systems that describe the access and update of a widely accessed data store. Subtypes of these architectures are repository, database, hypertext, and blackboard architectures. Data-flow architectures emphasize reuse and modifiability. They are appropriate for systems that describe transformations on successive pieces of input data. Data enters the system and then flows through the components one at a time until some final destination is reached. Subtypes of these architectures are batch-sequential and pipe-and-filter architectures. Virtual machine architectures emphasize portability. They simulate functionality that is not native to the hardware or software on which it is implemented. They can, for example, simulate platforms that have not yet been built (such as new hardware)
or disaster modes that would be too complex or dangerous to test with the real system (such as flight and safety-critical systems). Examples of these architectures are interpreters, rule-based systems, and command language processors. Call-and-return architectures emphasize modifiability and scalability. They are the most general architectural styles in large software systems. Subtypes of these architectures are main-program-and-subroutine architectures, remote procedure calls, object-oriented systems, and hierarchically layered systems. Independent component architectures emphasize modifiability by separating the various parts of the computation. They consist of independent processes or objects that communicate through messages. They send data to each other but do not directly control each other. The messages can be passed to named receivers, or they can be broadcast such that interested participants pick up the messages. Subtypes of these architectures are communicating processes and event systems. Besides the above styles, Buschmann et al. introduce the following styles and the patterns to be exploited in each style [BMR+96]:
Distributed systems include one architectural pattern: broker. The broker pattern consists of decoupled components that interact by remote service invocations. A broker component coordinates communication by forwarding requests and by transmitting results and exceptions. Interactive systems include model-view-controller and presentation-abstraction-control as their patterns. Both of these patterns support human-computer interaction. The model part of the former contains the core functionality and data. Views display information to the user, while controllers handle user input. Views and controllers together form the user interface. The latter pattern provides a hierarchy of co-operating agents. Each agent manages a specific functional part of the system and consists of three components: presentation, abstraction, and control. With this division, human-computer interaction can be separated from the functional aspects of each agent and from the mutual communication of the agents. Adaptable systems
include two patterns: microkernel and reflection. These patterns help applications to extend and to adapt to evolving technology and changing requirements. The microkernel pattern supports adaptation to changing requirements by separating a minimal functional core from the extended functionality and customer-specific parts. The reflection pattern supports dynamic changes to software systems. In this pattern, an application is divided into two parts. The meta level provides information about selected system properties and makes the system self-aware. The base level includes the application logic. Architectural styles can be associated with architecture analysis and quality attributes (considered in Section 6). The resulting styles are called attribute-based architecture styles [KKB+99]. Architectural styles define the conditions under which they should be used. In addition to defining the components and connectors, attribute-based architecture styles include a quality-attribute-specific model that declares the behavior of the component types interacting with each other. For example, with pipe-and-filter architectures, it should be considered how performance is handled.
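
The pipe-and-filter style mentioned above can be sketched minimally: data flows through a sequence of filter components until a final destination. The filters here are trivial string transformations, chosen only for illustration.

```python
# Minimal pipe-and-filter: each filter transforms the data in turn.
def pipeline(data, filters):
    for f in filters:
        data = f(data)
    return data

result = pipeline("  hello  ", [str.strip, str.upper])
print(result)  # HELLO
```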


In addition, attention should be paid to the assumptions made by filters that affect their reuse.

Architecture design

Architecture design covers different aspects, some of which are considered elsewhere in this report. For example, Section 2 introduces scoping, and architecture assessment is included in Section 6. However, in this subsection, we describe the outline of the design process concerning product-line architectures. Bosch has considered the design of a product-line architecture [Bos00, pp. 189]. In his model, the design consists of the following steps: business case analysis, scoping, product and feature planning, and the actual design of the product-line architecture (consisting of functionality-based architectural design, architectural assessment, architecture transformation, component requirement specification, and validation).


Business case analysis makes sure that the software product-line approach will pay off. This phase also considers different ways to move to product-line-based production. Examples of these ways are revolutionary and evolutionary paths. Scoping uses the results of the business case analysis as a basis for selecting the products and product features which will be included in the product line. Scoping also defines which features are included in which products. Product and feature planning considers the characteristics of subsequent versions of the product-line architecture. As the architecture evolves, it becomes necessary to include new products and new features in the product-line architecture. Future inclusion is easier if these potential new products and features have been considered beforehand. Design of the product-line architecture is the main step of the process. The design process can be considered to consist of three steps: functionality-based architectural design, architectural assessment and architecture transformation. These steps are considered below. Functionality-based architectural design first defines the product context in
which the software architecture operates. In the case of product-line architectures, the contexts are not necessarily specified for the product line as a whole. Instead, single products of the same product line differ from each other according to their supported context. The next step in functionality-based architectural design is the identification and definition of the core abstractions of the product line. Correspondingly, the components are defined as instances of these abstractions. The final step in functionality-based architectural design verifies the suitability of the selected abstractions and the ability of the current architecture to represent all variations of the product. Architecture assessment evaluates the product-line architecture against its quality requirements. Architecture assessment techniques are, for example, scenarios, simulation and mathematical models (see Section 6). However, evaluating all the products of the family would be too expensive and time-consuming. Thus, we can concentrate on assessing those products that contain the critical features in the
product line. Alternatively, we can concentrate on evaluating extreme products such as the largest, the smallest, etc. Assessment also covers evaluating the capability of future inclusion of new features and products. This kind of evaluation tells about the evolvability and maintainability of the product-line architecture. Architecture transformation is concerned with improving the quality attributes of a software architecture. A product-line architecture should support three transformation aspects: variants, optionality, and conflicts. Most often, it is necessary to provide two or more solutions (variants) for subsets of the product line. For example, the layered architectural style supports variants because variants can be encapsulated as layers. In addition, many design patterns [GHJV95] (such as abstract factory, strategy, and mediator) support variants. Optionality means that for some products in the product line, certain components can be excluded. Transformations may be needed to reduce the dependency on optional components. However, accessing these components should be allowed when needed. For example, the blackboard architectural style supports optionality, since components depend


only on the blackboard, not on other processes. In addition, some design patterns (such as proxy and strategy) support optionality. The design considerations of the product-line architecture may reveal conflicts between the product-line architecture and the requirements of individual products. If a conflict is handled in the product-line architecture, some transformations are needed. Some design patterns (such as adapter, proxy, and mediator) may prove useful in resolving conflicts.

Component requirement specification is important in the design phase because the software architecture defines a set of components that implement the required behavior. When exploiting components, the software engineer has to know the functional and quality requirements of the components: which products use the components and how the components should be instantiated for each product.

Validation ensures that the product line supports the features defined during scoping, that it can be easily instantiated for each product, and that planned new features and products can be included later.
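As a hedged illustration of how a design pattern such as strategy can encapsulate a variant behind a stable interface, consider the following minimal Python sketch. The transmission example and all class names are invented for illustration; they are not taken from the surveyed methods.

```python
from abc import ABC, abstractmethod

# Hypothetical product-line component with two variants (strategy pattern).
class Transmission(ABC):
    @abstractmethod
    def shift(self, speed: float) -> int: ...

class ManualTransmission(Transmission):
    def shift(self, speed: float) -> int:
        return min(5, int(speed // 20) + 1)   # driver-style gear mapping

class AutomaticTransmission(Transmission):
    def shift(self, speed: float) -> int:
        return min(6, int(speed // 25) + 1)   # smoother automatic mapping

class Car:
    """Product skeleton: the variant is injected, so products of the
    product line differ only in configuration, not in this class."""
    def __init__(self, transmission: Transmission):
        self.transmission = transmission

    def gear_at(self, speed: float) -> int:
        return self.transmission.shift(speed)

sporty = Car(ManualTransmission())
family = Car(AutomaticTransmission())
print(sporty.gear_at(90))  # → 5
print(family.gear_at(90))  # → 4
```

The point of the sketch is that the conflict-prone variation is confined to the strategy objects, so the shared product-line code (`Car`) never changes per product.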

5 Modeling and description


Architectures are typically presented as boxes and lines connecting the boxes. However, it is not necessarily easy to understand the meaning of those items in an architectural description. The boxes can, for example, represent components of the system, programs, source code fragments, or logical groupings of functionality. Correspondingly, the lines or arrows (connectors) can, for example, represent compilation dependencies, data flow, control flow, calling relationships, or part-of relationships. All such items and relationships are hard to describe in a single figure.

There are different ways to describe an architecture, and for explicit modeling there are architectural description languages. As an alternative to these languages, UML (Unified Modeling Language) can also be used to describe an architecture. Architectural description is associated with architectural styles and design: architectural description methods are applied in the design process of the architecture. Architectural description is also concerned with variation. During the design process of a product-line architecture, it should be considered how to describe variation between products.

5.1 Different views of architectural description


Architectural description can be represented via several views, such as the 4+1 view model [Kru95]. Each of the first four views describes the system from a different viewpoint, while the fifth view illustrates and validates the earlier views. The five views are:

Logical view describes the functionality of the system provided for the end user. The abstractions of this view are derived from the problem domain. The logical view supports the object-oriented architectural style.

Process view describes the concurrency and synchronization aspects of the system. This view also takes into account performance, scalability, system integrity, and fault-tolerance. The process view supports several architectural styles, for example, the pipes-and-filters and client-server styles.

Development view describes the organization of the system into modules or subsystems as hierarchical layers. Each subsystem layer can be developed separately. Thus, the development view supports the layered architectural style.

Physical view describes the mapping of the system onto the hardware. Elements identified in the earlier views must be mapped onto different physical configurations, for example, according to different customers.

Scenarios tie the other views together and illustrate them with selected use cases or scenarios. Scenarios show how the other views work and fit together.

There are also different divisions of views. For example, Garlan and Sousa introduce four views: problem domain view, code view, run-time view, and deployment view [GS00].

5.2 Architectural description languages

Architectural descriptions can be specified in an explicit and precise way by using an architectural description language (ADL). These languages provide notations for representing architectural structures such as components, connectors, systems, properties, and styles. ADLs allow formal description of architectures. In addition, they support early analysis and feasibility testing of architectural design decisions [BCK98, p. 268]. The ARES project has set some requirements for ADLs [JRvdL00, pp. 35]:


Level of abstraction. Representing the architecture of a large and complex system usually needs abstractions. Architectural views can be considered a kind of abstraction. The semantics of components may also be complex, and thus require abstraction.

System construction in addition to system documentation. ADLs provide system documentation through different views. In addition, when the system changes, it would be useful to make the modification in the system description and to automatically propagate it to the system implementation.

Handling of variations within a product family. Handling of variations in ADLs is usually poorly supported. However, it would be desirable to represent the variability within a product family at an architectural level rather than at the program code level. Thus, an ADL should provide means to describe both the common architecture and the variable parts of each product.

Definition of dynamic structures. Most ADLs concentrate on representing static components and connectors. However, support for dynamic structures is needed for the run-time development of the system.

Multiple system views and attributes. It should be possible to make the relationships clear among different views and between the views and the basic architecture. However, it might not be sensible to show this information with the ADL itself; instead, the ADL would provide a framework for managing this kind of information.

Description of hierarchical and layered architectures. Hierarchical and layered architectures should be described at different levels of abstraction and detail. Similarly, components should be represented with various amounts of detail. This makes large systems easier to manage.

Tool support for architectural visualization and manipulation. In addition to textual representation, there should be ways to visualize the architecture in graphical form. Graphical representations support system comprehension, system navigation, and consistency management.

In addition to architectural description languages, there are architecture interchange languages such as ACME [GMW97]. These languages are meant to interchange architectural specifications across ADLs. For this purpose, ADLs should


have a common basis for architectural representation, including the following aspects: components; connectors; systems (configurations of components and connectors); ports (interaction points, i.e. the interface of a component); roles (interaction points, i.e. the interface of a connector); representations to model hierarchical compositions; and mappings from the internal architecture of a composite component or connector to the external interface.

Figure 7: A Koala component [vO98]

5.3 Examples of architectural description languages

This subsection introduces two architectural description languages: Koala and C2. More examples of such languages, together with their mutual comparison, are provided in [MT00].

5.3.1 Koala


Koala is meant to describe the software architectures of embedded systems. It provides means to describe components, interfaces, configurations, bindings, etc. [JRvdL00, vO98]. An example of a Koala component is presented in Figure 7. Components can communicate with their environments through interfaces. Koala divides interfaces into incoming and outgoing interfaces. A component provides functionality through incoming interfaces, and in order to do so it requires functionality through outgoing interfaces. In Figure 7, components are shown as rectangles and interfaces as small squares containing triangles. The direction of a triangle indicates the direction of the corresponding function call. Configurations are represented by connecting the interfaces of components. A requires-interface must always be bound to precisely one provides-interface, but a provides-interface may be bound to more than one (or zero) requires-interfaces. To enable the description of large systems, Koala provides a recursive component model. This means that any combination of components can again be viewed as a component with provides- and requires-interfaces. In addition, Koala provides means to describe diversity, as will be considered in Subsection 5.5.
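The binding rule above (each requires-interface bound to exactly one provides-interface, while a provides-interface may serve any number of clients) can be captured in a short Python sketch. This is an illustrative model with invented names, not Koala's own notation:

```python
# Minimal model of Koala's binding rule (hypothetical component names):
# a requires-interface binds to exactly one provides-interface;
# a provides-interface may be bound to zero or more requires-interfaces.
class Component:
    def __init__(self, name, provides=(), requires=()):
        self.name = name
        self.provides = set(provides)
        self.requires = set(requires)

class Configuration:
    def __init__(self):
        self.bindings = {}  # (component name, requires) -> (component name, provides)

    def bind(self, req_comp, req, prov_comp, prov):
        assert req in req_comp.requires and prov in prov_comp.provides
        key = (req_comp.name, req)
        if key in self.bindings:
            raise ValueError(f"{key} already bound")  # at most one binding per requires
        self.bindings[key] = (prov_comp.name, prov)

    def check(self, components):
        # Every requires-interface must be bound exactly once;
        # an empty result means the configuration is complete.
        return [(c.name, r) for c in components for r in c.requires
                if (c.name, r) not in self.bindings]

tuner = Component("Tuner", provides={"ITune"})
ui = Component("UI", requires={"ITune"})
cfg = Configuration()
cfg.bind(ui, "ITune", tuner, "ITune")
print(cfg.check([tuner, ui]))  # → []
```

Binding the same requires-interface twice raises an error, mirroring the rule that it must be bound to precisely one provides-interface.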


Figure 8: An example C2 architecture for a database application [RMRR98]

5.3.2 C2

C2 is a software architecture style for user-interface-intensive systems [MT97, MR99, RMRR98, TMA+96]. C2 SADL is an architectural description language for describing C2-style architectures. Thus, we use C2 to refer to the combination of C2 and C2 SADL. In a C2-style architecture, connectors transmit messages between components. Components, in turn, maintain state, perform operations, and exchange messages with each other via two interfaces, called top and bottom. Each interface defines a set of messages that may be sent and a set of messages that may be received. A message can be a request for a component to perform an operation, or a notification that the component has performed an operation or changed state. In the C2 style, components may not directly exchange messages; instead, they communicate via connectors. Each component interface may be attached to at


most one connector. A connector may be attached to any number of other components and connectors. Request messages may only be sent upward through the architecture, while notification messages may only be sent downward. An example of this situation is shown in Figure 8. The depicted system consists of four components (a database server, a window system, and two graphical user interfaces) and two connectors (the dark horizontal bars). When one of the user interfaces is used to request a modification, a request message is sent upward to the connector, and then to the database. When the database performs an operation, a notification message is sent to the connector and further to the GUI components.

5.4 Architectural description with UML

UML (Unified Modeling Language) can be used to describe an architecture [HNS99, MR99, RMRR98]. Although UML is meant to model the design of object-oriented systems, it can be used more widely because it supports describing various elements and the relations between them. Thus, it is applicable to describing software architectures. Actually, both ADLs and UML have advantages and disadvantages in describing


architectures [MR99, RMRR98]. Architectural description languages are special-purpose notations having a great deal of expressive power. However, they are not well integrated with common development methods. These more widely used methods, like UML, in turn, are accessible to developers, but they lack the semantics needed for extensive analysis. This subsection first introduces UML and after that shows how to apply UML to architectural description.

5.4.1 Introducing UML

A UML model of a software system consists of the following partial models [MR99, RMRR98]: classes and their declared attributes, operations, and relationships; the possible states and behavior of individual classes; packages of classes and their dependencies; example scenarios of system usage, including kinds of users and relationships between user tasks; the behavior of the overall system in the context of a usage scenario; examples of object instances with actual attributes and relationships in the context of a scenario; examples of the actual behavior of interacting instances in the context of a scenario; and the deployment and communication of software components on distributed hosts.

UML is an extensible language that allows adding new constructs without changing the existing syntax or semantics of the language. There are three mechanisms that allow such extension [MR99, RMRR98]: constraints place semantic restrictions on particular design elements; tagged values allow new attributes to be added to particular elements of the model; and stereotypes allow groups of constraints and tagged values to be given descriptive names and applied to other model elements.

5.4.2 Applying UML

To apply UML to describe an architecture, the architecture can be divided into four views [HNS99]: conceptual, module, execution, and code.


Note that the above division has much in common with Kruchten's 4+1 division [Kru95] presented in Subsection 5.1. The conceptual architecture view describes the architecture in terms of domain elements. Such elements are components, with ports enabling interactions, and connectors, with roles defining the relationships between the connectors and ports. The components and connectors are associated with each other to form a configuration. To join also the ports and roles in that configuration, their protocols must be compatible with each other. Components can be decomposed into other components and connectors. These elements, with their associated behavior and relationships, are shown in the upper part of Table 1. The conceptual architecture view uses the following UML features: UML class diagrams for showing the static configuration; UML sequence diagrams or state diagrams for showing the protocols connected to ports; and UML sequence diagrams for showing a particular sequence of interactions among a group of components.


The module architecture view describes the decomposition of the software and its organization into layers. Subsystems are decomposed into modules, and modules are related to layers according to their use-dependencies (see the second part of Table 1). There is no configuration for the module view because it only shows the relationships among modules, not the combination of modules in a particular product. The module architecture view uses the following features: tables for describing the mapping between the conceptual and module views; UML package diagrams for showing subsystem decomposition dependencies; UML class diagrams for showing use-dependencies between modules; and UML package diagrams for showing use-dependencies among layers and the assignment of modules to layers.

The execution architecture view is the run-time view of the system. It shows how modules are combined into a particular product by mapping modules to run-time images. The execution view also defines communication among modules and assigns modules to physical resources. The run-time images and communication


paths are associated with each other to form a configuration (see the third part of Table 1). The execution architecture view uses the following UML features: UML class diagrams for showing the static configuration; UML sequence diagrams for showing the dynamic behavior of a configuration, or the transition between configurations; and UML state diagrams or sequence diagrams for showing the protocol of a communication path.

The code architecture view contains files and directories. It maps the modules and interfaces of the module view to source files, and the run-time images of the execution view to executable files. The code view does not have a configuration. It defines relationships to be applied across all products, not just to a particular product. The elements and their relations are shown in the last part of Table 1. The code architecture view uses the following features: tables to describe the mapping between elements of the module and execution views and elements of the code view; and UML component diagrams for showing the dependencies among source, intermediate, and executable files.


Conceptual view
  Elements: component, port, connector, role
  Behavior: component functionality, port protocol, connector protocol
  Relations: component decomposition, port-role binding (for configuration)

Module view
  Elements: module, subsystem, layer
  Behavior: interface protocol
  Relations: module implements conceptual component, subsystem decomposition, module use-dependency

Execution view
  Elements: run-time image, module, communication path
  Behavior: communication protocol
  Relations: run-time image contains module, binding (for configuration)

Code view
  Elements: source, intermediate, executable, directory
  Behavior: (none)
  Relations: source implements module, source includes source, intermediate compiled from source, executable implements run-time image, executable linked from intermediate

Table 1: Elements of different architecture views [HNS99]

There are also other ways to describe architectures with UML. Garlan and Kompanek propose several alternative ways to describe each architectural concept in UML [GK00], and discuss the advantages and disadvantages of each. Medvidovic et al. show how UML can be used to describe C2-style architectures [MR99, RMRR98]. They apply the built-in extension mechanisms of UML to map architectural models expressed in the C2 style into object notations.

5.5 Variation modeling

An important aspect of product-line architectures is variation among products. However, variation is difficult to model in architectural descriptions. Although many models provide means to describe hierarchical systems (ways to decompose systems into smaller subsystems), they do not always support the description of variation. The model introduced by van den Hamer et al. distinguishes abstract components and component variants from each other [vdHvdLStS98]. An abstract


component can be one of several components. For example, the transmission of a car can be either manual or automatic. Thus, an abstract component can be considered a place-holder for one or more implementations (component variants) of the same basic idea or design. A particular component variant can have a known internal structure. Another variant of the same component can have a different structure consisting of either the same or different (abstract) components. The model of van den Hamer et al. is recursive: a component variant can itself be a family. This family may have a fixed structure defined with abstract components, but it achieves its diversity by supporting alternative variants for the lower-level components.

Describing variation is also possible in the Koala language [vO98] (introduced in Subsection 5.3.1). In Koala, variation is divided into internal diversity (within components) and structural diversity (between components). To enable diversity, components and configurations are separated from each other. Flexible mechanisms are needed to instantiate components and bind them into configurations.


Internal diversity is implemented via parametrization. In Koala, diversity parameters are declared as functions in requires-interfaces. Thus, their implementation lies outside of the component. Such requires-interfaces that contain diversity functions are called diversity interfaces. However, they can be used like normal requires-interfaces.
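As a rough sketch of internal diversity via a diversity interface, consider the following Python model. This is not Koala's own notation; the component and parameter names are invented for illustration. The key idea it shows is that the diversity values are supplied from outside the component, through something the component merely requires:

```python
# Sketch of Koala-style internal diversity (hypothetical names):
# diversity parameters are declared as functions in a requires-interface,
# so their implementation lies outside the component that uses them.
class DiversityInterface:
    """A requires-interface carrying diversity functions for one component."""
    def __init__(self, **params):
        self._params = params

    def __getattr__(self, name):
        # Look up a diversity function by name (falls back after normal lookup).
        return self._params[name]

class Display:
    def __init__(self, diversity):
        self.diversity = diversity  # bound at configuration time, not inside Display

    def describe(self):
        return f"{self.diversity.width()}x{self.diversity.height()} display"

# Two products of the family configure the same component differently.
tv = Display(DiversityInterface(width=lambda: 1920, height=lambda: 1080))
radio = Display(DiversityInterface(width=lambda: 128, height=lambda: 32))
print(tv.describe())     # → 1920x1080 display
print(radio.describe())  # → 128x32 display
```

The `Display` component never hard-codes its dimensions; each product's configuration decides them, which is the essence of parametrized internal diversity.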

Structural diversity is implemented with switches. A switch defines connections between interfaces. The top side of a switch must be connected to the tip of a triangle describing an interface, and its bottom side must be connected to the triangle base of a different interface. Switches are needed, for example, in the following situation. Suppose component A uses component B1 in one product and B2 in another. It would be possible to define two configurations to handle the implementation. However, A may be part of a complex compound component, and it is not desirable to duplicate the rest of it. Thus, a switch can be used between the requires-interface of A and the provides-interfaces of B1 and B2. In addition, Koala provides optional interfaces to handle diversity. A new version


of a component may differ from the existing ones such that it has different interfaces. Instead of introducing a new component with a new unique name, it is more reasonable to allow adding new (optional) interfaces into existing configurations.
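The switch mechanism described above can be sketched as a small Python model. The components B1 and B2 come from the example in the text; the product setting names are hypothetical, and this is an illustrative model rather than Koala syntax:

```python
# Sketch of a Koala-style switch (hypothetical names): component A's
# requires-interface is routed to B1's or B2's provides-interface depending
# on a diversity setting, so A itself is never duplicated per product.
class B1:
    def serve(self):
        return "B1 handled the call"

class B2:
    def serve(self):
        return "B2 handled the call"

class Switch:
    """Connects one requires-interface to one of several provides-interfaces."""
    def __init__(self, targets):
        self.targets = targets  # setting -> component providing the interface

    def route(self, setting):
        return self.targets[setting]

class A:
    def __init__(self, switch, setting):
        # The setting is supplied from outside, mirroring diversity interfaces.
        self.peer = switch.route(setting)

    def run(self):
        return self.peer.serve()

switch = Switch({"product1": B1(), "product2": B2()})
print(A(switch, "product1").run())  # → B1 handled the call
print(A(switch, "product2").run())  # → B2 handled the call
```

One definition of A serves both products; only the switch setting differs between configurations.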

Software Product Line Architecture Designs


1. Introduction
Software product lines (PL) are a well-known approach in the field of software engineering. Several methods have been published to address the problems of PL engineering. The methods diverge in terminology and application domains, and it is therefore difficult to find out the differences and similarities between them. Only a few attempts have been made to evaluate or compare product line architecture (PLA) design methods. Lopez-Herrejon and Batory propose a standard example case for evaluating product line methods. However, this example is very close to the implementation level and measures method features by performance benchmarking of the products. This kind of evaluation of product line methods is very limited, and a comparison covering the other aspects of PL methods is also required. A second example, a report on product line architectures, touches on all the aspects related to product lines, from assessment to domain engineering and testing. However, this report does not provide any comparisons concerning product line design methods either. A third attempt presents a covering survey of software architecture analysis methods; however, software architecture design methods are not considered. On


the basis of our studies, there are five methods answering the needs of product lines from the software architectural point of view. These are:
*COPA
*FAST
*FORM
*KobrA
*QADA

*COPA: COPA is a component-oriented but architecture-centric method that enables the development of software-intensive product families.
*FAST: FAST (Family-Oriented Abstraction, Specification and Translation) is a software development process focused on building families.
*FORM: FORM (Feature-Oriented Reuse Method) is a method for product line software engineering. The core of FORM lies in the analysis of domain features and the use of these features to develop reusable and adaptable domain artifacts. That is, FORM is a feature-oriented approach to product line architecture engineering.
*KobrA: KobrA denotes a practical method for component-based product line engineering with UML.
*QADA: QADA (Quality-driven Architecture Design and quality Analysis) is a product line architecture design method providing traceable product quality and design-time quality assessment.


2. Purpose
The purpose of this investigation was to study and compare the existing methods for the design of software product line architectures. The intention of this paper is not to provide an exhaustive survey of the area, but to present the state of the art of current PLA practices and to help others understand and contrast alternative approaches to product line design. Neither does this paper guide the selection of the right approach for PLA design, but it opens up a basis for the creation of such a decision tool. First, this paper provides background knowledge on architectural design methods and introduces a comparison framework for evaluating PLA design methods. Then, the five PLA design methods are briefly presented and compared against the framework. The most notable observations from the comparison close the paper.

3. Architecture Design
Architectural views have been the basis of a number of techniques developed and used during the last few years for describing architectures. It seems that the first of them was the "4+1 views" approach to software architecture. The four main views used are: the logical view, the process view, the physical view, and the development view.

Logical view: The logical view describes an object model.

Process view: The process view describes the design's concurrency and synchronization aspects.

Physical view: The physical view describes the mapping of the software onto the hardware, reflecting the distributed aspect of the system.

Development view: The development view describes the software's static organization in its development environment.

The +1 denotes the use-case view, consisting of scenarios that are used to illustrate the four views. A slightly modified version of the 4+1 view technique ends up with 3+1 views necessary to describe the software architecture; these are the logical view, the run-time view, and the development view, plus the scenario view. The 3+1 method applies the Unified Modeling Language (UML) as its architectural description language. Another technique defines four views (conceptual, module, execution and code views) that are based on observations made in practice in various domains, e.g. image and signal processing systems, a real-time operating system, communication systems, etc. Despite the fact that the techniques introduced above are capable and exhaustive in their own way, none of them concerns the product line approach to architectural design.

4. PLA design methods


4.1. COPA
A Component-Oriented Platform Architecting Method for Families of Software-Intensive Electronic Products (COPA) is being developed at the Philips Research Labs. The COPA method is one of the results of the


Gaudi project. The ambition of the Gaudi project is to make the art and emerging methodology of system architecture more accessible, and to transfer this know-how and these skills to a new generation of system architects. The specific goal of the COPA method is to achieve the best possible fit between business, architecture, process and organization. This goal is reflected in the middle name of the COPA method. The specific goal of architecture design is to find a balance between component-based and architecture-centric approaches, wherein the component-based approach is a bottom-up approach relying on composition, and the architecture-centric approach is a top-down approach relying on variation.

COPA covers the following aspects of product lines: business, architecture, process and organizational aspects. Herein, our evaluation concentrates on the architecture and process aspects. The application domains of the COPA method are telecommunication infrastructure systems and the medical domain. Within these domains, COPA assists in building product populations. Product populations denote the large-scale diversity in a product family developed with component-driven, bottom-up software development using, as much as possible, available software to create products within an organization.

Originally, the COPA method starts by analyzing the customer needs. To be more specific, the inputs of the method's architecting phase are facts, stakeholder expectations, (existing) architecture(s) and the architects' intuition. The completely applied COPA method produces the final products. COPA is an extensive method targeted at all interest groups of a software company. In particular, the architecture stakeholders of the COPA method are the customers, suppliers, business managers and engineers, and the multi-view architecting is addressed to these four main stakeholders. The motivation to use COPA is its promise to manage size and complexity, obtain high quality, manage diversity and reduce lead time.


4.2. FAST
David Weiss introduced a practical, family-oriented software production process in the early 1990s. The process is known as the Family-Oriented Abstraction, Specification, and Translation (FAST) process. At the time of the writing of the book on FAST (1999), the process was in use at Lucent Technologies and its evolution was continuing. The FAST process is an alternative to the traditional software development process. It is applicable wherever an organization creates multiple versions of a product that share significant common attributes, such as common behavior, common interfaces, or common code. The specific goal of FAST is to make the software engineering process more efficient by reducing multiple tasks, by decreasing production costs, and by shortening the time to market. Considering the product line aspects, the FAST method defines a full product line engineering process with activities and artifacts. FAST divides the product line process into three sub-processes: domain qualification, domain engineering and application engineering. Domain engineering creates the feature model, the reference architecture, and reusable components as its output.

4.3. FORM
FORM is targeted at the wide spectrum of domain and application engineering, including the development of reusable architectures and code components. It has been used in software engineering in many industrial settings. The model that captures commonalities and differences is called a feature model. The use of features is motivated by the fact that customers and engineers often speak of product characteristics in terms of the features the product has and/or delivers. Features are abstractions that both customers and developers understand, and they should be first-class objects in software development.
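A feature model of the kind FORM builds can be sketched minimally in Python. The phone example and the mandatory/optional/alternative feature kinds are a simplified, hypothetical illustration of the general idea, not FORM's actual notation:

```python
# Hypothetical sketch of a feature model: mandatory, optional and
# alternative features, plus a validity check for one product's selection.
MANDATORY, OPTIONAL, ALTERNATIVE = "mandatory", "optional", "alternative"

class Feature:
    def __init__(self, name, kind=MANDATORY, children=()):
        self.name, self.kind, self.children = name, kind, list(children)

def valid(feature, selection):
    """Check that `selection` (a set of feature names) respects the model."""
    if feature.name not in selection:
        # A missing feature is fine only if it is optional or one alternative.
        return feature.kind in (OPTIONAL, ALTERNATIVE)
    alts = [c for c in feature.children if c.kind == ALTERNATIVE]
    # Exactly one alternative child may be chosen when alternatives exist.
    if alts and sum(c.name in selection for c in alts) != 1:
        return False
    return all(valid(c, selection) for c in feature.children)

phone = Feature("phone", children=[
    Feature("calls"),                       # mandatory in every product
    Feature("camera", kind=OPTIONAL),
    Feature("lcd", kind=ALTERNATIVE),
    Feature("oled", kind=ALTERNATIVE),
])

print(valid(phone, {"phone", "calls", "lcd"}))          # → True
print(valid(phone, {"phone", "calls", "lcd", "oled"}))  # → False
```

The check makes the feature model a first-class object: deciding whether a proposed product belongs to the family becomes a computation over the model rather than a manual review.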

4.4. KobrA
Fraunhofer IESE has been developing the KobrA method, a methodology for modeling architectures. KobrA denotes itself as a


component-based incremental product line development approach or a methodology for modeling architectures. It is also designed to be suitable for both single-system and family-based approaches to software development. In addition, the approach can be viewed as a method that supports a Model Driven Architecture (MDA) approach to software development, in which the essence of a system's architecture is described independently of the platform. Another important goal is to be as concrete and prescriptive as possible and to make a clear distinction between products and processes.

KobrA defines a full product line engineering process with activities and artifacts. The most important parts of PL engineering are framework engineering and application engineering with their sub-steps, but KobrA also defines the implementation, releasing, inspection and testing aspects of the product line engineering process. KobrA was developed for the information systems domain. KobrA can be customized to better fit the needs of a specific project; the method provides support for being changed in terms of its processes and products. In addition to the application domain, the factors influencing the KobrA method are the organizational context, the project structure and the goals of the project.

Framework engineering starts from the very beginning of software development: context realization. Framework engineering does not need any other input than the idea of a new framework with two or more applications. The other main activity of the method, application engineering, starts when a customer contacts the software development organization. When such an expression of interest is received, an application engineering project is set up and the context realization instantiation is initiated. This activity corresponds to the elicitation of user requirements within the scope of the framework. KobrA states that it is a simple, systematic, scalable and practical method.
Simple here means that a method is as economical as possible with its concepts and that the features in a method should be as orthogonal as possible. In addition, a method should separate concerns to the greatest extent possible. Systematic expects that the concepts and guidelines defined in the method are precise and unambiguous. Also, a method should tell developers what they should do, rather than what they may do. Another feature of the method, that the products of a method are strictly separated from the process, also serves in reaching a systematic method. A scalable method provides two aspects of scalability: granularity scalability and complexity scalability. The first means that a method should be able to accommodate large-scale and small-scale problems in the same manner, using the same basic set of concepts, whereas fulfillment of the second refers to incremental application of the method's concepts. Practicality requires that a method is compatible with as many commonly used implementation and middleware technologies as possible, particularly those that are either de facto or de jure standards.

4.5. QADA
The QADA method is being developed at VTT, the Technical Research Centre of Finland. QADA is an abbreviation for Quality-driven Architecture Design and quality Analysis, a method for both designing and evaluating the software architecture of service-oriented systems. QADA claims to be a quality-driven architecture design method: quality requirements are the driving force when selecting software structures, and each viewpoint concerns certain quality attributes. Architecture design is combined with quality analysis, which discovers whether the designed architecture meets the quality requirements set at the very beginning. The QADA method describes the architectural design part of the software development process, including the steps and the artifacts produced in each step. It also covers the description language used in the artifacts. It does not cover organizational or business aspects.

Quality-driven design is aimed at the middleware and service architecture domains. The case studies cover the design of a distributed service platform, two kinds of platform services for wireless multimedia applications, and the design of a wireless multimedia game, as well as a recent case study on a traffic information management system. Quality analysis has been applied to a middleware platform, a spectrometer controller and terminal software. The method starts with a requirements engineering phase that, even though called requirements engineering, represents a link between requirements engineering and software architecture design. The aim is to collect the driving ideas of the system and the technical properties according to which the system is to be designed. In addition to functional properties, the quality requirements and constraints of the system are captured as input. The output of the QADA method is twofold: design and analysis. Design covers software architecture at two abstraction levels: conceptual and concrete. The conceptual architecture covers the conceptual components, relationships and responsibilities, and is intended to be used by certain high-level stakeholders related to the product line, e.g. product line architects or management. The concrete architecture is closer to the so-called traditional architecture description aimed at software engineers and designers. The QADA method does not produce implementation artifacts. Analysis provides valuable information concerning the quality of the design: it results in feedback on whether the design addresses the quality requirements defined for the system, and may also produce quality feedback about an existing system. The method users are product line architects and software architects, or an architecting team. However, the group of stakeholders that use the method's output is much wider. At the conceptual level, the stakeholders include system architects, service developers, product architects and developers, maintainers, component designers, service users, project managers and component acquisition, whereas at the concrete level the architectural descriptions are aimed at component designers, service developers, product developers, testing engineers, integrators, maintainers and asset managers. These groups continue by implementing, testing or maintaining the architecture that is designed. QADA claims, as do almost all the methods, to be a systematic method that is simple to learn. In addition, it is applicable with existing modeling tools. The architecture modeling method also improves communication among the various stakeholders and conforms to the IEEE standard for architectural description.

5. Comparison Results
5.1. Context

Each of the methods under evaluation is distinguished by the specific goal it has. All the methods share the same overall goal, i.e. to produce product line architectures. To find a difference, however, the specific goal denotes which point(s) the method emphasizes in PLA development. Although both COPA and KobrA, for example, are component based, the COPA method stands out by combining the component-based (i.e. bottom-up) and architecture-centric (top-down) approaches in a novel way. Another top-down approach, in addition to COPA, is the QADA method. QADA, however, has a diverging goal in combining the quality-driven approach with the architecture-centric one. The FAST method describes itself as a process-driven method, and finally, the FORM method represents the well-known feature-orientation to product line engineering. As a feature-oriented approach, FORM states that it also covers requirements engineering. The commonality analysis in the FAST method covers the requirements phase extensively. The other methods seem to step aside in this area, except that the QADA method represents an interface between requirements engineering and architecture design; however, this interface cannot be considered a systematic approach to gathering and analyzing product requirements. In addition to requirements engineering, the FORM method covers architecture, implementation and process, as does the KobrA method. As for the other methods, the COPA method is the most complete, covering all the aspects of a product line, whereas FAST captures only the process aspect and QADA extends its scope from process aspects to architectural aspects. The information systems domain is the most popular application domain, with three methods altogether: KobrA (library system), FORM (electronic bulletin board) and QADA (traffic information management system).
In addition to the information systems domain, QADA has been applied to middleware, the wireless multimedia domain and the space application domain. In addition to the electronic bulletin board system, the FORM method has been applied to an elevator control system and a telecommunication infrastructure system. The telecommunication infrastructure domain has also been the application domain of COPA and FAST. The FAST method has been applied to the domain of real-time systems as well. Quite apart from that, the COPA method alone among the methods extends to the medical domain. COPA case studies in the consumer electronics domain are also discussed in the literature. All the methods start from the very beginning, taking context or user requirements as input. Considering the method outputs, all the methods seem to produce quite in-depth outputs, generating results that are close to the implementation. COPA also takes a broader view of the issue by considering the business and organizational aspects. KobrA defines the process as far as the implementation and testing phases of the software product. Furthermore, the QADA method is distinguished by output information concerning the quality of the design.

5.2. User
The users of the method are either people who actually use the method, i.e. follow the steps and create the defined artifacts, or people who benefit from and use the outputs of the method. The methods seem to agree on a rough division of the stakeholder groups related to product line engineering: engineers, architects, business managers and customers. To make a distinction, KobrA is perhaps the most practical method, aimed at software engineers and designers currently working in industry. It is a simple method for developing software, and adopting it probably does not pose overwhelming challenges for software practitioners today. The conformance to a language standard (UML) and the use of commercial tools emphasize the practicality and applicability of the KobrA method. Quite the contrary, one may say that FORM is aimed at an academic audience. As for motivation, adopting any of these product line architecture design methods provides several benefits, e.g. reuse, complexity management, higher quality and shorter time-to-market. However, these benefits do not motivate the actual method users (software architects) as much as the following implicit reasons. Both KobrA and QADA are developed with the goal of producing a simple and systematic method. They also conform to commonly known standards: UML (KobrA), MDA (KobrA and QADA) and IEEE-Std-1471-2000 (QADA). With an industry-proven background, COPA is a practical method, and with extensive architectural descriptions it improves communication among the various stakeholders of PL engineering. Likewise, the feature-orientation of FORM gives a common language and therefore improves communication between customers and engineers. Considering the question of what skills the method users need when applying a method, the following issues were concluded. One of the essential method properties is the method language. Two of the methods have a special notation language or ADL to learn (see the FORM notation and COPA's Koala), and most of the methods apply UML as the description language. However, current commercial UML tools do not provide sufficient customization for the needs of architectural descriptions, and therefore every one of the methods needs special or extended tool support. This scales up the effort needed to learn a method. Furthermore, each method has its own ideology that must be learned; however, the only skill needed for this purpose is an open mind. All the methods provide descriptive case studies. In addition, FORM provides a special guideline for using a feature-oriented approach. COPA and QADA suffer from a lack of method documentation, whereas FAST and KobrA are captured in extensive manuals.

5.3. Contents
FORM, FAST and KobrA define quite similar structures for the method. The basic idea is to first define the context of the system. After that, the two main phases are (1) domain engineering and (2) application engineering. Domain engineering, also called product family engineering or framework engineering, analyzes the commonalities and variabilities among the requirements and defines the domain architecture or a component framework. Application engineering instantiates the architectural model from the domain architecture and produces the application realization. In addition to these two main phases, the COPA method introduces a third phase called platform engineering. Platform engineering focuses on the development, maintenance and administration of the reusable assets within the platform. Therefore, platform engineering is essentially a sub-phase derived from domain engineering. In contrast, the steps defined in the QADA method diverge. First, an interface is defined to requirements engineering, which is somewhat comparable to context analysis. Design, however, is divided into the two phases of conceptual and concrete architecture design. After both design phases, QADA introduces a quality evaluation phase that assesses the quality of the architectural design against the defined quality attributes. FORM and FAST explicitly define support for variability in requirements elicitation, whereas the other methods do not. In addition, through tool support, FORM provides automatic transformation from the requirements to an instance of the domain architecture. The other methods concentrate on capturing variability with a graphical language in architectural design. QADA and KobrA content themselves with adapted UML and manual transformation to code, whereas COPA has developed its own language and tools to represent variability and to transform component descriptions automatically into code skeletons. FAST does not define explicit tool support. Instead, the Process and Artifact State Transition Abstraction (PASTA) process modeling tool of FAST serves to explain FAST in more detail, to help the user improve FAST and to help the user develop automated support for FAST. By contrast, the FORM method has a single tool, ASADAL.

5.4. Validation
All of the methods have been validated in practical industrial case studies. The COPA method was born in industry and therefore, perhaps, has the strongest industrial experience, with software applications in large product families. Most of the methods, i.e. FORM, FAST, COPA and KobrA, ensure quality attributes with non-architectural evaluation methods such as model checking, inspections and testing. Although KobrA also proposes scenario-based architecture evaluation (SAAM) for ensuring maintainability, none of these methods defines an explicit way to validate the output of domain or application engineering. In contrast, the QADA method has an exceptional way of evaluating software architecture designs before implementation: the quality of the design is validated with a scenario-based evaluation method in two phases, conceptual and concrete.

6. Conclusions


This study has compared five methods for product line architectural design (COPA, FAST, FORM, KobrA and QADA) according to a specially developed question framework. The comparison largely rested on the available literature. Based on the combined experience of the five product line engineering methods, the most important conclusions were as follows. The methods do not seem to compete with each other, because each of them has a special goal or ideology, which it highlights and follows throughout the method description.

COPA. Concentrated on balancing between top-down and bottom-up approaches and on covering all the aspects of product line engineering, i.e. architecture, process, business and organization.

FAST. A family-oriented process description with activities, artifacts and roles. It is therefore very adaptable, but not applicable as-is.

FORM. A feature-oriented method for capturing commonality inside a domain, extended also to cover architectural design and the development of code assets.

KobrA. A practical, simple method for traditional component-based software engineering with UML. It adapts to both single-system and family development.

QADA. Concentrated on architectural design according to quality requirements. It provides support for the parallel quality assessment of product line software architectures.

The most popular domains for applying the methods have been the information systems and telecommunication (infrastructure) domains, with six case studies published altogether. However, the real-time, wireless services, middleware, medical systems and consumer electronics domains have also been on trial. All the methods agree that none of the available commercial tools, alone and/or without extensions, supports product line architectural design. Therefore, special tools or tool extensions have been developed to form a set of tools; in this way, the product line methods may have full, practical tool support. There are no de jure standards available for product line architecture development. KobrA and QADA apply other software standards, namely OMG MDA, UML and IEEE Std-1471-2000, which provide support for formalizing PLA design. The aim of this study was to provide a comparative analysis of, and overview on, the PLA engineering methods. In addition, this study may provide a basis for developing a decision tool for selecting an appropriate PLA engineering practice. Meanwhile, becoming familiar with all the approaches before embarking on a suitable PLA development method is recommended.

Measuring Product Line Architecture


In recent years, the focus of the software engineering community has shifted from programming stand-alone applications to developing component-based application families. Various technical challenges exist in this domain, such as the need to represent family members; to express and capture commonality, variability, and incompleteness; and to incorporate domain knowledge while populating generic, reference architectures [1]. In addition to technical challenges, organizations face strategic, financial, and human-factors challenges that make it difficult to initially adopt product line families within an organization. However, if properly deployed, large-scale reuse results in numerous rewards, including reduced costs and risks, higher reuse and predictability, better performance modeling, and more effective communication between stakeholders. Initial evidence shows that these benefits far outweigh the initial cost, and many organizations are beginning to leverage product-line architectures (PLAs) as a basis for software component reuse.

In many areas of software engineering, the use of metrics has proven helpful in assessing particular situations. Metrics help us learn from the past, evaluate the present, and sometimes even predict the future. They provide condensed information about the current state of a system or process, track progress towards goals, and provide measurements that form a basis for guiding stakeholders in comparative decision making. However, the current set of metrics as defined in the literature (e.g., cohesion [6], cyclomatic complexity [3], fan-in/fan-out [4], and depth and node density [5]) is very much focused on the object level and cannot be directly applied to product line architectures. Specifically, product line architectures exhibit several unique features that make direct use of the above metrics impossible, namely hierarchical composition, optionality, and variability. In essence, these metrics assume a static set of interfaces and are thus not equipped to support the diversity that is present in PLAs. For example, if we want to calculate the fan-in and fan-out of a particular component in a product family architecture, we would not know what to do with the PLA's optional or variant components to which the component in question is connected. Our work focuses on PLA-level metrics that support the hierarchical, incomplete, and diverse nature of PLAs to guide architectural decisions during system evolution. In particular, we provide an incremental set of metrics that allow an architect to make informed decisions about the usage levels of architectural components, the cohesiveness of the components, and the validity of product family architectures. Although we are only in the beginning stages of our investigation into these metrics, we believe that they (and their future refinements) show promise for applicability in the product line architecture domain.


The remainder of the paper is organized as follows. We first provide a short overview of the relevant concepts in product family architectures in Section 2. We then introduce our proposed metrics in Section 3 and conclude in Section 4 with an outlook on our future work.

Overview of PLA Concepts


Certain properties are shared across PLAs. The first is partiality: PLAs must have a partial representation to support the commonality of a product family, yet provide enough flexibility for family members to satisfy different requirements or accommodate future requirements arising from either internal organizational strategies or external market forces. The second property is diversity: components in a PLA can be mandatory (they are part of the generic architecture), optional (their existence is not guaranteed, but dependent on some context), or variant (different algorithmic implementations, different interfaces, or different platform dependencies). The final property is compositionality. For example, the architecture in Figure 1 is composed of three complex components (each of which is, in turn, hierarchically composed of primitive components). Complex components CC1 and CC3 are mandatory, whereas CC2 is optional. Additionally, CC1 comprises three primitive components: PC12 is mandatory, PC11 is variant, and PC13 is optional.

Fig. 1. Example product line architecture. Unshaded boxes represent mandatory components; lightly shaded boxes represent optional components; darkly shaded boxes represent variant components.

In order to make the analysis of complex product line systems such as the one depicted in Figure 1 more efficient, specific techniques must be provided. One such technique that we have exploited extensively in developing PLA metrics is type checking. Type checking stipulates that a service (i.e., an operation with its accompanying interfaces) provided by one component will satisfy a service required by another component if and only if their interfaces and behaviors match, as defined in [7].


To illustrate how architectural type checking works, let us assume that complex component CC1 in Figure 1 is part of a logistics system for routing incoming cargo to a set of warehouses. Its constituent component PC11 is a variant clock component that provides time measurement to the other components, PC12 models delivery ports, and PC13 is an optional component that models vehicles. Each component has a set of provided and a set of required services, denoted in Table 1 by Pi and Ri, respectively. For brevity, we have omitted the component service behavior specifications from Table 1; see [7] for an example of a complete service specification. When performing a type check of the specified architecture, the required services of PC12 and PC13 (R1 and R2, respectively) are matched to the provided services of all components along their communication links. In this case, the only component along PC12's and PC13's communication links is PC11, and an attempt is made to match PC11's provided services with PC12's and PC13's required services. PC11's provided service P1 matches both the R1 and R2 required services.

Table 1. Component services.

Component | Provided Interface Elements                         | Required Interface Elements
----------|-----------------------------------------------------|----------------------------
PC11      | P1: Tick ()                                         |
          | P2: setClockSpeed (rate: Integer)                   |
PC12      | P3: newShipment (port: PortID; shp: ShipmentType)   | R1: Tick ()
          | P4: unloadShipment (port: PortID; shp: Integer)     |
          | P5: getDeliveryPorts ()                             |
PC13      | P6: addShipment (veh: VehicleID; shp: ShipmentType) | R2: Tick ()
          | P7: unloadShipment (veh: VehicleID)                 |
          | P8: getVehicles (): set of VehicleType              |

It should be noted that architectural type checking is not an all-or-nothing proposition; rather, architecture-level interoperability is a point on a spectrum in which the highest degree of interoperability is achieved when every service required by every component is provided by some other component(s) along its communication links. This issue is discussed further in Section 3 below. Type checking has provided a necessary, though not sufficient, basis for developing the PLA metrics discussed below.
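The service matching described above can be sketched in code. The following is a simplified, hypothetical model: components are represented as dictionaries of provided and required service names, and a required service is considered satisfied when a neighbor along a communication link provides a service with a matching name. Real architectural type checking (per reference [7]) also matches interface signatures and behavior specifications, which this sketch omits.

```python
# Hypothetical sketch of architectural type checking by service-name matching.
# Matching on names only is an assumption of this sketch; the full technique
# also compares interfaces and behaviors.

def satisfied_requirements(component, neighbors):
    """Return the subset of a component's required services that some
    neighbor along its communication links provides."""
    provided = set()
    for n in neighbors:
        provided |= n["provides"]
    return component["requires"] & provided

# Components from Table 1, with signatures reduced to service names.
pc11 = {"name": "PC11", "provides": {"Tick", "setClockSpeed"}, "requires": set()}
pc12 = {"name": "PC12",
        "provides": {"newShipment", "unloadShipment", "getDeliveryPorts"},
        "requires": {"Tick"}}
pc13 = {"name": "PC13",
        "provides": {"addShipment", "unloadShipment", "getVehicles"},
        "requires": {"Tick"}}

# PC11 is the only component along PC12's and PC13's communication links,
# so its provided Tick service satisfies both R1 and R2.
print(satisfied_requirements(pc12, [pc11]))  # {'Tick'}
print(satisfied_requirements(pc13, [pc11]))  # {'Tick'}
```

A degree of interoperability on the spectrum mentioned above could then be computed as the fraction of all required services that this function reports as satisfied.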

Proposed PLA Metrics


Based upon a preliminary examination, we have defined a number of initial metrics that we believe will form the basis for more advanced metrics in the domain of product family architectures. Below, we discuss these metrics, their derivation, and their meaning.

3.1 Primitive Components

The basic building blocks of our metrics are the Required Service Utilization (RSU) and the Provided Service Utilization (PSU). These two metrics are context-dependent: a given component will have different RSU and PSU measures depending on the architecture of which it is a part. The RSU is defined, per basic component, as the proportion of its required services that are satisfied by other basic components within a complex component. The PSU is defined, per basic component, as the proportion of its provided services that are used by other basic components within a complex component. For example, in the architecture of Figure 1, the RSU and PSU of component PC11 are 0 and 0.5, respectively. Similarly, the RSU and PSU of component PC12 are 1 and 0, respectively. The RSU and PSU for the other components can be computed in a similar fashion. Both the RSU and PSU have a well-defined meaning. The RSU defines a satisfaction rate: the closer the RSU is to 1, the more of the services required by a basic component are actually provided by the other basic components in the architecture. In fact, for a basic component that is fully contained within a complex component, the RSU should ideally be 1. The PSU, on the other hand, defines the utilization rate of the provided services of a basic component: the closer the PSU is to 1, the more of the functionality provided by a basic component is actually used by the other basic components. Note that, contrary to the RSU, full containment within a complex component does not necessarily lead to a PSU of 1: some services may be provided that are never used. The PSU, thus, can also be used in an inverse manner: the closer the PSU is to 0, the more bloated a basic component is. In such a case, a basic component carries with it a lot of extra functionality that makes it more heavyweight than required by the other basic components in the given architecture. Note that the RSU and PSU are related to, but different from, the concepts of fan-in and fan-out. Whereas fan-in and fan-out provide absolute numbers of connections, the RSU and PSU define a satisfaction rate and a utilization rate. This provides a slightly more useful metric, since the context in which the connections are made is taken into account.
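The RSU and PSU definitions above can be sketched as two small functions. This is a hypothetical illustration: required services are represented here by the names of the provided services they match (R1 and R2 both match P1), and an RSU of 0 is assumed for a component with no required services, which is how PC11's RSU of 0 in the text can be read.

```python
def rsu(component, siblings):
    """Required Service Utilization: fraction of the component's required
    services that are provided by sibling components within the same
    complex component. Defined as 0 when nothing is required (assumption)."""
    req = component["requires"]
    if not req:
        return 0.0
    provided = set().union(*(s["provides"] for s in siblings))
    return len(req & provided) / len(req)

def psu(component, siblings):
    """Provided Service Utilization: fraction of the component's provided
    services that some sibling actually requires."""
    prov = component["provides"]
    if not prov:
        return 0.0
    required = set().union(*(s["requires"] for s in siblings))
    return len(prov & required) / len(prov)

# Components of CC1, with services reduced to names (P1..P8); the required
# services R1 and R2 are recorded as "P1", the provided service they match.
pc11 = {"provides": {"P1", "P2"}, "requires": set()}
pc12 = {"provides": {"P3", "P4", "P5"}, "requires": {"P1"}}  # R1 matches P1
pc13 = {"provides": {"P6", "P7", "P8"}, "requires": {"P1"}}  # R2 matches P1

print(rsu(pc11, [pc12, pc13]), psu(pc11, [pc12, pc13]))  # 0.0 0.5
print(rsu(pc12, [pc11, pc13]), psu(pc12, [pc11, pc13]))  # 1.0 0.0
```

The printed values reproduce the example figures given above: PC11's P1 is used while P2 is not (PSU 0.5), and PC12's single requirement is satisfied while none of its provided services are used internally (RSU 1, PSU 0).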

3.2 Complex Components

The hierarchical nature of a product family architecture complicates the nature of the RSU and PSU as we go up the complexity hierarchy. Because different architectural styles and architecture description languages define different rules for service propagation from lower-level components to higher-level components (some may prescribe that all provided and required services are propagated, others that only leftover services are propagated, and yet others that the provided and required services of higher-level components are explicitly defined), the relationship between the provided and required services of a complex component and those of its constituent (basic and complex) components is unclear. Nonetheless, the RSU and PSU of higher-level components also provide useful information to a system-level architect. We define the RSU and PSU measures for a complex component in exactly the same way as for a basic component, with the observation that the actual propagation of services from lower-level components to higher-level components depends on the particular applicable rules. In our example, we only propagate leftover services: those that are not satisfied within the complex component. The meaning of the RSU and PSU for complex components remains the same: the RSU defines the satisfaction rate of a component and the PSU defines the utilization rate.
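The "leftover" propagation rule used in the example can be sketched as follows. This is a simplified, hypothetical model: only services that are not satisfied or used internally are promoted to the complex component's interface; other styles would propagate everything or define the interface explicitly.

```python
def propagate_leftovers(subcomponents):
    """Compute a complex component's interface under the leftover
    propagation rule: only services not satisfied/used internally are
    propagated upward (a sketch; name matching only)."""
    all_prov = set().union(*(c["provides"] for c in subcomponents))
    all_req = set().union(*(c["requires"] for c in subcomponents))
    return {
        "provides": all_prov - all_req,  # provided but unused internally
        "requires": all_req - all_prov,  # required but unsatisfied internally
    }

pc11 = {"provides": {"P1", "P2"}, "requires": set()}
pc12 = {"provides": {"P3", "P4", "P5"}, "requires": {"P1"}}
pc13 = {"provides": {"P6", "P7", "P8"}, "requires": {"P1"}}

cc1 = propagate_leftovers([pc11, pc12, pc13])
print(sorted(cc1["provides"]))  # ['P2', 'P3', 'P4', 'P5', 'P6', 'P7', 'P8']
print(cc1["requires"])          # set()
```

In this sketch P1 is consumed inside CC1 and does not appear in CC1's interface, while the unused and unsatisfied services bubble up for the RSU/PSU computation at the next level.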

3.3 Average per Complex Component

The average RSU and average PSU per higher-level complex component are another set of useful measures. They are calculated by averaging the RSU and PSU of the components within a complex component. For example, the average RSU and PSU for component CC1 are 0.67 and 0.125, respectively. These two metrics can be utilized to assess the cohesion of a complex component: the closer both the average RSU and the average PSU are to 1, the more self-contained, and thus cohesive, the component is. The average RSU and PSU can also be used to classify the nature of a component. If the average RSU is high and the average PSU is low, the component is a service component at the next higher level, since it provides many services that are not used internally. If the average RSU is low and the average PSU is high, the component is a driver component, since it requires many services that are not provided internally. If both the average RSU and the average PSU are high, the component is typically a transformational component: services are translated to and from each other. Finally, if both the average RSU and the average PSU are low, this is typically a marginal component, i.e., one that serves a very limited function in the system. Such a component should be closely examined for its usefulness and its potential for absorption into another component.
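The four-way classification above can be expressed as a simple quadrant function. The 0.5 cut-off between "high" and "low" is an assumption introduced for this sketch; the text itself does not fix a threshold.

```python
def classify(avg_rsu, avg_psu, threshold=0.5):
    """Classify a complex component by its average RSU/PSU, following the
    four categories in the text. The 0.5 threshold is an assumption."""
    high_rsu = avg_rsu >= threshold
    high_psu = avg_psu >= threshold
    if high_rsu and not high_psu:
        return "service component"        # provides much that is unused internally
    if not high_rsu and high_psu:
        return "driver component"         # requires much that is unprovided internally
    if high_rsu and high_psu:
        return "transformational component"
    return "marginal component"           # candidate for absorption elsewhere

# CC1 from the example: average RSU 0.67, average PSU 0.125.
print(classify(0.67, 0.125))  # service component
```

Under this (assumed) threshold, CC1's averages would mark it as a service component, consistent with its many externally visible, internally unused provided services.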

3.4 Average per Product Family Architecture

The second and third complications of a product family architecture are its inherent abilities to capture optionality and variability. For our metrics, this poses a challenge: when a component is optional, the RSU and PSU of other components (and of the optional component itself!) depend on whether the optional component is included in the architecture or not. As such, the RSU and PSU of a complex component or architecture have to be calculated, per the above, after the selection process of optional and variant components has taken place. However, another useful metric is the average RSU and PSU per product family architecture. These metrics are calculated as follows: enumerate all possible configurations that a product family architecture may exhibit and calculate the average of the RSUs and PSUs of each of those configurations. In the ideal case, this average RSU is 1, indicating that all possible configurations are valid, where valid means that all required services are provided by the components inside the configuration. In reality, however, the average RSU will be lower, indicating that some configurations are not valid. The lower the RSU, the more "spotty" a product family architecture is and the more attention has to be paid to the configuration process to ensure that a proper configuration is selected.

The average PSU for a product family architecture serves a similar kind of role. In the extreme case, the average is 1, indicating that all services provided by a product family architecture are actually used in each of the instances of that family. In reality, of course, the average is lower: since a product family architecture is built to provide a good degree of flexibility, it cannot be expected that all provided services are used within each of the product family members. If the average is too low, however, it indicates a degree of separation within the product family architecture, and perhaps a split into two or more separate product family architectures is required.
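The enumeration step behind these per-architecture averages can be sketched as follows. This is a simplified, hypothetical model of a PLA's configuration space: every subset of the optional components is crossed with every choice of one alternative per variant component; the component and alternative names are illustrative only.

```python
from itertools import product

def configurations(mandatory, optional, variants):
    """Enumerate all configurations of a product family architecture:
    each subset of the optional components combined with each choice of
    one alternative per variant component (a simplified model)."""
    opt_subsets = product([False, True], repeat=len(optional))
    var_choices = list(product(*variants)) if variants else [()]
    for included, chosen in product(opt_subsets, var_choices):
        opts = [c for c, inc in zip(optional, included) if inc]
        yield mandatory + opts + list(chosen)

# Hypothetical tiny PLA: one mandatory component, one optional component,
# one variant point with two alternatives -> 2 * 2 = 4 configurations.
configs = list(configurations(["PC12"], ["PC13"], [["PC11a", "PC11b"]]))
print(len(configs))  # 4
```

The per-architecture average RSU would then be obtained by computing the RSU of each enumerated configuration and averaging; note the configuration count grows exponentially with the number of optional and variant components, so exhaustive enumeration is only practical for modest PLAs.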

Of note is that not only the average but also the span of the average RSU and average PSU values of a product family architecture is a useful metric. The larger the range of average RSUs and PSUs, the more unbalanced a product family architecture is. Note that the span of average RSU and PSU values directly takes into account the existence of variant and optional components in product line architectures.

Conclusion
This paper has presented some preliminary results in applying metrics to product family architectures. Based upon two basic metrics that operate at the individual basic-component level, namely the Required Service Utilization and the Provided Service Utilization, we have defined additional metrics that are able to assess complex components and product family architectures as a whole. While our experience with the metrics is limited, we believe their close relationship to existing measures, such as cohesion, fan-in, and fan-out, is an indication of their applicability and relevance. Our immediate future work involves applying the metrics to an actual product family architecture to evaluate their applicability in a real-world setting. Additionally, we plan to refine and enhance our existing set of metrics while also introducing several new metrics that operate at the level of the individual services.


Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant No. CCR-9985441. Effort also sponsored by the Defense Advanced Research Projects Agency, Rome Laboratory, Air Force Materiel Command, USAF under agreement numbers F30602-00-2-0615 and F30602-00-2-0599. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Defense Advanced Research Projects Agency, Rome Laboratory or the U.S. Government. Effort also supported in part by Xerox.

References
1. A. van der Hoek, M. Rakic, R. Roshandel, and N. Medvidovic. Taming Architectural Evolution. ESEC/FSE 2001, Vienna, September 2001.
2. J. Poulin. Measuring Software Reuse. Addison Wesley, 1997.
3. T. J. McCabe. A Complexity Measure. IEEE Transactions on Software Engineering, 2(4): 308-320, 1976.
4. S. Henry and D. Kafura. Software Structure Metrics Based on Information Flow. IEEE Transactions on Software Engineering, 7(5): 510-518, 1981.
5. M. Lorenz and J. Kidd. Object-Oriented Software Metrics. Prentice Hall, 1994.

Testing a Software Product Line Architecture


The software product line approach to the development of software-intensive systems has been used by organizations to improve quality, increase productivity, and reduce cycle time. These gains require different approaches to a number of the practices in the development organization, including testing. The planned variability that facilitates some of the benefits of the product line approach poses a challenge for test-related activities. This chapter provides a comprehensive view of testing at various points in the software development process and describes specific techniques for carrying out the various test-related tasks. These techniques are illustrated using a pedagogical product line developed by the Software Engineering Institute (SEI).

Organizations are making the strategic decision to adopt a product line approach to the production of software-intensive systems. This decision is often a response to initiatives within the organization to achieve competitive advantage in its markets. The product line strategy has proven successful at helping organizations achieve aggressive goals for increasing quality and productivity and reducing cycle time. The strategy is successful, at least in part, due to its comprehensive framework that touches all aspects of product development. Testing plays an important role in this strategic effort. In order to achieve the overall goals of increased productivity and reduced cycle time, there need to be improvements to traditional testing activities. These improvements include:


- Closer cooperation between development and test personnel.
- Increased throughput in the testing process.
- Reduced resource consumption.
- Additional types of testing that address product line specific faults.

There are several challenges for testing in an organization that realizes seventy-five to ninety-five percent of each product from reusable assets. These include:

- variability - The breadth of the variability that must be accommodated in the assets, including test assets, directly impacts the resources needed for adequate testing.
- emergent behavior - As assets are selected from inventory and combined in ways not anticipated, the result can be an interaction that is a behavior not present in any one of the assets being combined. This makes it difficult to have a reusable test case that covers the interaction.
- creation of reusable assets - Test cases and test data are obvious candidates for reuse, and when used as-is they are easy to manage. The amount of reuse achieved in a project can be greatly increased by decomposing the test assets into finer-grained pieces that are combined in a variety of ways to produce many different assets. The price of this increased reuse is increased effort for planning the creation of the assets and for managing the increased number of artifacts.
- management of reusable assets - Reuse requires traceability among all of the pieces related to an asset in order to understand what an asset is, where it is stored, and when it is appropriate for use. A configuration management system provides this traceability through explicit artifacts.

I will discuss several activities that contribute to the quality of the software products that comprise the product line. I will think of these activities as forming a chain of quality in which quality-assuring activities are applied in concert with each production step in the software development process. In addition to discussing the changes in traditional testing processes needed to accommodate the product line approach, I will present a modified inspection process that greatly increases the defect-finding power of traditional inspection processes. This approach applies testing techniques to the conduct of an inspection. I will use a continuing example throughout the chapter to illustrate the topics and will summarize the example near the end of the chapter.

The Arcade Game Maker product line is an example product line developed for pedagogical purposes. A complete set of product line assets is available for this example product line. The product line consists of three games: Brickles, Pong, and Bowling. The variation points in the product line include the operating system on which the games run, a choice of an analog, digital, or no scoreboard, and whether the product has a practice mode.

The software product line strategy is a business strategy that uses a specific method to achieve its goals. The material in this chapter reflects this orientation by combining technical and managerial issues. I will briefly introduce a comprehensive approach to software product line development and provide a state-of-the-practice summary. Then I will describe the current issues, detail some experiences, and outline research questions regarding the test-related activities in a software product line organization.
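The Arcade Game Maker variation points just listed (game, operating system, scoreboard, practice mode) define a product space whose size testing must contend with. A sketch of enumerating that space follows; the concrete operating system names are assumptions for illustration, not part of the published example:

```python
import itertools

# Variation points from the Arcade Game Maker example.
# The OS variants listed here are hypothetical placeholders.
variation_points = {
    "game":       ["Brickles", "Pong", "Bowling"],
    "os":         ["Windows", "Linux"],
    "scoreboard": ["analog", "digital", "none"],
    "practice":   [True, False],
}

# Every combination of variants is a potential product configuration
# that test planning must account for.
configurations = [
    dict(zip(variation_points, combo))
    for combo in itertools.product(*variation_points.values())
]

print(len(configurations))  # 3 * 2 * 3 * 2 = 36 configurations
```

Even this tiny pedagogical product line yields dozens of configurations, which is why later sections stress managing the growth of variability.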


Software Product Lines


A software product line is a set of software-intensive systems sharing a common, managed set of features that satisfy the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed manner [8]. (The complete Arcade Game Maker example is available at http://www.sei.cmu.edu/productlines/ppl.) This definition has a number of implications for test strategies. Consider these key phrases from the definition:

- set of software-intensive systems - The product line is the set of products. The product line organization is the set of people, business processes, and other resources used to build the product line. The commonalities among the products translate into opportunities for reuse in the test artifacts. The variabilities among products determine how much testing will be needed.
- common, managed set of features - Test artifacts should be tied to significant reusable chunks of the products, such as features. These artifacts are managed as assets in parallel with the production assets to which they correspond. This reduces the effort needed to trace assets for reuse and maintenance purposes. A test asset is used whenever the production asset with which it is associated is used.
- specific needs of a particular market segment or mission - There is a specified domain of interest. The culture of this domain will influence the priorities of product qualities and ultimately the levels of test coverage. For example, a medical device that integrates hardware and software requires far more evidence of the absence of defects than the latest video game. Over time, those who work in the medical device industry develop a different view of testing and other quality assurance activities from workers in the video game domain.
- common set of core assets - The test core assets include test plans, test infrastructures, test cases, and test data. These assets are developed to accommodate the range of variability in the product line. For example, a test suite constructed for an abstract class is a core asset that is used to quickly create tests for concrete classes derived from the abstract class.
- in a prescribed manner - There is a production strategy and production method that define how products are built. The test strategy and test infrastructure must be compatible with the production strategy. A production strategy that calls for dynamic binding imposes similar constraints on the testing technology.

The software product line approach to development affects how many development tasks are carried out. Adopting the product line strategy has implications for the software engineering activities as well as the technical and organizational management activities. The Software Engineering Institute has developed a Framework for Software Product Line Practice (SM, a service mark of Carnegie Mellon University), which defines 29 practice areas that affect the success of the product line. A list of these practices is included in the Appendix. The practices are grouped into Software Engineering, Technical Management, and Organizational Management categories. These categories reflect the dual technical and business perspectives of a product line organization. A very brief description of the relationship between the Testing practice area and the other 28 practices is included in the list in the appendix.

The software product line approach seeks to achieve strategic levels of reuse. Organizations have been able to achieve these strategic levels using the product line approach, while other approaches have failed, because of the comprehensive nature of the product line approach. For example, consider the problem of creating a "reusable" implementation of a component outside a product line. The developer is left with no design context, just the general notion of the behavior the component should have, and the manager is not certain how to pay for the 50%-100% additional cost of making the component reusable instead of purpose-built. In a software product line, the qualities and properties required for a product to be a member provide the context. The reusable component only has to work within that context. The manager knows that the product line organization that owns all of the products is ultimately responsible for funding the development. This chapter will present testing in the context of such a comprehensive approach.
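The point above about a test suite written for an abstract class serving as a core asset can be sketched with `unittest`. The `Sprite`, `Puck`, and `Paddle` classes below are invented stand-ins loosely modeled on the arcade example, not the actual SEI assets:

```python
import unittest

# Hypothetical production core asset: an abstract sprite with one
# required behavior, and two concrete product-side classes.
class Sprite:
    def position(self):
        raise NotImplementedError

class Puck(Sprite):
    def position(self):
        return (0, 0)

class Paddle(Sprite):
    def position(self):
        return (5, 0)

# Core TEST asset: behavioral checks written once against the abstract
# interface. It is not itself a TestCase, so it is never run directly.
class AbstractSpriteTests:
    def make_sprite(self):
        raise NotImplementedError

    def test_position_is_a_pair(self):
        self.assertEqual(len(self.make_sprite().position()), 2)

# Product-side specializations: each concrete class gets the whole
# suite for the cost of one factory method.
class PuckTests(AbstractSpriteTests, unittest.TestCase):
    def make_sprite(self):
        return Puck()

class PaddleTests(AbstractSpriteTests, unittest.TestCase):
    def make_sprite(self):
        return Paddle()

loader = unittest.TestLoader()
suite = unittest.TestSuite([loader.loadTestsFromTestCase(PuckTests),
                            loader.loadTestsFromTestCase(PaddleTests)])
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())
```

The same pattern extends to any number of concrete variants, which is exactly the reuse leverage the definition's "common set of core assets" phrase promises.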

Commonality and Variability


The products in a product line are very similar but differ from each other (otherwise they would be the same product). The points at which the products differ are referred to as variation points. Each possible implementation of a variation point is referred to as a variant. The set of products that can be built by taking various combinations of the variants defines the scope of the product line. Determining the appropriate scope of the product line is critical to the success of the product line in general and the testing practice specifically. The scope constrains the possible variations that must be accounted for in testing. Too broad a scope will waste testing resources if some of the products are never actually produced. Too vague a scope will make test requirements either vague or impossible to write.

Variation in a product is nothing new. Control structures allow several execution paths to be specified but only one to be taken at a time. By changing the original inputs, a different execution path is selected from the existing paths. The product line approach adds a new dimension: from one product to another, the paths that are possible change. In the first type of variation, the path taken during a specific execution changes from one execution to the next, but the control flow graph does not change just because different inputs are chosen. In the second type of variation, different control flow graphs are created when a different variant is chosen. This adds the need for an extra step in the testing process, sampling across the product space, in addition to sampling across the data space. This is manageable only because of the use of explicit, pre-determined variation points and automation. Simply taking an asset and modifying it in any way necessary for a product will quickly introduce the level of chaos that has caused most reuse efforts to fail.

The commonality among the products in a product line represents a very large portion of the functionality of these products and the largest opportunity to reduce the resources required. In some product lines this commonality is delivered in the form of a platform which may provide virtually identical functionality to every product in the product line. A select set of features is then added to the platform to define each product. In other cases, each product is a unique assembly of assets, some of which are shared with other products. These different approaches will affect our choice of test strategy.
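The two kinds of variation described above can be made concrete with a small sketch. The scoreboard variant comes from the example product line; the functions themselves are hypothetical:

```python
# Kind 1: one fixed control flow graph; the inputs choose the path
# taken on a given execution.
def score_message(points):
    if points > 0:           # path selected at run time by the data
        return "scored"
    return "no score"

# Kind 2: choosing a variant yields a *different* control flow graph.
# With no scoreboard, the update branch does not exist at all.
def build_tick_handler(scoreboard_variant):
    if scoreboard_variant == "none":
        def tick(points):
            return score_message(points)
    else:
        def tick(points):
            msg = score_message(points)
            return f"{msg} [{scoreboard_variant} scoreboard updated]"
    return tick

# Testing must therefore sample the product space (variants) as well
# as the data space (inputs).
for variant in ("none", "analog", "digital"):
    handler = build_tick_handler(variant)
    for points in (0, 10):
        print(variant, points, handler(points))
```

The nested loop is exactly the "extra step" the text describes: the outer loop walks the product space, the inner loop walks the data space.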


The products that comprise a product line have much in common beyond the functional features provided to users. They typically attack similar problems, require similar technologies, and are used by similar markets. The product line approach facilitates the exploitation of the identified commonality, even down to the level of service manuals and training materials. The commonality/variability analysis of the product line method produces a model that identifies the variation points needed in the product line architecture to support the range of requirements for the products in the product line. Failure to recognize the need for variation in an asset will require custom development and management of a separate branch of code that must be maintained until a future major release.

Variability has several implications for testing:

- Variation is identified at explicit, documented variation points - Each of these points imposes a test obligation in terms of selecting either a test configuration or test data. Analysis is required at each point to fully understand the range of variation possible and the implications for testing.
- Variation among products means variation among tests - The test software will typically have at least the same variation points as the product software. Constraints are needed to associate test variants with product variants. One solution is to have automated build scripts that build both the product and the tests at the same time.
- Differences in behavior between the variants and the tests should be minimal - I will show later that using the same mechanisms to design the test infrastructure as are used to design the product software is usually an effective technique. Assuming the product mechanisms are chosen to achieve certain qualities, the test software should seek to achieve the same qualities.
- The specific variant to be used at a variation point is bound at a specific time - The binding time for a variation point is one specific attribute that should be carefully matched to the binding of tests. Binding the tests later than the product assets are bound is usually acceptable, but not the reverse.
- The test infrastructure must support all the binding times used in the product line - Dynamically bound variants are of particular concern. For example, binding aspects to the product code as it executes in the virtual machine may require special techniques to instrument test cases.
- Managing the range of variation in a product line is essential - Every variant added to a variation point potentially has a combinatorial impact on the test effort. Candidate variants should be analyzed carefully to determine the value added by each variant. Test techniques that reduce the impact of added variants will be sought as well.

Commonality also has implications for testing:

- Techniques that avoid retesting the common portions can greatly reduce the test effort.
- The common behavior is reused in several products, making an investment in test automation viable.
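One widely used way to blunt the combinatorial impact of added variants, mentioned in the list above, is to sample the variant space pairwise (all-pairs) rather than exhaustively. The greedy sketch below is an illustration of the idea, not a production covering-array generator, and the variant values are invented:

```python
import itertools

# Hypothetical variation points and variants.
points = {
    "os":         ["Windows", "Linux"],
    "scoreboard": ["analog", "digital", "none"],
    "practice":   [True, False],
}

names = list(points)
all_configs = [dict(zip(names, c)) for c in itertools.product(*points.values())]

def pairs_of(config):
    """All (variation point, variant) value pairs exercised by one config."""
    return {
        ((a, config[a]), (b, config[b]))
        for a, b in itertools.combinations(names, 2)
    }

# Greedy set cover: repeatedly pick the configuration that covers the
# most still-uncovered pairs, until every pair is covered.
uncovered = set().union(*(pairs_of(c) for c in all_configs))
selected = []
while uncovered:
    best = max(all_configs, key=lambda c: len(pairs_of(c) & uncovered))
    selected.append(best)
    uncovered -= pairs_of(best)

print(len(all_configs), len(selected))  # full space vs pairwise sample
```

Every pair of variant values still appears in at least one selected configuration, but far fewer configurations need to be built and tested than the full product of all variants.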

Planning and structuring


Planning and structuring are two activities that are important to the success of the product line. A software product line is chartered with a specific set of goals. Test planning takes these high-level goals into account when defining the goals for the testing activities. The test assets are structured to enhance their reusability. Techniques such as inheritance hierarchies, aspect-oriented programming, and template programming provide a basis for defining assets that possess specific attributes, including reusability.

Planning and structuring must be carried out incrementally. To optimize reuse, assets must be decomposed and structured to facilitate assembly by a product team. One goal is to have incremental algorithms that can be completed on individual modules and then be more rapidly completed for assembled subsystems and products using the partial results. Work in areas such as incremental model checking and component certification may provide techniques for incremental testing.

A product line organization defines two types of roles: core asset builder and product builder. The core asset builders create assets that span a sufficient range of variability to be usable across a number of products. The core assets include early assets, such as the business case, requirements, and architecture, and later assets such as the code. Test assets include plans, frameworks, and code. The core asset builder creates an attached process for each core asset. The attached process describes how to use the core asset in building a product. It may be a written description, provided as a cheat sheet in Eclipse or as a guidance element in the .NET environment, or it may be a script that drives an automated tool. The attached process adds value by reducing the time required for a product builder to learn how to use the core asset. The core asset builder's perspective is: create assets that are usable by product builders. A core asset builder's primary trade-off is between sufficient variability to maximize the reuse potential of an asset and sufficient commonality to provide substantial value in product building. The core asset developer is typically responsible for creating all parts of an asset. This may include test cases that are used to test the asset during development and can be used by product developers to sanity-test the asset once it is integrated into a product.

The product builders construct products using core assets and any additional assets they must create for the unique portion of the product. The product builder provides feedback to the core asset builders about the usefulness of the core assets, including whether the variation points were sufficient for the product. The product builder's perspective is: achieve the required qualities for their specific product as rapidly as possible. The product builder's primary trade-off is between maximizing the use of core assets in building the product and achieving the precise requirements of their specific product. In particular, product builders may need a way to select test data that focuses on the ranges critical to the success of their product.

To coordinate the work of the two groups, a production strategy is created that provides a strategic overview of how products will be created from the core assets. A production method is defined to implement the strategy. The method details how the core asset developers should build and test the core assets so that product builders can achieve their objectives.


Testing Overview
Testing is the detailed examination of an artifact guided by specific information. The examination is a search for defects. Program failures signal that the search has been successful. How hard we search depends on the consequences if a defect is released in a product. Software for life support systems will require a more thorough search than word processing software. The purpose of this section is to give a particular perspective on testing at a high level before discussing the specifics of testing in a software product line. I will discuss some of the artifacts needed to operate a test process. After that I will present a perspective on the testing role and then briefly describe fault models.

Testing Artifacts
This definition of testing encompasses a number of testing activities that are dispersed along the software development life cycle. I will refer to the places where these activities are located as test points. Figure 1 shows the set of test points for a high-level view of a development process. This sequence is intended to establish the chain of quality. The IEEE 829 standard defines a number of testing artifacts, which are used at each of the test points in the development process [19]. Several of these test assets are modified from their traditional form to support testing in a product line:

- test plan - A description of what testing will be done, the resources needed, and a schedule for when activities will occur. Any software development project should have a high-level, end-to-end plan that coordinates the various specific types of tests applied by developers and system testers. Individual test plans are then constructed for each test that will be conducted. In a software product line organization, a distinction is made between the plans developed by core asset builders, which are delivered to the builders of every product, and the plans that a product builder derives from the product line plan and uses to develop their product-specific plan. The core asset builders might provide a template document or a tool that collects the needed data and then generates the required plan.
- test case - A single configuration of test elements that embodies a use scenario for the artifact under test. In a software product line, a test case may have variation points that allow it to be configured for use with multiple products.
- test data - All of the data needed to fully describe a scenario. The data is linked to specific test cases so that it can be reused when the test case is. The test data may have variation points that allow some portion of the data to be included or excluded for a particular use.
- test report - A summary of the information resulting from the test process. The report is used to communicate to the original developers, management, and potentially customers. The test report may also be used as evidence of the quantity and quality of testing in the event of litigation related to a product failure.

The primary task in testing is defining effective test cases. To a tester, that means finding a set of stimuli that, when applied to the artifact under test, exposes a defect. I will discuss several techniques for test case creation, but all of them rely on a fault model. There will be a fault model for each test point. I will focus on faults related to the product line approach in section 1.
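A test case asset with its own variation points, as described for the artifacts above, might be modeled as follows. The class, field names, and scenario content are all illustrative assumptions, not an IEEE 829 prescription:

```python
from dataclasses import dataclass, field

# A reusable test-case core asset: the scenario steps and data are
# common, while variation points let a product builder configure the
# case for a specific product.
@dataclass
class TestCaseAsset:
    name: str
    common_steps: list
    variation_points: dict          # variation point -> allowed variants
    data: dict = field(default_factory=dict)

    def bind(self, **choices):
        """Bind each variation point to yield a product-specific test case."""
        for point, variant in choices.items():
            if variant not in self.variation_points[point]:
                raise ValueError(f"{variant!r} is not a variant of {point!r}")
        return {"name": self.name, "steps": list(self.common_steps),
                "bound": dict(choices), "data": dict(self.data)}

collision_case = TestCaseAsset(
    name="puck-paddle collision",
    common_steps=["start game", "move paddle under puck", "advance clock tick"],
    variation_points={"scoreboard": ["analog", "digital", "none"]},
    data={"puck_speed": 3},
)

# A product builder binds the case for one product of the family.
brickles_case = collision_case.bind(scoreboard="digital")
print(brickles_case["bound"])
```

Rejecting variants outside the declared set mirrors the constraint, discussed earlier, that test variants must stay associated with legal product variants.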

Testing Perspective
The test activities, particularly the review of non-software assets, are often carried out by people without a traditional testing background. Unit tests are usually carried out by developers, who pay more attention to creating than critiquing. A product line organization will provide some fundamental training in testing techniques and procedures, but this is no substitute for the perspective of an experienced tester. In addition to training in techniques, the people who take on a testing role at any point in any process should adopt the "testing perspective". This perspective guides how they view their assigned testing activities. Each person with some responsibility for a type of testing should consider how the following qualities should affect their actions.

- Systematic - Testing is a search for defects, and an effective search must be systematic about where it looks. The tester must follow a well-defined process when selecting test cases so that it is clear what has been tested and what has not. For example, coverage criteria are usually stated in a manner that describes the "system." An "all branches" level of test coverage means that test cases have been created for each path out of each decision point in the control flow graph.
- Objective - The tester should not make assumptions about the work to be tested. Following specific algorithms for test case selection removes the tester's personal feelings about what is likely to be correct or incorrect. "Bob always does good work, I don't need to test his work as thoroughly as John's," is a sure path to failure.
- Thorough - The tests should reach some level of coverage of the work being examined that is "complete" by some definition. Essentially, for some classes of defects, tests should look everywhere those defects could be located. For example, test every error path if the system must be fault tolerant.
- Skeptical - The tester should not accept any claim of correctness until it has been verified by an acceptable technique. Testing boundary conditions every time eliminates the assumption that "it is bound to work for zero."
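The "all branches" criterion and the skeptical tester's boundary value of zero can both be seen in a small hand-instrumented example. The instrumentation scheme is an illustration, not a real coverage tool:

```python
# Hand-instrumented function: each branch outcome records its id so we
# can check that a test set achieves "all branches" coverage.
taken = set()

def classify_score(points):
    if points < 0:
        taken.add("neg-true")
        return "invalid"
    taken.add("neg-false")
    if points == 0:
        taken.add("zero-true")
        return "no score"
    taken.add("zero-false")
    return "scored"

# A systematic test set: one input per branch outcome, including the
# boundary value zero that a skeptical tester never skips.
for points in (-1, 0, 5):
    classify_score(points)

all_branches = {"neg-true", "neg-false", "zero-true", "zero-false"}
print(taken == all_branches)  # True: every branch outcome was exercised
```

Dropping the `0` input from the test set would leave the `zero-true` outcome uncovered, which is exactly the gap "all branches" coverage is defined to expose.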


Fault models
A fault model is the set of known defects that can result from the development activities leading up to a test point. Faults can be related to several different aspects of the development environment. For example, programs written in C and C++ are well known for null pointer errors. Object-oriented design techniques introduce the possibility of certain types of defects, such as invoking the incorrect virtual method [29, 1]. Faults are also the result of the development process and even the organization. For example, interface errors are more likely to occur between modules written by teams that are not co-located.

The development organization can develop a set of fault models that reflect its unique blend of process, domain, and development environment. The organization can incorporate existing models, such as Chillarege's Orthogonal Defect Classification, into the fault models developed for the various test points [7]. Testers use fault models to design effective and efficient test cases, since the test cases are specifically designed to search for defects that are likely to be present.

Developers of safety-critical systems construct fault trees as part of a failure analysis. A failure analysis of this type usually starts at the point of failure and works back to find the cause. Another type of analysis is conducted as a forward search. These types of models capture specific faults; we want to capture fault types, or categories of faults. The models are further divided so that each model corresponds to specific test points in the test process. A product line organization develops fault models by tracking defects and classifying them to provide a definition of each defect and the frequency with which it occurs. Table 1 shows some possible faults per test point related to a product line strategy. Others can be identified from the fault trees produced by safety analyses conducted on product families [11, 10, 23].
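Building a fault model by tracking and classifying defects, as just described, can start with something as simple as counting defect reports by test point and fault category. The defect log below is entirely invented for illustration:

```python
from collections import Counter

# Hypothetical defect log: (test point, fault category) per report.
defect_log = [
    ("unit", "null pointer"),
    ("unit", "incorrect virtual method"),
    ("integration", "interface mismatch"),
    ("integration", "interface mismatch"),
    ("system", "missing variant binding"),
    ("integration", "interface mismatch"),
]

# Frequencies per test point drive test-case design: the most common
# fault categories at a test point are the ones its test cases should
# be designed to search for first.
by_test_point = {}
for point, category in defect_log:
    by_test_point.setdefault(point, Counter())[category] += 1

for point, counts in by_test_point.items():
    print(point, counts.most_common(1))
```

Over time the counters become the empirical fault model for each test point, in the spirit of Orthogonal Defect Classification.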

Summary
Nothing that has been said so far requires that the asset under test be program source code. The traditional test points are the units of code produced by an individual developer or team, the point at which a team's work is integrated with that of other teams, and the completely assembled system. In section 4 I present a review/inspection technique, termed Guided Inspection, that applies the testing perspective to the review of non-software assets. This technique can be applied at several of the test points shown in Figure 1. This adds a number of test points to what would usually be referred to as "testing" but it completes the chain of quality. In section 5 I present techniques for the other test points. In an iterative, incremental development process the end of each phase may be encountered many times, so the test points may be exercised many times during a development effort. Test implementations must be created in anticipation of this repetition.

Guided Inspection
Guided Inspection is a technique that applies the discipline of testing to the review of non-software assets. The review process is guided by scenarios that are, in effect, test cases. The technique is based on Active Reviews by Parnas and Weiss [30]. An inspection technique is appropriate for a chapter about testing in a product line organization because reviews and inspections are integral to the chain of quality and because test professionals should play a key role in ensuring that these techniques are applied effectively.

The Process
Consider a detailed design document for the computation engine in the arcade game product line. The document contains a UML model, complete with OCL constraints, as well as text tying the model to portions of the use case model that defines the product line. A Guided Inspection follows the steps shown in Figure 2, a screenshot from the Eclipse Process Framework Composer. The scenarios are selected from the use case model shown in Figure 3. For this example, I picked a very simple scenario that corresponds to a test pattern discussed later: "The Player has begun playing Brickles. The puck is in play and the Player is moving the paddle to reflect the puck. No pucks have been lost and no bricks hit so far. The puck and paddle have just come together on this tick of the clock."


The Game Board shown in the class diagram in Figure 4 is reused, as-is, in each game. It serves as a container for Game Pieces. According to the scenario, the game is in motion, so the game instance is in the moving state, one of the states shown in Figure 8. The sequence diagram in Figure 5 shows the action as the clock sends a tick to the game board, which sends the tick on to the Movable Sprites on the game board. After each tick the game board invokes the "check for collision" algorithm shown in Figure 6. The collision detection algorithm detects that the puck and paddle have collided and invokes the collision handling algorithm shown in Figure 7.

In the inspection session, the team reads the scenario while tracing through the diagrams to be certain that the situation described in the scenario is accurately represented in the design model, looking for problems such as missing associations among classes and missing messages between objects. The defects found are noted and, in some development organizations, would be written up as problem reports. Sufficient scenarios are created and traced to give evidence that the design model is complete, correct, and consistent. Coverage is measured by the portions of diagrams, such as specific classes in a class diagram, that are examined as part of a scenario. One possible set of coverage criteria, listed in order of increasing coverage, includes:

- a scenario for each end-to-end use case, including "extends" use cases
- a scenario that touches each "includes" use case
- a scenario that touches each variation point
- a scenario that uses each variant of each variation point
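Coverage of a Guided Inspection session against criteria like those above can be tracked mechanically. The scenario records, use cases, and variation points below are invented for illustration:

```python
# Each inspection scenario records which use cases and variation points
# it touched while being traced through the design model.
scenarios = [
    {"use_cases": {"play game"}, "variation_points": {"scoreboard"}},
    {"use_cases": {"pause game"}, "variation_points": set()},
]

# The inventory the criteria are measured against (hypothetical).
all_use_cases = {"play game", "pause game", "save score"}
all_variation_points = {"scoreboard", "practice mode", "operating system"}

touched_uc = set().union(*(s["use_cases"] for s in scenarios))
touched_vp = set().union(*(s["variation_points"] for s in scenarios))

# Anything left uncovered tells the team which scenarios to write next.
print("uncovered use cases:", sorted(all_use_cases - touched_uc))
print("uncovered variation points:", sorted(all_variation_points - touched_vp))
```

Here the report would direct the team to write scenarios for the "save score" use case and for the practice mode and operating system variation points before declaring the inspection adequate.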

Non-functional requirements
Software product line organizations are concerned about more than just the functional requirements for products. These organizations want to address the non-functional requirements as early as possible. Non-functional requirements, sometimes called quality attributes, include characteristics such as performance, modifiability, and dependability. Reis et al. describe a technique for using scenarios to begin testing for performance early in the life of the product line [31]. Both Guided Inspection and the ATAM provide a means for investigating quality attributes. During the "create scenarios" activity, scenarios elicited from stakeholders describe desirable product behavior in terms of user-visible actions.
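A quality attribute scenario typically pairs a stimulus with a measurable response. The sketch below shows one way such a scenario could be recorded and checked; the field names and the 20 ms budget are invented for the example.

```python
# Illustrative sketch: a performance scenario as data, plus a check of a
# measured response against the scenario's response measure.

scenario = {
    "stimulus": "clock tick arrives during normal play",
    "response_measure_ms": 20,   # the game must update the board within 20 ms
}

def meets_scenario(measured_ms, scenario):
    """True when the measured response stays within the scenario's budget."""
    return measured_ms <= scenario["response_measure_ms"]

print(meets_scenario(12, scenario))  # True
print(meets_scenario(35, scenario))  # False
```

Capturing scenarios as data like this lets the same response measures be re-checked automatically as the product line evolves.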

Testing Techniques for a Product Line



The test practice in a product line organization encompasses all of the knowledge about testing necessary to operate the test activities in the organization. This includes the knowledge about the processes, technologies, and models needed to define the test method. I will first discuss testing as a practice, since this provides a more comprehensive approach than discussing each of the individual processes needed at the individual test points [20]. Then I will use the three-phase view of testing developed by Hetzel [18] - planning, construction, and execution - to structure the details of the rest of the discussion. In his keynote to the Software Product Line Testing Workshop (SPLiT), Grutter listed four challenges to product line testing [17]:

- Meeting the product developer's quality expectations for core assets.
- Establishing testing as a discipline that is well-regarded by managers and developers.
- Controlling the growth of variability.
- Making design decisions for testability.

I will incorporate a number of techniques that address these challenges.



Test method overview


The test practice in an organization encompasses the test methods, coordinated sets of processes, tools, and models, that are used at all test points. For example, "test-first development" is an integrated development and testing method that follows an agile development process model and is supported by tools such as JUnit. The test method for a product line organization defines a number of test processes that operate independently of each other. This is true for any project, but in a product line organization these processes are distributed among the core asset and product teams and must be coordinated. Often the teams are not co-located and communicate via a variety of mechanisms. The test method for a project must be compatible with the development method. The test method for an agile development process is very different from the test method for a certifiable software (FAA, FDA, etc.) process. The rhythm of activity must be synchronized between the test and development methods. Tasks must be clearly assigned to one method or the other. Some testing tasks, such as unit testing, will often be assigned to development staff. These tasks are still defined in the testing method so that expectations for defect search, including levels of coverage and fault models, can be coordinated.
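The test-first rhythm mentioned above can be illustrated with Python's `unittest` (the stdlib analogue of JUnit). The `Score` class is an invented core-asset example: the test states the expected behavior, and the class is implemented to satisfy it.

```python
# A minimal test-first sketch using unittest, analogous to the
# JUnit-supported "test-first development" method described in the text.

import unittest

class Score:
    """Tiny made-up core-asset class, implemented to satisfy the test below."""
    def __init__(self):
        self.points = 0

    def brick_hit(self):
        self.points += 10   # each brick is worth 10 points in this sketch

class ScoreTest(unittest.TestCase):
    def test_brick_hit_adds_points(self):
        score = Score()
        score.brick_hit()
        self.assertEqual(score.points, 10)

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ScoreTest))
print(result.wasSuccessful())  # True
```

In a product line setting, tests like this travel with the core asset so every product team inherits the same expectations for defect search.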


There is a process defined for the conduct of testing at each of the test points. This process defines procedures for constructing test cases for the artifacts that are under test at that test point. The test method defines a comprehensive fault model and assigns responsibility for specific defect types to each test point. The product line test plan should assign responsibility for operating each of these test processes.
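One simple way to make such a fault model explicit is a lookup from defect type to the test point that owns it. The defect types and test point names below are invented examples, not a standard taxonomy.

```python
# Hedged sketch: a fault model as an explicit mapping from defect types
# to the test point responsible for finding them.

FAULT_MODEL = {
    "off-by-one in collision loop": "unit test",
    "wrong variant bound at build time": "integration test",
    "missing use case behavior": "system test",
}

def responsible_test_point(defect_type):
    """Look up which test point owns a defect type; fail loudly on gaps."""
    try:
        return FAULT_MODEL[defect_type]
    except KeyError:
        raise ValueError(f"fault model does not assign: {defect_type}")

print(responsible_test_point("wrong variant bound at build time"))
```

Failing loudly on an unassigned defect type surfaces gaps in the fault model, which is exactly the coordination problem the test plan is meant to solve.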


An example model of the testing practice for a software product line organization is provided at http://www.cs.clemson.edu/johnmc/PublishTest/index.htm. The Software Process Engineering Meta-model (SPEM) [16], a standard from the Object Management Group (OMG), provides a basis for modeling methods. The Eclipse Process Framework (EPF) Composer provides an implementation of this meta-model in the form of an IDE for method modeling; the example model conforms to SPEM and was developed using EPF.

Test planning


Product family engineering


Product family engineering (PFE), also known as product line engineering, is a synonym for "domain engineering" created by the Software Engineering Institute, a term coined by James Neighbors in his 1980 dissertation at the University of California, Irvine.

Software product lines are quite common in our daily lives, but before a product family can be successfully established, an extensive process has to be followed. This process is known as product family engineering. Product family engineering can be defined as a method that creates the underlying architecture of an organization's product platform. It provides an architecture that is based on commonality as well as planned variability. The various product variants can be derived from the basic product family, which creates the opportunity to reuse and differentiate products in the family.

Product family engineering is a relatively new approach to the creation of new products. It focuses on engineering new products in such a way that it is possible to reuse product components and apply variability with decreased cost and time. Product family engineering is all about reusing components and structures as much as possible. Several studies have shown that using a product family engineering approach for product development can have several benefits (Carnegie Mellon (SEI), 2003). Here is a list of some of them:

- Higher productivity
- Higher quality
- Faster time-to-market
- Lower labor needs

The Nokia case mentioned below also illustrates these benefits.
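The core idea, one platform of common parts plus planned variation points from which family members are derived, can be sketched as follows. The component and variant names are invented for illustration.

```python
# Sketch: a product platform with common components and planned
# variation points; a family member is derived by choosing one variant
# for each point.

PLATFORM = {
    "common": ["kernel", "ui-framework"],
    "variation_points": {
        "display": ["mono", "color"],
        "language": ["en", "fi"],
    },
}

def derive_product(choices):
    """Derive one family member; reject choices outside the planned variability."""
    for point, variant in choices.items():
        if variant not in PLATFORM["variation_points"][point]:
            raise ValueError(f"{variant} is not a planned variant of {point}")
    return PLATFORM["common"] + sorted(choices.values())

print(derive_product({"display": "color", "language": "en"}))
```

Rejecting unplanned variants is what keeps variability under control: every product is a combination the platform explicitly anticipated.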

Overall process
The product family engineering process consists of several phases. The three main phases are:

- Phase 1: Product management
- Phase 2: Domain engineering
- Phase 3: Product engineering


The process has been modeled at a high level of abstraction. This has the advantage that it can be applied to all kinds of product lines and families, not only software.

Phase 1: product management


The first phase starts up the whole process. In this phase some important aspects are defined, especially with regard to economics. This phase is responsible for outlining market strategies and defining a scope, which determines what should and should not be inside the product family.
Evaluate business visioning

During this first activity all context information relevant to defining the scope of the product line is collected and evaluated. It is important to define a clear market strategy and to take external market information, such as consumer demands, into account. The activity should deliver a context document that contains guidelines, constraints, and the product strategy.
Define product line scope

Scoping techniques are applied to define which aspects are within the scope. This is based on the previous step in the process, where external factors were taken into account. The output is a product portfolio description, which includes a list of current and future products as well as a product roadmap. It can be argued whether phase 1, product management, is part of the product family engineering process, because it could be seen as an individual business process that focuses on management aspects rather than the product. However, phase 2 needs some important input from this phase, as a large part of the scope is defined here. From this point of view it is important to include the product management phase in the overall process as a base for the domain engineering process.
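A scoping decision can be sketched as a check of a proposed feature against the product portfolio description. The portfolio contents below are invented for the example.

```python
# Illustrative scoping sketch: a feature is in scope only if the
# product roadmap already plans it.

PORTFOLIO = {
    "current_products": ["basic-phone", "camera-phone"],
    "roadmap_features": {"mp3-playback", "video-call"},
}

def in_scope(feature):
    """Decide whether a proposed feature falls within the product line scope."""
    return feature in PORTFOLIO["roadmap_features"]

print(in_scope("mp3-playback"))      # True
print(in_scope("satellite-uplink"))  # False
```

Even this trivial gate captures the point of scoping: the roadmap, not individual product teams, decides what belongs in the family.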

Phase 2: domain engineering


During the domain engineering phase the variable and common requirements are gathered for the whole product line. The goal is to establish a reusable platform. The output of this phase is a set of common and variable requirements for all products in the product line.


Analyze domain requirements

This activity includes all activities for analyzing the domain with regard to concept requirements. The requirements are categorized and split into two new activities. The output is a document with the domain analysis. As can be seen in Figure 1, defining common requirements runs in parallel with defining variable requirements; both activities take place at the same time.
Define common requirements

Includes all activities for eliciting and documenting the common requirements of the product line, resulting in a document with reusable common requirements.
Define variable requirements

Includes all activities for eliciting and documenting the variable requirements of the product line, resulting in a document with the variable requirements.
Design domain

This process step consists of activities for defining the reference architecture of the product line. This generates an abstract structure for all products in the product line.
Implement domain

During this step a detailed design of the reusable components and the implementation of these components are created.
Test domain

Validates and verifies the reusability of components. Components are tested against their specifications. After successful testing of all components in different use cases and scenarios, the domain engineering phase has been completed.

Phase 3: product engineering


In the final phase a product X is engineered. This product uses the commonality and variability from the domain engineering phase, so product X is derived from the platform established there. It basically takes all common requirements and similarities from the preceding phase plus its own variable requirements. Using the base from the domain engineering phase and the individual requirements of the product engineering phase, a complete new product can be built. After the product has been fully tested and approved, it can be delivered.
Define product requirements

Develops the product requirements specification for the individual product, reusing the requirements from the preceding phase.
Design product

All activities for producing the product architecture. This step makes use of the reference architecture from "design domain": it selects and configures the required parts of the reference architecture and incorporates product-specific adaptations.
Build product

During this process the product is built, using selections and configurations of the reusable components.
Test product

During this step the product is verified and validated against its specifications. A test report gives information about all tests that were carried out, giving an overview of possible errors in the product. If the product is not accepted in the next step, the process loops back to "build product"; in Figure 1 this is indicated as "[unsatisfied]".
Deliver and support product

The final step is the acceptance of the final product. If it has been successfully tested and approved as complete, it can be delivered. If the product does not satisfy the specifications, it has to be rebuilt and tested again.
The next figure shows the overall process of product family engineering as described above. It is a full process overview with all concepts attached to the different steps.


Process data diagram


On the left side the entire process has been drawn from top to bottom. All activities on the left side are linked to the concepts on the right side through dotted lines. Every concept has a number, which reflects its association with other concepts.


List of concepts
The list of concepts is explained below. Most concept definitions are extracted from Pohl, Böckle & van der Linden (2005); some new definitions have also been added.
Domain analysis: Document containing an analysis of the domain through which common and variable requirements can be split up.

Reusable common requirements: Document containing requirements that are common to all products in the product line.

Variable requirements: Document containing the derivation of customised requirements for different products.

Reference architecture: Determines the static and dynamic decomposition that is valid for all products of the product line, together with the collection of common rules guiding the design, the realisation of the parts, and how they are combined to form products.

Variability model: Defines the variability of the product line.

Design & implementation assets of reusable components: The major components for the design and implementation aspects that are relevant for the whole product family.

Test results: The output of the tests performed in domain testing.

Reusable test artifacts: Include the domain test plan, the domain test cases, and the domain test case scenarios.

Requirements specifications: The requirements for a particular product.

Product architecture: Comparable to the reference architecture, but contains the product-specific architecture.

Running application: A working application that can be tested later on.

Detailed design artifacts: The different kinds of models that capture the static and dynamic structure of each component.

Test report: Document with all test results of the product.

Problem report: Document listing all problems encountered while testing the product.

Final product: The delivery of the completed product.

Family model: The overlapping concept of all family members with all sub-products.

Family member: The concept of the individual product.

Context document: Document containing important information for determining the scope: guidelines, constraints, and the product strategy.

Guidelines: Market/business/product guidelines.

Constraints: Market/business/product constraints.

Product strategy: The product strategy with regard to markets.

Product portfolio description: Portfolio containing all available products, with important properties.

List of current & future products: A list of all current products and the products that will be produced in the future.

Product roadmap: Describes the features of all products of the product line and categorises the features into common features that are part of each product and variable features that are only part of some products.

Table 1: List of concepts

Example
There are some good examples of successful use of product family engineering. The abstract model of product family engineering allows different kinds of uses, most of them related to the consumer electronics market. Below, an example is given of an application of the product line engineering process, based on a real experience of Nokia.

Nokia produces different types of products, among them a mobile phone product family currently containing 25 to 30 new products every year. These products are sold all over the world, which makes it necessary to support many different languages and user interfaces. A main problem is that several different user interfaces must be supported, and because new products succeed each other very quickly, this should be done as efficiently as possible. Product family engineering makes it possible to create software for the different products and use variability to customize the software to each different mobile phone.

The Nokia case is comparable with a normal software product line. During the first phase, product management, it is possible to define the scope of the different mobile phone series. During the second phase, domain engineering, requirements are defined for the family and for the individual types of phones, e.g., the 6100/8300 series. In this phase the software requirements are made, which can serve as a base for the whole product family; this speeds up the overall development process for the software. The last phase, product engineering, is more focused on the individual types of phones. The requirements from the preceding phase are used to create individual software for the type of phone then being developed. The use of a product line gave Nokia the opportunity to increase their production of new mobile phone models from 5-10 to around 30 (Carnegie Mellon (SEI), 2006; Clements & Northrop, 2003).
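The kind of variability described in this case, one common software base customized per phone series and language, can be sketched as follows. The series names and strings are invented for illustration and do not describe Nokia's actual software.

```python
# Illustrative sketch of language/UI variability over a common code
# path: only the language variant differs between family members.

MESSAGES = {
    "en": {"greeting": "Welcome"},
    "fi": {"greeting": "Tervetuloa"},
}

def boot_screen(series, language):
    """Common code path; the language variation point selects the strings."""
    return f"{MESSAGES[language]['greeting']} - Nokia {series}"

print(boot_screen("6100", "fi"))  # Tervetuloa - Nokia 6100
```

Because the common code path never changes, adding a new language or series means adding a variant, not rewriting the product.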


Software product family evaluation


Agile product line architecture
