Increasingly, private companies and public institutions choose to acquire a Commercial Off-The-Shelf system (COTS), and increasingly the acquisition fails because the COTS is a poor fit for the architecture of the enterprise on one or more levels. This article investigates how to make sure a COTS will fit the enterprise architecture, thereby ensuring the success of the acquisition.
Previous research has largely focused on finding best practices for the acquisition of COTS. A good survey can be found in Land et al. (2008), where the authors summarize previous research and recommend a model to follow when acquiring a COTS. Other articles have focused on a particular subject, for example the financial aspects (Yeoh & Miller 2004) or integration requirements (Lauesen 2004). Yet others have focused on a single case. A good general survey of important principles in the acquisition of COTS products can be found in Damsgaard and Karlsbjerg (2010), but their definition of COTS seems unnecessarily general. Here we will not review best-practice acquisition processes or investigate particular subjects. Instead we will review 10 cases of COTS acquisition in one company and investigate which factors significantly affect the outcome of an acquisition.
COTS. It is not a simple matter to define exactly what a COTS is. In their article "What Do You Mean by COTS?", Carney and Long (2000) argue that COTS has two dimensions: source and modification. The source dimension specifies the degree to which the system is externally produced, ranging from in-house to completely externally produced. The modification dimension specifies the extent to which the system has been modified, ranging from extensive reworking of the source code to very limited or no adaptation. For the purposes of this article I have defined a COTS as a system that is at most customized, but not modified in the source code. It also needs to be an existing component sold by an external supplier. This means that a system specified by the customer and produced externally does not count as a COTS (cf. Torchiano et al. 2004).

Success. Another important term is success. An acquisition and subsequent implementation of a COTS is a success if it complies with the target enterprise architecture. No previous COTS study has offered any definition or indication of how success would be identified, so here we have to start from scratch. Success has two components. The first is the degree to which the business objectives were reached: it is reasonable to demand that acquisition of the system has a business effect and does not just shine in the corner of the IT department. But this is not enough. If the project was long overdue, the budget exceeded, the delivered scope minuscule compared to what was expected, and the quality terrible, we would hardly call it a complete success. Therefore the project parameters budget, time, scope and quality, well known from project management methodologies like PRINCE2, are introduced to measure how big a success it is. In this investigation we measured the business success and the project success parameters on a scale from 0 to 3 for all COTS.¹
In order to construct a measure of success that takes into account both business success and project success, a combined score was constructed. The project success is converted to a percentage: when all project parameters score top marks, the project scores 100% on project success. The total success score is then the business success multiplied by the project success. Let us look at an example: if the business success of a COTS acquisition was fairly good, it could score 2, but if the project was not on time and had a smaller scope than planned, the project success would perhaps only be 60% of the maximum. The consolidated success score would therefore be 2 × 0.6 = 1.2. The actual scores can be seen in Table 1 below.²

Case              1     2     3     4     5     6     7     8     9     10
Budget            1     2     1     2     0     3     3     1     3     3
Time              0     3     1     3     0     3     3     1     3     3
Scope             0     2     1     3     2     2     3     3     3     3
Quality           0     3     3     3     1     3     3     1     1     3
Business success  0     3     1     3     2     3     3     3     2     3
Success score     0     2.5   0.5   2.75  0.5   2.75  3     1.5   1.67  3

Table 1. Success scores for the investigated cases.

¹ 0 = complete failure, 1 = significant problems, 2 = fair, 3 = perfect.
² The scores were made based on discussions with the relevant project managers, emails, data from the project management system, and project documents.

The evaluation criteria were selected based on what has been mentioned as important in earlier literature. Previous articles have most often focused in depth on one aspect, such as integration (Lauesen 2004), economy (Yeoh & Miller 2004) or costs (Schneidewind 1999). In order to find out which criteria, if any, were most important, these were combined into a list of the 23 criteria most commonly mentioned in previous research. I added two I had experienced as important (Cultural compatibility and Time-to-market). They were then grouped into six groups for analytical reasons (see Table 2).

Evaluation: The degree of evaluation is measured as a combination of how many criteria had been evaluated by the project team and how thoroughly.
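The consolidated score can be computed directly from the four project parameters and the business success. The sketch below is illustrative Python (not from the study), with the case data transcribed from Table 1:

```python
# Success score = business success x (project success as a fraction of its
# maximum). All parameters are rated 0-3, so the four project parameters
# sum to at most 12. Case data transcribed from Table 1.
cases = {
    1:  {"budget": 1, "time": 0, "scope": 0, "quality": 0, "business": 0},
    2:  {"budget": 2, "time": 3, "scope": 2, "quality": 3, "business": 3},
    3:  {"budget": 1, "time": 1, "scope": 1, "quality": 3, "business": 1},
    4:  {"budget": 2, "time": 3, "scope": 3, "quality": 3, "business": 3},
    5:  {"budget": 0, "time": 0, "scope": 2, "quality": 1, "business": 2},
    6:  {"budget": 3, "time": 3, "scope": 2, "quality": 3, "business": 3},
    7:  {"budget": 3, "time": 3, "scope": 3, "quality": 3, "business": 3},
    8:  {"budget": 1, "time": 1, "scope": 3, "quality": 1, "business": 3},
    9:  {"budget": 3, "time": 3, "scope": 3, "quality": 1, "business": 2},
    10: {"budget": 3, "time": 3, "scope": 3, "quality": 3, "business": 3},
}

def success_score(case):
    """Business success multiplied by project success (a 0-1 fraction)."""
    project = (case["budget"] + case["time"]
               + case["scope"] + case["quality"]) / 12
    return case["business"] * project

for n, case in sorted(cases.items()):
    print(f"Case {n}: {success_score(case):.2f}")
```

With a maximum of 3 on business success and 100% on project success, the maximum consolidated score is 3, as in the table.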
Background of the study

The company where this study took place is Coop IT.³ Coop is Denmark's largest retailer of fast moving consumer goods and has approximately 35,000 employees. Coop IT operates and develops all of Coop's IT, and the strategy is a best-of-breed strategy where no enterprise-wide ERP package has been introduced. This means that acquisition of COTS is quite common.

³ The author was working there before and during the study as Analyst and IT architect.

Group           Criterion                        Description
Functionality   Process                          What are the business processes that are to be supported?
                Conceptual model                 How well do the concepts of the system fit the intended use?
                Requirements                     Are the functional requirements well understood?
                Usability                        Has the way the system is to be used been considered?
Implementation  Time to market                   Has it been considered how fast the system can be delivered?
                Price                            Has the price been considered?
                Project team                     How strong is the suggested project team?
                Cultural compatibility           How well does the culture of the supplier fit?
                Suppliers experience             How experienced is the supplier?
Risk            Crime                            What are the criminal threats?
                Suppliers financial situation    What is the supplier's financial situation?
                Product maturity                 Is it a new or a mature product?
                Exit strategy                    Is there a way to get out of the system?
Strategy        Strategic fit for customer       How well does the COTS fit the strategy of the customer?
                Strategic fit for supplier       How well does the customer fit the supplier's strategic segments?
                Product roadmap                  How strong is the vendor's product roadmap?
Integration     Interface definition             How well is the interface defined?
                Familiar integration technology  How well does the integration technology fit with the current system portfolio?
                System context                   Is the system context well understood?
Operation       Support                          How good is the support that can be offered?
                Performance                      How well will the system scale?
                Upgrade                          How difficult is it to upgrade?
                Business case                    Is there a business case?

Table 2. Evaluation criteria used in previous research.

Size: It is well known that the size of projects negatively affects the probability of success (Flyvbjerg & van Wee 2008). Size is measured in Full Time Equivalents (FTE); in the case company an FTE is calculated as 1,370 man-hours. The size of the project refers only to the implementation part; the prestudy and the evaluation itself are not included in the project size.
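For illustration, the grouped criteria of Table 2 can be kept as a simple checklist. The sketch below is a hypothetical structure (not from the study): it records a 0-3 rating per criterion, computes the evaluation degree used later in the analysis, and flags groups that were not considered at all:

```python
# Evaluation criteria grouped as in Table 2. Rating scale assumed here:
# 0 = not evaluated ... 3 = thoroughly evaluated.
CRITERIA = {
    "Functionality": ["Process", "Conceptual model", "Requirements",
                      "Usability"],
    "Implementation": ["Time to market", "Price", "Project team",
                       "Cultural compatibility", "Suppliers experience"],
    "Risk": ["Crime", "Suppliers financial situation", "Product maturity",
             "Exit strategy"],
    "Strategy": ["Strategic fit for customer", "Strategic fit for supplier",
                 "Product roadmap"],
    "Integration": ["Interface definition", "Familiar integration technology",
                    "System context"],
    "Operation": ["Support", "Performance", "Upgrade", "Business case"],
}

def evaluation_degree(ratings):
    """Sum of all criterion ratings: the evaluation degree of a project."""
    return sum(ratings.values())

def uncovered_groups(ratings):
    """Groups in which no criterion was evaluated at all."""
    return [group for group, criteria in CRITERIA.items()
            if all(ratings.get(c, 0) == 0 for c in criteria)]
```

A hypothetical project that only rated "Process" (3) and "Price" (2) would have an evaluation degree of 5 and four uncovered groups, the kind of narrow pattern the analysis below associates with failure.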
First a list of all registered applications in Coop was extracted from the Enterprise Architecture system. To this list I added 12 applications that were identified during the research phase. From the list were then removed all applications that did not fit the above-mentioned criteria for being a COTS. I also removed applications that had been implemented before the year 2000, because the probability of finding reliable evidence about their implementation was too low. This resulted in a list of 13 applications. After the analysis phase a further three were removed because the evidence collected about them was insufficient.

For all applications on the list, one to three employees who had been responsible for, or otherwise part of, the implementation were interviewed. Interviews were conducted according to a protocol that reflected the parameters of this study. In some cases it was also possible to acquire project documents and mail exchanges documenting the actual evaluation of the COTS. A third source of evidence was the informal knowledge gathered by the author after working four years as an analyst in the company, but no cases were based solely on this source. After this phase each case was rated according to the available evidence, for example interviews, data from project management systems, and project documents. Especially contentious ratings were discussed with project managers or other people close to the project. This resulted in a spreadsheet with all the details of the evaluation and success of the 10 investigated COTS acquisition cases.

Analysis

Overall there was no distinct pattern in which individual criteria were evaluated (see Figure 1). There was a tendency for criteria related to financial matters to be the most thoroughly evaluated. The least frequently evaluated were security threats and how to upgrade. Neither did there seem to be a distinct pattern in which groups of criteria (Functionality, Implementation, Risk, Strategy, Integration and Operation) were most thoroughly evaluated across the cases.
[Figure 1. Average evaluation degree for each criterion]

The success score was calculated as shown above; the maximum success score was therefore 3. Project size is measured in Full Time Equivalents (FTE), and evaluation degree is measured as the sum of the scores of all evaluated criteria (0 = not evaluated, 3 = thoroughly evaluated), that is, the total amount of evaluation done in the project. In order to explore the data further, a multiple regression analysis was carried out to find out which parameters had a significant effect on the success of the COTS acquisition. Several different variables were tried; the variables found to have the greatest effect on success were project size and degree of evaluation. This resulted in the following regression equation:

success score = -0.15 * project size + 0.03 * evaluation degree + 1.75

R squared for the regression formula is 0.60 and is significant at a 95% confidence level. This means that this simple formula explains 60% of the variation in the success of the COTS acquisition projects studied. The remaining 40% are other factors like the process followed, the competence of the team, and so on. This is a remarkably high proportion considering that previous research has focused entirely on the process followed in the acquisition. This result, on the contrary, suggests that the process followed in the acquisition of a COTS is less important than previously assumed.

We see that project size has a considerably negative effect on the success of the project, whereas evaluation degree has a positive one. Every time you increase the project size by one full time equivalent, you lower the projected success by 0.15. Luckily, you can do something about this, since raising the evaluation degree by one increases the likely success by 0.03.
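Assuming the reported coefficients, the regression equation can be used as a back-of-the-envelope planning tool. The helper names below are illustrative, not from the study:

```python
def predicted_success(size_fte, evaluation_degree):
    """The study's regression: success = -0.15*size + 0.03*evaluation + 1.75."""
    return -0.15 * size_fte + 0.03 * evaluation_degree + 1.75

def evaluation_needed(size_fte, target_success=3.0):
    """Evaluation degree needed to reach a target success score at a given size."""
    return (target_success - 1.75 + 0.15 * size_fte) / 0.03

# One extra FTE must be offset by 0.15 / 0.03 = 5 evaluation-degree points,
# i.e. roughly two more criteria evaluated thoroughly (score 3).
print(predicted_success(2, 0))   # a tiny project with no evaluation at all
print(evaluation_needed(10))     # ~91.7; the article reports 93, presumably
                                 # computed with unrounded coefficients
```

Note that a direct solve with the rounded coefficients gives approximately 92 rather than the 93 reported later in the text; the difference is immaterial for the argument.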
From the regression equation we can also see that very small projects, below two Full Time Equivalents, have a decent chance of becoming a success without any evaluation.⁴ They have, for example, no integrations or only manual integration. Another consequence of the equation is that each time we increase the size of the acquisition project by one full time equivalent, two more criteria have to be evaluated thoroughly in order to maintain the same likelihood of success.

⁴ Very small projects (under half an FTE) were excluded from this analysis.

All of this is pretty general. In order to learn more about the anatomy of successful projects, I divided the cases according to whether they were in the top half or bottom half of the success score, that is, above or below 1.5. The top half projects were considered success projects and the bottom half failed projects.⁵ In order to learn more about these, I looked at how they differed in their evaluation pattern (see Figure 2).

⁵ Three very small projects (below 2 FTE) were omitted, as they do not seem comparable to the remaining, more complicated projects and seem to differ significantly from larger ones.

[Figure 2. Evaluation degree for successful and failed projects]

Success projects on average scored more than double on all groups of criteria. The success projects had an average evaluation degree between 1.5 and 2.2 for all groups, whereas the failed projects never exceeded an average evaluation degree of 1.3 in any group. That means that success projects either evaluated twice as many criteria or twice as thoroughly.

Another pattern that emerged was that success projects had a more diverse evaluation pattern. All success projects had evaluated criteria from all six groups; none of the failed projects had. The poorest scoring group for the failed projects was Operation and the highest was Strategy.
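The top-half/bottom-half split can be sketched as follows. This is illustrative Python with hypothetical scores; in the study the cut fell at a success score of 1.5:

```python
from statistics import median

def split_projects(success_scores):
    """Split projects into the top and bottom half of the success score."""
    cut = median(success_scores.values())
    top = {p for p, s in success_scores.items() if s >= cut}
    return top, set(success_scores) - top

# Hypothetical project scores for illustration only.
scores = {"A": 0.0, "B": 2.5, "C": 0.5, "D": 2.75}
top, bottom = split_projects(scores)
```

With these toy scores the median is 1.5, so "B" and "D" land in the success half and "A" and "C" in the failed half.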
Why COTS acquisition fails

In order to investigate further the relation between evaluation and the reasons for failure, I looked at each project categorized as failed and asked the following question: "To what extent would proper evaluation of each criterion have avoided failure?" This was to learn whether there is a pattern in the causes of failure. It could be, for example, that it was consistently the missing evaluation of operations-related criteria. The pattern can be seen in Figure 3.

[Figure 3. Causes of failure in COTS acquisitions]

There is a small tendency towards integration being a pain point, since "Familiar integration technology" and the lack of a "System context" diagram stick out as somewhat more prevalent causes of failure. Apart from this, it is relatively mixed what causes failure. Each of the six groups of evaluation criteria is represented. It seems that behind any criterion lies a possible trap that may lead to the failure of the project if it is not evaluated and mitigated before implementation.

Discussion

We have found evidence to support the central hypothesis: there is a positive correlation between the degree of COTS evaluation and the success of the acquisition. Furthermore, we found that successful projects evaluated significantly more than failed projects. They also considered more diverse sets of criteria. It could be argued that the success score is an artificial measure that may not have any validity in the real world, but it should be noted that the success score compares well with the perceived success at the company; the author works there and knows all the projects.
Looking at the failed projects, it was not possible to see any distinct pattern in which criteria should have been evaluated in order to change the result of the project.

Conclusion

In this article we have found that:

1. There is an overall positive relationship between the degree to which an acquisition of a COTS has been evaluated and the degree to which it can be called a success: successful projects in general have a higher degree of evaluation.

2. The size of the project has a negative effect on the chances of success. This is not really a new insight and has been found in other studies, and size is not usually something that can be adjusted, since it depends on the business requirements.

3. Since evaluation has a positive effect on success and size a negative one, we were able to weigh the two forces against each other: when a project grows by one FTE, it is necessary to evaluate two more criteria thoroughly in order to maintain the same chance of success. So you should either evaluate more thoroughly or make the project smaller in order to increase the chances of success.

4. Successful projects have a more diverse evaluation pattern than failed projects, which seem to focus on a few groups of evaluation criteria. The high evaluation degree of the success projects could have been attributable to a very thorough evaluation of a subset of the evaluation groups (for example operation and functionality), but instead the success projects spread their evaluation across all six groups. It is a very distinct pattern which is not trivial. What typically happens is that business users run the acquisition and only evaluate functionality and strategy, forgetting that the system also has to be implemented and integrated; or IT sees it as a technical job and concentrates only on implementation and integration, forgetting that the system also has to be used by someone. Whether it is IT development or the business, both tend to forget risk and operation. This is probably why successful implementation of a COTS is tied closely to a diverse evaluation that provokes reflection about all six groups of evaluation criteria.

5. There is no single cause, or even a vague pattern, in what specific lack of evaluation leads to failure.

6. Very small projects (below 2 FTE) seem to behave completely differently from larger projects. They can turn out successful with no or a minimum of evaluation. The reason is probably that they are typically silo implementations with very limited integration needs and that they can more easily be contained in the head of a single person.

For practitioners there are a handful of lessons that can be learned from this study. In order to maximize the chances of a successful COTS acquisition, the following guidelines can be offered:

Keep the acquisition project as small as possible. Nevertheless there are ways to minimize a COTS acquisition project: one is to make a minimal implementation as a first step, just to get the COTS system working, and then expand functionality afterwards. This entails cutting the requirements up into chunks.

Make sure to evaluate relevant criteria from each of the six groups: Functionality, Implementation, Risk, Strategy, Integration and Operation. They may not always be equally important, but you should always consider them.
Don't overdo the evaluation of small projects. When the implementation is estimated to be a small project, you should probably not waste too much time selecting the COTS, since the chances of success are good anyway. If you want to implement a spamguard service, you will probably not gain a lot by charting a three-month project with 200 different criteria. One caveat is that you should probably have access to people with good business and technical knowledge in order to pull this off.

The larger the project, the more thoroughly you should evaluate the criteria. You should take seriously the fact that large projects are more complex and therefore need to be evaluated from many more angles than the average project. The regression equation we found may even give you some indication of exactly how thorough you should be depending on the size of your project: if the project is estimated to take 10 full time equivalents to implement, you would need an evaluation score of 93 in order to have maximum success. The evaluation score consists of the number of criteria multiplied by a rating between 1 and 3, so you would have to evaluate 31 criteria very thoroughly (with a score of 3) in order to be likely to land a COTS implementation of that size.

That the degree of evaluation matters more than the particular acquisition process followed is the most surprising conclusion. It is difficult to state with certainty how far we can generalize these results; they may be both industry specific and even company specific. Only further research will be able to settle this. Another problem is whether we can generalize the results from the cases studied here to larger projects. The problem stems from the fact that the regression is a linear approximation, but the function is probably not linear when you expand the size significantly beyond 30 Full Time Equivalents. A further study must reveal whether or not the results in this study can be meaningfully extended beyond projects of a size larger than 20 people.

References

Carney, D. & Long, F.: "What Do You Mean by COTS? Finally, a Useful Answer", IEEE Software 17(2): 83–86, 2000.

Damsgaard, Jan & Jan Karlsbjerg: "Seven Principles for Selecting Software Packages", Communications of the ACM 53(8): 63–71, 2010.

Flyvbjerg, Bent & Bernt van Wee (eds): Decision-Making on Mega-Projects: Cost-Benefit Analysis, Planning and Innovation (Transport Economics, Management, and Policy), Edward Elgar, 2008.

Land, Rikard et al.: "COTS Selection Best Practices in Literature and in Industry": 100–111 in Mei, Hong (ed.), High Confidence Software Reuse in Large Systems, Springer, Berlin/Heidelberg, 2008.

Lauesen, Søren: "COTS Tenders and Integration Requirements": 166–175, 12th IEEE International Requirements Engineering Conference (RE'04), 2004.

Maiden, N. A. & C. Ncube: "Acquiring COTS Software Selection Requirements", IEEE Software 15(2): 46–56, 1998.

Maiden, N. A., C. Ncube & A. Moore: "Lessons Learned During Requirements Acquisition for COTS Systems", Communications of the ACM 40(12): 21–25, 1997.

Markus, M. L. & C. Tanis: "The Enterprise System Experience – From Adoption to Success", in R. W. Zmud & M. F. Price (eds), Framing the Domains of IT Management: Projecting the Future through the Past, Pinnaflex Educational Resources, Cincinnati (OH), 2000.

Schneidewind, N. F.: "Cost Framework for COTS Evaluation", Proc. 23rd Annual International Computer Software and Applications Conference: 100–101, 1999.

Shang, Shari & Peter B. Seddon: "Assessing and Managing the Benefits of Enterprise Systems: The Business Manager's Perspective", Information Systems Journal 12: 271–299, 2002.

Torchiano, M. & M. Morisio: "Overlooked Aspects of COTS-Based Development", IEEE Software 21(2): 88–93, 2004.

Yeoh, Hean Chin & James Miller: "COTS Acquisition Process: Incorporating Business Factors into COTS Vendor Evaluation Taxonomy", 10th IEEE International Symposium on Software Metrics (METRICS'04): 84–95, 2004.