Lessons Learned Applying the Mission Diagnostic

Audrey Dorofee
Lisa Marino
Christopher Alberts

March 2008

TECHNICAL NOTE
CMU/SEI-2008-TN-004

Dynamic Systems

Unlimited distribution subject to the copyright.
This work is sponsored by the U.S. Department of Defense. The Software Engineering Institute is a federally funded research and development center sponsored by the U.S. Department of Defense.

Copyright 2008 Carnegie Mellon University.

NO WARRANTY

THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN “AS-IS” BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT.

Use of any trademarks in this report is not intended in any way to infringe on the rights of the trademark holder.

Internal use. Permission to reproduce this document and to prepare derivative works from this document for internal use is granted, provided the copyright and “No Warranty” statements are included with all reproductions and derivative works.

External use. Requests for permission to reproduce this document or prepare derivative works of this document for external and commercial use should be addressed to the SEI Licensing Agent.

This work was created in the performance of Federal Government Contract Number FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. The Government of the United States has a royalty-free government-purpose license to use, duplicate, or disclose the work, in whole or in part and in any manner, and to have or permit others to do so, for government purposes pursuant to the copyright license under the clause at 252.227-7013.

For information about purchasing paper copies of SEI reports, please visit the publications portion of our Web site (http://www.sei.cmu.edu/publications/pubweb.html).
Table of Contents

Abstract
1 Introduction
  1.1 Overview of Mission Diagnostic
  1.2 Contents of this Technical Note
2 Overview of the Conducted Assessment
  2.1 Project Description
  2.2 Purpose of this Assessment
  2.3 Mission Diagnostic Process Used
  2.4 Participant Composition
3 Tailoring the Mission Diagnostic
  3.1 Product and Process Risk Analysis
  3.2 Driver Superset for a New Domain
  3.3 Teleconferencing for Interviews
  3.4 Outcome-Based Scenario Analysis
4 Results of the Conducted Assessment
  4.1 Three Outcome Scenarios
  4.2 Long-Term Issues
5 Lessons Learned
  5.1 New Set of Drivers
  5.2 Wording of Driver Questions
  5.3 New Technique for Interviewing
  5.4 Success of Outcome-Based Scenario Analysis
  5.5 Communications and Logistics
  5.6 Sponsors for this Assessment
6 Summary
  6.1 Mission Diagnostic and In-Depth Assessments
  6.2 Outcome-Based Scenario Analysis Drives New Protocol
  6.3 Basis for New Framework for Risk
SOFTWARE ENGINEERING INSTITUTE | i
ii | CMU/SEI-2008-TN-004
Acknowledgments

The authors wish to thank Linda Levine, Melissa Kasan Ludwick, Jeannine Siviy, and MaryCatherine Ward of the Software Engineering Institute for their reviews of and comments on this document.
Abstract

The Mission Success in Complex Environments (MSCE) research team, along with Acquisition Support Program (ASP) staff, applied the Mission Diagnostic (MD) to a project for the development and broad deployment of a software application. The purpose of this MD engagement was to conduct a rapid evaluation of the Application project’s potential for success during its alpha phase deployment, knowing additional deployments were to quickly follow. Software development and deployment was a new domain for applying the MD, and this particular project included additional constraints. Therefore, we modified the basic MD in several ways. We developed a new set of MD drivers for use with software development and deployment projects, used teleconferencing to collect data from project personnel, and included outcome-based scenarios to assess a variety of mission outcomes. From this experience, we have also derived the basis for a new success-driven framework with an integrated risk perspective. This technical note describes the adaptation of the MD necessary for this customer and the lessons we learned from its use.
1 Introduction

The purpose of this technical note is to describe the lessons that were learned from applying the Mission Diagnostic (MD) to a customer’s software development and deployment project. The Mission Success in Complex Environments (MSCE) research team, along with Acquisition Support Program (ASP) staff, applied the MD to a project for the development and broad deployment of a software Application just prior to its initial, or alpha, deployment. Software development and deployment was a new domain for applying the MD, and this particular project included additional constraints. Therefore, we modified the basic MD in several ways. This technical note describes the adaptation of the MD necessary for this customer and the lessons we learned from its use. These lessons are from the first application of the MD with this customer. Lessons from future use of the MD with this customer may be documented at a later date. The information provided in this technical note has been sanitized to protect the customer’s identity and any sensitive information.
1.1 OVERVIEW OF MISSION DIAGNOSTIC
The MD is a risk-based assessment for evaluating a mission’s current condition and determining whether it is on track for success. This time-efficient assessment can be applied to projects, programs, and processes across the life cycle and throughout the supply chain. An MD assessment is straightforward to conduct, and the basic version can be self-applied by people who are responsible for overseeing projects, programs, and processes. By providing a broad overview of the state of possible success for a mission, the MD can be viewed as a first-pass screening of a project, program, or process to diagnose any circumstances that might affect its potential for success. More detailed, follow-on evaluations might be required when a mission’s potential for success is judged to be unacceptable.

The MD uses a driver-based approach for gauging a mission’s potential for success. In the MD context, a driver is a quantitative or qualitative parameter that provides an indirect measure of a broad concept. Drivers are used to measure the current conditions affecting a given mission. In the MD, a five-point scale is used to establish a value for each driver. A simple algorithm based on these drivers is used to estimate the potential for success and the degree of progress toward the desired outcome.

An important aspect of an MD assessment is time efficiency. To ensure that the assessment can be completed in a reasonable amount of time, you need to keep the number of drivers small. Experience has shown that good results are achieved by using between 10 and 20 drivers in an assessment.
The Mission Diagnostic overview material presented here is from the Mission Diagnostic Protocol, Version 1.0: A Risk-Based Approach for Assessing the Potential for Mission Success [Alberts 2008] technical report. An example of a driver (drivers are worded as questions) is “Are people at the alpha deployment site and related sites prepared to operate, maintain, and support the application?”
Each driver is a key condition that can guide a mission toward success or failure, based on its current state or value. A success driver is a condition or circumstance that guides a mission toward a successful outcome, while a failure driver is a condition or circumstance that guides a mission toward an unsuccessful outcome. Drivers are worded as yes/no questions, where a yes answer denotes that the driver is a success driver in this context and a no answer denotes a failure driver. Each mission has a mixture of success and failure drivers currently influencing the eventual outcome.

The philosophy underlying the MD is that the relative impact of success and failure drivers can be used to gauge a mission’s potential for success. A predominance of success drivers in relation to failure drivers indicates an acceptable potential for success (and vice versa). The analysis of drivers in an MD assessment can be conceptually broken into the following two parts:
1. Evaluate each driver to determine the extent to which each is present.
2. Analyze the entire set of drivers to determine the mission’s potential for success.
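The two-part analysis above can be sketched in code. The scale labels, scoring rule, and thresholds below are illustrative assumptions for this sketch, not the actual algorithm defined in the Mission Diagnostic Protocol:

```python
# Hypothetical sketch of the two-part driver analysis. The five-point
# scale labels, the scoring, and the thresholds are invented for
# illustration.

SCALE = {"no": 0, "likely no": 1, "equally likely": 2,
         "likely yes": 3, "yes": 4}

def evaluate_driver(answer):
    """Part 1: assign each driver a value on the five-point scale."""
    return SCALE[answer]

def potential_for_success(answers):
    """Part 2: analyze the whole set to gauge potential for success.

    A predominance of success drivers (high values) over failure
    drivers (low values) yields an acceptable potential for success.
    """
    values = [evaluate_driver(a) for a in answers]
    score = sum(values) / (4 * len(values))  # normalize to 0..1
    if score >= 0.7:
        return "acceptable"
    if score >= 0.4:
        return "marginal"
    return "unacceptable"

# A mission with 10 drivers (within the recommended 10-20 range).
answers = ["yes", "likely yes", "equally likely", "no", "yes",
           "likely yes", "yes", "likely no", "yes", "likely yes"]
print(potential_for_success(answers))  # "acceptable" for this mix
```

Here a predominance of yes-leaning answers pushes the normalized score above the illustrative 0.7 cutoff; the real protocol compares driver values against its own defined evaluation scale.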
1.2 CONTENTS OF THIS TECHNICAL NOTE
The contents of this technical note are as follows:
• Introduction—an explanation of the purpose of this document and a brief overview of the Mission Diagnostic
• Overview of this MD Assessment—customer project description and the purpose and constraints of the initial MD application
• Tailoring the Mission Diagnostic—a summary of the types of tailoring that were done to the basic MD to suit its application for this customer and this project
• Results of the MD Assessment—a brief summary of the results of assessing this customer project
• Lessons Learned—the lessons we learned from applying the MD
• Summary—the impacts on our future work
For example, given a driver “Is the project budget adequate for all activities and required resources?”, a No answer can be viewed as a failure driver and a Yes answer as a success driver.
2 Overview of the Conducted Assessment
2.1 PROJECT DESCRIPTION

This customer’s project was a development and deployment mission to create a replacement application. This replacement application (hereafter referred to as the Application) would be one of a larger suite of new enterprise-wide applications servicing geographically distributed sites. This particular Application focused on enhancing system performance so that the customer service representatives could do their jobs more efficiently and in a timely manner. The Application would have a phased rollout (alpha, beta, and enterprise-wide) with increasing functionality and sites in each phase. The underlying infrastructure and support were relatively new and built to connect the enterprise sites. This project would develop one of the first new applications to integrate with the existing, legacy applications and the new architecture. To accomplish this, personnel had to:
• Reengineer associated business processes
• Learn newer technologies (Java, Web services, etc.)
• Interface with several applications being upgraded or replaced
• Deliver the same functionality and improve performance
• Complete successful integration testing
• Retrain end users and support personnel
• Deploy at staged intervals with increasing functionality and coverage
• Achieve schedule compression
• Complete architecture changes to the underlying infrastructure in parallel with development and deployment of the Application
The project had attracted a lot of internal attention as well as the attention of senior internal and external stakeholders, and the pressure to succeed was considerable. Several externally driven changes (e.g., increases in data security requirements and changes to integration and test resources) had occurred, which had already compressed the project’s schedule when the Software Engineering Institute (SEI) was brought in to assess the situation.
2.2 PURPOSE OF THIS ASSESSMENT
The goals of the initial use of the MD with this customer were to make a rapid evaluation of the potential for success for the alpha deployment of the Application and, as a side benefit, identify any major issues with the rest of the development and deployment process and plan. The constraints were as follows: • Keep costs at a minimum
• Disrupt project personnel as little as possible
• Produce results before the first deployment (alpha) was installed and configured
2.3 MISSION DIAGNOSTIC PROCESS USED
Our assessment process consisted of the following high-level activities, which were consistent with the general approach for conducting an MD assessment: • Met with customer to present MD overview
• Conducted initial fact-finding interviews
• Tailored the MD process and materials
• Interviewed project participants (ten interviews at one hour per interview)
• Conducted post-interview analysis
• Developed assessment report and presented results
The SEI analysis team had the following skills: • Risk assessment
• Software development and deployment
• Project and change management
• Business process and systems analysis
• Training and transition
2.4 PARTICIPANT COMPOSITION

The customer personnel we interviewed represented the following technical and project areas:
• Business process analysis
• Project management
• Program management
• Systems management
• Testing
• Training
We were unable to interview any software development personnel.
3 Tailoring the Mission Diagnostic
Previously, we used the MD in two areas: cyber-security incident response and portfolio management. For this assessment, we were analyzing a software development project near its alpha deployment. We believed the general set of 10 driver questions would need to be adjusted to consider specific development and deployment issues as well as more general project issues. A new superset of drivers relevant to software development and deployment would be needed. Two additional factors also played into this MD assessment of the project. First, the staged beta and enterprise-wide deployment for the Application was already scheduled. In addition, there was ongoing, parallel development of applications and components that would be required for this Application to fully function. Thus, for this MD assessment we also needed to consider any issues that might affect development and deployment beyond alpha.
3.1 PRODUCT AND PROCESS RISK ANALYSIS
It was clear from early discussions with the customer that this project had critical aspects in both the product itself (the Application) and in the processes being used for development and deployment. Thus, we needed to focus on both product and process aspects during this assessment. In addition, most personnel were involved in and knowledgeable about only one of these two aspects. Consequently, we developed two sets of drivers, together forming a superset, focusing on the
• technical aspects of developing and deploying the product
• processes being used to manage the project and its development and deployment
This distinction would allow us to give the customer results that would help them understand whether their issues were associated with the Application itself, the processes they employed, or both.
3.2 DRIVER SUPERSET FOR A NEW DOMAIN
To develop this superset of drivers, we began with a series of interviews with senior managers and team leads to understand the critical aspects of this project, taking care to note what they believed could lead to success or failure. We drafted a set of proposed drivers and solicited internal SEI subject matter experts to ensure all critical aspects for software development and deployment were represented. Once we were satisfied with content completeness, we began mapping this proposed set against our general 10 drivers. While there was some overlap, we clearly identified several new areas that would need to be represented. Drivers specific to product and process were needed as well as ones specifically addressing development, deployment, and management issues. The end result was the creation of a driver superset for this new domain that consisted of 10 project drivers and 8 technical drivers.
These drivers are referenced in the Mission Diagnostic Protocol, Version 1.0: A Risk-Based Approach for Assessing the Potential for Mission Success [Alberts 2008] technical report.
The 10 project drivers addressed aspects of • Organizational change and pressure
• Sponsorship
• Project goals, funding, and schedule
• Plans
• Requirements
• Team skills and abilities
• Team coordination
• Risk management
The eight technical drivers addressed aspects of • Integration
• Operational functionality
• User preparedness
• Operator preparedness
• Security
• Infrastructure
• Roll-back and contingency planning
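As a rough illustration, the tailored superset can be represented as two driver groups so that findings map back to the product-versus-process distinction of Section 3.1. The driver names below are abbreviated from the lists above, and the summarize() helper is a hypothetical sketch, not part of the MD materials:

```python
# Illustrative representation of the tailored driver superset, grouped
# so that findings can be reported as process (project) issues versus
# product (technical) issues. Names are abbreviated from the lists above.

DRIVER_SUPERSET = {
    "project": [
        "organizational change and pressure", "sponsorship",
        "goals, funding, and schedule", "plans", "requirements",
        "team skills and abilities", "team coordination", "risk management",
    ],
    "technical": [
        "integration", "operational functionality", "user preparedness",
        "operator preparedness", "security", "infrastructure",
        "roll-back and contingency planning",
    ],
}

def summarize(findings):
    """Group failure drivers (answer False, i.e., a No) by category."""
    issues = {category: [] for category in DRIVER_SUPERSET}
    for category, drivers in DRIVER_SUPERSET.items():
        for driver in drivers:
            if findings.get(driver) is False:
                issues[category].append(driver)
    return issues

# Example: a No on "plans" is a process issue; a No on "security"
# is a product issue.
print(summarize({"plans": False, "security": False, "sponsorship": True}))
```

This grouping is what lets a report tell the customer whether their issues lie with the Application itself, the processes they employ, or both.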
3.3 TELECONFERENCING FOR INTERVIEWS
In previous MD assessments, all interviews were conducted face-to-face. The interviewees for this MD assessment were scattered across several sites that were not geographically near one another. Because of travel costs and the need to minimize impact on project personnel, we decided to conduct all interviews using teleconferencing instead of our usual face-to-face interviews. We interviewed managers and team leads first and, if needed, reserved the right to conduct a second round of interviews with additional project personnel. As it turned out, we did not need to conduct the additional interviews.
3.4 OUTCOME-BASED SCENARIO ANALYSIS
In previous applications of MD, we used a simple algorithm to estimate the potential for success. With the complexity of this project, the customer wanted a third-party, expert opinion rather than an estimate. Therefore, we chose outcome-based scenario analysis since it is a more robust analysis approach that looks at the potential for success of different project outcomes.
4 Results of the Conducted Assessment
We conducted the tailored MD, focusing on the alpha deployment of the Application. A sanitized summary of some of the important aspects of the results is provided here.
4.1 THREE OUTCOME SCENARIOS
Having gathered quite a bit of information from the interviews, and a limited amount of data from documentation provided by the customer, we were able to do a root cause analysis. While the answers we received for each question varied widely, it was not difficult to identify the major themes that represented critical issues:
• Organizational practice
• Project management
• Support activities
• Local or task-related activities
• Integration
Based on these major themes, we created three outcome scenarios to analyze and to determine their potential for success. These outcome scenarios represented minimal, moderate, and good pictures of success for the alpha deployment of the Application.
1. Minimal success—the project would identify key problem areas with the Application, the infrastructure, and deployment sites
2. Moderate success—the Application would be able to successfully complete some core customer functions
3. Good success—the Application would be able to successfully complete core customer functions and would operate successfully with the infrastructure and all other relevant components and applications
The three outcomes were analyzed with the driver data to determine their potential for success. We determined that only the Minimal success outcome was likely to occur; the outcomes with Moderate and Good levels of success both looked unlikely. In other words, only the outcome that set a relatively low bar for success was likely to be achieved. In fact, the alpha deployment of the Application was likely to be more of a proof of concept than a true alpha test.
Nearly all of the drivers had answers from interviewees that covered the entire range of possibilities from Yes to No. There was little consistency of viewpoints. In fact, some of the positive answers were based on firm beliefs by the interviewees that issues were being addressed, even if they did not know by whom.
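A minimal sketch of this kind of scenario judgment, assuming each outcome depends on a subset of driver values. The scenario-to-driver mapping, the driver values, and the threshold below are invented for illustration; the actual analysis relied on expert judgment rather than a fixed rule:

```python
# Hypothetical sketch: each outcome scenario depends on a subset of
# drivers, and an outcome looks likely only if every driver in that
# subset scores well on a 0-4 scale. All values here are invented.

def scenario_likely(driver_values, required, threshold=3):
    """True if every required driver meets the threshold."""
    return all(driver_values.get(d, 0) >= threshold for d in required)

# Increasingly demanding outcome scenarios (illustrative subsets).
SCENARIOS = {
    "minimal": ["infrastructure"],
    "moderate": ["infrastructure", "operational functionality"],
    "good": ["infrastructure", "operational functionality", "integration"],
}

# Invented driver values for the alpha deployment.
values = {"infrastructure": 3, "operational functionality": 1, "integration": 0}

for name, required in SCENARIOS.items():
    print(name, "likely" if scenario_likely(values, required) else "unlikely")
# Only the minimal-success outcome comes out likely with these values,
# mirroring the shape of the result reported above.
```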
4.2 LONG-TERM ISSUES

Long-term deployment issues were also identified during the interviews. Some issues arose from specific questions, and some were identified by the analysis team as we looked at the data we collected. These issues can be characterized as having to do with planning, integration, site preparation, and user training. For example, the following issues were identified:
• Management, organizational, and political conditions will likely continue to have an adverse effect on the development.
• Schedule compression is likely to continue.
• Coordination problems exist within the project team as well as enterprise-wide.
• Major problems exist related to coordination and management of external dependencies.
• No master schedule exists for the enterprise-wide deployment.
• Local software customizations may have occurred, and local infrastructures are unknown.
• Increases in scale, complexity, and integration are likely.
• Rollout of additional applications requiring integration is likely to occur.
• Security requirements will increase in complexity.
• No testing environment exists for the new architecture.
5 Lessons Learned
Based on this MD assessment, there were several lessons we learned that we will apply to future MD engagements and future work in MOSAIC.
5.1 NEW SET OF DRIVERS
Our three sets of drivers from previous customer engagements (general, incident management, and portfolio management) have been useful for most circumstances. With the completion of this project, we now have another set of drivers to use for software development and deployment projects. While we need to pilot these drivers on another project to verify their effectiveness, we are confident that they will require minimal alteration for future use. This revised driver superset was successfully used during the interviews without change except for one small adjustment (see Wording of Driver Questions). By asking the right questions, tailored to this particular project, the interviews were efficient and effective, and few follow-on questions were needed to clarify points.
5.2 WORDING OF DRIVER QUESTIONS
Two of the questions were misinterpreted by the interviewees, leading to early confusion about the answers. One driver in particular was troublesome because interviewees attempted to parse an implied double negative: “Do management, organizational, and political issues have little or no impact on alpha phase activities?” Interviewees heard the words “political issues” and “impact” and promptly answered “Yes,” thinking this meant political issues had a large impact on the project. We eventually changed the wording of the driver to avoid confusion and, during analysis, reversed the scale on the answers already collected to be consistent with the rest. Future MD assessments will continue to keep this potential for misinterpretation in mind.
5.3 NEW TECHNIQUE FOR INTERVIEWING
This was the first time we relied solely on teleconferencing for conducting interviews, so we had to create a new technique. As part of our research, we develop, test, and refine methods and techniques for success-driven management, and we refined this new technique somewhat during successive teleconferences. Since we were not able to assess body language, we had to pay close attention to verbal cues. It was much easier than we anticipated to gauge each interviewee’s level of comfort with the interview, their commitment to the project, and any concerns or biases they had. Nearly all of the interviewees were completely forthcoming, and the telephone presented no real barrier to communication or data gathering. Interviewees appeared to be candid and did not express any reservation
This technique is as yet unnamed.
or hesitation about participating. In addition, our more traditional face-to-face interview format of one facilitator and one or more note-takers continued to hold up well during the teleconferences. We appreciated the facilitator’s ability to keep the interview moving along since rambling could have been an issue given the short time frames for the teleconferences and the difficulties of scheduling additional time.
5.4 SUCCESS OF OUTCOME-BASED SCENARIO ANALYSIS
While we used the outcome-based scenario analysis to provide a clearer picture of possible success to the customer, we also wanted to test the validity of our results. Therefore, we used the standard MD algorithmic analysis to verify the reasonableness of our outcome-based scenario analysis. The standard algorithmic analysis, which does not provide context when interpreting results, determined that the overall potential for project success was between low and moderate. If we had given the customer only the results of the standard algorithmic analysis, it would have painted a bleak picture and simply told them the project was in trouble. The scenario-based results we provided were just as bleak. However, by identifying and analyzing the three different outcome scenarios and their different sets of tradeoffs, we were able to elaborate and provide the customer with a clearer view of which outcomes were likely for this project and assure them that some success was possible. Our results indicated that one of the outcomes was likely to occur and the other two were not. Averaging the three scenario scores produced an overall score that was equivalent to the standard algorithmic score (between low and moderate), providing an informal validation of our use of the outcome scenarios.
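The cross-check can be sketched as follows. The per-driver points, scale boundaries, and scenario scores below are invented; only the mechanism follows the description here and in the footnote (sum the driver points, compare the total to an evaluation scale, then run the averaged scenario scores through the same kind of scale):

```python
# Hypothetical sketch of the standard MD algorithmic analysis and the
# informal validation against averaged outcome-scenario scores. All
# numbers (points, boundaries, scenario scores) are invented.

def algorithmic_rating(points, max_per_driver=10):
    """Sum per-driver points and map the total onto an evaluation scale."""
    ratio = sum(points) / (max_per_driver * len(points))
    if ratio >= 0.8:
        return "high"
    if ratio >= 0.55:
        return "moderate"
    if ratio >= 0.3:
        return "low"
    return "minimal"

# Standard analysis: one point value per driver (10 drivers assumed).
driver_points = [3, 4, 5, 2, 6, 4, 3, 5, 4, 4]
print(algorithmic_rating(driver_points))  # "low" for this invented data

# Informal validation: average the three scenario scores (0-100 scale)
# and map the average through the same evaluation scale.
scenario_scores = [60, 35, 25]  # minimal, moderate, good outcomes
average = sum(scenario_scores) / len(scenario_scores)  # 40.0
print(algorithmic_rating([average], max_per_driver=100))  # same band: "low"
```

When both computations land in the same band, the scenario analysis is informally corroborated by the simpler algorithm, which is the shape of the validation described above.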
5.5 COMMUNICATIONS AND LOGISTICS
As with any assessment, all communications and logistics were to be handled by an SEI point of contact (POC) and a customer POC; however, vacation schedules led to multiple SEI points of contact. To complicate matters, the customer POC let the interviewees schedule their own interviews, creating many customer points of contact and stretching out the timeframe for interviews. Some of these occurrences could have been alleviated if we had been on the customer site; often, the forcing function of a site visit makes communications and logistics run more smoothly. To alleviate such occurrences in future analyses, we see the need to create liaison responsibilities to ensure
• One POC each for the SEI and the customer
• All logistics and communications are properly tracked, managed, and handled in a timely fashion
• Any follow-up activities are completed
The simplest algorithm applies a set number of points to each driver, based on the answer. These are added together to reach a total score that is compared to a defined evaluation scale to determine the potential for success for the project. For example, if there were 10 drivers, with a maximum of 10 points each, one could assume a total score of 90 would indicate a very high potential for project success. The terms low and moderate are part of the scale used in the Mission Diagnostic Protocol, Version 1.0: A Risk-Based Approach for Assessing the Potential for Mission Success [Alberts 2008].
• All analysis team personnel and customer personnel are identified and introduced
• Written ground rules and agreements exist for the customer and SEI POCs
5.6 SPONSORS FOR THIS ASSESSMENT
This MD engagement had two distinct sponsors, in two different parts of the customer organization. This eventually led to unclear ownership of the results and difficulties in delivering the final results, particularly as their specific goals and objectives became increasingly disjointed as time passed. Inconsistent communications increased the amount of confusion about who the primary stakeholder actually was. In the future, we will clarify the goals and objectives of all sponsors and stakeholders and continue to verify those goals and objectives at key points in the schedule, particularly prior to delivering any results.
6 Summary

In summary, there were several outcomes from this MD assessment that will become part of our future work and direction.
6.1 MISSION DIAGNOSTIC AND IN-DEPTH ASSESSMENTS
A full-blown, in-depth assessment (e.g., a detailed technical assessment, complete risk evaluation, or architectural assessment) could not have been performed with the limited resources available to assess the Application alpha deployment. While the MD was a high-level rather than an in-depth assessment, it nonetheless found issues similar to those discovered in previous in-depth project assessments. As such, the MD could possibly be used as a cost-effective, quick assessment leading to more in-depth assessments or in combination with such assessments. For this particular assessment, the MD took much longer than usual due to the teleconferencing restriction and the availability of interviewees. Done within its normal, compressed time frame (2-3 days with proper logistical planning), an MD might be advantageous in conjunction with in-depth assessments focused on specific problem areas identified by the MD. The MD drivers might also provide an alternative structure for data analysis for in-depth assessments, particularly those with less-formal processes.
6.2 OUTCOME-BASED SCENARIO ANALYSIS DRIVES NEW PROTOCOL
The use of the outcome-based scenario analysis was borrowed from another assessment method, the Mission Assurance Analysis Protocol (MAAP) [Alberts 2005]. It added a degree of complexity to the basic MD. As such, we feel we have the basis for a new protocol that is more than the basic MD, but not as complex as the more resource-intensive MAAP. We will document this new protocol at a future date after additional testing and refinement.
6.3 BASIS FOR NEW FRAMEWORK FOR RISK
Working with this customer required us to create the revised drivers and consider the different issues facing the customer, including the very distinctive layers of responsibility and communication. During the analysis, we began refining our understanding of risk and success management and the different layers of information, responsibility, communication, and risk mitigation that occur across an organization and across multiple organizations. This new understanding has led to research on a new framework for success management complete with an integrated perspective of risk.
References

URLs are valid as of the publication date of this document.

[Alberts 2005]
Alberts, Christopher & Dorofee, Audrey. Mission Assurance Analysis Protocol (MAAP): Assessing Risk in Complex Environments (CMU/SEI-2005-TN-032, ADA441906). Software Engineering Institute, Carnegie Mellon University, 2005. http://www.sei.cmu.edu/publications/documents/05.reports/05tn032.html

[Alberts 2008]
Alberts, Christopher & Dorofee, Audrey. Mission Diagnostic Protocol, Version 1.0: A Risk-Based Approach for Assessing the Potential for Mission Success (CMU/SEI-2008-TR-005). Software Engineering Institute, Carnegie Mellon University, 2008. http://www.sei.cmu.edu/publications/documents/08.reports/08tr005.html