Case Study: Agile SE Process For Centralized SoS Sustainment at Northrop Grumman
Abstract. In 2015 Northrop Grumman’s GCSS-J Systems Engineering group in McLean, Virginia,
supporting the DISA GCSS-J PMO, had been sustaining and evolving a critical information
service portal for 12 independent user groups accessing 22 independent systems. This web-based
portal is a centralized Systems-of-Systems (SoS) hub, dealing with unpredictable
independent-system changes, mitigating immediate-priority security needs, and replacing
uncontrollable obsolescence of COTS software elements – all the while deploying new capability
in six-month increments requested by expectant users. The systems engineering process combines
elements of Scrum and contract-waterfall requirements simultaneously on three release instances,
one in development, one in accreditation test, and one in deployed use – with a wave-like transition
among the three instances every six months. The process had six years of effective employment
and evolution, winning praise from GAO and users alike. Most notable is the real-time control
model for re-prioritizing work-in-process, the intimate involvement of customer and users in the
agile systems engineering process, and the never-ending evolution with all life-cycle stages in
simultaneous activity.
Introduction
An INCOSE project-in-process is seeking a generic Agile Systems Engineering Life Cycle Model
(ASELCM). The project reviews effective agile systems engineering (SE) processes in three-day,
on-site, structured-analysis workshops; and develops case studies to support the eventual life cycle
model. This article is one of those case studies. Companion case studies range across software,
hardware, and firmware agile SE processes, and include (Dove, Schindel, Scrapper 2016, Dove,
Schindel, Hartney 2017, Dove, Schindel 2017).
This case study is based on the three-day workshop held August 26-28, 2015, that analyzed a
six-year-mature agile systems engineering (SE) process employed by Northrop Grumman’s
Global Combat Support System – Joint (GCSS-J) group in Herndon, Virginia. The GCSS-J
process, supporting the DISA GCSS-J PMO, sustained and evolved functional capability for a
military-critical centralized systems-of-systems web-hub that enables 12 independent military
user communities access to 22 independent data sources, depicted in Figure 1.
Notable process concepts that will be discussed include:
• Intimate stakeholder involvement in the SE process.
• Asynchronous and simultaneous life cycle stage activity, in never-ending system growth and evolution.
• Hybrid Scrum/Waterfall/Wave process model integration, in contract conformance.
• CMMI level 5 procedure discipline, providing seamless new-release operational stability.
• Awareness and mitigation of external environment evolution.
• Real-time optimal process-control model, for re-prioritizing development-increment activity and acting on feedback.
Figure 1. Centralized System-of-Systems hub as web-portal for accessing 22 independent databases.
The process to be described went live in 2009 to replace a waterfall process initiated in 1996 that
took too long to evolve new capabilities. The process, depicted in Figure 2, decouples
back-to-back Scrum-based six-month development increments from subsequent six-month phases
of accreditation (government security test) and deployed operation. Development starts with a
five-day planning session, and utilizes one or two 10-day Z sprints following the four 20-day
development sprints. Z sprints are dedicated to defect correction and baseline stabilization. Four
development sprints is typical, but three- and five-sprint releases occurred on occasion. GCSS-J
employed about 60 people on the program.
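As a rough illustration of the cadence just described (a sketch reconstructed from the figures quoted in this section, not a program artifact), the working-day arithmetic of one development increment can be expressed as:

```python
# Working-day budget for one development increment, using the cadence
# described above: a 5-day planning session, four 20-day development
# sprints, and one or two 10-day Z sprints. Hypothetical helper only.

def increment_working_days(dev_sprints: int = 4, z_sprints: int = 1) -> int:
    PLANNING_DAYS = 5
    DEV_SPRINT_DAYS = 20
    Z_SPRINT_DAYS = 10
    return PLANNING_DAYS + dev_sprints * DEV_SPRINT_DAYS + z_sprints * Z_SPRINT_DAYS

# Typical release: 5 + 4*20 + 1*10 = 95 working days, which fits
# within a six-month increment with margin for the Z-sprint variant.
typical = increment_working_days()          # 95
long_release = increment_working_days(5, 2)  # 125
```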
Process effectiveness had been demonstrated consistently over six years in seamless cutover of
new capability every six months. The effectiveness of this process is marked by a US Government
Accountability Office (GAO) congressional report, citing it as one of “seven investments [that]
were successfully acquired in that they best achieved their respective cost, schedule, scope, and
performance goals” (GAO 2011).
Overview
Systems engineering life cycle stages and processes formed the process analysis framework. It is
neither expected nor necessary that workshop-analyzed systems engineering processes utilize the
15288 (ISO/IEC/IEEE 2015) process framework, but the 15288 standard provides an analysis
framework that encompasses generic systems engineering activities, regardless of what they may
be called.
Figure 2. Yellow path depicts development-decoupled single-release waterfall progression.
Green path depicts three active releases using common automated daily build and test for new
development, operational patching, and security updates. Development sprints start with a
½-day meeting that combines a Preliminary Design Review (PDR) for the new sprint and a
Critical Design Review (CDR) for the prior sprint.
Asynchronous and simultaneous life-cycle stage and process activity is a hallmark of effective
agile systems engineering processes, as
modeled in Figure 3. In Figure 2 the
Development cycle functions implicitly as
depicted in Figure 3, while the four
simultaneous release-status levels function
explicitly as simultaneous stages of
Development, Production, Utilization, and
Retirement.
Systems and Software Engineering — Life
Cycle Management — Part 1: Guide for Life
Cycle Management (ISO/IEC 2010)
recognizes six “commonly encountered”
system life cycle stages. Figure 3 adds a
seventh life-cycle stage, Research, as a
critically necessary element of effective
agile systems engineering life cycle models,
as will be seen later.
Figure 3. Purposes for each Life Cycle Model stage, adapted from (ISO/IEC 2010, p 14), with
added Research stage.
Counter to the implication that a progression through stages is sequentially expected,
(ISO/IEC 2010, p 32) clearly accommodates asynchronous and simultaneous activity in
any and all stages with this clarification: “…one can jump from a stage to one that does not
immediately follow it, or revert to a prior stage or stages that do not immediately precede it. … one
applies, at any stage, the appropriate life cycle processes, in whatever sequence is appropriate to
the project, and repeatedly or recursively if appropriate.”
Agile systems engineering processes are justified and effective when it is expected that the
engineering environment and activities will be subject to changes that affect the ongoing
development effort throughout the project.
A framework for characterizing the general/high-level dynamic nature of the engineering
environment provides guidance for what must be accommodated in five categories:
capriciousness, uncertainty, risk, variation, and evolution (CURVE). This framework was
introduced in (Dove and LaBarge 2014) as UURVE, and is changed here to CURVE by replacing
Unpredictability with Capriciousness, which provides a more descriptive acronym.
GCSS-J characterized their CURVE environment as follows:
Capriciousness/Unpredictability (unknowable situations):
• External data sources change their services at will.
• COTS (Commercial Off-The-Shelf) software upgrades deprecate existing interfaces.
Uncertainty (randomness with unknowable probabilities):
• Software and/or hardware may go end-of-life at any point.
Risk (randomness with knowable probabilities):
• May not be able to meet 15-day schedule for delivery of security fixes.
Variation (knowable variables and variance ranges):
• Number of security vulnerabilities to address varies greatly week-to-week.
• Development man-hours available for capability evolution vary in competition with
higher-priority patches and security updates.
Evolution (gradual successive developments):
• As technology changes, the program must port existing capability to new technology.
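The CURVE characterization above lends itself to a simple tagging structure: each environmental factor is labeled with one of the five categories. The encoding below is illustrative only (factor wording is paraphrased from the list above; it is not a GCSS-J process asset):

```python
# Minimal sketch of the CURVE framework as a data structure.
from enum import Enum

class Curve(Enum):
    CAPRICIOUSNESS = "unknowable situations"
    UNCERTAINTY = "randomness with unknowable probabilities"
    RISK = "randomness with knowable probabilities"
    VARIATION = "knowable variables and variance ranges"
    EVOLUTION = "gradual successive developments"

# GCSS-J environment factors, paraphrased from the characterization above.
environment = [
    ("External data sources change services at will", Curve.CAPRICIOUSNESS),
    ("COTS upgrades deprecate existing interfaces", Curve.CAPRICIOUSNESS),
    ("Software/hardware may go end-of-life at any point", Curve.UNCERTAINTY),
    ("May miss 15-day security-fix delivery schedule", Curve.RISK),
    ("Weekly vulnerability counts vary greatly", Curve.VARIATION),
    ("Existing capability must be ported to new technology", Curve.EVOLUTION),
]

def by_category(cat: Curve) -> list:
    """List the environment factors tagged with a given CURVE category."""
    return [factor for factor, c in environment if c is cat]
```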
The AAP instance of the GCSS-J process in Figure 4 serves to frame the discussion of the key
process elements and their relationships. The process architecture is structured to configure a
variety of process activities with personnel and other resources as and when needs arise. Activities
draw upon pools of available resources, assembled into a variety of configurations upon need,
interfacing with each other according to resource-interconnect rules and standards. The four
activities depicted in Figure 4 relate to Figure 3 as instances of Production, Development,
Research, and Concept stages.
The principal intent of this section is to discuss the passive enabling infrastructure and the active
facilitating infrastructure, but it necessarily starts with a description of the resources.
The AAP calls out the resources that are employed in assembling process-activity configurations:
• Scrum Team Resources – Typically three Scrum teams were active on three simultaneous
sprints within the same development release, with teams configured and reconfigured from
appropriate resources as needed.
• Systems Engineers – one per Scrum team, served as the liaison to the PMO personnel
for requirements, worked directly with the Scrum team to oversee the implementation
of the requirements, and supported the test team to ensure the requirements are tested
accurately and completely.
• Architects – one per Scrum team, defined the software architecture for their portion of
the system, met weekly with the other architects and the Chief Engineer to enhance the
overall system architecture, peer reviewed the code developed by the team, and
supported the scrum team developers as a mentor.
• Scrum Masters – one per Scrum team, led the scrum team on a daily basis, managed
the personnel on the scrum team, worked with the other scrum masters to coordinate
and share resources, and represented the Scrum team at the daily Scrum-of-Scrums
meetings.
• Developers – five code developers per Scrum team.
• Testers – one per Scrum team that did daily testing of the prior night’s code build.
• Contractors – The GCSS-J program utilized personnel from both the prime contractor
and five sub-contractors. GCSS-J operated in a badgeless environment with no
separation of duties based on prime and sub status. This approach to providing Scrum
team resources improved the program’s ability to attract and keep top talent no matter
what the badge said.
• New Hires – Regardless of the program, some Scrum team resource attrition will occur.
An on-boarding process was in-place to provide guidance on the GCSS-J processes,
with automation to expedite the construction of a new employee’s development
environment, and mentoring to quickly bring a new employee up to speed.
• Security Team – Information assurance specialists.
• PMO Personnel (Program Management Office product owners) – Established and
re-established release priorities.
• Technical Engineering Management – Architectural oversight and program technical
direction.
• War Fighters (Users) – Did first-look sprint testing, contributed to release-requirements
development, and participated in next-release theme development.
• Story Backlog – user stories developed for each release and parceled out to development
sprints according to evolving priorities.
• Technical Debt – User stories not completed for a release could be carried over to
subsequent releases as technical debt. Stories were built for all tasks of any kind.
• Parameterized Widgets 1 – Parameterized GUI capability code enables reusable
employment with similar but varying user-story needs.
1 Widget: “a generic term for the part of a GUI [graphical user interface] that allows the user to interface with the
application and operating system. Widgets display information and invite the user to act in a number of ways.”
Vangie Beal, www.webopedia.com/TERM/W/widget.html
• Sprint Releases – Typically four sprint releases occur sequentially in a six-month
development increment, each augmenting and improving the prior release.
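The Scrum team composition called out in the resource list (one systems engineer, one architect, one scrum master, five developers, one tester, with three simultaneous teams per release) can be captured as a small head-count sketch. This is a hypothetical encoding for arithmetic only, not a program tool:

```python
# Sketch of the per-team staffing described in the resource list above.
from dataclasses import dataclass

@dataclass(frozen=True)
class ScrumTeam:
    systems_engineers: int = 1
    architects: int = 1
    scrum_masters: int = 1
    developers: int = 5
    testers: int = 1

    @property
    def size(self) -> int:
        return (self.systems_engineers + self.architects
                + self.scrum_masters + self.developers + self.testers)

# Typically three teams active on three simultaneous sprints
# within the same development release.
teams = [ScrumTeam() for _ in range(3)]
release_headcount = sum(t.size for t in teams)  # 27 core team members
```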
Figure 5 shows an overview of the team activities with the development Sprint at its center.
Notably, PDR and CDR were embedded within the Sprint, where PDR reviewed the upcoming
Sprint and CDR reviewed the prior Sprint, both in a single meeting day.
2 DevOps: A general concept that integrates certain development and operations activities rather than having them
occur by independent teams with independent methods. In this article, DevOps refers to GCSS-J use of a single
process employing automated tools for creating and testing developer code and patches made by operations.
The active facilitating infrastructure consists of responsibilities, expressed as
role descriptions, assigned to appropriate personnel, and embedded within the process to ensure
that effective process-activity is possible at unpredictable times.
• Resource mix evolution – ensures that existing resources are upgraded, new resources are
added, and inadequate resources are removed, in time to satisfy needs. This responsibility
was triggered by situational awareness, and dispatched as shown in Figure 3 activities.
• Resource readiness – ensures that sufficient resources are ready for deployment at
unpredictable times. This responsibility was ongoing, and dispatched as shown in Figure 3
activities.
• Situational awareness – monitors, evaluates, and anticipates the operational environment
in relationship to situational response capability. This responsibility was ongoing, and
dispatched according to the situation of interest, typically by Architects, Systems
Engineering Team, Information Security Team, Technical Management.
• Activity assembly – assembles process-activity configurations. This responsibility was
triggered by situational awareness, as and when needed, and dispatched according to the
activity of interest, typically by Architects, Systems Engineering Team, or Information
Security Team.
• Infrastructure evolution – evolves the passive and active infrastructures as new rules and
roles become appropriate to enable response to evolving needs. This responsibility was
triggered by situational awareness, and dispatched by the Chief Engineer.
Lessons Learned
An agile SE process is necessarily and inherently a continuous learning-based process, and
accommodates frequent adjustment to the SE process based on lessons learned.
• When first implemented, the GCSS-J agile SE process started with the traditional two reviews
(PDR and CDR) at two days each for each of four sprints, a total of eight days for a four-sprint
release. These reviews evolved into half-day sessions that combined the PDR for the current
sprint with the CDR for the prior sprint. The combined PDR/CDR reviews were initially held
during the sprint planning week, but that was found to be disruptive and didn’t work well, so
they were moved to the first day of a sprint. There was no overall PDR/CDR for an operational
release, just one for each sprint.
3 Boehm, Barry. 1988. A Spiral Model for Software Development and Enhancement. IEEE Computer. May.
4 SAFe refers to the Scaled Agile Framework developed by and described in Leffingwell, Dean. 2007. Scaling
Software Agility: Best Practices for Large Enterprises. Addison-Wesley Professional.
5 Schwaber, Ken and Jeff Sutherland. 2013. The Scrum Guide. www.scrum.org.
6 Dahmann, Judith, Jo Ann Lane, George Rebovich, Jr. and Kristen J. Baldwin. 2011. An Implementers View of
Systems Engineering for Systems of Systems. IEEE International Systems Conference, Montreal, Canada, 4-7 April.
• The Functional Requirements Working Group (FRWG), which includes both the customer and
GCSS-J personnel, was getting bogged down in design discussions rather than being focused
on reviewing and approving the user stories to be worked. A White Board Session, suggested
and so-named by the customer, was added where the customer and GCSS-J personnel do a lot
of the pre-planning and understanding of what the customer wants at a detailed level. This
allowed attendees to focus each meeting on its respective goal.
• The systems engineering team discovered that some COTS and OSS software embedded in the
GCSS-J system was no longer supported, increasing security exposure to an unacceptable
level. A monthly look-ahead on the versions of COTS and OSS in use was instituted, allowing
GCSS-J to incorporate a plan for upgrades into the rhythm of the program.
• The information security team watched the time allowed by the customer for responding to
high-priority security bulletins shrink over time, and the frequency increase. This prompted a
look-ahead for newly discovered vulnerabilities in available security discussions that would
likely become security-bulletin material.
• The program started with a lot of churn in requirements, with changes being made often and
too late (once coding had already started). GCSS-J used the development hour metrics to show
the customer the effects of that churn: less capability for the same hours. This led to an
agreement that user stories would be solidified before the start of the sprint and no/minimal
changes were allowed during the sprint. If requirements changed, the changes would be
implemented in a later user story.
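The churn-control agreement above amounts to a simple routing rule: once a sprint starts, a requested change is not edited into the in-flight story but queued as a new story for a later sprint. A minimal sketch of that rule (names and signature are invented for illustration):

```python
# Sketch of the agreement described above: user stories are solidified
# before sprint start; mid-sprint changes become later user stories.

def request_change(sprint_started: bool, backlog: list,
                   current_sprint: list, change: str) -> str:
    """Route a requested change per the churn-control agreement."""
    if sprint_started:
        backlog.append(change)      # deferred: implemented in a later story
        return "deferred"
    current_sprint.append(change)   # still in planning: accept into sprint
    return "accepted"
```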
• Correcting each defect and implementing each new feature required a good deal of
development time because the changes were scattered throughout the code base. GCSS-J
negotiated bundled fixes instead of one-off fixes. Bundled fixes were several
changes/corrections that applied to a particular widget or section of code. By making all of the
changes to one area, the developer could fix several things more efficiently, and the testing
could also focus on that one area. This increased the number of changes that could be
accomplished within the allotted time. However, it did mean that some changes might be
delayed if they were singular to a widget or code section and not a high priority; those changes
could wait until additional changes to that area of the code base were required. High-priority
changes were still made, even if they were isolated.
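The bundling policy described above can be sketched as a grouping function: pending changes are collected by the widget or code area they touch, and an area is worked when it has accumulated enough changes or contains a high-priority item. The threshold and data shape are invented for illustration:

```python
# Sketch of fix bundling: group changes by code area, release a bundle
# when it is worth a visit to that area or contains a high-priority fix.
from collections import defaultdict

def bundle_fixes(changes, min_bundle: int = 3):
    """changes: iterable of (area, description, high_priority) tuples."""
    by_area = defaultdict(list)
    for area, desc, high in changes:
        by_area[area].append((desc, high))
    ready, deferred = {}, {}
    for area, items in by_area.items():
        if len(items) >= min_bundle or any(high for _, high in items):
            ready[area] = [d for d, _ in items]     # fix together now
        else:
            deferred[area] = [d for d, _ in items]  # wait for more changes
    return ready, deferred
```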
• Widgets had been developed for a number of years, but initially each scrum team built them
its own way, with no standards in place. Eventually the whole user interface was overhauled so
that everything had a consistent look and feel.
• User interface components, referred to as widgets, were constructed to accept parameterized
data as input. The widgets could be reused in appropriate places simply by changing the
parameters passed in. This had a positive effect by allowing code reuse, reducing debug time
(fix it in one place and several bugs are removed), and reducing the overall number of failed
test cases, because existing mature code was used instead of creating yet another widget.
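The parameterized-widget idea above is essentially one mature component body reused across similar user stories by varying only its parameters. A toy sketch (the widget name, parameters, and string-based rendering are all invented; real widgets would emit GUI markup):

```python
# Sketch of a parameterized widget: one tested component definition,
# reused across similar user stories by changing only the parameters.

def status_table_widget(title: str, source: str, columns: list) -> str:
    """One mature widget body; reuse comes entirely from the parameters."""
    header = " | ".join(columns)
    return f"[{title}] data from {source}\n{header}"

# Reused for two different user stories without new widget code:
fuel = status_table_widget("Fuel Status", "fuel-db", ["Unit", "Level"])
cargo = status_table_widget("Cargo Status", "cargo-db", ["Unit", "Tons", "ETA"])
```

A fix to the shared body removes the same defect from every reuse site, which is the debug-time effect noted above.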
• Automation for code builds and deployment was very successful, but systems operations and
security patching were still handled separately. The systems operations team agreed to merge
their work into the software development automation solution. The integration was successful
and the program then moved into a DevOps paradigm. All changes to the system, whether
software development, operational system patches, or security fixes, flowed through the same
create-build-deploy-test cycle. This streamlined the process to the point that the program was
able to achieve the short release cycle required by the customer.
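The unified DevOps flow described above means every change, regardless of origin, traverses the identical create-build-deploy-test sequence. A minimal sketch under that single assumption (stage bodies are placeholders, not the program's tooling):

```python
# Sketch of the single DevOps flow: software development changes,
# operational system patches, and security fixes all traverse the
# same create-build-deploy-test stage sequence.

STAGES = ("create", "build", "deploy", "test")
CHANGE_TYPES = {"development", "ops-patch", "security-fix"}

def run_pipeline(change_type: str, change_id: str) -> list:
    """Run one change through the common stage sequence; return the log."""
    if change_type not in CHANGE_TYPES:
        raise ValueError(f"unknown change type: {change_type}")
    # Every change type flows through the identical stages.
    return [f"{stage}:{change_type}:{change_id}" for stage in STAGES]
```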
Metrics
• Required by CMMI Level 5 and corporate oversight, the program collected a lot of metrics.
Metrics were tracked for every sprint: technical debt, level-of-effort actuals compared to
estimates, number of defects found in testing per user story, defect change over time, and how
much SLOC (source lines of code) existed and how much it changed each sprint. Trends were
watched over time; a spike or bad trend caused a root-cause analysis, and a task team was spun
off to find out why. Defect rate was watched closely, since a rise indicated too much was being
attempted. This instigated a debate: the defect rate was extraordinarily low, and was considered
a possible sign of too much caution that might permit pushing a little harder with results still
good enough. If a spike occurred in SLOC, the customer and the GCSS-J personnel wanted to
know why, so analysis was done ahead of time in anticipation of questions. An important part
of the development scrum was monitoring velocity and the burn-down rate. Velocity was
measured not in story points but in actual hours, at customer insistence. This gave the customer
incredible visibility into exactly what was being done, because they knew how many people
and how many hours went into every sprint. Those hours varied, so every bit of maintenance to
be done was coordinated with the customer, who saw how maintenance takes away from new
functionality. This resulted in a great deal of trust, but also micro-management and overhead.
Everything to be done was expressed as a user story, with the customer deciding the priorities.
• Statistics were collected on code defects, re-deliveries, document red lines, and operational up
and down time. These predicted how many defects could be expected, how often, and the lines
of code that might cause defects.
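The trend watch described above reduces to a simple check: compare each sprint's metric value against its recent history and flag a spike for root-cause analysis. A sketch with an invented threshold (the program's actual statistical criteria are not documented here):

```python
# Sketch of per-sprint metric spike detection: a value well above the
# recent mean triggers a root-cause analysis task team. The 2x factor
# is an invented illustration, not the program's actual criterion.

def spikes(history: list, current: float, factor: float = 2.0) -> bool:
    """Flag a metric whose current value exceeds factor x recent mean."""
    if not history:
        return False  # no baseline yet: nothing to compare against
    mean = sum(history) / len(history)
    return current > factor * mean

sloc_deltas = [400, 520, 460, 480]            # SLOC changed per sprint
needs_root_cause = spikes(sloc_deltas, 1500)  # spike: spin off a task team
```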
Trajectory Uncertainty & Risk; Reducing, Trading, & Sharing Risk; Decisions
Figure 8 is the formal interactions model of the agile loops summarized by Figure 5. The emergent
effect of these loops is to traverse System 1 configuration space shown in Figure 7, with increment
directions of this trajectory set by the following interactions of Figure 8:
• Initiate Product Backlog
• Review Priority Items & Set Sprint Thematic Goal
• Forecast Sprint Content Items
• Analyze Future Item Requirements
• Split, Merge, Rescope Future Items
Figure 8. Formal interaction model spanning Stakeholder (incl. Customer), Product Owner,
Scrum Master, Development Team, Development Environment, Target System, and Target System
Environment, including the Scrum-Scrum and Release-Release feedback loops, sprint and
release retrospectives, and product backlog updates across the product release life cycle.
Figure 9: “Fixed” Agents apply past learned patterns from “Learning” Agents
Although agile methods often emphasize learning by human teams, in this project we are also
examining the formalized accumulation of persistent knowledge that is not necessarily
human-based in all cases. Compared to other, more product line-oriented, cases of System 1, the
GCSS-J target information system was seen as not intended for deployment in multiple application
configurations, and a higher balance of learned persistent memory about this single target
information system was seen to be in the learned patterns of human team members. In Figure 6 and
9 the two “databases” shown represent (a) relatively fixed knowledge of System 1 and
environment previously learned and available for use, versus (b) the more dynamic new learning
also potentially occurring about that same domain.
Concluding Remarks
The ASELCM project reviews well-working agile systems-engineering processes – so this case
study may sound a little rosy, but it is factual and intended to be instructive. This case study
reviews the process that developed and perpetually evolved a web-based software product, which
in some circles is assumed to be the ideal environment for agile software development concepts:
small feature chunks are natural, a loosely coupled product architecture is inherent, continuous
delivery is possible, and DevOps is easily accommodated. But the analysis-team’s eventual task is
to find necessary principles with universal agile-system-engineering application. This and other
ASELCM case studies are intended to provide a foundation for eventual extraction of those
universal principles.
Initially the analysis team viewed the GCSS-J process as a maintenance process, rather than a
systems-engineering process. The distinction was rooted in a belief that systems engineering is
about developing and deploying a new system for the first time, and that subsequent upgrades
would simply be maintenance of that original system. But a revelation occurred. Before the
analysis workshop ended we realized we were analyzing an agile process that evolved an agile
system, perpetually. When a deployed system’s environment continues to evolve, it is necessary
that systems engineering also be a continuous process, punctuated by a series of next-generation
evolved system instances. This is system life extension, a core conceptual view for systems
expected to live effectively in today’s rapidly evolving environments, and requires a life-extending
agile architecture pattern as the basis of the deployed system’s structure and design. It is not
uncommon for web-based software systems to enable and exhibit continuous evolution, but the
lesson of import is the general concept of perpetual evolution applied to all system types that need
to provide sustained service.
This article introduced a circular asynchronous and simultaneous agile-life-cycle model
framework, featuring the addition of a Research seventh stage. Unlike sequential-stage waterfall,
agile systems engineering processes may be conducting activities in any and all of the stages
simultaneously and without progressive sequencing. The newly acknowledged Research stage
attends to active external and internal situational awareness – the critical driver of agile capability
expression in all of the other stages. Active awareness and enabled mitigation of situational threats
and opportunities to process and product alike, throughout the perpetual systems engineering
process and the target system’s life cycle, is a distinguishing feature of effective agile systems
engineering – previously unrecognized explicitly for its fundamental driving role. The active
Research awareness and mitigation activity breathes life into the agile systems engineering
process, taking it beyond the repetitive execution of traditional development-sprint procedures.
This article illuminated a process of optimal control that contends effectively with potential
disruptions to the integrity of a system-of-systems composed of independent constituent systems
that change without notice; and contends effectively with a centralized hub that must give priority
to rectifying frequent short-notice security vulnerability bulletins and frequent obsolescence of
COTS/OSS elements. These disruptive situations impact the ability to execute new capability
development planned for sprints, mitigated by frequent real-time re-prioritization of target system
release evolution. The agility of the GCSS-J process to sustain continued service with integrity
under frequent disruption has not appeared in the literature to the authors’ knowledge.
References
Dove, R. and R. LaBarge. 2014. Fundamentals of Agile Systems Engineering – Part 1 and Part 2.
International Council on Systems Engineering, International Symposium, Las Vegas, NV,
30Jun-3Jul. www.parshift.com/s/140630IS14-AgileSystemsEngineering-Part1&2.pdf
Dove, R., W. Schindel, and C. Scrapper. 2016. Agile Systems Engineering Process Features
Collective Culture, Consciousness, and Conscience at SSC Pacific Unmanned Systems
Group. Proceedings International Symposium. International Council on Systems
Engineering. Edinburgh, Scotland, 18-21 July. www.parshift.com/s/ASELCM-01SSCPac.pdf
Dove, R., W. Schindel. 2017. Case Study: Transition to Scaled-Agile Systems Engineering at
Lockheed Martin’s Integrated Fighter Group. Unpublished working paper.
www.parshift.com/s/ASELCM-04LMC.pdf
Dove, R., W. Schindel, R. Hartney. 2017. Case Study: Agile Hardware/Firmware/Software
Product Line Engineering at Rockwell Collins. Proceedings 11th Annual IEEE
International Systems Conference. Montreal, Quebec, Canada, 24-27 April.
www.parshift.com/s/ASELCM-02RC.pdf
GAO. 2011. GAO-12-7, Report to Congressional Committees: Information Technologies –
Critical Factors Underlying Successful Major Acquisitions. US Government
Accountability Office. October. http://www.gao.gov/assets/590/585842.pdf
ISO/IEC/IEEE. 2015. Systems and Software Engineering — System Life Cycle Processes.
ISO/IEC/IEEE 15288:2015(E) first edition 2015-05-15. Switzerland.
ISO/IEC. 2010. Systems and Software Engineering — Life Cycle Management — Part 1: Guide
for Life Cycle Management. ISO/IEC TR 24748-1:2010(E) first edition. Switzerland.
Schindel, W. and R. Dove. 2016. Introduction to the Agile Systems Engineering Life Cycle MBSE
Pattern. Proceedings International Symposium. International Council on Systems
Engineering. Edinburgh, Scotland, 18-21 July.
www.parshift.com/s/160718IS16-IntroToTheAgileSystemsEngineeringLifeCycleMBSEPattern.pdf
US Department of Defense. 2014. Strategy for Improving DoD Asset Visibility. Department of
Defense. Washington, DC (US): Assistant Secretary of Defense for Logistics and Materiel
Readiness. January.
http://gcss.army.mil/Documents/Articles/Strategy_for_Improving_DoD_Asset_Visibility.pdf
Acknowledgements
The agreement to host the workshop, choice of process to analyze, and arrangements for enabling
the workshop were the efforts of Emmet (Rusty) Eckman III, Director, Business-Facing,
Engineering and Sciences, Northrop Grumman, Sector Technical Fellow, and INCOSE ESEP.
The SE process analysis described in this article is the collective output of a workshop team that
includes (alphabetically): Judith Dahmann (MITRE), John Davidson (Rockwell-Collins), Rusty
Eckman (Northrop Grumman), Rick Dove (Paradigm Shift International), Tim Fulcher (Northrop
Grumman), Kevin Gunn (MITRE), Suzette Johnson (Northrop Grumman), Mark Kenney
(Northrop Grumman), Ken Laskey (MITRE), Sunil Lingayat (Northrop Grumman), David
Lempia (Rockwell-Collins), Chris McVicker (Northrop Grumman), Erin Payne (Northrop
Grumman), Avinash Pinto (MITRE), Jack Ring (OntoPilot), Bill Schindel (ICTT Systems
Sciences), and Chris Scrapper (US Navy).
Biographies
Rick Dove is CEO of Paradigm Shift International, specializing in agile systems
research, engineering, and project management; and an adjunct professor at
Stevens Institute of Technology teaching graduate courses in agile and
self-organizing systems. He chairs the INCOSE working groups for Agile
Systems and Systems Engineering, and for Systems Security Engineering, and
is the leader of the INCOSE Agile Systems Engineering Life Cycle Model
Discovery Project. He is an INCOSE Fellow, and the author of Response
Ability, the Language, Structure, and Culture of the Agile Enterprise.
Bill Schindel is president of ICTT System Sciences. His engineering career
began in mil/aero systems with IBM Federal Systems, included faculty service
at Rose-Hulman Institute of Technology, and founding of three systems
enterprises. Bill co-led a project on Systems of Innovation in the INCOSE
System Science Working Group, co-leads the Patterns Working Group, and is a
member of the lead team of the INCOSE Agile Systems Engineering Life Cycle
Project.