Professional Documents
Culture Documents
Application Summary
For Sales and Operations Planning
Revision: 8
Date: 29-Aug-2016
Copyright © 2017 Kinaxis Inc. All rights reserved. This document may not be reproduced or transmitted in any form
(whether now known or hereinafter discovered or developed), in whole or in part, without the express prior written consent of
Kinaxis.
Kinaxis RapidResponse contains technology which is protected under US Patents #7,610,212, #7,698,348, #7,945,466, #8,015,044,
and #9,292,573 by the United States Patent and Trademark Office. It is also protected under the Japan Patent Office, Japanese
patent #4393993 and the Intellectual Property India Office, patent # 255768. Other patents are pending.
This document may include examples of fictitious companies, products, Web addresses, email addresses, and people. No association
with any real company, product, Web address, email address, and person is intended or should be inferred.
Printed in Canada.
Table of Contents
Table of Contents
Document Revision History
1 - Introduction
1.1 Objective
1.2 Functionality
1.3 Measurements
1.4 Roles and Responsibilities
2 - Business Processes
2.1 Business Process Flow Diagram
2.1.1 Inputs
2.1.2 Outputs
2.1.3 Scenario Structure
3 - Resources
3.1 Metrics, Dashboards and/or Scorecards
3.1.1 Dashboards
3.1.2 Corporate Metrics Scorecard
3.2 Other Resources: Processes, Task Flows, Workbooks
3.2.1 Processes
3.2.2 Taskflows
3.2.3 Workbooks
4 - Integration Requirements
4.1 Data and Integration Requirements
4.1.1 Database Scale
4.1.2 Configuration Points
4.1.3 Control Table Configuration
4.1.4 Detailed Data Requirements
4.1.5 Closed Loop and Data Reconciliation
4.2 Data Integrity Requirements
1 ‐ Introduction
Sales and Operations Planning is an iterative business management process that aims to achieve focus,
alignment and synchronization across multiple functions of an organization. The S&OP process routinely
reviews customer demand and supply resources, and develops a plan to achieve the organizational
financial and strategic objectives.
The Sales and Operations Planning Application is outlined below. The scope of this Application is
limited to the following processes and sub‐processes: Overall S&OP Process, Update Annual Plan, S&OP
Supply / Demand Rebalancing, and Executive S&OP Meeting. This Application also includes several
role‐based dashboards. If it is purchased without any other application, the expectation is that the
necessary data (for example, Demand Plan or Supply Plan data) will be provided as input to this
Application.
1.1 Objective
The objective of Annual Plan Management is to establish an annual plan against which progress may be
monitored and measured throughout the year. This includes setting financial targets through the year
for various categories and levels of product and/or regions or almost any other hierarchy selection that
measures and categorizes forecasted demand.
These targets include not just an absolute target number but also a measure of acceptable variability
from that target. The actuals measured through the year are compared to the targets, and alerts are
raised when results are forecasted to fall outside the acceptable tolerance levels (limits).
The unconstrained forecast will be presented as input to the supply‐side of the business to determine if
this can actually be supplied. Operations will evaluate it to determine what can be supplied (as a
separate activity) and will feed that information back into this process to define the constrained
consensus demand plan.
The objective of S&OP Supply / Demand Rebalancing is, therefore, to create the constrained consensus
forecast, and the constrained aggregate supply plan for presentation to the S&OP Executives.
The objective of the Executive S&OP Meeting is to select an achievable sales and operations plan to
execute to.
1.2 Functionality
The functional capabilities of the S&OP application include:
Usually the annual plan is defined at some hierarchy level. As such, hierarchies must be defined to
manage where the targets and limits are entered and maintained.
1.3 Measurements
Standard measures include:
1) Ending Inventory Value: Summary of the value of the projected ending inventory at the end of
each period.
2) Margin %: Summary of the difference between revenue and cost of goods sold by period,
expressed as a percentage of revenue.
3) Revenue Value: Summary of the revenue for actual orders and forecast, by period.
4) On Time Delivery to Request: Historical and projected percentage, by period, of order lines
available on or before their request date.
5) Key Constraint Utilization %: Summary of the actual load, by period, expressed as a percentage
of the available load for key constraints.
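As an illustration only, the standard measures above reduce to simple per‐period arithmetic. The function and field names below are hypothetical; RapidResponse computes these measures inside its own workbooks.

```python
# Illustrative calculations for the standard S&OP measures above.
# Field names and the per-period record layout are hypothetical --
# RapidResponse computes these inside its own workbooks.

def margin_pct(revenue, cogs):
    """Margin %: (revenue - COGS) as a percentage of revenue."""
    return 100.0 * (revenue - cogs) / revenue if revenue else 0.0

def on_time_to_request_pct(order_lines):
    """Percentage of order lines available on or before their request date."""
    on_time = sum(1 for line in order_lines
                  if line["available_date"] <= line["request_date"])
    return 100.0 * on_time / len(order_lines) if order_lines else 0.0

def key_constraint_utilization_pct(actual_load, available_load):
    """Actual load as a percentage of available load for a key constraint."""
    return 100.0 * actual_load / available_load if available_load else 0.0

print(margin_pct(1_000_000, 650_000))            # 35.0
print(key_constraint_utilization_pct(90, 120))   # 75.0
```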
1.4 Roles and Responsibilities
Role Responsibility
S&OP Process Owner Monitors the overall sales & operations planning process to ensure that
decisions are made on time and the process is moving along to
expectations.
Finance Maintains the financial targets and limits in the annual plan. Also monitors
and reports on performance against those targets and maintains the
finance operating plan.
(Demand Planning only) This role also contributes to the demand forecast,
as required.
Demand Planner Owns the consensus demand forecast. It is the demand planner’s role to
add the adjustments and overrides to the consensus plan to create a
realistic forecast that represents all the various parties with vested
interests (finance, sales and marketing as well as operations). Cleaning up
the historical actual data and generation of the statistical forecast, if
applicable, is also the role of the demand planner.
This role is highly collaborative, and will work with the finance, sales,
marketing, supply planning and operations roles.
Sales and Marketing Sales and marketing each have their own view of the forecast demand and
provide input during the executive S&OP review meeting.
(Demand Planning only) As part of the Demand Planning process, these
roles will provide input to the demand forecast to ensure that the resulting
consensus forecast represents their view.
Supply Planner Provides a high‐level supply plan in response to the consensus plan.
2 ‐ Business Processes
2.1 Business Process Flow Diagram
The following business processes are driven in RapidResponse through the use of TaskFlows. These taskflows allow the user to have a guided
experience through the different business process flows. Details on the taskflows included with this blueprint can be found in section 3.2 Other
Resources.
The overall S&OP process flow shown above includes various roles and functions. The “Update Annual Plan”, “Rebalance Demand to Supply” and
“Executive Review and Approval” are sub‐processes that are expressed in more detail below. The Demand Planning, Aggregate Supply Planning
1. Does the Annual Plan require revision?: This decision is generally a question of timing. The annual plan is usually established once per year
and not otherwise touched. There are occasional exceptions; however, the criteria for determining this are outside the scope of
RapidResponse. As such, there are no resources defined to help with making this decision.
2. Define and update the Annual Plan: Using the Update Annual Plan task flow, link to S&OP Annual Plan or the S&OP Constraint Annual
Plan (if Capacity Planning Application is purchased) workbook or directly open the S&OP Annual Plan or S&OP Constraint Annual Plan
(if Capacity Planning Application is purchased) workbook, and create a private scenario based on the Baseline scenario to define and
update the Annual Plan.
3. Define and update Limits for the Annual Plan: Also using the S&OP Annual Plan or the S&OP Constraint Annual Plan (if Capacity
Planning Application is purchased) workbook, on the Settings worksheet in the same private scenario define and update Limits for the
AOP that will be used for alerting in subsequent sub‐processes.
2.1.1 Inputs
The targets are not dependent on historical data; therefore, the only real prerequisite data is the basic Part / Customer data. Targets for
categories may be defined and are normally maintained only in RapidResponse. If target information is available from an external source,
it may be imported.
Out of the box, a number of targets and limits may be set up and used for the default “categories” of forecast. Refer to section 4.1.4.1 Annual
Plan Targets and Settings for the complete list.
Limits are expressed as Warning and Critical % of the target. A negative limit means values exceeding the target are desirable.
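As an illustration, one plausible reading of the Warning/Critical limit semantics is sketched below. The exact evaluation logic lives in the RapidResponse workbooks; the function here is a hypothetical approximation, including its treatment of negative limits.

```python
# Hedged sketch of how Warning/Critical limits might be evaluated against
# a target. Limits are percentages of the target; a negative limit flips
# the direction, so only shortfalls against the target raise an alert.

def alert_level(actual, target, warning_pct, critical_pct):
    """Return 'OK', 'Warning' or 'Critical' for an actual vs. its target."""
    deviation_pct = 100.0 * (actual - target) / target
    if warning_pct < 0:
        # Negative limits: exceeding the target is desirable, so only a
        # shortfall (negative deviation) can trigger an alert.
        if deviation_pct <= critical_pct:     # critical_pct is also negative
            return "Critical"
        if deviation_pct <= warning_pct:
            return "Warning"
        return "OK"
    # Positive limits: alert on deviation in either direction.
    if abs(deviation_pct) >= critical_pct:
        return "Critical"
    if abs(deviation_pct) >= warning_pct:
        return "Warning"
    return "OK"

print(alert_level(115, 100, 5, 10))    # Critical: 15% over target
print(alert_level(103, 100, 5, 10))    # OK
print(alert_level(110, 100, -5, -10))  # OK: exceeding is desirable
```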
The inputs for the S&OP Team Supply / Demand Rebalancing sub‐process include:
Constrained Supply Plan
Consensus Demand Plan
A small set of achievable Demand and Supply Plans
2.1.2 Outputs
The Annual Plan targets are compared to each forecast category being evaluated when determining the final consensus forecast in the
Consensus Demand Planning process. They are also reported as the target "Annual Plan" in the Finance dashboard.
While there are no predefined alerts defined to automatically report on deviation from the above targets, they may be easily set up.
The targets are stored in the ForecastDetails table under specific HistoricalDemandCategory values. The limits associated with these targets are
defined in the HistoricalDemandCategory directly. These categories (as well as others) have Type.ProcessingRule = “Target”. Section 4.1.3.3
HistoricalDemandCategory describes how to set up the categories.
The outputs of the S&OP Team Supply / Demand Rebalancing sub‐process include:
Consolidated Assumptions
Used by Process Owners to start the S&OP Cycle and as part of the
Publish S&OP cycle to update the Current S&OP scenario.
Current S&OP – Child of S&OP Intermediate; this scenario represents the current, agreed‐
upon S&OP Plan. Thought of in another way, it is a snapshot of the latest
approved S&OP data.
This is the scenario in which the new demand and supply plans are
created, analyzed and brought forward for Executive approval.
S&OP users collaborate based on this scenario and commit changes back
to it.
Note: The ‘Project Baseline’, ‘Supplier Collaboration Automation’ and ‘Supplier Collaboration’ scenarios are only installed when those
Applications are purchased.
When working with multi‐server environments containing a Data Integration server (or a server where separate business processes take place),
it is recommended that a minimal scenario tree be maintained and utilized on the Data Integration Server.
The following scenarios would be required:
Data from the ERP system will be regularly updated into Enterprise Data. Any automated data modification would occur in Approved Actions,
while any user data modification would occur in Baseline. Data updates into the Production server (downstream of the data integration server)
happen from the Baseline scenario. Depending on the process(es) moved to the Data Integration server, additional scenarios may be
required as children of Baseline to accommodate those processes. If this is the case, scenarios will need to be committed to Baseline
prior to data update so all pertinent data is moved across servers.
3 ‐ Resources
The table below outlines the standard resources provided for each application.
Executives
Finance
Sales
Marketing
Demand Planner
Supply Planner
Inventory Planner
Operations
The Corporate Metrics tab of the dashboard for all roles reports:
Revenue Value
Ending Inventory Value
Margin %
On Time Delivery to Request
Key Constraint Utilization
The Process Owner dashboard contains an additional Process Owner Metrics tab which reports:
The Process Owner role has the Process Owner dashboard available to them as part of the S&OP
Application.
3.2.1 Processes
S&OP activities are pre‐defined in the S&OP Process Template, accessed via Processes in the Explorer.
Each of these operations and sub‐processes is laid out in the pre‐defined "S&OP Process".
By default, all activities are set to require status updates from 'Every user in group'. If the customer
wishes to change this setting, navigate to the Activity Properties > Performers tab and select 'Any
member of group' as shown in the following screen shot:
FOR INTERNAL USE ONLY
If a portion of the S&OP Process is performed on a “Forecasting” or “Data Integration” server for
performance reasons, updates to the Process Instance should still be performed on the “Planning” or
“Production” server to ensure the necessary team members are updated with progress on the process.
If the Inventory Management Application is not purchased with S&OP, the Manage Inventory activity
should be removed from the S&OP Process Instance Template.
3.2.2 Taskflows
The following Taskflows are provided in support of this application:
As part of the S&OP process, the Process Owner is responsible for "Starting" and "Publishing" each
cycle. Upon Starting the process, the S&OP Candidate scenario is created and published to the
appropriate S&OP groups (on the first cycle only; after the first cycle the scenario is permanent), or is
updated from S&OP Intermediate. Upon Publishing the process, the S&OP Candidate scenario is
committed into S&OP Intermediate and a new S&OP Archive YYYYMMDD scenario is created. The
Current S&OP scenario is auto‐updated from S&OP Intermediate.
3.2.3 Workbooks
In addition to those workbook resources found in Section 3 Resources, the following workbooks support
the S&OP Application.
Process Activities – Workbook used by the Process Owner group to view and manage S&OP
Process Instances they own or in which they are a performer. Can be used to view end-to-end
process details and schedules, activity properties, and late activities or performers.
Process Calendar – Workbook used by the S&OP Process performers (team members) to
manage the status of activities assigned to them as part of an S&OP process instance.
Process Widgets – The source workbook for S&OP Process widgets; contains the visualizations
displayed on the Process Owner dashboard.
4 ‐ Integration Requirements
4.1 Data and Integration Requirements
4.1.1 Database Scale
It is imperative that the consultant and customer both understand the database scale that is being
created. This is typically a concern when integrating large amounts of historical data or
performing heavy disaggregation operations, both of which occur in the S&OP and Demand Planning
Applications; however, it should also be considered when deploying other Applications.
For example, forecasting will create a series of detailed ForecastDetail records for eligible PartCustomer
records even when working from a very high‐level summary perspective. The fact that we are forecasting at a high level does
not mean that we are not creating these detailed records, and the scope of this can easily be missed. If
you are forecasting for 200K parts with an average of 200 customers each, then you are forecasting 40M
PartCustomers. Further, if your DisaggregationCalendar is set to weekly and you are forecasting for 2
years, then this will result in 40M * 104, or greater than 4 billion, ForecastDetail records. This can be
supported in RapidResponse; however, performance would be degraded.
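The arithmetic above can be made explicit with a short sketch. The numbers are the example's own; nothing here queries RapidResponse.

```python
# The scale arithmetic from the example above, made explicit.

parts = 200_000
customers_per_part = 200
part_customers = parts * customers_per_part            # 40,000,000

weeks_per_year = 52
horizon_years = 2
buckets = weeks_per_year * horizon_years               # 104 weekly buckets

forecast_details = part_customers * buckets
print(f"{part_customers:,} PartCustomers")             # 40,000,000 PartCustomers
print(f"{forecast_details:,} ForecastDetail records")  # 4,160,000,000 ForecastDetail records
assert forecast_details > 4_000_000_000
```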
With the introduction of vectors, tables that use this data type don't have the same record
limitations. For example, the HistoricalDemandSeriesDetail table is a vector on the
HistoricalDemandSeries table. Each HistoricalDemandSeries record can have 4B
HistoricalDemandSeriesDetails in its vector, so there is essentially no practical limit to the number of
detail records contained in a vector table.
As of release 2016.2, RapidResponse supports the concept of large tables, which can hold over 4B
records. This change will allow >4B records in all tables except control tables and the Site table.
With this concept of a large table, RapidResponse uses 64‐bit addressing instead of 32‐bit
addressing. Only tables that need to be large will switch to this model, since it requires more memory
to store the address pointers. There is nothing for the user or any administrator to do, as this
conversion will be automatic when needed.
Although this large table support is available and automatic inside RapidResponse, care must be
exercised as this limit is approached. As a small table grows in size (i.e., over time), once that table
exceeds ~3B records, an internal flag is turned on and the table will be converted to a large table the
next time RapidResponse is restarted. (This threshold was chosen to be far enough away from the 4B
‘small table’ record limit that a large surge of records would not be very likely. Exceeding 4B records
while a table is still small would cause disruption to RapidResponse.) During an initial data load, if there
is a chance that more than 4B records need to be loaded into a table, this process may need to be
executed in multiple steps, being sure to restart RapidResponse after the table has exceeded 3.2B
records, but before it exceeds 4B. Alternatively, if you need to start a table out as a ‘large’ table, contact
Support and they will be able to force the internal flag prior to reaching the threshold.
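The staged‐load guidance above can be sketched as a simple batch plan. The threshold constants and batch sizes below are illustrative only; the flag behavior itself is internal to RapidResponse.

```python
# A sketch of planning a staged initial load so that a restart can happen
# after the table passes the ~3.2B internal flag threshold but before the
# 4B small-table ceiling. Constants and batch sizes are illustrative.

FLAG_THRESHOLD = 3_200_000_000   # internal large-table flag is set past here
SMALL_TABLE_LIMIT = 4_000_000_000

def plan_load(total_records, batch_size):
    """Yield (batch, restart_after) pairs; restart once inside the window."""
    loaded = 0
    restarted = False
    while loaded < total_records:
        batch = min(batch_size, total_records - loaded)
        loaded += batch
        restart = (not restarted
                   and FLAG_THRESHOLD < loaded < SMALL_TABLE_LIMIT)
        if restart:
            restarted = True
        yield batch, restart

total = 6_000_000_000
for i, (batch, restart) in enumerate(plan_load(total, 500_000_000), 1):
    note = "  <- restart RapidResponse here" if restart else ""
    print(f"batch {i}: {batch:,} records{note}")
```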
The Application Settings workbook has a worksheet, entitled "Application Variables", for
managing these profile variables. This workbook can be accessed through the Administration pane >
System Settings > Application Settings (or through the Explorer).
Profile variables are used to set target categories. These all have names ending in "Target".
Profile variables are used to set demand categories. These all have names ending in either
“Category”, “Optimistic” or “Pessimistic” except for one called DP_DemandTypeForConsensus.
Details of these variables can be found in the Demand Planning Application Summary
Document.
Below is a list of the profile variables used to set Target categories, some of which are used in the S&OP
workbooks outlined for this application, but may also be used in additional Pre‐Defined Resources
within RapidResponse. For a complete list of which resources, you can use the Search Resources feature
to perform a search:
The list of Targets should be defined in the Baseline scenario. If they are defined in a different scenario,
the Global Setting "Annual Plan target scenario" needs to be updated to reflect the correct scenario;
otherwise, you will get unexpected results in the Corporate Metrics scorecard:
In addition to the above variables, two more variables exist in the Application Variables worksheet of
the Application Settings workbook that set the Warning Limit and Critical Limit default values that are
used for alerting.
On install the default values are set as follows but can be overridden in the S&OP Annual Plan workbook
on the Settings worksheet. During the integration if the default values need to be adjusted navigate to
the Application Variables worksheet within the Application Settings workbook and edit the values as
needed. Out of the box these thresholds are not exposed in the pre‐defined resources; however, they
can be unhidden within any of the workbooks/widgets where Annual Plan targets are shown.
4.1.2.2 Macros
No macros are used in the various S&OP resources.
4.1.2.3 Automation
This filter can be used to focus on items that are newly introduced, or are going to be introduced.
The idea of this filter is to list Part (or PartCustomer) records that have not yet made it into production.
The filter expression is designed to look for parts where the sourcing data has not yet come into effect.
This may not be an appropriate test. For example, if the process for defining these new parts is to create
PartSource records with effectivity that runs from past to future, then the filter expression may need
some other means to identify these parts.
Careful thought needs to be put into what the expression will look like if it needs to be redefined to
meet the customer's needs. Ideally, NPI parts should be identified by a discrete code value in a field (no
wildcards), preferably in a referenced table.
4.1.2.5 Hierarchies
Hierarchies are used extensively throughout the S&OP, Demand Planning, Aggregate Supply Planning
and Capacity Planning Applications to allow the entry of data values at a specific intersection, which then
disaggregates to the lowest level, PartCustomer. As such, any hierarchies defined must be compatible
with (have a direct path to) the PartCustomer table.
Site
Constraint Group
Constraint Name
Region Group
Region
Constraint
Customer Group
Customer Name
Region Group
Region
Customer
Region Group
Region
Site
Part
Supplier Group
Supplier
Part
Region Group
Region
Supplier
If additional hierarchies are requested by the customer, they can be defined as required; however, if the
intended use is within the context of S&OP or related processes, there needs to be a direct link to the
PartCustomer table to support disaggregation logic.
Warning: do not use concatenated values. If you do, some disaggregation sheets will not work properly
if the sheet displays the "level below" the selected hierarchy. Also, the system cannot use the
referenced table to enhance query performance. If you need a user‐friendly value, add a custom field
and do the concatenation in the data load or with a data change.
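As a sketch of the recommended alternative, the concatenation can be performed during the data load into a separate custom field, leaving the hierarchy key fields atomic. The record layout and field names here are hypothetical.

```python
# Sketch of the recommended approach: keep the hierarchy key fields atomic
# and build the user-friendly concatenated label as a separate custom field
# during the data load. Record layout and field names are hypothetical.

def add_display_label(record):
    """Populate a custom display field without concatenating key values."""
    # Keys stay separate so disaggregation and referenced-table lookups work.
    record["CustomDisplayLabel"] = f'{record["RegionGroup"]} / {record["Region"]}'
    return record

row = {"RegionGroup": "EMEA", "Region": "DACH"}
print(add_display_label(row)["CustomDisplayLabel"])  # EMEA / DACH
```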
A simplified responsibility model was introduced in 2014.4. A new type of resource is available called
Responsibility Definitions. This resource makes it possible for non‐administrators to assign responsibility
for data. Users or groups can be granted permission to create and share responsibility definitions, the
same as other RapidResponse resource types.
Note that to use this new type of resource, users must be able to view the responsibility scenario. It is
therefore recommended that a responsibility scenario is selected as one that is kept current and can be
shared with all users involved in creating responsibility definitions or assigning responsibility for data.
The responsibility model allows any workbook author to include responsibility information in workbook
columns, and format it so that users can easily contact the person responsible for data. This way, rather
than relying on an administrator to build a responsibility workbook to define which data changes are
important, users can decide on a case‐by case basis what is important.
It must be a scenario that everyone who needs to use responsibility definitions can be given
View access to.
It must be a scenario that is kept reasonably current, so that when a new record is added,
responsibility for that data can be assigned.
Choose a permanent scenario to reduce the risk of the responsibility scenario being accidentally
deleted.
System administrators can change the responsibility scenario in the Administration pane > System
Settings > Global Settings, listed in the Responsibility scenario under the Scenarios category:
Each Responsibility Definition is configured with a Base table, and Responsibility table(s) are identified.
For example, the Buyer Responsibility Definition Base table is the Part table, and has its responsibility
table configured as the BuyerCode table:
Multiple Responsibility tables can be identified if multiple variables exist on which to set Responsibility.
For example, the Sales Responsibility Definition, based on the PartCustomer table, is configured to be
associated with four responsibility tables:
The standard data model contains a UserId field on the PlannerCode and BuyerCode tables that is a
string data type. The Part table references both of these tables, while the Constraint table references
the PlannerCode table. The field on these two tables allows you to assign a user to a specific Planner
code or Buyer code.
Responsibility definitions can be shared with users to create or edit responsibility assignments.
Responsibility assignments are listed in the Assignments table, which can be seen when the
Responsibility definition is opened.
The Resources workbook now contains a Responsibilities worksheet where all responsibility definitions
are listed.
A Responsibility Roles worksheet has also been added to the RapidResponse Administration workbook.
This worksheet is used to map Responsibility definitions to variables that are used in predefined
resources.
4.1.2.6.5 Collaboration
Once responsibility is assigned, the name of the responsible person can be rendered in a worksheet
using the RESPONSIBILITY function and by formatting the column to ‘Display as User’.
When the user is displayed in a worksheet, you can hover over the name to display a
contact card with pertinent information about the user, such as email address, phone number, etc.
There is also the ability to 'Send a Message' through the displayed contact card. The message is sent to
the user's message center and can also be sent to the user's email address if the user so chooses.
To have messages forwarded to an email address, the individual user sets this up in their user options
(assuming the system administrator has configured the install to send email messages).
To initiate collaboration with the users, one would create a scenario, share it with the
appropriate responsible users, and include a message describing the data for which assistance or feedback
is required. To initiate this collaboration with multiple users directly from the worksheet, the user multi‐
selects the names in the Responsible column, right‐clicks, and selects "Share Scenario…".
On install, responsibility is exposed and collaboration through user name is present in a number of
workbooks. In order to view resources where responsibility is exposed and collaboration through user
name is present, use the Tools > Search Resources function, and search for “Responsibility” in
expressions.
When identifying collaborators, care should be taken to ensure that
multiple collaborators are not working on resolving the same part/component issue, as their changes
could conflict with each other.
The selected collaborators will receive notification that a collaboration scenario has been shared with
them, requiring them to review the data and possibly make changes to it in order to support the
request. Each collaborator should create a child scenario of the parent collaboration scenario in order
to simulate any required actions. If the issue is resolvable, the collaborator should commit the private
scenario with data changes and select to accept the collaboration request, while including any message
required to the requestor.
If the issue cannot be resolved, the collaborator would not commit their private scenario, but would
instead select to reject the collaboration request, including any message required to the requestor. In
this manner, the requestor can keep track of the collaboration progress and which changes can be
made. The private scenario can be deleted once it is no longer required.
If the collaboration was successful and the required changes can be implemented, the collaboration
owner will commit the collaboration scenario into its parent. This should only be done when it is
confirmed that the required actions will be implemented, as the changes may be sent via closed‐loop to
the transactional ERP system.
If the collaboration was unsuccessful and the required changes cannot be implemented, the
collaboration owner will change the status of the collaboration scenario to the appropriate status. Once
the unsuccessful collaboration scenario is no longer required, it should be deleted.
The collaboration scenario owner can decide how to be notified of changes and updates throughout the
collaboration process. This can be done when sharing the scenario by selecting the appropriate
checkboxes in the "Notify me when" section on the Notify tab of the Share Scenario dialog. It can also be
changed throughout the collaboration process, if required, by accessing the Share Dialog at any time.
The following describes the functionality of each selection and the bulleted points below identify where
the messages appear:
Anyone responds – responses will be logged in the Scenario Properties Activity Log.
Everyone has responded – responses will be logged in the Scenario Properties Activity Log.
Anyone modifies data in this scenario – sends a message to your Message Center when anyone
modifies data in the scenario.
At any time, the collaborators can send a note to the scenario owner indicating progress or issues
arising. This can be done by accessing the collaboration scenario properties and Responding via the
toolbar icon.
An Activity report can also be distributed to highlight scenario changes and activity, by accessing the
collaboration scenario properties and clicking on the ‘Distribute Activity Report’ icon.
4.1.2.7 Miscellaneous
Below is a list of other configuration points to be aware of for this application:
It is critically important that the DemandType on the PartCustomer be set to something other than null
for all PartCustomer records that are going to calculate a consensus forecast. If this is not done, then the
data change in the “Closed Loop Within Cycle” will fail.
It is equally important that the DemandType values used for these PartCustomer records are not used
for imported DemandOrders (imported IndependentDemand records), nor as any of the
AllocationDemandType values in the SupplyType table. That is, in any scenario above the Baseline
scenario, these PartCustomer DemandTypes should appear on nothing except PartCustomers. In the
Baseline scenario and any descendants, they will also appear on DemandOrders (IndependentDemand),
but only after the data change in the “Closed Loop Within Cycle” has run, and on no other
IndependentDemands. In other words, these demand types should only ever occur on DemandOrders where
the Order.Id starts with “CF: ”. In no scenario at all should these demand types show up as
dependent demand. The reason there must not be any overlap in the use of these demand types is that
we turn them on and off appropriately to ensure the demand is not double-driven in either the
Baseline or the S&OP scenarios. All of the other demand types used must not be touched between the
scenarios (they should either drive demand or not, the same way in both scenario trees).
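The no-overlap rule above can be checked mechanically. The sketch below is illustrative Python (not RapidResponse script syntax); the record shapes and names are assumptions for illustration only.

```python
# Validate that DemandType values reserved for PartCustomer records are never
# reused by imported DemandOrders or as AllocationDemandType values in
# SupplyType. Closed-loop orders (Order.Id starting with "CF: ") are exempt.
def find_demand_type_conflicts(partcustomer_types, demand_orders, allocation_types):
    """Return the set of demand types that violate the no-overlap rule.

    partcustomer_types: set of DemandType ids used on PartCustomer records
    demand_orders: iterable of (order_id, demand_type) for imported
        IndependentDemand records
    allocation_types: set of AllocationDemandType values from SupplyType
    """
    conflicts = set()
    for order_id, demand_type in demand_orders:
        # Demand types written by the "Closed Loop Within Cycle" data change
        # are legitimate: their Order.Id starts with "CF: ".
        if demand_type in partcustomer_types and not order_id.startswith("CF: "):
            conflicts.add(demand_type)
    conflicts |= partcustomer_types & allocation_types
    return conflicts

# "DT-CF" is reserved for PartCustomers, so its use on a normally imported
# order ("SO-1001") is flagged, while the closed-loop order is not.
conflicts = find_demand_type_conflicts(
    partcustomer_types={"DT-CF"},
    demand_orders=[("CF: SO-9001", "DT-CF"), ("SO-1001", "DT-CF"), ("SO-1002", "DT-STD")],
    allocation_types={"DT-ALLOC"},
)
print(conflicts)  # {'DT-CF'}
```

A check like this could run against extracted data before the "Closed Loop Within Cycle" data change is relied on.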
Also related to avoiding double-driving of demand, it is vital that in the S&OP Intermediate
scenario and all descendant scenarios, these PartCustomer DemandType controls all have the
ProcessingRule set to Ignore. In this scenario tree, we want all of the consensus forecast to drive
from ForecastDetails and not from the IndependentDemands that were written to Baseline and
subsequently updated into S&OP Intermediate. For exactly the same reason, it is required that these
same PartCustomer DemandType controls all have the ProcessingRule set to SalesForecast in all of the
other (non-S&OP) scenarios (at least from Baseline down, excluding the S&OP branch of scenarios).
It is important to understand that the PartCustomer DemandTypes are always treated as if
DemandType.ProcessingRule = ‘SalesForecast’ when consensus forecasts are calculated. Therefore,
setting the ProcessingRule to Ignore in the S&OP scenarios for these demand types will not prevent
them from driving the consensus forecast and real forecasted demands there.
In any of the cases where the consensus forecast is expected to be spread differently than the applicable
DisaggregationCalendar (either defined in the PartCustomer or, if null there, defined in the
SOPAnalyticsConfiguration), it is important that the DemandType.SpreadRule is set to None in the
Baseline scenarios and set to Spread in the S&OP scenarios. This is required because the data change in
the “Closed Loop Within Cycle” will create the IndependentDemand records in the Baseline scenario
already spread and we don’t want to try to spread them again in the Baseline. The selected
SpreadProfile does not need to be different in the scenarios.
HistoricalDemandCategory
The double-driving forecast demand problem just discussed, combined with the fact that PartCustomer
DemandTypes are always treated as if DemandType.ProcessingRule = ‘SalesForecast’ when consensus
forecasts are calculated, has implications for the HistoricalDemandCategory settings between
Baseline scenarios and S&OP scenarios. We need to ensure that the calculated consensus forecast does
not occur in the Baseline scenarios but that it does occur in the S&OP scenarios.
The normal assumption is that there will be no non‐Target ForecastDetail records in any non‐S&OP
scenario. If this assumption is true, then nothing more needs to be done. However, if ForecastDetail
records are brought into RapidResponse at any scenario above the S&OP scenario and the
Header.Category.Type.ProcessingRule for these records is one of Forecast, ForecastOverride,
RebalancingAdjustment or RebalancingForecastOverride, then steps must be taken to prevent them
from producing consensus forecast in the Baseline scenarios but allowing them in the S&OP scenarios.
However, a safer approach is to completely disable these categories outside of the S&OP scenarios by
changing the HistoricalDemandCategoryType ProcessingRule from any value of Forecast,
ForecastOverride, RebalancingAdjustment or RebalancingForecastOverride to None (only in the non-S&OP
scenarios). In the S&OP scenarios, these HistoricalDemandCategoryType records retain the correct
ProcessingRule of Forecast, ForecastOverride, RebalancingAdjustment or RebalancingForecastOverride.
The required settings, by table and field, are summarized below.

For all DemandType records used by PartCustomer records (should never include DemandType records
used by DemandOrders where the Id does not start with “CF:”, nor any that have SupplyTypes):
• ProcessingRule: SalesForecast in scenarios above S&OP Intermediate (Enterprise Data, Baseline,
etc.); Ignore in S&OP Intermediate and all descendant scenarios (Current S&OP, S&OP Candidate,
etc.).
• SpreadRule: None in scenarios above S&OP Intermediate; in S&OP Intermediate and all descendant
scenarios, set as required for spreading the Consensus Forecast beyond the disaggregation calendar.

For all HistoricalDemandCategory records that could be used to produce a Consensus Forecast in the
S&OP scenarios but have ForecastDetail records in the Baseline scenario (that is, the Type
ProcessingRule in the S&OP scenarios would be one of Forecast, ForecastOverride,
RebalancingAdjustment or RebalancingForecastOverride, but ForecastDetail records are being imported
above the S&OP Intermediate scenario):
• Type.ProcessingRule: None in scenarios above S&OP Intermediate; Forecast, ForecastOverride,
RebalancingAdjustment or RebalancingForecastOverride, as required, in S&OP Intermediate and all
descendant scenarios.
• Or, alternatively: 0.0% in scenarios above S&OP Intermediate; > 0.0% in S&OP Intermediate and all
descendant scenarios.
In support of the S&OP Application, users in the following groups should have the task flow noted
below set to open on sign in. For example, users in the Process Owner role would have the S&OP High
Level Process – Process Owner task flow selected to open on sign in.
If the customer has purchased other applications in addition to S&OP, then the task flows noted in the
documentation for the individual application should override what is described here.
There is no Control Set reference for this table; therefore there is no corresponding worksheet in
the Control Sets workbook, but the table can be accessed in the Control Tables workbook.
4.1.3.2 HistoricalDemandCategoryType
This table supports S&OP and is used to define how demand data associated with a given
HistoricalDemandCategory is processed.
Note that the AggregationRule and UnitType fields only function if the ProcessingRule is “Target”.
4.1.3.3 HistoricalDemandCategory
This table is used to categorize values in both the HistoricalDemandActual and HistoricalDemandSeries
tables, identifying categories of forecast demand (sales, customer, stat) as well as categories of actual
demand (shipments, consumption).
The categories of demand must be defined here. A standard set of categories is installed with a
variety of types (defined in the previous section). Different types have differing AggregationRule,
ProcessingRule and UnitType settings. It is the ProcessingRule that typically defines the purpose of
the category. It can be set to:
• “None”: Use this to ignore standard Target categories that you are not using. This will keep them out
of workbooks and drop‐down lists.
• “Actual”: This is used for storing historical demand actuals. Values in this category can be used in
calculating the statistical forecast.
• “Target”: This is used for defining target metric values against which the S&OP annual plan is
measured.
• “Forecast”: This is used for storing streams of forecast demands or forecast adjustments. Values in
this category can be used in calculating the consensus forecast.
• “ForecastOverride”: This is used for specifying values to override the calculated consensus forecast.
• “RebalancingForecastOverride”: This is used for specifying values to override the calculated
consensus forecast based on demand and supply balancing.
The AggregationRule of “Average” or “Sum” is only applicable to targets, while “None” covers
everything else.
The UnitType (“Quantity” or “Value”) is, similarly, only applicable to targets and determines if the target
is a currency or quantity. This will determine which field in the ForecastDetail and/or
HistoricalDemandSeriesDetail is significant (“Quantity” or “Value”) for the target.
The HistoricalDemandCategory record for targets also defines WarningLimit, CriticalLimit and an
associated DesiredResults value of either “High” or “Low”. These field values are only of significance to
target categories.
On install, there are 51 Target variables defined in the Application Variables worksheet of the
Application Settings workbook. During Base Data Integration, the five Historical Demand Categories
matching the measurements listed in section 1.3 of this document must be created in order for the
dashboard widgets to reflect Annual Plan targets. These targets will show up in the drop list of the
Annual Plan Category variable found in the S&OP Annual Plan or S&OP Constraint Annual Plan
workbooks.
Should you require additional targets to be defined for the deployment, you will need to create the
categories in the Historical Demand Category worksheet of the Control Tables workbook. Ensure that
the Type is changed from “None” to one that reflects a HistoricalDemandCategoryType of “Target”,
“TargetSum”, “TargetAvg”, or “TargetValue” to get the appropriate ProcessingRule, AggregationRule
and UnitType.
Assumption Statuses:
Closed
Open
There is no pre‐defined application resource installed that allows you to edit, delete or add these values
and they are not auto‐created. If you need to add/change them, then you will need to create a
worksheet on the AssumptionType and AssumptionStatus tables.
4.1.3.5 PartType
For all Parts that require a consensus forecast to be generated in S&OP, the Part Types loaded must
have a ProcessingRule of either “MPS” or “MPSConfig”; otherwise, the consensus forecast cannot be
generated.
4.1.3.6 AggregatePartCustomerType
This control table was added in 2014.4 SU2 to enable or disable records in the AggregatePartCustomer
table for processing when determining disaggregation rates.
ProcessingRule: indicates whether AggregatePartCustomer records that belong to this type are
used in generating disaggregation rates; can be set to “Ignore” or “Use”
In order for a given AggregatePartCustomer value to be used in calculating forecast disaggregation rates,
it must reference an AggregatePartCustomerType record that has a ProcessingRule set to “Use”.
The Settings worksheet actually edits the HistoricalDemandCategory record itself for categories
where Type.ProcessingRule = “Target”. This is where the WarningLimit and CriticalLimit are set for
each target category.
4.1.4.2 Part
If the parts are going to include supply planning of any sort then all the supply policy fields must also be
populated. Reference the MPS and Aggregate Supply Planning Applications for those requirements.
All Part Types loaded for this must have a ProcessingRule of either MPS or MPSConfig in order
for the consensus forecast to be generated.
In order to use any of the currency disaggregation or reporting, it is important to populate the
AverageSellingPrice.
If the part is also going to load backlog (SalesActuals) with forecast consumption of the
consensus forecast, then the following fields need to be populated:
o DemandTimeFence
o BeforeForecastInterval
o AfterForecastInterval
o SpreadForecastInterval
PartSolution.Part Class: This field is used to classify a part. The part classification is used to help
determine if the proper planning, safety stock and replenishment strategies are set based on the
category of part. It is an enum list field found on the PartSolution table referenced from the Part
table. Valid values are: None, FinishedGoods, RawMaterial, WorkInProgress. It is used to classify
inventory in the ‘Part Class Value’ widget found on the Inventory Planner dashboard.
PartSolution.Part Strategy: This field is used to identify the manufacturing strategy of a part
(e.g., Make-To-Stock, Make-To-Order, Assemble-To-Order). The strategy is used to help determine if
the proper planning, inventory, sales, and supply chain strategies are set based on the
manufacturing strategy of the part. It is a string field found on the PartSolution table referenced
from the Part table. If the customer ERP system does not contain this information it is expected
that it would be maintained by users directly in RapidResponse.
PartSolution.KeyPart: This is a Boolean field to identify a Part as being a Key or Critical part. Key
Parts are focused on in the S&OP analysis as they have pertinent limitations (long lead times,
single sourced, etc.) to the long term plan.
It is recommended that this table be set to not allow data update to delete records (found in the table
properties of the data model) in order to avoid unintentional cascading deletes in the PartCustomer, and
Historical tables.
As mentioned in the section “Pre‐defined Filters”, the effectivity date range of PartSource records can
be significant if the “NPI Items” filter is to be used. In particular, this filter assumes that NPI part records
can be identified by the first effective PartSource (PrimaryPartSource) having an EffectiveInDate value
that is on or after the system MRPDate minus 1 EffectiveDisaggregationCalendar.
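The NPI assumption above reduces to a simple date comparison. The sketch below is illustrative Python; the 28-day bucket length stands in for one EffectiveDisaggregationCalendar period, which in practice comes from the calendar definition.

```python
from datetime import date, timedelta

# Hedged sketch of the "NPI Items" filter assumption: a part counts as NPI
# when its first effective PartSource (PrimaryPartSource) has an
# EffectiveInDate on or after the system MRPDate minus one
# disaggregation-calendar bucket (28 days here, for illustration).
def is_npi_part(primary_source_effective_in, mrp_date, bucket=timedelta(days=28)):
    return primary_source_effective_in >= mrp_date - bucket

mrp = date(2016, 8, 29)
print(is_npi_part(date(2016, 8, 15), mrp))  # True: effective within the last bucket
print(is_npi_part(date(2016, 1, 1), mrp))   # False: long-established part
```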
Beyond effectivity dates, the other PartSource fields are only of significance to the supply planning
process.
4.1.4.4 PartUOMConversion
If a user is disaggregating a target from a higher level with a particular UOM, that target value will only
end up disaggregating to part/customers (or aggregate part customers) where there is a valid UOM
conversion factor for the UOM currently chosen. This is reasonable since we can’t assume (i.e. make up)
UOM conversion factors but the fact is that it’s invisible to the user. They don’t know that the
disaggregation is potentially happening to a subset of Part/Customers.
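The silent filtering just described can be made visible with a check like the following. This is an illustrative Python sketch; the data shapes are assumptions, not the product's API.

```python
# Sketch of the behavior above: a target entered in a particular UOM only
# disaggregates to part/customers that have a valid PartUOMConversion factor
# for that UOM. Part/customers without a conversion silently drop out.
def disaggregation_targets(part_customers, uom_conversions, target_uom):
    """Return only the part/customers reachable for a target in target_uom.

    uom_conversions: {(part, uom): factor} mimicking PartUOMConversion
    """
    return [
        (part, customer)
        for part, customer in part_customers
        if (part, target_uom) in uom_conversions
    ]

pcs = [("P1", "C1"), ("P2", "C1")]
conversions = {("P1", "CASE"): 12.0}  # P2 has no CASE conversion factor
print(disaggregation_targets(pcs, conversions, "CASE"))  # [('P1', 'C1')]
```

Comparing the returned subset against the full part/customer list would reveal exactly which records a disaggregated target cannot reach.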
Similarly, if the targets are revenue or cost targets, then for the targets to include a given
Part/Customer there must be either a Part.AverageSellingPrice defined, effective records in the
CustomerPrice table, or a Part.StdUnitCost.
4.1.4.5 Customer
The base table for the S&OP process is the PartCustomer table. As such, customers must be defined as
well. While we really don’t need to know much about the customers, at least one aspect of hierarchies is
usually associated with customer characteristics. As such, custom fields or references from either
Customer or CustomerGroup are often required.
It should be noted here that summary records in the PartCustomer table are typically handled by
creating a summary or decorated Customer Id. These special customers are usually maintained in
RapidResponse and not imported.
Also note that Customer has a Site reference, which is normally set up as a key reference. This may
be disabled, but only for the entire database. That is, if you set up the Customer table to be not
site-specific, then ALL Customer records MUST reference the blank Site; otherwise, none of the
Customer records should reference the blank Site. It is all-or-nothing.
It is recommended that this table be set to not allow data update to delete records (found in the table
properties of the data model) in order to avoid unintentional cascading deletes in the PartCustomer, and
Historical tables.
4.1.4.6 PartCustomer
PartCustomers must be defined for every required Part and Customer combination. This is the base
table for the S&OP process. It is not enough to just have the Part and Customer records defined
separately. Put careful thought into what combinations of Part and Customer need to exist in order to
support the S&OP application as this is often an area where records are needlessly inflated.
This applies to any Parts that are going to be used as part of the Inventory Management application
within the S&OP Process context. If PartCustomer records do not exist, the High Inventory Violations
and Low Inventory Violations widgets on the Supply Planner dashboard will not render data. It should
be noted that for these instances the Customer can be defaulted to ‘NoCustomer’ or some other generic
value in order to reduce the number of records generated.
ForecastItem: This is a specific reference to the ForecastItem that will manage the statistical
forecast for this PartCustomer. This is normally maintained by the Statistical Forecasting Setup
dialog (or equivalent) and is left pointing to the blank ForecastItem on import.
DisaggregationCalendar: Was added to the PartCustomer table in 2014.4 in order to define the
periods that the forecast quantities can be disaggregated into for a part customer. If this field is
null, the disaggregation calendar specified in the SOPAnalyticsConfiguration table is used for
disaggregation.
BaseKey: If this field is visible, it must always be left blank.
DemandType: This nullable reference to the DemandType may be left null. However, if the
calculated Consensus Forecast is expected to be spread from the DisaggregationCalendar to
something smaller (Month to Week or Day), then set this to a DemandType with the SpreadRule set
to “Spread” and a SpreadProfile provided. The recommended practice is to set this to a valid
DemandType.
NOTE: Whatever demand type is assigned, its processing rule should be set permanently to
“Ignore” in the S&OP Intermediate scenario. As part of the Closed Loop Within Cycle script, the
Consensus Forecast generated during the S&OP cycle is cross-scenario updated into the
IndependentDemand table in the Baseline scenario, where it drives demand in non-S&OP child
scenarios. Setting the processing rule to “Ignore” for the PartCustomer.DemandType value in
S&OP Intermediate ensures that duplicate demand is not driven within the S&OP scenario tree.
Pool: If forecast consumption by Pool is required, you can set the Pool for this PartCustomer
on this reference. Frequently, the Pool is set to the same value as the Customer.Id. If the
Pool is not equal to the Customer and one customer must belong to more than one Pool, then you
will need to create new Customer records with the Id and Pool concatenated (or some other
mechanism to make them unique by Pool), because the Pool reference is not a key reference on
the PartCustomer table.
OrderPriority: You can override the Part.DefaultPriority for forecasts from this PartCustomer
here.
MinimumShelfLife: You can override the Part.MinimumShelfLife for forecasts from this
PartCustomer here.
4.1.4.7 AggregatePartCustomer
This table was added in 2014.4 SU2 and can be used to group part customers together to define the
level at which forecast disaggregation occurs. Forecasts are generally created for part customers;
however, in some cases it might not be necessary to generate a forecast at such a detailed level.
For example, a part’s forecast might be made at an aggregate level for all customers within a given
sales region, or for all parts belonging to a given product family. The AggregatePartCustomer table
can be used to support generating forecasts at higher levels, and is used to link aggregate
PartCustomer records with actual part and customer combinations that are also defined in the
PartCustomer table. In cases where it is not necessary to disaggregate forecast values to the
detailed part and customer level, this can result in fewer records being generated and improved
database performance.
Generally, to model this situation, aggregate customer grouping values (e.g., customer regions)
should be added to the Customer table (or Part table, if grouping Parts into aggregate groups). Next, appropriate
PartCustomer records should be added to identify the new levels at which the Part(s) should be
aggregated. Finally, the AggregatePartCustomer records can be added to associate the detailed or
component part‐customer combinations with the aggregate part‐customer combinations for which the
forecast should be generated. See the Analytic and Data Model Guide for more details.
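The three modeling steps above can be sketched as a simple grouping. This is illustrative Python under assumed record shapes, not the product's data model API.

```python
from collections import defaultdict

# Sketch of the AggregatePartCustomer modeling steps: aggregate grouping
# values (e.g. customer regions) are added to the Customer data, aggregate
# part/customer rows define the forecast level, and AggregatePartCustomer
# rows link each detail part/customer to its aggregate.
def build_aggregate_links(detail_part_customers, customer_region):
    """Link each detail (part, customer) to its aggregate (part, region)."""
    links = defaultdict(list)
    for part, customer in detail_part_customers:
        aggregate = (part, customer_region[customer])
        links[aggregate].append((part, customer))
    return dict(links)

details = [("P1", "C1"), ("P1", "C2"), ("P2", "C1")]
regions = {"C1": "EMEA", "C2": "EMEA"}
print(build_aggregate_links(details, regions))
# {('P1', 'EMEA'): [('P1', 'C1'), ('P1', 'C2')], ('P2', 'EMEA'): [('P2', 'C1')]}
```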
4.1.4.8 HistoricalDemandHeader
This represents a unique combination of each PartCustomer and Category required. In fact, the keys are
the references to PartCustomer and to Category (HistoricalDemandCategory).
Actuals: Vector Set data type; represents the HistoricalDemandActual records associated with
the historical demand header
ForecastDetails: Vector Set data type; represents the ForecastDetail records associated with
the historical demand header
o Note that the currency for money fields on the vectors MUST be defined through the
header record and never directly on the Actuals or ForecastDetails vectors themselves
Upgrade Note: If you have upgraded from RapidResponse 2014.2 (or earlier) to RapidResponse 2014.4
(or later) and in the process have not converted the five standard tables to contain vector data, the
following differences exist in this table:
The one non-key field on this record is the ConsensusForecastWeight field. This is the override
value for the HistoricalDemandCategory.ConsensusForecastWeight: if it is set to anything other than
-1.0, it overrides the category value. This field should default to -1.0 and only be overridden for
specific headers (Part/Customer/Category). Note that the override applies to the header for all
dates (past to future). If a specific date range for the override is required, use the
HistoricalDemandHeaderTimePhasedAttributes table instead.
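The override semantics just described reduce to a one-line rule, sketched below in illustrative Python (the function name is an assumption, not a product API).

```python
# Sketch of the ConsensusForecastWeight override rule: a header-level weight
# of -1.0 means "no override; use the category value", and any other value
# replaces the HistoricalDemandCategory weight.
def effective_consensus_weight(header_weight, category_weight):
    return category_weight if header_weight == -1.0 else header_weight

print(effective_consensus_weight(-1.0, 0.6))  # 0.6  (category value applies)
print(effective_consensus_weight(0.25, 0.6))  # 0.25 (header override wins)
```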
There is a hidden BaseKey field. If it is visible, it must always be left blank.
4.1.4.9 HistoricalDemandHeaderTimePhasedAttributes
The purpose of this table is to hold date-effective overrides for the ConsensusForecastWeight for
specific Part/Customer/Category combinations (HistoricalDemandHeader). There are two keys on this
table along with three data fields. Note that this table is generally populated by the S&OP Demand
Planning Ratios workbook. Also note that, generally, a record is created for every
DisaggregationCalendar bucket in the worksheet.
Note that records in this table are only applicable when the Header.Category.Type.ProcessingRule is
set to “Forecast”.
Header: This is the key reference to the HistoricalDemandHeader that we are providing date‐
effective overrides for.
Id: This is a base key string that uniquely identifies this record within the set of records
referencing the same header. The value is not relevant but must be unique within the header.
As such, it is generally set to automatically generate a unique value on record creation.
EffectiveInDate: The date on which this record’s ConsensusForecastWeight value becomes
effective. A string version of this date might be a good choice for the Id field if you are
importing these records rather than just maintaining them in the S&OP Demand Planning Ratios
workbook.
EffectiveOutDate: The date on which this record’s ConsensusForecastWeight value is no longer
effective.
ConsensusForecastWeight: This is the final override value and is expected to be somewhere
from 0.0 to 1.0. Zero is a legitimate override to say that this header will not produce consensus
demand for this period. If you don’t want to see the override any longer, then delete the
record(s).
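The date-effective lookup these fields imply can be sketched as follows. This is illustrative Python; the record shape is an assumption, and treating EffectiveOutDate as exclusive is an assumption made for the sketch.

```python
from datetime import date

# Sketch of the date-effective override behavior: the time-phased record
# whose [EffectiveInDate, EffectiveOutDate) range covers the forecast date
# supplies the ConsensusForecastWeight; otherwise the header-level weight
# applies. A weight of 0.0 legitimately suppresses consensus demand.
def weight_for_date(time_phased_records, forecast_date, header_weight):
    for rec in time_phased_records:
        if rec["EffectiveInDate"] <= forecast_date < rec["EffectiveOutDate"]:
            return rec["ConsensusForecastWeight"]
    return header_weight

records = [
    {"EffectiveInDate": date(2016, 9, 1),
     "EffectiveOutDate": date(2016, 10, 1),
     "ConsensusForecastWeight": 0.0},  # suppress consensus demand in September
]
print(weight_for_date(records, date(2016, 9, 15), header_weight=1.0))   # 0.0
print(weight_for_date(records, date(2016, 10, 15), header_weight=1.0))  # 1.0
```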
4.1.4.10 HistoricalDemandOrder
This is a new table introduced in the Solutions Namespace in 2014.2. The purpose of this table is to
capture HistoricalDemandActual Order information for calculating Order level on time delivery metrics.
It contains the following fields:
Id: String key field containing the sales order number or order id.
Site: Reference to Site table to indicate what site the order belongs to.
CreatedDate: Date field for capturing the date the order was created in the ERP system.
4.1.4.11 HistoricalDemandActual
These records are associated with a particular HistoricalDemandHeader and they represent some form
of historical demand data, such as shipments. One category of these records is the normal source of
information for history in order to calculate the statistical forecast.
As of 2014.4, HDA records are stored in RapidResponse as a set of vector data on the
HistoricalDemandHeader table.
This table is not keyed. It is not expected that records will be deleted from here (except through a
regular table pruning task) and they cannot be referenced from another table.
There are two money fields. Normally, the money fields are expected to be provided in the currency
of the Header.PartCustomer.Part.Site.Currency reference, but this can be changed.
Line: This is an optional string field that may be populated if required by the customer for
reporting. It represents the order line of the sales order and it is not required for calculating on
time delivery as it is assumed the historical demand actual records are already the line level
detail.
LineCreatedDate: This is an optional date field that may be populated if required by the
customer for reporting. It represents the date the sales order line was created and can be used
for calculating sales order lead times if required.
Date: This is the primary date field effective for this record. It is the date associated with this
actual shipment. The date used should be consistent with the historical demand series dates.
For example, if the historical demand series dates represent the dates that parts ship from the
dock, then this date should be a dock departure date.
CommitDate: This is the date the order is committed to ship. It is currently information‐only but
is available for determining some metrics.
RequestDate: This is the date the customer requested the order to ship. It is currently
information‐only but is available for determining some metrics.
Quantity: This is the number of parts in this actual shipment.
Line Quantity: This is an optional field that may be populated if required by the customer for
reporting. It represents the sales order line quantity for the associated sales order line and
does not always equal the Quantity value on the historical demand actual record.
UnitPrice: The unit price for the historical demand. This value can be used to calculate revenue
associated with historical actual demand.
UnitCost: The unit cost for the historical demand. This value can be used in calculating margins
where the margin per unit is calculated as the difference between UnitPrice and UnitCost.
Route: This is an optional string field that may be populated if required by the customer for
reporting and represents the route the order took during shipping.
Shipset: This is a string field indicating if the record was part of a shipset and is used in
calculating order level on time delivery.
Order: This is a reference field to the HistoricalDemandOrder table described in the previous
section.
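The revenue and margin calculations the UnitPrice and UnitCost fields support can be sketched directly from the text. This is illustrative Python; the record shape is an assumption.

```python
# Sketch of the calculations described above: revenue associated with a
# historical actual is UnitPrice * Quantity, and margin uses the per-unit
# difference between UnitPrice and UnitCost.
def actual_revenue(rec):
    return rec["UnitPrice"] * rec["Quantity"]

def actual_margin(rec):
    return (rec["UnitPrice"] - rec["UnitCost"]) * rec["Quantity"]

shipment = {"Quantity": 40.0, "UnitPrice": 12.5, "UnitCost": 7.5}
print(actual_revenue(shipment))  # 500.0
print(actual_margin(shipment))   # 200.0
```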
Note: If no HistoricalDemandActual records are included in the data, the Forecast Value Add widgets
must be removed from the S&OP Dashboards as they will not render any data. These widgets are found
on the Monitoring tab of each dashboard.
4.1.4.12 ForecastDetail
This table holds current forecast data at the detail level (after any disaggregation calculations have been
performed). Each record pertains to a given part, customer, and category combination, and shows
details such as the forecast quantity and date. Different types of forecasts can be stored in this table
such as the statistical forecast, sales forecast, marketing forecast, and so on.
The values shown in this table are based on aggregate values entered or maintained through various
resources in RapidResponse. For example, details of the statistical forecast are based on values
generated by the Save Forecast command in the S&OP Statistical Forecast workbook. Details of other
types of forecast are based on values entered in workbooks belonging to the related constituent group
(for example, values might be provided through the S&OP Marketing Forecast workbook, S&OP Sales
Forecast workbook, S&OP Finance Operating Plan workbook, and so on).
The ForecastDetail table is also used to store targets when the Header.Category.Type.ProcessingRule is
set to “Target”.
As of 2014.4, this table is stored as vector set data on records in the HistoricalDemandHeader
table, and changes to the ForecastDetail insert definition were introduced.
Upgrade note: If you have upgraded from RapidResponse 2014.2 (or earlier) to RapidResponse 2014.4
(or later) and the deployment has not converted the five standard tables to contain vector data, the
following differences exist in this table:
Unlike the HistoricalDemandActual table, this table is keyed and we expect records to be created,
modified and deleted. We need to be able to uniquely identify records. The three keys defined are:
Header: This is the key reference to the HistoricalDemandHeader which defines the
Part/Customer/Category combination.
Date: The date of this forecast item. This date ultimately drives MRP if the forecast is used in
the ConsensusForecast.
Id: This is a base key string that uniquely identifies this record within the set of records
referencing the same header. The value is not relevant but must be unique within the header.
As such, it is generally set to automatically generate a unique value on record creation.
There are two money fields that are normally expected to be provided in the currency of the
Header.PartCustomer.Part.Site.Currency reference. The other (non‐key) fields that need to be
populated on this table include:
Quantity: The amount of this forecast. For Target records, this is the field that will be used if
the Header.Category.Type.UnitType is “Quantity”.
Value: The monetary value associated with this forecast. For Target records, this is the field that
will be used if the Header.Category.Type.UnitType is “Value”.
UnitPrice: Allows for a unique unit price to be specified for this forecast order. For example,
this might be used to reflect the price effective during a sales promotion or prices that vary by
customer. If a non‐negative value is provided here, it is always reported in the
EffectiveUnitPrice field (typically used for revenue calculations). If a negative value is provided
here, the unit price is calculated based on matching records in the CustomerPrice or Part tables.
ProtectQuantity: Indicates whether the value in the Quantity field is modified when data is
edited in a grouped worksheet. This field is used in the Advanced Data Editing dialog box as part
of the expression that determines which records are not edited when a grouped worksheet is
edited. Valid values are:
o “Y”: the Quantity cannot be modified in summarized worksheets.
o “N”: the Quantity can be modified.
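The EffectiveUnitPrice behavior described for the UnitPrice field can be sketched as follows. This is an illustrative Python sketch: the text only states that a negative value causes the price to be derived from matching CustomerPrice or Part records, so the exact lookup order below is an assumption.

```python
# Sketch of the EffectiveUnitPrice resolution: a non-negative
# ForecastDetail.UnitPrice is reported as-is; a negative value means
# "derive the price", here from a matching CustomerPrice record, falling
# back to the part's AverageSellingPrice (lookup order assumed).
def effective_unit_price(forecast_unit_price, customer_price, average_selling_price):
    if forecast_unit_price >= 0:
        return forecast_unit_price      # explicit price, e.g. a sales promotion
    if customer_price is not None:
        return customer_price           # matching CustomerPrice record
    return average_selling_price        # fall back to Part pricing

print(effective_unit_price(9.99, customer_price=8.0, average_selling_price=10.0))   # 9.99
print(effective_unit_price(-1.0, customer_price=8.0, average_selling_price=10.0))   # 8.0
print(effective_unit_price(-1.0, customer_price=None, average_selling_price=10.0))  # 10.0
```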
4.1.4.13 CustomerPrice
If prices are defined specific to customers for a part and/or time‐phased, then this information must be
provided in the CustomerPrice table. Note that if the price is not to be associated with a specific
customer (should be applied to all customers) but is still time‐phased, then the customer reference on
this table should be blank.
Other demands that should not consume forecast may be loaded here as well with an
Order.Type.ProcessingRule of “Regular”.
Occasionally, one DemandOrder is required to have lines (IndependentDemand records) that are a mix
of backlog that can consume forecast and others that may not. When this is true, define a set of
DemandStatus values that can be set on the IndependentDemand records with
ForecastConsumptionOverride set to either “Normal” or “DoNotConsume”, as required.
If the backlog is past due, the date from which it consumes forecast can be either the DueDate of the backlog or the DataDate, depending on the PartType.ForecastConsumptionDateRule and the Status.ForecastConsumptionDateRule.
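The past-due consumption date choice can be sketched as follows; this is an illustrative simplification in plain Python, and the rule values and precedence between the PartType-level and Status-level settings are assumptions:

```python
from datetime import date

def consumption_date(due_date, data_date, rule):
    """Pick the date from which past-due backlog consumes forecast.

    rule is an assumed "DueDate"/"DataDate" value standing in for the
    combined PartType/Status ForecastConsumptionDateRule settings.
    """
    if due_date >= data_date:
        return due_date            # not past due: consume at the DueDate
    return data_date if rule == "DataDate" else due_date

# Past-due backlog (due 1-Aug, data date 29-Aug):
print(consumption_date(date(2016, 8, 1), date(2016, 8, 29), "DataDate"))  # 2016-08-29
print(consumption_date(date(2016, 8, 1), date(2016, 8, 29), "DueDate"))   # 2016-08-01
```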
4.1.4.15 BillOfMaterial
If forecasting and consumption happen across “planning BOMs”, then MPSConfig-type BillOfMaterial records must be defined.
4.1.4.16 OnHand
OnHand is only required if you are doing supply planning.
FOR INTERNAL USE ONLY
NOTES:
1) If integrating with SAP, Supply Types of ‘FPO’ and ‘NB’ must be created to support Firm Planned
Order create and Purchase Requisition create transactions.
2) In order to update Unit Price on ScheduledReceipts that are created and sent back to the host system (the Firm Planned Order create and Purchase Requisition create transactions above), the value needs to be provided as the RAW value in the EBM. As such, the consultant needs to create a custom field to hold this information until the ability to generate transactions from worksheet payloads is available. In conjunction with this, the two Business Message definitions will need to be updated to use the custom field rather than UnitPrice, as they are defined out of the box.
As noted in the Scenario Structure section, these closed loop transactions are initiated from specific
scenarios.
Within an S&OP Cycle, upon publishing the S&OP plan, the S&OP Candidate scenario is committed to the S&OP Intermediate scenario, which then updates the Current S&OP scenario. Prior to committing the S&OP Candidate scenario, a scenario compare is performed against the S&OP Intermediate scenario. Those records that have changed (Consensus Demand Plan, Safety Stock Quantity, or Supply Plan records) are captured in the Closed Loop Within Cycle workbook. The Run Closed Loop Within Cycle Script Scheduled Task is initiated by the Process Owner prior to publishing the S&OP Cycle. The Closed Loop Within Cycle script performs three actions:
1) It modifies the OriginalRecordId on newly created scheduled receipt records (used for reconciliation purposes, which is discussed further down in this section);
2) It performs a cross-scenario update of the Consensus Demand Plan records into the IndependentDemand table in the Baseline scenario (used as the demand to drive requirements for non-S&OP applications); and
3) It triggers RI (Rapid Integration) transactions to be sent back to the host system for new/changed/cancelled ScheduledReceipts or modified Part.SafetyStockQty records. See figure 1 below.
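The three actions above can be outlined in a minimal sketch. This is plain Python standing in for the actual RapidResponse script; the data shapes, helper names, and the Id/date separator are all assumptions for illustration:

```python
def run_closed_loop_within_cycle(changed, baseline, host):
    """Illustrative outline of the Closed Loop Within Cycle script."""
    # 1) Stamp OriginalRecordId on newly created scheduled receipts so
    #    they can be reconciled after the next data update.
    for sr in changed["new_scheduled_receipts"]:
        sr["OriginalRecordId"] = f'{sr["RecordId"]}-{sr["Created"]}'
    # 2) Cross-scenario update: push the Consensus Demand Plan records
    #    into the Baseline scenario's IndependentDemand table.
    baseline["IndependentDemand"].extend(changed["consensus_demand_plan"])
    # 3) Trigger RI transactions back to the host system (modeled here
    #    as appending to an outbound queue).
    for sr in changed["new_scheduled_receipts"]:
        host.append(("ScheduledReceipt", sr["OriginalRecordId"]))
```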
Outside of an S&OP Cycle, prior to committing the Master Production Scheduling or the Order Management scenarios, a scenario compare is performed against the Baseline scenario. Those records that have changed (Purchase Requisitions/Orders, Work Orders, Sales Orders, Transfer Requisitions/Orders, Inventory Transfers, Safety Stock Quantity, or Demand Planning Parameters) are captured in the Closed Loop Outside Cycle Order Management workbook, Closed Loop MPS workbook, or Closed Loop OF workbook, depending on the application(s) purchased. The Closed Loop Outside Cycle Order Management Script and Closed Loop MPS Script are scheduled to run on a pre-determined schedule, depending on the scenario, prior to the scenario commit into Baseline. These scripts perform two actions:
1) They modify the OriginalRecordId on newly created scheduled receipt records (used for reconciliation purposes, which is discussed further down in this section); and
2) They trigger RI (Rapid Integration) transactions to be sent back to the host system for new/changed/cancelled ScheduledReceipts, changed/cancelled Sales Orders, and modified records.
Those transactions marked as ‘M’ or ‘Manual’ are identified to the user through five workbooks:
o Manual Closed Loop Constraint – identifies constraint property edits;
o Manual Closed Loop Demand Planning – identifies planning BoM edits;
o Manual Closed Loop Inventory Planning – identifies order policy, inventory disposition, and safety stock policy edits;
o Manual Closed Loop MPS – identifies planning BoM, Number of Days Supply, Part Source property, and Order Priority edits; and
o Manual Closed Loop OF – identifies independent demand record edits.
If the edited records identified in these workbooks are not manually entered into the ERP system prior to the next Data Update, the changes will be lost.
Newly created records that are sent back to the ERP system via RI transactions do not exist in the host system until they are sent; therefore, there needs to be a way to identify those records once they are inserted into the ERP system so that duplicate records do not result on the next Data Update into RapidResponse. To identify each record, a concatenation of the RapidResponse Record Id and the Date is copied into a field called OriginalRecordId on the applicable table and is sent on the outbound transaction.
This OriginalRecordId needs to be kept on the record when it is imported into the ERP system; therefore, it needs to be defined as a custom field in the host ERP system in every table that will have new records created via closed loop feeds. Also, when creating a new Purchase Requisition or Firm Planned Order, the host system needs to carry the OriginalRecordId over to the Purchase Order or Work Order if one is auto-generated. The OriginalRecordId then needs to be included in the data extract for the inbound transaction. Once the data update is performed, reconciliation within RapidResponse is completed using either the Data Reconciliation Within Cycle or Data Reconciliation Outside Cycle workbook.
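The Record Id + Date concatenation described above can be sketched in one line; the separator and date format shown here are assumptions, since the document does not specify them:

```python
from datetime import date

def make_original_record_id(record_id, on_date):
    """Illustrative OriginalRecordId: RapidResponse Record Id + Date."""
    return f"{record_id}-{on_date.isoformat()}"

print(make_original_record_id("SR000123", date(2016, 8, 29)))  # SR000123-2016-08-29
```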
Within an S&OP cycle, this reconciliation is done through a multi-scenario compare between the Baseline scenario and the S&OP Intermediate scenario. When a duplicate record is identified, a command runs to delete the record from the S&OP Intermediate scenario; the S&OP Intermediate scenario can then be updated. See figure 1 above.
Outside an S&OP cycle, this reconciliation is done through a multi-scenario compare between the Approved Actions scenario and the Baseline scenario. When a duplicate record is identified, a command runs to delete the record from the Baseline scenario; the Baseline scenario can then be updated. See figure 2 above.
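The duplicate-deletion step in both reconciliation flows can be sketched as follows; this is a plain-Python illustration of the compare, with assumed record shapes, not the actual multi-scenario compare command:

```python
def reconcile(local_records, erp_records):
    """Drop local records whose OriginalRecordId has come back from the ERP.

    Once the ERP copy of a record arrives on a data update, the locally
    created original is the duplicate and is deleted before the scenario
    is updated.
    """
    erp_ids = {r["OriginalRecordId"] for r in erp_records
               if r.get("OriginalRecordId")}
    return [r for r in local_records
            if r.get("OriginalRecordId") not in erp_ids]
```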
An important note about the closed loop and reconciliation processes: there must be a clear definition of where planning and execution are done (in RapidResponse or in the ERP system); otherwise, differences in planning policies, calendars, and so on could cause unexpected results from closed loop transactions.
More detailed information about the closed loop process offered in support of all applications is
outlined in the Closed Loop Process training course. It is mandatory that users implementing this
solution participate in this training course.
One sanity check that might be useful (beyond simply comparing the forecasts to the targets) is to ensure that ForecastDetails records are in fact defined for the required set of target HistoricalDemandCategory records, and that the WarningLimit and CriticalLimit for those category records are set reasonably.
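This sanity check could be scripted roughly as follows; the field names, and the assumption that WarningLimit should fall between zero and CriticalLimit, are illustrative rather than taken from the product:

```python
def check_categories(categories, forecast_details):
    """Report target categories with missing details or suspect limits."""
    detail_cats = {d["Category"] for d in forecast_details}
    problems = []
    for c in categories:
        if c["Name"] not in detail_cats:
            problems.append(f'{c["Name"]}: no ForecastDetails records')
        # Assumed reasonableness rule: 0 < WarningLimit < CriticalLimit.
        if not (0 < c["WarningLimit"] < c["CriticalLimit"]):
            problems.append(f'{c["Name"]}: limits not set reasonably')
    return problems
```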
The Data Integrity‐S&OP workbook can be used to monitor the accuracy of data that is supporting the
Sales and Operations Planning Application. Review the workbook and worksheet help for more detailed
information on the data integrity checks this resource performs.