Flow 4.2
Information in this document is subject to change without notice. For the most up-to-date
documentation, please contact the Flow Software team at:
support@flow-software.com
Contents
“You can only manage what you can measure” ...................................................................................... 8
Data ..................................................................................................................................................... 8
Information ......................................................................................................................................... 8
Action .................................................................................................................................................. 8
Understanding data … Quickly! ............................................................................................................... 9
Understanding data … Accurately! ........................................................................................................ 11
Collect ............................................................................................................................................... 16
Calculate ............................................................................................................................................ 16
Visualize............................................................................................................................................. 16
Time periods...................................................................................................................................... 18
Installation ......................................................................................................................................... 21
Measure Intervals.............................................................................................................................. 51
Measure Types .................................................................................................................................. 52
Aggregation Methods ............................................................................................................................ 54
Sum ................................................................................................................................................... 55
Average ............................................................................................................................................. 55
Minimum ........................................................................................................................................... 55
Maximum .......................................................................................................................................... 55
Range................................................................................................................................................. 55
First ................................................................................................................................................... 56
Last .................................................................................................................................................... 56
Delta .................................................................................................................................................. 56
Count ................................................................................................................................................. 56
Table .................................................................................................................................................. 79
Doughnut .......................................................................................................................................... 79
Gauge ................................................................................................................................................ 80
Lab 18: Create a Time Period Form for Data Entry ................................................................................ 91
Data
The diagram below illustrates your production facility ①, which produces vast amounts of data ②
about your processes. This data, in some form or another, is stored in one or more data stores ③. In
many cases these data stores are simple files, like Excel spreadsheets, but in other cases they are
industrial data Historians, SQL databases or even online data repositories.
Information
With the huge volumes of data collected in these data stores, how we “see and understand” this data
becomes important. You may be thinking “Advanced Analytics” or “Statistical Processing”, but these
powerful tools are not always necessary. There is a simpler, increasingly valuable option: effective
data visualization ④. Applying basic data visualization best-practices provides a
quick and accurate understanding of the data ⑤. This understanding creates a “picture” of how the
production facility is performing – a “picture” that can be compared to your organization’s goals.
Action
A comparison of actual performance to organizational goals allows for better and more frequent
decision making ⑥, which affects the production facility. The quicker and more accurately information
can be gained from the data ④, the faster the cycle of informed and effective decision making can be
iterated, ultimately enabling the organization to achieve its goals.
How many 3’s can you see in this string of numbers? How long does it take you to count them?
Going through all these numbers manually is not only error prone, but also time consuming.
How many 3’s can you see in this string of numbers? How long does it take you to count them?
The use of color helps us distinguish the 3’s from all the other numbers. Your brain’s visual cortex
“automatically” processes this data, making the task effortless and fast.
This example not only demonstrates how many 3’s there are, but also the distribution or pattern of 3’s
within the string of numbers, and that in itself provides additional information.
Everyone loves using Pie Charts, but which of the slices, A, B or C, is the biggest? How much bigger is it
than the others?
Your brain’s visual cortex is not so good at “seeing” angles. Turn the page and try again …
Which bar, A, B or C, is the biggest? How much bigger is it than the others?
Encoding the data points in this way has allowed us to more accurately obtain the information we
need.
Using size, rather than angles, is another key data visualization best-practice.
What is Flow?
Flow is a flexible production reporting and operational analysis solution for industry. Flow provides a
self-service environment that enables the understanding and continuous monitoring of context
enriched decision support information. Using Flow, you can collect, transform and calculate
information from multiple data sources automatically.
Flow prepares information for presentation and integration. By employing data visualization best-
practice, Flow transforms your measured data into information that is understood quickly and
accurately, allowing frequent and effective decisions to be made.
Visualize and share information contained in your Flow System via web-based reports and dashboards
or by using other Reporting or Visual Analytics tools. Schedule the sending of information, reports and
dashboards via email and other notification services.
This introductory video will give you an idea of where you would be able to use Flow in your
organization.
http://support.flow-software.com/Introductory-Video
Calculate
Information in Flow can be aggregated (rolled up) from smaller time periods (e.g. hours) to bigger
time periods (e.g. shifts / days / weeks / months)
Information in Flow can be used in calculations to create new information (e.g. KPIs, ratios,
efficiencies, etc.)
Information in Flow can be contextualized by production events (e.g. how much electricity was used
to make Grape juice compared to Apple juice during this month?)
Information in Flow can be validated and retrospectively edited with audit control and full version
history
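The first capability above, rolling hourly values up into bigger time periods, can be sketched as follows. This is an illustrative Python sketch, not Flow's actual implementation; the 06:00 day start and the sample values are assumptions for the example:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical hourly kWh values, keyed by period start
hourly = {
    datetime(2024, 1, 1, 6): 42.0,
    datetime(2024, 1, 1, 7): 38.5,
    datetime(2024, 1, 2, 5): 40.0,   # before 06:00, so it belongs to the 1 Jan reporting day
}

def roll_up_daily(hourly, day_starts_at=6):
    """Sum hourly values into reporting days that start at day_starts_at:00."""
    daily = defaultdict(float)
    for start, value in hourly.items():
        # Shift back by the day-start hour so early-morning hours fall
        # into the previous reporting day
        day = (start - timedelta(hours=day_starts_at)).date()
        daily[day] += value
    return dict(daily)

print(roll_up_daily(hourly))  # one reporting day (1 Jan) totalling 120.5 kWh
```

The same idea extends to weekly and monthly roll-ups by changing the grouping key.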
Visualize
The Flow System is a single repository for all data collected (automatically or manually) and
calculated. This makes it a perfect source of information for Reporting and Integration into other
systems (e.g. MES, ERP, etc.)
The Flow System serves report information directly without the need for re-querying the underlying
data sources. This enhances the performance of the reporting and visualization layer, making for a
vastly improved user experience.
Flow makes use of a built-in web-based report and dashboard server.
Flow makes use of a built-in messaging system to schedule the sending of Flow information, reports
and dashboards via various Notification Services (e.g. Email, SMS, Flow Mobile).
Other Visualization or Business Intelligence tools can be used to create visual dashboards, reports
and provide self-service analyses (e.g. Tableau Software, Qlikview, Microsoft PowerBI, Dream
Report, Wonderware Intelligence, etc.)
Management teams demand more than just trends. They expect context-rich information to be “lifted”
out of this detailed data. This involves summarizing the data into a form that is quickly and accurately
understood at a management level. The summarized information is not only demanded by
management teams, but is also important for empowering operators and team leaders to make their
own decisions.
To achieve this, they need a system that gives them decision support and insight. Let’s see how Flow
can help provide this.
Time periods
Flow transforms this detailed data into context-rich information by aggregating it into time and event
periods. Let’s consider the Filler 1 Electricity (kWh) totalizer. Its data is logged every second, but we’re
only interested in the number of kWh consumed every hour. If we “overlay” hourly time periods onto
this detailed trend, we can easily summarize how many kWh were used in each hour.
For the hour starting 06:00 and ending 07:00, the kWh consumed can be calculated as:
By repeating this process, a simple summary report of the kWh used every hour can be produced as
follows:
The process of summarizing data in Flow is known as data retrieval and aggregation. In the kWh totalizer
example, the data is retrieved from a data source for each time period and then aggregated to provide
a single piece of information for that time period (i.e. a single value for the hour starting at 06:00).
Flow makes use of different aggregation methods, depending on the detailed source data and how you want the
summarized information to be presented. In the kWh totalizer example, Flow used a Counter aggregation
method. Other aggregation methods include Minimum, Maximum, Range, Average, Sum, First, Last, Delta, Count,
Variance, Standard Deviation and Time in State. These will be discussed in more detail on page 54.
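The kWh totalizer retrieval above can be illustrated with a small sketch. This is not Flow's implementation; it shows a plain last-minus-first delta over one hour, and a real Counter aggregation would also handle counter rollover. The timestamps and readings are invented for the example:

```python
from datetime import datetime, timedelta

# Hypothetical totalizer samples: (timestamp, cumulative kWh reading)
samples = [
    (datetime(2024, 1, 1, 6, 0, 0), 1500.0),
    (datetime(2024, 1, 1, 6, 20, 0), 1512.5),
    (datetime(2024, 1, 1, 6, 59, 59), 1542.0),
    (datetime(2024, 1, 1, 7, 30, 0), 1560.0),
]

def hourly_delta(samples, period_start):
    """Aggregate a totalizer over one hour: last reading minus first.
    Returns None if the hour contains no samples."""
    period_end = period_start + timedelta(hours=1)
    in_period = [v for t, v in samples if period_start <= t < period_end]
    if not in_period:
        return None
    return in_period[-1] - in_period[0]

print(hourly_delta(samples, datetime(2024, 1, 1, 6)))  # 42.0 kWh for 06:00-07:00
```

Repeating this for each hour yields the summary report shown earlier.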
Event periods
Let’s take the Filler 1 Electricity (kWh) totalizer example a little further. For Filler 1, the Tag Historian is
storing data related to the Filler’s current product run. The detailed data relating to the Product is very
useful in terms of adding additional context to the information we “lift” out of the Tag Historian.
Flow understands the start and end of event periods. In this example, Flow starts an event when the
value of the Product changes from 0 to a positive number. Flow ends the event when the value of the
Product goes back to 0. Flow then “overlays” the event period onto the kWh detail and calculates how
many kWh were used during that production run:
By repeating this process and mapping the Product to a reportable description, a simple summary
report of the kWh used per production run, per product can be produced as follows:
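The event-detection rule described above (start when the Product value goes from 0 to positive, end when it returns to 0) can be sketched like this. The sample data is invented and the logic is a simplified illustration, not Flow's engine code:

```python
def find_event_periods(samples):
    """Detect event periods in (time, product) samples: an event starts
    when the product changes from 0 to a positive value and ends when it
    returns to 0. Returns (start, end, product) tuples."""
    events = []
    current = None  # (start_time, product_code) of the open event
    for t, product in samples:
        if current is None and product > 0:
            current = (t, product)
        elif current is not None and product == 0:
            events.append((current[0], t, current[1]))
            current = None
    return events

# Two hypothetical production runs: product 7, then product 9
samples = [(0, 0), (1, 7), (2, 7), (3, 0), (4, 0), (5, 9), (6, 9), (7, 0)]
print(find_event_periods(samples))  # [(1, 3, 7), (5, 7, 9)]
```

Overlaying each returned period onto the kWh detail, as in the previous section, gives the consumption per production run.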
Installed Components
Flow Bootstrap – communication between all Flow components and modules
Flow Config – configuration environment, create events, measures, calculations, monitor system
Deployed Components
Flow Platform – coordination between components in a Flow System
Flow Engine – automatic data source retrieval, calculation execution and limit evaluation
Flow Message Engine – automatic processing and scheduling of messages
Flow Report Server – Web-based Report and Dashboard Server
Flow Database – Microsoft SQL Server database to structure and store the Flow System’s data. A
Microsoft SQL Server installation is required in order to deploy a new Flow System.
The Flow System can be installed as a “stand-alone” system on a single machine, or it can be installed as a
distributed system across multiple machines. A distributed system should be used for large systems where engine
load balancing is required.
Note: Please make sure you have already installed Microsoft SQL Server, preferably as the “default instance”.
To get started with Flow, everything you need is contained in a single Windows Installer file (.msi). The
name of the installer file is “Flow 4.0.x.x.msi”, where the 4.0.x.x is the build number. The build number
relates to a specific release, which can be confirmed at support.flow-software.com.
Installation
Run the Flow Installation Package:
Click “Next” …
Select the installation path and click “Next” to begin the installation.
Distributed Installation
In the “real world” you would install the separate components of Flow on different machines. For example:
The Bootstrap would be installed on servers where you want to deploy the Flow Platforms, Engines, Message
Engines and Servers.
The Config Tool would be installed on your developers’ (or even business users’) computers for configuring
measures, events and reports.
By default, the Flow Bootstrap Service is installed with its Startup Type set to Automatic. Confirm this
setting by opening its Properties …
Select the “Log On” tab and change the “Log on as:” option to a specific Account that has access to the
Microsoft SQL Database where the Flow Database will be deployed. This Account must have local
Administrator rights on the server where it is running. The local Administrator rights are required for
Flow to create Event Viewer log sources and start internal web service components.
Click “OK” and restart the Flow Bootstrap Service when prompted.
Some Data Sources that Flow will collect data from may require a special Account to allow access. An example of
this is Wonderware Historian. It is recommended that the Wonderware Network Account be used to run the Flow
Bootstrap Service.
When running Flow Config for the first time, you will be presented with the "Connect" dialog box. Use
this to create a new Flow System.
Name – This is a friendly name that you can use to describe your Flow System. Name this new Flow
System "The Juice Factory"
Server – This is the name (or IP address) of the Microsoft SQL Server where your new Flow System
will be created. If your SQL Server has a “named instance”, then use the full instance name. By
default, this is set to the name of the computer you are currently using.
Database – This is the name of the SQL Database where the data for your new Flow System will be
stored and organized. By default, the database will be named “Flow”. If you already have a
database named “Flow” on this SQL Server, you will need to give your new one a different name.
Username and Password – If you need to use SQL Authentication to create and access the SQL
Server, then specify the SQL login name and password; otherwise leave these fields blank to use
Integrated Security (i.e. your Windows logged-on user account).
If necessary, change the “Time Zone” before creating your Flow System. The Time Zone of your system
cannot be changed once it has been created.
Click “Create” and Flow will deploy a new database to the Microsoft SQL Server and set it up for first-
time use. Once this has been completed, Flow will open your new system.
Overall Layout
Flow Config has been designed to be a drag ‘n drop environment. The layout of various tree views
enables rapid configuration of new measures, events, reports and messages. We think you’ll love it.
Model View
This is the “Model” tab on the left. It represents the Physical Model of your production facility. Flow
allows this model to be completely flexible, however, it is advisable to follow a structure like S88 or S95
where possible. It is a good idea to define a naming standard early in your configuration process.
Notice the “Deployment” and “Users” tabs at the bottom of the Model view. The Deployment tab
shows a model of your Flow System. It consists of Platforms, Engines, Message Engines and Report
Servers. The Users tab shows the users and groups assigned to the Flow System.
Flow “Zone”
This is a multi-purpose toolbar that lets you drag and drop objects onto its various icons. You will use
these icons extensively while building your Flow Model.
Information Model
This is the “Reports” tab on the right. It represents an Information Model. Like the Model view, it is
completely flexible, however, it is advisable to model this view on the structure of your report
consumers (i.e. your audience).
Notice the other tabs at the bottom of the Information Model, namely, “Forms”, “Messages”,
“Toolbox”, “Context” and “Data Sources”. The Forms tab allows you to create data entry (or data
validation) forms. The Messages tab allows you to define and schedule the sending of information,
reports and dashboards via email and other notification services. The Toolbox tab allows the definition
of global system objects, like enumerations and functions. The Context tab allows you to define your
calendars and shift patterns, as well as model attributes. The Data Sources tab allows you to configure
data connections and view their namespaces.
Defaults
Along the top of the Flow Config window is a “Defaults” toolbar. When you are creating many new
Measures and Events, you will use the Defaults to help streamline your development process.
Editor Space
In the center of the Flow Config window is a space where all the object editors will open up. All objects
(i.e. Measures, Events, Reports, Forms, Messages, Engines, etc.) have their own editors, which will open
up when the objects are double-clicked.
Calendars and Shift Patterns tell Flow how to structure time-based data for reporting purposes. For
example, your reporting day may start at 06:00 in the morning or your reporting year may start in
March. The Flow calendars allow you to configure these things. When the Flow Engines run, they will
use the Calendars to create reporting “time periods”.
Select the “Context” tab under the Information Model view and double-click the “Production” calendar
in the “Calendars and Shifts” view …
If you prefer to manage time periods yourself, you can select the “Manage this calendar externally”
option and then manually create time periods in the Flow database using the
“CreateManualTimePeriod” stored procedure. Otherwise, in the calendar editor you will be able to set
the following properties:
Year starts in – used to define the month in which the reporting year will start (e.g. financial year)
Week starts on – used to define the day on which the reporting week will start.
Day starts at – used to define the hour and minute at which the reporting day will start.
Hours split into – used to define the duration of “minutely” time periods. It is important to note
that “minutely” in Flow does not mean a single minute, but rather a sub-hourly period that has this
duration. The smallest “minutely” time period duration is 5 minutes. If this setting is set to 60
minutes, then Flow will not allow the creation of minutely measures.
The above Flow Calendar settings can only be modified when all Flow Engines are undeployed.
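The “minutely” setting above can be illustrated with a short sketch that splits a reporting hour into sub-hourly periods of a configured duration. This is an illustration only, not how Flow generates its time periods internally:

```python
from datetime import datetime, timedelta

def minutely_periods(hour_start, duration_minutes):
    """Split one reporting hour into "minutely" sub-periods of the given
    duration. Flow's minimum minutely duration is 5 minutes."""
    if not 5 <= duration_minutes <= 60:
        raise ValueError("duration must be between 5 and 60 minutes")
    step = timedelta(minutes=duration_minutes)
    start, end = hour_start, hour_start + timedelta(hours=1)
    periods = []
    while start < end:
        periods.append((start, min(start + step, end)))
        start += step
    return periods

# An hour split into 15-minute periods gives four sub-periods
print(len(minutely_periods(datetime(2024, 1, 1, 6), 15)))  # 4
```

With the setting at 60 minutes, each “minutely” period would simply coincide with the hour, which is why Flow then disallows minutely measures.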
The Refresh Offset for a measure is the “delay” or offset from the “Period End” to the time the retrieval
of a measure will be performed by the Flow Engines. Each calendar has a set of default refresh offsets
that will be used when a calendar is first linked to a measure.
Flow has the ability to automatically remove old data. This helps keep the system well maintained. For
example, if we set the minutely time period’s Purge Age to 90, the Engines will automatically remove
minutely measure data that is older than 90 days. By default, the Purge Age for each “Time Period”
type is set to 0 days, which means that purging is disabled.
Purging is recommended for large systems that have accumulated a few years of data. If purging is not configured,
your Flow System may perform slowly.
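The purge behavior described above amounts to the following rule, sketched here for illustration (the values and dates are invented, and Flow performs this inside the Engines, not via user code):

```python
from datetime import datetime, timedelta

def purge(values, purge_age_days, now):
    """Drop values older than purge_age_days before 'now'.
    A Purge Age of 0 disables purging entirely."""
    if purge_age_days == 0:
        return values
    cutoff = now - timedelta(days=purge_age_days)
    return {t: v for t, v in values.items() if t >= cutoff}

now = datetime(2024, 6, 1)
values = {
    datetime(2024, 1, 1): 1.0,   # older than 90 days -> purged
    datetime(2024, 5, 1): 2.0,   # within 90 days -> kept
}
print(purge(values, 90, now))
```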
Shift Patterns
Expand the “Production” calendar. By default, your new Flow System’s “Production” calendar has a
pre-configured Shift Pattern. Shift Patterns in Flow are defined by a Date and Time from which the Shift
Pattern is valid. Additional Shift Patterns can be added to the calendar, as long as they have a different
Date and Time. This mechanism provides the ability to change Shift Patterns over time.
Notice that this default Shift Pattern has a 3 x 8 hour shift scheme, repeating each day of the week. The
start of the week and the start of the day are both defined by the Shift Pattern’s parent Calendar.
By definition, a Shift Pattern has a duration of one week (i.e. 7 “Production” days). Flow will repeat the
pattern from the Date and Time definition every week, up until a new Shift Pattern definition is found.
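For a daily-repeating scheme of equal shifts, working out which shift is active at any given time reduces to simple arithmetic against the day start. The sketch below is an illustration under that assumption (equal shift lengths, daily repetition), not Flow's shift engine:

```python
from datetime import datetime

def active_shift(day_start_hour, when, shift_hours=8):
    """Return the 0-based shift index active at 'when', for equal shifts
    of shift_hours starting at day_start_hour each day
    (e.g. 06:00, 14:00 and 22:00 for a 3 x 8 hour scheme)."""
    hours_into_day = (when.hour - day_start_hour) % 24 + when.minute / 60
    return int(hours_into_day // shift_hours)

print(active_shift(6, datetime(2024, 1, 1, 7)))   # 0 (the 06:00-14:00 shift)
print(active_shift(6, datetime(2024, 1, 1, 23)))  # 2 (the 22:00-06:00 shift)
```

Shift patterns with unequal shifts, or patterns that vary across the week, need the full week-long definition described above rather than this shortcut.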
If you needed to start a new 2 x 12 hour shift scheme, starting on Sunday 1st November 2016, you would
create a new Shift Pattern named “2016-11-01 06:00” under the “Production” Calendar. You would
then open the Shift Pattern and shuffle the individual shifts around as required. If you needed to
rename or add a new “Shift” type, you will find the Shifts defined in the “Toolbox” tab, under “Shifts”.
You can drag these “Shifts” onto the Shift Pattern canvas to create and manage your shift patterns.
Select the “Deployment” view tab at the bottom of the Model View …
If you are still having communication errors, confirm that port 4501 on that
computer has not been blocked by the firewall.
Once you have a running Platform (i.e. no white disk), you can right-
click on the Engine, and click “Deploy”.
The white disk will immediately turn red, but should disappear once
Flow has confirmed that the Engine is running. Allow a few seconds
for the Engine to start.
If the red disk on the Engine does not disappear, it is likely that the
Flow Bootstrap Service has not been configured correctly. Please
review the Flow Bootstrap Service configuration on page 23.
To assist with any troubleshooting, Flow maintains an error and warning log in the Windows Event
Viewer on the computer where the Flow Bootstrap Service is running. Open the Windows Event Viewer
…
You will find a log called “Flow” under the “Applications and Services Logs”. If you see any Errors in the
Flow log, select it to view the error message. This message will be useful for you or your Flow support
contact to troubleshoot any problems with your Flow System.
The “Flow” log is created on any computer running the Flow Bootstrap Service.
To complete the deployment of your Flow System, right-click on the “Message Engine” and “Flow
Report Server”, and click “Deploy”. A fully deployed Flow System will have no red or white disks
displayed in the deployment view.
By default the “Flow Report Server” uses port 80. If the computer running the “Flow Report Server” has another
web server running on it, make sure the default Ports do not clash (e.g. Internet Information Server, SQL Server
Reporting Services, etc.) If necessary, change the “Flow Report Server” Port setting to 8000. You can access the
“Flow Report Server” settings by double-clicking it. See section on page 156.
You can also use the Flow Zone toolbar to drag new objects onto your model.
Let’s assume you have a good understanding of your Juice Factory layout. Create the following folder
structure to represent the Juice Factory facility:
Model Locations
Flow provides the ability to set a “location” at each level in your Model View. A “location” is defined in
the Toolbox. Let’s create a location that can be used in your Model. Select the “Toolbox” tab, right-
click on the “Locations” item and create a new location. Use a location appropriate to your Model:
Now that you have defined a location, let’s allocate it to the root folder in your Model View. Double-
click “The Juice Factory” folder to open its editor and then set its location:
All folders in the Model View, including child folders, can have their location set. If a child folder is not
explicitly set, it will inherit the location of its parent folder by default.
Where Flow is used to provide information to higher-level visualization tools (e.g. Tableau, Qlikview, etc.), these
location properties are exposed in the Flow SQL database views (see section “Flow Database Views” on page 158).
Data Sources
At the top, the Flow System’s available Data Source types are listed.
By default, a new Flow System will have the following Data Source
types available:
Microsoft SQL
OPC HDA (connects to most Industrial Historians)
Simulator
Wonderware Historian (external toolkit installation required)
Wonderware Online
The Simulator is used for demonstration and training purposes.
Namespace
Once a Data Source connection has been added to the Flow System,
selecting it will populate its “Namespace”. The “Namespace” is
provided by the Data Source (e.g. Wonderware Historian provides a
folder tree containing all the tags configured).
The “Namespace” also provides a place where you can create your own folders and items. You can use
this functionality to create “shortcuts” to your favorite tags, or “templates” that would simplify the
creation of new measures (e.g. SQL query template when using the Microsoft SQL Data Source).
For this lab, let’s create a connection to the Simulator Data Source. Right-click on the Simulator Data
Source type and click “New”. Give the new Simulator connection the name “Historian”, and click “Save”
Select the new “Historian” connection in the Data Source view. Notice how the Namespace displays
the folders and tags available in the “Historian”. Think of the Namespace as a “window” directly into
the configuration of the “Historian”. If the Historian supports engineering units, they will be displayed
as part of the tag item. If the Historian supports tag descriptions, they will be displayed as tooltips when
you hover over the items.
For this lab, let’s create a single custom namespace item so that
you know how to do it when you need to.
Update the properties of the new item as per your requirements. In this example you can set the Tag
property to “020-FQ-001.PV” (an easy way to do this is to drag the “Tag” property of the actual 020-
FQ-001.PV item onto the Tag textbox). Change the Aggregation property to “Counter”.
Now you can use this custom item just like you would use any standard Data Source item. You will
see how to do this in the following labs.
Flow supports multiple connections. This means you can connect many Data Sources to your Flow System. You
may have a number of SQL Databases and Historians all connected to your Flow System.
When you have a large system, you can see where items from each Data Source are being used by clicking “Detail”
to open a Data Source dependency view:
Given that you know a few of the settings required to report on the Boiler Temperature, it is useful to
pre-select the Measure defaults as described on page 29.
Default Time Base – set this to Hourly since we want an hourly summary of the temperature.
Default Format – set this to 0.0 (to give us 1 decimal place).
Default Unit of Measure (UOM) – ignore this for now because the Historian supports engineering
units.
Default Backfill Date – set this to the beginning of last month (notice the hour starts at 06:00 based
on our calendar definition).
Drag the Metric icon from the Flow Zone onto the “Steam” folder …
By default the new Metric will take on the name of its parent folder.
Locate the 010-TT-001.PV Boiler Temperature tag from the Historian, drag it across to the Model View
and drop it onto the “Steam” folder.
When you dropped the tag onto the metric, Flow created a measure
for you. Notice how the icon describes a few of the measure’s
properties:
You should see the white disk appear on the measure and recursively up the model tree. As in the
deployment view, this white disk indicates that there are undeployed objects in that branch.
The Engine you deployed earlier will now start processing this measure. The Engine will go and collect
summary information from the Historian for every hour back to the Backfill date and time, catch up to
“now”, and then continue to process every hour from now on.
If you select the “Deployment” view tab, you will notice the new Metric is allocated to the Engine for
processing. If more than one engine was configured in a distributed architecture, you could deploy
your metric to any of those engines.
Measure Editor
Let’s have a look at what is happening behind the scenes. Double-click on the new measure to open its
Editor …
General Properties
The top section of the Measure Editor displays a few general properties for the Measure:
Description – Measure description (in this case, the description was pulled through from the
Historian tag’s description)
Format – report format used to display the summary values (in this case, 1 decimal place)
Unit – unit of measure (in this case, the unit of measure was pulled through from the Historian tag’s
engineering unit. This is why we didn’t need to set the Default Unit of Measure.)
Backfill – the date and time used by the Engine to go back in history and retrieve the summary
information from the Historian
Context
By default, the Context section will show a chart of the summarized information relating to the
measure. In this case, the chart is displaying the last 12 hours of average Boiler Temperature in ˚C for
each hour. The vertical cursor line represents the following states:
Green - the measure is running. The cursor represents the last successful processing time.
Red - the measure is running, but has been "Backfilled". The cursor will turn Green as soon as the
Engine accepts the Backfill date.
Black - the measure is not running (i.e. it has been undeployed).
The same information can be displayed in a grid. Select the Grid tab …
Period Start – the start of the time period (in this case the start of the reporting hour).
Period End – the end of the time period.
Value – the formatted summary value retrieved from the Historian.
Quality – the OPC Quality of the data used to produce the summary value (192 = Good, 0 = Bad)
Duration – the duration of the time period in milliseconds.
Preferred – indicates whether this version of the value for the time period is used for reporting.
Version – the version of the value for this time period.
Captured – date and time when the value was actually retrieved / calculated / edited.
User – indicates if and who made changes to the measure value via the Flow Server.
The “Context” section contains configuration relating to what context the measure’s values are
summarized against. In this case, you will notice the “Production” calendar has already been added to
this measure’s context. Being an hourly measure, the Engine will process it against the “Production”
calendar’s definition for hourly time periods. In this case, the “Production” calendar’s hourly time
periods are standard hours (e.g. 06:00 to 07:00), but you could have set the calendar up for hours to
start at 15 minutes past the hour (e.g. 06:15 to 07:15).
Notice that a default “Refresh Offset” of 60 seconds has been configured for this calendar context. This
means that Flow will only attempt to get the current hour’s summary information from the Historian
60 seconds after the hour is completed. More than one “Refresh Offset” can be configured if required.
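The relationship between period end, refresh offsets and retrieval times can be sketched as follows. This is an illustration of the scheduling rule, not Flow's scheduler; the offsets shown (60 seconds and one hour) are example values:

```python
from datetime import datetime, timedelta

def retrieval_times(period_end, offsets_seconds):
    """Retrieval times for one time period: one per configured refresh
    offset. Later offsets let the Engine re-read the source after
    late-arriving data has been stored."""
    return [period_end + timedelta(seconds=s) for s in sorted(offsets_seconds)]

# An hour ending 07:00 with offsets of 60 s and 3600 s
times = retrieval_times(datetime(2024, 1, 1, 7), [60, 3600])
print(times[0])  # 2024-01-01 07:01:00
print(times[1])  # 2024-01-01 08:00:00
```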
Processing
Expand the “Processing” section. Flow allows you to project a measure into the future by a number of
intervals. By default, a measure is set to process "now" (i.e. 0 periods into the future). Setting the "Projected" property of a measure to a positive integer sets the measure to process into the future. Notice that the chart in the "Context" section will show the "future" periods in the green
shaded area.
Projected measures are useful for information relating to plans or predictive calculations. For this lab,
leave the “Projected” value as 0 periods.
Retrieval
Expand the “Retrieval” section. Notice that this section contains information about the tag you dragged
across from the Historian Namespace.
Tag – the "Historian" tag for which summary data is retrieved (this was populated during the drag 'n' drop action, but can be edited if required; another tag can also be dragged onto this textbox).
Aggregation – this determines how the detailed data (i.e. high-resolution data) should be
summarized into an hourly value for reporting purposes. By default, Flow uses the average
aggregation method.
Scaling Factor – this is a factor that the resultant summarized value is multiplied by in the case
where scaling is required (e.g. simple unit of measure scaling).
Filter Tag – when retrieving data from the Historian, Flow can use the value of another tag (or the
same tag) to filter out unwanted detailed data. This is discussed in detail in a section on page 154.
Dependents
This section displays any other objects in your Flow System that depend on this measure. Examples
include:
Change Log
This section displays a change log for this specific measure, from creation to deployment. Any changes
that are made to the measure’s configuration will be logged here.
To confirm the Change Log functionality, rename the “Steam” measure in the Model View to “Steam
Temperature”. Refresh the Change Log and notice the new log relating to the rename action.
Select the “Data Sources” tab at the bottom of the Information Model and double-click the “010-TT-
001.PV” Boiler Temperature item …
The “Data Source Preview” is a useful view of the detailed data. After opening the “010-TT-001.PV”
tag, play with “Zoom In”, “Zoom Out”. Double-click to zoom in. Notice the right-click context menu
options for each chart: “Pin to Zero”, and each item: “Format”, “Style”, “Width”, “Move” and
“Remove”.
The "Data Source Preview" window has two chart sections, top and bottom. By default, the top section is used; however, additional tags, measures and events can be added to both sections.
Drag the “Steam Temperature (˚C)” measure from the Model View onto the bottom section. By default,
the “Pin to Zero” setting is unlocked. Notice how the time axis is always kept in synchronization
between the top and the bottom sections. This is useful for alignment and troubleshooting.
Remove the "Steam Temperature (˚C)" item from the bottom section and add it to the top section. You should now see the summarized measure values, changing every hour, overlaid on the detailed values. Zoom out to about 8 weeks of data. You should notice that the legend for the detailed values adds the word "Sampled".
Be aware of "Sampled": it means the chart is showing only a subset of the detailed data, evenly spaced over the period, so some important points may not be displayed. Once you zoom in enough that the word "Sampled" disappears, you know that all the data points are displayed. Note that the grid view will always show all data points.
Understanding Measures
As you have seen from the previous labs, a Measure can summarize data from a data source and store
the resultant information in your Flow System. In the previous labs, you created a Measure that
summarized data every hour. This means that for every hour (described by your calendar’s definition),
a summary value was created and stored in the Flow System for that Measure. However, Flow handles
more than just hourly time periods. Let’s see …
Measure Intervals
The following Measure intervals can be configured in Flow (notice the associated icons):
Minutely
Hourly
Shiftly
Daily
Weekly
Monthly
Quarterly
Yearly
Each Measure created in your Flow System will be associated with one of these intervals. Once created,
a Measure’s interval cannot be modified. If you create an hourly measure for the average Boiler
Temperature, but you also require the daily Boiler Temperature average, you will need to create a new
daily measure.
It is important to note that “Minutely” in Flow does not imply a single minute time period, but rather a
sub-hourly period that is defined by your calendars. The available “Minutely” durations are:
5 minutes
10 minutes
15 minutes
20 minutes
30 minutes
The reason Flow has these “Minutely” time periods is to accommodate the aggregation (roll up) of data to an
“Hourly” time period.
Measure Types
In the previous labs, you created a Measure that retrieved detailed data from a data source (i.e.
Historian) and summarized it into a single value stored in the Flow System. This Measure Type is known
as "Retrieved". However, Flow allows the configuration of other Measure Types as well. Let's discuss …
Retrieved Measures
Retrieved Measures are used to collect data from the data sources you have connected to your Flow
System. These measures are responsible for collecting, summarizing and storing data from the “outside
world” in your Flow System.
Most Flow Data Source types represent tag-based time series data. For this reason, Flow has defined
a number of aggregation methods to standardize how detailed data is summarized into Flow. A detailed
description of how these aggregation methods function is provided on page 54.
Note that Flow does not replicate the detailed data from your data sources in the Flow System. It will only store
the summarized information in the Flow System.
Manually Entered Measures
It is not always possible to retrieve information automatically from a data source. Some information is simply not available in any data source. For this reason, Manually Entered Measures are used to allow people to insert information into your Flow System. Examples include:
Instrument data that isn't available in your control systems and historians
Calculation factors that you will want to change over time
Aggregated Measures
Aggregated Measures are used to perform roll up calculations of other measures in your Flow System.
For example, you have already created a measure that summarizes the Boiler Temperature from your
Historian every hour. You can configure a new Aggregated Measure to calculate the daily average of
these hourly values. This means that the new daily measure does not have to re-query the detailed data from the Historian to provide the daily summary values; instead, it uses the information Flow has already collected from the Historian for the hourly values. Furthermore, if any of the hourly values
need to be edited via the Flow Server, Flow will automatically re-calculate the daily Aggregated
Measure, thus maintaining the integrity of the information stored in your Flow System.
By using Aggregated Measures, you not only reduce the retrieval load on your data sources, but also increase the
performance of your Flow System.
Calculated Measures
Calculated Measures are used to configure your own calculations based on one or more measures in
your Flow System. This is an extremely powerful feature which allows you to generate additional
information, just like you would be able to do in Excel. For example, if you had an hourly measure for
production counts, and an hourly measure for electricity used, you could create an hourly calculated
measure for the ratio of electricity used per unit of production.
In addition to allowing the configuration of calculations, Flow has a built-in library of common functions,
similar to what you would find in Excel. You can even create your own User Defined Functions to
standardize and simplify your calculation expressions.
Flow allows you to use Calculated Measures inside other Calculated Measures, making nested calculations
possible.
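The electricity-per-unit example above amounts to a simple ratio of two input measures for the same time period. A sketch in Python (Flow expressions are actually written in C#; the names and figures here are hypothetical):

```python
def electricity_per_unit(electricity_kwh, production_count):
    # Hypothetical hourly calculated measure: ratio of two hourly inputs.
    if production_count == 0:
        return None  # avoid a divide-by-zero when nothing was produced
    return electricity_kwh / production_count

print(electricity_per_unit(1200.0, 300))  # 4.0 kWh per unit
```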
Aggregation Methods
When retrieving detailed data from tag-based time series data sources (i.e. Historians), Flow makes use
of a number of aggregation methods to standardize how this data is summarized and stored in the Flow
System. In a previous lab, you created a measure that calculated the average value of the Boiler
Temperature for each hour. The aggregation method you used was “Average”, but there are a few
others you should know about. Let’s discuss each of these aggregation methods in detail.
Let’s use a standard set of detailed data, and then apply each of the aggregation methods to the same
data set. Here is our data set …
Which, graphically, looks like this if we plot the points. The shaded area is the time period we are
interested in for our summary information.
Flow uses a “Stair Step” interpolation between each point, like this …
Because the first point in our data set is outside of our summary time period, Flow uses it as a
“boundary” value with a timestamp of exactly 06:00:00.
The Wonderware Historian Data Source type does allow the option to use a different Interpolation between points; however, Flow will default to using "Stair Step".
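Stair-step interpolation simply holds each value until the next point arrives. A minimal Python sketch of the idea (illustrative, not Flow's implementation):

```python
def stair_step(points, t):
    # points: (timestamp, value) pairs sorted by timestamp.
    # Returns the last known value at time t (the "stair step" hold).
    value = None
    for ts, v in points:
        if ts > t:
            break
        value = v
    return value

data = [(0, 1.0), (10, 2.0), (25, 1.5)]
print(stair_step(data, 12))  # 2.0 - the value from t=10 still holds
```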
Sum
When using the "Sum" aggregation method, Flow will calculate the sum of points 2 to 13. It will exclude the "boundary" value, since point 1, before the boundary, would have been used in the previous time period's calculation.
Average
When using the “Average” aggregation method, Flow calculates a time weighted average. The
“boundary” value is included in the calculation.
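A time-weighted average weights each value by how long it was held under the stair-step interpolation, rather than averaging the raw samples. A Python sketch under the same assumptions as above (the first point is the boundary value stamped exactly at the period start):

```python
def time_weighted_average(points, start, end):
    # points: (timestamp, value) pairs sorted by time; points[0] is the
    # boundary value with a timestamp of exactly `start`.
    total = 0.0
    for (t0, v0), (t1, _) in zip(points, points[1:]):
        total += v0 * (t1 - t0)        # each value holds until the next point
    last_t, last_v = points[-1]
    total += last_v * (end - last_t)   # the last value holds until period end
    return total / (end - start)

# 2.0 held for the first half hour, 4.0 for the second half -> 3.0
print(time_weighted_average([(0, 2.0), (1800, 4.0)], 0, 3600))
```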
Minimum
When using the “Minimum” aggregation method, Flow determines the minimum value of all the points
within the time period, including the “boundary” value.
Maximum
When using the “Maximum” aggregation method, Flow determines the maximum value of all the points
within the time period, including the “boundary” value.
Range
When using the “Range” aggregation method, Flow determines the maximum and the minimum values
of all the points within the time period, including the “boundary” value, and then returns the difference
between the maximum and the minimum.
First
When using the “First” aggregation method, Flow will return the “boundary” value.
Last
When using the “Last” aggregation method, Flow will return the value of the last point before the end
of the time period. If a point falls exactly on the end of the time period (i.e. 07:00:00), it will be excluded
from the evaluation.
Delta
When using the “Delta” aggregation method, Flow will sum the difference between each consecutive
point, including the “boundary” point.
Count
When using the “Count” aggregation method, Flow will return the number of points within the time
period, excluding the “boundary” point.
Time in State
When configuring Flow to return a “Time in State” summary, you will need to specify the “State” setting
and condition. Flow will return the total duration (in milliseconds) that the “Stair Step” interpolation
evaluates to the specified “State” setting and condition. For example, in the above scenario, if the
“State” setting is specified as “=” 1.6, Flow will sum the duration between the “boundary” point and
point 2, and the duration between point 9 and point 10.
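The "Time in State" duration can be sketched like this (Python, illustrative; timestamps are in arbitrary units, and the first point is the boundary value at the period start):

```python
def time_in_state(points, start, end, target):
    # points: (timestamp, value) pairs; points[0] is the boundary value
    # stamped at `start`. Sums the time the stair-step trace == target.
    duration = 0
    segments = points + [(end, None)]     # close off the final segment
    for (t0, v0), (t1, _) in zip(segments, segments[1:]):
        if v0 == target:
            duration += t1 - t0
    return duration

trace = [(0, 1.6), (10, 2.0), (40, 1.6), (55, 2.2)]
print(time_in_state(trace, 0, 60, 1.6))  # 10 + 15 = 25
```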
Variance
When using the “Variance” aggregation method, Flow will calculate the statistical population variance
for all the data points within the time period, including the “boundary” point.
Standard Deviation
When using the “Standard Deviation” aggregation method, Flow will calculate the statistical population
standard deviation for all the data points within the time period, including the “boundary” point.
Counter (Totalizers)
The “Counter” aggregation method is applicable to totalizers only. For this aggregation type, let’s look
at a different set of data:
When we plot this “totalizer’s” points, and then apply the “Stair Step” interpolation, we get the
following result. Notice the totalizer’s reset point.
Flow will sum the difference between each consecutive point, including the “boundary” value, until it
detects a negative change. If the negative change is greater than the “Deadband” setting, Flow
interprets this as a totalizer reset, and then continues summing the difference between following
consecutive points.
The Flow “Counter” aggregation method will handle multiple totalizer resets in any single time period. Null values
returned by the Data Source will be ignored in the “Counter” aggregation algorithm.
Counter Rollover
A totalizer is often configured in the instrument or in its controller to always reset at a specific value.
This specific value is known as the “Rollover” value.
Due to the nature of slight delays in data historization, it is possible that the raw data will not include a
point at exactly the rollover value, but rather a value slightly before, and slightly lower than the rollover
value.
For example, if this totalizer reset at exactly 3.5, Flow would “miss” 0.3 counts (3.5 rollover – 3.2 point 8).
Fortunately, Flow allows the configuration of a “Rollover” setting. When calculating the “Counter”
aggregation for a time period, Flow will use this “Rollover” setting to include the “missed” counts.
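The "Counter" logic with a deadband and a rollover setting can be sketched as follows (Python, illustrative; Flow's actual algorithm may differ in details such as quality handling):

```python
def counter_total(points, deadband=0.0, rollover=None):
    # Sum the positive deltas between consecutive totalizer readings.
    # A drop larger than `deadband` is treated as a reset; with a
    # `rollover` setting, the counts "missed" between the last reading
    # and the rollover value are added back. Null readings are skipped.
    values = [v for _, v in points if v is not None]
    total = 0.0
    for prev, cur in zip(values, values[1:]):
        delta = cur - prev
        if delta >= 0:
            total += delta
        elif abs(delta) > deadband:          # totalizer reset detected
            if rollover is not None:
                total += rollover - prev     # missed counts up to rollover
            total += cur                     # counts accumulated since reset
    return total

# One reset between 3.2 and 0.1; a rollover of 3.5 recovers the 0.3 gap.
readings = [(0, 3.0), (10, 3.2), (20, 0.1), (30, 0.6)]
print(counter_total(readings, deadband=0.5, rollover=3.5))  # ~1.1
```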
Custom Expression
Some Data Source types provide the ability to configure a “Custom Expression”, rather than using one
of the standard aggregation methods. Custom expressions allow for the querying of more than one tag
from the data source.
The results of the query are transposed into a “data” object that returns a set of columns containing
the tag values and qualities. The custom expression uses Microsoft.NET’s C# syntax. A single “Result”
object must be returned by the custom expression.
Custom expressions can be used for custom calculations (e.g. integrate one tag by another), lookups
(e.g. convert the returned values based on a lookup table), etc.
Drag the “Hourly” icon from the Flow Zone onto the
“Steam” metric.
General Properties
Double-click the new measure and edit the general properties as follows:
Retrieval Properties
Expand the “Retrieval” section. Notice that the “Retrieval” section is different for a Manually Entered
Measure compared to a Retrieved Measure. A Manually Entered Measure is configured to have a
“Default Value”. Just like an hourly Retrieved Measure, Flow will generate an hourly time period for an
hourly Manually Entered Measure. The value for each of these time periods depends on the “Default
Value” setting.
Set to previous value – Flow will set a new time period’s value to the value of the measure’s previous
time period. An “Initial Value” setting tells Flow what the first time period’s value will be set to.
This option is useful for calculation factors you may want to change over time, but not too
frequently.
Set to 0 – Flow will set a new time period’s value to 0.
Set to null – Flow will set a new time period’s value to a null (empty) value. This option is useful
when creating a measure for an instrument or value that needs to be inserted manually by a person
every time period (e.g. daily electricity reading from an external meter).
Set to measure value – Flow will set a new time period’s value to the value of another Flow measure
at the same point in time. This option is useful when you need to, for example, set an hourly
measure’s value to that of a weekly measure.
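The four "Default Value" options can be summarized in a small sketch (Python, illustrative only; the setting names are shorthand):

```python
def default_value(setting, previous=None, initial=None, linked=None):
    # Value a newly generated time period starts with, per the
    # "Default Value" options described above.
    if setting == "previous":
        # fall back to the Initial Value for the very first period
        return previous if previous is not None else initial
    if setting == "zero":
        return 0
    if setting == "null":
        return None
    if setting == "measure":
        return linked  # another measure's value at the same point in time
    raise ValueError(f"unknown setting: {setting}")

print(default_value("previous", previous=None, initial=14.5))  # 14.5
```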
For your new “Steam Production Rating (tons/hr)” measure, set the “Default Value” setting to “Set to
previous value” and the “Initial Value” setting to 14.5 as follows:
Deploy the new measure and confirm that it is running and producing a value of 14.5 tons/hr every
hour:
Locate the 010-FT-001.PV Boiler Steam Output tag from the Historian, drag it across to the Model View,
and drop it onto the “Steam” folder. Rename the new Retrieved Measure to “Steam Production”.
Double-click the new measure …
Notice the unit of measure has been set to kg/s since this is the engineering unit of the 010-FT-001.PV
tag in the Historian. You need to compare the Boiler’s design rating in tons/hr to the Boiler’s actual
output. To do this you are going to convert the actual output from kg/s to tons/hr, and you are going
to use the “Scaling Factor” to achieve this.
The “Scaling Factor” is a multiplier applied to the summary value retrieved from the data source. The
multiplied summary value is then stored in the Flow System.
Unit – tons/hr
Scaling Factor – 3.6
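The 3.6 factor is plain unit arithmetic: 1 kg/s is 3600 kg per hour, i.e. 3.6 metric tons per hour. A quick check:

```python
SECONDS_PER_HOUR = 3600
KG_PER_TON = 1000
scaling_factor = SECONDS_PER_HOUR / KG_PER_TON   # 3.6

def scale(summary_value, factor=scaling_factor):
    # The multiplier applied to the retrieved summary before storage.
    return summary_value * factor

print(scale(2.5))  # 2.5 kg/s -> 9.0 tons/hr
```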
Deploy the new measure. Confirm that it is retrieving and scaling a summary value from the Historian.
So far, your measures have collected data from the Historian. Flow also allows you to create calculations based on other measures within your Flow System.
You are now going to create a Calculated Measure that indicates the Efficiency (%) of the Boiler against
its design rating.
Let’s first set your default measure Format and Unit of Measure (UOM) on the “Defaults” toolbar …
Drag the “Steam Production (tons/hr)” measure onto the Flow Zone Calculator icon …
Flow will create and configure a new Calculated Measure. Rename it to “Steam Production Efficiency”.
You will notice the red disk displayed on the new measure’s icon. This indicates that it has a problem.
Hover your cursor over the measure to see a message, “Calculation has not been validated”. This is
normal after creating a new Calculated Measure. You need to open the measure’s editor and expand
the “Retrieval” section …
Dependents – the calculation dependents are displayed on the left. Calculations in Flow are defined
by a Date and Time from which the calculation is valid. Additional calculations can be added to the
dependency tree, as long as they have a different Date and Time. This mechanism provides the
ability to change calculations over time, but not lose the ability to backfill a measure to a time when
a different calculation was used.
Expression – the expression on the right is the actual calculation, which is defined using
Microsoft.NET’s C# syntax. Flow simplifies the “writing” of the expression by supporting double-
click and drag ‘n drop into the expression.
You will see the “Steam Production (tons/hr)” measure has already been added to the dependency
tree. Drag the “Steam Production Rating (tons/hr)” measure from the Model View onto the Date and
Time in the dependency tree.
Now edit the expression as follows to calculate the Efficiency of the Boiler as a percentage. After placing
your cursor in the expression, you can double-click the measure in the dependency tree to place the
relevant text into the expression at your cursor position. Alternatively, you can drag the measure from
the dependency tree into the expression.
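In spirit, the expression divides actual production by the design rating. The real expression uses C# and the measures' ".Value" properties; this Python sketch is just the arithmetic, with hypothetical figures:

```python
def steam_production_efficiency(production_tons_hr, rating_tons_hr):
    # Actual output as a percentage of the Boiler's design rating.
    return production_tons_hr / rating_tons_hr * 100

print(steam_production_efficiency(7.25, 14.5))  # 50.0 %
```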
Once you are happy with your calculation’s expression, click the “tick” button to validate the expression.
If there are no problems with your expression, Flow will save it and you will notice the red disk
disappears.
Deploy the calculated measure and confirm that it is generating a percentage for the Boiler’s Efficiency.
Notice that the calculation expression makes use of the “.Value” property of a measure. The “Insert” button
allows you to use other properties of a measure (i.e. “.Quality”, “.Duration” in milliseconds, “.PeriodStart” and
“.PeriodEnd”)
You will notice in the calculation dependency tree that a calculation's dependents and expression are defined by a
date and time. This represents the date and time from which this calculation is valid.
This functionality allows for calculations to change over time (i.e. water reticulation calculation changes because
of a new meter installation), but still maintain data integrity during a measure “backfill”.
In addition to the built-in functions, Flow allows you to create your own User Defined Functions. Right-
click on “User Defined” and click “New”, “Function”. Name the new function “Efficiency”. Double-click
on it to open its editor …
The function "Definition" is a standard Microsoft.NET C# static method. Don't be scared of it; just type it out as shown above.
Click the “tick” button to validate the “Definition”. If there are any problems, Flow will provide an error
message to explain what you have done wrong.
Open the “Steam Production Efficiency (%)” measure and expand the “Retrieval” section. Undeploy
the measure. Delete the expression. Drag the “Efficiency” function from the “Toolbox” into the
expression editor.
Select the text “double Actual” and double-click the “Steam Production (tons/hr)”. Flow will replace
the selected text with the measure’s name and value property. Do the same for the “double Rating”
and validate your new expression.
As discussed on page 52, Aggregated Measures are used to perform roll up calculations of other
measures in your Flow System. Let’s use an Aggregated Measure to roll up the hourly summary
information into daily information.
General Properties
Double-click the new measure and notice the general properties have been copied from the hourly
“Steam Production Rating (tons/hr)” measure. Because this is a daily measure, change the “Unit” to
“tons/day” as follows:
Retrieval Properties
Expand the “Retrieval” section. Notice that the “Retrieval” section is different for an Aggregated
Measure.
Measure – this is the measure Flow will roll up (i.e. aggregate) to create new summary information.
To change this measure, drag another measure from the Model View onto this setting.
Aggregation – this is the aggregation method Flow will use to calculate the roll up. Flow will default
to “Sum”, but available aggregation methods include “Average”, “Minimum”, “Maximum”,
“Range”, “First”, “Last” and “Counter”.
Interval – this is the "boundary" interval Flow will use to calculate the roll up summary information. This property can be changed when cumulative roll up calculations are required (e.g. "Week to Date", "Month to Date", etc.).
Scaling Factor - this is a multiplier applied to the result of the aggregation. This is useful when you
need to present an aggregated value in a different Unit of Measure (e.g. the hourly measure can
be kWh, but the monthly aggregated value should be in MWh).
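The roll-up itself is straightforward. A Python sketch of a daily "Sum" or "Average" over 24 hourly values, with an optional scaling factor (illustrative, not Flow's engine):

```python
def roll_up(hourly_values, method="sum", scaling_factor=1.0):
    # Daily roll-up of hourly summaries; null values are skipped.
    values = [v for v in hourly_values if v is not None]
    if not values:
        return None
    if method == "sum":
        result = sum(values)
    elif method == "average":
        result = sum(values) / len(values)
    else:
        raise ValueError(method)
    return result * scaling_factor

hourly = [14.5] * 24
print(roll_up(hourly, "sum"))      # 348.0
print(roll_up(hourly, "average"))  # 14.5
```

A scaling factor of 0.001 applied to an hourly kWh sum would yield the monthly MWh value described in the note above.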
Deploy the new measure and confirm that it is running and producing a value of 348.0 tons every day:
Now create daily Aggregated Measures for “Steam Production (tons/day)” and “Steam Temperature
(˚C)”. For the daily “Steam Temperature (˚C)”, make sure you change the “Aggregation” property to
“Average”, not “Sum”. It would not make sense to sum the hourly average temperature summaries!
Select the “Reports” tab. This is where you are going to create an “Information” model that represents
the audience (or consumers) of your Flow System’s information. The Boiler information you have
configured in your Flow System would typically be useful for the Engineering or Utilities Managers.
Create a new folder in the “Reports” tab for “Engineering”. Now create a new “Hourly Report” in this
folder (you can do this by dragging the “Hourly” icon from the Flow Zone, or by right-clicking on the
folder) …
General Properties
The top section of the Report Editor displays a few general properties:
Title – this is the report title. By default Flow uses a “placeholder” for the name of the report. This
can be edited to a fixed title or a combination of the placeholders and fixed text.
Description – report description.
Report Type – various types of reports can be configured. Depending on which type is selected,
different report definitions need to be provided. See the Understanding Report Types section on
page 79 for more explanation on the various report types.
Period
The “Period” section of the report definition provides information about the default period that a report
will display:
Definition
This section is used to define what needs to be presented in the report. A Flow report is made up of
one or more Report Sections. A Report Section contains one or more measures. Let’s build the report
definition:
Right-click in the “Definition” section and click “New Section” (or drag a folder from the Flow Zone).
Name the new section “Boiler”. Now drag the “Steam” metric from the Model View and drop it onto
the “Boiler” section …
Flow adds the hourly measures from the metric to the “Boiler” section. Individual measures can also
be dragged to a Report Section.
Opening a Report
Right-click on your report in the “Information” model, and select “Tasks\Open” to view the report in
your default web browser.
Notice how the structure of the visualization matches the structure of the report definition (i.e. Sections
and Measures).
Let’s change a few properties in the report definition. Open the “Engineering Hourly Report” definition
again.
In the “Boiler” section, move the “Steam Temperature (˚C)” measure to the bottom of the section
(right-click and “Move Down”).
Notice the differences when you refresh your visualization. The table has more data because it is
defaulting to a “moving window” of 24 hours. The order of the measures in the “Boiler” section has
been updated …
Click on the measure “Steam Production” to open a graphical “quick view” of the data. Notice how a
chart of the data is presented as an overlay. You can configure a “Line” (default) or “Bar” chart by
setting the “Chart” property of the measure in the report definition.
Click on an individual cell to drilldown to its Details, insert Comments and view any other version
information relating to that value. In this example, the inputs to the efficiency calculation are provided
for clarity. Further drilldown is possible by clicking on the cells in the Detail.
Deploying a Report
If you are using the default “Flow Report Server”, this step is done automatically for you. However, if
you are using one of the Report Server plugins (e.g. SQL Server Reporting Services), you will need to
manually deploy your reports to the respective report server. Select the “Deployment” view tab at the
bottom of the Model View and expand your report server component …
The report server contains a number of “visualization” templates. Drag your report definition onto a
compatible template …
Flow will open a visualization of your table report definition in your default web browser.
Table
Use this report type to provide detail. This is similar to a spreadsheet view of your data.
Chart “Quick View” – clicking the name of a measure in the report will open a Chart overlay
Cell “Drilldown” – clicking an individual cell will open a Drilldown overlay providing details of
that measure value (calculation inputs, comments and versions)
Doughnut
Use this report type to indicate the proportion of an attribute (e.g. Product) that makes up the whole:
This Doughnut chart has been made up of one Section containing two Measures. You can add more
Sections to create additional rows, or more measures to create additional columns.
Gauge
Use this report type to provide a visual of key performance indicators (KPIs). When configuring each
measure in the Gauge report type, set the minimum and maximum values. The start of each gauge is
at the top of the circle. Limit display can be configured.
This Gauge chart has been made up of two Sections containing two Measures each, hence the two by
two grid.
Scatter Plot
Use this report type to correlate one measure against another. Configure measures on the X and Y
axes. Optionally, use a third measure for the size of the bubble plotted.
Time Series
Use this report type to display measure values on time series charts. Line, bar and stacked bar charts
can be configured.
Item “Drilldown” – clicking an individual point or bar will open a Drilldown overlay providing
details of that measure value (calculation inputs, comments and versions)
Widget
Use this report type to display the last value of each measure in a set. This is useful for displaying key performance indicators on auto-updating dashboards.
A Flow Dashboard is a great way to pull multiple views of your information together, into a single place
that can be monitored at a glance.
In the “Reports” tab, right-click on the “Engineering” folder and click “New\Dashboard”. Flow will
create a new “Engineering Dashboard”. Open its editor …
General Properties
The top section of the dashboard editor displays a few general properties:
Title – this is the dashboard title. By default Flow uses a “placeholder” for the name of the
dashboard. This can be edited to a fixed title or a combination of the placeholder and fixed text.
Description – dashboard description.
Panels
The “Panels” section is a grid canvas which is always 12 blocks wide. By default, Flow displays 6 blocks
of height, but the height can be increased by dragging a panel to a larger size.
When you drop the "Engineering Hourly Report" onto the top left block of the canvas, Flow automatically expands the new panel to fill all the available canvas space below and to the right of where you dropped it. The panel's "handles" can be used to resize it as required. In the labs that follow, you will need to
resize these panels to be able to fit other panels onto the dashboard.
Panel Properties
Panel Title - this is the panel’s title. By default, Flow uses a “placeholder” for the name of the report
in this panel. This can be edited to a fixed title or a combination of the placeholder and fixed text.
Panel Link – this is the URL for a “Web Page Panel” (see “Web Page Panel” below).
Refresh – this is the period on which the panel will automatically refresh itself. Flow defaults this
property to 300 seconds (5 minutes).
This simple dashboard demonstrates the canvas and panel functionality. Notice how a panel has a “title
bar” at the top. The “title bar” can be removed by setting the panel’s “Title” property to blank (empty).
By default, all new users belong to the "Everyone" group. Any users
who sign in to the Flow Report Server will be added to the
“Everyone” group.
Add a User
Add a Group
To add a new group, right-click and select “New Group”. Edit the
name of the group.
By default, every folder has “Allow” access assigned to the “Everyone” group. Assign “Allow” access to
other groups by dragging them into the “Allow” section of the folder editor. Note that as soon as a
group other than “Everyone” is added to the folder, the “Everyone” group is removed:
When groups are assigned to folders, the reports and dashboards belonging to those folders will only
be accessible to users that belong to the assigned groups.
Flow allows you to set up one or more limit definitions. These limits can be used for various functions (e.g. targets not achieved, data validation checking, bonus target achieved, etc.).
In the Flow “Toolbox”, under “Limits”, right-click on “Simple” and select “New”, “Limit”. Name the new
limit “Site Target”.
You will notice the only thing that can be edited on the limit is the “High”, “Low” and “Target” colors
that will be used in the reports.
The instance of a Limit is defined as a Date and Time from which the Limit is valid. Additional instances
of this Limit can be added, as long as they have a different Date and Time. This mechanism provides
the ability to change Limits over time, but not lose the ability to backfill a measure to a time when a
different Limit was used.
The instance of the Limit in the Measure is where you will define your Target, High and Low values for
this specific Measure. Flow will evaluate each value retrieved or calculated for the Measure against the
High and Low values. If the High and/or Low values are exceeded, an “exception” will be generated.
The Target value is used for reporting only.
For this “Steam Temperature (˚C)” measure, set the Target to 94.5, set the Low to 94.5, and set the
High to 101.0. If Flow detects any values below 94.5 (i.e. the Low setting) or any values above 101.0
(i.e. the High setting), an exception will be generated.
A High or Low setting can be left blank to exclude it from being evaluated.
Right-click the Limit instance and select “Backfill”. Flow will start evaluating existing summary values
for this measure.
Notice the High and Low colors coming through on the report. Notice the Target displayed as well.
In this example, the value retrieved for “Steam Temperature (˚C)” for a specific time period will be
evaluated against the value of the manual measure “Steam Temperature High (˚C)” for that same time
period.
Frequently changing limits – rather than changing a Measure’s limit configuration repeatedly,
create a manual Measure to “store” the limit value. Link the manual Measure to the limit, and then
create a Form where users can update the manual Measure as and when required. See the Lab on
forms for data entry on page 92.
Cumulative Targets – when tracking progress during a time period (e.g. every hour during the
production day), create a cumulative target Measure to use as a linked limit against a cumulative
production Measure. In this example, the straight line is the cumulative target during the day. At
11:00, there was excessive use of water, thus sending the cumulative water usage over the limit,
but the team ended the day within their daily usage target.
Examples of where you would use a Flow Form to manually enter information include:
Instrument data that isn’t available in your control systems and historians
Calculation factors that you will want to change over time
Flow Forms can also be used to validate and modify retrieved data. This is important in any reporting
system, since there will always be times when systems fail and provide incorrect data. For
example, you may be collecting summary information for a Mixing Tank Level by retrieving data from a
level transmitter tag in your Historian (e.g. 101-LT-001.PV). What happens if that level transmitter is
removed from service for maintenance or re-calibration? During the time that it is out of service, the
Retrieved Measure in Flow would have no data to summarize. However, an estimated value could be
inserted by a person, with a comment if required.
Let’s create a Time Period Form for your Boiler. Select the “Forms” tab at the bottom of the
“Information Model” view. Similarly to the “Reports” tab, you can create a structured model to
organize your forms.
Create a new folder in the “Forms” tab for “Engineering”. Now create a new “Time Period Form” in this
folder by right-clicking on the folder …
Definition
A Flow Form consists of one or more “tabs” (similar to “sheets” in an Excel workbook). A tab then
contains one or more measures. Let’s build the form definition:
Right-click in the “Tabs” section and click “New Tab” (or drag a folder from the Flow Zone). Name the
new tab “Boiler”. Now drag the “Steam” metric from the Model View and drop it onto the “Boiler” tab.
Flow adds all the measures from the metric to the “Boiler” tab. Individual measures can also be dragged
to a Form tab.
Authentication
Click the “Sign In” button to authenticate with the Flow Server:
A Flow Form provides the familiarity of a spreadsheet. Notice how the structure of the form follows
the structure of the form definition you created earlier.
Even though your “Boiler” tab configuration included hourly and daily measures, Flow will only display
the measures as per the “Interval Selector”.
Entering Data
After selecting a cell, you can start typing a value to insert or update data. Select one of the hourly
“Steam Production Rating” cells and edit its value to 14.8 (press enter to accept the new value).
Where measure values have a bad quality, you will notice a red disk in the cell. This provides a visual
indicator of suspect or bad quality data.
If you have bad quality data, but you’re actually satisfied with the value, you can “Accept” the value and
Flow will set its quality to good. Double-click on the cell to display its details, select the “Version” tab
and hover over the version with the bad quality.
Notice the “Accept” tick button. Clicking this button will create a new version of the value, but with a
good quality. Notice the new version is set as the “Preferred” version. This instructs Flow to use this value
for any calculations and for display on reports and dashboards. The date and time of the version change
is also recorded against the user that made the change.
Dependent Measures
Select the “Steam Production Efficiency” measure. Notice that the selection border is black, not green.
This means that this cell cannot be edited. The reason for this is that it is a calculated measure, which
depends on the values of other measures in the Flow System.
In this case, the efficiency measure is calculated from the “Steam Production Rating” and the actual
“Steam Production”. Edit the “Steam Production” value to 14.8, wait a few seconds for the Flow Engine
to recalculate the efficiency and click the “Refresh” button. Notice how the calculated measure has
been updated correctly.
Entering Comments
Double-click on the cell you edited and enter a comment explaining why you needed to change that
value.
Flow allows more than one comment to be entered for each measure value. Notice in the form, any
cells that have one or more comments will display a gray disk.
Form Permissions
Similar to the group-based permissions applied to Reports and Dashboards (see page 86), the same
group-based permissions apply to Form folders.
Locate the “FL001.State” (Filler 1 State) tag in your Historian. Double-click on it to open the Data Source
Preview …
Zoom out to show about 15 hours of detailed data. Notice the Filler 1 cycles through a pattern of states.
The states are:
0 = Idle
10 = Setup
20 = Running
30 = CIP
Flow can be used to automatically detect the start and end of event periods. Let’s define an event that
starts when the Filler state changes to 10 (“Setup”) and ends when the Filler state changes to 30 (“CIP”).
Drag the Event icon from the Flow Zone onto the “Filler 1” folder …
By default, the new event will take on the name of its parent folder.
Rename this event to “Filler 1 Run”.
Event Editor
Double-click on the new event to open its Editor …
General Properties
The top section of the Event Editor displays a few general properties for the Event:
Triggers
The event triggers define how Flow determines the start and end of an event. Locate
the “FL001.State” tag in the Historian and drag it into the “Triggers” section …
Flow creates a “Start Trigger”. Notice Flow has created a link to the “FL001.State” tag and set up the
default “Trigger” condition properties. Let’s discuss these trigger properties:
Tag – this is the tag Flow will monitor in the data sources and evaluate against the “Trigger”
condition properties.
Trigger – this defines the type of condition Flow will use to detect the trigger event.
Condition – this tells Flow how to evaluate the tag’s value against the “Condition Value”.
Condition Value – this is the value Flow evaluates the tag’s value against to determine whether an
event is triggered.
For the “Start Trigger”, set the trigger “Condition Value” to 10. Flow will use this to start a new event
period when it detects the tag’s value is equal to 10 (“Setup” state).
Now drag the “FL001.State” tag onto the “Triggers” section again …
Flow creates an “End Trigger”. Set this trigger’s “Condition Value” to 30 (“CIP” state).
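Together, the Start and End triggers behave like a simple state machine over the tag’s samples. A minimal sketch (illustrative only; the function name is ours and the actual detection happens inside the Flow Engine):

```python
def detect_event_periods(samples, start_value=10, end_value=30):
    """Scan (timestamp, state) samples and return closed (start, end)
    event periods.

    Start trigger: state equal to start_value (10 = "Setup").
    End trigger:   state equal to end_value   (30 = "CIP").
    """
    periods = []
    start = None
    for ts, state in samples:
        if start is None and state == start_value:
            start = ts            # open a new event period
        elif start is not None and state == end_value:
            periods.append((start, ts))
            start = None          # close the period
    return periods

samples = [(0, 0), (1, 10), (2, 20), (3, 30), (4, 0), (5, 10), (6, 20)]
detect_event_periods(samples)  # → [(1, 3)]; the second period is still open
```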
Deploy the Event and open the “Triggers” diagnostic chart window. Refresh the diagnostic chart and
confirm that event periods have been detected and created by Flow.
Note: If you create an Event with no triggers, you will need to create an “Event Period” form to manually insert
event periods.
Zoom out to view a day or two of detailed data. You will notice the “FL001.Product” tag value changes
between 0 and 4. 0 represents an “Idle” state for the Filler, while 1 to 4 represent the various products
or brands that your Juice Factory produces:
1 = Apple
2 = Grape
3 = Orange
4 = Raspberry
Let’s create a Flow Enumeration for the Filler Product. Select the “Toolbox” tab, right-click on
“Enumerations” and click “New”, “Enumeration”. Name your new Enumeration “Filler Product”, and
open its editor …
In the “Ordinals” section, right-click and select “New”. Flow will create a new ordinal for the integer
value 1. Let’s discuss the Ordinal’s properties:
Ordinal – this is the integer value that Flow associates with the ordinal’s string “Value”.
Value – this is the string value that is mapped by Flow for the ordinal.
Color – this is the color associated with the ordinal. The color will be used for reporting purposes.
The color is defined as a hexadecimal color code (see www.color-hex.com for color palettes).
Description – this is the ordinal’s description.
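In code terms, an Enumeration is just a lookup table from integer ordinals to string values and colors. A sketch (the hex color codes here are made-up placeholders, not values from the lab):

```python
# Ordinal → (Value, Color) mapping for the "Filler Product" Enumeration.
FILLER_PRODUCT = {
    1: ("Apple",     "#8db600"),   # colors are illustrative placeholders
    2: ("Grape",     "#6f2da8"),
    3: ("Orange",    "#ff7f00"),
    4: ("Raspberry", "#e30b5c"),
}

def map_ordinal(raw):
    """Map a raw integer tag value to its display string."""
    value, _color = FILLER_PRODUCT.get(raw, ("Unknown", "#808080"))
    return value

map_ordinal(3)   # → "Orange"
```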
We are going to use this Enumeration to create an Event Attribute in the next lab.
An Attribute can either be “Retrieved” from a data source, or “Manually Entered” by a person on an
Event Period form.
Let’s add context to the “Filler 1 Run” Event you created earlier. Open the “Filler 1 Run” Event and
expand the “Attributes” section. Undeploy your Event so that you can make configuration changes to
it. Right-click in the “Attributes” section and select “New”, “Attribute”, “Retrieved”. Name your new
Attribute “Product”:
Locate the “FL001.Product” tag in your Historian and drag it onto the “Product” Attribute.
For the properties of this segment, change the “Enumeration” property to “Filler Product”.
Segments
A “Retrieved” Event Attribute can be made up of one or more Segments concatenated together. The
following Segment types are available:
Retrieved – the value of the segment is retrieved from a Data Source. If the retrieved value is an
integer, the segment can be linked to an Enumeration to map the integer values to string values.
Constant – this is a constant string that can be added as part of an Attribute. This is useful for
generating Attributes for Batch Numbers.
Period Start – this segment uses the start date and time of the event period. Use the “Format”
property to change the way you want the date and time to be displayed.
Period End – this segment uses the end date and time of the event period. Use the “Format”
property to change the way you want the date and time to be displayed.
Period Index – this segment uses a unique Index that Flow assigns to each event period created (see
section below).
Index
Every event period created by Flow for an Event is assigned a unique Index identifier. This Index auto-
increments with every event period. Expand the “Index” section of the “Filler 1 Run” Event …
There are a few properties that can be defined for an Event’s Index:
Initial Value – this is the initial value that will be used the first time the Event is deployed (or the
first value that will be used after an Index “Reset”).
Reset using – this specifies the Calendar to be used when applying a “Reset” rule to the Index.
Leaving this blank will ensure the Index will never reset.
Reset interval – this specifies how often the Index will reset using the above Calendar.
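The Index behaviour can be sketched as a small counter class (an illustration only; the class name is ours, and a daily reset stands in for the Calendar-based “Reset using” rule):

```python
from datetime import date

class EventIndex:
    """Auto-incrementing event period index with an optional daily reset."""

    def __init__(self, initial=1, reset_daily=False):
        self.initial = initial
        self.reset_daily = reset_daily
        self.current = initial - 1
        self.last_day = None

    def next(self, period_start: date) -> int:
        """Return the index for a new event period starting on this day."""
        if self.reset_daily and period_start != self.last_day:
            self.current = self.initial - 1   # apply the reset rule
            self.last_day = period_start
        self.current += 1
        return self.current

idx = EventIndex(initial=1, reset_daily=True)
idx.next(date(2024, 1, 1))   # → 1
idx.next(date(2024, 1, 1))   # → 2
idx.next(date(2024, 1, 2))   # → 1 (reset on the new day)
```

With `reset_daily=False` (the equivalent of leaving “Reset using” blank), the index never resets.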
Create a new “Retrieved” Attribute called “Work Order”, and add the following Segments to it:
Constant “FL001”
Period Start formatted to “yyyy” (i.e. only the year)
Period Index formatted to “0000” (i.e. always 4 digits)
Deploy your “Filler 1 Run” event again, open the diagnostic grid on the Attributes and confirm the
attribute values coming through for each event period:
Notice how the “Product” attribute has been mapped to a string value for the product.
Notice how the “Work Order” attribute has been made up of:
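Conceptually, the three segments concatenate into a single string. A sketch (the helper name is ours):

```python
from datetime import datetime

def build_work_order(period_start: datetime, index: int) -> str:
    """Concatenate the three segments: Constant + Period Start + Period Index."""
    constant = "FL001"                  # Constant segment
    year = period_start.strftime("%Y")  # Period Start formatted "yyyy"
    idx = f"{index:04d}"                # Period Index formatted "0000"
    return constant + year + idx

build_work_order(datetime(2024, 3, 5, 8, 0), 146)  # → "FL00120240146"
```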
Select the “Reports” tab. Create a new folder in the “Reports” tab for “Packaging”. Now create a new
“Event Report” in this folder (you can do this by dragging the “Event” icon from the Flow Zone, or by
right-clicking on the folder) …
Double-click the new report definition to open its editor. Set the “Default Period” to start at the
beginning of the Production week. Create a new Section called “Filler 1” in the Definition. Drag your
“Filler 1 Run” Event onto the “Filler 1” Section …
Right-click on the report, click “Tasks”, “Open”. Flow will open a visualization of your event report:
Notice how the structure of the visualization matches the structure of the report definition (i.e.
Sections, Events and Event Attributes).
Let’s change a few properties in the report definition. Open the “Packaging Event Report” definition
again.
In the “Filler 1” Section, select the “Filler 1 Run” event and set its “Description” property to “Work
Order”. Set the “Identifier” to “Work Order”.
In the “Filler 1” Section, right-click the “Work Order” Attribute and click “Delete”. This will remove
it as a column in the event report.
Notice the differences when you refresh your visualization. The Index column has been replaced by the
“Work Order” column as an identifier, and the “Work Order” attribute column has been removed.
Let’s create an Event Period Form for your Filler. Select the “Forms” tab at the bottom of the
“Information Model” view. Create a new folder in the “Forms” tab for “Packaging”. Now create a new
“Event Period Form” in this folder by right-clicking on the folder …
Definition
Right-click in the “Tabs” section and click “New Tab” (or drag a folder from the Flow Zone). Name the
new tab “Filler 1”. Now drag the “Filler 1 Run” event from the Model View and drop it onto the “Filler
1” tab.
Notice the structure of the Event form. The event Index, Start and End are always displayed on the left.
Any attributes that you have added to the form definition will appear on the right (in the order you
have specified).
Let’s focus on the event with Index 146. Let’s assume that for that Period, the Filler actually produced
“Grape” flavor, not “Apple”. Select the cell and edit the value to “Grape”:
Using this form you can modify the Index, Start or End of the event period. After clicking commit, the
Flow Engine will start processing the changes. Other linked information within the Flow System will
also need to be updated. Wait a few seconds and click the “Refresh” button.
After refreshing, you will notice that the original period with Index 146 has been replaced with the
details of the period that was 147.
To insert an event period above an existing event period, hover over the period and click the “Insert”
button (+). Provide the details required to create the new event period. Notice the Period Start and
End are defaulted to “fill” the space between the existing event periods.
After committing your new event period details, wait a few seconds for the Flow Engine to process the
required changes and click the “Refresh” button. Notice that the Flow Engine has processed the
Attribute information for the inserted event. We can edit these values if required.
Note: If you do not see an “Insert” button, this means that there is no space available to insert a new
period. You would need to first edit the existing periods to create space for a new period to be inserted.
To add an event period, click the “Create” button (+) at the bottom of the form. Enter the event details,
commit and wait a few seconds for the Flow Engine to process the required changes, then click the
“Refresh” button.
Let’s see how we can do this with new measures. Set your Measure Defaults to “Hourly” with a format
of “0” (i.e. no decimal places).
In the Model View, create a new Metric in the “Filler 1” folder. Flow will name your new Metric “Filler
1”. Locate the “FL001.BottleCount” tag in your Historian, drag it across to the “Filler 1” Metric and
rename it to “Filler 1 Good Production”. Set its retrieval aggregation method to “Counter” (because it
is a counter tag).
Before you deploy your new measure, expand the “Context” section, and drag the “Filler 1 Run” Event
into it ...
Let’s discuss what you have just done. By adding the “Filler 1 Run” Event into the “Context” of the new
measure, you have told Flow to “overlay” the time period information from the measure and the event
period information from the event. Flow effectively “slices” the overlaid time and event period
information, allowing you to present it from different perspectives (or dimensions).
Let’s add this measure to the Event Report you created earlier. Open the “Packaging Event Report”,
expand the “Filler 1” Section and drag this new “Filler 1 Good Production (bottles)” measure onto the
“Filler 1 Run” event. Set its “Description” property to “Good Bottles” …
Refresh your “Packaging Event Report” and notice the additional column of information …
Retrieved
Add a Retrieved Measure called “Filler 1 Bad Production (bottles)” using tag “FL001.BottleCount.Reject”
and the “Counter” aggregation method. Add the “Filler 1 Run” Event to its context.
Manually Entered
Add a Manually Entered Measure called “Filler 1 Rating (bottles)” for the design specification of the
Filler. Add the “Filler 1 Run” Event to its context. Expand the “Retrieval” section and set the properties
as follows:
Because an Event is associated with this Measure, Flow needs to know how the “Initial Value” property
is used. Overlaying event periods onto time periods slices each time period into pieces. Flow needs to
know whether the initial value is assigned to each slice individually, or whether it applies to the whole
time period, in which case each slice is assigned a proportional share of the time period’s value. For
the “Filler 1 Rating (bottles)” measure, use “Split initial value of time slices”.
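The proportional split can be illustrated in a few lines (a sketch, under the assumption that slices are weighted by their duration):

```python
def split_initial_value(initial, slice_durations):
    """Distribute a whole-period initial value across event slices in
    proportion to each slice's duration (in seconds)."""
    total = sum(slice_durations)
    return [initial * d / total for d in slice_durations]

# A 3600 s hour sliced by event periods into 1200 s, 1800 s and 600 s pieces:
split_initial_value(12000, [1200, 1800, 600])  # → [4000.0, 6000.0, 2000.0]
```

The alternative behaviour (assigning the initial value to each slice) would simply give every slice the full 12000.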
Calculated
Add a Calculated Measure for “Filler 1 Total Production (bottles)” which is the sum of “Filler 1 Good
Production (bottles)” and “Filler 1 Bad Production (bottles)”.
Add a Calculated Measure for “Filler 1 Efficiency (%)” which is the ratio of “Filler 1 Total Production
(bottles)” to “Filler 1 Rating (bottles)”. You can use the User Defined Function for “Efficiency” created
earlier.
Add a Calculated Measure for “Filler 1 Quality (%)” which is the ratio of “Filler 1 Good Production
(bottles)” to “Filler 1 Total Production (bottles)”. You can use the User Defined Function for “Efficiency”
created earlier.
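Both calculated measures are simple percentage ratios. A sketch with hypothetical bottle counts (the function name and numbers are ours, not from the lab):

```python
def pct_ratio(numerator, denominator):
    """Ratio expressed as a percentage; returns None for a zero denominator."""
    return 100.0 * numerator / denominator if denominator else None

good, bad, rating = 14000, 500, 16000   # hypothetical hourly counts
total = good + bad                      # Filler 1 Total Production (bottles)
eff = pct_ratio(total, rating)          # Filler 1 Efficiency (%) → 90.625
quality = pct_ratio(good, total)        # Filler 1 Quality (%)   → ~96.55
```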
Aggregated
Add an Aggregated Measure for “Filler 1 Good Production DTD (bottles)”. This is an hourly measure
which will sum the “Filler 1 Good Production (bottles)” measure every hour, but “reset” every day. This
will result in a DTD (“Day to Date”) calculation. Drag the “Filler 1 Good Production (bottles)” measure
from the Model View onto the “hourly” Flow Zone icon, not the “daily” icon. Set the Retrieval
properties as follows:
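The DTD behaviour is a running sum over the hourly values that resets when the day changes. A sketch (illustrative only):

```python
def day_to_date(hourly):
    """Cumulative sum over (day, value) hourly records, resetting the
    running total whenever the day changes."""
    out, running, current_day = [], 0.0, None
    for day, value in hourly:
        if day != current_day:
            running, current_day = 0.0, day   # daily reset
        running += value
        out.append(running)
    return out

day_to_date([("Mon", 5), ("Mon", 7), ("Tue", 4), ("Tue", 6)])
# → [5.0, 12.0, 4.0, 10.0]
```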
Just before you deploy all your new measures, open the Flow Monitor from the “VIEW”, “Monitor”
main menu item. Notice the top section for Measure processing is empty. This indicates that the Flow
System is up to date.
Deploying the metric will deploy all of its measures. Notice the Flow Monitor has a number of hourly
measures’ time periods to process. Hover your cursor over the bars to see what each one represents.
You should notice the bars starting to drop (including a rough rate calculation). When there are no bars
remaining in the Measures section, Flow is up to date again.
Open the “Packaging Event Report” you created earlier. Expand the “Filler 1 Run” event and add the
“Filler 1 Efficiency (%)” and “Filler 1 Quality (%)” measures to it …
Create a new “Packaging Dashboard” in the “Packaging” report folder. Open it and add the “Packaging
Event Report” to the canvas. Resize the panel to 9 x 3 blocks.
Your Packaging Line 1 Manager is particularly interested in the hourly good production, the “Day to
Date” good production and the quality. Create a new hourly “widget” report for these three measures
and add it to the dashboard. Name it “Filler 1 Widget”, open it and change the Report Type to “Widget”.
Refresh your dashboard view to see the layout of the widget report. Notice the “rows” as per the
widget definition. Adding more measures to each “row” provides the flexibility to create a grid of
widgets.
We have space on our dashboard for one more report. Let’s create a time-based report for “Filler 1”
that shows our production and efficiency for each hour, for each product. Create a new hourly report
in the “Packaging” report folder. Open it and create the following report definition …
Notice the “DTD” measure has been placed in the “Cumulative Aggregations” Day placeholder for the
“Good Production” report measure. This will add a column at the end of the report to display the “DTD”
value.
Notice the “Context” property of each report measure. Even though you are creating an hourly (time-
based) report, because these measures are associated with an event, you can “slice” the report by the
Event Attribute context.
Index Page – click this link to open the Index / Menu for the Flow System. The Index displays the
Dashboards and Reports in alphabetical order.
o Favorite – if you are signed in, click here to favorite a report or dashboard. After refreshing,
all favorites are moved to the top of the menu
o Show Usage – click here to display a chart indicating the frequency of use
o Show More – if you are not signed in, the menu will only show the top few items. Click
“Show more” to display all items.
Pop Out – when hovering your mouse over a dashboard panel, notice the “pop out” button. Click
the “pop out” button to open the panel in a separate browser tab.
Sign In – signing in allows you to view the dashboards, reports and forms that you have access to. It
also allows you to set or remove your favorite dashboards, reports and forms. Click here to sign in
using your Windows user account:
Messaging System
The Flow Messaging System extends the reach of Flow information by pushing configured messages,
reports and dashboards to external Notification Services like Email, SMS (Short Message Service) or
Flow Mobile, either on a schedule or when an event occurs.
This functionality can be used to notify groups of people of certain events, or send them certain
information periodically. Using the SMS or Flow Mobile Notification Servers to deliver messages
augments the social nature of our smart device lifestyle. Keep your operations managers “in the loop”
without them having to constantly monitor reports and dashboards.
Message Channel
Message Channels are used to group messages by production area, department or even project. The
Channels are also responsible for defining who receives messages. A user can be “subscribed” to a
Message Channel if they are given access.
Message Trigger
This defines when a message will be compiled and sent. A trigger can be one of the following types:
Time Period Trigger – this defines a scheduled trigger based on a Calendar period (e.g. daily)
Event Period Trigger – this defines an event trigger based on an Event definition (e.g. end of
batch)
Limit Exceeded – this defines a limit trigger based on a measure’s limit being exceeded (e.g.
temperature too high)
Message Contents
This defines what a message will contain (e.g. measure values, measure limits, event periods, report
screenshots, dashboard screenshots, etc.).
Notification Services
This defines how a message will be sent (i.e. the external delivery mechanism):
SMTP Email
SMS (via web services)
Flow Mobile (iOS App)
Notification Services
Messages
Right-click in the message definition section to create a new “Channel”. Name it “Packaging”.
Double-click the “Packaging” Channel to open its editor. The “Allow” section of the editor defines which
user groups have access to receive messages defined in this channel. By default, the “Everyone” group
has access to the channel. However, giving a group access to the channel does not mean that the users
in that group are “Subscribed” to the channel.
The “Subscribed Users” section of the editor defines which users are subscribed to receive messages
defined in the channel.
Select the “Users” tab at the bottom of the Model View, and drag the “Packaging” group into the
“Allow” section. Notice the available users that can now subscribe to the channel. Both “Windows
Users” and “Message Recipient” users can subscribe to Message Channels.
Drag an individual user from the “Allow” section to the “Subscribed Users” section.
Drag a whole group from the “Allow” section to the “Subscribed Users” section. Note that this will
subscribe all the users currently defined in that group, but will not automatically subscribe users that
are added to the group at a later stage.
From the Flow Server, a signed in user can view the Channels they have access to via their “Profile”.
Clicking the available channels in the “Subscriptions” sections allows the user to toggle which
channels they want to subscribe to.
Right-click the “Packaging” Channel and create a new Message. Name it “Packaging Daily Scorecard”.
General Properties
The top section of the editor displays the general properties for the Message:
Message Trigger
The message trigger determines when a message needs to be compiled and sent. For this lab, let’s
create a message that is sent on a daily schedule. Create a new “Time Period Trigger” in one of the
following ways:
Right-click in the Message Triggers section to create a new “Time Period Trigger”.
Drag the Daily icon from the Flow Zone into the Message Triggers section.
When – either the start or the end of the period, in this case “Daily” (i.e. at the end of the day)
Calendar – this specifies which calendar the time period is derived from. If the calendar’s day starts
and ends at 06:00, then this will be the time period used for “the end of the day” trigger evaluation.
Refresh offset – this is a number of seconds after “the end of the day” that Flow will attempt to
compile the message contents. A default value of 300 seconds (i.e. 5 minutes) is set. If you find
that some information is not being provided in the compiled messages, you may need to increase
this Refresh Offset.
Delay sending – this is the number of seconds after “the end of the day” that Flow will send the
message. If the delay is set to the same value as the Refresh Offset, then Flow will send the message
as soon as it is compiled. If “the end of the day” is 06:00, you may want to set the delay to 1800
seconds (i.e. 30 minutes) to send the message at 06:30.
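Working through the 06:00 example: with a Refresh Offset of 300 seconds and a Delay of 1800 seconds, the message is compiled at 06:05 and sent at 06:30 (the date below is arbitrary):

```python
from datetime import datetime, timedelta

day_end = datetime(2024, 3, 6, 6, 0)       # calendar day ends at 06:00
refresh_offset = timedelta(seconds=300)    # compile the contents 5 minutes later
delay_sending = timedelta(seconds=1800)    # send the message 30 minutes later

compile_at = day_end + refresh_offset      # 06:05
send_at = day_end + delay_sending          # 06:30
```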
Message Contents
Now that we have determined when the “Packaging Daily Scorecard” will be triggered, let’s define what
the message should contain. Expand the “Message Contents” section:
You will notice that by default, Flow creates some initial message content. The message content is
defined by one or more Sections, and each section can contain one or more Segments. Think of sections
as a grouping of your message content.
In a message, one section can be defined as the message subject. This is displayed as the bold section.
Within each section, one segment can be defined as that section’s heading. This is displayed as the
bold segment.
Looking at the default content Flow has created, the section “Packaging Daily Scorecard” is set to be
the subject (i.e. bold). The single segment within the section is the text “Packaging Daily Scorecard”.
This is the text that will be used by Flow as the subject of the message.
Create two new sections, one named “Summary” and the other named “Dashboard”:
Notice that the new segment is set to represent the heading (i.e. bold) of the “Summary” section. The
segment text is also set to “Summary”, as derived from the section name. Edit this segment text to
“Packaging Daily Summary”.
Create a new “Measure Value” segment. Drag the “Filler 1 Total Production” measure from the Model
View across to the new segment’s “Measure” textbox:
The segment text is made up of static text and placeholders. Right-click in the segment text section to
add one or more available placeholders. For this lab, use the simple example:
When Flow evaluates this segment, the placeholders will be replaced by actual names, values or text
strings, e.g.
You can change the segment text as required. See the section on page 139 for a description of the
available placeholders you can add for each Segment type.
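The substitution idea can be sketched with a simple regex (a toy renderer, not Flow’s actual engine; Flow’s real placeholders also take parameters such as [Value("0.00")], which this sketch does not handle):

```python
import re

def render_segment(template, context):
    """Replace bare [Placeholder] tokens with values from a context dict.
    Unknown placeholders are left untouched."""
    def sub(match):
        key = match.group(1)
        return str(context.get(key, match.group(0)))
    return re.sub(r"\[(\w+)\]", sub, template)

render_segment("[Measure] was [Value] [UOM]",
               {"Measure": "Filler 1 Total Production",
                "Value": 14500, "UOM": "bottles"})
# → "Filler 1 Total Production was 14500 bottles"
```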
To finish our message contents, let’s add the “Packaging Dashboard” to the “Dashboard” section. Select
the “Reports” tab at the bottom of the Information Model, find the “Packaging Dashboard” you created
earlier, and drag it onto the “Dashboard” section:
Segment Text – this can be left blank (default). If any text or placeholder is included here, Flow will
use it as a hypertext link to open the dashboard via a default browser.
Attachment – by default this is set to “Image”, at “Screen HD 1080” size. Flow will render the
dashboard to an image and include it with the message contents. A 5-second timeout is allowed
for rendering the dashboard, but this value can be increased where required.
Notification Services
Now that we have defined when and what needs to be sent in our message, let’s define how Flow will
send the message to the Message Channel’s subscribers.
Flow makes use of Notification Service plug-ins to send messages. Think of a Notification Service as a
mechanism to deliver a message to an “external” system (e.g. an email system). For this lab, you will
configure an Email Notification Service to deliver your message.
A message definition must have at least one Notification Service assigned to it. Before we can assign a
Notification Service to this message, we will need to configure one.
The “Smtp Email” Notification Service requires the following properties to be set:
Click “Save” and then drag your “Internal Email” Notification Server into the message definition:
Some Notification Servers will require additional properties to be set (e.g. email priority).
Once deployed, the Flow Message Engine will automatically manage the scheduling, compiling and
sending of messages.
In order to facilitate the testing of your message definition and Notification Services, you can manually
trigger a message by right-clicking in the triggers section of a message and selecting “Trigger now”:
Note: Use the Event Viewer on the platform where the Message Engine is deployed to monitor for any message
processing errors. See the section on page 34 for troubleshooting information.
Note: When the Message Engine is ready to send a message, it will pass the message and recipient information
to the configured Notification Services. If the Notification Service is unavailable at this time, the Message Engine
will not attempt to send the message again.
Constant
A “Constant” segment type has no placeholders, only free text.
Measure Value
The following placeholders are available for a “Measure Value” segment type:
Measure
o [Measure] – measure’s name
o [Description] – measure’s description
o [UOM] – measure’s unit of measure
Value
o [Value("0.00")] – measure’s value for most recent period (if value is null, then use 0.00)
o [Quality] – measure’s quality for most recent period
o [Version] – measure’s version for most recent period
Limit
o [Limit] – limit’s name
o [Target] – limit’s target
o [High] – limit’s high setting
o [Low] – limit’s low setting
o [Exceeded("Ouch!", "What happened?")] – if limit has been exceeded, then Flow will
randomly select one of the text parameters (more text parameters can be used in a comma
separated list)
o [Achieved("Well done team!", "Good Work!", "Keep it up!")] – if limit has been achieved,
then Flow will randomly select one of the text parameters
Period
o [PeriodStart("HH:mm")] – period’s start date formatted as per parameter
o [PeriodEnd("HH:mm")] – period’s end date formatted as per parameter
o [Duration] – period’s duration
Condition
o [Positive("Great improvement!", "Keep it up!")] – if value is positive, then Flow will
randomly select one of the text parameters (more text parameters can be used in a comma
separated list)
o [Negative("What happened?", "Ouch!")] – if value is negative, then Flow will randomly
select one of the text parameters
Relative – similar to above, but with an additional parameter that specifies a previous period index
o [Value(-1, "0.00")]
o [Quality(-1)]
o [Version(-1)]
o [Target(-1)]
o [High(-1)]
o [Low(-1)]
o [PeriodStart(-1, "HH:mm")]
o [PeriodEnd(-1, "HH:mm")]
o [Duration(-1)]
Measure Attribute Value
A “Measure Attribute Value” segment can be used when a measure is sliced by additional Event context.
The following placeholders are available for a “Measure Attribute Value” segment type:
Measure
o [Measure] – measure’s name
o [Description] – measure’s description
o [UOM] – measure’s unit of measure
Attribute
o [Attribute] – attribute’s name (e.g. Product)
o [AttributeValue] – attribute value’s name (e.g. Orange, Apple)
Value
o [Value("0.00")] – measure’s value for most recent period (if value is null, then use 0.00)
o [Duration] – period’s duration relating to this attribute value
o [Quality] – measure’s quality for most recent period
o [Version] – measure’s version for most recent period
Period
o [PeriodStart("HH:mm")] – period’s start date formatted as per parameter
o [PeriodEnd("HH:mm")] – period’s end date formatted as per parameter
Relative – similar to above, but with an additional parameter that specifies a previous period index
o [Value(-1, "0.00")]
o [Quality(-1)]
o [Version(-1)]
o [PeriodStart(-1, "HH:mm")]
o [PeriodEnd(-1, "HH:mm")]
o [Duration(-1)]
Event Period
An “Event Period” segment can be used when a message is triggered by an Event Period. The following
placeholders are available for an “Event Period” segment type:
Event
o [Event] – event’s name
Period
o [PeriodStart("HH:mm")] – period’s start date formatted as per parameter
o [PeriodEnd("HH:mm")] – period’s end date formatted as per parameter
o [Duration] – period’s duration
o [Index("0000")] – period’s index formatted as per parameter
Relative – similar to above, but with an additional parameter that specifies a previous period index
o [PeriodStart(-1, "HH:mm")]
o [PeriodEnd(-1, "HH:mm")]
o [Duration(-1)]
o [Index(-1, "0000")]
Event Attribute Value
The following placeholders are available for an “Event Attribute Value” segment type:
Event
o [Event] – event’s name
Attribute
o [Attribute] – attribute’s name (e.g. Product)
o [AttributeValue] – attribute value’s name (e.g. Orange, Apple)
Period
o [PeriodStart("HH:mm")] – period’s start date formatted as per parameter
o [PeriodEnd("HH:mm")] – period’s end date formatted as per parameter
o [Duration] – period’s duration
o [Index("0000")] – period’s index formatted as per parameter
Relative – similar to above, but with an additional parameter that specifies a previous period index
o [PeriodStart(-1, "HH:mm")]
o [PeriodEnd(-1, "HH:mm")]
o [Duration(-1)]
o [Index(-1, "0000")]
Event Measure Value
The following placeholders are available for an “Event Measure Value” segment type:
Event
o [Event] – event’s name
Measure
o [Measure] – measure’s name
o [Description] – measure’s description
o [UOM] – measure’s unit of measure
Value
o [Value("0.00")] – measure’s value for most recent period (if value is null, then use 0.00)
o [Quality] – measure’s quality for most recent period
o [Version] – measure’s version for most recent period
Attribute
o [Attribute] – attribute’s name (e.g. Product)
o [AttributeValue] – attribute value’s name (e.g. Orange, Apple)
Period
o [PeriodStart("HH:mm")] – period’s start date formatted as per parameter
o [PeriodEnd("HH:mm")] – period’s end date formatted as per parameter
o [Duration] – period’s duration
o [Index("0000")] – period’s index formatted as per parameter
Condition
o [Positive("Great improvement!", "Keep it up!")] – if the value is positive, then Flow will
randomly select one of the text parameters (more text parameters can be used in a comma-separated list)
o [Negative("What happened?", "Ouch!")] – if the value is negative, then Flow will randomly
select one of the text parameters
Relative – similar to above, but with an additional parameter that specifies a previous period index
o [PeriodStart(-1, "HH:mm")]
o [PeriodEnd(-1, "HH:mm")]
o [Duration(-1)]
o [Index(-1, "0000")]
Report Link
The following placeholders are available for a “Report Link” segment type:
Report
o [Report] – report’s name
Dashboard Link
The following placeholders are available for a “Dashboard Link” segment type:
Dashboard
o [Dashboard] – dashboard’s name
o [Link] – dashboard’s hypertext link
Notification Servers
Flow makes use of Notification Service plug-ins to send messages. Think of a Notification Service as a
mechanism to deliver a message to an “external” system (e.g. an email system). The Notification
Services available to deliver your messages include:
Smtp Email
SMTP (Simple Mail Transfer Protocol) is the internet standard for electronic mail transmission.
For further details on configuring the “Smtp Email” Notification Service, see the section on page 135.
Twilio
Twilio is a web service that provides bulk messaging. The “Twilio” Notification Service makes use of the
Twilio SMS (Short Message Service) API. The “Twilio” Notification Service does not support Flow
message segments that use attachments (i.e. report or dashboard images).
For you to use the “Twilio” Notification Service, you will need to create your own Twilio account at
https://www.twilio.com. For testing purposes, Twilio offers a limited number of free SMS messages;
thereafter, you will need to pay for the Twilio service.
Note: To send messages from your Twilio account, you will need to create a “sender” Twilio number. This
number can be bought from Twilio based on the country of use. In addition, each recipient’s number must
be verified in Twilio as a “Verified Caller ID” using the recipient’s name and number.
When creating a new “Twilio” Notification Service in Flow, you will need to provide the following details:
Bulk SMS
BulkSMS is a web service that provides bulk SMS messaging. The “BulkSms” Notification Service does
not support Flow message segments that use attachments (i.e. report or dashboard images).
For you to use the “BulkSms” Notification Service, you will need to create your own BulkSMS account
at https://www1.bulksms.com/register. For testing purposes, BulkSMS offers a limited number of free
SMS message credits; thereafter, you will need to pay for SMS bundles based on usage.
When creating a new “BulkSms” Notification Service in Flow, you will need to provide the following
details:
Flow Mobile
Flow Mobile is a collaborative messaging app for iOS that allows channel subscribers to interact with
each other and receive messages from Flow Systems via “Flowbot”.
When creating a new “Flow Mobile” Notification Service in Flow, you will need to provide the following
details:
Name – set this to a name that describes the Flow Mobile system.
Domain Mask – Flow Mobile authorizes users via their email address. This mask will only allow
messages to be sent to users whose email address uses this domain. This property can be a
comma-separated list of domains. At least one domain mask must be specified.
Note: The Flow Mobile app is currently available for iOS only. Please search “Flow Software” in the Apple App
Store.
Copying an Event
When pasting an Event into a folder, Flow uses the source and destination folder names in a “search and
replace” manner. For example, copy “Filler 1 Run” and paste it into the “Filler 2” folder: Flow takes the
source folder name (i.e. “Filler 1”) and replaces it in the name of the new Event with the destination
folder name (i.e. “Filler 2”), producing “Filler 2 Run”.
It is recommended that you standardize on a naming convention for folders and events so that this
paste functionality can be used to your benefit.
Open the new “Filler 2 Run” Event, set the “Tag” properties to the correct references for triggers and
attribute segments and then deploy …
Copying a Metric
Similar to an Event, when pasting a metric into a folder, Flow uses the source and destination folder
names in a “search and replace” manner. For example, copy the “Filler 1” metric and paste it into the
“Filler 2” folder: Flow takes the source folder name (i.e. “Filler 1”) and replaces it in the name of the new
metric (and all the metric’s measures) with the destination folder name (i.e. “Filler 2”).
After pasting the new metric, you will need to edit the retrieval properties of any Retrieved Measures,
as well as validate the calculations in any Calculated Measures.
Here are a few guidelines and an example to get you started. In this example, we have a single
"Quantity" value recorded every hour for a specific "Work Order" and corresponding "Product". It is
possible that more than one record could be inserted for a single hour during a "Work Order"
changeover. These records are stored in a SQL table called "ProdData". We will be creating an Hourly measure.
Measure Values
When retrieving data for a Measure's value, your SQL script must return a single record for the time
period being queried, e.g.
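For example, for the "ProdData" table described above, a first attempt might look like this (a sketch; only the "Quantity" column name comes from the example):

```sql
select sum(Quantity)
from ProdData
```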
But this would sum all the records in the table. We need to give Flow a bit more information about the
time periods to query. At any point in time, Flow knows what time period it is busy processing for our
measure. We can use Flow [Placeholders] to augment this SQL query:
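A sketch of such a query follows. Note that the timestamp column name ("DateTime") and the exact quoting of the placeholders are assumptions for this example:

```sql
select sum(Quantity)
from ProdData
where DateTime >= '[PeriodStart]'
  and DateTime < '[PeriodEnd]'
```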
Flow will now sum the records for the hour that is being processed.
For this example, we may want to filter our SQL query for the "Filler 3" Machine only.
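Assuming the table has a "Machine" column (an assumption for this example), the filter can be added to the where clause:

```sql
select sum(Quantity)
from ProdData
where DateTime >= '[PeriodStart]'
  and DateTime < '[PeriodEnd]'
  and Machine = 'Filler 3'
```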
One last thing ... because this query uses a SQL aggregate function (i.e. "sum"), it would return
a NULL value if there were no records for a specific hour. Flow will handle the NULL values, but it may
be more elegant for the SQL query to return 0 rather than NULL:
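On Microsoft SQL Server this can be done with "isnull" (a sketch, reusing the assumed column names from above):

```sql
select isnull(sum(Quantity), 0)
from ProdData
where DateTime >= '[PeriodStart]'
  and DateTime < '[PeriodEnd]'
  and Machine = 'Filler 3'
```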
Event Triggers
When retrieving data for an Event's triggers, your SQL script must return zero or more "Timestamps",
e.g.
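For example (the "Start" column is taken from the example data):

```sql
select Start
from ProdData
```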
But this would return the "Start" value for all the records in the table. We need to give Flow a bit more
information about the time span to query. Like the measure value's query, we can use Flow
[Placeholders] to augment this query:
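A sketch of the augmented trigger query follows (the placeholder quoting is an assumption for this example):

```sql
select Start
from ProdData
where Start >= '[PeriodStart]'
  and Start < '[PeriodEnd]'
```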
Flow will now return all the "Starts" found during the last period being processed.
For the example data above, this would result in a new Event Period being created at least every hour.
What we actually want is a new Event Period every time the "Work Order" changes. Let's change our
SQL query to a "group by" on the "Work Order" column and select the minimum "Start":
group by WorkOrder
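The complete query might look like this (a sketch; the time-window placeholders in the where clause are assumptions for this example):

```sql
select min(Start)
from ProdData
where Start >= '[PeriodStart]'
  and Start < '[PeriodEnd]'
group by WorkOrder
```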
Flow will create a new Event Period every time the Work Order changes.
Like the measure value and event trigger queries, we can use a Flow [Placeholder] to retrieve an Event
Attribute Segment's value:
select Product
from ProdData
Flow defaults the [TimeStamp] placeholder to 0 seconds after the Event Period has started. This can
be changed by modifying the attribute segment's "Retrieve Point".
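Putting this together with the [TimeStamp] placeholder, the attribute query might look like this (a sketch; the timestamp column name and the "top 1 … order by" selection strategy are assumptions for this example):

```sql
select top 1 Product
from ProdData
where DateTime <= '[TimeStamp]'
order by DateTime desc
```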
For the example data above, let's create a "Work Order" Attribute and a "Product" Attribute:
select WorkOrder
from ProdData
select Product
from ProdData
In this example, the Measure Values, Event Triggers and Event Attribute Segment values all came from
the same SQL table. Something to consider is that each of these Flow retrievals could come from
different data sources.
Advanced Concepts
This section describes a few concepts that are not typically used every day. However, it is important to
understand that Flow can be used to achieve these things.
Filter Tag – this is the tag Flow will use to compare against the filter condition to determine whether
the detailed data should be used in the calculation or not.
Filter Comparator – this is the comparator part of the filter condition.
Filter Value – this is the value part of the filter condition.
Filter Default – this is used by Flow if all the detailed data is filtered out of a time period. Flow will
use this default value for the measure value for that time period.
In this example the measure has a connection to the “Historian” and to the “Planning System”. The
“Historian” connection uses a specific tag to retrieve an average value. The “Planning System”
connection may use a SQL query to collect a value from a validated data source.
But when would you use this? Open the “Context” section of a Retrieved Measure and select the
“Properties” tab …
Where more than one connection has been linked to a Retrieved Measure, the Refresh Offset
properties can be associated with a specific connection.
Let’s discuss a scenario where this would be useful. Let’s assume a manually validated number in the
“Planning System” Data Source is only available at 09h00 every day (maybe it needs to be checked by
a person first). However, the morning shift handover meeting happens at 06h00. You would like to
provide an “initial” value from your “Historian” tag for the 06h00 meeting, and then when the “Planning
System” data is available at 09h00, get that value for your measure. To do this, you would set the “60”
second Refresh Offset to use the “Historian”, and create another Refresh Offset of “10800” seconds (3
hours) which uses the “Planning System”. If the value from the “Planning System” at 09h00 is different
from the value retrieved from the “Historian” at 06h00, Flow will create a new version of the measure’s
value and set it to “Preferred”, hence making it the value used in reports.
Relative Period
When selecting a dependent measure in a measure calculation, select the “Properties” tab. Set the
“Dependent” property to “Previous period” and the “Relative End” to -1.
Flow will now use the previous period of this measure in your calculation expression.
Relative Range
Set the “Dependent” property to “Previous range”, “Relative Start” to -30 and the “Relative End” to 0.
Flow will now use an array of values in your calculation expression. You will need to use this array in a
Built-In or User Defined Function.
This Relative Range calculation is useful for “moving window” calculations (e.g. moving average, moving
sum, etc.).
Logo – add your company logo file. Flow will use it in the header of the reports and dashboards.
Color 3 – this is the color Flow uses for the dashboard panel title bars. The default color is the Flow
green, but it is recommended that this be changed to one of your company’s accent colors.
Port – this is the port used by the Flow Report Server. Port 80 is the default HTTP port.
Your Flow System can be extended by adding new modules to it. Some of the components can be
updated by importing the latest releases. Use the “Import” menu to add or update these modules.
Why is this important to you? If Flow Software releases a new Data Source type in the future, you
would be able to add that new module here and use the new Data Source without needing to reinstall
your Flow System.
Flow Directories
The Flow System uses a number of standard Operating System directories. In general, these directories
do not need to be accessed, but for information purposes, they are listed below:
Bootstrap
Config
The Flow Simulator is configured using an XML file. You will find this file in your “C:\ProgramData\Flow
Software\Flow\Config\Simulator” folder. You can modify this file incrementally until you are comfortable
with how it works.
Once you have built your “Simulator.xml” file, you can add it to your Simulator connection by editing
its “Definition” property and clicking “Save”…
After installing Flow, each Flow System you create will automatically generate a Demonstration License.
This license will allow you to run 100 measures, 5 events and unlimited reports for 30 days.
Software Requirements
Operating Systems
Windows 10 64-bit
Windows Server 2012 64-bit
.Net 4.5
Hardware Requirements
There are many deployment architecture options available for Flow. Flow has been designed for a
distributed and modular architecture, but can be installed and deployed on a single server.
Flow Config
The Flow data store is a relational Microsoft SQL database. For a Flow System consisting of 10 000
measures, the database can accumulate around 60 million records per year, depending on the
granularity of measure configuration.
Appropriate hardware sizing must be performed to meet expected Flow retrieval and reporting
performance. As a minimum, the following is recommended:
Licensing
Flow is currently available in the following sizes:
A rough guide for sizing your Flow System is to use 10% of your Historian tag count as a starting point.
Please join the online Flow Community. Ask questions, make suggestions, become a known expert.