Version 2021
MicroStrategy 2021
March 2023
Copyright © 2023 by MicroStrategy Incorporated. All rights reserved.
Trademark Information
The following are either trademarks or registered trademarks of MicroStrategy Incorporated or its affiliates in the United States and certain other
countries:
Other product and company names mentioned herein may be the trademarks of their respective owners.
Specifications subject to change without notice. MicroStrategy is not responsible for errors or omissions. MicroStrategy makes no warranties or
commitments concerning the availability of future products or versions that may be planned or under development.
CONTENTS
Overview of Platform Analytics 6
Object Hierarchy 76
Security Filter 93
Prompt Hierarchy 95
Cache Hierarchy 100
Configuration Objects Hierarchy 110
Overview of Platform Analytics
Platform Analytics is the monitoring tool introduced in MicroStrategy 2019 that
captures real-time data from the MicroStrategy platform and uses this data to
enable smarter administration and to provide a better experience to your
MicroStrategy users. Platform Analytics allows you to optimize the
performance of your MicroStrategy system, engage your users with your
analytics content, and secure your data by tracking who is accessing it.
l Users: To understand what users do when they are in the system, and to
improve their experience and engagement with MicroStrategy.
l Cubes: To ensure that cubes are being fully leveraged to improve the
performance of the key content in MicroStrategy environments.
See the following topics for more information about Platform Analytics:
Platform Analytics Architecture and Services
Components and Architecture
The following is a list of components that are part of the overall Platform
Analytics dependencies and architecture:
l Platform Analytics Store: This component reads the data that the
Intelligence Telemetry and Identity Telemetry producers send to the
Telemetry Server layer, transforms this data, and loads it in the Platform
Analytics Repository.
l Platform Analytics Cube: This data import cube contains 14 days' worth
of Platform Analytics data and is used to feed data to all the standard
Platform Analytics dossiers.
Services
The following third-party services are automatically installed along with
Platform Analytics. These services allow Platform Analytics to capture and
analyze data from the MicroStrategy platform.
Apache Kafka
Kafka is a distributed streaming platform that processes and stores real-time
data streams. It provides horizontal scalability, low latency, and high
throughput. Kafka producers allow an application, such as the Intelligence
Server, to publish records to one or more topics, while Kafka consumers
allow applications to subscribe to and consume the data available in those
topics.
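The publish/subscribe relationship described above can be sketched in plain Python. This is a toy in-memory model, not the actual Kafka client API: it only illustrates how producers append to a topic log while each consumer group reads from its own offset.

```python
from collections import defaultdict

class MiniTopic:
    """A toy model of a Kafka topic: an append-only log that
    independent consumer groups read at their own offsets."""
    def __init__(self):
        self.log = []                    # append-only record log
        self.offsets = defaultdict(int)  # consumer group -> next offset to read

    def publish(self, record):
        """Producer side: append a record to the topic."""
        self.log.append(record)

    def consume(self, group):
        """Consumer side: return unread records for a group and
        advance that group's offset."""
        start = self.offsets[group]
        records = self.log[start:]
        self.offsets[group] = len(self.log)
        return records

topic = MiniTopic()
topic.publish({"event": "report_execution", "user": "alice"})
topic.publish({"event": "cube_publish", "user": "bob"})

print(topic.consume("telemetry-store"))  # both records
print(topic.consume("telemetry-store"))  # [] -- already consumed
```

Because each group keeps its own offset, a second consumer group subscribing later would still see the full log, which is the property that lets multiple independent consumers process the same telemetry stream.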
Apache Zookeeper
ZooKeeper is a centralized coordination service. It facilitates the
implementation of distributed applications by providing low-level
synchronization and configuration functionality that is frequently useful in
distributed applications.
Redis
Redis is an in-memory data structure store. It provides caching mechanisms
that are ideal for optimizing the performance of data-intensive distributed
services like Platform Analytics.
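The caching role a store like Redis plays can be illustrated with a cache-aside sketch in plain Python. The class, function names, key shape, and TTL value here are illustrative assumptions, not part of Redis or Platform Analytics:

```python
import time

class MiniCache:
    """A toy in-memory key-value cache with expiry, modeling the
    caching role Redis plays for a data-intensive service."""
    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds):
        self._data[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:  # drop expired entries on read
            del self._data[key]
            return None
        return value

def fetch_user_sessions(user_id, cache, load_from_db):
    """Cache-aside: try the cache first, fall back to the database,
    then populate the cache for subsequent reads."""
    cached = cache.get(("sessions", user_id))
    if cached is not None:
        return cached
    sessions = load_from_db(user_id)
    cache.set(("sessions", user_id), sessions, ttl_seconds=60)
    return sessions

cache = MiniCache()
calls = []
def load_from_db(user_id):
    calls.append(user_id)  # count database hits
    return ["session-1", "session-2"]

fetch_user_sessions("alice", cache, load_from_db)
fetch_user_sessions("alice", cache, load_from_db)
print(len(calls))  # 1 -- the second read was served from the cache
```

The design point is that repeated reads of hot telemetry data never touch the slower backing store while the cached entry is fresh.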
l The Telemetry Store prevents two consumer instances from running on the
same machine, which helps reduce the risk of data loss and data integrity
issues. However, you should also ensure that a separate Telemetry Store is
not installed on another machine and configured to write to a Platform
Analytics Repository that is already in use.
l There must only be one Telemetry Store consumer writing data to one
Platform Analytics Repository. If there are two Telemetry Store consumers
writing data to the same PA Repository, there will be data loss and data
integrity issues.
l If you are using Amazon Relational Database Service (RDS), it is easy to
set up read replicas if you have use cases involving heavy read queries
against Platform Analytics data.
l If you are using RDS or self-managed PostgreSQL, it is easier to manage
system resources and perform capacity planning.
Client Telemetry
Client telemetry was introduced as a preview feature in MicroStrategy 2021
Update 8 (December 2022). Starting in MicroStrategy 2021 Update 9, it is
available out-of-the-box.
l Developers
l BI Architects
l Administrators
Developers and architects can discover end user interaction patterns with
dossiers and the longevity of viewing sessions, and take action to increase
engagement. This can be done by understanding what pages users have
been visiting, learning how much time was spent in a dossier, investigating
whether certain analytics tools (such as drilling, sorting, or showing totals)
are used, and ultimately learning the sequential order of user clicks. If you
have multiple custom applications deployed, you can gauge how successfully
each deployment delivers content to end users.
Administrators can detect potential system bottlenecks that were not visible
before, such as sluggish client receive times or device/browser rendering
times. In addition, Platform Analytics captures the entire history of devices
used to connect to the platform. Insightful data about device types or client
versions can help administrators ensure that the user base remains on the
most recent and secure version of the client.
l MicroStrategy App
Related Topics
l Providing a fully scalable ETL layer that is always on and does not require
scheduling.
l Using a simplified data model that utilizes less space in the data
warehouse.
l Enabling users who are not administrators to see the data they care about.
based on a count against the fact_object_component table. A sample report is:
Attributes: Metadata, Project, Component Object, Component Object Type, Object Type
Metrics: Objects
Configuration Object (ID, GUID, Description, Location, Version, Abbreviation) → Derived attributes. The owner_id form of all configuration objects.
Connection (ID, Source, DESC, SDESC) → Session (ID, Source, DESC). SDESC is defined as:
IF((Position("MICROSTRATEGY", [EM_CONNECT_DESC]) = 0), [EM_CONNECT_DESC], SubStr([EM_CONNECT_DESC], 15, (Length([EM_CONNECT_DESC]) - 14)))
Consolidation (ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted, Owner Name) → Object (ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version, Owner Name). Abbreviation and Access Granted are not available in Platform Analytics.
Contact (ID (GUID), DESC) → Recipient (ID, GUID, Name).
Cube Hit Indicator (ID; DESC: CUBE EXECUTION, DB EXECUTION) → Action Type (ID, DESC). In Platform Analytics, the filter named Action Type = Cache Hit shows only executions that hit a cache. If the report, document, or dossier execution is done against the database, it is not a Cache Hit and falls under Action Type = Execution.
Custom Group (ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted, Owner Name) → Object (ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version, Owner Name). Abbreviation and Access Granted are not available in Platform Analytics.
Day (ID: 2016-12-01; DESC: 12/01/2016 Thursday; Short Desc: 12/01/2016 Thu; Date: 2016-12-01) → Date (ID: 2017-01-01; Day of Week@desc: Wednesday; Day of Week@short desc: Wed). Day of Week (element) is a standalone attribute in Platform Analytics.
Location Modification
Timestamp
Version
Version
Abbreviation
Status
Access Granted
Timestamps (Compound): EXEC REQ TS, EXEC START TS, EXEC FINISH TS → Status@desc (Status attribute); Initial Queue Duration (ms) (fact); Action Start Timestamp UTC (metric); Action Finish Timestamp UTC (metric).
Document Job Step (ID (Compound): Sequence, StepID, JobID, SessionID; Timestamps (compound): STEP_ST_TS, STEP_FN_TS) → Job Step (ID: Step Sequence; Job@ID (Job attribute); Session@GUID (Session attribute); Job Step Start Timestamp (UTC) (fact); Job Step Finish Timestamp (UTC) (fact)).
Document Job Step Type (ID; DESC: UNKNOWN, HTML RENDERING) → Job Step Type (ID, DESC). In Platform Analytics, document and report step types are combined into a single attribute called Job Step Type.
Document Type (ID; DESC: HTML DOCUMENT, Dashboard, Document, Report Writing Document) → Object Type (ID; DESC: HTML Document, Dossier, Document). In Platform Analytics, dashboard is renamed to dossier.
Error (ID: Job error code; DESC: Exact error message) → Status/Status Category. Status Category provides a high-level grouping to analyze individual error messages based on the JobErrorCode. Status is the exact error message that is recorded in the logs. Most errors in the logs are recorded at the unique job and session level. Therefore, when trying to determine the "most frequent error" occurring in a MicroStrategy environment, an aggregate count of errors at the Status level almost always results in 1. To improve reporting for this use case, Platform Analytics can aggregate the telemetry at the level of Status Category.
Error Indicator (ID; DESC: NO ERROR, ERROR) → Status/Status Category (ID; DESC: Successful); Session@GUID (Session attribute); history list message@status (History List Message attribute). If you are trying to analyze successful actions in Platform Analytics, use the OOTB filter called Action Status = Success.
Inbox Action Type (ID; DESC: ADD, REMOVE, BATCH REMOVE, RENAME, CHANGE STATUS, EXECUTE, REQUESTED) → Action Type (ID; DESC: Create History List Message (add); Delete History List Message (remove and batch remove); Rename History List Message (rename); Change History List Message Status (change status); Execute History List Message; View History List Message (execute)).
N/A → Intelligence Server Machine (ID, Name, IP). In Platform Analytics, there are three attributes to represent the Intelligence Server configuration: Intelligence Server Machine, Intelligence Server Instance, and Intelligence Server Definition. A single Intelligence Server Machine can host multiple Intelligence Server instances running on different ports. For this reason, the attribute name is Intelligence Server Instance. The Intelligence Server Machine is represented as a separate attribute without the port.
Location Modification
timestamp
Version
Location
Abbreviation
Version
Access Granted
Owner Name
Owner Name
refresh
Policies
Intelligence Server Cluster (ID, DESC) → Intelligence Server Cluster (ID).
Job Error Code (CID (Composite ID form, because the same job error codes can occur with different descriptions); Job Error Code; Error Description) → Status (ID; Status Category; DESC). Status Category provides a high-level grouping to analyze individual error messages based on the JobErrorCode. Status is the exact error message that is recorded in the logs.
Metadata (ID; DESC (GUID)) → Metadata (ID; GUID; Metadata Connection String).
LDESC:
01/01/2005
Saturday
Prompt Answers (SESSION_ID; Prompt Answers; Count of Answers) → Session@GUID (Session attribute); Prompt Answer@DESC (Prompt Answer attribute); Prompt Actions (fact).
Prompt Type (ID; DESC: STRING, ELEMENTS, DOUBLE, OBJECTS) → Prompt Type (ID; DESC: Attribute Element Prompt, Object Prompt, Value Prompt, Embedded Prompt).
Quarter of Year (ID; DESC: 1st Quarter; Short Desc: Q1) → Derived Attribute. Definition: Quarter([Date@ID]). In Platform Analytics, the quarter of year can be created using a derived attribute in MicroStrategy based off the Date attribute.
Report (ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted, Owner Name) → Object (ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version, Owner Name). Access Granted and Abbreviation are not available in Platform Analytics.
Report Job (CID; Job ID; Session ID; Error DESC; No_of_prompts; PRIORITIZATION; Priority Number; DESC; DB ERROR MESSAGE; Timestamps: EXEC_ST_TS, EXEC_FN_TS) → Multiple facts/attributes: Job@ID (Job attribute); Session@GUID (Session attribute); Status@DESC (Status attribute); SQL Pass@DESC (SQL Pass attribute); Database Error Message@desc (attribute); SQL Pass Start Timestamp (UTC) (fact); SQL Pass Finish Timestamp (UTC) (fact). Assigned Cost is not available in Platform Analytics. To derive the No_of_prompts per report job, build a report with the following <derived> definition: Attributes: Job
Timestamps: EXEC_ST_TS, EXEC_FN_TS → Session@GUID (Session attribute); Job Step Start Timestamp (UTC) (fact); Job Step Finish Timestamp (UTC) (fact).
Report Job ID Job Step ID
Step Type Type
DESC DESC
Twitter
Data Import
Facebook
Data Import
Google Big
Query
Single
Table Data
Import
Custom
SQL
Wizard
Data Import
Single
Table Data
Import
Open
Refine Data
Import
Remote
Data
Source
Data Import
Spark
Server
Data Import
Report/Document Indicator (ID; DESC: REPORT DEFINITION, DOCUMENT DEFINITION) → Object Category (ID; DESC: Reports, Documents).
Location timestamp
Version Version
Abbreviation Status
Access Granted
timestamp
Modification
timestamp
Status
Project Name
(to support
CM delete
command)
Table (ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted, Table Size, Table Prefix, Owner Name) → Object (ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version, Owner Name; Table Size <appended as a suffix to the table name>; Table Prefix <appended as a prefix to the table name>). Access Granted, Abbreviation, and Location are not available in Platform Analytics.
Template (ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation) → Object (ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version). Access Granted and Abbreviation are not available in Platform Analytics.
Required
Standard Auth
Allowed
Trusted Auth
User Id
Version
Email
Login
LDAP Link
NT Link
WH Link
User Group (ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted) → User Group (ID, Name, GUID, Description, Creation timestamp, Modification timestamp, Version, Status). Access Granted, Abbreviation, and Location are not available in Platform Analytics.
User Group (Parent) (ID, DESC) → User Group (ID, Name, GUID, Description, Creation timestamp, Modification timestamp, Status).
Week of Year (ID: 201701; Week Start Date; Short Desc: Y 2017) → Week (ID: 201701; Week Begin).
Database Error Indicator (ID; DESC: 0 NO DB ERROR, 1 DB Error) → Status Database Error Indicator.
Delivery Status Indicator (ID; DESC: FAILED, SUCCESSFUL) → Status (ID, DESC).
Cache Creation Indicator (ID; DESC: NO CACHE CREATION, CACHE CREATION) → Action Category (ID; DESC: Cache Creation).
SELECT → Select
AGGREGATE → Aggregate
FROM → From
WHERE → Where
ORDER BY → Order By
Number of Subscription Executions → Subscription Executions: Number of executions of a subscription.
Number of Subscriptions → Number of Subscriptions in Metadata: Number of subscriptions in the metadata.
Subscription Execution Duration (hh:mm:ss) → Subscription Execution Duration (hh:mm:ss): Sum of all execution times of a subscription in hh:mm:ss.
Subscription Execution Duration (secs) → Subscription Execution Duration (s): Sum of all execution times of a subscription in seconds.
Enterprise Manager Metric Name → Platform Analytics Metric Name: Description
RP Number of Ad-Hoc Jobs → Ad-hoc Jobs: Counts the number of ad-hoc jobs.
RP Number of Data Requests → Standalone Report Executions: Provides the number of report executions requested by users.
RP Number of Jobs For Concurrency → Jobs: Counts the number of job executions.
RP Number of Reporting Jobs hitting Database → Database Jobs: Counts the number of jobs hitting the database.
RP Number of Jobs Today → Jobs Today: Counts the number of job executions today.
RP Number of Jobs w/o Cache Hit → Non Cache Hit Jobs: Counts the number of job executions that did not hit a server cache.
RP Number of Jobs w/o Element Loading → Non Element Load Jobs: Counts the number of job executions that did not result from an element loading.
RP Number of Jobs with Cache Hit → Cache Hit Jobs: Counts the number of job executions that did hit a server cache.
RP Number of Jobs with DB Error → Failed DB Jobs: Counts the number of job executions that caused a database error.
RP Number of Jobs with Element Loading → Element Loading Jobs: Counts the number of job executions that did result from an element loading.
RP Number of Jobs with Security Filter → Security Filter Jobs: Counts the number of job executions that did have a security filter applied.
RP Number of Jobs with SQL Execution → SQL Execution Jobs: Counts the number of job executions that execute SQL.
RP number of Narrowcast Server jobs → Subscription Jobs: Counts the number of job executions run through MicroStrategy Narrowcast Server.
RP Number of Prompts → Distinct Prompts Executed: Counts the number of prompts in a report job.
RP Number of Report Jobs from Document Execution → Child Jobs: Counts the number of job executions that resulted from a document execution.
RP Number of Result Rows → Report Row Count: Counts the number of result rows returned from a report execution.
RP Number of Result Rows for View Report → Cube Row Count: Counts the number of rows in an OLAP View Report job.
RP Percentage of Ad-Hoc Jobs → % Ad-Hoc Jobs: Percentage of ad-hoc jobs vs. total jobs.
RP Percentage of Jobs with Cache Hit → % Cache Hit Jobs: Percentage of jobs that hit a server cache vs. total jobs.
RP Percentage of Jobs with Cube Hit → % Cube Cache Hit Jobs: Percentage of jobs that hit a server cache vs. total jobs.
RP Percentage of Jobs with DB Error → % Failed DB Jobs: Percentage of jobs with database error vs. total jobs.
RP Percentage of Jobs with Error → % Failed Jobs: Percentage of jobs with any error vs. total jobs.
RP Percentage of Narrowcast Server jobs → % Subscription Jobs: Percentage of jobs from Narrowcast Server vs. total jobs.
RP Percentage of Prompted Jobs → % Prompted Jobs: Percentage of prompted jobs vs. total jobs.
RP Percentage of Scheduled Jobs → % Subscription Jobs: Percentage of scheduled jobs vs. total jobs.
RP Jobs with No Data Returned → Jobs with No Data Returned: Counts the number of jobs that returned no data.
RP Export Engine Jobs → Export Engine Jobs: Counts the number of report jobs passing through the export engine.
Number of Sessions per User → Sessions per User: Provides the number of sessions created per connected user.
DP Average Number of Jobs per Session → Jobs per Session: Provides the average number of document jobs per user session.
RP Average Number of Jobs per Session → Jobs per Session: Provides the average number of job executions per session.
RP Average Number of Jobs per User → Jobs per User: Average number of job executions per user.
P Number of Data Request Jobs with Error → Data Request Jobs with Error: Counts the number of jobs requested by a user that encountered an error.
RP Number of Jobs w/o Cache Creation → Jobs w/o Cache Creation: Counts the number of jobs that don't create a cache.
RP Number of Jobs with Cache Creation → Jobs with Cache Creation: Counts the number of jobs that create a cache.
RP number of Narrowcast Server jobs → Subscription Jobs: Counts the number of subscription jobs.
RP Number of Users who ran report → Users Who Execute Jobs: Counts the number of distinct users that executed jobs.
RP Export Engine Jobs → Export Engine Jobs: Counts the number of jobs passing through the export engine.
RP Average Daily Use Duration per job (hh:mm:ss) → Avg Job Execution Duration (s): Provides the max Execution Duration (in seconds) of jobs. A job's Execution Duration records the total time spent during a job.
DP Number of Users who ran Documents → Users Who Ran Documents: Counts the number of distinct users executing a document.
Number of Users running reports → Users Who Ran Reports: Counts the number of distinct users executing a report.
Number of Users running documents → Users Who Ran Documents: Counts the number of distinct users executing a document.
Enterprise Manager Metric Name → Platform Analytics Metric Name: Description
Last # of Intelligent Cube Rows → Last Cube Row Count: Provides the last cube row count.
Last Intelligent Cube Size (KB) → Last Cache Size (KB): Provides the last Intelligent cube size in KB recorded by the Intelligence Server.
Number of Sessions (IS_CUBE_ACTION_FACT) → Sessions: Counts the number of sessions from the table IS_CUBE_ACTION_FACT.
Intelligent Cube Size (KB) / Last Intelligent Cube Size (KB) → Cache Size (KB): Provides the size of a cache instance in KB.
Last # of Intelligent Cube Rows / # of Rows in an Intelligent Cube → Cube Row Count: Counts the total number of rows contained in a cube.
Number of Dynamically Sourced Report Jobs against Intelligent Cubes → Number of Dynamically Sourced Report Jobs against Intelligent Cubes: Counts the number of jobs from reports not based on Intelligent Cubes but selected by the engine to go against an Intelligent Cube because the objects on the report matched what is on the Intelligent Cube.
Number of Intelligent Cube Publishes → Number of Cube Publishes: Counts the number of times a cube was published.
Number of Intelligent Cube Refreshes → Number of Cube Refreshed: Counts the number of times a cube was refreshed.
Number of Intelligent Cube Republishes → Number of Cube Republishes: Counts the number of times a cube was republished.
Number of Jobs with Intelligent Cube Hit → Number of Jobs With Cube Hit: Counts the number of jobs hitting the cube.
Average Cube Publish Time (hh:mm:ss) (IS_CUBE_ACTION_FACT) → Average Time to Publish a Cube (hh:mm:ss): Measures the average time a cube was published.
Number of Consolidations in Metadata → Number of Consolidations in Metadata: Counts the number of consolidations stored in the projects.
Number of Server Definitions in Metadata → Number of Intelligence Servers in Metadata: Counts the number of Intelligence Servers.
Number of Transformations in Metadata → Number of Transformations in Metadata: Counts the number of transformations stored in the projects.
Number of Users (IS_USER_PROJ_SF) → Number of Users in Metadata: Counts the number of users stored in the Intelligence Servers.
Number of Users in Metadata (EM_USER_VIEW) → Number of Users in Metadata: Counts the number of users stored in the Intelligence Servers.
Enterprise Manager Metric Name → Platform Analytics Metric Name: Description
HL Days Since Last Action: Any action → HL Days Since Last Action: Any action: Provides the days since the last action was performed.
HL Days Since Last Action: Request → HL Days Since Last Action: View: Provides the days since the last request for contents of an inbox message.
HL Last Action Timestamp: Any Action → HL Last Action Date: Any Action: Provides the date and time of the last action performed on an inbox message.
HL Number of Actions with Errors → HL Number of Actions with Errors: Provides the number of inbox message actions that resulted in errors.
HL Number of Jobs → HL Number of Jobs: Provides a count of jobs that occur with inbox message requests.
HL Number of Messages → HL Number of Messages: Provides the number of inbox messages.
HL Number of Messages with Errors → HL Number of Messages with Errors
HL Number of Messages: Requested → HL Number of Messages Viewed: Provides the number of requests for the contents of an inbox message.
Enterprise Manager Metric Name → Platform Analytics Metric Name: Description
Successful Jobs < 1 min → Successful Jobs <1 min: Report jobs with execution time less than 1 minute.
Successful Jobs > 5 min → Successful Jobs > 5 min: Report jobs with execution time greater than 5 minutes.
Avg. CPU Duration per Job (secs) → Avg Job CPU Duration (s): Average CPU time taken by the Intelligence server for processing a job.
RP Max. Elapsed Duration per Job (hh:mm:ss) → Max Job Elapsed Duration (s): Provides the maximum report duration per report job.
RP Max. Execution Duration per Job (hh:mm:ss) → Max Job Execution Duration (s): Provides the maximum job execution duration per report job.
RP Max. Queue Duration per Job (hh:mm:ss) → Max Job Queue Duration (s): Provides the maximum queue duration per report job.
DP CPU Duration (secs) → Sum Job CPU Duration (s): Total CPU time taken by the Intelligence server for processing a job.
RP CPU Duration (msec) → Sum Job CPU Duration (s): Total CPU time taken by the Intelligence server to process a job.
RP Average CPU Duration per Job (msecs) → Avg Job CPU Duration (s): Average CPU time taken by the Intelligence server to process a job.
RP Prompt Answer Duration (secs) → Sum Job Prompt Answer Duration (s): Total time taken to answer the set of prompts for a job. For example, Prompt Answer Time.
RP Average Queue Duration per Job (secs) → Avg Job Total Queue Duration (s): Average time a job was in the Intelligence Server queue before the Intelligence Server started processing it.
RP Average Execution Duration per Job (secs) → Avg Job Execution Duration (s): Average time taken by the Intelligence Server to execute a job, including the database execution time.
RP Data Request Error Elapsed Duration hh:mm:ss → Sum Data Request Error Elapsed Duration (s): Measures the sum time taken by the Intelligence Server to process failed jobs. The jobs must be triggered manually.
RP Timed out jobs → Timeout Jobs: Counts the number of jobs that were timed out by the Intelligence server.
RP Execution Duration for SQL Executing Reports (hh:mm:ss) → Sum Database Job Execution Duration (s): Measures total time taken by a job that does not hit a Cache/Intelligence Cube and runs against the database, i.e. jobs that execute SQL.
RP Execution Duration (hh:mm:ss) → Sum Job Execution Duration (s): Provides the total Execution Duration (in seconds) of jobs. A job's Execution Duration records the total time spent during a job.
RP Elapsed Duration (secs) → Max Job Elapsed Duration (s): Provides the max Elapsed Duration (in seconds) of jobs. A job's Elapsed Duration records the total time spent during an execution including the total queue time.
RP Elapsed Duration (hh:mm:ss) → Sum Job Elapsed Duration (s): Provides the total Elapsed Duration (in seconds) of jobs. A job's Elapsed Duration records the total time spent during an execution including the total queue time.
RP Data Request Queue Duration hh:mm:ss → Sum Data Request Queue Duration (s): Provides queue duration during a report, document, or dossier job execution.
RP Data Request Prompt Answer Time hh:mm:ss → Sum Data Request Prompt Answer Time (s): Provides the duration of time for answering the set of prompts in a report, document, or dossier job.
RP Data Request Elapsed Duration hh:mm:ss → Sum Data Request Elapsed Duration (s): Measures the sum time taken by the Intelligence server to process jobs. The jobs must be manually triggered.
RP Average Queue Duration per Data Request Job seconds → Average Queue Duration per Data Request Job (s): Measures the average time a job waits in queue. This job must be triggered.
RP Average Prompt Answer Time per Job (hh:mm:ss) → Avg Job Prompt Answer Duration (s): Provides the average Prompt Answer Duration (in seconds) of jobs. A job's Prompt Answer Duration records the total time spent answering a prompt during a job.
RP Average Prompt Answer Duration per Data Request Job seconds → Average Prompt Answer Duration per Data Request Job (s): Measures the average time taken by the Intelligence server to process a prompt answer. This job must be triggered.
RP Average Execution Duration per Job (hh:mm:ss) → Avg Job Execution Duration (s): Provides the average Execution Duration (in seconds) of jobs. A job's Execution Duration records the total time spent during a job.
RP Average Execution Duration per Data Request Job seconds → Average Execution Duration per Data Request Job (s): Measures the average time taken by the Intelligence server to execute the failed job. This job must be triggered.
RP Average Elapsed Duration per Job (hh:mm:ss) → Avg Job Elapsed Duration (s): Provides the average Elapsed Duration (in seconds) of jobs. A job's Elapsed Duration records the total time spent during an execution including the total queue time.
RP Average Elapsed Duration per Data Request Job seconds → Average Elapsed Duration per Data Request Job (s): Measures the average time taken by the Intelligence server to process the job. This job must be triggered.
RP Average Elapsed Duration per Data Request Error Job seconds → Average Elapsed Duration per Data Request Error Job (s): Measures the average time taken by the Intelligence server to process the failed job. The job must be manually triggered.
Platform Analytics Data Model
Object Hierarchy
The Object hierarchy and fact tables track all key schema objects (tables,
facts, attributes, etc.) and application objects (reports, dashboards, cubes,
etc.) stored in the MicroStrategy metadata(s) being monitored by Platform
Analytics. The Object hierarchy does not record data related to configuration
objects (subscriptions, schedules, users, user groups, etc.). Configuration
objects are stored in separate hierarchies.
lu_object_category
The Object Category is a high-level categorization of the types of objects in
the metadata, such as reports, attributes, documents, metrics, and more. This
table and the corresponding attribute act as key filters/selectors for
analyzing particular types of objects in the metadata. The data in this table
is static and predefined.
object_category_desc (varchar(128)): The fixed list of Object Categories. Sample elements include:
l Attributes
l Columns
l Reports
l Cubes
lu_object_type
The Object Type for a specific Object stored in the metadata(s) being
monitored. This attribute provides more granular grouping options for
objects. For example, if an object's category is Cube, its type may be OLAP
Cube or Data Import Cube. The data in this table is static and predefined.
object_type_id (smallint(6)): The fixed numeric ID for the Object Type.
object_type_desc (varchar(128)): The fixed list of Object Types. Sample elements include:
l OLAP Cube
lu_object_extended_type
The Object Extended Type for a specific Object Type stored in the metadata(s)
being monitored. This attribute provides more granular object types, such as
MDX reports or data import cubes. The data in this table is static and
predefined.
extended_type_id (int(11)): The fixed numeric ID for the Extended Type. This column is the source of the Object Extended Type attribute.
lu_object
The Object table contains the distinct application or schema objects stored in
the metadata for a specific project. Each object has a unique GUID and is
defined at the Project level.
object_guid (varchar(32)): The GUID of the Object in the metadata.
object_desc (varchar(512)): The description of the object.
creation_date (date): The UTC date when the object was first created. This column is the source of the Object Creation Date attribute.
modification_date (date): The latest date when the object was last modified. The date will continue to update as the object is modified. This column is the source of the Object Modification Date attribute.
creation_timestamp (datetime): The UTC timestamp for when the object was first created.
modification_timestamp (datetime): The latest UTC timestamp for when the object was last modified. The timestamp will continue to update as the object is modified.
l Hidden
l Deleted
l Ad-hoc
varchar
object_version The version ID of the object.
(32)
The flag used to track if the object has been certified in the
metadata. The flag can be:
varchar
object_certified l Not Applicable
(14)
l N
l Y
lu_component_object_category
A view on the lu_object_category warehouse table. This table tracks the
categorization of child component objects nested within an Object. The data
in this table is static and predefined.
lu_component_object_type
A view on the lu_object_type warehouse table. This table tracks the Object
Types of child Component Objects nested within an object. It provides a
more granular analysis of the Object Category. The data in this table is
static and predefined.
component_object_category_id (warehouse column: object_category_id, smallint(6)): The numeric ID of the corresponding Component Object Category.
lu_component_object
A view on the lu_object warehouse table. This table lists the distinct
application or schema objects stored in the metadata for a specific project.
Each Component Object has a unique GUID and is defined at the Project
level.
component_object_desc (warehouse column: object_desc, varchar(512)): The description of the Component Object.
component_object_location (warehouse column: object_location, varchar(1024)): The navigation path to the Component Object in the Project.
fact_object_component
An Object in MicroStrategy can exist as a standalone entity, or it may be
used by other objects and therefore be a Component Object. The
relationship between Objects and their Component Objects is stored in the
fact_object_component table. This table stores only the current direct
relationship between an object and its components. For example, if an
attribute is removed from a report, it will be removed from the fact_object_
component table.
The Object Categories and their corresponding Object Types are:

Attributes: Abstract Attribute, Attribute, Attribute Role, Attribute Transformation, Derived Attribute, Recursive Attribute
Cards: Card
Columns: Column
Consolidations: Consolidation
Custom Groups: Custom Group, Element Grouping
Dossiers: Document, Dossier
Facts: Fact
Filters: Filter, Filter Segment
Folders: User Folder, System Folder
Hierarchies: System Hierarchy, User Hierarchy
Managed Objects: Managed Attribute, Managed Column, Managed Consolidation, Managed Hierarchy, Managed Card, Managed Folder, Managed Metric, Managed Object
Metrics: Metric, Metric Extreme, Metric Subtotal, Reference Line, System Subtotal, Training Metric
Projects: Project
Prompts: Embedded Prompt, Level Prompt, Prompt, Value Prompt
Reports: Base Report, Datamart Report, Graph Report, Grid Report, Incremental Refresh Report, SQL Report, Text Report
Tables: Database Table, Logical Table
Templates: Template
Transformations: Transformation
Unknown: Unknown
lu_object_status
The latest status of the Object. The Object Status continues to change as
the Object is modified, and always reflects the most recent state. An object
is defined as an Application or Schema Object stored in the metadata; this
table does not include the status of configuration objects.
object_status_id (tinyint(4)): The defined numeric ID for the Object Status.
object_status_desc (varchar(25)): The current status of the Object. The status changes if the object is modified, i.e. marked as hidden or deleted from the metadata. The object status elements include: Element Load Object, Ad Hoc, Visible, Deleted, Hidden.
lu_object_owner
lu_object_owner is a view on the lu_mstr_user table in the warehouse. The
lu_object_owner table is used to track the user who created the object or
another user who currently owns the object. The owner usually defines the
permissions for how the object can be used and by whom.
object_owner_guid (warehouse column: mstr_user_guid, varchar(32)): The metadata GUID of the User object.
fact_object_change_journal
This fact table stores the historical change journal modification information.
By joining this table with other lookup tables, like lu_object and lu_account,
the user can analyze who changed what object at which time.
The objects that track the change journal information include all the object
types in the lu_object_type tables. Adding Change Journal Fact tables to the
Platform Analytics Repository enables administrators to analyze the object
modification history for all objects in the metadata(s) being monitored by
Platform Analytics.
change_type_id (tinyint(4)): The fixed ID for the Object Change Type.
transaction_timestamp (datetime(3)): MicroStrategy internal use.
lu_change_type
The Change Type is the type of change a user performs on an object, for
example, creating a new object or deleting an object.
The Change Type elements include:

0: Reserved
1: Reserverd2
2: Save Objects
3: Reserverd3
4: Delete Objects
5: Garbage Collection
12: Copy Object
Security Filter
lu_security_filter
The list of Security Filter objects and the corresponding descriptive
information from the MicroStrategy metadata(s) being monitored by Platform
Analytics. For more information about Security Filter Objects, see
Restricting Access to Data: Security Filters.
security_filter_guid (varchar(32)): The GUID of the Security Filter object.
security_filter_name (varchar(255)): The name of the Security Filter object stored in the metadata.
security_filter_desc (varchar(512)): The detailed description of the Security Filter object.
creation_timestamp (datetime): The UTC timestamp for when the Security Filter was first created.
folder_guid (varchar(32)): MicroStrategy internal use.
transaction_timestamp (datetime): MicroStrategy internal use.
security_filter_version (varchar(32)): The version ID of the Security Filter.
fact_action_security_filter_view
This fact table tracks which security filters were applied on a particular
execution. By joining this table with the fact_access_transaction_view table,
the user can analyze which Security Filters were applied for each execution.
The Security Filter Sequence represents the order in which the security
filters were applied when multiple security filters are applied during an
execution.
parent_tran_id (bigint(20)): The auto-generated numeric Parent Action ID. This is a source column for the Parent Action attribute.
security_filter_id (bigint(20)): The auto-generated numeric ID of the Security Filter object.
security_filter_sequence (int(11)): The source column for the Security Filter Sequence fact. Represents the sequence order in which a security filter was applied when multiple security filters are applied during an execution.
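The join with fact_access_transaction_view described above can be sketched as follows. This is a minimal in-memory SQLite mock under simplified assumptions; in particular, an integer security_filter_id on lu_security_filter and the reduced column sets are illustrative, not the exact Platform Analytics schema.

```python
import sqlite3

# Simplified mock of the documented tables (illustrative assumptions only).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_access_transaction_view (parent_tran_id INTEGER, account_id INTEGER);
CREATE TABLE fact_action_security_filter_view (
    parent_tran_id INTEGER, security_filter_id INTEGER, security_filter_sequence INTEGER);
CREATE TABLE lu_security_filter (security_filter_id INTEGER, security_filter_name TEXT);
INSERT INTO fact_access_transaction_view VALUES (100, 1);
INSERT INTO fact_action_security_filter_view VALUES (100, 7, 1), (100, 8, 2);
INSERT INTO lu_security_filter VALUES (7, 'Region = East'), (8, 'Year >= 2022');
""")

# Which security filters were applied to execution 100, in sequence order.
filters = con.execute("""
    SELECT sf.security_filter_name
    FROM fact_access_transaction_view t
    JOIN fact_action_security_filter_view f ON f.parent_tran_id = t.parent_tran_id
    JOIN lu_security_filter sf ON sf.security_filter_id = f.security_filter_id
    WHERE t.parent_tran_id = 100
    ORDER BY f.security_filter_sequence
""").fetchall()
print([name for (name,) in filters])  # ['Region = East', 'Year >= 2022']
```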
Prompt Hierarchy
lu_prompt
The lu_prompt table contains the distinct prompt objects stored in the
metadata for a specific project. Each prompt has a unique GUID and is
defined at the Project level.
prompt_guid (varchar(32)): The GUID of the prompt object in the metadata.
prompt_name (varchar(255)): The name of the prompt stored in the metadata.
prompt_desc (varchar(512)): The detailed description of the prompt.
creation_timestamp (datetime): The UTC timestamp for when the prompt was first created.
modification_timestamp (datetime): The latest UTC timestamp for when the prompt was last modified. The timestamp will continue to update as the prompt is modified.
prompt_version (varchar(32)): The version ID of the prompt.
lu_prompt_type
The Prompt Type for a specific Prompt stored in the metadata(s) being
monitored. This attribute provides groupings of the prompts. The data in this
table is static and predefined.

prompt_type_id (smallint(6)): The fixed numeric ID of the Prompt Type. This is the source column of the Prompt Type attribute.
prompt_type_desc: The fixed list of Prompt Types. Sample elements include: Embedded Prompt, Value Prompt.
fact_prompt_answers
This fact table stores the Prompt Answer for each prompt during an
execution. During execution, each time a prompt is answered, a new record
is added to this table. If the user does not answer a prompt selection, the
prompt answer is recorded as null.
prompt_id (varchar(255)): The auto-generated numeric ID for the prompt.
prompt_order_id (smallint(6)): The source for the Prompt Order fact. Indicates in what order a user answered a series of prompts in a prompted object.
prompt_answer (varchar(2048)): The prompt answer chosen by the user. The prompt answer can include a metric value, an object, an attribute list, custom text, etc. This is the source column of the Prompt Answer attribute.
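A typical use of this table is counting the most common answers for a given prompt while treating null rows as unanswered selections. The sketch below uses an in-memory SQLite mock with a reduced column set; it is illustrative, not the actual repository schema.

```python
import sqlite3

# Reduced mock of fact_prompt_answers (illustrative column subset only).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_prompt_answers (
    prompt_id TEXT, prompt_order_id INTEGER, prompt_answer TEXT);
INSERT INTO fact_prompt_answers VALUES
    ('P1', 1, '2023'), ('P1', 1, '2023'), ('P1', 1, '2022'),
    ('P1', 1, NULL);  -- unanswered prompt selections are recorded as null
""")

# Most common answers for prompt P1, excluding unanswered (null) rows.
top = con.execute("""
    SELECT prompt_answer, COUNT(*) AS n
    FROM fact_prompt_answers
    WHERE prompt_id = 'P1' AND prompt_answer IS NOT NULL
    GROUP BY prompt_answer
    ORDER BY n DESC, prompt_answer
""").fetchall()
print(top)  # [('2023', 2), ('2022', 1)]
```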
Cache Hierarchy
The Cache hierarchy provides analysis for Cache Object related actions.
Cube actions can include both executions (Cube publish, Report hit Cube,
etc.) as well as cube administration tasks (Cube Load, Cube Unload, Delete
Cube, etc.).

The fact_action_cube_cache table stores the data for Cache instances used
during a cube action. Key facts include: Cache Expiration Timestamp (UTC),
Cache Last Update Timestamp (UTC), Cache Size (KB), Historical Hit
Count, and Hit Count.
lu_cache_object
Cache Objects stored in this table represent the cube objects for which the
cache was created. The cache hierarchy only stores information related to
cube and report caches. This table is a view on the lu_object table.
cache_object_creation_timestamp (warehouse column: creation_timestamp, datetime): The UTC timestamp for when the Cache Object was first created.
cache_object_modification_timestamp (warehouse column: modification_timestamp, datetime): The timestamp of when the Cache Object was last modified.
cache_object_status_id (warehouse column: object_status_id, tinyint(4)): The status of the Cache Object.
cache_object_version (warehouse column: object_version, varchar(32)): The version of the Cache Object.
lu_cache
The cache instances are stored in the lu_cache table. This stores all cube
cache instances created in the metadata over time. Cache instances are
identified based on the GUID.
cache_id (bigint(20)): The auto-generated ID for the cache instance.
instance_guid (varchar(32)): The GUID of the cache instance.
lu_cache_type
Cache Type is the categorization of the Cache. Only report Cache and
Intelligence Cube Cache types are tracked.

cache_type_id (tinyint(4)): The fixed ID for the Cache Type.
cache_type_desc (varchar(25)): The predefined list of Cache Types. For example, a sample element includes an Intelligence cube cache.
lu_cache_object_owner
The lu_cache_object_owner table is used to track the user who created the
Cache Object or the user who currently owns the Cache Object. The Cache
Object Owner usually defines the permissions for how the Cache Object can
be used and by whom. The lu_cache_object_owner table is a view on the lu_
mstr_user table in the warehouse.
lu_cache_object_type
The Cache Object Type represents the type of objects for which the cache
instances were created. This attribute provides more granular grouping
options for the Cache Objects. The data in this table is predefined, and this
is a view on the lu_object_type table.

cache_object_type_id (warehouse column: object_type_id, smallint(6)): The fixed ID for the Cache Object Type.
lu_cache_status
The Cache Status indicates the status of the cache instance. The Cache
Status can change for the cache instance over time. Therefore, the Cache
Status is stored in the fact_latest_cube_cache and fact_action_cube_cache
tables to track the latest, historical, and changing status over the life of the
cube cache instance. For a detailed explanation of the Cache Status values,
see KB31566: MicroStrategy 9.4.x - 10.x Intelligent Cube status indication
and workflow.
cube_status_id (int(11)): The fixed numeric ID for the Cache Status.
cube_status_desc (varchar(255)): The description form of the Cache Status. The status can be a combination of any of the following elements: Processing, Active, Filed, Monitoring Information Dirty, Dirty, Loaded, Load Pending, Unload Pending, Imported, Foreign.
Cache Project
The Cache Project attribute is a logical table alias based on the lu_project
table in the warehouse.
fact_action_cube_cache
The fact_action_cube_cache table records the transaction telemetry
related to Cube Cache instances as well as key metrics for each cube
action.
l Historical Hit Count - the number of times the Intelligent cube is used
by reports/documents/dossiers since it was published. This number
will increment regardless of cache updates.
l Cache Size (KB) - records the size of the cube cache instance in KB.
l Cube Last Update Timestamp (UTC) - The UTC timestamp when the
cube was last updated.
l Cube Modification
l Cube Executions
l Cache Hit
l Cache Creation
historical_hit_count (bigint(20)): Historical Hit Count of a cube instance.
last_update_timestamp (datetime): Last update timestamp (UTC) of the cube.
fact_latest_cube_cache
The fact_latest_cube_cache table records only the latest transaction related
to the cube cache instance.
l Hit Count - the number of times the Intelligent Cube has been used by
reports/documents/dossiers since it was last updated. Hit count will
increase every time the report/document/dossier gets executed and hits
the cache, but will reset when the cache is updated.
l Historical Hit Count - the number of times the Intelligent Cube has been
used by reports/documents/dossiers since it was published. This number
will increment regardless of cache updates.
l Cache Size (KB) - records the size of the cube cache instance in KB.
l Cube Last Update Timestamp (UTC) - The timestamp (in UTC timezone)
when cube was last updated.
iserver_instance_id (bigint(20)): The auto-generated ID for the cube cache instance.
historical_hit_count (bigint(20)): Historical Hit Count of a cube instance.
transaction_timestamp (bigint(20)): MicroStrategy internal use.
last_update_timestamp (datetime): Last update timestamp (UTC) of the cube.
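Because Hit Count resets on each cache update, this table is a natural place to look for cubes that have not been hit since their last update (unload or retirement candidates). The sketch below is an in-memory SQLite mock; the hit_count and cache_size_kb columns are assumptions standing in for the Hit Count and Cache Size (KB) facts described above.

```python
import sqlite3

# Mock of fact_latest_cube_cache (hit_count / cache_size_kb are assumed
# column names standing in for the documented Hit Count and Cache Size facts).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_latest_cube_cache (
    iserver_instance_id INTEGER, hit_count INTEGER,
    historical_hit_count INTEGER, cache_size_kb INTEGER);
INSERT INTO fact_latest_cube_cache VALUES
    (1, 0, 35, 120000),  -- not hit since its last update: unload candidate
    (2, 48, 900, 64000);
""")

# Cubes that have not been hit since their last update, largest first.
stale = con.execute("""
    SELECT iserver_instance_id, cache_size_kb
    FROM fact_latest_cube_cache
    WHERE hit_count = 0
    ORDER BY cache_size_kb DESC
""").fetchall()
print(stale)  # [(1, 120000)]
```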
lu_metadata
List of the metadata repositories that contain the configuration, schema, and
application objects being monitored by Platform Analytics.
metadata_guid: The GUID of the metadata repository being monitored by Platform Analytics.
lu_project
List of the projects in the metadata repositories that are being monitored by
Platform Analytics. All application, schema, and configuration objects are
stored at the project level.
project_guid (varchar(32)): The GUID of the project.
project_name (varchar(255)): The name of the project stored in the metadata.
creation_timestamp (datetime): The UTC timestamp for when the project was first created.
modification_timestamp (datetime): The latest UTC timestamp from when the Project was last modified. The modification timestamp will continue to update as the project is modified.
transaction_timestamp (datetime): MicroStrategy internal use.
project_version (varchar(32)): The version ID of the project.
lu_db_type
The list of database types in the metadata repositories.
lu_db_version
The list of database versions in the metadata repositories.

db_version_desc (varchar(225)): The description of the database version for the database instance.
lu_db_login
Lists the database logins created in the metadata. All configuration objects
are stored at the metadata level. For more information about database
logins, see Creating a database login.

db_login_name (varchar(32)): The name of the database login.
db_login_guid (varchar(255)): The GUID of the database login.
transaction_timestamp (datetime): MicroStrategy internal use.
db_login_version (varchar(32)): The version ID of the database login.
lu_db_instance
Lists the database instances created in the metadata. All configuration
objects are stored at the metadata level. For more information about
database instances, see Creating a database instance.

db_instance_guid (varchar(32)): The GUID of the database instance.
db_instance_name (varchar(255)): The name of the database instance.
db_instance_desc (varchar(255)): The long description of the database instance added in the Properties editor.
creation_timestamp (datetime): The UTC timestamp for when the database instance was first created.
transaction_timestamp (datetime): MicroStrategy internal use.
lu_db_connection
Lists the database connections created in the metadata. All configuration
objects are stored at the metadata level. For more information about
database connections, see Creating a Database Connection and How to
Manage Database Connections.

db_connection_guid (varchar(32)): The GUID of the database connection.
db_connection_name (varchar(255)): The name of the database connection.
transaction_timestamp (datetime): MicroStrategy internal use.
db_connection_version (varchar(32)): The version ID of the database connection.
data_source_name (varchar(4096)): The name of the configured DSN for the database connection.
lu_db_connection_map
The Database Connection Map is an attribute that links the unique
combinations of Database Connection, Database Instance, and Database
Login, which were used during an action (i.e. running a report). The ID value
is auto-generated, but does not represent anything meaningful on its own.
All configuration objects are stored at the metadata level.
Action Hierarchy
Every Intelligence server and Identity server log is processed and stored at
the transactional (Action, Parent Action) level in the main fact table access_
transaction.

The action hierarchy allows users to analyze the types of actions that a user
is performing on the system.

Each Action has a corresponding Status and Status Category. The Status
and Status Category are used to differentiate successful or failed
transactions.
l Action - The Action fact records the unique tran_id recorded in the
access_transaction table.
l Total Queue Duration (ms) - The Total Queue Duration (ms) fact
records the total time spent waiting in queue for the job to be executed,
in milliseconds. This includes the initial queue time and the queue time
between different steps.
l Initial Queue Duration (ms) - Total time spent by the job waiting in the
initial queue to begin execution, in milliseconds.
l Job Step Total Queue Duration (ms) - Time spent waiting in the
queue between job steps, in milliseconds.
l Elapsed Duration (ms) - Total time spent executing a job from queue
to finish in milliseconds.
l Job Start Timestamp (UTC) - The timestamp (in UTC timezone) a job
began execution.
access_transaction
prompt_answer_time (int(11)): The amount of time a job spent waiting for the user to input an answer to prompts required to execute the job, in milliseconds. This is the source column of the Prompt Answer Duration (ms) fact.
facility_ The Facility street Address ID where the Badge transaction bigint(20)
lu_action_type
The Action Type is the type of action the user performs, for example creating
a new subscription, exporting a report to PDF, or scanning a Badge QR code.
The action type is a common attribute across both MicroStrategy and Badge
transactions.
action_type_desc (varchar(255)): The detailed type of action the user performs.
lu_action_category
Action Category is a grouping of Action Types. It provides a broader
analysis of user actions, e.g. Executions or Badge Actions. The Action
Category attribute enables filtering when analysis on a single type of
transaction is desired.
Policy: Policy.
Export to CSV with Cube Cache Hit: Indicates a user exported a view report to CSV and hit the cube cache.
Export to Plain Text with Cube Cache Hit: Indicates a user exported a view report to Plain Text and hit the cube cache.
Export to HTML with Cube Cache Hit (Developer Only): Indicates a user exported a cube-based document to HTML from a History List subscription and hit the cube cache.
Export to Plain Text with Cache Hit: Indicates a user exported a normal report to Plain Text and hit the cache.
Execute and Export to Plain Text: Indicates a user exported a report to Plain Text without hitting any cache.
Execute with Dynamically Sourced Cube Cache Hit: Indicates a user exported a report using a dynamically sourced cube and hit the cache.
Status (errors)
The Status and Status Category attributes are used to track the
success/failure of both MicroStrategy and Badge transactions.
lu_status_category
The categorization of an action status recorded in the logs.
status_category_id (smallint(6)): The auto-generated numeric ID of the status category.
lu_status
The status of whether the action by the user was successful, denied, or
resulted in a specific error message. For Badge, there is a set list of denied
types. For MicroStrategy, the exact error message is recorded. The status_
desc column stores the exact error message recorded from the Intelligence
Server logs and can be used to analyze the unique job execution details.
cancel_indicator (tinyint(4)): A flag representing whether or not the job was cancelled.
Parent Job is a result of a job triggering another child job. For example,
when a document with reports as datasets is executed, it will first create a
document job, which will trigger several child jobs for report execution. In
this example, the job associated with the document execution is a parent job
of report execution jobs. Standalone report execution will not have a parent
job.
lu_job_step_type
This table lists the Intelligence Server tasks involved in executing a report or
a document. Below is a list of all the possible values for Job Step.

step_type_id (int(11)): The fixed numeric ID for the document or report job type.
step_type_desc (varchar(255)): The Job Type that was executed against the Intelligence server. Job Types can include: MD Object Request, Close Job, SQL Engine, SQL Execution, Analytical Engine, Resolution Server, Report Net Server, Element Request, Document Execution, Document Send, Request Execute, Datamart Execute, Document Formatting, Document Manipulations, Export Engine, Post-processing Task, Delivery Task.

The Job Step Types are described below:
MD Object Request: Requesting an object definition from the project metadata.
Close Job: Closing a job and removing it from the list of pending jobs.
SQL Engine: Generating the SQL required to retrieve data, based on the schema.
SQL Execution: Executing the SQL that was generated for the report.
Analytical Engine: Applying analytical processing to the data retrieved from the data source.
Resolution Server: Loading the definition of an object.
Report Net Server: Transmitting the results of a report.
Element Request: Attribute element browsing.
Get Report Instance: Retrieving a report instance from the metadata.
Error Message Send: Sending an error message.
Output Message Send: Sending a message other than an error message.
Find Report Cache: Searching or waiting for a report cache.
Document Execution: Executing a document.
Document Send: Transmitting a document.
Update Report Cache: Updating report caches.
Request Execute: Requesting the execution of a report.
Datamart Execute: Executing a datamart report.
Document Data Preparation: Constructing a document structure using data from the document's datasets.
Document Formatting: Exporting a document to the requested format.
Document Manipulation: Applying a user's changes to a document.
Apply View Context: Reserved for future use.
Find Cube Task: Locating the cube instance from the Intelligent Cube Manager when a subset report, or a standard report that uses dynamic caching, is executed.
Update Cube Task: Updating the cube instance from the Intelligent Cube Manager when republishing or refreshing a cube.
Post-processing Task: Reserved for future functionality.
Document Dataset Execution Task: A virtual task used only by Statistics Manager and Enterprise Manager to record the time spent on dataset execution.
Document Process Report with Prompt: Triggered after the SQL Engine step discovers prompts; collects unanswered prompts and presents them to the client. After answers are received, launches jobs to execute the datasets that contain unanswered prompts.
Data Import Data Preparation Task: Preparing the data for multiple tables in data import cubes.
Remote Server Execution Task: Direct access on a remote MicroStrategy project.
Import Dashboards Async Task: Asynchronous import of dashboards.
fact_step_sequence_view
This table is used when the Document and/or Report Job Steps option is
enabled for Advanced Statistics logging via Command Manager. It stores
information on each processing step of a document/dossier/report
execution. It is best used for troubleshooting the performance of an object at
the job level.
l Job Step Start Timestamp (UTC) - the timestamp (in UTC timezone)
when the Job Step begins.
l Job Queue Duration (ms) - the fact calculates the time spent waiting
in queue for the job to be executed in milliseconds.
l Job CPU Duration (ms) - the time spent on CPU during the job
execution in milliseconds.
l Job Step Duration (ms) - the total execution time for the job
execution in milliseconds.
parent_tran_id (bigint(20)): The auto-generated numeric action ID.
step_start_timestamp (datetime): The UTC timestamp when the job step started.
step_finish_timestamp (datetime): The UTC timestamp when the job step finished.
job_queue_time (bigint(20)): The queue duration in milliseconds.
job_cpu_time (bigint(20)): The CPU duration in milliseconds.
step_duration_time (bigint(20)): The total execution duration in milliseconds.
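The per-job troubleshooting described above usually starts by asking where an execution spent its time. The in-memory SQLite sketch below ranks job steps of one execution by total duration; the step_type_id join key and the reduced column set are assumptions for illustration, not the exact view definition.

```python
import sqlite3

# Mock of fact_step_sequence_view (step_type_id as a link to
# lu_job_step_type is an assumption for this sketch).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_step_sequence_view (
    parent_tran_id INTEGER, step_type_id INTEGER,
    job_queue_time INTEGER, job_cpu_time INTEGER, step_duration_time INTEGER);
INSERT INTO fact_step_sequence_view VALUES
    (100, 1, 5, 10, 40),     -- e.g. an SQL Engine step
    (100, 2, 120, 300, 950), -- e.g. an SQL Execution step
    (100, 3, 2, 15, 30);
""")

# Where did execution 100 spend its time? Rank job steps by total duration.
slowest = con.execute("""
    SELECT step_type_id, SUM(step_duration_time) AS total_ms
    FROM fact_step_sequence_view
    WHERE parent_tran_id = 100
    GROUP BY step_type_id
    ORDER BY total_ms DESC
""").fetchall()
print(slowest)  # [(2, 950), (1, 40), (3, 30)]
```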
lu_session_view
Each user that connects to the MicroStrategy Intelligence server and/or
project has a unique Session connection GUID. A user cannot log in to a
project without first having a session to the Intelligence server. However, a
user can have a session to the Intelligence server without connecting to a
project (i.e. performing administrative tasks in Developer). The lu_session_
view table tracks the unique session connection information at the project
and metadata level.

For each unique user Session that is created, there will be an Intelligence
Server Instance, a Session Source, a Client Server Machine, and a Device.
session_guid (varchar(32)): The GUID of the Session.
session_source_id (bigint(20)): The ID of the session source that was used to establish the user session connection.
metadata_id (bigint(20)): The metadata ID for which the user session was connected.
lu_session_source
Each Session that is created as a user connection to the Intelligence server
and Project has a source. The Session Source represents the client or tool
that the user used to establish a connection.
session_source_id (bigint(20)): The fixed numeric ID value for the Session Source.
session_source_desc (varchar(255)): The Session Source. The values include:

0: Not Applicable
1: Developer
3: Web Administrator
4: Intelligence Server
5: Project Upgrade
6: Web
7: Scheduler
8: Custom Application
9: Narrowcast Server
10: Object Manager
13: Command Manager
14: Enterprise Manager
16: Project Builder
17: Configuration Wizard
18: MD Scan
19: Cache Utility
20: Fire Event
22: Web Services
23: Office
24: Tools
25: Portal Server
26: Integrity Manager
27: Metadata Update
28: COM Browser
29: Mobile
31: Health Center
32: Cube Advisor
34: Desktop
35: Library
36: Library iOS
37: Workstation
39: Library Android
40: Workstation MacOS
41: Workstation Windows
42: Desktop MacOS
43: Desktop Windows
44: Tableau
45: Qlik
46: PowerBI
47: Microsoft Office
lu_sql_pass_type
This table stores the static list of SQL Pass Types. Each SQL Pass that is
recorded in the fact_sql_stats table will have a corresponding SQL Pass
Type.

sql_pass_type_id (int(11)): The fixed numeric ID for the SQL Pass Type.
sql_pass_type_desc: The descriptive name for the SQL Pass Type. The SQL Pass Type can include: Select, Create Table, Analytical, Select Into, Metric Qualification, User Defined, MDX Query, Sap Bapi.
lu_sql_clause_type
This table stores the static list of SQL Clause Types. Each SQL Pass that is
recorded in the fact_sql_stats table will have a corresponding SQL Clause
Type.

sql_clause_type_id (smallint(6)): The fixed numeric ID value for the SQL Clause Type. The values include: 0 Not Applicable, 1 Select, 8 From, 16 Where, 17 Order By.
fact_sql_stats
This table contains the SQL Pass information that is executed on the
warehouse during report job executions. Each SQL Pass is recorded at the
Parent Action level, and one action can correspond to multiple SQL Passes.
This fact table is best used for performance analysis of report execution
times to determine inefficient report definitions. Data will be available only
when the Advanced Statistics option is enabled during configuration in
Command Manager.
The fact_sql_stats table is the source for the facts listed below:
sql_pass_sequence_id (int(11)): The sequence number of the SQL pass.
sql_start_timestamp (timestamp): The UTC timestamp when the SQL Pass began.
sql_end_timestamp (timestamp): The UTC timestamp when the SQL Pass finished.
sql_pass_type_id (int(11)): The ID of the SQL Pass Type, for example, Create Index or Insert Into Values.
execution_time (bigint(20)): The total time spent on the SQL Pass statement. Defined as the end timestamp minus the start timestamp.
total_tables_accessed (smallint(6)): The number of tables hit by the SQL pass. This is the source column for the SQL Pass Tables Accessed fact.
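A common first pass over this table is simply ranking SQL passes by execution_time to surface inefficient report definitions. The sketch below uses an in-memory SQLite mock with a reduced column set; it is illustrative, not the exact fact table definition.

```python
import sqlite3

# Reduced mock of fact_sql_stats (illustrative column subset only).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_sql_stats (
    parent_tran_id INTEGER, sql_pass_sequence_id INTEGER,
    execution_time INTEGER, total_tables_accessed INTEGER);
INSERT INTO fact_sql_stats VALUES
    (100, 1, 250, 2), (100, 2, 4800, 7), (101, 1, 90, 1);
""")

# The slowest SQL passes across all report executions.
slow_passes = con.execute("""
    SELECT parent_tran_id, sql_pass_sequence_id, execution_time
    FROM fact_sql_stats
    ORDER BY execution_time DESC
    LIMIT 2
""").fetchall()
print(slow_passes)  # [(100, 2, 4800), (100, 1, 250)]
```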
lu_db_error
This table stores the list of Database Error Messages. Each SQL Pass that
is recorded in the fact_sql_stats table will have a corresponding db_error_
id.

db_error_desc (varchar(4096)): The full text of the database error message returned from the server.
fact_report_columns
sql_clause_type_id (smallint(6)): The SQL Clause Type ID that corresponds to the type of SQL clause executed against the given column/table. See lu_sql_clause_type for more details.
table_id (bigint(20)): The auto-generated Table ID that the SQL statement was run against.
lu_recipient
The Recipient table is used to track the contact that received a Distribution
Services message from a MicroStrategy Account. The Recipient can be:

1. A user object in the metadata: Name and Email are the same as the
values stored in the metadata.

For more information about different types of contacts, see Creating and
Managing Contacts for MicroStrategy Distribution Services.

Only executions related to subscriptions will have a valid recipient. All
ad-hoc object executions will have a default recipient assigned to them. For
example, a user who is executing a report does not have a recipient. In
these logs, a default (recipient_id = -1) is assigned. To analyze subscription
executions, exclude recipient_id = -1.
recipient_guid (varchar(32)): The GUID of the recipient.
recipient_name (varchar(255)): The name of the recipient who received the message.
recipient_address (varchar(512)): The email address or file path of the recipient who received the message.
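The recipient_id = -1 convention above translates directly into a filter when counting subscription-driven deliveries. The sketch below is an in-memory SQLite mock; placing recipient_id on the transaction fact is an assumption for illustration.

```python
import sqlite3

# Mock transaction fact carrying a recipient_id (an assumed placement,
# following the default-recipient convention described in the text).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_access_transaction_view (tran_id INTEGER, recipient_id INTEGER);
INSERT INTO fact_access_transaction_view VALUES
    (1, -1),  -- ad-hoc execution: default recipient
    (2, 42),  -- subscription delivery to a real recipient
    (3, 42);
""")

# Count only subscription-driven executions by excluding the default recipient.
n_subscription = con.execute("""
    SELECT COUNT(*) FROM fact_access_transaction_view
    WHERE recipient_id <> -1
""").fetchone()[0]
print(n_subscription)  # 2
```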
lu_subscription_base
In MicroStrategy, it is possible to trigger one subscription which is sent to
multiple users at the same time. In this case, there will be a P arent
Subscription, which is linked to child S ubscriptions. T he lu_subscription_
base table is used to track both the Parent and child Subscriptions. If a
Subscription does not have a parent, the same ID is repeated.
Column | Description | Data-Type
subscription_guid | The GUID of the subscription stored in the metadata. | varchar(32)
subscription_name | The name of the Subscription stored in the metadata. | varchar(255)
subscription_url_j2ee | The HTML link for managing the subscription on a Java-based web server. | varchar(8192)
subscription_url_dotnet | The HTML link for managing the subscription on a .NET-based web server. | varchar(8192)
creation_timestamp | The UTC timestamp when the subscription is first created. | datetime
l Deleted
type_id
transaction_timestamp | MicroStrategy internal use. | datetime
lu_subscription
In MicroStrategy, it is possible to trigger one subscription which is sent to
multiple users at the same time. In this case, there will be a Parent
Subscription, which is linked to child Subscriptions. The lu_subscription
view table tracks Subscriptions created in the metadata(s) being monitored.
For more details about creating subscriptions, see Scheduling reports and
documents: Subscriptions. Note that parent subscriptions are not included in
this view table. See lu_parent_subscription for more details about parent
subscriptions.
l Inactive
l Deleted
View Table Column | Warehouse Table Column | Description | Data-Type
subscription_type_id | subscription_type_id | The ID of the type for the subscription. | int(11)
subscription_owner_id | subscription_owner_id | The ID of the owner of the subscription. | bigint(20)
lu_parent_subscription
In MicroStrategy, it is possible to trigger one subscription which is sent to
multiple users at the same time. In this case, there will be a Parent
Subscription, which is linked to child Subscriptions. The lu_parent_
subscription view table tracks Parent Subscriptions created in the
metadata(s) being monitored. For more details about creating subscriptions,
see Scheduling reports and documents: Subscriptions.
l Deleted
type_id type_id
lu_subscription_type
This table contains the predefined list of Subscription Types. Each
Subscription has a corresponding Subscription Type; see Types of
Subscriptions for more details.
Column | Description | Data-Type
subscription_type_id | The fixed numeric ID for the subscription type. | smallint(6)
subscription_type_desc | The descriptive form of the Subscription Type: Email, File, Print, Custom, History List, Client, Cache Update, Mobile, Personal View, FTP | varchar(255)
lu_subscription_owner
lu_subscription_owner is a view on the lu_mstr_user table in the warehouse.
The lu_subscription_owner table is used to track the user who created the
object or another user who currently owns the object. The owner usually
defines the permissions for how the object can be used and by whom.
The lu_owner view table is mapped to two logical tables in the Platform
Analytics project, Object Owner and Subscription Owner.
View Table Column | Warehouse Table Column | Description | Data-Type
l Deleted
lu_schedule
The lu_schedule table contains the distinct Schedule objects stored in the
metadata. Each schedule has a unique GUID and is defined at the metadata
level. For more information about schedule objects, see Creating and
managing schedules.
Column | Description | Data-Type
schedule_guid | The GUID of the Schedule object in the metadata. | varchar(32)
schedule_name | The name of the Schedule stored in the metadata. | varchar(255)
schedule_desc | The detailed description of the Schedule object. | varchar(512)
creation_timestamp | The UTC timestamp for when the Schedule was first created. | datetime
l Hidden
schedule_type_id | The numeric ID of the corresponding Schedule Type. | tinyint(4)
transaction_timestamp | MicroStrategy internal use. | datetime
schedule_version | The version ID of the schedule. | varchar(32)
lu_schedule_type
Each Schedule has a corresponding Schedule Type. The Schedule Type can be
Time-Based, Event-Based, or a Send Now subscription. For more details, see
Time-triggered schedules and Event-triggered schedules.
Column | Description | Data-Type
schedule_type_id | The fixed numeric ID for the schedule type. | tinyint(4)
schedule_type_desc | The descriptive form of the Schedule Type: Unknown, Time-Based, Event-Based, Send Now | varchar(128)
lu_event
The full list of Event objects and the corresponding descriptive information
from the MicroStrategy metadata(s) being monitored by Platform Analytics.
For more details about event objects, see About events and event-triggered
schedules.
Column | Description | Data-Type
event_guid | The GUID of the Event object in the metadata. | varchar(32)
event_name | The name of the Event stored in the metadata. | varchar(255)
event_desc | The detailed description of the Event object. | varchar(512)
creation_timestamp | The UTC timestamp for when the Event was first created. | datetime
l Hidden
transaction_timestamp | MicroStrategy internal use. | datetime
event_version | The version ID of the event. | varchar(32)
lu_delivery_format
Not all subscription types have the same delivery formats. For more details
about Subscription Types and Delivery Formats, see Types of subscriptions.
Column | Description | Data-Type
delivery_format_id | A fixed numeric ID of the Delivery Format. | smallint(6)
delivery_format_desc | The descriptive form of the Delivery Format: CSV, Dataset, Editable XML, Excel, Flash, Graph, HTML, HTML5, Interactive XML, MSTR File, PDF, Phone, Plain Text, Presentation, Tablet, XML | varchar(255)
lu_subscription_device
Lists the devices used to receive a subscription.
Column | Description | Data-Type
subscription_device_name | The name of the device. | varchar(255)
subscription_device_guid | The GUID of the device. | varchar(32)
subscription_device_version | The version ID of the device. | varchar(32)
subscription_device_desc | The description of the device. | varchar(512)
creation_timestamp | The timestamp the device was created. | datetime
modification_timestamp | The timestamp the device was modified. | datetime
lu_history_list_message_view
Lists the report or document execution results that are stored in a user's
personal History List Message folder. Each user has their own History List
folder with messages that can be stored in either a database or a file system.
Column | Description | Data-Type
history_list_message_guid | The GUID of the History List Message in the metadata. | varchar(32)
history_list_message_title | The most recent title of the History List Message. The title can be modified at any time. | varchar(512)
creation_timestamp | The UTC timestamp for when the History List Message was first created. | datetime
User Hierarchy
lu_account
The lu_account table is designed to integrate Users from multiple data
sources, such as MicroStrategy metadata users, Usher, and Physical Access
Systems (PACS), into a common user identity. The Account is linked to a
User based on a common email address. If no email address is available, the
account_login is used. For example, if two metadatas with duplicated
MicroStrategy user objects are being monitored by Platform Analytics, the
accounts are linked based on the login.
A User is the unique identity of the person. Each User can have multiple
Accounts from different data sources, for example, multiple badges from
Usher and the user objects created in the metadata of the MicroStrategy
platform. The User attribute allows analysis of all user information
across these different data sources.
Account Type distinguishes from which source the account was created, for
example, a MicroStrategy User or a specific network.
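The linking rule described above (email first, login as a fallback) can be sketched in a few lines. The field names and sample accounts are illustrative assumptions, not the actual ETL implementation:

```python
# Sketch of the account-linking rule: group accounts into a common user
# identity by email address, falling back to account login when no email
# is available. Field names here are illustrative.
def identity_key(account: dict) -> str:
    email = (account.get("email") or "").strip().lower()
    if email:
        return "email:" + email
    return "login:" + account["account_login"].strip().lower()

accounts = [
    {"account_login": "jsmith", "email": "jsmith@example.com"},   # metadata A
    {"account_login": "jsmith", "email": "jsmith@example.com"},   # metadata B
    {"account_login": "svc_etl", "email": ""},                    # no email
]

users = {}
for acct in accounts:
    users.setdefault(identity_key(acct), []).append(acct)

print(len(users))  # two distinct user identities
```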
Column | Description | Data-Type
account_login | The account login or domain name of the account. | varchar(255)
account_status_id | The ID of the account status: Inactive, Enabled, Disabled, Deleted | tinyint(4)
mstr_user_guid | The GUID of the User object in MicroStrategy metadata. | varchar(32)
account_phone | The phone number of the Badge account used for device enrollment. | varchar(75)
longitude | The most recent longitude value of the Badge account. | double
latitude | The most recent latitude value of the Badge account. | double
last_action_timestamp | The most recent action timestamp of the Badge account. | datetime
last_location_timestamp | The most recent location timestamp of the Badge account. | datetime
account_desc | The description of the account. | varchar(512)
mstr_user_version | The version ID of the MicroStrategy user. | varchar(32)
ldap_link | The link to the account in LDAP. | varchar(512)
nt_link | The link to the account in NT. | varchar(255)
wh_link | The link to the account in the WH. | varchar(255)
password_expiration_frequency | How often a password will expire for the given account. | int(11)
password_expiration_date | The date the password for the account will expire. | datetime
password_change_allowed | Whether the password for the account is allowed to be changed. | varchar(7)
password_change_required | Whether this account is required to change its password on the next login. | varchar(7)
trusted_auth_user_id | The ID of the trusted authentication user. | varchar(255)
lu_account_role
The Account Role indicates the level of access or privileges for the
Account. This table is specific to Badge and is populated by the badge
privileges granted through Network Manager for the specific badge. For
more information, see Role Management. For MicroStrategy metadata
accounts, the default value is MicroStrategy User.
Column | Description | Data-Type
account_role_id | ID of the account role. | int(11)
lu_account_status
The current Account Status of an Account. This column is common to both
MicroStrategy and Badge accounts. The account status can change over
time; for example, an account can begin as active and later be updated to
deleted.
Column | Description | Data-Type
account_status_id | ID of the current account status: Inactive, Disabled, Enabled | tinyint(4)
lu_account_type
The Account Type distinguishes from which data source the Account was
created, for example, a MicroStrategy user in the metadata or a specific
badge name added through Network Manager (see Badge Name).
Column | Description | Data-Type
account_type_desc | The account type indicates the source from which the account originated: MicroStrategy User (the user was created in the metadata), MicroStrategy Guest User | varchar(255)
lu_user
A User is the consolidated identity of multiple Accounts. Each User can
have multiple accounts from different sources, for example, multiple badges
from Badge or users created in the metadata of the MicroStrategy platform.
The User attribute allows analysis of all user information across
these different data sources.
Column | Description | Data-Type
department_owner_id | The employee number of the head of the department to which the user belongs. | bigint(20)
division_owner_id | The employee number of the head of the division to which the user belongs. | bigint(20)
group_owner_id | The employee number of the head of the group to which the user belongs. | bigint(20)
unit_owner_id | The employee number of the head of the unit to which the user belongs. | bigint(20)
lu_network
A Network is a group of connected accounts. The Network is configured
through Network Manager. For MicroStrategy metadata users, a default
network called MicroStrategy Network is assigned.
creation_timestamp | The timestamp for when the Network was first created. | datetime
lu_validating_account
This table is a view on the lu_account table. A Validating Account is specific
to Badge and represents peer-to-peer authentications. An account can be
validated by scanning a QR code or entering the Badge Code of another
badge. The Badge Code is unique to each user and can be configured to
change routinely. For example, this code can be given over the phone to
identify yourself if you are talking to someone who does not know you. By
default, the Badge Code is 4 or 8 digits in length and updates every hour.
Column | Description | Data-Type
validating_account_name | The name of the account that has been validated by another user. | varchar(255)
validating_account_email | The email of the account that has been validated by another account. | varchar(255)
validating_account_picture | The picture of the account that has been validated by another account. | varchar(1024)
lu_user_group
The list of User Groups in the MicroStrategy metadata. This table stores
information specific to MicroStrategy metadata. Badge accounts do not
belong to metadata user groups; therefore, all Badge Accounts are
assigned the default value MicroStrategy Badges.
Column | Description | Data-Type
user_group_guid | The metadata GUID of the User Group object. | varchar(32)
user_group_name | The name of the User Group stored in the metadata. | varchar(255)
user_group_desc | The description added in the properties dialog box for the User Group object. | varchar(512)
l Deleted
user_group_version | The version ID of the User Group. | varchar(32)
rel_account__usergroup
The relationship table between MicroStrategy user objects and their
immediate parent user groups in the metadata. This table does not store
indirect relationships between users and user groups. This table will not
have a row for user groups that do not have a user directly in them.
rel_childuser__usergroup
This is a relationship table between MicroStrategy User Groups and their
parent User Groups in the metadata. A single User Group can be a child of
multiple parent User Groups, which may recursively belong to other parent
User Groups. This table is specifically used to form the recursive
relationship between User Groups and all parent User Groups (direct or
indirect). All indirect relationships are resolved to create a direct
relationship in this table.
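The flattening of indirect memberships described above can be sketched as a recursive walk over direct parent links. The group names are illustrative, not real warehouse contents:

```python
# Sketch of how indirect group memberships are flattened into direct rows,
# as rel_childuser__usergroup does. Group names are illustrative.
# child -> immediate parent groups
direct = {
    "G_sales_west": ["G_sales"],
    "G_sales": ["G_all_staff"],
    "G_all_staff": [],
}

def all_parents(group: str) -> set:
    """Resolve every direct and indirect parent group recursively."""
    resolved = set()
    stack = list(direct.get(group, []))
    while stack:
        parent = stack.pop()
        if parent not in resolved:
            resolved.add(parent)
            stack.extend(direct.get(parent, []))
    return resolved

print(sorted(all_parents("G_sales_west")))
```

Each (child, ancestor) pair produced this way corresponds to one direct row in the relationship table.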
Column | Description | Data-Type
child_group_id | The ID of the child User Group which belongs to the corresponding parent User Group. | bigint(20)
user_group_id | The parent User Group ID. | bigint(20)
HR Organization Hierarchy
This information is intended to enrich user-level analysis. All attributes
and tables corresponding to the HR organization hierarchy must be manually
provided by importing a CSV file. For instructions, see Importing an
Organizational Hierarchy. Two sample reports using the Department,
Division, Group, Unit, and User attributes are provided at the end of this
section. The login attributes can be used as security filters to restrict
the data available to managers to only those users who are their direct
reports. These attributes and tables are optional and available to enrich
analysis, but not critical for the Platform Analytics project.
lu_department
The Department attribute is the highest-level attribute of the HR
organization hierarchy and is a consolidation of multiple Divisions. The
information must be provided via CSV file.
Column | Description | Data-Type
department_desc | The name of the Department. For example, Sales, Finance, Technology, etc. | varchar(255)
lu_department_owner
The Department Owner is the manager of the corresponding Department.
Each Department can have only one owner.
Column | Description | Data-Type
department_owner_id | Auto-generated numeric ID for the Department Owner. | bigint(20)
department_owner_desc | The name of the Department Owner or manager for the department. For example, John Smith. | varchar(255)
lu_department_owner_login
The Department Owner Login is the username for each Department Owner.
For example, John Smith is the department owner for the Technology
department; his login is jsmith.
lu_division
The D ivision is a consolidation of multiple G roups within the organization
hierarchy.
Data-
Column Description
Type
division_ The name of the Division. For example, North America Sales, varchar
desc Corporate Finance, etc. (255)
lu_division_owner_login
The D ivision Owner Login is the username for each D ivision Owner .
Data-
Column Description
Type
lu_group
A G roup is a consolidation of multiple U nits within the organization
hierarchy.
group_desc The name of the Group. For example, West Coast Sales. varchar(255)
lu_group_owner_login
The G roup Owner Login is the username for each G roup Owner .
Data-
Column Description
Type
lu_unit
A U nit is a consolidation of multiple U sers within the organization hierarchy.
unit_desc The name of the Unit. For example, Seattle Sales Team. varchar(255)
lu_unit_owner_login
The U nit Owner Login is the username for each U nit Owner .
Data-
Column Description
Type
Example Report
Time Hierarchy
The Time hierarchy attributes are based on the UTC timezone. Both
MicroStrategy Intelligence Server and Identity Server send the server
timezone with the transactional logs. The timezone is then standardized to
UTC in the Platform Analytics ETL.
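The standardization step above can be illustrated with the standard library; the zone name and timestamp are illustrative examples, not what any particular server sends:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Sketch of the normalization described above: a transactional log carries
# the server's local timestamp plus its time zone, and the ETL converts it
# to UTC. The zone name is an illustrative example.
server_tz = ZoneInfo("America/New_York")
local_ts = datetime(2023, 3, 1, 9, 30, tzinfo=server_tz)

utc_ts = local_ts.astimezone(ZoneInfo("UTC"))
print(utc_ts.isoformat())  # 2023-03-01T14:30:00+00:00
```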
In addition to the standard time attributes (Day, Month, Year, Month of Year,
etc), there are supplementary attributes to provide additional levels of
analysis. They include:
l Time Period: Predefined time periods such as Yesterday, Last Week, etc.
lu_date
The source table for the Date attribute, which tracks MicroStrategy and
Badge transactions. Each day, a new date entry is added to the lu_date
table.
Column | Description | Data-Type
month_of_year_id | The Month of Year ID. | tinyint(4)
year_id | The Year ID. The source column for the Year attribute in the Platform Analytics project. | int(11)
previous_quarter_month_id | The Month ID of the quarter the day resides in from the previous month. | int(11)
lu_month
The lu_month table tracks the Month in which a MicroStrategy or Badge
transaction occurred.
Column | Description | Data-Type
month_desc | The descriptive form of the Month. The format is Month, Year. For example, February, 2018. | varchar(32)
lu_month_of_year
Lists the Month of Year in which the MicroStrategy or Badge transaction
occurred.
Column | Description | Data-Type
month_of_year_id | The fixed numeric ID for the Month of Year. | tinyint(4)
Example descriptive values: March; abbreviated: Mar.
lu_day_of_week
Day of Week indicates on which day of the week the MicroStrategy or Badge
transaction occurred.

Column | Description | Data-Type
day_of_week_desc | The descriptive form of the Day of Week. Examples: Monday, Tuesday, Wednesday; abbreviated: Tue, Wed. | varchar(25)
lu_part_of_week
Part of Week indicates whether the MicroStrategy or Badge transaction
occurred on a Weekend or Weekday.
lu_time_period
Time Period is used to track predefined rolling time windows. The
predefined Time Periods are intended to be overlapping. For example, the
Time Period for Last Week will include the actions for Yesterday, and Last 2
Months will include all the actions for all other time windows. The rel_date_
timeperiod table is updated daily in the Platform_Analytics_daily_etl.xxx
procedure, and therefore the Time Period attribute does not store data for
the current date.

Column | Description | Data-Type
time_period_id | The fixed numeric ID for the defined Time Periods. | tinyint(4)
time_period_desc | The descriptive form of the Time Period. The Time Periods are defined as: | —
rel_date_timeperiod
The relationship table used to track the rolling Time Periods. The predefined
Time Periods are intended to be overlapping. For example, the Time Period
for Last Week will include the actions for Yesterday, and Last 2 Months will
include all the actions for all other time windows. The rel_date_timeperiod
table is updated daily in the Platform_Analytics_daily_etl.xxx procedure and
therefore the Time Period attribute does not store data for the current date.

Column | Description | Data-Type
time_period_id | The fixed numeric ID for the defined Time Period. | tinyint(4)
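The overlapping windows described above can be sketched as sets of trailing dates ending yesterday, so shorter windows are fully contained in longer ones. Window lengths here are illustrative:

```python
from datetime import date, timedelta

# Sketch of overlapping rolling Time Periods like those in
# rel_date_timeperiod: each period is a window of trailing days ending
# yesterday, excluding the current date.
def period_dates(run_date, days_back):
    """All dates in the window, up to but excluding run_date."""
    return {run_date - timedelta(days=d) for d in range(1, days_back + 1)}

run = date(2023, 3, 15)
yesterday = period_dates(run, 1)
last_week = period_dates(run, 7)

# Overlap: Last Week contains every date of Yesterday, and neither
# window contains the current date.
print(yesterday <= last_week, run not in last_week)
```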
lu_week
The source table for the Week attribute, which tracks MicroStrategy and
Badge transactions. The lu_week table stores week elements until overflow
in the year 2035.
lu_week_time_window
Week Time Window is used to track predefined rolling week windows. The
Week Time Windows are consecutive and not overlapping. For example, Last
Week will include the last seven dates; it will not overlap with the dates
for two weeks ago. The rel_date_weektime_window table is updated daily in
the Platform_Analytics_daily_etl.xxx procedure and, therefore, the Week
Time Windows attribute does not store data for the current date.
rel_date_weektime_window
The relationship table used to track the Dates for the rolling Week Time
Windows. The Week Time Windows are consecutive and not overlapping.
For example, Last Week will include the last seven dates; it will not
overlap with the dates for two weeks ago. The rel_date_weektime_window
table is updated daily in the Platform_Analytics_daily_etl.xxx procedure and
therefore the Week Time Windows attribute does not store data for the
current date.

Column | Description | Data-Type
week_time_window_id | The fixed numeric ID for the Week Time Window. | tinyint(4)
lu_minute
The Minute when a Badge or MicroStrategy transaction occurs.

Column | Description | Data-Type
minute_desc | The descriptive form of the Minute. For example, 10:09 represents 10:09 am. | varchar(8)
lu_hour
The Hour when a Badge or MicroStrategy transaction occurs. For example,
1AM, 2AM.
lu_part_of_day
The Part of Day when a MicroStrategy or Badge action occurs (for example,
Morning or Afternoon). The Part of Day is predefined based on its
relationship with Hour.

Column | Description | Data-Type
part_of_day_id | The fixed numeric ID for the Part of Day. | tinyint(4)
part_of_day_desc | The descriptive form representing the time range for the Part of Day. The Part of Day can be: | —
lu_quarter
Column | Description | Data-Type
quarter_desc | The descriptive form of the Quarter. For example, Q1 2017, Q2 2018, Q3 2019. | varchar(25)
previous_quarter_id | The fixed numeric ID of the quarter previous to the current one. | int(11)
quarter_of_year_id | The fixed numeric ID of the quarter number within the year. For example, Q3 2019 would be 3. | tinyint(4)
For Badge app transactions, the app sends the timezone of the user's device
in the logs. This timezone is used to populate the Local Time hierarchy
attributes. MicroStrategy transactions do not currently have the user device
timezone and therefore do not use the Local Time hierarchy. All Local Time
hierarchy attributes (Local Date, Local Month, etc.) are logical table views
in the Platform Analytics MicroStrategy project.
Servers Hierarchy
The Server attributes allow for analysis at the machine level. The
Intelligence server and Client Server Machine attributes provide the name
and/or IP information for where an Intelligence server and Client Server are
hosted.
Multiple server definitions can be available, but you can install only one
Intelligence server on one Intelligence server machine and the Intelligence
server uses only one Server Definition at a time.
lu_iserver_instance
Lists the Intelligence Server Instances that have been configured to monitor
statistics using Platform Analytics. An Intelligence Server Instance uniquely
identifies a MicroStrategy Intelligence server hosted on a machine. One
Intelligence server uses only one server definition at a time.
Column | Description | Data-Type
iserver_definition_guid | The GUID of the server definition stored in the metadata. | varchar(32)
iserver_definition_name | The name of the server definition stored in the metadata. | varchar(255)
iserver_port_number | The port on which the Intelligence Server Instance is configured. | int(4)
iserver_cluster_id | The Intelligence server Cluster ID. This is -1 by default. See the example SQL for adding the cluster information below. | smallint(4)
iserver_machine_name | The machine name where the Intelligence server is configured. | varchar(255)
lu_iserver_machine
Lists the Intelligence server machines that have Intelligence servers
configured to monitor statistics using Platform Analytics. An Intelligence
server machine can have one or more instances of an Intelligence server
running on different port numbers. An Intelligence server machine can have
the IP, name, or both.
lu_iserver_cluster
There is no automated method for determining the Intelligence server
cluster information from the logs. However, this information can be manually
updated. Examples of the procedure to update the lu_iserver_cluster and lu_
iserver_instance tables are provided below. The Intelligence server cluster
attribute is hidden by default.

Column | Description | Data-Type
iserver_cluster_name | The Intelligence server cluster name that was inserted through custom SQL. | varchar(255)
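The shape of the manual update is simple: insert a cluster row, then point the relevant instance rows away from the -1 default. The following is only a hedged sketch of that idea against a throwaway in-memory database; the real warehouse tables have more columns, and the actual SQL should follow the vendor's documented procedure:

```python
import sqlite3

# Hypothetical miniatures of lu_iserver_cluster and lu_iserver_instance;
# real table definitions and IDs belong to the Platform Analytics warehouse.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE lu_iserver_cluster "
    "(iserver_cluster_id INTEGER, iserver_cluster_name TEXT)"
)
conn.execute(
    "CREATE TABLE lu_iserver_instance "
    "(iserver_instance_id INTEGER, iserver_cluster_id INTEGER)"
)
# Two instances start with the -1 default cluster ID.
conn.executemany("INSERT INTO lu_iserver_instance VALUES (?, -1)", [(1,), (2,)])

# Register the cluster, then reassign the instances to it.
conn.execute("INSERT INTO lu_iserver_cluster VALUES (1, 'Prod Cluster')")
conn.execute(
    "UPDATE lu_iserver_instance SET iserver_cluster_id = 1 "
    "WHERE iserver_instance_id IN (1, 2)"
)

assigned = conn.execute(
    "SELECT COUNT(*) FROM lu_iserver_instance WHERE iserver_cluster_id = 1"
).fetchone()[0]
print(assigned)
```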
lu_client_server_machine
This table stores the Client Server Machine information where an
application server, such as the Web Server or Mobile Server, is hosted.
Client Telemetry
Client telemetry was introduced in MicroStrategy 2021 Update 8 (December
2022). Starting in MicroStrategy 2021 Update 9, this preview feature is
available out-of-the-box.
Attribute/Metric Name | Description | Table Mapping | Folder
Client Action Category | Category for a client action (execution or manipulation) | lu_client_action_category, lu_client_action_type | A12. Clients
Client Action | Unique ID for each client action | fact_client_executions | A12. Clients
Client Application Version | Numerical version of the MicroStrategy client application from which a client request was made | lu_client_application_version, fact_client_device_history | A12. Clients
Client Application Version Category | Type of MicroStrategy client application from which a request was made (Library Web, Library Mobile iOS, MicroStrategy Application, and so on) | lu_client_application_version_category, lu_client_application_version | A12. Clients
Client Cache Type | For executions that hit a cache, indicates the type of cache hit | lu_client_cache_type, fact_client_executions | A12. Clients
Network Type | Network availability from which a client request was made | lu_network_type, fact_client_executions | A12. Clients
Device | The unique ID and name of the device with which a user performs an action | lu_device, fact_client_executions, fact_client_device_history, access_transactions | A08. Client Devices
Device Type | The make and/or model of the phone, tablet, or computer used to perform an action | lu_device_type, lu_device, fact_client_device_history | A08. Client Devices
OS | The name and version of the operating system | lu_os, fact_client_device_history, access_transactions | A08. Client Devices
Custom Application | An attribute that provides a list of custom applications in the metadata | lu_custom_application, fact_client_executions | A02. Configuration Objects
Client Actions | Total number of dossier/document executions and manipulations performed by the client | fact_client_executions | M09. Clients
The Device Configuration hierarchy attributes are primarily used to track the
Badge and Communicator mobile client configuration telemetry. The device
configuration telemetry is tracked at the transactional level. Each Badge
transaction records the exact operating system, device, and mobile
configuration. Therefore, the historical upgrade or modification of the device
configuration is recorded. To see the latest device configuration for a user,
sort descending on the Timestamp attribute.
lu_device
The Device stores the unique ID and name/IP of the client device with which
a user performs an action. For Usher iOS clients, the device_desc is the
name registered on the mobile device. For Usher Android clients, the
device_desc is the name of the model. For MicroStrategy clients, the name
is the IP address from the client.
The Device ID is always unique for both Usher and MicroStrategy client
actions. For Usher, the ID is generated by the Usher Server. For
MicroStrategy, the ID is auto-generated by the Platform Analytics ETL.
Column | Description | Data-Type
device_type_id | The corresponding Device Type ID for the Device. | bigint(20)
lu_device_type
The Device Type records the make and/or model of the phone, tablet, or
computer used to perform a Badge action. All MicroStrategy actions are
assigned a static Device Type of Personal Computer.
Column | Description | Data-Type
device_type_id | The auto-generated numeric ID of the Device Type. | bigint(20)
device_category_id | The corresponding Device Category ID. | int(11)
Example Device Type values: Google Pixel, LG-D820.
lu_device_category
The Device Category records the grouping of device types. All MicroStrategy
actions are assigned a static Device Category of Personal Machine.

Column | Description | Data-Type
device_category_id | The fixed numeric ID of the Device Category. | int(11)
device_category_desc | The descriptive form of the Device Category: Android, iPhone, iPad, Tablet, Personal Computer, Not Applicable | varchar(255)
lu_os
The OS is used to record the exact operating system on the Badge app or
Communicator app used in an Identity transaction. The OS is tracked at the
transactional level; therefore, each Identity action records the unique OS
version and OS Category. For example, iOS 11.2.5.
lu_os_category
The OS Category is used to record the type of operating system running on
the Badge app mobile client. The OS is tracked at the transactional level;
therefore, each Badge action records the unique OS version and OS Category.

Column | Description | Data-Type
os_category_desc | The descriptive form of the OS Category: Android, iOS, OS X, Not Applicable | varchar(255)
lu_mobile_app
The Mobile App records the exact Mobile App version on the Badge user's
device during a transaction. The Mobile App is tracked at the transactional
level; therefore, each Badge action records the unique app version.
MicroStrategy transactions do not record the Mobile App information and are
mapped to a default Unknown.

Column | Description | Data-Type
mobile_app_id | The auto-generated numeric ID for the Mobile App. | bigint(20)
mobile_app_category_id | The corresponding Mobile App Category ID. | tinyint(4)
Example Mobile App values: Communicator/10.10.002, Not Applicable.
lu_mobile_app_category
The Mobile App Category is used to record the type of Mobile App running
on the user's mobile client. The Mobile App is tracked at the transactional
level; therefore, each Badge action records the unique Mobile App version.
MicroStrategy transactions do not record the Mobile App information and are
mapped to a default Unknown.

Column | Description | Data-Type
mobile_app_category_desc | The descriptive form of the Mobile App Category: Badge, Door Reader, Communicator, WatchKit, Not Applicable | varchar(255)
User Entity
A User Entity can inherit project access privileges from two Sources:
1. User (self)
2. User Groups (parent User Groups)
Source Entity
A Source can inherit project access privileges from three Privilege Sources:
1. User (self)
2. User Groups (all direct and indirect Parent User Groups)
3. Security Roles
Relationship Tables
The relationship table rel_user_entity_source captures the relationship
between a User Entity and its Source of privileges. Sources may be the User
itself or a parent User Group.
For more information about project access privileges, see List of Privileges.
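The way privileges combine across the Sources named above can be sketched as a union of per-source privilege sets. All names and privilege strings below are illustrative, not real warehouse contents:

```python
# Sketch of combining privileges from multiple Privilege Sources (the user,
# its parent User Groups, and Security Roles). Names are illustrative.
privileges_by_source = {
    "jsmith": {"Use Library Web"},
    "Analysts": {"Create Dossier", "Use Library Web"},
    "Role: Power Users": {"Administer Subscriptions"},
}

def effective_privileges(sources):
    """Union of the privilege sets granted by each source."""
    combined = set()
    for source in sources:
        combined |= privileges_by_source.get(source, set())
    return combined

result = effective_privileges(["jsmith", "Analysts", "Role: Power Users"])
print(sorted(result))
```

Note that duplicate grants (here, "Use Library Web" from both the user and a group) collapse to a single effective privilege.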
rel_user_entity_source
A User can inherit privileges directly from two Sources:
1. User (self)
2. User Groups (parent User Groups)

Column | Description | Data-Type
user_entity_id | The auto-generated numeric ID for the User Entity. | bigint(20)
lu_user_entity_view
The list of all possible User Entities; it therefore contains Users and
Contacts. It is a view based on lu_entity, limited to entity_type_ids in (1, 4).
View Table Column | WH Table Column | Description | Data-Type
user_entity_desc | entity_desc | The description of the User Entity. | varchar(255)
user_entity_guid | entity_guid | The GUID of the User Entity. | varchar(32)
l Disabled (0)
lu_user_entity_type_view
A User Entity can be of type User or Contact; therefore, this lookup table
contains a list of two static elements. This lookup table is a view based on
lu_entity_type, limited to entity_type_ids in (1, 4).
The two static elements are User (1) and Contact (4).
lu_source_entity_view
The list of all possible Sources; it therefore contains Users and User
Groups. It is a view based on lu_entity, limited to entity_type_ids in (1, 2).
View Table Column | WH Table Column | Description | Data-Type
source_name | entity_name | The name of the Source. | varchar(255)
source_desc | entity_desc | The description of the Source. | varchar(255)
user_entity_guid | entity_guid | The GUID of the Source. | varchar(32)
l Disabled (0)
rel_source_privilege_source_scope
A Source can get its privileges from three Privilege Sources:
1. User (self)
2. User Groups (all direct and indirect Parent User Groups)
3. Security Roles

Column | Description | Data-Type
audit_timestamp | The timestamp the License Audit was triggered, sent by the Intelligence server to Kafka logs. | timestamp
lu_scope
The Scope attribute was introduced to optimize SQL query and ETL
performance. It is not intended to be used in ad-hoc reports. The level at
which Privileges are assigned can differ depending on the Privilege Source.
Security Roles can grant a set of privileges but restrict them to a few
projects, whereas User and User Group privileges are applicable globally
for all projects. To determine which privilege is assigned to which list of
projects, Scope is used. It represents the list of projects for which the
privilege source is applicable.
rel_scope_project
The relationship table between Scope and Project. Scope represents the list of projects for which a User/User Group inherits a list of privileges from a Security Role. This table maintains the relationship between each scope and its corresponding projects.
metadata_id The ID for the corresponding metadata for each Project. bigint(20)
rel_privilege_source_privilege_group
A relationship table between Privilege Source (which is User, User group, or
Security Role) and their corresponding set of privileges represented by the
Privilege Group.
audit_timestamp  The timestamp the License Audit was triggered. This is sent by the Intelligence server to Kafka logs.  timestamp
lu_privilege_source_view
The list of all possible Privilege Sources. Therefore, it contains Users, User
Groups and Security Roles. A view based on lu_entity limited to entity_type_
ids in (1,2,3).
Column                 WH Table Column  Description                               Data Type
privilege_source_name  entity_name      The name of the Privilege Source.         varchar(255)
privilege_source_desc  entity_desc      The description of the Privilege Source.  varchar(255)
privilege_source_guid  entity_guid      The GUID of the Privilege Source.         varchar(32)
lu_privilege_source_type_view
A Privilege Source can be of type: User, User Group, or Security Role. Therefore, this lookup table contains a list of three static elements. This lookup table is a view based on lu_entity_type limited to entity_type_ids in (1,2,3).
Column                    Description                                  Data Type
privilege_source_type_id  The fixed ID for the privilege source type.  int(11)
lu_privilege_group
This table is used for internal purposes only. A Privilege Group represents a unique set of privileges applied to a Privilege Source. Multiple Privilege Sources with the same set of privileges are assigned the same Privilege Group.
rel_privilege_group_privilege
This is a relationship table between Privilege Groups and their set of
Privileges. The join between rel_privilege_source_privilege_group, rel_
privilege_group_privilege, and lu_privilege gives a list of privileges that are
assigned directly to each Privilege Source. Such a list includes only the
directly assigned privileges and not the inherited privileges.
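The three-table join described above can be sketched with an in-memory SQLite database. The column names used here (privilege_source_id, privilege_group_id, privilege_id, privilege_name) are assumptions inferred from the table descriptions, not the exact warehouse DDL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Minimal stand-ins for the warehouse tables described above.
cur.executescript("""
CREATE TABLE rel_privilege_source_privilege_group (
    privilege_source_id INTEGER, privilege_group_id INTEGER);
CREATE TABLE rel_privilege_group_privilege (
    privilege_group_id INTEGER, privilege_id INTEGER);
CREATE TABLE lu_privilege (
    privilege_id INTEGER, privilege_name TEXT);
INSERT INTO rel_privilege_source_privilege_group VALUES (101, 1);
INSERT INTO rel_privilege_group_privilege VALUES (1, 10), (1, 11);
INSERT INTO lu_privilege VALUES (10, 'Web user'), (11, 'Create dossier');
""")
# Directly assigned privileges per Privilege Source (no inherited privileges).
rows = cur.execute("""
    SELECT spg.privilege_source_id, p.privilege_name
    FROM rel_privilege_source_privilege_group spg
    JOIN rel_privilege_group_privilege gp
      ON spg.privilege_group_id = gp.privilege_group_id
    JOIN lu_privilege p
      ON gp.privilege_id = p.privilege_id
    ORDER BY p.privilege_id
""").fetchall()
print(rows)  # [(101, 'Web user'), (101, 'Create dossier')]
```

Because Privilege Groups deduplicate identical privilege sets, many Privilege Sources can share one group row, which keeps the join small.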
fact_user_entity_resolved_privilege
This table contains the resolved list of Privileges and their associated
Product (including those directly applied to a user, those from a parent
group, and those from a security role applied to a user or parent group) for
each User Entity in the metadata.
user_entity_id            The auto-generated ID value for the User Entity.  bigint(20)
license_entity_status_id  The fixed ID value for the status of the entity.  tinyint(4)
lu_license_entity_status_view
An entity (from lu_user_entity_view, lu_privilege_source_view, lu_source_
entity_view) in the license model can be either enabled or disabled.
Therefore this lookup table contains a list of two static elements. This lookup
table is a view based on lu_account_status limited to account_status_id in
(0,1) for enabled/disabled.
l Enabled (1)
l Disabled (0)
lu_product
lu_product is the lookup table for all Products. To access each product, a user needs to have a set of privileges.
lu_privilege
The static list of all Privileges. For more information about project access privileges, see List of Privileges.

Column          Description                         Data Type
privilege_desc  The description for the Privilege.  varchar(255)
lu_gateway
A Gateway is an access point that requires the authentication of an account. A gateway is a unique physical, logical, or Badge desktop resource into which the account authenticates. It is the consolidation of the data from the Application, Space, and Desktop tables. The columns are populated by the PACS system configured in Network Manager, the Logical Applications configured in Network Manager, or when a user configures the Badge Desktop client for unlocking personal machines.
Column               Description                                            Data Type
gateway_desc         For example: Elevator A, Office 365, MAC-JSMITH        varchar(255)
creation_timestamp   The UTC timestamp when the gateway was first created.  datetime
gateway_category_id  The numeric ID of the corresponding gateway category.  tinyint(4)
lu_gateway_category
A Gateway Category is an automated categorization of the gateway
populated through the Platform Analytics ETL. If the transaction does not
correspond to accessing a Badge resource, for example, uploading a new
lu_application
An Application is a logical web application that requires authentication from an account. For example, an Application could be Salesforce, Office 365, or Rally. For a full list of web applications that can be configured with Badge, see Signing into Web Applications.
l Office 365
lu_space
A Space is a physical building access point that requires the authentication
of an account. The Space desc form is populated directly by the values in
the PACS system. The PACS system is configured through Network
Manager (see Configuring PACS).
l Elevator A
creation_timestamp  The UTC timestamp when the PACS space was first imported through Network Manager.  datetime
lu_floor
A Floor is the grouping of spaces. For example, Lobby, 10th Floor, Parking Garage, etc.
floor_desc              The name of the Floor to which the space is mapped and created in Network Manager.  varchar(255)
creation_timestamp      The UTC timestamp when the Floor was first created through Network Manager.  datetime
modification_timestamp  The latest UTC timestamp when the Floor was modified. The value changes as the mapping to spaces or names change.  datetime
lu_facility
A Facility is the grouping of floors, such as Headquarters or London Office, and is representative of a building. Each Facility has a Facility Address associated with the location of the building. A network administrator can add the address of the facility to provide an additional level of analysis of Badge transactions.
The Platform Analytics ETL stores the facility address and draws a 500-
meter radius around the facility. Any Badge transaction (long/lat) that
happens within the 500-meter radius is mapped to the stored facility
address. Any transaction outside the radius is mapped to Not Applicable.
The true location of the user is never manipulated for the Longitude/Latitude
attributes.
If a user is detected within two facility radii, the Platform Analytics ETL chooses the facility at the closest distance. Both the Facility and Facility Address are configured through Network Manager. The radius is not configurable.
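The 500-meter mapping described above can be sketched as a nearest-facility lookup. This is a minimal illustration of the stated rules (distance function and facility coordinates are assumptions, not the actual ETL code):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/long points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def map_to_facility(lat, lon, facilities, radius_m=500.0):
    """Return the closest facility within radius_m, else 'Not Applicable'.

    `facilities` is a list of (name, lat, lon) tuples.
    """
    in_range = [(haversine_m(lat, lon, flat, flon), name)
                for name, flat, flon in facilities
                if haversine_m(lat, lon, flat, flon) <= radius_m]
    return min(in_range)[1] if in_range else "Not Applicable"

facilities = [("Headquarters", 38.9250, -77.2140), ("London Office", 51.5074, -0.1278)]
print(map_to_facility(38.9255, -77.2145, facilities))  # within 500 m: Headquarters
print(map_to_facility(40.0, -75.0, facilities))        # outside both radii: Not Applicable
```

Note that, as the text says, only the Facility Address attribute is resolved this way; the raw Longitude/Latitude values are stored unmodified.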
There are multiple benefits to adding the Facility Address. When two users perform a Badge transaction from the same location, the mobile device can send longitude/latitude data that is different. This is a limitation of geolocation data from mobile devices. The Facility Address attribute allows an
l Did the user open the Space (i.e., the PACS system) remotely?
facility_desc          The name of the facility to which the Floor is mapped. The value is added through Network Manager.  varchar(255)
creation_timestamp     The UTC timestamp when the Facility was first created through Network Manager.  datetime
location_desc          The street address, city, state, and country of the facility, added through Network Manager.  varchar
updated_location_flag  MicroStrategy internal use.  int(11)
lu_campus
A Campus is a collection of Facilities. The campus-to-facility mapping is configured through Network Manager.

campus_desc         The name of the campus to which the facilities are mapped in Network Manager.  varchar(255)
creation_timestamp  The UTC timestamp when the Campus was first created through Network Manager.  datetime
lu_facility_address
This table is a view on the lu_facility table. Each Facility has a Facility
Address associated with the location of the building.
Column                   Description                                                                                   Data Type
facility_address_id      The numeric ID of the facility address.                                                       bigint(20)
facility_address_desc    The name of the facility to which the address corresponds. The value is added through Network Manager.  varchar(255)
facility_street_address  The street address, city, state, country of the facility added through Network Manager. The address is used for the 500-meter-radius ETL logic mentioned previously.  varchar(1096)
lu_beacon
A Beacon can be configured to provide access to physical gateways/spaces or to identify a user's location.

Column        Description                                            Data Type
beacon_desc   The name of the beacon configured in Network Manager.  varchar(255)
beacon_major  The major value for your beacon. If you have more than one building in your network, you can have multiple major values.  int(11)
lu_bar_code
A Barcode can represent people (such as an employee, a customer, a
contractor, or a party to a transaction) and objects (such as a vehicle,
computer, package, contract, or transaction receipt). Scanning a barcode
links the person, place, or thing identified by the barcode with the Badge
user who performed the scan.
You can scan third-party barcodes and QR codes to display the data in
Platform Analytics. The barcode is scanned using the Badge app. The exact
barcode string is stored in the lu_bar_code table. The string can be parsed
using derived attributes in MicroStrategy.
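Since the raw string is stored verbatim in lu_bar_code, any structure inside it has to be split downstream. A minimal sketch of that parsing (the pipe-delimited payload format and field names here are purely hypothetical, standing in for what a derived attribute would extract):

```python
def parse_bar_code(bar_code: str) -> dict:
    """Split a hypothetical pipe-delimited barcode payload into named fields.

    Example payload: "PKG|4412|WAS|2023-03-01" -> kind, item_id, site, date.
    """
    fields = ("kind", "item_id", "site", "date")
    return dict(zip(fields, bar_code.split("|")))

print(parse_bar_code("PKG|4412|WAS|2023-03-01"))
# {'kind': 'PKG', 'item_id': '4412', 'site': 'WAS', 'date': '2023-03-01'}
```

In practice the same split would be expressed as string functions inside a MicroStrategy derived attribute rather than external code.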
Scanning the codes allows you to create reports and dossiers that tie the
scanning transaction and the data inside the code to Badge, Identity, and
telemetry features. This feature also provides a way to document a
transaction or link a Badge user and another person or object, using an
identity-centric and location-aware approach. For example, you can
document the path of a package from its source to its destination, providing
information on who, where, and when the package was handled.
Column            Description                                Data Type
bar_code_id       The numeric ID of the bar code.            bigint(20)
bar_code_type_id  The ID of the corresponding barcode type.  int(11)
lu_bar_code_type
The format type of the barcode scanned by the Badge app.
Column              Description                                           Data Type
bar_code_type_id    The numeric ID of the barcode type.                   int(11)
bar_code_type_desc  The format type of the barcode the user scans. The Badge app supports scanning the following types:  varchar(255)
l AZTEC
l CODE39
l CODE93
l CODE128
l DATAMATRIX
l EAN8
l EAN13
l INTERLEAVED2OF5
l PDF417
l QR
l UPC-E
lu_desktop
A D esktop is the name of the user’s Mac or Windows machine that was
locked or unlocked using their Badge app.
l MAC-JSMITH
l WAS-RJONES
l John’s MacbookPro
lu_desktop_unlock_setting
The Desktop Unlock Setting is configured by the end user to control the
proximity that triggers the unlock feature on either a Mac or a Windows
machine. This setting can be changed at any time and therefore is stored at
the transactional level in the access transaction fact table.
Column                     Description                                    Data Type
desktop_unlock_setting_id  The numeric ID of the desktop unlock setting.  tinyint(4)

The setting values are:
l Close
l Nearby
l Far
In MicroStrategy Badge, the Longitude and Latitude are logged for all transactions in the Badge app. The location setting can be managed from Network Manager and from the user's personal device.
The rows in the tables are the exact values provided by Google. In some
cases, Google will change the value returned in the API of the
address/city/state/country and a new record will be added to the table. For
example, Colorado and CO.
In the Location hierarchy, the Address attribute is the direct child of the City, State, and Country attributes. This helps reduce the in-memory joins between the attributes and improves the performance of reports and dossiers related to the Location hierarchy.
lu_country
The list of countries returned by the Google® API from a corresponding
latitude/longitude transaction by the user.
l Unknown
l United States
l Germany
l China
lu_state
The list of states returned by the Google® API for which there has been a
corresponding latitude/longitude transaction by the user. Not all longitude
and latitude coordinates are associated with a State. In these cases, the row
will be populated with No State (<name of the country>).
state_desc  The name of the state. varchar(255). For example:
l Unknown
l Location Services Disabled
l No State (Germany)
l Virginia
lu_city
The list of cities returned by the Google® API for which there has been a
corresponding latitude/longitude transaction by the user. Not all longitude
and latitude coordinates are associated with a city. In these cases, the row
will be populated with Unknown (<name of the state>).
l Unknown
l Unknown (Virginia)
l Tysons Corner
state_id The numeric ID for the corresponding state to each city. bigint(20)
country_id The numeric ID for the corresponding country to each city. bigint(20)
lu_address
The list of Addresses returned by the Google® API from a corresponding latitude/longitude transaction by the user. Not all longitude and latitude coordinates are associated with an address. In these cases, the row is populated with Unknown (<name of the state>).
city_id The numeric ID for the corresponding city to each address. bigint(20)
In the Platform Analytics project, there is no project schema based off the
warehouse tables. To build a custom report, use the Communicator Inbox
Messages data import cube.
lu_usher_inbox_messages
This table stores the list of messages and the parent message sent from
Communicator. It also stores the sender device and location information at
the time each message was sent.
creation_timestamp  The UTC timestamp from when the message was sent.              datetime
network_id          The numeric ID of the network to which the sender belongs.     bigint(20)
usher_device_id     The numeric ID of the device from which the message was sent.  bigint(20)
os_id               The numeric ID of the OS from which the message was sent.      int(11)
lu_usher_inbox_responses
This table stores the list of possible responses for each message sent from
Communicator.
There are two types of messages. The first type is a confirmation: the admin states a question and the Identity server user replies by tapping Confirm. Confirm is the default value used if no options are given. The second type of message is one sent with options. For example, if the Communicator admin asks "Do you like red or blue?", this table gets two entries: one for red with its own response_id and one for blue with its own response_id, both having the same message ID.
Column         Description                                               Data Type
response_desc  This is the response of the user using the Badge.         varchar(25)
message_id     This is the message ID generated by the Identity server.  bigint(20)
fact_usher_inbox_messages
This table stores each of the Communicator messages sent to the individual
Badge users. It also stores the time each message was sent.
responder_account_id  For Platform Analytics only, this is the account for the person who received the message.  bigint(20)
responder_badge_id    For Platform Analytics only, this is the badge for the person who received the message.    bigint(20)
sent_timestamp        This is the UTC timestamp from when the message was sent.    datetime
local_sent_timestamp  This is the local timestamp from when the message was sent.  datetime
sent_date             This is the UTC date from when the message was sent.         date
local_sent_date       This is the local date from when the message was sent.       date
fact_usher_inbox_messages_view
This View is used to track who responded to a specific message over the last
14 days only.
sent_timestamp        The UTC timestamp from when the message was sent.           datetime
local_sent_timestamp  The local timestamp when the message was sent to the user.  datetime
fact_usher_inbox_responses
This is the fact table that tracks each Communicator response. If an admin
sends 100 messages but only 50 people reply, this table would have 50
entries.
responder_account_id      For Platform Analytics only, this is the account for the person who received the message.  bigint(20)
responder_badge_id        For Platform Analytics only, this is the badge for the person who received the message.    bigint(20)
user_device_id            The numeric ID of the device from which the message was sent.  bigint(20)
os_id                     The numeric ID of the OS from which the message was sent.      int(11)
network_id                The numeric ID of the network to which the sender belongs.
response_timestamp        This is the UTC timestamp from when the reply to the message was sent.  datetime
local_response_timestamp  The local timestamp from when the reply to the message was sent.        datetime
response_date             This is the UTC date from when the reply to the message was sent.       date
local_response_date       This is the local date from when the reply to the message was sent.     date
fact_usher_inbox_responses_view
This view is to help easily track which message is replied to by which users
over the last 14 days.
responder_account_id      For Platform Analytics only, this is the account for the person who received the message.  bigint(20)
user_device_id            The numeric ID of the device from which the message was sent.           bigint(20)
response_timestamp        The UTC timestamp from when the reply to the message was sent.          datetime
local_response_timestamp  The local timestamp from when the reply to the message was sent.        datetime
The historical file versions are also tracked in this table. The latest version of the procedure and DDL files indicates which Platform Analytics file versions are currently running.

filetype   The type of file that was run. For example: DDL, Procedure, Data Update.  varchar(32)
insert_ts  The UTC timestamp when the file was called by the installer.              datetime
etl_network_control
This table controls which Networks will be processed into the Platform
Analytics warehouse. By default, the table is empty, so data for all the
networks is processed. If an ID of a Network is inserted in this table,
Platform Analytics will only process this specific network and exclude all
others.
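The control-table semantics above can be sketched as follows (a minimal illustration; only the table name etl_network_control and its empty-means-all behavior come from the text, the column name network_id is assumed):

```python
def networks_to_process(all_network_ids, control_table_rows):
    """Return the network IDs the ETL should process.

    etl_network_control semantics: an empty control table means
    process every network; otherwise process only the listed IDs.
    """
    allowed = {row["network_id"] for row in control_table_rows}
    if not allowed:  # empty table: no filtering
        return list(all_network_ids)
    return [n for n in all_network_ids if n in allowed]

# Empty control table: all networks are processed.
print(networks_to_process([1, 2, 3], []))                   # [1, 2, 3]
# Network 2 inserted into the control table: only 2 is processed.
print(networks_to_process([1, 2, 3], [{"network_id": 2}]))  # [2]
```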
In st al l i n g Pl at f o r m An al yt i cs
The Platform Analytics monitoring tool is a component of the MicroStrategy
platform. Thus, you can install Platform Analytics on Windows and Linux by
using the MicroStrategy Installation Wizard. View the Platform Analytics
Prerequisites before installing.
Once you have successfully installed the components mentioned above, you
need to configure Platform Analytics. For more information, see Configuring
Platform Analytics. If you want to update your Platform Analytics project, see
the Upgrade Help.
l The following ports must be open and available in the machines where you
will install the Telemetry Server(s) and Platform Analytics:
l 2181
l 2888 and 3888 (only if you plan to cluster three or more Telemetry
Servers)
l 5432
l 6379
l 9092
l You must create a MicroStrategy user in the group System Monitors >
System Administrators or have access to the default Administrator user.
Remote access can be enabled at the time of creating the DB User or after
by following these steps:
On Linux:
/opt/mstr/MicroStrategy/Repository/pgsql/PGDATA/pg_
hba.conf
3. Replace <DB User Name> with the user installing the Platform
Analytics Repository.
4. The database user can now connect to this PostgreSQL server instance from any remote machine.
Co n f i gu r i n g Pl at f o r m An al yt i cs
After you have successfully installed all the components required for
Platform Analytics to work, you will need to configure Platform Analytics.
The following sections can assist you during the configuration process:
1. Configure
a. Single Node
c. High Throughput
One of the key benefits of Platform Analytics is that it can monitor multiple
MicroStrategy environments from a centralized location. However, after
completing the initial configuration above, Platform Analytics will only
monitor the environment that you configured to log telemetry. To monitor
additional environments, you must individually configure them to send
telemetry to Platform Analytics. See Monitor Metadata Repositories Across
Multiple Environments for more information.
A single node telemetry configuration can be used to process data from one
or more MicroStrategy Intelligence server clusters. This topic discusses how
to configure single node telemetry on your enterprise platform.
l The same environment that hosts your Intelligence server or one of its
clusters.
l Installation Restrictions
l Install Components
Installation Restrictions
Currently, Telemetry server is an integral part of Intelligence server
installation.
Install Components
Start by installing components on the corresponding environments.
l Platform Analytics
l MicroStrategy Repository
l MicroStrategy Intelligence
l MicroStrategy Intelligence
3. Create and Configure the Platform Analytics Project for Single Node
Telemetry
/opt/mstr/Microstrategy/install/Repository/pg_data/pg_
hba.conf
There are two different levels of statistics that can be enabled for logging,
basic and advanced. The Configuration wizard can only configure basic
statistics, while Command Manager can configure both.
Telemetry Server Host: The Kafka server fully qualified domain name
or IP address. The default is 127.0.0.1 if Platform Analytics is installed
on same machine. If not, provide the Platform Analytics machine's IP
address (Machine 4 in this example).
Telemetry Server Port: The Kafka server port number. The default is
9092.
You must have the following privileges to enable statistics from Command
Manager:
3. If you are using more than one Intelligence server environment, repeat
the above steps for all Intelligence server environments.
For example, say Platform Analytics and the Telemetry server are installed on Machine 4 and the Intelligence server is installed on Machines 1, 2, and 3. You can use the Configuration wizard on Machines 1, 2, and 3 to create and configure the Platform Analytics project.
5. Click Next.
l PlatformAnalyticsConfigurationNew.scp
l PlatformAnalyticsConfigurationNew_PostgreSQL.scp
l PlatformAnalyticsConfigurationUpgrade.scp
l PlatformAnalyticsConfigurationUpgrade_PostgreSQL.scp
2. Installation Prerequisites
Windows:
INSTALL_PATH\Repository\pgsql\PGDATA\pg_hba.conf
Linux:
INSTALL_PATH/Repository/pg_data/pg_hba.conf
2. Installation Prerequisites
1. The following components must be installed on Machine 1, 2 and 3, as
shown in the above diagram.
l MicroStrategy Intelligence
l Platform Analytics
l MicroStrategy Intelligence
There are two different levels of statistics that can be enabled for logging,
basic and advanced. The Configuration wizard can only configure basic
statistics, while Command Manager can configure both.
Telemetry Server Host: The Kafka server fully qualified domain name
or IP address. The default is 127.0.0.1 if Platform Analytics is installed
on same machine. If not, provide the Platform Analytics machine's IP
address (Machine 4 in this example).
Telemetry Server Port: The Kafka server port number. The default is
9092.
You must have the following privileges to enable statistics from Command
Manager:
3. If you are using more than one Intelligence server environment, repeat
the above steps for all Intelligence server environments.
5. Click Next.
l PlatformAnalyticsConfigurationNew.scp
l PlatformAnalyticsConfigurationNew_PostgreSQL.scp
l PlatformAnalyticsConfigurationUpgrade.scp
l PlatformAnalyticsConfigurationUpgrade_PostgreSQL.scp
Analytics) can only consume data from a single Kafka node or a single Kafka cluster. All Kafka nodes should be in one cluster; multiple Kafka clusters are not supported.
1. Install Components
Start by installing components on the corresponding environments.
l MicroStrategy Intelligence
l MicroStrategy Intelligence
l Platform Analytics
1. Go to Services.
sc delete MSTR_PlatformAnalyticsConsumer
c. Close Services.
5. Go to Services.
l Edit server.properties
l Edit zookeeper.properties
l Edit myid
Edit server.properties
1. Open server.properties for editing.
Windows location:
Linux location:
/opt/MicroStrategy/MessagingServices/Kafka/kafka_
x.x.x./config
In this example:
Machine 1: broker.id=1
Machine 2: broker.id=2
Machine 3: broker.id=3
broker.id=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
zookeeper.connect=10.27.18.73:2181,10.27.18.224:2181,10.27.36.168:2181
Edit zookeeper.properties
1. Open zookeeper.properties for editing.
Windows location:
Linux location:
/opt/MicroStrategy/MessagingServices/Kafka/kafka_
x.x.x./config
# To allow Zookeeper to work with the other nodes in your cluster, add
the following properties to the end of the zookeeper.properties file.
# initLimit=5
# syncLimit=2
# server.X= <IP address of the node>:2888:3888
# When adding this property, replace X above with the broker.id for the
node being referenced. A separate entry must be made for each node in the
cluster.
# For example,
initLimit=5
syncLimit=2
server.0=10.27.18.73:2888:3888
server.1=10.27.18.224:2888:3888
server.2=10.27.36.168:2888:3888
Edit myid
1. Open myid for editing. If this file doesn’t exist, you should create one.
Windows location:
Linux location:
/opt/MicroStrategy/MessagingServices/tmp/zookeeper
2. Confirm that the myid file does not have a hidden extension. In File
Explorer, go to View > Show > File name extensions to show
extensions. If your file has an extension, remove it.
3. Make sure the broker.id for each node matches the values you set in
server.properties.
broker.id=1
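For example, on Linux the node whose server.properties sets broker.id=1 would get a myid file containing only that number. A scratch directory is used in this sketch in place of the real zookeeper path shown above:

```shell
# Write the broker.id value (and nothing else) into myid.
ZK_DATA_DIR="$(mktemp -d)"          # stands in for MessagingServices/tmp/zookeeper
printf '1\n' > "$ZK_DATA_DIR/myid"  # plain number, no file extension
cat "$ZK_DATA_DIR/myid"
```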
When restarting the services, it's important to note that all configuration file
changes must be completed first. For example, if you are adding two
additional Kafka nodes and you already have one existing node, then the
install and configuration should be completed on all three nodes before
restarting any of the services.
1. Start Zookeeper and Kafka on the main node before starting other
nodes.
Windows location:
Linux location:
/opt/MicroStrategy/Platform Analytics/conf
2. Add all telemetry node IP addresses to the file, using the following
format:
zooKeeperConnection: IP1:port,IP2:port,IP3:port
bootstrap.servers: IP1:port,IP2:port,IP3:port
kafkaTopicNumberOfReplicas: 3
zooKeeperConnection: 10.27.18.73:2181,10.27.18.224:2181
bootstrap.servers: 10.27.18.73:9092,10.27.18.224:9092
There are two different levels of statistics that can be enabled for logging,
basic and advanced. The Configuration wizard can only configure basic
statistics, while Command Manager can configure both.
Telemetry Server Host: The Kafka server fully qualified domain name
or IP address. The default is 127.0.0.1 if Platform Analytics is installed
on same machine. If not, provide the Platform Analytics machine's IP
address (Machine 4 in this example).
Telemetry Server Port: The Kafka server port number. The default is
9092.
You must have the following privileges to enable statistics from Command
Manager:
3. If you are using more than one Intelligence server environment, repeat
the above steps for all Intelligence server environments.
l Troubleshooting
5. Click Next.
l PlatformAnalyticsConfigurationNew.scp
l PlatformAnalyticsConfigurationNew_PostgreSQL.scp
l PlatformAnalyticsConfigurationUpgrade.scp
l PlatformAnalyticsConfigurationUpgrade_PostgreSQL.scp
1. Launch Developer.
Troubleshooting
If Apache ZooKeeper cannot be restarted, ensure Kafka is fully configured.
2. Open the myid file and ensure the broker.id is the same as it
appears in server.properties. If they are different, this may be why
Apache ZooKeeper is not starting.
l The project source to the Intelligence Server which you wish to trigger the
initial load for Platform Analytics. For steps to create a Project Source, see
Create and Configure the Platform Analytics Project for Single Node
Telemetry.
The initial load of metadata objects can take several hours depending on
the size of the metadata. If the process gets interrupted then you must re-
trigger the initial load. Unsure if the process was complete? See Verify the
Initial Load of Object Telemetry.
2. Trigger the Event for the Scheduled Administration Task to send all
metadata object information to MicroStrategy Messaging Services.
The first message on the log file is the metadata object telemetry start
message. If the initial load is successfully triggered, you will see the
following information:
l Project distribution info: Indicates the host and projects that are
being loaded. If you are using a clustered environment, each node will
appear in this section with its associated projects.
Fo r Exam p l e
The above message indicates that the initial load was triggered since the
Metadata object telemetry start message appears. This load is
also given a unique identifier:
ObjectTelemetryID:5707FDFC4046F238782CC89C8DA94DD8. This
identifier will appear in subsequent messages related to this initial load.
This message also provides a record of how many projects are going to load.
The Project distribution info line indicates the node, tec-w-
004718, that contains four projects: Human Resources Analysis
Module, MicroStrategy Tutorial, Platform Analytics, and
Configuration.
This number may not represent the final number of objects sent to Kafka
because objects can be added or deleted when the project object telemetry
is being sent.
This message is logged when the cluster node starts, progresses, and is
finished with the subscription instance telemetry process.
This message indicates that the current node finished the subscription
instance telemetry progress and that 26 subscriptions were successfully
sent to Kafka.
1. Prerequisites
5. Enable/Disable TLS
1. Prerequisites
1. Install and configure Platform Analytics first, prior to enabling client
telemetry.
Windows
Linux
/opt/tomcat/webapps/MicroStrategyLibrary/WEB-
INF/classes/config/configOverride.properties
This setting controls all of the projects in your metadata. If you want to
re-enable basic statistics for select projects using Command Manager,
see Enable Statistics from Command Manager.
5. Enable/Disable TLS
Enabling TLS enables a secured connection between the Library and
Telemetry servers.
To enable TLS from Workstation , you must manually configure TLS for the
Telemetry and Library servers. If TLS is not configured correctly for both
servers, attempting to enable TLS from Workstation will fail.
If you have a clustered Telemetry server, make sure TLS is configured properly on each node.
1. Locate the configuration file. The path of this file may vary depending
on your configured install path for Tomcat.
Windows
Linux
/opt/tomcat/webapps/MicroStrategyLibrary/WEB-
INF/classes/config/configOverride.properties
In the file, all fields that must be modified have a description and
example available in configDefault.properties, which is in the
same folder.
2. Add the truststore path and password to the config file. Open
configOverride.properties and add the following fields if they do
not exist. If you make any changes, restart Tomcat.
trustStore.path=/<path to>/client.truststore.jks
trustStore.passphrase=<truststore password>
producer.kafkaProperties.security.protocol = SSL
producer.kafkaProperties.bootstrap.servers = host1:port1,host2:port2,...
l User Name: Enter the MicroStrategy user name that can access the
Intelligence Server.
l Password: Enter the password for the MicroStrategy user that can
access the Intelligence Server.
7. Click Next.
l PlatformAnalyticsConfigurationNew.scp
l PlatformAnalyticsConfigurationUpgrade.scp
l PlatformAnalyticsConfigurationNew_PostgreSQL.scp
l PlatformAnalyticsConfigurationUpgrade_PostgreSQL.scp
l Host: Type the host name of the Platform Analytics warehouse. By default,
this is set to the last successful connection value.
l User Name: Type the user name for the Platform Analytics warehouse. By default, this is set to the value from the PAConsumerConfig.yaml file.
l Password: Type the password for the Platform Analytics warehouse user.
Depending on the warehouse type you choose for the Host and Port, you
must set the parameter whDbType to either "postgresql" or "mysql" in
the PAConsumerConfig.yaml file.
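For example, a PostgreSQL warehouse would be declared like this (an illustrative fragment; only the whDbType parameter name and its two allowed values come from the text above, and its exact placement in the file may differ):

```yaml
# Illustrative PAConsumerConfig.yaml fragment — match the Host/Port you configured.
whDbType: postgresql   # or "mysql"
```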
l Linux: /opt/MicroStrategy/PlatformAnalytics/Conf
You can also update the Platform Analytics repository using the
Configuration Wizard in interactive mode.
2. Click Enter.
If you did not change the values, leave them as the defaults. The default
password can be found in C:\Program Files (x86)\Common
Files\MicroStrategy\Default_Accounts.txt
4. Press Enter.
Advanced Configuration
The following sections cover optional setup procedures and are not required
for configuring Platform Analytics.
The YAML file structure is updated each release with new configuration
parameters or Telemetry Server topics. All modifiable values are retained
after an upgrade, so customized parameters are not lost; however, any
newly added fields are set to their defaults after an upgrade.
The YAML file is located on the machine where Platform Analytics was
installed using the MicroStrategy Installation Wizard.
l Linux: /opt/MicroStrategy/PlatformAnalytics/Conf
parentConfig:
numberOfConsumers: 1
pollTimeoutMillisec: 1000
kafkaProperties:
bootstrap.servers: "10.27.17.167:9092"
YAML uses the key: value notation. A single space is required after the
colon.
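As a quick sanity check, the spacing rule can be verified with a small sketch (the file name and keys below are illustrative, not part of the product):

```shell
# Write a two-key sample: the first follows the "key: value" rule,
# the second omits the required space after the colon.
printf 'scheduleHour: 5\nscheduleMin:2\n' > sample.yaml

# Flag any top-level keys that are missing the space after the colon.
grep -nE '^[A-Za-z]+:[^ ]' sample.yaml
```

Running this prints `2:scheduleMin:2`, pointing at the malformed line.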
PAConsumerConfig.yaml Specifications
The PAConsumerConfig file consists of the following parts:
---
paParentConfig:
consumerGroupSuffix: ~
overrideKafkaOffsets: true
kafkaTopicNumberOfReplicas: 1
kafkaTopicsDoNotCreateList:
zooKeeperConnection: 127.0.0.1:2181
ignoreUsherTopics: false
kafkaConsumerProperties:
bootstrap.servers: 127.0.0.1:9092
paEtlConfig:
redisConnection:
redisServer: 127.0.0.1
redisPort: 6379
redisPassword:
dailyETLConfiguration:
scheduleHour: 5
scheduleMin: 2
viewCutoffRangeInDays: 14
beaconDedup: true
locationDedup: true
warehouseDbConnection:
whHost: 127.0.0.1
whUser: root
whPasswd:
whPort: 3306
whDb: platform_analytics_wh
whClientCertificateKeyStore:
whClientCertificateKeyStoreType:
whClientCertificateKeyStorePassword:
whTrustCertificateKeyStore:
whTrustCertificateKeyStoreType:
whTrustCertificateKeyStorePassword:
pgWarehouseDbConnection:
pgWhHost: localhost
pgWhUser: mstr_pa
pgWhPasswd: Ugjx+93ROzBsA2gwBOWT5Qlu6hbfg5frTBmLmg==,970sBwUbi4EowB/4
pgWhPort: 5432
pgWhDb: platform_analytics_wh
pgWhSSLcert: ~
pgWhSSLkey: ~
pgWhSSLrootcert: ~
pgWhSSLmode: ~
geoLocationTopic: Mstr.PlatformAnalytics.Geolocation
kafkaHealthCheckTopic: mstr-pa-health-check
usherProducerKeys:
- SourceProvisionBadgePhone
- SourceProvisionOrganization
- SourceEnvironmentVariables
- SourceOrganization
- SourceOrganizationBadge
- SourceBadgeAdminRole
- SourceBadge
- SourceGateway
- SourceGatewayHierarchyAndDef
- SourceBeacon
- SourceDevice
googleAPIConfig:
googleApiKey:
googleApiClientId:
businessQuota: 100000
freeQuota: 2500
sleepTimeQuery: 5
usherLookupTopic: Mstr.PlatformAnalytics.UsherLookup
usherServerConfig:
usherServerDbConnection:
usherServerMysqlAesKeyPath:
usherServerUrl:
usherServerUser:
usherServerPassword:
paTopicsGroupList:
-
name: UsherInboxMessage
numberOfConsumers: 1
usherFlag: true
topics:
- Mstr.IdentityServer.ActionLog
-
name: UsherInboxResponse
numberOfConsumers: 1
usherFlag: true
topics:
- Mstr.IdentityServer.ActionLog
-
name: Geolocation
numberOfConsumers: 1
usherFlag: true
topics:
- Mstr.PlatformAnalytics.Geolocation
-
name: UsherLog
numberOfConsumers: 2
usherFlag: true
topics:
- Mstr.IdentityServer.ActionLog
- Mstr.IdentityServer.LocationLog
paParentConfig Settings
---
paParentConfig:
consumerGroupSuffix: ~
overrideKafkaOffsets: true
kafkaTopicNumberOfReplicas: 1
kafkaTopicsDoNotCreateList:
zooKeeperConnection: 127.0.0.1:2181
ignoreUsherTopics: false
kafkaConsumerProperties:
bootstrap.servers: 127.0.0.1:9092
For example: reprocess_incorrect_log_johndoe_1330111282018
l bootstrap.servers (pre-configured Kafka broker quorum): The comma-
separated Telemetry Server (Kafka) cluster configuration, for example
127.0.0.1:9092 or FQDN1:PORT1,FQDN2:PORT2,FQDN3:PORT3.
paEtlConfig Settings
paEtlConfig:
redisConnection:
redisServer: 127.0.0.1
redisPort: 6379
redisPassword: ~
dailyETLConfiguration:
scheduleHour: 5
scheduleMin: 2
viewCutoffRangeInDays: 14
currentFactDataKeepDays: 180
beaconDedup: true
locationDedup: true
whDbType: postgresql
warehouseDbConnection:
whHost: 127.0.0.1
whUser: root
whPasswd: r9oJP5d6
whPort: 3306
whDb: platform_analytics_wh
pgWarehouseDbConnection:
pgWhHost: localhost
pgWhUser: mstr_pa
pgWhPasswd:
Ugjx+93ROzBsA2gwBOWT5Qlu6hbfg5frTBmLmg==,970sBwUbi4EowB/4
pgWhPort: 5432
pgWhDb: platform_analytics_wh
pgWhSSLcert: ~
pgWhSSLkey: ~
pgWhSSLrootcert: ~
pgWhSSLmode: ~
geoLocationTopic: Mstr.PlatformAnalytics.Geolocation
kafkaHealthCheckTopic: mstr-pa-health-check
usherProducerKeys:
- SourceProvisionBadgePhone
- SourceProvisionOrganization
- SourceEnvironmentVariables
- SourceOrganization
- SourceOrganizationBadge
- SourceBadgeAdminRole
- SourceBadge
- SourceGateway
- SourceGatewayHierarchyAndDef
- SourceBeacon
- SourceDevice
googleAPIConfig:
googleApiKey: ~
googleApiClientId: ~
businessQuota: 100000
freeQuota: 2500
sleepTimeQuery: 5
usherLookupTopic: Mstr.PlatformAnalytics.UsherLookup
l redisServer (default: 127.0.0.1): The Telemetry Cache (Redis) server.
For best performance, use a local Telemetry Cache instance.
l redisPassword (default: empty string): The password to connect to the
Telemetry Cache (Redis server) if password authentication is enabled. By
default, password authentication is not enabled.
l viewCutoffRangeInDays (default: 14): The number of days of data that
the view tables in the Platform Analytics Cube hold in memory during
republish.
l currentFactDataKeepDays (default: 180): The number of days of data
that the current fact tables in the Platform Analytics Repository hold.
For the PostgreSQL warehouse, historical tables are created for fact
tables whose data volume may be very large, such as access_transactions
and fact_sql_stats. For example, the 180-day default means that the
current fact tables include data from the last rolling 180 days, and all
other data is stored in the historical fact tables.
l beaconDedup (default: true): A flag that determines whether de-
duplication of the MicroStrategy Badge beacon tracking data is turned
on. If true, the Telemetry Store ETL removes any duplicate beacon
actions if all of the conditions are met:
l interacting with the same beacon
l within 180 seconds
l locationDedup (default: true): A flag that determines whether de-
duplication of the MicroStrategy Badge location tracking data is turned
on. If true, the Telemetry Store ETL removes any duplicate location
tracking actions if all of the conditions are met:
l within 60 seconds
l whDbType (default: postgresql): The database type used for the
Platform Analytics Repository. Beginning with MicroStrategy 2020, the
default database is "postgresql", but the database type "mysql" is also
supported.
l whUser (pre-configured via installation): The user name used to
connect to the Platform Analytics Repository where the Telemetry Store
stores data for reporting.
l whPasswd (pre-configured via installation): The password of the user
used to connect to the Platform Analytics Repository where the Telemetry
Store stores data for reporting.
l whPort: The port of the Platform Analytics Repository. The default is
3306, set during installation.
l whDb (default: platform_analytics_wh): The database of the Platform
Analytics warehouse.
l pgWhUser (default: mstr_pa): The PostgreSQL database user name used
to connect to the Platform Analytics Repository where the Telemetry
Store stores data for reporting.
l pgWhPasswd (pre-configured via installation): The PostgreSQL database
password of the user used to connect to the Platform Analytics
Repository where the Telemetry Store stores data for reporting. This
password is encrypted during the installation. You can find the
unencrypted password in the Default_Accounts.txt file, which is under
C:\Program Files (x86)\Common Files\MicroStrategy\ on Windows or
./install/Repository/ on Linux.
l pgWhPort: The port of the PostgreSQL Platform Analytics Repository.
The default is 5432, set during installation.
l pgWhDb (default: platform_analytics_wh): The database of the Platform
Analytics warehouse.
l geoLocationTopic (default: Mstr.PlatformAnalytics.Geolocation): The
Telemetry Server (Kafka) topic for location data geocoding processing
from the MicroStrategy Badge mobile app.
l kafkaHealthCheckTopic (default: mstr-pa-health-check): The Telemetry
Server (Kafka) topic used for the health check.
l usherProducerKeys: The list of Usher producer keys
(SourceProvisionBadgePhone, SourceProvisionOrganization,
SourceEnvironmentVariables, SourceOrganization,
SourceOrganizationBadge, SourceBadgeAdminRole, SourceBadge,
SourceGateway, SourceGatewayHierarchyAndDef, SourceBeacon,
SourceDevice). This should not be changed.
l logging (default: True): A flag that determines whether Google
geocoding API usage logging is enabled.
l alerting (default: True): A flag that determines whether Google
geocoding API usage alerting is enabled.
l sleepTimeQuery (default: 5): The number of seconds to pause between
Google geocoding API calls for location data processing.
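As an illustration only (the values below are examples, not recommendations), a deployment that keeps a full rolling year of current fact data and turns off Badge beacon de-duplication would override the defaults in PAConsumerConfig.yaml like this:

```yaml
paEtlConfig:
  dailyETLConfiguration:
    scheduleHour: 5
    scheduleMin: 2
    viewCutoffRangeInDays: 14
    currentFactDataKeepDays: 365   # keep a rolling year in the current fact tables
    beaconDedup: false             # keep duplicate beacon actions
    locationDedup: true
```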
usherServerConfig Settings
usherServerConfig:
usherServerDbConnection:
usherServerMysqlAesKeyPath:
usherServerUrl:
usherServerUser:
usherServerPassword:
paTopicsGroupList Settings
The following settings are defined only at the topicsGroup level, not at
the parentConfig level.
paTopicsGroupList:
-
name: UsherInboxMessage
numberOfConsumers: 1
usherFlag: true
topics:
- Mstr.IdentityServer.ActionLog
-
name: UsherInboxResponse
numberOfConsumers: 1
usherFlag: true
topics:
- Mstr.IdentityServer.ActionLog
-
name: Geolocation
numberOfConsumers: 1
usherFlag: true
topics:
- Mstr.PlatformAnalytics.Geolocation
-
name: UsherLog
numberOfConsumers: 2
usherFlag: true
topics:
- Mstr.IdentityServer.ActionLog
- Mstr.IdentityServer.LocationLog
Name Description
platform-analytics-custom-install.bat
This option always drops the platform_analytics_wh database (if one
exists) and re-creates it. If the warehouse previously processed data,
this option results in a loss of that data.
This option will not drop the platform_analytics_wh database and will
preserve the historical data.
4. From the Platform Analytics directory, open the log folder and edit the file
platform-analytics-installation.log.
./platform-analytics-consumer.sh stop
./platform-analytics-usher-lookup-producer.sh stop
platform-analytics-custom-install.sh
4. In the Platform Analytics directory, open the log folder and edit the
file platform-analytics-installation.log.
6. In the Platform Analytics directory, open the bin folder and run the
following commands:
./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start
Before configuring the Daily ETL, you must have MicroStrategy and Platform
Analytics fully installed and configured. For more information, see Installing
Platform Analytics.
For example, if your server is set to UTC time zone, and you want to
schedule the Daily ETL to run at 11:05 pm, you would set the values as:
dailyETLConfiguration:
scheduleHour: 23
scheduleMin: 05
If you have more questions about Daily ETL, see Commonly Asked
Questions and Troubleshooting.
./platform-analytics-consumer.sh stop
For example, if your server is set to UTC time zone, and you want
to schedule the Daily ETL to run at 11:05 pm, you would set the
values as:
dailyETLConfiguration:
scheduleHour: 23
scheduleMin: 05
6. In the Platform Analytics directory, open the bin folder and run the
command:
./platform-analytics-consumer.sh start
If you have more questions about Daily ETL, see Commonly Asked
Questions and Troubleshooting.
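The scheduleHour and scheduleMin values must be expressed in the server's time zone. As a sketch (the offset below is a hypothetical UTC-5 zone, ignoring daylight saving), the conversion is simple modular arithmetic:

```shell
# Desired local run time: 6:05 pm in a UTC-5 zone.
local_hour=18
local_min=5
utc_offset=-5   # hours east of UTC for the local zone

# Hour on a server whose clock runs in UTC:
utc_hour=$(( (local_hour - utc_offset + 24) % 24 ))

echo "scheduleHour: $utc_hour"   # 23
echo "scheduleMin: $local_min"   # 5
```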
req -new -key server.key -days 3650 -out server.crt -x509 -subj "/CN=IP or
HOSTNAME"
3. Open Command Prompt or File Explorer and navigate to where the server
certificate is located.
4. Copy the newly created server certificate to create the certificate authority:
ssl = on
ssl_ca_file = '\\LOCATION_OF_FILE\\root.crt'
ssl_cert_file = '\\LOCATION_OF_FILE\\server.crt'
ssl_key_file = '\\LOCATION_OF_FILE\\server.key'
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
ssl_prefer_server_ciphers = on
If you receive an error, you may need to comment out tsa_policy1 in the
openssl.cnf file. Save and relaunch openssl as an Administrator.
2. Convert the private key into DER format using the command below:
The JDBC PostgreSQL driver used by Platform Analytics requires that the
key file be in DER format rather than PEM format.
pkcs8 -topk8 -inform PEM -in postgresql.key -outform DER -nocrypt -out
postgresql.key.der
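The conversion can be exercised end-to-end with a throwaway key, assuming openssl is available on the PATH (the file names mirror the ones used above):

```shell
# Generate a scratch PEM key, convert it to DER, and confirm the DER
# file parses back as a valid private key.
openssl genrsa -out postgresql.key 2048 2>/dev/null
openssl pkcs8 -topk8 -inform PEM -in postgresql.key \
  -outform DER -nocrypt -out postgresql.key.der
openssl pkey -inform DER -in postgresql.key.der -noout && echo OK
```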
3. Depending on the ODBC driver being used for PostgreSQL, a key store may
be required. To create a key store:
4. Copy the files that were created to the client machine and update the
PAConsumerConfig.yaml file with the below path to the certificate and
key.
pgWarehouseDbConnection:
pgWhHost: YOUR_HOST
pgWhUser: DB_USERNAME
pgWhPasswd: YOUR_PASSWORD
pgWhPort: 5432
pgWhDb: platform_analytics_wh
pgWhSSLcert: \LOCATION_OF_FILE\postgresql.crt
pgWhSSLkey: \LOCATION_OF_FILE\postgresql.key.der
pgWhSSLrootcert: \LOCATION_OF_FILE\root.crt
pgWhSSLmode: verify-ca
Change the file permissions and owner to the system user running
PostgreSQL:
If owned by root:
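The exact commands are not reproduced here; a typical sequence (assuming the PostgreSQL service account is named mstr — substitute your own) hands the key to the service account and restricts it to owner-only access, since PostgreSQL refuses to start when server.key is readable by other users:

```shell
# If the file is owned by root, first hand it to the service account:
#   chown mstr:mstr server.key     # run as root; account name is assumed
# Then restrict it to owner-only access:
touch server.key                   # stand-in for the real key file
chmod 600 server.key
stat -c '%a' server.key            # 600
```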
openssl req -new -key server.key -days 3650 -out server.crt -x509
-subj "/CN=IP OR HOSTNAME"
cp server.crt root.crt
ssl = on
ssl_ca_file = '/LOCATION_OF_FILE/root.crt'
ssl_cert_file = '/LOCATION_OF_FILE/server.crt'
ssl_key_file = '/LOCATION_OF_FILE/server.key'
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
ssl_prefer_server_ciphers = on
cd /opt/MicroStrategy/PlatformAnalytics/bin
./mstr_pg_ctl restart
2. Convert the private key into DER format using the command below:
4. Copy the files that were created to the client machine and update
the PAConsumerConfig.yaml file with the below path to the
certificate and key.
pgWarehouseDbConnection:
pgWhHost: YOUR_HOST
pgWhUser: DB_USERNAME
pgWhPasswd: YOUR_PASSWORD
pgWhPort: 5432
pgWhDb: platform_analytics_wh
pgWhSSLcert: /LOCATION_OF_FILE/postgresql.crt
pgWhSSLkey: /LOCATION_OF_FILE/postgresql.key.der
pgWhSSLrootcert: /LOCATION_OF_FILE/root.crt
pgWhSSLmode: verify-ca
3. Right click in the list of schedules and select New > Schedule.
8. Set the Time to trigger to Execute All Day Every / Executing every:
15 minutes.
l Check the box next to the table name in the Data Source column
l Change the Refresh Policy to Update Existing Data and Add New
Data.
l Begin typing the view name into the Table text box.
Table mapping
fact_access_transactions_view fact_access_transactions_incremental
fact_action_cube_cache_view fact_action_cube_cache_incremental
fact_action_security_filter_view fact_action_security_filter_incremental
fact_latest_cube_cache fact_latest_cube_cache_incremental
fact_object_component fact_object_component_incremental
lu_account lu_account_incremental
lu_cache lu_cache_incremental
lu_cache_object lu_cache_object_incremental
lu_component_object lu_component_object_incremental
lu_history_list_message_view lu_history_list_message_incremental
lu_object lu_object_incremental
lu_project_object lu_project_object_incremental
lu_session_view lu_session_incremental
lu_status lu_status_incremental
lu_subscription lu_subscription_incremental
lu_validating_account lu_validating_account_incremental
The Platform Analytics Cube will now be incrementally refreshed with the
latest data every 15 minutes rather than every hour.
2. Create a user. If the mstr and mstr_pa users already exist from the
MicroStrategy installation, skip this step.
whDbType: postgresql
…
pgWarehouseDbConnection:
pgWhHost: YOUR_HOST
pgWhUser: mstr_pa
pgWhPasswd: YOUR_PASSWORD
pgWhPort: 5432
pgWhDb: YOUR_DATABASE_NAME
Windows: platform-analytics-custom-install.bat -o
install
Linux: ./platform-analytics-custom-install.sh -o
install
9. Update the DSN used for Platform Analytics by changing the database
in the DSN to the one you created.
You can also create or modify a response file with a text editor. For
information on all the parameters in the response file, see Platform Analytics
Response File Parameters.
Options Description
Options Description
PAProjectDSSUser=
The user name to log in to the Platform Analytics project.
The Data Source Name for the database that contains your
PAProjectDSNName=
Platform Analytics repository.
PAProjectDSNUserPwd= The password for the user name above for the Platform
Options Description
Example
[PAProjectHeader]
PAProject=1
PAProjectEncryptPwd=1
PAProjectDSSUser=Administrator
PAProjectDSSPwd=password
PAProjectDSNName=PA_WAREHOUSE
PAProjectDSNUserName=root
PAProjectDSNUserPwd=password
PAProjectDSNPrefix=
l From within the Configuration Wizard. See, To Use a Response File with
the Configuration Wizard.
l From the Windows command line. See, To Use a Response File through
the Windows Command Line. This enables users to run the file without
using any graphical user interfaces.
1. From the Windows Start menu choose All Programs > MicroStrategy
Tools > Configuration Wizard.
2. Click Load.
3. Browse to the path where the response file is saved and click Open.
macfgwiz.exe -r "Path\response.ini"
If an error message is displayed, check the path and name you supplied for
the response file and make any required changes.
6. Type the fully qualified path to the response.ini file and press Enter.
For example:
/home/username/MicroStrategy/RESPONSE.INI
If an error message is displayed, check the path and name you supplied for
the response file and make any required changes.
To Use a Response File through the UNIX/Linux Command Line
3. Enter the following command in the command line and press Enter.
/home/username/MicroStrategy/RESPONSE.INI
If an error message is displayed, check the path and name you supplied for
the response file and make any required changes.
TRIGGER SUBSCRIPTION "Platform Analytics Cube Every Hour_30" MANAGE ALL OWNER
"Administrator" FOR PROJECT "Platform Analytics";
TRIGGER SUBSCRIPTION "Remote Diagnostic Cube Every Hour_20" MANAGE ALL OWNER
"Administrator" FOR PROJECT "Platform Analytics";
TRIGGER SUBSCRIPTION "Compliance Telemetry Cube Every Hour_00" MANAGE ALL OWNER
"Administrator" FOR PROJECT "Platform Analytics";
TRIGGER SUBSCRIPTION "Platform Analytics Cube Every Hour_30" MANAGE ALL OWNER
"Administrator" FOR PROJECT "Platform Analytics";
TRIGGER SUBSCRIPTION "Remote Diagnostic Cube Every Hour_20" MANAGE ALL OWNER
"Administrator" FOR PROJECT "Platform Analytics";
TRIGGER SUBSCRIPTION "Compliance Telemetry Cube Every Hour_00" MANAGE ALL OWNER
"Administrator" FOR PROJECT "Platform Analytics";
TRIGGER SUBSCRIPTION "Capacity Planning Cube Every Hour_25" MANAGE ALL OWNER
"Administrator" FOR PROJECT "Platform Analytics";
Project Upgrades
TRIGGER SUBSCRIPTION "Capacity Planning Cube Every Hour_25" MANAGE ALL OWNER
"Administrator" FOR PROJECT "Platform Analytics";
Windows
Linux
/opt/MicroStrategy/MessagingServices/Kafka/kafka_2.13-
3.2.0/bin/zookeeper-shell.sh
Windows
.\zookeeper-shell.bat 10.23.39.148:2181,10.23.35.115:2181
Linux
./zookeeper-shell.sh 10.23.36.181:2181,10.23.33.221:2181
3. Verify the znodes in Zookeeper have the World privilege by running the
following command:
getAcl /brokers
setAcl -R / ip:<IP1>:cdrwa,ip:<IP2>:cdrwa,ip:<IP3>:cdrwa,...
setAcl -R /
ip:10.250.151.242:cdrwa,ip:10.250.155.99:cdrwa
getAcl /brokers
4. After all Zookeeper nodes are up, start all Kafka nodes.
5. Start Consumer.
Post Configuration Maintenance
For long term maintenance of Platform Analytics, the following options are
available:
Backup Prerequisites:
warehouseDbConnection:
l whHost: 127.0.0.1
l whUser: root
l whPasswd: encrypted_password
l whPort: 3306
l whDb: platform_analytics_wh
Restore Prerequisites:
pgWarehouseDbConnection:
l pgWhHost: 127.0.0.1
l pgWhUser: postgres
l pgWhPort: 5432
l pgWhDb: platform_analytics_wh
platform-analytics-data-migration-tool.bat
This is the Platform Analytics Data Migration Tool. The purpose of this
tool is to help migrate your data from an existing Mysql Warehouse to a new
PostgreSQL Warehouse.
Please select from the following options:
1) Backup
2) Restore
3) Backup and Restore
0) Exit
Migration Workflow
Backup
1. Provide the path to the directory where the MySQL backup will be stored.
Restore
1. Provide the path to the directory where the MySQL backup is stored.
2. The tool prompts you again to confirm that it is okay to drop your
PostgreSQL platform_analytics_wh schema.
4. The backup data is then imported into the newly created platform_
analytics_wh schema.
In-Place Upgrades
If you are performing an in-place upgrade, the best practice steps are as
follows:
5. Enter the full desired directory path for the database to be backed up to and
restored from.
6. Wait until the backup is complete. The tool then prompts you to confirm
recreating the PostgreSQL warehouse; select yes.
7. The program will then restore your MySQL backup files into your new
PostgreSQL warehouse and the data migration will be complete.
8. If you have installed Workstation and Service Registration, the service's
grouping and dependency information in MicroStrategy Workstation's
Topology view should be updated.
4. Open Workstation and select the Topology tab. Consumer should now
appear to depend on and belong to the same group as Store
(PostgreSQL).
Parallel Upgrades
1. On your new MicroStrategy 2021 machine, populate the
PAConsumerConfig.yaml with the MySQL and PostgreSQL information
shown in the prerequisites above.
5. Enter the full desired directory path for the database to be backed up to and
restored from.
6. Wait until the backup is complete. The tool then prompts you to confirm
recreating the PostgreSQL warehouse; select yes.
7. The program will then restore your MySQL backup files into your new
PostgreSQL warehouse and the data migration will be complete.
4. Open Workstation and select the Topology tab. Consumer should now
appear to depend on and belong to the same group as Store
(PostgreSQL).
Backup Prerequisites:
l /MicroStrategy/install/PlatformAnalytics/PAConsumerConf
ig.yaml populated with:
warehouseDbConnection:
l whHost: 127.0.0.1
l whUser: root
l whPasswd: encrypted_password
l whPort: 3306
l whDb: platform_analytics_wh
Restore Prerequisites:
pgWarehouseDbConnection:
l pgWhHost: 127.0.0.1
l pgWhUser: postgres
l pgWhPort: 5432
l pgWhDb: platform_analytics_wh
/opt/mstr/MicroStrategy/PlatformAnalytics/bin
./platform-analytics-data-migration-tool.sh
Migration Workflow
Backup
1. Provide the path to the directory where the MySQL backup will be
stored.
Restore
1. Provide the path to the directory where the MySQL backup is
stored.
2. The tool prompts you again to confirm that it is okay to drop
your PostgreSQL platform_analytics_wh schema.
In-Place Upgrades
If you are performing an in-place upgrade, the best practice steps are
as follows:
5. Enter the full desired directory path for the database to be backed
up to and restored from.
6. Wait until the backup is complete. The tool then prompts you to
confirm recreating the PostgreSQL warehouse; select yes.
7. The program will then restore your MySQL backup files into your
new PostgreSQL warehouse and the data migration will be
complete.
$ su - mstr
$ /opt/MicroStrategy/_jre/bin/java -jar
/opt/MicroStrategy/ServicesRegistration/jar/svcsreg-admin.jar
migrate MicroStrategy-Platform-Analytics-Consumer MySQL
PostgreSQL
Parallel Upgrades
1. On your new MicroStrategy 2021 machine, populate the
PAConsumerConfig.yaml with the MySQL and PostgreSQL
information shown in the prerequisites above.
5. Enter the full desired directory path for the database to be backed
up to and restored from.
6. Wait until the backup is complete. The tool then prompts you to
confirm recreating the PostgreSQL warehouse; select yes.
7. The program will then restore your MySQL backup files into your
new PostgreSQL warehouse and the data migration will be
complete.
$ su - mstr
$ /opt/MicroStrategy/_jre/bin/java -jar
/opt/MicroStrategy/ServicesRegistration/jar/svcsreg-admin.jar
migrate MicroStrategy-Platform-Analytics-Consumer MySQL
PostgreSQL
1. Disable Services
3. Configure Kafka
4. Restart Services
l One environment with MicroStrategy and Platform Analytics fully installed and
configured. For more information, see Installing Platform Analytics.
Disable Services
Before configuring the new Kafka nodes, ensure that the Intelligence Server
Producer, Apache ZooKeeper, Apache Kafka, and Platform Analytics
Consumer and Producer are disabled. If you already have more than one
node in the cluster, disable the services on all nodes.
3. When you're prompted to select the features you want to install, select
Telemetry Server.
Configure Kafka
Perform the following steps for all nodes, including those that already
exist in the cluster.
l initLimit=5
server.0=10.27.18.73:2888:3888
server.1=10.27.18.224:2888:3888
6. Create a text file called myid containing only the broker.id of the node.
Ensure the file doesn't have a hidden extension. To check, select
View > Show/hide > File name extensions in File Explorer, and delete any
extension on your myid file.
Restart Services
After the installation and configuration on all Kafka nodes in the cluster are
complete, restart the Intelligence Server Producer, Apache ZooKeeper,
Apache Kafka, and Platform Analytics Consumer and Producer.
Apache ZooKeeper
1. In Windows Services, start Apache ZooKeeper. Start the main node before
starting other nodes.
Apache Kafka
1. In Windows Services, start Apache Kafka.
Replace the hostname and port with the new Telemetry Server cluster
configuration for the Platform Analytics environment.
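For example, with a hypothetical three-node cluster (host names are placeholders), the consumer's broker list in PAConsumerConfig.yaml becomes:

```yaml
kafkaConsumerProperties:
  bootstrap.servers: node1.example.com:9092,node2.example.com:9092,node3.example.com:9092
```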
Troubleshooting
If Apache ZooKeeper cannot be restarted, ensure Kafka is fully configured.
2. Open the meta.properties file and ensure the broker.id is the same
as it appears in server.properties. If they are different, this may be why
Apache ZooKeeper is not starting.
3. If there is no telemetry in the Kafka topics, check if statistics are enabled for
Platform Analytics projects by running the following command in Command
Manager:
1. Disable Services
3. Configure Kafka
4. Restart Services
Disable Services
Before configuring the new Kafka nodes, ensure the Intelligence Server
Producer, Apache ZooKeeper, Apache Kafka, and Platform Analytics
Consumer and Producer are disabled. If you already have more than one
node in the cluster, disable the services on all nodes.
./platform-analytics-consumer.sh stop
./platform-analytics-usher-lookup-producer.sh stop
./kafka-server-stop.sh
./zookeeper-server-stop.sh
Configure Kafka
Perform the following steps for all nodes, including those that already
exist in the cluster.
l initLimit=5
l syncLimit=2
For example:
initLimit=5
syncLimit=2
server.0=10.27.18.73:2888:3888
server.1=10.27.18.224:2888:3888
6. Create a text file called myid containing only the broker.id of the
node.
Restart Services
After the installation and configuration on all Kafka nodes in the cluster
are complete, restart the Intelligence Server Producer, Apache
ZooKeeper, Apache Kafka, and Platform Analytics Consumer and
Producer.
When restarting the services, all configuration file changes must be
completed first. For example, if you are adding two additional Kafka
nodes and have one existing node, complete the install and configuration
on all three nodes before restarting any of the services.
Apache ZooKeeper
1. In the Kafka directory, found in
/opt/MicroStrategy/MessagingServices/Kafka/kafka_2.11-1.1.0/,
open the bin folder.
Apache Kafka
1. In the same folder, start Kafka on all nodes by running:
Replace the hostname and port with the new Telemetry Server
cluster configuration for the Platform Analytics environment.
./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start
Troubleshooting
If Apache ZooKeeper cannot be started, ensure Kafka is fully configured.
l One environment with MicroStrategy and Platform Analytics fully installed and
configured. For more information, see Installing Platform Analytics.
platform-analytics-encryptor.bat
For example:
whPasswd: 7YX+l/9HOr6DPpT0AiEVNzsnug==,x7F8IezkCtLjFFdX
10. Open Command Manager and connect to the Project Source that contains
the Platform Analytics project.
You do not need to enter the encrypted password. The password you
entered will be encrypted when stored in the metadata.
12. Execute the command. Your password is updated for the Platform Analytics
project.
./platform-analytics-consumer.sh stop
./platform-analytics-usher-lookup-producer.sh stop
[user@your-PA-machine bin]#./platform-analytics-encryptor.sh
For example:
whPasswd:
7YX+l/9HOr6DPpT0AiEVNzsnug==,x7F8IezkCtLjFFdX
9. In the Platform Analytics directory, open the bin folder and run the
following commands to start MicroStrategy Platform Analytics
Consumer and MicroStrategy Usher Metadata Producer:
./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start
10. Open Command Manager and connect to the Project Source that
contains the Platform Analytics project.
12. Execute the command. Your password is updated for the Platform
Analytics project.
# requirepass foobared
requirepass [password]
For example:
redisPassword:
c5eoCdW023nqmME9Nl2ZBntw5MdvBZEOQLd9zD6xVWSx3UjE,EnrazzMgibZDpHD
./redis.sh stop
./platform-analytics-consumer.sh stop
./platform-analytics-usher-lookup-producer.sh stop
# requirepass foobared
requirepass [password]
./redis.sh start
[user@your-PA-machine bin]#./platform-analytics-encryptor.sh
For example:
redisPassword:
c5eoCdW023nqmME9Nl2ZBntw5MdvBZEOQLd9zD6xVWSx3UjE,EnrazzMgibZDpHD
15. In the Platform Analytics directory, open the bin folder and run the
following commands:
./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start
Increasing the number of days means the size of the cache stored in-
memory on the Intelligence Server increases.
To see all dossiers and reports that are dependents of the Platform
Analytics Cube, right-click a cube and select Find Dependents.
Tables
To limit the amount of data returned in-memory after republishing the
Platform Analytics Cube, warehouse views have been created on top of
large lookup tables and on all fact tables. These tables are used in the local
schema cube definition but do not apply to the Platform Analytics project
schema. Any application object (report, dossier, document, OLAP Cube)
created based on the project schema returns data for all days in the Platform
Analytics Repository.
l fact_prompt_answers_view
l fact_action_security_filter_view
l fact_sql_stats_view
l fact_step_sequence_view
l fact_performance_monitor_view
l fact_action_cube_cache_view
l fact_access_transactions_view
l lu_session_view
l lu_history_list_message_view
For example:
viewCutoffRangeInDays: 14
6. To verify the change, run the SQL query below in the Platform Analytics
Repository:
./platform-analytics-consumer.sh stop
For example:
viewCutoffRangeInDays: 14
5. In the Platform Analytics directory, open the bin folder and run the
following command:
./platform-analytics-consumer.sh start
6. To verify the change, run the SQL query below in the Platform
Analytics Repository:
MySQL Maintenance
Since Platform Analytics stores telemetry in the Platform Analytics MySQL
Repository, it's important to maintain your MySQL database. There are four
recommended ways to maintain your database:
Where:
You can modify the syntax depending on the information you want to backup.
For more information on the database backup program, see Backup and
Recovery.
l Asynchronous Replication
l Semi-Synchronous Replication
In either case, you can configure your system so that Platform Analytics
Consumer writes to the master and the Intelligence server reads data from
one of the replicas. This is useful for systems with heavy read/write load and
if you have several custom cubes created using the Self Service Schema in
the Platform Analytics project.
All three components must be in a healthy state for Platform Analytics to
successfully process telemetry logs. If any of these components is
unavailable, the Telemetry Store consumer and Identity Telemetry producer
stop. Therefore, during startup, both the consumer and producer
execute a health check for the three components and generate a detailed
report with the results.
At times, one of the components may still be starting and not completely
ready when the health check begins. In such situations, the consumer and
producer perform three consecutive checks, with a 60-second delay between
each check, before concluding that the dependencies are unhealthy.
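The retry behavior described above can be sketched as a small shell function. This is a sketch only: check_cmd stands in for the real Redis/Kafka/MySQL probe, and the actual consumer uses a 60-second delay and writes a detailed report rather than a single word.

```shell
# Probe a dependency up to three times, sleeping between attempts,
# before declaring it unhealthy. $1 = probe command, $2 = delay (seconds).
retry_health_check() {
  attempt=1
  while [ "$attempt" -le 3 ]; do
    if $1; then
      echo healthy
      return 0
    fi
    [ "$attempt" -lt 3 ] && sleep "$2"
    attempt=$((attempt + 1))
  done
  echo unhealthy
  return 1
}

# Demo: a probe that always fails, with no delay between attempts.
retry_health_check false 0 || true   # prints: unhealthy
```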
The health check reports are generated in the Platform Analytics log folder,
located in the default install path:
l Linux: /opt/MicroStrategy/PlatformAnalytics/log
The name of the file identifies whether the report corresponds to the
consumer or the producer.
For example,
platform-analytics-consumer-health-check-yyyymmddhhmmss.out
platform-analytics-usher-lookup-producer-health-check-yyyymmddhhmmss.out
1. Health Check
During the health check, there are two checks being executed:
l Does the database user have the required privileges? For a full list of
installation prerequisites, see Platform Analytics Prerequisites.
The Health Check report provides a list of the privileges and the resulting
status. If all the checks are successful, the final line will read Warehouse
health check result is healthy.
If you receive any of the following errors in the Health Check, here are
suggested workarounds:
Platform Analytics supports MySQL versions 5.6, 5.7, and 8.0. For MySQL
8.0, SSL connection is enabled by default. Currently, Platform Analytics
does not support SSL for the database user connecting to MySQL. When you
create the database user for the Platform Analytics Consumer or Usher
Metadata Producer, specify the SSL/TLS option using the REQUIRE clause.
2. If the result for 'have_ssl' is 'YES', then SSL is enabled. Create the
user with mysql_native_password and REQUIRE NONE options to
connect without SSL.
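As an illustration only, the check and the user creation might be run as follows. The user name, host, and password below are placeholders, not values from this guide.

```shell
# Step 1 (assumed): check whether SSL is enabled on the MySQL server.
CHECK_SQL="SHOW GLOBAL VARIABLES LIKE 'have_ssl';"

# Step 2: create the user with mysql_native_password and REQUIRE NONE
# so it can connect without SSL. ('pa_user' and the password are placeholders.)
CREATE_SQL="CREATE USER 'pa_user'@'%' IDENTIFIED WITH mysql_native_password BY '<password>' REQUIRE NONE;"

# Run them with the mysql client (commented out; requires a live server):
# mysql -u root -p -e "$CHECK_SQL"
# mysql -u root -p -e "$CREATE_SQL"
echo "$CREATE_SQL"
```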
If you see an error in your check, ensure Redis is running and that your
configuration is correct in the PAConsumerConfig.yaml file.
If you receive any of the following errors in the Redis Health Check, here are
suggested workarounds:
If Redis has been enabled with password authentication and the password is
missing in the PAConsumerConfig.yaml configuration file, the consumer
or producer will be unable to connect to Redis. To resolve this error, follow
the steps for Enable Password Authentication on the MicroStrategy
Telemetry Cache.
If you see an error in your check, ensure ZooKeeper and Kafka are started.
The Platform Analytics Health Check Utility performs all three health checks
that occur in Start-Up Health Checks and end-to-end telemetry checks to
verify that data can be produced by the Intelligence Server and consumed by
the Platform Analytics Consumer (Telemetry Store).
If you are using Linux, the Platform Analytics Health Check Utility is located
at /opt/MicroStrategy/PlatformAnalytics/bin. If you are using
Windows, it is located at C:\Program Files
(x86)\MicroStrategy\Platform Analytics\bin.
1. Health Check
During the health check, there are two checks being executed:
l Does the database user have the required privileges? For a full list of
installation prerequisites, see Platform Analytics Prerequisites.
The Health Check report provides a list of the privileges and the resulting
status. If all the checks are successful, the final line will read Warehouse
health check result is healthy.
If you receive any of the following errors in the Health Check, here are
suggested workarounds:
Platform Analytics supports MySQL versions 5.6, 5.7, and 8.0. For MySQL
8.0, SSL connection is enabled by default. Currently, Platform Analytics
does not support SSL for the database user connecting to MySQL. When you
create the database user for the Platform Analytics Consumer or Usher
Metadata Producer, specify the SSL/TLS option using the REQUIRE clause.
2. If the result for 'have_ssl' is 'YES', then SSL is enabled. Create the
user with mysql_native_password and REQUIRE NONE options to
connect without SSL.
If you see an error in your check, ensure Redis is running and that your
configuration is correct in the PAConsumerConfig.yaml file.
If you receive any of the following errors in the Redis Health Check, here are
suggested workarounds:
To resolve this error, connect to the machine hosting Platform Analytics and
confirm all fields under the redisConnection heading are correct in the
PAConsumerConfig.yaml file.
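For orientation, the section to double-check has roughly the following shape. This is a hypothetical sketch: only the redisConnection heading is documented here, and the field names and values below are assumptions introduced for illustration — verify them against your actual file.

```yaml
# Hypothetical sketch -- confirm field names against your PAConsumerConfig.yaml.
redisConnection:
  host: localhost                # machine hosting the Telemetry Cache
  port: 6379                     # default Redis port
  password: "<redis-password>"   # required if password authentication is enabled
```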
It's possible the Redis server failed to write the snapshot to the disk. If this
is the case, you can disable the RDB snapshotting process on the Redis
server.
./platform-analytics-usher-lookup-producer.sh stop
./platform-analytics-consumer.sh stop
#save 900 1
#save 300 10
#save 60 10000
save ""
./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start
If Redis has been enabled with password authentication and the password is
missing in the PAConsumerConfig.yaml configuration file, the consumer
or producer will be unable to connect to Redis. To resolve this error, follow
the steps for Enable Password Authentication on the MicroStrategy
Telemetry Cache.
If you see an error in your check, ensure ZooKeeper and Kafka are started.
If the record is found in both the appropriate Kafka topic and the warehouse,
the final line will read Change Journal health check result is
healthy.
If you see an error in your check, ensure the feature flag Messaging Service
for Platform Analytics is on in the Intelligence server and that the property
Telemetry Server enabled is set to True in the Intelligence server.
3. In the results, verify the feature flag Messaging Service for Platform
Analytics says ON. If the feature flag is OFF, run the following
command to turn it on:
4. To view the status of the property Telemetry Server enabled, run the
command:
IP>:9092/batch.num.messages:5000/queue.buffering.max.ms:2000";
On Linux:
<Install>/PlatformAnalytics/bin
./platform-analytics-consumer.sh status
./platform-analytics-consumer.sh start
If restarting the server does not resolve the issue, check the logs under
<Install>/PlatformAnalytics/log/platform-analytics-
consumer.log or contact MicroStrategy Technical Support and attach the
folder <Install>/PlatformAnalytics/log to your case.
On Windows:
If restarting the server does not resolve the issue, check the logs under
<Install>/PlatformAnalytics/log/platform-analytics-
consumer.log or contact MicroStrategy Technical Support and attach the
folder <Install>/PlatformAnalytics/log to your case.
If the record is found in both the appropriate Kafka topic and the warehouse,
the final line will read Statistics health check result is
healthy.
If you see an error in your check, ensure Statistics are enabled for the
project and that Messaging Services is configured correctly.
On Linux:
<Install>/PlatformAnalytics/bin
./platform-analytics-consumer.sh status
./platform-analytics-consumer.sh start
If restarting the server does not resolve the issue, check the logs under
<Install>/PlatformAnalytics/log/platform-analytics-
consumer.log or contact MicroStrategy Technical Support and attach the
folder <Install>/PlatformAnalytics/log to your case.
On Windows:
If restarting the server does not resolve the issue, check the logs under
<Install>/PlatformAnalytics/log/platform-analytics-
consumer.log or contact MicroStrategy Technical Support and attach the
folder <Install>/PlatformAnalytics/log to your case.
Purpose
Prior to MicroStrategy 2021 Update 6, job statistics were not recorded when
a job was created. As a result, when the Intelligence server crashed, there
was no way to determine which jobs were active or which objects were being
manipulated.
Workflow
Once a job is created, a message is sent to the Kafka Topic with the
CREATETIME field. Upon job completion, another message is sent with the
COMPLETETIME field.
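With both timestamps recorded, the jobs that were in flight at the time of a crash are those with a CREATETIME message but no matching COMPLETETIME message. A minimal sketch of that set difference, assuming one job ID per line in two extracted files (the file names and data are illustrative, not part of the product):

```shell
# Job IDs extracted from CREATETIME and COMPLETETIME messages (sample data).
printf '101\n102\n103\n' > created.txt
printf '101\n103\n' > completed.txt

# comm -23 prints lines unique to the first sorted file:
# jobs that were created but never completed.
sort created.txt > created.sorted
sort completed.txt > completed.sorted
comm -23 created.sorted completed.sorted   # prints: 102
```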
To change the default refresh rate, you can utilize MicroStrategy Developer
or Command Manager.
4. Step through the Schedule Wizard until you get to the Recurrence
Pattern dialog.
6. Click Next.
7. Click Finish.
For example,
platform-analytics-data-cleanup.ps1 -f
C:\Users\mstr\Desktop\YourTextFile.txt
./platform-analytics-data-cleanup.sh -f
/opt/mstr/PlatformAnalytics/bin/YourTextFile.txt
tail -f ../log/platform-analytics-data-cleanup.log
The commands used to purge Platform Analytics warehouse data are based
on different criteria, including:
l Projects: You can purge data from specific projects, but all those projects
must be in one metadata.
l Deleted Objects: You can purge the deleted objects and related data.
l Deleted Projects: You can purge the deleted projects and related data.
l DaysToKeep: You can purge older data, keeping only the most recent data
within the given number of days.
DELETE_ALL_OBJECTS_IN_METADATA
DELETE_ALL_OBJECTS_IN_PROJECTS
DELETE_ALL_DELETED_OBJECTS
DELETE_ALL_DELETED_PROJECTS
DELETE_ALL_DELETED_OBJECTS_IN_METADATA
DELETE_ALL_DELETED_PROJECTS_IN_METADATA
DELETE_ALL_DELETED_OBJECTS_IN_PROJECTS
DELETE_ALL_FACTS
DELETE_ALL_FACTS_FROM_METADATA
DELETE_ALL_FACTS_FROM_PROJECTS
DELETE_ALL_FACTS_FROM_DELETED_OBJECTS
DELETE_ALL_FACTS_FROM_DELETED_PROJECTS
DELETE_ALL_FACTS_FROM_DELETED_OBJECTS_IN_METADATA
DELETE_ALL_FACTS_FROM_DELETED_PROJECTS_IN_METADATA
DELETE_All_FACTS_FROM_DELETED_OBJECTS_IN_PROJECTS
DELETE_ALL_OBJECTS_IN_METADATA
This command purges the given metadata and its related data, including the
corresponding rows in lu_metadata. The following tables are purged:
l license
l fact_sql_stats
l fact_step_sequence
l fact_usher_entity_resolved_privilege
l fact_usher_inbox_message
l fact_usher_inbox_response
l lu_client_session
l lu_session
DELETE_ALL_OBJECTS_IN_PROJECTS
This command purges all the given projects and their related data, including
those projects in lu_project. The following tables are purged:
DELETE_ALL_DELETED_OBJECTS
This command purges all deleted objects and their related data in the whole
Platform Analytics warehouse. The purged tables fall under the following
categories: Metadata, Projects, Configuration Objects, Other Objects,
Objects, Fact Tables, and Postgres Only Fact Table.
DELETE_ALL_DELETED_PROJECTS
This command purges all deleted projects and their related data in the whole
Platform Analytics warehouse. The following tables are purged:
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l historical_fact_sql_stats
DELETE_ALL_DELETED_OBJECTS_IN_METADATA
This command purges all deleted objects under the given metadata, along
with their related data. The purged tables fall under the following
categories: Metadata, Projects, Configuration Objects, Other Objects,
Objects, Fact Tables, and Postgres Only Fact Tables. The following tables
are purged:
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l fact_usher_entity_resolved_privilege
DELETE_ALL_DELETED_PROJECTS_IN_METADATA
This command purges all deleted projects under the given metadata, along
with their related data. The following tables are purged:
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l historical_fact_report_columns
l historical_fact_sql_stats
l historical_fact_step_sequence
DELETE_ALL_DELETED_OBJECTS_IN_PROJECTS
This command purges all deleted objects under the given projects, along
with their related data. The following tables are purged:
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l historical_fact_report_columns
l historical_fact_sql_stats
l historical_fact_step_sequence
DELETE_ALL_FACTS
This command purges all the fact tables in the whole Platform Analytics
warehouse. The following tables are purged:
l fact_sql_stats
l fact_step_sequence
l lu_session
l historical_fact_report_columns
l historical_fact_sql_stats
l historical_fact_step_sequence
l historical_lu_session
DELETE_ALL_FACTS_FROM_METADATA
This command purges all the fact tables in the given metadata list. The
following tables are purged:
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l lu_client_session
l lu_session
l historical_fact_prompt_answers
l historical_fact_report_columns
l historical_fact_sql_stats
l historical_fact_step_sequence
l historical_lu_session
DELETE_ALL_FACTS_FROM_PROJECTS
This command purges all the fact tables in the given project list. The
following tables are purged:
l fact_prompt_answers
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l historical_fact_prompt_answers
l historical_fact_report_columns
l historical_fact_sql_stats
l historical_fact_step_sequence
DELETE_ALL_FACTS_FROM_DELETED_OBJECTS
This command purges all the fact tables generated by deleted objects in the
whole Platform Analytics warehouse. The following tables are purged:
l fact_prompt_answers
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l historical_fact_prompt_answers
l historical_fact_report_columns
l historical_fact_sql_stats
l historical_fact_step_sequence
DELETE_ALL_FACTS_FROM_DELETED_PROJECTS
This command purges all the fact tables generated by deleted projects in the
whole Platform Analytics warehouse. The following tables are purged:
l fact_prompt_answers
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l historical_fact_prompt_answers
l historical_fact_report_columns
l historical_fact_sql_stats
l historical_fact_step_sequence
DELETE_ALL_FACTS_FROM_DELETED_OBJECTS_IN_METADATA
This command purges all the fact tables generated by deleted objects in the
given metadata. The following tables are purged:
l fact_prompt_answers
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l historical_fact_prompt_answers
l historical_fact_report_columns
l historical_fact_sql_stats
l historical_fact_step_sequence
DELETE_ALL_FACTS_FROM_DELETED_PROJECTS_IN_METADATA
This command purges all the fact tables generated by deleted projects in the
given metadata. The following tables are purged:
l fact_prompt_answers
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l historical_fact_prompt_answers
l historical_fact_report_columns
l historical_fact_sql_stats
l historical_fact_step_sequence
DELETE_All_FACTS_FROM_DELETED_OBJECTS_IN_PROJECTS
This command purges all the fact tables generated by deleted objects in the
given project lists. The following tables are purged:
l fact_prompt_answers
l fact_report_columns
l fact_sql_stats
l fact_step_sequence
l historical_fact_prompt_answers
l historical_fact_report_columns
l historical_fact_sql_stats
l historical_fact_step_sequence
l daysToKeep: If this value is 0, all fact table data is purged. Otherwise,
for a value a, the most recent fabs(a) days of data are kept.
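The sign handling can be illustrated with a small sketch (illustrative only; the purge script applies the absolute value internally):

```shell
# daysToKeep semantics: 0 purges all fact data; otherwise the most recent
# |daysToKeep| days are kept, even if a negative number is supplied.
days_to_keep=-60
abs_days=${days_to_keep#-}   # strip a leading minus sign, if any
echo "Keeping the last $abs_days days of fact data"
```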
#doTestBeforePurge: true
#commandsToExecute:
# - commandName: DELETE_ALL_DELETED_OBJECTS
# - commandName: DELETE_ALL_DELETED_PROJECTS
# - commandName: DELETE_ALL_OBJECTS_IN_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2
# - commandName: DELETE_ALL_DELETED_OBJECTS_IN_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2
# - commandName: DELETE_ALL_DELETED_PROJECTS_IN_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2
# - commandName: DELETE_ALL_OBJECTS_IN_PROJECTS
# metadataList:
# - metadataId
# projectList:
# - projectGuid_1
# - projectGuid_2
# - commandName: DELETE_ALL_DELETED_OBJECTS_IN_PROJECTS
# metadataList:
# - metadataId
# projectList:
# - projectGuid_1
# - projectGuid_2
# - commandName: DELETE_ALL_FACTS_FROM_DELETED_OBJECTS
# daysToKeep: 60
# - commandName: DELETE_ALL_FACTS_FROM_DELETED_PROJECTS
# daysToKeep: 60
# - commandName: DELETE_ALL_FACTS_FROM_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2
# daysToKeep: 60
# - commandName: DELETE_ALL_FACTS_FROM_DELETED_OBJECTS_IN_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2
# daysToKeep: 60
# - commandName: DELETE_ALL_FACTS_FROM_DELETED_PROJECTS_IN_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2
# daysToKeep: 60
# - commandName: DELETE_ALL_FACTS_FROM_PROJECTS
# metadataList:
# - metadataId
# projectList:
# - projectGuid_1
# - projectGuid_2
# daysToKeep: 60
# - commandName: DELETE_All_FACTS_FROM_DELETED_OBJECTS_IN_PROJECTS
# metadataList:
# - metadataId
# projectList:
# - projectGuid_1
# - projectGuid_2
# daysToKeep: 60
Windows: platform-analytics-purge-warehouse.ps1
Linux: ./platform-analytics-purge-warehouse.sh
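Putting it together, a minimal configuration that keeps only the last 60 days of fact data from deleted objects would uncomment a single command from the sample above (the key names are taken from that commented sample; verify them against your own configuration file):

```yaml
doTestBeforePurge: true
commandsToExecute:
  - commandName: DELETE_ALL_FACTS_FROM_DELETED_OBJECTS
    daysToKeep: 60
```

You would then run platform-analytics-purge-warehouse.ps1 (Windows) or ./platform-analytics-purge-warehouse.sh (Linux) as shown above.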
You can access Platform Analytics data in three different ways depending on
your needs:
l Cube and Cache Monitoring: Ensure that cubes and caches are being
fully leveraged to improve the performance of key analytics content.
l Error Analysis: Detect errors and anomalies in the system and improve
the experience of MicroStrategy users by fixing those issues.
Platform Analytics Metric Name | Enterprise Manager Metric Name | Description

Prompt

RP Number of Jobs Not Containing Prompt Answer Value | Count of Jobs Not Containing a Prompt Answer Value | Counts the number of jobs that do not contain a specific prompt answer.

Jobs

RP Number of Data Requests | Standalone Report Executions | Provides the number of report executions requested by users.
RP Number of Jobs For Concurrency Reporting | Jobs | Counts the number of job executions.
RP Number of Jobs hitting Database | Database Jobs | Counts the number of jobs hitting the database.
RP Number of Jobs Today | Jobs Today | Counts the number of job executions today.
RP Number of Jobs w/o Cache Hit | Non Cache Hit Jobs | Counts the number of job executions that did not hit a server cache.
RP Number of Jobs w/o Element Loading | Non Element Load Jobs | Counts the number of job executions that did not result from an element loading.
RP Number of Jobs with Cache Hit | Cache Hit Jobs | Counts the number of job executions that did hit a server cache.
RP Number of Jobs with DB Error | Failed DB Jobs | Counts the number of job executions that caused a database error.
RP Number of Jobs with Element Loading | Element Loading Jobs | Counts the number of job executions that did result from an element loading.
RP Number of Jobs with Security Filter | Security Filter Jobs | Counts the number of job executions that did have a security filter applied.
RP Number of Jobs with SQL Execution | SQL Execution Jobs | Counts the number of job executions that execute SQL.
RP number of Narrowcast Server jobs | Subscription Jobs | Counts the number of job executions run through MicroStrategy Narrowcast Server.
RP Number of Prompts | Distinct Prompts Executed | Counts the number of prompts in a report job.
RP Number of Report Jobs from Document Execution | Child Jobs | Counts the number of job executions that resulted from a document execution.
RP Number of Result Rows | Report Row Count | Counts the number of result rows returned from a report execution.
RP Number of Result Rows for View Report | Cube Row Count | Counts the number of rows in an OLAP View Report job.
RP Percentage of Ad-Hoc Jobs | % Ad-Hoc Jobs | Percentage of ad-hoc jobs vs. total jobs.
RP Percentage of Jobs with Cache Hit | % Cache Hit Jobs | Percentage of jobs that hit a server cache vs. total jobs.
RP Percentage of Jobs with Cube Hit | % Cube Cache Hit Jobs | Percentage of jobs that hit a cube cache vs. total jobs.
RP Percentage of Jobs with DB Error | % Failed DB Jobs | Percentage of jobs with database error vs. total jobs.
RP Percentage of Jobs with Error | % Failed Jobs | Percentage of jobs with any error vs. total jobs.
RP Percentage of Narrowcast Server jobs | % Subscription Jobs | Percentage of jobs from Narrowcast Server vs. total jobs.
RP Percentage of Prompted Jobs | % Prompted Jobs | Percentage of prompted jobs vs. total jobs.
RP Percentage of Scheduled Jobs | % Subscription Jobs | Percentage of scheduled jobs vs. total jobs.
RP Jobs with No Data Returned | Jobs with No Data Returned | Counts the number of jobs that returned no data.
RP Export Engine Jobs | Export Engine Jobs | Counts the number of report jobs passing through the export engine.
Number of Sessions per User | Sessions per User | Provides the number of sessions created per connected user.
DP Average Number of Jobs per Session | Jobs per Session | Provides the average number of document jobs per user session.
RP Average Number of Jobs per Session | Jobs per Session | Provides the average number of job executions per session.
RP Average Number of Jobs per User | Jobs per User | Average number of job executions per user.
P Number of Data Request Jobs with Error | Data Request Jobs with Error | Counts the number of jobs requested by a user that encountered an error.
RP Number of Jobs w/o Cache Creation | Jobs w/o Cache Creation | Counts the number of jobs that don't create a cache.
RP Number of Jobs with Cache Creation | Jobs with Cache Creation | Counts the number of jobs that create a cache.
RP number of Narrowcast Server jobs | Subscription Jobs | Counts the number of subscription jobs.
RP Number of Users who ran report | Users Who Execute Jobs | Counts the number of distinct users that executed jobs.
RP Export Engine Jobs | Export Engine Jobs | Counts the number of jobs passing through the export engine.
RP Average Daily Use Duration per job (hh:mm:ss) | Avg Job Execution Duration (s) | Provides the max Execution Duration (in seconds) of jobs. A job's Execution Duration records the total time spent during a job.
DP Number of Users who ran Documents | Users Who Ran Documents | Counts the number of distinct users executing a document.
Number of Users running reports | Users Who Ran Reports | Counts the number of distinct users executing a report.
Number of Users running documents | Users Who Ran Documents | Counts the number of distinct users executing a document.
managers and their direct reports. These attributes and tables are optional
and available to enrich analysis, but they are not a required feature of
Platform Analytics.
Before importing your .CSV file, ensure you've done the following:
l Located the IP address and Port of the database with your Platform Analytics
Repository.
l group_owner_id: The employee number for the group owner. Each group
can only have one Group Owner.
You can add optional columns to import into your Platform Analytics project:
l employee_email
l department_desc
l department_owner_id
l division_desc
l division_owner_id
l group_desc
l group_owner_id
l business_unit_desc
l business_unit_owner_id
l employee_first_name
l employee_last_name
5. Under Select Destination, choose Use existing table and select the
stg_employee table from the drop-down.
6. Click Next.
7. At the Configure Import Settings dialog, confirm that your .csv file was
uploaded correctly.
8. Click Next.
9. Click Save.
Once you have addressed the root cause of the mismatched email
addresses, the Platform Analytics Daily ETL will automatically resolve
any missing users.
To limit the data that is processed into the Platform Analytics Repository,
insert a row(s) into the table with the network_id and network_desc. If a
MicroStrategy Badge network is inserted in this table, Platform Analytics will
only process this specific network and exclude all others.
Before limiting the Badge networks that are processed, ensure you've done the following:
l Located the IP address and Port of the database with your Platform Analytics
Repository.
4. Review the existing list of Badge Networks currently processed into the
Platform Analytics Repository. Note the networks you want to limit
processing the data for:
./platform-analytics-consumer.sh stop
./platform-analytics-usher-lookup-producer.sh stop
5. In the Platform Analytics directory, open the bin folder and run the
following commands:
./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start
For example, if you create a category of resources that includes all of the
physical locations that are integrated with the Badge network, then you can
specify different subcategories of physical locations, such as elevators,
conference rooms, and private offices. You can then generate reports to
gather in-depth information about the resources.
Before configuring your Places hierarchy, ensure you've done the following:
l Integrated a physical access system with Badge. For steps, see Physical
Gateways.
l Obtained administrative privileges for your Badge network and can access
Identity Manager.
6. Click Save.
7. Click Facility to add locations under the Campus you created. Provide
the location of the organization that you want to include:
2. In the Facility Address column, enter the address for the location.
8. Click Save.
2. Select the Facility from the drop-down list. Repeat this for each
floor that you want to map.
11. Click Space. All the available places that have been previously
integrated with Badge are shown.
1. Using the Floor drop-down menu, map each space to the correct
floor.
After you've configured your Places hierarchy, you can edit any level by
clicking the check box next to a place, and click Edit. To delete places within
your hierarchy, select the place or places, and click Delete. When finished,
click Save to make your changes.
Monitor Metadata Repositories Across Multiple Environments
Platform Analytics can monitor multiple MicroStrategy environments and
consolidate the telemetry from these environments into a single repository
and project.
The initial load of metadata objects can take several hours, depending on
the size of the metadata. If the process is interrupted while the server is
sending the initial load of data, you must Re-trigger the Initial Load of
Object Telemetry.
l A project source to the new Intelligence Server environment which you wish
to monitor.
l The username and password for a MicroStrategy user which has the
following privileges:
This process assumes Platform Analytics has been fully installed and
configured to process Intelligence Server telemetry.
You can refresh the repository_id using the Command Manager steps
below:
2. Using Command Manager connect to the metadata for which you wish
to update the repository_id.
Metadata → Event/Schedule/...
*In MicroStrategy 2019, there are more fields in the standardized connection
string and the process of standardizing the connection string is more
restrictive. This results in relatively fewer cases being processed and
standardized. If a metadata_connection_string fails to meet the
requirements of the standardization process, MicroStrategy falls back to
using the unprocessed metadata_connection_string. Starting in
MicroStrategy 2020, this process was made less restrictive and only
considers the key fields mentioned above.
The fields mentioned above are part of all telemetry, including statistics,
compliance, change journal, initial load, and advanced statistics.
The fields above are processed based on the values provided in the
metadata DSN. This is located in your odbc.ini file for Linux and the
ODBC Data Source Administrator for Windows.
l Parallel Upgrades
Parallel Upgrades
If you are using MicroStrategy 11.1.x and plan to upgrade to 11.2.x
(MicroStrategy 2020), you would most likely create a parallel 11.2.x
environment. This is the most common scenario and upgrade path. You
should create/certify a parallel environment with a newer version of
MicroStrategy before turning off the older version. To create a new
environment, use a backup of metadata databases and Platform Analytics
Warehouse. In this scenario, new metadata gets generated in Platform
Analytics Warehouse and previously existing metadata is no longer used
because one or more of the following fields change:
l metadata_guid This field remains the same since you used a metadata
backup
l host This field may or may not change. If you restored the metadata
database backup on the same database server where the original
metadata database existed, this field remains the same.
l port This field may or may not change. If you restored the metadata
database backup on the same database server where the original
metadata database existed, this field remains the same.
l database This field may or may not change. If you host the database on a
different server, you can choose the same metadata database name or
choose a different one. If you are hosting the metadata database backup
on the same server as the original metadata, then for most databases, the
database name needs to be different.
For the parallel upgrade path, MicroStrategy generates new metadata and
cannot reuse existing metadata in the majority of cases. This means you
need to re-trigger the initial load again. If you do not want to go this route
and prefer to continue to track the data in the new environment under the
same metadata, use the solution below.
Solution: Before starting the consumer in the new environment, you can
update the lu_metadata table in Platform Analytics Warehouse to modify the
connection string corresponding to metadata of the original environment.
For example, if your current metadata_connection_string is:
and you restore a copy of the metadata database on the same database
server to a new database with the name poc_metadata_11_2, then you
can modify the connection string to:
With this change, Platform Analytics Consumer does not create new
metadata and therefore does not need to trigger an initial load.
General Recommendations
1. Use the exact same DSN for the metadata database in odbc.ini for all
the Intelligence Servers in a cluster.
2. If you are not setting a particular driver property in the DSN, remove its
corresponding key from the DSN instead of leaving it blank.