
Platform Analytics

Version 2021
MicroStrategy 2021

March 2023
Copyright © 2023 by MicroStrategy Incorporated. All rights reserved.
Trademark Information
The following are either trademarks or registered trademarks of MicroStrategy Incorporated or its affiliates in the United States and certain other
countries:

Dossier, Enterprise Semantic Graph, Expert.Now, HyperIntelligence, HyperMobile, HyperScreen, HyperVision, HyperVoice, HyperWeb, Information Like Water, Intelligent Enterprise, MicroStrategy, MicroStrategy 2019, MicroStrategy 2020, MicroStrategy 2021, MicroStrategy Analyst Pass, MicroStrategy Architect, MicroStrategy Architect Pass, MicroStrategy Badge, MicroStrategy Cloud, MicroStrategy Cloud Intelligence, MicroStrategy Command Manager, MicroStrategy Communicator, MicroStrategy Consulting, MicroStrategy Desktop, MicroStrategy Developer, MicroStrategy Distribution Services, MicroStrategy Education, MicroStrategy Embedded Intelligence, MicroStrategy Enterprise Manager, MicroStrategy Federated Analytics, MicroStrategy Geospatial Services, MicroStrategy Identity, MicroStrategy Identity Manager, MicroStrategy Identity Server, MicroStrategy Integrity Manager, MicroStrategy Intelligence Server, MicroStrategy Library, MicroStrategy Mobile, MicroStrategy Narrowcast Server, MicroStrategy Object Manager, MicroStrategy Office, MicroStrategy OLAP Services, MicroStrategy Parallel Relational In-Memory Engine (MicroStrategy PRIME), MicroStrategy R Integration, MicroStrategy Report Services, MicroStrategy SDK, MicroStrategy System Manager, MicroStrategy Transaction Services, MicroStrategy Usher, MicroStrategy Web, MicroStrategy Workstation, MicroStrategy World, Usher, and Zero-Click Intelligence.

Other product and company names mentioned herein may be the trademarks of their respective owners.
Specifications subject to change without notice. MicroStrategy is not responsible for errors or omissions. MicroStrategy makes no warranties or
commitments concerning the availability of future products or versions that may be planned or under development.
CONTENTS
Overview of Platform Analytics

Platform Analytics Architecture and Services
    Components and Architecture
    Services
    Architecture Best Practices and Details
    Platform Analytics Architecture Examples
    Client Telemetry
    Platform Analytics vs Enterprise Manager

Platform Analytics Data Model
    Object Hierarchy
    Security Filter
    Prompt Hierarchy
    Cache Hierarchy
    Configuration Objects Hierarchy
    Action Hierarchy
    Status (errors)
    Job and Session
    Distribution Services Hierarchy
    User Hierarchy
    User Group Hierarchy
    HR Organization Hierarchy
    Time Hierarchy
    Badge Local Time
    Servers Hierarchy
    Client Telemetry
    Device Configuration Hierarchy
    Compliance Telemetry Hierarchy
    Badge Resource Hierarchy
    Badge Location Hierarchy
    Communicator Inbox Messages
    Platform Analytics Configuration Tables

Installing Platform Analytics
    Platform Analytics Prerequisites

Configuring Platform Analytics
    Configure Single Node Telemetry
    Configure Telemetry with a Remote Platform Analytics Warehouse
    Configure High Throughput or Advanced Architecture
    Load Object Telemetry to the Platform Analytics Data Repository
    Client Telemetry Configuration
    Upgrade the Platform Analytics Project
    Upgrade Platform Analytics Repository
    Advanced Configuration

Post Configuration Maintenance
    Migrate Data from a MySQL Database to a PostgreSQL Database
    Add an Additional Kafka Node to an Existing Kafka Cluster Post Installation
    Update the Database User Password Configured to the Platform Analytics Repository
    Enable Password Authentication on the MicroStrategy Telemetry Cache
    Modify the Amount of Data Returned In-Memory for the Platform Analytics Cube
    MySQL Maintenance
    Start-Up Health Checks
    Platform Analytics Health Check Utility
    Advanced Job Telemetry
    Modify the Default Cube Refresh Rate
    Delete User Data
    Purge Platform Analytics Warehouse

Accessing the Data Captured by Platform Analytics
    Add Supplementary Data to Platform Analytics
    Monitor Metadata Repositories Across Multiple Environments
    How to Add Additional Environments to a Platform Analytics Deployment
    How to Update a Repository ID Using Command Manager
    How Platform Analytics Identifies Metadata
Overview of Platform Analytics
Platform Analytics is the new monitoring tool in MicroStrategy 2019 that captures real-time data from the MicroStrategy platform and uses this data to enable smarter administration and to provide a better experience for your MicroStrategy users. Platform Analytics allows you to optimize the performance of your MicroStrategy system, engage your users with your analytics content, and secure your data by tracking who is accessing it.

Platform Analytics collects data from 10 areas of the MicroStrategy platform. Below is a list of these areas with a brief explanation of what you can do with that data.

• Environment: To understand the hardware specifications and services available on MicroStrategy environments.

• System: To improve the performance and optimize the use of system resources of MicroStrategy environments.

• Projects: To improve the stability and performance of the projects available in MicroStrategy environments.

• Users: To understand what users do when they are in the system, and to improve their experience and engagement with MicroStrategy.

• Content: To improve the performance of the analytics content (dossiers, reports, documents) in MicroStrategy environments, and to ensure that the content meets the needs of the end users.

• Cubes: To ensure that cubes are being fully leveraged to improve the performance of the key content in MicroStrategy environments.

• Subscriptions: To ensure that subscriptions are executing properly, and to prevent unused subscriptions from impacting the performance of MicroStrategy environments.


• Quality: To improve the experience of end users by reducing the number of issues that they encounter in MicroStrategy environments.

• Licensing: To determine how many users use MicroStrategy, which products they are using, and how this compares to the license entitlements.

• MicroStrategy Identity Server: To understand how your users utilize MicroStrategy Badge to access logical and physical resources in your organization.

See the following topics for more information about Platform Analytics:

• Platform Analytics Architecture and Services

• Platform Analytics vs Enterprise Manager

• Accessing the Data Captured by Platform Analytics


Platform Analytics Architecture and Services
Components and Architecture

The following is a list of components that are part of the overall Platform
Analytics dependencies and architecture:

• Intelligence Telemetry: This component acts as a telemetry producer, sending all the data generated by the Intelligence Server to the Telemetry Server.


• Identity Telemetry: This component acts as a telemetry producer, sending all the data generated by the Identity Server to the Telemetry Server.

• Telemetry Server: This component serves as a message broker that receives and temporarily houses all the data sent by the producers.

• Platform Analytics Store: This component reads the data that the Intelligence Telemetry and Identity Telemetry producers send to the Telemetry Server layer, transforms this data, and loads it into the Platform Analytics Repository.

• Telemetry Cache: This component is used to improve the processing performance of the Platform Analytics Store.

• Platform Analytics Repository: This data warehouse stores all the MicroStrategy telemetry processed by the Telemetry Store. This data is then used by the dossiers included with the Platform Analytics project.

• Platform Analytics Project: This MicroStrategy project contains the out-of-the-box schema and application objects for Platform Analytics, including the standard Platform Analytics dossiers, attributes, metrics, and cubes.

• Platform Analytics Cube: This data import cube contains 14 days' worth of Platform Analytics data and is used to feed data to all the standard Platform Analytics dossiers.


Services
The following third-party services are automatically installed along with
Platform Analytics. These services allow Platform Analytics to capture and
analyze data from the MicroStrategy platform.

Apache Kafka
Kafka is a distributed streaming platform that processes and stores real-time data streams. It provides horizontal scalability, low latency, and high throughput. Kafka producers allow an application, such as the Intelligence Server, to publish records to one or more topics, while Kafka consumers allow applications to subscribe to and consume the data available in those topics.

Apache ZooKeeper
ZooKeeper is a centralized coordination service. It facilitates the implementation of distributed applications by providing the low-level synchronization and configuration functionality that such applications frequently need.


Redis
Redis is an in-memory data structure store. It provides caching mechanisms that are ideal for optimizing the performance of data-intensive distributed services like Platform Analytics.
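To illustrate why an in-memory cache helps a consumer like the Platform Analytics Store, the sketch below shows the common cache-aside pattern with the open-source redis-py client. The key layout and the fallback lookup are hypothetical, not the actual keys the Telemetry Store uses.

    # Sketch only: the key format and fallback lookup are assumptions.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def warehouse_lookup(object_id: str) -> str:
        # Stand-in for an expensive query against the repository.
        return f"Object {object_id}"

    def lookup_object_name(object_id: str) -> str:
        key = f"object:{object_id}:name"
        cached = r.get(key)              # fast in-memory hit if present
        if cached is not None:
            return cached.decode()
        name = warehouse_lookup(object_id)
        r.set(key, name, ex=3600)        # cache the result for one hour
        return name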

Architecture Best Practices and Details


General Architecture Details
• Only one Platform Analytics Consumer (Telemetry Store) can be installed per machine and per environment (when there is one Platform Analytics Repository); hence, multiple instances or versions of the Telemetry Store cannot run on the same hosting machine.

• Multiple Telemetry Stores can run on different machines when each of those Telemetry Stores points to a different Platform Analytics Repository.

• The Telemetry Store consumer and the Intelligence Telemetry producer have no direct dependency on each other. The Telemetry Store communicates with the Telemetry Server/Telemetry Manager, Telemetry Cache, and the Platform Analytics Repository. The Intelligence Telemetry producer only sends data to the Telemetry Manager/Telemetry Server cluster.

• By default, when you install Platform Analytics on the Intelligence Server, the MicroStrategy installer also installs a single node of the Telemetry Server and Telemetry Manager.

• The Telemetry Store prevents two consumer instances from running on the same machine in order to help reduce the risk of data loss and data integrity issues. However, you should also ensure that a separate Telemetry Store is not installed and running on a separate machine and configured against an already occupied Platform Analytics Repository.


• It is supported to have two Telemetry Store consumers (on different machines) writing to two separate PA Repositories as two independent Platform Analytics configurations.

• One Telemetry Store consumer is capable of processing the telemetry logs from two or more independent Intelligence Server clusters into one Platform Analytics Repository. This configuration provides a consolidated view of usage statistics across multiple MicroStrategy environments. For configuration steps, see Monitor Metadata Repositories Across Multiple Environments.

• The Platform Analytics MicroStrategy project can be imported into any existing or newly created MicroStrategy metadata.

• For an estimation of the resource requirements for stable and performant operation of the Platform Analytics components under a consistent transactional load, see KB482872: Capacity Planning for Platform Analytics.

Architecture Best Practices


• Platform Analytics supports installing all components on a single machine. However, this is not recommended for production environments, where performance can be impacted by limited machine resources being shared by all services.

• Have three clustered nodes as the minimum for the Telemetry Server/Telemetry Manager cluster for production environments or any environment where data redundancy and failure tolerance are critical. This ensures that if a single Telemetry Server/Telemetry Manager node fails, the logs produced by the Intelligence Telemetry producer are persisted on the remaining two nodes.

• Have an odd number of clustered nodes for the Telemetry Server/Telemetry Manager. This ensures a "majority rule" to prevent data loss in case of discrepancies and failures (see the quorum sketch after this list). For more information, see Apache Kafka Data Replication and ZooKeeper configurations for the Telemetry Server cluster.

• There must be only one Telemetry Store consumer writing data to one Platform Analytics Repository. If there are two Telemetry Store consumers writing data to the same PA Repository, there will be data loss and data integrity issues.
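The odd-node recommendation follows from simple majority-quorum arithmetic, sketched below. This is generic ZooKeeper-style reasoning, not a MicroStrategy-specific API.

    # A cluster of n nodes needs a strict majority (quorum) to keep
    # accepting writes, so it tolerates n - quorum node failures.
    for n in (1, 2, 3, 4, 5):
        quorum = n // 2 + 1
        print(f"{n} node(s): quorum {quorum}, tolerates {n - quorum} failure(s)")
    # 3 nodes tolerate 1 failure; 4 nodes still tolerate only 1,
    # which is why an odd node count is recommended.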

For examples of supported architectures, see Platform Analytics Architecture Examples.

Platform Analytics Architecture Examples


The following examples are not intended to be a comprehensive list of all
supported architectures. Instead, they illustrate the best practices and
general recommendations noted in Architecture Best Practices and Details.

• Single Node for all Platform Analytics Components

• Remote Platform Analytics Warehouse

• High Throughput/Advanced Architecture

Single Node for all Platform Analytics Components


In this configuration, all Platform Analytics components are installed on the same machine. All Library and Intelligence Server environments (single nodes or clusters) must be configured to produce data on the same Telemetry Server node. The Platform Analytics warehouse is represented by an out-of-the-box MicroStrategy Repository.


This is one of the simplest representations of Platform Analytics, where all components are installed on a single machine (Machine 4 in the above example).

Remote Platform Analytics Warehouse


In this architecture, all Platform Analytics components are on the same machine except for the Platform Analytics warehouse. You can opt for an out-of-the-box MicroStrategy Repository, or provision your own instance of PostgreSQL.


How to evaluate if this architecture is right for you:

• If using Amazon Relational Database Service (RDS), it's easy to set up replicas for read access if there are use cases for heavy read queries against Platform Analytics data.

• RDS provides an easy option to increase system resources in the future as your database grows over time.


• If you are using RDS or self-managed PostgreSQL, it's easier to manage system resources and perform capacity planning.

High Throughput/Advanced Architecture


High throughput architecture must be carefully considered, as its
advantages come with significant configuration and maintenance
requirements.

Generally, this approach is considered when:

1. You have numerous Intelligence Server nodes with substantial telemetry (high object, user, and job counts).

2. High availability properties of the architecture are required.

For a high throughput architecture, we recommend using a cluster of Telemetry Server nodes. One Telemetry Store can consume data from a single Telemetry Server node or a Telemetry Server cluster. This implies that all the Telemetry Server nodes should be in a single cluster.

Currently, the Telemetry Server is represented by the ZooKeeper and Kafka components. You can install the Telemetry Server on a subset of nodes. The only requirement is to maintain an odd number of Telemetry Server nodes (3, 5, and so on). See the ZooKeeper documentation for more information. Multiple Telemetry Server clusters are not supported.


Telemetry Server Nodes on the Same Machines as Intelligence Server Nodes


Telemetry Server Nodes Not on the Same Machines as Intelligence Server Nodes


Client Telemetry
Client telemetry was introduced as a preview feature in MicroStrategy 2021 Update 8 (December 2022). Starting in MicroStrategy 2021 Update 9, this feature is available out-of-the-box.

Client telemetry offers a wealth of insightful data that can be instrumental in enhancing the user experience, as well as boosting user engagement with the content and platform.

The feature is useful for:

• Developers

• BI Architects

• Administrators

Developers and architects can discover end user interaction patterns with dossiers and the longevity of viewing sessions, and take actions to increase engagement. This can be done by understanding what pages users have been visiting, learning how much time was spent in a dossier, investigating whether certain analytics tools (such as drilling, sorting, or showing totals) are used by default, and ultimately learning the sequential order of user clicks. If you have multiple custom applications deployed, you can gauge the success of each deployment to deliver content to end users.

Administrators can detect potential system bottlenecks that were not visible
before, such as sluggish client receive times or device/browser rendering
times. In addition, Platform Analytics captures the entire history of devices
used to connect to the platform. Insightful data about device types or client
versions can help administrators ensure that the user base remains on the
most recent and secure version of the client.

Finally, developers can optimize dossier content organization by understanding device types (screen size) and improve the user experience even further.


Telemetry is captured from the following clients:

• MicroStrategy Library Web

• MicroStrategy Library Mobile (iOS)

• MicroStrategy App

Besides dossier- or page-level manipulations (such as dossier reset or switch page), certain manipulations (sorting, drilling, and so on) are captured only from grid visualizations.

Related Topics

Client Telemetry Configuration

Platform Analytics vs Enterprise Manager


Platform Analytics and Enterprise Manager are both monitoring tools that
capture data about MicroStrategy. However, Platform Analytics is the new
default monitoring tool for the MicroStrategy platform. Platform Analytics is
built on the latest technology and offers many advantages over its
predecessor. Some of these benefits include:

• Capturing data in real time.

• Supporting the monitoring of multiple MicroStrategy environments.

• Providing a fully scalable ETL layer that is always on and does not require scheduling.

• Collecting data from more areas of the MicroStrategy platform.

• Generating a smaller footprint on the Intelligence Server when capturing telemetry.

• Using a simplified data model that utilizes less space in the data warehouse.

• Including a set of powerful and highly curated dossiers.


• Surfacing data through other MicroStrategy products like Workstation.

• Enabling users who are not administrators to see the data they care about.

• Using user-friendly naming conventions.

For a smooth transition from Enterprise Manager to Platform Analytics, the following topics list the similarities and differences between the tools:

Enterprise Manager to Platform Analytics Attribute Mapping


As the new default monitoring tool for the MicroStrategy platform, Platform Analytics has a slightly different attribute structure from Enterprise Manager. See the table below to understand what Enterprise Manager terms mean in Platform Analytics.

The table columns are: Enterprise Manager (EM) Attribute Name, EM Attribute Forms, EM Attribute Elements, Platform Analytics (PA) Attribute Name, PA Attribute Forms, PA Attribute Elements, and Notes.

EM Attribute: Ad-Hoc Indicator
EM Forms: ID, DESC
EM Elements: NON-AD-HOC, AD-HOC
PA Attribute: Object Status
PA Forms: ID, DESC
PA Elements: Ad Hoc, Hidden, Deleted, Visible
Notes: In Platform Analytics, if an object is ad hoc, the Object Type is Adhoc Object. Additionally, the Object Status is Ad hoc.

EM Attribute: Address
EM Forms: ID, DESC (email)
PA Attribute: Recipient
PA Forms: ID, Address (email), Default (0/1), Name, GUID
Notes: The Default address is derived in Platform Analytics by adding a filter on account@email address = recipient@email address.

EM Attribute: Attribute
EM Forms: ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted, Number of Children, Number of Parents, Owner Name
PA Attribute: Object
PA Forms: ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version, Owner Name
Notes: Abbreviation and Access Granted are not available in Platform Analytics.


Platform Analytics can provide analysis of the count of child/parent objects for all object types; it is not limited to attributes as in Enterprise Manager.

Count of Children in Platform Analytics: There is an OOTB metric called Component Objects, which calculates the children of an object. It is derived based on a count against the fact_object_component table. A sample report is:
Attributes: Metadata, Project, Object, Object Type, Component Object Type
Metrics: Component Objects

Count of Parents in Platform Analytics: There is an OOTB metric called Objects, which calculates the parents of a component object. It is derived based on a count against the fact_object_component table. A sample report is:
Attributes: Metadata, Project, Component Object, Component Object Type, Object Type
Metrics: Objects



EM Attribute: Attribute Form
EM Forms: ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted, Datatype ID, Datatype Description, Owner Name
PA Attribute: Object
PA Forms: ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version, Owner Name
Notes: Abbreviation, Access Granted, Datatype ID, and Datatype Description are not available in Platform Analytics.

EM Attribute: Cancelled Indicator
EM Forms: ID, DESC
EM Elements: NOT CANCELLED, CANCELLED
PA Attribute: Status Category
PA Forms: ID, DESC
PA Elements: Successful, Job xxx cancelled
Notes: If the execution is cancelled, the Platform Analytics Status Category is cancelled.

EM Attribute: Child Job Indicator
EM Forms: ID, DESC
EM Elements: USER DATA REQUEST JOB, DATASET CHILD JOB
PA Attribute: Child Jobs (metric), Standalone Report Executions (metric)
Notes: In Platform Analytics, the end user can show only report jobs triggered by document executions using the two metrics noted. Additionally, you can use a filter on Parent Job if the ID is -1. If the Parent Job ID is -1, it means the user directly executed the report and did not execute it as a dataset of a document or dossier.



EM Attribute: Column
EM Forms: ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted, Datatype ID, Datatype Description, Owner Name
PA Attribute: Object/Column
PA Forms: ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version, Owner Name
Notes: Abbreviation and Access Granted are not available in Platform Analytics. In Platform Analytics, Datatype ID and Datatype Description are appended as suffixes to the column name.

EM Attribute: Configuration Object Owner
EM Forms: ID, GUID, Description, Location, Version, Abbreviation
PA Attribute: Derived attributes. The owner_id form of all configuration objects.


EM Attribute: Connection Source
EM Forms: ID, DESC, SDESC: IF((Position("MICROSTRATEGY", [EM_CONNECT_DESC]) = 0), [EM_CONNECT_DESC], SubStr([EM_CONNECT_DESC], 15, (Length([EM_CONNECT_DESC]) - 14)))
PA Attribute: Session Source
PA Forms: ID, DESC
EM Attribute: Consolidation
EM Forms: ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted, Owner Name
PA Attribute: Object
PA Forms: ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version, Owner Name
Notes: Abbreviation and Access Granted are not available in Platform Analytics.
EM Attribute: Contact
EM Forms: ID (GUID), DESC
PA Attribute: Recipient
PA Forms: ID, GUID, Name
EM Attribute: Cube Hit Indicator
EM Forms: ID, DESC
EM Elements: CUBE EXECUTION, DB EXECUTION
PA Attribute: Action Type
PA Forms: ID, DESC
Notes: In Platform Analytics, the filter named Action Type = Cache Hit shows only executions that hit a cache. If the report, document, or dossier execution is done against the database, it is not a Cache Hit and falls under the Action Type = Execution.


EM Attribute: Custom Group
EM Forms: ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted, Owner Name
PA Attribute: Object
PA Forms: ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version, Owner Name
Notes: Abbreviation and Access Granted are not available in Platform Analytics.
EM Attribute: Day
EM Forms: ID (2016-12-01), DESC (12/01/2016 Thursday), Short Desc (12/01/2016 Thu), Date (2016-12-01)
PA Attribute: Date
PA Forms: ID (2017-01-01), Day of Week@desc (Wednesday), Day of Week@short desc (Wed)
Notes: Day of Week (element) is a standalone attribute in Platform Analytics.

EM Attribute: Datamart Report
EM Forms: ID, DESC
EM Elements: REGULAR REPORT, DATAMART REPORT
PA Attribute: Object Type
PA Forms: ID, DESC
PA Elements: Datamart Report
Notes: In Platform Analytics, the Object Type = Datamart Report. If the report is not a datamart report, it is another object type. For example, Grid Report, Graph Report, or Transaction Services Report.

DB Connection ID Database ID Access Granted,


Name Connection Name Abbreviation,


Location, and
GUID GUID Connection
Description Description Strings are not
available in
Creation Date Creation Platform Analytics.
timestamp
Modification Date
Modification
Location timestamp
Version Version
Abbreviation Database
Access Granted Name

Connection String Database


Type
Database Name (attribute)
Database Type Database
Version
Database Version (attribute)
Default Database Data Source
User Name

DB Instance ID Database ID Access Granted,


Instance Abbreviation, and
Name Name Location are not
GUID GUID available in
Platform
Description Description Analytics.
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Version
Abbreviation
Access Granted

Device ID Subscription ID Location,


Device Abbreviation, and
Name Name Access Granted
GUID GUID are not available
in Platform
Description Description Analytics.
Creation Date Creation
Timestamp
Modification Date


Location Modification
Timestamp
Version
Version
Abbreviation
Status
Access Granted

EM Attribute: Delivery Type
EM Forms: ID, DESC
EM Elements: EMAIL, FILE, PRINTER, CUSTOM, INBOX, CLIENT, CACHE, MOBILE, ALL
PA Attribute: Subscription Type
PA Forms: ID, DESC
PA Elements: Email, File, Print, History List, Cache Update, Mobile, Personal View, FTP
Notes: In Platform Analytics, elements are updated to be in line with what you see in the interface. For example, what Enterprise Manager previously called INBOX is now History List Message. The data granularity is the same, but the descriptions have been updated.
EM Attribute: Document
EM Forms: ID, Name, GUID, Description, Creation Date, Modification Date, Location, Version, Abbreviation, Access Granted, Owner Name
PA Attribute: Object
PA Forms: ID, Name, GUID, Desc, Creation timestamp, Modification timestamp, Location, Version, Owner Name
Notes: Abbreviation and Access Granted are not available in Platform Analytics.
Document Job CID (Compound) Multiple Job@ID (Job
Attributes/Metr attribute)
ID ics
Session ID Session@GUI
ERROR DESC D (Session


Timestamps attribute)
(Compound)
Status@desc
EXEC REQ TS (Status
attribute)
EXEC START
TS Initial Queue
Duration (ms)
EXEC FINISH (fact)
TS
Action Start
Timestamp
UTC (metric)
Action Finish
Timestamp
UTC (metric)
Document Job ID (Compound) Job Step ID
Step Sequence
Sequence StepID Job@ID (Job
attribute)
JobID
Session@GUI
SessionID D (Session
Timestamps attribute)
(compound) Job Step Start
STEP_ST_TS Timestamp
(UTC) (fact)
STEP_FN_TS
Job Step
Finish
Timestamp
(UTC) (fact)
Document Job ID Job Step ID In Platform
Step Type Type Analytics,
DESC DESC document and
UNKNOWN report step types
are combined into
HTML a single attribute
RENDERING called Job Step
Type.
Document ID Object Type ID In Platform
Type Analytics,
DESC DESC dashboard is
HTML HTML renamed to
DOCUMENT Document dossier.

Dashboard Dossier
Document Document


Report
Writing
Document
Error ID (Job error Status/Status Status Category
code) Category provides a high-
level grouping to
DESC (Exact analyze individual
error message) error messages
based on the
JobErrorCode.
Status is the
exact error
message that is
recorded in the
logs. Most errors
in the logs are
recorded at the
unique job and
session level.
Therefore, when
trying to
determine what is
the "most frequent
error," which
occurs in my
MicroStrategy
environment, an
aggregate count
of errors at the
status level
almost always
results in 1.
To improve the
reporting for this
use case, Platform
Analytics can
aggregate the
telemetry at
the level of Status
Category.
Error Indicator ID Status/Status ID If you are trying to
Category analyze successful
DESC DESC actions in Platform
NO ERROR Successful Analytics, use the
OOTB filter called
ERROR Action Status =
Success.


If you are trying to


analyze failed
actions in Platform
Analytics, use the
OOTB filter called
Failed
MicroStrategy
Actions.
Status Category
provides a high-
level grouping to
analyze individual
error messages
based on the
JobErrorCode.
Status is the exact
error message that
is recorded in the
logs. Most errors
in the logs are
recorded at the
unique job and
session level.
Therefore, when
trying to determine
what is the "most
frequent error,"
which occurs in my
MicroStrategy
environment, an
aggregate count of
errors at the status
level almost
always results in
1.
To improve the
reporting for this
use case, Platform
Analytics can
aggregate the
telemetry at
the level of Status
Category.
Event ID Event ID Access Granted,
Abbreviation, and
Name Name Location are not
GUID GUID available in


Description Description Platform


Analytics.
Creation Creation
Timestamp timestamp
Modification Modification
Timestamp timestamp
Location Version
Version
Abbreviation
Access Granted
Export ID Action Type ID Platform
Indicator Analytics tracks
DESC DESC whether the
NO EXPORT execution was
exported and what
EXPORTED type of export
action occurred
(Export to Excel,
Export to CSV,
Export to PDF,
etc). The MSTR
file download
actions are also
tracked, along with
the export
telemetry for
documents,
dossiers, and
report objects.
In Platform
Analytics, you can
use both the
OOTB filter and
metric called
Export Engine
Jobs to analyze
export telemetry.
Fact ID Object ID Access Granted,
Abbreviation,
Name Name Datatype ID, and
GUID GUID Datatype
Description are
Description Description not available in
Platform
Creation Date Creation Analytics.


Modification Date timestamp


Location Modification
timestamp
Version
Location
Abbreviation
Version
Access Granted
Owner Name
Datatype ID
Datatype
Description
Owner Name
Filter ID Object ID Access Granted
and Abbreviation
Name Name are not available
GUID GUID in Platform
Analytics.
Description Desc
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Location
Abbreviation Version
Access Granted Owner Name
Owner Name
Folder ID Object Location In Platform
Analytics, the
object@location
form of the Object
attribute contains
the entire folder
path.
Hour ID (22) Hour ID (22)
DESC (22-23) DESC (10PM)
24 Hour Desc (10
PM - 11 PM)

Inbox Action ID Multiple Timestamp@I


Attributes D (Timestamp
Start Time attribute)


Inbox Message History List


ID Message@GU
ID (History
Session ID List Message
DESC attribute)

Session@GUI
D (Session
attribute)
history list
message@sta
tus (History
List Message
attribute)
Inbox Action ID Action Type ID
Type
DESC DESC
ADD Create
History List
REMOVE Message
BATCH (add)
REMOVE Delete
RENAME History List
Message
CHANGE (remove
STATUS and batch
remove)
EXECUTE
Rename
REQUESTED History List
Message
(rename)
Change
History List
Message
Status
(change
status)
Execute
History List
Message
View
History List
Message
(execute)


Inbox ID History List ID


Message Message
DESC GUID
User-defined Title
name
Creation
Creation Date timestamp
Modification
timestamp
Status
Intelligence ID Intelligence ID Description,
Server Server Access Granted,
Definition Name Definition Name Abbreviation, and
GUID GUID Location are not
available in
Description Creation Platform Analytics.
timestamp
Creation Date
Modification
Modification Date timestamp
Location Version
Version
Abbreviation
Access Granted
Intelligence ID Intelligence ID In Platform
Server Server Analytics, there
Machine Port Instance Port are three
Name I-server attributes to
Machine represent the
Name Intelligence
Server
configuration:
Intelligence
Server Machine,
Intelligence
Server Instance,
and Intelligence
Server Definition.
A single
Intelligence
Server Machine
can host multiple
Intelligence
Server instances
running on


different ports.
For this reason,
the attribute name
is Intelligence
Server Instance.
The Intelligence
Server Machine is
represented as a
separate attribute
without the port.
N/A N/A Intelligence ID In Platform
Server Analytics, there
Machine Name are three
IP attributes to
represent the
Intelligence Server
configuration:
Intelligence Server
Machine,
Intelligence Server
Instance, and
Intelligence Server
Definition. A single
Intelligence Server
Machine can host
multiple
Intelligence Server
instances running
on different ports.
For this reason,
the attribute name
is Intelligence
Server Instance.
The Intelligence
Server Machine is
represented as a
separate attribute
without the port.

Intelligent ID Object ID Abbreviation and


Cube Access Granted
Name Name are not available
GUID GUID in Platform
Analytics.
Description Desc
Creation Date Creation
timestamp
Modification Date


Location Modification
timestamp
Version
Location
Abbreviation
Version
Access Granted
Owner Name
Owner Name

Intelligent ID Action Type ID The same action


Cube Action types exist in
DESC DESC Platform Analytics,
Type
CUBE Cube but the
PUBLISH Publish descriptions and
elements have
CUBE VIEW Execute been updated to
HIT View reflect the current
Report with supported cube
CUBE Cube actions in
DYNAMIC Cache Hit MicroStrategy.
SOURCE HIT
Delete For example,
CUBE Cube Platform Analytics
DESTROY Cache has removed
CUBE obsolete actions
Activate (cube destroy) and
ACTIVATE Cube added new actions
CUBE Cache (Republish DI
DEACTIVATE Deactivate Cube with Multi-
Cube refresh Policies).
CUBE LOAD
Cache
CUBE UNLOAD
Load Cube
CUBE Cache
DOCUMENT
HIT Unload
Cube
REPUBLISH Cache
CUBE DATA
VIA UPDATE Execute
Document
REPUBLISH with Cube
CUBE DATA Cache Hit
VIA APPEND
Republish
REPUBLISH cube data
CUBE DATA via update
DYNAMICALLY
Republish
REPUBLISH cube data
CUBE DATA via append
VIA UPSERT
Republish


REFRESH cube data


CUBE DATA dynamically
BY
APPENDING Republish
VIA FILTER cube data
via upsert
REFRESH
CUBE DATA Refresh
BY DELETING cube data
VIA FILTER by
appending
REFRESH via filter
CUBE DATA
BY UPDATING Refresh
VIA FILTER cube data
by deleting
REFRESH via filter
CUBE DATA
BY Refresh
UPSERTING cube data
VIA FILTER by updating
via filter
REFRESH
CUBE DATA Refresh
BY cube data
APPENDING by
VIA REPORT upserting
via filter
REFRESH
CUBE DATA Refresh
BY DELETING cube data
VIA REPORT by
appending
REFRESH via report
CUBE DATA
BY Refresh
UPSERTING cube data
VIA REPORT by deleting
via report
Refresh
cube data
by updating
via report
Refresh
cube data
by
upserting
via report
Republish
DI Cube
with Multi-


refresh
Policies

Intelligent ID (GUID) Cache ID


Cube Instance
Cache
Instance
GUID

Intelligent ID Object Type ID Platform Analytics


Cube Type has removed the
DESC DESC obsolete object
BASE REPORT Report types that are no
longer supported
WORKING SET OLAP Cube in MicroStrategy
REPORT (i.e. PRIVATE
Intelligence BASE REPORT)
PRIVATE Cube and added Data
BASE REPORT Report Import Cubes to
REPORT Incremental the Object Types.
SERVICES Refresh
BASE REPORT Report
CSQL PRE Data Import
EXEC REPORT Cube
OLAP CUBE
REPORT
OLAP VIEW
REPORT
INCREMENTAL
REFRESH
REPORT

Intelligence ID Intelligence ID
Server Cluster Server
DESC
Cluster
Job Error Code CID (Composite ID Status ID Status Category
Form because can Category provides a high-
be same JOB DESC level grouping to
ERROR CODES analyze individual
with different error messages
DESCRIPTIONS) based on the
JobErrorCode.
Job Error Code
Status is the exact
Error error message that
Description is recorded in the
logs. Most errors


in the logs are


recorded at the
unique job and
session level.
Therefore, when
trying to determine
what is the "most
frequent error,"
which occurs in my
MicroStrategy
environment, an
aggregate count of
errors at the status
level almost
always results in
1.
To improve the
reporting for this
use case, Platform
Analytics can
aggregate the
telemetry at
the level of Status
Category.

Metadata ID Metadata ID
DESC (GUID) GUID
Metadata
Connection
String

Metric ID Object ID Abbreviation and


Access Granted
Name Name are not available
GUID GUID in Platform
Analytics.
Description Desc
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Location
Abbreviation Version
Access Granted Owner Name
Owner Name


Minute ID (2) Minute ID (2)


DESC (12:02 AM - DESC (00:02)
12:01 AM)
24 Hour Desc
(00:02 - 00:03)
EM Attribute: Month
EM Forms: ID (201701), DESC (2017 Jan), Long Desc (2017 January)
PA Attribute: Month
PA Forms: ID (201701), DESC (January, 2017)

EM Attribute: Month of Year
EM Forms: ID (1), DESC (January), Short Desc (Jan)
PA Attribute: Month of Year
PA Forms: ID (1), DESC (January), Short DESC (Jan)
Object ID (2017-01-01) Object ID (2017-01-
Creation Date Creation Date 01)
DESC (01/01/2017
Sunday)
Object Exists ID Object Status ID Platform Analytics
Status considers
Desc DESC "hidden" as an
ALREADY Ad Hoc element of the
DELETED Object Status
FROM THE MD Hidden rather than a
REPOSITORY separate attribute.
Deleted
EXISTS IN Visible
THE MD
REPOSITORY
Object Hidden ID Object Status ID Platform Analytics
Status considers "hidden"
DESC DESC as an element of
VISIBLE Ad Hoc the Object Status
rather than a
HIDDEN Hidden separate attribute.
Deleted
Visible

Object ID (20050101) Object ID (2017-08-


Modification Modification 16)
DESC: 1/1/2005
Date Date


LDESC:
01/01/2005
Saturday

Owner ID Object Owner ID Access Granted,


Abbreviation, and
Name Name Location are not
GUID GUID available in
Platform Analytics.
Description Description
Location Version
Abbreviation Creation
timestamp
Modification
timestamp
Status
Login

Project ID Project ID Access Granted,


Abbreviation, and
Name Name Location are not
GUID GUID available in
Platform
Description Description Analytics.
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Version
Abbreviation Status
Access Granted

Prompt ID Prompt/Objec ID Access Granted


t and Abbreviation
Name Name are not available
GUID GUID in Platform
Analytics.
Description Desc
In Platform
Creation Date Creation Analytics, prompts
timestamp are tracked in two
Modification Date lookup
Modification tables/attributes in
Location timestamp order to support
Version Location the analysis of


Abbreviation Version prompt answers by


joining on fact_
Access Granted Owner Name prompt_answers.
Owner Name

Prompt CID Multiple Job@ID (Job SEQUENCE is not


Answer Attributes/Fa attribute) available in
REP_JOB_ID
cts Platform
Prompt
PR_ORDER_ Analytics.
Answer
ID Sequence
SEQUENCE (fact)

SESSION_ID
Session@GUI
Prompt Answers D (Session
attribute)
Count of Answers
Prompt
Answer@DES
C (Prompt
Answer
attribute)
Prompt
Actions (fact)
Prompt Type ID Prompt Type ID
STRING DESC
ELEMENTS Attribute
Element
DOUBLE Prompt
OBJECTS Object
Prompt
Prompt
Value
Prompt
Embedded
Prompt
EM Attribute: Quarter of Year
EM Forms: ID, DESC (1st Quarter), Short Desc (Q1)
PA Attribute: Derived Attribute. Definition: Quarter([Date@ID])
Notes: In Platform Analytics, the quarter of year can be created using a derived attribute in MicroStrategy based off the Date attribute.

Report ID Object ID Access Granted
and Abbreviation
Name Name are not available
GUID GUID in Platform
Analytics.
Description Desc
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Location
Abbreviation Version
Access Granted Owner Name
Owner Name
Report Job CID Multiple Job@ID (Job Assigned Cost is
Facts/Attribut attribute) not available in
Job ID es Platform
Session@GUI Analytics.
Session ID D (Session
Error DESC attribute) To derive the No_
of_prompts per
No_of_prompts Status@DESC report job, build a
(Status report with the
PRIORITIZATION attribute) following
Priority <derived> definition:
Number Attributes:
Job l

Assigned Cost Priority@DES Metadata,


C (Job Priority Project, Object
TIMESTAMPS attribute) l Metrics:
EXEC_REQ_ Initial queue Component
TS Duration (ms) Objects
EXEC_ (fact) l Filter: "Object
START_TS Action Start Category =
Timestamp Reports" and
EXEC_FN_TS "Component
UTC (metric)
Object Category
Action Finish = Prompt"
Timestamp
UTC (metric)
Report Job CID Multiple SQL Pass
SQL Pass Facts/Attribut Sequence
SQL Pass es (fact)


Sequence Job@ID (Job


attribute)
ID
Session@GUI
Report Job ID D (Session
Session ID attribute)

DESC SQL
Pass@DESC
DB ERROR (SQL Pass
MESSAGE attribute)
Timestamps Database
Error
EXEC_ST_TS Message@des
EXEC_FN_TS c (attribute)
SQL Pass
Start
Timestamp
(UTC) (fact)
SQL Pass
Finish
Timestamp
(UTC) (fact)

Report Job ID SQL Pass ID


SQL Pass Type
DESC DESC
Type
Report Job ID Multiple SQL Pass
Step Facts/Attribut Sequence
Sequence Step ID es (fact)
Job ID Job@ID (Job
Session ID attribute)

Timestamps Session@GUID
(Session
EXEC_ST_TS attribute)
EXEC_FN_TS Job Step Start
Timestamp
(UTC) (fact)
Job Step
Finish
Timestamp
(UTC) (fact)
Report Job ID Job Step ID
Step Type Type
DESC DESC


Report Type ID Object ID


Extended
DESC Type DESC
RELATIONAL Relational
MDX MDX
CUSTOM SQL Free Form
FREE FORM SQL
CUSTOM SQL Custom
WIZARD SQL
Wizard
FLAT FILE
Data Import
Data Import
Excel File
Data Import
Text File
Custom
SQL Data
Import
Table Data
Import
OAuth Data
Import
SalesForce
Data Import
Google
Drive Data
Import
Drop Box
Data Import
Google
Analytics
Data Import
Google Big
Query Data
Import
Google Big
Query
FFSQL
Data Import


Twitter
Data Import
Facebook
Data Import
Google Big
Query
Single
Table Data
Import
Custom
SQL
Wizard
Data Import
Single
Table Data
Import
Open
Refine Data
Import
Remote
Data
Source
Data Import
Spark
Server
Data Import

Report/Docum ID Object ID
ent Indicator Category
DESC DESC
REPORT Reports
DEFINITION
Documents
DOCUMENT
DEFINITION

Schedule ID Schedule ID Access Granted,


Abbreviation, and
Name Name
Location are not
GUID GUID available in
Platform Analytics.
Description Description
Creation Date Creation
timestamp
Modification Date
Modification


Location timestamp
Version Version
Abbreviation Status
Access Granted

Schedule ID Session ID In Platform


Indicator Source Analytics, if the
DESC DESC execution was a
NON- Scheduler result of a
SCHEDULED scheduled job, the
Session Source
SCHEDULED will be Scheduled.
To remove all
scheduled jobs
from the analysis,
add a filter on
Session Source !=
Scheduler. The
end-user can also
utilize the OOTB
filters Non
Subscriptions or
Subscriptions to
limit the data
accordingly.
Security Filter ID Security Filter ID Access Granted,
Abbreviation, and
Name Name Location are not
GUID GUID available in
Platform Analytics.
Description Desc
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Location
Abbreviation Owner Name
Access Granted
Owner Name
Session ID (GUID) Session ID Platform Analytics
stores the
Connection GUID connection/discon
Timestamp


Disconnection nect actions in the


Timestamp fact_access_
transaction table.
Therefore, the
Timestamp
attribute plus the
Action Type are
used to represent
the connection
and disconnection
timestamp.
The same level of
analysis can be
derived from the
below sample
attributes/filters:
Attributes:
Session,
Timestamp
Filter: "Action
Type = Project
Login" or "Action
Type = Project
Logout"
SQL Execution ID Action Only Cubes and
Indicator Type/Action Reports objects
DESC Category can generate SQL.
NO SQL
EXECUTED Using the OOTB
SQL filter Action
EXECUTED Category = Cube
Executions returns
a list of cubes that
generate SQL.
Using the OOTB
filter Report
Executions returns
a list of reports
that generate SQL.
Subscription ID Subscription ID
DESC Name
(subscription
name) GUID
Creation


timestamp
Modification
timestamp
Status
Project Name
(to support
CM delete
command)
Table ID Object ID Access Granted,
Abbreviation, and
Name Name Location are not
GUID GUID available in
Platform Analytics.
Description Desc
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Location
Abbreviation Version
Access Granted <appended as
Table Size a suffix to the
table name>
Table Prefix
<appended as
Owner Name a prefix to the
table name>
Owner Name
Template ID Object ID Access Granted
and Abbreviation
Name Name are not available
GUID GUID in Platform
Analytics.
Description Desc
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Location
Abbreviation Version


Access Granted Owner Name


Owner Name
Transformatio ID Object ID Access Granted
n and Abbreviation
Name Name are not available
GUID GUID in Platform
Analytics.
Description Desc
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Location
Abbreviation Version
Access Granted Owner Name
Owner Name
User ID Account/User ID Access Granted,
Abbreviation, and
Name Name Location are not
GUID GUID available in
Platform
Description Description Analytics.
Creation Creation
timestamp
Date
Modification
Modification Date timestamp
Location Account
Abbreviation Status
(Attribute)
Access Granted
Password
Enabled Status Expiration
Frequency
PWD Expiration
Freq Password
Change
Allow Change Allowed
PWD
Password
PWD Expiration Expiration
Date Date
Metadata Password
Change


Required
Standard Auth
Allowed
Trusted Auth
User Id
Version
Email
Login
LDAP Link
NT Link
WH Link
User Group ID User Group ID Access Granted,
Abbreviation, and
Name Name Location are not
GUID GUID available in
Platform Analytics.
Description Description
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Version
Abbreviation Status
Access Granted
User Group ID User Group ID
(Parent)
DESC Name
GUID
Description
Creation
timestamp
Modification
timestamp
Status
Week of year ID (201701) Week ID (201701)
Week Start Date Week Begin


(2017-01-02 Date (2017-


00:00:00) 01-02)
Week Finish Date Week End
(2017-01-08 Date (2017-
23:59:59) 01-08)
Week Range
(01/02/2017-
01/08/2017)
EM Attribute: Weekday
EM Forms: ID (1), DESC (Monday), Short Desc (Mon)
PA Attribute: Day of Week
PA Forms: ID (1), DESC (Monday)

EM Attribute: Year
EM Forms: ID (2017), DESC (Year 2017), Short Desc (Y 2017)
PA Attribute: Year
PA Forms: ID (2017)
Database ID Status Database
Error Indicator Error Indicator
DESC
0
NO DB ERROR
1
DB Error
Delivery ID Status ID
Status
Indicator DESC DESC
FAILED
SUCCESSFUL

Element Load ID Object Type ID


Indicator
DESC DESC
REGULAR Element
REPORT Load
Object
ELEMENT
LOAD
REPORT

Quarter ID (20171) Quarter ID (20171)


DESC (2017 DESC (Q1
Quarter 1) 2017)


Short Desc (2017


Q1)

Cache ID Action ID
Creation Category
DESC DESC
Indicator
NO CACHE Cache
CREATION Creation
CACHE
CREATION

Cache Type ID Cache Type ID


DESC DESC
NO CACHE HIT Report
Cache
SERVER
CACHE Intelligence
Cube
DEVICE Cache
CACHE
APPLICATION
CACHE

DB Table ID DB Table ID Access Granted


and Abbreviation
Name Name are not available
GUID GUID in Platform
Analytics.
Description Description
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Location
Abbreviation Version
Access Granted Owner@id
Owner Name

SQL Clause ID SQL Clause ID


Type Type
DESC DESC
SELECT Select
SELECT Select
GROUP BY Group By


SELECT Select
AGGREGATE Aggregate
FROM From
WHERE Where
ORDER BY Order By

Job Priority ID Job Priority ID


Map
DESC DESC
UNKNOWN Low
Priority
LOW
PRIORITY Medium
Priority
MEDIUM
PRIORITY High
Priority
HIGH
PRIORITY
Job Priority ID Job Priority ID
Number
DESC
Low Priority
Medium
Priority
High
Priority
Hierarchy ID Object ID Access Granted
and Abbreviation
Name Name are not available
GUID GUID in Platform
Analytics.
Description Desc
Creation Date Creation
timestamp
Modification Date
Modification
Location timestamp
Version Location
Abbreviation Version
Access Granted Owner Name
Owner Name


Enterprise Manager to Platform Analytics Metric Mapping


Use the following tables to identify Enterprise Manager metric names in Platform Analytics.

Command Manager Metric Names


Platform Analytics contains the same Command Manager metrics described
in Enterprise Manager.

Platform Analytics Metric Name | Description

CM Delete Attributes | Provides Command Manager syntax for deleting attributes.
CM Delete DB Instances | Provides Command Manager syntax for deleting database instances.
CM Delete Documents | Provides Command Manager syntax for deleting documents.
CM Delete Filters | Provides Command Manager syntax for deleting filters.
CM Delete Metrics | Provides Command Manager syntax for deleting metrics.
CM Delete Reports | Provides Command Manager syntax for deleting reports.
CM Delete Schedules | Provides Command Manager syntax for deleting schedules.
CM Delete Security Filters | Provides Command Manager syntax for deleting security filters.
CM Delete Templates | Provides Command Manager syntax for deleting templates.
CM Delete User | Provides Command Manager syntax for deleting users from the metadata.
CM Delete User Groups | Provides Command Manager syntax for deleting user groups.

Copyright © 2023 All Rights Reserved 56


Plat fo r m An alyt ics

Platform Analytics Metric


Description
Name

Provides Command Manager syntax for disabling Users in


CM Disable User
the metadata.

Subscription Metric Names


Many subscription metric names are the same for Enterprise Manager and Platform Analytics; however, there are slight differences between the two.

Enterprise Manager Metric Name | Platform Analytics Metric Name | Description

Avg. Number of Recipients per Subscription | Avg Number of Recipients per Subscription | Average number of recipients contained in a subscription.
Avg. Subscription Execution Duration (hh:mm:ss) | Avg Subscription Execution Duration (hh:mm:ss) | Average execution time of a subscription in hh:mm:ss.
Avg. Subscription Execution Duration (secs) | Avg Subscription Execution Time (s) | The average time (in seconds) the server takes to execute a subscription.
Last Subscription Execution Finish Timestamp | Last Subscription Execution Finish Timestamp | Last (latest) timestamp when a subscription was executed.
Number of Distinct Recipients | Number of Distinct Recipients | Number of recipients that received content from a subscription.
Number of Distinct Subscriptions | Number of Distinct Subscriptions | Number of distinct subscriptions.
Number of Errored Subscriptions | Subscriptions with Errors | Number of subscriptions that resulted in an error.
Number of Subscription Executions | Subscription Executions | Number of executions of a subscription.
Number of Subscriptions | Number of Subscriptions in Metadata | Number of subscriptions in the metadata.
Subscription Execution Duration (hh:mm:ss) | Subscription Execution Duration (hh:mm:ss) | Sum of all execution times of a subscription in hh:mm:ss.
Subscription Execution Duration (secs) | Subscription Execution Duration (s) | Sum of all execution times of a subscription in seconds.

Job Metric Names

Enterprise Manager Metric Name | Platform Analytics Metric Name | Description

RP Number of Ad-Hoc Jobs | Ad-hoc Jobs | Counts the number of ad-hoc jobs.
RP Number of Data Requests | Standalone Report Executions | Provides the number of report executions requested by users.
RP Number of DB Tables Accessed | DB Tables Accessed | Counts the number of database tables accessed.
RP Number of Jobs | Jobs | Counts the number of job executions.
RP Number of Jobs For Concurrency | Jobs | Counts the number of job executions.
RP Number of Jobs hitting Database | Reporting Database Jobs | Counts the number of jobs hitting the database.
RP Number of Jobs Today | Jobs Today | Counts the number of job executions today.
RP Number of Jobs w/o Cache Hit | Non Cache Hit Jobs | Counts the number of job executions that did not hit a server cache.
RP Number of Jobs w/o Element Loading | Non Element Load Jobs | Counts the number of job executions that did not result from element loading.
RP Number of Jobs with Cache Hit | Cache Hit Jobs | Counts the number of job executions that hit a server cache.
RP Number of Jobs with DB Error | Failed DB Jobs | Counts the number of job executions that caused a database error.
RP Number of Jobs with Element Loading | Element Loading Jobs | Counts the number of job executions that resulted from element loading.
RP Number of Jobs with Error | Failed Jobs | Counts the number of job executions that caused an Intelligence Server or database error.
RP Number of Jobs with Security Filter | Security Filter Jobs | Counts the number of job executions that had a security filter applied.
RP Number of Jobs with SQL Execution | SQL Execution Jobs | Counts the number of job executions that execute SQL.
RP number of Narrowcast Server jobs | Subscription Jobs | Counts the number of job executions run through MicroStrategy Narrowcast Server.
RP Number of Prompted Jobs | Prompted Jobs | Counts the number of job executions that include a prompt.
RP Number of Prompts Executed | Distinct Prompts | Counts the number of prompts in a report job.
RP Number of Report Jobs from Document Execution | Child Jobs | Counts the number of job executions that resulted from a document execution.
RP Number of Reports Used | Objects | Counts the number of report definitions executed.
RP Number of Result Rows | Report Row Count | Counts the number of result rows returned from a report execution.
RP Number of Result Rows for View Report | Cube Row Count | Counts the number of rows in an OLAP View Report job.
RP Number of Scheduled Jobs | Subscription Jobs | Counts the number of job executions that resulted from a schedule execution.
RP Number of SQL Passes | SQL Passes | Counts the number of passes executed during report execution.
RP Number of WH SQL Passes | WH SQL Passes | Counts the number of SQL passes executed on the database. This metric excludes Analytical SQL.
RP Percentage of Ad-Hoc Jobs | % Ad-Hoc Jobs | Percentage of ad-hoc jobs vs. total jobs.
RP Percentage of Jobs hitting Database | % Database Jobs | Percentage of jobs that hit a database vs. total jobs.
RP Percentage of Jobs with Cache Hit | % Cache Hit Jobs | Percentage of jobs that hit a server cache vs. total jobs.
RP Percentage of Jobs with Cube Hit | % Cube Cache Hit Jobs | Percentage of jobs that hit a cube cache vs. total jobs.
RP Percentage of Jobs with DB Error | % Failed DB Jobs | Percentage of jobs with a database error vs. total jobs.
RP Percentage of Jobs with Error | % Failed Jobs | Percentage of jobs with any error vs. total jobs.
RP Percentage of Narrowcast Server jobs | % Subscription Jobs | Percentage of jobs from Narrowcast Server vs. total jobs.
RP Percentage of Prompted Jobs | % Prompted Jobs | Percentage of prompted jobs vs. total jobs.
RP Percentage of Scheduled Jobs | % Subscription Jobs | Percentage of scheduled jobs vs. total jobs.
RP Jobs with No Data Returned | Jobs with No Data Returned | Counts the number of jobs that returned no data.
RP Export Engine Jobs | Export Engine Jobs | Counts the number of report jobs passing through the export engine.
Number of Sessions per User | Sessions per User | Provides the number of sessions created per connected user.
DP Average Number of Jobs per Session | Jobs per Session | Provides the average number of document jobs per user session.
RP Average Number of Jobs per Session | Jobs per Session | Provides the average number of job executions per session.
RP Average Number of Jobs per User | Jobs per User | Average number of job executions per user.
P Number of Data Request Jobs with Error | Data Request Jobs with Error | Counts the number of jobs that were requested by a user and encountered an error.
RP Number of Jobs w/o Cache Creation | Jobs w/o Cache Creation | Counts the number of jobs that don't create a cache.
RP Number of Jobs with Cache Creation | Jobs with Cache Creation | Counts the number of jobs that create a cache.
RP Number of Jobs with Datamart Creation | Jobs with Datamart Creation | Counts the number of job executions that created a datamart.
RP number of Narrowcast Server jobs | Subscription Jobs | Counts the number of subscription jobs.
RP Number of Non-Cancelled Jobs | Non-Canceled Jobs | Counts the number of non-canceled job executions.
RP Number of Users who ran report | Users Who Execute Jobs | Counts the number of distinct users that executed jobs.
RP Percentage of Jobs with Datamart Creation | % Jobs with Datamart Creation | Measures the percentage of jobs that created a datamart vs. total jobs.
RP Export Engine Jobs | Export Engine Jobs | Counts the number of jobs passing through the export engine.
RP Average Number of DB Result Rows per Job | Average of DB Result Rows per Job | Provides the average number of database result rows per job execution.
Count of EM_USER_ID from EM_USR_GP_USER table only | Unused User Groups in Metadata | Counts the number of user groups that don't have any users in the metadata.
RP Average Daily Use Duration per job (hh:mm:ss) | Avg Job Execution Duration (s) | Provides the average Execution Duration (in seconds) of jobs. A job's Execution Duration records the total time spent during a job.
DP Number of Users who ran Documents | Users Who Ran Documents | Counts the number of distinct users executing a document.
Number of Users running reports | Users Who Ran Reports | Counts the number of distinct users executing a report.
Number of Users running documents | Users Who Ran Documents | Counts the number of distinct users executing a document.

Cube and Cache Metric Names

Enterprise Manager Metric Name | Platform Analytics Metric Name | Description

Last # of Intelligent Cube Rows | Last Cube Row Count | Provides the last cube row count.
Last Intelligent Cube Size (KB) | Last Cache Size (KB) | Provides the last Intelligent cube size in KB recorded by the Intelligence Server.
Number of Sessions (IS_CUBE_ACTION_FACT) | Sessions | Counts the number of sessions from the table IS_CUBE_ACTION_FACT.
Intelligent Cube Action Duration (secs) | Average Cube Execution Action Duration (secs) | Measures the duration of operations executed against cubes.
Intelligent Cube Size (KB); Last Intelligent Cube Size (KB) | Cache Size (KB) | Provides the size of a cache instance in KB.
Last # of Intelligent Cube Rows; # of Rows in an Intelligent Cube | Cube Row Count | Counts the total number of rows contained in a cube.
Number of Dynamically Sourced Report Jobs against Intelligent Cubes | Number of Dynamically Sourced Report Jobs against Intelligent Cubes | Counts the number of jobs from reports not based on Intelligent Cubes but selected by the engine to go against an Intelligent Cube, because the objects on the report matched what is on the Intelligent Cube.
Number of Intelligent Cube Publishes | Number of Cube Publishes | Counts the number of times a cube was published.
Number of Intelligent Cube Refreshes | Number of Cube Refreshed | Counts the number of times a cube was refreshed.
Number of Intelligent Cube Republishes | Number of Cube Republishes | Counts the number of times a cube was republished.
Number of Jobs with Intelligent Cube Hit | Number of Jobs With Cube Hit | Counts the number of jobs hitting the cube.
Number of Users hitting Intelligent Cubes | Number of Users hitting Cubes | Counts the number of users hitting the cube cache.
Number of View Report Jobs | Number of View Report Jobs | Counts the number of view report jobs hitting a cube.
Average Cube Publish Time (hh:mm:ss) (IS_CUBE_ACTION_FACT) | Average Time to Publish a Cube (hh:mm:ss) | Measures the average time a cube was published.

Metadata Analysis Metric Names


Many metadata analysis metric names are the same for Enterprise Manager and Platform Analytics; however, there are slight differences between the two.

Enterprise Manager Metric Name | Platform Analytics Metric Name | Description

Number of Attributes in Metadata | Number of Attributes in Metadata | Counts the number of attributes stored in the projects.
Number of Consolidations in Metadata | Number of Consolidations in Metadata | Counts the number of consolidations stored in the projects.
Number of Custom Groups in Metadata | Number of Custom Groups in Metadata | Counts the number of custom groups stored in the projects.
Number of DB Connections in Metadata | Number of DB Connections in Metadata | Counts the number of database connections stored in the Intelligence Servers.
Number of DB Instances in Metadata | Number of DB Instances in Metadata | Counts the number of database instances stored in the Intelligence Servers.
Number of Documents in Metadata | Documents in Metadata | Counts the number of documents stored in the projects.
Number of Events in Metadata | Number of Events in Metadata | Counts the number of events stored in the Intelligence Servers.
Number of Facts in Metadata | Number of Facts in Metadata | Counts the number of facts stored in the projects.
Number of Filters in Metadata | Number of Filters in Metadata | Counts the number of filters stored in the projects.
Number of Hierarchies in Metadata | Number of Hierarchies in Metadata | Counts the number of hierarchies stored in the projects.
Number of Server Definitions in Metadata | Number of Intelligence Servers in Metadata | Counts the number of Intelligence Servers.
Number of Metrics in Metadata | Number of Metrics in Metadata | Counts the number of metrics stored in the projects.
Number of Projects in Metadata | Number of Projects | Counts the number of projects actively used in the Intelligence Server.
Number of Prompts in Metadata | Number of Prompts in Metadata | Counts the number of prompts stored in the projects.
Number of Reports in Metadata | Number of Reports in Metadata | Counts the number of reports stored in the projects.
Number of Schedules in Metadata | Number of Schedules in Metadata | Counts the number of schedules stored in the Intelligence Servers.
Number of Security Filters | Number of Security Filters in Metadata | Counts the number of security filters stored in the projects.
Number of Tables in Metadata | Number of Tables in Metadata | Counts the number of tables stored in the projects.
Number of Templates in Metadata | Number of Templates in Metadata | Counts the number of templates stored in the projects.
Number of Transformations in Metadata | Number of Transformations in Metadata | Counts the number of transformations stored in the projects.
Number of Users (IS_USER_PROJ_SF) | Number of Users in Metadata | Counts the number of users stored in the Intelligence Servers.
Number of Users (Report Level) | Number of Users in Metadata | Counts the number of users stored in the Intelligence Servers.
Number of Users in Metadata (EM_USER_VIEW) | Number of Users in Metadata | Counts the number of users stored in the Intelligence Servers.
Total Number of Application Objects in Metadata | Total Number of Application Objects in Metadata | Counts the total number of application objects stored in the projects.
Total Number of Configuration Objects in Metadata | Total Number of Configuration Objects in Metadata | Counts the total number of configuration objects stored in the Intelligence Servers.
Total Number of Schema Objects in Metadata | Total Number of Schema Objects in Metadata | Counts the total number of schema objects stored in the projects.

History List Metric Names


Many history list metric names are the same for Enterprise Manager and Platform Analytics; however, there are slight differences between the two.

Enterprise Manager Metric Name | Platform Analytics Metric Name | Description

HL Days Since Last Action: Any action | HL Days Since Last Action: Any action | Provides the days since the last action was performed.
HL Days Since Last Action: Request | HL Days Since Last Action: View | Provides the days since the last request for the contents of an inbox message.
HL Last Action Date: Any Action | HL Last Action Timestamp: Any Action | Provides the date and time of the last action performed on an inbox message.
HL Last Action Date: Request | HL Last Action Timestamp: View | Provides the date and time of the last request for the inbox message contents.
HL Number of Actions | HL Number of Actions | Provides the number of actions taken by a user on a history list message. Actions include Rename, Delete, Mark as Read, Mark as Unread, and Request Content.
HL Number of Actions by User | HL Number of Actions | Provides the number of actions taken by a user on a history list message. Actions include Rename, Delete, Mark as Read, Mark as Unread, and Request Content.
HL Number of Actions with Errors | HL Number of Actions with Errors | Provides the number of inbox message actions that resulted in errors.
HL Number of Jobs | HL Number of Jobs | Provides a count of jobs that occur with inbox message requests.
HL Number of Messages | HL Number of Messages | Provides the number of inbox messages.
HL Number of Messages with Errors | HL Number of Messages with Errors | Provides the number of messages that contain an error.
HL Number of Messages: Requested | HL Number of Messages Viewed | Provides the number of requests for the contents of an inbox message.

Server Processing Time Metric Names


Many server processing time metric names are the same for Enterprise Manager and Platform Analytics; however, there are slight differences between the two.

Enterprise Manager Metric Name | Platform Analytics Metric Name | Description

Successful Jobs < 1 min | Successful Jobs <1 min | Report jobs with an execution time of less than 1 minute.
Successful Jobs > 5 min | Successful Jobs > 5 min | Report jobs with an execution time greater than 5 minutes.
Successful Jobs 1-5 min | Successful Jobs 1-5 min | Report jobs with an execution time between 1 and 5 minutes.
Avg. CPU Duration per Job (secs) | Avg Job CPU Duration (s) | Average CPU time taken by the Intelligence Server to process a job.
RP Last Execution Request Timestamp | Last Job Request Timestamp (UTC) | Provides the request timestamp of the last job request.
RP Max. Elapsed Duration per Job (hh:mm:ss) | Max Job Elapsed Duration (s) | Provides the maximum report duration per report job.
RP Max. Execution Duration per Job (hh:mm:ss) | Max Job Execution Duration (s) | Provides the maximum job execution duration per report job.
RP Max. Queue Duration per Job (hh:mm:ss) | Max Job Queue Duration (s) | Provides the maximum queue duration per report job.
DP CPU Duration (secs) | Sum Job CPU Duration (s) | Total CPU time taken by the Intelligence Server to process a job.
RP CPU Duration (msec) | Sum Job CPU Duration (s) | Total CPU time taken by the Intelligence Server to process a job.
RP Average CPU Duration per Job (msecs) | Avg Job CPU Duration (s) | Average CPU time taken by the Intelligence Server to process a job.
RP Average Elapsed Duration per Job (secs) | Avg Job Elapsed Duration (s) | Average time taken by the Intelligence Server to process a job. This includes the amount of time the job was in the Intelligence Server queue and the amount of time taken by the Intelligence Server to execute the job.
RP Execution Duration (secs) | Max Job Execution Duration (s) | Total time taken by the Intelligence Server to execute the job, including the database execution time.
RP Elapsed Duration (secs) | Max Job Elapsed Duration (s) | Total time taken by the Intelligence Server to process a job. This includes the amount of time the job was in the Intelligence Server queue and the amount of time taken by the Intelligence Server to execute the job.
RP Average Prompt Answer Time per Job (secs) | Avg Job Prompt Answer Duration (s) | Average time taken to answer the set of prompts for a job.
RP Prompt Answer Duration (secs) | Sum Job Prompt Answer Duration (s) | Total time taken to answer the set of prompts for a job.
RP Average Queue Duration per Job (secs) | Avg Job Total Queue Duration (s) | Average time a job was in the Intelligence Server queue before the Intelligence Server started processing it.
RP Queue Duration (secs) | Max Job Total Queue Duration (s) | Total time a job was in the Intelligence Server queue before the Intelligence Server started processing it.
RP Average Execution Duration per Job (secs) | Avg Job Execution Duration (s) | Average time taken by the Intelligence Server to execute a job, including the database execution time.
RP Data Request Error Elapsed Duration hh:mm:ss | Sum Data Request Error Elapsed Duration (s) | Measures the total time taken by the Intelligence Server to process failed jobs. The jobs must be manually triggered.
RP Timed out jobs | Timeout Jobs | Counts the number of jobs that were timed out by the Intelligence Server.
RP SQL Size | SQL Size | Measures the SQL size, in number of characters, submitted by the Intelligence Server to the database.
(none) | Dossier Job Sessions | Counts the number of distinct user sessions that executed dossier jobs.
RP Session Count | Documents Job Sessions | Counts the number of distinct user sessions that executed document jobs.
(none) | Report Job Sessions | Counts the number of distinct user sessions that executed report jobs.
RP Queue Duration (hh:mm:ss) | Sum Job Total Queue Duration (s) | Provides the overall Total Queue Duration (in seconds) of jobs. A job's Total Queue Duration records the total time spent waiting in queue for the job to be executed, including the initial queue time and the queue time between different steps.
RP Prompt Answer Duration (hh:mm:ss) | Sum Job Prompt Answer Duration (s) | Provides the total Prompt Answer Duration (in seconds) of jobs. A job's Prompt Answer Duration records the total time spent answering prompts during a job.
RP Execution Duration (secs) | Max Job Execution Duration (s) | Provides the max Execution Duration (in seconds) of jobs. A job's Execution Duration records the total time spent during a job.
RP Execution Duration for SQL Executing Reports (hh:mm:ss) | Sum Database Job Execution Duration (s) | Measures the total time taken by jobs that do not hit a cache or Intelligent Cube and instead run against the database, i.e., jobs that execute SQL.
RP Execution Duration (hh:mm:ss) | Sum Job Execution Duration (s) | Provides the total Execution Duration (in seconds) of jobs. A job's Execution Duration records the total time spent during a job.
RP Elapsed Duration (secs) | Max Job Elapsed Duration (s) | Provides the max Elapsed Duration (in seconds) of jobs. A job's Elapsed Duration records the total time spent during an execution, including the total queue time.
RP Elapsed Duration (hh:mm:ss) | Sum Job Elapsed Duration (s) | Provides the total Elapsed Duration (in seconds) of jobs. A job's Elapsed Duration records the total time spent during an execution, including the total queue time.
RP Data Request Queue Duration hh:mm:ss | Sum Data Request Queue Duration (s) | Provides the queue duration during a report, document, or dossier job execution.
RP Data Request Prompt Answer Time hh:mm:ss | Sum Data Request Prompt Answer Time (s) | Provides the duration of time for answering the set of prompts in a report, document, or dossier job.
RP Data Request Exec Duration hh:mm:ss | Sum Data Request Execution Duration (s) | Measures the total time taken by the Intelligence Server to execute jobs. The jobs must be manually triggered.
RP Data Request Elapsed Duration hh:mm:ss | Sum Data Request Elapsed Duration (s) | Measures the total time taken by the Intelligence Server to process jobs. The jobs must be manually triggered.
RP CPU Duration (msec) | Sum Job CPU Duration (s) | Provides the total CPU duration (in seconds) of jobs. A job's CPU Duration is the time spent on CPU during the job execution.
RP Average View Report Result Rows | Average View Report Result Rows | Counts the average number of rows displayed for an Intelligent Cube view report.
RP Average Queue Duration per Job (hh:mm:ss) | Avg Job Total Queue Duration (s) | Provides the average Total Queue Duration (in seconds) of jobs. A job's Total Queue Duration records the total time spent waiting in queue for the job to be executed, including the initial queue time and the queue time between different steps.
RP Average Queue Duration per Data Request Job seconds | Average Queue Duration per Data Request Job (s) | Measures the average time a job waits in queue. The job must be manually triggered.
RP Average Prompt Answer Time per Job (hh:mm:ss) | Avg Job Prompt Answer Duration (s) | Provides the average Prompt Answer Duration (in seconds) of jobs. A job's Prompt Answer Duration records the total time spent answering prompts during a job.
RP Average Prompt Answer Duration per Data Request Job seconds | Average Prompt Answer Duration per Data Request Job (s) | Measures the average time taken by the Intelligence Server to process a prompt answer. The job must be manually triggered.
RP Average Execution Duration per Job (hh:mm:ss) | Avg Job Execution Duration (s) | Provides the average Execution Duration (in seconds) of jobs. A job's Execution Duration records the total time spent during a job.
RP Average Execution Duration per Data Request Job seconds | Average Execution Duration per Data Request Job (s) | Measures the average time taken by the Intelligence Server to execute failed jobs. The job must be manually triggered.
RP Average Elapsed Duration per Job (hh:mm:ss) | Avg Job Elapsed Duration (s) | Provides the average Elapsed Duration (in seconds) of jobs. A job's Elapsed Duration records the total time spent during an execution, including the total queue time.
RP Average Elapsed Duration per Data Request Job seconds | Average Elapsed Duration per Data Request Job (s) | Measures the average time taken by the Intelligence Server to process the job. The job must be manually triggered.
RP Average Elapsed Duration per Data Request Error Job seconds | Average Elapsed Duration per Data Request Error Job (s) | Measures the average time taken by the Intelligence Server to process failed jobs. The job must be manually triggered.

Platform Analytics Data Model

Object Hierarchy

The Object hierarchy and fact tables track all key schema objects (tables, facts, attributes, etc.) and application objects (reports, dashboards, cubes, etc.) stored in the MicroStrategy metadata(s) being monitored by Platform Analytics. The Object hierarchy does not record data related to configuration objects (subscriptions, schedules, users, user groups, etc.); configuration objects are stored in separate hierarchies.

The Object Category and Object Type attributes are groupings/categorizations of different metadata objects. A full list of Object Categories and Object Types is provided at the end of this section.

The Component Object hierarchy is used to track the relationship between an object and all of its direct child components. An object in the metadata can be both an Object and a Component Object in Platform Analytics. The lu_component_object tables are views on the underlying lu_object tables. All objects are stored at the level of Metadata and Project.

The relationship between Objects and their child Component Objects is stored in the fact_object_component table. This table stores only the most recent relationship between an object and its components. For example, if an attribute is removed from a report, it is removed as a component in the fact_object_component table.


lu_object_category
The Object Category is a high-level categorization of types of objects in the metadata, such as reports, attributes, documents, and metrics. This table and the corresponding attribute act as key filters/selectors for analyzing particular types of objects in the metadata. The data in this table is static and predefined.

Column | Data-Type | Description
object_category_id | smallint(6) | The fixed numeric ID for the Object Category.
object_category_desc | varchar(128) | The fixed list of Object Categories. Sample elements include Attributes, Columns, Reports, and Cubes.

lu_object_type
The Object Type for a specific Object stored in the metadata(s) being monitored. This attribute provides more granular grouping options for objects. For example, if an object's category is Cubes, its type may be OLAP Cube or Data Import Cube. The data in this table is static and predefined.

Column | Data-Type | Description
object_type_id | smallint(6) | The fixed numeric ID for the Object Type.
object_type_desc | varchar(128) | The fixed list of Object Types. Sample elements include OLAP Cube and Data Import Cube.
object_category_id | smallint(6) | The numeric ID of the corresponding Object Category. This column is the source of the Object Category attribute.

lu_object_extended_type
The Object Extended Type for a specific Object Type stored in the metadata(s) being monitored. This attribute provides more granular object types, such as MDX reports or data import cubes. The data in this table is static and predefined.

Column | Data-Type | Description
extended_type_id | int(11) | The fixed numeric ID for the Extended Type. This column is the source of the Object Extended Type attribute.
extended_type_desc | varchar(255) | The fixed list of Extended Types. Sample elements include Data Import Google Drive and Freeform SQL.

lu_object
The Object contains the distinct application or schema object stored in the metadata for a specific project. Each object has a unique GUID and is defined at the Project level.

Column | Data-Type | Description
object_id | bigint(20) | The auto-generated numeric ID for the object.
object_guid | varchar(32) | The GUID of the Object in the metadata.
object_name | varchar(255) | The name of the Object stored in the metadata. When the object represented by a row is a column (object_type_id = 4, object_category_id = 3), the DataType of the column is appended to the name as a suffix. For example: ObjectName:SignedInt
object_desc | varchar(512) | The description of the object.
object_location | varchar(1024) | The navigation path to the object in the project. For example: Platform Analytics/Shared Reports/1. Dossiers/Telemetry
creation_date | date | The UTC date when the object was first created. This column is the source of the Object Creation Date attribute.
modification_date | date | The latest date when the object was last modified. The date continues to update as the object is modified. This column is the source of the Object Modification Date attribute.
creation_timestamp | datetime | The UTC timestamp for when the object was first created.
modification_timestamp | datetime | The latest UTC timestamp for when the object was last modified. The timestamp continues to update as the object is modified.
object_status_id | tinyint(4) | The numeric ID of the latest status of the object. The status ID changes based on the latest modification. The status can be Visible, Hidden, Deleted, or Ad-hoc. This column is the source of the Object Status attribute.
object_type_id | smallint(6) | The numeric ID of the corresponding Object Type. This column is the source of the Object Type attribute.
project_id | int(11) | The numeric ID of the corresponding Project.
owner_id | bigint(20) | The numeric ID of the corresponding Object Owner.
object_extended_type_id | int(11) | The numeric ID of the extended type of the object. For example, if the object is a Data Import Cube, its extended type may be Data Import Google Big Query Build a Query. This column is the source of the Object Extended Type attribute.
object_version | varchar(32) | The version ID of the object.
object_certified | varchar(14) | The flag used to track if the object has been certified in the metadata. The flag can be Not Applicable, N, or Y.
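To illustrate how these lookup tables relate, the following query is a minimal sketch, assuming direct SQL access to the Platform Analytics warehouse, that walks the Object Category, Object Type, and Object chain described above to list all cube objects:

    -- List every cube object together with its type and category.
    SELECT c.object_category_desc,
           t.object_type_desc,
           o.object_name
    FROM   lu_object o
           JOIN lu_object_type t
             ON t.object_type_id = o.object_type_id
           JOIN lu_object_category c
             ON c.object_category_id = t.object_category_id
    WHERE  c.object_category_desc = 'Cubes';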

lu_component_object_category
A view on the lu_object_category warehouse table. This table tracks the categorization of child Component Objects nested within an Object. The data in this table is static and predefined.

View Table Column | WH Table Column | Data-Type | Description
component_object_category_id | object_category_id | smallint(6) | The fixed ID of the Component Object Category.
component_object_category_desc | object_category_desc | varchar(128) | The predefined list of Component Object Categories.

lu_component_object_type
A view on the lu_object_type warehouse table. This table tracks the Object Types of child Component Objects nested within an object. It provides a more granular analysis of the Component Object Category. The data in this table is static and predefined.

View Table Column | WH Table Column | Data-Type | Description
component_object_type_id | object_type_id | smallint(6) | The fixed ID for the Component Object Type.
component_object_type_desc | object_type_desc | varchar(128) | The predefined list of Component Object Types. This column is the source of the Component Object Type attribute.
component_object_category_id | object_category_id | smallint(6) | The numeric ID of the corresponding Component Object Category.

lu_component_object
A view on the lu_object warehouse table. This table lists the distinct application or schema objects stored in the metadata for a specific project. Each Component Object has a unique GUID and is defined at the Project level.

View Table Column | WH Table Column | Data-Type | Description
component_object_id | object_id | bigint(20) | The auto-generated numeric ID for the Component Object.
component_object_guid | object_guid | varchar(32) | The metadata GUID of the Component Object.
component_object_name | object_name | varchar(255) | The name of the Component Object stored in the metadata. This column is the source of the Component Object attribute.
component_object_desc | object_desc | varchar(512) | The description of the Component Object.
component_object_location | object_location | varchar(1024) | The navigation path to the Component Object in the Project.
component_object_type_id | object_type_id | smallint(6) | The numeric ID of the corresponding Component Object Type. This column is the source of the Component Object Type attribute.
component_object_extended_type_id | extended_type_id | | The numeric ID of the corresponding Component Object Extended Type. This column is the source of the Component Object Extended Type attribute.
project_id | project_id | int(11) | The numeric ID of the corresponding Project.
component_object_version | object_version | varchar(32) | The version ID of the Component Object.
component_object_certified | object_certified | varchar(14) | The flag used to track if the object has been certified in the metadata. The flag can be Not Applicable, N, or Y.


fact_object_component
An Object in MicroStrategy can exist as a standalone entity, or it may be used by other objects and therefore also be a Component Object. The relationship between Objects and their Component Objects is stored in the fact_object_component table. This table stores only the current direct relationship between an object and its components. For example, if an attribute is removed from a report, it is removed from the fact_object_component table.

Column | Data-Type | Description
object_id | bigint(20) | The auto-generated numeric ID for the Object.
component_object_id | bigint(20) | The auto-generated numeric ID for the Component Object.
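For example, the direct components of a single object can be listed by joining fact_object_component back to lu_object twice. This is a minimal sketch against the tables described above; the report name used in the filter is hypothetical:

    -- Direct child components of one object, with each component's type.
    SELECT o.object_name AS parent_object,
           c.object_name AS component_object,
           t.object_type_desc AS component_type
    FROM   fact_object_component f
           JOIN lu_object o ON o.object_id = f.object_id
           JOIN lu_object c ON c.object_id = f.component_object_id
           JOIN lu_object_type t ON t.object_type_id = c.object_type_id
    WHERE  o.object_name = 'Revenue Report';  -- hypothetical report name

Because only the current relationship is stored, rerunning this query after an object is edited reflects the latest component list, not its history.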

List of Object Categories and Object Types

Below is the full list of Object Categories and the Object Types tracked in Platform Analytics.

Ad Hoc Objects: Ad Hoc Object
Attribute Forms: Attribute Form Category
Attributes: Abstract Attribute, Attribute, Attribute Role, Attribute Transformation, Derived Attribute, Recursive Attribute
Cards: Card
Columns: Column
Consolidations: Consolidation
Cubes: Data Import Cube, OLAP Cube
Custom Groups: Custom Group, Element Grouping
Derived Elements: Derived Element
Documents: Document, HTML Document, Report Writing Document
Dossiers: Dossier
Element Load Objects: Element Load Objects
Facts: Fact
Filters: Filter, Filter Partition, Filter Segment
Folders: User Folder, System Folder
Hierarchies: System Hierarchy, User Hierarchy
Managed Objects: Managed Attribute, Managed Attribute Form, Managed Column, Managed Consolidation, Managed Data Import Cube, Managed Intelligent Cube, Managed Database Table, Managed Derived Element, Managed Derived Attribute, Managed Logical Table, Managed Grid Report, Managed Hierarchy, Managed Card, Managed Folder, Managed Metric, Managed Object
Metrics: Data Mining Metric, Metric, Metric Extreme, Metric Subtotal, Reference Line, System Subtotal, Training Metric
Projects: Project
Prompt: Attribute Element Prompt, Embedded Prompt, Level Prompt, Object Prompt, Prompt, Prompt Expression Draft, Value Prompt
Reports: Base Report, Datamart Report, Graph Report, Grid and Graph Report, Grid Report, Incremental Refresh Report, Non Interactive Report, SQL Report, Text Report, Transaction Services Report
Security Filters: Security Filters
Tables: Database Table, Logical Table, Partition Database Table, Partition Logical Table, Partition Mapping Table
Templates: Template
Transformations: Transformation
Unknown: Unknown

lu_object_status
The latest status of the Object. The Object Status continues to change as the Object is modified; the status always reflects the most recent state. An object is defined as an application or schema object stored in the metadata. This table does not include the status of configuration objects (subscriptions, schedules, users, etc.). A configuration object's status is tracked as a form of its attribute. For example, the Schedule attribute has a status form to track its latest state.

Column | Data-Type | Description
object_status_id | tinyint(4) | The defined numeric ID for the Object Status.
object_status_desc | varchar(25) | The current status of the Object. The status changes if the object is modified, i.e., marked as hidden or deleted from the metadata. The Object Status elements include Element Load Object, Ad Hoc, Visible, Deleted, and Hidden.

lu_object_owner
lu_object_owner is a view on the lu_mstr_user table in the warehouse. The lu_object_owner table is used to track the user who created the object or another user who currently owns the object. The owner usually defines the permissions for how the object can be used and by whom.

View Table Column | WH Table Column | Data-Type | Description
object_owner_id | mstr_user_id | bigint(20) | The auto-generated numeric ID for the current Owner in the MicroStrategy metadata.
object_owner_guid | mstr_user_guid | varchar(32) | The metadata GUID of the User object.
object_owner_name | mstr_user_name | varchar(255) | The name of the User object in the metadata that has ownership of a particular object.
object_owner_desc | mstr_user_desc | varchar(512) | The description of the User object in the metadata.
object_owner_login | mstr_user_login | varchar(255) | The login of the User object in the metadata.
creation_timestamp | creation_timestamp | datetime | The UTC timestamp of when the user was first created in the metadata. If a script was used to import a list of users, the timestamp may be identical for multiple users. This is expected.
modification_timestamp | modification_timestamp | datetime | The latest UTC timestamp from when the User object was modified. The value continually updates as the User is modified or changed.
object_owner_status | mstr_user_status | varchar(25) | The latest status of the User object in the metadata. The status can be Visible, Hidden, or Deleted.
metadata_id | metadata_id | int(11) | The numeric ID for the corresponding metadata for each User. All users are stored at the metadata level.
object_owner_version | object_version | varchar(32) | The version ID of the owner of the object.
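As an illustration, the owner of each object can be resolved through this view. The following is a minimal sketch joining lu_object to lu_object_owner on the documented owner_id column:

    -- Count of objects per current owner.
    SELECT ow.object_owner_name,
           COUNT(*) AS owned_objects
    FROM   lu_object o
           JOIN lu_object_owner ow ON ow.object_owner_id = o.owner_id
    GROUP  BY ow.object_owner_name
    ORDER  BY owned_objects DESC;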


fact_object_change_journal
This fact table stores the historical change journal modification information. By joining this table with other lookup tables, such as lu_object and lu_account, you can analyze who changed which object at what time.

The objects that track change journal information include all the object types in the lu_object_type table. Adding the Change Journal fact tables to the Platform Analytics Repository enables administrators to analyze the object modification history for all objects in the metadata(s) being monitored by Platform Analytics.

Column | Data-Type | Description
object_id | bigint(20) | The auto-generated numeric ID for the object. This allows you to determine what project these objects belong to.
session_id | bigint(20) | The auto-generated numeric ID for the session. This allows you to determine which session the change applied to, which client or server the change applied to, and which type of client (i.e., Session Source) the change applied to.
account_id | bigint(20) | The auto-generated numeric ID for the account. This allows you to determine who (i.e., which account) modified the object.
change_type_id | tinyint(4) | The fixed ID for the Object Change Type.
transaction_timestamp | datetime(3) | MicroStrategy internal use.
tran_date | date | MicroStrategy internal use.
comments | longtext | The comments a user leaves when changes are saved on an object.
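For example, a recent-changes report can be sketched from these tables. This is a minimal sketch; the lu_account column names used below (account_id, account_name) are assumptions for illustration, as lu_account is documented elsewhere in this data model:

    -- Most recent object modifications: who changed what, and when.
    SELECT j.transaction_timestamp,
           a.account_name,          -- assumed lu_account column name
           ct.change_type_desc,
           o.object_name
    FROM   fact_object_change_journal j
           JOIN lu_object o       ON o.object_id = j.object_id
           JOIN lu_change_type ct ON ct.change_type_id = j.change_type_id
           JOIN lu_account a      ON a.account_id = j.account_id  -- assumed
    ORDER  BY j.transaction_timestamp DESC
    LIMIT  100;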


lu_change_type
The Change Type is the type of change a user performs on an object, for example, creating a new object or deleting an object.

Column | Data-Type | Description
change_type_id | smallint(6) | The fixed numeric ID of the change type. This is the source column for the change_type_id column of fact_object_change_journal.
change_type_desc | varchar(32) | The fixed list of change types:
  0 Reserved
  1 Reserved2
  2 Save Objects
  3 Reserved3
  4 Delete Objects
  5 Garbage Collection
  6 Set Change Journal State
  7 Get Change Journal State
  8 Purge Change Journal
  9 Search Change Journal
  10 Delete Merge User
  11 Find Objects By Paths
  12 Copy Object
  13 Manipulate Source Accounts
  14 Notify Cluster Cube Change


Security Filter
lu_security_filter
The list of Security Filter objects and the corresponding descriptive information from the MicroStrategy metadata(s) being monitored by Platform Analytics. For more information about Security Filter objects, see Restricting Access to Data: Security Filters.

Column | Data-Type | Description
security_filter_id | bigint(20) | The auto-generated numeric ID of the Security Filter object.
security_filter_guid | varchar(32) | The GUID of the Security Filter object.
security_filter_name | varchar(255) | The name of the Security Filter object stored in the metadata.
security_filter_desc | varchar(512) | The detailed description of the Security Filter object.
creation_timestamp | datetime | The UTC timestamp for when the Security Filter was first created.
modification_timestamp | datetime | The latest UTC timestamp when the Security Filter was modified. The timestamp continually updates as the Security Filter is modified.
security_filter_status | varchar(25) | The current status of the Security Filter. A Security Filter can have a status of Visible, Deleted, or Hidden.
project_id | int(11) | The numeric ID of the project. Security Filters are stored at the level of project and metadata.
metadata_id | int(11) | The numeric ID of the metadata. Security Filters are stored at the level of project and metadata.
owner_id | bigint(20) | The numeric ID of the corresponding Security Filter object owner. This column is not mapped to an attribute in the schema.
security_filter_location | longtext | The navigation path to the Security Filter in the project. For example: Platform Analytics\Project Objects\MD Security Filters
folder_guid | varchar(32) | MicroStrategy internal use.
transaction_timestamp | datetime | MicroStrategy internal use.
security_filter_version | varchar(32) | Version ID of the Security Filter.

fact_action_security_filter_view
This fact table tracks which security filters were applied on a particular execution. By joining this table with the fact_access_transaction_view table, you can analyze which Security Filters were applied for each execution. The Security Filter Sequence represents the order in which a security filter was applied when multiple security filters are applied during an execution.

Column | Data-Type | Description
parent_tran_id | bigint(20) | The auto-generated numeric Parent Action ID. This is a source column for the Parent Action attribute.
security_filter_id | bigint(20) | The auto-generated numeric ID of the Security Filter object.
security_filter_sequence | int(11) | The source column for the Security Filter Sequence fact. Represents the sequence order in which a security filter was applied when multiple security filters are applied during an execution.
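For example, to see which security filters were applied to each execution, and in what order, join this fact table to the security filter lookup. This minimal sketch uses only the columns documented above:

    -- Security filters applied to each execution, in the order applied.
    SELECT f.parent_tran_id,
           f.security_filter_sequence,
           sf.security_filter_name
    FROM   fact_action_security_filter_view f
           JOIN lu_security_filter sf
             ON sf.security_filter_id = f.security_filter_id
    ORDER  BY f.parent_tran_id, f.security_filter_sequence;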

Prompt Hierarchy

Prompts are tracked as part of the Object hierarchy, as well as through separate attributes, in order to analyze prompt answers, which are embedded in an object.

lu_prompt
The lu_prompt table contains the distinct prompt objects stored in the metadata for a specific project.

Each prompt has a unique GUID and is defined at the Project level.

Column | Data-Type | Description
prompt_id | bigint(20) | The auto-generated numeric ID for the prompt. This column is the source of the Prompt attribute.
prompt_guid | varchar(32) | The GUID of the prompt object in the metadata.
prompt_name | varchar(255) | The name of the prompt stored in the metadata.
prompt_desc | varchar(512) | The detailed description of the prompt.
prompt_location | longtext | The navigation path to the prompt in the project. For example: Platform Analytics/Public Objects/Prompts/System Prompts
prompt_status | varchar(25) | The current status of the prompt. The status can continue to update if the prompt is modified. The status can be Visible, Hidden, or Deleted.
creation_timestamp | datetime | The UTC timestamp for when the prompt was first created.
modification_timestamp | datetime | The latest UTC timestamp for when the prompt was last modified. The timestamp continues to update as the prompt is modified.
prompt_type_id | smallint(6) | The fixed numeric ID of the Prompt Type. Prompt Types include Attribute Element Prompt; Embedded Prompt; Level Prompt; Object Prompt; Prompt (which includes attribute hierarchy prompts, attribute qualification prompts, and metric qualification prompts); and Value Prompt. This column is the source of the Prompt Type attribute.
project_id | bigint(20) | The numeric ID of the corresponding project.
owner_id | bigint(20) | The numeric ID of the corresponding prompt object owner. This column is not mapped to an attribute in the schema.
prompt_version | varchar(32) | The version ID of the prompt.
required | varchar(41) | The flag used to track whether the prompt is required. The value can be Not Applicable, N, or Y.

lu_prompt_type
The Prompt Type for a specific Prompt stored in the metadata(s) being monitored. This attribute provides groupings of the prompts. The data in this table is static and predefined.

Attribute hierarchy prompts, attribute qualification prompts, and metric qualification prompts cannot be differentiated as different prompt types. They are grouped into a default Prompt Type.

Column | Data-Type | Description
prompt_type_id | smallint(6) | The fixed numeric ID of the Prompt Type. This is the source column of the Prompt Type attribute.
prompt_type_desc | varchar(255) | Prompt Type elements include Attribute Element Prompt; Embedded Prompt; Level Prompt; Object Prompt; Prompt (which includes attribute hierarchy prompts, attribute qualification prompts, and metric qualification prompts); and Value Prompt.

fact_prompt_answers
This fact table stores the Prompt Answer for each prompt during an execution. During execution, each time a prompt is answered, a new record is added to this table. If the user does not make a prompt selection, the prompt answer is recorded as null.

Example report:

l Attributes: Object, Prompt, Prompt Answer

l Metric: Count (action)

This report indicates the number of times a specific prompt linked to an object (report, document, or dossier) has been answered with a particular response.

The fact_prompt_answers table is the source for two facts:

l Prompt Actions: Records the unique tran_id (actions) recorded in the fact_prompt_answers table.

l Prompt Answer Sequence: Records the order in which the Prompt Answers are applied.

Column | Data-Type | Description
parent_tran_id | bigint(20) | The auto-generated numeric transaction ID. This is the source column for the Parent Action attribute. Action is the lowest level defined in the Platform Analytics project schema. Parent Action is used to group multiple Actions sharing the same job.
prompt_id | varchar(255) | The auto-generated numeric ID for the prompt.
prompt_order_id | smallint(6) | The source for the Prompt Order fact. Indicates in what order a user answered a series of prompts in a prompted object.
prompt_answer | varchar(2048) | The prompt answer chosen by the user. The prompt answer can include a metric value, an object, an attribute list, custom text, etc. This is the source column of the Prompt Answer attribute.
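The example report above can be approximated directly in SQL against these tables. The Object attribute is omitted here because linking a prompt answer to the executed object goes through the transaction fact tables; this minimal sketch counts answers per prompt:

    -- How often each prompt was answered with each response.
    SELECT p.prompt_name,
           f.prompt_answer,
           COUNT(f.parent_tran_id) AS prompt_actions
    FROM   fact_prompt_answers f
           JOIN lu_prompt p ON p.prompt_id = f.prompt_id
    GROUP  BY p.prompt_name, f.prompt_answer
    ORDER  BY prompt_actions DESC;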


Cache Hierarchy

The Cache hierarchy provides analysis for Cache Object related actions. Cube actions can include both executions (Cube publish, Report hit Cube, etc.) and cube administration tasks (Cube Load, Cube Unload, Delete Cube, etc.).

The fact_action_cube_cache table stores the data for cache instances used during a cube action. Key facts include: Cache Expiration Timestamp (UTC), Cache Last Update Timestamp (UTC), Cache Size (KB), Historical Hit Count, and Hit Count.

The cache hierarchy is specific to MicroStrategy and does not include analysis for any Badge actions.


lu_cache_object
The Cache Objects stored in this table represent the cube objects for which the cache was created. The cache hierarchy only stores information related to cube and report caches. This table is a view on the lu_object table.

View Table Column | WH Table Column | Data-Type | Description
cache_object_id | object_id | bigint(20) | The fixed ID for the Cache Object.
cache_object_guid | object_guid | varchar(32) | The GUID of the Cache Object in the metadata.
cache_object_name | object_name | varchar(255) | The name of the Cache Object in the metadata.
cache_object_desc | object_desc | varchar(512) | The detailed description of the Cache Object.
cache_object_location | object_location | longtext | The navigation path to the Cache Object in the project. For example: Platform Analytics/Shared Reports/1. Dossiers/Telemetry
cache_object_type_id | object_type_id | smallint(6) | The ID of the Cache Object Type. For example: Data Import Cube or OLAP Cube.
cache_object_creation_timestamp | creation_timestamp | datetime | The UTC timestamp for when the Cache Object was first created.
cache_project_id | project_id | bigint(20) | The numeric ID of the corresponding project.
cache_object_owner_id | owner_id | bigint(20) | The ID of the corresponding Cache Object Owner in the metadata.
cache_object_modification_timestamp | modification_timestamp | datetime | The timestamp of when the Cache Object was last modified.
cache_object_status_id | object_status_id | tinyint(4) | The status of the Cache Object.
cache_object_version | object_version | varchar(32) | The version of the Cache Object.
cache_object_certified | object_certified | varchar(14) | The flag used to track if the Cache Object is certified in the metadata. The flag can be Not Applicable, N, or Y.

lu_cache
The cache instances are stored in the lu_cache table. This table stores all cube cache instances created in the metadata over time. Cache instances are identified based on their GUID.

Column | Data-Type | Description
cache_id | bigint(20) | The auto-generated ID for the cache instance.
cache_instance_guid | varchar(32) | The GUID of the cache instance in the metadata.
cache_type_id | tinyint(4) | The ID corresponding to the type of cache. Only Intelligent Cube caches are supported.
cache_object_id | bigint(20) | The ID corresponding to the object for which the cache instance was created.
metadata_id | bigint(20) | The ID of the corresponding metadata where the cache was created.

lu_cache_type
The Cache Type is the categorization of the Cache. Only Report Cache and Intelligence Cube Cache types are tracked.

Column | Data-Type | Description
cache_type_id | tinyint(4) | The fixed ID for the Cache Type.
cache_type_desc | varchar(25) | The predefined list of Cache Types. For example, a sample element is Intelligence cube cache.
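Putting the cache lookup tables together, the following minimal sketch lists each cache instance with its type and the object it caches, using only the columns documented above:

    -- Each cache instance with its type and the object it caches.
    SELECT c.cache_instance_guid,
           ct.cache_type_desc,
           co.cache_object_name
    FROM   lu_cache c
           JOIN lu_cache_type ct   ON ct.cache_type_id = c.cache_type_id
           JOIN lu_cache_object co ON co.cache_object_id = c.cache_object_id;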

lu_cache_object_owner
The lu_cache_object_owner table is used to track the user who created the Cache Object or the user who currently owns it. The Cache Object Owner usually defines the permissions for how the Cache Object can be used and by whom. The lu_cache_object_owner table is a view on the lu_mstr_user table in the warehouse.


View Table Column | Warehouse Table Column | Description | Data-Type
cache_object_owner_id | mstr_user_id | The auto-generated ID for the current Owner/User in the MicroStrategy metadata. | bigint(20)
cache_object_owner_guid | mstr_user_guid | The GUID of the MicroStrategy User in the metadata. | varchar(32)
cache_object_owner_name | mstr_user_name | The name of the MicroStrategy User in the metadata that has ownership of the object. | varchar(255)
cache_object_owner_login | mstr_user_login | The login of the MicroStrategy User in the metadata. | varchar(255)
creation_timestamp | creation_timestamp | The UTC timestamp of when the user was first created in the metadata. | datetime
modification_timestamp | modification_timestamp | The latest UTC timestamp from when the MicroStrategy User was modified. The value will continually update as the User is modified or changed. | datetime
cache_object_owner_status | mstr_user_status | The latest status of the User in the metadata. The status can be: Visible or Deleted. | varchar(25)
metadata_id | metadata_id | The numeric ID for the corresponding metadata where the MicroStrategy User was created. All users are stored at the metadata level. | int(11)
cache_object_owner_version_id | mstr_user_version | The version ID of the owner of the Cache Object. | varchar(32)

lu_cache_object_type
The Cache Object Type represents the type of object for which the cache instances were created. This attribute provides more granular grouping options for the Cache Objects. The data in this table is predefined, and this is a view on the lu_object_type table.

View Table Column | WH Table Column | Description | Data-Type
cache_object_type_id | object_type_id | The fixed ID for the Cache Object Type. | smallint(6)
cache_object_type_desc | object_type_desc | The fixed list of Cache Object Types. Sample elements include: OLAP Cube and Data Import Cube. | varchar(128)
cache_object_category_id | object_category_id | The numeric ID of the corresponding Cache Object Category. Not supported in the schema. | smallint(6)


lu_cache_status
The Cache Status indicates the status of the cache instance. The Cache Status can change for the cache instance over time. Therefore, the Cache Status is stored in the fact_latest_cube_cache and fact_action_cube_cache tables to track the latest, historical, and changing status over the life of the cube cache instance. For a detailed explanation of the Cube Status values, see KB31566: MicroStrategy 9.4.x - 10.x Intelligent Cube status indication and workflow.

Column | Description | Data-Type
cube_status_id | The fixed numeric ID for the Cache Status. | int(11)
cube_status_desc | The description form of the Cache Status. The status can be a combination of any of the following elements: Processing, Active, Filed, Monitoring Information Dirty, Dirty, Loaded, Load Pending, Unload Pending, Imported, Foreign. | varchar(255)

Cache Project
The Cache Project attribute is a logical table alias based on the lu_project table in the warehouse.


fact_action_cube_cache
The fact_action_cube_cache table records the transaction telemetry related to Cube Cache instances, as well as key metrics for each cube action.

Key facts include:

- Hit Count - the number of times the Intelligent Cube is used by reports/documents/dossiers since it was last updated. This number increases every time the report/document/dossier is executed and hits the cache. Hit Count resets when the cache is updated.
- Historical Hit Count - the number of times the Intelligent Cube is used by reports/documents/dossiers since it was published. This number will increment regardless of cache updates.
- Cache Size (KB) - records the size of the cube cache instance in KB.
- Cube Last Update Timestamp (UTC) - the UTC timestamp when the cube was last updated.
- Cache Expiration Timestamp (UTC) - the UTC timestamp when the cache instance is set to expire.

The Action Categories recorded in the fact_action_cube_cache table include:

- Cube Modification
- Cube Executions
- Cube Cache Hit
- Cache Hit
- Cache Creation


Column | Description | Data-Type
parent_tran_id | The auto-generated numeric Parent Action ID. This is a source column of the Parent Action attribute. Parent Action is the lowest level that is defined in the Platform Analytics project schema. | bigint(20)
cache_id | The auto-generated ID for the cube cache instance. | bigint(20)
cube_status_id | The ID corresponding to the status of the cube instance at the transaction level. The status of the cube instance can change over time. | int(11)
cache_size | Size of the cube instance in KB. | bigint(20)
historical_hit_count | Historical Hit Count of a cube instance. | bigint(20)
hit_count | Hit Count of a cube instance. | bigint(20)
last_update_timestamp | Last update timestamp (UTC) of the cube. | datetime
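As an illustration, this fact can be rolled up to one row per cube. The following is a minimal sketch (MySQL syntax assumed, using only the views documented in this hierarchy):

-- Illustrative sketch: lifetime hit counts and size per cube object.
SELECT o.cache_object_name,
       MAX(f.historical_hit_count) AS historical_hits,
       MAX(f.cache_size)           AS max_cache_size_kb
FROM   fact_action_cube_cache f
JOIN   lu_cache c        ON c.cache_id = f.cache_id
JOIN   lu_cache_object o ON o.cache_object_id = c.cache_object_id
GROUP  BY o.cache_object_name
ORDER  BY historical_hits DESC;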

fact_latest_cube_cache
The fact_latest_cube_cache table records only the latest transaction related
to the cube cache instance.

Key facts include:

- Hit Count - the number of times the Intelligent Cube has been used by reports/documents/dossiers since it was last updated. Hit Count will increase every time the report/document/dossier gets executed and hits the cache, but will reset when the cache is updated.
- Historical Hit Count - the number of times the Intelligent Cube has been used by reports/documents/dossiers since it was published. This number will increment regardless of cache updates.
- Cache Size (KB) - records the size of the cube cache instance in KB.
- Cube Last Update Timestamp (UTC) - the timestamp (in UTC timezone) when the cube was last updated.
- Cache Expiration Timestamp (UTC) - the timestamp (in UTC timezone) when the cache instance is set to expire.

Column | Description | Data-Type
cache_id | The auto-generated ID for the cube cache instance. | bigint(20)
iserver_instance_id | The ID of the Intelligence Server instance associated with the cube cache instance. | bigint(20)
cube_status_id | The ID corresponding to the status of the cube instance at the transaction level. The status of the cube instance can change over time. | int(11)
cache_size | Size of the cube instance in KB. | bigint(20)
historical_hit_count | Historical Hit Count of a cube instance. | bigint(20)
hit_count | Hit Count of a cube instance. | bigint(20)
transaction_timestamp | MicroStrategy internal use. | bigint(20)
last_update_timestamp | Last update timestamp (UTC) of the cube. | datetime
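Because this table holds only the latest state of each cache instance, it is the natural source for current-footprint analysis. A minimal sketch (MySQL syntax assumed):

-- Illustrative sketch: current cube cache count and size by status.
SELECT s.cube_status_desc,
       COUNT(*)                 AS cube_caches,
       SUM(f.cache_size) / 1024 AS total_size_mb
FROM   fact_latest_cube_cache f
JOIN   lu_cache_status s ON s.cube_status_id = f.cube_status_id
GROUP  BY s.cube_status_desc
ORDER  BY total_size_mb DESC;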


Configuration Objects Hierarchy

lu_metadata
List of the metadata repositories that contain the configuration, schema, and
application objects being monitored by Platform Analytics.

Column | Description | Data-Type
metadata_id | The auto-generated numeric ID for the metadata repository. | int(11)
metadata_guid | The GUID of the metadata(s) being monitored by Platform Analytics. | int(11)
metadata_db_connection | The metadata database connection URL. Used to differentiate metadata repositories with the same GUID but different database servers. | varchar(512)

lu_project
List of the projects in the metadata repositories that are being monitored by Platform Analytics. All applications, schema, and configuration objects are stored at the project level.

Column | Description | Data-Type
project_id | The auto-generated numeric ID for the project. | bigint(20)
project_guid | The GUID of the project. | varchar(32)
project_name | The name of the project stored in the metadata. | varchar(255)
project_desc | The long description of the project added in the Project Properties editor. | varchar(512)
creation_timestamp | The UTC timestamp for when the project was first created. | datetime
modification_timestamp | The latest UTC timestamp from when the project was last modified. The modification timestamp will continue to update as the project is modified. | datetime
project_status | The latest status of the project. The status can be: Visible, or Deleted (indicates the project has been removed). | varchar(25)
metadata_id | The numeric ID of the metadata. Projects are stored at the level of metadata. | int(11)
owner_id | The numeric ID of the project owner. This column is not available in the reporting schema. | bigint(20)
transaction_timestamp | MicroStrategy internal use. | datetime
project_version | The version ID of the project. | varchar(32)

lu_db_type
The list of database types in the metadata repositories.

Column | Description | Data-Type
db_type_id | The numeric ID of the database type for the database instance. | int(11)
db_type_desc | The description of the database type for the database instance. | varchar(225)

lu_db_version
The list of database versions in the metadata repositories.

Column | Description | Data-Type
db_version_id | The numeric ID of the database version for the database instance. | int(11)
db_version_desc | The description of the database version for the database instance. | varchar(225)


lu_db_login
Lists the database logins created in the metadata. All configuration objects are stored at the metadata level. For more information about database logins, see Creating a database login.

Column | Description | Data-Type
db_login_id | The auto-generated numeric ID for the database login. | bigint(20)
db_login_name | The name of the database login. | varchar(32)
db_login_guid | The GUID of the database login. | varchar(255)
db_instance_status | The status of the database login. The status can be: Visible or Deleted. | varchar(25)
metadata_id | The numeric ID of the metadata. Configuration objects are stored at the level of metadata. | bigint(20)
transaction_timestamp | MicroStrategy internal use. | datetime
db_login_version | The version ID of the database login. | varchar(32)

lu_db_instance
Lists the database instances created in the metadata. All configuration objects are stored at the metadata level. For more information about database instances, see Creating a database instance.


Column | Description | Data-Type
db_instance_id | The auto-generated numeric ID for the database instance. | bigint(20)
db_instance_guid | The GUID of the database instance. | varchar(32)
db_instance_name | The name of the database instance. | varchar(255)
db_instance_desc | The long description of the database instance added in the Properties editor. | varchar(255)
creation_timestamp | The UTC timestamp for when the database instance was first created. | datetime
modification_timestamp | The latest UTC timestamp from when the database instance was last modified. The modification timestamp will continue to update with each saved modification. | datetime
db_instance_status | The status of the database instance. The status can be: Visible or Deleted. | varchar(25)
owner_id | The current owner_id of the database instance. The owner can be the person who created the object or a user who was later changed to be the owner. This column is not mapped in the Platform Analytics schema. | bigint(20)
metadata_id | The numeric ID of the metadata. Configuration objects are stored at the level of metadata. | bigint(20)
transaction_timestamp | MicroStrategy internal use. | datetime
db_type_id | The numeric ID of the database type for the database instance. | int(11)
db_version_id | The version ID of the database instance. | int(11)


lu_db_connection
Lists the database connections created in the metadata. All configuration objects are stored at the metadata level. For more information about database connections, see Creating a Database Connection and How to Manage Database Connections.

Column | Description | Data-Type
db_connection_id | The auto-generated numeric ID for the database connection. | bigint(20)
db_connection_guid | The GUID of the database connection. | varchar(32)
db_connection_name | The name of the database connection. | varchar(255)
db_connection_status | The status of the database connection. The status can be: Visible or Deleted. | varchar(25)
creation_timestamp | The UTC timestamp when the database connection was first created. | datetime
modification_timestamp | The latest UTC timestamp when the database connection was last modified. The modification timestamp will continue to update with each saved modification. | datetime
metadata_id | The numeric ID of the metadata. Configuration objects are stored at the level of metadata. | bigint(20)
transaction_timestamp | MicroStrategy internal use. | datetime
db_connection_version | The version ID of the database connection. | varchar(32)
data_source_name | The name of the configured DSN for the database connection. | varchar(4096)

lu_db_connection_map
The Database Connection Map is an attribute that links the unique combinations of Database Connection, Database Instance, and Database Login that were used during an action (i.e., running a report). The ID value is auto-generated and does not represent anything meaningful on its own. All configuration objects are stored at the metadata level.

A Connection Map allows an administrator to apply different levels of access to the RDBMS per User/User Group. For more information, see Connection Mappings. If a User/User Group for which the connection map is defined runs a report/document/dossier, there will be a new entry in lu_db_connection_map. If a Connection Map has been created but never applied to an action, it will not appear in this table.

Column | Description | Data-Type
db_connection_map_id | The auto-generated numeric ID for the Database Connection Map. The value acts as a link between the database instance, database connection, and database login, but does not have any meaning on its own. | bigint(20)
db_instance_id | The numeric ID of the database instance. | bigint(20)
db_connection_id | The numeric ID of the database connection. | bigint(20)
db_login_id | The numeric ID of the database login. | bigint(20)
metadata_id | The metadata ID against which the Database Connection Map was executed. Configuration objects are stored at the metadata level. | bigint(20)
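A minimal sketch (MySQL syntax assumed) that resolves each connection map into its three component objects:

-- Illustrative sketch: expand a Database Connection Map into its parts.
SELECT m.db_connection_map_id,
       i.db_instance_name,
       c.db_connection_name,
       l.db_login_name
FROM   lu_db_connection_map m
JOIN   lu_db_instance   i ON i.db_instance_id   = m.db_instance_id
JOIN   lu_db_connection c ON c.db_connection_id = m.db_connection_id
JOIN   lu_db_login      l ON l.db_login_id      = m.db_login_id;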


Action Hierarchy

Every Intelligence server and Identity server log is processed and stored at the transactional (Action, Parent Action) level in the main fact table access_transaction.

The action hierarchy allows users to analyze the type of actions that a user is performing on the system. A sample of transaction types (i.e., Action Types) includes: logging into a project, executing a report, republishing a cube, creating a History List message, or scanning a Badge QR code. The full list of Action Types is included in the following section.

The Action Category is a grouping of Action Types. It provides a broader analysis of user actions, i.e., Executions or Badge Actions.

Each Action has a corresponding Status and Status Category. The Status and Status Category are used to differentiate successful or failed transactions.

These are the facts sourced from the access_transaction table:

- Action - records the unique tran_id recorded in the access_transaction table.
- Action Start Timestamp (UTC) - UTC timestamp of when the Action (execution) began.
- Action Finish Timestamp (UTC) - UTC timestamp of when the Action (execution) finished.
- Execution Duration (ms) - records the total time spent during an action (execution) in milliseconds.
- Prompt Answer Duration (ms) - records the total time spent answering a prompt during an action (execution) in milliseconds.
- Total Queue Duration (ms) - records the total time spent waiting in queue for the job to be executed in milliseconds. This includes initial queue time and the queue time between different steps.
- Row Count - the number of rows in the result set.
- Step Count - the number of steps completed during an execution.
- SQL Pass Count - the number of SQL passes required to generate the result set for the execution.
- CPU Duration - the time spent on CPU during job execution in milliseconds.
- Initial Queue Duration (ms) - total time spent by the job waiting in the initial queue to begin execution in milliseconds.
- Job Step Total Queue Duration (ms) - time spent waiting in the queue between job steps in milliseconds.
- Elapsed Duration (ms) - total time spent executing a job from queue to finish in milliseconds.
- Total Processing Duration (ms) - total time spent executing a job from queue, through waiting for prompt answers, to finish in milliseconds.
- Job Start Timestamp (UTC) - the timestamp (in UTC timezone) a job began execution.

access_transaction

Column | Description | Data-Type
tran_id | Auto-generated numeric action ID. This is the source column for the Action attribute and the Action fact. Action is the lowest level that is defined in the Platform Analytics project schema. | bigint(20)
parent_tran_id | Auto-generated numeric Parent Action ID. This is the source column of the Parent Action attribute. Parent Action is one level above Action and was introduced to account for the "bucketing" of multiple transactions within a single session/job. | bigint(20)
account_id | The ID of the account that performed the transaction. The account can be either a Badge or MicroStrategy metadata user. See lu_account for more details. | bigint(20)
tran_lat | The latitude point where the transaction occurred. Location analysis is specific to Badge transactions. This is the source column for the Latitude attribute. | double
tran_long | The longitude point where the transaction occurred. Location analysis is specific to Badge transactions. This is the source column for the Longitude attribute. | double
tran_timestamp | The standardized UTC timestamp when the transaction began. This is the source column for the ID form of the Timestamp attribute in the schema, and also the source column for the Action Timestamp (UTC) fact. | datetime
tran_date | The date (in UTC timezone) when the transaction occurred. This is the source column of the Date attribute. | date
minute_id | The minute (in UTC timezone) when the transaction occurred. This is the source column of the Minute attribute. See lu_minute for more information. | int(11)
local_timestamp | The local timestamp when the transaction occurred. The local time hierarchy is specific to Badge transactions and is based on the timezone of the mobile device running the Badge app. This is the source column of the Local Timestamp attribute. | datetime
local_tran_date | The local date when the transaction occurred. The local time hierarchy is specific to Badge logs and is based on the timezone of the mobile device running the Badge app. | date
local_minute_id | The local minute when the transaction occurred. The local time hierarchy is specific to Badge transactions and is based on the timezone of the mobile device running the Badge app. This is the source column of the Local Minute attribute. | int(11)
action_type_id | The Action Type ID corresponding to the specific action. This is the source column of the Action Type attribute. The full list of action types can be found in the section below. See lu_action_type for more information. | smallint(6)
mobile_app_id | The Mobile App version ID that was used to log a Badge transaction. MicroStrategy Mobile stats are not tracked. See lu_mobile_app for more details. | bigint(20)
gateway_id | The Identity gateway ID that was authenticated during a transaction. See the lu_gateway section for more details. | bigint(20)
session_id | The MicroStrategy session used to log in to the Intelligence server and project to perform the action. See lu_session for more details. | bigint(20)
os_id | The OS ID used to log a Badge transaction. MicroStrategy OS versions are not tracked. See lu_os for more details. | bigint(20)
device_id | The device used to perform the action. For Badge actions, it is the mobile device on which the Badge or Communicator app is installed. For MicroStrategy, it is the IP address of the client machine. See lu_device for more details. | bigint(20)
execution_time | The amount of time spent processing a MicroStrategy or Badge action (job) in milliseconds. This is the source column of the Execution Duration (ms) fact. | double
initial_queue_time | The amount of time a job waited in the queue before starting to be processed, in milliseconds. This is the source column of the Initial Queue Duration (ms) fact. | int(11)
prompt_answer_time | The amount of time a job spent waiting for the user to input an answer to prompts required to execute the job, in milliseconds. This is the source column of the Prompt Answer Duration (ms) fact. | int(11)
validating_account_id | The Badge account that was validated by another Badge account. See lu_validating_account for more details. | bigint(20)
address_id | The address_id corresponding to the lat/long for a Badge transaction. This column is specific to Badge transactions. See lu_address for more details. | bigint(20)
facility_address_id | The Facility street Address ID where the Badge transaction occurred. See lu_facility_address for more details. | bigint(20)
status_id | The status (success/error) for each transaction. ID values less than 1 represent errors for Badge actions. Values greater than 3 represent errors on the Intelligence Server. See lu_status for more details. | bigint(20)
network_id | The Network ID associated with the Badge transaction. All MicroStrategy actions are mapped to a default value. See lu_network for more details. | bigint(20)
beacon_id | The Beacon ID associated with the Badge transaction. If no beacon was used in the transaction (i.e., opening a logical application), a default ID will be mapped. MicroStrategy transactions do not have beacons and therefore always map to a default value. See lu_beacon for more details. | bigint(20)
space_id | The physical Space ID associated with the Badge transaction. If no beacon was used in the transaction (i.e., opening a logical application), a default ID will be mapped. MicroStrategy transactions do not have spaces and therefore always map to a default value. See lu_space for more details. | bigint(20)
application_id | The logical Application ID associated with the Badge transaction. If no application was used in the transaction (i.e., opening a physical space/door), a default ID will be mapped. MicroStrategy transactions do not have applications and therefore always map to a default value. See lu_application for more details. | bigint(20)
desktop_unlock_setting_id | The configured Desktop Unlock Setting ID associated with the Badge transaction. If no desktop was used in the transaction (i.e., opening a logical application), a default ID will be mapped. MicroStrategy transactions do not have desktops and therefore always map to a default value. See lu_desktop_unlock_setting for more details. | tinyint(4)
parent_job_id | The Parent Job ID for MicroStrategy actions. Not all actions have a Parent Job. If a document/dossier based on datasets is executed, the dataset jobs will have the same document job as the Parent Job ID. The Parent Job ID can be used to link the dataset child jobs. This is the source column of the Parent Job attribute. | bigint(20)
job_id | The Job ID corresponding to MicroStrategy execution actions. If a MicroStrategy action does not trigger a job, a default value will be mapped. This is the source column for the Job attribute. | bigint(20)
object_id | The Object ID corresponding to a MicroStrategy execution transaction. This could be a report, document, dossier, or cube Object ID. For MicroStrategy Session transactions, there is no object; therefore, the project_id is used. See lu_object for more details. | bigint(20)
connection_map_id | The Connection Map ID used for the MicroStrategy execution. The Connection Map is only available for report and cube executions. It represents the unique combination of database login, database instance, and database connection used in the execution. See lu_db_connection_map for more details. | bigint(20)
subscription_id | The Subscription ID when an execution was the result of a subscribed report/document/dossier. Not all MicroStrategy and Badge transactions have subscriptions, and therefore a default value may be mapped. See lu_subscription for more information. | bigint(20)
bar_code_id | The Bar Code ID associated with the Badge transaction. If no barcode was scanned in the transaction (i.e., opening a physical space/door), a default ID will be mapped. MicroStrategy transactions do not have bar codes and therefore always map to a default value. See lu_bar_code for more details. | bigint(20)
history_list_message_id | The History List Message ID corresponding to a MicroStrategy transaction. Not all transactions have a History List Message and, therefore, a default value may be mapped. See lu_history_list_message for more details. | bigint(20)
desktop_id | The Desktop ID associated with the Badge transaction. If no desktop was used in the transaction (i.e., opening a physical space/door), a default ID will be mapped. MicroStrategy transactions do not have desktops and, therefore, always map to a default value. See lu_desktop for more details. | bigint(20)
recipient_id | The Recipient ID represents the user who is subscribed to receive a subscription. This could be a MicroStrategy user in the metadata or an external contact added by a MicroStrategy user. See lu_recipient for more details. | bigint(20)
job_priority_id | The Job Priority ID represents the priority of a job to be executed and determines how long it will wait in the job execution queue. See lu_job_priority for more details. | int(11)
total_queue_time | The total amount of time a job spent waiting in queue (in milliseconds). This is the source column of the Total Queue Duration (ms) fact. | int(11)
row_count | The number of rows in the result set for an execution. This is the source column for the Row Count fact. | int(11)
step_count | The number of job steps executed for an execution. This is the source column for the Step Count fact. | int(11)
sql_pass_count | The number of SQL passes used for an execution. This is the source column for the SQL Pass Count fact. | int(11)
job_cpu_time | The time a job spent processing on the CPU during execution. This column is the source for the CPU Duration (ms) fact. | bigint(11)
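For example, the duration facts can be queried directly from the fact table. A minimal sketch (MySQL syntax assumed) listing the slowest actions of the last seven days:

-- Illustrative sketch: top 20 slowest actions in the last 7 days.
SELECT t.tran_id,
       t.tran_timestamp,
       t.execution_time,
       t.total_queue_time,
       t.row_count
FROM   access_transaction t
WHERE  t.tran_date >= CURDATE() - INTERVAL 7 DAY
ORDER  BY t.execution_time DESC
LIMIT  20;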


lu_action_type
The Action Type is the type of action the user performs, for example, creating a new subscription, exporting a report to PDF, or scanning a Badge QR code. The Action Type is a common attribute across both MicroStrategy and Badge transactions.

Column | Description | Data-Type
action_type_id | The numeric ID for the action type. | smallint(6)
action_type_desc | The detailed type of action the user performs. | varchar(255)
action_category_id | The numeric ID of the action category corresponding to the action type. | smallint(6)

lu_action_category
Action Category is a grouping of Action Types. It provides a broader analysis of user actions, i.e., Executions or Badge Actions. The Action Category attribute enables filtering when analysis of a single type of transaction is desired.

Column | Description | Data-Type
action_category_id | The numeric ID for the action category. | smallint(6)
action_category_desc | A categorization of the types of actions a user performs. | varchar(50)

List of Action Categories and Types and their descriptions:

The full list of Action Categories and Action Types is explained below. This list will continue to grow as Platform Analytics continues to process more detailed telemetry.


Action Category | Action Type | Explanation
MicroStrategy Badge Actions | Access by Bluetooth | Indicates that a Badge app user unlocked a physical location when their smartphone is detected near a Bluetooth badge reader using the Badge app.
MicroStrategy Badge Actions | Access by Key | Indicates a Badge app user unlocked physical resources, such as locked doors or offices, by tapping a key in the Badge app on their smartphone.
MicroStrategy Badge Actions | Access by QR Code | Indicates a Badge app user logged in to a web application by scanning a QR code using the Badge app on their smartphone.
MicroStrategy Badge Actions | Bluetooth Discovery | Indicates a Communicator app user discovered other Badge app users within the same vicinity by detecting the Bluetooth signal of other users.
MicroStrategy Badge Actions | Validate Badge by QR Code | Indicates one Badge app user validated another user by scanning their unique QR code.
MicroStrategy Badge Actions | Validate Badge by Usher Code | Indicates one Badge app user validated another user by entering their Badge Code. The Badge Code is unique to each user and can be configured to change routinely. For example, this code can be given over the phone to identify yourself if you are talking to someone who does not know you. By default, the Badge Code is 4 or 8 digits in length and updates every hour.
MicroStrategy Badge Actions | Badge Opened | Indicates a Badge user opened their Badge app on their smartphone.
MicroStrategy Badge Actions | Windows Unlock by QR Code | Indicates a Badge user signed in to their Windows computer by using their smartphone to scan a QR code displayed on their computer screen.
MicroStrategy Badge Actions | Access by Push Notification | Indicates a Badge user received a confirmation message (push notification) on the smartphone that has the Badge app installed. The user must tap the confirmation message to verify their identity and finish logging in.
MicroStrategy Badge Actions | Badge Code as Second Password | On the VPN login page, the user can type the Badge Code displayed in the Badge app on their smartphone in order to log in.
MicroStrategy Badge Actions | Single Sign-On | Indicates a user logged in to a Badge web application using SAML authentication.
MicroStrategy Badge Actions | Access by Beacon | Indicates that a Badge user unlocked a physical location when their smartphone was detected near a beacon.
MicroStrategy Badge Actions | Desktop Unlock | Indicates a Badge user paired a smartphone device with their Mac computer.
MicroStrategy Badge Actions | Desktop Unlock Pairing | Indicates a Badge user paired a smartphone device with their Mac computer.
MicroStrategy Badge Actions | Scanned Barcode | Indicates a user scanned a barcode using the Badge app. The exact barcode string is stored in the lu_barcode table.
MicroStrategy Badge Actions | Generate Badge Code | Indicates that a new Badge Code was generated in the Badge app. The Badge Code is unique to each user and can be configured to change routinely. By default, the Badge Code is 4 or 8 digits in length and updates every hour.
MicroStrategy Badge Actions | Agree to Badge Policy | Indicates that a user agreed to the Badge Policy.
MicroStrategy Badge Actions | Delete Badge from Device | Indicates that a user removed a badge from the Badge app on their mobile device.
MicroStrategy Badge Actions | Upload Badge Photo | Indicates a user uploaded a new photo for their badge.

MicroStrategy Badge Management | Device Enrollment Request | Device enrollment requires a user to associate a mobile phone number with their Badge. Device Enrollment Request indicates a user submitted a request for a device enrollment code to be sent to a secure phone number.
MicroStrategy Badge Management | Device Enrollment Verified | Indicates the device enrollment code was entered and verified successfully through the Badge app.
MicroStrategy Badge Location Tracking | Location Tracking | You can log when and where members of the network use the Badge app by enabling location tracking for a badge. Location Tracking indicates that the smartphone moved a 500 m distance based on the long/lat values, or that 5 minutes elapsed.
MicroStrategy Badge Location Tracking | Beacon Enter | Indicates a Badge app on a smartphone was detected within the range of a beacon for the first time. The beacon must be configured to "log user location" in Network Manager.
MicroStrategy Badge Location Tracking | Beacon Exit | Indicates a Badge app on a smartphone was detected leaving the range of a beacon. The beacon must be configured to "log user location" in Network Manager.
MicroStrategy Logins | Server Login | A user logged in to a MicroStrategy Intelligence Server.
MicroStrategy Logins | Server Logout | A user logged out of a MicroStrategy Intelligence Server. When a user logs out of a server session, they are also automatically logged out of a project.
MicroStrategy Logins | Project Login | A user logged in to a MicroStrategy project manually. When a user logs in to a project, a server login is automatically created.
MicroStrategy Logins | Project Logout | A user logged out of a MicroStrategy project manually.
MicroStrategy Logins | Scheduler Login | A user logged in to a MicroStrategy project by scheduler. When a user logs in to a project, a server login is automatically created.
MicroStrategy Logins | Scheduler Logout | A user logged out of a MicroStrategy project by scheduler.

Cube Cache Hit | Download MSTR File with Cube Cache Hit | Indicates a user exported a cube-based dossier object as an MSTR file and hit the cube cache.
Cube Cache Hit | Execute with Cube Cache Hit | Indicates a user executed a view report or a cube-based document or dossier and hit the cube cache.
Cube Cache Hit | Export to Excel with Cube Cache Hit | Indicates a user exported a view report or a cube-based document to Excel and hit the cube cache.
Cube Cache Hit | Export to PDF with Cube Cache Hit | Indicates a user exported a view report, a cube-based document, or a cube-based dossier to PDF and hit the cube cache.
Cube Cache Hit | Export to CSV with Cube Cache Hit | Indicates a user exported a view report to CSV and hit the cube cache.
Cube Cache Hit | Export to Plain Text with Cube Cache Hit | Indicates a user exported a view report to plain text and hit the cube cache.
Cube Cache Hit | Export to HTML with Cube Cache Hit (Developer Only) | Indicates a user exported a cube-based document to HTML from a History List subscription and hit the cube cache.
Cache Hit | Execute with Cache Hit | Indicates a user executed a report, document, or dossier and hit the cache.
Cache Hit | Export to Excel with Cache Hit | Indicates a user exported a normal report or a report-based document to Excel and hit the cache.
Cache Hit | Export to PDF with Cache Hit | Indicates a user exported a normal report, a report-based document, or a report-based dossier to PDF and hit the cache.
Cache Hit | Export to CSV with Cache Hit | Indicates a user exported a normal report to CSV and hit the cache.
Cache Hit | Export to Plain Text with Cache Hit | Indicates a user exported a normal report to plain text and hit the cache.
Cache Creation | Execute and Create Cache | Indicates a user executed a report, document, dossier, or cube and created a cache.

Cube Modifications | Delete Cube Cache | To change the Intelligent Cube status, right-click the cube in the Cube Monitor and select one of the actions. Cube Delete indicates a user removed a published Intelligent Cube as an accessible set of data for multiple reports from the Cube Monitor.
Cube Modifications | Activate Cube Cache | To change the Intelligent Cube status, right-click the cube in the Cube Monitor and select one of the actions. Cube Activate loads a previously deactivated Intelligent Cube as an accessible set of data for multiple reports.
Cube Modifications | Deactivate Cube Cache | To change the Intelligent Cube status, right-click the cube in the Cube Monitor and select one of the actions. Cube Deactivate removes an Intelligent Cube instance from Intelligence Server memory, but saves it to secondary storage, such as a hard disk.
Cube Modifications | Load Cube Cache | To change the Intelligent Cube status, right-click the cube in the Cube Monitor and select one of the actions. Loading a cube moves an Intelligent Cube from your machine's secondary storage to Intelligence Server memory.
Cube Modifications | Unload Cube Cache | To change the Intelligent Cube status, right-click the cube in the Cube Monitor and select one of the actions. Unloading a cube moves an Intelligent Cube from Intelligence Server memory to your machine's secondary storage, such as a hard disk.

Execution | Download MSTR File | Indicates a user exported a dossier object based on report datasets as an MSTR file.
Execution | Execute | Indicates a user executed a report, document, or dossier without hitting any cache.
Execution | Execute and Export to HTML (Developer Only) | Indicates a user exported a report-based document to HTML from a History List subscription without hitting any cache.
Execution | Execute and Export to Excel | Indicates a user exported a normal report or a report-based document to Excel without hitting any cache.
Execution | Execute and Export to PDF | Indicates a user exported a report-based document or dossier to PDF without hitting any cache.
Execution | Execute and Export to CSV | Indicates a user exported a report to CSV without hitting any cache.
Execution | Execute and Export to Plain Text | Indicates a user exported a report to plain text without hitting any cache.
Execution | Execute Report SQL View | Indicates a user right-clicked a normal report and viewed its SQL in Developer.
Execution | Execute with Dynamically Sourced Cube Cache Hit | Indicates a user exported a report using a dynamically sourced cube and hit the cache.

Cube Executions | Republish cube data via update | The Intelligent Cube is updated (republished) based on the cube republish settings. The Intelligent Cube's Republish Policy is evaluated. If the data returned is already in the Intelligent Cube, it is updated where applicable. (If there are multiple table sources, all of them should have the same Republish Policy.)
Cube Executions | Republish cube data via append | The Intelligent Cube is updated (republished) based on the cube republish settings. The Intelligent Cube's Republish Policy is evaluated. If new data is available, it is fetched and added to the Intelligent Cube. (If there are multiple table sources, all of them should have the same Republish Policy.)
Cube Executions | Republish cube data dynamically | The Intelligent Cube is updated (republished) based on the cube republish settings. The Intelligent Cube's Republish Policy is evaluated. If new data is available, it is fetched and added to the Intelligent Cube, and data that no longer meets the filter's criteria is deleted from the Intelligent Cube. (If there are multiple table sources, all of them should have the same Republish Policy.)
Cube Executions | Republish cube data via upsert | The Intelligent Cube is updated (republished) based on the cube republish settings. The Intelligent Cube's Republish Policy is evaluated. If new data is available, it is fetched and added to the Intelligent Cube, and if the data returned is already in the Intelligent Cube, it is updated where applicable. (If there are multiple table sources, all of them should have the same Republish Policy.)
Cube Executions | Refresh cube data by appending via filter | The incremental refresh filter is evaluated. If new data is available, it is fetched and added to the Intelligent Cube. Data that was already in the Intelligent Cube is not altered.
Cube Executions | Refresh cube data by deleting via filter | The incremental refresh filter is evaluated. The data that meets the filter's definition is deleted from the cube. For example, if the Intelligent Cube contains data for 2008, 2009, and 2010, and the filter returns data for 2009, all the data for 2009 is deleted from the cube.
Cube Executions | Refresh cube data by updating via filter | The incremental refresh filter is evaluated. If the data available is already in the Intelligent Cube, it is updated where applicable. No new data is added to the Intelligent Cube.
Cube Executions | Refresh cube data by upserting via filter | The incremental refresh filter is evaluated. If new data is available, it is fetched and added to the Intelligent Cube, and if the data returned is already in the Intelligent Cube, it is updated where applicable.
Cube Executions | Refresh cube data by appending via report | The incremental refresh report is evaluated. If new data is available, it is fetched and added to the Intelligent Cube. Data that was already in the Intelligent Cube is not altered.
Cube Executions | Refresh cube data by deleting via report | The incremental refresh report is evaluated. The data that meets the report's definition is deleted from the cube. For example, if the Intelligent Cube contains data for 2008, 2009, and 2010, and the filter returns data for 2009, all the data for 2009 is deleted from the cube.
Cube Executions | Refresh cube data by updating via report | The incremental refresh report is evaluated. If the data available is already in the Intelligent Cube, it is updated where applicable. No new data is added to the Intelligent Cube.
Cube Executions | Refresh cube data by upserting via report | The incremental refresh report is evaluated. If new data is available, it is fetched and added to the Intelligent Cube, and if the data returned is already in the Intelligent Cube, it is updated where applicable.
Cube Executions | Cube Publish | Indicates a user published an Intelligent Cube from any tool (Web, Developer, etc.). The Intelligent Cube's SQL is re-executed, and all the data is loaded from the data warehouse into Intelligence Server's memory.
Cube Executions | Republish DI Cube with Multi-refresh Policies | The Intelligent Cube is updated (republished) based on the cube republish settings. The Intelligent Cube's Republish Policy is evaluated. If there are different (multiple) policies for different tables, the data will be updated based on each table's settings.

Edit | Edit in Design Mode | Indicates a user opened a document or dossier in Design Mode.
Manipulations | Export to PDF | Indicates a grid or a visualization in a dossier is exported to PDF after the dossier is executed.
Manipulations | Export to HTML (Developer Only) | Indicates a grid or a visualization in a dossier is exported to HTML after the dossier is executed.
Manipulations | Export to CSV | Indicates a grid or a visualization in a dossier is exported to Data after the dossier is executed.
Manipulations | Export to Plain Text | Indicates a grid or a visualization in a dossier is exported to plain text after the dossier is executed.
Manipulations | Export to Excel | Indicates a grid in a dossier is exported to Excel after the dossier is executed.
Manipulations | Manipulation | Indicates a user manipulated a dossier, for example, applied a filter element.
Manipulations | Partial Dataset Execution | Indicates a user executed a report-based document from Developer.
Modify History List Messages | Change History List Message Status | Indicates a user modified the History List message. A modification can be changing the status from read to unread or vice versa.
Modify History List Messages | Create History List Message | Indicates a MicroStrategy user created a History List message, see Adding reports and documents to the History List.
Modify History List Messages | Delete History List Message | Indicates a MicroStrategy user deleted a History List message, see Maintaining your History List.
Modify History List Messages | Rename History List Message | Indicates a MicroStrategy user renamed the History List message, see Maintaining your History List.
Modify History List Messages | View History List Message | Indicates a MicroStrategy user viewed the History List message of a report, document, or dossier, see About viewing reports and documents in your History List.

Status (errors)
The Status and Status Category attributes are used to track the success/failure of both MicroStrategy and Badge transactions.

Status Category provides a high-level grouping to analyze individual error messages. Status is the exact error message that is recorded in the logs. Most errors in the logs are recorded at the unique job and session level. Therefore, when trying to determine the most frequent error occurring in a MicroStrategy environment, a count of errors at the Status level almost always results in 1. To analyze aggregated errors in the system, create a report with Count (Status) at the level of Status Category. Status Category is a parent of Status.
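The same aggregation can be expressed directly against the warehouse. A minimal sketch (MySQL syntax assumed) counting non-successful transactions per Status Category:

-- Illustrative sketch: error frequency at the Status Category level.
SELECT sc.status_category_desc,
       COUNT(*) AS transactions
FROM   access_transaction t
JOIN   lu_status s           ON s.status_id           = t.status_id
JOIN   lu_status_category sc ON sc.status_category_id = s.status_category_id
WHERE  sc.status_category_desc <> 'Success'
GROUP  BY sc.status_category_desc
ORDER  BY transactions DESC;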


lu_status_category
The categorization of an action status recorded in the logs.

Column | Description | Data-Type
status_category_id | The auto-generated numeric ID of the status category. | smallint(6)
status_category_desc | The categorization of the action status. The elements include: Success - indicates a successful transaction for both Badge and MicroStrategy action types; Database Error Occurred - indicates that an issue occurred at the database level, causing the problem; Failed Subscription Delivery - indicates that delivery of a subscribed object was unsuccessful; Cancelled - indicates that the job/action was cancelled before it could finish; Denied - a Badge transaction failed; or an error message category generated by the MicroStrategy Intelligence server. | varchar(25)

lu_status
The status of whether the action by the user was successful, denied, or resulted in a specific error message. For Badge, there is a set list of denied types. For MicroStrategy, the exact error message is recorded. The status_desc column stores the exact error message recorded from the Intelligence Server logs and can be used to analyze the unique job execution details.


Column | Description | Data-Type
status_id | The auto-generated numeric ID of the status. | bigint(20)
status_desc | The status of whether the action by the user was successful, denied, or resulted in a specific error. The elements include: VPN Access Denied - a denied action that occurs when trying to log in to a VPN Badge gateway; Physical Access Denied - a denied action that occurs when trying to access a physical space that you are unauthorized to access; Push Notification Denied - a denied action that occurs when trying to approve access via push notification; QR Code Expired - a denied action that occurs when scanning a QR code that has expired; Invalid Badge - a denied action that occurs when trying to access one network with another network's badge; Denied - when Badge cannot determine why there was a failure, the generic "Denied" status is assigned; or an error message generated by the MicroStrategy Intelligence Server. | varchar(4096)
status_category_id | The numeric ID of the status category corresponding to the action status. | smallint(6)
db_error_indicator | A flag representing whether the status was the result of a database error. | tinyint(4)
cancel_indicator | A flag representing whether the job was cancelled or not. | tinyint(4)


Job and Session

Every MicroStrategy execution will have a corresponding Job. A Job is any request to the system submitted by users from the MicroStrategy platform. The job is stored in the fact_access_transaction_view fact table. Jobs may include scheduled or ad-hoc report or document executions. Some MicroStrategy actions do not have jobs; in these cases, default values are applied. The chart below explains the default values.

Action Types | Default Value
All Badge Action Types | -1
History List Modifications (109, 122, 156, 157, 158, 159) | -2
Cube Modifications (161, 162, 163) | -2
MicroStrategy Logins (100, 101, 102, 103) | -3
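Because non-job actions map to these negative defaults, job-level analysis should filter them out. A minimal sketch (MySQL syntax assumed):

-- Illustrative sketch: count only transactions that triggered a real job,
-- excluding the default values -1, -2, and -3 described above.
SELECT COUNT(*) AS job_transactions
FROM   access_transaction t
WHERE  t.job_id NOT IN (-1, -2, -3);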


A Parent Job is the result of a job triggering another child job. For example, when a document with reports as datasets is executed, it will first create a document job, which will trigger several child jobs for report execution. In this example, the job associated with the document execution is the parent job of the report execution jobs. A standalone report execution will not have a parent job.

lu_job_step_type
This table lists the Intelligence Server tasks involved in executing a report or a document. Below is a list of all the possible values for Job Step.

Column | Description | Data-Type
step_type_id | The fixed numeric ID for the document or report job step type. | int(11)
step_type_desc | The Job Step Type that was executed against the Intelligence server. Job Step Types can include the following list. | varchar(255)

Job Step Type values:

- MD Object Request
- Close Job
- SQL Engine
- SQL Execution
- Analytical Engine
- Resolution Server
- Report Net Server
- Element Request
- Get Report Instance
- Error Message Send
- Output Message Send
- Find Report Cache
- Document Execution
- Document Send
- Update Report Cache
- Request Execute
- Datamart Execute
- Document Data Preparation
- Document Formatting
- Document Manipulations
- Apply View Context
- Export Engine
- Find Cube Task
- Update Cube Task
- Post-processing Task
- Delivery Task
- Persist Result Task
- Document Dataset Execution Task
- Document Process Report with Prompt
- Data Import Data Preparation
- Remote Server Execution
- Import Dashboards Async
- Job Processing Last Step


Job Step Types and Descriptions:

Job Step Type | Description
MD Object Request | Requesting an object definition from the project metadata.
Close Job | Closing a job and removing it from the list of pending jobs.
SQL Engine | Generating the SQL required to retrieve data, based on the schema.
SQL Execution | Executing the SQL that was generated for the report.
Analytical Engine | Applying analytical processing to the data retrieved from the data source.
Resolution Server | Loading the definition of an object.
Report Net Server | Transmitting the results of a report.
Element Request | Attribute element browsing.
Get Report Instance | Retrieving a report instance from the metadata.
Error Message Send | Sending an error message.
Output Message Send | Sending a message other than an error message.
Find Report Cache | Searching or waiting for a report cache.
Document Execution | Executing a document.
Document Send | Transmitting a document.
Update Report Cache | Updating report caches.
Request Execute | Requesting the execution of a report.
Datamart Execute | Executing a datamart report.
Document Data Preparation | Constructing a document structure using data from the document's datasets.
Document Formatting | Exporting a document to the requested format.
Document Manipulation | Applying a user's changes to a document.
Apply View Context | Reserved for future use.
Export Engine | Exporting a document or report to PDF, plain text, an Excel spreadsheet, or XML.
Find Cube Task | The cube instance is located from the Intelligent Cube Manager when a subset report, or a standard report that uses dynamic caching, is executed.
Update Cube Task | The cube instance is updated from the Intelligent Cube Manager when republishing or refreshing a cube.
Post-processing Task | Reserved for future functionality.
Delivery Task | Used by Distribution Services for email, file, or printer deliveries of subscribed-to reports/documents.
Persist Result Task | Persists execution results, including History List and other condition checks. All subscriptions hit this step, although only subscriptions that persist results (such as History List) perform actions in this step.
Document Dataset Execution Task | A virtual task used only by Statistics Manager and Enterprise Manager to record the time spent on dataset execution.
Document Process Report with Prompt | Triggered after the SQL Engine step discovers prompts; collects unanswered prompts and presents them to the client. After the answers are received, launches jobs to execute the datasets that contain unanswered prompts.
Data Import Data Preparation Task | Prepares the data for multiple tables in Data Import cubes.
Remote Server Execution Task | Direct access on a remote MicroStrategy project.
Import Dashboards Async Task | Asynchronous import of dashboards.

fact_step_sequence_view
This table is used when the Document and/or Report Job Steps option is
enabled for Advanced Statistics logging via Command Manager. It stores
information on each processing step of a document/dossier/report
execution. It is best used for troubleshooting the performance of an object at
the job level.

There are five facts sourced from this table:

- Job Step Start Timestamp (UTC) - the timestamp (in UTC timezone) when the Job Step begins.
- Job Step Finish Timestamp (UTC) - the timestamp (in UTC timezone) when the Job Step finishes.
- Job Queue Duration (ms) - the time spent waiting in queue for the job to be executed, in milliseconds.
- Job CPU Duration (ms) - the time spent on CPU during the job execution, in milliseconds.
- Job Step Duration (ms) - the total execution time for the job execution, in milliseconds.

Column | Description | Data-Type
parent_tran_id | The auto-generated numeric action ID. | bigint(20)
step_sequence_id | The sequence number ID for each job's steps. Used to determine the order in which the steps were executed on the Intelligence server. | int(11)
step_type_id | The numeric ID of the job step type for the document/dossier/report job execution. | int(11)
step_start_timestamp | The UTC timestamp when the job step started. | datetime
step_finish_timestamp | The UTC timestamp when the job step finished. | datetime
job_queue_time | The queue duration in milliseconds. | bigint(20)
job_cpu_time | The CPU duration in milliseconds. | bigint(20)
step_duration_time | The total execution duration in milliseconds. | bigint(20)
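When job steps are logged, a single action can be broken down step by step. A minimal sketch (MySQL syntax assumed; 12345 is a hypothetical Parent Action ID):

-- Illustrative sketch: ordered step breakdown for one action.
SELECT s.step_sequence_id,
       jt.step_type_desc,
       s.step_duration_time,
       s.job_queue_time
FROM   fact_step_sequence_view s
JOIN   lu_job_step_type jt ON jt.step_type_id = s.step_type_id
WHERE  s.parent_tran_id = 12345  -- hypothetical Parent Action ID
ORDER  BY s.step_sequence_id;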


lu_session_view
Each user that connects to the MicroStrategy Intelligence server and/or project has a unique Session connection GUID. A user cannot log in to a project without first having a session to the Intelligence server. However, a user can have a session to the Intelligence server without connecting to a project (i.e., performing administrative tasks in Developer). The lu_session_view table tracks the unique session connection information at the project and metadata level.

For each unique user Session that is created, there will be an Intelligence Server Instance, a Session Source, a Client Server Machine, and a Device.

Column | Description | Data-Type
session_id | The auto-generated numeric ID value for each unique session. | bigint(20)
session_guid | The GUID of the Session. | varchar(32)
iserver_instance_id | The numeric ID of the Intelligence Server Instance that was connected to for the session. Not all session connections have an applicable I-Server Instance, for example, scheduled jobs. | bigint(20)
client_server_machine_id | The Client Server Machine IP that was connected to for the session. Not all session connections have a client server machine. | bigint(20)
session_source_id | The ID of the session source that was used to establish the user session connection. | bigint(20)
metadata_id | The metadata ID for which the user session was connected. | bigint(20)
device_id | For MicroStrategy actions (executions, sessions, etc.), the IP address of the machine from which the session was created. | bigint(20)
connection_time | The timestamp of when the session was opened. The mapping of this column to the Platform Analytics project schema is pending. | datetime

lu_session_source
Each Session that is created as a user connection to the Intelligence server and project has a source. The Session Source represents the client or tool that the user used to establish the connection.

Column | Description | Data-Type
session_source_id | The fixed numeric ID value for the Session Source. | bigint(20)
session_source_desc | The specific Session Source that was used to connect to the Intelligence server and/or project. | varchar(255)

The Session Source can be:

0 Not Applicable
1 Developer
2 Intelligence Server Administrator
3 Web Administrator
4 Intelligence Server
5 Project Upgrade
6 Web
7 Scheduler
8 Custom Application
9 Narrowcast Server
10 Object Manager
12 Odbo Cube Designer
13 Command Manager
14 Enterprise Manager
15 Command Line Interface
16 Project Builder
17 Configuration Wizard
18 MD Scan
19 Cache Utility
20 Fire Event
21 Java Admin Clients
22 Web Services
23 Office
24 Tools
25 Portal Server
26 Integrity Manager
27 Metadata Update
28 COM Browser
29 Mobile
30 Repository Translation Wizard
31 Health Center
32 Cube Advisor
34 Desktop
35 Library
36 Library iOS
37 Workstation
39 Library Android
40 Workstation MacOS
41 Workstation Windows
42 Desktop MacOS
43 Desktop Windows
44 Tableau
45 Qlik
46 PowerBI
47 Microsoft Office
48 Hyper Browser Chrome
49 Hyper Mobile iOS
50 Hyper Mobile Android
51 Hyper Office Outlook Web
52 Hyper Office Outlook Windows
53 Hyper Office Outlook Mac
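
For example, sessions can be counted by client type by joining lu_session_view to this lookup table. A minimal sketch, assuming a MySQL-compatible repository:

    -- Session counts by the client or tool that opened the session.
    SELECT ss.session_source_desc,
           COUNT(*) AS sessions
    FROM lu_session_view s
    JOIN lu_session_source ss
      ON ss.session_source_id = s.session_source_id
    GROUP BY ss.session_source_desc
    ORDER BY sessions DESC;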

lu_sql_pass_type
This table stores the static list of SQL Pass Types. Each SQL Pass that is recorded in the fact_sql_stats table will have a corresponding SQL Pass Type.

Column | Description | Data-Type
sql_pass_type_id | The fixed numeric ID for the SQL Pass Type. | int(11)
sql_pass_type_desc | The descriptive name for the SQL Pass Type. | varchar(255)

The SQL Pass Type can include:

l Select

l Insert Into Select

l Create Table

l Analytical

l Select Into

l Insert into Values

l Homogeneous Partition Query

l Heterogeneous Partition Query

l Metadata Partition Pre-Query

l Metadata Partition Last Pre-Query

l Empty

l Create Index

l Metric Qualification Break By

l Metric Qualification Threshold

l Metric Qualification

l User Defined

l Homogeneous Partition Loop

l Homogeneous Partition One Table

l Heterogeneous Partition Loop

l Heterogeneous Partition One Table

l Insert Fixed Values Into

l Datamart From Analytical Engine

l Cleanup Temp Resources

l Return Element Number

l Incremental Element Browsing

l MDX Query

l Sap Bapi

l Intelligent Cube Instruction

l Heterogeneous Data Access

l Excel File Data Import

l Text File Data Import

l Database Table Data Import

l SQL Data Import

l Data Import Excel File

l Data Import Text File

l Data Import Table

l Data Import Custom SQL

l Data Import OAuth

l Data Import Open Refine

l SQL Incremental Data Transfer

l Data Import Cube From File

lu_sql_clause_type
This table stores the static list of SQL Clause Types. Each SQL Pass that is
recorded in the fact_sql_stats table will have a corresponding SQL Clause
Type.

Column | Description | Data-Type
sql_clause_type_id | The fixed numeric ID value for the SQL Clause Type. | smallint(6)
sql_clause_type_desc | The descriptive name for the SQL Clause Type. | varchar(255)

The SQL Clause Type can be:

0 Not Applicable
1 Select
2 Select Group By
4 Select Aggregate
8 From
16 Where
17 Order By

fact_sql_stats
This table contains the SQL Pass information that is executed on the warehouse during report job executions. Each SQL Pass is recorded at the Parent Action level, and one action can correspond to multiple SQL Passes.

One report execution (Parent Action) can have multiple SQL Pass Sequences.

This fact table is best used for performance analysis of report execution times to determine inefficient report definitions. Data is available only when the Advanced Statistics option is enabled during configuration in Command Manager.

The fact_sql_stats table is the source for the facts listed below:

l SQL Pass Duration (ms) - records the SQL Pass execution duration in milliseconds.

l SQL Pass End Timestamp - records the UTC timestamp when the SQL Pass finishes.

l SQL Pass Start Timestamp - records the UTC timestamp when the SQL Pass begins.

l SQL Pass Tables Accessed - records the number of tables hit during the SQL pass.

Column | Description | Data-Type
parent_tran_id | The auto-generated transaction ID for each report that is executed on the warehouse. Each Parent Action can correspond to multiple SQL Passes. | bigint(20)
sql_pass_id | The auto-generated SQL Pass ID for each execution. This is the primary key on the table. | bigint(20)
sql_pass_sequence_id | The sequence number of the SQL pass. | int(11)
sql_pass | The exact SQL used in the pass. | longtext
sql_start_timestamp | The UTC timestamp when the SQL Pass began. | timestamp
sql_end_timestamp | The UTC timestamp when the SQL Pass finished. | timestamp
sql_pass_type_id | The numeric ID corresponding to the SQL Pass Type. For example, Create Index, Insert Into Values, or Incremental Element Browsing. | int(11)
execution_time | The total time spent on the SQL Pass statement, defined as the difference between the end and start timestamps. | bigint(20)
total_tables_accessed | The number of tables hit by the SQL pass. This is the source column for the SQL Pass Tables Accessed fact. | smallint(6)
db_error_id | The auto-generated error ID for a DB error encountered during SQL execution. See lu_db_error for more details. | bigint(20)
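
For example, the longest-running passes, together with their pass type and any database error text, can be listed with a simple three-way join. A minimal sketch, assuming a MySQL-compatible repository:

    -- Twenty slowest SQL passes, with pass type and any DB error message.
    SELECT f.parent_tran_id,
           f.sql_pass_sequence_id,
           t.sql_pass_type_desc,
           f.execution_time AS duration_ms,
           e.db_error_desc
    FROM fact_sql_stats f
    JOIN lu_sql_pass_type t
      ON t.sql_pass_type_id = f.sql_pass_type_id
    LEFT JOIN lu_db_error e            -- not every pass has an error
      ON e.db_error_id = f.db_error_id
    ORDER BY f.execution_time DESC
    LIMIT 20;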

lu_db_error
This table stores the list of Database Error Messages. Each SQL Pass that
is recorded in the fact_sql_stats table will have a corresponding db_error_
id.

Column | Description | Data-Type
db_error_id | The auto-generated ID for the database error. | bigint(20)
db_error_desc | The full text of the database error message returned from the server. | varchar(4096)

fact_report_columns

Column | Description | Data-Type
parent_tran_id | The auto-generated parent transaction ID for each report that is executed on the warehouse. | bigint(20)
column_id | The auto-generated column ID that was hit during that report execution. | bigint(20)
sql_clause_type_id | The SQL Clause Type ID that corresponds to the type of SQL clause that was executed against the given column/table. See lu_sql_clause_type for more details. | smallint(6)
table_id | The auto-generated Table ID that the SQL statement was run against. This is the source column for the Database Table attribute. See lu_db_table_view for more details. | bigint(20)
column_hit_count | The number of times the column/table/clause type combination occurs within an execution. This is the source column for the Column Hit Count fact. | int(11)
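
For example, the most frequently hit column/table combinations can be aggregated from this table. A minimal sketch, assuming a MySQL-compatible repository:

    -- Top twenty column/table combinations by total hit count.
    SELECT column_id,
           table_id,
           SUM(column_hit_count) AS total_hits
    FROM fact_report_columns
    GROUP BY column_id, table_id
    ORDER BY total_hits DESC
    LIMIT 20;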

Distribution Services Hierarchy

lu_recipient
The Recipient table is used to track the contact that received a Distribution Services message from a MicroStrategy Account. The Recipient can be:

1. A user object in the metadata: the name and email are the same as the values stored in the metadata.

2. An external email contact: when the recipient is an external contact, the email and name attribute forms will have the same values.

3. The account itself: an account can send a message directly to itself.

For more information about different types of contacts, see Creating and Managing Contacts for MicroStrategy Distribution Services.

The recipient_id column is recorded along with job executions in MicroStrategy. When a new Distribution Services message is received with a recipient, a new entry is added into the lu_recipient and fact_access_transaction tables. Recipient_ids are shared with lu_entity.

Only executions related to subscriptions will have a valid recipient. All ad-hoc object executions have a default recipient assigned to them. For example, a user who is executing a report does not have a recipient; in these logs, a default (recipient_id = -1) is assigned. To analyze subscription executions only, exclude recipient_id = -1, as shown in the sketch after the following table.

Column | Description | Data-Type
recipient_id | The auto-generated ID for the recipient. | bigint(20)
recipient_guid | The GUID of the recipient. | varchar(32)
recipient_name | The name of the recipient who received the message. | varchar(255)
recipient_address | The email address or file path of the recipient who received the message. | varchar(512)
metadata_id | The numeric ID of the metadata. Recipients are stored at the metadata level. | int(11)
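
A minimal sketch of the exclusion described above, assuming a MySQL-compatible repository and assuming that fact_access_transaction exposes the recipient_id column referenced earlier in this section:

    -- Subscription deliveries per recipient; recipient_id = -1 marks
    -- ad-hoc executions and is excluded.
    SELECT r.recipient_name,
           COUNT(*) AS deliveries
    FROM fact_access_transaction f
    JOIN lu_recipient r
      ON r.recipient_id = f.recipient_id
    WHERE f.recipient_id <> -1
    GROUP BY r.recipient_name
    ORDER BY deliveries DESC;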

lu_subscription_base
In MicroStrategy, it is possible to trigger one subscription which is sent to multiple users at the same time. In this case, there will be a Parent Subscription, which is linked to child Subscriptions. The lu_subscription_base table is used to track both the Parent and child Subscriptions. If a Subscription does not have a parent, the same ID is repeated.

Column | Description | Data-Type
subscription_id | The auto-generated numeric ID for the Subscription object. | bigint(20)
subscription_guid | The GUID of the subscription stored in the metadata. | varchar(32)
subscription_name | The name of the Subscription stored in the metadata. | varchar(255)
parent_subscription_id | The numeric ID for the Parent Subscription. If a Subscription does not have a Parent Subscription, the ID will be the same as subscription_id. | bigint(20)
subscription_url_j2ee | The HTML link for managing the subscription on a Java-based web server. | varchar(8192)
subscription_url_dotnet | The HTML link for managing the subscription on a .NET-based web server. | varchar(8192)
creation_timestamp | The UTC timestamp when the subscription was first created. | datetime
modification_timestamp | The latest UTC timestamp of when the Subscription was last modified. The timestamp will continue to update as the subscription is modified. | datetime
delivery_format_id | The format in which a subscription is delivered to a user, for example, PDF, Excel, or CSV. See lu_delivery_format for more details. | smallint(6)
subscription_status | The latest status of the Subscription. The status can be Active, Inactive, or Deleted. | varchar(25)
schedule_id | The ID of the corresponding schedule of the subscription. | bigint(20)
subscription_type_id | The ID of the type for the subscription. | int(11)
object_id | The ID of the object which was subscribed. | bigint(20)
owner_id | The ID of the user who owns the Subscription. | bigint(20)
metadata_id | The ID of the metadata where the subscription was created. | bigint(20)
transaction_timestamp | MicroStrategy internal use. | datetime
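
For example, active subscriptions and their triggering schedules can be listed by joining this table to lu_schedule (described later in this hierarchy). A minimal sketch, assuming a MySQL-compatible repository:

    -- Active subscriptions with the schedule that triggers them.
    SELECT sb.subscription_name,
           sc.schedule_name,
           sb.subscription_status
    FROM lu_subscription_base sb
    JOIN lu_schedule sc
      ON sc.schedule_id = sb.schedule_id
    WHERE sb.subscription_status = 'Active';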

lu_subscription
In MicroStrategy, it is possible to trigger one subscription which is sent to multiple users at the same time. In this case, there will be a Parent Subscription, which is linked to child Subscriptions. The lu_subscription view table tracks Subscriptions created in the metadata(s) being monitored. For more details about creating subscriptions, see Scheduling reports and documents: Subscriptions. Note that parent subscriptions are not included in this view table; see lu_parent_subscription for more details about parent subscriptions.

The value of the subscription_status column could be "Invalid" in multiple scenarios:

1. The object that the subscription is subscribed to is deleted.

2. The subscription expires.

3. The user who created the subscription is deleted.

4. The project in which the subscription was created is deleted.

View Table Column | Warehouse Table Column | Description | Data-Type
subscription_id | subscription_id | The auto-generated numeric ID for the Subscription object. | bigint(20)
subscription_guid | subscription_guid | The GUID of the subscription stored in the metadata. | varchar(32)
subscription_name | subscription_name | The name of the Subscription stored in the metadata. | varchar(255)
parent_subscription_id | parent_subscription_id | The numeric ID for the Parent Subscription. If a Subscription does not have a Parent Subscription, the ID will be the same as subscription_id. | bigint(20)
subscription_url_j2ee | subscription_url_j2ee | The HTML link for managing the subscription on a Java-based web server. | varchar(8192)
delivery_format_id | delivery_format_id | The format in which a subscription is delivered to a user, for example, PDF, Excel, or CSV. See lu_delivery_format for more details. | smallint(6)
subscription_url_dotnet | subscription_url_dotnet | The HTML link for managing the subscription on a .NET-based web server. | varchar(8192)
creation_timestamp | creation_timestamp | The UTC timestamp when the subscription was first created. | datetime
modification_timestamp | modification_timestamp | The latest UTC timestamp of when the Subscription was last modified. The timestamp will continue to update as the subscription is modified. | datetime
subscription_status | subscription_status | The latest status of the Subscription. The status can be Active, Inactive, or Deleted. | varchar(25)
schedule_id | schedule_id | The ID of the corresponding schedule of the subscription. | bigint(20)
subscription_type_id | subscription_type_id | The ID of the type for the subscription. | int(11)
object_id | object_id | The ID of the object which was subscribed. | bigint(20)
owner_id | owner_id | The ID of the user who owns the Subscription. | bigint(20)
metadata_id | metadata_id | The ID of the metadata where the subscription was created. | bigint(20)
subscription_owner_id | subscription_owner_id | The ID of the owner of the subscription. | bigint(20)

lu_parent_subscription
In MicroStrategy, it is possible to trigger one subscription which is sent to multiple users at the same time. In this case, there will be a Parent Subscription, which is linked to child Subscriptions. The lu_parent_subscription view table tracks Subscriptions created in the metadata(s) being monitored. For more details about creating subscriptions, see Scheduling reports and documents: Subscriptions.

View Table Column | Warehouse Table Column | Description | Data-Type
subscription_id | subscription_id | The auto-generated numeric ID for the Subscription object. | bigint(20)
subscription_guid | subscription_guid | The GUID of the subscription stored in the metadata. | varchar(32)
subscription_name | subscription_name | The name of the Subscription stored in the metadata. | varchar(255)
subscription_url_j2ee | subscription_url_j2ee | The HTML link for managing the subscription on a Java-based web server. | varchar(8192)
subscription_url_dotnet | subscription_url_dotnet | The HTML link for managing the subscription on a .NET-based web server. | varchar(8192)
delivery_format_id | delivery_format_id | The format in which a subscription is delivered to a user, for example, PDF, Excel, or CSV. See lu_delivery_format for more details. | smallint(6)
creation_timestamp | creation_timestamp | The UTC timestamp when the subscription was first created. | datetime
modification_timestamp | modification_timestamp | The latest UTC timestamp of when the Subscription was last modified. The timestamp will continue to update as the subscription is modified. | datetime
subscription_status | subscription_status | The latest status of the Subscription. The status can be Active, Inactive, or Deleted. | varchar(25)
schedule_id | schedule_id | The ID of the corresponding schedule of the subscription. | bigint(20)
subscription_type_id | subscription_type_id | The ID of the type for the subscription. | int(11)
object_id | object_id | The ID of the object which was subscribed. | bigint(20)
owner_id | owner_id | The ID of the user who owns the Subscription. | bigint(20)
metadata_id | metadata_id | The ID of the metadata where the subscription was created. | bigint(20)

lu_subscription_type
The table is the predefined list of Subscription Types. Each Subscription has a corresponding Subscription Type; see Types of Subscriptions for more details.

Column | Description | Data-Type
subscription_type_id | The fixed numeric ID for the subscription type. | smallint(6)
subscription_type_desc | The type of subscription that was sent to the recipient. | varchar(255)

The types can include:

l Email

l File

l Print

l Custom

l History List

l Client

l Cache Update

l Mobile

l Personal View

l FTP

lu_subscription_owner
lu_subscription_owner is a view on the lu_mstr_user table in the warehouse. The lu_subscription_owner table is used to track the user who created the object or another user who currently owns the object. The owner usually defines the permissions for how the object can be used and by whom.

The lu_owner view table is mapped to two logical tables in the Platform Analytics project, Object Owner and Subscription Owner.

View Table Column | Warehouse Table Column | Description | Data-Type
subscription_owner_id | mstr_user_id | The auto-generated numeric ID for the current Subscription Owner in the MicroStrategy metadata. | bigint(20)
subscription_owner_guid | mstr_user_guid | The metadata GUID of the user who has ownership of the subscription. | varchar(32)
subscription_owner_name | mstr_user_name | The name of the User object in the metadata that has ownership of the Subscription. | varchar(255)
subscription_owner_login | mstr_user_login | The login of the User object in the metadata. | varchar(255)
creation_timestamp | creation_timestamp | The UTC timestamp of when the user was first created in the metadata. If a script was used to import a list of users, the timestamp may be identical for users. This is expected. | datetime
modification_timestamp | modification_timestamp | The latest UTC timestamp from when the User object was modified. The value will continually update as the User is modified or changed. | datetime
subscription_owner_status | mstr_user_status | The latest status of the User object in the metadata. The status can be Visible, Hidden, or Deleted. | varchar(25)
metadata_id | metadata_id | The numeric ID for the corresponding metadata for each User. All users are stored at the metadata level. | int(11)
subscription_owner_version | mstr_user_version | The version ID for the MicroStrategy user that owns the subscription. | varchar(32)

lu_schedule
The lu_schedule table contains the distinct Schedule objects stored in the metadata. Each schedule has a unique GUID and is defined at the metadata level. For more information about schedule objects, see Creating and managing schedules.

A report or document can also be emailed directly without a subscription. For these types of subscriptions, Platform Analytics assigns the schedule as Send Now; see Emailing a Report or Document for more details.

Column | Description | Data-Type
schedule_id | The auto-generated numeric ID of the schedule. | bigint(20)
schedule_guid | The GUID of the Schedule object in the metadata. | varchar(32)
schedule_name | The name of the Schedule stored in the metadata. | varchar(255)
schedule_desc | The detailed description of the Schedule object. | varchar(512)
creation_timestamp | The UTC timestamp for when the Schedule was first created. | datetime
modification_timestamp | The latest UTC timestamp from when the Schedule was modified. The timestamp will continually update as the Schedule is modified. | datetime
schedule_status | The current status of the Schedule. A Schedule can have a status of Visible, Deleted, or Hidden. | varchar(25)
schedule_type_id | The numeric ID of the corresponding Schedule Type. | tinyint(4)
event_id | The numeric ID of the corresponding Event. | bigint(20)
owner_id | The numeric ID of the corresponding Schedule object owner. This column is not mapped to an attribute in the schema. | bigint(20)
metadata_id | The numeric ID of the metadata. Schedules are stored at the metadata level. | bigint(20)
transaction_timestamp | MicroStrategy internal use. | datetime
schedule_version | The version ID of the schedule. | varchar(32)

lu_schedule_type
Each Schedule has a corresponding Schedule Type. The Schedule Type can be Time-Based, Event-Based, or a Send Now subscription. For more details, see Time-triggered schedules and Event-triggered schedules.

A report or document can also be emailed directly without a subscription. For these types of subscriptions, Platform Analytics assigns the schedule type as Send Now. See Emailing a Report or Document for more details.

Column | Description | Data-Type
schedule_type_id | The fixed numeric ID for the schedule type. | tinyint(4)
schedule_type_desc | The descriptive name of the schedule type. The types can include: Unknown, Time-Based, Event-Based, and Send Now. | varchar(128)

lu_event
The full list of Event objects and the corresponding descriptive information from the MicroStrategy metadata(s) being monitored by Platform Analytics. For more details about event objects, see About events and event-triggered schedules.

Column | Description | Data-Type
event_id | The auto-generated numeric ID of the Event. | bigint(20)
event_guid | The GUID of the Event object in the metadata. | varchar(32)
event_name | The name of the Event stored in the metadata. | varchar(255)
event_desc | The detailed description of the Event object. | varchar(512)
creation_timestamp | The UTC timestamp for when the Event was first created. | datetime
modification_timestamp | The latest UTC timestamp from when the Event was modified. The timestamp will continually update as the Event is modified. | datetime
event_status | The current status of the Event. An Event can have a status of Visible, Deleted, or Hidden. | varchar(25)
owner_id | The numeric ID of the corresponding Event object owner. This column is not mapped to an attribute in the schema. | bigint(20)
metadata_id | The numeric ID of the metadata. Events are stored at the metadata level. | bigint(20)
transaction_timestamp | MicroStrategy internal use. | datetime
event_version | The version ID of the event. | varchar(32)

lu_delivery_format
Not all subscription types have the same delivery formats. For more details about Subscription Types and Delivery Formats, see Types of subscriptions.

Column | Description | Data-Type
delivery_format_id | A fixed numeric ID of the Delivery Format. | smallint(6)
delivery_format_desc | The Delivery Format that was selected for the subscription. | varchar(255)

The Delivery Formats include:

l CSV

l Dataset

l Editable XML

l Excel

l Flash

l Graph

l HTML

l HTML5

l Interactive XML

l MSTR File

l PDF

l Phone

l Plain Text

l Presentation

l Tablet

l XML

lu_subscription_device
Lists the devices used to receive a subscription.

Column | Description | Data-Type
Subscription_device_id | The auto-generated ID for the Subscription Device. This is the source column for the Subscription Device attribute. | bigint(20)
Subscription_device_name | The name of the device. | varchar(255)
Subscription_device_guid | The GUID of the device. | varchar(32)
Subscription_device_version | The version ID of the device. | varchar(32)
Subscription_device_desc | The description of the device. | varchar(512)
Creation_timestamp | The timestamp the device was created. | datetime
Modification_timestamp | The timestamp the device was modified. | datetime
Transaction_timestamp | The timestamp a transaction was last received for the device. | datetime
Metadata_id | The auto-generated ID for the metadata. | bigint(20)

lu_history_list_message_view
Lists the report or document execution results that are stored in a user's personal History List Message folder. Each user has their own History List folder with messages that can be stored either in a database or in a file system.

A History List is a collection of pre-executed reports and documents that have been sent to a user's personal History folder. These pre-executed reports and documents are called History List messages. This table stores the full list of History List Messages that have been pre-executed. For more details about History Lists, see Understanding History Lists.

Column | Description | Data-Type
history_list_message_id | The auto-generated numeric ID of the History List Message. | bigint(20)
history_list_message_guid | The GUID of the History List Message in the metadata. | varchar(32)
history_list_message_title | The most recent title of the History List Message. The title can be modified at any time. | varchar(512)
history_list_message_status | The current status of the History List Message. The status is continually updated. A History List Message can have a status of: Create History List Message; Change History List Message Status (the message changed either from read to unread or vice versa); Delete History List Message; View History List Message (the user executed the history list message). | varchar(100)
is_deleted | MicroStrategy internal use. | int(11)
creation_timestamp | The UTC timestamp for when the History List Message was first created. | datetime
modification_timestamp | The latest UTC timestamp from when the History List Message was modified. The timestamp will continually update as the History List Message is modified. | datetime
project_id | The numeric ID of the project. History List Messages are stored at the project and metadata level. | bigint(20)
metadata_id | The numeric ID of the metadata. History List Messages are stored at the project and metadata level. | bigint(20)
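
For example, message counts by current status can be aggregated directly from the view. A minimal sketch, assuming a MySQL-compatible repository:

    -- History List message counts by their latest status.
    SELECT history_list_message_status,
           COUNT(*) AS messages
    FROM lu_history_list_message_view
    GROUP BY history_list_message_status;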

User Hierarchy

lu_account
The lu_account table is designed to integrate Users from multiple data sources, such as MicroStrategy metadata users, Usher, and Physical Access Systems (PACS), into a common user identity. The Account is linked to a User based on a common email address form. If no email address is available, the account_login will be used. For example, if two metadatas with duplicated MicroStrategy user objects are being monitored by Platform Analytics, the accounts will be linked based on the login.

Each user from the different sources is given an auto-generated account_id by the Platform Analytics ETL. Any MicroStrategy executions/manipulations or Usher transactions will be tracked at the Account level. As new data sources are added to Platform Analytics, the lu_account table will be expanded to integrate the new data sources into a single User identity.

A User is the unique identity of the person. Each User can have multiple Accounts from different data sources, for example, multiple badges from Usher and the user objects created in the metadata of the MicroStrategy platform. The User attribute allows analysis on all of the user information across these different data sources.

Account Type distinguishes from which source the account was created, for example, MicroStrategy User or a specific network.

The Account Status distinguishes whether the Account is Active, Deleted, Pending, or Inactive.

The Account Role is specific to Badge and is populated by the privileges granted through Network Manager for the specific badge. The account role indicates if the badge has Administrator or Standard access. For MicroStrategy accounts, the default value is Standard.

Column | Description | Data-Type
account_id | The auto-generated ID for the accounts in the different data sources. | bigint(20)
account_name | The name of the specific account. A single user can have multiple accounts from different systems (Badge, MicroStrategy, Physical Access, etc.). | varchar(255)
account_email | Multiple accounts are linked to a single user based on the common email address. | varchar(255)
account_login | The account login or domain name of the account. | varchar(255)
account_picture | The uploaded URL of the picture of the account. A user with multiple badges can have multiple pictures. A MicroStrategy metadata account cannot have a picture. This column does not resolve the image. | varchar(1024)
account_picture_large | The uploaded picture from the Account intended to be embedded in an HTML format, with height = 635 pixels. A user with multiple badges can have multiple pictures. | varchar(1024)
account_picture_small | The uploaded picture from the Account intended to be embedded in an HTML format, with height = 72 pixels. A user with multiple badges can have multiple pictures. | varchar(1024)
creation_timestamp | The UTC timestamp when the account was first created. For MicroStrategy metadata users, this is the timestamp when the user was created. For Badge, this is when the badge was created in Network Manager. | datetime
account_role_id | The ID of the account role. This column is specific to Badge and is populated by the badge privileges granted through Network Manager for the specific Badge account. The account role indicates if the badge has Administrator or Standard access. For MicroStrategy accounts, the default account role is MicroStrategy User. See lu_account_role for more details. | int(11)
account_status_id | The ID of the account status. This column is common for both MicroStrategy and Badge accounts. An account can be: Inactive, Enabled, Disabled, Deleted, or Pending (specific to Badge users who have been sent, but have not yet retrieved, a badge). | tinyint(4)
account_type_id | The ID of the account type. The account type indicates the source from where the account originated. For MicroStrategy accounts, the ID will be a static value and the default name is MicroStrategy User. For Badge accounts, the ID is generated through Network Manager. | bigint(20)
network_id | The ID of the network corresponding to the account. For MicroStrategy accounts, the ID will be a static value and the default name is MicroStrategy Network. For Badge accounts, the Network ID is generated through Network Manager and corresponds to the badge name. | bigint(20)
user_id | The auto-generated ID of the user that the account corresponds to, based on the common email address. | bigint(20)
mstr_user_guid | The GUID of the User object in the MicroStrategy metadata. | varchar(32)
account_title | The title of the Badge account added through Network Manager. | varchar(255)
account_phone | The phone number of the Badge account used for device enrollment. | varchar(75)
longitude | The most recent longitude value of the Badge account. | double
latitude | The most recent latitude value of the Badge account. | double
last_action_timestamp | The most recent action timestamp of the Badge account. | datetime
last_location_timestamp | The most recent location timestamp of the Badge account. | datetime
modification_timestamp | The UTC timestamp when the account was last modified. For MicroStrategy metadata users, this is the timestamp when the user object was last modified. | datetime
account_desc | The description of the account. | varchar(512)
mstr_user_version | The version ID of the MicroStrategy user. | varchar(32)
ldap_link | The link to the account in LDAP. | varchar(512)
nt_link | The link to the account in NT. | varchar(255)
wh_link | The link to the account in the warehouse. | varchar(255)
password_expiration_frequency | How often a password will expire for the given account. | int(11)
password_expiration_date | The date the password for the account will expire. | datetime
password_change_allowed | Whether the password for the account is allowed to be changed. | varchar(7)
password_change_required | Whether the account is required to change its password on the next login. | varchar(7)
standard_auth_allowed | Whether the account is allowed to log in via standard authentication. | varchar(7)
trusted_auth_user_id | The ID of the trusted authentication user. | varchar(255)
metadata_id | The auto-generated ID for the metadata that this user belongs to. | bigint(20)

lu_account_role
The Account Role indicates the level of access or privileges for the Account. This table is specific to Badge and is populated by the badge privileges granted through Network Manager for the specific badge. For more information, see Role Management. For MicroStrategy metadata accounts, the default value is MicroStrategy User.

Column | Description | Data-Type
account_role_id | The ID of the account role. | int(11)
account_role_desc | The level of privilege for the account. This column is specific to Badge and is populated by the badge privileges granted through Network Manager for the specific Badge account. An account role can be: Badge Administrator Access, Badge Standard Access, or MicroStrategy User (the default value for MicroStrategy accounts). | varchar(255)

lu_account_status
The current Account Status of an Account. This column is common for both MicroStrategy and Badge accounts. The account status can change over time; for example, an account can begin as active and later be updated to deleted.

Column | Description | Data-Type
account_status_id | The ID of the current account status. | tinyint(4)
account_status_desc | The current status of the account. An account status can be: Inactive, Pending (specific to Usher; indicates that an account was sent a badge via email, but badge recovery on the mobile device is pending), Deleted, Disabled, or Enabled. | varchar(25)

lu_account_type
The Account Type distinguishes from which data source the Account was created, for example, a MicroStrategy user in the metadata or a specific badge name added through Network Manager (see Badge Name).

Column | Description | Data-Type
account_type_id | The ID of the account type. For MicroStrategy accounts, the ID will be a static value. For Badge accounts, the ID is generated through Network Manager. | bigint(20)
account_type_desc | The account type indicates the source from where the account originated: MicroStrategy User (the user was created in the metadata), MicroStrategy Guest User, or <Badge Name> (the name of the badge added in Network Manager). | varchar(255)
network_id | The ID of the network corresponding to the account type. For MicroStrategy accounts, the ID will be a static value and the default name is MicroStrategy Network. For Badge accounts, the Network ID is generated through Network Manager. | bigint(20)

lu_user
A User is the consolidated identity of multiple Accounts. Each User can have multiple accounts from different sources, for example, multiple badges from Badge or users created in the metadata of the MicroStrategy platform. The User attribute allows analysis on all of the user information across these different data sources.

Multiple accounts are linked to a single User based on a common email address (if available) or login. For MicroStrategy metadata users, the email address is added from Developer under the User Editor > Deliveries category > Addresses section (see Maintaining Addresses). For Badge, the email address is added through Network Manager (see Importing Users). If two or more accounts share a common email address, they will be linked to a single User. For determining the user_name, the first account processed will be used.

Each User can be enriched with an HR organization hierarchy. However, this is not essential for analysis. If no HR organization information is added, default values will be assigned.

Column | Description | Data-Type
user_id | The auto-generated numeric ID of the user. | bigint(20)
user_name | The first account name that was processed in the ETL. A user can have different account names for the same user, i.e. John Smith and Jonathan Smith linked to the same email address. In this case, the first account_name will be used for the user_name. | varchar(255)
user_login | The email address is primarily used to link multiple accounts to a single user identity. If an email address is not available, the user_login is used to identify a link between users. | varchar(255)
user_email | The email address used to link multiple accounts to a single user identity. | varchar(255)
manager_id | MicroStrategy internal use. | bigint(20)
department_id | The auto-generated ID of the department. The department information is populated through a CSV file. | int(4)
department_owner_id | The employee number of the head of the department to which the user belongs. | bigint(20)
division_id | The auto-generated ID of the division. The division information is populated through a CSV file. | int(4)
division_owner_id | The employee number of the head of the division to which the user belongs. | bigint(20)
group_id | The auto-generated ID of the group. The group information is populated through a CSV file. | int(4)
group_owner_id | The employee number of the head of the group to which the user belongs. | bigint(20)
unit_id | The auto-generated ID of the unit. The unit information is populated through a CSV file. | int(4)
unit_owner_id | The employee number of the head of the unit to which the user belongs. | bigint(20)
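
For example, users whose identity consolidates more than one account can be found by joining back to lu_account. A minimal sketch, assuming a MySQL-compatible repository:

    -- Users linked to more than one account across data sources.
    SELECT u.user_name,
           u.user_email,
           COUNT(a.account_id) AS linked_accounts
    FROM lu_user u
    JOIN lu_account a
      ON a.user_id = u.user_id
    GROUP BY u.user_id, u.user_name, u.user_email
    HAVING COUNT(a.account_id) > 1;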

lu_network
A Network is a group of connected accounts. The Network is configured through Network Manager. For MicroStrategy metadata users, a default network called MicroStrategy Network is assigned.

Additionally, resources (gateways, applications, spaces, desktops) are stored at the network level.

Column | Description | Data-Type
network_id | The auto-generated numeric ID of each Network. | bigint(20)
network_desc | The organization name entered when creating a new network through Network Manager. | varchar(255)
network_status | The status of the Network. A network can be active or deleted. | varchar(25)
creation_timestamp | The timestamp for when the Network was first created. | datetime
modification_timestamp | The latest timestamp of when the Network was modified. Modifications can include actions such as changing the badge properties, adding additional users, or enabling Physical Access. | datetime

lu_validating_account
This table is a view on the lu_account table. A Validating Account is specific to Badge and represents peer-to-peer authentications. An account can be validated through scanning a QR code or entering the Badge Code of another badge. The Badge Code is unique to each user and can be configured to change routinely. For example, this code can be given over the phone to identify yourself if you are talking to someone who does not know you. By default, the Badge Code is 4 or 8 digits in length and updates every hour.

Column | Description | Data-Type
validating_account_id | The numeric ID of the account that has been validated by another user. | bigint(20)
validating_account_name | The name of the account that has been validated by another user. | varchar(255)
validating_account_email | The email of the account that has been validated by another account. | varchar(255)
validating_account_picture | The picture of the account that has been validated by another account. | varchar(1024)

User Group Hierarchy


A MicroStrategy ​ U ser​ can have either a direct relationship with a ​ U ser
Group​ or an indirect relationship. For example, a user can be in a user
groups. Additionally, a user group can be nested in another parent user
group. The ​ U ser Group​ relationship tables store the recursive relationship
between accounts and the corresponding user groups. This is illustrated in
the examples at the end of this section.

lu_user_group
The list of User Groups in the MicroStrategy metadata. This table stores information specific to the MicroStrategy metadata. Badge accounts do not belong to metadata user groups; therefore, all Badge accounts are assigned the default value MicroStrategy Badges.

Column | Description | Data-Type
user_group_id | The auto-generated ID of the User Group. | bigint(20)
user_group_guid | The metadata GUID of the User Group object. | varchar(32)
user_group_name | The name of the User Group stored in the metadata. | varchar(255)
user_group_desc | The description added in the properties dialog box for the User Group object. | varchar(512)
metadata_id | The ID for the corresponding metadata for each User Group. All User Groups are stored at the metadata level. | bigint(20)
creation_timestamp | The UTC timestamp of when the user group was first created in the metadata. If a script was used to import a list of users, the timestamp may be identical for the user groups. | datetime
modification_timestamp | The latest UTC timestamp from when the user group object was last changed. The value will continually update as the object is modified. | datetime
user_group_status | The latest status of the User Group object in the metadata. The status can be Active or Deleted. | varchar(25)
user_group_version | The version ID of the User Group. | varchar(32)

rel_account__usergroup
The relationship table between MicroStrategy user objects and their immediate parent user groups in the metadata. This table does not store indirect relationships between users and user groups. This table will not have a row for a user group that does not have a user directly in it.

Column | Description | Data-Type
account_id | The ID of the account that belongs to the corresponding User Group. | bigint(20)
user_group_id | The User Group ID to which the account corresponds directly. | bigint(20)

rel_childuser__usergroup
This is a relationship table between MicroStrategy User Groups and their parent User Groups in the metadata. A single User Group can be a child of multiple parent User Groups, which can recursively belong to other parent User Groups. This table is specifically used to form the recursive relationship between User Groups and all parent User Groups (direct or indirect). All indirect relationships are resolved to create a direct relationship in this table; a membership query that uses both relationship tables is sketched after the following table.

Column | Description | Data-Type
child_group_id | The ID of the child user group which belongs to the corresponding parent User Group. | bigint(20)
user_group_id | The parent User Group ID. | bigint(20)
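
Because indirect relationships are already flattened into rel_childuser__usergroup, full group membership can be resolved without recursive SQL. A minimal sketch, assuming a MySQL-compatible repository; the parent group ID 123 is a hypothetical placeholder:

    -- All accounts in parent group 123, directly or via any nested group.
    SELECT DISTINCT au.account_id
    FROM rel_account__usergroup au
    WHERE au.user_group_id = 123       -- hypothetical parent group ID
       OR au.user_group_id IN (
            SELECT cu.child_group_id
            FROM rel_childuser__usergroup cu
            WHERE cu.user_group_id = 123
          );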

HR Organization Hierarchy

This information is intended to enrich user-level analysis. All attributes and tables corresponding to the HR organization hierarchy must be provided manually by importing a CSV file. For instructions, see Importing an Organizational Hierarchy. Two sample reports using the Department, Division, Group, Unit, and User attributes are provided at the end of this section.

The login attributes can be used as security filters to restrict the data available to managers to only those users who are their direct reports. These attributes and tables are optional and available to enrich analysis, but they are not critical for the Platform Analytics project.

lu_department
The Department attribute is the highest-level attribute of an HR organization hierarchy and is a consolidation of multiple Divisions. The information must be provided via a CSV file.

Column | Description | Data-Type
department_id | The auto-generated numeric ID for the Department. | int(4)
department_desc | The name of the Department. For example, Sales, Finance, Technology, etc. | varchar(255)

lu_department_owner
The Department Owner is the manager for the corresponding Department. Each Department can have only one owner.

Column | Description | Data-Type
department_owner_id | The auto-generated numeric ID for the Department Owner. | bigint(20)
department_owner_desc | The name of the Department Owner or manager for the department. For example, John Smith. | varchar(255)
department_owner_login_id | The auto-generated numeric ID for the Department Owner login. | bigint(20)

lu_department_owner_login
The Department Owner Login is the username for each Department Owner. For example, John Smith is the department owner for the Technology department; his login is jsmith.

Column | Description | Data-Type
department_owner_login_id | The auto-generated numeric ID for the Department Owner Login. | bigint(20)
department_owner_login_desc | The username/login of the Department Owner. For example, jsmith. | varchar(50)

lu_division
The Division is a consolidation of multiple Groups within the organization hierarchy.

Column | Description | Data-Type
division_id | The auto-generated numeric ID for the Division. | int(4)
division_desc | The name of the Division. For example, North America Sales, Corporate Finance, etc. | varchar(255)

lu_division_owner_login
The Division Owner Login is the username for each Division Owner.

Column | Description | Data-Type
division_owner_login_id | The auto-generated numeric ID for the Division Owner Login. | bigint(20)
division_owner_login_desc | The username/login of the Division Owner. For example, jsmith. | varchar(50)

lu_group
A Group is a consolidation of multiple Units within the organization hierarchy.

Column | Description | Data-Type
group_id | The auto-generated numeric ID for the Group. | int(4)
group_desc | The name of the Group. For example, West Coast Sales. | varchar(255)

lu_group_owner_login
The Group Owner Login is the username for each Group Owner.

Column | Description | Data-Type
group_owner_login_id | The auto-generated numeric ID for the Group Owner Login. | bigint(20)
group_owner_login_desc | The username/login of the Group Owner. For example, jsmith. | varchar(50)

lu_unit
A Unit is a consolidation of multiple Users within the organization hierarchy.

Column | Description | Data-Type
unit_id | The auto-generated numeric ID for the Unit. | int(4)
unit_desc | The name of the Unit. For example, Seattle Sales Team. | varchar(255)

lu_unit_owner_login
The Unit Owner Login is the username for each Unit Owner.

Column | Description | Data-Type
unit_owner_login_id | The auto-generated numeric ID for the Unit Owner Login. | bigint(20)
unit_owner_login_desc | The username/login of the Unit Owner. For example, jsmith. | varchar(50)

Example Report

Time Hierarchy

The Time hierarchy attributes are based on the UTC timezone. Both the MicroStrategy Intelligence Server and the Identity Server send the server timezone with the transactional logs. The timezone is then standardized to UTC in the Platform Analytics ETL.

In addition to the standard time attributes (Day, Month, Year, Month of Year, etc.), there are supplementary attributes that provide additional levels of analysis. They include:

l Time Period: Predefined time periods such as Yesterday, Last Week, etc.

l Week Time Window: Rolling seven-day week windows.

lu_date
The source table for the Date attribute, which tracks MicroStrategy and Badge transactions. Each day, a new date entry is added to the lu_date table.

Column | Description | Data-Type
date_id | The generated numeric ID for Date. The format for date_id is yyyy-mm-dd. For example, 2017-01-02. | date
previous_date_id | The Previous Date ID used for transformations in the Platform Analytics project. For example, 2017-01-01. | date
week_id | The generated numeric ID for Week. The format for week_id is yyyyww. For example, 201701. | mediumint(6)
day_of_week_id | The corresponding Day of Week ID. | smallint(6)
month_id | The corresponding month_id in the format yyyymm. For example, 201701. | int(11)
month_desc | The descriptive form of Month. For example, January, 2017. | varchar(25)
previous_month_id | The Previous Month ID. For example, 201612. | int(11)
month_of_year_id | The Month of Year ID. | tinyint(4)
year_id | The Year ID. The source column for the Year attribute in the Platform Analytics project. | int(11)
quarter_id | The Quarter ID the day resides in. | int(11)
previous_quarter_month_id | The Month ID of the quarter the day resides in, from the previous month. | int(11)

lu_month
The lu_month table tracks the Month in which a MicroStrategy or Badge transaction occurred.

Column | Description | Data-Type
month_id | The generated numeric ID for Month. The format for month_id is yyyymm. For example, 201802. | int(11)
month_desc | The descriptive form of the Month. The format is Month, Year. For example, February, 2018. | varchar(32)
previous_month_id | The Previous Month ID used for transformations in the Platform Analytics project. For example, 201801. | int(11)

lu_month_of_year
Lists the Month of Year in which the MicroStrategy or Badge transaction occurred.

Column | Description | Data-Type
month_of_year_id | The fixed numeric ID for the Month of Year. | tinyint(4)
month_of_year_desc | The descriptive form of the Month of Year. For example, January, February, March. | varchar(25)
month_of_year_short_desc | The short descriptive form of the Month of Year. For example, Jan, Feb, Mar. | varchar(10)
previous_month_of_year_id | The Previous Month of Year ID used for transformations in the Platform Analytics project. | tinyint(4)

lu_day_of_week
Day of Week indicates which day the MicroStrategy or Badge transaction
occurred.

Column | Description | Data-Type
day_of_week_id | The fixed numeric ID for the Day of Week. | smallint(6)
day_of_week_desc | The descriptive form of Day of Week. For example, Monday, Tuesday, Wednesday. | varchar(25)
day_of_week_short_desc | The short descriptive form of Day of Week. For example, Mon, Tue, Wed. | varchar(10)
part_of_week_id | The fixed numeric ID for the Part of Week. | smallint(6)

lu_part_of_week
Part of Week indicates whether the MicroStrategy or Badge transaction
occurred on the Weekend or Weekday.

Column | Description | Data-Type
part_of_week_id | The fixed numeric ID for the Part of Week. | tinyint(4)
part_of_week_desc | The descriptive form of Part of Week. The Part of Week can be: Weekday (defined as Monday through Friday) or Weekend (defined as Saturday or Sunday). | varchar(25)

lu_time_period
Time Period is used to track predefined rolling time windows. The predefined Time Periods are intended to be overlapping. For example, the Time Period for Last Week will include the actions for Yesterday, and Last 2 Months will include the actions for all other time windows. The rel_date_timeperiod table is updated daily in the Platform_Analytics_daily_etl.xxx procedure, and therefore the Time Period attribute does not store data for the current date.

Column | Description | Data-Type
time_period_id | The fixed numeric ID for the defined Time Periods. | tinyint(4)
time_period_desc | The descriptive form of the Time Period. The Time Periods are defined as: Yesterday (today minus 1 day), Last Week (today minus 7 days), Last Month (today minus 30 days), and Last 2 Months (today minus 60 days). | varchar(50)

rel_date_timeperiod
The relationship table used to track the rolling Time Periods. The predefined Time Periods are intended to be overlapping. For example, the Time Period for Last Week will include the actions for Yesterday, and Last 2 Months will include the actions for all other time windows. The rel_date_timeperiod table is updated daily in the Platform_Analytics_daily_etl.xxx procedure, and therefore the Time Period attribute does not store data for the current date. A query that resolves a Time Period to its dates is sketched after the following table.

Column | Description | Data-Type
date_id | The Date corresponding to the specific Time Period. | date
time_period_id | The fixed numeric ID for the defined Time Period. | tinyint(4)
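
For example, the dates covered by a given Time Period can be resolved through this relationship table. A minimal sketch, assuming a MySQL-compatible repository:

    -- All dates currently assigned to the Last Week time period.
    SELECT r.date_id
    FROM rel_date_timeperiod r
    JOIN lu_time_period tp
      ON tp.time_period_id = r.time_period_id
    WHERE tp.time_period_desc = 'Last Week'
    ORDER BY r.date_id;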

lu_week
The source table for the Week attribute, which tracks MicroStrategy and Badge transactions. The lu_week table stores the week elements until overflow in the year 2035.

Column | Description | Data-Type
week_id | The generated numeric ID for Week. The format for week_id is yyyyww. For example, 201720. | mediumint(9)
week_desc | The week description. For example, Week 20, 2017. | varchar(16)
week_begin_date | The beginning date of the week range. For example, 2017-05-21. | date
week_end_date | The ending date of the week range. For example, 2017-05-28. | date
week_range | The week_begin_date to week_end_date. For example, 05/21/2017 - 05/28/2017. | varchar(50)
previous_week_id | The Previous Week ID used for transformations in the Platform Analytics project. For example, 201719. | mediumint(9)
month_id | The corresponding month_id in the format yyyymm. For example, 201705. | int(11)
year_id | The Year ID. | smallint(6)

lu_week_time_window
Week Time Windows is used to track predefined rolling week windows. The Week Time Windows are consecutive and not overlapping. For example, Last Week will include the last seven dates; it will not overlap with the dates from two weeks ago. The rel_date_weektime_window table is updated daily in the Platform_Analytics_daily_etl.xxx procedure and, therefore, the Week Time Windows attribute does not store data for the current date.

Column | Description | Data-Type
week_time_window_id | The fixed numeric ID for the defined Week Time Windows. | tinyint(4)
week_time_window_desc | The descriptive form of the Week Time Windows. The time windows are defined as: Last Week (today minus 7 days), 2 Weeks Ago (8 to 14 days ago), 3 Weeks Ago (15 to 21 days ago), 4 Weeks Ago (22 to 28 days ago), and 5 Weeks Ago (29 to 35 days ago). | varchar(50)
previous_week_time_window_id | The Previous Week Time Window ID used for transformations in the Platform Analytics project. | tinyint(4)

rel_date_weektime_window
The relationship table used to track the Dates for the rolling Week Time
Windows. The Week Time Windows are consecutive and do not overlap. For
example, Last Week includes the last seven dates and does not overlap with
the dates for two weeks ago. The rel_date_weektime_window table is
updated daily in the Platform_Analytics_daily_etl.xxx procedure, so the
Week Time Windows attribute does not store data for the current date.

date_id (date): The Date corresponding to the specific Week Time Window.

week_time_window_id (tinyint(4)): The fixed numeric ID for the Week Time Window.

lu_minute
The Minute when a Badge or MicroStrategy transaction occurs.

minute_id (int(11)): The fixed numeric ID for the Minute.

minute_desc (varchar(8)): The descriptive form of the Minute, stored in the 24-hour format hh:mm. For example: 10:09 represents 10:09 am, 14:45 represents 2:45 pm, and 23:30 represents 11:30 pm.

hour_id (tinyint(4)): The numeric ID for the corresponding Hour.

lu_hour
The Hour when a Badge or MicroStrategy transaction occurs.

hour_id (tinyint(4)): The fixed numeric ID for the Hour.

hour_desc (varchar(25)): The descriptive form of the Hour. For example: 12AM, 1AM, 2AM.

part_of_day_id (tinyint(4)): The numeric ID corresponding to the Part of Day.

lu_part_of_day
The Part of Day when a MicroStrategy or Badge action occurs (for example, Morning or Afternoon). The Part of Day is predefined based on the relationship with Hour.

part_of_day_id (tinyint(4)): The fixed numeric ID for the Part of Day.

part_of_day_desc (varchar(25)): The descriptive form representing the time range for the Part of Day. The Part of Day can be: Morning (hours 6am to 11am), Afternoon (hours 12pm to 5pm), Evening (hours 6pm to 10pm), or Night (hours 11pm to 5am).

lu_quarter

The Quarter when a MicroStrategy or Identity action occurs.

quarter_id (int(11)): The fixed numeric ID for the Quarter.

quarter_desc (varchar(25)): The descriptive form representing the Quarter. For example: Q1 2017, Q2 2018, Q3 2019.

previous_quarter_id (int(11)): The fixed numeric ID of the quarter previous to the current one. This is the source column for the Previous Quarter transformation.

last_year_quarter_id (int(11)): The fixed numeric ID of the same quarter in the previous year. This is the source column for the Last Year Quarter transformation.

quarter_of_year_id (tinyint(4)): The fixed numeric ID of the quarter number within the year. For example, Q3 2019 would be 3.

Badge Local Time

For Badge app transactions, the app sends the timezone of the user's device
in the logs. This timezone is used to populate the Local Time hierarchy
attributes. MicroStrategy transactions do not currently include the user
device timezone and therefore do not use the Local Time hierarchy. All local
time hierarchy attributes (local date, local month, etc.) are logical table
views in the Platform Analytics MicroStrategy project.

Servers Hierarchy

The Server attributes allow for analysis at the machine level. The
Intelligence Server and Client Server Machine attributes provide the name
and/or IP information for where an Intelligence server and a Client Server
are hosted.

Multiple server definitions can be available, but you can install only one
Intelligence server on one Intelligence server machine, and the Intelligence
server uses only one server definition at a time.

lu_iserver_instance
Lists the Intelligence Server Instances that have been configured to monitor
statistics using Platform Analytics. An Intelligence Server Instance uniquely
identifies a MicroStrategy Intelligence server hosted on a machine. One
Intelligence server uses only one server definition at a time.

iserver_instance_id (bigint(20)): The auto-generated numeric ID for the Intelligence Server Instance.

iserver_definition_guid (varchar(32)): The GUID of the server definition stored in the metadata.

iserver_definition_name (varchar(255)): The name of the server definition stored in the metadata.

iserver_machine_id (bigint(20)): The ID of the machine where the Intelligence Server Instance is configured.

iserver_port_number (int(4)): The port on which the Intelligence Server Instance is configured.

iserver_cluster_id (smallint(4)): The Intelligence server cluster ID. This is -1 by default. See the example SQL for adding the cluster information below.

iserver_machine_name (varchar(255)): The machine name where the Intelligence server is configured.

iserver_definition_id (bigint(20)): The auto-generated numeric ID of the Intelligence server definition. See lu_iserver_definition for more information.

lu_iserver_machine
Lists the Intelligence server machines that have Intelligence servers
configured to monitor statistics using Platform Analytics. An Intelligence
server machine can have one or more instances of an Intelligence server
running on different port numbers. An Intelligence server machine can have
the IP, the name, or both.

iserver_machine_id (bigint(20)): The auto-generated numeric ID for the Intelligence server machine.

iserver_machine_ip (varchar(32)): The machine IP where the Intelligence server is configured.

iserver_machine_name (varchar(255)): The machine name where the Intelligence server is configured.

lu_iserver_cluster
There is no automated method for determining the Intelligence server
cluster information from the logs. However, this information can be manually
updated. Examples of the procedure to update the lu_iserver_cluster and
lu_iserver_instance tables are provided below. The Intelligence server
cluster attribute is hidden by default.

If you want to cluster two Intelligence server instances (for example,
iserver_instance_id = 1 and 2) together and name the cluster "My MSTR
Intelligence Server Cluster," run the query below against the Platform
Analytics warehouse:

Call add_iserver_cluster_for_instances('1,2', 'My MSTR Intelligence Server Cluster');

The procedure returns a result such as:

NEW ISERVER CLUSTER GENERATED:
ID = 1, NAME = My MSTR Intelligence Server Cluster
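To confirm the assignment, you can select the affected instances back out of the warehouse. This is a minimal check built only on the lu_iserver_instance and lu_iserver_cluster columns documented in this section:

SELECT i.iserver_instance_id,
       c.iserver_cluster_id,
       c.iserver_cluster_name
FROM lu_iserver_instance i
JOIN lu_iserver_cluster c ON c.iserver_cluster_id = i.iserver_cluster_id
WHERE i.iserver_instance_id IN (1, 2);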

iserver_cluster_id (smallint(4)): The auto-generated numeric ID for the Intelligence server cluster.

iserver_cluster_name (varchar(255)): The Intelligence server cluster name that was inserted through custom SQL.

lu_client_server_machine
This table stores the Client Server Machine information where an application server, such as the Web server or Mobile server, is hosted.

client_server_machine_id (bigint(20)): The auto-generated numeric ID for the Client Server Machine.

client_server_machine_name (varchar(32)): The client machine name, or its IP address when the machine name is not available.

Client Telemetry
Client telemetry was introduced in MicroStrategy 2021 Update 8 (December
2022). Starting in MicroStrategy 2021 Update 9, this preview feature is
available out of the box.

Client telemetry attributes describe the peripheral activity of the
MicroStrategy platform, such as user interaction with the content (switching
to pages, sorting, and so on) and the devices used during the session.

Telemetry collected from modern clients is stored at the transactional level
and is joined with server telemetry (users, objects, server jobs) to provide
meaningful insights.

Client-associated attributes have relationships with the time, account,
device, and session hierarchies. The addition of the Custom Application
attribute offers the ability to analyze how customized applications impact
the interaction of the user base with the content.

Each entry below lists the attribute or metric name, its description, the warehouse tables it maps to, and its project folder.

Client Action Category: Category for a client action (execution or manipulation). Tables: lu_client_action_category, lu_client_action_type. Folder: A12. Clients.

Client Action Date: Date on which the client action was performed. Table: fact_client_executions. Folder: A12. Clients.

Client Action Type: Specific execution or manipulation type for a client action (sort, show total, drill, etc.). Tables: lu_client_action_type, fact_client_executions. Folder: A12. Clients.

Client Action: Unique ID for each client action. Table: fact_client_executions. Folder: A12. Clients.

Client Application Version: Numerical version of the MicroStrategy client application from which a client request was made. Tables: lu_client_application_version, fact_client_device_history. Folder: A12. Clients.

Client Application Version Category: Type of MicroStrategy client application from which a request was made (Library Web, Library Mobile iOS, MicroStrategy Application, and so on). Tables: lu_client_application_version_category, lu_client_application_version. Folder: A12. Clients.

Client Cache Type: For executions that hit a cache, indicates the type of cache hit. Tables: lu_client_cache_type, fact_client_executions. Folder: A12. Clients.

Client Session: Session ID provided by the client application. Does not correspond to the Intelligence server session. Tables: lu_client_session, fact_client_executions. Folder: A12. Clients.

Execution Sequence: Increments each time a user opens a dossier/document in a given client session. Restarts at 1 for each new client session. Table: fact_client_executions. Folder: A12. Clients.

Manipulation Is Cached Multimedia: Flag that indicates whether a manipulation used the multimedia cache. Table: fact_client_executions. Folder: A12. Clients.

Manipulation Name: Name of the object (standalone or embedded) on which a manipulation was performed. Table: fact_client_executions. Folder: A12. Clients.

Manipulation Result: Flag that indicates whether a manipulation was successful. Table: fact_client_executions. Folder: A12. Clients.

Manipulation Sequence: Increments each time a user performs a manipulation on an object. Restarts at 1 for each new object manipulated. Table: fact_client_executions. Folder: A12. Clients.

Manipulation Value: Content of the manipulation performed. For example: for a switch page manipulation type, the name of the page; for a sorting manipulation type, ascending or descending. Table: fact_client_executions. Folder: A12. Clients.

Network Type: Network availability from which a client request was made. Tables: lu_network_type, fact_client_executions. Folder: A12. Clients.

Device: The unique ID and name of the device with which a user performs an action. Tables: lu_device, fact_client_executions, fact_client_device_history, access_transactions. Folder: A08. Client Devices.

Device Category: The category of device types (such as iPhone, iPad, tablet, personal computer, and so on). Tables: lu_device_category, lu_device_type. Folder: A08. Client Devices.

Device Type: The make and/or model of the phone, tablet, or computer used to perform an action. Tables: lu_device_type, lu_device, fact_client_device_history. Folder: A08. Client Devices.

OS: The name and version of the operating system. Tables: lu_os, fact_client_device_history, access_transactions. Folder: A08. Client Devices.

OS Category: The operating system category (such as iOS, Android, macOS, Windows, Linux, and so on). Tables: lu_os_category, lu_os. Folder: A08. Client Devices.

Custom Application: An attribute that provides a list of custom applications in the metadata. Tables: lu_custom_application, fact_client_executions. Folder: A02. Configuration Objects.

Avg Client Render Time: Average of Client Total Render Time (ms) at the report level. Table: fact_client_executions. Folder: M09. Clients.

Avg Client Total View Time: Average of Client Total View Time (ms) at the report level. Table: fact_client_executions. Folder: M09. Clients.

Client Device State Change Time: Timestamp of the last update to the mobile device information (OS update, application version update, and so on; see note 1 below). Table: fact_client_device_history. Folder: M09. Clients.

Client Actions: Total number of dossier/document executions and manipulations performed by the client (see note 2 below). Table: fact_client_executions. Folder: M09. Clients.

Client Received Timestamp: Timestamp of when the mobile device received a response from the Intelligence server. Table: fact_client_executions. Folder: M09. Clients.

Client Render Finish Timestamp: Timestamp of when the client finished rendering. Table: fact_client_executions. Folder: M09. Clients.

Client Render Start Timestamp: Timestamp of when the client started rendering, triggered by the object execution or manipulation (such as sorting). Table: fact_client_executions. Folder: M09. Clients.

Client Request Receive Time (ms): Time elapsed between sending the dossier/document request and receiving the data from the Intelligence server. Table: fact_client_executions. Folder: M09. Clients.

Client Request Timestamp: Timestamp of when the dossier/document request was sent from the mobile device to the Intelligence server. Table: fact_client_executions. Folder: M09. Clients.

Client Total Render Time (ms): Time elapsed between client rendering start and client rendering finish. Table: fact_client_executions. Folder: M09. Clients.

Client Total View Time (ms): Time elapsed between rendering finish and closing the dossier/document, or sending the application to the background for mobile applications. Table: fact_client_executions. Folder: M09. Clients.

Client View Finish Timestamp: Timestamp of when the end user closed the dossier/document or sent the application to the background. Table: fact_client_executions. Folder: M09. Clients.

Number of Manipulations: Total count of manipulations performed in the opened dossier/document. Table: fact_client_executions. Folder: M09. Clients.

Devices: A metric that counts the total number of devices with actions. Table: access_transactions. Folder: M01. Telemetry > Badge.

Note 1: An account change on the device is not captured by this metric.

Note 2: Compared to server executions, this metric also includes any activity performed in offline mode.
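As a sketch of how these mappings are typically queried, the following computes the average client render time per client action type. The descriptor and measure column names (client_action_type_id, client_action_type_desc, client_total_render_time_ms) are assumptions for illustration; check the actual column names in your warehouse:

SELECT t.client_action_type_desc,                 -- assumed column name
       AVG(f.client_total_render_time_ms) AS avg_render_ms
FROM fact_client_executions f
JOIN lu_client_action_type t
  ON t.client_action_type_id = f.client_action_type_id
GROUP BY t.client_action_type_desc;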

Device Configuration Hierarchy

The Device Configuration hierarchy attributes are primarily used to track the
Badge and Communicator mobile client configuration telemetry. The device
configuration telemetry is tracked at the transactional level. Each Badge
transaction records the exact operating system, device, and mobile
configuration, so the historical upgrade or modification of the device
configuration is recorded. To see the latest device configuration for a user,
sort descending on the Timestamp attribute, as sketched below.
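For example, a direct warehouse query can return the most recent configuration row per device. This is a sketch, and the state-change timestamp column name in fact_client_device_history is an assumption:

SELECT h.*
FROM fact_client_device_history h
JOIN (SELECT device_id,
             MAX(device_state_change_timestamp) AS latest_ts  -- assumed column name
      FROM fact_client_device_history
      GROUP BY device_id) m
  ON m.device_id = h.device_id
 AND m.latest_ts = h.device_state_change_timestamp;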

For MicroStrategy transactions, the unique Device configuration is tracked.
The Device Type and Device Category are always defined as a static value,
but the Device is populated with the IP of the client from which the
MicroStrategy transaction occurred.

lu_device
The Device stores the unique ID and name/IP of the client device with which
a user performs an action. For Usher iOS clients, the device_desc is the
name registered on the mobile device. For Usher Android clients, the
device_desc is the name of the model. For MicroStrategy clients, the name
is the IP address of the client.

The Device ID is always unique for both Usher and MicroStrategy client
actions. For Usher, the ID is generated by Usher Server. For MicroStrategy,
the ID is auto-generated by the Platform Analytics ETL.

device_id (bigint(20)): The auto-generated ID of the Usher device or MicroStrategy Session Source IP address.

device_desc (varchar(255)): The description of the Device. The description can vary depending on the source (Badge iOS, Badge Android, or MicroStrategy Session Source).

device_type_id (bigint(20)): The corresponding Device Type ID for the Device.

lu_device_type
The Device Type records the make and/or model of the phone, tablet, or
computer used to perform a Badge action. All MicroStrategy actions are
assigned a static Device Type of Personal Computer.

device_type_id (bigint(20)): The auto-generated numeric ID of the Device Type.

device_type_desc (varchar(255)): The descriptive make and/or model of the phone or tablet. All MicroStrategy actions are assigned a static Device Type of Personal Computer. For example: iPhone, Google Pixel, LG-D820.

device_category_id (int(11)): The corresponding Device Category ID.

lu_device_category
The Device Category records the grouping of device types. All MicroStrategy
actions are assigned a static Device Category of Personal Machine.

device_category_id (int(11)): The fixed numeric ID of the Device Category.

device_category_desc (varchar(255)): The fixed description of the Device Category. The elements include: Android, iPhone, iPad, Tablet, Personal Computer, and Not Applicable.

lu_os
The OS is used to record the exact operating system of the Badge app or
Communicator app used in an Identity transaction. The OS is tracked at the
transactional level; therefore, each Identity action records the unique OS
version and OS Category.

MicroStrategy transactions do not record the client OS information and are
mapped to a default of Not Applicable.

os_id (bigint(20)): The auto-generated ID of the OS.

os_desc (varchar(255)): The description of the OS installed on the Usher client. For example: Android 8.1.0, iOS 11.2.5.

os_category_id (int(11)): The corresponding OS Category ID.

lu_os_category
The OS Category is used to record the type of operating system running on
the Badge app mobile client. The OS is tracked at the transactional level;
therefore, each Badge action records the unique OS version and OS
Category.

MicroStrategy transactions do not record the client OS information and are
mapped to a default of Not Applicable.

os_category_id (int(11)): The fixed numeric ID of the OS Category.

os_category_desc (varchar(255)): The descriptive form of the OS Category. The OS Category includes: Android, iOS, OS X, and Not Applicable.

lu_mobile_app
The Mobile App records the exact Mobile App version on the Badge user's
device during a transaction. The Mobile App is tracked at the transactional
level; therefore, each Badge action records the unique app version.

MicroStrategy transactions do not record the Mobile App information and are
mapped to a default of Unknown.

mobile_app_id (bigint(20)): The auto-generated numeric ID for the Mobile App.

mobile_app_desc (varchar(50)): The Mobile App version installed on the Badge user's device. The Mobile App version changes as the user upgrades the app version. Examples include: Badge/10.10.101, Communicator/10.10.002, Not Applicable.

mobile_app_category_id (tinyint(4)): The corresponding Mobile App Category ID.

lu_mobile_app_category
The Mobile App Category is used to record the type of Mobile App running
on the user's mobile client. The Mobile App is tracked at the transactional
level; therefore, each Badge action records the unique Mobile App version.

MicroStrategy transactions do not record the Mobile App information and are
mapped to a default of Unknown.

mobile_app_category_id (tinyint(4)): The fixed numeric ID for the Mobile App Category.

mobile_app_category_desc (varchar(255)): The descriptive form of the Mobile App Category. The Mobile App Category includes: Badge, Door Reader, Communicator, WatchKit, and Not Applicable.

Compliance Telemetry Hierarchy

User Entity
A User Entity can inherit project access privileges from two Sources:

1. User (self)

2. User Groups (all direct and indirect parent user groups)

User Entities can be found in lu_user_entity_view.

Source Entity
A Source can inherit project access privileges from three Privilege Sources:

1. User (self)

2. User Groups (all direct and indirect parent user groups)

3. Security Roles

Sources can be found in lu_source_entity_view and Privilege Sources can
be found in lu_privilege_source_view.

Relationship Tables
The relationship table rel_user_entity_source captures the relationship
between a User Entity and its Source of privileges. Sources may be the User
itself or a parent User Group.

The relationship table rel_source_privilege_source_scope captures the
relationship between a Source and its Privilege Source. Privilege Sources
may be the User itself, a parent User Group, or a Security Role. It also
captures the relationship between a Privilege Source and its Scope. A
Scope is the set of projects the privileges are applicable for.

The relationship table rel_privilege_source_privilege_group captures the
relationship between a Privilege Source and a Privilege Group. A Privilege
Group is a set of unique privileges applied to a Privilege Source. Multiple
Privilege Sources may share the same Privilege Group if they have the same
applied privileges.

The relationship table rel_privilege_group_privilege captures the
relationship between a Privilege Group and its unique set of Privileges.

Joining rel_user_entity_source, rel_source_privilege_source_scope,
rel_privilege_source_privilege_group, and rel_privilege_group_privilege
gives you an enumerated list of all Privileges available to a User Entity based
on their Sources' Privilege Sources, as sketched below.
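A minimal sketch of that enumeration, using only the documented key columns (in practice you would typically also constrain metadata_id and the latest audit_timestamp):

SELECT ue.user_entity_name,
       p.privilege_desc
FROM rel_user_entity_source ues
JOIN lu_user_entity_view ue
  ON ue.user_entity_id = ues.user_entity_id
JOIN rel_source_privilege_source_scope sps
  ON sps.source_id = ues.source_id
JOIN rel_privilege_source_privilege_group pspg
  ON pspg.privilege_source_id = sps.privilege_source_id
JOIN rel_privilege_group_privilege pgp
  ON pgp.privilege_group_id = pspg.privilege_group_id
JOIN lu_privilege p
  ON p.privilege_id = pgp.privilege_id;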

The table fact_user_entity_resolved_privilege captures the list of resolved
privileges for a given User Entity. Joining rel_user_entity_source,
rel_source_privilege_source_scope, rel_privilege_source_privilege_group,
and rel_privilege_group_privilege with fact_user_entity_resolved_privilege
gives a resolved list of all Privileges available to each User Entity and their
respective Product. The resolution process allows us to relate all Privileges
available to a User Entity, as well as each Privilege's Source and Privilege
Source.

For more information about project access privileges, see List of Privileges.
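Because the fact table is already resolved, a per-user privilege listing does not need the full relationship chain. A minimal sketch follows (in practice, also filter on the latest audit_timestamp):

SELECT ue.user_entity_name,
       pr.product_desc,
       p.privilege_desc
FROM fact_user_entity_resolved_privilege f
JOIN lu_user_entity_view ue ON ue.user_entity_id = f.user_entity_id
JOIN lu_privilege p ON p.privilege_id = f.privilege_id
JOIN lu_product pr ON pr.product_id = f.product_id;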

rel_user_entity_source
A User can inherit privileges directly from two Sources:

1. User (self)

2. User Groups (all direct and indirect parent user groups)

The relationship table rel_user_entity_source captures the relationship
between a User Entity and its Source of privileges. This includes
relationships between a User and itself, and between a User and its parent
User Groups.

user_entity_id (bigint(20)): The auto-generated numeric ID for the User Entity.

source_id (bigint(20)): The auto-generated numeric ID for the Source corresponding to a User or User Group.

audit_timestamp (timestamp): The timestamp of when the License Audit was triggered. This is sent by the Intelligence server to the Kafka logs.

metadata_id (bigint(20)): The auto-generated numeric ID of the corresponding metadata.

insert_ts (timestamp): MicroStrategy internal use. The timestamp the ETL inserted the row into the database.

lu_user_entity_view
The list of all possible User Entities; it contains Users and Contacts. It is a view based on lu_entity, limited to entity_type_ids in (1,4).

user_entity_id (WH table column: entity_id; bigint(20)): The auto-generated numeric ID value for the User Entity.

user_entity_name (WH table column: entity_name; varchar(255)): The name of the User Entity that has privileges.

user_entity_desc (WH table column: entity_desc; varchar(255)): The description of the User Entity.

user_entity_type_id (WH table column: entity_type_id; int(11)): The User Entity type can be: User (1) or Contact (4).

metadata_id (WH table column: metadata_id; bigint(20)): The ID for the corresponding metadata for each User Entity. All User Entities are stored at the metadata level.

user_entity_guid (WH table column: entity_guid; varchar(32)): The GUID of the User Entity.

creation_timestamp (WH table column: creation_timestamp; datetime): The UTC timestamp of when the User Entity was first created in the metadata.

modification_timestamp (WH table column: modification_timestamp; datetime): The latest UTC timestamp from when the User Entity object was last changed. The value continually updates as the object is modified.

status (WH table column: status; varchar(32)): The Status of the User Entity. The Status can be: Enabled (1) or Disabled (0).

lu_user_entity_type_view
A User Entity can be of type User or Contact; therefore, this lookup table contains a list of two static elements. This lookup table is a view based on lu_entity_type, limited to entity_type_ids in (1,4).

user_entity_type_id (int(11)): The fixed ID for the User Entity type.

user_entity_type_desc (varchar(255)): The description for the User Entity type. A User Entity can be of type: User (1) or Contact (4).

lu_source_entity_view
The list of all possible Sources; it contains Users and User Groups. It is a view based on lu_entity, limited to entity_type_ids in (1,2).

source_id (WH table column: entity_id; bigint(20)): The auto-generated ID for the Source entity.

source_name (WH table column: entity_name; varchar(255)): The name of the Source.

source_desc (WH table column: entity_desc; varchar(255)): The description of the Source.

source_type_id (WH table column: entity_type_id; int(11)): The Source type can be: User (1) or User Group (2).

metadata_id (WH table column: metadata_id; bigint(20)): The ID for the corresponding metadata for each Source. All Sources are stored at the metadata level.

user_entity_guid (WH table column: entity_guid; varchar(32)): The GUID of the Source.

creation_timestamp (WH table column: creation_timestamp; datetime): The UTC timestamp of when the Source was first created in the metadata.

modification_timestamp (WH table column: modification_timestamp; datetime): The latest UTC timestamp from when the Source object was last changed. The value continually updates as the object is modified.

status (WH table column: status; varchar(32)): The Status of the Source. The Status can be: Enabled (1) or Disabled (0).

rel_source_privilege_source_scope
A Source can get its privileges from three Privilege Sources:

1. User (self)

2. User Groups (all direct and indirect parent User Groups)

3. Security Roles (applied to the User or parent User Groups)

The relationship table rel_source_privilege_source_scope captures the
relationship between the Source entity from rel_user_entity_source and its
Privilege Source. Each row also contains a Scope, which indicates the list of
projects the Privilege Source is applicable to.

source_id (bigint(20)): The auto-generated numeric ID value for a Source entity.

privilege_source_id (bigint(20)): The auto-generated numeric ID value for a Privilege Source entity.

scope_id (bigint(20)): The auto-generated numeric ID value for a scope of projects.

audit_timestamp (timestamp): The timestamp the License Audit was triggered, sent by the Intelligence server to the Kafka logs.

metadata_id (bigint(20)): The auto-generated numeric ID value for the corresponding metadata.

insert_ts (timestamp): MicroStrategy internal use. The timestamp the ETL inserted the row into the database.

lu_scope
The Scope attribute was introduced to optimize SQL query and ETL
performance. It is not intended to be used in ad hoc reports. The level at
which Privileges are assigned can differ depending on the Privilege Source.

Security Roles can grant a set of privileges but restrict them to a few
projects, whereas User and User Group privileges are applicable globally to
all projects. Scope is used to determine which privilege is assigned to which
list of projects. It represents the list of projects for which the Privilege
Source is applicable.

If the Privilege Source (from rel_source_privilege_source_scope) is a
Security Role, the Scope is a positive number representing a subset of
projects in the metadata. If the Privilege Source is a User or User Group, a
default scope_id (-metadataId) is assigned, representing all the projects in
the metadata.

scope_id (bigint(20)): The auto-generated numeric ID for each unique list of projects.

scope_desc (longtext): The list of project IDs in the metadata.

rel_scope_project
The relationship table between Scope and Project. Scope is used to
represent the list of projects for which a User/User Group inherits a list of
privileges from a Security Role. This table maintains the relationship
between a Scope and the corresponding projects.

scope_id (bigint(20)): The auto-generated numeric ID value for the Scope.

project_id (bigint(20)): The auto-generated numeric ID value for the Project.

metadata_id (bigint(20)): The ID for the corresponding metadata for each Project.

rel_privilege_source_privilege_group
A relationship table between a Privilege Source (a User, User Group, or
Security Role) and its corresponding set of privileges, represented by the
Privilege Group.

privilege_source_id (bigint(20)): The auto-generated numeric ID value for a Privilege Source entity.

privilege_group_id (bigint(20)): The auto-generated numeric ID value for the Privilege Group representing the set of Privileges for a Privilege Source.

audit_timestamp (timestamp): The timestamp the License Audit was triggered. This is sent by the Intelligence server to the Kafka logs.

metadata_id (bigint(20)): The auto-generated ID for the metadata.

insert_ts (timestamp): MicroStrategy internal use. The timestamp the ETL inserted the row into the database.

lu_privilege_source_view
The list of all possible Privilege Sources; it contains Users, User Groups, and Security Roles. A view based on lu_entity, limited to entity_type_ids in (1,2,3).

privilege_source_id (WH table column: entity_id; bigint(20)): The auto-generated ID for the Privilege Source entity.

privilege_source_name (WH table column: entity_name; varchar(255)): The name of the Privilege Source.

privilege_source_desc (WH table column: entity_desc; varchar(255)): The description of the Privilege Source.

privilege_source_type_id (WH table column: entity_type_id; int(11)): The Privilege Source type can be: User (1), User Group (2), or Security Role (3).

metadata_id (WH table column: metadata_id; bigint(20)): The ID for the corresponding metadata for each Privilege Source. All sources are stored at the metadata level.

privilege_source_guid (WH table column: entity_guid; varchar(32)): The GUID of the Privilege Source.

creation_timestamp (WH table column: creation_timestamp; datetime): The UTC timestamp of when the Privilege Source was first created in the metadata.

modification_timestamp (WH table column: modification_timestamp; datetime): The latest UTC timestamp from when the Privilege Source object was last changed. The value continually updates as the object is modified.

status (WH table column: status; varchar(32)): The Status of the Privilege Source. The Status can be: Enabled or Disabled.

lu_privilege_source_type_view
A Privilege Source can be of type User, User Group, or Security Role; therefore, this lookup table contains a list of three static elements. This lookup table is a view based on lu_entity_type, limited to entity_type_ids in (1,2,3).

privilege_source_type_id (int(11)): The fixed ID for the Privilege Source type.

privilege_source_type_desc (varchar(255)): The description for the Privilege Source type. The Privilege Source type can be: User (1), User Group (2), or Security Role (3).

lu_privilege_group
This table is used for internal purposes only. A Privilege Group represents a unique set of privileges applied to a Privilege Source. Multiple Privilege Sources with the same set of privileges are assigned the same Privilege Group.

privilege_group_id (bigint(20)): The auto-generated numeric ID value.

privilege_group_desc (varchar(4096)): A set of privileges.

rel_privilege_group_privilege
This is a relationship table between Privilege Groups and their sets of
Privileges. The join between rel_privilege_source_privilege_group,
rel_privilege_group_privilege, and lu_privilege gives a list of privileges that
are assigned directly to each Privilege Source. Such a list includes only the
directly assigned privileges, not the inherited privileges.

privilege_id (int(11)): The fixed ID value for the Privilege.

privilege_group_id (bigint(20)): The auto-generated ID value for the Privilege Group.

fact_user_entity_resolved_privilege
This table contains the resolved list of Privileges and their associated
Product (including those directly applied to a user, those from a parent
group, and those from a security role applied to a user or parent group) for
each User Entity in the metadata.

user_entity_id (bigint(20)): The auto-generated ID value for the User Entity.

privilege_id (smallint(6)): The fixed ID value for the Privilege.

product_id (smallint(6)): The fixed ID value for the Product.

audit_timestamp (timestamp): The timestamp the License Audit was triggered. It is sent by the Intelligence server to the Kafka logs.

license_entity_status_id (tinyint(4)): The fixed ID value for the status of the entity.

metadata_id (bigint(20)): The auto-generated ID value for the metadata.

insert_ts (timestamp): MicroStrategy internal use. The timestamp the ETL inserted the row into the database.

lu_license_entity_status_view
An entity (from lu_user_entity_view, lu_privilege_source_view, or lu_source_entity_view) in the license model can be either enabled or disabled; therefore, this lookup table contains a list of two static elements. This lookup table is a view based on lu_account_status, limited to account_status_id in (0,1) for enabled/disabled.

license_entity_status_id (tinyint(4)): The fixed numeric ID for the status.

license_entity_status_desc (varchar(25)): The description for the Status. The Status can be: Enabled (1) or Disabled (0).

lu_product
lu_product is the lookup table for all Products. To access each Product, a user needs to have a set of privileges.

product_id (int(11)): The fixed numeric ID for the Product.

product_desc (varchar(255)): The description for the Product.

lu_privilege
The static list of all Privileges. For more information about project access privileges, see List of Privileges.

privilege_id (int(11)): The fixed numeric ID for Privileges. This is the source column for the Privilege attribute.

privilege_desc (varchar(255)): The description for the Privilege.

Badge Resource Hierarchy

The attributes related to resources are specific to Identity transactions.
Some of these attributes share common elements. For example, a door
authenticated through the Identity server can be a Gateway as well as a
Space. Sample reports at the end of the section explain the data.

lu_gateway
A Gateway is an access point that requires the authentication of an account.
A gateway is a unique physical, logical, or Badge desktop resource into
which the account authenticates. It is the consolidation of the data from the
Application, Space, and Desktop tables. The columns are populated by the
PACS system configured in Network Manager, by the logical applications
configured in Network Manager, or when a user configures the Badge
Desktop client for unlocking personal machines.

If the transaction does not correspond to accessing a resource (for example,
uploading a new badge photo or running a report in MicroStrategy), a default
value of Not Applicable is assigned.

gateway_id (bigint(20)): The numeric ID of the gateway generated by the Identity server.

gateway_desc (varchar(255)): The name of the individual physical resource, logical resource, or desktop machine. For example: Elevator A, Office 365, MAC-JSMITH.

gateway_status (varchar(25)): The current status of the gateway. A gateway status can be: Active, or Deleted (deleted from Network Manager).

creation_timestamp (datetime): The UTC timestamp when the gateway was first created.

modification_timestamp (datetime): The latest UTC timestamp of the gateway modification. The value continually changes as the properties/configuration of the gateway change.

logical_physical_flag (int(11)): A flag to indicate whether the gateway is a physical, logical, or desktop resource. This column is specific to the ETL and is not included in the reporting schema.

gateway_category_id (tinyint(4)): The numeric ID of the corresponding gateway category.

network_id (bigint(20)): The numeric ID of the Network. All gateways are stored at the level of Network.

lu_gateway_category
A Gateway Category is an automated categorization of the gateway
populated through the Platform Analytics ETL. If the transaction does not
correspond to accessing a Badge resource (for example, uploading a new
badge photo or running a report in MicroStrategy), a default value of Not
Applicable is assigned.

gateway_category_id (tinyint(4)): The numeric ID of the gateway category.

gateway_category_desc (varchar(25)): A categorization of the gateway: Physical (PACS system), Logical (logical web applications), or Desktop (Mac or Windows).

lu_application
An Application is a logical web application that requires authentication from
an account. For example, an Application could be Salesforce, Office 365, or
Rally. For a full list of web applications that can be configured with Badge,
see Signing into Web Applications.

If the transaction does not correspond to accessing an application (for
example, opening a door or running a report), a default value of Not
Applicable is assigned.

application_id (bigint(20)): The numeric ID of the application generated by the Identity server.

application_desc (varchar(255)): The name of the logical application that an account authenticates into. For example: Rally, Office 365.

application_status (varchar(25)): The current status of the application. An application status can be Active or Deleted.

creation_timestamp (datetime): The UTC timestamp when the application was configured through Network Manager.

modification_timestamp (datetime): The latest UTC timestamp of the application modification. The value continually changes as the properties/configuration of the application are updated.

network_id (bigint(20)): The numeric ID of the Network. All applications are stored at the level of Network.

Campus > Facility > Floor > Space

The physical resource hierarchy is intended to provide a categorization for
enriched analysis of the PACS system. The Space attribute elements are
populated directly from values in the PACS system. The higher-level
attributes are manually added by the customer by following the steps to
create a hierarchy of your physical resources. The categorization is
optional. If no categorization is found, default values are provided.

lu_space
A Space is a physical building access point that requires the authentication
of an account. The Space description form is populated directly by the
values in the PACS system. The PACS system is configured through
Network Manager (see Configuring PACS).

space_id (bigint(20)): The numeric ID of the space generated by the Identity server.

space_desc (varchar(255)): The name of the PACS access point. This value is imported directly from the PACS system. For example: Elevator A, Parking Garage Door 3.

creation_timestamp (datetime): The UTC timestamp when the PACS space was first imported through Network Manager.

modification_timestamp (datetime): The latest UTC timestamp when the space was modified. The value continually changes as the properties/configuration of the space are updated.

space_status (varchar(25)): The current status of the space. A space's status can be Active or Deleted.

floor_id (bigint(20)): The Floor ID to which the space corresponds. The relationship with Floor is configured through Network Manager. If no Floor is mapped, the default value is Not Applicable.

network_id (bigint(20)): The numeric ID of the Network. All spaces are stored at the level of Network.

lu_floor
A Floor is a grouping of spaces. For example: Lobby, 10th Floor, Parking Garage.

floor_id (bigint(20)): The numeric ID of the Floor generated by the Identity server.

floor_desc (varchar(255)): The name of the Floor to which the space is mapped, created in Network Manager.

creation_timestamp (datetime): The UTC timestamp when the Floor was first created through Network Manager.

modification_timestamp (datetime): The latest UTC timestamp when the Floor was modified. The value changes as the mapping to spaces or the name of the Floor is updated.

floor_status (varchar(25)): The current status of the Floor. A Floor's status can be Active or Deleted.

facility_id (bigint(20)): The facility ID to which the floor corresponds. The relationship with the floor is configured through Network Manager.

network_id (bigint(20)): The numeric ID of the Network. All floors are stored at the level of Network.

lu_facility
A Facility is a grouping of floors, such as Headquarters or London Office,
and is representative of a building. Each Facility has a Facility Address
associated with the location of the building. A network administrator can add
the address of the facility to provide an additional level of analysis of Badge
transactions.

The Platform Analytics ETL stores the facility address and draws a 500-
meter radius around the facility. Any Badge transaction (long/lat) that
happens within the 500-meter radius is mapped to the stored facility
address. Any transaction outside the radius is mapped to Not Applicable.
The true location of the user is never manipulated for the Longitude/Latitude
attributes.

If a user is detected within two facility radiuses, the Platform Analytics ETL
chooses the facility at the closest distance. Both the Facility and Facility
Address are configured through Network Manager. The radius is not
configurable.
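The distance test itself can be reproduced against the warehouse. The sketch below finds the closest facility to a transaction's coordinates, assuming a MySQL 5.7+ warehouse where ST_Distance_Sphere is available; @long and @lat stand in for a transaction's coordinates:

SELECT f.facility_desc,
       ST_Distance_Sphere(POINT(@long, @lat),
                          POINT(f.facility_longitude, f.facility_latitude)
       ) AS distance_m                            -- distance in meters
FROM lu_facility f
ORDER BY distance_m
LIMIT 1;                                          -- within 500 m, the ETL maps to this facility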

There are multiple benefits to adding the Facility Address. When two users
perform a Badge transaction from the same location, the mobile devices can
send longitude/latitude data that differs; this is a limitation of geolocation
data from mobile devices. The Facility Address attribute allows an analyst to
compare aggregate values. By grouping all transactions to a single facility
address at HQ, Platform Analytics can compare how many Badge
transactions occurred in a defined area. Previously, the end user had to
manually group the transactions within the dossier or dataset in order to
have this aggregate view.

Additionally, this allows an administrator to answer questions like:

• Did the user open the Space (that is, the PACS system) remotely?

• What logical Applications were accessed from my headquarters facility?

facility_id (bigint(20)): The numeric ID of the facility.

facility_desc (varchar(255)): The name of the facility to which the Floor is mapped. The value is added through Network Manager.

facility_status (varchar(25)): The current status of the facility. A facility's status can be Active or Deleted.

creation_timestamp (datetime): The UTC timestamp when the Facility was first created through Network Manager.

modification_timestamp (datetime): The latest UTC timestamp when the facility was modified. The value changes as the mapping to Floors or the name of the facility is updated.

facility_latitude (double): The corresponding latitude value for the Facility Address, based on the address added in Network Manager. The latitude and longitude coordinates are used to draw the radius.

facility_longitude (double): The corresponding longitude value for the Facility Address, based on the address added in Network Manager. The latitude and longitude coordinates are used to draw the radius.

location_desc (varchar(1096)): The street address, city, state, and country of the facility added through Network Manager.

updated_location_flag (int(11)): MicroStrategy internal use.

address_id (bigint(20)): MicroStrategy internal use.

campus_id (bigint(20)): The campus ID to which the facility corresponds. The relationship with Facility is configured through Network Manager.

network_id (bigint(20)): The numeric ID of the Network. All facilities are stored at the level of Network.

lu_campus
A Campus is a collection of Facilities. The campus-to-facility mapping is configured through Network Manager.

campus_id (bigint(20)): The numeric ID of the campus.

campus_desc (varchar(255)): The name of the campus to which the facilities are mapped in Network Manager.

campus_status (varchar(25)): The current status of the campus. A campus's status can be Active or Deleted.

creation_timestamp (datetime): The UTC timestamp when the Campus was first created through Network Manager.

modification_timestamp (datetime): The latest UTC timestamp when the campus was modified. The value changes as the mapping to facilities changes or if the name of the campus is updated.

network_id (bigint(20)): The numeric ID of the Network. All campuses are stored at the level of Network.

lu_facility_address
This table is a view on the lu_facility table. Each Facility has a Facility Address associated with the location of the building.

facility_address_id (bigint(20)): The numeric ID of the facility address.

facility_address_desc (varchar(255)): The name of the facility to which the address corresponds. The value is added through Network Manager.

facility_latitude (double): The corresponding latitude value for the Facility Address, based on the address added in Network Manager. The latitude and longitude coordinates are used to draw the radius.

facility_longitude (double): The corresponding longitude value for the Facility Address, based on the address added in Network Manager. The latitude and longitude coordinates are used to draw the radius.

facility_street_address (varchar(1096)): The street address, city, state, and country of the facility added through Network Manager. The address is used for the 500-meter-radius ETL logic mentioned previously.

network_id (bigint(20)): The numeric ID of the Network. Each facility address is stored at the level of Network.

lu_beacon
A Beacon can be configured to provide access to physical gateways/spaces or to identify a user's location.

beacon_id (bigint(20)): The numeric ID of the beacon generated by the Identity server.

beacon_desc (varchar(255)): The name of the beacon configured in Network Manager.

beacon_status (varchar(25)): The current status of the beacon. A beacon's status can be: Active, Deleted, or Inactive.

beacon_uuid (varchar(255)): The UUID of the beacon, supplied to you by your third-party beacon provider.

beacon_minor (int(11)): The minor value you assigned to this beacon using your third-party setup tool.

beacon_major (int(11)): The major value for your beacon. If you have more than one building in your network, you can have multiple major values.

creation_timestamp (datetime): The UTC timestamp when the beacon was configured through Network Manager.

modification_timestamp (datetime): The latest UTC timestamp of the beacon modification. The value continually changes as the properties/configuration of the beacon are updated.

network_id (bigint(20)): The numeric ID of the Network. All beacons are stored at the level of Network.

lu_bar_code
A Barcode can represent people (such as an employee, a customer, a
contractor, or a party to a transaction) and objects (such as a vehicle,
computer, package, contract, or transaction receipt). Scanning a barcode
links the person, place, or thing identified by the barcode with the Badge
user who performed the scan.

You can scan third-party barcodes and QR codes to display the data in
Platform Analytics. The barcode is scanned using the Badge app. The exact
barcode string is stored in the lu_bar_code table. The string can be parsed
using derived attributes in MicroStrategy.

Scanning the codes allows you to create reports and dossiers that tie the
scanning transaction and the data inside the code to Badge, Identity, and
telemetry features. This feature also provides a way to document a
transaction or link a Badge user and another person or object, using an
identity-centric and location-aware approach. For example, you can
document the path of a package from its source to its destination, providing
information on who, where, and when the package was handled.

bar_code_id (bigint(20)): The numeric ID of the bar code.

bar_code (longtext): The string of the bar code. This string is a series of numbers/letters that can be parsed using derived attributes.

bar_code_type_id (int(11)): The ID of the corresponding barcode type.
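For instance, a delimited payload can also be split directly in SQL with SUBSTRING_INDEX before (or instead of) defining derived attributes. This is a sketch; the pipe-delimited payload layout is a made-up example:

SELECT bar_code_id,
       SUBSTRING_INDEX(bar_code, '|', 1) AS field_1,                           -- first segment
       SUBSTRING_INDEX(SUBSTRING_INDEX(bar_code, '|', 2), '|', -1) AS field_2  -- second segment
FROM lu_bar_code;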

lu_bar_code_type
The format type of the barcode scanned by the Badge app.

bar_code_type_id (int(11)): The numeric ID of the barcode type.

bar_code_type_desc (varchar(255)): The format type of the barcode the user scans. The Badge app supports scanning the following types: AZTEC, CODABAR (Android only), CODE39, CODE39 MOD43 (iOS only), CODE93, CODE128, DATAMATRIX, EAN8, EAN13, INTERLEAVED2OF5, ITF14 (iOS only), PDF417, QR, and UPC-E.

lu_desktop
A Desktop is the name of the user's Mac or Windows machine that was locked or unlocked using the Badge app.

desktop_id (bigint(20)): The numeric ID of the desktop.

desktop_desc (varchar(255)): The name of the user's desktop (that is, personal machine) that is paired with the Badge app. For example: MAC-JSMITH, WAS-RJONES, John's MacbookPro.

desktop_status (varchar(25)): The current status of the desktop. A desktop's status can be Active or Deleted.

desktop_os_id (bigint(20)): MicroStrategy internal use.

creation_timestamp (datetime): The UTC timestamp when the desktop (that is, the personal machine) was first paired with a device using the Desktop client.

modification_timestamp (datetime): The latest UTC timestamp when the desktop was modified. The value changes if the desktop pairing setting is changed or a new device is paired.

network_id (bigint(20)): The numeric ID of the Network. Each desktop is stored at the level of Network.

lu_desktop_unlock_setting
The Desktop Unlock Setting is configured by the end user to control the proximity that triggers the unlock feature on either a Mac or a Windows machine. This setting can be changed at any time and is therefore stored at the transactional level in the access transaction fact table.

desktop_unlock_setting_id (tinyint(4)): The numeric ID of the desktop unlock setting.

desktop_unlock_setting_desc (varchar(25)): The desktop unlock setting, which is used to determine the range for unlocking the personal Mac/Windows machine. For example: Close, Nearby, Far.

Badge Location Hierarchy

In MicroStrategy Badge, the Longitude and Latitude are logged for all
transactions in the Badge app. The location setting can be managed from
Network Manager and the personal device of the user.

To provide an additional level of analysis, Platform Analytics uses a
Google® API call to populate the Address, City, State, and Country
associated with each set of lat/long coordinates. By default, you are subject
to the limits of the Google® Geocoding API free license; this is noted as a
prerequisite for Platform Analytics. For more prerequisites, see Platform
Analytics Prerequisites. If the API limit has been reached, the address, city,
state, and country are recorded as Unknown. However, the latitude and
longitude are always recorded if sent from a mobile device. If location is
restricted by the device, the longitude/latitude values are null and the
location tables are populated with Location Services Disabled.

The rows in the tables are the exact values provided by Google. In some
cases, Google changes the value returned in the API for the
address/city/state/country, and a new record is added to the table. For
example, Colorado and CO.

In the Location hierarchy, the Address attribute is the direct child of the
City, State, and Country attributes. This helps reduce the in-memory joins
between the attributes to improve the performance of reports and dossiers
related to the Location hierarchy.
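Because of this design, a report at the Country level joins lu_address to lu_country in a single step, as in this minimal sketch:

SELECT c.country_desc,
       COUNT(*) AS addresses
FROM lu_address a
JOIN lu_country c ON c.country_id = a.country_id
GROUP BY c.country_desc;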

To use location-based analysis in Platform Analytics, by default, you are
subject to the limits of the Google® Geocoding API free license. For heavier
use of location-based features, purchase a Google Maps API for Work
license. For information on the Google Maps API, see Google Maps
Platform.

lu_country
The list of countries returned by the Google® API from a corresponding latitude/longitude transaction by the user.

country_id (bigint(20)): The auto-generated numeric ID of the Country.

country_desc (varchar(255)): The name of the country returned from the Google® API. Example elements include: Unknown, Location Services Disabled, United States, Germany, China.

lu_state
The list of states returned by the Google® API for which there has been a corresponding latitude/longitude transaction by the user. Not all longitude and latitude coordinates are associated with a state. In these cases, the row is populated with No State (<name of the country>).

state_id (bigint(20)): The auto-generated numeric ID of the state.

state_desc (varchar(255)): The name of the state returned from the Google® API. Example elements include: Unknown, Location Services Disabled, No State (Germany), Virginia.

country_id (bigint(20)): The numeric ID value for the corresponding country for each state.

lu_city
The list of cities returned by the Google® API for which there has been a
corresponding latitude/longitude transaction by the user. Not all longitude
and latitude coordinates are associated with a city. In these cases, the row
will be populated with Unknown (<name of the state>).

Column Description Data-Type

city_id The auto-generated numeric ID of the city. bigint(20)

city_desc    The name of the city returned from the Google® API. Example elements include:    varchar(255)

l Unknown

l Location Services Disabled

l Unknown (Virginia)

l Tysons Corner

state_id    The numeric ID for the corresponding state to each city.    bigint(20)

country_id    The numeric ID for the corresponding country to each city.    bigint(20)

lu_address
The list of addresses returned by the Google® API from a corresponding latitude/longitude transaction by the user. Not all longitude and latitude coordinates are associated with an address. In these cases, the row will be populated with Unknown (<name of the state>).

Column Description Data-Type

address_id The auto-generated numeric ID of the address. bigint(20)

latitude The latitude coordinate associated with the address. double

longitude The longitude coordinate associated with the address. double

street_address    The street address returned from the Google® API. Example elements include:    varchar(255)

l Unknown

l Location Services Disabled

l 1850 Towers Cres Plaza, Tysons, VA 22182, USA

city_id    The numeric ID for the corresponding city to each address.    bigint(20)

state_id    The numeric ID for the corresponding state to each address.    bigint(20)

country_id    The numeric ID for the corresponding country to each address.    bigint(20)
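Because lu_address stores city_id, state_id, and country_id directly, the full location for a set of coordinates can be resolved with single-level joins, which is the join pattern this hierarchy is designed for. A minimal sketch:

-- Minimal sketch: resolve the full location hierarchy for each address.
-- lu_address joins directly to city, state, and country, avoiding chained joins.
SELECT a.street_address,
       ci.city_desc,
       st.state_desc,
       co.country_desc
FROM lu_address a
JOIN lu_city    ci ON a.city_id    = ci.city_id
JOIN lu_state   st ON a.state_id   = st.state_id
JOIN lu_country co ON a.country_id = co.country_id;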


Communicator Inbox Messages


These tables track the two-way communication transactions between the Communicator app and the Badge app. From Communicator, an administrator can create two types of communication messages: (1) a survey question with a list of responses, and (2) a notification that can only be confirmed. These messages are sent to badges on the Badge app, where the user can respond.

In addition to the initial message, a follow-up notification can also be sent. This is tracked using the Parent Message.

In the Platform Analytics project, there is no project schema based on the warehouse tables. To build a custom report, use the Communicator Inbox Messages data import cube.

lu_usher_inbox_messages
This table stores the list of messages and the parent message sent from
Communicator. It also stores the sender device and location information at
the time each message was sent.

Column Description Data-Type

message_id This is the message ID generated by Identity server. bigint(20)

message_desc    The text of the survey or notification message the administrator sends to the Badge.    varchar(255)

creation_timestamp    This is the UTC timestamp from when the message was sent.    datetime

latitude    The latitude of the sender.    double

longitude    The longitude of the sender.    double

parent_message_id    The message_id of the original parent message. This is populated if the user sends a follow-up message from Communicator. If no follow-up message is sent, this value will be null.    bigint(20)

network_id    The numeric ID of the network to which the sender belongs.    bigint(20)

sender_account_id    The numeric account ID of the Communicator administrator who sent the message. In Platform Analytics, each Badge has a unique account ID.    bigint(20)

sender_badge_id    The numeric badge ID of the Communicator administrator who sent the message.    bigint(20)

app_id    The numeric ID of the Communicator app version from which the message was sent.    int(11)

usher_device_id    The numeric ID of the device from which the message was sent.    bigint(20)

os_id    The numeric ID of the OS from which the message was sent.    int(11)
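Since a follow-up notification references its original message through parent_message_id, follow-ups can be paired with their parent messages by self-joining this table. A minimal sketch:

-- Minimal sketch: pair each follow-up message with its original parent message.
SELECT child.message_id    AS followup_message_id,
       child.message_desc  AS followup_text,
       parent.message_id   AS parent_message_id,
       parent.message_desc AS parent_text
FROM lu_usher_inbox_messages child
JOIN lu_usher_inbox_messages parent
  ON child.parent_message_id = parent.message_id;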

lu_usher_inbox_responses
This table stores the list of possible responses for each message sent from
Communicator.

There are two types of messages. The first type is a confirmation: the administrator states a question and the Identity server user replies by tapping Confirm. Confirm is the default value used when no options are given. The second type is a message sent with options. For example, if the Communicator administrator asks "Do you like red or blue?", this table gets two entries: one for red with its own response_id and one for blue with its own response_id, both having the same message ID.

Column    Description    Data-Type

response_id    This ID is auto-generated for the possible choices the responder selects in Badge.    bigint(20)

response_desc    This is the response of the user using the Badge.    varchar(25)

message_id    This is the message ID generated by the Identity server.    bigint(20)

fact_usher_inbox_messages
This table stores each of the Communicator messages sent to the individual
Badge users. It also stores the time each message was sent.

Column Description Data-Type

message_id This is the message ID generated by Identity server. bigint(20)

responder_account_id    For Platform Analytics only, this is the account for the person who received the message.    bigint(20)

responder_badge_id    For Platform Analytics only, this is the badge for the person who received the message.    bigint(20)

sent_timestamp    This is the UTC timestamp from when the message was sent.    datetime

local_sent_timestamp    This is the local timestamp from when the message was sent.    datetime

sent_date    This is the UTC date from when the message was sent.    date

local_sent_date    This is the local date from when the message was sent.    date

fact_usher_inbox_messages_view
This view is used to track who responded to a specific message over the last 14 days only.


Column Description Data-Type

message_id This is the message ID generated by Identity server. bigint(20)

responder_account_id    MicroStrategy internal use. This is the account for the person who received the message.    bigint(20)

sent_timestamp    The UTC timestamp from when the message was sent.    datetime

local_sent_timestamp    The local timestamp when the message was sent to the user.    datetime
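For example, a minimal query against this view to see who received a given message within its 14-day window; the message ID below is illustrative:

-- Minimal sketch: recipients of one message within the view's 14-day window.
-- 12345 is an illustrative message_id.
SELECT responder_account_id,
       sent_timestamp
FROM fact_usher_inbox_messages_view
WHERE message_id = 12345;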

fact_usher_inbox_responses
This is the fact table that tracks each Communicator response. If an admin sends 100 messages but only 50 people reply, this table has 50 entries. A sample response-rate query appears after the table below.

Column Description Data-Type

message_id This is the message ID generated by Identity server. bigint(20)

response_id    This ID is auto-generated in lu_usher_inbox_responses. It identifies which option the Identity server user selected.    bigint(20)

responder_account_id    For Platform Analytics only, this is the account for the person who received the message.    bigint(20)

responder_badge_id    For Platform Analytics only, this is the badge for the person who received the message.    bigint(20)

latitude    Latitude of the responder.    double

longitude    Longitude of the responder.    double

app_id    The numeric ID of the Communicator app version from which the message was sent.    int(11)

user_device_id    The numeric ID of the device from which the message was sent.    bigint(20)

os_id    The numeric ID of the OS from which the message was sent.    int(11)

facility_address_id    The numeric ID of the facility address from which the message was sent.    bigint(20)

network_id    The numeric ID of the network to which the sender belongs.    bigint(20)

response_timestamp    This is the UTC timestamp from when the reply to the message was sent.    datetime

local_response_timestamp    Local timestamp from when the reply to the message was sent.    datetime

response_date    This is the UTC date from when the reply to the message was sent.    date

local_response_date    This is the local date from when the reply to the message was sent.    date
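To relate sent messages to replies, join the two fact tables on message_id. The following minimal sketch counts, per message, how many badges received the message and how many replied:

-- Minimal sketch: per-message delivery and reply counts.
-- If 100 messages are sent and only 50 people reply, replies = 50 for that message.
SELECT m.message_id,
       COUNT(DISTINCT m.responder_account_id) AS recipients,
       COUNT(DISTINCT r.responder_account_id) AS replies
FROM fact_usher_inbox_messages m
LEFT JOIN fact_usher_inbox_responses r
  ON m.message_id = r.message_id
 AND m.responder_account_id = r.responder_account_id
GROUP BY m.message_id;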

fact_usher_inbox_responses_view
This view helps easily track which messages were replied to, and by which users, over the last 14 days.

Column Description Data-Type

message_id This is the message id generated by Identity server. bigint(20)

response_id    This ID is auto-generated in lu_usher_inbox_responses. It identifies which option the Identity server user selected.    bigint(20)

responder_account_id    For Platform Analytics only, this is the account for the person who received the message.    bigint(20)

latitude    Latitude of the user.    double

longitude    Longitude of the user.    double

user_device_id    The numeric ID of the device from which the message was sent.    bigint(20)

response_timestamp    The UTC timestamp from when the reply to the message was sent.    datetime

local_response_timestamp    The local timestamp from when the reply to the message was sent.    datetime

Platform Analytics Configuration Tables


etl_pa_version
This table can be used to track which version of Platform Analytics was installed and/or updated.

Historical file versions are also tracked in this table. The latest versions of the procedure and DDL files indicate which Platform Analytics file versions are currently running.

Column Description Data-Type

filename    The name of the file that the installer executes in the warehouse.    varchar(128)

filetype    The type of file being executed. For example:    varchar(32)

l DDL

l Procedure

l Data Update

insert_ts    The UTC timestamp when the file was called by the installer.    datetime
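For example, a minimal query to review the most recently executed files, and therefore which Platform Analytics file versions are currently running:

-- Minimal sketch: the ten most recently executed installer files.
SELECT filename, filetype, insert_ts
FROM etl_pa_version
ORDER BY insert_ts DESC
LIMIT 10;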

etl_network_control
This table controls which networks are processed into the Platform Analytics warehouse. By default, the table is empty, so data for all networks is processed. If a network's ID is inserted into this table, Platform Analytics only processes that specific network and excludes all others.

Column Description Data-Type

network_id The ID of the network to be processed. bigint(20)

network_desc The description/name of the network. varchar(255)
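For example, a minimal sketch that restricts processing to a single network; the ID and name below are illustrative:

-- Minimal sketch: process only one network and exclude all others.
-- The network_id and network_desc values are illustrative.
INSERT INTO etl_network_control (network_id, network_desc)
VALUES (42, 'Corporate Network');

-- To return to processing all networks, empty the table again:
-- DELETE FROM etl_network_control;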


Installing Platform Analytics
The Platform Analytics monitoring tool is a component of the MicroStrategy
platform. Thus, you can install Platform Analytics on Windows and Linux by
using the MicroStrategy Installation Wizard. View the Platform Analytics
Prerequisites before installing.

To install Platform Analytics, see Installing with the MicroStrategy Installation Wizard.

Platform Analytics requires the installation and configuration of the following components:

l At least one Telemetry Server

l This component is installed by selecting the Telemetry Server option in the MicroStrategy Installation Wizard.

l This component can be clustered in the MicroStrategy Installation Wizard.

l One Telemetry Store

l This component is installed by selecting the Platform Analytics option in the MicroStrategy Installation Wizard.

l One Telemetry Cache

l This component is installed by selecting the Platform Analytics option in the MicroStrategy Installation Wizard.

l One Platform Analytics Repository

l This component is installed by selecting the Platform Analytics option in the MicroStrategy Installation Wizard. For more information, see MicroStrategy Repository.

l If you upgrade from an earlier version of Platform Analytics to 2020 or above, you can choose to use MySQL as the repository or to migrate your existing data from MySQL to PostgreSQL. For more information, see Migrate Data from a MySQL Database to a PostgreSQL Database.

Once you have successfully installed the components mentioned above, you
need to configure Platform Analytics. For more information, see Configuring
Platform Analytics. If you want to update your Platform Analytics project, see
the Upgrade Help.

Platform Analytics Prerequisites


Before installing Platform Analytics on a Windows or Linux machine, ensure
that all of the following prerequisites are met.

l The following ports must be open and available in the machines where you
will install the Telemetry Server(s) and Platform Analytics:

l 2181

l 2888 and 3888 (only if you plan to cluster three or more Telemetry
Servers)

l 5432

l 6379

l 9092

l You must create a MicroStrategy user in the group System Monitors >
System Administrators or have access to the default Administrator user.

l For an estimation of resource requirements for stable and performant operation of the Telemetry Store (previously called Platform Analytics Consumer) architecture under a consistent transactional load, see KB482872: Capacity Planning for Platform Analytics.

l If installing PostgreSQL on another machine, the database user for the Platform Analytics Repository must be configured to allow remote access. Remote access can be enabled at the time of creating the DB user or afterward by following these steps:

1. Open the pg_hba.conf file in the following location:

On Windows: C:\Program Files (x86)\Common Files\MicroStrategy\Repository\pgsql\PGDATA\pg_hba.conf

On Linux: /opt/mstr/MicroStrategy/Repository/pgsql/PGDATA/pg_hba.conf

2. Add the following line to the file: host platform_analytics_wh mstr_pa {Machine IP}/32 password

For example, host platform_analytics_wh mstr_pa 10.21.67.61/32 password.

3. Save the file.

4. Restart the PostgreSQL database.

l If installing MySQL on another machine, the database user for the Platform Analytics Repository must be configured to allow remote access. Remote access can be enabled at the time of creating the DB user or afterward by following these steps:

1. Connect to the MySQL Server using any client (MySQL Workbench, DB Query Tool, etc.).

2. Run the command:

UPDATE mysql.user SET Host='%' WHERE Host='localhost' AND User='<DB User Name>'; FLUSH PRIVILEGES;

3. Replace <DB User Name> with the user installing the Platform
Analytics Repository.


4. The database user can now connect to this MySQL server instance
from any remote machine.


Configuring Platform Analytics
After you have successfully installed all the components required for
Platform Analytics to work, you will need to configure Platform Analytics.
The following sections can assist you during the configuration process:

1. Configure

a. Single Node

b. Remote Platform Analytics Warehouse

c. High Throughput

2. Load Object Telemetry

3. Client Telemetry Configuration

One of the key benefits of Platform Analytics is that it can monitor multiple
MicroStrategy environments from a centralized location. However, after
completing the initial configuration above, Platform Analytics will only
monitor the environment that you configured to log telemetry. To monitor
additional environments, you must individually configure them to send
telemetry to Platform Analytics. See Monitor Metadata Repositories Across
Multiple Environments for more information.

Configure Single Node Telemetry


Client telemetry was introduced in MicroStrategy 2021 Update 8 (December
2022). Starting in MicroStrategy 2021 Update 9, this preview feature is out-
of-the-box.

A single node telemetry configuration can be used to process data from one
or more MicroStrategy Intelligence server clusters. This topic discusses how
to configure single node telemetry on your enterprise platform.

You have the flexibility to install single node telemetry on:


l Servers in a separate environment. For example, you could have a Platform Analytics and Telemetry server on Machine 1 and Intelligence server(s) installed on Machine 2 (or 3 and beyond for Intelligence server clusters).

l The same environment that hosts your Intelligence server or one of its clusters.

Get started with the following topics:

l Installation Restrictions

l Install Components

l Configure Telemetry Logging

Installation Restrictions
Currently, Telemetry server is an integral part of Intelligence server
installation.


If you choose a single node telemetry configuration where your Telemetry server is installed in a separate environment, you must install both the Intelligence and Telemetry servers together in the desired environment. At a later time, you will need to stop the Telemetry server process (Kafka and Zookeeper services) on each environment hosting your intended Intelligence server nodes (machines 1, 2, and 3), while stopping the Intelligence server process on the intended host for the Telemetry server (machine 4).

Install Components
Start by installing components on the corresponding environments.


1. Choose the following components for machine 4 in the Installation wizard:

l Platform Analytics

l MicroStrategy Repository

l MicroStrategy Intelligence

l MicroStrategy Telemetry Server

2. After installation, turn off MicroStrategy Intelligence server.

3. Choose the following components in the Installation wizard for machines 1, 2, and 3:

l MicroStrategy Intelligence

l MicroStrategy Telemetry Server

4. After installation, turn off MicroStrategy Telemetry server (Kafka & Zookeeper services).

Configure Telemetry Logging


After you have installed all the components in the previous section, follow
the steps below to configure telemetry logging and the Platform Analytics
project.

1. Grant Access to the Platform Analytics Repository for Single Node Telemetry

2. Enable Telemetry Logging for Intelligence Servers When Using Single Node Telemetry

3. Create and Configure the Platform Analytics Project for Single Node Telemetry

4. Trigger the Initial Load of Object Telemetry

5. Client Telemetry Configuration


Grant Access to the Platform Analytics Repository for Single Node Telemetry

By default, MicroStrategy Repository allows access to and from a local machine. If you choose to host MicroStrategy Repository on a separate machine, you must grant access to the Intelligence server (each node of the cluster) to query the data.

1. Navigate to the repository installation path.

Windows default location:

C:\Program Files (x86)\Common Files\MicroStrategy\Repository\pgsql\PGDATA\pg_hba.conf

Linux default location:

/opt/mstr/Microstrategy/install/Repository/pg_data/pg_hba.conf

2. Modify pg_hba.conf to grant access to the Intelligence server (each node of the cluster). Add the following lines to the bottom of the file:

host platform_analytics_wh mstr_pa <I-server machine node1 IP>/32 password
host platform_analytics_wh mstr_pa <I-server machine node2 IP>/32 password
host platform_analytics_wh mstr_pa <I-server machine node3 IP>/32 password

3. Save your changes.

Enable Telemetry Logging for Intelligence Servers When Using Single Node Telemetry
The Intelligence server telemetry logs are a source of data that Platform
Analytics uses to analyze the performance and utilization of your
MicroStrategy system. This telemetry logging can be enabled using either
the Configuration wizard or Command Manager.


There are two different levels of statistics that can be enabled for logging,
basic and advanced. The Configuration wizard can only configure basic
statistics, while Command Manager can configure both.

l Enable Statistics Using the Configuration Wizard

l Enable Statistics Using Command Manager

Enable Statistics Using the Configuration Wizard

The MicroStrategy Configuration wizard allows you to enable basic statistics for all active projects in the metadata. If a new project is created after the Configuration wizard setup, you must enable basic statistics for that project using Command Manager.

Advanced statistics cannot be enabled using the Configuration wizard. Use Command Manager to enable advanced statistics.

1. Launch the Configuration wizard from the Intelligence server machine and select Configure Intelligence Server.


2. Follow the Configuration wizard instructions until you reach the Platform Analytics Configuration section and select Send Intelligence Server telemetry to Platform Analytics. Once this option is enabled for Platform Analytics, basic statistics are enabled by default and cannot be disabled using the Configuration wizard.

3. In Telemetry Server Address, enter the following information:

Telemetry Server Host: The Kafka server fully qualified domain name or IP address. The default is 127.0.0.1 if Platform Analytics is installed on the same machine. If not, provide the Platform Analytics machine's IP address (Machine 4 in this example).

Telemetry Server Port: The Kafka server port number. The default is 9092.

Use the <host:port> format.

Enable Statistics Using Command Manager


Command Manager allows you to enable basic or advanced statistics
logging on a project-by-project basis.


You must have the following privileges to enable statistics from Command
Manager:

l Use Command Manager

l Configure server basic

Enable Basic Statistics

1. Launch Command Manager and connect to the Intelligence server for which you want to enable statistics logging. Repeat steps 2-4 for each Intelligence server node in a cluster.

2. Execute the following command to configure the Intelligence server to send statistics logs to the MicroStrategy Telemetry server:

ALTER SERVER CONFIGURATION ENABLEMESSAGINGSERVICES TRUE CONFIGUREMESSAGINGSERVICES "bootstrap.servers:<Kafka IP>:<Kafka Port>/batch.num.messages:5000/queue.buffering.max.ms:2000";

3. To confirm the Intelligence server configuration is successfully applied, execute the following command:

LIST PROPERTIES FOR SERVER CONFIGURATION;

4. If you see Telemetry Server enabled = TRUE and Messaging Services Configuration = bootstrap.servers:<IP:Port> in the output, the configuration is successful.


Enable Advanced Statistics

1. Execute the following command to enable each project in the environment to log a basic or advanced level of statistics. Make sure to replace the project name with the projects you want to monitor.

ALTER PASTATISTICS (BASICSTATS (ENABLED | DISABLED) [DETAILEDREPJOBS (TRUE | FALSE)] [DETAILEDDOCJOBS (TRUE | FALSE)] [JOBSQL (TRUE | FALSE)] [COLUMNSTABLES (TRUE | FALSE)] [MOBILECLIENTS (TRUE|FALSE) [MOBILEMANIPULATION (TRUE|FALSE)] [MOBILECLIENTLOCATION (TRUE|FALSE)]]) IN PROJECT "<PROJECT_NAME>";
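For example, the following statement enables basic statistics along with detailed report jobs, detailed document jobs, and job SQL; the project name is illustrative:

ALTER PASTATISTICS BASICSTATS ENABLED DETAILEDREPJOBS TRUE DETAILEDDOCJOBS TRUE JOBSQL TRUE IN PROJECT "MicroStrategy Tutorial";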

2. To confirm the statistics logging level was successfully applied to the project, execute the following command:

LIST [ALL] PROPERTIES FOR PASTATISTICS IN PROJECT "<project_name>";

3. If you are using more than one Intelligence server environment, repeat
the above steps for all Intelligence server environments.

Create and Configure the Platform Analytics Project for Single Node Telemetry
You can create a Platform Analytics project in any metadata. Create or add
the project to metadata using the Configuration wizard or a response file on
the Intelligence server machine that will be monitored by Platform Analytics.

For example, say Platform Analytics and the Telemetry server are installed on machine 4 and the Intelligence server is installed on machines 1, 2, and 3. You can use the Configuration wizard on machines 1, 2, and 3 to create and configure the Platform Analytics project.


l Create a Platform Analytics Project Using the Configuration Wizard

l Creating a Project Using a Response File

l Load Object Telemetry to the Platform Analytics Data Repository

Create a Platform Analytics Project Using the Configuration Wizard
1. Open the Configuration wizard on your Intelligence server.

2. On the Welcome screen, select Create Platform Analytics project.

3. In DSN, choose the Platform Analytics repository. If DSN does not appear, click New to create a new DSN linking to the Platform Analytics repository of your choice.

If you have a cluster of Intelligence servers, after completing the steps to successfully create a project, configure a DSN that points to the same Platform Analytics repository of your choice on all the remaining nodes.


4. Enter your credentials for the database.

5. Click Next.

6. Click Apply. The Configuration wizard automatically applies the following configuration files:

l PlatformAnalyticsConfigurationNew.scp

l PlatformAnalyticsConfigurationNew_PostgreSQL.scp

l PlatformAnalyticsConfigurationUpgrade.scp

l PlatformAnalyticsConfigurationUpgrade_PostgreSQL.scp

7. If an error appears about being unable to automatically apply project settings to Platform Analytics, you must manually update the project settings. See Platform Analytics Project Configuration Scripts for more information.


Configure Telemetry with a Remote Platform Analytics Warehouse

This topic explains how to configure telemetry that connects to a Remote Repository using a PostgreSQL database server. A Remote Repository allows users to centralize all their information in one database server without running multiple instances. You can choose an out-of-the-box MicroStrategy Repository to be installed on the new machine, or opt for your own instance of a PostgreSQL database server, provisioned by your organization. The steps below are identical for both scenarios, assuming the installation of the Remote Repository (out-of-the-box or custom) is complete.

1. Grant Access to the Repository

2. Installation Prerequisites


1. Grant Access to the Repository


Remote Repository (out-of-the-box or custom) requires you to grant access
to the Platform Analytics service (known as PA Consumer) to write and store
data in the repository.


1. Navigate to the repository installation path.

2. Modify pg_hba.conf to grant access to the Intelligence server (each node of the cluster) as shown in the examples below.

Windows:

INSTALL_PATH\Repository\pgsql\PGDATA\pg_hba.conf

Linux:

INSTALL_PATH/Repository/pg_data/pg_hba.conf

3. Add the following lines to the end of the file.

host platform_analytics_wh mstr_pa <machine 1 IP>/32 password
host platform_analytics_wh mstr_pa <machine 2 IP>/32 password
host platform_analytics_wh mstr_pa <machine 3 IP>/32 password
host platform_analytics_wh mstr_pa <machine 4 IP>/32 password

4. If you opted for a custom deployment of PostgreSQL, make sure database access is granted for the Platform Analytics components, as well as the Intelligence server.

2. Installation Prerequisites
1. The following components must be installed on Machine 1, 2 and 3, as
shown in the above diagram.

l MicroStrategy Intelligence

l MicroStrategy Telemetry server

2. After installation, turn off MicroStrategy Telemetry server (Kafka and Zookeeper services) to avoid unnecessary system resource consumption.

3. The following components must be installed on Machine 4, as shown in the above diagram:


l Platform Analytics

l MicroStrategy Intelligence

l MicroStrategy Telemetry server

During the installation of components, you are prompted to enter repository connection information.

4. After installation, turn off MicroStrategy Intelligence server to reserve system resources.

Enable Telemetry Logging for Intelligence Servers When Using a Remote Platform Analytics Warehouse
The Intelligence server telemetry logs are a source of data that Platform
Analytics uses to analyze the performance and utilization of your
MicroStrategy system. This telemetry logging can be enabled using either
the Configuration wizard or Command Manager.


There are two different levels of statistics that can be enabled for logging,
basic and advanced. The Configuration wizard can only configure basic
statistics, while Command Manager can configure both.

l Enable Statistics Using the Configuration Wizard

l Enable Statistics Using Command Manager

Enable Statistics Using the Configuration Wizard

The MicroStrategy Configuration wizard allows you to enable basic statistics for all active projects in the metadata. If a new project is created after the Configuration wizard setup, you must enable basic statistics for that project using Command Manager.

Advanced statistics cannot be enabled using the Configuration wizard. Use Command Manager to enable advanced statistics.

1. Launch the Configuration wizard from the Intelligence server machine and select Configure Intelligence Server.


2. Follow the Configuration wizard instructions until you reach the Platform Analytics Configuration section and select Send Intelligence Server telemetry to Platform Analytics. Once this option is enabled for Platform Analytics, basic statistics are enabled by default and cannot be disabled using the Configuration wizard.

3. In Telemetry Server Address, enter the following information:

Telemetry Server Host: The Kafka server fully qualified domain name or IP address. The default is 127.0.0.1 if Platform Analytics is installed on the same machine. If not, provide the Platform Analytics machine's IP address (Machine 4 in this example).

Telemetry Server Port: The Kafka server port number. The default is 9092.

Use the <host:port> format.

Enable Statistics Using Command Manager


Command Manager allows you to enable basic or advanced statistics
logging on a project-by-project basis.


You must have the following privileges to enable statistics from Command
Manager:

l Use Command Manager

l Configure server basic

Enable Basic Statistics

1. Launch Command Manager and connect to the Intelligence server for which you want to enable statistics logging. Repeat steps 2-4 for each Intelligence server node in a cluster.

2. Execute the following command to configure the Intelligence server to send statistics logs to the MicroStrategy Telemetry server:

ALTER SERVER CONFIGURATION ENABLEMESSAGINGSERVICES TRUE CONFIGUREMESSAGINGSERVICES "bootstrap.servers:<Kafka IP>:<Kafka Port>/batch.num.messages:5000/queue.buffering.max.ms:2000";

3. To confirm the Intelligence server configuration is successfully applied, execute the following command:

LIST PROPERTIES FOR SERVER CONFIGURATION;

4. If you see Telemetry Server enabled = TRUE and Messaging Services Configuration = bootstrap.servers:<IP:Port> in the output, the configuration is successful.


Enable Advanced Statistics

1. Execute the following command to enable each project in the environment to log a basic or advanced level of statistics. Make sure to replace the project name with the projects you want to monitor.

ALTER PASTATISTICS (BASICSTATS (ENABLED | DISABLED) [DETAILEDREPJOBS (TRUE | FALSE)] [DETAILEDDOCJOBS (TRUE | FALSE)] [JOBSQL (TRUE | FALSE)] [COLUMNSTABLES (TRUE | FALSE)] [MOBILECLIENTS (TRUE|FALSE) [MOBILEMANIPULATION (TRUE|FALSE)] [MOBILECLIENTLOCATION (TRUE|FALSE)]]) IN PROJECT "<PROJECT_NAME>";

2. To confirm the statistics logging level was successfully applied to the project, execute the following command:

LIST [ALL] PROPERTIES FOR PASTATISTICS IN PROJECT "<project_name>";

3. If you are using more than one Intelligence server environment, repeat
the above steps for all Intelligence server environments.

Create and Configure the Platform Analytics Project for Remote Repository
You can create a Platform Analytics project that connects to a remote
repository using either the Configuration wizard or a response file.

For example, say you have a repository on machine 5, a Platform Analytics and Telemetry server on machine 4, and Intelligence servers on machines 1, 2, and 3. You can use the Configuration wizard on the Intelligence server machines (1, 2, and 3) to create and configure the Platform Analytics project connecting to the repository on machine 5.


l Create a Platform Analytics Project Using the Configuration Wizard

l Creating a Project Using a Response File

l Load Object Telemetry to the Platform Analytics Data Repository

Create a Platform Analytics Project Using the Configuration Wizard
1. Open the Configuration wizard on your Intelligence server.

2. On the Welcome screen, select Create Platform Analytics project.

3. In DSN, choose the Platform Analytics repository. If DSN does not appear, click New to create a new DSN linking to the Platform Analytics repository of your choice. In this example, you are connecting to the repository on machine 5.

If you have a cluster of Intelligence servers, after completing the steps to successfully create a project, configure a DSN that points to the same Platform Analytics repository of your choice on all the remaining nodes.


4. Enter your credentials for the database.

5. Click Next.

6. Click Apply. The Configuration wizard automatically applies the following configuration files:

l PlatformAnalyticsConfigurationNew.scp

l PlatformAnalyticsConfigurationNew_PostgreSQL.scp

l PlatformAnalyticsConfigurationUpgrade.scp

l PlatformAnalyticsConfigurationUpgrade_PostgreSQL.scp

7. If an error appears about being unable to automatically apply project settings to Platform Analytics, you must manually update the project settings. See Platform Analytics Project Configuration Scripts for more information.

Configure High Throughput or Advanced Architecture

This topic explains how to configure a high throughput architecture using a cluster of Kafka nodes (Telemetry servers). One Telemetry Store (Platform Analytics) can only consume data from a single Kafka node or a single Kafka cluster.

All Kafka nodes should be in one cluster; multiple Kafka clusters are not supported.

Get started with the following topics:

1. Install Components


2. Configure Telemetry Server

3. Restart Necessary Services

4. Configure Platform Analytics Consumer

1. Install Components
Start by installing components on the corresponding environments.

1. Choose the following components for machines 1, 2, and 3 in the Installation wizard:

l MicroStrategy Intelligence

l MicroStrategy Telemetry Server

l Choose Create a cluster... while creating a clustered environment for Telemetry, and provide the addresses of the other nodes or machines where you have installed or are going to install Telemetry. Repeat this for machines 1, 2, and 3.


2. Choose the following components for machines 4 and 5 in the Installation wizard:

l MicroStrategy Intelligence

l MicroStrategy Telemetry Server

3. After installation, turn off MicroStrategy Telemetry server (Kafka & Zookeeper services).

4. Choose the following components for machine 6 in the Installation wizard:

l Platform Analytics

l During the installation of components, you must enter repository (machine 7) connection information.


5. Install the Platform Analytics Repository on machine 7. You can choose an out-of-the-box MicroStrategy Repository or opt for your own instance of a PostgreSQL database server, provided by your organization.

6. For Windows deployments, continue with Windows Specific Modifications to Create the Platform Analytics Service; for Linux deployments, go to 2. Configure Telemetry Server.

Windows Specific Modifications to Create the Platform Analytics Service
On Windows machines, you must recreate the Platform Analytics service to
prevent the service from going down.

1. Go to Services.

a. Stop MicroStrategy Platform Analytics Consumer.

b. Stop MicroStrategy Platform Analytics In-Memory Cache.

2. Delete the MicroStrategy Platform Analytics Consumer service.

a. Launch a Windows command prompt with administrative privileges.

b. Execute the following command:

sc delete MSTR_PlatformAnalyticsConsumer

c. Close Services.

3. Recreate the Platform Analytics Consumer service.

a. Navigate to the Platform Analytics directory.

b. Open MSTR_PlatformAnalyticsConsumer.config for editing.

c. Delete --DependesOn =Redis to remove the dependent services (Kafka and Zookeeper).

4. Launch a Windows command prompt with administrative privileges.

a. Navigate to the Platform Analytics directory.

b. Execute the following command:

PlatformAnalyticsConsumer.exe install MSTR_PlatformAnalyticsConsumer --Config PlatformAnalyticsConsumer_config.txt

5. Go to Services.

a. Start MicroStrategy Platform Analytics Consumer.

b. Refresh Service Manager if necessary.

2. Configure Telemetry Server


Perform the following steps on all Telemetry server nodes. This example uses machines 1, 2, and 3.

l Edit server.properties

l Edit zookeeper.properties

l Edit myid


Edit server.properties
1. Open server.properties for editing.

Windows location:

C:\Program Files (x86)\MicroStrategy\Messaging Services\Kafka\kafka_x.x.xx\config

Linux location:

/opt/MicroStrategy/MessagingServices/Kafka/kafka_x.x.x/config

2. Under ##### Server Basics ####, provide a unique broker ID to each Telemetry server machine in the preferred order of node failover.

In this example:

Machine 1: broker.id=1
Machine 2: broker.id=2
Machine 3: broker.id=3

# Set the broker id to a unique value for each node.
# Do not change it on the machine configured during single node set up, i.e. your main node. It should be left at the default value and referred to by the other nodes.
# For example,

broker.id=1

3. Under ##### Internal Topic Settings ####, set both the offsets and transaction state replication factors to the number of nodes in the cluster. In this example, that is 3.

# offsets.topic.replication.factor= set to the number of nodes in your cluster
# transaction.state.log.replication.factor= set to the number of nodes in your cluster
# For example,

offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3


4. Under ##### Zookeeper #####, add all Telemetry server node IP addresses or FQDNs for the zookeeper.connect parameter. The order of nodes must correspond to the broker ID parameter in step 2.

# Set zookeeper.connect= to a comma separated list of <IP address:2181> for all nodes in the cluster.
# For example,

zookeeper.connect=10.27.18.73:2181,10.27.18.224:2181,10.27.36.168:2181

Edit zookeeper.properties
1. Open zookeeper.properties for editing.

Windows location:

C:\Program Files (x86)\MicroStrategy\Messaging Services\Kafka\kafka_x.x.xx\config

Linux location:

/opt/MicroStrategy/MessagingServices/Kafka/kafka_x.x.x/config

2. Add new lines at the end of the file in the form server.<node_id>=<IP>:2888:3888. In this example, there are three new lines, one for each node.
node.

# To allow Zookeeper to work with the other nodes in your cluster, add
the following properties to the end of the zookeeper.properties file.
# initLimit=5
# syncLimit=2
# server.X= <IP address of the node>:2888:3888
# When adding this property, replace X above with the broker.id for the
node being referenced. A separate entry must be made for each node in the
cluster.
# For example,

initLimit=5
syncLimit=2
server.0=10.27.18.73:2888:3888
server.1=10.27.18.224:2888:3888
server.2=10.27.36.168:2888:3888


Edit myid
1. Open myid for editing. If this file doesn’t exist, you should create one.

Windows location:

C:\Program Files (x86)\MicroStrategy\Messaging Services\tmp\zookeeper

Linux location:

/opt/MicroStrategy/MessagingServices/tmp/zookeeper

2. Confirm that the myid file does not have a hidden extension. In File
Explorer, go to View > Show > File name extensions to show
extensions. If your file has an extension, remove it.

3. Make sure the broker.id for each node matches the values you set in
server.properties.

# Make sure the broker.id is the same as it appears in server.properties.
# For example,

broker.id=1

3. Restart Necessary Services


After updating the configurations for Kafka and Zookeeper on all nodes in
the cluster, you must restart the services, including the Intelligence server.

When restarting the services, it's important to note that all configuration file
changes must be completed first. For example, if you are adding two
additional Kafka nodes and you already have one existing node, then the
install and configuration should be completed on all three nodes before
restarting any of the services.

Additionally, some services are dependent on each other, so the services should be started in the order provided below. Not starting in this order can cause inconsistencies in the services.


1. Start Zookeeper and Kafka on the main node before starting other
nodes.

2. Start Zookeeper on the remaining nodes.

3. Start Kafka on the remaining nodes.

4. Configure Platform Analytics Consumer


Perform the following steps on the node where you are running Platform
Analytics Consumer. In this example, that is machine 6.

1. Open PAConsumerConfig.yaml for editing.

Windows location:

C:\Program Files (x86)\MicroStrategy\Platform Analytics\conf

Linux location:

/opt/MicroStrategy/Platform Analytics/conf

2. Add all telemetry node IP addresses to the file, using the following format:

zookeeperConnection: IP1:port,IP2:port,IP3:port

bootstrap.servers: IP1:port,IP2:port,IP3:port

# Set kafkaTopicNumberOfReplicas: number of nodes in cluster
# Set zookeeperConnection: <ipAddress:2181> for all nodes in cluster
# Set bootstrap.servers: <ipAddress:9092> for all nodes in cluster
# For example,

kafkaTopicNumberOfReplicas: 3
zooKeeperConnection: 10.27.18.73:2181,10.27.18.224:2181,10.27.36.168:2181
bootstrap.servers: 10.27.18.73:9092,10.27.18.224:9092,10.27.36.168:9092

3. Ensure that the kafkaTopicNumberOfReplicas parameter matches the number of Telemetry server nodes. In this example, that would be 3.

4. Restart the following services:


l MicroStrategy Platform Analytics Consumer

l MicroStrategy Platform Analytics In-Memory Cache

Enable Telemetry Logging for Intelligence Servers When Using High Throughput or Advanced Architecture
The Intelligence server telemetry logs are a source of data that Platform
Analytics uses to analyze the performance and utilization of your
MicroStrategy system. This telemetry logging can be enabled using either
the Configuration wizard or Command Manager.

There are two different levels of statistics that can be enabled for logging,
basic and advanced. The Configuration wizard can only configure basic
statistics, while Command Manager can configure both.

l Enable Statistics Using the Configuration Wizard

l Enable Statistics Using Command Manager

Enable Statistics Using the Configuration Wizard

The MicroStrategy Configuration wizard allows you to enable basic statistics for all active projects in the metadata. If a new project is created after the Configuration wizard setup, you must enable basic statistics for that project using Command Manager.

Advanced statistics cannot be enabled using the Configuration wizard. Use Command Manager to enable advanced statistics.

1. Launch the Configuration wizard from the Intelligence server machine and select Configure Intelligence Server.


2. Follow the Configuration wizard instructions until you reach the Platform Analytics Configuration section and select Send Intelligence Server telemetry to Platform Analytics. Once this option is enabled for Platform Analytics, basic statistics are enabled by default and cannot be disabled using the Configuration wizard.

3. In Telemetry Server Address, enter the following information:

Telemetry Server Host: The Kafka server fully qualified domain name or IP address. The default is 127.0.0.1 if Platform Analytics is installed on the same machine. If not, provide the Platform Analytics machine's IP address (Machine 4 in this example).

Telemetry Server Port: The Kafka server port number. The default is 9092.

Use the <host:port> format.


Enable Statistics Using Command Manager


Command Manager allows you to enable basic or advanced statistics
logging on a project-by-project basis.

You must have the following privileges to enable statistics from Command
Manager:

l Use Command Manager

l Configure server basic

Enable Basic Statistics

1. Launch Command Manager and connect to the Intelligence server for which you want to enable statistics logging. Repeat steps 2-4 for each Intelligence server node in a cluster.

2. Execute the following command to configure the Intelligence server to send statistics logs to the MicroStrategy Telemetry server:

ALTER SERVER CONFIGURATION ENABLEMESSAGINGSERVICES TRUE CONFIGUREMESSAGINGSERVICES "bootstrap.servers:<Telemetry Node 1 IP>:<Port>,<Telemetry Node 2 IP>:<Port>,<Telemetry Node 3 IP>:<Port>/batch.num.messages:5000/queue.buffering.max.ms:2000";

3. To confirm the Intelligence server configuration is successfully applied, execute the following command:

LIST PROPERTIES FOR SERVER CONFIGURATION;

4. If you see Telemetry Server enabled = TRUE and Messaging Services Configuration = bootstrap.servers:<IP:Port> in the output, the configuration is successful.

Enable Advanced Statistics

1. Execute the following command to enable each project in the environment to log a basic or advanced level of statistics. Make sure to replace the project name with the projects you want to monitor.

ALTER PASTATISTICS (BASICSTATS (ENABLED | DISABLED) [DETAILEDREPJOBS (TRUE | FALSE)] [DETAILEDDOCJOBS (TRUE | FALSE)] [JOBSQL (TRUE | FALSE)] [COLUMNSTABLES (TRUE | FALSE)] [MOBILECLIENTS (TRUE|FALSE) [MOBILEMANIPULATION (TRUE|FALSE)] [MOBILECLIENTLOCATION (TRUE|FALSE)]]) IN PROJECT "<PROJECT_NAME>";

2. To confirm the statistics logging level was successfully applied to the project, execute the following command:

LIST [ALL] PROPERTIES FOR PASTATISTICS IN PROJECT "<project_name>";


3. If you are using more than one Intelligence server environment, repeat
the above steps for all Intelligence server environments.

Configure the Platform Analytics Project for High Throughput or Advanced Architecture

The Platform Analytics project must be created via one of the Intelligence server nodes in a cluster, connecting to the repository, which is machine 7 in this example.

l Create a Platform Analytics Project Using the Configuration Wizard

l Load Object Telemetry to the Platform Analytics Data Repository

l Load Project on Other Nodes

l Troubleshooting

Create a Platform Analytics Project Using the Configuration Wizard
1. Open the Configuration wizard on your Intelligence server.

2. On the Welcome screen, select Create Platform Analytics project.


3. In DSN, choose the Platform Analytics repository. If DSN does not appear, click New to create a new DSN linking to the Platform Analytics repository of your choice. In this example, you are connecting to the repository on machine 7.

4. Enter your credentials for the database.

5. Click Next.


6. Click Apply. The Configuration wizard automatically applies the following configuration files:

l PlatformAnalyticsConfigurationNew.scp

l PlatformAnalyticsConfigurationNew_PostgreSQL.scp

l PlatformAnalyticsConfigurationUpgrade.scp

l PlatformAnalyticsConfigurationUpgrade_PostgreSQL.scp

7. If an error appears about being unable to automatically apply project settings to Platform Analytics, you must manually update the project settings. See Platform Analytics Project Configuration Scripts for more information.

There is no need to re-create the Platform Analytics project via the remaining Intelligence server nodes (machines 2, 3, and 4) since you are connecting to the same metadata for a clustered environment. You only need to load the Platform Analytics project into Intelligence server memory for the remaining nodes.

Load Project on Other Nodes


Repeat the following steps for machines 2, 3, and 4.

1. Launch Developer.

2. Navigate to Administration > System Administration > Project > PlatformAnalytics > Administer Project > Load.


The PA project loads and is ready to use.

Troubleshooting
If Apache ZooKeeper cannot be restarted, ensure Kafka is fully configured.

1. Open the kafka-logs folder located in C:\Program Files (x86)\MicroStrategy\Messaging Services\tmp.

2. Open the myid file and ensure the broker.id is the same as it
appears in server.properties. If they are different, this may be why
Apache ZooKeeper is not starting.

3. If there is no telemetry in the Kafka topics, check if statistics are enabled for Platform Analytics projects by running the following command in Command Manager:

LIST ALL PROPERTIES FOR PASTATISTICS IN PROJECT "Platform Analytics";

4. If the command returns False, run:

ALTER PASTATISTICS BASICSTATS ENABLED DETAILEDREPJOBS TRUE DETAILEDDOCJOBS TRUE JOBSQL TRUE COLUMNSTABLES TRUE IN PROJECT "Platform Analytics";

BASICSTATS must always be enabled. Select which advanced statistics are needed by setting the corresponding parameters to TRUE or FALSE. See List Platform Analytics Statistics Properties Statement for more information about basic versus advanced statistics.


Load Object Telemetry to the Platform Analytics Data Repository
Platform Analytics captures and analyzes the telemetry from the
MicroStrategy Intelligence Server by using information from metadata
objects to provide context and descriptive details. For example, the
telemetry sent by the Intelligence Server provides the object GUID, but the
metadata change journal logs provide the Object Name, Object Creation
Date, Object Owner, etc.

To obtain this descriptive information, the administrator must trigger an initial load by which the Intelligence Server sends the metadata information. After the load, all subsequent changes to any objects in the metadata are captured in real-time by Platform Analytics.

The following information should be collected prior to configuration:

l The project source for the Intelligence Server on which you wish to trigger the initial load for Platform Analytics. For steps to create a project source, see Create and Configure the Platform Analytics Project for Single Node Telemetry.

l You must have the following privileges:

l Use Command Manager

l Configure server basic

l The Platform Analytics Store and Intelligence Server must be configured to use the same Messaging Services Host(s) and Port(s).

Trigger the Initial Load of Object Telemetry


When creating a new metadata repository or upgrading an existing metadata
repository, MicroStrategy 2021 will create and import the event and
schedule objects needed to perform the initial load of object telemetry.
Administrators only need to trigger the event using either Command
Manager or Developer.


The initial load of metadata objects can take several hours depending on the size of the metadata. If the process gets interrupted, you must re-trigger the initial load. Unsure if the process was complete? See Verify the Initial Load of Object Telemetry.

Load Object Telemetry via Developer


1. Using Developer, connect to the Intelligence Server for which you are
triggering the initial load.

2. Under Administration > Configuration Managers > Events, right-click the Load Metadata Object Telemetry event and select Trigger.

Load Object Telemetry via Command Manager


1. Using Command Manager, connect to the Intelligence Server for which
you wish to trigger the initial load.

2. Trigger the Event for the Scheduled Administration Task to send all
metadata object information to MicroStrategy Messaging Services.

TRIGGER EVENT "Load Metadata Object Telemetry";


Verify the Initial Load of Object Telemetry


To verify whether the initial load process was successful, review the
MetadataObjectTelemetry log file.

By default, the MetadataObjectTelemetry log is enabled. To turn off the log, navigate to the MicroStrategy Diagnostics and Performance Logging Tool and uncheck the MetadataObjectTelemetry log.

l On Windows, this file is located in <install_path>\Common Files\MicroStrategy\Log\MetadataObjectTelemetry.log.

l On Linux, this file is located in /var/log/MicroStrategy/MetadataObjectTelemetry.log.

Understanding the MetadataObjectTelemetry Log


This log file provides detailed information on the status of your initial load,
such as if the initial load was successfully triggered and how many projects
were loaded.

There are three types of messages in this log file:

l Metadata object telemetry message

l Project object telemetry message

l Subscription telemetry message

Metadata Object Telemetry Message

The first message on the log file is the metadata object telemetry start
message. If the initial load is successfully triggered, you will see the
following information:

l Metadata object telemetry start: Indicates the initial load has started.


l ObjectTelemetryID: Indicates the unique identifier of the initial load. Each triggered initial load has a unique ID.

l Project distribution info: Indicates the host and projects that are
being loaded. If you are using a clustered environment, each node will
appear in this section with its associated projects.

For Example

2019-09-02 04:32:59.416-04:00 [HOST:tec-w-004718][SERVER:CastorServer]
[PID:3208][THR:1600][MetadataObjectTelemetry][Info]
[UID:54F3D26011D2896560009A8E67019608]
[SID:5068987ECF4302F929BA2DCF45DF405D][OID:0]
[ObjectTelemetryID:5707FDFC4046F238782CC89C8DA94DD8] Metadata object
telemetry start. Project distribution info: (tec-w-004718, [Human Resources
Analysis Module, MicroStrategy Tutorial, Platform Analytics,
Configuration]).

The above message indicates that the initial load was triggered since the
Metadata object telemetry start message appears. This load is
also given a unique identifier:
ObjectTelemetryID:5707FDFC4046F238782CC89C8DA94DD8. This
identifier will appear in subsequent messages related to this initial load.

This message also provides a record of how many projects are going to load.
The Project distribution info line indicates the node, tec-w-
004718, that contains four projects: Human Resources Analysis
Module, MicroStrategy Tutorial, Platform Analytics, and
Configuration.

If this were a clustered environment, the Project distribution info line would contain each node with its associated projects. For example, (TEC-W-002270, [MicroStrategy Tutorial]), (TEC-W-002613, [New Project, Configuration]).


Project Object Telemetry Message

This message is logged when each project starts, progresses, and finishes the object telemetry process. It's only logged by the node that is the primary server of the project.

The project object telemetry messages appear as:

l Project object telemetry start: Indicates the project has started the object telemetry process and how many objects will be sent to Kafka from the project.

l Project object telemetry in progress: Indicates the number of objects sent to Kafka, the number of objects that failed to be sent to Kafka, and the number of objects that need to be sent to Kafka.

l Project object telemetry finish: Indicates the project has finished the object telemetry process and how many objects were successfully sent to Kafka.

For Example: Project Object Telemetry Start

2019-09-02 04:32:59.437-04:00 [HOST:tec-w-004718][SERVER:CastorServer]
[PID:3208][THR:1600][MetadataObjectTelemetry][Info]
[UID:54F3D26011D2896560009A8E67019608]
[SID:5068987ECF4302F929BA2DCF45DF405D][OID:0]
[ObjectTelemetryID:5707FDFC4046F238782CC89C8DA94DD8] Project
'Configuration' object telemetry start. About 588 objects will be sent.

This is the project's start message. It indicates that the project, Configuration, is beginning the object telemetry process and that there are 588 objects that will be sent to Kafka from the project.

This number may not represent the final number of objects sent to Kafka
because objects can be added or deleted when the project object telemetry
is being sent.


For Example: Project Object Telemetry In Progress

2019-09-02 04:32:59.524-04:00 [HOST:tec-w-004718][SERVER:CastorServer]
[PID:3208][THR:1600][MetadataObjectTelemetry][Info]
[UID:54F3D26011D2896560009A8E67019608]
[SID:5068987ECF4302F929BA2DCF45DF405D][OID:0]
[ObjectTelemetryID:5707FDFC4046F238782CC89C8DA94DD8] Project
'Configuration' object telemetry in progress. 46 objects are sent to Kafka
successfully, 0 objects failed to send to Kafka. A total of about 588
objects need to be sent.
2019-09-02 04:32:59.610-04:00 [HOST:tec-w-004718][SERVER:CastorServer]
[PID:3208][THR:1600][MetadataObjectTelemetry][Info]
[UID:54F3D26011D2896560009A8E67019608]
[SID:5068987ECF4302F929BA2DCF45DF405D][OID:0]
[ObjectTelemetryID:5707FDFC4046F238782CC89C8DA94DD8] Project
'Configuration' object telemetry in progress. 147 objects are sent to Kafka
successfully, 0 objects failed to send to Kafka. A total of about 588
objects need to be sent.

This is an example of two project object telemetry in progress messages.
Each time 100 objects are sent to Kafka, a new in progress message is
logged. In the first log, 46 objects were sent to Kafka. The second log
appeared when 147 objects were successfully sent to Kafka. In both logs, 0
objects failed to send.

For Example: Project Object Telemetry Finish

2019-09-02 04:33:00.013-04:00 [HOST:tec-w-004718][SERVER:CastorServer]
[PID:3208][THR:1600][MetadataObjectTelemetry][Info]
[UID:54F3D26011D2896560009A8E67019608]
[SID:5068987ECF4302F929BA2DCF45DF405D][OID:0]
[ObjectTelemetryID:5707FDFC4046F238782CC89C8DA94DD8] Project
'Configuration' object telemetry finish. 588 objects were sent in total.

This is the project's finished message. The project, Configuration, has successfully sent 588 objects to Kafka.


Subscription Telemetry Message

This message is logged when the cluster node starts, progresses, and is
finished with the subscription instance telemetry process.

The subscription telemetry messages appear as:

l Subscription instance telemetry start: Indicates the node has started the subscription instance telemetry and how many subscription instances will be sent to Kafka from the project.

l Subscription instance telemetry progress: Indicates the number of subscription instances sent to Kafka, the number of subscription instances that failed to be sent to Kafka, and the total subscriptions that need to be sent to Kafka.

l Subscription instance telemetry finish: Indicates the node has finished the subscription instance telemetry and how many objects were successfully sent to Kafka.

For Example: Subscription Instance Telemetry Start

2019-09-03 22:39:14.977-04:00 [HOST:tec-w-004718][SERVER:CastorServer]
[PID:3208][THR:12568][MetadataObjectTelemetry][Info]
[UID:54F3D26011D2896560009A8E67019608]
[SID:CFEEB1FA81CFBF01553D3E9EE9F6CEE1][OID:0]
[ObjectTelemetryID:974CB30B43A997627A648E89D786C438] Subscription instance
telemetry start. About 26 objects will be sent.

This message indicates the subscription instance telemetry process has started and that 26 objects will be sent to Kafka.

For Example: Subscription Instance Telemetry Progress

2019-09-03 22:39:14.980-04:00 [HOST:tec-w-004718][SERVER:CastorServer]
[PID:3208][THR:12568][MetadataObjectTelemetry][Info]
[UID:54F3D26011D2896560009A8E67019608]
[SID:CFEEB1FA81CFBF01553D3E9EE9F6CEE1][OID:0]
[ObjectTelemetryID:974CB30B43A997627A648E89D786C438] Subscription instance
telemetry in progress. 26 objects are sent to Kafka successfully, 0 objects
failed to send to Kafka. A total of about 26 objects need to be sent.

This message indicates that the subscription instance telemetry process is
in progress. Each time 100 objects are sent to Kafka, a new progress
message is logged. In this log, only 26 objects needed to be sent to Kafka
and 26 objects were successfully sent.

For Example: Subscription Instance Telemetry Finish

2019-09-03 22:39:14.982-04:00 [HOST:tec-w-004718][SERVER:CastorServer]
[PID:3208][THR:12568][MetadataObjectTelemetry][Info]
[UID:54F3D26011D2896560009A8E67019608]
[SID:CFEEB1FA81CFBF01553D3E9EE9F6CEE1][OID:0]
[ObjectTelemetryID:974CB30B43A997627A648E89D786C438] Subscription instance
telemetry finish. 26 objects were sent in total.

This message indicates that the current node finished the subscription instance telemetry process and that 26 subscriptions were successfully sent to Kafka.

Re-trigger the Initial Load of Object Telemetry


If an initial load is interrupted or fails, the process must be re-triggered.

Re-trigger from Developer


1. Open Developer and connect to the Project Source containing the
Platform Analytics project.

If you are running MicroStrategy Developer on Windows for the first time, run it as an administrator: right-click the program icon and select Run as Administrator.


This is necessary in order to properly set the Windows registry keys. For more information, see KB43491.

2. Navigate to Administration > Configuration Managers > Events.

3. Right-click the event Load Metadata Object Telemetry and choose Trigger.

Re-trigger from Command Manager


1. Using Command Manager, connect to the Intelligence Server for which
you wish to re-trigger the initial load.

2. Execute the following command:

TRIGGER EVENT "Load Metadata Object Telemetry";

Client Telemetry Configuration


Client telemetry was introduced as a preview feature in MicroStrategy 2021 Update 8 (December 2022). Starting in MicroStrategy 2021 Update 9, the feature is available out-of-the-box.

Follow the procedures listed below to configure client telemetry.

1. Prerequisites

2. Configure Library Server to Send Telemetry to Telemetry Server

3. Enable Client Telemetry for MicroStrategy Projects

4. Disable Client Telemetry for MicroStrategy Projects

5. Enable/Disable TLS


1. Prerequisites
1. Install and configure Platform Analytics before enabling client telemetry.

2. End users must be using clients outlined in the scope of the feature.

2. Configure Library Server to Send Telemetry to Telemetry Server
Only users with administrator privileges can configure the Telemetry server in Workstation. This means you should be an Admin user with the Configure statistics privilege.

1. In Workstation, connect to the environment of interest.

2. Right-click the environment and choose Properties.

Choose Get Info if you are using a Mac.

3. Verify the Telemetry server connection information, such as machine name and port number, is populated. If the status is Connected, skip to 3. Enable Client Telemetry for MicroStrategy Projects.

The default machine name is populated as given for producer.kafkaProperties.bootstrap.servers in the Library configOverride.properties file under:

Windows

C:\Program Files (x86)\Common Files\MicroStrategy\Tomcat\apache-tomcat-9.0.68\webapps\MicroStrategyLibrary\WEB-INF\classes\config

Linux


/opt/tomcat/webapps/MicroStrategyLibrary/WEB-INF/classes/config/configOverride.properties
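For reference, the entry in configOverride.properties takes the following form; the host and port here are illustrative placeholders only:

producer.kafkaProperties.bootstrap.servers = telemetry-host.example.com:9092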

4. To configure the Telemetry server, start by entering the Telemetry server information for the connected environment, such as the machine name and the corresponding host/IP, port, and TLS specification. You must configure TLS for Kafka and Library prior to enabling it from Workstation. MicroStrategy does not recommend using localhost or 127.0.0.1.

5. You can enter as many nodes as necessary by clicking Edit. If you are using a clustered Telemetry server, ensure all node connectivity information is entered and all nodes are connectable.


6. Verify that the status of the Telemetry server connection is set to Connected after applying the settings to enable client telemetry.

3. Enable Client Telemetry for MicroStrategy Projects


1. Enabling client telemetry requires that the Telemetry server
configuration is performed first. Otherwise, the client telemetry option
is Disabled.

2. Use the toggle to Enable Client Telemetry. Enabling client telemetry for an environment enables basic stats and client telemetry for all active projects.


3. If a new project is added after enabling client telemetry, or basic stats are disabled for some projects using Command Manager, there is an option to Update client telemetry for all projects.

4. Disable Client Telemetry for MicroStrategy Projects


1. Disable the Enable Client Telemetry toggle.

2. At the prompt, click Apply to disable basic statistics, advanced statistics, and client telemetry for all projects.

This setting controls all of the projects in your metadata. If you want to
re-enable basic statistics for select projects using Command Manager,
see Enable Statistics from Command Manager.

5. Enable/Disable TLS
Enabling TLS enables a secured connection between the Library and
Telemetry servers.

To enable TLS from Workstation, you must manually configure TLS for the Telemetry and Library servers. If TLS is not configured correctly for both servers, attempting to enable TLS from Workstation will fail.

Follow the instructions below to:

l Configure the Telemetry Server

l Configure the Library Server

If you have a clustered Telemetry server, make sure TLS is configured properly on each node.


1. Use the TLS toggle to enable a TLS connection.

2. Click the TLS toggle again to disable the TLS connection.

Configure One-Way SSL between Library and Telemetry Servers
Before configuring one-way SSL between the Library and Telemetry servers,
see KB484968 to set up the certificates, keys, and Telemetry Server nodes.

This procedure provides instructions on configuring one-way SSL between Library and Telemetry servers, assuming the setup on the Telemetry Server (all nodes of the cluster) and keys/certificates have been configured by following the KB article listed above.


1. Locate the configuration file. The path of this file may vary depending
on your configured install path for Tomcat.

Windows

C:\Program Files (x86)\Common Files\MicroStrategy\Tomcat\apache-tomcat-9.0.65\webapps\MicroStrategyLibrary\WEB-INF\classes\config\configOverride.properties

Linux

/opt/tomcat/webapps/MicroStrategyLibrary/WEB-INF/classes/config/configOverride.properties

If there is no configOverride.properties file in the folder, you must create it.

In the file, all fields that must be modified have a description and
example available in configDefault.properties, which is in the
same folder.

2. Add the truststore path and password to the config file. Open
configOverride.properties and add the following fields if they do
not exist. If you make any changes, restart Tomcat.

trustStore.path=/<path to>/client.truststore.jks
trustStore.passphrase=<truststore password>

3. Configure the IP/port and security protocol. Open configOverride.properties, add/modify the following fields, and restart Tomcat.

producer.kafkaProperties.security.protocol = SSL
producer.kafkaProperties.bootstrap.servers = host1:port1,host2:port2,...


Upgrade the Platform Analytics Project


As of 2019, you can upgrade your Platform Analytics project in the metadata of your connected Intelligence Server. Upgrading the project is recommended with each platform and update release in order to bring the latest dossiers, attributes, metrics, and reporting optimizations into the Platform Analytics project.

1. Open Configuration Wizard.

2. Select Upgrade existing environment to MicroStrategy Secure Enterprise, and click Next.

3. Select Upgrade Platform Analytics Project, and click Next.

4. Provide the following information:

l User Name: Enter the MicroStrategy user name that can access the
Intelligence Server.

If this is your first time connecting to the MicroStrategy Intelligence Server, use the user name Administrator without a password.

l Password: Enter the password for the MicroStrategy user that can
access the Intelligence Server.

5. Select the MySQL/PostgreSQL DSN for the Platform Analytics Repository.

6. Enter your User Name and Password for the DSN.

7. Click Next.

8. Click Apply. The Configuration Wizard automatically applies one of the following configuration files depending on the status of the user.

For new MySQL users:

l PlatformAnalyticsConfigurationNew.scp


For existing MySQL users:

l PlatformAnalyticsConfigurationUpgrade.scp

For new PostgreSQL users:

l PlatformAnalyticsConfigurationNew_PostgreSQL.scp

For existing PostgreSQL users:

l PlatformAnalyticsConfigurationUpgrade_PostgreSQL.scp

9. To manually update the project settings, see Configure the Platform Analytics Project.

Upgrade Platform Analytics Repository


Upgrading the repository is required with each platform and update release in order to benefit from new Platform Analytics warehouse features, defect fixes, and database structure optimizations.

The Configuration Wizard provides the following options:

l Host: Type the host name of the Platform Analytics warehouse. By default,
this is set to the last successful connection value.

l Port: Type the port number of the Platform Analytics warehouse. By default, this is set to the last successful connection value.

l User Name: Type the user name for the Platform Analytics warehouse. By default, this is set to the value from the PAConsumerConfig.yaml file.

l Password: Type the password for the Platform Analytics warehouse user.

Depending on the warehouse type you choose for the Host and Port, you
must set the parameter whDbType to either "postgresql" or "mysql" in
the PAConsumerConfig.yaml file.
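For example, for a PostgreSQL warehouse the PAConsumerConfig.yaml entry reads:

whDbType: postgresql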

The default path is:


l Linux: /opt/MicroStrategy/PlatformAnalytics/Conf

l Windows: C:\Program Files (x86)\MicroStrategy\Platform Analytics\conf

Click Next to proceed.

You can also update the Platform Analytics repository using the
Configuration Wizard in interactive mode.

How to Update the Repository in Interactive Mode


To update the Platform Analytics repository using the Configuration Wizard
in interactive mode on Windows:

1. In a Windows console, enter one of the following commands:

l For 64-bit, enter MACfgWiz_64.

l For 32-bit, enter MACfgWiz.

2. Click Enter.

3. Type 2 and click Enter to create a new response.ini file.

4. Type 5 and click Enter to upgrade your existing environment to MicroStrategy Analytics Enterprise.

5. Type 3 and click Enter to upgrade your Platform Analytics repository.

6. Enter your Platform Analytics warehouse database credentials. By default, the server name, port number, and user name are set to the last successful connection value.

If you did not change the values, leave as default. The default password can be found at C:\Program Files (x86)\Common Files\MicroStrategy\Default_Accounts.txt.

7. By default, the configuration is saved as Response.ini in the common files path, C:\Program Files (x86)\Common Files\MicroStrategy. You can leave the field blank to use the default name or type a different name, and then click Enter. The response.ini file is generated, and you are prompted whether to run the configuration immediately.

8. Type Y and click Enter to run the configuration.



To update the Platform Analytics repository using the Configuration Wizard in interactive mode on Linux:

1. In a Linux console window, browse to HOME_PATH where HOME_PATH is the specified home directory during installation.

2. Browse to the bin directory.

3. At the command prompt, type mstrcfgwiz-editor, then click Enter. The Configuration Wizard opens in command line mode.

4. Click Enter.

5. Type 2 and click Enter to create a new response.ini file.

6. Type 5 and click Enter to upgrade your existing environment to MicroStrategy Analytics Enterprise.

7. Type 3 and click Enter to upgrade your Platform Analytics repository.

8. Enter your Platform Analytics warehouse database credentials. By default, the server name, port number, and user name are set to the last successful connection value.

9. By default, the configuration is saved as Response.ini in the /HOME_PATH/ directory, where HOME_PATH is the directory you specified as the Home Directory during installation. You can leave the field blank to use the default name or type a different name, and then click Enter. The response.ini file is generated, and you are prompted whether to run the configuration immediately.

10. Type Y and click Enter to run the configuration.


Advanced Configuration
The following sections cover optional setup procedures and are not required
for configuring Platform Analytics.

Configure Platform Analytics Using the PAConsumerConfig.yaml File
Platform Analytics stores all configuration parameters for the Telemetry Store (previously Platform Analytics Consumer) and the Identity Telemetry producer (previously Usher Metadata Producer) in the PAConsumerConfig.yaml file. For more information about the Platform Analytics architecture, see Platform Analytics Architecture and Services.

The YAML file structure is updated each release with new configuration
parameters or Telemetry Server topics. All modifiable values are retained
after an upgrade, so any customized parameters are not lost. However, all
newly-added fields are set to the default after an upgrade.

The YAML file is located on the machine where Platform Analytics was
installed using the MicroStrategy Installation Wizard.

The default path is:

l Linux: /opt/MicroStrategy/PlatformAnalytics/Conf

l Windows: C:\Program Files (x86)\MicroStrategy\Platform Analytics\conf

How to Read a YAML File


In a YAML file, indentation is used to express nested values. For example:

parentConfig:
  numberOfConsumers: 1
  pollTimeoutMillisec: 1000
  kafkaProperties:
    bootstrap.servers: "10.27.17.167:9092"

YAML uses the key: value notation. A single space is required after the
colon.

To read more about YAML functionality, see Learn YAML in Y minutes.

PAConsumerConfig.yaml Specifications
The PAConsumerConfig file consists of the following parts:

l paParentConfig: Common configurations for the Telemetry Server (Kafka) and Telemetry Manager (Zookeeper) across TopicsGroups.

l paEtlConfig: Configuration for the Telemetry Store (Platform Analytics Consumer) to perform data processing.

l usherServerConfig: Connectivity configuration parameters for connecting to the Identity Server database to collect Identity metadata information.

l paTopicsGroupList: List of Telemetry Server TopicsGroups and their configuration.

Each topicsGroup inherits settings from defaultConfig and parentConfig. Each topicsGroup can also override specific settings that it inherits; see the sketch below.
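For instance, in the sample file below, the UsherLog group sets its own consumer count while the other groups use one consumer; a minimal sketch of a single group entry:

paTopicsGroupList:
  -
    name: UsherLog
    numberOfConsumers: 2
    usherFlag: true
    topics:
      - Mstr.IdentityServer.ActionLog
      - Mstr.IdentityServer.LocationLog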

Sample PAConsumerConfig.yaml File

Below is a sample PAConsumerConfig.yaml file. Reference the installed file for the latest version.

---
paParentConfig:
  consumerGroupSuffix: ~
  overrideKafkaOffsets: true
  kafkaTopicNumberOfReplicas: 1
  kafkaTopicsDoNotCreateList:
  zooKeeperConnection: 127.0.0.1:2181
  ignoreUsherTopics: false
  kafkaConsumerProperties:
    bootstrap.servers: 127.0.0.1:9092

paEtlConfig:
  redisConnection:
    redisServer: 127.0.0.1
    redisPort: 6379
    redisPassword:
  dailyETLConfiguration:
    scheduleHour: 5
    scheduleMin: 2
    viewCutoffRangeInDays: 14
    beaconDedup: true
    locationDedup: true
  warehouseDbConnection:
    whHost: 127.0.0.1
    whUser: root
    whPasswd:
    whPort: 3306
    whDb: platform_analytics_wh
    whClientCertificateKeyStore:
    whClientCertificateKeyStoreType:
    whClientCertificateKeyStorePassword:
    whTrustCertificateKeyStore:
    whTrustCertificateKeyStoreType:
    whTrustCertificateKeyStorePassword:
  pgWarehouseDbConnection:
    pgWhHost: localhost
    pgWhUser: mstr_pa
    pgWhPasswd: Ugjx+93ROzBsA2gwBOWT5Qlu6hbfg5frTBmLmg==,970sBwUbi4EowB/4
    pgWhPort: 5432
    pgWhDb: platform_analytics_wh
    pgWhSSLcert: ~
    pgWhSSLkey: ~
    pgWhSSLrootcert: ~
    pgWhSSLmode: ~
  geoLocationTopic: Mstr.PlatformAnalytics.Geolocation
  kafkaHealthCheckTopic: mstr-pa-health-check
  usherProducerKeys:
    - SourceProvisionBadgePhone
    - SourceProvisionOrganization
    - SourceEnvironmentVariables
    - SourceOrganization
    - SourceOrganizationBadge
    - SourceBadgeAdminRole
    - SourceBadge
    - SourceGateway
    - SourceGatewayHierarchyAndDef
    - SourceBeacon
    - SourceDevice
  googleAPIConfig:
    googleApiKey:
    googleApiClientId:
    businessQuota: 100000
    freeQuota: 2500
    sleepTimeQuery: 5
  usherLookupTopic: Mstr.PlatformAnalytics.UsherLookup

usherServerConfig:
  usherServerDbConnection:
    usherServerMysqlAesKeyPath:
    usherServerUrl:
    usherServerUser:
    usherServerPassword:

paTopicsGroupList:
  -
    name: UsherInboxMessage
    numberOfConsumers: 1
    usherFlag: true
    topics:
      - Mstr.IdentityServer.ActionLog
  -
    name: UsherInboxResponse
    numberOfConsumers: 1
    usherFlag: true
    topics:
      - Mstr.IdentityServer.ActionLog
  -
    name: Geolocation
    numberOfConsumers: 1
    usherFlag: true
    topics:
      - Mstr.PlatformAnalytics.Geolocation
  -
    name: UsherLog
    numberOfConsumers: 2
    usherFlag: true
    topics:
      - Mstr.IdentityServer.ActionLog
      - Mstr.IdentityServer.LocationLog

paParentConfig Settings

The paParentConfig settings are common configurations for the Telemetry Server (Kafka) and Telemetry Manager (Zookeeper) across TopicsGroups. For example:

---
paParentConfig:
  consumerGroupSuffix: ~
  overrideKafkaOffsets: true
  kafkaTopicNumberOfReplicas: 1
  kafkaTopicsDoNotCreateList:
  zooKeeperConnection: 127.0.0.1:2181
  ignoreUsherTopics: false
  kafkaConsumerProperties:
    bootstrap.servers: 127.0.0.1:9092

Below are settings defined for both the paParentConfig and topicsGroup configuration, along with the defaultConfig values for each setting.


l consumerGroupSuffix (default: ~, a null value): This field is used for testing or recovering data in a production environment. Appended to the topicsGroup name, it makes up the actual Consumer Group ID (also referred to as group.id). A key use for this field is to change the property in order to generate a new Consumer Group ID; generating a new Consumer Group ID causes all records remaining in the Telemetry Server topics to be reprocessed. For example, in order to repopulate the Platform Analytics Warehouse (PA WH), you can modify this property to a unique string after re-initializing the PA WH by using the PA custom installer. The unique string should not have been used before in the same environment; a recommended strategy is to include a timestamp in it, for example reprocess_incorrect_log_johndoe_1330111282018.

l overrideKafkaOffsets (default: true): If true, uses topic-partition offset values in the database to set the Kafka offsets for a given Consumer Group at startup. If false, uses offset values stored in Kafka. It's recommended to keep the default configuration.

l kafkaTopicNumberOfReplicas (default: 1, or the number of Telemetry Servers): The replica factor configured for all Telemetry Server topics. It is set during the installation of Platform Analytics depending on whether a cluster of Telemetry Servers or a single node is installed. This value should match the number of clustered Telemetry Server nodes in order to take advantage of Kafka's failure tolerance.

l kafkaTopicsDoNotCreateList (default: empty string): The list of topics under topicsGroupList that will not be created by the Telemetry Store (Platform Analytics Consumer) upon startup. This field should not be modified.

l zooKeeperConnection (default: 127.0.0.1:2181, or the pre-configured Zookeeper cluster quorum): The comma-separated Telemetry Manager (Zookeeper) cluster configuration. For example: FQDN1:PORT1,FQDN2:PORT2,FQDN3:PORT3. The default port is 2181, set during the installation of Platform Analytics.

l ignoreUsherTopics (default: false): This value is set during the installation of Platform Analytics depending on whether the Identity Server was installed: false if Identity Server is installed and configured; true if Identity Server is not installed.

l bootstrap.servers (default: 127.0.0.1:9092, or the pre-configured Kafka broker quorum): The comma-separated Telemetry Server (Kafka) cluster configuration (e.g. FQDN1:PORT1,FQDN2:PORT2,FQDN3:PORT3). The default port is 9092, set during the installation of Platform Analytics.

paEtlConfig Settings

paEtlConfig:
  redisConnection:
    redisServer: 127.0.0.1
    redisPort: 6379
    redisPassword: ~
  dailyETLConfiguration:
    scheduleHour: 5
    scheduleMin: 2
    viewCutoffRangeInDays: 14
    currentFactDataKeepDays: 180
    beaconDedup: true
    locationDedup: true
  whDbType: postgresql
  warehouseDbConnection:
    whHost: 127.0.0.1
    whUser: root
    whPasswd: r9oJP5d6
    whPort: 3306
    whDb: platform_analytics_wh
  pgWarehouseDbConnection:
    pgWhHost: localhost
    pgWhUser: mstr_pa
    pgWhPasswd: Ugjx+93ROzBsA2gwBOWT5Qlu6hbfg5frTBmLmg==,970sBwUbi4EowB/4
    pgWhPort: 5432
    pgWhDb: platform_analytics_wh
    pgWhSSLcert: ~
    pgWhSSLkey: ~
    pgWhSSLrootcert: ~
    pgWhSSLmode: ~
  geoLocationTopic: Mstr.PlatformAnalytics.Geolocation
  kafkaHealthCheckTopic: mstr-pa-health-check
  usherProducerKeys:
    - SourceProvisionBadgePhone
    - SourceProvisionOrganization
    - SourceEnvironmentVariables
    - SourceOrganization
    - SourceOrganizationBadge
    - SourceBadgeAdminRole
    - SourceBadge
    - SourceGateway
    - SourceGatewayHierarchyAndDef
    - SourceBeacon
    - SourceDevice
  googleAPIConfig:
    googleApiKey: ~
    googleApiClientId: ~
    businessQuota: 100000
    freeQuota: 2500
    sleepTimeQuery: 5
  usherLookupTopic: Mstr.PlatformAnalytics.UsherLookup

Below are settings defined for paEtlConfig, along with the defaultConfig values for each setting.

l redisServer (default: 127.0.0.1): The fully qualified domain name (FQDN) or IP for the Telemetry Cache (Redis server). For best performance, use a local Telemetry Cache instance.

l redisPort (default: 6379): The port for the Telemetry Cache (Redis server). The default is 6379, set during installation.

l redisPassword (default: empty string): The password to connect to the Telemetry Cache (Redis server) if password authentication is enabled. By default, password authentication is not enabled.

l scheduleHour (default: 5): The hour specified for the Platform Analytics daily ETL to kick off. The default value 5 means 05:00 UTC.

l scheduleMin (default: 2): The minute of the scheduled hour for the Platform Analytics daily ETL to run. The default value 2 stands for 2 minutes past the scheduled hour.

l viewCutoffRangeInDays (default: 14): The number of days of data that the view tables in the Platform Analytics Cube hold in memory during republish. For example, a 14-day default means that the view tables and Platform Analytics Cube include data from the last rolling 14 days. The data returned by the Platform Analytics project schema is never limited. For more details, see Modify the Amount of Data Returned In-Memory for the Platform Analytics Cube.

l currentFactDataKeepDays (default: 180): The number of days of data that the current fact tables in the Platform Analytics Repository hold. For the PostgreSQL warehouse, historical tables are created for fact tables whose data volume may be very large, such as access_transactions and fact_sql_stats; the historical table names start with the prefix "historical_". For example, a 180-day default means that the current fact tables include data from the last rolling 180 days, and all other data is stored in the historical fact tables.

l beaconDedup (default: true): A flag for determining whether de-duplication of the MicroStrategy Badge beacon tracking data is turned on. If true, the Telemetry Store ETL removes any duplicate beacon actions if all of the following conditions are met: the logs are from the same user, interacting with the same beacon, within 180 seconds. Turning this flag on helps keep the minimum valid data points for analysis without excess data collection.

l locationDedup (default: true): A flag for determining whether de-duplication of the MicroStrategy Badge location tracking data is turned on. If true, the Telemetry Store ETL removes any duplicate location tracking actions if all of the following conditions are met: the logs are from the same user, within 60 seconds. Turning this flag on helps keep the minimum valid data points for analysis without excess data collection.

l whDbType (default: postgresql): The database type used for the Platform Analytics Repository. Beginning with MicroStrategy 2020, the default database is "postgresql", but database type "mysql" is also supported.

l whHost (default: pre-configured via installation): The fully qualified domain name (FQDN) or IP of the Platform Analytics Repository where the Telemetry Store will store data for reporting.

l whUser (default: pre-configured via installation): The username used to connect to the Platform Analytics Repository where the Telemetry Store will store data for reporting.

l whPasswd (default: pre-configured via installation): The password of the user used to connect to the Platform Analytics Repository where the Telemetry Store will store data for reporting.

l whPort (default: 3306): The port of the MySQL database server used for the Platform Analytics Repository (MySQL Server Database). The default is 3306, set during installation.

l whDb (default: platform_analytics_wh): The database of the Platform Analytics warehouse. This should not be changed.

l pgWhHost (default: localhost): The fully qualified domain name (FQDN) or IP of the PostgreSQL database used for the Platform Analytics Repository. Because the PostgreSQL server is installed on the same machine as Platform Analytics, the default value is "localhost".

l pgWhUser (default: mstr_pa): The PostgreSQL database username used to connect to the Platform Analytics Repository where the Telemetry Store will store data for reporting.

l pgWhPasswd (default: pre-configured via installation): The PostgreSQL database password of the user used to connect to the Platform Analytics Repository where the Telemetry Store will store data for reporting. This password is encrypted during installation. You can find the unencrypted password in the Default_Accounts.txt file, located under C:\Program Files (x86)\Common Files\MicroStrategy\ on Windows or ./install/Repository/ on Linux.

l pgWhPort (default: 5432): The port of the PostgreSQL database server used for the Platform Analytics Repository (PostgreSQL Server Database). The default is 5432, set during installation.

l pgWhDb (default: platform_analytics_wh): The database of the Platform Analytics warehouse. This should not be changed.

l pgWhSSLcert (default: empty string): For future SSL authentication support.

l pgWhSSLkey (default: empty string): For future SSL authentication support.

l pgWhSSLrootcert (default: empty string): For future SSL authentication support.

l pgWhSSLmode (default: empty string): For future SSL authentication support.

l geoLocationTopic (default: Mstr.PlatformAnalytics.Geolocation): The Telemetry Server (Kafka) topic for location data geocoding processing from the MicroStrategy Badge mobile app. This should not be changed.

l kafkaHealthCheckTopic (default: mstr-pa-health-check): The Telemetry Server (Kafka) topic used for the health check. This should not be changed.

l usherProducerKeys (default: SourceProvisionBadgePhone, SourceProvisionOrganization, SourceEnvironmentVariables, SourceOrganization, SourceOrganizationBadge, SourceBadgeAdminRole, SourceBadge, SourceGateway, SourceGatewayHierarchyAndDef, SourceBeacon, SourceDevice): This should not be changed.

l logging (default: True): Flag for determining if the Google geocoding API usage logging is enabled.

l alerting (default: True): Flag for determining if the Google geocoding API usage alerting is enabled.

l googleApiKey (default: empty string): The business key to allow making Google geocoding API calls with a business quota.

l googleApiClientId (default: empty string): The business client ID to allow making Google geocoding API calls with a business quota.

l businessQuota (default: 100000): The daily quota for making Google geocoding API calls without any developer or business keys.

l callLimit (default: 1000): For internal use only.

l sleepTimeQuery (default: 5): The number of seconds to pause between the Google geocoding API calls for location data processing. This should not be changed.

l usherLookupTopic (default: Mstr.PlatformAnalytics.UsherLookup): The Kafka topic used for Usher server metadata information telemetry. This should not be changed.

usherServerConfig Settings

usherServerConfig:
  usherServerDbConnection:
    usherServerMysqlAesKeyPath:
    usherServerUrl:
    usherServerUser:
    usherServerPassword:

Below are settings defined for usherServerConfig, along with the defaultConfig values for each setting.

l usherServerMysqlAesKeyPath (default: pre-configured via installation): The AES key file path used to decrypt the password.

l usherServerUrl (default: pre-configured via installation): The JDBC connectivity URL to connect to the Usher Server meta information database.

l usherServerUser (default: pre-configured via installation): The username to connect to the Usher Server meta information database.

l usherServerPassword (default: pre-configured via installation): The password to connect to the Usher Server meta information database.

paTopicsGroupList Settings

The following settings are defined only at the topicsGroup level, not at the parentConfig level.

paTopicsGroupList:
  -
    name: UsherInboxMessage
    numberOfConsumers: 1
    usherFlag: true
    topics:
      - Mstr.IdentityServer.ActionLog
  -
    name: UsherInboxResponse
    numberOfConsumers: 1
    usherFlag: true
    topics:
      - Mstr.IdentityServer.ActionLog
  -
    name: Geolocation
    numberOfConsumers: 1
    usherFlag: true
    topics:
      - Mstr.PlatformAnalytics.Geolocation
  -
    name: UsherLog
    numberOfConsumers: 2
    usherFlag: true
    topics:
      - Mstr.IdentityServer.ActionLog
      - Mstr.IdentityServer.LocationLog

Below are settings defined for paTopicsGroupList.

l name: Name for the topicsGroup. Must be unique among all topicsGroup names.

l numberOfConsumers: The number of consumer processes assigned to this topicsGroup when forming a consumer group.

l usherFlag: true if this topicsGroup is Usher-related.

l topics: The list of Kafka topics that the consumers in the topicsGroup subscribe to.

Manually Deploy the Platform Analytics Repository


By default, the Platform Analytics Repository is created when running the
MicroStrategy Installation Wizard during installation. However, if your
warehouse becomes corrupted or if you only have a partial deployment of
the repository, you may want to recreate the repository without re-running
the Wizard. To support recreating the warehouse, utilize the Platform
Analytics Warehouse Custom Installer.

The Platform Analytics Custom Installer requires the same prerequisites as the MicroStrategy Installation Wizard. For more information, see Platform Analytics Prerequisites.

How to Manually Create the Platform Analytics Repository

Copyright © 2023 All Rights Reserved 335


Plat fo r m An alyt ics

1. Open Windows Services, locate MicroStrategy Platform Analytics Consumer and MicroStrategy Usher Metadata Producer. Right-click each service and select Stop.

2. From the Platform Analytics directory, located at C:\Program Files (x86)\MicroStrategy\Platform Analytics\, open the bin folder and run the Platform Analytics Custom Installation script:

platform-analytics-custom-install.bat

3. When prompted, enter one of the following options:

l Enter -o <install> to create the Platform Analytics warehouse database from scratch.

This will always drop the platform_analytics_wh database (if any) and re-create it. If the warehouse had processed data previously, this option will result in a loss of data.

l Enter -o <update> to update the Platform Analytics warehouse database from an existing version to the latest version.

This option will not drop the platform_analytics_wh database and will preserve the historical data.

4. From the Platform Analytics directory, open the log folder and open the file platform-analytics-installation.log.

5. Verify you have the line Installation finished successfully at the conclusion of the file to indicate that the installation was successful.

6. Open Windows Services, locate MicroStrategy Platform Analytics Consumer and MicroStrategy Usher Metadata Producer. Right-click each service and select Start.


1. In the Platform Analytics directory, located at /MicroStrategy/install/PlatformAnalytics/, open the bin folder and run the following commands:

./platform-analytics-consumer.sh stop
./platform-analytics-usher-lookup-producer.sh stop

2. Run the Platform Analytics Custom Installation script:

platform-analytics-custom-install.sh

3. When prompted, enter one of the following options:

l Enter -o <install> to create the Platform Analytics warehouse database from scratch.

l Enter -o <update> to update the Platform Analytics warehouse database from an existing version to the latest version.

4. In the Platform Analytics directory, open the log folder and open the file platform-analytics-installation.log.

5. Verify you have the line Installation finished successfully at the conclusion of the file to indicate that the installation was successful (see the example check after these steps).

6. In the Platform Analytics directory, open the bin folder and run the
following commands:

./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start
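As a quick way to perform the verification in step 5 from a shell, a search like the following can be used; the path assumes the default Linux Platform Analytics directory shown above:

grep "Installation finished successfully" /MicroStrategy/install/PlatformAnalytics/log/platform-analytics-installation.log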

Configure the Platform Analytics Daily ETL


The Platform Analytics Daily ETL is a process triggered by the Platform
Analytics Consumer to execute certain aspects of data processing that only
need to occur once a day. The process can be configured to execute at any
user-defined time using the PAConsumerConfig.yaml configuration file.


It's recommended to schedule the Daily ETL during off hours.

Before configuring the Daily ETL, you must have MicroStrategy and Platform
Analytics fully installed and configured. For more information, see Installing
Platform Analytics.

How to Configure the Daily ETL


1. Open Windows Services, right-click MicroStrategy Platform Analytics
Consumer and select Stop.

2. In the Platform Analytics directory, located at C:\Program Files (x86)\MicroStrategy\Platform Analytics, open the conf folder.

3. Edit the PAConsumerConfig.yaml file.

4. Under the dailyETLConfiguration heading, replace the value after scheduleHour: and scheduleMin: with the hour and minute you want the daily ETL to be triggered.

The time configuration references UTC time zone in a 24 hour format.

For example, if your server is set to UTC time zone, and you want to
schedule the Daily ETL to run at 11:05 pm, you would set the values as:

dailyETLConfiguration:
scheduleHour: 23
scheduleMin: 05

5. Save the file.

6. Open Windows Services, right-click MicroStrategy Platform Analytics Consumer and select Start.

If you have more questions about Daily ETL, see Commonly Asked
Questions and Troubleshooting.


1. In the Platform Analytics directory, located at /opt/MicroStrategy/PlatformAnalytics/, open the bin folder and run the command:

./platform-analytics-consumer.sh stop

2. Still in the Platform Analytics directory, open the conf folder.

3. Edit the PAConsumerConfig.yaml file.

4. Under the dailyETLConfiguration heading, replace the value after scheduleHour: and scheduleMin: with the hour and minute you want the daily ETL to be triggered.

The time configuration references UTC time zone in a 24 hour format.

For example, if your server is set to UTC time zone, and you want
to schedule the Daily ETL to run at 11:05 pm, you would set the
values as:

dailyETLConfiguration:
scheduleHour: 23
scheduleMin: 05

5. Save the file.

6. In the Platform Analytics directory, open the bin folder and run the
command:

./platform-analytics-consumer.sh start

If you have more questions about Daily ETL, see Commonly Asked
Questions and Troubleshooting.


Configure SSL for PostgreSQL and Platform Analytics Consumer
Communication between the Platform Analytics consumer and a PostgreSQL
database can be configured to use SSL for encryption and authentication.
For more information, see the PostgreSQL documentation.

PostgreSQL Server Side Configuration

You must have OpenSSL version 1.1.0 or later installed.

1. Run the OpenSSL application as an Administrator to generate a private key. You must provide a passphrase when generating the private key:

openssl is not included at the beginning of every line because the commands are being executed with the OpenSSL application. If the certificates and keys are being generated on a Unix system, you may need to include openssl before every line.

genrsa -des3 -out server.key 1024
rsa -in server.key -out server.key

2. Create the server certificate:

The OpenSSL application may need to be relaunched to successfully create the server certificate.

-subj is a shortcut to avoid prompting for information.

-x509 produces a self signed certificate rather than a certificate request.

req -new -key server.key -days 3650 -out server.crt -x509 -subj "/CN=IP or
HOSTNAME"

3. Open Command Prompt or File Explorer and navigate to where the server
certificate is located.

4. Copy the newly created server certificate to create the certificate authority:

copy server.crt root.crt

5. Add the following to the postgresql.conf file:

listen_addresses = '*' # what IP address(es) to listen on;

Uncomment and change the following:

ssl = on
ssl_ca_file = '\\LOCATION_OF_FILE\\root.crt'
ssl_cert_file = '\\LOCATION_OF_FILE\\server.crt'
ssl_key_file = '\\LOCATION_OF_FILE\\server.key'
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
ssl_prefer_server_ciphers = on

6. In the pg_hba.conf file, add or modify the following:

You may need to comment out host entries.

hostssl enforces SSL for DB_USERNAME.

clientcert=1 enforces client authentication (two way authentication).

hostssl platform_analytics_wh DB_USERNAME CLIENT_IP/32 cert clientcert=1

7. Save the changes to both files.

8. Open Start > Services and restart PostgreSQL or MicroStrategy Repository.


Client Side Setup


1. Create the private key and certificate:

genrsa -des3 -out postgresql.key 1024
rsa -in postgresql.key -out postgresql.key
req -new -key postgresql.key -out postgresql.csr -subj "/CN=DB_USERNAME"
x509 -req -in postgresql.csr -CA root.crt -CAkey server.key -out postgresql.crt -CAcreateserial

If you receive an error, you may need to comment out tsa_policy1 in the
openssl.cnf file. Save and relaunch openssl as an Administrator.

# Policies used by the TSA examples.
#tsa_policy1 = 1.2.3.4.1
tsa_policy2 = 1.2.3.4.5.6
tsa_policy3 = 1.2.3.4.5.7

2. Convert the private key into DER format using the command below:

The JDBC PostgreSQL driver used by Platform Analytics requires that the
key file be in DER format rather than PEM format.

pkcs8 -topk8 -inform PEM -in postgresql.key -outform DER -nocrypt -out
postgresql.key.der

3. Depending on the ODBC driver being used for PostgreSQL, a key store may
be required. To create a key store:

pkcs12 -export -in postgresql.crt -inkey postgresql.key -out postgresql.p12

4. Copy the files that were created to the client machine and update the
PAConsumerConfig.yaml file with the below path to the certificate and
key.

The client key is in DER format.



pgWarehouseDbConnection:
  pgWhHost: YOUR_HOST
  pgWhUser: DB_USERNAME
  pgWhPasswd: YOUR_PASSWORD
  pgWhPort: 5432
  pgWhDb: platform_analytics_wh
  pgWhSSLcert: \LOCATION_OF_FILE\postgresql.crt
  pgWhSSLkey: \LOCATION_OF_FILE\postgresql.key.der
  pgWhSSLrootcert: \LOCATION_OF_FILE\root.crt
  pgWhSSLmode: verify-ca

PostgreSQL Server Side Configuration


1. Generate a private key using OpenSSL. You must provide a
passphrase when generating a private key:

openssl genrsa -des3 -out server.key 1024

2. Remove the passphrase:

openssl rsa -in server.key -out server.key

3. Grant access to the key file.

The permissions on server.key must disable any access to world or group. Do this by setting the chmod permission to 0600. Alternatively, the file can be owned by root and have group read access, that is, chmod 0640 permissions. This setup is intended for installations where certificate and key files are managed by the operating system. The user under which the PostgreSQL server runs should then be made a member of the group that has access to those certificate and key files.

Change the file permissions and owner to the system user running
PostgreSQL:


chown PSQL_OWNER:PSQL_OWNER server.key
chmod 0600 server.key

If owned by root:

chown root:root server.key
chmod 0640 server.key

4. Create the server certificate and certificate authority:

-subj is a shortcut to avoid prompting for the info.

-x509 produces a self signed certificate rather than a certificate request.

openssl req -new -key server.key -days 3650 -out server.crt -x509
-subj "/CN=IP OR HOSTNAME"
cp server.crt root.crt

5. Do the following for all certificates created:

chown PSQL_OWNER:PSQL_OWNER server.crt
chmod 0600 server.crt
chown PSQL_OWNER:PSQL_OWNER root.crt
chmod 0600 root.crt

6. Add the following to the postgresql.conf file:

listen_addresses = '*' # what IP address(es) to listen on;

Uncomment and change the following:

ssl = on
ssl_ca_file = '/LOCATION_OF_FILE/root.crt'
ssl_cert_file = '/LOCATION_OF_FILE/server.crt'

Copyright © 2023 All Rights Reserved 344


Plat fo r m An alyt ics

ssl_key_file = '/LOCATION_OF_FILE/server.key'
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
ssl_prefer_server_ciphers = on

7. In the pg_hba.conf file, add or modify the following:

You may need to comment out host entries.

hostssl enforces SSL for DB_USERNAME.

clientcert=1 enforces client authentication (two way authentication).

hostssl platform_analytics_wh DB_USERNAME CLIENT_IP/32 cert clientcert=1

8. Restart PostgreSQL or MicroStrategy Repository.

If PostgreSQL was installed outside of MicroStrategy, use a command like the following:

systemctl restart postgresql-11

If MicroStrategy Repository is used for Platform Analytics, use the following command:

This command cannot be run as root.

cd /opt/MicroStrategy/PlatformAnalytics/bin
./mstr_pg_ctl restart


Client Side Setup


1. Create the private key and certificate:

openssl genrsa -des3 -out postgresql.key 1024
openssl rsa -in postgresql.key -out postgresql.key
openssl req -new -key postgresql.key -out postgresql.csr -subj "/CN=DB_USERNAME"
openssl x509 -req -in postgresql.csr -CA root.crt -CAkey server.key -out postgresql.crt -CAcreateserial

2. Convert the private key into DER format using the command below:

The JDBC PostgreSQL driver used by Platform Analytics requires that the key file be in DER format rather than PEM format.

openssl pkcs8 -topk8 -inform PEM -in postgresql.key -outform DER -nocrypt -out postgresql.key.der

3. Depending on the ODBC driver being used for PostgreSQL, a key store may be required. To create a key store:

openssl pkcs12 -export -in postgresql.crt -inkey postgresql.key -out postgresql.p12

4. Copy the files that were created to the client machine and update
the PAConsumerConfig.yaml file with the below path to the
certificate and key.

The client key is in DER format.

pgWarehouseDbConnection:
  pgWhHost: YOUR_HOST
  pgWhUser: DB_USERNAME
  pgWhPasswd: YOUR_PASSWORD
  pgWhPort: 5432
  pgWhDb: platform_analytics_wh
  pgWhSSLcert: /LOCATION_OF_FILE/postgresql.crt
  pgWhSSLkey: /LOCATION_OF_FILE/postgresql.key.der
  pgWhSSLrootcert: /LOCATION_OF_FILE/root.crt
  pgWhSSLmode: verify-ca

Incremental Refresh Schedule for the Platform Analytics Cube


By default, the Platform Analytics cube is scheduled to complete a full
republish every hour on the 30th minute. For customers with very large
datasets, this full republish could take a significant amount of time as it
loads every row of every table/view into the in-memory cube. Incremental
refresh subscriptions consist of smaller data fetches from non-static tables
that are appended to the already existing cube cache loaded in memory.
Executing faster than a full republish, incremental refresh schedules allow
updates to the Platform Analytics cube much more often so the data stays
fresh and relevant.

When adopting incremental refresh subscriptions, it is recommended to change the default republish schedule to occur less frequently. For example, a full cube refresh can be performed once a day with an incremental refresh every 15 minutes. This can be adjusted based on the performance of your cube publication and incremental refresh.

Creating the Incremental Refresh subscription requires access to MicroStrategy Developer and MicroStrategy Web.

Creating the Incremental Refresh Schedule


1. Open MicroStrategy Developer and log in to your project source.

2. Expand Administration > Configuration Managers > Schedules.

3. Right-click in the list of schedules and select New > Schedule.

4. Change the name to something descriptive and click Next.


PlatformAnalytics_Every15Min_IR will be used for this example.

5. Select Time-triggered and click Next.

6. Select No end date and click Next.

7. Set the Recurrence Pattern to Daily / Every 1 Day.

8. Set the Time to trigger to Execute All Day Every / Executing every:
15 minutes.

9. Click Next then Finish.

Apply the Schedule to the Platform Analytics Project


1. Log in to MicroStrategy Web and open the Platform Analytics project.

2. Browse to Shared Reports > 2. Utilities.

3. Right-click the Platform Analytics Cube and select Schedule.

4. Click Show Advanced Options.

5. For each table listed:

l Check the box next to the table name in the Data Source column

l Change the Refresh Policy to Update Existing Data and Add New
Data.

l Click Set Source.

l Click on the DSN for the Platform Analytics Warehouse.

l Choose the namespace for the Platform Analytics Warehouse. The default is platform_analytics_wh.

l Begin typing the view name into the Table text box.

l Double-click the incremental view corresponding to the selected table and click Finish.

Table mapping

Table Name Incremental View

fact_access_transactions_view fact_access_transactions_incremental

fact_action_cube_cache_view fact_action_cube_cache_incremental

fact_action_security_filter_view fact_action_security_filter_incremental

fact_latest_cube_cache fact_latest_cube_cache_incremental

fact_object_component fact_object_component_incremental

lu_account lu_account_incremental

lu_cache lu_cache_incremental

lu_cache_object lu_cache_object_incremental

lu_component_object lu_component_object_incremental

lu_history_list_message_view lu_history_list_message_incremental

lu_object lu_object_incremental

lu_project_object lu_project_object_incremental

lu_session_view lu_session_incremental

lu_status lu_status_incremental

lu_subscription lu_subscription_incremental

lu_validating_account lu_validating_account_incremental

6. Change the Schedule drop-down to the schedule created in the previous section, PlatformAnalytics_Every15Min_IR.

7. Set the Schedule name field to something descriptive, such as Platform Analytics Cube PlatformAnalytics_Every15Min_IR.

8. Click Schedule, then Done.


The Platform Analytics Cube will now be incrementally refreshed with the latest data every 15 minutes rather than every hour.

Configure Platform Analytics to Use a Different Repository Database Name
Platform Analytics is configured with a PostgreSQL database and schema
named platform_analytics_wh out-of-the-box. It is possible to configure
Platform Analytics to use a different database as long as the schema is
named platform_analytics_wh.

1. Connect to your PostgreSQL instance.

2. Create a user. If the mstr and mstr_pa users already exist from the
MicroStrategy installation, skip this step.

CREATE USER mstr_pa WITH ENCRYPTED PASSWORD '<password>' NOSUPERUSER CREATEDB;

3. Create the Database and Grant Database privileges:

CREATE DATABASE YOUR_DATABASE_NAME;
GRANT ALL ON DATABASE YOUR_DATABASE_NAME TO mstr_pa;

4. Update the pg_hba.conf file, if necessary:

host YOUR_DATABASE_NAME mstr_pa 127.0.0.1/32 password


host YOUR_DATABASE_NAME mstr_pa ::1/128 password
host YOUR_DATABASE_NAME mstr_pa samenet password
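Note that pg_hba.conf changes only take effect after PostgreSQL reloads its configuration. One way to trigger the reload, assuming you are connected as a superuser (restarting the PostgreSQL service also works):

SELECT pg_reload_conf();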

5. Open the PAConsumerConfig.yaml file and update pgWhDb and any other necessary fields.

whDbType: postgresql

pgWarehouseDbConnection:
pgWhHost: YOUR_HOST
pgWhUser: mstr_pa
pgWhPasswd: YOUR_PASSWORD
pgWhPort: 5432
pgWhDb: YOUR_DATABASE_NAME

6. Start populating the Platform Analytics repository by running platform-analytics-custom-install.

Windows: platform-analytics-custom-install.bat -o
install

Linux: ./platform-analytics-custom-install.sh -o
install

7. Check the etl_pa_version table. The DDLs and procedures should match the latest version.

SELECT * FROM etl_pa_version;

8. Grant the required schema privileges and alter roles:

GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA platform_analytics_wh TO mstr_pa;
ALTER ROLE mstr_pa IN DATABASE YOUR_DATABASE_NAME SET search_path TO platform_analytics_wh;

9. Update the DSN used for Platform Analytics by changing the database
in the DSN to the one you created.
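To confirm the new database and search path are wired up correctly, you can connect as mstr_pa and inspect the search path. A minimal sketch, assuming psql is available and using the placeholder names from the steps above; the output should list platform_analytics_wh:

psql -h YOUR_HOST -p 5432 -U mstr_pa -d YOUR_DATABASE_NAME -c "SHOW search_path;"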

Creating a Project Using a Response File


As an alternative to Creating a Project Using the Configuration Wizard, you
can create a response file with the Platform Analytics project information
and use that response file with the Configuration Wizard to automatically
create the Platform Analytics project on your Intelligence server and
configure the connection to the Platform Analytics repository.

Creating a Response File

MicroStrategy recommends that you create a response file through the graphical interface of the Configuration Wizard. You step through the Configuration Wizard and make your selections, as described in Creating the Platform Analytics project. When you reach the Summary page of the Configuration Wizard, do not click Finish. Instead, click Save. You are prompted to save your selections in a response file.

You can also create or modify a response file with a text editor. For
information on all the parameters in the response file, see Platform Analytics
Response File Parameters.

MicroStrategy supplies a blank response file template, Response.ini, in the Common Files folder of your MicroStrategy installation. By default, this folder is C:\Program Files (x86)\Common Files\MicroStrategy.

Platform Analytics Response File Parameters

The table below lists the available parameters and the functionality of available options for each parameter.

[PAProjectHeader]
Options in this portion refer to creating the Platform Analytics project on this machine.

PAProject=
Defines whether to create the Platform Analytics project, as determined by the following values:

l 1: Create the Platform Analytics project on this machine.

l 0: Do not create the Platform Analytics project.

PAProjectEncryptPwd=
Defines whether the passwords are encrypted in the response file, as determined by the following values:

l 0: The passwords are not encrypted in the response file, which enables you to modify the passwords in the response file using a text editor. You can then distribute the response file to multiple users with various login and password credentials. However, be aware that this can compromise your database security if you do not remove the passwords from the response file before distributing it.

l 1: Encrypts the passwords in the response file, which ensures that your passwords are secure. This is the default behavior.

PAProjectDSSUser=
The user name to log in to the Platform Analytics project.

PAProjectDSSPwd=
The password for the user name above. This may be encrypted, depending on the PAProjectEncryptPwd= setting.

PAProjectPkgFile=
The full path and file name of the MicroStrategy Platform Analytics project package file used to create the project. On Windows, by default this is C:\Program Files (x86)\Common Files\MicroStrategy\PlatformAnalyticsProjectObjects.mmp. Beginning with MicroStrategy 2020, this file will be removed and the default project package will be used.

PAConfigurePkgFile=
The full path and file name of the MicroStrategy Platform Analytics project configuration package file used to create the project. On Windows, by default this is C:\Program Files (x86)\Common Files\MicroStrategy\PlatformAnalyticsConfigurationObjects.mmp. Beginning with MicroStrategy 2020, this file will be removed and the default project package will be used.

PAProjectDSNName=
The Data Source Name for the database that contains your Platform Analytics repository.

PAProjectDSNUserName=
The user name to connect to the Platform Analytics repository database.

PAProjectDSNUserPwd=
The password for the user name above for the Platform Analytics repository database. This may be encrypted, depending on the PAProjectEncryptPwd= setting.

PAProjectDSNPrefix=
The prefix for the Platform Analytics repository tables.

Example

Using the above parameters, this is an example of a completed response file.

[PAProjectHeader]
PAProject=1
PAProjectEncryptPwd=1
PAProjectDSSUser=Administrator
PAProjectDSSPwd=password
PAProjectDSNName=PA_WAREHOUSE
PAProjectDSNUserName=root
PAProjectDSNUserPwd=password
PAProjectDSNPrefix=

Executing a Response File


You can execute a response file in any of the following ways:

l From within the Configuration Wizard. See To Use a Response File with the Configuration Wizard.

l From the Windows command line. See To Use a Response File through the Windows Command Line. This enables users to run the file without using any graphical user interfaces.

l In UNIX or Linux. See To Use a Response File through the Configuration Wizard in UNIX or Linux or To Use a Response File through the UNIX/Linux Command Line.


To Use a Response File with the Configuration Wizard

1. From the Windows Start menu choose All Programs > MicroStrategy
Tools > Configuration Wizard.

2. Click Load.

3. Browse to the path where the response file is saved and click Open.

4. An overview of all the configuration tasks performed by the response file is displayed. Review the tasks and click Finish.

To Use a Response File through the Windows Command Line

Enter the following command in the Windows command line:

macfgwiz.exe -r "Path\response.ini"

Where Path\ is the fully qualified path to the response file.

For example, a common location of a response file is:

C:\Program Files (x86)\Common Files\MicroStrategy\RESPONSE.INI

If an error message is displayed, check the path and name you supplied for
the response file and make any required changes.

To Use a Response File through the Configuration Wizard in UNIX or Linux

From a UNIX or Linux console window:

1. Browse to <HOME_PATH> where <HOME_PATH> is the directory you specified as the Home Directory during installation.

2. Browse to the folder bin.

3. Enter mstrcfgwiz-editor and press Enter.

4. Press Enter on the Configuration Wizard Welcome page.


5. Type 1 to use a response file and press Enter.

6. Type the fully qualified path to the response.ini file and press Enter.

For example:

/home/username/MicroStrategy/RESPONSE.INI

If an error message is displayed, check the path and name you supplied for
the response file and make any required changes.

To Use a Response File through the UNIX/Linux Command Line

1. From a UNIX or Linux console window, browse to <HOME_PATH> where <HOME_PATH> is the directory you specified as the Home Directory during installation.

2. Browse to the folder bin.

3. Enter the following command in the command line and press Enter.

mstrcfgwiz-editor -response /Path/response.ini

Where Path is the fully qualified path to the response file.

For example, a common location of a response file is:

/home/username/MicroStrategy/RESPONSE.INI

If an error message is displayed, check the path and name you supplied for
the response file and make any required changes.

Platform Analytics Project Configuration Scripts


The Configuration Wizard will apply the appropriate script listed below
during the project creation or upgrade process.


New Project Creation

When selecting a MySQL DSN on a new project creation, the Configuration Wizard applies PlatformAnalyticsConfigurationNew.scp:

ALTER PROJECT CONFIGURATION USEWHLOGINEXEC FALSE MAXREPORTEXECTIME 3600 MAXSCHEDULEREPORTEXECTIME 3600 MAXNOINTELLIGENTCUBERESULTROWS -1 MAXNOREPORTRESULTROWS -1 MAXNOINTRESULTROWS -1 MAXQUOTAIMPORT 20480 INTELLIGENTCUBEMAXRAM 20480 DATAFETCHINGMEMORY 20480 NULLDISPLAYWAREHOUSE "N/A" NULLDISPLAYCROSSTABULATOR "N/A" IN PROJECT "Platform Analytics";

ALTER PROJECT "Platform Analytics" DESCRIPTION "Platform Analytics (Version 2021 Update 8)";

ALTER DBINSTANCE "Platform Analytics" LOWTHREADS 8;

ALTER VLDBSETTING PROPERTYNAME "Parallel Query Execution" PROPERTYVALUE "2", PROPERTYNAME "Maximum Parallel Queries Per Report" PROPERTYVALUE "8" FOR PROJECT "Platform Analytics";

ALTER VLDBSETTING PROPERTYNAME "Report Pre Statement 1" PROPERTYVALUE "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;", PROPERTYNAME "Report Post Statement 1" PROPERTYVALUE "COMMIT;" FOR DBINSTANCE "Platform Analytics";

ALTER PASTATISTICS BASICSTATS ENABLED DETAILEDREPJOBS FALSE DETAILEDDOCJOBS FALSE JOBSQL FALSE COLUMNSTABLES FALSE IN PROJECT "Platform Analytics";

CREATEIFNOTEXIST SCHEDULE "PlatformAnalytics_Every_Hour_00" STARTDATE 08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 60 MINUTES STARTTIME 00:00;

CREATEIFNOTEXIST SCHEDULE "PlatformAnalytics_Every_Hour_10" STARTDATE 08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 60 MINUTES STARTTIME 00:10;

CREATEIFNOTEXIST SCHEDULE "PlatformAnalytics_Every_Hour_20" STARTDATE 08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 60 MINUTES STARTTIME 00:20;

CREATEIFNOTEXIST SCHEDULE "PlatformAnalytics_Every_Hour_30" STARTDATE 08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 60 MINUTES STARTTIME 00:30;

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Compliance Telemetry Cube Every Hour_00" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_00" USER "Administrator" CONTENT GUID 0FF99AD811E880872E300080EF45AF4F IN PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Communicator Inbox Messages Cube Every Hour_10" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_10" USER "Administrator" CONTENT GUID 0A5C499011E74AE6000000802F870D74 IN PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Remote Diagnostic Cube Every Hour_20" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_20" USER "Administrator" CONTENT GUID 0C73984111E863782D6F0080EFE5EF4E IN PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Platform Analytics Cube Every Hour_30" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_30" USER "Administrator" CONTENT GUID 7C5CD53E11E529671FB700802FF7BD86 IN PROJECT "Platform Analytics";

TRIGGER SUBSCRIPTION "Platform Analytics Cube Every Hour_30" MANAGE ALL OWNER "Administrator" FOR PROJECT "Platform Analytics";

TRIGGER SUBSCRIPTION "Communicator Inbox Messages Cube Every Hour_10" MANAGE ALL OWNER "Administrator" FOR PROJECT "Platform Analytics";

TRIGGER SUBSCRIPTION "Remote Diagnostic Cube Every Hour_20" MANAGE ALL OWNER "Administrator" FOR PROJECT "Platform Analytics";

TRIGGER SUBSCRIPTION "Compliance Telemetry Cube Every Hour_00" MANAGE ALL OWNER "Administrator" FOR PROJECT "Platform Analytics";

ALTER VLDBSETTING PROPERTYNAME "Intermediate Table Type" PROPERTYVALUE "3" FOR DBINSTANCE "Platform Analytics";

ADD ACE FOR DOCUMENT "Object Embedded Dossier" COMBINEACEFORTRUSTEE IN FOLDER "\Public Objects\Templates" GROUP "System Monitors" ACCESSRIGHTS VIEW FOR PROJECT "Platform Analytics";

ADD ACE FOR DOCUMENT "Project Embedded Dossier" COMBINEACEFORTRUSTEE IN FOLDER "\Public Objects\Templates" GROUP "System Monitors" ACCESSRIGHTS VIEW FOR PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Dimensions Cube Every Hour_00" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_00" USER "Administrator" CONTENT GUID 78134AB0464000DF758DCF818AE97491 IN PROJECT "Platform Analytics";

When selecting a PostgreSQL DSN on a new project creation, the Configuration Wizard applies PlatformAnalyticsConfigurationNew_PostgreSQL.scp:

ALTER PROJECT CONFIGURATION USEWHLOGINEXEC FALSE MAXREPORTEXECTIME 3600 MAXSCHEDULEREPORTEXECTIME 3600 MAXNOINTELLIGENTCUBERESULTROWS -1 MAXNOREPORTRESULTROWS -1 MAXNOINTRESULTROWS -1 MAXQUOTAIMPORT 20480 INTELLIGENTCUBEMAXRAM 20480 DATAFETCHINGMEMORY 20480 NULLDISPLAYWAREHOUSE "N/A" NULLDISPLAYCROSSTABULATOR "N/A" IN PROJECT "Platform Analytics";

ALTER PROJECT "Platform Analytics" DESCRIPTION "Platform Analytics (Version 2021 Update 8)";

ALTER DBINSTANCE "Platform Analytics" LOWTHREADS 8;

ALTER VLDBSETTING PROPERTYNAME "Parallel Query Execution" PROPERTYVALUE "2", PROPERTYNAME "Maximum Parallel Queries Per Report" PROPERTYVALUE "8" FOR PROJECT "Platform Analytics";

ALTER PASTATISTICS BASICSTATS ENABLED DETAILEDREPJOBS FALSE DETAILEDDOCJOBS FALSE JOBSQL FALSE COLUMNSTABLES FALSE IN PROJECT "Platform Analytics";

CREATEIFNOTEXIST SCHEDULE "PlatformAnalytics_Every_Hour_00" STARTDATE 08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 60 MINUTES STARTTIME 00:00;

CREATEIFNOTEXIST SCHEDULE "PlatformAnalytics_Every_Hour_10" STARTDATE 08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 60 MINUTES STARTTIME 00:10;

CREATEIFNOTEXIST SCHEDULE "PlatformAnalytics_Every_Hour_20" STARTDATE 08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 60 MINUTES STARTTIME 00:20;

CREATEIFNOTEXIST SCHEDULE "PlatformAnalytics_Every_Hour_25" STARTDATE 08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 60 MINUTES STARTTIME 00:25;

CREATEIFNOTEXIST SCHEDULE "PlatformAnalytics_Every_Hour_30" STARTDATE 08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 60 MINUTES STARTTIME 00:30;

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Compliance Telemetry Cube Every Hour_00" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_00" USER "Administrator" CONTENT GUID 0FF99AD811E880872E300080EF45AF4F IN PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Communicator Inbox Messages Cube Every Hour_10" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_10" USER "Administrator" CONTENT GUID 0A5C499011E74AE6000000802F870D74 IN PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Remote Diagnostic Cube Every Hour_20" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_20" USER "Administrator" CONTENT GUID 0C73984111E863782D6F0080EFE5EF4E IN PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Platform Analytics Cube Every Hour_30" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_30" USER "Administrator" CONTENT GUID 7C5CD53E11E529671FB700802FF7BD86 IN PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Capacity Planning Cube Every Hour_25" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_25" USER "Administrator" CONTENT GUID 62F592B111E9C8DDAF600080EF15494D IN PROJECT "Platform Analytics";

TRIGGER SUBSCRIPTION "Platform Analytics Cube Every Hour_30" MANAGE ALL OWNER "Administrator" FOR PROJECT "Platform Analytics";

TRIGGER SUBSCRIPTION "Communicator Inbox Messages Cube Every Hour_10" MANAGE ALL OWNER "Administrator" FOR PROJECT "Platform Analytics";

TRIGGER SUBSCRIPTION "Remote Diagnostic Cube Every Hour_20" MANAGE ALL OWNER "Administrator" FOR PROJECT "Platform Analytics";

TRIGGER SUBSCRIPTION "Compliance Telemetry Cube Every Hour_00" MANAGE ALL OWNER "Administrator" FOR PROJECT "Platform Analytics";

TRIGGER SUBSCRIPTION "Capacity Planning Cube Every Hour_25" MANAGE ALL OWNER "Administrator" FOR PROJECT "Platform Analytics";

ALTER VLDBSETTING PROPERTYNAME "Intermediate Table Type" PROPERTYVALUE "3" FOR DBINSTANCE "Platform Analytics";

ADD ACE FOR DOCUMENT "Object Embedded Dossier" COMBINEACEFORTRUSTEE IN FOLDER "\Public Objects\Templates" GROUP "System Monitors" ACCESSRIGHTS VIEW FOR PROJECT "Platform Analytics";

ADD ACE FOR DOCUMENT "Project Embedded Dossier" COMBINEACEFORTRUSTEE IN FOLDER "\Public Objects\Templates" GROUP "System Monitors" ACCESSRIGHTS VIEW FOR PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Dimensions Cube Every Hour_00" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_00" USER "Administrator" CONTENT GUID 78134AB0464000DF758DCF818AE97491 IN PROJECT "Platform Analytics";

Project Upgrades

When selecting a MySQL DSN while upgrading the Platform Analytics project, the Configuration Wizard applies PlatformAnalyticsConfigurationUpgrade.scp:

ALTER PROJECT "Platform Analytics" DESCRIPTION "Platform Analytics (Version


2021 Update 8)";

ALTER VLDBSETTING PROPERTYNAME "Intermediate Table Type" PROPERTYVALUE "3" FOR


DBINSTANCE "Platform Analytics" ;

ALTER PROJECT CONFIGURATION NULLDISPLAYWAREHOUSE "N/A"


NULLDISPLAYCROSSTABULATOR "N/A" IN PROJECT "Platform Analytics";

ADD ACE FOR DOCUMENT "Object Embedded Dossier" COMBINEACEFORTRUSTEE IN FOLDER


"\Public Objects\Templates" GROUP "System Monitors" ACCESSRIGHTS VIEW FOR
PROJECT "Platform Analytics";

ADD ACE FOR DOCUMENT "Project Embedded Dossier" COMBINEACEFORTRUSTEE IN


FOLDER "\Public Objects\Templates" GROUP "System Monitors" ACCESSRIGHTS
VIEW FOR PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Dimensions Cube Every Hour_00" FOR

USER "Administrator" CONTENT GUID 78134AB0464000DF758DCF818AE97491 IN

Copyright © 2023 All Rights Reserved 360


Plat fo r m An alyt ics

78134AB0464000DF758DCF818AE97491 INPROJECT AB0464000DF758DCF818AE97491 IN


PROJECT "Platform Analytics";

When selecting a PostgreSQL DSN while upgrading the Platform Analytics project, the Configuration Wizard applies PlatformAnalyticsConfigurationUpgrade_PostgreSQL.scp:

ALTER PROJECT "Platform Analytics" DESCRIPTION "Platform Analytics (Version


2021 Update 8)";

CREATEIFNOTEXIST SCHEDULE "PlatformAnalytics_Every_Hour_25" STARTDATE


08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY
EXECUTE ALL DAY EVERY 60 MINUTES STARTTIME 00:25;

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Capacity Planning Cube Every Hour_


25" FOR OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_
25" USER "Administrator" CONTENT GUID 62F592B111E9C8DDAF600080EF15494D IN
PROJECT "Platform Analytics";

TRIGGER SUBSCRIPTION "Capacity Planning Cube Every Hour_25" MANAGE ALL OWNER
"Administrator" FOR PROJECT "Platform Analytics";

ALTER VLDBSETTING PROPERTYNAME "Intermediate Table Type" PROPERTYVALUE "3" FOR


DBINSTANCE "Platform Analytics" ;

ALTER PROJECT CONFIGURATION NULLDISPLAYWAREHOUSE "N/A"


NULLDISPLAYCROSSTABULATOR "N/A" IN PROJECT "Platform Analytics";

ALTER VLDBSETTING PROPERTYNAME "Report Pre Statement 1" DEFAULTVALUE,


PROPERTYNAME "Report Post Statement 1" DEFAULTVALUE FOR DBINSTANCE "Platform
Analytics" ;

ADD ACE FOR DOCUMENT "Object Embedded Dossier" COMBINEACEFORTRUSTEE IN FOLDER


"\Public Objects\Templates" GROUP "System Monitors" ACCESSRIGHTS VIEW FOR
PROJECT "Platform Analytics";

ADD ACE FOR DOCUMENT "Project Embedded Dossier" COMBINEACEFORTRUSTEE IN


FOLDER "\Public Objects\Templates" GROUP "System Monitors" ACCESSRIGHTS
VIEW FOR PROJECT "Platform Analytics";

CREATEIFNOTEXIST CACHEUPDATESUBSCRIPTION "Dimensions Cube Every Hour_00" FOR


OWNER "Administrator" SCHEDULE "PlatformAnalytics_Every_Hour_
00" USER "Administrator" CONTENT GUID 78134AB0464000DF758DCF818AE97491 IN
PROJECT "Platform Analytics";


Configure Zookeeper ACLs

The znodes in Zookeeper have the World privilege by default. With the approach shown below, you can restrict administrative access for Zookeeper znodes to only the IP addresses of the Zookeeper, Kafka, and Consumer machines. Follow the steps below to change the ACL of znodes to an IP-based scheme.

l Connect to Zookeeper Client and Enable ACLs for Particular Nodes

l Test Consumer and Kafka

Connect to Zookeeper Client and Enable ACLs for Particular Nodes
1. Locate the file you need to connect to Zookeeper.

Windows

C:\Program Files (x86)\MicroStrategy\Messaging Services\Kafka\kafka_2.13-3.2.0\bin\windows\zookeeper-shell.bat

Linux

/opt/MicroStrategy/MessagingServices/Kafka/kafka_2.13-3.2.0/bin/zookeeper-shell.sh

2. Connect to Zookeeper using zookeeper-shell.bat <IP1>:<Port>,<IP2>:<Port>.

Windows

.\zookeeper-shell.bat 10.23.39.148:2181,10.23.35.115:2181

Linux

./zookeeper-shell.sh 10.23.36.181:2181,10.23.33.221:2181


3. Verify the znodes in Zookeeper have the World privilege by running the
following command:

getAcl /brokers

4. Reset ACLs on all of the znodes.

setAcl -R / ip:<IP1>:cdrwa,ip:<IP2>:cdrwa,ip:<IP3>:cdrwa,...

For example:

setAcl -R / ip:10.250.151.242:cdrwa,ip:10.250.155.99:cdrwa

5. Verify the ACL of individual nodes by displaying all of the allowed znodes using the following command:

getAcl /brokers
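With the ACLs applied, the output should list only ip entries rather than 'world,'anyone. A sketch of what to expect with the sample IPs above; the exact formatting varies by ZooKeeper version:

'ip,'10.250.151.242
: cdrwa
'ip,'10.250.155.99
: cdrwa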

Ensure that the IP addresses of all Kafka, Zookeeper, and Consumer machines are included for the system to continue to work. In addition to IP addresses, you can also include 127.0.0.1 for local host access.
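For example, to grant the same sample machines access and also allow local connections, the earlier setAcl command can be extended with a 127.0.0.1 entry:

setAcl -R / ip:10.250.151.242:cdrwa,ip:10.250.155.99:cdrwa,ip:127.0.0.1:cdrwa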

Test Consumer and Kafka


Once you set ACLs, test to make sure Consumer and Kafka are working.

1. Stop all Kafka nodes.

2. Stop all Zookeeper nodes.

3. Start all Zookeeper nodes.

4. After all Zookeeper nodes are up, start all Kafka nodes.

5. Start Consumer.

Post Configuration Maintenance
For long term maintenance of Platform Analytics, the following options are
available:

Migrate Data from a MySQL Database to a PostgreSQL Database
The Platform Analytics Data Migration tool is used to help existing
customers migrate their data from MySQL to the newly supported
PostgreSQL repository. This tool can help migrate both new and old versions
of MySQL dump files to the latest version of Platform Analytics.

Backup Prerequisites:

l C:\Program Files (x86)\MicroStrategy\Platform Analytics\PAConsumerConfig.yaml populated with:

warehouseDbConnection:

l whHost: 127.0.0.1

l whUser: root

l whPasswd: encrypted_password

l whPort: 3306

l whDb: platform_analytics_wh

l mysql-connector-java.jar is present in the PlatformAnalytics\lib directory.

l Disk space sufficient to hold a backup of your MySQL platform_analytics_wh database.

Restore Prerequisites:

l PAConsumerConfig.yaml populated with:

pgWarehouseDbConnection:

l pgWhHost: 127.0.0.1

l pgWhUser: postgres

l pgWhPasswd: encrypted password

l pgWhPort: 5432

l pgWhDb: platform_analytics_wh

l Path to .csv files from a previous backup of platform_analytics_wh.

l Enough disk space available to PostgreSQL to restore the backed up .csv files from MySQL.
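Taken together, the two prerequisite lists correspond to a PAConsumerConfig.yaml carrying both connections at once. This is a sketch only; keep the exact layout and key order of your shipped file, and note that the password values are the encrypted strings produced by the encryptor tool:

warehouseDbConnection:
  whHost: 127.0.0.1
  whUser: root
  whPasswd: <encrypted password>
  whPort: 3306
  whDb: platform_analytics_wh
pgWarehouseDbConnection:
  pgWhHost: 127.0.0.1
  pgWhUser: postgres
  pgWhPasswd: <encrypted password>
  pgWhPort: 5432
  pgWhDb: platform_analytics_wh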


Launching the Platform Analytics Data Migration Tool


1. Navigate to your Platform Analytics home directory and go into the bin
directory:

C:\Program Files (x86)\MicroStrategy\Platform Analytics\bin

2. Call the following script:

platform-analytics-data-migration-tool.bat

3. You will then be prompted with the following:

This is the Platform Analytics Data Migration Tool. The purpose of this
tool is to help migrate your data from an existing Mysql Warehouse to a new
PostgreSQL Warehouse.
Please select from the following options:
1) Backup
2) Restore
3) Backup and Restore
0) Exit

Migration Workflow

Backup
1. Provide the path to the directory where the MySQL backup will be stored.

2. The tool will then begin backing up the MySQL platform_analytics_wh specified in your PAConsumerConfig.yaml file, placing the backup in your specified path.


Restore
1. Provide the path to the directory where the MySQL backup is stored.

2. The tool will ask you to confirm that you are okay to drop your PostgreSQL platform_analytics_wh schema.

3. If yes is selected, the platform_analytics_wh schema will be dropped and recreated matching the version of your MySQL dump.

4. The backup data is then imported into the newly created platform_analytics_wh schema.

5. The platform_analytics_wh schema will then be upgraded to the latest version of Platform Analytics.

Recommended Upgrade Procedures

In-Place Upgrades
If you are performing an in-place upgrade, the best practice steps are as follows:

1. Confirm that the Platform Analytics Consumer is stopped. Data migration should not take place while new entries are still being processed.

2. Confirm that the PAConsumerConfig.yaml has the MySQL and PostgreSQL information shown in the prerequisites above.

3. Go to your Platform Analytics bin directory and call the platform-analytics-data-migration-tool.bat file.

4. Select the Backup and Restore option (3).

5. Enter the full desired directory path for the database to be backed up to and restored from.

6. Wait until the backup is complete. The tool will then ask whether it is okay to recreate the PostgreSQL warehouse; select yes.

7. The program will then restore your MySQL backup files into your new PostgreSQL warehouse and the data migration will be complete.

8. If you have installed Workstation and Service Registration, the service's grouping and dependency information in MicroStrategy Workstation's Topology view should be updated. When Topology is not updated, the view will show Consumer as dependent on five other services, including Repository (MySQL).

To update Topology in Workstation:

1. Locate the MicroStrategy-shipped java path. By default, this is C:\Program Files (x86)\Common Files\MicroStrategy\JRE\180_222.

2. Locate the Services Registration's installation directory. By default, this is C:\Program Files (x86)\MicroStrategy\Services Registration\jar.

3. Execute the following command:

"C:\Program Files (x86)\Common Files\MicroStrategy\JRE\180_222\Win64\bin\java" -jar "C:\Program Files (x86)\MicroStrategy\Services Registration\jar\svcsreg-admin.jar" migrate MicroStrategy-Platform-Analytics-Consumer MySQL PostgreSQL

4. Open Workstation and select the Topology tab. Consumer should now appear to depend on and belong to the same group as Store (PostgreSQL).

Parallel Upgrades
1. On your new MicroStrategy 2021 machine, populate the PAConsumerConfig.yaml with the MySQL and PostgreSQL information shown in the prerequisites above.

2. Copy the mysql-connector-java.jar from your previous installation to the Platform Analytics\lib directory on the new machine.

3. Go to your Platform Analytics bin directory and call the platform-analytics-data-migration-tool.bat file.

4. Select the Backup and Restore option (3).

5. Enter the full desired directory path for the database to be backed up to and restored from.

6. Wait until the backup is complete. The tool will then ask whether it is okay to recreate the PostgreSQL warehouse; select yes.

7. The program will then restore your MySQL backup files into your new PostgreSQL warehouse and the data migration will be complete.

8. If you have installed Workstation and Service Registration, the service's grouping and dependency information in MicroStrategy Workstation's Topology view should be updated. When Topology is not updated, the view will show Consumer as dependent on five other services, including Repository (MySQL).

To update Topology in Workstation:

1. Locate the MicroStrategy-shipped java path. By default, this is C:\Program Files (x86)\Common Files\MicroStrategy\JRE\180_222.

2. Locate the Services Registration's installation directory. By default, this is C:\Program Files (x86)\MicroStrategy\Services Registration\jar.

3. Execute the following command:

"C:\Program Files (x86)\Common Files\MicroStrategy\JRE\180_222\Win64\bin\java" -jar "C:\Program Files (x86)\MicroStrategy\Services Registration\jar\svcsreg-admin.jar" migrate MicroStrategy-Platform-Analytics-Consumer MySQL PostgreSQL

4. Open Workstation and select the Topology tab. Consumer should now appear to depend on and belong to the same group as Store (PostgreSQL).

The Platform Analytics Data Migration tool is used to help existing customers migrate their data from MySQL to the newly supported PostgreSQL repository. This tool can help migrate both new and old versions of MySQL dump files to the latest version of Platform Analytics.

Backup Prerequisites:

l /MicroStrategy/install/PlatformAnalytics/PAConsumerConfig.yaml populated with:

warehouseDbConnection:

l whHost: 127.0.0.1

l whUser: root

l whPasswd: encrypted_password

l whPort: 3306

l whDb: platform_analytics_wh

l mysql-connector-java.jar is present in the PlatformAnalytics/lib directory.

l Disk space sufficient to hold a backup of your MySQL platform_analytics_wh database.

Restore Prerequisites:


l PAConsumerConfig.yaml populated with:

pgWarehouseDbConnection:

l pgWhHost: 127.0.0.1

l pgWhUser: postgres

l pgWhPasswd: encrypted password

l pgWhPort: 5432

l pgWhDb: platform_analytics_wh

l Path to .csv files from a previous backup of platform_analytics_wh.

l Enough disk space available to PostgreSQL to restore the backed up .csv files from MySQL.

Launching the Platform Analytics Data Migration Tool


1. Navigate to your Platform Analytics home directory and go into the
bin directory:

/opt/mstr/MicroStrategy/PlatformAnalytics/bin

2. Run the following script:

./platform-analytics-data-migration-tool.sh

3. You will then be prompted with the following:

This is the Platform Analytics Data Migration Tool. The purpose of


this tool is to help migrate your data from an existing Mysql
Warehouse to a new PostgreSQL Warehouse.
Please select from the following options:
1) Backup
2) Restore
3) Backup and Restore
0) Exit

Copyright © 2023 All Rights Reserved 371


Plat fo r m An alyt ics

Migration Workflow

Backup
1. Provide the path to the directory where the MySQL backup will be stored.

2. The tool will then begin backing up the MySQL platform_analytics_wh specified in your PAConsumerConfig.yaml file, placing the backup in your specified path.

Restore
1. Provide the path to the directory where the MySQL backup is stored.

2. The tool will ask you to confirm that you are okay to drop your PostgreSQL platform_analytics_wh schema.

3. If yes is selected, the platform_analytics_wh schema will be dropped and recreated matching the version of your MySQL dump.

4. The backup data is then imported into the newly created platform_analytics_wh schema.

5. The platform_analytics_wh schema will then be upgraded to the latest version of Platform Analytics.

Recommended Upgrade Procedures

In-Place Upgrades
If you are performing an in-place upgrade, the best practice steps are as follows:


1. Confirm that the Platform Analytics Consumer is stopped. Data migration should not take place while new entries are still being processed.

2. Confirm that the PAConsumerConfig.yaml has the MySQL and PostgreSQL information shown in the prerequisites above.

3. Go to your PlatformAnalytics/bin directory and call the platform-analytics-data-migration-tool.sh file.

4. Select the Backup and Restore option (3).

5. Enter the full desired directory path for the database to be backed up to and restored from.

6. Wait until the backup is complete. The tool will then ask whether it is okay to recreate the PostgreSQL warehouse; select yes.

7. The program will then restore your MySQL backup files into your new PostgreSQL warehouse and the data migration will be complete.

8. The service's grouping and dependency information needs to be updated in MicroStrategy Workstation's Topology view. When Topology is not updated, Telemetry Consumer does not appear to be dependent on Store (PostgreSQL).

To update Topology in Workstation:

1. Find the owner of the MicroStrategy installation directory. By default, the owner is the mstr user.

2. Locate the MicroStrategy-shipped java path. By default, this is /opt/MicroStrategy/_jre.

3. Locate the Services Registration's installation directory. By default, this is /opt/MicroStrategy/ServicesRegistration.

4. Execute the following command:

$ su - mstr
$ /opt/MicroStrategy/_jre/bin/java -jar /opt/MicroStrategy/ServicesRegistration/jar/svcsreg-admin.jar migrate MicroStrategy-Platform-Analytics-Consumer MySQL PostgreSQL

5. Open MicroStrategy Workstation and select the Topology tab. Consumer should now depend on Store (PostgreSQL).

Parallel Upgrades
1. On your new MicroStrategy 2021 machine, populate the PAConsumerConfig.yaml with the MySQL and PostgreSQL information shown in the prerequisites above.

2. Copy the mysql-connector-java.jar from your previous installation to the PlatformAnalytics/lib directory on the new machine.

3. Go to your PlatformAnalytics/bin directory and call the platform-analytics-data-migration-tool.sh file.

4. Select the Backup and Restore option (3).

5. Enter the full desired directory path for the database to be backed up to and restored from.

6. Wait until the backup is complete. The tool will then ask whether it is okay to recreate the PostgreSQL warehouse; select yes.


7. The program will then restore your MySQL backup files into your new PostgreSQL warehouse and the data migration will be complete.

8. The service's grouping and dependency information needs to be updated in MicroStrategy Workstation's Topology view. When Topology is not updated, Telemetry Consumer does not appear to be dependent on Store (PostgreSQL).

To update Topology in Workstation:

1. Find the owner of the MicroStrategy installation directory. By default, the owner is the mstr user.

2. Locate the MicroStrategy-shipped java path. By default, this is /opt/MicroStrategy/_jre.

3. Locate the Services Registration's installation directory. By default, this is /opt/MicroStrategy/ServicesRegistration.

4. Execute the following command:

$ su - mstr
$ /opt/MicroStrategy/_jre/bin/java -jar /opt/MicroStrategy/ServicesRegistration/jar/svcsreg-admin.jar migrate MicroStrategy-Platform-Analytics-Consumer MySQL PostgreSQL

5. Open MicroStrategy Workstation and select the Topology tab. Consumer should now depend on Store (PostgreSQL).

Add an Additional Kafka Node to an Existing Kafka Cluster Post Installation

After the initial installation of Platform Analytics is completed, multiple nodes containing Telemetry Server(s) can be added in order to create clustered Telemetry Servers. The clustered environment ensures that if a telemetry node is down, there is a copy of the telemetry log available on another node.

To manually add a new Kafka node to an existing Kafka cluster on a Windows environment, perform the following steps:

1. Disable Services

2. Install the Telemetry Server

3. Configure Kafka

4. Restart Services

Multiple services were renamed in the MicroStrategy 2019 release. Because this guide requires modifying the underlying files, it uses the original service name. For a list of changed service names, see the Critical Name Changes section of the 2019 Readme.

To add an additional Kafka node, you need:

l One environment with MicroStrategy and Platform Analytics fully installed and
configured. For more information, see Installing Platform Analytics.

l Idle environment(s) to be added as the Kafka Server node(s) to create a Kafka cluster.

Disable Services
Before configuring the new Kafka nodes, ensure that the Intelligence Server Producer, Apache ZooKeeper, Apache Kafka, and Platform Analytics Consumer and Producer are disabled. If you already have more than one node in the cluster, disable the services on all nodes.

1. In Command Manager, disable the Intelligence Server Producer by running:

ALTER SERVER CONFIGURATION ENABLEMESSAGINGSERVICES FALSE;

2. Open Windows Services and locate Apache ZooKeeper, Apache Kafka, MicroStrategy Platform Analytics Consumer, and MicroStrategy Usher Metadata Producer. Right-click each service and select Stop.

Install the Telemetry Server


1. Open the MicroStrategy Installation Wizard on the node(s) you want to add
additional Kafka Servers to.

2. Step through the Wizard.

3. When you're prompted to select the features you want to install, select
Telemetry Server.

MicroStrategy Command Manager is installed by default.

Configure Kafka
Perform the following steps for all nodes, including those that already exist in the cluster.

1. Open the server.properties file located in the C:\Program Files (x86)\MicroStrategy\Messaging Services\Kafka\kafka_2.11-1.1.0\config directory.

2. Modify the file:

1. Set the broker.id to a unique value for each node.

Do not change the broker.id on your main node (the machine configured during single node set up). It should remain at the default value and be referred to by the other nodes.

2. Set offsets.topic.replication.factor= and transaction.state.log.replication.factor= to the number of nodes in your cluster.

3. Set zookeeper.connect= to a comma separated list of <IP address:2181> for all nodes in the cluster.

4. Add advertised.listeners=<the IP address for this node> at the end of the file.

3. Save the file.
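Put together, a secondary node's server.properties edits might look like the following sketch for a hypothetical three-node cluster. The IPs are placeholders; note that stock Kafka expects advertised.listeners as a listener URI (PLAINTEXT://host:port), so match the form already present in your main node's file:

broker.id=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
zookeeper.connect=10.27.18.73:2181,10.27.18.224:2181,10.27.18.225:2181
advertised.listeners=PLAINTEXT://10.27.18.224:9092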

4. Open the zookeeper.properties file found in the same directory.

5. Add the following properties to the end of the zookeeper.properties file:

l initLimit=5

l syncLimit=2

l server.X=<IP address of the node>:2888:3888

Replace X with the broker.id of the node being referenced. A separate entry must be made for each node in the cluster.

For example:

initLimit=5
syncLimit=2
server.0=10.27.18.73:2888:3888
server.1=10.27.18.224:2888:3888

6. Create a text file called myid containing only the broker.id of the node.

If your broker.id=1, enter 1.

7. Save the file in the ZooKeeper directory located at C:\Program Files (x86)\MicroStrategy\Messaging Services\tmp\zookeeper.

Ensure the file doesn't have a hidden extension. To check, select View > Show/hide > File name extensions in File Explorer. Delete any extension of your myid file.

Restart Services
After the installation and configuration on all Kafka nodes in the cluster are complete, restart the Intelligence Server Producer, Apache ZooKeeper, Apache Kafka, and Platform Analytics Consumer and Producer.

Before restarting the services, all configuration file changes must be completed first. For example, if you are adding two additional Kafka nodes in addition to your one existing node, then the install and configuration should be completed on all three nodes before restarting any of the services.

Additionally, some services are dependent on each other, so the services must be started in the following order:

Apache Zookeeper
1. In Windows Services, start Apache ZooKeeper. Start the main node before
starting other nodes.

Apache Kafka
1. In Windows Services, start Apache Kafka.


Intelligence Server Producer

1. Open Command Manager and run the following script:

ALTER SERVER CONFIGURATION ENABLEMESSAGINGSERVICES TRUE CONFIGUREMESSAGINGSERVICES "bootstrap.servers:<hostname1:9092>,<hostname2:9092>,<hostname3:9092>/batch.num.messages:5000/queue.buffering.max.ms:2000";

Replace the hostname and port with the new Telemetry Server cluster configuration for the Platform Analytics environment.

2. Restart the Intelligence Server.

If there is a cluster of Intelligence Servers, all nodes must be restarted.

Platform Analytics Consumer

Only perform the following steps on your main node. The main node is the node running Platform Analytics Consumer.

1. Open the PAConsumerConfig.yaml file located in C:\Program Files (x86)\MicroStrategy\Platform Analytics\conf.

2. Modify the file:

1. Set kafkaTopicNumberOfReplicas: to the number of nodes in the cluster.

2. Set zookeeperConnection: <ipAddress:2181> for all nodes in the cluster.

3. Set bootstrap.servers: <ipAddress:9092> for all nodes in the cluster.

3. Save the file.
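For the same hypothetical three-node cluster used earlier, the resulting PAConsumerConfig.yaml entries would look something like this sketch (key names as in the steps above; keep your file's existing nesting):

kafkaTopicNumberOfReplicas: 3
zookeeperConnection: 10.27.18.73:2181,10.27.18.224:2181,10.27.18.225:2181
bootstrap.servers: 10.27.18.73:9092,10.27.18.224:9092,10.27.18.225:9092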

4. In Windows Services, start MicroStrategy Platform Analytics Consumer and MicroStrategy Usher Metadata Producer.

Troubleshooting
If Apache ZooKeeper cannot be restarted, ensure Kafka is fully configured.

1. Open the kafka-logs folder located in C:\Program Files (x86)\MicroStrategy\Messaging Services\tmp.

2. Open the meta.properties file and ensure the broker.id is the same as it appears in server.properties. If they are different, this may be why Apache ZooKeeper is not starting.

3. If there is no telemetry in the Kafka topics, check if statistics are enabled for Platform Analytics projects by running the following command in Command Manager:

LIST ALL PROPERTIES FOR PASTATISTICS IN PROJECT "Platform Analytics";

4. If the command returns False, run:

ALTER PASTATISTICS BASICSTATS ENABLED DETAILEDREPJOBS TRUE DETAILEDDOCJOBS TRUE JOBSQL TRUE COLUMNSTABLES TRUE IN PROJECT "Platform Analytics";

BASICSTATS must always be enabled. Select what kind of advanced stats are needed by changing the parameter after it to true/false. For more information about Basic versus Advanced stats, see Enable Telemetry Logging for Intelligence Server.

After the initial installation of Platform Analytics is completed, multiple nodes containing Telemetry Server(s) can be added in order to create clustered Telemetry Servers. The clustered environment ensures that if a telemetry node is down, there is a copy of the telemetry log available on another node.

To manually add a new Kafka node to an existing Kafka cluster on a Linux environment, perform the following steps:

1. Disable Services

2. Install the Telemetry Server


3. Configure Kafka

4. Restart Services

Multiple services were renamed in the MicroStrategy 2019 release. Because this guide requires modifying the underlying files, it uses the original service name. For a list of changed service names, see the Critical Name Changes section of the 2019 Readme.

To add an additional Kafka node, you need:

l One environment with MicroStrategy and Platform Analytics fully installed and configured. For more information, see Installing Platform Analytics.

l Idle environment(s) to be added as the Kafka Server node(s) to create a Kafka cluster.

Disable Services
Before configuring the new Kafka nodes, ensure the Intelligence Server Producer, Apache ZooKeeper, Apache Kafka, and Platform Analytics Consumer and Producer are disabled. If you already have more than one node in the cluster, disable the services on all nodes.

1. In Command Manager, disable the Intelligence Server Producer by running:

ALTER SERVER CONFIGURATION ENABLEMESSAGINGSERVICES FALSE;

This operation does not impact standard MicroStrategy Intelligence Server functionality.

2. In the Platform Analytics directory, located in /opt/MicroStrategy/PlatformAnalytics, open the bin folder.

3. Run the following commands:

./platform-analytics-consumer.sh stop
./platform-analytics-usher-lookup-producer.sh stop

4. In the Kafka directory, located in /opt/MicroStrategy/MessagingServices/Kafka/kafka_2.11-1.1.0/, open the bin folder.

5. Run the following commands:

./kafka-server-stop.sh
./zookeeper-server-stop.sh

Install the Telemetry Server

1. Open the MicroStrategy Installation Wizard on the node(s) you want to add additional Kafka Servers to.

2. Step through the Wizard.

3. When you're prompted to select the features you want to install, select Telemetry Server.

MicroStrategy Command Manager is installed by default.

Configure Kafka
Perform the following steps for all nodes, including those that already exist in the cluster.

1. Open the server.properties file located in /opt/MicroStrategy/MessagingServices/Kafka/kafka_2.11-1.1.0/config.

Copyright © 2023 All Rights Reserved 382


Plat fo r m An alyt ics

2. Modify the file:

1. Set the broker.id to a unique value for each node.

Do not change the broker.id on your main node (the machine configured during single node set up). It should remain at the default value and be referred to by the other nodes.

2. Set offsets.topic.replication.factor= and transaction.state.log.replication.factor= to the number of nodes in your cluster.

3. Set zookeeper.connect= to a comma separated list of <IP address:2181> for all nodes in the cluster.

4. Add advertised.host.name=<the IP address for this node> at the end of the file.

3. Save the file.

4. Open the zookeeper.properties file found in the same directory.

5. Add the following properties to the end of the zookeeper.properties file:

l initLimit=5

l syncLimit=2

l server.X=<IP address of the node>:2888:3888

Replace X with the broker.id of the node being referenced. A separate entry must be made for each node in the cluster.

For example:

initLimit=5
syncLimit=2
server.0=10.27.18.73:2888:3888
server.1=10.27.18.224:2888:3888

6. Create a text file called myid containing only the broker.id of the
node.

If your broker.id=1, enter 1.

7. Save the file in the ZooKeeper directory located in /opt/MicroStrategy/MessagingServices/Kafka/tmp/zookeeper.
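On Linux the myid file can be created and checked in one step from a shell. A sketch, assuming this node's broker.id is 1:

echo 1 > /opt/MicroStrategy/MessagingServices/Kafka/tmp/zookeeper/myid
cat /opt/MicroStrategy/MessagingServices/Kafka/tmp/zookeeper/myid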

Restart Services
After the installation and configuration on all Kafka nodes in the cluster are complete, restart the Intelligence Server Producer, Apache ZooKeeper, Apache Kafka, and Platform Analytics Consumer and Producer.

When restarting the services, it's important to note that all configuration file changes must be completed first. For example, if you are adding two additional Kafka nodes, plus have one existing node, then the install and configuration should be completed on all three nodes before restarting any of the services.

Additionally, some services are dependent on each other, so the services should be started in the following order:

Apache Zookeeper
1. In the Kafka directory, found in
/opt/MicroStrategy/MessagingServices/Kafka/kafka_2.11-1.1.0/,
open the bin folder.


2. Start ZooKeeper on all nodes by running:

./zookeeper-server-start.sh -daemon ../config/zookeeper.properties

Apache Kafka
1. In the same folder, start Kafka on all nodes by running:

./kafka-server-start.sh -daemon ../config/server.properties

Intelligence Server Producer

1. Open Command Manager and run the following script:

ALTER SERVER CONFIGURATION ENABLEMESSAGINGSERVICES TRUE CONFIGUREMESSAGINGSERVICES "bootstrap.servers:10.27.16.225:9092,10.27.19.34:9092/batch.num.messages:5000/queue.buffering.max.ms:2000";

Replace the hostname and port with the new Telemetry Server cluster configuration for the Platform Analytics environment.

2. Restart the Intelligence Server.

If there is a cluster of Intelligence Servers, all nodes must be restarted.

Platform Analytics Consumer

Only perform the following steps on your main node. The main node is the node running Platform Analytics Consumer.

1. Open the PAConsumerConfig.yaml file located in the /opt/MicroStrategy/PlatformAnalytics/conf directory.

2. Modify the file:

1. Set kafkaTopicNumberOfReplicas: to the number of nodes in the cluster.

2. Set zookeeperConnection: <ipAddress:2181> for all nodes in the cluster.

3. Set bootstrap.servers: <ipAddress:9092> for all nodes in the cluster.

3. Save the file.

4. In the Platform Analytics directory, located in /opt/MicroStrategy/PlatformAnalytics, open the bin folder.

5. Run the command:

./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start

Troubleshooting
If Apache ZooKeeper cannot be started, ensure Kafka is fully configured.

1. Open the kafka-logs folder located in /opt/MicroStrategy/MessagingServices/Kafka/tmp.

2. Open the meta.properties file and ensure the broker.id is the same as it appears in server.properties. If they are different, this may be why Apache ZooKeeper is not starting.

3. If there is no telemetry in the Kafka topics, check if statistics are enabled for Platform Analytics projects by running the following command in Command Manager:

LIST ALL PROPERTIES FOR PASTATISTICS IN PROJECT "Platform Analytics";

4. If the command returns False, run:

ALTER PASTATISTICS BASICSTATS ENABLED DETAILEDREPJOBS TRUE DETAILEDDOCJOBS TRUE JOBSQL TRUE COLUMNSTABLES TRUE IN PROJECT "Platform Analytics";

BASICSTATS must always be enabled. Select what kind of advanced stats are needed by changing the parameter after it to true/false. For more information about Basic versus Advanced stats, see Enable Telemetry Logging for Intelligence Server.

Update the Database User Password Configured to the Platform Analytics Repository

The database user password of the Platform Analytics Repository is provided during the installation and configuration steps. However, you can modify the Platform Analytics Repository user password any time after the initial installation and configuration.

To change the password, you need:

l One environment with MicroStrategy and Platform Analytics fully installed and
configured. For more information, see Installing Platform Analytics.

l Access to the machine (Linux or Windows) where Platform Analytics was installed and configured.


How to Update the Database User Password

1. Open Windows Services and locate MicroStrategy Platform Analytics Consumer and MicroStrategy Usher Metadata Producer. Right-click each service and select Stop.

2. In the Platform Analytics directory, located in C:\Program Files (x86)\MicroStrategy\Platform Analytics\, open the bin folder.

3. Run the following script:

platform-analytics-encryptor.bat

4. Enter your new database user password to generate a new encrypted password.

5. Copy the encrypted password.

C:\Program Files (x86)\MicroStrategy\Platform Analytics\bin>platform-analytics-encryptor.bat
Please type the password below to generate the encrypted password for Platform Analytics:
the secret i will always keep
Encrypted password generated:
7YX+l/9HOr6DPpT0AiEVNzsnug==,x7F8IezkCtLjFFdX
Press any key to continue . . .

6. Still in the Platform Analytics directory, open the conf folder.

7. Open the PAConsumerConfig.yaml file and update the whPasswd field with the encrypted password.

One space is needed after whPasswd:

For example:

whPasswd: 7YX+l/9HOr6DPpT0AiEVNzsnug==,x7F8IezkCtLjFFdX

8. Save the file.

9. Open Windows Services and start MicroStrategy Platform Analytics Consumer and MicroStrategy Usher Metadata Producer.


10. Open Command Manager and connect to the Project Source that contains
the Platform Analytics project.

11. In the PASSWORD field, enter the new unencrypted password.

ALTER DBLOGIN "Platform Analytics Login" LOGIN "myPAWarehouseUser" PASSWORD "the secret i will always keep";

You do not need to enter the encrypted password. The password you
entered will be encrypted when stored in the metadata.

12. Execute the command. Your password is updated for the Platform Analytics
project.

The database user password of the Platform Analytics Repository is provided during the installation and configuration steps. However, you can modify the Platform Analytics Repository password any time after the initial installation and configuration.

To change the password, you need:

l One environment with MicroStrategy and Platform Analytics fully installed and configured. For more information, see Installing Platform Analytics.

l Access to the machine (Linux or Windows) where Platform Analytics was installed and configured.

How to Update the Database User Password

1. In the Platform Analytics directory, located in /opt/MicroStrategy/PlatformAnalytics, open the bin folder.

2. Run the following commands to stop MicroStrategy Platform Analytics Consumer and MicroStrategy Usher Metadata Producer:

./platform-analytics-consumer.sh stop
./platform-analytics-usher-lookup-producer.sh stop

3. In the same folder, run the following command to generate an encrypted password for the MySQL database user:

[user@your-PA-machine bin]#./platform-analytics-encryptor.sh

4. Enter your new database user password to generate a new encrypted password.

5. Copy the encrypted password.

Please type the password below to generate the encrypted password for Platform Analytics:
the secret i will always keep
Encrypted password generated:
7YX+l/9HOr6DPpT0AiEVNzsnug==,x7F8IezkCtLjFFdX

6. In the Platform Analytics directory, open the conf folder.

7. Edit the PAConsumerConfig.yaml file and update the whPasswd field with the encrypted password.

One space is needed after whPasswd:

For example:

whPasswd: 7YX+l/9HOr6DPpT0AiEVNzsnug==,x7F8IezkCtLjFFdX

8. Save the file.

9. In the Platform Analytics directory, open the bin folder and run the following commands to start MicroStrategy Platform Analytics Consumer and MicroStrategy Usher Metadata Producer:

./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start

10. Open Command Manager and connect to the Project Source that
contains the Platform Analytics project.

11. In the PASSWORD field, enter the new unencrypted password.

ALTER DBLOGIN "Platform Analytics Login" LOGIN "myPAWarehouseUser" PASSWORD "the secret i will always keep";

You do not need to enter the encrypted password. The password you entered will be encrypted when stored in the metadata.

12. Execute the command. Your password is updated for the Platform
Analytics project.

Enable Password Authentication on the MicroStrategy Telemetry Cache
Platform Analytics utilizes the Telemetry Cache (i.e. Redis) to improve the
Telemetry Store (formerly called Platform Analytics Consumer) processing
performance. For best performance, the Telemetry Store and the Telemetry
Cache should be installed on the same machine to reduce the risk of
network latency. By default, these two components are installed together
when using the MicroStrategy Installation Wizard. In addition to enhanced
processing performance, you can improve Platform Analytics data security
by enabling password authentication to the Telemetry Cache.

Multiple services were renamed in the MicroStrategy 2019 release. Because this guide requires modifying the underlying files, it uses the original service name. For a list of changed service names, see the Critical Name Changes section of the 2019 Readme.

To enable password authentication, you need:

l One environment with MicroStrategy and Platform Analytics fully installed and configured. For more information, see Installing Platform Analytics.

l Access to the machine (Linux or Windows) where Platform Analytics was installed and configured.

1. Open Windows Services, locate MicroStrategy Platform Analytics Consumer, MicroStrategy Usher Metadata Producer, and MicroStrategy In-Memory Cache. Right-click each service and select Stop.

2. Open the Telemetry Cache installation path, located in C:\Program Files (x86)\Common Files\MicroStrategy\Redis, and open the redis.conf file.

3. In the Configuration Security section, un-comment the following line:

# requirepass foobared

4. In the same line, replace foobared with your designated password. Authentication is now enabled.

requirepass [password]

5. Save the file.

6. Open Windows Services and start MicroStrategy In-Memory Cache.

7. In the Platform Analytics directory, located in C:\Program Files (x86)\MicroStrategy\Platform Analytics\, open the bin folder.

8. Run the following script:

C:\Program Files (x86)\MicroStrategy\Platform Analytics\bin>platform-analytics-encryptor.bat

9. Enter your new password to generate an encrypted password.

10. Record the encrypted password.

11. In the Platform Analytics directory, open the conf folder.

12. Edit the PAConsumerConfig.yaml file and update the redisPassword: field with the encrypted password.

One space is needed after redisPassword:

For example:

redisPassword: c5eoCdW023nqmME9Nl2ZBntw5MdvBZEOQLd9zD6xVWSx3UjE,EnrazzMgibZDpHD

13. Save the file.

14. Open Windows Services, start MicroStrategy Platform Analytics Consumer and MicroStrategy Usher Metadata Producer.

On Linux:

1. Open the Telemetry Cache installation path, located at /opt/MicroStrategy/Redis/, and run:

./redis.sh stop

2. In the Platform Analytics directory, located at /opt/MicroStrategy/PlatformAnalytics, open the bin folder and run the following commands:

./platform-analytics-consumer.sh stop
./platform-analytics-usher-lookup-producer.sh stop

3. Return to the Telemetry Cache installation path and open the redis.conf file.

4. In the Configuration Security section, un-comment the following line:

# requirepass foobared

5. In the same line, replace foobared with your designated password. Authentication is now enabled.

requirepass [password]

6. Save the file.

7. Open the Telemetry Cache installation path and run:

./redis.sh start

8. In the Platform Analytics directory, open the bin folder.

9. Run the following script:

[user@your-PA-machine bin]#./platform-analytics-encryptor.sh

10. Enter your new password to generate an encrypted password.

11. Record the encrypted password.

12. In the Platform Analytics directory, open the conf folder.

13. Edit the PAConsumerConfig.yaml file and update the redisPassword: field with the encrypted password.

One space is needed after redisPassword:

For example:

redisPassword: c5eoCdW023nqmME9Nl2ZBntw5MdvBZEOQLd9zD6xVWSx3UjE,EnrazzMgibZDpHD

14. Save the file.

15. In the Platform Analytics directory, open the bin folder and run the following commands:

./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start
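
To confirm that authentication is now required, you can query the Telemetry Cache with redis-cli (the password below is illustrative):

# Without authenticating, commands are rejected
redis-cli ping
(error) NOAUTH Authentication required.

# After authenticating, the server responds normally
redis-cli -a yourpassword ping
PONG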

Modify the Amount of Data Returned In-Memory for the Platform Analytics Cube
Platform Analytics currently stores telemetry for all time in the Platform
Analytics MySQL Repository. As the repository grows in size, you may want
to limit the amount of data that is stored in-memory when publishing the
Platform Analytics super cube.

To reduce the data volume stored in-memory on the MicroStrategy Intelligence Server, warehouse views are created on top of large fact and lookup tables. By default, the Platform Analytics views are configured to limit the data to a rolling 14 days. However, the number of days in each view can be modified by changing a value in the Platform Analytics configuration file and restarting the Platform Analytics consumer.

Increasing the number of days means the size of the cache stored in-
memory on the Intelligence Server increases.


Many of the out-of-the-box dossiers utilize the Platform Analytics Cube as the dataset for analysis. Changing the number of days will be reflected in the analysis of the dependent dossiers.

To see all dossiers and reports that are dependents of the Platform
Analytics Cube, right-click a cube and select Find Dependents.

Tables
To limit the amount of data returned in-memory after republishing the
Platform Analytics Cube, warehouse views have been created on top of
large lookup tables and on all fact tables. These tables are used in the local
schema cube definition but do not apply to the Platform Analytics project
schema. Any application object (report, dossier, document, OLAP Cube)
created based on the project schema returns data for all days in the Platform
Analytics Repository.

The following tables have a view applied:

• fact_prompt_answers_view

• fact_action_security_filter_view

• fact_sql_stats_view

• fact_step_sequence_view

• fact_performance_monitor_view

• fact_action_cube_cache_view

• fact_access_transactions_view

• lu_session_view

• lu_history_list_message_view
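
Conceptually, each view restricts its base table to the rolling cutoff window. A hand-written equivalent (the date column name is illustrative, and the shipped view definitions may differ) looks like:

CREATE OR REPLACE VIEW fact_sql_stats_view AS
SELECT *
FROM fact_sql_stats
WHERE stats_date >= CURDATE() - INTERVAL 14 DAY; -- 14 = viewCutoffRangeInDays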

Steps to Modify the View Date Range


On Windows:

1. Open Windows Services, right-click MicroStrategy Platform Analytics Consumer, and select Stop.

2. In the Platform Analytics directory, located at C:\Program Files (x86)\MicroStrategy\Platform Analytics, open the conf folder.

3. Open the PAConsumerConfig.yaml file and update the viewCutoffRangeInDays field to the desired number of days.

The value should be a positive whole number.

For example:

viewCutoffRangeInDays: 14

4. Save the file.

5. Open Windows Services, right-click MicroStrategy Platform Analytics Consumer, and select Start.

6. To verify the change, run the SQL query below in the Platform Analytics Repository:

SELECT cutoff_range_in_days FROM platform_analytics_wh.etl_lu_view_cutoffs ORDER BY cutoff_id desc limit 1;

On Linux:

1. In the Platform Analytics directory, located at /opt/MicroStrategy/PlatformAnalytics, open the bin folder and run the following command:

./platform-analytics-consumer.sh stop

2. Still in the Platform Analytics directory, open the conf folder.

3. Open the PAConsumerConfig.yaml file and update the viewCutoffRangeInDays field to the desired number of days.

The value should be a positive whole number.

For example:

viewCutoffRangeInDays: 14

4. Save the file.

5. In the Platform Analytics directory, open the bin folder and run the following command:

./platform-analytics-consumer.sh start

6. To verify the change, run the SQL query below in the Platform Analytics Repository:

SELECT cutoff_range_in_days FROM platform_analytics_wh.etl_lu_view_cutoffs ORDER BY cutoff_id desc limit 1;

MySQL Maintenance
Since Platform Analytics stores telemetry in the Platform Analytics MySQL
Repository, it's important to maintain your MySQL database. There are four
recommended ways to maintain your database:

• Backup Your MySQL Database

• Replicate Your MySQL Database

• Secure Your MySQL Database

• Upgrade Your MySQL

Backup Your MySQL Database


You can quickly back up and restore your MySQL databases on your server using the backup tool mysqldump. This tool is located in the root/bin folder of the MySQL installation folder.

mysqldump allows you to dump databases for backup or to transfer databases to another database server. The dump file contains a set of SQL statements to create the database objects.

The basic syntax for backing up the database is:


mysqldump -u [username] -p[password] [database_name] > [dump_file.sql]

Where:

[username] is a valid MySQL username.

[password] is a valid password for the user. There is no space between -p and the password in the command.

[database_name] is the database name you want to back up. For Platform Analytics, the database name is platform_analytics_wh.

[dump_file.sql] is the dump file you want to generate.

You can modify the syntax depending on the information you want to back up.

To back up only the structure, add --no-data to the syntax:

mysqldump -u [username] -p[password] --no-data [database_name] > [dump_file.sql]

To back up only the data, add --no-create-info to the syntax:

mysqldump -u [username] -p[password] --no-create-info [database_name] > [dump_file.sql]
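
For example, assuming an illustrative backup user named pa_backup, a full backup and a later restore of the Platform Analytics warehouse could look like:

# Full backup (structure and data) to a dump file
mysqldump -u pa_backup -pMyS3cret platform_analytics_wh > pa_wh_backup.sql

# Restore the dump into an existing platform_analytics_wh database
mysql -u pa_backup -pMyS3cret platform_analytics_wh < pa_wh_backup.sql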

For more information on the database backup program, see Backup and
Recovery.

Replicate Your MySQL Database


Replication allows data from one MySQL database server (the master) to be
copied to one or more MySQL database servers (the slaves). There are
several benefits to replication, such as the ability to isolate read/write load
to improve performance, perform backups on one database without risk of
corruption, or create a local copy of data for remote use.

Typical replication requires synchronization between the master and slave. There are two types of synchronization:


• Asynchronous Replication

Replication is asynchronous by default. This type of synchronization is one-way, where one server acts as the master and the other server or servers act as the slaves.

• Semi-Synchronous Replication

With semi-synchronous replication, a commit performed on the master blocks before returning to the session that performed the transaction until at least one slave acknowledges that it has received and logged the events for the transaction.

In either case, you can configure your system so that Platform Analytics
Consumer writes to the master and the Intelligence server reads data from
one of the replicas. This is useful for systems with heavy read/write load and
if you have several custom cubes created using the Self Service Schema in
the Platform Analytics project.
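
As a minimal sketch of an asynchronous setup (server IDs, host names, user, and log coordinates are illustrative; consult the MySQL documentation for your version), the master enables binary logging and each replica is pointed at it:

# my.cnf on the master
[mysqld]
server-id = 1
log-bin = mysql-bin

# my.cnf on the replica
[mysqld]
server-id = 2
read_only = ON

-- Run on the replica (MySQL 5.7 syntax)
CHANGE MASTER TO
MASTER_HOST = 'pa-mysql-master',
MASTER_USER = 'repl_user',
MASTER_PASSWORD = 'repl_password',
MASTER_LOG_FILE = 'mysql-bin.000001',
MASTER_LOG_POS = 4;
START SLAVE;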

For more information on replication, see Replication.

Secure Your MySQL Database


There are general factors that should be considered to secure your MySQL
database. Review the general security issues outlined in the MySQL
documentation. Additionally, after installing MySQL, it's recommended to
perform post-installation security testing. For more information, see
Postinstallation Setup and Testing.

Finally, general access control and security should be prioritized. For information about account management, see Access Control and Account Management. If you've lost your MySQL root password, see Reset Your Root Password.

Upgrade Your MySQL

It is a best practice to upgrade your MySQL with the latest bug fixes. Additionally, upgrades provide the latest features offered in new MySQL releases. For a seamless upgrade, see Upgrading.


Start-Up Health Checks


Multiple services were renamed in the MicroStrategy 2019 release. Because
this guide requires modifying the underlying files, it uses the original
service name. For a list of changed service names, see the Critical Name
Changes section of the 2019 Readme.

The Telemetry Store (i.e., Platform Analytics Consumer) and Identity Telemetry Producer (i.e., Usher Metadata Producer) are dependent on and require access to three components to process telemetry logs:

• Platform Analytics Repository (i.e., Database Server)

• Telemetry Cache (i.e., Redis)

• Telemetry Server (i.e., Kafka)

All three components must be in a healthy state for Platform Analytics to successfully process telemetry logs. If any of these components are unavailable, the Telemetry Store consumer and Identity Telemetry producer will stop. Therefore, during startup, both the consumer and producer execute a health check for the three components and generate a detailed report with the results.

At times, one of the components may still be starting and not completely ready when the health check begins. In such situations, the consumer and producer perform three consecutive checks with a 60-second delay between each check to confirm whether the dependencies are in an unhealthy state.

For more information about Platform Analytics Architecture, see Platform


Analytics Architecture and Services.

Naming Conventions and Locations


A health check is performed during startup of both the Platform Analytics Consumer and the Usher Metadata Producer. Therefore, two health check reports are generated in the Platform Analytics log folder located in the default install path:

• Linux: /opt/MicroStrategy/PlatformAnalytics/log

• Windows: C:\Program Files (x86)\MicroStrategy\Platform Analytics\log

The name of the file identifies whether the report corresponds to the consumer or the producer.

For example,

platform-analytics-consumer-health-check-yyyymmddhhmmss.out
platform-analytics-usher-lookup-producer-health-check-yyyymmddhhmmss.out

Health Check Report Results


Each health check report is structured into four sections:

1. Health Check

2. Redis Health Check

3. Kafka Health Check

4. Health Check Summary

Each section provides different information on the health of your three components.

Health Check
During the health check, there are two checks being executed:

• Can the consumer/producer connect to the database provided during installation and stored in the PAConsumerConfig.yaml configuration file? If not, additional network connectivity testing occurs to diagnose the cause of the issue.

• Does the database user have the required privileges? For a full list of installation prerequisites, see Platform Analytics Prerequisites.


The Health Check report provides a list of the privileges and the resulting
status. If all the checks are successful, the final line will read Warehouse
health check result is healthy.

If any line reads Failed, check your PAConsumerConfig.yaml file and ensure the database user has the correct privileges.

Suggested Troubleshooting for Health Check Errors

If you receive any of the following errors in the Health Check, here are
suggested workarounds:

Missing Privileges Error

If the database user stored in the PAConsumerConfig.yaml configuration file is missing privileges, the report shows INFO [privilege type] privilege: Failed. To resolve this error, the administrator must grant the missing privileges to the database user and restart the consumer.

How to Grant Missing Privileges:

1. Stop the Platform Analytics Consumer and the Usher Metadata Producer.

2. Connect to the database server that contains the Platform Analytics Repository. Execute the following command, replacing 'someuser' and 'somehost' with the customer-specified information:

GRANT DROP ON platform_analytics_wh.* TO 'someuser'@'somehost';

3. Restart the Platform Analytics Consumer and Usher Metadata Producer.

Failed Connection Error

If the consumer or producer is unable to connect to the database using the configuration specified in the PAConsumerConfig.yaml configuration file, you may see the following error:

2018-11-21 21:43:28,793 INFO HealthCheck main - Failed to connect to the database. Retrying after waiting for 60 seconds.
2018-11-21 21:45:31,797 INFO HealthCheck main - Failed to connect to the database. Retrying after waiting for 60 seconds.
2018-11-21 21:47:34,800 ERROR HealthCheck main - Failed to connect to the database using url:jdbc:mysql://XX.Y.Z.1:3306/platform_analytics_wh?rewriteBatchedStatements=true&useLegacyDatetimeCode=false&serverTimezone=UTC. Please double check your connection parameters.
Communications link failure

The PAConsumerConfig.yaml file is populated based on the database information provided during installation. To resolve this error, connect to the machine hosting Platform Analytics and confirm all fields under the warehouseDbConnection heading are correct in the PAConsumerConfig.yaml file.
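
For orientation, the warehouse connection section of PAConsumerConfig.yaml has roughly the following shape. Only the whPasswd key is confirmed earlier in this guide; the other key names shown here are illustrative placeholders, so verify them against your own file:

warehouseDbConnection:
  whHost: your-mysql-host   # illustrative key: database host provided during installation
  whPort: 3306              # illustrative key: database port
  whUser: myPAWarehouseUser # illustrative key: database user
  whPasswd: 7YX+l/9HOr6DPpT0AiEVNzsnug==,x7F8IezkCtLjFFdX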

Database User Password is Incorrect Error

The consumer or producer will be unable to connect to the database if the encrypted warehouse password is incorrect. To generate a new encrypted password and update the configuration, see Update the Database User Password Configured to the Platform Analytics Repository.

Database User Created with SSL Enabled Error

Platform Analytics supports MySQL versions 5.6, 5.7, and 8.0. For MySQL
8.0, SSL connection is enabled by default. Currently, Platform Analytics
does not support SSL for the database user connecting to MySQL. When you
create the database user for the Platform Analytics Consumer or Usher
Metadata Producer, specify the SSL/TLS option using the REQUIRE clause.

How to Disable SSL:

1. Connect to the Platform Analytics Repository and execute the following command:

show variables like '%ssl%';

2. If the result for 'have_ssl' is 'YES', then SSL is enabled. Create the user with the mysql_native_password and REQUIRE NONE options to connect without SSL:

CREATE USER 'test'@'%' IDENTIFIED WITH mysql_native_password BY 'password' REQUIRE NONE;

Redis Health Check

The Redis Health Check determines if the consumer or producer can successfully connect to the Redis server. The check provides detailed statistics about Redis collected during startup. If all the checks are successful, the final line will read Redis server health check result is healthy.

If you see an error in your check, ensure Redis is running and that your
configuration is correct in the PAConsumerConfig.yaml file.

Suggested Troubleshooting for Redis Health Check Errors

If you receive any of the following errors in the Redis Health Check, here are
suggested workarounds:

Redis is Stopped Error

If the consumer or producer is unable to connect to Redis, it may be because it is in a stopped state. To resolve this error, start the MicroStrategy In-Memory Cache, the Platform Analytics Consumer, and the Usher Metadata Producer.

Failed to Connect to Redis Error

If the consumer or producer is unable to connect to Redis, it may be because the configuration is not correct in the PAConsumerConfig.yaml file. To resolve this error, connect to the machine hosting Platform Analytics and confirm all fields under the redisConnection heading are correct in the PAConsumerConfig.yaml file.

Password Authentication Enabled for Redis Error


If the consumer or producer is unable to connect to Redis, it may be because password authentication is enabled. By default, Redis is not configured with password authentication, but it can be set after installation.

If Redis has been enabled with password authentication and the password is
missing in the PAConsumerConfig.yaml configuration file, the consumer
or producer will be unable to connect to Redis. To resolve this error, follow
the steps for Enable Password Authentication on the MicroStrategy
Telemetry Cache.

Kafka Health Check

The Kafka Health Check ensures the Telemetry Manager (Apache ZooKeeper) and the Telemetry Server (Kafka Server) are started and connected. If all the checks are successful, the final line will read Kafka cluster health check result is healthy.

Since the Telemetry Server is dependent on the Telemetry Manager, the Telemetry Manager must be started first.

If you see an error in your check, ensure ZooKeeper and Kafka are started.

Suggested Troubleshooting for Kafka Health Check Errors

How to check if ZooKeeper servers are running on all nodes:

• On Linux, run the following command to get the PID:

ps ax | grep java | grep -i QuorumPeerMain | grep -v grep | awk '{print $1}'

• On Windows, open Windows Services and check if the "Apache ZooKeeper" service is running.

How to check if Kafka servers are running on all nodes:

• On Linux, run the following command to get the PID:

ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}'

• On Windows, open Windows Services and check if the "Apache Kafka" service is running.

How to start ZooKeeper and Kafka on all nodes:

• On Linux, run the following commands in the Kafka directory:

# Start ZooKeeper on all nodes
./zookeeper-server-start.sh -daemon ../config/zookeeper.properties

# Start Kafka on all nodes
./kafka-server-start.sh -daemon ../config/server.properties

• On Windows, open Windows Services and start Apache ZooKeeper and Apache Kafka.

If this is a clustered environment with multiple nodes of ZooKeeper and Kafka, you must start all nodes of ZooKeeper first.

Health Check Summary

If all health checks are successful, the results will be passing. If any of the checks fail, you will get a FAIL for the corresponding component. In failed scenarios, use the detailed reports listed above to investigate possible causes of the failure.

Platform Analytics Health Check Utility

The Platform Analytics Health Check Utility is an end-to-end health check. This utility troubleshoots issues across multiple components required to produce, consume, and report on telemetry from the platform. In particular, this check is recommended if a report in one of your projects is not providing telemetry to the Platform Analytics warehouse.

The Platform Analytics Health Check Utility performs all three health checks
that occur in Start-Up Health Checks and end-to-end telemetry checks to
verify that data can be produced by the Intelligence Server and consumed by
the Platform Analytics Consumer (Telemetry Store).

If you are using Linux, the Platform Analytics Health Check Utility is located at /opt/MicroStrategy/PlatformAnalytics/bin. If you are using Windows, it is located at C:\Program Files (x86)\MicroStrategy\Platform Analytics\bin.

How to Run the Platform Analytics Health Check Utility

To perform an end-to-end Platform Analytics Health Check, run the platform-analytics-health-check.(sh/bat) utility, as shown in the example after the list below.

The end-to-end telemetry checks performed by the Platform Analytics Health Check Utility include:

1. Health Check

2. Redis Health Check

3. Kafka Health Check

4. Change Journal Health Check

5. Statistics Health Check
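
For example, on Linux the utility is launched from the Platform Analytics bin folder:

[user@your-PA-machine bin]#./platform-analytics-health-check.sh

On Windows:

C:\Program Files (x86)\MicroStrategy\Platform Analytics\bin>platform-analytics-health-check.bat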

Health Check
During the health check, there are two checks being executed:

• Can the consumer/producer connect to the database provided during installation and stored in the PAConsumerConfig.yaml configuration file? If not, additional network connectivity testing occurs to diagnose the cause of the issue.

• Does the database user have the required privileges? For a full list of installation prerequisites, see Platform Analytics Prerequisites.

The Health Check report provides a list of the privileges and the resulting status. If all the checks are successful, the final line will read Warehouse health check result is healthy.

If any line reads Failed, check your PAConsumerConfig.yaml file and ensure the database user has the correct privileges.

Suggested Troubleshooting for Health Check Errors

If you receive any of the following errors in the Health Check, here are
suggested workarounds:

Missing Privileges Error

If the database user stored in the PAConsumerConfig.yaml configuration file is missing privileges, the report shows INFO [privilege type] privilege: Failed. To resolve this error, the administrator must grant the missing privileges to the database user and restart the consumer.

How to Grant Missing Privileges:

1. Stop the Platform Analytics Consumer and the Usher Metadata Producer.

2. Connect to the database server that contains the Platform Analytics Repository. Execute the following command, replacing 'someuser' and 'somehost' with the customer-specified information:

GRANT DROP ON platform_analytics_wh.* TO 'someuser'@'somehost';

3. Restart the Platform Analytics Consumer and Usher Metadata Producer.

Failed Connection Error

If the consumer or producer is unable to connect to the database using the configuration specified in the PAConsumerConfig.yaml configuration file, you may see the following error:

2018-11-21 21:43:28,793 INFO HealthCheck main - Failed to connect to the database. Retrying after waiting for 60 seconds.
2018-11-21 21:45:31,797 INFO HealthCheck main - Failed to connect to the database. Retrying after waiting for 60 seconds.
2018-11-21 21:47:34,800 ERROR HealthCheck main - Failed to connect to the database using url:jdbc:mysql://XX.Y.Z.1:3306/platform_analytics_wh?rewriteBatchedStatements=true&useLegacyDatetimeCode=false&serverTimezone=UTC. Please double check your connection parameters.
Communications link failure

The PAConsumerConfig.yaml file is populated based on the database information provided during installation. To resolve this error, connect to the machine hosting Platform Analytics and confirm all fields under the warehouseDbConnection heading are correct in the PAConsumerConfig.yaml file.

Database User Password is Incorrect Error

The consumer or producer will be unable to connect to the database if the encrypted warehouse password is incorrect. To generate a new encrypted password and update the configuration, see Update the Database User Password Configured to the Platform Analytics Repository.

Database User Created with SSL Enabled Error

Platform Analytics supports MySQL versions 5.6, 5.7, and 8.0. For MySQL
8.0, SSL connection is enabled by default. Currently, Platform Analytics
does not support SSL for the database user connecting to MySQL. When you
create the database user for the Platform Analytics Consumer or Usher
Metadata Producer, specify the SSL/TLS option using the REQUIRE clause.

How to Disable SSL:


1. Connect to the Platform Analytics Repository and execute the following command:

show variables like '%ssl%';

2. If the result for 'have_ssl' is 'YES', then SSL is enabled. Create the user with the mysql_native_password and REQUIRE NONE options to connect without SSL:

CREATE USER 'test'@'%' IDENTIFIED WITH mysql_native_password BY 'password' REQUIRE NONE;

Redis Health Check

The Redis Health Check determines if the consumer or producer can successfully connect to the Redis server. The check provides detailed statistics about Redis collected during startup. If all the checks are successful, the final line will read Redis server health check result is healthy.

If you see an error in your check, ensure Redis is running and that your
configuration is correct in the PAConsumerConfig.yaml file.

Suggested Troubleshooting for Redis Health Check Errors

If you receive any of the following errors in the Redis Health Check, here are
suggested workarounds:

Redis is Stopped Error

If the consumer or producer is unable to connect to Redis, it may be because it is in a stopped state. To resolve this error, start the MicroStrategy In-Memory Cache, the Platform Analytics Consumer, and the Usher Metadata Producer.

Failed to Connect to Redis Error

If the consumer or producer is unable to connect to Redis, it may be because the configuration is not correct in the PAConsumerConfig.yaml file. To resolve this error, connect to the machine hosting Platform Analytics and confirm all fields under the redisConnection heading are correct in the PAConsumerConfig.yaml file.

It's possible the Redis server failed to write the snapshot to the disk. If this is the case, you can disable the RDB snapshotting process on the Redis server.

1. Stop the platform-analytics-usher-lookup-producer using the following command:

./platform-analytics-usher-lookup-producer.sh stop

2. Stop the platform-analytics-consumer using the following command:

./platform-analytics-consumer.sh stop

3. Stop the Redis server.

4. Apply the following changes to the redis.conf file:

1. Comment out the three lines under the ###SNAPSHOTTING### heading.

#save 900 1
#save 300 10
#save 60 10000

2. In the same section, enter the line:

save ""

5. Start the Redis server.

6. Start the platform-analytics-consumer using the following command:

./platform-analytics-consumer.sh start

7. Start the platform-analytics-usher-lookup-producer using the following command:

./platform-analytics-usher-lookup-producer.sh start

Password Authentication Enabled for Redis Error

If the consumer or producer is unable to connect to Redis, it may be because password authentication is enabled. By default, Redis is not configured with password authentication, but it can be set after installation.

If Redis has been enabled with password authentication and the password is
missing in the PAConsumerConfig.yaml configuration file, the consumer
or producer will be unable to connect to Redis. To resolve this error, follow
the steps for Enable Password Authentication on the MicroStrategy
Telemetry Cache.

Kafka Health Check

The Kafka Health Check ensures the Telemetry Manager (Apache ZooKeeper) and the Telemetry Server (Kafka Server) are started and connected. If all the checks are successful, the final line will read Kafka cluster health check result is healthy.

Since the Telemetry Server is dependent on the Telemetry Manager, the Telemetry Manager must be started first.

If you see an error in your check, ensure ZooKeeper and Kafka are started.

Suggested Troubleshooting for Kafka Health Check Errors

How to check if ZooKeeper servers are running on all nodes:

• On Linux, run the following command to get the PID:

ps ax | grep java | grep -i QuorumPeerMain | grep -v grep | awk '{print $1}'

• On Windows, open Windows Services and check if the "Apache ZooKeeper" service is running.

How to check if Kafka servers are running on all nodes:

• On Linux, run the following command to get the PID:

ps ax | grep -i 'server.prop' | grep java | grep -v grep | awk '{print $1}'

• On Windows, open Windows Services and check if the "Apache Kafka" service is running.

How to start ZooKeeper and Kafka on all nodes:

• On Linux, run the following commands in the Kafka directory:

# Start ZooKeeper on all nodes
./zookeeper-server-start.sh -daemon ../config/zookeeper.properties

# Start Kafka on all nodes
./kafka-server-start.sh -daemon ../config/server.properties

• On Windows, open Windows Services and start Apache ZooKeeper and Apache Kafka.

If this is a clustered environment with multiple nodes of ZooKeeper and Kafka, you must start all nodes of ZooKeeper first.

Change Journal Health Check

The Change Journal check ensures the Platform Analytics Consumer is healthy. For this check, you must provide a project GUID and a report GUID to test. You are asked to modify the description of the report, which generates a Change Journal log. The test verifies that the Intelligence Server Producer produced the log to the Mstr.PlatformAnalytics.ChangeJournal.CubesReportsDossier Kafka topic. Then, it tests if the log is processed and written to the Platform Analytics warehouse table lu_object.


If the record is found in both the appropriate Kafka topic and the warehouse, the final line will read Change Journal health check result is healthy.

If you see an error in your check, ensure the feature flag Messaging Service for Platform Analytics is on in the Intelligence server and that the property Telemetry Server enabled is set to True in the Intelligence server.

Suggested Troubleshooting for Change Journal Health Check Errors

Verify the Intelligence Server is Configured to Write Telemetry to Kafka

1. Using Command Manager, connect to the Intelligence server.

2. To view the status of the feature flag, run the command:

LIST ALL FEATURE FLAGS;

3. In the results, verify the feature flag Messaging Service for Platform Analytics says ON. If the feature flag is OFF, run the following command to turn it on:

ALTER FEATURE FLAG "Messaging Service for Platform Analytics" ON;

4. To view the status of the property Telemetry Server enabled, run the command:

LIST PROPERTIES FOR SERVER CONFIGURATION;

5. In the results, verify the property Telemetry Server enabled is set to True. If the property is set to False, execute the command below:

Replace <kafka server IP> with your Kafka server IP address.

ALTER SERVER CONFIGURATION ENABLEMESSAGINGSERVICES TRUE CONFIGUREMESSAGINGSERVICES "bootstrap.servers: <kafka server IP>:9092/batch.num.messages:5000/queue.buffering.max.ms:2000";

6. Restart the Intelligence server.

Verify the Platform Analytics Consumer is Working

On Linux:

1. Navigate to the folder where Platform Analytics is installed:

<Install>/PlatformAnalytics/bin

2. Run the following command:

./platform-analytics-consumer.sh status

3. Start or restart the server with the following command:

./platform-analytics-consumer.sh start

If restarting the server does not resolve the issue, check the logs under <Install>/PlatformAnalytics/log/platform-analytics-consumer.log or contact MicroStrategy Technical Support and attach the folder <Install>/PlatformAnalytics/log to your case.

On Windows:

1. Open Windows Services using services.msc.

2. Start or restart the Telemetry Store.

If restarting the server does not resolve the issue, check the logs under <Install>/PlatformAnalytics/log/platform-analytics-consumer.log or contact MicroStrategy Technical Support and attach the folder <Install>/PlatformAnalytics/log to your case.

Statistics Health Check

This health check ensures the Platform Analytics Consumer can process report statistics. You are prompted to execute the report from the previous health check. This generates a log to the topic Mstr.PlatformAnalytics.IsReportStats. The health check verifies that the Intelligence Server Producer produced the record to Kafka and that it is in the Platform Analytics warehouse.

If the record is found in both the appropriate Kafka topic and the warehouse, the final line will read Statistics health check result is healthy.

If you see an error in your check, ensure statistics are enabled for the project and that Messaging Services is configured correctly.

Suggested Troubleshooting for Statistics Health Check Errors

Verify Statistics are Enabled for the Project

1. Using Command Manager, connect to the Intelligence server.

2. Run the command:

Replace <Project Name> with your project's name.

LIST ALL PROPERTIES FOR PASTATISTICS IN PROJECT "<Project Name>";

3. In the results, verify the property Basic Statistics is set to True. If it is set to False, run the command below:

Replace <Project Name> with your project's name.

ALTER PASTATISTICS BASICSTATS ENABLED DETAILEDREPJOBS TRUE DETAILEDDOCJOBS TRUE JOBSQL TRUE COLUMNSTABLES TRUE IN PROJECT "<Project Name>";

4. Restart the Intelligence server.

Verify the Platform Analytics Consumer is Working

On Linux:

1. Navigate to the folder where Platform Analytics is installed:

<Install>/PlatformAnalytics/bin

2. Run the following command:

./platform-analytics-consumer.sh status

3. Start or restart the server with the following command:

./platform-analytics-consumer.sh start

If restarting the server does not resolve the issue, check the logs under <Install>/PlatformAnalytics/log/platform-analytics-consumer.log or contact MicroStrategy Technical Support and attach the folder <Install>/PlatformAnalytics/log to your case.

On Windows:

1. Open Windows Services using services.msc.

2. Start or restart the Telemetry Store.

If restarting the server does not resolve the issue, check the logs under <Install>/PlatformAnalytics/log/platform-analytics-consumer.log or contact MicroStrategy Technical Support and attach the folder <Install>/PlatformAnalytics/log to your case.

Advanced Job Telemetry

Advanced job telemetry aims to improve administration and resource monitoring in Platform Analytics by capturing job and object information before an Intelligence server crash.

Purpose
Prior to MicroStrategy 2021 Update 6, job statistics were not recorded when
a job was created. Thus, when the Intelligence server crashed, there was no
way to determine which jobs were active or the objects being manipulated.


Starting in MicroStrategy 2021 Update 6, Platform Analytics enhances the data model to provide server statistics to MicroStrategy administrators during initial job creation so that the end user can understand what users, actions, or cubes caused the Intelligence server to crash. A new Kafka topic, Mstr.PlatformAnalytics.IsJobStats, is used to process messages of the creation and completion of jobs.

Workflow
Once a job is created, a message is sent to the Kafka topic with the CREATETIME field. Upon job completion, another message is sent with the COMPLETETIME field.

Platform Analytics processes messages from the Mstr.PlatformAnalytics.IsJobStats Kafka topic and batch inserts the start of a job into the fact_inprocess_jobs table with creation_time in ThreeSecondETL. Once the job is complete, an upsert is performed with completion_time. Platform Analytics runs an extract, transform, and load (ETL) process on an hourly basis to purge the completed jobs and clean the records.
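
A minimal sketch of this upsert pattern in MySQL (the job_id key column and all values are illustrative; the real table contains additional columns):

-- Job-creation message: record the job as in-process
INSERT INTO fact_inprocess_jobs (job_id, creation_time)
VALUES (12345, '2023-03-01 10:15:00')
ON DUPLICATE KEY UPDATE creation_time = VALUES(creation_time);

-- Job-completion message: the upsert fills in completion_time for the same job
INSERT INTO fact_inprocess_jobs (job_id, completion_time)
VALUES (12345, '2023-03-01 10:15:07')
ON DUPLICATE KEY UPDATE completion_time = VALUES(completion_time);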

Modify the Default Cube Refresh Rate

Platform Analytics provides default schedules and subscriptions for all out-of-the-box cubes in the Platform Analytics project. The schedules and subscriptions are created when you run the Platform Analytics Project Configuration Scripts. By default, each cube is scheduled to republish once an hour on a staggered 10-minute schedule. For example, cube1 is republished on the 00 minute of each hour, cube2 is republished on the 10th minute, and so on.

The staggered schedules are intended to prevent loading the Intelligence Server with multiple cubes republishing simultaneously. The cube refresh rate can be changed any time after the initial install and configuration of Platform Analytics using any supported MicroStrategy tool. For a full list of the schedules and subscriptions shipped with the current release of Platform Analytics, view the PlatformAnalyticsConfiguration.scp script, located at ./MicroStrategy/Platform Analytics/Util.

To change the default refresh rate, you can utilize MicroStrategy Developer
or Command Manager.

How to Modify the Default Cube Refresh Rate Using Developer

1. In Developer, log into the project source where the Platform Analytics project is configured.

2. Go to Administration > Configuration Managers > Schedules.

3. Right-click the schedules that begin with PlatformAnalytics and select Edit.

4. Step through the Schedule Wizard until you get to the Recurrence Pattern dialog.

5. Modify the value next to Executing every to the desired frequency.

6. Click Next.

7. Click Finish.

How to Modify the Default Cube Refresh Rate Using Command Manager

1. In Command Manager, log into the project source where the Platform Analytics project is configured.

2. Enter the ALTER SCHEDULE statement with the required modifications.

For example:

ALTER SCHEDULE "PlatformAnalytics_Every_Hour_00" STARTDATE 08/01/2017 ENDDATE NEVER TYPE TIMETRIGGERED DAILY EVERY WEEKDAY EXECUTE ALL DAY EVERY 120 MINUTES STARTTIME 00:00;

3. Execute the command.

Delete User Data

You can delete data for specific users, across all tables in the Platform Analytics warehouse, by running the data cleanup script.
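
For example, a valid input file mixes login IDs and email addresses, one entry per line (values illustrative):

jsmith
mkhan@example.com
analyst_042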

How to Delete User Data in Windows

1. Create a text file with the login IDs or email addresses of the users whose data you want to delete. There should only be one login ID or email address per line.

2. Go to the bin folder in the PlatformAnalytics directory.

3. Run the platform-analytics-data-cleanup script. Make sure to enter the full path of the text file as the input parameter.

platform-analytics-data-cleanup.ps1 -f C:\Users\mstr\Desktop\YourTextFile.txt

4. Open the corresponding log file to confirm the number of deletions.

5. If necessary, go to the Platform Analytics repository to confirm the deletions.

How to Delete User Data in Linux

1. Create a text file with the login IDs or email addresses of the users whose data you want to delete. There should only be one login ID or email address per line.

2. Go to the bin folder in the PlatformAnalytics directory.

3. Run the platform-analytics-data-cleanup script. Make sure to enter the full path of the text file as the input parameter.

./platform-analytics-data-cleanup.sh -f /opt/mstr/PlatformAnalytics/bin/YourTextFile.txt

4. Tail the corresponding log file to confirm the number of deletions.

tail -f ../log/platform-analytics-data-cleanup.log

5. You can also go to your Platform Analytics repository to confirm the deletions.

Purge Platform Analytics Warehouse

It may become necessary to purge some of the data collected and stored in the Platform Analytics warehouse. If a large volume of stored data begins to negatively affect the performance of the Platform Analytics Consumer, or if some metadata or projects have been dropped from an environment, the commands listed in this section allow administrators to remove the associated data from the Platform Analytics warehouse.

The commands used to purge Platform Analytics warehouse data are based on different criteria, including:

• Metadata: You can purge data from some specific metadata.

• Projects: You can purge data from specific projects, but all those projects must be in one metadata.

• Deleted Objects: You can purge the deleted objects and related data.

• Deleted Projects: You can purge the deleted projects and related data.

• DaysToKeep: You can purge data and keep only the latest data within the given number of days.

MicroStrategy provides the following valid commands to purge the Platform Analytics warehouse:

DELETE_ALL_OBJECTS_IN_METADATA
DELETE_ALL_OBJECTS_IN_PROJECTS
DELETE_ALL_DELETED_OBJECTS
DELETE_ALL_DELETED_PROJECTS
DELETE_ALL_DELETED_OBJECTS_IN_METADATA
DELETE_ALL_DELETED_PROJECTS_IN_METADATA
DELETE_ALL_DELETED_OBJECTS_IN_PROJECTS
DELETE_ALL_FACTS
DELETE_ALL_FACTS_FROM_METADATA
DELETE_ALL_FACTS_FROM_PROJECTS
DELETE_ALL_FACTS_FROM_DELETED_OBJECTS
DELETE_ALL_FACTS_FROM_DELETED_PROJECTS
DELETE_ALL_FACTS_FROM_DELETED_OBJECTS_IN_METADATA
DELETE_ALL_FACTS_FROM_DELETED_PROJECTS_IN_METADATA
DELETE_ALL_FACTS_FROM_DELETED_OBJECTS_IN_PROJECTS

DELETE_ALL_OBJECTS_IN_METADATA

This command purges the given metadata and all related data, including the corresponding rows in lu_metadata. The following tables are purged:

lu_metadata, lu_project, lu_account, lu_cache, lu_client_session, lu_db_connection, lu_db_connection_map, lu_db_error, lu_db_instance, lu_db_login, lu_entity, lu_event, lu_grid, lu_history_list_message, lu_machine_configuration, lu_mstr_user, lu_object, lu_prompt, lu_recipient, lu_schedule, lu_security_filter, lu_server_definition, lu_server_instance, lu_session, lu_status, lu_subscription_base, lu_subscription_device, lu_user_group, etl_lu_folder, etl_lu_metadata_audit_time, etl_rel_childgroup_usergroup, rel_account_usergroup, rel_scope_project, rel_sessionid_coordinate, rel_user_entity_source, access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_metadata_users, fact_named_user_license, fact_object_change_journal, fact_object_component, fact_performance_monitor, fact_product_named_users_license, fact_prompt_answers, fact_report_columns, fact_server_cpu_license, fact_sql_stats, fact_step_sequence, fact_usher_entity_resolved_privilege, fact_usher_inbox_message, fact_usher_inbox_response

Postgres-only fact tables purged:

historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence, historical_lu_session
DELETE_ALL_OBJECTS_IN_PROJECTS

This command purges all the given projects and related data, including the corresponding rows in lu_project. The following tables are purged:

lu_project, lu_db_error, lu_grid, lu_history_list_message, lu_object, lu_prompt, lu_security_filter, lu_status, etl_lu_folder, rel_scope_project, access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_object_change_journal, fact_object_component, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

Postgres-only fact tables purged:

historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence

DELETE_ALL_DELETED_OBJECTS

This command purges all deleted objects and related data across the whole Platform Analytics warehouse. The following tables are purged:

lu_project, lu_account, lu_cache, lu_db_connection, lu_db_connection_map, lu_db_error, lu_db_instance, lu_db_login, lu_entity, lu_event, lu_grid, lu_history_list_message, lu_mstr_user, lu_object, lu_prompt, lu_schedule, lu_security_filter, lu_server_definition, lu_server_instance, lu_status, lu_subscription_base, lu_subscription_device, lu_user_group, etl_lu_folder, etl_rel_childgroup_usergroup, rel_account_usergroup, rel_scope_project, rel_user_entity_source, access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_latest_cube_cache, fact_object_change_journal, fact_object_component, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence, fact_user_entity_resolved_privilege

Postgres-only fact tables purged:

historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence


DELETE_ALL_DELETED_PROJECTS

This command purges all deleted projects and related data across the whole Platform Analytics warehouse. The following tables are purged:

lu_project, lu_db_error, lu_grid, lu_history_list_message, lu_object, lu_prompt, lu_security_filter, lu_status, etl_lu_folder, rel_scope_project, access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_object_change_journal, fact_object_component, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

Postgres-only fact tables purged:

historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence


DELETE_ALL_DELETED_OBJECTS_IN_METADATA

This command purges all deleted objects under the given metadata, along with related data. The following tables are purged:

lu_project, lu_account, lu_cache, lu_db_connection, lu_db_connection_map, lu_db_error, lu_db_instance, lu_db_login, lu_entity, lu_event, lu_grid, lu_history_list_message, lu_mstr_user, lu_object, lu_prompt, lu_schedule, lu_security_filter, lu_server_definition, lu_server_instance, lu_status, lu_subscription_base, lu_subscription_device, lu_user_group, etl_lu_folder, etl_rel_childgroup_usergroup, rel_account_usergroup, rel_scope_project, rel_user_entity_source, access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_latest_cube_cache, fact_object_change_journal, fact_object_component, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence, fact_user_entity_resolved_privilege

Postgres-only fact tables purged:

historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence

DELETE_ALL_DELETED_PROJECTS_IN_METADATA

This command purges all deleted projects under the given metadata, along with related data. The following tables are purged:

lu_project, lu_db_error, lu_grid, lu_history_list_message, lu_object, lu_prompt, lu_security_filter, lu_status, etl_lu_folder, rel_scope_project, access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_object_change_journal, fact_object_component, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

Postgres-only fact tables purged:

historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence

DELETE_ALL_DELETED_OBJECTS_IN_PROJECTS

This command purges all deleted objects under the given projects, along with related data. The following tables are purged:

lu_project, lu_db_error, lu_grid, lu_history_list_message, lu_object, lu_prompt, lu_security_filter, lu_status, etl_lu_folder, rel_scope_project, access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_object_change_journal, fact_object_component, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

Postgres-only fact tables purged:

historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence

DELETE_ALL_FACTS

This command purges all fact tables across the whole Platform Analytics warehouse. The following tables are purged:

lu_db_error, lu_session, lu_status, rel_sessionid_coordinate, access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_object_change_journal, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

Postgres-only fact tables purged:

historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence, historical_lu_session

DELETE_ALL_FACTS_FROM_METADATA

This command purges all fact tables for the given list of metadata. The following tables are purged:

lu_client_session, lu_db_error, lu_session, lu_status, rel_sessionid_coordinate, access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_object_change_journal, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

Postgres-only fact tables purged:

historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence, historical_lu_session

DELETE_ALL_FACTS_FROM_PROJECTS

This command purges all fact tables for the given list of projects. The following tables are purged:

lu_db_error, lu_status, access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_object_change_journal, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

Postgres-only fact tables purged:

historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence

DELETE_ALL_FACTS_FROM_DELETED_OBJECTS

This command purges all fact table data generated by deleted objects across the entire Platform Analytics warehouse. The following tables will be purged:

l Lookup tables: lu_db_error, lu_status

l Fact tables: access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_object_change_journal, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

l PostgreSQL-only fact tables: historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence

DELETE_ALL_FACTS_FROM_DELETED_PROJECTS

This command purges all fact table data generated by deleted projects across the entire Platform Analytics warehouse. The following tables will be purged:

l Lookup tables: lu_db_error, lu_status

l Fact tables: access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_action_security_filter, fact_client_executions, fact_object_change_journal, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

l PostgreSQL-only fact tables: historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence

DELETE_ALL_FACTS_FROM_DELETED_OBJECTS_IN_METADATA

This command purges all fact table data generated by deleted objects in the metadata you specify. The following tables will be purged:

l Lookup tables: lu_db_error, lu_status

l Fact tables: access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_client_executions, fact_action_security_filter, fact_object_change_journal, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

l PostgreSQL-only fact tables: historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence

DELETE_ALL_FACTS_FROM_DELETED_PROJECTS_IN_METADATA

This command purges all fact table data generated by deleted projects in the metadata you specify. The following tables will be purged:

l Lookup tables: lu_db_error, lu_status

l Fact tables: access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_client_executions, fact_action_security_filter, fact_object_change_journal, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

l PostgreSQL-only fact tables: historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence

DELETE_All_FACTS_FROM_DELETED_OBJECTS_IN_PROJECTS

This command purges all fact table data generated by deleted objects in the projects you specify. The following tables will be purged:

l Lookup tables: lu_db_error, lu_status

l Fact tables: access_transactions, access_transactions_reprocess, fact_action_cube_cache, fact_client_executions, fact_action_security_filter, fact_object_change_journal, fact_prompt_answers, fact_report_columns, fact_sql_stats, fact_step_sequence

l PostgreSQL-only fact tables: historical_access_transactions, historical_fact_action_cube_cache, historical_fact_action_security_filter, historical_fact_object_change_journal, historical_fact_prompt_answers, historical_fact_report_columns, historical_fact_sql_stats, historical_fact_step_sequence

Purge Configuration File


The purgeConfig.yaml file is located in the Platform Analytics conf directory. Uncomment each command you would like to execute by removing the # in front of each line. Six parameters control the various purge commands:

l doTestBeforePurge: The default value is true. Set to false to skip testing before command execution.

l commandName: The name of the command to execute.

l onlyDeletedProjects: Set to true to purge only deleted projects.

l onlyDeletedObjects: Set to true to purge only deleted objects.

l metadataList: By default, data is only purged from the Platform Analytics warehouse. Provide a list of metadata IDs to apply the purge actions to specific metadata only.

l projectList: By default, data is only purged from the Platform Analytics warehouse. Provide a list of project GUID values to apply the purge actions to specific projects only.


l daysToKeep: If this value is 0, all fact table data is purged. Otherwise, the most recent daysToKeep days of data are retained (the absolute value is used if a negative number is provided) and everything older is purged.

The following is a sample file:

#doTestBeforePurge: true
#commandsToExecute:
# - commandName: DELETE_ALL_DELETED_OBJECTS

# - commandName: DELETE_ALL_DELETED_PROJECTS

# - commandName: DELETE_ALL_OBJECTS_IN_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2

# - commandName: DELETE_ALL_DELETED_OBJECTS_IN_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2

# - commandName: DELETE_ALL_DELETED_PROJECTS_IN_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2

# - commandName: DELETE_ALL_OBJECTS_IN_PROJECTS
# metadataList:
# - metadataId
# projectList:
# - projectGuid_1
# - projectGuid_2

# - commandName: DELETE_ALL_DELETED_OBJECTS_IN_PROJECTS
# metadataList:
# - metadataId
# projectList:
# - projectGuid_1
# - projectGuid_2

# - commandName: DELETE_ALL_FACTS_FROM_DELETED_OBJECTS
# daysToKeep: 60

# - commandName: DELETE_ALL_FACTS_FROM_DELETED_PROJECTS
# daysToKeep: 60

# - commandName: DELETE_ALL_FACTS_FROM_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2
# daysToKeep: 60

# - commandName: DELETE_ALL_FACTS_FROM_DELETED_OBJECTS_IN_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2


# daysToKeep: 60

# - commandName: DELETE_ALL_FACTS_FROM_DELETED_PROJECTS_IN_METADATA
# metadataList:
# - metadataId_1
# - metadataId_2
# daysToKeep: 60

# - commandName: DELETE_ALL_FACTS_FROM_PROJECTS
# metadataList:
# - metadataId
# projectList:
# - projectGuid_1
# - projectGuid_2
# daysToKeep: 60

# - commandName: DELETE_All_FACTS_FROM_DELETED_OBJECTS_IN_PROJECTS
# metadataList:
# - metadataId
# projectList:
# - projectGuid_1
# - projectGuid_2
# daysToKeep: 60
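
For example, to test and then purge fact data older than 60 days that was generated by deleted objects, uncomment the corresponding lines so the file contains the following (the indentation shown is illustrative; keep the spacing used in your file):

doTestBeforePurge: true
commandsToExecute:
  - commandName: DELETE_ALL_FACTS_FROM_DELETED_OBJECTS
    daysToKeep: 60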

Perform a Data Purge


1. Open a terminal window and navigate to the Platform Analytics folder.

2. Execute the purge script:

Windows: platform-analytics-purge-warehouse.ps1

Linux: ./platform-analytics-purge-warehouse.sh

3. If you have enabled doTestBeforePurge, the purge information is displayed. Enter Y or N to confirm or abort the purge of the listed data.

View Purge Statistics


The Platform Analytics warehouse contains the purge_statistic table to track
the purge operations that have been executed. Each record contains the
following important information:

l id: A unique identifier for the record.

l execute_time: The time in milliseconds taken to execute a populate or delete SQL query.


l insert_ts: The timestamp of when the query finished executing.

l purge_command_id: A generated identifier shared by all queries executed as part of a single purge command.

l purge_command_name: The name of the purge command that was run.

l rows: The number of rows affected by the query.

l table_name: The table affected by the query.
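
For example, to review recent purge activity, you can query this table directly. The following is a minimal sketch that assumes standard SQL access to the Platform Analytics warehouse; depending on your database, the rows column may need to be quoted as an identifier:

-- List the most recent purge operations first
SELECT purge_command_name, table_name, rows, execute_time, insert_ts
FROM purge_statistic
ORDER BY insert_ts DESC;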

Accessing the Data Captured by Platform Analytics
Data is streamed in real time through the MicroStrategy Messaging Services layer and stored in the Platform Analytics warehouse. Platform Analytics provides several ways to access, analyze, and act on this telemetry, including out-of-the-box standard dossiers and native telemetry interfaces in MicroStrategy Workstation, empowering administrators to provide a better experience to MicroStrategy users.

You can access Platform Analytics data in three different ways depending on
your needs:

l By viewing the Platform Analytics data embedded in Workstation: Platform Analytics exposes some of the data that it captures directly in the user interface of Workstation. This gives users who would otherwise not know how to consume Platform Analytics data access to important MicroStrategy data. For more information, see How to View Dossier Usage in the Workstation Help.

l By running the out-of-the-box Platform Analytics dossiers: Platform


Analytics ships with a MicroStrategy project that provides out-of-the-box
dossiers designed to showcase some of the data that Platform Analytics
captures for each of the different system areas. The dossiers included with


Platform Analytics are:

l Compliance Telemetry: Determine if a MicroStrategy implementation


complies with the license entitlements.

l Cube and Cache Monitoring: Ensure that cubes and caches are being
fully leveraged to improve the performance of key analytics content.

l Error Analysis: Detect errors and anomalies in the system and improve
the experience of MicroStrategy users by fixing those issues.

l Object Telemetry: Identify the most popular analytics content in the


system and determine who is viewing it and how fast it runs.

l Performance Troubleshooting: Analyze the breakdown of job step


performance and SQL/CSI pass performance for each job.

l Project Overview: Analyze the performance of the MicroStrategy


projects and determine which users connect to them and which products
they use to connect.

l Subscription Analysis: Determine which analytics content users


subscribe to and how much load these subscriptions create on the
system.

l User Activity: Monitor what users do in MicroStrategy and ensure that


they have a positive experience free of performance issues and other
errors.

l By creating your own dossiers: Platform Analytics also supports the creation of self-service content (dossiers, reports, and documents) based on the out-of-the-box schema and application objects included in the Platform Analytics project.

Add Supplementary Data to Platform Analytics


To enhance or limit the data visible in Platform Analytics, utilize any of the
following instructions to add data:


The following information is not available out-of-the-box.

Prompt Metric Names


Many metric names are the same in Enterprise Manager and Platform Analytics; however, there are slight differences between the two.

Enterprise Manager Metric Name | Platform Analytics Metric Name | Description
Number of Prompt Answers for a given Prompt | Count of Prompt Answers | Counts the number of prompt answers per metadata.
RP Number of Jobs (IS_PR_ANS_FACT) | Count of Prompted Jobs | Counts the number of jobs that require prompt answers.
RP Number of Jobs Containing Prompt Answer Value | Count of Jobs Containing a Prompt Answer Value | Counts the number of jobs that contain a specific prompt answer.
RP Number of Jobs Not Containing Prompt Answer Value | Count of Jobs Not Containing a Prompt Answer Value | Counts the number of jobs that do not contain a specific prompt answer.
RP Number of Jobs with Unanswered Prompts | Count of Prompted Jobs Where Prompt Answer Is Not Answered | Counts the number of jobs that contain an empty prompt answer.

Job Metric Names

Enterprise Manager Metric Name | Platform Analytics Metric Name | Description
RP Number of Ad-Hoc Jobs | Ad-hoc Jobs | Counts the number of ad-hoc jobs.
RP Number of Data Requests | Standalone Report Executions | Provides the number of report executions requested by users.
RP Number of DB Tables Accessed | DB Tables Accessed | Counts the number of database tables accessed.
RP Number of Jobs | Jobs | Counts the number of job executions.
RP Number of Jobs For Concurrency Reporting | Jobs | Counts the number of job executions.
RP Number of Jobs hitting Database | Database Jobs | Counts the number of jobs hitting the database.
RP Number of Jobs Today | Jobs Today | Counts the number of job executions today.
RP Number of Jobs w/o Cache Hit | Non Cache Hit Jobs | Counts the number of job executions that did not hit a server cache.
RP Number of Jobs w/o Element Loading | Non Element Load Jobs | Counts the number of job executions that did not result from an element loading.
RP Number of Jobs with Cache Hit | Cache Hit Jobs | Counts the number of job executions that hit a server cache.
RP Number of Jobs with DB Error | Failed DB Jobs | Counts the number of job executions that caused a database error.
RP Number of Jobs with Element Loading | Element Loading Jobs | Counts the number of job executions that resulted from an element loading.
RP Number of Jobs with Error | Failed Jobs | Counts the number of job executions that caused an Intelligence Server or database error.
RP Number of Jobs with Security Filter | Security Filter Jobs | Counts the number of job executions that had a security filter applied.
RP Number of Jobs with SQL Execution | SQL Execution Jobs | Counts the number of job executions that execute SQL.
RP number of Narrowcast Server jobs | Subscription Jobs | Counts the number of job executions run through MicroStrategy Narrowcast Server.
RP Number of Prompted Jobs | Prompted Jobs | Counts the number of job executions that included a prompt.
RP Number of Prompts Executed | Distinct Prompts | Counts the number of prompts in a report job.
RP Number of Report Jobs from Document Execution | Child Jobs | Counts the number of job executions that resulted from a document execution.
RP Number of Reports Used | Objects | Counts the number of report definitions executed.
RP Number of Result Rows | Report Row Count | Counts the number of result rows returned from a report execution.
RP Number of Result Rows for View Report | Cube Row Count | Counts the number of rows in an OLAP View Report job.
RP Number of Scheduled Jobs | Subscription Jobs | Counts the number of job executions that resulted from a schedule execution.
RP Number of SQL Passes | SQL Passes | Counts the number of passes executed during report execution.
RP Number of WH SQL Passes | WH SQL Passes | Counts the number of SQL passes executed on the database. This metric excludes Analytical SQL.
RP Percentage of Ad-Hoc Jobs | % Ad-Hoc Jobs | Percentage of ad-hoc jobs vs. total jobs.
RP Percentage of Jobs hitting Database | % Database Jobs | Percentage of jobs that hit a database vs. total jobs.
RP Percentage of Jobs with Cache Hit | % Cache Hit Jobs | Percentage of jobs that hit a server cache vs. total jobs.
RP Percentage of Jobs with Cube Hit | % Cube Cache Hit Jobs | Percentage of jobs that hit a cube cache vs. total jobs.
RP Percentage of Jobs with DB Error | % Failed DB Jobs | Percentage of jobs with a database error vs. total jobs.
RP Percentage of Jobs with Error | % Failed Jobs | Percentage of jobs with any error vs. total jobs.
RP Percentage of Narrowcast Server jobs | % Subscription Jobs | Percentage of jobs from Narrowcast Server vs. total jobs.
RP Percentage of Prompted Jobs | % Prompted Jobs | Percentage of prompted jobs vs. total jobs.
RP Percentage of Scheduled Jobs | % Subscription Jobs | Percentage of scheduled jobs vs. total jobs.
RP Jobs with No Data Returned | Jobs with No Data Returned | Counts the number of jobs that returned no data.
RP Export Engine Jobs | Export Engine Jobs | Counts the number of report jobs passing through the export engine.
Number of Sessions per User | Sessions per User | Provides the number of sessions created per connected user.
DP Average Number of Jobs per Session | Jobs per Session | Provides the average number of document jobs per user session.
RP Average Number of Jobs per Session | Jobs per Session | Provides the average number of job executions per session.
RP Average Number of Jobs per User | Jobs per User | Average number of job executions per user.
P Number of Data Request Jobs with Error | Data Request Jobs with Error | Counts the number of jobs requested by a user that encountered an error.
RP Number of Jobs w/o Cache Creation | Jobs w/o Cache Creation | Counts the number of jobs that do not create a cache.
RP Number of Jobs with Cache Creation | Jobs with Cache Creation | Counts the number of jobs that create a cache.
RP Number of Jobs with Datamart Creation | Jobs with Datamart Creation | Counts the number of job executions that created a datamart.
RP number of Narrowcast Server jobs | Subscription Jobs | Counts the number of subscription jobs.
RP Number of Non-Cancelled Jobs | Non-Canceled Jobs | Counts the number of non-canceled job executions.
RP Number of Users who ran report | Users Who Execute Jobs | Counts the number of distinct users that executed jobs.
RP Percentage of Jobs with Datamart Creation | % Jobs with Datamart Creation | Measures the percentage of jobs that created a datamart vs. total jobs.
RP Export Engine Jobs | Export Engine Jobs | Counts the number of jobs passing through the export engine.
RP Average Number of DB Result Rows per Job | Average of DB Result Rows per Job | Provides the average number of database result rows per job execution.
Count of EM_USER_ID from EM_USR_GP_USER table only | Unused User Groups in Metadata | Counts the number of user groups that don't have any users in the metadata.
RP Average Daily Use Duration per job (hh:mm:ss) | Avg Job Execution Duration (s) | Provides the max Execution Duration (in seconds) of jobs. A job's Execution Duration records the total time spent during a job.
DP Number of Users who ran Documents | Users Who Ran Documents | Counts the number of distinct users executing a document.
Number of Users running reports | Users Who Ran Reports | Counts the number of distinct users executing a report.
Number of Users running documents | Users Who Ran Documents | Counts the number of distinct users executing a document.

Import an Organizational Hierarchy


You can add your institution's HR organization hierarchy to your Platform
Analytics project. The HR User Hierarchy attributes are intended to enrich
the user level analysis in your Platform Analytics project. All attributes and
tables corresponding to the HR organization hierarchy must be manually
provided by importing a .CSV file.

The Department attribute is the highest level parent of an HR organization


hierarchy and is a consolidation of multiple Divisions. The hierarchy
relationship is Department > Division > Group > Unit > User. The login
attributes can be used as security filters to restrict data available to only


managers and their direct reports. These attributes and tables are elective
and available to enrich analysis, but not a required feature for Platform
Analytics.

Before importing your .CSV file, ensure you've done the following:

l Installed and configured Platform Analytics. See Installing Platform Analytics for more information.

l Located the IP address and port of the database with your Platform Analytics Repository.

l Obtained the database user credentials to access your Platform Analytics Repository.

l Obtained MySQL Workbench 6.3+ or any equivalent database client.

Prepare Your .CSV File


Before importing your data, your .CSV file needs to have the following
column headers:


l employee_email: The email address of the employee. The email address


provided must match the email address of the MicroStrategy metadata
user or the user's MicroStrategy badge.

l department_desc: The name of the organization's department. For


example, Sales, Finance, Technology, etc.

l department_owner_id: The employee number for the department head.


The Department Owner is the manager for the corresponding department.
Each department can have only one Department Owner.

l division_desc: The name of the organization's division. For example,


North America Sales, Corporate Finance, etc. The Division is a
consolidation of multiple groups within the organization hierarchy.

l division_owner_id: The employee number for the division owner. Each


division can only have one Division Owner.

l group_desc: The name of the organization's group. A group is a


consolidation of multiple units within the organization hierarchy.

l group_owner_id: The employee number for the group owner. Each group
can only have one Group Owner.

l business_unit_desc: The name of the group's unit. A unit is a level above


the user within the organization hierarchy.

l business_unit_owner_id: The employee number for the unit owner. Each unit can only have one Unit Owner.

l employee_first_name: The first name of the employee.

l employee_last_name: The last name of the employee.

You can add optional columns to import into your Platform Analytics project:

l ceo_employee_id: Your organization's CEO employee number.

l employee_id: The employee number within your organization.


Example .CSV File

employee_email,department_desc,department_owner_id,division_desc,division_owner_id,group_desc,group_owner_id,business_unit_desc,business_unit_owner_id,employee_first_name,employee_last_name
e1@email.com,HR,1,Recruiting,2,Campus,3,East,5,Leisa,Drake
e2@email.com,HR,1,Recruiting,2,Campus,3,West,5,Vella,Plain
e3@email.com,HR,1,Recruiting,2,Campus,3,West,5,Sofia,Strickler
e4@email.com,HR,1,Recruiting,2,Tech,4,Backend,5,Fern,Byun
e5@email.com,HR,1,Recruiting,2,Tech,4,Frontend,5,Dorthy,Gumps

Import Your .CSV File


1. Ensure you have prepared your .CSV file with the required column
headers.

2. Open MySQL Workbench, or any equivalent database client, and


connect to your platform_analytics_warehouse.

3. Right-click the stg_employee table and select Table Data Import


Wizard.

4. Select your prepared .CSV file and click Upload.

5. Under Select Destination, choose Use existing table and select the
stg_employee table from the drop-down.

6. Click Next.

7. At the Configure Import Settings dialog, confirm that your .csv file was
uploaded correctly.

8. Click Next.


9. Click Save.

10. Restart the Platform Analytics Consumer to immediately populate the


organization hierarchy.
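
If you prefer a scripted load over the import wizard, a statement along the following lines can populate the staging table. This is only a sketch: it assumes a MySQL-based Platform Analytics Repository, that local_infile is enabled on both client and server, and that your file's columns are ordered as in the example above; the file path is hypothetical.

-- Load the prepared .CSV file into the staging table, skipping the header row
LOAD DATA LOCAL INFILE '/tmp/hr_hierarchy.csv'
INTO TABLE stg_employee
FIELDS TERMINATED BY ','
IGNORE 1 LINES
(employee_email, department_desc, department_owner_id, division_desc,
division_owner_id, group_desc, group_owner_id, business_unit_desc,
business_unit_owner_id, employee_first_name, employee_last_name);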

Suggested Troubleshooting if Missing Users

After importing your organizational hierarchy, it's expected that each


employee is already in the MicroStrategy metadata or they have a
MicroStrategy Badge. As part of the importing process, the users from
the .csv file are loaded into the stg_employee table and then matched
with the lu_user table via their email address to add their organization
hierarchy values.

If the importing process does not find a matching email address between stg_employee and lu_user, a blank row is inserted into stg_employee_reprocess to track the missing users. To re-import these missing users, ensure that the email address in the .CSV file matches the email address of the MicroStrategy metadata user or MicroStrategy Badge. If this value is correct, ensure the environment is configured correctly.

Once you have addressed the root cause of the mismatched email
addresses, the Platform Analytics Daily ETL will automatically resolve
any missing users.
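
To identify the affected rows, you can compare the staging table against the user lookup table directly. This is a minimal sketch that assumes the email columns are named as shown; verify the exact column names against your repository schema:

-- Staged employees whose email has no matching metadata user
SELECT s.employee_email
FROM stg_employee s
LEFT JOIN lu_user u ON u.email = s.employee_email
WHERE u.email IS NULL;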


Example Report Created from an Imported .CSV File

Limit MicroStrategy Badge Networks Analyzed by Platform Analytics
The etl_network_control table controls which telemetry from
MicroStrategy Badge Networks are processed into the Platform Analytics
Repository. By default, the table is empty, so Platform Analytics will process
data for all MicroStrategy Badge Networks.

To limit the data that is processed into the Platform Analytics Repository, insert one or more rows into the table with the network_id and network_desc. If a MicroStrategy Badge network is inserted into this table, Platform Analytics only processes that specific network and excludes all others.

Column | Description | Data Type
network_id | ID of the network to be processed. | bigint(20)
network_desc | Description/name of the network. | varchar(255)
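
Because an empty etl_network_control table means that all networks are processed, you can revert to the default behavior by emptying the table again. A minimal sketch, assuming direct SQL access to the Platform Analytics Repository:

-- Remove all rows so Platform Analytics processes every Badge network again
DELETE FROM etl_network_control;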

Before limiting MicroStrategy Badge networks, ensure you've done the following:


l Installed and configured Platform Analytics. See Installing Platform Analytics for more information.

l Located the IP address and port of the database with your Platform Analytics Repository.

l Obtained the database user credentials to access your Platform Analytics Repository.

l Obtained MySQL Workbench 6.3+ or any equivalent database client.

How to Limit MicroStrategy Badge Networks


1. Open Windows Services, locate MicroStrategy Platform Analytics
Consumer, MicroStrategy Usher Metadata Producer, and MicroStrategy
In-Memory Cache. Right-click each service and select Stop.

2. Open the MySQL Workbench or any database client equivalent.

3. Use MySQL Workbench to connect to your Platform Analytics Repository.

4. Review the existing list of Badge Networks currently processed into the
Platform Analytics Repository. Note the networks you want to limit
processing the data for:

select network_id, network_desc


from lu_network;

5. Update the etl_network_control table with the desired Badge networks


to be analyzed by Platform Analytics and processed into the Platform
Analytics Repository.

INSERT INTO etl_network_control (network_id, network_desc)


VALUES ('123', 'MicroStrategy Usher Network');

6. Open Windows Services, locate MicroStrategy Platform Analytics


Consumer, MicroStrategy Usher Metadata Producer, and MicroStrategy
In-Memory Cache. Right-click each service and select Start.


1. In the Platform Analytics directory, located at


/opt/MicroStrategy/PlatformAnalytics, open the bin folder and run
the following commands:

./platform-analytics-consumer.sh stop
./platform-analytics-usher-lookup-producer.sh stop

2. Open MySQL Workbench, or any database client equivalent, and


connect to your Platform Analytics Repository.

3. Review the existing list of Badge Networks currently processed into


the Platform Analytics Repository. Note the networks you want to
limit processing the data for:

select network_id, network_desc


from lu_network;

4. Update the etl_network_control table with the desired Badge


networks to be analyzed by Platform Analytics and processed into
the Platform Analytics Repository.

INSERT INTO etl_network_control (network_id, network_desc)


VALUES ('123', 'MicroStrategy Usher Network');

5. In the Platform Analytics directory, open the bin folder and run the
following commands:

./platform-analytics-consumer.sh start
./platform-analytics-usher-lookup-producer.sh start

Categorizing MicroStrategy Badge Resources


If you use Platform Analytics to analyze MicroStrategy Badge data, you can improve the usability of the data that Badge collects about user behavior by categorizing the physical access resources (PACS) that are integrated with the Badge network. Categorized resources make the Badge activity reports created with Platform Analytics easier to interpret.

For example, if you create a category of resources that includes all of the physical locations integrated with the Badge network, you can specify different subcategories of physical locations, such as elevators, conference rooms, and private offices. You can then generate reports to gather in-depth information about those resources.

Multiple services were renamed in the MicroStrategy 2019 release. Because


this guide requires modifying the underlying files, it uses the original
service name. For a list of changed service names, see the Critical Name
Changes section of the 2019 Readme.

Before categorizing Badge resources, ensure you've done the following:

l Installed and configured the latest version of Platform Analytics. See Installing Platform Analytics for more information.

l Integrated a physical access system with Badge. For steps, see Physical
Gateways.

l Obtained administer privileges for your Badge network and can access
Identity Manager.

To Create a Hierarchy of Your Physical Resources in Platform Analytics
1. Log into Identity Network Manager by navigating to the Identity
Manager home page and using the Badge app on your smartphone to
scan the displayed QR code.

2. Click Physical Gateways.

3. Under your configured Physical Access Control (PAC) system, the


following are displayed: Beacon Configuration, Building Access, and
Places Hierarchy.

4. To start a new hierarchy, click Edit > Add New Campus.


5. Enter a specific name for the campus, such as US Campus, UK


Campus, etc.

6. Click Save.

7. Click Facility to add locations under the Campus you created. Provide
the location of the organization that you want to include:

1. In the Facility column, enter a name for the location.

2. In the Facility Address column, enter the address for the location.

3. In the Campus column, select the campus from the drop-down


menu.

8. Click Save.

9. Click Floor to map the levels of a particular location to a Facility.

1. Enter a floor of your building.

2. Select the Facility from the drop-down list. Repeat this for each
floor that you want to map.

10. Click Save.

11. Click Space. All the available places that have been previously
integrated with Badge are shown.

1. Using the Floor drop-down menu, map each space to the correct
floor.

12. Click Save.

After you've configured your Places hierarchy, you can edit any level by selecting the check box next to a place and clicking Edit. To delete places within your hierarchy, select the place or places and click Delete. When finished, click Save to apply your changes.


Monitor Metadata Repositories Across Multiple Environments
Platform Analytics can monitor multiple MicroStrategy environments and
consolidate the telemetry from these environments into a single repository
and project.

Platform Analytics uses a combination of the metadata repository_id and repository_connection_string to differentiate metadata with the same GUID in the Platform Analytics Repository. These two fields are included in all of the telemetry logs to identify the data collected from the different environments.

l repository_id: A unique metadata repository GUID that is added during metadata creation. Each newly created metadata has a unique repository_id. A backup of the metadata retains its repository_id when restored to a new environment.

l repository_connection_string: The metadata database connection URL.

For more details about defining a metadata connection in MicroStrategy, see Connecting to Databases using JDBC Drivers in MicroStrategy 10.x on MicroStrategy Community.

How to Add Additional Environments to a Platform Analytics Deployment
An environment is defined as a metadata repository and a cluster of Intelligence servers. Adding a new environment to an existing Platform Analytics deployment requires enabling the Intelligence server to send telemetry logs to the Telemetry Server(s) and populating the Platform Analytics Repository with the current state of the metadata objects. These steps are independent and can be performed in any order.

The initial load of metadata objects can take several hours depending on the size of the metadata. If the process is interrupted while the server is sending the initial load of data, you must re-trigger the Initial Load of Object Telemetry.

The following information should be collected prior to configuration:

l Telemetry Server(s) Hostname(s) and Port(s) for the existing Platform


Analytics environment

How to collect this information:

1. Connect to Command Manager on the existing Intelligence


Server/metadata which is logging telemetry to Platform Analytics.

2. Execute the command: LIST PROPERTIES FOR SERVER


CONFIGURATION;


3. The Hostname(s) and Port(s) are under Messaging Services


Configuration. For example, 127.0.0.1:9092

l A project source to the new Intelligence Server environment which you wish
to monitor.

l The username and password for a MicroStrategy user which has the
following privileges:

l Use Command Manager

l Configure server basic

This process assumes Platform Analytics has been fully installed and configured to process Intelligence Server telemetry.

Reference the following sections for assistance during the configuration


process:

l Enable Telemetry Logging for Intelligence Server

l Load Object Telemetry to the Platform Analytics Data Repository

How to Update a Repository ID Using Command Manager
The following procedure is optional.

When restoring a metadata backup to a new environment, the original


repository_id value is retained.

You can refresh the repository_id using the Command Manager steps
below:

1. Using a GUID Generation tool, generate a new GUID.

The characters must be uppercase and generated without hyphens.


2. Using Command Manager connect to the metadata for which you wish
to update the repository_id.

3. Execute the command below using the new GUID:

ALTER SERVER CONFIGURATION REPOSITORYID "<new_repository_id>";

4. Confirm the repository_id was applied by running the following


command:

LIST PROPERTIES FOR SERVER CONFIGURATION;

How Platform Analytics Identifies Metadata


All objects in a MicroStrategy environment are saved in the metadata. In
Platform Analytics, application and schema objects are captured by the
object attribute with the lu_object lookup table. Each of these components
reside in a project in the metadata. Therefore, the project attribute is a
parent of the object attribute. All projects and configuration objects have
metadata as their parent attribute, following the hierarchy shown below.

Metadata → Project → Object

Metadata → Event/Schedule/...

Platform Analytics supports multi-tenancy. This means you can create a single Platform Analytics deployment for multiple independent Intelligence server clusters. Telemetry for different environments can be distinguished using metadata.

Platform Analytics uniquely identifies metadata using a combination of the


following fields:

l metadata_guid: The GUID generated by MicroStrategy when the metadata is created.


l metadata_connection_string: The connection string Intelligence server uses to connect to the metadata database. The Platform Analytics consumer performs a standardization operation on the connection string to remove noncritical information such as drivers, descriptions, and other properties. It considers only the following fields: host, port, database, and uid/sid for Oracle.*

host=10.250.151.131; port=5432; database=poc_metadata;

*In MicroStrategy 2019, there are more fields in the standardized connection
string and the process of standardizing the connection string is more
restrictive. This results in relatively fewer cases being processed and
standardized. If a metadata_connection_string fails to meet the
requirements of the standardization process, MicroStrategy falls back to
using the unprocessed metadata_connection_string. Starting in
MicroStrategy 2020, this process was made less restrictive and only
considers the key fields mentioned above.

The fields mentioned above are part of all telemetry, including statistics,
compliance, change journal, initial load, and advanced statistics.

The metadata is uniquely identified based on the following fields:

l metadata_guid: A unique metadata GUID is generated and stored in the metadata when new metadata is created via the Configuration Wizard.

l host: The server where the database server is hosted.

l port: The port on which the database server is running.

l database: The actual metadata database name.

l uid/sid: Used only for Oracle databases.

If any of the above parameters change, Platform Analytics considers your


metadata to be new and an additional row appears in the lu_metadata table.


The fields above are processed based on the values provided in the
metadata DSN. This is located in your odbc.ini file for Linux and the
ODBC Data Source Administrator for Windows.
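
You can see how this plays out in the warehouse by inspecting the metadata lookup table. A minimal sketch, assuming direct SQL access to the Platform Analytics Repository:

-- One row per uniquely identified metadata; a new row here usually means
-- one of the identifying fields above changed
SELECT * FROM lu_metadata;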

Common Scenarios That Can Result in New Metadata in Platform Analytics
The following section describes some common scenarios that can result in
new metadata being created in Platform Analytics and how to resolve these
scenarios.

l Parallel Upgrades

l Two or More Intelligence Servers in a Cluster

l Multitenancy: Multiple Intelligence Server Clusters Writing to the Same


Platform Analytics

l Other Issues Observed in Some Environments

Parallel Upgrades
If you are using MicroStrategy 11.1.x and plan to upgrade to 11.2.x
(MicroStrategy 2020), you would most likely create a parallel 11.2.x
environment. This is the most common scenario and upgrade path. You
should create/certify a parallel environment with a newer version of
MicroStrategy before turning off the older version. To create a new
environment, use a backup of metadata databases and Platform Analytics
Warehouse. In this scenario, new metadata gets generated in Platform
Analytics Warehouse and previously existing metadata is no longer used
because one or more of the following fields change:

l metadata_guid: This field remains the same since you used a metadata backup.

l host: This field may or may not change. If you restored the metadata database backup on the same database server where the original metadata database existed, this field remains the same.


l port: This field may or may not change. If you restored the metadata database backup on the same database server where the original metadata database existed, this field remains the same.

l database: This field may or may not change. If you host the database on a different server, you can choose the same metadata database name or a different one. If you are hosting the metadata database backup on the same server as the original metadata, then for most databases the database name needs to be different.

For the parallel upgrade path, MicroStrategy generates new metadata and cannot reuse existing metadata in the majority of cases. This means you need to re-trigger the initial load again. If you do not want to go this route and prefer to continue tracking the data in the new environment under the same metadata, use the solution below.

Solution: Before starting the consumer in the new environment, you can
update the lu_metadata table in Platform Analytics Warehouse to modify the
connection string corresponding to metadata of the original environment.
For example, if your current metadata_connection_string is:

host=10.250.151.131; port=5432; database=poc_metadata;

and you restore a copy of the metadata database on the same database
server to a new database with the name poc_metadata_11_2, then you
can modify the connection string to:

host=10.250.151.131; port=5432; database=poc_metadata_11_


2;

With this change, Platform Analytics Consumer does not create new
metadata and therefore does not need to trigger an initial load.
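
The update itself is a single statement against the warehouse. The following is a sketch that assumes the connection string column is named metadata_connection_string; verify the exact column name against your lu_metadata schema before running it:

-- Point the existing metadata row at the new database name
UPDATE lu_metadata
SET metadata_connection_string = 'host=10.250.151.131; port=5432; database=poc_metadata_11_2;'
WHERE metadata_connection_string = 'host=10.250.151.131; port=5432; database=poc_metadata;';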

Two or More Intelligence Servers in a Cluster


In an ideal scenario, both Intelligence servers would send Platform Analytics
the same metadata_connection_string with their telemetry.


Common issue observed in MicroStrategy 2019: There have been some cases where Intelligence servers in a cluster were sending slightly different connection strings. This was caused by minor differences in the DSN for the metadata database in the odbc.ini file on different Intelligence servers in a cluster. For example, node 1 could have an additional driver property that is not applied on the other nodes, or the driver versions could differ between nodes. In MicroStrategy 2019, the standardization logic was very restrictive and did not remove all unimportant fields. This standardization process resulted in different connection strings from different Intelligence servers in a cluster for the above scenarios.

Starting in MicroStrategy 2020, such issues rarely occur because the


standardization process is less restrictive and only key fields are kept.
There may be rare cases where the standardization process could result in
different metadata for Intelligence servers in a cluster.

General Recommendations

1. Use the exact same DSN for the metadata database in odbc.ini for all
the Intelligence Servers in a cluster.

2. If you are not setting a particular driver property in the DSN, remove its
corresponding key from the DSN instead of leaving it blank.

Multitenancy: Multiple Intelligence Server Clusters Writing to the


Same Platform Analytics
The Platform Analytics data model can support telemetry from multiple
independent Intelligence server clusters. This is a common practice when
using a metadata backup to create another environment. In such cases, the
metadata_guid field remains the same. However, one of the fields in the
connection_string will be different. This allows end-users to filter the
metadata to analyze each of their environments.


Other Issues Observed in Some Environments


In MicroStrategy 2019, the connection string has a few additional parameters, such as driver. This can create unnecessary noise and generate new metadata in an environment each time the parameters are changed, such as when upgrading a driver version. Starting in MicroStrategy 2020, the standardization process only keeps the key fields, removing the possibility of generating new metadata as a result of a change to any other noncritical parameters.

In MicroStrategy 2019, the restrictive standardization process and these


additional parameters can occasionally result in long connection strings. In
some scenarios, Platform Analytics fails to insert data into the Platform
Analytics Warehouse because the connection string is longer than the
allowable maximum. Starting in MicroStrategy 2020, the less restrictive
standardization process resolves most of these situations since it reduces
the size of the connection string to keep only key fields. Therefore, such an
issue should not occur.
