
Thank You for your purchase


Microsoft DP-500 Exam Question & Answers
Designing and Implementing Enterprise-Scale
Analytics Solutions Using Microsoft Azure and
Microsoft Power BI Exam

https://www.validexamdumps.com/DP-500.html

Product Questions: 113


Version: 5.0

Topic 1, Litware, Inc.


Overview
Litware, Inc. is a retail company that sells outdoor recreational goods and accessories. The company
sells goods both online and at its stores located in six countries.

Azure Resources
Litware has the following Azure resources:
• An Azure Synapse Analytics workspace named synapseworkspace1
• An Azure Data Lake Storage Gen2 account named datalake1 that is associated with
synapseworkspace1
• A Synapse Analytics dedicated SQL pool named SQLDW
Dedicated SQL Pool
SQLDW contains a dimensional model that contains the following table.

SQLDW contains the following additional tables.

SQLDW contains a view named dbo.CustomerPurchases that creates a distinct list of values from
dbo.Customer[CustomerID], dbo.Customer[CustomerEmail], dbo.Product[ProductID], and
dbo.Product[ProductName].
The sales data in SQLDW is updated every 30 minutes. Records in dbo.SalesTransactions are updated
in SQLDW up to three days after being created. The records do NOT change after three days.
Power BI
Litware has a new Power BI tenant that contains an empty workspace named Sales Analytics.
All users have Power BI Premium per user licenses.
IT data analysts are workspace administrators. The IT data analysts will create datasets and reports.
A single imported dataset will be created to support the company's sales analytics goals. The dataset
will be refreshed every 30 minutes.


Analytics Goals
Litware identifies the following analytics goals:
• Provide historical reporting of sales by product and channel over time.
• Allow sales managers to perform ad hoc sales reporting with minimal effort.
• Perform market basket analysis to understand which products are commonly purchased in the
same transaction.
• Identify which customers should receive promotional emails based on their likelihood of
purchasing promoted products.
Litware plans to monitor the adoption of Power BI reports over time. The company wants custom
Power BI usage reporting that includes the percent change of users that view reports in the Sales
Analytics workspace each month.
Security Requirements
Litware identifies the following security requirements for the analytics environment:
• All the users in the sales department and the marketing department must be able to see Power BI
reports that contain market basket analysis and data about which customers are likely to purchase a
product.
• Customer contact data in SQLDW and the Power BI dataset must be labeled as Sensitive. Records
must be kept of any users that use the sensitive data.
• Sales associates must be prevented from seeing the CustomerEmail column in Power BI reports.
• Sales managers must be prevented from modifying reports created by other users.
Development Process Requirements
Litware identifies the following development process requirements:
• SQLDW and datalake1 will act as the development environment. Once feature development is
complete, all entities in synapseworkspace1 will be promoted to a test workspace, and then to a
production workspace.
• Power BI content must be deployed to test and production by using deployment pipelines.
• All SQL scripts must be stored in Azure Repos.
The IT data analysts prefer to build Power BI reports in Synapse Studio.

Question: 1

DRAG DROP
You need to implement object-level security (OLS) in the Power BI dataset for the sales associates.
Which three actions should you perform in sequence? To answer, move the appropriate actions from
the list of actions to the answer area and arrange them in the correct order.

Answer:
Explanation:


Question: 2
What should you configure in the deployment pipeline?

A. a backward deployment
B. a selective deployment
C. auto-binding
D. a data source rule

Answer: D
Explanation:

Development Process Requirements


Litware identifies the following development process requirements:
SQLDW and datalake1 will act as the development environment. Once feature development is
complete, all entities in synapseworkspace1 will be promoted to a test workspace, and then to a
production workspace.
Power BI content must be deployed to test and production by using deployment pipelines.

Create deployment rules


When working in a deployment pipeline, different stages may have different configurations. For
example, each stage can have different databases or different query parameters. The development
stage might query sample data from the database, while the test and production stages query the
entire database.

When you deploy content between pipeline stages, configuring deployment rules enables you to
allow changes to content, while keeping some settings intact. For example, if you want a dataset in a
production stage to point to a production database, you can define a rule for this. The rule is defined
in the production stage, under the appropriate dataset. Once the rule is defined, content deployed
from test to production will inherit the value defined in the deployment rule, which will always
apply as long as the rule is unchanged and valid.


You can configure data source rules and parameter rules.

Incorrect:
Not B: If you already have a steady production environment, you can deploy it backward (to Test or
Dev, based on your need) and set up the pipeline. The feature is not limited to any sequential order.

Reference: https://docs.microsoft.com/en-us/power-bi/create-reports/deployment-pipelines-get-
started#step-4---create-deployment-rules

Question: 3

HOTSPOT
You need to populate the CustomersWithProductScore table.
How should you complete the stored procedure? To answer, select the appropriate options in the
answer area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:


Box 1: FLOAT
Identify which customers should receive promotional emails based on their likelihood of purchasing
promoted products.

FLOAT is used in the last statement of the code: WITH (score FLOAT) as p;

From syntax: MODEL

The MODEL parameter is used to specify the model used for scoring or prediction. The model is
specified as a variable or a literal or a scalar expression.

Box 2: dbo.CustomersWithProductScore
Identify which customers should receive promotional emails based on their likelihood of purchasing
promoted products.

Only the dbo.CustomersWithProductScore table has the required field, Score.


From the syntax:


DATA

The DATA parameter is used to specify the data used for scoring or prediction. Data is specified in the
form of a table source in the query. Table source can be a table, table alias, CTE alias, view, or table-
valued function.

Reference: https://docs.microsoft.com/en-us/sql/t-sql/queries/predict-transact-sql

Question: 4
DRAG DROP
You need to create the customized Power BI usage reporting. The Usage Metrics Report dataset has
already been created. The solution must minimize development and administrative effort.

Which four actions should you perform in sequence? To answer, move the appropriate actions from
the list of actions to the answer area and arrange them in the correct order.

Answer:
Explanation:


Step 1: From powerbi.com, create a new report.


The company wants custom Power BI usage reporting that includes the percent change of users that
view reports in the Sales Analytics workspace each month.

Step 2: Add a report measure


Measures are used in some of the most common data analyses. Simple summarizations such as
sums, averages, minimum, maximum and counts can be set through the Fields well. The calculated
results of measures are always changing in response to your interaction with your reports, allowing
for fast and dynamic ad-hoc data exploration.

Step 3: Add visuals to the report

Step 4: Publish the report to the Sales Analytics workspace

Reference: https://docs.microsoft.com/en-us/power-bi/transform-model/desktop-measures

Question: 5

You need to configure the Sales Analytics workspace to meet the ad hoc reporting requirements.

What should you do?

A. Grant the sales managers the Build permission for the existing Power BI datasets.
B. Grant the sales managers admin access to the existing Power BI workspace.
C. Create a deployment pipeline and grant the sales managers access to the pipeline.
D. Create a PBIT file and distribute the file to the sales managers.


Answer: D
Explanation:

Allow sales managers to perform ad hoc sales reporting with minimal effort

Power BI report templates contain the following information from the report from which they were
generated:

Report pages, visuals, and other visual elements


The data model definition, including the schema, relationships, measures, and other model
definition items
All query definitions, such as queries, Query Parameters, and other query elements
What is not included in templates is the report's data.

Report templates use the file extension .PBIT (compare to Power BI Desktop reports, which use the
.PBIX extension).

Note: With Power BI Desktop, you can create compelling reports that share insights across your
entire organization. With Power BI Desktop templates, you can streamline your work by creating a
report template, based on an existing template, which you or other users in your organization can
use as a starting point for a new report's layout, data model, and queries. Templates in Power BI
Desktop help you jump-start and standardize report creation.

Reference: https://docs.microsoft.com/en-us/power-bi/create-reports/desktop-templates

Question: 6

You need to recommend a solution to ensure that sensitivity labels are applied. The solution must
minimize administrative effort.

Which three actions should you include in the recommendation? Each correct answer presents part
of the solution.

NOTE: Each correct selection is worth one point.

A. From the Power BI Admin portal, set Allow users to apply sensitivity labels for Power BI content to
Enabled.
B. From the Power BI Admin portal, set Apply sensitivity labels from data sources to their data in
Power BI to Enabled.
C. In SQLDW, apply sensitivity labels to the columns in the Customer and
CustomersWithProductScore tables.
D. In the Power BI datasets, apply sensitivity labels to the columns in the Customer and
CustomersWithProductScore tables.
E. From the Power BI Admin portal, set Make certified content discoverable to Enabled.


Answer: ADE
Explanation:

A Synapse Analytics dedicated SQL pool is named SQLDW.


Customer contact data in SQLDW and the Power BI dataset must be labeled as Sensitive. Records
must be kept of any users that use the sensitive data.

A (not B): Enable sensitivity labels


Sensitivity labels must be enabled on the tenant before they can be used in both the service and in
Desktop.
To enable sensitivity labels on the tenant, go to the Power BI Admin portal, open the Tenant settings
pane, and find the Information protection section.
In the Information Protection section, perform the following steps:

Open Allow users to apply sensitivity labels for Power BI content.

Enable the toggle.

D (not C): When data protection is enabled on your tenant, sensitivity labels appear in the sensitivity
column in the list view of dashboards, reports, datasets, and dataflows.

E: The Power BI tenant discovery settings include Make certified content discoverable.

Reference: https://docs.microsoft.com/en-us/power-bi/enterprise/service-security-enable-data-
sensitivity-labels
https://docs.microsoft.com/en-us/power-bi/enterprise/service-security-apply-data-sensitivity-labels
https://support.nhs.net/knowledge-base/power-bi-guidance/

Question: 7

How should you configure the Power BI dataset refresh for the dbo.SalesTransactions table?

A. an incremental refresh of Product where the ModifiedDate value is during the last three days.
B. an incremental refresh of dbo.SalesTransactions where the SalesDate value is during the last three
days.
C. a full refresh of all the tables
D. an incremental refresh of dbo.SalesTransactions where the SalesDate value is during the last hour.

Answer: B
Explanation:

The sales data in SQLDW is updated every 30 minutes. Records in dbo.SalesTransactions are updated
in SQLDW up to three days after being created. The records do NOT change after three days.


Topic 2, Contoso, Ltd

Overview
Contoso, Ltd. is a company that sells enriched financial data to a variety of external customers.
Contoso has a main office in Los Angeles and two branch offices in New York and Seattle.
Data Infrastructure
Contoso has a 50-TB data warehouse that uses an instance of SQL Server on Azure Virtual Machines.
The data warehouse populates an Azure Synapse Analytics workspace that is accessed by the
external customers. Currently, the customers can access all the data.
Contoso has one Power BI workspace named FinData that contains a single dataset. The dataset
contains financial data from around the world. The workspace is used by 10 internal users and one
external customer. The dataset has the following two data sources: the data warehouse and the
Synapse Analytics serverless SQL pool.
Users frequently query the Synapse Analytics workspace by using Transact-SQL.
User Problems
Contoso identifies the following user issues:
• Some users indicate that the visuals in Power BI reports are slow to render when making filter
selections.
• Users indicate that queries against the serverless SQL pool fail occasionally because the size of
tempdb has been exceeded.
• Users indicate that the data in Power BI reports is stale. You discover that the refresh process of the
Power BI model occasionally times out.
Planned Changes
Contoso plans to implement the following changes:
• Into the existing Power BI dataset, integrate an external data source that is accessible by using the
REST API.
• Build a new dataset in the FinData workspace by using data from the Synapse Analytics dedicated
SQL pool.
• Provide all the customers with their own Power BI workspace to create their own reports. Each
workspace will use the new dataset in the FinData workspace.
• Implement subscription levels for the customers. Each subscription level will provide access to
specific rows of financial data.
• Deploy prebuilt datasets to Power BI to simplify the query experience of the customers.
• Provide internal users with the ability to incorporate machine learning models loaded to the
dedicated SQL pool.
Question: 8

You need to recommend a solution to add new fields to the financial data Power BI dataset with data
from the Microsoft SQL Server data warehouse.
What should you include in the recommendation?

A. Azure Purview
B. Site-to-Site VPN
C. an XMLA endpoint
D. the on-premises data gateway

Answer: D
Explanation:


Refresh data from an on-premises SQL Server database


The SQL Server database must be accessed by Power BI through an on-premises data gateway.
You can install an on-premises data gateway on the same local computer as SQL Server (in
production, it would typically be a different computer).

Reference: https://docs.microsoft.com/en-us/power-bi/connect-data/service-gateway-sql-tutorial

Question: 9

You need to recommend a solution for the customer workspaces to support the planned changes.
Which two configurations should you include in the recommendation? Each correct answer presents
part of the solution.
NOTE: Each correct selection is worth one point.

A. Set Use datasets across workspaces to Enabled


B. Publish the financial data to the web.
C. Grant the Build permission for the financial data to each customer.
D. Configure the FinData workspace to use a Power BI Premium capacity.

Answer: AD
Explanation:

Build a new dataset in the FinData workspace by using data from the Synapse Analytics dedicated
SQL pool.
Provide all the customers with their own Power BI workspace to create their own reports. Each
workspace will use the new dataset in the FinData workspace

Reference: https://docs.microsoft.com/en-us/power-bi/connect-data/service-datasets-admin-across-
workspaces

Question: 10

DRAG DROP
You need to integrate the external data source to support the planned changes.
Which three actions should you perform in sequence? To answer, move the appropriate actions from
the list of actions to the answer area and arrange them in the correct order.

Answer:


Explanation:

Question: 11
DRAG DROP

You need to create Power BI reports that will display data based on the customers’ subscription level.

Which three actions should you perform in sequence? To answer, move the appropriate actions from
the list of actions to the answer area and arrange them in the correct order.

Answer:
Explanation:

Step 1: Create row-level security (RLS) roles


Create roles

Note: Provide all the customers with their own Power BI workspace to create their own reports. Each
workspace will use the new dataset in the FinData workspace.
Implement subscription levels for the customers. Each subscription level will provide access to
specific rows of financial data.
Deploy prebuilt datasets to Power BI to simplify the query experience of the customers.


Step 2: Create a DAX expression


Consider a model with two roles: The first role, named Workers, restricts access to all Payroll table
rows by using the following rule expression:

FALSE()
Note: A rule will return no table rows when its expression evaluates to false.

Yet, a second role, named Managers, allows access to all Payroll table rows by using the following
rule expression:

TRUE()
Take care: Should a report user map to both roles, they'll see all Payroll table rows.

Step 3: Add members to row-level security (RLS) roles


Configure role mappings
Once [the model is] published to Power BI, you must map members to dataset roles.

Reference: https://docs.microsoft.com/en-us/power-bi/guidance/rls-guidance

Question: 12

You need to identify the root cause of the data refresh issue.

What should you use?

A. the Usage Metrics Report in powerbi.com


B. Query Diagnostics in Power Query Editor
C. Performance analyzer in Power BI Desktop

Answer: B
Explanation:

Users indicate that the data in Power BI reports is stale. You discover that the refresh process of the
Power BI model occasionally times out.

With Query Diagnostics, you can achieve a better understanding of what Power Query is doing at
authoring and at refresh time in Power BI Desktop. While we'll be expanding on this feature in the
future, including adding the ability to use it during full refreshes, at this time you can use it to
understand what sort of queries you're emitting, what slowdowns you might run into during
authoring refresh, and what kind of background events are happening.

Reference: https://docs.microsoft.com/en-us/power-query/querydiagnostics

Question: 13

Which two possible tools can you use to identify what causes the report to render slowly? Each
correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Synapse Studio
B. DAX Studio
C. Azure Data Studio
D. Performance analyzer in Power BI Desktop

Answer: BD
Explanation:

Some users indicate that the visuals in Power BI reports are slow to render when making filter
selections.

B: You can investigate a slow query in a Power BI report using DAX Studio, looking at the query plan
and the server timings.

D: Use Power BI Desktop Performance Analyzer to optimize the report or model.

Reference: https://www.sqlbi.com/tv/analyzing-a-slow-report-query-in-dax-studio/
https://docs.microsoft.com/en-us/power-bi/guidance/report-performance-troubleshoot

Question: 14

You need to recommend a solution to resolve the query issue of the serverless SQL pool. The solution
must minimize impact on the users.

What should you include in the recommendation?

A. Update the statistics for the serverless SQL pool.


B. Move the data from the serverless SQL pool to a dedicated Apache Spark pool.
C. Execute the sp_set_data_processed_limit stored procedure for the serverless SQL pool.
D. Move the data from the serverless SQL pool to a dedicated SQL pool.

Answer: D
Explanation:

Users indicate that queries against the serverless SQL pool fail occasionally because the size of
tempdb has been exceeded.

In the dedicated SQL pool resource, temporary tables offer a performance benefit because their
results are written to local rather than remote storage.

Temporary tables in serverless SQL pool.


Temporary tables in serverless SQL pool are supported but their usage is limited. They can't be used
in queries which target files.

For example, you can't join a temporary table with data from files in storage. The number of
temporary tables is limited to 100, and their total size is limited to 100 MB.

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-temporary

Question: 15

HOTSPOT

You need to build a Transact-SQL query to implement the planned changes for the internal users.

How should you complete the Transact-SQL query? To answer, select the appropriate options in the
answer area.

NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: PREDICT
Provide internal users with the ability to incorporate machine learning models loaded to the
dedicated SQL pool.

The example below shows a sample query using the prediction function. An additional column
named Score, with data type float, is created to contain the prediction results. All the input data
columns, as well as the output prediction columns, are available to display with the SELECT statement.
-- Query for ML predictions


SELECT d.*, p.Score


FROM PREDICT(MODEL = (SELECT Model FROM Models WHERE Id = 1),
DATA = dbo.mytable AS d, RUNTIME = ONNX) WITH (Score float) AS p;

Box 2: WITH

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-
warehouse-predict

Topic 3, Misc. Questions

Question: 16
You develop a solution that uses a Power BI Premium capacity. The capacity contains a dataset that is
expected to consume 50 GB of memory.
Which two actions should you perform to ensure that you can publish the model successfully to the
Power BI service? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Increase the Max Offline Dataset Size setting.


B. Invoke a refresh to load historical data based on the incremental refresh policy.
C. Restart the capacity.
D. Publish an initial dataset that is less than 10 GB.
E. Publish the complete dataset.

Answer: BE
Explanation:

Enable large datasets


Steps here describe enabling large datasets for a new model published to the service. For existing
datasets, only step 3 is necessary.

Create a model in Power BI Desktop. If your dataset will become larger and progressively consume
more memory, be sure to configure Incremental refresh.

Publish the model as a dataset to the service.

In the service > dataset > Settings, expand Large dataset storage format, set the slider to On, and
then select Apply.

Enable large dataset slider

Invoke a refresh to load historical data based on the incremental refresh policy. The first refresh
could take a while to load the history. Subsequent refreshes should be faster, depending on your
incremental refresh policy.

Reference: https://docs.microsoft.com/en-us/power-bi/enterprise/service-premium-large-models


Question: 17

DRAG DROP
You have a Power BI dataset that contains the following measures:
• Budget
• Actuals
• Forecast
You create a report that contains 10 visuals.
You need to provide users with the ability to use a slicer to switch between the measures in two visuals
only.
You create a dedicated measure named cg Measure switch.
How should you complete the DAX expression for the Actuals measure? To answer, drag the
appropriate values to the targets. Each value may be used once, more than once, or not at all. You
may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: SELECTEDMEASURENAME()
SELECTEDMEASURENAME is used by expressions for calculation items to determine the measure
that is in context by name.
Syntax: SELECTEDMEASURENAME()
No parameters.

Example:
The following calculation item expression checks if the current measure is Expense Ratio and
conditionally applies calculation logic. Since the check is based on a string comparison, it is not
subject to formula fixup and will not benefit from object renaming being automatically reflected. For
a similar comparison that would benefit from formula fixup, please see the ISSELECTEDMEASURE
function instead.

IF (
SELECTEDMEASURENAME() = "Expense Ratio",
SELECTEDMEASURE (),
DIVIDE ( SELECTEDMEASURE (), COUNTROWS ( DimDate ) )
)

Box 2: SELECTEDVALUE()


SELECTEDVALUE returns the value when the context for columnName has been filtered down to one
distinct value only. Otherwise returns alternateResult.

Syntax:
SELECTEDVALUE(<columnName>[, <alternateResult>])

Reference: https://docs.microsoft.com/en-us/dax/selectedmeasurename-function-dax
https://docs.microsoft.com/en-us/dax/selectedvalue-function

Question: 18
You have a Power BI workspace named Workspace1 in a Premium capacity. Workspace1 contains a
dataset.
During a scheduled refresh, you receive the following error message: "Unable to save the changes
since the new dataset size of 11,354 MB exceeds the limit of 10,240 MB."
You need to ensure that you can refresh the dataset.
What should you do?

A. Turn on Large dataset storage format.


B. Connect Workspace1 to an Azure Data Lake Storage Gen2 account
C. Change License mode to Premium per user.
D. Change the location of the Premium capacity.

Answer: A
Explanation:

Turn on Large dataset storage format

By default, a dataset in a Premium capacity is limited to 10 GB in size. With Large dataset storage
format enabled, the dataset size is limited by the Premium capacity size or the maximum size set by
the administrator.

Note: Capacity limits


Workspace storage limits, whether for My Workspace or an app workspace, depend on whether the
workspace is in shared or Premium capacity.

* Shared capacity limits


For workspaces in shared capacity:

There is a per-workspace storage limit of 10 GB.


Premium Per User (PPU) tenants have a 100 TB storage limit.
When using a Pro license, the total usage can’t exceed the tenant storage limit of 10 GB multiplied by
the number of Pro licenses in the tenant.

* Premium capacity limits


For workspaces in Premium capacity:

There is a limit of 100 TB per Premium capacity.


There is no per-user storage limit.

Workspace storage usage is shown as 0 if the workspace is assigned to a Premium capacity.

Incorrect:
Not C: If your organization is using the original version of Power BI Premium, you're required to
migrate to the modern Premium Gen2 platform. Microsoft began migrating all Premium capacities to
Gen2.

Reference: https://docs.microsoft.com/en-us/power-bi/enterprise/service-premium-capacity-
manage-gen2
https://docs.microsoft.com/en-us/power-bi/admin/service-admin-manage-your-data-storage-in-
power-bi

Question: 19

You have a dataset that contains a table named UserPermissions. UserPermissions contains the
following data.

You plan to create a security role named User Security for the dataset. You need to filter the dataset
based on the current user. What should you include in the DAX expression?

A. [UserPermissions] = USERNAME()
B. [UserPermissions] = USERPRINCIPALNAME()
C. [User] = USERPRINCIPALNAME()
D. [User] = USERNAME()
E. [User] = USEROBJECTID()

Answer: D
Explanation:

USERNAME() returns the domain name and username from the credentials given to the system at
connection time.
It should be compared to the column named User, which in DAX is expressed as [User].

Reference: https://docs.microsoft.com/en-us/dax/username-function-dax


Question: 20

You have a Power BI dataset that uses DirectQuery against an Azure SQL database.
Multiple reports use the dataset.
A database administrator reports that too many queries are being sent from Power BI to the
database.
You need to reduce the number of queries sent to the database. The solution must meet the
following requirements:
• DirectQuery must continue to be used.
• Visual interactions in all the reports must remain as they are configured currently.
• Consumers of the reports must only be allowed to apply filters from the Filter pane.
Which two settings should you select? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Disabling cross highlighting/filtering by default


B. Add a single Apply button to the filter pane to apply changes at once
C. Add an Apply button to each slicer to apply changes when you're ready
D. Add Apply buttons to all basic filters to apply changes when you're ready
E. Ignore the Privacy Levels and potentially improve performance

Answer: BC
Explanation:

Reduce queries

Reduce the number of queries sent by Power BI using the Query reduction settings. For slicers, select
the “Add an Apply button to each slicer to apply changes when you’re ready” option. For filters,
select “Add a single Apply button to the filter pane to apply changes at once (preview).”

Reference: https://maqsoftware.com/insights/power-bi-best-practices

Question: 21

DRAG DROP
You have a Power BI dataset that contains two tables named Table1 and Table2. The dataset is used
by one report.
You need to prevent project managers from accessing the data in two columns in Table1 named
Budget and Forecast.
Which four actions should you perform in sequence? To answer, move the appropriate actions from
the list of actions to the answer area and arrange them in the correct order.


Answer:
Explanation:

Step 1: From Power BI Desktop, create a role named Project Managers.


Create roles
You can define roles within Power BI Desktop.

Step 2: Open Tabular Editor


Under Tables, select the table to which you want to apply a DAX rule.

In the Table filter DAX expression box, enter the DAX expressions. This expression returns a value of
true or false. For example: [Entity ID] = “Value”.

Step 3: From Power BI Desktop, add a DAX filter to the Project Managers role.

Step 4: For Table1, the Budget and Forecast columns, set the permissions to None.

Reference: https://docs.microsoft.com/en-us/power-bi/guidance/rls-guidance


Question: 22
You have a Power BI data model.
You need to refresh the data from the source every 15 minutes.
What should you do first?

A. Enable the XMLA endpoint.


B. Define an incremental refresh policy.
C. Change the storage mode of the dataset.
D. Configure a scheduled refresh.

Answer: D
Explanation:

To get to the Scheduled refresh screen:

1. In the navigation pane, under Datasets, select More options (...) next to a dataset listed.
2. Select Schedule refresh.

Reference: https://docs.microsoft.com/en-us/power-bi/connect-data/refresh-scheduled-refresh

Question: 23

HOTSPOT
You are configuring an aggregation table as shown in the following exhibit.


The detail table is named FactSales and the aggregation table is named FactSales(Agg).
You need to aggregate SalesAmount for each store.
Which type of summarization should you use for SalesAmount and StoreKey? To answer, select the
appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:


Box 1: Sum
The Manage aggregations dialog shows a row for each column in the table, where you can specify
the aggregation behavior. In the following example, queries to the Sales detail table are internally
redirected to the Sales Agg aggregation table.

Box 2: GroupBy

Reference: https://docs.microsoft.com/en-us/power-bi/transform-model/aggregations-advanced

Question: 24
DRAG DROP
You have a Power BI dataset. The dataset contains data that is updated frequently.
You need to improve the performance of the dataset by using incremental refreshes.
Which four actions should you perform in sequence to enable the incremental refreshes? To answer,
move the appropriate actions from the list of actions to the answer area and arrange them in the
correct order.


Answer:
Explanation:

Step 1: Create RangeStart and RangeEnd parameters.


Create parameters
In this task, use Power Query Editor to create RangeStart and RangeEnd parameters with default
values. The default values apply only when filtering the data to be loaded into the model in Power BI
Desktop. The values you enter should include only a small amount of the most recent data from your
data source. When published to the service, these values are overridden by the incremental refresh
policy.

Step 2: Apply a custom Date/Time filter to the data.


Filter data
With RangeStart and RangeEnd parameters defined, apply a filter based on conditions in the
RangeStart and RangeEnd parameters.

Before continuing with this task, verify your source table has a date column of Date/Time data type.

Step 3: Define the incremental refresh policy for the table.


Define policy
After you've defined RangeStart and RangeEnd parameters, and filtered data based on those
parameters, you define an incremental refresh policy. The policy is applied only after the model is
published to the service and a manual or scheduled refresh operation is performed.

Step 4: Publish the model to the Power BI service.


Save and publish to the service
When your RangeStart and RangeEnd parameters, filtering, and refresh policy settings are complete,
be sure to save your model, and then publish to the service.

Reference: https://docs.microsoft.com/en-us/power-bi/connect-data/incremental-refresh-configure

Question: 25
You are configuring a Power BI report for accessibility as shown in the following table.

You need to change the default colors of all three visuals to make the report more accessible to users
who have color vision deficiency. Which two settings should you configure in the Customize theme
window? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Theme colors
B. Sentiment colors
C. Divergent colors
D. First-level elements colors

Answer: AB
Explanation:

Reference: https://docs.microsoft.com/en-us/power-bi/create-reports/desktop-report-themes

Question: 26

You are creating a Python visual in Power BI Desktop.


You need to retrieve the value of a column named Unit Price from a DataFrame.
How should you reference the Unit Price column in the Python code?

A. pandas.DataFrame('Unit Price')
B. dataset['Unit Price']
C. data = [Unit Price]
D. ('Unit Price')

Answer: B
Explanation:

You can retrieve a column in a pandas DataFrame object by using the DataFrame object name,
followed by the label of the column name in brackets.
So if the DataFrame object name is dataframe1 and the column we are trying to retrieve is the 'X'
column, then we retrieve the column using the statement dataframe1['X'].

Here's a simple Python script that imports pandas and uses a data frame:

import pandas as pd
data = [['Alex',10],['Bob',12],['Clarke',13]]
df = pd.DataFrame(data,columns=['Name','Age'],dtype=float)
print (df)

When run, this script returns:


Name Age
0 Alex 10.0
1 Bob 12.0
2 Clarke 13.0
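
Applying the same pattern to this question, a minimal self-contained sketch (the values are
illustrative; in a real Power BI Python visual, dataset is the DataFrame that Power BI supplies
to the script):

import pandas as pd

# Power BI passes the fields added to a Python visual to the script as a
# pandas DataFrame named `dataset`; it is simulated here so the sketch runs
# on its own.
dataset = pd.DataFrame({'Product': ['A', 'B'], 'Unit Price': [9.99, 14.50]})

# Retrieve the Unit Price column by its label in brackets (option B).
unit_price = dataset['Unit Price']
print(unit_price)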

Reference: http://www.learningaboutelectronics.com/Articles/How-to-retrieve-a-column-from-a-pandas-dataframe-object-in-Python.php

Question: 27

DRAG DROP
You are using DAX Studio to query an XMLA endpoint.
You need to identify the duplicate values in a column named Email in a table named Subscription.
How should you complete the DAX expression? To answer, drag the appropriate values to the targets.
Each value may be used once, more than once, or not at all. You may need to drag the split bar
between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: CALCULATE

Box 2: CURRENTGROUP
CURRENTGROUP returns a set of rows from the table argument of a GROUPBY expression that
belong to the current row of the GROUPBY result.


Remarks
This function can only be used within a GROUPBY expression.

This function takes no arguments and is only supported as the first argument to one of the following
aggregation functions: AVERAGEX, COUNTAX, COUNTX, GEOMEANX, MAXX, MINX, PRODUCTX,
STDEVX.S, STDEVX.P, SUMX, VARX.S, VARX.P.

Note: COUNTX counts the number of rows that contain a non-blank value or an expression that
evaluates to a non-blank value, when evaluating an expression over a table.

Reference: https://docs.microsoft.com/en-us/dax/currentgroup-function-dax

Question: 28
HOTSPOT
You have the following code in an Azure Synapse notebook.

Use the drop-down menus to select the answer choice that completes each statement based on the
information presented in the code.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:


Box 1: three scatterplots


Compare Plots
Example, Draw two plots on the same figure:
import matplotlib.pyplot as plt
import numpy as np

#day one, the age and speed of 13 cars:


x = np.array([5,7,8,7,2,17,2,9,4,11,12,9,6])
y = np.array([99,86,87,88,111,86,103,87,94,78,77,85,86])
plt.scatter(x, y)

#day two, the age and speed of 15 cars:


x = np.array([2,2,8,1,15,8,12,9,7,3,11,4,7,14,12])
y = np.array([100,105,84,105,90,99,90,95,94,100,79,112,91,80,85])
plt.scatter(x, y)

plt.show()

Result:

Box 2: three marker symbols


One for each scatterplot. One default, and two defined.


Default is point.
v is triangle down.
^ is triangle up.
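
A minimal sketch reproducing this pattern (the data values are illustrative): three scatter calls
on one figure, one with the default marker and two with explicit marker symbols.

import matplotlib.pyplot as plt
import numpy as np

x = np.arange(5)

# Three scatter plots drawn on the same figure, one call per series.
plt.scatter(x, x + 1)              # default marker (point)
plt.scatter(x, x + 2, marker='v')  # triangle-down marker
plt.scatter(x, x + 3, marker='^')  # triangle-up marker

plt.show()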

Reference: https://www.w3schools.com/python/matplotlib_scatter.asp
https://matplotlib.org/stable/api/markers_api.html

Question: 29
HOTSPOT
You have the following code in an Azure Synapse notebook.

Use the drop-down menus to select the answer choice that completes each statement based on the
information presented in the code.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: stacked bar chart


matplotlib.pyplot.bar makes a bar plot.

The bars are positioned at x with the given alignment. Their dimensions are given by height and
width. The vertical baseline is bottom (default 0).

Many parameters can take either a single value applying to all bars or a sequence of values, one for
each bar.

Stacked bars can be achieved by passing individual bottom values per bar.

Stacked bar chart


This is an example of creating a stacked bar plot with error bars using bar. Note the parameter yerr
used for error bars, and bottom to stack the women's bars on top of the men's bars.

import matplotlib.pyplot as plt

labels = ['G1', 'G2', 'G3', 'G4', 'G5']


men_means = [20, 35, 30, 35, 27]
women_means = [25, 32, 34, 20, 25]
men_std = [2, 3, 4, 1, 2]
women_std = [3, 5, 2, 3, 3]
width = 0.35 # the width of the bars: can also be len(x) sequence


fig, ax = plt.subplots()

ax.bar(labels, men_means, width, yerr=men_std, label='Men')


ax.bar(labels, women_means, width, yerr=women_std, bottom=men_means,
label='Women')

ax.set_ylabel('Scores')
ax.set_title('Scores by group and gender')
ax.legend()

plt.show()

Box 2: two items


Blue item and Green Item.

matplotlib.legend
The legend module defines the Legend class, which is responsible for drawing legends associated
with axes and/or figures.

Note: A Diagram Legend is an element that you can add to your diagram to provide information
about the colors and/or line thicknesses and styles that have been used in the current diagram,
where those colors and other styles have some particular meaning.

Reference: https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.bar.html
https://matplotlib.org/stable/gallery/lines_bars_and_markers/bar_stacked.html
https://matplotlib.org/stable/api/legend_api.html

Question: 30
You have a Power BI report that contains one visual.
You need to provide users with the ability to change the visual type without affecting the view for
other users.
What should you do?

A. From Report settings, select Personalize visuals.


B. From Tabular Editor, create a new perspective.
C. From the Bookmarks pane, select Focus mode, and then select Add.
D. From Visual options in Report settings, select Use the modern visual header with updated styling
options.

Answer: A
Explanation:

Enable personalization in a report


You can enable the feature either in Power BI Desktop or the Power BI service. You can also enable it
in embedded reports.

To enable the feature in Power BI Desktop, go to File > Options and settings > Options > Current file >
Report settings. Make sure Personalize visuals is turned on.

Question: 31

You have a Power BI report that contains the visual shown in the following exhibit.

You need to make the visual more accessible to users who have color vision deficiency. What should
you do?

A. Change the font color of values in the Sales column to white.


B. Change the red background color to orange.
C. Add icons to represent the sales status of each product.
D. Add additional measures to the table values.

Answer: A
Explanation:

Themes, contrast and colorblind-friendly colors


You should ensure that your reports have enough contrast between text and any background colors.
Certain color combinations are particularly difficult for users with color vision deficiencies to
distinguish. These include the following combinations:

green and black


green and red
green and brown
blue and purple
green and blue
light green and yellow
blue and grey
green and grey

Avoid using these colors together in a chart, or on the same report page.

Reference: https://docs.microsoft.com/en-us/power-bi/create-reports/desktop-accessibility-creating-reports

Question: 32

HOTSPOT
You are creating a Power BI Desktop report.
You add a Python visual to the report page.
You plan to create a scatter chart to visualize the data.
You add Python code to the Python script editor.
You need to create the scatter chart.
How should you complete the Python code? To answer, select the appropriate options in the answer
area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: matplotlib.pyplot
Create a scatter plot
Let's create a scatter plot to see if there's a correlation between age and weight.

Under Paste or type your script code here, enter this code:

import matplotlib.pyplot as plt


dataset.plot(kind='scatter', x='Age', y='Weight', color='red')
plt.show()

Box 2: chart.show()

Reference: https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-python-
visuals#create-a-scatter-plot

Question: 33


You have the following Python code in an Apache Spark notebook.

Which type of chart will the code produce?

A. a stacked bar chart


B. a pie chart
C. a bar chart
D. an area chart

Answer: D
Explanation:

The matplotlib.pyplot.fill_between function fills the area between two horizontal curves.

The curves are defined by the points (x, y1) and (x, y2). This creates one or multiple polygons
describing the filled area.
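
A minimal sketch of an area chart produced with fill_between (the data values are illustrative):

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)
y = np.sin(x) + 2

# fill_between shades the region between the curve y and the default
# baseline 0, which renders as an area chart rather than bars or a pie.
plt.fill_between(x, y)
plt.show()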

Reference: https://matplotlib.org/3.5.0/api/_as_gen/matplotlib.pyplot.fill_between.html

Question: 34

You use Azure Synapse Analytics and Apache Spark notebooks. You need to use PySpark to gain
access to the visual libraries. Which Python libraries should you use?

A. Seaborn only
B. Matplotlib and Seaborn
C. Matplotlib only
D. Matplotlib and TensorFlow
E. TensorFlow only
F. Seaborn and TensorFlow

Answer: B
Explanation:

Matplotlib
You can render standard plotting libraries, like Matplotlib, using the built-in rendering functions for
each library.


Matplotlib is a plotting library for the Python programming language and its numerical mathematics
extension NumPy.

Additional libraries
Beyond these libraries, the Azure Synapse Analytics Runtime also includes the following set of
libraries that are often used for data visualization:

Seaborn
Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface
for drawing attractive and informative statistical graphics.
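
A minimal sketch using the two libraries together (the data values are illustrative; seaborn draws
onto matplotlib axes, so matplotlib renders the final figure):

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

df = pd.DataFrame({'age': [5, 7, 8, 7, 2], 'speed': [99, 86, 87, 88, 111]})

# Seaborn builds the statistical plot; plt.show() renders the figure.
sns.scatterplot(data=df, x='age', y='speed')
plt.show()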

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-data-
visualization
https://seaborn.pydata.org/

Question: 35

You are using a Python notebook in an Apache Spark pool in Azure Synapse Analytics. You need to
present the data distribution statistics from a DataFrame in a tabular view. Which method should you
invoke on the DataFrame?

A. freqItems
B. explain
C. rollup
D. describe

Answer: D
Explanation:

The aggregating statistic can be calculated for multiple columns at the same time with the describe
function.

Example:
titanic[["Age", "Fare"]].describe()
Out[6]:
Age Fare
count 714.000000 891.000000
mean 29.699118 32.204208
std 14.526497 49.693429
min 0.420000 0.000000
25% 20.125000 7.910400
50% 28.000000 14.454200
75% 38.000000 31.000000
max 80.000000 512.329200
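
The same method exists on a PySpark DataFrame. A minimal sketch, assuming it runs in a Synapse
Spark notebook where a SparkSession named spark is predefined (the column names and values are
illustrative):

# In a Synapse Spark notebook, a SparkSession named `spark` already exists.
df = spark.createDataFrame([(22.0, 7.25), (38.0, 71.28), (26.0, 7.92)],
                           ["Age", "Fare"])

# describe() computes count, mean, stddev, min, and max for each column;
# show() presents the result in a tabular view.
df.describe().show()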

Reference:
https://pandas.pydata.org/docs/getting_started/intro_tutorials/06_calculate_statistics.html


Question: 36

You have a kiosk that displays a Power BI report page. The report uses a dataset that uses Import
storage mode. You need to ensure that the report page updates all the visuals every 30 minutes.
Which two actions should you perform? Each correct answer presents part of the solution. NOTE:
Each correct selection is worth one point.

A. Enable Power BI embedded.


B. Configure the data sources to use DirectQuery.
C. Configure the data sources to use a streaming dataset.
D. Select Auto page refresh.
E. Enable the XMLA endpoint.
F. Add a Microsoft Power Automate visual to the report page.

Answer: BD
Explanation:

Automatic page refresh in Power BI enables your active report page to query for new data, at a
predefined cadence, for DirectQuery sources.

Automatic page refresh is available for DirectQuery sources and some LiveConnect scenarios, so it
will only be available when you are connected to a supported data source. This restriction applies to
both automatic page refresh types.

Reference: https://docs.microsoft.com/en-us/power-bi/create-reports/desktop-automatic-page-
refresh

Question: 37

You have an Azure Synapse Analytics dedicated SQL pool.


You need to ensure that the SQL pool is scanned by Azure Purview.
What should you do first?

A. Register a data source.


B. Search the data catalog.
C. Create a data share connection.
D. Create a data policy.

Answer: A
Explanation:

Scanning a source with Azure Purview requires that the source first be registered. After the data
source is registered, you can set up and run a scan of the dedicated SQL pool.

Question: 38

You have a Power BI workspace that contains one dataset and four reports that connect to the
dataset. The dataset uses Import storage mode and contains the following data sources:


• A CSV file in an Azure Storage account


• An Azure Database for PostgreSQL database
You plan to use deployment pipelines to promote the content from development to test to
production. There will be different data source locations for each stage. What should you include in
the deployment pipeline to ensure that the appropriate data source locations are used during each
stage?

A. parameter rules
B. selective deployment
C. auto-binding across pipelines
D. data source rules

Answer: A
Explanation:

Note: Create deployment rules


When working in a deployment pipeline, different stages may have different configurations. For
example, each stage can have different databases or different query parameters. The development
stage might query sample data from the database, while the test and production stages query the
entire database.

When you deploy content between pipeline stages, configuring deployment rules enables you to
allow changes to content, while keeping some settings intact. For example, if you want a dataset in a
production stage to point to a production database, you can define a rule for this. The rule is defined
in the production stage, under the appropriate dataset. Once the rule is defined, content deployed
from test to production will inherit the value defined in the deployment rule, which will always
apply as long as the rule is unchanged and valid.

Question: 39

HOTSPOT
You need to configure a source control solution for Azure Synapse Analytics. The solution must meet
the following requirements:
• Code must always be merged to the main branch before being published, and the main branch
must be used for publishing resources.
• The workspace templates must be stored in the publish branch.
• A branch named dev123 will be created to support the development of a new feature.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.


Answer:
Explanation:

Box 1: main
Code must always be merged to the main branch before being published, and the main branch must
be used for publishing resources.

Collaboration branch - Your Azure Repos collaboration branch that is used for publishing. By default,
it's master. Change this setting if you want to publish resources from another branch. You can
select existing branches or create new ones.
Each Git repository that's associated with a Synapse Studio has a collaboration branch. (main or
master is the default collaboration branch).

Box 2: workspace_publish
A branch named dev123 will be created to support the development of a new feature.
The workspace templates must be stored in the publish branch.

Creating feature branches


Users can also create feature branches by clicking + New Branch in the branch dropdown.

By default, Synapse Studio generates the workspace templates and saves them into a branch called
workspace_publish. To configure a custom publish branch, add a publish_config.json file to the root
folder in the collaboration branch.

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/cicd/source-control

Question: 40
You need to provide users with a reproducible method to connect to a data source and transform the
data by using an AI function. The solution must meet the following requirements:
• Minimize development effort.
• Avoid including data in the file.
Which type of file should you create?

A. PBIDS
B. PBIX
C. PBIT


Answer: C
Explanation:

A PBIT file is a template created by Power BI Desktop, a Microsoft application used to create reports
and visualizations. It contains queries, visualization settings, data models, reports, and other data
added by the user.
A PBIT file acts as a Power BI template. It doesn’t include any data from your source systems.

Reference: https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-data-sources

Question: 41

You are planning a Power BI solution for a customer.


The customer will have 200 Power BI users. The customer identifies the following requirements:
• Ensure that all the users can create paginated reports.
• Ensure that the users can create reports containing AI visuals.
• Provide autoscaling of the CPU resources during heavy usage spikes.
You need to recommend a Power BI solution for the customer. The solution must minimize costs.
What should you recommend?

A. Power BI Premium per user


B. a Power BI Premium capacity
C. Power BI Pro per user
D. Power BI Report Server

Answer: A
Explanation:

Announcing Power BI Premium Per User general availability and autoscale preview for Gen2.

Power BI Premium per user features and capabilities


* Pixel perfect paginated reports are available for operational reporting capabilities based on SSRS
technology. Users can create highly formatted reports in various formats such as PDF and PPT, which
are embeddable in applications and are designed to be printed or shared.
* Automated machine learning (AutoML) in Power BI enables business users to build ML models to
predict outcomes without having to write any code.
* Etc.

Note:
Power BI empowers every business user and business analyst to get amazing insights with AI infused
experiences. With Power BI Premium, we enable business analysts to not only analyze and visualize
their data, but to also build an end-to-end data platform through drag and drop experiences.
Everything from ingesting and transforming data at scale, to building automated machine learning
models, and analyzing massive volumes of data is now possible for our millions of business analysts.


Reference: https://powerbi.microsoft.com/nl-be/blog/announcing-power-bi-premium-per-user-
general-availability-and-autoscale-preview-for-gen2/

Question: 42

HOTSPOT
You need to recommend an automated solution to monitor Power BI user activity. The solution must
meet the following requirements:
• Security admins must identify when users export reports from Power BI within five days of a new
sensitivity label being applied to the artifacts in Power BI.
• Power BI admins must identify updates or changes to the Power BI capacity.
• The principle of least privilege must be used.
Which log should you include in the recommendation for each group? To answer, select the
appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: the unified audit log in Microsoft 365


Security admins must identify when users export reports from Power BI within five days of a new
sensitivity label being applied to the artifacts in Power BI.

Use the audit log


If your task is to track user activities across Power BI and Microsoft 365, you work with auditing in
Microsoft 365 compliance or use PowerShell. Auditing relies on functionality in Exchange Online,
which automatically supports Power BI.

You can filter the audit data by date range, user, dashboard, report, dataset, and activity type. You
can also download the activities in a csv (comma-separated value) file to analyze offline.

Box 2: Power BI activity log


Power BI admins must identify updates or changes to the Power BI capacity.

Use the activity log


Power BI administrators can analyze usage for all Power BI resources at the tenant level by using
custom reports that are based on the Power BI activity log.


Reference: https://docs.microsoft.com/en-us/power-bi/admin/service-admin-auditing

Question: 43
You have a 2-GB Power BI dataset.
You need to ensure that you can redeploy the dataset by using Tabular Editor. The solution must
minimize how long it will take to apply changes to the dataset from powerbi.com.
Which two actions should you perform in powerbi.com? Each correct answer presents part of the
solution.
NOTE: Each correct selection is worth one point.

A. Enable service principal authentication for read-only admin APIs.


B. Turn on Large dataset storage format.
C. Connect the target workspace to an Azure Data Lake Storage Gen2 account.
D. Enable XMLA read-write.

Answer: BD
Explanation:

Optimize datasets for write operations by enabling large models


When using the XMLA endpoint for dataset management with write operations, it's recommended
you enable the dataset for large models. This reduces the overhead of write operations, which can
make them considerably faster. For datasets over 1 GB in size (after compression), the difference can
be significant.

Tabular Editor supports Azure Analysis Services and Power BI Premium Datasets through XMLA
read/write.

Note: Tabular Editor - An open-source tool for creating, maintaining, and managing tabular models
using an intuitive, lightweight editor. A hierarchical view shows all objects in your tabular model.
Objects are organized by display folders with support for multi-select property editing and DAX
syntax highlighting. XMLA read-only is required for query operations. Read-write is required for
metadata operations.

Reference: https://docs.microsoft.com/en-us/power-bi/enterprise/service-premium-connect-tools
https://tabulareditor.github.io/

Question: 44

You have five Power BI reports that contain R script data sources and R visuals.
You need to publish the reports to the Power BI service and configure a daily refresh of datasets.
What should you include in the solution?

A. a Power BI Embedded capacity


B. an on-premises data gateway (standard mode)
C. a workspace that connects to an Azure Data Lake Storage Gen2 account
D. an on-premises data gateway (personal mode)


Answer: D
Explanation:

To schedule refresh of your R visuals or dataset, enable scheduled refresh and install an on-premises
data gateway (personal mode) on the computer containing the workbook and R.

Reference: https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-r-in-query-editor

Question: 45

You have new security and governance protocols for Power BI reports and datasets. The new
protocols must meet the following requirements:
• New reports can be embedded only in locations that require authentication.
• Live connections are permitted only for workspaces that use Premium capacity datasets.
Which three actions should you recommend performing in the Power BI Admin portal? Each correct
answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. From Tenant settings, disable Allow XMLA endpoints and Analyze in Excel with on-premises
datasets.
B. From the Premium per user settings, set XMLA Endpoint to Off.
C. From Embed Codes, delete all the codes.
D. From Capacity settings, set XMLA Endpoint to Read Write.
E. From Tenant settings, set Publish to web to Disable.

Answer: ADE
Explanation:

Reference: https://docs.microsoft.com/en-us/power-bi/enterprise/service-premium-connect-tools
https://powerbi.microsoft.com/en-us/blog/power-bi-february-service-update

Question: 46

You have an Azure Synapse Analytics serverless SQL pool.


You need to catalog the serverless SQL pool by using Azure Purview.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Create a managed identity in Azure Active Directory (Azure AD).


B. Assign the Storage Blob Data Reader role to the Azure Purview managed service identity (MSI) for
the storage account associated to the Synapse Analytics workspace.
C. Assign the Owner role to the Azure Purview managed service identity (MSI) for the Azure Purview
resource group.
D. Register a data source.
E. Assign the Reader role to the Azure Purview managed service identity (MSI) for the Synapse
Analytics workspace.


Answer: BDE
Explanation:

The Microsoft Purview account is created with a system-assigned managed identity, so you do not
create a managed identity yourself. You must register the Synapse workspace as a data source, assign
the Reader role on the workspace to the Purview managed service identity (MSI), and assign the
Storage Blob Data Reader role on the associated storage account to the MSI.

Authentication for enumerating serverless SQL database resources


There are three places you'll need to set authentication to allow Microsoft Purview to enumerate
your serverless SQL database resources:

The Azure Synapse workspace


The associated storage
The Azure Synapse serverless databases

The steps below will set permissions for all three.

Azure Synapse workspace


In the Azure portal, go to the Azure Synapse workspace resource.
On the left pane, select Access Control (IAM).
Select the Add button.
Set the Reader role and enter your Microsoft Purview account name, which represents its managed
service identity (MSI).
Select Save to finish assigning the role


Storage account
In the Azure portal, go to the Resource group or Subscription that the storage account associated
with the Azure Synapse workspace is in.

On the left pane, select Access Control (IAM).


Select the Add button.
Set the Storage blob data reader role and enter your Microsoft Purview account name (which
represents its MSI) in the Select box.
Select Save to finish assigning the role.

Azure Synapse serverless database


Go to your Azure Synapse workspace and open the Synapse Studio.
Select the Data tab on the left menu.
Select the ellipsis (...) next to one of your databases, and then start a new SQL script.
Add the Microsoft Purview account MSI (represented by the account name) on the serverless SQL
databases. You do so by running the following command in your SQL script:
SQL

CREATE LOGIN [PurviewAccountName] FROM EXTERNAL PROVIDER;


Apply permissions to scan the contents of the workspace
You can set up authentication for an Azure Synapse source in either of two ways. Select your scenario
below for steps to apply permissions.


Use a managed identity


Use a service principal
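
For the managed identity option, the per-database grants amount to a few T-SQL statements. The
following is a sketch only; [PurviewAccountName] is a placeholder for your Microsoft Purview account
name (which represents its MSI), and the exact steps are in the referenced docs:

-- Sketch; run CREATE LOGIN on the serverless SQL endpoint to allow enumeration.
CREATE LOGIN [PurviewAccountName] FROM EXTERNAL PROVIDER;

-- Then, in each serverless database that will be scanned, grant read access.
CREATE USER [PurviewAccountName] FOR LOGIN [PurviewAccountName];
ALTER ROLE db_datareader ADD MEMBER [PurviewAccountName];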

Reference: https://docs.microsoft.com/en-us/azure/purview/register-scan-synapse-
workspace?tabs=MI

Question: 47

HOTSPOT
You have a Power BI dataset that has the query dependencies shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the
information presented in the graphic.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: 3
Power Query doesn't start at the first query and work down; it starts at the bottom (last) query and
works backwards, so three tables built from query 1 will cause that first source table to be processed
three times.


Box 2: Using Table.Buffer in the Orders query

Table.Buffer buffers a table in memory, isolating it from external changes during evaluation.
Buffering is shallow. It forces the evaluation of any scalar cell values, but leaves non-scalar values
(records, lists, tables, and so on) as-is.

Note that using this function might or might not make your queries run faster. In some cases, it can
make your queries run more slowly due to the added cost of reading all the data and storing it in
memory, as well as the fact that buffering prevents downstream folding.

Example 1
Load all the rows of a SQL table into memory, so that any downstream operations will no longer be
able to query the SQL server.

Usage:
let
    Source = Sql.Database("SomeSQLServer", "MyDb"),
    MyTable = Source{[Item="MyTable"]}[Data],
    BufferMyTable = Table.Buffer(MyTable)
in
    BufferMyTable

Output: table

Reference: https://radacad.com/performance-tip-for-power-bi-enable-load-sucks-memory-up
https://docs.microsoft.com/en-us/powerquery-m/table-buffer

Question: 48
DRAG DROP
You are configuring Azure Synapse Analytics pools to support the Azure Active Directory groups
shown in the following table.

Which type of pool should each group use? To answer, drag the appropriate pool types to the groups.
Each pool type may be used once, more than once, or not at all. You may need to drag the split bar
between panes or scroll to view content.
NOTE: Each correct selection is worth one point.


Answer:
Explanation:

Box 1: Apache Spark pool


An Apache Spark pool provides open-source big data compute capabilities. After you've created an
Apache Spark pool in your Synapse workspace, data can be loaded, modeled, processed, and
distributed for faster analytic insight.

Box 2: Dedicated SQL Pool


Dedicated SQL Pool - Data is stored in relational tables

Box 3: Serverless SQL pool


Serverless SQL pool - Cost is incurred for the data processed per query

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/quickstart-create-apache-
spark-pool-portal
https://www.royalcyber.com/blog/data-services/dedicated-sql-pool-vs-serverless-sql/

Question: 49
You are running a diagnostic against a query as shown in the following exhibit.

What can you identify from the diagnostics query?


A. All the query steps are folding.


B. Elevated permissions are being used to query records.
C. The query is timing out.
D. Some query steps are folding.

Answer: A
Explanation:

Understanding folding with Query Diagnostics


One of the most common reasons to use Query Diagnostics is to have a better understanding of what
operations were 'pushed down' by Power Query to be performed by the back-end data source, which
is also known as 'folding'. If we want to see what folded, we can look at what is the 'most specific'
query, or queries, that get sent to the back-end data source. We can look at this for both ODATA and
SQL.

Reference: https://docs.microsoft.com/en-us/power-query/querydiagnosticsfolding

Question: 50

HOTSPOT
You use Advanced Editor in Power Query Editor to edit a query that references two tables named
Sales and Commission. A sample of the data in the Sales table is shown in the following table.

A sample of the data in the Commission table is shown in the following table.

You need to merge the tables by using Power Query Editor without losing any rows in the Sales table.
How should you complete the query? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:


Box 1: Join

Box 2: LeftOuter
Left outer join
One of the join kinds available in the Merge dialog box in Power Query is a left outer join, which
keeps all the rows from the left table and brings in any matching rows from the right table.

Reference: https://docs.microsoft.com/en-us/power-query/merge-queries-left-outer

Question: 51
You are creating an external table by using an Apache Spark pool in Azure Synapse Analytics. The
table will contain more than 20 million rows partitioned by date. The table will be shared with the
SQL engines.
You need to minimize how long it takes for a serverless SQL pool to execute a query against the
table.
In which file format should you recommend storing the table data?

A. JSON


B. Apache Parquet
C. CSV
D. Delta

Answer: B
Explanation:

Prepare files for querying


If possible, you can prepare files for better performance:

* Convert large CSV and JSON files to Parquet. Parquet is a columnar format. Because it's
compressed, its file sizes are smaller than CSV or JSON files that contain the same data. Serverless
SQL pool skips the columns and rows that aren't needed in a query if you're reading Parquet files.
Serverless SQL pool needs less time and fewer storage requests to read it.
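
As an illustration, a serverless SQL pool can query the Parquet files directly with OPENROWSET. This
is a sketch only; the storage path and column names below are hypothetical:

-- Hypothetical path and columns; because Parquet is columnar, only the
-- ProductId and Quantity column segments are read from storage.
SELECT ProductId, SUM(Quantity) AS TotalQuantity
FROM OPENROWSET(
    BULK 'https://datalake1.dfs.core.windows.net/data/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS rows
GROUP BY ProductId;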

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/best-practices-serverless-
sql-pool
https://stackoverflow.com/questions/65320949/parquet-vs-delta-format-in-azure-data-lake-gen-2-
store

Question: 52

You have a Power BI dataset named Dataset1 that uses DirectQuery against an Azure SQL database
named DB1. DB1 is a transactional database in the third normal form.
You need to recommend a solution to minimize how long it takes to execute the query. The solution
must maintain the current functionality. What should you include in the recommendation?

A. Create calculated columns in Dataset1.


B. Remove the relationships from Dataset1.
C. Normalize the tables in DB1.
D. Denormalize the tables in DB1.

Answer: D
Explanation:

Denormalize to improve query performance.

Note: Normalization prevents data duplications, preserves disk space, and improves the
performance of the disk I/O operations. The downside of the normalization is that the queries based
on these normalized tables require more table joins.
Schema denormalization (i.e. consolidation of some dimension tables) for such databases can
significantly reduce costs of the analytical queries and improve the performance.
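
For example, a wide reporting table can be materialized in DB1 ahead of time so that the DirectQuery
requests from Dataset1 no longer pay for the joins. This is a sketch with illustrative table and column
names only:

-- Illustrative names; prejoins the normalized tables into one wide table.
SELECT s.SalesOrderId, s.OrderDate, c.CustomerName, p.ProductName,
       s.Quantity, s.UnitPrice
INTO dbo.SalesDenormalized
FROM dbo.Sales AS s
JOIN dbo.Customer AS c ON c.CustomerId = s.CustomerId
JOIN dbo.Product AS p ON p.ProductId = s.ProductId;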


Reference: https://www.mssqltips.com/sqlservertip/7114/denormalization-dimensions-synapse-
mapping-data-flow/

Question: 53

You are building a Power BI dataset that will use two data sources.
The dataset has a query that uses a web data source. The web data source uses anonymous
authentication.
You need to ensure that the query can be used by all the other queries in the dataset.
Which privacy level should you select for the data source?

A. Public
B. Organizational
C. Private
D. None

Answer: A
Explanation:

A Public data source gives everyone visibility to the data contained in the data source. Only files,
internet data sources, or workbook data can be marked Public. Data from a Public data source may
be freely folded to other sources.

Reference: https://docs.microsoft.com/en-us/power-bi/enterprise/desktop-privacy-levels

Question: 54

You have a file named File1.txt that has the following characteristics:
• A header row
• Tab delimited values
• UNIX-style line endings
You need to read File1.txt by using an Azure Synapse Analytics serverless SQL pool.
Which query should you execute?


A. Option A
B. Option B
C. Option C
D. Option D

Answer: A
Explanation:

Use FIELDTERMINATOR = '\t' for tab-delimited values.


Use ROWTERMINATOR = '0x0A' for UNIX-style line endings.
Use FIRSTROW = 2 to skip the header row.

Note: Using Row Terminators


The row terminator can be the same character as the terminator for the last field. Generally,
however, a distinct row terminator is useful. For example, to produce tabular output, terminate the
last field in each row with the newline character (\n) and all other fields with the tab character (\t).

If you want to output a line feed character only (LF) as the row terminator - as is typical on Unix and
Linux computers - use hexadecimal notation to specify the LF row terminator. For example:

bcp -r '0x0A'

FIRSTROW
FIRSTROW =first_row Specifies the number of the first row to load. The default is 1. This indicates the
first row in the specified data file. The row numbers are determined by counting the row terminators.
FIRSTROW is 1-based.
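
Put together, a query along these lines reads File1.txt. This is a sketch only; the storage path and the
column list in the WITH clause are hypothetical and must match the actual file:

SELECT *
FROM OPENROWSET(
    BULK 'https://datalake1.dfs.core.windows.net/data/File1.txt',
    FORMAT = 'CSV',
    FIELDTERMINATOR = '\t',   -- tab-delimited values
    ROWTERMINATOR = '0x0A',   -- UNIX-style line endings (LF)
    FIRSTROW = 2              -- skip the header row
)
WITH (
    Col1 varchar(100),        -- hypothetical columns
    Col2 varchar(100)
) AS rows;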

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/import-export/specify-field-
and-row-terminators-sql-server
https://docs.microsoft.com/en-us/sql/t-sql/functions/openrowset-transact-sql

Question: 55

Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet
files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-
encoded business names, survey names, and participant counts. The database is configured to use
the default collation.
The queries use OPENROWSET and infer the schema shown in the following table.

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.
Solution: You recommend using OPENROWSET WITH to explicitly define the collation for
businessName and surveyName as Latin1_General_100_BIN2_UTF8.
Does this meet the goal?

A. Yes
B. No

Answer: A
Explanation:

Query Parquet files using serverless SQL pool in Azure Synapse Analytics.
Important
Ensure you are using a UTF-8 database collation (for example Latin1_General_100_BIN2_UTF8)
because string values in PARQUET files are encoded using UTF-8 encoding. A mismatch between the
text encoding in the PARQUET file and the collation may cause unexpected conversion errors. You can
easily change the default collation of the current database using the following T-SQL statement:
ALTER DATABASE CURRENT COLLATE Latin1_General_100_BIN2_UTF8;

Note: If you use the Latin1_General_100_BIN2_UTF8 collation you will get an additional
performance boost compared to the other collations. The Latin1_General_100_BIN2_UTF8 collation
is compatible with parquet string sorting rules. The SQL pool is able to eliminate some parts of the
parquet files that will not contain data needed in the queries (file/column-segment pruning). If you
use other collations, all data from the parquet files will be loaded into Synapse SQL and the filtering
is happening within the SQL process. The Latin1_General_100_BIN2_UTF8 collation has additional
performance optimization that works only for parquet and CosmosDB. The downside is that you lose
fine-grained comparison rules like case insensitivity.
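
Applied to this scenario, the WITH clause would look something like the following sketch; the storage
path and column lengths are illustrative:

-- Illustrative path; the UTF-8 BIN2 collation lets the serverless SQL pool
-- prune Parquet column segments instead of filtering inside the SQL process.
SELECT *
FROM OPENROWSET(
    BULK 'https://datalake1.dfs.core.windows.net/surveys/*.parquet',
    FORMAT = 'PARQUET'
)
WITH (
    businessName varchar(200) COLLATE Latin1_General_100_BIN2_UTF8,
    surveyName varchar(200) COLLATE Latin1_General_100_BIN2_UTF8,
    participantCount int
) AS rows;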

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-parquet-files

Question: 56

Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet
files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-
encoded business names, survey names, and participant counts. The database is configured to use
the default collation.
The queries use OPENROWSET and infer the schema shown in the following table.

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.
Solution: You recommend using OPENROWSET WITH to explicitly specify the maximum length for
businessName and surveyName.
Does this meet the goal?

A. Yes
B. No

Answer: B
Explanation:

Instead, recommend using OPENROWSET WITH to explicitly define the collation for businessName
and surveyName as Latin1_General_100_BIN2_UTF8.

Query Parquet files using serverless SQL pool in Azure Synapse Analytics.
Important


Ensure you are using a UTF-8 database collation (for example Latin1_General_100_BIN2_UTF8)
because string values in PARQUET files are encoded using UTF-8 encoding. A mismatch between the
text encoding in the PARQUET file and the collation may cause unexpected conversion errors. You can
easily change the default collation of the current database using the following T-SQL statement:
ALTER DATABASE CURRENT COLLATE Latin1_General_100_BIN2_UTF8;

Note: If you use the Latin1_General_100_BIN2_UTF8 collation you will get an additional
performance boost compared to the other collations. The Latin1_General_100_BIN2_UTF8 collation
is compatible with parquet string sorting rules. The SQL pool is able to eliminate some parts of the
parquet files that will not contain data needed in the queries (file/column-segment pruning). If you
use other collations, all data from the parquet files will be loaded into Synapse SQL and the filtering
is happening within the SQL process. The Latin1_General_100_BIN2_UTF8 collation has additional
performance optimization that works only for parquet and CosmosDB. The downside is that you lose
fine-grained comparison rules like case insensitivity.

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-parquet-files

Question: 57

Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet
files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-
encoded business names, survey names, and participant counts. The database is configured to use
the default collation.
The queries use OPENROWSET and infer the schema shown in the following table.

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.
Solution: You recommend defining a data source and view for the Parquet files. You recommend
updating the query to use the view.
Does this meet the goal?

A. Yes
B. No

Answer: B
Explanation:

Instead, recommend using OPENROWSET WITH to explicitly specify the maximum length for
businessName and surveyName.

The inferred varchar(8000) columns are larger than necessary; reducing their declared size reduces
I/O reads and tempdb usage.


A SELECT...FROM OPENROWSET(BULK...) statement queries the data in a file directly, without


importing the data into a table. SELECT...FROM OPENROWSET(BULK...) statements can also list bulk-
column aliases by using a format file to specify column names, and also data types.
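
A sketch of that approach, with an illustrative storage path and lengths, replaces the inferred
varchar(8000) columns with explicitly sized ones:

SELECT *
FROM OPENROWSET(
    BULK 'https://datalake1.dfs.core.windows.net/surveys/*.parquet',
    FORMAT = 'PARQUET'
)
WITH (
    businessName varchar(200),  -- explicit length instead of varchar(8000)
    surveyName varchar(200),
    participantCount int
) AS rows;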

Reference: https://docs.microsoft.com/en-us/sql/t-sql/functions/openrowset-transact-sql

Question: 58

Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have the Power BI data model shown in the exhibit. (Click the Exhibit tab.)


Users indicate that when they build reports from the data model, the reports take a long time to
load.
You need to recommend a solution to reduce the load times of the reports.
Solution: You recommend moving all the measures to a calculation group.
Does this meet the goal?

A. Yes
B. No

Answer: B
Explanation:

Moving the measures to a calculation group changes only how they are organized and evaluated, not
the underlying model design, so report load times do not improve. Instead, denormalize for
performance.


Even though it might mean storing a bit of redundant data, schema denormalization can sometimes
provide better query performance. The only question then becomes whether the extra space used is
worth the performance benefit.

Reference: https://www.mssqltips.com/sqlservertutorial/3211/denormalize-for-performance/

Question: 59

Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have the Power BI data model shown in the exhibit. (Click the Exhibit tab.)


Users indicate that when they build reports from the data model, the reports take a long time to
load.
You need to recommend a solution to reduce the load times of the reports.
Solution: You recommend denormalizing the data model.
Does this meet the goal?

A. Yes
B. No

Answer: A
Explanation:

Denormalize for performance.


Even though it might mean storing a bit of redundant data, schema denormalization can sometimes
provide better query performance. The only question then becomes whether the extra space used is
worth the performance benefit.

Reference: https://www.mssqltips.com/sqlservertutorial/3211/denormalize-for-performance/

Question: 60

Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have the Power BI data model shown in the exhibit. (Click the Exhibit tab.)


Users indicate that when they build reports from the data model, the reports take a long time to
load.
You need to recommend a solution to reduce the load times of the reports.
Solution: You recommend normalizing the data model.
Does this meet the goal?

A. Yes
B. No

Answer: B
Explanation:

Normalizing the model further would add more tables and joins and make the reports slower.
Instead, denormalize for performance.


Even though it might mean storing a bit of redundant data, schema denormalization can sometimes
provide better query performance. The only question then becomes whether the extra space used is
worth the performance benefit.

Reference: https://www.mssqltips.com/sqlservertutorial/3211/denormalize-for-performance/

Question: 61

Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a Power BI dataset named Dataset1.
In Dataset1, you currently have 50 measures that use the same time intelligence logic.
You need to reduce the number of measures, while maintaining the current functionality.
Solution: From Power BI Desktop, you group the measures in a display folder.
Does this meet the goal?

A. Yes
B. No

Answer: B
Explanation:

A display folder changes only how the measures are organized in the Fields pane; it does not reduce
the number of measures. Instead, create a calculation group from Tabular Editor so that a single base
measure can be combined with calculation items that apply the shared time intelligence logic.

Reference: https://docs.microsoft.com/en-us/analysis-services/tabular-models/calculation-groups

Question: 62


Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a Power BI dataset named Dataset1.
In Dataset1, you currently have 50 measures that use the same time intelligence logic.
You need to reduce the number of measures, while maintaining the current functionality.
Solution: From Tabular Editor, you create a calculation group.
Does this meet the goal?

A. Yes
B. No

Answer: A
Explanation:

A calculation group applies the shared time intelligence logic as calculation items over a small set of
base measures, so the 50 measures can be replaced while maintaining the current functionality.
Calculation groups can be created from Tabular Editor through the XMLA endpoint.

Reference: https://docs.microsoft.com/en-us/analysis-services/tabular-models/calculation-groups

Question: 63

Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a Power BI dataset named Dataset1.
In Dataset1, you currently have 50 measures that use the same time intelligence logic.
You need to reduce the number of measures, while maintaining the current functionality.
Solution: From DAX Studio, you write a query that uses grouping sets.
Does this meet the goal?

A. Yes
B. No

Answer: B
Explanation:

DAX Studio is a query and diagnostics tool; a query that uses grouping sets does not modify the
dataset, so the number of measures is unchanged. Instead, create a calculation group from Tabular
Editor.


Reference: https://docs.microsoft.com/en-us/analysis-services/tabular-models/calculation-groups

Question: 64

You open a Power BI Desktop report that contains an imported data model and a single report page.
You open Performance analyzer, start recording, and refresh the visuals on the page. The recording
produces the results shown in the following exhibit.

What can you identify from the results?

A. The Actual/Forecast Hours by Type visual takes a long time to render on the report page when the
data is cross-filtered.
B. The Actual/Forecast Billable Hrs YTD visual displays the most data.
C. Unoptimized DAX queries cause the page to load slowly.
D. When all the visuals refresh simultaneously, the visuals spend most of the time waiting on other
processes to finish.

Answer: D
Explanation:

Most time is spent in the category Other - time required by the visual for preparing queries, waiting
for other visuals to complete, or performing other background processing.


Note: Each visual's log information includes the time spent (duration) to complete the following
categories of tasks:

DAX query - if a DAX query was required, this is the time between the visual sending the query, and
for Analysis Services to return the results.
Visual display - time required for the visual to draw on the screen, including time required to retrieve
any web images or geocoding.
Other - time required by the visual for preparing queries, waiting for other visuals to complete, or
performing other background processing.

Reference: https://docs.microsoft.com/en-us/power-bi/create-reports/desktop-performance-
analyzer

Question: 65

You have a Power BI dataset that contains the following measure.

You need to improve the performance of the measure without affecting the logic or the results. What
should you do?

A. Replace both CALCULATE functions by using a variable that contains the CALCULATE function.
B. Remove the alternative result of BLANK() from the DIVIDE function.
C. Create a variable and replace the values for [Sales Amount].
D. Remove 'Calendar'[Flag] = "YTD" from the code.

Answer: A


Explanation:

A DAX variable is evaluated only once and its result can be reused, so storing the result of the
CALCULATE expression in a variable avoids evaluating the same expression twice while returning
identical results.

Question: 66

You are implementing a reporting solution that has the following requirements:
• Reports for external customers must support 500 concurrent requests. The data for these reports is
approximately 7 GB and is stored in Azure Synapse Analytics.
• Reports for the security team use data that must have local security rules applied at the database
level to restrict access. The data being reviewed is 2 GB.
Which storage mode provides the best response time for each group of users?

A. DirectQuery for the external customers and import for the security team.
B. DirectQuery for the external customers and DirectQuery for the security team.
C. Import for the external customers and DirectQuery for the security team.
D. Import for the external customers and import for the security team.

Answer: C
Explanation:

Import for the external customers: an imported in-memory model provides the fastest response
times, and the 7-GB dataset can be cached to serve the 500 concurrent requests.

DirectQuery for the security team: DirectQuery sends each query to the source database, so the
security rules that are applied at the database level are enforced when the data is reviewed.

Reference: https://docs.microsoft.com/en-us/power-bi/connect-data/service-azure-sql-data-
warehouse-with-direct-connect

Question: 67

You are optimizing a Power BI data model by using DAX Studio.


You need to capture the query events generated by a Power BI Desktop report.
What should you use?

A. the DMV list


B. a Query Plan trace
C. an All Queries trace
D. a Server Timings trace

Answer: C
Explanation:

The All Queries trace in DAX Studio supports capturing the query events from all client tools (not just
queries sent from DAX Studio, as the Query Plan and Server Timings features do). The 'All Queries'
trace is really useful when you wish to see the queries that are generated by a client tool like Power
BI Desktop.

Reference: https://daxstudio.org/documentation/features/all-queries-trace/

Question: 68

You discover a poorly performing measure in a Power BI data model.


You need to review the query plan to analyze the amount of time spent in the storage engine and the
formula engine.
What should you use?

A. Tabular Editor
B. Performance analyzer in Power Bl Desktop
C. Vertipaq Analyzer
D. DAX Studio

Answer: D
Explanation:

DAX Studio can capture the query plan and a Server Timings trace for a query. The Server Timings
results break down the total duration into time spent in the storage engine (SE) and the formula
engine (FE), which is exactly what is needed to analyze the measure.

Performance analyzer in Power BI Desktop reports only the overall DAX query, visual display, and
other durations for each visual; it does not split the time between the storage engine and the
formula engine.

Reference: https://daxstudio.org/

Question: 69

You are using DAX Studio to analyze a slow-running report query.
You need to identify inefficient join operations in the query.
What should you review?

A. the query statistics


B. the query plan
C. the query history
D. the server timings

Answer: B
Explanation:

Open DAX Studio.


Paste the query there, enable Query Plan display and Server Timings, run your query (with clear
cache), and then study the query plan for large row counts. Once the culprit is identified you can
decide how to rewrite your DAX to make that part faster.

Reference: https://community.powerbi.com/t5/Power-Query/DAX-Query-taking-longer-time/td-
p/1171961
https://www.sqlbi.com/wp-content/uploads/DAX-Query-Plans.pdf

Question: 70

HOTSPOT
You are building a Power BI dataset that contains a table named Calendar. Calendar contains the
following calculated column.
PFFlag = IF('Calendar'[Date] < TODAY(), "Past", "Future")
You need to create a measure that will perform a fiscal prior year-to-date calculation that meets the
following requirements:
• Returns the fiscal prior year-to-date value for [Sales Amount]
• Uses a fiscal year end of June 30
• Produces no result for dates in the future
How should you complete the DAX expression? To answer, select the appropriate options in the
answer area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:


Box 1: CALCULATETABLE
CALCULATETABLE evaluates a table expression in a modified filter context.
Syntax: CALCULATETABLE(<expression>[, <filter1> [, <filter2> [, …]]])

Incorrect:
* SUMMARIZECOLUMNS
SUMMARIZECOLUMNS returns a summary table over a set of groups.
Syntax: SUMMARIZECOLUMNS( <groupBy_columnName> [, < groupBy_columnName >]…,
[<filterTable>]…[, <name>, <expression>]…)

* CROSSJOIN returns a table that contains the Cartesian product of all rows from all tables in the
arguments. The columns in the new table are all the columns in all the argument tables.
Syntax: CROSSJOIN(<table>, <table>[, <table>]…)

* UNION creates a union (join) table from a pair of tables.


Syntax: UNION(<table_expression1>, <table_expression2> [,<table_expression>]…)

Box 2: SAMEPERIODLASTYEAR
SAMEPERIODLASTYEAR returns a table that contains a column of dates shifted one year back in time
from the dates in the specified dates column, in the current context.

Syntax: SAMEPERIODLASTYEAR(<dates>)

The dates returned are the same as the dates returned by this equivalent formula: DATEADD(dates, -
1, year)

Example:
The following sample formula creates a measure that calculates the previous year sales of Reseller
sales.


= CALCULATE(SUM(ResellerSales_USD[SalesAmount_USD]),
SAMEPERIODLASTYEAR(DateTime[DateKey]))

Box 3: TODAY()
TODAY() returns the current date.
The TODAY function is useful when you need to have the current date displayed on a worksheet,
regardless of when you open the workbook. It is also useful for calculating intervals.

Example:
The following sample formula creates a measure that calculates the 'Running Total' for Internet sales.

= CALCULATE(SUM(InternetSales_USD[SalesAmount_USD]), DATESYTD(DateTime[DateKey]))

Reference: https://docs.microsoft.com/en-us/dax/calculatetable-function-dax
https://docs.microsoft.com/en-us/dax/sameperiodlastyear-function-dax
https://docs.microsoft.com/en-us/dax/datesytd-function-dax

Question: 71
DRAG DROP
You have a shared dataset in Power BI named Dataset1.
You have an on-premises Microsoft SQL Server database named DB1.
You need to ensure that Dataset1 refreshes data from DB1.
Which three actions should you perform in sequence? To answer, move the appropriate actions from
the list of actions to the answer area and arrange them in the correct order.

Answer:
Explanation:

Step 1: Install the on-premises data gateway (standard mode)


The personal mode is only for a single user, not to be used for a shared dataset.

Step 2: From powerbi.com, add a data source to the gateway clusters


After you install the on-premises data gateway, you can add data sources that can be used with the
gateway.

Add a data source


Under Data Source Type, select SQL Server.

After you fill in everything, select Create. You can now use this data source for scheduled refresh or
DirectQuery against a SQL Server that's on-premises. You see Created New data source if it
succeeded.

Step 3: From powerbi.com, configure Dataset1 to use a data gateway.

Connect a dataset to a SQL Server database


In Power BI Desktop, you connected directly to your on-premises SQL Server database, but the Power
BI service requires a data gateway to act as a bridge between the cloud and your on-premises
network. Follow these steps to add your on-premises SQL Server database as a data source to a
gateway and then connect your dataset to this data source.

Sign in to Power BI. In the upper-right corner, select the settings gear icon and then select Settings.

On the Datasets tab, select the dataset AdventureWorksProducts, so you can connect to your on-
premises SQL Server database through a data gateway.

Expand Gateway connection and verify that at least one gateway is listed.

Under Actions, expand the toggle button to view the data sources and select the Add to gateway link.


On the Gateways management page, on the Data Source Settings tab, enter and verify the following
information, and select Add.

On the Datasets tab, expand the Gateway connection section again. Select the data gateway you
configured, which shows a Status of running on the machine where you installed it, and select Apply.

Reference: https://docs.microsoft.com/en-us/power-bi/connect-data/service-gateway-personal-
mode
https://docs.microsoft.com/en-us/power-bi/connect-data/service-gateway-sql-tutorial
https://docs.microsoft.com/en-us/power-bi/connect-data/service-gateway-enterprise-manage-sql

Question: 72
You need to save Power BI dataflows in an Azure Storage account.
Which two prerequisites are required to support the configuration? Each correct answer presents
part of the solution.
NOTE: Each correct selection is worth one point.

A. The storage account must be protected by using an Azure Firewall.


B. The connection must be created by a user that is assigned the Storage Blob Data Owner role.
C. The storage account must have hierarchical namespace enabled.
D. Dataflows must exist already for any directly connected Power BI workspaces.
E. The storage account must be created in a separate Azure region from the Power BI tenant and
workspaces.

Answer: BC
Explanation:

Reference: https://docs.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-
azure-data-lake-storage-integration

Question: 73

You have a Power BI tenant that contains 10 workspaces.


You need to create dataflows in three of the workspaces. The solution must ensure that data
engineers can access the resulting data by using Azure Data Factory.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Associate the Power BI tenant to an Azure Data Lake Storage account.


B. Add the managed identity for Data Factory as a member of the workspaces.
C. Create and save the dataflows to an Azure Data Lake Storage account.
D. Create and save the dataflows to the internal storage of Power BI

Answer: AC
Explanation:

Data used with Power BI is stored in internal storage provided by Power BI by default. With the
integration of dataflows and Azure Data Lake Storage Gen 2 (ADLS Gen2), you can store your
dataflows in your organization's Azure Data Lake Storage Gen2 account. This essentially allows you to
"bring your own storage" to Power BI dataflows, and establish a connection at the tenant or
workspace level.
Reference: https://docs.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-
azure-data-lake-storage-integration

Question: 74

HOTSPOT
You have the Power BI workspaces shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the
information presented in the graphic.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: Infrastructure Svcs


Infrastructure Svcs is a Premium workspace.

If users have a free license and the workspace is stored in Premium (dedicated) capacity, they will be
able to view and interact with the content in that workspace.
If users have a free license and the workspace is stored in shared capacity (not premium), they will
not be able to see the content in shared workspace, only "My workspace".
If users have pro license, they will be able to view and interact with the content in that workspace.

Box 2: Admin
We need to recover the orphaned workspace.

An orphaned workspace is one that does not have an admin assigned.


If you’re a Service Admin, you can now view all of your organization’s workspaces through the Admin
Portal in the user interface.

It's easy to recover an orphaned workspace from this screen. Simply select the workspace and click
Recover, then add yourself or another user as an admin.

Reference: https://community.powerbi.com/t5/Service/Difference-between-Public-and-Private-
workspace/m-p/1382219
https://docs.microsoft.com/en-us/power-bi/admin/service-admin-portal-workspaces

Question: 75
You plan to modify a Power BI dataset.
You open the Impact analysis panel for the dataset and select Notify contacts.
Which contacts will be notified when you use the Notify contacts feature?

A. any users that accessed a report that uses the dataset within the last 30 days
B. the workspace admins of any workspace that uses the dataset
C. the Power BI admins
D. all the workspace members of any workspace that uses the dataset

Answer: D
Explanation:

Notify contacts
If you've made a change to a dataset or are thinking about making a change, you might want to
contact the relevant users to tell them about it. When you notify contacts, an email is sent to the
contact lists of all the impacted workspaces. Your name appears on the email so the contacts can find
you and reply back in a new email thread.


Reference: https://docs.microsoft.com/en-us/power-bi/collaborate-share/service-dataset-impact-
analysis

Question: 76

You are using GitHub as a source control solution for an Azure Synapse Studio workspace. You need
to modify the source control solution to use an Azure DevOps Git repository. What should you do
first?

A. Disconnect from the GitHub repository.


B. Create a new pull request.
C. Change the workspace to live mode.
D. Change the active branch.

Answer: A
Explanation:

By default, Synapse Studio authors directly against the Synapse service. If you have a need for
collaboration using Git for source control, Synapse Studio allows you to associate your workspace
with a Git repository, Azure DevOps, or GitHub.

Prerequisites
Users must have the Azure Contributor (Azure RBAC) or higher role on the Synapse workspace to
configure, edit settings and disconnect a Git repository with Synapse.

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/cicd/source-control

Question: 77

You have a Power BI workspace named Workspace1 that contains five dataflows.

You need to configure Workspace1 to store the dataflows in an Azure Data Lake Storage Gen2
account.

What should you do first?

A. Delete the dataflow queries.


B. From the Power BI Admin portal, enable tenant-level storage.
C. Disable load for all dataflow queries.
D. Change the Data source settings in the dataflow queries.

Answer: B
Explanation:

Configuring Azure connections is an optional setting with additional properties that can optionally be
set:

* Tenant Level storage, which lets you set a default, and/or


* Workspace-level storage, which lets you specify the connection per workspace

You can optionally configure tenant-level storage if you want to use a centralized data lake only, or
want this to be the default option.

Reference: https://docs.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-
azure-data-lake-storage-integration

Question: 78

You are creating a Power BI single-page report.


Some users will navigate the report by using a keyboard, and some users will navigate the report by
using a screen reader.
You need to ensure that the users can consume content on a report page in a logical order.
What should you configure on the report page?

A. the bookmark order


B. the X position
C. the layer order
D. the tab order

Answer: D
Explanation:

Tab order is the order in which users interact with the items on a page using the keyboard. Generally,
we want tab order to be predictable and to closely match the visual order on the page (unless there
is a good reason to deviate).

Note: If you are using the keyboard to navigate in a Power BI report, the order in which you arrive at
visuals will not follow your vision unless you set the new tab order property. If you have low or no
vision, this becomes an even bigger issue because you may not be able to see that you are navigating
visuals out of visual order because the screen reader just reads whatever comes next.

Reference: https://datasavvy.me/2018/12/26/tab-order-enhances-power-bi-report-accessibility/

Question: 79

You plan to generate a line chart to visualize and compare the last six months of sales data for two
departments. You need to increase the accessibility of the visual. What should you do?

A. Replace long text with abbreviations and acronyms.


B. Configure a unique marker for each series.


C. Configure a distinct color for each series.
D. Move important information to a tooltip.

Answer: B
Explanation:

Avoid using color as the only means of conveying information. Configuring a unique marker shape for
each series lets report consumers differentiate the two lines without relying on color alone, which
makes the chart readable for users with color vision deficiency.

Certain color combinations are particularly difficult for users with color vision deficiencies to
distinguish, for example green and red, green and brown, blue and purple, green and blue, light green
and yellow, blue and grey, green and grey, and green and black. A distinct color for each series
therefore does not make the visual accessible on its own.

Reference: https://docs.microsoft.com/en-us/power-bi/create-reports/desktop-accessibility-
creating-reports

Question: 80

You have a Power BI dataset that has only the necessary fields visible for report development.
You need to ensure that end users see only 25 specific fields that they can use to personalize visuals.
What should you do?

A. From Tabular Editor, create a new role.


B. Hide all the fields in the dataset.
C. Configure object-level security (OLS).
D. From Tabular Editor, create a new perspective.

Answer: D
Explanation:

A perspective defines a viewable subset of a model. For the personalize visuals experience, the report
author can choose a perspective so that report readers see only the fields that the perspective
contains. Hiding the fields would remove them from the personalization experience entirely, and
roles and object-level security restrict data access rather than the list of fields available for
personalization.

Reference: https://docs.microsoft.com/en-us/power-bi/create-reports/power-bi-personalize-visuals

Question: 81

HOTSPOT
You are using Azure Synapse Studio to explore a dataset that contains data about taxi trips.
You need to create a chart that will show the total trip distance according to the number of
passengers as shown in the following exhibit.


How should you configure the chart? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:


Question: 82

HOTSPOT
You have an Azure Synapse workspace named Workspace1.
You need to use PySpark in a notebook to read data from a SQL pool as an Apache Spark DataFrame
and display the top five rows.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:
Explanation:


Box 1: sqlanalytics
Read from a SQL pool table with Spark:

// Read the table we just created in the SQL pool as a Spark DataFrame
val spark_read = spark.read.sqlanalytics(s"$sql_pool_name.dbo.PublicHoliday")
spark_read.show(5, truncate = false)

Box 2: spark_read.show
Sample output:

Reference: https://github.com/Azure-
Samples/Synapse/blob/main/Notebooks/Scala/03%20Read%20and%20write%20from%20SQL%20p
ool%20table.ipynb

Question: 83
You have a Power BI report that contains the table shown in the following exhibit.


The table contains conditional formatting that shows which stores are above, near, or below the
monthly quota for returns. You need to ensure that the table is accessible to consumers of reports
who have color vision deficiency. What should you do?

A. Add alt text to explain the information that each color conveys.
B. Move the conditional formatting icons to a tooltip report.
C. Change the icons to use a different shape for each color.
D. Remove the icons and use red, yellow, and green background colors instead.

Answer: C
Explanation:

Report accessibility checklist, All Visuals:


* Avoid using color as the only means of conveying information. Use text or icons to supplement or
replace the color.
* Check that your report page works for users with color vision deficiency.

Using a different shape for each icon conveys the above/near/below status without relying on color,
so the conditional formatting remains readable for consumers with color vision deficiency. Alt text
describes a visual for screen reader users but does not address color vision deficiency.

Reference: https://docs.microsoft.com/en-us/power-bi/create-reports/desktop-accessibility-
creating-reports

Question: 84

DRAG DROP
You plan to create a Power BI report that will use an OData feed as the data source. You will retrieve
all the entities from two different collections by using the same service root.
The OData feed is still in development. The location of the feed will change once development is
complete.
The report will be published before the OData feed development is complete.
You need to minimize development effort to change the data source once the location changes.
Which three actions should you perform in sequence? To answer, move the appropriate actions from
the list of actions to the answer area and arrange them in the correct order.

Answer:
Explanation:


Step 1: Create a parameter that contains the service root URI

Step 2: Get data from OData feed source and use the parameter to populate the first part of the URL.
The URI is in the first part of the query.
Example: let
Source = OData.Feed ("https://analytics.dev.azure.com/{organization}/{project}/_odata/v3.0-
preview/WorkItemSnapshot? "
&"$apply=filter( "
&"WorkItemType eq 'Bug' "
&"and StateCategory ne 'Completed' "
&"and startswith(Area/AreaPath,'{areapath}') "
&"and DateValue ge {startdate} "
&") "
&"/groupby( "

&"(DateValue,State,WorkItemType,Priority,Severity,Area/AreaPath,Iteration/IterationPath,AreaSK), "
&"aggregate($count as Count) "
&") "
,null, [Implementation="2.0",OmitValues = ODataOmitValues.Nulls,ODataVersion = 4])
in
Source

Step 3: From Advanced Editor, duplicate the query and change the resource path in the URL.

Choose Get Data, and then Blank Query.


From the Power BI Query editor, choose Advanced Editor.
The Advanced Editor window opens.
Edit the query.
Etc.


Reference: https://docs.microsoft.com/en-us/azure/devops/report/powerbi/odataquery-connect

Question: 85
You are using an Azure Synapse Analytics serverless SQL pool to query network traffic logs in the
Apache Parquet format. A sample of the data is shown in the following table.

You need to create a Transact-SQL query that will return the source IP address.
Which function should you use in the select statement to retrieve the source IP address?

A. JSON_VALUE
B. FOR JSON
C. CONVERT
D. FIRST_VALUE

Answer: A
Explanation:

JSON_VALUE extracts a scalar value, such as the source IP address, from a JSON string by using a
JSON path expression.
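
A minimal sketch, assuming the logs expose a JSON column (here named logData, a hypothetical
name) with the source IP reachable through a $.source.ip path:

-- Hypothetical path, column name, and JSON structure.
SELECT JSON_VALUE(logData, '$.source.ip') AS SourceIp
FROM OPENROWSET(
    BULK 'https://datalake1.dfs.core.windows.net/logs/*.parquet',
    FORMAT = 'PARQUET'
) AS rows;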

Question: 86

You have an Azure Synapse Analytics dataset that contains data about jet engine performance. You
need to score the dataset to identify the likelihood of an engine failure. Which function should you
use in the query?

A. PIVOT
B. GROUPING
C. PREDICT
D. CAST

Answer: C
Explanation:

The T-SQL PREDICT function scores rows in a query by using a previously trained machine learning
model, so it can return the likelihood of an engine failure for each row in the dataset.
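
A minimal sketch of scoring with PREDICT; the model table, column names, and ONNX runtime choice
below are assumptions, not the scenario's actual schema:

-- Hypothetical model storage and column names.
SELECT d.EngineId, p.FailureProbability
FROM PREDICT(
    MODEL = (SELECT model_data FROM dbo.Models WHERE model_name = 'EngineFailure'),
    DATA = dbo.EnginePerformance AS d,
    RUNTIME = ONNX
) WITH (FailureProbability float) AS p;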

Question: 87

You are optimizing a dataflow in a Power BI Premium capacity. The dataflow performs multiple joins.
You need to reduce the load time of the dataflow.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Reduce the memory assigned to the dataflows.


B. Execute non-foldable operations before foldable operations.


C. Execute foldable operations before non-foldable operations.


D. Place the ingestion operations and transformation operations in a single dataflow.
E. Place the ingestion operations and transformation operations in separate dataflows.

Answer: CE
Explanation:

Using the compute engine to improve performance


Take the following steps to enable workloads trigger the compute engine, and always improve
performance:

For computed and linked entities in the same workspace:

Ensure you perform the operations that fold, such as merges, joins, and conversions, first.
For ingestion, focus on getting the data into the storage as fast as possible, using filters only if they
reduce the overall dataset size. It's best practice to keep your transformation logic separate from this
step and allow the engine to focus on the initial gathering of ingredients. Next, separate your
transformation and business logic into a separate dataflow in the same workspace, using linked or
computed entities; doing so allows the engine to activate and accelerate your computations. In
our analogy, it's like food preparation in the kitchen: food preparation is typically a separate and
distinct step from gathering your raw ingredients, and a prerequisite for putting the food in the
oven. Similarly, your logic needs to be prepared separately before it can take advantage of the
compute engine.

Reference: https://docs.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-premium-workload-configuration

Question: 88

HOTSPOT
You have an Azure Data Lake Storage Gen2 container that stores more than 300,000 files
representing hourly telemetry data. The data is organized in folders by the year, month, and day
according to when the telemetry was captured.
You have the following query in Power Query Editor.


For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point

Answer:
Explanation:

Box 1: Yes
A key mechanism that allows Azure Data Lake Storage Gen2 to provide file system performance at
object storage scale and prices is the addition of a hierarchical namespace. This allows the collection
of objects/files within an account to be organized into a hierarchy of directories and nested
subdirectories in the same way that the file system on your computer is organized. With a
hierarchical namespace enabled, a storage account becomes capable of providing the scalability and
cost-effectiveness of object storage, with file system semantics that are familiar to analytics engines
and frameworks.

Box 2: No
Table.SelectRows returns the rows of a table that match the selection condition.
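A minimal sketch of Table.SelectRows over the connector's file list, assuming the ADLS Gen2 connector and a hypothetical storage URL:

let
    Source = AzureStorage.DataLake("https://account.dfs.core.windows.net/telemetry"),
    // Keep only files whose folder path contains the 2022 partition
    Filtered = Table.SelectRows(Source, each Text.Contains([Folder Path], "/2022/"))
in
    Filtered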

Box 3: Yes
Azure Data Lake Storage has higher throughput and IOPS.

Note: Azure Blob Storage is a general purpose, scalable object store that is designed for a wide
variety of storage scenarios. Azure Data Lake Storage is a hyper-scale repository that is optimized for
big data analytics workloads.

Azure Data Lake Storage use cases: batch, interactive, and streaming analytics, and machine learning data such as log files, IoT data, click streams, and large datasets.

Reference: https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-namespace
https://docs.microsoft.com/en-us/powerquery-m/table-selectrows
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-comparison-with-blob-storage

Question: 89

Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have the Power BI data model shown in the exhibit. (Click the Exhibit tab.)


Users indicate that when they build reports from the data model, the reports take a long time to
load.
You need to recommend a solution to reduce the load times of the reports.
Solution: You recommend creating a perspective that contains the commonly used fields.
Does this meet the goal?

A. Yes
B. No

Answer: B
Explanation:

Instead, denormalize for performance.

Even though it might mean storing a bit of redundant data, schema denormalization can sometimes
provide better query performance. The only question then becomes whether the extra space used is
worth the performance benefit.

Reference: https://www.mssqltips.com/sqlservertutorial/3211/denormalize-for-performance/

Question: 90

Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a Power BI dataset named Dataset1.
In Dataset1, you currently have 50 measures that use the same time intelligence logic.
You need to reduce the number of measures, while maintaining the current functionality.
Solution: From Power BI Desktop, you create a hierarchy.
Does this meet the goal?

A. Yes
B. No

Answer: B
Explanation:

Instead, use the solution: From Tabular Editor, you create a calculation group.

A calculation group applies shared logic, such as time intelligence, to many measures at once: each calculation item uses SELECTEDMEASURE() as a placeholder for whichever measure is in the visual, so the 50 near-duplicate measures can be replaced by a few base measures plus one calculation group.
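A minimal sketch of one calculation item's DAX expression, assuming a date table named 'Date' that is marked as a date table; SELECTEDMEASURE() stands in for whichever base measure the visual uses:

// Calculation item "YTD" in a calculation group created with Tabular Editor
CALCULATE(SELECTEDMEASURE(), DATESYTD('Date'[Date]))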

Note: A hierarchy is an ordered set of values that are linked to the level above. An example of a
hierarchy could be Country, State, and City. Cities are in a State, and States make up a Country. In
Power BI, visuals can handle hierarchy data and provide controls for the user to navigate up and down
the hierarchy.

Reference: https://docs.microsoft.com/en-us/power-bi/developer/visuals/capabilities
https://powerbi.tips/2018/09/how-to-navigate-hierarchies/

Question: 91

You have a Power BI tenant.

You plan to register the tenant in an Azure Purview account.

You need to ensure that you can scan the tenant by using Azure Purview.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. From the Microsoft 365 admin center, create a Microsoft 365 group.
B. From the Power BI Admin center, set Allow live connections to Enabled.
C. From the Power BI Admin center, set Allow service principals to use read-only Power BI admin
APIs to Enabled.
D. From the Azure Active Directory admin center, create a security group.
E. From the Power BI Admin center, set Share content with external users to Enabled.

Answer: CD
Explanation:

Scan same-tenant Power BI using Azure IR and Managed Identity in public network.
Make sure Power BI and Microsoft Purview accounts are in the same tenant.
Make sure Power BI tenant Id is entered correctly during the registration.
From Azure portal, validate if Microsoft Purview account Network is set to public access.
From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public
network.
(D) In Azure Active Directory tenant, create a security group.
From Azure Active Directory tenant, make sure Microsoft Purview account MSI is member of the new
security group.
On the Power BI Tenant Admin portal, validate if Allow service principals to use read-only Power BI
admin APIs is enabled for the new security group.

Associate the security group with Power BI tenant


Log into the Power BI admin portal.

Select the Tenant settings page.


(C) Select Admin API settings > Allow service principals to use read-only Power BI admin APIs
(Preview).

Select Specific security groups.


Select Admin API settings > Enhance admin APIs responses with detailed metadata > Enable the
toggle to allow Microsoft Purview Data Map automatically discover the detailed metadata of Power
BI datasets as part of its scans.

Reference: https://docs.microsoft.com/en-us/azure/purview/register-scan-power-bi-tenant

Question: 92

You have a deployment pipeline for a Power BI workspace. The workspace contains two datasets that
use import storage mode.

A database administrator reports a drastic increase in the number of queries sent from the Power BI
service to an Azure SQL database since the creation of the deployment pipeline.

An investigation into the issue identifies the following:

One of the datasets is larger than 1 GB and has a fact table that contains more than 500 million rows.
When publishing dataset changes to development, test, or production pipelines, a refresh is
triggered against the entire dataset.

You need to recommend a solution to reduce the size of the queries sent to the database when the
dataset changes are published to development, test, or production.

What should you recommend?

A. Turn off auto refresh when publishing the dataset changes to the Power BI service.
B. In the dataset, change the fact table from an import table to a hybrid table.
C. Enable the large dataset storage format for the workspace.
D. Create a dataset parameter to reduce the fact table row count in the development and test
pipelines.

Answer: B
Explanation:

Hybrid tables
Hybrid tables are tables with incremental refresh that can have both import and direct query
partitions. During a clean deployment, both the refresh policy and the hybrid table partitions are
copied. When deploying to a pipeline stage that already has hybrid table partitions, only the refresh
policy is copied. To update the partitions, refresh the table.

Refreshes are faster - Only the most recent data that has changed needs to be refreshed.

Reference: https://docs.microsoft.com/en-us/power-bi/create-reports/deployment-pipelines-best-practices


Question: 93

You have a Power BI Premium capacity.

You need to increase the number of virtual cores associated to the capacity.

Which role do you need?

A. Power BI workspace admin
B. capacity admin
C. Power Platform admin
D. Power BI admin

Answer: D
Explanation:

Change capacity size


Power BI admins and global administrators can change Power BI Premium capacity. Capacity admins
who are not a Power BI admin or global administrator don't have this option.

Reference: https://docs.microsoft.com/en-us/power-bi/enterprise/service-admin-premium-manage

Question: 94

You are attempting to configure certification for a Power BI dataset and discover that the certification
setting for the dataset is unavailable.

What are two possible causes of the issue? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. The workspace is in shared capacity.
B. You have insufficient permissions.
C. Dataset certification is disabled for the Power BI tenant.
D. The sensitivity level for the dataset is set to Highly Confidential.
E. Row-level security (RLS) is missing from the dataset.

Answer: BC
Explanation:

Reference: https://docs.microsoft.com/en-us/power-bi/admin/service-admin-setup-certification
https://docs.microsoft.com/en-us/power-bi/collaborate-share/service-endorse-content


Question: 95

Your company is migrating its current, custom-built reporting solution to Power BI.

The Power BI tenant must support the following scenarios:

40 reports that will be embedded in external websites. The websites control their own security. The
reports will be consumed by 50 users monthly.
Forty-five users that require access to the workspaces and apps in the Power BI Admin portal. Ten of
the users must publish and consume datasets that are larger than 1 GB.
Ten developers that require Text Analytics transformations and paginated reports for datasets. An
additional 15 users will consume the reports.

You need to recommend a licensing solution for the company. The solution must minimize costs.

Which two Power BI license options should you include in the recommendation? Each correct answer
presents part of the solution.

NOTE: Each correct selection is worth one point.

A. 70 Premium per user


B. one Premium
C. 70 Pro
D. one Embedded
E. 35 Pro
F. 35 Premium per user

Answer: BF
Explanation:

B:
Free - 40 reports that will be embedded in external websites. The websites control their own
security.
Free - The reports will be consumed by 50 users monthly.
Free + one Premium capacity for the workspace - Forty-five users that require access to the workspaces and apps
in the Power BI Admin portal.

F: Ten of the users must publish and consume datasets that are larger than 1 GB.
Ten developers that require Text Analytics transformations and paginated reports for datasets. An
additional 15 users will consume the reports.

Power BI Premium per user features and capabilities


* Pixel perfect paginated reports are available for operational reporting capabilities based on SSRS
technology. Users can create highly formatted reports in various formats such as PDF and PPT, which
are embeddable in applications and are designed to be printed or shared.


Note: There are three kinds of Power BI per-user licenses: Free, Pro, and Premium Per User.
Power BI (free): Access to content in My Workspace
Power BI (free) + Workspace is Premium: Consume content shared with them
Power BI Pro: Publish content to other workspaces, share dashboards, subscribe to dashboards and
reports, share with users who have a Pro license
Power BI Pro + Workspace is Premium: Distribute content to users who have free licenses
Power BI Premium Per User: Publish content to other workspaces, share dashboards, subscribe to
dashboards and reports, share with users who have a Premium Per User license

Power BI Premium Per User + Workspace is Premium: Distribute content to users who have free and
Pro licenses

Reference: https://docs.microsoft.com/en-us/power-bi/fundamentals/service-features-license-type

Question: 96

You have two Power BI reports named Report1 and Report2.

Report1 connects to a shared dataset named Dataset1.

Report2 connects to a local dataset that has the same structure as Dataset1. Report2 contains several
calculated tables and parameters.

You need to prepare Report2 to use Dataset1.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Remove the data source permissions.


B. Delete all the Power Query Editor objects.
C. Modify the source of each query.
D. Update all the parameter values.
E. Delete all the calculated tables.

Answer: CD
Explanation:

C: Power BI Desktop also comes with Power Query Editor. Use Power Query Editor to connect to one
or many data sources, shape and transform the data to meet your needs, then load that model into
Power BI Desktop.

D: Common uses for parameters


Here are some of the most common ways to use parameters.

Control paginated report data


* Filter paginated report data at the data source by writing dataset queries that contain variables.
* Etc.

Reference: https://docs.microsoft.com/en-us/power-bi/transform-model/desktop-query-overview
https://docs.microsoft.com/en-us/learn/modules/dax-power-bi-add-calculated-tables/1-introduction

Question: 97

HOTSPOT

You have an Azure Synapse notebook.

You need to create the visual shown in the following exhibit.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.


Answer:
Explanation:

Box 1: fill_between
matplotlib.pyplot.fill_between fills the area between two horizontal curves.

The curves are defined by the points (x, y1) and (x, y2). This creates one or multiple polygons
describing the filled area.

Box 2: suptitle
Set the title of the visual.
suptitle adds a centred title to the figure.
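A minimal sketch of the two calls together, using made-up sample data:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
fig, ax = plt.subplots()
ax.fill_between(x, np.sin(x), np.sin(x) + 0.5, alpha=0.4)  # shade the area between two curves
fig.suptitle("Telemetry over time")  # centered title for the whole figure
plt.show()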

Reference:
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.fill_between.html#matplotlib.pyplot.fill_between
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.suptitle.html#matplotlib.pyplot.suptitle

Question: 98
You use an Apache Spark notebook in Azure Synapse Analytics to filter and transform data.

You need to review statistics for a DataFrame that includes:


The column name


The column type
The number of distinct values
Whether the column has missing values

Which function should you use?

A. displayHTML()
B. display(df, summary=true)
C. %%configure
D. display(df)
E. %%lsmagic

Answer: B
Explanation:

display(df) statistic details


You can use display(df, summary = true) to check the statistics summary of a given Apache Spark
DataFrame, which includes the column name, column type, unique values, and missing values for each
column. You can also select a specific column to see its minimum value, maximum value, mean
value, and standard deviation.
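A minimal sketch in a PySpark cell of a Synapse notebook, where display is the notebook's built-in rendering function; the storage path is a made-up example:

df = spark.read.parquet("abfss://data@account.dfs.core.windows.net/telemetry/")
display(df, summary=True)  # tabular summary: name, type, distinct and missing values per column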

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-data-visualization

Question: 99

HOTSPOT

You are using an Azure Synapse notebook to create a Python visual.

You run the following code cell to import a dataset named Iris.

A sample of the data is shown in the following table.


You need to create the visual shown in the exhibit. (Click the Exhibit tab.)

How should you complete the Python code? To answer, select the appropriate options in the answer
area.

NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: pairplot
A pairs plot allows us to see both distribution of single variables and relationships between two
variables. Pair plots are a great method to identify trends for follow-up analysis and, fortunately, are
easily implemented in Python!

For example, let's plot the data using pairplot:


From the picture below, we can observe the variations in each plot. The plots are in matrix format
where the row name represents x axis and column name represents the y axis. The main-diagonal
subplots are the univariate histograms (distributions) for each attribute.
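A minimal sketch with seaborn, using its bundled copy of the Iris data as a stand-in for the imported dataset:

import seaborn as sns
import matplotlib.pyplot as plt

iris = sns.load_dataset("iris")
sns.pairplot(iris, hue="species")  # scatter matrix with per-species coloring
plt.show()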


Box 2: sepal_width
sepal_width is displayed with a height of 2.5 (between 2.0 and 4.5).

Reference: https://medium.com/analytics-vidhya/pairplot-visualization-16325cd725e6

Question: 100
HOTSPOT

You use Vertipaq Analyzer to analyze a model.

The Relationships tab contains the results shown in the following exhibit.


Use the drop-down menus to select the answer choice that completes each statement based on the
information presented in the graphic.

NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: Customer
There are 1804 invalid rows (records) in the Customer table.

Box 2: 22
There are 22 missing keys.

Note: VertiPaq Analyzer in DAX Studio is useful in identifying referential integrity violations, which
slow down your DAX code. It helps you determine which table or column needs to be optimized and
improved.

Reference: https://blog.enterprisedna.co/vertipaq-analyzer-tutorial-relationships-referential-integrity/


Question: 101
You use the Vertipaq Analyzer to analyze tables in a dataset as shown in the Tables exhibit. (Click the
Tables tab.)

The table relationships for the dataset are shown in the Relationships exhibit. (Click the Relationships
tab.)

You need to reduce the model size by eliminating invalid relationships.

Which column should you remove?

A. Sales[Sales Amount]
B. Sales[RowID]
C. Sales[Sales ID]
D. Plan[RowID]

Answer: B
Explanation:

Sales[Row ID] has 858,786 missing keys and 858,789 Max From Cardinality.
Note: The Max From Cardinality column defines the cost of the relationship, which is the amount of
time DAX needs to transfer the filters from the dimension table to the fact table.

Reference: https://blog.enterprisedna.co/vertipaq-analyzer-tutorial-relationships-referential-integrity/

Question: 102

You have a sales report as shown in the following exhibit.

The sales report has the following characteristics:

The measures are optimized.


The dataset uses import storage mode.
Data points, hierarchies, and fields cannot be removed or filtered from the report page.

From powerbi.com, users experience slow load times when viewing the report.


You need to reduce how long it takes for the report to load without affecting the data displayed in the
report.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Change the report theme to monochromatic.


B. Replace the single-value cards with a multi-row card.
C. Replace the product category charts with a bar chart for sales and a hierarchy of Category and Sub
Category on the axis.
D. Replace all the filters on the Filters pane with visual slicers on the report page.

Answer: BC
Explanation:

Reducing the number of visuals on the page reduces the number of queries issued when the report
loads: a single multi-row card replaces several single-value cards, and one bar chart with a Category
and Sub Category hierarchy on the axis replaces the separate per-category charts, all without
removing or filtering any data points.

Question: 103

DRAG DROP

You manage a Power BI dataset that queries a fact table named SalesDetails. SalesDetails contains
three date columns named OrderDate, CreatedOnDate, and ModifiedDate.

You need to implement an incremental refresh of SalesDetails. The solution must ensure that
OrderDate starts on or after the beginning of the prior year.

Which four actions should you perform in sequence? To answer, move the appropriate actions from
the list of actions to the answer area and arrange them in the correct order.

NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct
orders you select.


Answer:
Explanation:


Step 1: Create RangeStart and RangeEnd date/time parameters.


When configuring incremental refresh in Power BI Desktop, you first create two Power Query
date/time parameters with the reserved, case-sensitive names RangeStart and RangeEnd. These
parameters, defined in the Manage Parameters dialog in Power Query Editor, are initially used to
filter the data loaded into the Power BI Desktop model table to include only those rows with a
date/time within that period.

Step 2: Add an applied step that adds a custom date filter OrderDate is Between RangeStart and
RangeEnd.
With RangeStart and RangeEnd parameters defined, you then apply custom Date filters on your
table's date column. The filters you apply select a subset of data that will be loaded into the model
when you click Apply.
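A minimal sketch of that applied step, assuming the previous step is named SalesDetails; the parameter names RangeStart and RangeEnd are reserved and case-sensitive:

= Table.SelectRows(SalesDetails, each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd)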

Step 3: Configure an incremental refresh to archive data that starts two years before the refresh date.
After filters have been applied and a subset of data has been loaded into the model, you then define
an incremental refresh policy for the table. After the model is published to the service, the policy is
used by the service to create and manage table partitions and perform refresh operations. To define
the policy, you will use the Incremental refresh and real-time data dialog box to specify both required
settings and optional settings.

Step 4: Add an applied step that filters OrderDate to the start of the prior year.

Reference: https://docs.microsoft.com/en-us/power-bi/connect-data/incremental-refresh-overview

Question: 104


DRAG DROP

You have an Azure Synapse Analytics serverless SQL pool.

You need to return a list of files and the number of rows in each file.

How should you complete the Transact-SQL statement? To answer, drag the appropriate values to the
targets. Each value may be used once, more than once, or not at all. You may need to drag the split
bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: APPROX_COUNT_DISTINCT
The APPROX_COUNT_DISTINCT function returns the approximate number of unique non-null values
in a group.

Box 2: OPENROWSET
OPENROWSET function in Synapse SQL reads the content of the file(s) from a data source. The data
source is an Azure storage account and it can be explicitly referenced in the OPENROWSET function
or can be dynamically inferred from URL of the files that you want to read. The OPENROWSET
function can optionally contain a DATA_SOURCE parameter to specify the data source that contains
files.

The OPENROWSET function can be referenced in the FROM clause of a query as if it were a table
name. It supports bulk operations through a built-in BULK provider that enables data from a file to
be read and returned as a rowset.
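A minimal sketch of such a query, assuming Parquet files at a hypothetical path that contain a unique RowId column; filepath() identifies the file each row came from:

SELECT r.filepath() AS FileName,
       APPROX_COUNT_DISTINCT(r.RowId) AS ApproxRowCount
FROM OPENROWSET(
    BULK 'https://account.dfs.core.windows.net/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS r
GROUP BY r.filepath();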

Reference: https://docs.microsoft.com/en-us/sql/t-sql/functions/approx-count-distinct-transact-sql
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-openrowset


Question: 105
HOTSPOT

You have an Azure Synapse Analytics serverless SQL pool and an Azure Data Lake Storage Gen2
account.

You need to query all the files in the ‘csv/taxi/’ folder and all its subfolders. All the files are in CSV
format and have a header row.

How should you complete the query? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: BULK 'csv/taxi/**',

The ** wildcard matches files in the folder and all of its subfolders; a pattern such as taxi*.CSV
would match only files directly in the csv folder, not those in subfolders.


Box 2: FIRSTROW=2
As there is a header we should read from the second line.

Note: FIRSTROW = 'first_row'

Specifies the number of the first row to load. The default is 1 and indicates the first row in the
specified data file. The row numbers are determined by counting the row terminators. FIRSTROW is
1-based.

Incorrect:
Not FIRSTROW=1. FIRSTROW=1 is used when there is no header.
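A minimal sketch of the completed query, assuming a hypothetical external data source named TaxiStorage that points at the storage account:

SELECT COUNT_BIG(*) AS TotalRows
FROM OPENROWSET(
    BULK 'csv/taxi/**',
    DATA_SOURCE = 'TaxiStorage',  -- hypothetical external data source
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    FIRSTROW = 2                  -- skip the header row
) AS r;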

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-openrowset

Question: 106
You have a group of data scientists who must create machine learning models and run periodic
experiments on a large dataset.

You need to recommend an Azure Synapse Analytics pool for the data scientists. The solution must
minimize costs.

Which type of pool should you recommend?

A. a Data Explorer pool


B. an Apache Spark pool
C. a dedicated SQL pool
D. a serverless SQL pool

Answer: B
Explanation:

In Azure Synapse, training machine learning models can be performed on the Apache Spark Pools
with tools like PySpark/Python, Scala, or .NET.

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/machine-learning/what-is-machine-learning

Question: 107

HOTSPOT

You manage a dataset that contains the two data sources as shown in the following table.


When you attempt to refresh the dataset in powerbi.com, you receive the following error message:
“[Unable to combine data] Add Columns is accessing data sources that have privacy levels which
cannot be used together. Please rebuild this data combination.”

You discover that the dataset contains queries that fold data from the SharePoint folder to the Azure
SQL database.

You need to resolve the error. The solution must provide the highest privacy possible.

Which privacy level should you select for each data source? To answer, select the appropriate options
in the answer area.

NOTE: Each correct selection is worth one point.

Answer:
Explanation:

Box 1: Private

This Formula.Firewall error is the result of Power Query's Data Privacy Firewall (aka the Firewall).

Note: Folding is a term that refers to converting expressions in M (such as filters, renames, joins, and
so on) into operations against a raw data source (such as SQL, OData, and so on).

Box 2: Organizational
Organizational limits the visibility of a data source to a trusted group of people. It is isolated from all
Public data sources, but is visible to other Organizational data sources. A common example is a
Microsoft Word document on an intranet SharePoint site with permissions enabled for a trusted
group.

Reference: https://support.microsoft.com/en-us/office/set-privacy-levels-power-query-cc3ede4d-359e-4b28-bc72-9bee7900b540

Question: 108
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You are using an Azure Synapse Analytics serverless SQL pool to query a collection of Apache Parquet
files by using automatic schema inference. The files contain more than 40 million rows of UTF-8-
encoded business names, survey names, and participant counts. The database is configured to use
the default collation.
The queries use OPENROWSET and infer the schema shown in the following table.

You need to recommend changes to the queries to reduce I/O reads and tempdb usage.
Solution: You recommend defining an external table for the Parquet files and updating the query to
use the table
Does this meet the goal?

A. Yes
B. No

Answer: B
Explanation:

Defining an external table over the same Parquet files does not change the inferred column types, so
the oversized VARCHAR columns, the I/O reads, and the tempdb usage remain. Specifying an explicit
schema in the WITH clause of OPENROWSET, with appropriately sized VARCHAR columns and a
UTF-8 collation such as Latin1_General_100_BIN2_UTF8, is what reduces the reads.

Question: 109

You have a deployment pipeline for a Power BI workspace. The workspace contains two datasets that
use import storage mode.

A database administrator reports a drastic increase in the number of queries sent from the Power BI
service to an Azure SQL database since the creation of the deployment pipeline.

An investigation into the issue identifies the following:

One of the datasets is larger than 1 GB and has a fact table that contains more than 500 million rows.
When publishing dataset changes to development, test, or production pipelines, a refresh is
triggered against the entire dataset.

You need to recommend a solution to reduce the size of the queries sent to the database when the
dataset changes are published to development, test, or production.


What should you recommend?

A. From Capacity settings in the Power BI Admin portal, reduce the Max Intermediate Row Set Count
setting.
B. Configure the dataset to use a composite model that has a DirectQuery connection to the fact
table.
C. Enable the large dataset storage format for the workspace.
D. From Capacity settings in the Power BI Admin portal, increase the Max Intermediate Row Set
Count setting.

Answer: B
Explanation:

A composite model in Power BI means part of your model can be a DirectQuery connection to a data
source (for example, SQL Server database), and another part as Import Data (for example, an Excel
file). Previously, when you used DirectQuery, you couldn’t even add another data source into the
model.
DirectQuery and Import Data have different advantages.
Now the Composite Model combines the good things of both Import and DirectQuery into one
model. Using the Composite Model, you can work with big data tables using DirectQuery, and still
import smaller tables using Import Data.

Reference: https://radacad.com/composite-model-directquery-and-import-data-combined-evolution-begins-in-power-bi
https://powerbi.microsoft.com/en-us/blog/five-new-power-bi-premium-capacity-settings-is-available-on-the-portal-preloaded-with-default-values-admin-can-review-and-override-the-defaults-with-their-preference-to-better-fence-their-capacity/

Question: 110

You are using a Python notebook in an Apache Spark pool in Azure Synapse Analytics.

You need to present the data distribution statistics from a DataFrame in a tabular view.

Which method should you invoke on the DataFrame?

A. rollup
B. cov
C. explain
D. describe

Answer: D
Explanation:


The aggregating statistic can be calculated for multiple columns at the same time with the describe
function.

Example:
titanic[["Age", "Fare"]].describe()
Out[6]:
Age Fare
count 714.000000 891.000000
mean 29.699118 32.204208
std 14.526497 49.693429
min 0.420000 0.000000
25% 20.125000 7.910400
50% 28.000000 14.454200
75% 38.000000 31.000000
max 80.000000 512.329200
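The same idea on a Spark DataFrame in a notebook; a minimal sketch with made-up data:

df = spark.createDataFrame([(1, 10.0), (2, 12.5), (3, 9.9)], ["id", "value"])
df.describe().show()  # count, mean, stddev, min, and max for each column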

Reference:
https://pandas.pydata.org/docs/getting_started/intro_tutorials/06_calculate_statistics.html

Question: 111

You are using a Python notebook in an Apache Spark pool in Azure Synapse Analytics.

You need to present the data distribution statistics from a DataFrame in a tabular view.

Which method should you invoke on the DataFrame?

A. sample
B. describe
C. freqItems
D. explain

Answer: B
Explanation:

pandas.DataFrame.describe
Descriptive statistics include those that summarize the central tendency, dispersion and shape of a
dataset’s distribution, excluding NaN values.

Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The
output will vary depending on what is provided.


Reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.describe.html

Question: 112

You have a deployment pipeline for a Power BI workspace. The workspace contains two datasets that
use import storage mode.

A database administrator reports a drastic increase in the number of queries sent from the Power BI
service to an Azure SQL database since the creation of the deployment pipeline.

An investigation into the issue identifies the following:

One of the datasets is larger than 1 GB and has a fact table that contains more than 500 million rows.
When publishing dataset changes to development, test, or production pipelines, a refresh is
triggered against the entire dataset.

You need to recommend a solution to reduce the size of the queries sent to the database when the
dataset changes are published to development, test, or production.

What should you recommend?

A. Request the authors of the deployment pipeline datasets to reduce the number of datasets
republished during development.
B. In the dataset, delete the fact table.
C. Configure the dataset to use a composite model that has a DirectQuery connection to the fact
table.
D. From Capacity settings in the Power BI Admin portal, reduce the Max Intermediate Row Set Count
setting.

Answer: C
Explanation:

Previously in Power BI Desktop, when you used a DirectQuery in a report, no other data connections,
whether DirectQuery or import, were allowed for that report. With composite models, that
restriction is removed. A report can seamlessly include data connections from more than one
DirectQuery or import data connection, in any combination you choose.

The composite models capability in Power BI Desktop consists of three related features:

* Composite models: Allows a report to have two or more data connections from different source
groups, such as one or more DirectQuery connections and an import connection, two or more
DirectQuery connections, or any combination thereof.

* Etc.

Reference: https://docs.microsoft.com/en-us/power-bi/transform-model/desktop-composite-models

Question: 113

You are using a Python notebook in an Apache Spark pool in Azure Synapse Analytics.

You need to present the data distribution statistics from a DataFrame in a tabular view.

Which method should you invoke on the DataFrame?

A. freqItems
B. corr
C. summary
D. rollup

Answer: C
Explanation:

pyspark.sql.DataFrame.summary computes specified statistics for numeric and string columns and
returns them as a new DataFrame, so df.summary().show() presents the distribution (count, mean,
stddev, min, approximate quartiles, and max by default) in a tabular view.

Incorrect:
* corr computes a single correlation coefficient between two columns; it summarizes a relationship,
not a distribution.
* freqItems finds frequent items for columns, possibly with false positives, using the frequent
element count algorithm described in https://doi.org/10.1145/762471.762473, proposed by Karp,
Schenker, and Papadimitriou.
* rollup creates a multi-dimensional rollup for aggregations; it does not return distribution statistics.
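A minimal sketch with made-up data:

df = spark.createDataFrame([(1, 10.0), (2, 12.5), (3, 9.9)], ["id", "value"])
df.summary().show()                       # count, mean, stddev, min, 25%, 50%, 75%, max
df.summary("count", "min", "max").show()  # only the requested statistics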

Reference: https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.summary.html
