Question 21
You have published a pipeline that you want to run every week.
You plan to use the Schedule.create method to create the schedule.
What kind of object must you create first to configure how frequently the pipeline runs?
(Orchestrating Operations with Pipelines)
Datastore
PipelineParameter
ScheduleRecurrence
Answer is ScheduleRecurrence. You need a ScheduleRecurrence object to define the frequency and interval at which the schedule runs.
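A minimal sketch of this pattern with the Azure ML SDK v1, assuming a workspace object ws and a pipeline_id from a previously published pipeline already exist in your session:

```python
# Sketch only: ws (Workspace) and pipeline_id are assumed to already exist.
from azureml.pipeline.core import Schedule, ScheduleRecurrence

# Define how often the pipeline runs: once every week.
recurrence = ScheduleRecurrence(frequency="Week", interval=1)

# Create the schedule for the published pipeline using the recurrence.
schedule = Schedule.create(ws,
                           name="weekly-pipeline-schedule",
                           description="Runs the pipeline every week",
                           pipeline_id=pipeline_id,
                           experiment_name="scheduled-pipeline",
                           recurrence=recurrence)
```

The recurrence object is created first and then passed to Schedule.create, which is why the question's answer is ScheduleRecurrence rather than a Datastore or PipelineParameter.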
Question 22
You have trained a model using the Python SDK for Azure Machine Learning.
You want to deploy the model as a containerized real-time service with high scalability and security.
What kind of compute should you create to host the service?
(Deploying and Consuming Models)
Answer is An Azure Kubernetes Services (AKS) inferencing cluster. You should use an AKS cluster to deploy a model as a
scalable, secure, containerized service.
Question 23
You are deploying a model as a real-time inferencing service. What functions must the entry script for the service include?
(Deploying and Consuming Models)
Answer is init() and run(raw_data). You need to implement init and run functions in the entry (scoring) script.
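A rough sketch of the entry script structure. In a real deployment init() would load the registered model (for example with joblib and Model.get_model_path()); here a trivial stand-in model is used so the shape of the script can run anywhere:

```python
# Sketch of a real-time service entry (scoring) script.
import json

model = None

def init():
    # Called once when the service starts: load the model into a global.
    global model
    model = lambda xs: [sum(x) for x in xs]  # stand-in for a trained model

def run(raw_data):
    # Called for each scoring request: parse input, predict, return JSON.
    data = json.loads(raw_data)["data"]
    predictions = model(data)
    return json.dumps({"result": predictions})

if __name__ == "__main__":
    init()
    print(run(json.dumps({"data": [[1, 2], [3, 4]]})))  # {"result": [3, 7]}
```

The service host calls init once at startup and run for every scoring request, which is why both functions are required.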
Question 24
You are creating a batch inferencing pipeline that you want to use to predict new values for a large volume of data files.
You want the pipeline to run the scoring script on multiple nodes and collate the results.
What kind of step should you include in the pipeline?
(Deploying and Consuming Models)
PythonScriptStep
ParallelRunStep
AdlaStep
Answer is ParallelRunStep. You should use a ParallelRunStep to run the scoring script in parallel across multiple nodes and collate the results.
Question 25
You have configured the step in your batch inferencing pipeline with an output_action="append_row" property.
In which file should you look for the batch inferencing results?
(Deploying and Consuming Models)
output.txt
parallel_run_step.txt
stdoutlogs.txt
Answer is parallel_run_step.txt Using the append_row output action causes the results from the ParallelRunStep step to be
collated in a file named parallel_run_step.txt.
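A rough stdlib-only illustration of the collation behavior (not the real ParallelRunStep implementation): each mini-batch's result rows are appended into a single file named parallel_run_step.txt, which in a real pipeline is written to the step's output location.

```python
# Simulate append_row: collate per-mini-batch result rows into one file.
import os
import tempfile

def collate(batch_results, output_dir):
    path = os.path.join(output_dir, "parallel_run_step.txt")
    with open(path, "a") as f:
        for rows in batch_results:      # one list of result rows per mini-batch
            for row in rows:
                f.write(row + "\n")
    return path

out_dir = tempfile.mkdtemp()
path = collate([["file1.csv,0.87"],
                ["file2.csv,0.42", "file3.csv,0.91"]], out_dir)
print(open(path).read().splitlines())
```

Because every node appends rows to the same output, the predictions for all input files end up collated in parallel_run_step.txt.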
Question 26
You plan to use hyperparameter tuning to find optimal discrete values for a set of hyperparameters.
You want to try every possible combination of a set of specified discrete values.
Which kind of sampling should you use?
(Training Optimal Models)
Grid Sampling
Random Sampling
Bayesian Sampling
Answer is Grid Sampling. You should use grid sampling to try every combination of discrete hyperparameter values.
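Grid sampling enumerates the full Cartesian product of the discrete values, which itertools.product can illustrate. The parameter names and values below are illustrative, not from the question:

```python
# Grid sampling tries every combination of the specified discrete values.
from itertools import product

search_space = {
    "--learning_rate": [0.01, 0.1, 1.0],
    "--batch_size": [32, 64],
}

def grid_sample(space):
    # Build one run configuration per combination of discrete values.
    names = list(space)
    return [dict(zip(names, combo)) for combo in product(*space.values())]

runs = grid_sample(search_space)
print(len(runs))  # 3 * 2 = 6 combinations
```

Random sampling would pick combinations at random, and Bayesian sampling would choose new combinations based on previous results; only grid sampling guarantees every combination is tried.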
Question 27
You are using hyperparameter tuning to train an optimal model. Your training script calculates the area under the curve (AUC) metric for
the trained model like this:

y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test, y_scores[:,1])

Your hyperdrive configuration looks like this:

hyperdrive = HyperDriveConfig(estimator=sklearn_estimator,
                              hyperparameter_sampling=grid_sampling,
                              policy=None,
                              primary_metric_name='AUC',
                              primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                              max_total_runs=6,
                              max_concurrent_runs=4)

Which code should your training script use to log the metric?
(Training Optimal Models)
run.log('Accuracy', np.float(auc))
print(auc)
run.log('AUC', np.float(auc))
Answer is run.log('AUC', np.float(auc)). Your script needs to log the primary metric using the same name as specified in
primary_metric_name in the hyperdrive config.
Question 28
You are using automated machine learning to train a model that predicts the species of an iris based on its petal and sepal
measurements.
Which kind of task should you specify for automated machine learning?
(Training Optimal Models)
Regression
Classification
Forecasting
Answer is Classification. Predicting a species (a discrete class) from measurements is a classification task.
Question 29
You have submitted an automated machine learning run using the Python SDK for Azure Machine Learning.
When the run completes, which method of the run object should you use to retrieve the best model?
(Training Optimal Models)
load_model()
get_output()
get_metrics()
Answer is get_output(). The get_output method of an automated machine learning run returns the best model and the child run
that trained it.
Question 30
You have trained a model, and you want to quantify the influence of each feature on a specific individual prediction.
What kind of feature importance should you examine?
(Interpreting Models)
Answer is Local feature importance. Local importance indicates the influence of features on a specific prediction. Global
importance gives an overall indication of feature influence.