DP-100 Designing and Implementing a Data Science Solution on Azure Dumps

If you are looking for free DP-100 dumps, then here we have some sample questions and answers available. You can prepare from our Microsoft DP-100 exam question notes and practice for the exam with this practice test. Check out our updated DP-100 exam dumps below.

DumpsGroup is a top-class study material provider, and our comprehensive range of DP-100 real exam questions can be your key to passing the Microsoft Azure certification exam on the first attempt. We have excellent material covering almost all topics of the Microsoft DP-100 exam. You can get this material in Microsoft DP-100 PDF and DP-100 practice test engine formats, both designed to resemble the real exam questions. Free DP-100 questions and answers and free Microsoft DP-100 study material are available here so you can judge the quality and accuracy of our study material.



Sample Question 4

You are creating a classification model for a banking company to identify possible instances of credit card fraud. You plan to create the model in Azure Machine Learning by using automated machine learning. The training dataset that you are using is highly unbalanced. You need to evaluate the classification model. Which primary metric should you use?

A. normalized_mean_absolute_error
B. spearman_correlation
C. AUC_weighted
D. accuracy
E. normalized_root_mean_squared_error
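
For context, the primary metric is set when configuring the automated ML run. A minimal sketch with the SDK v1 AutoMLConfig class, assuming training_data and compute_target are already defined and the label name is hypothetical; AUC_weighted accounts for class imbalance by weighting each class by its relative frequency:

from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(task='classification',
                             primary_metric='AUC_weighted',
                             training_data=training_data,   # assumed dataset
                             label_column_name='is_fraud',  # hypothetical label column
                             compute_target=compute_target)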


Sample Question 5

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train and register a machine learning model. You plan to deploy the model as a real-time web service. Applications must use key-based authentication to use the model. You need to deploy the web service. Solution: Create an AksWebservice instance. Set the value of the auth_enabled property to True. Deploy the model to the service. Does the solution meet the goal?

A. Yes
B. No
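
A minimal sketch of the proposed solution, assuming ws, model, and inference_config already exist and aks_target is an attached AKS cluster:

from azureml.core.model import Model
from azureml.core.webservice import AksWebservice

# Key-based auth is the default for AKS; auth_enabled=True makes it explicit.
deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1,
                                                       auth_enabled=True)
service = Model.deploy(ws, 'fraud-service', [model], inference_config,
                       deployment_config, deployment_target=aks_target)
service.wait_for_deployment(show_output=True)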


Sample Question 6

You create an Azure Machine Learning workspace. You use the Azure Machine Learning SDK for Python. You must create a dataset from remote paths. The dataset must be reusable within the workspace. You need to create the dataset. How should you complete the following code segment? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. 
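
One way to satisfy "remote paths" and "reusable within the workspace" with the SDK v1 is to create a dataset from a datastore path and register it. A sketch, assuming the workspace's default datastore holds the files and the dataset name is hypothetical:

from azureml.core import Dataset, Workspace

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Create a dataset from remote paths on the datastore
dataset = Dataset.File.from_files(path=[(datastore, 'data/files/*.csv')])

# Registering the dataset makes it reusable by name within the workspace
dataset = dataset.register(workspace=ws, name='remote-files',
                           create_new_version=True)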


Sample Question 7

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train a classification model by using a logistic regression algorithm. You must be able to explain the model’s predictions by calculating the importance of each feature, both as an overall global relative importance value and as a measure of local importance for a specific set of predictions. You need to create an explainer that you can use to retrieve the required global and local feature importance values. Solution: Create a MimicExplainer. Does the solution meet the goal? 

A. Yes 
B. No
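
For reference, a MimicExplainer trains a global surrogate model and supports both global and local importance values. A sketch with the interpret-community package, assuming model, X_train, X_test, and feature_names come from the training script:

from interpret.ext.blackbox import MimicExplainer
from interpret.ext.glassbox import LGBMExplainableModel

explainer = MimicExplainer(model, X_train, LGBMExplainableModel,
                           features=feature_names)
global_explanation = explainer.explain_global(X_test)     # overall importance
local_explanation = explainer.explain_local(X_test[0:5])  # importance for specific predictions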


Sample Question 8

You create an Azure Machine Learning workspace. You must configure an event handler to send an email notification when data drift is detected in the workspace datasets. You must minimize development effort. You need to configure an Azure service to send the notification. Which Azure service should you use?

A. Azure Function apps 
B. Azure DevOps pipeline 
C. Azure Automation runbook 
D. Azure Logic Apps 


Sample Question 9

You create a binary classification model. The model is registered in an Azure Machine Learning workspace. You use the Azure Machine Learning Fairness SDK to assess the model fairness. You develop a training script for the model on a local machine. You need to load the model fairness metrics into Azure Machine Learning studio. What should you do?

A. Implement the download_dashboard_by_upload_id function 
B. Implement the create_group_metric_set function 
C. Implement the upload_dashboard_dictionary function 
D. Upload the training script
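
A sketch of uploading a fairness dashboard, assuming y_test, y_pred, the sensitive feature column, and an Experiment object come from the training script; the helper names follow the azureml-contrib-fairness documentation:

from fairlearn.metrics._group_metric_set import _create_group_metric_set
from azureml.contrib.fairness import upload_dashboard_dictionary

sensitive = {'sex': A_test['sex']}  # assumed sensitive feature
dash_dict = _create_group_metric_set(y_true=y_test,
                                     predictions={'model': y_pred},
                                     sensitive_features=sensitive,
                                     prediction_type='binary_classification')
run = experiment.start_logging()
upload_id = upload_dashboard_dictionary(run, dash_dict,
                                        dashboard_name='Fairness insights')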


Sample Question 10

You use Azure Machine Learning Studio to build a machine learning experiment. You need to divide data into two distinct datasets. Which module should you use? 

A. Partition and Sample 
B. Assign Data to Clusters 
C. Group Data into Bins 
D. Test Hypothesis Using t-Test


Sample Question 11

You create a workspace by using Azure Machine Learning Studio. You must run a Python SDK v2 notebook in the workspace by using Azure Machine Learning Studio. You must preserve the current values of variables set in the notebook for the current instance. You need to maintain the state of the notebook. What should you do?

A. Change the compute. 
B. Change the current kernel 
C. Stop the compute. 
D. Stop the current kernel. 


Sample Question 12

You have an Azure Machine Learning workspace named workspace1. You must add a datastore that connects an Azure Blob storage container to workspace1. You must be able to configure a privilege level. You need to configure authentication. Which authentication method should you use?

A. Account key 
B. SAS token 
C. Service principal 
D. Managed identity 


Sample Question 13

You run a script as an experiment in Azure Machine Learning. You have a Run object named run that references the experiment run. You must review the log files that were generated during the experiment run. You need to download the log files to a local folder for review. Which two code segments can you run to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

A. run.get_details() 
B. run.get_file_names() 
C. run.get_metrics()
D. run.download_files(output_directory='./runfiles') 
E. run.get_all_logs(destination='./runlogs') 
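
A sketch of the two working options, assuming run references a completed experiment run:

# Downloads files stored for the run (including azureml-logs) to a local folder
run.download_files(prefix='azureml-logs', output_directory='./runfiles')

# Downloads all log files for the run to the destination folder
run.get_all_logs(destination='./runlogs')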


Sample Question 14

You develop and train a machine learning model to predict fraudulent transactions for a hotel booking website. Traffic to the site varies considerably. The site experiences heavy traffic on Monday and Friday and much lower traffic on other days. Holidays are also high web traffic days. You need to deploy the model as an Azure Machine Learning real-time web service endpoint on compute that can dynamically scale up and down to support demand. Which deployment compute option should you use? 

A. attached Azure Databricks cluster
B. Azure Container Instance (ACI)
C. Azure Kubernetes Service (AKS) inference cluster 
D. Azure Machine Learning Compute Instance 
E. attached virtual machine in a different region 


Sample Question 15

You train and register a model in your Azure Machine Learning workspace. You must publish a pipeline that enables client applications to use the model for batch inferencing. You must use a pipeline with a single ParallelRunStep step that runs a Python inferencing script to get predictions from the input data. You need to create the inferencing script for the ParallelRunStep pipeline step. Which two functions should you include? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. run(mini_batch)
B. main() 
C. batch()
D. init()
E. score(mini_batch) 
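
A minimal sketch of a ParallelRunStep entry script, assuming a registered model with a hypothetical name 'my-model'; the framework calls init() once per worker process and run() once per mini-batch:

import os
import joblib
from azureml.core.model import Model

def init():
    # Runs once per worker: load the registered model into a global
    global model
    model_path = Model.get_model_path('my-model')  # hypothetical model name
    model = joblib.load(model_path)

def run(mini_batch):
    # Runs once per mini-batch of input files; must return one result per item
    results = []
    for file_path in mini_batch:
        # ... read the file, build features, and score with the model ...
        results.append(f'{os.path.basename(file_path)}: processed')
    return results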


Sample Question 16

You are creating a new Azure Machine Learning pipeline using the designer. The pipeline must train a model using data in a comma-separated values (CSV) file that is published on a website. You have not created a dataset for this file. You need to ingest the data from the CSV file into the designer pipeline using the minimal administrative effort. Which module should you add to the pipeline in Designer?

A. Convert to CSV 
B. Enter Data Manually
C. Import Data
D. Dataset


Sample Question 17

You use Azure Machine Learning to train a model. You must use a sampling method for tuning hyperparameters. The sampling method must pick samples based on how the model performed with previous samples. You need to select a sampling method. Which sampling method should you use? 

A. Grid
B. Bayesian
C. Random 


Sample Question 18

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train a classification model by using a logistic regression algorithm. You must be able to explain the model’s predictions by calculating the importance of each feature, both as an overall global relative importance value and as a measure of local importance for a specific set of predictions. You need to create an explainer that you can use to retrieve the required global and local feature importance values. Solution: Create a PFIExplainer. Does the solution meet the goal?

A. Yes 
B. No 


Sample Question 19

You create a multi-class image classification deep learning model. You train the model by using PyTorch version 1.2. You need to ensure that the correct version of PyTorch can be identified for the inferencing environment when the model is deployed. What should you do?

A. Save the model locally as a .pt file, and deploy the model as a local web service. 
B. Deploy the model on compute that is configured to use the default Azure Machine Learning conda environment. 
C. Register the model with a .pt file extension and the default version property. 
D. Register the model, specifying the model_framework and model_framework_version properties. 
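
A sketch of option D, assuming ws is the workspace and the trained weights were saved to ./outputs/model.pt:

from azureml.core import Model

model = Model.register(workspace=ws,
                       model_name='image-classifier',        # hypothetical name
                       model_path='./outputs/model.pt',
                       model_framework=Model.Framework.PYTORCH,
                       model_framework_version='1.2')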


Sample Question 20

You use Azure Machine Learning studio to analyze a dataset containing a decimal column named column1. You need to verify that the column1 values are normally distributed. Which statistic should you use?

A. Profile
B. Type
C. Max 
D. Mean 


Sample Question 21

You are a lead data scientist for a project that tracks the health and migration of birds. You create a multi-class image classification deep learning model that uses a set of labeled bird photographs collected by experts. You have 100,000 photographs of birds. All photographs use the JPG format and are stored in an Azure blob container in an Azure subscription. You need to access the bird photograph files in the Azure blob container from the Azure Machine Learning service workspace that will be used for deep learning model training. You must minimize data movement. What should you do?

A. Create an Azure Data Lake store and move the bird photographs to the store. 
B. Create an Azure Cosmos DB database and attach the Azure Blob storage containing the bird photographs to the database. 
C. Create and register a dataset by using TabularDataset class that references the Azure blob storage containing bird photographs. 
D. Register the Azure blob storage containing the bird photographs as a datastore in Azure Machine Learning service. 
E. Copy the bird photographs to the blob datastore that was created with your Azure Machine Learning service workspace. 
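
A sketch of option D, which avoids data movement by registering the existing blob container as a datastore; the account and container names are placeholders:

from azureml.core import Datastore

bird_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name='bird_photos',         # user-defined datastore name
    container_name='birds',               # existing container with the JPGs
    account_name='mystorageaccount',
    account_key='<storage-account-key>')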


Sample Question 22

You create an Azure Machine Learning workspace named workspace1. You create a Python SDK v2 notebook to perform custom model training in workspace1. You need to run the notebook from Azure Machine Learning Studio in workspace1. What should you provision first?

A. default storage account 
B. real-time endpoint 
C. Azure Machine Learning compute cluster
D. Azure Machine Learning compute instance 


Sample Question 23

You create an Azure Machine Learning workspace. You train an MLflow-formatted regression model by using tabular structured data. You must use a Responsible AI dashboard to assess the model. You need to use the Azure Machine Learning studio UI to generate the Responsible AI dashboard. What should you do first?

A. Deploy the model to a managed online endpoint. 
B. Register the model with the workspace. 
C. Create the model explanations. 
D. Convert the model from the MLflow format to a custom format. 


Sample Question 24

You have a Python script that executes a pipeline. The script includes the following code:

from azureml.core import Experiment
pipeline_run = Experiment(ws, 'pipeline_test').submit(pipeline)

You want to test the pipeline before deploying the script. You need to display the pipeline run details written to the STDOUT output when the pipeline completes. Which code segment should you add to the test script?

A. pipeline_run.get_metrics() 
B. pipeline_run.wait_for_completion(show_output=True) 
C. pipeline_param = PipelineParameter(name="stdout", default_value="console") 
D. pipeline_run.get_status() 
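
For context, a sketch of the completed test script, assuming ws and pipeline are already defined:

from azureml.core import Experiment

pipeline_run = Experiment(ws, 'pipeline_test').submit(pipeline)
# Blocks until the pipeline finishes and streams run details to STDOUT
pipeline_run.wait_for_completion(show_output=True)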


Sample Question 25

You train a machine learning model. You must deploy the model as a real-time inference service for testing. The service requires low CPU utilization and less than 48 MB of RAM. The compute target for the deployed service must initialize automatically while minimizing cost and administrative overhead. Which compute target should you use?

A. Azure Kubernetes Service (AKS) inference cluster 
B. Azure Machine Learning compute cluster 
C. Azure Container Instance (ACI) 
D. attached Azure Databricks cluster 


Sample Question 26

You have machine learning models that produce unfair predictions across sensitive features. You must use a post-processing technique to apply a constraint to the models to mitigate their unfairness. You need to select a post-processing technique and model type. What should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
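
As background, Fairlearn's ThresholdOptimizer is a typical post-processing mitigation: it wraps an already-trained classifier and adjusts decision thresholds per sensitive-feature group. A sketch, assuming a fitted estimator and sensitive-feature columns from the training data:

from fairlearn.postprocessing import ThresholdOptimizer

mitigator = ThresholdOptimizer(estimator=trained_model,   # assumed fitted classifier
                               constraints='demographic_parity',
                               prefit=True)
mitigator.fit(X_train, y_train, sensitive_features=A_train['sex'])
y_pred = mitigator.predict(X_test, sensitive_features=A_test['sex'])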


Sample Question 27

You create a training pipeline by using the Azure Machine Learning designer. You need to load data into a machine learning pipeline by using the Import Data component. Which two data sources could you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point

A. Azure Blob storage container through a registered datastore
B. Azure SQL Database
C. URL via HTTP
D. Azure Data Lake Storage Gen2
E. Registered dataset


Sample Question 28

You are creating a compute target to train a machine learning experiment. The compute target must support automated machine learning, machine learning pipelines, and Azure Machine Learning designer training. You need to configure the compute target. Which option should you use?

A. Azure HDInsight 
B. Azure Machine Learning compute cluster 
C. Azure Batch 
D. Remote VM 


Sample Question 29

You create a script that trains a convolutional neural network model over multiple epochs and logs the validation loss after each epoch. The script includes arguments for batch size and learning rate. You identify a set of batch size and learning rate values that you want to try. You need to use Azure Machine Learning to find the combination of batch size and learning rate that results in the model with the lowest validation loss. What should you do? 

A. Run the script in an experiment based on an AutoMLConfig object 
B. Create a PythonScriptStep object for the script and run it in a pipeline 
C. Use the Automated Machine Learning interface in Azure Machine Learning studio 
D. Run the script in an experiment based on a ScriptRunConfig object 
E. Run the script in an experiment based on a HyperDriveConfig object 


Sample Question 30

You plan to run a script as an experiment using a Script Run Configuration. The script uses modules from the scipy library as well as several Python packages that are not typically installed in a default conda environment. You plan to run the experiment on your local workstation for small datasets and scale out the experiment by running it on more powerful remote compute clusters for larger datasets. You need to ensure that the experiment runs successfully on local and remote compute with the least administrative effort. What should you do?

A. Create and register an Environment that includes the required packages. Use this Environment for all experiment runs. 
B. Always run the experiment with an Estimator by using the default packages. 
C. Do not specify an environment in the run configuration for the experiment. Run the experiment by using the default environment. 
D. Create a config.yaml file defining the conda packages that are required and save the file in the experiment folder. 
E. Create a virtual machine (VM) with the required Python configuration and attach the VM as a compute target. Use this compute target for all experiment runs. 


Sample Question 31

You use the Azure Machine Learning Python SDK to define a pipeline to train a model. The data used to train the model is read from a folder in a datastore. You need to ensure the pipeline runs automatically whenever the data in the folder changes. What should you do? 

A. Set the regenerate_outputs property of the pipeline to True 
B. Create a ScheduleRecurrence object with a Frequency of auto. Use the object to create a Schedule for the pipeline 
C. Create a PipelineParameter with a default value that references the location where the training data is stored 
D. Create a Schedule for the pipeline. Specify the datastore in the datastore property, and the folder containing the training data in the path_on_datastore property 
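
A sketch of option D, assuming the pipeline has been published and datastore references the datastore containing the training folder; the schedule polls the path and triggers a run when its contents change:

from azureml.pipeline.core import Schedule

reactive_schedule = Schedule.create(ws,
                                    name='retrain-on-new-data',       # hypothetical name
                                    pipeline_id=published_pipeline.id,
                                    experiment_name='pipeline-retrain',
                                    datastore=datastore,
                                    path_on_datastore='training/data')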


Sample Question 32

You have a dataset that is stored in an Azure Machine Learning workspace. You must perform a data analysis for differential privacy by using the SmartNoise SDK. You need to measure the distribution of reports for repeated queries to ensure that they are balanced. Which type of test should you perform?

A. Bias 
B. Accuracy 
C. Privacy 
D. Utility 


Sample Question 33

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run:

from azureml.core import Run
import pandas as pd

run = Run.get_context()
data = pd.read_csv('data.csv')
label_vals = data['label'].unique()
# Add code to record metrics here
run.complete()

The experiment must record the unique labels in the data as metrics for the run that can be reviewed later. You must add code to the script to record the unique label values as run metrics at the point indicated by the comment. Solution: Replace the comment with the following code:

for label_val in label_vals:
    run.log('Label Values', label_val)

Does the solution meet the goal?

A. Yes 
B. No 


Sample Question 34

You have a Jupyter Notebook that contains Python code that is used to train a model. You must create a Python script for the production deployment. The solution must minimize code maintenance. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. Refactor the Jupyter Notebook code into functions 
B. Save each function to a separate Python file 
C. Define a main() function in the Python script 
D. Remove all comments and functions from the Python script 


Sample Question 35

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run:

from azureml.core import Run
import pandas as pd

run = Run.get_context()
data = pd.read_csv('data.csv')
label_vals = data['label'].unique()
# Add code to record metrics here
run.complete()

The experiment must record the unique labels in the data as metrics for the run that can be reviewed later. You must add code to the script to record the unique label values as run metrics at the point indicated by the comment. Solution: Replace the comment with the following code:

run.log_table('Label Values', label_vals)

Does the solution meet the goal?

A. Yes 
B. No 


Sample Question 36

You create an Azure Machine Learning compute resource to train models. The compute resource is configured as follows:

Minimum nodes: 2
Maximum nodes: 4

You must decrease the minimum number of nodes and increase the maximum number of nodes to the following values:

Minimum nodes: 0
Maximum nodes: 8

You need to reconfigure the compute resource. What are three possible ways to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

A. Use the Azure Machine Learning studio. 
B. Run the update method of the AmlCompute class in the Python SDK. 
C. Use the Azure portal. 
D. Use the Azure Machine Learning designer. 
E. Run the refresh_state() method of the BatchCompute class in the Python SDK
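
A sketch of option B, assuming the workspace object ws and a cluster with the hypothetical name 'cpu-cluster':

from azureml.core.compute import ComputeTarget

compute_target = ComputeTarget(workspace=ws, name='cpu-cluster')
compute_target.update(min_nodes=0, max_nodes=8)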


Sample Question 37

You manage an Azure Machine Learning workspace by using the Azure CLI ml extension v2. You need to define a YAML schema to create a compute cluster. Which schema should you use? 

A. https://azuremlschemas.azureedge.net/latest/computeInstance.schema.json 
B. https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json 
C. https://azuremlschemas.azureedge.net/latest/vmCompute.schema.json 
D. https://azuremlschemas.azureedge.net/latest/kubernetesCompute.schema.json 


Sample Question 38

You use an Azure Machine Learning workspace. You have a trained model that must be deployed as a web service. Users must authenticate by using Azure Active Directory. What should you do?

A. Deploy the model to Azure Kubernetes Service (AKS). During deployment, set the token_auth_enabled parameter of the target configuration object to true 
B. Deploy the model to Azure Container Instances. During deployment, set the auth_enabled parameter of the target configuration object to true 
C. Deploy the model to Azure Container Instances. During deployment, set the token_auth_enabled parameter of the target configuration object to true 
D. Deploy the model to Azure Kubernetes Service (AKS). During deployment, set the auth_enabled parameter of the target configuration object to true 


Sample Question 39

You create a deep learning model for image recognition on Azure Machine Learning service using GPU-based training. You must deploy the model to a context that allows for real-time GPU-based inferencing. You need to configure compute resources for model inferencing. Which compute type should you use?

A. Azure Container Instance 
B. Azure Kubernetes Service 
C. Field Programmable Gate Array 
D. Machine Learning Compute 


Sample Question 40

You use the designer to create a training pipeline for a classification model. The pipeline uses a dataset that includes the features and labels required for model training. You create a real-time inference pipeline from the training pipeline. You observe that the schema for the generated web service input is based on the dataset and includes the label column that the model predicts. Client applications that use the service must not be required to submit this value. You need to modify the inference pipeline to meet the requirement. What should you do?

A. Add a Select Columns in Dataset module to the inference pipeline after the dataset and use it to select all columns other than the label. 
B. Delete the dataset from the training pipeline and recreate the real-time inference pipeline.
C. Delete the Web Service Input module from the inference pipeline.
D. Replace the dataset in the inference pipeline with an Enter Data Manually module that includes data for the feature columns but not the label column. 


Sample Question 41

You are implementing hyperparameter tuning by using Bayesian sampling for an Azure ML Python SDK v2-based model training from a notebook. The notebook is in an Azure Machine Learning workspace. The notebook uses a training script that runs on a compute cluster with 20 nodes. The code implements a Bandit termination policy with slack_factor set to 0.2 and a sweep job with max_concurrent_trials set to 10. You must increase the effectiveness of the tuning process by improving sampling convergence. You need to select a configuration change. What should you select?

A. Set the value of slack_factor of the early_termination policy to 0.1. 
B. Set the value of max_concurrent_trials to 4. 
C. Set the value of slack_factor of the early_termination policy to 0.9. 
D. Set the value of max_concurrent_trials to 20. 


Sample Question 42

You have an Azure Machine Learning workspace. You build a deep learning model. You need to publish a GPU-enabled model as a web service. Which two compute targets can you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. 

A. Azure Kubernetes Service (AKS) 
B. Azure Container Instances (ACI) 
C. Local web service 
D. Azure Machine Learning compute clusters 


Sample Question 43

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You plan to use a Python script to run an Azure Machine Learning experiment. The script creates a reference to the experiment run context, loads data from a file, identifies the set of unique values for the label column, and completes the experiment run:

from azureml.core import Run
import pandas as pd

run = Run.get_context()
data = pd.read_csv('data.csv')
label_vals = data['label'].unique()
# Add code to record metrics here
run.complete()

The experiment must record the unique labels in the data as metrics for the run that can be reviewed later. You must add code to the script to record the unique label values as run metrics at the point indicated by the comment. Solution: Replace the comment with the following code:

run.upload_file('outputs/labels.csv', './data.csv')

Does the solution meet the goal?

A. Yes
B. No


Sample Question 44

You are implementing hyperparameter tuning for model training from a notebook. The notebook is in an Azure Machine Learning workspace. You add code that imports all relevant Python libraries. You must configure Bayesian sampling over the search space for the num_hidden_layers and batch_size hyperparameters. You need to complete the following Python code to configure Bayesian sampling. Which code segments should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
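
For context, a sketch of Bayesian sampling over the two hyperparameters with the SDK v2 sweep API, assuming job is an azure.ai.ml command job whose inputs include num_hidden_layers and batch_size; the metric name is hypothetical:

from azure.ai.ml.sweep import Choice

job_for_sweep = job(
    num_hidden_layers=Choice(values=[1, 2, 3]),
    batch_size=Choice(values=[16, 32, 64]),
)
sweep_job = job_for_sweep.sweep(
    sampling_algorithm='bayesian',
    primary_metric='validation_loss',  # hypothetical logged metric
    goal='Minimize',
)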


Sample Question 45

You train and register a machine learning model. You create a batch inference pipeline that uses the model to generate predictions from multiple data files. You must publish the batch inference pipeline as a service that can be scheduled to run every night. You need to select an appropriate compute target for the inference service. Which compute target should you use?

A. Azure Machine Learning compute instance 
B. Azure Machine Learning compute cluster 
C. Azure Kubernetes Service (AKS)-based inference cluster 
D. Azure Container Instance (ACI) compute target 


Sample Question 46

You create a multi-class image classification model with automated machine learning in Azure Machine Learning. You need to prepare labeled image data as input for model training in the form of an Azure Machine Learning tabular dataset. Which data format should you use?

A. COCO 
B. JSONL 
C. JSON
D. Pascal VOC 


Sample Question 47

You are using Azure Machine Learning to monitor a trained and deployed model. You implement Event Grid to respond to Azure Machine Learning events. Model performance has degraded due to model input data changes. You need to trigger a remediation ML pipeline based on an Azure Machine Learning event. Which event should you use?

A. RunStatusChanged
B. DatasetDriftDetected
C. ModelDeployed
D. RunCompleted


Sample Question 48

You create an Azure Machine Learning pipeline named pipeline1 with two steps that contain Python scripts. Data processed by the first step is passed to the second step. You must update the content of the downstream data source of pipeline1 and run the pipeline again. You need to ensure the new run of pipeline1 fully processes the updated content. Solution: Set the allow_reuse parameter of the PythonScriptStep object of both steps to False. Does the solution meet the goal?

A. Yes
B. No
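
For reference, a sketch of the proposed solution, assuming both scripts and a compute target already exist; with allow_reuse=False neither step's cached output is reused on the new run:

from azureml.pipeline.steps import PythonScriptStep

step1 = PythonScriptStep(name='process', script_name='process.py',  # hypothetical scripts
                         compute_target=compute_target, allow_reuse=False)
step2 = PythonScriptStep(name='train', script_name='train.py',
                         compute_target=compute_target, allow_reuse=False)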


Sample Question 49

You plan to use automated machine learning to train a regression model. You have data that has features which have missing values, and categorical features with few distinct values. You need to configure automated machine learning to automatically impute missing values and encode categorical features as part of the training task. Which parameter and value pair should you use in the AutoMLConfig class? 

A. featurization = 'auto'
B. enable_voting_ensemble = True
C. task = 'classification' 
D. exclude_nan_labels = True 
E. enable_tf = True 
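
A sketch of option A, assuming train_data and a hypothetical label column named 'target':

from azureml.train.automl import AutoMLConfig

automl_config = AutoMLConfig(task='regression',
                             training_data=train_data,
                             label_column_name='target',
                             # 'auto' imputes missing values and encodes categorical features
                             featurization='auto')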


Sample Question 50

You are a data scientist working for a bank and have used Azure ML to train and register a machine learning model that predicts whether a customer is likely to repay a loan. You want to understand how your model is making selections and must be sure that the model does not violate government regulations such as denying loans based on where an applicant lives. You need to determine the extent to which each feature in the customer data is influencing predictions. What should you do?

A. Enable data drift monitoring for the model and its training dataset. 
B. Score the model against some test data with known label values and use the results to calculate a confusion matrix. 
C. Use the Hyperdrive library to test the model with multiple hyperparameter values. 
D. Use the interpretability package to generate an explainer for the model. 
E. Add tags to the model registration indicating the names of the features in the training dataset.


Sample Question 51

You train and register an Azure Machine Learning model. You plan to deploy the model to an online endpoint. You need to ensure that applications will be able to use the authentication method with a non-expiring artifact to access the model. Solution: Create a managed online endpoint with the default authentication settings. Deploy the model to the online endpoint. Does the solution meet the goal?

A. Yes 
B. No 


Sample Question 52

You deploy a real-time inference service for a trained model. The deployed model supports a business-critical application, and it is important to be able to monitor the data submitted to the web service and the predictions the data generates. You need to implement a monitoring solution for the deployed model using minimal administrative effort. What should you do?

A. View the explanations for the registered model in Azure ML studio. 
B. Enable Azure Application Insights for the service endpoint and view logged data in the Azure portal. 
C. Create an ML Flow tracking URI that references the endpoint, and view the data logged by ML Flow. 
D. View the log files generated by the experiment used to train the model.
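
A sketch of option B, assuming service is the deployed Webservice object:

# Enables Azure Application Insights telemetry for the existing endpoint;
# request data and predictions can then be viewed in the Azure portal.
service.update(enable_app_insights=True)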


Sample Question 53

You plan to provision an Azure Machine Learning Basic edition workspace for a data science project. You need to identify the tasks you will be able to perform in the workspace. Which three tasks will you be able to perform? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. 

A. Create a Compute Instance and use it to run code in Jupyter notebooks. 
B. Create an Azure Kubernetes Service (AKS) inference cluster. 
C. Use the designer to train a model by dragging and dropping pre-defined modules. 
D. Create a tabular dataset that supports versioning. 
E. Use the Automated Machine Learning user interface to train a model. 


Sample Question 54

You use Azure Machine Learning designer to create a real-time service endpoint. You have a single Azure Machine Learning service compute resource. You train the model and prepare the real-time pipeline for deployment. You need to publish the inference pipeline as a web service. Which compute type should you use?

A. HDInsight 
B. Azure Databricks 
C. Azure Kubernetes Services 
D. the existing Machine Learning Compute resource 
E. a new Machine Learning Compute resource 


Sample Question 55

You train a model and register it in your Azure Machine Learning workspace. You are ready to deploy the model as a real-time web service. You deploy the model to an Azure Kubernetes Service (AKS) inference cluster, but the deployment fails because an error occurs when the service runs the entry script that is associated with the model deployment. You need to debug the error by iteratively modifying the code and reloading the service, without requiring a re-deployment of the service for each code update. What should you do?

A. Register a new version of the model and update the entry script to load the new version of the model from its registered path. 
B. Modify the AKS service deployment configuration to enable application insights and redeploy to AKS. 
C. Create an Azure Container Instances (ACI) web service deployment configuration and deploy the model on ACI. 
D. Add a breakpoint to the first line of the entry script and redeploy the service to AKS.
E. Create a local web service deployment configuration and deploy the model to a local Docker container.
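
A sketch of option E, assuming ws, model, and inference_config already exist; a local Docker deployment lets you edit the entry script and reload without redeploying:

from azureml.core.model import Model
from azureml.core.webservice import LocalWebservice

deployment_config = LocalWebservice.deploy_configuration(port=8890)  # hypothetical port
service = Model.deploy(ws, 'local-debug', [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)

# After fixing the entry script, pick up the change without a full redeploy
service.reload()
print(service.run(input_data=sample_json))  # assumed test payload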


Sample Question 56

You use Azure Machine Learning designer to create a training pipeline for a regression model. You need to prepare the pipeline for deployment as an endpoint that generates predictions asynchronously for a dataset of input data values. What should you do?

A. Clone the training pipeline. 
B. Create a batch inference pipeline from the training pipeline. 
C. Create a real-time inference pipeline from the training pipeline. 
D. Replace the dataset in the training pipeline with an Enter Data Manually module. 


Sample Question 57

You retrain an existing model. You need to register the new version of a model while keeping the current version of the model in the registry. What should you do?

A. Register a model with a different name from the existing model and a custom property named version with the value 2. 
B. Register the model with the same name as the existing model. 
C. Save the new model in the default datastore with the same name as the existing model. Do not register the new model. 
D. Delete the existing model and register the new one with the same name. 


Sample Question 58

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train a classification model by using a logistic regression algorithm. You must be able to explain the model’s predictions by calculating the importance of each feature, both as an overall global relative importance value and as a measure of local importance for a specific set of predictions. You need to create an explainer that you can use to retrieve the required global and local feature importance values. Solution: Create a TabularExplainer. Does the solution meet the goal?

A. Yes 
B. No 


Sample Question 59

You create an MLflow model. You must deploy the model to Azure Machine Learning for batch inference. You need to create the batch deployment. Which two components should you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

A. Compute target 
B. Kubernetes online endpoint 
C. Model files 
D. Online endpoint 
E. Environment 


Sample Question 60

You manage an Azure Machine Learning workspace named workspace1. You must develop Python SDK v2 code to attach an Azure Synapse Spark pool as a compute target in workspace1. The code must invoke the constructor of the SynapseSparkCompute class. You need to invoke the constructor. What should you use?

A. Synapse workspace web URL and Spark pool name 
B. resource ID of the Synapse Spark pool and a user-defined name 
C. pool URL of the Synapse Spark pool and a system-assigned name 
D. Synapse workspace name and workspace web URL 
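
A sketch of option B with the SDK v2 entities, assuming ml_client is an authenticated MLClient; the resource ID placeholder is illustrative:

from azure.ai.ml.entities import SynapseSparkCompute

synapse_compute = SynapseSparkCompute(
    name='synapse-spark',  # user-defined name
    resource_id='/subscriptions/<sub-id>/resourceGroups/<rg>/providers/'
                'Microsoft.Synapse/workspaces/<synapse-ws>/bigDataPools/<pool>')
ml_client.compute.begin_create_or_update(synapse_compute)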


Sample Question 61

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You train and register a machine learning model. You plan to deploy the model as a real-time web service. Applications must use key-based authentication to use the model. You need to deploy the web service. Solution: Create an AciWebservice instance. Set the value of the ssl_enabled property to True. Deploy the model to the service. Does the solution meet the goal?

A. Yes 
B. No 


Sample Question 62

You are training machine learning models in Azure Machine Learning. You use Hyperdrive to tune the hyperparameters. In previous model training and tuning runs, many models showed similar performance. You need to select an early termination policy that meets the following requirements:

• accounts for the performance of all previous runs when evaluating the current run
• avoids comparing the current run with only the best performing run to date

Which two early termination policies should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

A. Bandit 
B. Median stopping 
C. Default 
D. Truncation selection 
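
For reference, a sketch of a median stopping policy with the SDK v1 HyperDrive classes; it compares the current run against the median of running averages across all runs rather than only the best run:

from azureml.train.hyperdrive import MedianStoppingPolicy

early_termination_policy = MedianStoppingPolicy(evaluation_interval=1,
                                                delay_evaluation=5)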


Sample Question 63

You register a model that you plan to use in a batch inference pipeline. The batch inference pipeline must use a ParallelRunStep step to process files in a file dataset. The script that the ParallelRunStep step runs must process six input files each time the inferencing function is called. You need to configure the pipeline. Which configuration setting should you specify in the ParallelRunConfig object for the ParallelRunStep step?

A. process_count_per_node= "6" 
B. node_count= "6" 
C. mini_batch_size= "6" 
D. error_threshold= "6" 
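
A sketch of option C, assuming an environment and compute target already exist; for a file dataset, mini_batch_size is the number of files passed to each run() call:

from azureml.pipeline.steps import ParallelRunConfig

parallel_run_config = ParallelRunConfig(
    source_directory='scripts',
    entry_script='score.py',        # hypothetical entry script
    mini_batch_size='6',            # six files per inferencing call
    error_threshold=10,
    output_action='append_row',
    environment=batch_env,          # assumed Environment object
    compute_target=compute_target,
    node_count=2)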


Sample Question 64

You are developing a machine learning model. You must run inference with the machine learning model for testing. You need to use a minimal-cost compute target. Which two compute targets should you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

A. Local web service 
B. Remote VM 
C. Azure Databricks 
D. Azure Machine Learning Kubernetes
E. Azure Container Instances



Exam Code: DP-100
Exam Name: Designing and Implementing a Data Science Solution on Azure
Last Update: April 27, 2024
Questions: 407