Make success possible with our Latest and Unique AWS Certified Specialty MLS-C01 Practice Exam!
Name: AWS Certified Machine Learning - Specialty
Exam Code: MLS-C01
Certification: AWS Certified Specialty
Vendor: Amazon
Total Questions: 307
Last Updated: April 24, 2025
234 Satisfied Customers
Success is simply the result of the effort you put into your preparation, and we at Dumpsgroup want to make that preparation a lot easier. The AWS Certified Machine Learning - Specialty MLS-C01 Practice Exam we offer is built solely for the best results. Our IT experts put their blood and sweat into carefully selecting and compiling these unique Practice Questions so you can achieve your dream of becoming an AWS Certified Specialty professional. Now is the time to press that big buy button and take the first step toward a better and brighter future.
Passing the Amazon MLS-C01 exam is simpler when you have globally valid resources, and Dumpsgroup provides exactly that. Millions of customers come to us daily and leave the platform happy and satisfied, because we aim to provide AWS Certified Specialty Practice Questions aligned with the latest patterns of the AWS Certified Machine Learning - Specialty exam. On top of that, our reliable customer service team is available 24 hours a day to support you in every way necessary. Order now to see the MLS-C01 exam results you have always desired.
You may have heard about candidates failing in large numbers, and perhaps you have tried and failed to pass the AWS Certified Machine Learning - Specialty exam yourself. It is best to try Dumpsgroup's MLS-C01 Practice Questions this time around. Dumpsgroup not only provides an authentic, valid, and accurate resource for your preparation; we have also simplified the training by dividing it into two formats for your ease and comfort. You can now get the Amazon MLS-C01 in both PDF and Online Test Engine formats. Choose either, or both, to start your AWS Certified Specialty certification exam preparation.
Furthermore, Dumpsgroup gives a hefty percentage off on the MLS-C01 Practice Exam when you apply a simple discount code, even though the actual price is already low. Updates are FREE for the first three months from the date of your purchase. Our esteemed customers cannot stop singing the praises of our Amazon MLS-C01 Practice Questions, because we offer only the questions with the highest likelihood of appearing in the actual exam. Download the free demo and see for yourself.
We know you have been struggling to compete with your colleagues in the workplace. That is why we provide the MLS-C01 Practice Questions: to give you the upper hand you have always wanted. These questions and answers are a thorough guide in a simple, exam-like format, which makes understanding and excelling in your field a whole lot easier. Our aim is not just to help you pass the AWS Certified Specialty exam but to make an Amazon professional out of you. For that purpose, our MLS-C01 Practice Exams are the best choice.
There are many resources available online for preparing for the AWS Certified Machine Learning - Specialty exam, but that does not mean all of them are reliable. When your future as an AWS Certified Specialty professional is at stake, you have to think twice when choosing Amazon MLS-C01 Practice Questions. Dumpsgroup is not only a verified source of training material but has also been in this business for years. In those years, we researched the MLS-C01 Practice Exam and came up with the best solution, so you can trust that we know what we are doing. Moreover, we have joined hands with Amazon experts and professionals who are exceptional in their skills, and these experts approved our MLS-C01 Practice Questions for AWS Certified Machine Learning - Specialty preparation.
A. Use Amazon SageMaker Feature Store to select the features. Create a data flow to perform feature-level metadata analysis. Create an Amazon DynamoDB table to store feature-level metadata. Use Amazon QuickSight to analyze the metadata.
B. Use Amazon SageMaker Feature Store to set feature groups for the current features that the ML models use. Assign the required metadata for each feature. Use SageMaker Studio to analyze the metadata.
C. Use Amazon SageMaker Feature Store to apply custom algorithms to analyze the feature-level metadata that the company requires. Create an Amazon DynamoDB table to store feature-level metadata. Use Amazon QuickSight to analyze the metadata.
D. Use Amazon SageMaker Feature Store to set feature groups for the current features that the ML models use. Assign the required metadata for each feature. Use Amazon QuickSight to analyze the metadata.
ANSWER : D
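For reference, option D's approach of attaching metadata to features that already live in SageMaker Feature Store can be done with the UpdateFeatureMetadata API. A minimal boto3 sketch is below, assuming an existing feature group; the group name, feature name, and parameters are illustrative, not from the question.

```python
# Sketch of option D: attach metadata to features in an existing SageMaker
# Feature Store feature group using the UpdateFeatureMetadata API.
# The feature group name, feature name, and parameters are assumed examples.
import boto3

sm = boto3.client("sagemaker")

sm.update_feature_metadata(
    FeatureGroupName="orders-feature-group",
    FeatureName="customer_lifetime_value",
    Description="Rolling 12-month revenue per customer",
    ParameterAdditions=[
        {"Key": "owner", "Value": "ml-platform-team"},
        {"Key": "source", "Value": "orders_table"},
    ],
)

# The stored metadata can then be read back (or exported for analysis).
print(sm.describe_feature_metadata(
    FeatureGroupName="orders-feature-group",
    FeatureName="customer_lifetime_value",
))
```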
A. Create an AWS Lambda function that can transform the incoming records. Enable data transformation on the ingestion Kinesis Data Firehose delivery stream. Use the Lambda function as the invocation target.
B. Deploy an Amazon EMR cluster that runs Apache Spark and includes the transformation logic. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function to launch the cluster each day and transform the records that accumulate in Amazon S3. Deliver the transformed records to Amazon S3.
C. Deploy an Amazon S3 File Gateway in the stores. Update the in-store software to deliver data to the S3 File Gateway. Use a scheduled daily AWS Glue job to transform the data that the S3 File Gateway delivers to Amazon S3.
D. Launch a fleet of Amazon EC2 instances that include the transformation logic. Configure the EC2 instances with a daily cron job to transform the records that accumulate in Amazon S3. Deliver the transformed records to Amazon S3.
ANSWER : A
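Option A relies on Kinesis Data Firehose record transformation, where Firehose hands each batch of base64-encoded records to a Lambda function and expects them back with a status. A minimal sketch of such a handler; the transformation itself is an assumed placeholder.

```python
# Minimal Kinesis Data Firehose transformation Lambda (option A). Firehose
# passes base64-encoded records and expects each record back with its
# recordId, a result status, and re-encoded data.
import base64
import json


def transform_payload(payload: dict) -> dict:
    # Placeholder business transformation (assumption): enrich the record.
    payload["processed"] = True
    return payload


def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        data = json.loads(base64.b64decode(record["data"]))
        transformed = transform_payload(data)
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(
                (json.dumps(transformed) + "\n").encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}
```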
A. Initialize the model with random weights in all layers, including the last fully connected layer.
B. Initialize the model with pre-trained weights in all layers and replace the last fully connected layer.
C. Initialize the model with random weights in all layers and replace the last fully connected layer.
D. Initialize the model with pre-trained weights in all layers, including the last fully connected layer.
ANSWER : B
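Option B describes standard transfer learning: keep the pre-trained weights and swap only the final fully connected layer. A small illustrative sketch using torchvision's ResNet-50; the framework, backbone, and class count are assumptions, not stated in the question.

```python
# Illustration of option B: load pre-trained weights everywhere and replace
# only the last fully connected layer.
import torch.nn as nn
from torchvision import models

num_classes = 10  # assumed number of target classes

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classifier head

# Optionally freeze the pre-trained backbone and train only the new head first.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False
```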
A. Use Amazon SageMaker Data Wrangler preconfigured transformations to explore feature transformations. Use SageMaker Data Wrangler templates for visualization. Export the feature processing workflow to a SageMaker pipeline for automation.
B. Use an Amazon SageMaker notebook instance to experiment with different feature transformations. Save the transformations to Amazon S3. Use Amazon QuickSight for visualization. Package the feature processing steps into an AWS Lambda function for automation.
C. Use AWS Glue Studio with custom code to experiment with different feature transformations. Save the transformations to Amazon S3. Use Amazon QuickSight for visualization. Package the feature processing steps into an AWS Lambda function for automation.
D. Use Amazon SageMaker Data Wrangler preconfigured transformations to experiment with different feature transformations. Save the transformations to Amazon S3. Use Amazon QuickSight for visualization. Package each feature transformation step into a separate AWS Lambda function. Use AWS Step Functions for workflow automation.
ANSWER : A
A. Create a new endpoint configuration that includes a production variant for each of the two models.
B. Create a new endpoint configuration that includes two target variants that point to different endpoints.
C. Deploy the new model to the existing endpoint.
D. Update the existing endpoint to activate the new model.
E. Update the existing endpoint to use the new endpoint configuration.
ANSWER : A,E
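Options A and E together amount to creating an endpoint configuration with one production variant per model and then updating the existing endpoint to use it. A hedged boto3 sketch with assumed endpoint, model, and variant names:

```python
# Sketch of options A and E: one endpoint configuration with a production
# variant per model, then point the existing endpoint at it.
import boto3

sm = boto3.client("sagemaker")

sm.create_endpoint_config(
    EndpointConfigName="two-variant-config",
    ProductionVariants=[
        {
            "VariantName": "model-a",
            "ModelName": "existing-model",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.5,
        },
        {
            "VariantName": "model-b",
            "ModelName": "new-model",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.5,
        },
    ],
)

# Updating the endpoint swaps in the new configuration without downtime.
sm.update_endpoint(
    EndpointName="existing-endpoint",
    EndpointConfigName="two-variant-config",
)
```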
A. XGBoost
B. Image Classification - TensorFlow
C. Object Detection - TensorFlow
D. Semantic segmentation - MXNet
ANSWER : B
A. Latent Dirichlet Allocation (LDA)
B. Recurrent neural network (RNN)
C. K-means
D. Convolutional neural network (CNN)
ANSWER : D
How should the data scientist transform the data?
A. Use ETL jobs in AWS Glue to separate the dataset into a target time series dataset and an item metadata dataset. Upload both datasets as .csv files to Amazon S3.
B. Use a Jupyter notebook in Amazon SageMaker to separate the dataset into a related time series dataset and an item metadata dataset. Upload both datasets as tables in Amazon Aurora.
C. Use AWS Batch jobs to separate the dataset into a target time series dataset, a related time series dataset, and an item metadata dataset. Upload them directly to Forecast from a local machine.
D. Use a Jupyter notebook in Amazon SageMaker to transform the data into the optimized protobuf RecordIO format. Upload the dataset in this format to Amazon S3.
ANSWER : A
A. Load the data into an Amazon SageMaker Studio notebook. Calculate the first and third quartiles. Use a SageMaker Data Wrangler data flow to remove only values that are outside of those quartiles.
B. Use an Amazon SageMaker Data Wrangler bias report to find outliers in the dataset. Use a Data Wrangler data flow to remove outliers based on the bias report.
C. Use an Amazon SageMaker Data Wrangler anomaly detection visualization to find outliers in the dataset. Add a transformation to a Data Wrangler data flow to remove outliers.
D. Use Amazon Lookout for Equipment to find and remove outliers from the dataset.
ANSWER : C
A. Use SageMaker Clarify to automatically detect data bias
B. Turn on the bias detection option in SageMaker Ground Truth to automatically analyze data features.
C. Use SageMaker Model Monitor to generate a bias drift report.
D. Configure SageMaker Data Wrangler to generate a bias report.
E. Use SageMaker Experiments to perform a data check
ANSWER : A,D
A. Precision = 0.91, Recall = 0.6
B. Precision = 0.61, Recall = 0.98
C. Precision = 0.7, Recall = 0.9
D. Precision = 0.98, Recall = 0.8
ANSWER : B
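As a refresher on how the precision/recall pairs in these options arise, both metrics come directly from confusion-matrix counts; the counts below are made-up values purely to show the arithmetic.

```python
# Made-up confusion-matrix counts, purely to show how a precision/recall pair
# like the ones in the options is computed.
true_positives = 98
false_positives = 2
false_negatives = 25

precision = true_positives / (true_positives + false_positives)  # TP / (TP + FP)
recall = true_positives / (true_positives + false_negatives)     # TP / (TP + FN)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# precision = 0.98, recall = 0.80
```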
A. Pick a date so that 80% of the data points precede the date. Assign that group of data points as the training dataset. Assign all the remaining data points to the validation dataset.
B. Pick a date so that 80% of the data points occur after the date. Assign that group of data points as the training dataset. Assign all the remaining data points to the validation dataset.
C. Starting from the earliest date in the dataset, pick eight data points for the training dataset and two data points for the validation dataset. Repeat this stratified sampling until no data points remain.
D. Sample data points randomly without replacement so that 80% of the data points are in the training dataset. Assign all the remaining data points to the validation dataset.
ANSWER : A
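Option A describes a chronological 80/20 split. A small pandas sketch of that idea, with an assumed file name and timestamp column:

```python
# Sketch of option A: a chronological 80/20 split where the earliest 80% of
# the data points form the training set and the most recent 20% form the
# validation set. File name and column name are assumptions.
import pandas as pd

df = pd.read_csv("observations.csv", parse_dates=["timestamp"])
df = df.sort_values("timestamp").reset_index(drop=True)

split_idx = int(len(df) * 0.8)
train = df.iloc[:split_idx]
validation = df.iloc[split_idx:]

cutoff_date = train["timestamp"].max()  # the "date" that 80% of points precede
print(len(train), len(validation), cutoff_date)
```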
A. Set up a private workforce that consists of the internal team. Use the private workforce and the SageMaker Ground Truth active learning feature to label the data. Use Amazon Rekognition Custom Labels for model training and hosting.
B. Set up a private workforce that consists of the internal team. Use the private workforce to label the data. Use Amazon Rekognition Custom Labels for model training and hosting.
C. Set up a private workforce that consists of the internal team. Use the private workforce and the SageMaker Ground Truth active learning feature to label the data. Use the SageMaker Object Detection algorithm to train a model. Use SageMaker batch transform for inference.
D. Set up a public workforce. Use the public workforce to label the data. Use the SageMaker Object Detection algorithm to train a model. Use SageMaker batch transform for inference.
ANSWER : A
A. Use Amazon Kinesis Data Firehose to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using Amazon QuickSight ML Insights. Visualize the data in QuickSight.
B. Use Amazon Kinesis Data Streams to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using the Random Cut Forest (RCF) trained model in Amazon SageMaker. Visualize the data in Amazon QuickSight.
C. Use Amazon Kinesis Data Firehose to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using the Random Cut Forest (RCF) trained model in Amazon SageMaker. Visualize the data in Amazon QuickSight.
D. Use Amazon Kinesis Data Streams to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using Amazon QuickSight ML Insights. Visualize the data in QuickSight.
ANSWER : A
A. Use an object detection algorithm to identify a visitor's hair in video frames. Pass the identified hair to a ResNet-50 algorithm to determine hair style and hair color.
B. Use an object detection algorithm to identify a visitor's hair in video frames. Pass the identified hair to an XGBoost algorithm to determine hair style and hair color.
C. Use a semantic segmentation algorithm to identify a visitor's hair in video frames. Pass the identified hair to a ResNet-50 algorithm to determine hair style and hair color.
D. Use a semantic segmentation algorithm to identify a visitor's hair in video frames. Pass the identified hair to an XGBoost algorithm to determine hair style and hair color.
ANSWER : C
A. Continue to use the SageMaker linear learner algorithm. Reduce the number of features with the SageMaker principal component analysis (PCA) algorithm.
B. Continue to use the SageMaker linear learner algorithm. Reduce the number of features with the scikit-learn multi-dimensional scaling (MDS) algorithm.
C. Continue to use the SageMaker linear learner algorithm. Set the predictor type to regressor.
D. Use the SageMaker k-means algorithm with k of less than 1,000 to train the model.
E. Use the SageMaker k-nearest neighbors (k-NN) algorithm. Set a dimension reduction target of less than 1,000 to train the model.
ANSWER : A,E
A. Alexa for Business
B. Amazon Connect
C. Amazon Lex
D. Amazon Polly
E. Amazon Comprehend
F. Amazon Transcribe
ANSWER : C,E,F
A. Call the CreateNotebookInstanceLifecycleConfig API operation
B. Create a new SageMaker notebook instance and mount the Amazon Elastic Block Store (Amazon EBS) volume from the original instance
C. Stop and then restart the SageMaker notebook instance
D. Call the UpdateNotebookInstanceLifecycleConfig API operation
ANSWER : C
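Option C works because notebook lifecycle configuration scripts run when the instance starts, so a stop/start cycle applies the change. A minimal boto3 sketch with an assumed instance name:

```python
# Sketch of option C: stop and restart a SageMaker notebook instance so its
# lifecycle configuration runs again. The instance name is assumed.
import boto3

sm = boto3.client("sagemaker")
name = "my-notebook-instance"

sm.stop_notebook_instance(NotebookInstanceName=name)
sm.get_waiter("notebook_instance_stopped").wait(NotebookInstanceName=name)

sm.start_notebook_instance(NotebookInstanceName=name)
```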
A. Use AWS Data Pipeline to transform the data and Amazon RDS to run queries.
B. Use AWS Glue to catalogue the data and Amazon Athena to run queries.
C. Use AWS Batch to run ETL on the data and Amazon Aurora to run the queries.
D. Use AWS Lambda to transform the data and Amazon Kinesis Data Analytics to run queries.
ANSWER : B
A. Use SageMaker Model Debugger to automatically debug the predictions, generate the explanation, and attach the explanation report.
B. Use AWS Lambda to provide feature importance and partial dependence plots. Use the plots to generate and attach the explanation report.
C. Use SageMaker Clarify to generate the explanation report. Attach the report to the predicted results.
D. Use custom Amazon CloudWatch metrics to generate the explanation report. Attach the report to the predicted results.
ANSWER : C
A. Use AWS Lambda to run a predefined SageMaker pipeline to perform the transformations on each new dataset that arrives in the S3 bucket.
B. Run an AWS Step Functions step and a predefined SageMaker pipeline to perform the transformations on each new dataset that arrives in the S3 bucket.
C. Use Apache Airflow to orchestrate a set of predefined transformations on each new dataset that arrives in the S3 bucket.
D. Configure Amazon EventBridge to run a predefined SageMaker pipeline to perform the transformations when new data is detected in the S3 bucket.
ANSWER : D
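Option D can be wired up with an EventBridge rule that matches S3 "Object Created" events (EventBridge notifications must be enabled on the bucket) and targets the predefined SageMaker pipeline. A hedged sketch with assumed names and ARNs:

```python
# Sketch of option D: an EventBridge rule that matches "Object Created" events
# from the dataset bucket and starts a predefined SageMaker pipeline.
# Bucket name, pipeline ARN, and role ARN are assumptions.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="run-pipeline-on-new-dataset",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["my-dataset-bucket"]}},
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="run-pipeline-on-new-dataset",
    Targets=[{
        "Id": "sagemaker-pipeline",
        "Arn": "arn:aws:sagemaker:us-east-1:111122223333:pipeline/transform-pipeline",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgePipelineRole",
        "SageMakerPipelineParameters": {
            "PipelineParameterList": [
                {"Name": "InputDataUrl", "Value": "s3://my-dataset-bucket/"},
            ]
        },
    }],
)
```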
A. Use SageMaker Pipelines to create an automated workflow that extracts fresh data, trains the model, and deploys a new version of the model.
B. Configure SageMaker Model Monitor with an accuracy threshold to check for model drift. Initiate an Amazon CloudWatch alarm when the threshold is exceeded. Connect the workflow in SageMaker Pipelines with the CloudWatch alarm to automatically initiate retraining.
C. Store the model predictions in Amazon S3. Create a daily SageMaker Processing job that reads the predictions from Amazon S3, checks for changes in model prediction accuracy, and sends an email notification if a significant change is detected.
D. Rerun the steps in the Jupyter notebook that is hosted on SageMaker Studio notebooks to retrain the model and redeploy a new version of the model.
E. Export the training and deployment code from the SageMaker Studio notebooks into a Python script. Package the script into an Amazon Elastic Container Service (Amazon ECS) task that an AWS Lambda function can initiate.
ANSWER : A,B
A. Amazon SageMaker DeepAR forecasting algorithm
B. Amazon SageMaker XGBoost algorithm
C. Amazon SageMaker Latent Dirichlet Allocation (LDA) algorithm
D. A convolutional neural network (CNN) and ResNet
ANSWER : D
A. Replace On-Demand Instances with Spot Instances
B. Configure model auto scaling dynamically to adjust the number of instances automatically.
C. Replace CPU-based EC2 instances with GPU-based EC2 instances.
D. Use multiple training instances.
E. Use a pre-trained version of the model. Run incremental training.
ANSWER : C,D
A. Use a ResNet model. Initiate full training mode by initializing the network with random weights.
B. Use an Inception model that is available with the SageMaker image classification algorithm.
C. Create a .lst file that contains a list of image files and corresponding class labels. Upload the .lst file to Amazon S3.
D. Initiate transfer learning. Train the model by using the images of less common species.
E. Use an augmented manifest file in JSON Lines format.
ANSWER : C,D
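Regarding option C, the SageMaker built-in image classification algorithm can read a .lst file in which each tab-separated line carries an image index, the class label, and the image path relative to the data prefix. A tiny sketch that writes such a file; labels and paths are made up.

```python
# Relating to option C: each .lst line is "index<TAB>label<TAB>path", with the
# path relative to the S3 prefix that holds the images. Example values only.
samples = [
    (0, "species_a/img_0001.jpg"),
    (1, "species_b/img_0002.jpg"),
    (0, "species_a/img_0003.jpg"),
]

with open("train.lst", "w") as lst_file:
    for index, (label, path) in enumerate(samples):
        lst_file.write(f"{index}\t{label}\t{path}\n")

# train.lst is then uploaded to Amazon S3 alongside the images.
```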
A. The historical sensor data does not include a significant number of data points and attributes for certain time periods.
B. The historical sensor data shows that simple rule-based thresholds can predict crane failures.
C. The historical sensor data contains failure data for only one type of crane model that is in operation and lacks failure data for most other types of crane that are in operation.
D. The historical sensor data from the cranes is available with high granularity for the last 3 years.
E. The historical sensor data contains the most common types of crane failures that the company wants to predict.
ANSWER : D,E
A. Use Amazon EMR Serverless with PySpark.
B. Use AWS Glue DataBrew.
C. Use Amazon SageMaker Studio Data Wrangler.
D. Use Amazon SageMaker Studio Notebook with Pandas.
ANSWER : C
A. AWS Glue jobs
B. Amazon EMR cluster
C. Amazon Athena
D. AWS Lambda
ANSWER : A
A. IP Insights
B. K-nearest neighbors (k-NN)
C. Linear learner with a logistic function
D. Random Cut Forest (RCF)
E. XGBoost
ANSWER : D,E
A. The data scientist should obtain a correlated equilibrium policy by formulating this problem as a multi-agent reinforcement learning problem.
B. The data scientist should obtain the optimal equilibrium policy by formulating this problem as a single-agent reinforcement learning problem.
C. Rather than finding an equilibrium policy, the data scientist should obtain accurate predictors of traffic flow by using historical data through a supervised learning approach.
D. Rather than finding an equilibrium policy, the data scientist should obtain accurate predictors of traffic flow by using unlabeled simulated data representing the new traffic patterns in the city and applying an unsupervised learning approach.
ANSWER : A
A. Image Classification
B. Optical Character Recognition (OCR)
C. Object Detection
D. Pose estimation
E. Image Generative Adversarial Networks (GANs)
ANSWER : C,D
A. Configure an S3 event notification that invokes an AWS Lambda function when new documents are created. Configure the Lambda function to create three SageMaker batch transform jobs, one batch transform job for each model for each document.
B. Deploy all the models to a single SageMaker endpoint. Treat each model as a production variant. Configure an S3 event notification that invokes an AWS Lambda function when new documents are created. Configure the Lambda function to call each production variant and return the results of each model.
C. Deploy each model to its own SageMaker endpoint. Configure an S3 event notification that invokes an AWS Lambda function when new documents are created. Configure the Lambda function to call each endpoint and return the results of each model.
D. Deploy each model to its own SageMaker endpoint. Create three AWS Lambda functions. Configure each Lambda function to call a different endpoint and return the results. Configure three S3 event notifications to invoke the Lambda functions when new documents are created.
ANSWER : B
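Option B hosts the three models as production variants behind one endpoint, and the Lambda function can address each one with the TargetVariant parameter of InvokeEndpoint. A hedged sketch with assumed endpoint name, variant names, and payload:

```python
# Sketch of option B: the three models live behind one endpoint as production
# variants; the Lambda function calls each variant via TargetVariant.
import boto3

runtime = boto3.client("sagemaker-runtime")


def lambda_handler(event, context):
    document = event["body"]  # assumed: contents of the newly created document
    results = {}
    for variant in ("model-a", "model-b", "model-c"):
        response = runtime.invoke_endpoint(
            EndpointName="document-models",
            ContentType="application/json",
            Body=document,
            TargetVariant=variant,
        )
        results[variant] = response["Body"].read().decode("utf-8")
    return results
```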
A. Convert current documents to SSML with pronunciation tags
B. Create an appropriate pronunciation lexicon.
C. Output speech marks to guide in pronunciation
D. Use Amazon Lex to preprocess the text files for pronunciation
ANSWER : B
A. Create an Amazon SageMaker notebook instance for pulling all the models from Amazon S3 using the boto3 library. Remove the existing instances and use the notebook to perform a SageMaker batch transform for performing inferences offline for all the possible users in all the cities. Store the results in different files in Amazon S3. Point the web client to the files.
B. Prepare an Amazon SageMaker Docker container based on the open-source multi-model server. Remove the existing instances and create a multi-model endpoint in SageMaker instead, pointing to the S3 bucket containing all the models. Invoke the endpoint from the web client at runtime, specifying the TargetModel parameter according to the city of each request.
C. Keep only a single EC2 instance for hosting all the models. Install a model server in the instance and load each model by pulling it from Amazon S3. Integrate the instance with the web client using Amazon API Gateway for responding to the requests in real time, specifying the target resource according to the city of each request.
D. Prepare a Docker container based on the prebuilt images in Amazon SageMaker. Replace the existing instances with separate SageMaker endpoints, one for each city where the company operates. Invoke the endpoints from the web client, specifying the URL and EndpointName parameter according to the city of each request.
ANSWER : B
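Option B's multi-model endpoint loads per-city model artifacts from S3 on demand, and the caller selects one per request via TargetModel. A minimal sketch with assumed names and payload:

```python
# Sketch of option B: a multi-model endpoint serves all city models from S3,
# and each request picks its model with TargetModel. Names and payload assumed.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="per-city-models",
    ContentType="application/json",
    Body=json.dumps({"features": [3, 120, 0.7]}),
    TargetModel="berlin.tar.gz",  # the model artifact for the requested city
)
print(response["Body"].read().decode("utf-8"))
```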
A. Configure the AWS Data Exchange product as a producer for an Amazon Kinesis data stream. Use an Amazon Kinesis Data Firehose delivery stream to transfer the data to Amazon S3. Run an AWS Glue job that will merge the existing business data with the Athena table. Write the result set back to Amazon S3.
B. Use an S3 event on the AWS Data Exchange S3 bucket to invoke an AWS Lambda function. Program the Lambda function to use Amazon SageMaker Data Wrangler to merge the existing business data with the Athena table. Write the result set back to Amazon S3.
C. Use an S3 event on the AWS Data Exchange S3 bucket to invoke an AWS Lambda function. Program the Lambda function to run an AWS Glue job that will merge the existing business data with the Athena table. Write the results back to Amazon S3.
D. Provision an Amazon Redshift cluster. Subscribe to the AWS Data Exchange product and use the product to create an Amazon Redshift table. Merge the data in Amazon Redshift. Write the results back to Amazon S3.
ANSWER : B
A. Modify the HPO configuration as follows:
Select the most accurate hyperparameter configuration from this HPO job.
B. Run three different HPO jobs that use different learning rates from the following intervals for MinValue and MaxValue, while using the same number of training jobs for each HPO job: [0.01, 0.1], [0.001, 0.01], [0.0001, 0.001]. Select the most accurate hyperparameter configuration from these three HPO jobs.
C. Modify the HPO configuration as follows:
Select the most accurate hyperparameter configuration from this training job.
D. Run three different HPO jobs that use different learning rates from the following intervals for MinValue and MaxValue, and divide the number of training jobs for each HPO job by three: [0.01, 0.1], [0.001, 0.01], [0.0001, 0.001]. Select the most accurate hyperparameter configuration from these three HPO jobs.
ANSWER : C
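The HPO configurations referenced in options A and C are not reproduced here. As general background, a single SageMaker tuning job can cover a learning-rate range spanning several orders of magnitude by declaring the range with a logarithmic scaling type; the sketch below uses an assumed XGBoost estimator, objective metric, and role purely for illustration.

```python
# A single tuning job searching a learning-rate range across several orders of
# magnitude via logarithmic scaling. Estimator, metric, and role are assumed.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # assumed role

estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, "1.5-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",
    hyperparameter_ranges={
        # One logarithmic range covering 0.0001 to 0.1 in a single job.
        "eta": ContinuousParameter(0.0001, 0.1, scaling_type="Logarithmic"),
    },
    max_jobs=30,
    max_parallel_jobs=3,
)
# tuner.fit({"train": train_input, "validation": validation_input})
```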
A. Train a model by using a user-based collaborative filtering algorithm on Amazon SageMaker. Host the model on a SageMaker real-time endpoint. Configure an Amazon API Gateway API and an AWS Lambda function to handle real-time inference requests that the web application sends. Exclude the items that the user previously purchased from the results before sending the results back to the web application.
B. Use an Amazon Personalize PERSONALIZED_RANKING recipe to train a model. Create a real-time filter to exclude items that the user previously purchased. Create and deploy a campaign on Amazon Personalize. Use the GetPersonalizedRanking API operation to get the real-time recommendations.
C. Use an Amazon Personalize USER_PERSONALIZATION recipe to train a model. Create a real-time filter to exclude items that the user previously purchased. Create and deploy a campaign on Amazon Personalize. Use the GetRecommendations API operation to get the real-time recommendations.
D. Train a neural collaborative filtering model on Amazon SageMaker by using GPU instances. Host the model on a SageMaker real-time endpoint. Configure an Amazon API Gateway API and an AWS Lambda function to handle real-time inference requests that the web application sends. Exclude the items that the user previously purchased from the results before sending the results back to the web application.
ANSWER : C
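Option C combines an Amazon Personalize USER_PERSONALIZATION campaign with a real-time filter that excludes previously purchased items. A hedged boto3 sketch with assumed ARNs, event type, and user ID:

```python
# Sketch of option C: a Personalize filter that excludes previously purchased
# items, applied to real-time recommendations from a deployed campaign.
import boto3

personalize = boto3.client("personalize")
personalize_runtime = boto3.client("personalize-runtime")

filter_response = personalize.create_filter(
    name="exclude-purchased-items",
    datasetGroupArn="arn:aws:personalize:us-east-1:111122223333:dataset-group/retail",
    filterExpression='EXCLUDE ItemID WHERE Interactions.EVENT_TYPE IN ("purchase")',
)
# (The filter must reach ACTIVE status before it can be used.)

recommendations = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:111122223333:campaign/user-personalization",
    userId="user-123",
    filterArn=filter_response["filterArn"],
    numResults=10,
)
print([item["itemId"] for item in recommendations["itemList"]])
```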
A. Perform incremental training to update the model. Activate Amazon SageMaker Model Monitor to detect model performance issues and to send notifications.
B. Use Amazon SageMaker Model Governance. Configure Model Governance to automatically adjust model hyperparameters. Create a performance threshold alarm in Amazon CloudWatch to send notifications.
C. Use Amazon SageMaker Debugger with appropriate thresholds. Configure Debugger to send Amazon CloudWatch alarms to alert the team. Retrain the model by using only data from the previous several months.
D. Use only data from the previous several months to perform incremental training to update the model. Use Amazon SageMaker Model Monitor to detect model performance issues and to send notifications.
ANSWER : A
A. Specificity
B. False positive rate
C. Accuracy
D. F1 score
E. True positive rate
ANSWER : D,E
A. A/B testing
B. Canary release
C. Shadow deployment
D. Blue/green deployment
ANSWER : C
A. Use Amazon SageMaker Ground Truth to sort the data into two groups named "enrolled" or "not enrolled."
B. Use a forecasting algorithm to run predictions.
C. Use a regression algorithm to run predictions.
D. Use a classification algorithm to run predictions
E. Use the built-in Amazon SageMaker k-means algorithm to cluster the data into two groups named "enrolled" or "not enrolled."
ANSWER : A,D
A. Use a deep convolutional neural network (CNN) classifier with the images as input. Include a linear output layer that outputs the probability that an image contains a car.
B. Use a deep convolutional neural network (CNN) classifier with the images as input. Include a softmax output layer that outputs the probability that an image contains a car.
C. Use a deep multilayer perceptron (MLP) classifier with the images as input. Include a linear output layer that outputs the probability that an image contains a car.
D. Use a deep multilayer perceptron (MLP) classifier with the images as input. Include a softmax output layer that outputs the probability that an image contains a car.
ANSWER : A
A. Exponential transformation
B. Logarithmic transformation
C. Polynomial transformation
D. Sinusoidal transformation
ANSWER : B
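Option B's logarithmic transformation is the usual way to compress a long right tail in a skewed feature; a minimal NumPy illustration with made-up values:

```python
# A logarithmic transformation compresses a long right tail in a skewed
# feature; the values below are made up purely for illustration.
import numpy as np

skewed = np.array([1, 3, 10, 250, 12000], dtype=float)
transformed = np.log1p(skewed)  # log(1 + x) keeps zero values well-defined
print(transformed.round(2))     # [0.69 1.39 2.4  5.53 9.39]
```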
A. Use AWS Lambda to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using Amazon Kinesis Data Firehose.
B. Use Amazon Kinesis Data Firehose to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using a short-lived Amazon EMR cluster.
C. Use Amazon Kinesis Data Analytics to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using Amazon Kinesis Data Firehose.
D. Use Amazon Kinesis Data Firehose to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using AWS Lambda.
ANSWER : C
A. Create an IAM role in the development account that the integration account and production account can assume. Attach IAM policies to the role that allow access to the feature repository and the S3 buckets.
B. Share the feature repository that is associated with the S3 buckets from the development account to the integration account and the production account by using AWS Resource Access Manager (AWS RAM).
C. Use AWS Security Token Service (AWS STS) from the integration account and the production account to retrieve credentials for the development account.
D. Set up S3 replication between the development S3 buckets and the integration and production S3 buckets.
E. Create an AWS PrivateLink endpoint in the development account for SageMaker.
ANSWER : A,B
A. An AWS KMS key policy that allows access to the customer master key (CMK)
B. A SageMaker notebook security group that allows access to Amazon S3
C. An IAM role that allows access to the specific S3 bucket
D. A permissive S3 bucket policy
E. An S3 bucket owner that matches the notebook owner
F. A SageMaker notebook subnet ACL that allows traffic to Amazon S3.
ANSWER : A,B,C