Associate-Cloud-Engineer Google Cloud Certified - Associate Cloud Engineer Dumps
If you are looking for free Associate-Cloud-Engineer dumps, then here we have some sample questions and answers available. You can prepare from our Google Associate-Cloud-Engineer exam question notes and practice with this test. Check our updated Associate-Cloud-Engineer exam dumps below.
DumpsGroup is a top-class study material provider, and our inclusive range of Associate-Cloud-Engineer real exam questions can be your key to passing the Google Cloud Certified certification exam on the first attempt. We have excellent material covering almost all the topics of the Google Associate-Cloud-Engineer exam. You can get this material in Google Associate-Cloud-Engineer PDF and Associate-Cloud-Engineer practice test engine formats, designed to resemble the real exam questions. Free Associate-Cloud-Engineer questions and answers and free Google Associate-Cloud-Engineer study material are available here to get an idea of the quality and accuracy of our study material.
Sample Question 4
You have a number of applications that have bursty workloads and are heavily dependent
on topics to decouple publishing systems from consuming systems. Your company would
like to go serverless to enable developers to focus on writing code without worrying about
infrastructure. Your solution architect has already identified Cloud Pub/Sub as a suitable
alternative for decoupling systems. You have been asked to identify a suitable GCP
Serverless service that is easy to use with Cloud Pub/Sub. You want the ability to scale
down to zero when there is no traffic in order to minimize costs. You want to follow Google
recommended practices. What should you suggest?
A. Cloud Run for Anthos
B. Cloud Run
C. App Engine Standard
D. Cloud Functions
Answer: D
Explanation:
Cloud Functions is Google Cloud's event-driven serverless compute platform that lets you run your code in the cloud without having to provision servers. Cloud Functions scales up or down, so you pay only for the compute resources you use. Cloud Functions has excellent integration with Cloud Pub/Sub, lets you scale down to zero, and is recommended by Google as the ideal serverless platform to use when dependent on Cloud Pub/Sub: "If you're building a simple API (a small set of functions to be accessed via HTTP or Cloud Pub/Sub), we recommend using Cloud Functions."
Ref: https://cloud.google.com/serverless-options
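The integration described above can be sketched as a background function that Cloud Functions invokes once per Pub/Sub message. This is a minimal local sketch using the documented (event, context) background-function signature; the simulated event envelope below is fabricated for illustration.

```python
import base64

def handle_pubsub_message(event, context):
    """Sketch of a background Cloud Function triggered by Pub/Sub.

    `event` carries the Pub/Sub message; its `data` field is
    base64-encoded. Cloud Functions invokes this once per message and
    scales instances down to zero when the topic is idle.
    """
    payload = base64.b64decode(event["data"]).decode("utf-8")
    print(f"Received: {payload}")
    return payload

# Local simulation of the event envelope Cloud Functions would pass in:
fake_event = {"data": base64.b64encode(b"order-created").decode("utf-8")}
result = handle_pubsub_message(fake_event, context=None)
```

Deploying such a function with a Pub/Sub trigger requires no infrastructure configuration beyond naming the topic, which is why it fits the serverless requirement in the question.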
Sample Question 5
You need to track and verify modifications to a set of Google Compute Engine instances in
your Google Cloud project. In particular, you want to verify OS system patching events on
your virtual machines (VMs). What should you do?
A. Review the Compute Engine activity logs. Select and review the Admin Event logs.
B. Review the Compute Engine activity logs. Select and review the System Event logs.
C. Install the Cloud Logging agent. In Cloud Logging, review the Compute Engine syslog logs.
D. Install the Cloud Logging agent. In Cloud Logging, review the Compute Engine operation logs.
Answer: A
Sample Question 6
Your web application has been running successfully on Cloud Run for Anthos. You want to
evaluate an updated version of the application with a specific percentage of your
production users (canary deployment). What should you do?
A. Create a new service with the new version of the application. Split traffic between this version and the version that is currently running.
B. Create a new revision with the new version of the application. Split traffic between this version and the version that is currently running.
C. Create a new service with the new version of the application. Add an HTTP Load Balancer in front of both services.
D. Create a new revision with the new version of the application. Add an HTTP Load Balancer in front of both revisions.
Sample Question 7
Your company has a Google Cloud Platform project that uses BigQuery for data
warehousing. Your data science team changes frequently and has few members. You need
to allow members of this team to perform queries. You want to follow Google-recommended practices. What should you do?
A. 1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery jobUser role to the group.
B. 1. Create an IAM entry for each data scientist's user account. 2. Assign the BigQuery dataViewer user role to the group.
C. 1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery jobUser role to the group.
D. 1. Create a dedicated Google group in Cloud Identity. 2. Add each data scientist's user account to the group. 3. Assign the BigQuery dataViewer user role to the group.
Answer: C
Explanation:
BigQuery Data Viewer
(roles/bigquery.dataViewer)
When applied to a table or view, this role provides permissions to:
Read data and metadata from the table or view.
This role cannot be applied to individual models or routines.
When applied to a dataset, this role provides permissions to:
Read the dataset's metadata and list tables in the dataset.
Read data and metadata from the dataset's tables.
When applied at the project or organization level, this role can also enumerate all datasets
in the project. Additional roles, however, are necessary to allow the running of jobs.
Lowest-level resources where you can grant this role:
Table
View
BigQuery Job User
(roles/bigquery.jobUser)
Provides permissions to run jobs, including queries, within the project.
Lowest-level resources where you can grant this role:
Project
Ref: https://cloud.google.com/bigquery/docs/access-control#bigquery.jobUser
Note that dataViewer alone is not sufficient to run jobs; an additional role such as jobUser is needed.
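The recommended setup grants the roles to the group rather than to individual users, so team churn only requires group membership changes. As a sketch, the group address below is hypothetical, and the binding shape matches the IAM policy JSON that `gcloud projects get-iam-policy` / `set-iam-policy` read and write:

```python
import json

# Hypothetical Cloud Identity group for the data science team.
GROUP = "group:data-scientists@example.com"

# One binding per role, each listing the group as the member. Team
# churn is then handled by group membership, not by IAM policy edits.
policy_fragment = {
    "bindings": [
        {"role": "roles/bigquery.jobUser", "members": [GROUP]},
        {"role": "roles/bigquery.dataViewer", "members": [GROUP]},
    ]
}
print(json.dumps(policy_fragment, indent=2))
```

The dataViewer binding is shown only as an assumption for teams that also need to read dataset contents; the question itself is answered by jobUser.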
Sample Question 8
Your company is moving its entire workload to Compute Engine. Some servers should be
accessible through the Internet, and other servers should only be accessible over the
internal network. All servers need to be able to talk to each other over specific ports and
protocols. The current on-premises network relies on a demilitarized zone (DMZ) for the
public servers and a Local Area Network (LAN) for the private servers. You need to design the networking infrastructure on
Google Cloud to match these requirements. What should you do?
A. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
B. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
C. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
D. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
Sample Question 9
The DevOps group in your organization needs full control of Compute Engine resources in
your development project. However, they should not have permission to create or update
any other resources in the project. You want to follow Google's recommendations for
setting permissions for the DevOps group. What should you do?
A. Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.
B. Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.
C. Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.
D. Grant the basic role roles/editor to the DevOps group.
Answer: A
Sample Question 10
You are designing an application that lets users upload and share photos. You expect your
application to grow really fast and you are targeting a worldwide audience. You want to
delete uploaded photos after 30 days. You want to minimize costs while ensuring your
application is highly available. Which GCP storage solution should you choose?
A. Persistent SSD on VM instances.
B. Cloud Filestore.
C. Multiregional Cloud Storage bucket.
D. Cloud Datastore database.
Answer: C
Explanation:
Cloud Storage allows worldwide storage and retrieval of any amount of data at any time. We don't need to set up autoscaling ourselves; Cloud Storage scaling is managed by GCP. Cloud Storage is an object store, so it is suitable for storing photos, and its worldwide storage and retrieval caters well to our worldwide audience. Cloud Storage provides lifecycle rules that can be configured to automatically delete objects older than 30 days, which also fits our requirements. Finally, Cloud Storage offers several storage classes, such as Nearline Storage ($0.01 per GB per month), Coldline Storage ($0.007 per GB per month), and Archive Storage ($0.004 per GB per month), which are significantly cheaper than any of the other options above.
Ref: https://cloud.google.com/storage/docs
Ref: https://cloud.google.com/storage/pricing
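Using the per-GB monthly prices quoted above (which may not reflect current pricing; check the pricing page), the relative cost of keeping, say, 1 TB of photos for a month in each class works out as:

```python
# Prices per GB per month as quoted in the explanation above; these
# are illustrative and may be outdated.
PRICES = {"nearline": 0.01, "coldline": 0.007, "archive": 0.004}

def monthly_cost(gb, storage_class):
    """Monthly storage cost in USD for `gb` gigabytes in one class."""
    return round(gb * PRICES[storage_class], 2)

for cls in PRICES:
    print(cls, monthly_cost(1024, cls))  # 1 TB = 1024 GB
```

Note that for the question's 30-day-hot, frequently accessed photos, Standard multiregional storage is still the right class; the cheaper classes carry retrieval costs and minimum storage durations.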
Sample Question 11
You are migrating a business critical application from your local data center into Google
Cloud. As part of your high-availability strategy, you want to ensure that any data used by
the application will be immediately available if a zonal failure occurs. What should you do?
A. Store the application data on a zonal persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
B. Store the application data on a zonal persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
C. Store the application data on a regional persistent disk. Create a snapshot schedule for the disk. If an outage occurs, create a new disk from the most recent snapshot and attach it to a new VM in another zone.
D. Store the application data on a regional persistent disk. If an outage occurs, create an instance in another zone with this disk attached.
Answer: D
Sample Question 12
Your company has developed a new application that consists of multiple microservices.
You want to deploy the application to Google Kubernetes Engine (GKE), and you want to
ensure that the cluster can scale as more applications are deployed in the future. You want
to avoid manual intervention when each new application is deployed. What should you do?
A. Deploy the application on GKE, and add a HorizontalPodAutoscaler to the deployment.
B. Deploy the application on GKE, and add a VerticalPodAutoscaler to the deployment.
C. Create a GKE cluster with autoscaling enabled on the node pool. Set a minimum and maximum for the size of the node pool.
D. Create a separate node pool for each application, and deploy each application to its dedicated node pool.
Sample Question 13
You want to verify the IAM users and roles assigned within a GCP project named myproject. What should you do?
A. Run gcloud iam roles list. Review the output section.
B. Run gcloud iam service-accounts list. Review the output section.
C. Navigate to the project and then to the IAM section in the GCP Console. Review the members and roles.
D. Navigate to the project and then to the Roles section in the GCP Console. Review the roles and status.
Answer: C
Explanation:
Logging on to the console and following these steps shows all the assigned members and roles.
Sample Question 14
Your company has embraced a hybrid cloud strategy where some of the applications are
deployed on Google Cloud. A Virtual Private Network (VPN) tunnel connects your Virtual
Private Cloud (VPC) in Google Cloud with your company's on-premises network. Multiple
applications in Google Cloud need to connect to an on-premises database server, and you
want to avoid having to change the IP configuration in all of your applications when the IP
of the database changes.
What should you do?
A. Configure Cloud NAT for all subnets of your VPC to be used when egressing from the VM instances.
B. Create a private zone on Cloud DNS, and configure the applications with the DNS name.
C. Configure the IP of the database as custom metadata for each instance, and query the metadata server.
D. Query the Compute Engine internal DNS from the applications to retrieve the IP of the database.
Answer: B
Explanation:
Forwarding zones: Cloud DNS forwarding zones let you configure target name servers for specific private zones. Using a forwarding zone is one way to implement outbound DNS forwarding from your VPC network. A Cloud DNS forwarding zone is a special type of Cloud DNS private zone. Instead of creating records within the zone, you specify a set of forwarding targets. Each forwarding target is an IP address of a DNS server, located in your VPC network or in an on-premises network connected to your VPC network by Cloud VPN or Cloud Interconnect.
https://cloud.google.com/nat/docs/overview
DNS configuration: your on-premises network must have DNS zones and records configured so that Google domain names resolve to the set of IP addresses for either private.googleapis.com or restricted.googleapis.com. You can create Cloud DNS managed private zones and use a Cloud DNS inbound server policy, or you can configure on-premises name servers. For example, you can use BIND or Microsoft Active Directory DNS.
https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid#configdomain
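The value of answer B is the indirection: every application holds one stable DNS name, and only the private zone's record changes when the database's IP changes. A toy illustration of that indirection (names and addresses invented):

```python
# Toy stand-in for a Cloud DNS private zone: name -> IP record set.
private_zone = {"db.corp.internal.": "10.128.0.12"}

def resolve(name):
    """What each application does: look up the stable name at connect time."""
    return private_zone[name]

app_config = {"db_host": "db.corp.internal."}  # app config never changes

before = resolve(app_config["db_host"])
private_zone["db.corp.internal."] = "10.128.0.99"  # DB re-IPed: one record edit
after = resolve(app_config["db_host"])
print(before, "->", after)
```

One A-record update in the private zone repoints every application at once, which is exactly what hard-coded IPs or per-instance metadata cannot do.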
Sample Question 15
You have developed a containerized web application that will serve internal colleagues
during business hours. You want to ensure that no costs are incurred outside of the hours
the application is used. You have just created a new Google Cloud project and want to
deploy the application. What should you do?
A. Deploy the container on Cloud Run for Anthos, and set the minimum number of instances to zero.
B. Deploy the container on Cloud Run (fully managed), and set the minimum number of instances to zero.
C. Deploy the container on App Engine flexible environment with autoscaling, and set the value min_instances to zero in app.yaml.
D. Deploy the container on App Engine flexible environment with manual scaling, and set the value instances to zero in app.yaml.
Sample Question 16
The storage costs for your application logs have far exceeded the project budget. The logs
are currently being retained indefinitely in the Cloud Storage bucket myapp-gcp-ace-logs.
You have been asked to remove logs older than 90 days from your Cloud Storage bucket.
You want to optimize ongoing Cloud Storage spend. What should you do?
A. Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Schedule the script with cron.
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil lifecycle set config-json-file.
C. Write a lifecycle management rule in XML and push it to the bucket with gsutil lifecycle set config-xml-file.
D. Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Repeat this process every morning.
Answer: B
Explanation:
"Write a lifecycle management rule in XML and push it to the bucket with gsutil lifecycle set config-xml-file" is not right. gsutil lifecycle set lets you set the lifecycle configuration on one or more buckets based on the configuration file provided, but XML is not a supported format for the configuration file.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
"Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Repeat this process every morning" is not right. This manual approach is error-prone, time-consuming, and expensive. Cloud Storage provides lifecycle management rules that achieve this with minimal effort.
"Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Schedule the script with cron" is not right, for the same reason.
"Write a lifecycle management rule in JSON and push it to the bucket with gsutil lifecycle set config-json-file" is the right answer.
You can assign a lifecycle management configuration to a bucket. The configuration contains a set of rules which apply to current and future objects in the bucket. When an object meets the criteria of one of the rules, Cloud Storage automatically performs a specified action on the object. One of the supported actions is to delete objects, so you can set up a lifecycle rule to delete objects older than 90 days. gsutil lifecycle set applies the lifecycle configuration to the bucket based on the configuration file. JSON is the only supported format for the configuration file. The config-json-file specified on the command line should be a path to a local file containing the lifecycle configuration JSON document.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
Ref: https://cloud.google.com/storage/docs/lifecycle
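As a sketch, the 90-day delete rule can be expressed as a lifecycle configuration JSON document (bucket name taken from the question) and then applied with the gsutil command shown in the comment:

```python
import json

# Lifecycle configuration matching the documented schema: delete any
# object once its age exceeds 90 days.
lifecycle = {
    "rule": [
        {"action": {"type": "Delete"}, "condition": {"age": 90}}
    ]
}

# Write the config file that gsutil expects on the command line.
with open("lifecycle.json", "w") as f:
    json.dump(lifecycle, f, indent=2)

# Then apply it (not run here):
#   gsutil lifecycle set lifecycle.json gs://myapp-gcp-ace-logs
print(json.dumps(lifecycle))
```

Once set, Cloud Storage enforces the rule itself; no cron job or script is involved.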
Sample Question 17
Your organization is a financial company that needs to store audit log files for 3 years. Your
organization has hundreds of Google Cloud projects. You need to implement a cost-effective approach for log file retention. What should you do?
A. Create an export to the sink that saves logs from Cloud Audit to BigQuery.
B. Create an export to the sink that saves logs from Cloud Audit to a Coldline Storage bucket.
C. Write a custom script that uses the Logging API to copy the logs from Stackdriver logs to BigQuery.
D. Export these logs to Cloud Pub/Sub and write a Cloud Dataflow pipeline to store logs to Cloud SQL.
Answer: B
Explanation:
Coldline Storage is the perfect service to store audit logs from all the projects and is very cost-efficient as well. Coldline Storage is a very low-cost, highly durable storage service for storing infrequently accessed data.
Sample Question 18
You need to verify that a Google Cloud Platform service account was created at a
particular time. What should you do?
A. Filter the Activity log to view the Configuration category. Filter the Resource type to Service Account.
B. Filter the Activity log to view the Configuration category. Filter the Resource type to Google Project.
C. Filter the Activity log to view the Data Access category. Filter the Resource type to Service Account.
D. Filter the Activity log to view the Data Access category. Filter the Resource type to Google Project.
Sample Question 19
You have two subnets (subnet-a and subnet-b) in the default VPC. Your database servers
are running in subnet-a. Your application servers and web servers are running in subnet-b.
You want to configure a firewall rule that only allows database traffic from the application
servers to the database servers. What should you do?
A. • Create service accounts sa-app and sa-db. • Associate the service account sa-app with the application servers and the service account sa-db with the database servers. • Create an ingress firewall rule to allow network traffic from source service account sa-app to target service account sa-db.
B. • Create network tags app-server and db-server. • Add the app-server tag to the application servers and the db-server tag to the database servers. • Create an egress firewall rule to allow network traffic from source network tag app-server to target network tag db-server.
C. • Create a service account sa-app and a network tag db-server. • Associate the service account sa-app with the application servers and the network tag db-server with the database servers. • Create an ingress firewall rule to allow network traffic from source VPC IP addresses and target the subnet-a IP addresses.
D. • Create a network tag app-server and a service account sa-db. • Add the tag to the application servers and associate the service account with the database servers. • Create an egress firewall rule to allow network traffic from source network tag app-server to target service account sa-db.
Answer: C
Sample Question 20
Your company is using Google Workspace to manage employee accounts. Anticipated
growth will increase the number of personnel from 100 employees to 1,000 employees
within 2 years. Most employees will need access to your company's Google Cloud account.
The systems and processes will need to support 10x growth without performance
degradation, unnecessary complexity, or security issues. What should you do?
A. Migrate the users to Active Directory. Connect the Human Resources system to Active Directory. Turn on Google Cloud Directory Sync (GCDS) for Cloud Identity. Turn on Identity Federation from Cloud Identity to Active Directory.
B. Organize the users in Cloud Identity into groups. Enforce multi-factor authentication in Cloud Identity.
C. Turn on identity federation between Cloud Identity and Google Workspace. Enforce multi-factor authentication for domain-wide delegation.
D. Use a third-party identity provider service through federation. Synchronize the users from Google Workspace to the third-party provider in real time.
Answer: B
Sample Question 21
You built an application on your development laptop that uses Google Cloud services. Your
application uses Application Default Credentials for authentication and works fine on your
development laptop. You want to migrate this application to a Compute Engine virtual
machine (VM) and set up authentication using Google- recommended practices and
minimal changes. What should you do?
A. Assign appropriate access for Google services to the service account used by the Compute Engine VM.
B. Create a service account with appropriate access for Google services, and configure the application to use this account.
C. Store credentials for service accounts with appropriate access for Google services in a config file, and deploy this config file with your application.
D. Store credentials for your user account with appropriate access for Google services in a config file, and deploy this config file with your application.
Answer: B
Explanation:
In general, Google recommends that each instance that needs to call a Google API should run as a service account with the minimum permissions necessary for that instance to do its job. In practice, this means you should configure service accounts for your instances with the following process:
1. Create a new service account rather than using the Compute Engine default service account.
2. Grant IAM roles to that service account for only the resources that it needs.
3. Configure the instance to run as that service account.
4. Grant the instance the https://www.googleapis.com/auth/cloud-platform scope to allow full access to all Google Cloud APIs, so that the IAM permissions of the instance are completely determined by the IAM roles of the service account.
Avoid granting more access than necessary, and regularly check your service account permissions to make sure they are up to date.
https://cloud.google.com/compute/docs/access/create-enable-service-accountsfor-instances#best_pract...
Reference: https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances
Sample Question 22
Your company developed a mobile game that is deployed on Google Cloud. Gamers are
connecting to the game with their personal phones over the Internet. The game sends UDP
packets to update the servers about the gamers' actions while they are playing in
multiplayer mode. Your game backend can scale over multiple virtual machines (VMs), and
you want to expose the VMs over a single IP address. What should you do?
A. Configure an SSL Proxy load balancer in front of the application servers.
B. Configure an Internal UDP load balancer in front of the application servers.
C. Configure an External HTTP(S) load balancer in front of the application servers.
D. Configure an External Network load balancer in front of the application servers.
Sample Question 23
You are designing an application that uses WebSockets and HTTP sessions that are not
distributed across the web servers. You want to ensure the application runs properly on
Google Cloud Platform. What should you do?
A. Meet with the cloud enablement team to discuss load balancer options.
B. Redesign the application to use a distributed user session service that does not rely on WebSockets and HTTP sessions.
C. Review the encryption requirements for WebSocket connections with the security team.
D. Convert the WebSocket code to use HTTP streaming.
Answer: A
Explanation:
Google HTTP(S) Load Balancing has native support for the WebSocket protocol when you use HTTP or HTTPS, not HTTP/2, as the protocol to the backend.
Ref: https://cloud.google.com/load-balancing/docs/https#websocket_proxy_support
So the next step is to meet with the cloud enablement team to discuss load balancer options. We don't need to convert the WebSocket code to use HTTP streaming or redesign the application, as WebSocket support is offered by Google HTTP(S) Load Balancing. Reviewing the encryption requirements is a good idea, but it has nothing to do with WebSockets.
Sample Question 24
You are managing a project for the Business Intelligence (BI) department in your company. A data pipeline ingests data into BigQuery via streaming. You want the users in the BI department to be able to run custom SQL queries against the latest data in BigQuery.
What should you do?
A. Create a Data Studio dashboard that uses the related BigQuery tables as a source and give the BI team view access to the Data Studio dashboard.
B. Create a Service Account for the BI team and distribute a new private key to each member of the BI team.
C. Use Cloud Scheduler to schedule a batch Dataflow job to copy the data from BigQuery to the BI team's internal data warehouse.
D. Assign the IAM role of BigQuery User to a Google Group that contains the members of the BI team.
Answer: D
Explanation:
When applied to a dataset, this role provides the ability to read the dataset's
metadata and list tables in the dataset. When applied to a project, this role also provides
the ability to run jobs, including queries, within the project. A member with this role can
enumerate their own jobs, cancel their own jobs, and enumerate datasets within a project.
Additionally, allows the creation of new datasets within the project; the creator is granted
the BigQuery Data Owner role (roles/bigquery.dataOwner) on these new datasets.
https://cloud.google.com/bigquery/docs/access-control
Sample Question 25
You want to deploy an application on Cloud Run that processes messages from a Cloud
Pub/Sub topic. You want to follow Google-recommended practices. What should you do?
A. 1. Create a Cloud Function that uses a Cloud Pub/Sub trigger on that topic. 2. Call your application on Cloud Run from the Cloud Function for every message.
B. 1. Grant the Pub/Sub Subscriber role to the service account used by Cloud Run. 2. Create a Cloud Pub/Sub subscription for that topic. 3. Make your application pull messages from that subscription.
C. 1. Create a service account. 2. Give the Cloud Run Invoker role to that service account for your Cloud Run application. 3. Create a Cloud Pub/Sub subscription that uses that service account and uses your Cloud Run application as the push endpoint.
D. 1. Deploy your application on Cloud Run on GKE with the connectivity set to Internal. 2. Create a Cloud Pub/Sub subscription for that topic. 3. In the same Google Kubernetes Engine cluster as your application, deploy a container that takes the messages and sends them to your application.
Answer: C
Explanation: https://cloud.google.com/run/docs/tutorials/pubsub#integrating-pubsub
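With a push subscription, Pub/Sub POSTs a JSON envelope to the Cloud Run service, authenticated as the invoker service account. A minimal sketch of the request-body handling (envelope shape as documented; the surrounding HTTP framework is left out as an assumption):

```python
import base64
import json

def handle_push(body: bytes) -> str:
    """Parse the Pub/Sub push envelope a Cloud Run endpoint receives.

    In a real service, returning normally would map to an HTTP 2xx,
    which acknowledges the message; an error response makes Pub/Sub
    retry delivery.
    """
    envelope = json.loads(body)
    message = envelope["message"]
    return base64.b64decode(message["data"]).decode("utf-8")

# Simulated POST body in the documented envelope shape:
body = json.dumps(
    {"message": {"data": base64.b64encode(b"hello").decode("utf-8"),
                 "messageId": "1"}}
).encode("utf-8")
print(handle_push(body))
```

Because push delivery drives the request rate, Cloud Run can scale to zero when the topic is idle and scale out under load, with no pull loop in your code.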
Sample Question 26
A company wants to build an application that stores images in a Cloud Storage bucket and
wants to generate thumbnails as well as resize the images. They want to use a Google-managed service that can scale up and down to zero automatically with minimal effort. You have been asked to recommend a service. Which GCP service would you
suggest?
A. Google Compute Engine
B. Google App Engine
C. Cloud Functions
D. Google Kubernetes Engine
Answer: C
Cloud Functions is Google Cloud’s event-driven serverless compute platform. It
automatically scales based on the load and requires no additional configuration. You pay
only for the resources used.
Ref: https://cloud.google.com/functions
While all the other options (Google Compute Engine, Google Kubernetes Engine, Google App Engine) support autoscaling, it needs to be configured explicitly based on the load and is not as trivial as the scale-up and scale-down offered by Cloud Functions.
Sample Question 27
You are working for a startup that was officially registered as a business 6 months ago. As
your customer base grows, your use of Google Cloud increases. You want to allow all
engineers to create new projects without asking them for their credit card information. What
should you do?
A. Create a billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.
B. Grant all engineers permission to create their own billing accounts for each new project.
C. Apply for monthly invoiced billing, and have a single invoice for the project paid by the finance team.
D. Create a billing account, associate it with a monthly purchase order (PO), and send the PO to Google Cloud.
Answer: A
Sample Question 28
You are assigned to maintain a Google Kubernetes Engine (GKE) cluster named dev that
was deployed on Google Cloud. You want to manage the GKE configuration using the command line interface (CLI). You have just downloaded and installed the Cloud SDK. You
want to ensure that future CLI commands by default address this specific cluster. What
should you do?
A. Use the command gcloud config set container/cluster dev.
B. Use the command gcloud container clusters update dev.
C. Create a file called gke.default in the ~/.gcloud folder that contains the cluster name.
D. Create a file called defaults.json in the ~/.gcloud folder that contains the cluster name.
Sample Question 29
You have a workload running on Compute Engine that is critical to your business. You want
to ensure that the data on the boot disk of this workload is backed up regularly. You need
to be able to restore a backup as quickly as possible in case of disaster. You also want
older backups to be cleaned up automatically to save on cost. You want to follow Google-recommended practices. What should you do?
A. Create a Cloud Function to create an instance template.
B. Create a snapshot schedule for the disk using the desired interval.
C. Create a cron job to create a new disk from the disk using gcloud.
D. Create a Cloud Task to create an image and export it to Cloud Storage.
Answer: B
Explanation:
Best practices for persistent disk snapshots:
You can create persistent disk snapshots at any time, but you can create snapshots more
quickly and with greater reliability if you use the following best practices.
Creating frequent snapshots efficiently
Use snapshots to manage your data efficiently.
Create a snapshot of your data on a regular schedule to minimize data loss due to
unexpected failure.
Improve performance by eliminating excessive snapshot downloads and by creating an
image and reusing it.
Set your snapshot schedule to off-peak hours to reduce snapshot time.
Snapshot frequency limits
Creating snapshots from persistent disks
You can snapshot your disks at most once every 10 minutes. If you want to issue a burst of
requests to snapshot your disks, you can issue at most 6 requests in 60 minutes.
If the limit is exceeded, the operation fails and returns an error.
https://cloud.google.com/compute/docs/disks/snapshot-best-practices
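A snapshot schedule per option B can be sketched as below; the schedule name, region, retention period, and disk name are all example values:

```shell
# Create a daily snapshot schedule (04:00 UTC, keep snapshots 14 days).
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 \
    --daily-schedule \
    --start-time=04:00 \
    --max-retention-days=14

# Attach the schedule to the workload's boot disk so backups run
# automatically and old snapshots are cleaned up.
gcloud compute disks add-resource-policies my-boot-disk \
    --resource-policies=daily-backup \
    --zone=us-central1-a
```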
Sample Question 30
You are about to deploy a new Enterprise Resource Planning (ERP) system on Google
Cloud. The application holds the full database in-memory for fast data access, and you
need to configure the most appropriate resources on Google Cloud for this application.
What should you do?
A. Provision preemptible Compute Engine instances.
B. Provision Compute Engine instances with GPUs attached.
C. Provision Compute Engine instances with local SSDs attached.
D. Provision Compute Engine instances with M1 machine type.
Answer: D
Sample Question 31
You are setting up a Windows VM on Compute Engine and want to make sure you can log
in to the VM via RDP. What should you do?
A. After the VM has been created, use your Google Account credentials to log in to the VM.
B. After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.
C. When creating the VM, add metadata to the instance using 'windows-password' as the key and a password as the value.
D. After the VM has been created, download the JSON private key for the default Compute Engine service account. Use the credentials in the JSON file to log in to the VM.
Answer: B
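The reset-windows-password flow can be sketched as follows; the instance name, zone, and username are example values:

```shell
# Generate (or reset) a Windows login for the VM and print the
# username and new password for use over RDP.
gcloud compute reset-windows-password my-windows-vm \
    --zone=us-central1-a \
    --user=admin-user
```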
Sample Question 32
You manage an App Engine Service that aggregates and visualizes data from BigQuery.
The application is deployed with the default App Engine Service account. The data that
needs to be visualized resides in a different project managed by another team. You do not
have access to this project, but you want your application to be able to read data from the
BigQuery dataset. What should you do?
A. Ask the other team to grant your default App Engine Service account the role of BigQuery Job User.
B. Ask the other team to grant your default App Engine Service account the role of BigQuery Data Viewer.
C. In Cloud IAM of your project, ensure that the default App Engine service account has the role of BigQuery Data Viewer.
D. In Cloud IAM of your project, grant a newly created service account from the other team the role of BigQuery Job User in your project.
Answer: B
Explanation:
The resource that you need to get access is in the other project.
roles/bigquery.dataViewer (BigQuery Data Viewer)
When applied to a table or view, this role provides permissions to:
Read data and metadata from the table or view.
This role cannot be applied to individual models or routines.
When applied to a dataset, this role provides permissions to:
Read the dataset's metadata and list tables in the dataset.
Read data and metadata from the dataset's tables.
When applied at the project or organization level, this role can also enumerate all datasets
in the project. Additional roles, however, are necessary to allow the running of jobs.
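The grant the other team would perform can be sketched as below; both project IDs are placeholders, and the member address follows the default App Engine service account pattern (PROJECT_ID@appspot.gserviceaccount.com):

```shell
# Grant the default App Engine service account of your project read
# access to BigQuery data in the other team's project.
gcloud projects add-iam-policy-binding other-team-project \
    --member="serviceAccount:my-project@appspot.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"
```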
Sample Question 33
You have an application that runs on Compute Engine VM instances in a custom Virtual
Private Cloud (VPC). Your company's security policies only allow the use of internal IP
addresses on VM instances and do not let VM instances connect to the internet. You need
to ensure that the application can access a file hosted in a Cloud Storage bucket within
your project. What should you do?
A. Enable Private Service Access on the Cloud Storage bucket.
B. Add storage.googleapis.com to the list of restricted services in a VPC Service Controls perimeter and add your project to the list of protected projects.
C. Enable Private Google Access on the subnet within the custom VPC.
D. Deploy a Cloud NAT instance and route the traffic to the dedicated IP address of the Cloud Storage bucket.
Answer: C
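Enabling Private Google Access on a subnet can be sketched as follows; the subnet name and region are example values:

```shell
# Allow internal-only VMs on this subnet to reach Google APIs
# (including Cloud Storage) without external IPs or internet access.
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access
```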
Sample Question 34
You are performing a monthly security check of your Google Cloud environment and want
to know who has access to view data stored in your Google Cloud
Project. What should you do?
A. Enable Audit Logs for all APIs that are related to data storage.
B. Review the IAM permissions for any role that allows for data access.
C. Review the Identity-Aware Proxy settings for each resource.
D. Create a Data Loss Prevention job.
Answer: B
Sample Question 35
You have files in a Cloud Storage bucket that you need to share with your suppliers. You
want to restrict the time that the files are available to your suppliers to 1 hour. You want to
follow Google recommended practices. What should you do?
A. Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -m 1h gs:///*.
B. Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -d 1h gs:///**.
C. Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -p 60m gs:///.
D. Create a JSON key for the Default Compute Engine Service Account. Execute the command gsutil signurl -t 60m gs:///***
Answer: B
Explanation: This command correctly specifies the duration that the signed URL should be
valid for by using the -d flag. The default is 1 hour so omitting the -d flag would have also
resulted in the same outcome. Times may be specified with no suffix (default hours), or
with s = seconds, m = minutes, h = hours, d = days. The maximum duration allowed is
7d. Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
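A concrete invocation can be sketched as below; the key file, bucket, and object names are placeholders:

```shell
# Create a signed URL valid for 1 hour using the service account's
# JSON key; suppliers can download the object until the URL expires.
gsutil signurl -d 1h sa-key.json gs://my-bucket/report.pdf
```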
Sample Question 36
Your coworker has helped you set up several configurations for gcloud. You've noticed that
you're running commands against the wrong project. Being new to the company, you
haven't yet memorized any of the projects. With the fewest steps possible, what's the
fastest way to switch to the correct configuration?
A. Run gcloud configurations list followed by gcloud configurations activate .
B. Run gcloud config list followed by gcloud config activate.
C. Run gcloud config configurations list followed by gcloud config configurations activate.
D. Re-authenticate with the gcloud auth login command and select the correct configurations on login.
Answer: C
Explanation: as gcloud config configurations list can help check for the existing
configurations and activate can help switch to the configuration.
gcloud config configurations list lists existing named configurations
gcloud config configurations activate activates an existing named configuration
By contrast, gcloud auth login obtains access credentials for your user account via a
web-based authorization flow. When this command completes successfully, it sets the active
account in the current configuration to the account specified; if no configuration exists, it
creates a configuration named default. It does not switch between existing configurations.
Sample Question 37
All development (dev) teams in your organization are located in the United States. Each
dev team has its own Google Cloud project. You want to restrict access so that each dev
team can only create cloud resources in the United States (US). What should you do?
A. Create a folder to contain all the dev projects. Create an organization policy to limit resources in US locations.
B. Create an organization to contain all the dev projects. Create an Identity and Access Management (IAM) policy to limit the resources in US regions.
C. Create an Identity and Access Management (IAM) policy to restrict the resource locations in the US. Apply the policy to all dev projects.
D. Create an Identity and Access Management (IAM) policy to restrict the resource locations in all dev projects. Apply the policy to all dev roles.
Answer: A
Explanation: Resource locations are restricted with the gcp.resourceLocations organization policy constraint, not with IAM policies; applying the constraint once at a folder covers every dev project inside it.
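Restricting resource locations is done with the gcp.resourceLocations organization policy constraint; a sketch follows, with a placeholder folder ID and the predefined in:us-locations value group:

```shell
# Allow only US locations for every project under the dev folder.
gcloud resource-manager org-policies allow gcp.resourceLocations \
    in:us-locations --folder=123456789
```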
Sample Question 38
Your company completed the acquisition of a startup and is now merging the IT systems of
both companies. The startup had a production Google Cloud project in their organization.
You need to move this project into your organization and ensure that the project is billed to
your organization. You want to accomplish this task with minimal effort. What should you
do?
A. Use the projects.move method to move the project to your organization. Update the billing account of the project to that of your organization.
B. Ensure that you have an Organization Administrator Identity and Access Management (IAM) role assigned to you in both organizations. Navigate to the Resource Manager in the startup's Google Cloud organization, and drag the project to your company's organization.
C. Create a Private Catalog for the Google Cloud Marketplace, and upload the resources of the startup's production project to the Catalog. Share the Catalog with your organization, and deploy the resources in your company's project.
D. Create an infrastructure-as-code template for all resources in the project by using Terraform, and deploy that template to a new project in your organization. Delete the project from the startup's Google Cloud organization.
Answer: A
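The move plus billing update can be sketched as below; the project ID, organization ID, and billing account ID are all placeholders, and the move command may require the beta component:

```shell
# Move the startup's project into your organization.
gcloud beta projects move startup-prod-project \
    --organization=1111111111

# Point the project at your organization's billing account.
gcloud billing projects link startup-prod-project \
    --billing-account=000000-AAAAAA-BBBBBB
```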
Sample Question 39
Your team is running an on-premises ecommerce application. The application contains a
complex set of microservices written in Python, and each microservice is running on
Docker containers. Configurations are injected by using environment variables. You need
to deploy your current application to a serverless Google Cloud cloud solution. What should
you do?
A. Use your existing CI/CD pipeline. Use the generated Docker images and deploy them to Cloud Run. Update the configurations and the required endpoints.
B. Use your existing continuous integration and delivery (CI/CD) pipeline. Use the generated Docker images and deploy them to Cloud Functions. Use the same configuration as on-premises.
C. Use the existing codebase and deploy each service as a separate Cloud Function. Update the configurations and the required endpoints.
D. Use your existing codebase and deploy each service as a separate Cloud Run service. Use the same configurations as on-premises.
Answer: A
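Deploying one of the existing container images to Cloud Run with environment-variable configuration can be sketched as follows; the service name, image path, region, and variables are example values:

```shell
# Deploy a microservice image to Cloud Run, injecting configuration
# through environment variables just as on-premises.
gcloud run deploy orders-service \
    --image=gcr.io/my-project/orders-service:v1 \
    --region=us-central1 \
    --set-env-vars=DB_HOST=10.0.0.5,LOG_LEVEL=info
```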
Sample Question 40
You are developing a financial trading application that will be used globally. Data is stored
and queried using a relational structure, and clients from all over the world should get the
exact identical state of the data. The application will be deployed in multiple regions to
provide the lowest latency to end users. You need to select a storage option for the
application data while minimizing latency. What should you do?
A. Use Cloud Bigtable for data storage. B. Use Cloud SQL for data storage. C. Use Cloud Spanner for data storage. D. Use Firestore for data storage.
Answer: C
Explanation: The keywords point to Cloud Spanner: financial data used globally, stored and queried with a relational structure (SQL), clients must see the exact same state of the data (strong consistency), multi-region deployment, and low latency for end users.
Sample Question 41
You created a Kubernetes deployment by running kubectl run nginx --image=nginx
--replicas=1. After a few days, you decided you no longer want this deployment. You
identified the pod and deleted it by running kubectl delete pod. You noticed the pod got
recreated.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-nqqmt 1/1 Running 0 9m41s
$ kubectl delete pod nginx-84748895c4-nqqmt
pod nginx-84748895c4-nqqmt deleted $ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-k6bzl 1/1 Running 0 25s
What should you do to delete the deployment and avoid pod getting recreated?
A. kubectl delete deployment nginx
B. kubectl delete --deployment=nginx
C. kubectl delete pod nginx-84748895c4-k6bzl --no-restart 2
D. kubectl delete nginx
Answer: A
Explanation: This command correctly deletes the deployment. Pods are managed by
kubernetes workloads (deployments). When a pod is deleted, the deployment detects the
pod is unavailable and brings up another pod to maintain the replica count. The only way to
delete the workload is by deleting the deployment itself using the kubectl delete
deployment command.
$ kubectl delete deployment nginx
deployment.apps nginx deleted
Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources
Sample Question 42
You have been asked to migrate a docker application from datacenter to cloud. Your
solution architect has suggested uploading docker images to GCR in one project and
running an application in a GKE cluster in a separate project. You want to store images in
the project img-278322 and run the application in the project prod-278986. You want to tag
the image as acme_track_n_trace:v1. You want to follow Google-recommended practices.
What should you do?
A. Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace
B. Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1
C. Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace
D. Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace:v1
Answer: B
Explanation:
Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1 is the
right answer.
This command correctly tags the image as acme_track_n_trace:v1 and uploads the image
to the img-278322 project.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit
Sample Question 43
You are configuring Cloud DNS. You want to create DNS records to point
home.mydomain.com, mydomain.com, and www.mydomain.com to the IP address of your
Google Cloud load balancer. What should you do?
A. Create one CNAME record to point mydomain.com to the load balancer, and create two A records to point WWW and HOME to mydomain.com respectively.
B. Create one CNAME record to point mydomain.com to the load balancer, and create two AAAA records to point WWW and HOME to mydomain.com respectively.
C. Create one A record to point mydomain.com to the load balancer, and create two CNAME records to point WWW and HOME to mydomain.com respectively.
D. Create one A record to point mydomain.com to the load balancer, and create two NS records to point WWW and HOME to mydomain.com respectively.
Answer: C
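The A-plus-CNAME layout can be sketched as below; the zone name, TTL, and load balancer IP are placeholders:

```shell
# One A record for the apex pointing at the load balancer IP,
# plus two CNAME records aliasing the subdomains to the apex.
gcloud dns record-sets create mydomain.com. --zone=my-zone \
    --type=A --ttl=300 --rrdatas=203.0.113.10
gcloud dns record-sets create www.mydomain.com. --zone=my-zone \
    --type=CNAME --ttl=300 --rrdatas=mydomain.com.
gcloud dns record-sets create home.mydomain.com. --zone=my-zone \
    --type=CNAME --ttl=300 --rrdatas=mydomain.com.
```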
Exam Code: Associate-Cloud-Engineer
Exam Name: Google Cloud Certified - Associate Cloud Engineer
Last Update: May 13, 2024
Questions: 269