
Google Associate-Cloud-Engineer - Google Cloud Certified - Associate Cloud Engineer


You need to run an important query in BigQuery but expect it to return a lot of records. You want to find out how much it will cost to run the query. You are using on-demand pricing. What should you do?

A.

Arrange to switch to Flat-Rate pricing for this query, then move back to on-demand.

B.

Use the command line to run a dry run query to estimate the number of bytes read. Then convert that bytes estimate to dollars using the Pricing Calculator.

C.

Use the command line to run a dry run query to estimate the number of bytes returned. Then convert that bytes estimate to dollars using the Pricing Calculator.

D.

Run a SELECT COUNT(*) to get an idea of how many records your query will look through. Then convert that number of rows to dollars using the Pricing Calculator.
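On-demand BigQuery pricing bills by bytes read (processed), not bytes returned, which is why option B is the one to reach for. As a minimal sketch, the bq tool's --dry_run flag reports that estimate without executing or billing anything; the project, dataset, and table names below are illustrative placeholders:

# Prints the estimated bytes the query would process; nothing runs, nothing is billed.
bq query --dry_run --use_legacy_sql=false \
  'SELECT name, score FROM `my-project.my_dataset.my_table`'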

You want to set up a Google Kubernetes Engine cluster Verifiable node identity and integrity are required for the cluster, and nodes cannot be accessed from the internet. You want to reduce the operational cost of managing your cluster, and you want to follow Google-recommended practices. What should you do?

A.

Deploy a private Autopilot cluster.

B.

Deploy a public Autopilot cluster.

C.

Deploy a standard public cluster and enable shielded nodes.

D.

Deploy a standard private cluster and enable shielded nodes.
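For context on option A: Autopilot clusters use Shielded GKE Nodes by default (verifiable node identity and integrity) and offload node management to Google, and making the cluster private keeps nodes off the internet. A minimal sketch, assuming a hypothetical cluster name and region:

# Private Autopilot cluster: shielded nodes by default, no public node IPs, Google-managed nodes.
gcloud container clusters create-auto my-cluster \
  --region=us-central1 \
  --enable-private-nodes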

Your company plans to migrate its on-premises PostgreSQL database to Google Cloud. The workloads are demanding, requiring fast transactional and analytical performance. You need to select a fully managed database service on Google Cloud. Your solution must also be able to synchronously replicate and optimize the storage layer. What should you do?

A.

Use the psql client installed on a Compute Engine instance. Connect to the Cloud SQL instance to perform the database migration.

B.

Migrate the database to AlloyDB for PostgreSQL by using Database Migration Service.

C.

Migrate the database to Cloud SQL for PostgreSQL by using Database Migration Service.

D.

Create a Compute Engine instance. Install and configure PostgreSQL on the instance, and migrate the database.
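If you follow option B, the AlloyDB destination cluster can be provisioned up front and Database Migration Service pointed at it. A sketch only; the cluster name, region, password, and network are assumptions, and the migration job itself is typically configured in the Database Migration Service console:

# Create the AlloyDB for PostgreSQL cluster that will receive the migrated data.
gcloud alloydb clusters create my-alloydb-cluster \
  --region=us-central1 \
  --password=CHANGE_ME \
  --network=default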

Several employees at your company have been creating projects on Google Cloud Platform and paying for them with their personal credit cards, which the company reimburses. The company wants to centralize all these projects under a single, new billing account. What should you do?

A.

Contact cloud-billing@google.com with your bank account details and request a corporate billing account for your company.

B.

Create a ticket with Google Support and wait for their call to share your credit card details over the phone.

C.

In the Google Cloud Platform Console, go to the Resource Manager and move all projects to the root Organization.

D.

In the Google Cloud Platform Console, create a new billing account and set up a payment method.
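Once the new billing account from option D exists, the reimbursed projects can be re-linked to it one by one; the project ID and billing account ID below are placeholders:

# List billing accounts you can administer.
gcloud billing accounts list
# Move an employee's project onto the central billing account.
gcloud billing projects link my-project --billing-account=0X0X0X-0X0X0X-0X0X0X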

For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Stackdriver Logging agent on all the instances. You want to minimize cost. What should you do?

A.

1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances.
2. Update your instances' metadata to add the following value: logs-destination: bq://platform-logs.

B.

1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink.
2. Create a Cloud Function that is triggered by messages in the logs topic.
3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.

C.

1. In Stackdriver Logging, create a filter to view only Compute Engine logs.
2. Click Create Export.
3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.

D.

1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset.
2. Configure this Cloud Function to create a BigQuery job that executes this query:
INSERT INTO dataset.platform-logs (timestamp, log)
SELECT timestamp, log FROM compute.logs
WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
3. Use Cloud Scheduler to trigger this Cloud Function once a day.
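Option C's export can also be created from the command line. A sketch, assuming a project called my-project; note that BigQuery dataset IDs cannot contain hyphens, so the dataset is written as platform_logs here, and the sink's writer service account still needs write access (for example, BigQuery Data Editor) on the dataset:

# Route only Compute Engine logs straight into BigQuery; no intermediate services to pay for.
gcloud logging sinks create platform-logs-sink \
  bigquery.googleapis.com/projects/my-project/datasets/platform_logs \
  --log-filter='resource.type="gce_instance"'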

You are building a multiplayer gaming application that will store game information in a database. As the popularity of the application increases, you are concerned about delivering consistent performance. You need to ensure optimal gaming performance for global users without increasing management complexity. What should you do?

A.

Use Cloud SQL database with cross-region replication to store game statistics in the EU, US, and APAC regions.

B.

Use Cloud Spanner to store user data mapped to the game statistics.

C.

Use BigQuery to store game statistics, with a Redis on Memorystore instance in front to provide global consistency.

D.

Store game statistics in a Bigtable database partitioned by username.
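For option B, a single multi-region Cloud Spanner instance serves global players with synchronous replication and no sharding to manage. A sketch with illustrative instance name, configuration, and node count:

# nam-eur-asia1 is a multi-region configuration spanning the US, EU, and APAC.
gcloud spanner instances create game-instance \
  --config=nam-eur-asia1 \
  --description="Global game data" \
  --nodes=3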

You created a Kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment. You identified the pod and deleted it by running kubectl delete pod. You noticed the pod was recreated:

$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginx-84748895c4-nqqmt 1/1 Running 0 9m41s

$ kubectl delete pod nginx-84748895c4-nqqmt

pod "nginx-84748895c4-nqqmt" deleted

$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginx-84748895c4-k6bzl 1/1 Running 0 25s

What should you do to delete the deployment and avoid the pod being recreated?

A.

kubectl delete deployment nginx

B.

kubectl delete --deployment=nginx

C.

kubectl delete pod nginx-84748895c4-k6bzl --no-restart 2

D.

kubectl delete nginx
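The pod keeps reappearing because the Deployment's ReplicaSet recreates any pod you delete; removing the Deployment itself (option A) deletes the ReplicaSet and its pods with it:

# Delete the deployment; its ReplicaSet and pods are garbage-collected along with it.
kubectl delete deployment nginx
# Confirm nothing gets recreated.
kubectl get pods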

Your Dataproc cluster runs in a single Virtual Private Cloud (VPC) network in a single subnet with range 172.16.20.128/25. There are no private IP addresses available in the VPC network. You want to add new VMs to communicate with your cluster using the minimum number of steps. What should you do?

A.

Modify the existing subnet range to 172.16.20.0/24.

B.

Create a new Secondary IP Range in the VPC and configure the VMs to use that range.

C.

Create a new VPC network for the VMs. Enable VPC Peering between the VMs’ VPC network and the Dataproc cluster VPC network.

D.

Create a new VPC network for the VMs with a subnet of 172.32.0.0/16. Enable VPC Network Peering between the Dataproc VPC network and the VMs' VPC network. Configure a custom route exchange.
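Option A comes down to a single command: widening 172.16.20.128/25 to a /24 turns the range into 172.16.20.0/24 in place, freeing 128 more addresses with no new network or peering. The subnet name and region below are placeholders:

# Expansion is one-way: a subnet's prefix can be widened but never narrowed back.
gcloud compute networks subnets expand-ip-range my-subnet \
  --region=us-central1 \
  --prefix-length=24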

You need to immediately change the storage class of an existing Cloud Storage bucket. You need to reduce service cost for infrequently accessed files stored in that bucket and for all files that will be added to that bucket in the future. What should you do?

A.

Use gsutil to rewrite the storage class for the bucket. Change the default storage class for the bucket.

B.

Use gsutil to rewrite the storage class for the bucket. Set up Object Lifecycle Management on the bucket.

C.

Create a new bucket and change the default storage class for the bucket. Set up Object Lifecycle Management on the bucket.

D.

Create a new bucket and change the default storage class for the bucket. Import the files from the previous bucket into the new bucket.
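Option A maps onto two gsutil commands: one to rewrite existing objects, one to change the default for future uploads. The bucket name and target class are assumptions (Nearline suits infrequently accessed data):

# Rewrite existing objects to the cheaper storage class.
gsutil rewrite -s nearline gs://my-bucket/**
# Make that class the default for all future uploads to the bucket.
gsutil defstorageclass set nearline gs://my-bucket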

You need to grant access for three users so that they can view and edit table data on a Cloud Spanner instance. What should you do?

A.

Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to the role.

B.

Run gcloud iam roles describe roles/spanner.databaseUser. Add the users to a new group. Add the group to the role.

C.

Run gcloud iam roles describe roles/spanner.viewer --project my-project. Add the users to the role.

D.

Run gcloud iam roles describe roles/spanner.viewer --project my-project. Add the users to a new group. Add the group to the role.
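Whichever role you settle on, granting it to a group (as in option B) keeps the policy manageable as users change. A sketch; the group address and project ID are hypothetical:

# Inspect what the predefined role actually permits.
gcloud iam roles describe roles/spanner.databaseUser
# Grant the role once, to a group containing the three users.
gcloud projects add-iam-policy-binding my-project \
  --member='group:spanner-editors@example.com' \
  --role='roles/spanner.databaseUser'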