
Google Associate-Cloud-Engineer - Google Cloud Certified - Associate Cloud Engineer


Your company is running a three-tier web application on virtual machines that use a MySQL database. You need to produce an estimated total cost of the cloud infrastructure required to run this application on Google Cloud instances and Cloud SQL. What should you do?

A.

Use the Google Cloud Pricing Calculator to determine the cost of every Google Cloud resource you expect to use. Use similar size instances for the web server, and use your current on-premises machines as a comparison for Cloud SQL.

B.

Implement a similar architecture on Google Cloud, and run a reasonable load test on a smaller scale. Check the billing information, and calculate the estimated costs based on the real load your system usually handles.

C.

Use the Google Cloud Pricing Calculator and select the Cloud Operations template to define your web application with as much detail as possible.

D.

Create a Google spreadsheet with multiple Google Cloud resource combinations. On a separate sheet, import the current Google Cloud prices and use these prices for the calculations within formulas.

You host a static website on Cloud Storage. Recently, you began to include links to PDF files on this site. Currently, when users click on the links to these PDF files, their browsers prompt them to save the file onto their local system. Instead, you want the clicked PDF files to be displayed within the browser window directly, without prompting the user to save the file locally. What should you do?

A.

Enable Cloud CDN on the website frontend.

B.

Enable ‘Share publicly’ on the PDF file objects.

C.

Set Content-Type metadata to application/pdf on the PDF file objects.

D.

Add a label to the storage bucket with a key of Content-Type and value of application/pdf.
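Note: Cloud Storage serves each object with its stored Content-Type metadata, and browsers render application/pdf inline rather than prompting a download, which is what option C relies on. A minimal sketch with gsutil; the bucket and path are placeholders:

  # Set the Content-Type on the existing PDF objects so browsers display
  # them inline instead of prompting to save the file locally.
  gsutil setmeta -h "Content-Type:application/pdf" gs://my-static-site/docs/*.pdf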

You are running multiple VPC-native Google Kubernetes Engine clusters in the same subnet. The IPs available for the nodes are exhausted, and you want to ensure that the clusters can grow in nodes when needed. What should you do?

A.

Create a new subnet in the same region as the subnet being used.

B.

Add an alias IP range to the subnet used by the GKE clusters.

C.

Create a new VPC, and set up VPC peering with the existing VPC.

D.

Expand the CIDR range of the relevant subnet for the cluster.
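Note: in a VPC-native cluster, node IPs come from the subnet's primary range, which can be expanded in place without recreating the clusters (option D). A minimal sketch, assuming a hypothetical subnet name, region, and target prefix length:

  # Expand the subnet's primary range. The new prefix length must be
  # smaller (i.e., a larger range) than the current one, and the operation
  # is one-way: a subnet range cannot be shrunk afterwards.
  gcloud compute networks subnets expand-ip-range gke-subnet \
      --region=us-central1 \
      --prefix-length=20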

You are about to deploy a new Enterprise Resource Planning (ERP) system on Google Cloud. The application holds the full database in-memory for fast data access, and you need to configure the most appropriate resources on Google Cloud for this application. What should you do?

A.

Provision preemptible Compute Engine instances.

B.

Provision Compute Engine instances with GPUs attached.

C.

Provision Compute Engine instances with local SSDs attached.

D.

Provision Compute Engine instances with M1 machine type.
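Note: M1 machine types are memory-optimized, which suits an application that holds its full database in memory (option D). A minimal sketch, assuming a hypothetical instance name, zone, and M1 variant:

  # Create a memory-optimized instance; m1-megamem-96 offers 96 vCPUs and
  # roughly 1.4 TB of RAM. Name, zone, and variant are placeholders.
  gcloud compute instances create erp-db-vm \
      --zone=us-central1-a \
      --machine-type=m1-megamem-96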

For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Stackdriver Logging agent on all the instances. You want to minimize cost. What should you do?

A.

1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances.
2. Update your instances’ metadata to add the following value: logs-destination: bq://platform-logs.

B.

1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink.
2. Create a Cloud Function that is triggered by messages in the logs topic.
3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.

C.

1. In Stackdriver Logging, create a filter to view only Compute Engine logs.
2. Click Create Export.
3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.

D.

1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset.
2. Configure this Cloud Function to create a BigQuery job that executes this query:
   INSERT INTO dataset.platform-logs (timestamp, log)
   SELECT timestamp, log FROM compute.logs
   WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
3. Use Cloud Scheduler to trigger this Cloud Function once a day.
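Note: option C describes a log sink, which streams matching entries to BigQuery with no intermediate services to pay for. A minimal gcloud sketch; the project, sink, and dataset names are placeholders (BigQuery dataset IDs cannot contain hyphens, hence the underscore):

  # Export only Compute Engine logs to an existing BigQuery dataset.
  gcloud logging sinks create platform-logs-sink \
      bigquery.googleapis.com/projects/my-project/datasets/platform_logs \
      --log-filter='resource.type="gce_instance"'
  # The command prints a writer service account; grant it the BigQuery
  # Data Editor role on the dataset so the sink can insert rows.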

The core business of your company is to rent out construction equipment at a large scale. All the equipment being rented out is fitted with multiple sensors that send event information every few seconds. These signals range from engine status and distance traveled to fuel level and more. Customers are billed based on the consumption monitored by these sensors. You expect high throughput – up to thousands of events per hour per device – and need to retrieve consistent data based on the time of the event. Storing and retrieving individual signals should be atomic. What should you do?

A.

Create a file in Cloud Storage per device and append new data to that file.

B.

Create a file in Cloud Filestore per device and append new data to that file.

C.

Ingest the data into Datastore. Store data in an entity group based on the device.

D.

Ingest the data into Cloud Bigtable. Create a row key based on the event timestamp.
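Note: a Bigtable write to a single row is atomic, and time-ordered row keys make time-range reads efficient, which is what option D leans on. A minimal sketch with the cbt CLI, assuming hypothetical project, instance, table, and key names; prefixing the timestamp with a device ID, as Bigtable schema guidance suggests, avoids hotspotting one node with purely sequential keys:

  # Create a table and column family, then write one sensor event.
  cbt -project my-project -instance sensors createtable device-events
  cbt -project my-project -instance sensors createfamily device-events signals
  cbt -project my-project -instance sensors set device-events \
      "excavator-42#2024-05-01T12:00:03Z" signals:fuel_level=0.82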

Your company developed an application to deploy on Google Kubernetes Engine. Certain parts of the application are not fault-tolerant and are allowed to have downtime. Other parts of the application are critical and must always be available. You need to configure a Google Kubernetes Engine cluster while optimizing for cost. What should you do?

A.

Create a cluster with a single node pool by using standard VMs. Label the fault-tolerant Deployments as spot-true.

B.

Create a cluster with a single node pool by using Spot VMs. Label the critical Deployments as spot-false.

C.

Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the Spot VM node pool and the fault-tolerant deployments on the node pool by using standard VMs.

D.

Create a cluster with both a Spot VM node pool and a node pool by using standard VMs. Deploy the critical deployments on the node pool by using standard VMs and the fault-tolerant deployments on the Spot VM node pool.
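Note: option D pairs a standard node pool for the critical workloads with a cheaper Spot VM pool for the fault-tolerant ones. A minimal sketch for adding the Spot pool to an existing cluster; the cluster, pool, and zone names are placeholders:

  # Add a Spot VM node pool. GKE labels Spot nodes with
  # cloud.google.com/gke-spot=true, which the fault-tolerant Deployments
  # can target with a nodeSelector.
  gcloud container node-pools create spot-pool \
      --cluster=my-cluster \
      --zone=us-central1-a \
      --spot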