Google Professional-Data-Engineer - Google Professional Data Engineer Exam

Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?

A.

Store the common data in BigQuery as partitioned tables.

B.

Store the common data in BigQuery and expose authorized views.

C.

Store the common data encoded as Avro in Google Cloud Storage.

D.

Store the common data in HDFS storage on a Google Cloud Dataproc cluster.
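
If the common data were stored as Avro in Cloud Storage (option C), both engines could read it natively: Spark reads the files directly, and BigQuery can query them through an external table. A minimal sketch with the google-cloud-bigquery Python client; the bucket, project, and table names are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Hypothetical bucket and table names for illustration.
    external_config = bigquery.ExternalConfig("AVRO")
    external_config.source_uris = ["gs://flowlogistic-shared/common/*.avro"]

    table = bigquery.Table("my-project.analytics.common_data")
    table.external_data_configuration = external_config
    client.create_table(table)  # BigQuery now queries the Avro files in place

    # The same files stay readable by Spark on Dataproc, e.g.:
    #   spark.read.format("avro").load("gs://flowlogistic-shared/common/")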

Flowlogistic’s management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?

A.

Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage

B.

Cloud Pub/Sub, Cloud Dataflow, and Local SSD

C.

Cloud Pub/Sub, Cloud SQL, and Cloud Storage

D.

Cloud Load Balancing, Cloud Dataflow, and Cloud Storage
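
The Pub/Sub, Dataflow, and Cloud Storage combination maps global ingestion, real-time processing, and durable storage onto managed services. A minimal Apache Beam (Python SDK) sketch of that pattern; the subscription name, bucket, and window size are illustrative assumptions, not part of the question:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.transforms.window import FixedWindows

    # Run on Dataflow with --runner=DataflowRunner; streaming mode for real-time data.
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadTracking" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/inventory-tracking")
            | "Decode" >> beam.Map(lambda msg: msg.decode("utf-8"))
            | "Window" >> beam.WindowInto(FixedWindows(60))  # 1-minute windows
            | "WriteToGCS" >> beam.io.WriteToText(
                "gs://inventory-archive/events/part", num_shards=1)
        )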

You have a data analyst team member who needs to analyze data by using BigQuery. The data analyst wants to create a data pipeline that would load 200 CSV files with an average size of 15MB from a Cloud Storage bucket into BigQuery daily. The data needs to be ingested and transformed before being accessed in BigQuery for analysis. You need to recommend a fully managed, no-code solution for the data analyst. What should you do?

A.

Create a Cloud Run function and schedule it to run daily using Cloud Scheduler to load the data into BigQuery.

B.

Use the BigQuery Data Transfer Service to load files from Cloud Storage to BigQuery, create a BigQuery job which transforms the data using BigQuery SQL and schedule it to run daily.

C.

Build a custom Apache Beam pipeline and run it on Dataflow to load the file from Cloud Storage to BigQuery and schedule it to run daily using Cloud Composer.

D.

Create a pipeline by using BigQuery pipelines and schedule it to load the data into BigQuery daily.
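
Whichever managed option is chosen, the underlying work is a Cloud Storage load followed by a SQL transform. For reference only, a coded equivalent of that work with the BigQuery Python client; the project, dataset, paths, and column names are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Load the daily CSV drop from Cloud Storage into a raw staging table.
    load_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    client.load_table_from_uri(
        "gs://daily-drop/*.csv",
        "my-project.staging.raw_sales",
        job_config=load_config,
    ).result()

    # Transform into the analysis table with plain SQL.
    client.query(
        """
        CREATE OR REPLACE TABLE `my-project.analytics.sales` AS
        SELECT order_id,
               CAST(amount AS NUMERIC) AS amount,
               DATE(order_ts) AS order_date
        FROM `my-project.staging.raw_sales`
        """
    ).result()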

You need to create a data pipeline that copies time-series transaction data so that it can be queried from within BigQuery by your data science team for analysis. Every hour, thousands of transactions are updated with a new status. The size of the initial dataset is 1.5 PB, and it will grow by 3 TB per day. The data is heavily structured, and your data science team will build machine learning models based on this data. You want to maximize performance and usability for your data science team. Which two strategies should you adopt? Choose 2 answers.

A.

Denormalize the data as much as possible.

B.

Preserve the structure of the data as much as possible.

C.

Use BigQuery UPDATE to further reduce the size of the dataset.

D.

Develop a data pipeline where status updates are appended to BigQuery instead of updated.

E.

Copy a daily snapshot of transaction data to Cloud Storage and store it as an Avro file. Use BigQuery’s support for external data sources to query.
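
An append-only design avoids costly DML on a multi-petabyte table; readers then pick the latest status per transaction at query time. A hedged sketch of such a query run through the Python client; the table and column names are assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Each status change is appended as a new row; take the most recent one per transaction.
    latest_status_sql = """
    SELECT * EXCEPT(rn)
    FROM (
      SELECT t.*,
             ROW_NUMBER() OVER (PARTITION BY transaction_id ORDER BY status_ts DESC) AS rn
      FROM `my-project.finance.transaction_events` AS t
    )
    WHERE rn = 1
    """
    rows = client.query(latest_status_sql).result()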

You are building a report-only data warehouse where the data is streamed into BigQuery via the streaming API. Following Google's best practices, you have both a staging and a production table for the data. How should you design your data loading to ensure that there is only one master dataset, without affecting performance on either the ingestion or reporting pieces?

A.

Have a staging table that is an append-only model, and then update the production table every three hours with the changes written to staging

B.

Have a staging table that is an append-only model, and then update the production table every ninety minutes with the changes written to staging

C.

Have a staging table that moves the staged data over to the production table and deletes the contents of the staging table every three hours

D.

Have a staging table that moves the staged data over to the production table and deletes the contents of the staging table every thirty minutes
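
With an append-only staging table, the periodic refresh of the production table reduces to a single MERGE statement run on the chosen schedule. A minimal sketch with the Python client; the table names and join key are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client()

    merge_sql = """
    MERGE `my-project.dwh.events_prod` AS prod
    USING `my-project.dwh.events_staging` AS stage
    ON prod.event_id = stage.event_id
    WHEN MATCHED THEN
      UPDATE SET prod.payload = stage.payload, prod.updated_at = stage.updated_at
    WHEN NOT MATCHED THEN
      INSERT (event_id, payload, updated_at)
      VALUES (stage.event_id, stage.payload, stage.updated_at)
    """
    client.query(merge_sql).result()  # run on a schedule, e.g. every three hours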

You are running your BigQuery project in the on-demand billing model and are executing a change data capture (CDC) process that ingests data. The CDC process loads 1 GB of data every 10 minutes into a temporary table, and then performs a merge into a 10 TB target table. This process is very scan intensive and you want to explore options to enable a predictable cost model. You need to create a BigQuery reservation based on utilization information gathered from BigQuery Monitoring and apply the reservation to the CDC process. What should you do?

A.

Create a BigQuery reservation for the job.

B.

Create a BigQuery reservation for the service account running the job.

C.

Create a BigQuery reservation for the dataset.

D.

Create a BigQuery reservation for the project.
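
Reservation assignments target projects (or folders and organizations), not individual jobs, service accounts, or datasets, so applying the reservation to the project that runs the CDC process is what produces the predictable cost model. A hedged sketch using the google-cloud-bigquery-reservation Python client; the slot count, location, and project IDs are assumptions sized from the utilization data gathered in BigQuery Monitoring:

    from google.cloud import bigquery_reservation_v1 as reservation

    client = reservation.ReservationServiceClient()
    admin_parent = "projects/admin-project/locations/US"

    # Create a slot reservation sized from observed utilization.
    res = client.create_reservation(
        parent=admin_parent,
        reservation_id="cdc-reservation",
        reservation=reservation.Reservation(slot_capacity=100),
    )

    # Assign the project that runs the CDC jobs to the reservation.
    client.create_assignment(
        parent=res.name,
        assignment=reservation.Assignment(
            job_type=reservation.Assignment.JobType.QUERY,
            assignee="projects/cdc-project",
        ),
    )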

You are designing a Dataflow pipeline for a batch processing job. You want to mitigate multiple zonal failures at job submission time. What should you do?

A.

Specify a worker region by using the --region flag.

B.

Set the pipeline staging location as a regional Cloud Storage bucket.

C.

Submit duplicate pipelines in two different zones by using the --zone flag.

D.

Create an Eventarc trigger to resubmit the job in case of zonal failure when submitting the job.
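
Specifying a worker region rather than pinning a zone lets the Dataflow service choose a healthy zone within that region at submission time. A minimal sketch of the equivalent Python pipeline options; the project, bucket, and region values are illustrative:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # Equivalent of passing --region on the command line: no zone is pinned,
    # so Dataflow picks an available zone within us-central1 when the job is submitted.
    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/temp",
    )

    with beam.Pipeline(options=options) as p:
        p | beam.Create(["placeholder"]) | beam.Map(print)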

You used Cloud Dataprep to create a recipe on a sample of data in a BigQuery table. You want to reuse this recipe on a daily upload of data with the same schema, after the load job with variable execution time completes. What should you do?

A.

Create a cron schedule in Cloud Dataprep.

B.

Create an App Engine cron job to schedule the execution of the Cloud Dataprep job.

C.

Export the recipe as a Cloud Dataprep template, and create a job in Cloud Scheduler.

D.

Export the Cloud Dataprep job as a Cloud Dataflow template, and incorporate it into a Cloud Composer job.
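
Cloud Dataprep recipes execute as Dataflow jobs, so an exported Dataflow template can be launched from a Cloud Composer DAG once the variable-length load job has finished. A hedged Airflow sketch, assuming the apache-airflow-providers-google package; the template path, bucket, and upstream sensor wiring are assumptions:

    from airflow import DAG
    from airflow.providers.google.cloud.operators.dataflow import (
        DataflowTemplatedJobStartOperator,
    )
    from airflow.utils.dates import days_ago

    with DAG("daily_dataprep_recipe", schedule_interval="@daily",
             start_date=days_ago(1), catchup=False) as dag:

        # An upstream task (not shown) would wait for the variable-time load job
        # to complete, e.g. a BigQuery or Cloud Storage sensor, before this runs.
        run_recipe = DataflowTemplatedJobStartOperator(
            task_id="run_exported_dataprep_template",
            template="gs://my-bucket/templates/dataprep_recipe",
            location="us-central1",
            project_id="my-project",
        )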

You are collecting IoT sensor data from millions of devices across the world and storing the data in BigQuery. Your access pattern is based on recent data filtered by location_id and device_version, with the following query:

You want to optimize your queries for cost and performance. How should you structure your data?

A.

Partition table data by create_date, location_id and device_version

B.

Partition table data by create_date; cluster table data by location_id and device_version

C.

Cluster table data by create_date, location_id, and device_version

D.

Cluster table data by create_date; partition by location_id and device_version
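
Partitioning on the date column prunes scans of older data, and clustering on location_id and device_version co-locates the rows the filter touches, which reduces both cost and latency. A minimal sketch of creating such a table with the Python client; the project, dataset, and schema are assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()

    table = bigquery.Table(
        "my-project.iot.sensor_readings",
        schema=[
            bigquery.SchemaField("create_date", "DATE"),
            bigquery.SchemaField("location_id", "STRING"),
            bigquery.SchemaField("device_version", "STRING"),
            bigquery.SchemaField("reading", "FLOAT"),
        ],
    )
    table.time_partitioning = bigquery.TimePartitioning(field="create_date")
    table.clustering_fields = ["location_id", "device_version"]
    client.create_table(table)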

MJTelco’s Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to scale its compute power up as required. Which Cloud Dataflow pipeline configuration setting should you update?

A.

The zone

B.

The number of workers

C.

The disk size per worker

D.

The maximum number of workers
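
Dataflow autoscaling grows the worker pool only up to the configured maximum, so the maximum number of workers is the setting that governs how far the pipeline can scale as the 50,000 installations come online. A sketch of the corresponding Python pipeline option; the values are illustrative:

    from apache_beam.options.pipeline_options import PipelineOptions

    # With autoscaling enabled (the default throughput-based algorithm),
    # Dataflow adds workers up to max_num_workers as the backlog grows.
    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/temp",
        max_num_workers=100,
    )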