
Google Associate-Data-Practitioner - Google Cloud Associate Data Practitioner (ADP Exam)

Your company is setting up an enterprise business intelligence platform. You need to limit data access between many different teams while following the Google-recommended approach. What should you do first?

A. Create a separate Looker Studio report for each team, and share each report with the individuals within each team.

B. Create one Looker Studio report with multiple pages, and add each team's data as a separate data source to the report.

C. Create a Looker (Google Cloud core) instance, and create a separate dashboard for each team.

D. Create a Looker (Google Cloud core) instance, and configure different Looker groups for each team.
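
As context for option D: in Looker (Google Cloud core), access is typically managed by creating groups, adding users to them, and then granting content and role permissions at the group level rather than per user. A minimal sketch using the looker-sdk Python package, assuming API credentials are configured in a looker.ini file; the group name and user ID are hypothetical:

```python
import looker_sdk
from looker_sdk import models40 as models

# Authenticates from a looker.ini file or LOOKERSDK_* environment variables.
sdk = looker_sdk.init40()

# Create one group per team, then add that team's members to it.
# Group name and user ID below are hypothetical.
group = sdk.create_group(body=models.WriteGroup(name="marketing-team"))
sdk.add_group_user(
    group_id=group.id,
    body=models.GroupIdForGroupUserInclusion(user_id="42"),
)
```

Folder access and roles can then be granted to the group, so onboarding a new analyst is just a group membership change.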

You are responsible for managing Cloud Storage buckets for a research company. Your company has well-defined data tiering and retention rules. You need to optimize storage costs while achieving your data retention needs. What should you do?

A. Configure the buckets to use the Archive storage class.

B. Configure a lifecycle management policy on each bucket to downgrade the storage class and remove objects based on age.

C. Configure the buckets to use the Standard storage class and enable Object Versioning.

D. Configure the buckets to use the Autoclass feature.
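
A lifecycle policy like the one option B describes can be attached with the google-cloud-storage client. A minimal sketch; the bucket name, the 365-day downgrade threshold, and the 3-year delete threshold are assumptions standing in for the company's own tiering and retention rules:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("research-data")  # hypothetical bucket name

# Downgrade objects to a colder class once they age past one year...
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=365)
# ...and delete them once the retention window (here, 3 years) has passed.
bucket.add_lifecycle_delete_rule(age=1095)

bucket.patch()  # persist the updated lifecycle configuration
```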

You manage a large amount of data in Cloud Storage, including raw data, processed data, and backups. Your organization is subject to strict compliance regulations that mandate data immutability for specific data types. You want to use an efficient process to reduce storage costs while ensuring that your storage strategy meets retention requirements. What should you do?

A. Configure lifecycle management rules to transition objects to appropriate storage classes based on access patterns. Set up Object Versioning for all objects to meet immutability requirements.

B. Move objects to different storage classes based on their age and access patterns. Use Cloud Key Management Service (Cloud KMS) to encrypt specific objects with customer-managed encryption keys (CMEK) to meet immutability requirements.

C. Create a Cloud Run function to periodically check object metadata, and move objects to the appropriate storage class based on age and access patterns. Use object holds to enforce immutability for specific objects.

D. Use object holds to enforce immutability for specific objects, and configure lifecycle management rules to transition objects to appropriate storage classes based on age and access patterns.
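
Option D combines two mechanisms: object holds for immutability and lifecycle rules for cost. A held object is protected from deletion and overwrite even when a lifecycle delete rule matches it. A minimal sketch of placing an event-based hold with google-cloud-storage; the bucket and object names are hypothetical:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("compliance-archive")    # hypothetical bucket
blob = bucket.get_blob("ledgers/2024/q1.csv")   # hypothetical object

# While the hold is set, the object cannot be deleted or replaced,
# even by a matching lifecycle delete rule.
blob.event_based_hold = True
blob.patch()
```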

You are working with a small dataset in Cloud Storage that needs to be transformed and loaded into BigQuery for analysis. The transformation involves simple filtering and aggregation operations. You want to use the most efficient and cost-effective data manipulation approach. What should you do?

A. Use Dataproc to create an Apache Hadoop cluster, perform the ETL process using Apache Spark, and load the results into BigQuery.

B. Use BigQuery's SQL capabilities to load the data from Cloud Storage, transform it, and store the results in a new BigQuery table.

C. Create a Cloud Data Fusion instance and visually design an ETL pipeline that reads data from Cloud Storage, transforms it using built-in transformations, and loads the results into BigQuery.

D. Use Dataflow to perform the ETL process that reads the data from Cloud Storage, transforms it using Apache Beam, and writes the results to BigQuery.
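
For a small dataset with simple filtering and aggregation, option B's approach fits in a single query job that reads Cloud Storage through a temporary external table definition, with no cluster or pipeline to provision. A minimal sketch with google-cloud-bigquery; the bucket URI, project, dataset, and column names are assumptions:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Expose the CSV files in Cloud Storage as a temporary external table.
external_config = bigquery.ExternalConfig("CSV")
external_config.source_uris = ["gs://my-bucket/raw/*.csv"]  # hypothetical URI
external_config.autodetect = True

# Filter and aggregate in SQL, writing the result to a new native table.
job_config = bigquery.QueryJobConfig(
    table_definitions={"raw_data": external_config},
    destination="my_project.analytics.daily_totals",  # hypothetical table
)
sql = """
    SELECT region, SUM(amount) AS total_amount
    FROM raw_data
    WHERE amount > 0
    GROUP BY region
"""
client.query(sql, job_config=job_config).result()
```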

You work for a healthcare company that has a large on-premises data system containing patient records with personally identifiable information (PII) such as names, addresses, and medical diagnoses. You need a standardized managed solution that de-identifies PII across all your data feeds prior to ingestion to Google Cloud. What should you do?

A. Use Cloud Run functions to create a serverless data cleaning pipeline. Store the cleaned data in BigQuery.

B. Use Cloud Data Fusion to transform the data. Store the cleaned data in BigQuery.

C. Load the data into BigQuery, and inspect the data by using SQL queries. Use Dataflow to transform the data and remove any errors.

D. Use Apache Beam to read the data and perform the necessary cleaning and transformation operations. Store the cleaned data in BigQuery.
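
As context for option D: hand-rolling the pipeline in Apache Beam means writing and operating code along the lines of the sketch below, which is exactly the maintenance burden a standardized managed service avoids. Field names and paths are hypothetical:

```python
import json

import apache_beam as beam


def redact_pii(record: dict) -> dict:
    """Replace direct identifiers with placeholder values."""
    cleaned = dict(record)
    cleaned["name"] = "[REDACTED]"      # hypothetical PII fields
    cleaned["address"] = "[REDACTED]"
    return cleaned


with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://ingest/records.jsonl")
        | "Parse" >> beam.Map(json.loads)
        | "Redact" >> beam.Map(redact_pii)
        | "Serialize" >> beam.Map(json.dumps)
        | "Write" >> beam.io.WriteToText("gs://clean/records",
                                         file_name_suffix=".jsonl")
    )
```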

You have a Cloud SQL for PostgreSQL database that stores sensitive historical financial data. You need to ensure that the data is uncorrupted and recoverable in the event that the primary region is destroyed. The data is valuable, so you need to prioritize recovery point objective (RPO) over recovery time objective (RTO). You want to recommend a solution that minimizes latency for primary read and write operations. What should you do?

A. Configure the Cloud SQL for PostgreSQL instance for regional availability (HA) with asynchronous replication to a secondary instance in a different region.

B. Configure the Cloud SQL for PostgreSQL instance for multi-region backup locations.

C. Configure the Cloud SQL for PostgreSQL instance for regional availability (HA). Back up the Cloud SQL for PostgreSQL database hourly to a Cloud Storage bucket in a different region.

D. Configure the Cloud SQL for PostgreSQL instance for regional availability (HA) with synchronous replication to a secondary instance in a different zone.
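
The cross-region asynchronous replication in option A is what a Cloud SQL read replica in another region provides: replication is continuous (good RPO) but does not block primary writes (low primary latency). A minimal sketch using the Cloud SQL Admin API through google-api-python-client; the project, instance names, region, and machine tier are assumptions:

```python
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

# Creating an instance with masterInstanceName set makes it a read
# replica; placing it in another region gives cross-region DR.
replica = {
    "name": "finance-db-replica",            # hypothetical names
    "region": "europe-west1",
    "masterInstanceName": "finance-db-primary",
    "settings": {"tier": "db-custom-2-7680"},
}
sqladmin.instances().insert(project="my-project", body=replica).execute()
```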

Your retail company collects customer data from various sources:

Online transactions: Stored in a MySQL database

Customer feedback: Stored as text files on a company server

Social media activity: Streamed in real time from social media platforms

You are designing a data pipeline to extract this data. Which Google Cloud storage system(s) should you select for further analysis and ML model training?

A.
1. Online transactions: Cloud Storage
2. Customer feedback: Cloud Storage
3. Social media activity: Cloud Storage

B.
1. Online transactions: BigQuery
2. Customer feedback: Cloud Storage
3. Social media activity: BigQuery

C.
1. Online transactions: Bigtable
2. Customer feedback: Cloud Storage
3. Social media activity: Cloud SQL for MySQL

D.
1. Online transactions: Cloud SQL for MySQL
2. Customer feedback: BigQuery
3. Social media activity: Cloud Storage
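
One detail worth internalizing here is why BigQuery appears as a landing target for the streamed social media activity in some options: it accepts streaming inserts directly, so events become queryable for analysis and ML training as they arrive. A minimal sketch with google-cloud-bigquery; the table and payload shape are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Stream events into an existing table as they arrive from the feed.
rows = [
    {"platform": "x", "user_id": "u123",
     "text": "Love the new store!", "ts": "2024-01-15T10:00:00Z"},
]
errors = client.insert_rows_json("my_project.social.activity", rows)
if errors:
    raise RuntimeError(f"Streaming insert failed: {errors}")
```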

You want to build a model to predict the likelihood of a customer clicking on an online advertisement. You have historical data in BigQuery that includes features such as user demographics, ad placement, and previous click behavior. After training the model, you want to generate predictions on new data. Which model type should you use in BigQuery ML?

A. Linear regression

B. Matrix factorization

C. Logistic regression

D. K-means clustering
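
Click/no-click is a binary outcome, which is what logistic regression models. In BigQuery ML the entire train-and-predict loop stays in SQL; a minimal sketch issued through google-cloud-bigquery, with hypothetical project, dataset, table, and column names:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Train a binary classifier on the historical click data.
train_sql = """
    CREATE OR REPLACE MODEL `my_project.ads.ctr_model`
    OPTIONS (model_type = 'LOGISTIC_REG', input_label_cols = ['clicked']) AS
    SELECT age_bucket, ad_placement, prior_clicks, clicked
    FROM `my_project.ads.click_history`
"""
client.query(train_sql).result()

# Generate click-probability predictions for new impressions.
predict_sql = """
    SELECT *
    FROM ML.PREDICT(
        MODEL `my_project.ads.ctr_model`,
        (SELECT age_bucket, ad_placement, prior_clicks
         FROM `my_project.ads.new_impressions`))
"""
for row in client.query(predict_sql).result():
    print(row)
```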

Your retail company wants to analyze customer reviews to understand sentiment and identify areas for improvement. Your company has a large dataset of customer feedback text stored in BigQuery that includes diverse language patterns, emojis, and slang. You want to build a solution to classify customer sentiment from the feedback text. What should you do?

A. Preprocess the text data in BigQuery using SQL functions. Export the processed data to AutoML Natural Language for model training and deployment.

B. Export the raw data from BigQuery. Use AutoML Natural Language to train a custom sentiment analysis model.

C. Use Dataproc to create a Spark cluster, perform text preprocessing using Spark NLP, and build a sentiment analysis model with Spark MLlib.

D. Develop a custom sentiment analysis model using TensorFlow. Deploy it on a Compute Engine instance.
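
As context for the AutoML Natural Language options: training begins by creating a text sentiment dataset and importing labeled examples, with no hand-built preprocessing for emojis or slang. A minimal sketch with the google-cloud-automl package, assuming a 0-4 sentiment scale; the project, location, and display name are hypothetical:

```python
from google.cloud import automl

client = automl.AutoMlClient()
parent = "projects/my-project/locations/us-central1"  # hypothetical project

# Sentiment labels range from 0 (most negative) up to sentiment_max.
dataset = automl.Dataset(
    display_name="customer_feedback_sentiment",
    text_sentiment_dataset_metadata=automl.TextSentimentDatasetMetadata(
        sentiment_max=4
    ),
)
created = client.create_dataset(parent=parent, dataset=dataset).result()
print(f"Dataset created: {created.name}")
```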

You have a BigQuery dataset containing sales data. This data is actively queried for the first 6 months. After that, the data is not queried but needs to be retained for 3 years for compliance reasons. You need to implement a data management strategy that meets access and compliance requirements, while keeping cost and administrative overhead to a minimum. What should you do?

A. Use BigQuery long-term storage for the entire dataset. Set up a Cloud Run function to delete the data from BigQuery after 3 years.

B. Partition a BigQuery table by month. After 6 months, export the data to Coldline storage. Implement a lifecycle policy to delete the data from Cloud Storage after 3 years.

C. Set up a scheduled query to export the data to Cloud Storage after 6 months. Write a stored procedure to delete the data from BigQuery after 3 years.

D. Store all data in a single BigQuery table without partitioning or lifecycle policies.
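
Option B's mechanics, sketched with google-cloud-bigquery: partition the table by month so aged data can be exported and dropped a partition at a time, then let a Cloud Storage lifecycle rule (Coldline class plus delete-after-3-years) take over. The table name, schema, partition decorator, and bucket are all hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Month-partitioned sales table; partitions age out as a unit.
table = bigquery.Table(
    "my_project.sales.transactions",
    schema=[
        bigquery.SchemaField("sale_date", "DATE"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.MONTH, field="sale_date"
)
client.create_table(table)

# After 6 months, export a whole partition to Cloud Storage, where a
# lifecycle policy handles the Coldline tier and the 3-year deletion.
client.extract_table(
    "my_project.sales.transactions$202401",  # partition decorator
    "gs://sales-archive/transactions-202401-*.avro",
    job_config=bigquery.ExtractJobConfig(destination_format="AVRO"),
).result()
```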