A) primary key
B) unique key
C) row key
D) master key
Correct Answer
verified
Multiple Choice
A) Primitive role
B) Predefined role
C) Authorized view
D) It's not possible to give access to only the first three columns of a table.
Correct Answer
verified
Multiple Choice
A) Check the dashboard application to see if it is not displaying correctly.
B) Run a fixed dataset through the Cloud Dataflow pipeline and analyze the output.
C) Use Google Stackdriver Monitoring on Cloud Pub/Sub to find the missing messages.
D) Switch Cloud Dataflow to pull messages from Cloud Pub/Sub instead of Cloud Pub/Sub pushing messages to Cloud Dataflow.
Correct Answer
verified
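Option B above is testable locally with the Apache Beam Python SDK before touching production. A minimal sketch, assuming a hypothetical parse_event transform standing in for the real pipeline logic:

```python
import apache_beam as beam
from apache_beam.testing.test_pipeline import TestPipeline
from apache_beam.testing.util import assert_that, equal_to

# Hypothetical transform standing in for the production pipeline logic.
def parse_event(line):
    user, value = line.split(",")
    return (user, int(value))

fixed_input = ["alice,1", "bob,2"]  # fixed dataset with a known expected output

with TestPipeline() as p:
    output = p | beam.Create(fixed_input) | beam.Map(parse_event)
    # If records are dropped or corrupted, this assertion fails, isolating
    # the problem to the pipeline logic rather than Pub/Sub delivery.
    assert_that(output, equal_to([("alice", 1), ("bob", 2)]))
```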
Multiple Choice
A) Cloud Speech-to-Text API
B) Cloud Natural Language API
C) Dialogflow Enterprise Edition
D) Cloud AutoML Natural Language
Correct Answer
verified
Multiple Choice
A) cron
B) Cloud Composer
C) Cloud Scheduler
D) Workflow Templates on Cloud Dataproc
Correct Answer
verified
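If option B (Cloud Composer) is the answer in play, the schedule is expressed as an Airflow DAG. A minimal sketch with hypothetical IDs, using Airflow 2 import paths:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily workflow; dag_id, task_id, and the command are placeholders.
with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_job = BashOperator(
        task_id="run_export",
        bash_command="echo 'launch the real job here'",
    )
```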
Multiple Choice
A) Allocate sufficient memory to the Hadoop cluster, so that the intermediate data of that particular Hadoop job can be held in memory
B) Allocate sufficient persistent disk space to the Hadoop cluster, and store the intermediate data of that particular Hadoop job on native HDFS
C) Allocate more CPU cores to the virtual machine instances of the Hadoop cluster so that the networking bandwidth of each instance can scale up
D) Allocate an additional network interface card (NIC), and configure link aggregation in the operating system to use the combined throughput when working with Cloud Storage
Correct Answer
verified
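Option B translates to provisioning worker disks sized for the job's shuffle data. A sketch using the google-cloud-dataproc Python client; the project, region, and sizes are hypothetical:

```python
from google.cloud import dataproc_v1

region = "us-central1"  # hypothetical region
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "cluster_name": "etl-cluster",  # hypothetical name
    "config": {
        "worker_config": {
            "num_instances": 4,
            # Larger boot disks back native HDFS, keeping the job's
            # intermediate (shuffle) data off the Cloud Storage network path.
            "disk_config": {"boot_disk_size_gb": 1000},
        }
    },
}

operation = client.create_cluster(
    request={"project_id": "my-project", "region": region, "cluster": cluster}
)
operation.result()  # block until the cluster is provisioned
```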
Multiple Choice
A) Organize your data in a single table, then export, compress, and store the BigQuery data in Cloud Storage.
B) Organize your data in separate tables for each month, and export, compress, and store the data in Cloud Storage.
C) Organize your data in separate tables for each month, and duplicate your data on a separate dataset in BigQuery.
D) Organize your data in separate tables for each month, and use snapshot decorators to restore the table to a time prior to the corruption.
Correct Answer
verified
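Options A and B both rely on BigQuery's compressed export path; exporting per-month tables (option B) keeps restores granular. A sketch with hypothetical names:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

job_config = bigquery.ExtractJobConfig(
    compression="GZIP",
    destination_format="NEWLINE_DELIMITED_JSON",
)
extract_job = client.extract_table(
    "my-project.my_dataset.events_201901",            # hypothetical monthly table
    "gs://my-backup-bucket/events_201901/*.json.gz",  # wildcard shards large exports
    job_config=job_config,
)
extract_job.result()  # after corruption, only this month is re-loaded
```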
Multiple Choice
A) Assign the users/groups data viewer access at the table level for each table
B) Create SQL views for each team in the same dataset in which the data resides, and assign the users/groups data viewer access to the SQL views
C) Create authorized views for each team in the same dataset in which the data resides, and assign the users/groups data viewer access to the authorized views
D) Create authorized views for each team in datasets created for each team. Assign the authorized views data viewer access to the dataset in which the data resides. Assign the users/groups data viewer access to the datasets in which the authorized views reside
Correct Answer
verified
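A sketch of the pattern in option D, using the google-cloud-bigquery client with hypothetical project, dataset, and table names:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# 1. Create the view in a dataset owned by the consuming team.
view = bigquery.Table("my-project.team_a_views.filtered_orders")
view.view_query = """
    SELECT order_id, order_date
    FROM `my-project.source_data.orders`
"""
view = client.create_table(view)

# 2. Authorize the view on the source dataset so the view can read the
#    underlying table without users ever touching the raw data.
source = client.get_dataset("my-project.source_data")
entries = list(source.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source.access_entries = entries
client.update_dataset(source, ["access_entries"])

# 3. Grant the users/groups BigQuery Data Viewer on team_a_views only.
```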
Multiple Choice
A) Use Cloud ML Engine for training existing Spark ML models
B) Rewrite your models on TensorFlow, and start using Cloud ML Engine
C) Use Cloud Dataproc for training existing Spark ML models, but start reading data directly from BigQuery
D) Spin up a Spark cluster on Compute Engine, and train Spark ML models on the data exported from BigQuery
Correct Answer
verified
Multiple Choice
A) 500 TB
B) 1 GB
C) 1 TB
D) 500 GB
Correct Answer
verified
Multiple Choice
A) Put the data into Google Cloud Storage.
B) Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.
C) Tune the Cloud Dataproc cluster so that there is just enough disk for all data.
D) Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.
Correct Answer
verified
Multiple Choice
A) Perform a DML INSERT, UPDATE, or DELETE to replicate each individual CDC record in real time directly on the reporting table.
B) Insert each new CDC record and corresponding operation type to a staging table in real time.
C) Periodically DELETE outdated records from the reporting table.
D) Periodically use a DML MERGE to perform several DML INSERT, UPDATE, and DELETE operations at the same time on the reporting table.
E) Insert each new CDC record and corresponding operation type in real time to the reporting table, and use a materialized view to expose only the newest version of each unique record.
Correct Answer
verified
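Options B and D together describe the staging-table-plus-MERGE pattern. A sketch of the periodic MERGE with hypothetical table and column names:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# One MERGE applies staged inserts, updates, and deletes atomically,
# instead of issuing row-by-row DML against the reporting table.
merge_sql = """
MERGE `my-project.reporting.customers` AS t
USING `my-project.staging.customer_cdc` AS s
ON t.customer_id = s.customer_id
WHEN MATCHED AND s.op = 'D' THEN
  DELETE
WHEN MATCHED AND s.op = 'U' THEN
  UPDATE SET name = s.name, email = s.email
WHEN NOT MATCHED AND s.op = 'I' THEN
  INSERT (customer_id, name, email) VALUES (s.customer_id, s.name, s.email)
"""
client.query(merge_sql).result()  # run periodically, e.g. from a scheduler
```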
Multiple Choice
A) Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster via the command line.
B) Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the bottleneck and adjust cluster resources.
C) Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker instances.
D) Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-default Compute Engine machine types when needed.
Correct Answer
verified
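For options C and D, the autoscaling behavior is set through pipeline options. A sketch of option C's default (throughput-based) autoscaling, with hypothetical project and bucket names:

```python
from apache_beam.options.pipeline_options import PipelineOptions

# THROUGHPUT_BASED is Dataflow's default autoscaling algorithm; workers
# are added or removed as the job's backlog (system lag) changes.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
    autoscaling_algorithm="THROUGHPUT_BASED",
)
```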
Multiple Choice
A) Add capacity (memory and disk space) to the database server, on the order of 200 times.
B) Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.
C) Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.
D) Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.
Correct Answer
verified
Multiple Choice
A) Use gcloud kms keys create to create a symmetric key. Then use gcloud kms encrypt to encrypt each archival file with the key and unique additional authenticated data (AAD). Use gsutil cp to upload each encrypted file to the Cloud Storage bucket, and keep the AAD outside of Google Cloud.
B) Use gcloud kms keys create to create a symmetric key. Then use gcloud kms encrypt to encrypt each archival file with the key. Use gsutil cp to upload each encrypted file to the Cloud Storage bucket. Manually destroy the key previously used for encryption, and rotate the key once.
C) Specify a customer-supplied encryption key (CSEK) in the .boto configuration file. Use gsutil cp to upload each archival file to the Cloud Storage bucket. Save the CSEK in Cloud Memorystore as permanent storage of the secret.
D) Specify a customer-supplied encryption key (CSEK) in the .boto configuration file. Use gsutil cp to upload each archival file to the Cloud Storage bucket. Save the CSEK in a different project that only the security team can access.
Correct Answer
verified
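A sketch of option A's flow using the google-cloud-kms and google-cloud-storage clients; all resource names are hypothetical, and note that a single KMS encrypt call accepts at most 64 KiB of plaintext (larger archives normally use envelope encryption instead):

```python
from google.cloud import kms, storage

key_name = kms.KeyManagementServiceClient.crypto_key_path(
    "my-project", "us-east1", "archive-ring", "archive-key"  # hypothetical
)
kms_client = kms.KeyManagementServiceClient()

# Unique AAD per file; per the option, this value is kept outside Google Cloud.
aad = b"file-0001-unique-context"

with open("archive-0001.tar", "rb") as f:  # small file, within the 64 KiB limit
    response = kms_client.encrypt(
        request={
            "name": key_name,
            "plaintext": f.read(),
            "additional_authenticated_data": aad,
        }
    )

# Upload only the ciphertext; decrypting later needs both the key and the AAD.
bucket = storage.Client().bucket("my-archive-bucket")
bucket.blob("archive-0001.tar.enc").upload_from_string(response.ciphertext)
```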
Multiple Choice
A) Set up the Pub/Sub emulator on your local machine. Validate the behavior of your new subscriber logic before deploying it to production.
B) Create a Pub/Sub snapshot before deploying new subscriber code. Use a Seek operation to re-deliver messages that became available after the snapshot was created.
C) Use Cloud Build for your deployment. If an error occurs after deployment, use a Seek operation to locate a timestamp logged by Cloud Build at the start of the deployment.
D) Enable dead-lettering on the Pub/Sub topic to capture messages that aren't successfully acknowledged. If an error occurs after deployment, re-deliver any messages captured by the dead-letter queue.
Correct Answer
verified
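Option B maps directly onto the Pub/Sub snapshot and seek APIs. A sketch with hypothetical resource names:

```python
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("my-project", "my-sub")
snapshot = subscriber.snapshot_path("my-project", "pre-deploy")

# Before deploying new subscriber code: capture the subscription's
# unacknowledged (and future) message state.
subscriber.create_snapshot(request={"name": snapshot, "subscription": subscription})

# If the new code mishandles messages: rewind the subscription so that
# everything since the snapshot is redelivered.
subscriber.seek(request={"subscription": subscription, "snapshot": snapshot})
```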
Multiple Choice
A) Relational
B) MySQL
C) NoSQL
D) SQL Server
Correct Answer
verified
Multiple Choice
A) BigQuery
B) Cloud SQL
C) Cloud Bigtable
D) Cloud Datastore
Correct Answer
verified
Multiple Choice
A) Create a Stackdriver Monitoring dashboard based on the BigQuery metric query/scanned_bytes
B) Create a Stackdriver Monitoring dashboard based on the BigQuery metric slots/allocated_for_project
C) Create a log export for each project, capture the BigQuery job execution logs, create a custom metric based on totalSlotMs, and create a Stackdriver Monitoring dashboard based on the custom metric
D) Create an aggregated log export at the organization level, capture the BigQuery job execution logs, create a custom metric based on totalSlotMs, and create a Stackdriver Monitoring dashboard based on the custom metric
Correct Answer
verified
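Options C and D both start from a log-based metric over BigQuery job-completion entries. A partial sketch with the google-cloud-logging client (hypothetical project; this high-level helper creates counter metrics, so extracting totalSlotMs as a distribution value still has to be configured on the metric's value extractor):

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")  # hypothetical project

# Filter matching BigQuery job-completion audit log entries.
job_filter = (
    'resource.type="bigquery_resource" '
    'AND protoPayload.methodName="jobservice.jobcompleted"'
)

metric = client.metric(
    "bq-total-slot-ms",
    filter_=job_filter,
    description="BigQuery job completions, for slot-usage dashboards",
)
metric.create()
```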
Multiple Choice
A) Use Cloud Vision AutoML with the existing dataset.
B) Use Cloud Vision AutoML, but reduce the size of your dataset by half.
C) Use Cloud Vision API by providing custom labels as recognition hints.
D) Train your own image recognition model leveraging transfer learning techniques.
Correct Answer
verified
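Option D's transfer-learning idea in miniature: freeze an ImageNet-pretrained backbone and train only a small classification head on the limited dataset. A sketch in TensorFlow/Keras; num_classes and the training data are hypothetical:

```python
import tensorflow as tf

num_classes = 5  # hypothetical number of image categories
base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", pooling="avg"
)
base.trainable = False  # reuse learned features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_images, train_labels, epochs=5) on the small labeled set.
```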