You have an application on a general-purpose Compute Engine instance that is experiencing excessive disk read throttling on its Zonal SSD Persistent Disk. The application primarily reads large files from disk. The disk size is currently 350 GB. You want to provide the maximum amount of throughput while minimizing costs. What should you do?
Correct Answer: C
Standard persistent disks are efficient and economical for handling sequential read/write operations, but they aren't optimized to handle high rates of random input/output operations per second (IOPS). If your apps require high rates of random IOPS, use SSD persistent disks. SSD persistent disks are designed for single-digit millisecond latencies; observed latency is application specific.
Reference: https://cloud.google.com/compute/docs/disks/performance
Local SSDs: Local SSDs are physically attached to the server that hosts your VM instance. They have higher throughput and lower latency than standard persistent disks or SSD persistent disks. The data that you store on a local SSD persists only until the instance is stopped or deleted. Each local SSD is 375 GB in size, but you can attach a maximum of 24 local SSD partitions for a total of 9 TB per instance.
Performance: Local SSDs are designed to offer very high IOPS and low latency. Unlike persistent disks, you must manage the striping on local SSDs yourself. Combine multiple local SSD partitions into a single logical volume to achieve the best local SSD performance per instance, or format local SSD partitions individually. Local SSD performance depends on which interface you select; Local SSDs are available through both SCSI and NVMe interfaces.
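A rough sketch of the striping approach described above (the instance name, zone, machine type, and two-partition NVMe layout are illustrative assumptions, not part of the question):

    # Create a VM with two NVMe Local SSD partitions attached (names are placeholders).
    gcloud compute instances create my-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --local-ssd=interface=NVME \
        --local-ssd=interface=NVME

    # On the VM: combine the Local SSD partitions into one striped (RAID 0) logical
    # volume for maximum throughput, then format and mount it.
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 \
        /dev/disk/by-id/google-local-nvme-ssd-0 /dev/disk/by-id/google-local-nvme-ssd-1
    sudo mkfs.ext4 -F /dev/md0
    sudo mkdir -p /mnt/disks/local-ssd
    sudo mount /dev/md0 /mnt/disks/local-ssd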
Associate-Cloud-Engineer Exam Question 7
You have a web application deployed as a managed instance group. You have a new version of the application to gradually deploy. Your web application is currently receiving live web traffic. You want to ensure that the available capacity does not decrease during the deployment. What should you do?
Associate-Cloud-Engineer Exam Question 8
Your company is modernizing its applications and refactoring them to containerized microservices. You need to deploy the infrastructure on Google Cloud so that teams can deploy their applications. The applications cannot be exposed publicly. You want to minimize management and operational overhead. What should you do?
Correct Answer: C
GKE Autopilot is a mode of operation in GKE where Google manages the underlying infrastructure, including nodes, node pools, and their upgrades. This significantly reduces the management and operational overhead for the user, allowing teams to focus solely on deploying and managing their containerized applications. Since the applications are not exposed publicly, the zonal or regional nature of the cluster primarily impacts availability within Google Cloud, and Autopilot is available for both. Autopilot minimizes the operational burden, which is a key requirement.
Option A: A Standard zonal GKE cluster requires you to manage the nodes yourself, including sizing, scaling, and upgrades, increasing operational overhead compared to Autopilot.
Option B: Manually installing and managing Kubernetes on a fleet of Compute Engine instances involves the highest level of management overhead, which contradicts the requirement to minimize it.
Option D: A Standard regional GKE cluster provides higher availability than a zonal cluster by replicating the control plane and nodes across multiple zones within a region. However, it still requires you to manage the underlying nodes, unlike Autopilot.
Reference: The different modes of GKE operation, including Standard and Autopilot, and their respective management responsibilities and benefits, are outlined in the Google Kubernetes Engine documentation, a core topic for the Associate Cloud Engineer certification. The emphasis on reduced operational overhead with Autopilot is a key differentiator.
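A minimal sketch of creating an Autopilot cluster with private nodes and deploying a workload to it (the cluster name, region, and manifest file name are placeholders):

    # Create an Autopilot cluster; --enable-private-nodes gives nodes internal IPs only,
    # so workloads are not exposed publicly.
    gcloud container clusters create-auto apps-cluster \
        --region=us-central1 \
        --enable-private-nodes

    # Fetch credentials and deploy a team's workload (deployment.yaml is a placeholder).
    gcloud container clusters get-credentials apps-cluster --region=us-central1
    kubectl apply -f deployment.yaml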
Associate-Cloud-Engineer Exam Question 9
You have deployed an application on a Compute Engine instance. An external consultant needs to access the Linux-based instance. The consultant is connected to your corporate network through a VPN connection, but the consultant has no Google account. What should you do?
Correct Answer: C
The best option is to instruct the external consultant to generate an SSH key pair and request the public key from the consultant. Then add the public key to the instance yourself, and have the consultant access the instance through SSH with their private key. This way, you grant the consultant access to the instance without requiring a Google account or exposing the instance's public IP address. This option also follows the best practice of using user-managed SSH keys instead of service account keys for SSH access [1].
Option A is not feasible because the external consultant does not have a Google account, and therefore cannot use Identity-Aware Proxy (IAP) to access the instance. IAP requires the user to authenticate with a Google account and have the appropriate IAM permissions to access the instance [2].
Option B is not secure because it exposes the instance's public IP address, which increases the risk of unauthorized access or attacks.
Option D is not correct because it reverses the roles of the public and private keys. The public key should be added to the instance, and the private key should be kept by the consultant; sharing the private key with anyone else can compromise the security of the SSH connection [3].
[1] https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
[2] https://cloud.google.com/iap/docs/using-tcp-forwarding
[3] https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances
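A hedged sketch of that workflow; the instance name, zone, username, and key file name are placeholders, and the ssh-keys metadata value must use the USERNAME:PUBLIC_KEY format:

    # Consultant's machine: generate an SSH key pair ("consultant" is a placeholder username).
    ssh-keygen -t rsa -f ~/.ssh/consultant-key -C consultant

    # Your machine: add the consultant's *public* key to the instance metadata in the
    # USERNAME:PUBLIC_KEY format (this sets the instance's ssh-keys metadata value, so
    # append to the existing value instead if other per-instance keys are already present).
    gcloud compute instances add-metadata my-instance --zone=us-central1-a \
        --metadata ssh-keys="consultant:$(cat consultant-key.pub)"

    # Consultant's machine: connect over the VPN to the instance's internal IP.
    ssh -i ~/.ssh/consultant-key consultant@INTERNAL_IP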
Associate-Cloud-Engineer Exam Question 10
You need to create a Compute Engine instance in a new project that doesn't exist yet. What should you do?
Correct Answer: A
Reference: https://cloud.google.com/sdk/gcloud/reference/projects/create
Quickstart: Creating a New Instance Using the Command Line
Before you begin:
1. In the Cloud Console, on the project selector page, select or create a Cloud project.
2. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
To use the gcloud command-line tool for this quickstart, you must first install and initialize the Cloud SDK:
1. Download and install the Cloud SDK using the instructions given on Installing Google Cloud SDK.
2. Initialize the SDK using the instructions given on Initializing Cloud SDK.
To use gcloud in Cloud Shell for this quickstart, first activate Cloud Shell using the instructions given on Starting Cloud Shell.
Reference: https://cloud.google.com/ai-platform/deep-learning-vm/docs/quickstart-cli#before-you-begin
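A minimal sketch of that flow with gcloud, assuming placeholder project, billing account, and instance values (on older SDK releases the billing command lives under gcloud beta billing):

    # Create the new project and link a billing account (IDs are placeholders).
    gcloud projects create my-new-project
    gcloud billing projects link my-new-project --billing-account=0X0X0X-0X0X0X-0X0X0X

    # Point gcloud at the new project and enable the Compute Engine API.
    gcloud config set project my-new-project
    gcloud services enable compute.googleapis.com

    # Create the Compute Engine instance in the new project.
    gcloud compute instances create my-instance \
        --zone=us-central1-a \
        --machine-type=e2-medium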