Google Professional Cloud Network Engineer Exam Dumps and Practice Test Questions Set 7 Q91 – 105


Question 91

Your organization needs to connect multiple VPCs across different projects while maintaining centralized control over routing and firewall policies. Which architecture should you implement?

A) VPC Peering between all VPCs

B) Shared VPC with a host project

C) Cloud VPN connections between VPCs

D) Cloud Interconnect for each VPC

Answer: B

Explanation:

A Shared VPC with a host project is the optimal architecture for connecting multiple VPCs across different projects while maintaining centralized control over routing and firewall policies. Shared VPC allows an organization to define a common VPC network in a host project and share subnets with service projects, enabling resources in service projects to communicate using internal IP addresses while network administrators maintain centralized control over network resources. This architecture provides the governance, security, and operational efficiency required for enterprise environments.

In a Shared VPC architecture, one project is designated as the host project containing the shared VPC network with all its subnets, routes, and firewall rules. Other projects become service projects that are attached to the host project, gaining the ability to create resources in shared subnets. Network administrators with appropriate permissions in the host project control all networking aspects including subnet creation, IP address allocation, routing configuration, and firewall policies. Service project administrators can create compute resources like virtual machines and managed instance groups in the shared subnets but cannot modify network configurations. This separation of concerns enables centralized network governance while allowing application teams autonomy over their compute resources.
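
As a rough illustration, the sketch below uses Python to invoke the gcloud CLI and walks through the two administrative steps described above; the project IDs are hypothetical placeholders and the caller is assumed to hold the Shared VPC Admin role.

```python
# Minimal sketch: designate a host project for Shared VPC and attach a service
# project to it. Project IDs are hypothetical; requires the Shared VPC Admin role.
import subprocess

HOST_PROJECT = "net-host-prod"       # hypothetical host project owning the shared network
SERVICE_PROJECT = "app-team-a"       # hypothetical service project using the shared subnets

# Enable the host project for Shared VPC.
subprocess.run(["gcloud", "compute", "shared-vpc", "enable", HOST_PROJECT], check=True)

# Attach the service project to the host project.
subprocess.run(
    ["gcloud", "compute", "shared-vpc", "associated-projects", "add",
     SERVICE_PROJECT, "--host-project", HOST_PROJECT],
    check=True,
)
```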

Shared VPC provides several important benefits for enterprise organizations. It enables centralized network administration where a single team controls routing, firewall rules, and network topology across multiple projects, ensuring consistent security policies and simplifying compliance. It allows efficient IP address management by allocating address space from a common pool rather than fragmenting address space across independent VPCs. Resources across service projects can communicate using private IP addresses without requiring VPC peering, reducing complexity and avoiding peering limitations. The architecture supports hierarchical organization structures where business units or application teams operate in separate projects while sharing common network infrastructure. Billing separation is maintained as each project is charged for its own resources despite sharing the network. This combination of centralized control with project-level isolation makes Shared VPC the preferred architecture for most enterprise GCP deployments.

Option A, VPC Peering between all VPCs, creates a mesh topology that becomes complex to manage with many VPCs. Peering does not provide centralized control as each VPC maintains independent routing and firewall policies, and transitive peering is not supported.

Option C, Cloud VPN connections between VPCs, introduces unnecessary complexity and cost for connecting VPCs within GCP. VPN is designed for hybrid connectivity rather than inter-VPC communication and would not provide centralized policy control.

Option D, Cloud Interconnect for each VPC, is designed for connecting on-premises networks to GCP, not for connecting VPCs to each other. This would be an inappropriate and expensive solution for the stated requirement.

Shared VPC is the enterprise-standard architecture for multi-project GCP deployments requiring centralized network governance and control.

Question 92

You need to implement a highly available Cloud VPN connection between your on-premises network and GCP. What is the recommended architecture?

A) Single Classic VPN gateway with one tunnel

B) HA VPN with two tunnels to different on-premises devices

C) Multiple Classic VPN gateways in different regions

D) Single HA VPN gateway with one tunnel

Answer: B

Explanation:

HA VPN with two tunnels to different on-premises devices is the recommended architecture for implementing highly available Cloud VPN connectivity between on-premises networks and GCP. HA VPN provides a 99.99% service availability SLA when properly configured with redundant tunnels, significantly higher than Classic VPN which offers no SLA. This architecture eliminates single points of failure by using redundant gateways and tunnels, ensuring continuous connectivity even during maintenance or component failures.

HA VPN uses two external IP addresses on the GCP side, each associated with a separate interface on the HA VPN gateway. For maximum availability, these two interfaces should connect via separate tunnels to two different peer VPN devices on the customer side, creating full redundancy at both ends. If the customer has only one on-premises VPN device, the configuration should still create two tunnels from the HA VPN gateway’s two interfaces to that single device, providing redundancy on the GCP side. The 99.99% SLA applies only when tunnels are configured from both HA VPN gateway interfaces in one of the supported peer topologies defined by Google Cloud.
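
The sketch below, a Python wrapper around illustrative gcloud commands, shows the main resources involved: an HA VPN gateway, a Cloud Router, a representation of the peer side with two interfaces, and one tunnel per gateway interface. All names, IP addresses, the ASN, and the shared secret are hypothetical, and the BGP sessions on the Cloud Router still need to be configured afterwards.

```python
# Minimal sketch of an HA VPN deployment with one tunnel per gateway interface.
# Names, peer IPs, ASN, and shared secret are hypothetical placeholders.
import subprocess

REGION, NETWORK = "us-central1", "prod-vpc"

commands = [
    # HA VPN gateway: Google allocates two external IPs, one per interface.
    ["gcloud", "compute", "vpn-gateways", "create", "ha-vpn-gw",
     "--network", NETWORK, "--region", REGION],
    # Cloud Router that will run BGP over the tunnels.
    ["gcloud", "compute", "routers", "create", "vpn-router",
     "--network", NETWORK, "--region", REGION, "--asn", "65010"],
    # The peer side, modeled with two interfaces (two devices, or one device
    # with two public IPs).
    ["gcloud", "compute", "external-vpn-gateways", "create", "on-prem-gw",
     "--interfaces", "0=203.0.113.10,1=203.0.113.11"],
]

# One tunnel from each HA VPN gateway interface to the matching peer interface.
for i in ("0", "1"):
    commands.append(
        ["gcloud", "compute", "vpn-tunnels", "create", f"tunnel-{i}",
         "--region", REGION, "--vpn-gateway", "ha-vpn-gw", "--interface", i,
         "--peer-external-gateway", "on-prem-gw",
         "--peer-external-gateway-interface", i,
         "--router", "vpn-router", "--ike-version", "2",
         "--shared-secret", "example-shared-secret"])

for cmd in commands:
    subprocess.run(cmd, check=True)
```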

The HA VPN architecture provides several advantages over Classic VPN. It uses dynamic routing via BGP to automatically detect failures and reroute traffic through operational tunnels without manual intervention or route updates. The dual-tunnel configuration provides active-active bandwidth aggregation where traffic load balances across both tunnels when both are operational. During tunnel or device failures, traffic automatically fails over to the remaining operational path typically within seconds, minimizing disruption. The redundant external IP addresses on separate gateway interfaces protect against GCP-side infrastructure issues. Configuration is simplified through predefined topology options that guide administrators through proper setup for different peer configurations. HA VPN scales better than Classic VPN for large deployments because additional tunnels can be added behind the same gateway and routed dynamically via BGP. Organizations should always choose HA VPN over Classic VPN for production workloads requiring reliable connectivity.

Option A, Single Classic VPN gateway with one tunnel, provides no redundancy and no SLA. Any failure of the tunnel, gateway, or on-premises device completely interrupts connectivity. Classic VPN is deprecated for new deployments.

Option C, Multiple Classic VPN gateways in different regions, adds complexity without providing the same availability guarantees. Classic VPN lacks SLA regardless of configuration, and cross-region tunnels introduce unnecessary latency.

Option D, Single HA VPN gateway with one tunnel, does not meet the requirements for the 99.99% SLA. While using HA VPN hardware, a single tunnel configuration does not provide the redundancy necessary for high availability.

HA VPN with redundant tunnels is the only architecture that provides SLA-backed high availability for GCP VPN connectivity.

Question 93

Your application requires low-latency access to a managed database service from multiple regions. Which load balancing solution should you use?

A) External HTTP(S) Load Balancer

B) Internal TCP/UDP Load Balancer

C) External TCP/UDP Network Load Balancer

D) Traffic Director

Answer: A

Explanation:

An External HTTP(S) Load Balancer is the appropriate solution for providing low-latency access to a managed database service from multiple regions when the database is accessed via HTTP or HTTPS APIs. This global load balancer uses Google’s global network infrastructure with anycast IP addresses to route users to the nearest healthy backend based on network proximity, minimizing latency. The load balancer integrates with Cloud CDN for caching and provides sophisticated traffic management capabilities.

The External HTTP(S) Load Balancer operates at Layer 7, allowing it to make intelligent routing decisions based on HTTP headers, paths, and other application-layer information. When configured with backend services in multiple regions, the load balancer automatically directs each user request to the geographically closest available backend that has capacity. This proximity-based routing minimizes network latency by reducing the distance packets travel. The load balancer performs health checks against backends and automatically removes unhealthy instances from the serving pool, ensuring traffic only reaches operational backends. Cross-region load balancing provides automatic failover where traffic from one region can be served by backends in another region if the local backends become unavailable.

Several features make the External HTTP(S) Load Balancer suitable for global applications. It provides a single global anycast IP address that users worldwide can access, with Google’s edge network routing traffic optimally to backend resources. Cloud Armor integration protects against DDoS attacks and provides application-level access controls. SSL/TLS termination at the load balancer offloads encryption overhead from backend servers. URL mapping and host rules enable sophisticated request routing to different backend services based on request characteristics. Connection draining and session affinity ensure graceful handling of backend changes and maintain user experience. The load balancer scales automatically to handle traffic spikes without manual intervention. For database services exposed via HTTP APIs like Cloud SQL Admin API or application-level database services, this load balancer provides the global reach and low latency required.

Option B, Internal TCP/UDP Load Balancer, is a regional resource designed for internal traffic within a VPC network. It cannot serve traffic from the internet or provide global load balancing across multiple regions.

Option C, External TCP/UDP Network Load Balancer, is a regional load balancer that operates at Layer 4. While it can handle external traffic, it does not provide global load balancing or proximity-based routing across multiple regions.

Option D, Traffic Director, is a service mesh control plane for managing traffic between services, typically for microservices architectures. It is not designed for external load balancing to managed database services.

The External HTTP(S) Load Balancer provides the global reach, low latency, and reliability required for multi-region application access.

Question 94

You need to analyze network traffic for security threats and compliance in your VPC. Which service should you use?

A) Cloud Monitoring

B) VPC Flow Logs

C) Cloud Logging

D) Packet Mirroring

Answer: D

Explanation:

Packet Mirroring is the appropriate service for analyzing network traffic for security threats and compliance when deep packet inspection is required. Packet Mirroring clones traffic from specified instances and forwards copies to monitoring and analysis tools, enabling security teams to inspect actual packet contents including payload data. This capability is essential for intrusion detection systems, network forensics, application performance monitoring, and compliance requirements that mandate traffic inspection.

Packet Mirroring works by configuring policies that specify which traffic should be mirrored based on source instances, subnets, or tags, and where mirrored traffic should be sent. The service creates exact copies of packets including all headers and payloads, forwarding them to collector instances running security and monitoring tools. Mirroring can be configured for all traffic from specified sources or filtered based on protocol, IP addresses, or port numbers to focus on relevant traffic. Mirrored traffic is delivered to a collector destination, which is an internal TCP/UDP load balancer configured for Packet Mirroring that distributes the copied packets across multiple analysis instances, enabling scalable inspection architectures.
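
As an example of what such a policy looks like, the sketch below (Python calling gcloud, with hypothetical resource names) mirrors TCP traffic from one subnet to a collector internal load balancer identified by its forwarding rule.

```python
# Minimal sketch: mirror TCP traffic from one subnet to a collector internal
# load balancer. All names are hypothetical; the forwarding rule must belong to
# an internal TCP/UDP load balancer created for Packet Mirroring collectors.
import subprocess

subprocess.run(
    ["gcloud", "compute", "packet-mirrorings", "create", "ids-mirroring",
     "--region", "us-central1",
     "--network", "prod-vpc",
     "--mirrored-subnets", "app-subnet",        # mirror everything in this subnet
     "--filter-protocols", "tcp",               # narrow the mirror to TCP traffic
     "--collector-ilb", "ids-collector-rule"],  # forwarding rule of the collector ILB
    check=True,
)
```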

Packet Mirroring enables several critical security and monitoring use cases. Intrusion detection and prevention systems can inspect mirrored traffic for attack signatures, anomalous behavior, and policy violations without being in the direct traffic path. Network forensics tools can capture and analyze traffic for incident response and security investigations. Application performance monitoring solutions can examine actual application traffic to diagnose performance issues. Compliance requirements for regulated industries often mandate traffic inspection and logging, which Packet Mirroring facilitates. The mirroring occurs transparently to applications and users, introducing minimal performance impact on production traffic. Security tools can operate independently from application infrastructure, analyzing traffic without requiring changes to application deployment or architecture.

Option A, Cloud Monitoring, collects metrics, creates dashboards, and generates alerts based on time-series data. While valuable for operational monitoring, it does not provide packet-level traffic analysis or deep packet inspection.

Option B, VPC Flow Logs, captures metadata about network connections including source and destination IPs, ports, protocols, and byte counts. Flow logs provide connection-level visibility but do not capture actual packet payloads needed for security threat analysis.

Option C, Cloud Logging, aggregates log data from various sources for analysis and retention. While useful for security audit logs and application logs, it does not capture or analyze network packet data.

Packet Mirroring is the essential service for security monitoring requiring deep inspection of actual network traffic contents.

Question 95

Your organization wants to enforce that all VM instances use only approved disk images. How should you implement this control?

A) Use VPC Service Controls

B) Implement Organization Policy constraints

C) Configure IAM permissions

D) Use Cloud Armor security policies

Answer: B

Explanation:

Implementing Organization Policy constraints is the correct approach for enforcing that all VM instances use only approved disk images across your GCP organization. Organization Policies provide centralized governance controls that constrain resource configurations, preventing users from creating resources that violate organizational standards even if they have IAM permissions to create resources. The specific constraint for controlling allowed images ensures consistent security baselines and compliance across all projects.

Organization Policies work by defining constraints at the organization, folder, or project level in the resource hierarchy. For controlling VM images, the compute.trustedImageProjects constraint specifies which projects contain approved images that can be used for creating instances. When this constraint is configured, users can only create instances from images in the specified trusted projects, preventing the use of public images or images from untrusted sources. The compute.storageResourceUseRestrictions constraint can further limit which projects’ and folders’ storage resources, including disks, images, and snapshots, may be used. These constraints are evaluated when instances are created, and creation attempts violating the policy are rejected.
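
A minimal sketch of setting this constraint with the gcloud resource-manager org-policies command is shown below; the image project and organization ID are hypothetical placeholders, and the same policy can equally be expressed as a YAML file applied with the newer gcloud org-policies set-policy command.

```python
# Minimal sketch: allow VM creation only from images stored in one approved
# project. Organization ID and image project are hypothetical placeholders.
import subprocess

subprocess.run(
    ["gcloud", "resource-manager", "org-policies", "allow",
     "compute.trustedImageProjects",
     "projects/trusted-images-prod",      # project holding hardened, approved images
     "--organization", "123456789012"],
    check=True,
)
```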

Organization Policies provide several advantages for governance and compliance. They enforce controls preventively rather than reactively, blocking non-compliant resource creation rather than detecting violations after the fact. Policies inherit through the resource hierarchy, allowing centralized definition at the organization level while permitting exceptions at lower levels when necessary. The controls apply regardless of how resources are created, whether through Console, gcloud commands, API calls, or infrastructure-as-code tools. Policies work in conjunction with IAM, where IAM grants capabilities and Organization Policies restrict how those capabilities can be used. For image control specifically, organizations typically maintain a project containing hardened, patched, approved images, then configure policies allowing only those images. This approach ensures all VM instances across the organization start from known-good baseline configurations.

Option A, VPC Service Controls, creates security perimeters protecting data in GCP services by controlling data movement across perimeter boundaries. While valuable for data protection, Service Controls do not enforce VM image usage policies.

Option C, IAM permissions, control who can perform actions but do not constrain what resources can be used. Users with compute.instances.create permission could use any image unless Organization Policies restrict choices.

Option D, Cloud Armor security policies, protect applications from DDoS and web attacks at the edge. Cloud Armor does not control VM configuration or enforce image usage policies.

Organization Policy constraints are the correct mechanism for enforcing centralized governance controls over resource configurations including approved VM images.

Question 96

You need to provide private connectivity from on-premises to Google APIs and services without using public IP addresses. What should you configure?

A) Cloud VPN with default internet gateway

B) Private Google Access for on-premises hosts

C) Cloud NAT with Cloud Router

D) Direct Peering

Answer: B

Explanation:

Private Google Access for on-premises hosts enables private connectivity from on-premises networks to Google APIs and services without using public IP addresses or traversing the public internet. This feature extends the Private Google Access functionality available within VPC networks to on-premises environments, allowing resources in data centers connected via Cloud VPN or Cloud Interconnect to access Google services using internal IP addresses. This approach enhances security by keeping traffic on private networks and can reduce costs by avoiding internet egress charges.

Private Google Access for on-premises hosts works by configuring advertised routes and DNS resolution to direct Google API traffic through private connectivity. The on-premises network must connect to GCP via Cloud VPN or Cloud Interconnect. In the VPC, Cloud Router is configured to advertise specific IP address ranges for Google APIs to the on-premises network via BGP. These are the ranges behind restricted.googleapis.com (199.36.153.4/30) and private.googleapis.com (199.36.153.8/30), which provide private IP addresses for accessing Google services. On-premises DNS is configured to resolve Google API domain names to these private IP address ranges rather than public addresses. When on-premises applications make API calls, traffic routes through the VPN or Interconnect connection to Google’s private service infrastructure.
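
As a sketch of the GCP-side routing piece (Python calling gcloud, with a hypothetical router name and region), the Cloud Router below is switched to custom advertisement mode and announces both Google API ranges to the on-premises peer; on-premises DNS for *.googleapis.com still has to be pointed at these ranges separately.

```python
# Minimal sketch: advertise the Google API ranges to on-premises over BGP.
# Router name and region are hypothetical; DNS for *.googleapis.com must also
# be configured on-premises to resolve to these ranges.
import subprocess

subprocess.run(
    ["gcloud", "compute", "routers", "update", "hybrid-router",
     "--region", "us-central1",
     "--advertisement-mode", "custom",
     "--set-advertisement-groups", "all_subnets",
     # 199.36.153.4/30 = restricted.googleapis.com, 199.36.153.8/30 = private.googleapis.com
     "--set-advertisement-ranges", "199.36.153.4/30,199.36.153.8/30"],
    check=True,
)
```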

The configuration provides several important benefits. Traffic to Google APIs remains on private networks rather than traversing the public internet, improving security and potentially compliance posture. On-premises resources can access Google services even if they do not have public IP addresses or internet connectivity, simplifying network architecture. For organizations with Cloud Interconnect, traffic may incur lower costs compared to internet egress because it uses the dedicated connection. The private connectivity provides consistent network paths and can offer better performance characteristics than public internet routes. Supported services include most Google Cloud services like Cloud Storage, BigQuery, Cloud Pub/Sub, and Compute Engine APIs. Services that require specific endpoints like Google Workspace are accessed differently. Organizations should carefully plan DNS configuration and ensure Cloud Router properly advertises the necessary routes.

Option A, Cloud VPN with default internet gateway, would route traffic through the VPN to GCP and then out to the internet for Google API access. This uses public IP addresses and does not provide the private connectivity required.

Option C, Cloud NAT with Cloud Router, enables outbound internet connectivity for instances without public IP addresses. Cloud NAT does not provide private access to Google APIs from on-premises.

Option D, Direct Peering, is a direct connection to Google’s edge network for accessing Google services but operates using public IP addresses and does not extend into GCP VPC networks for private access.

Private Google Access for on-premises hosts is the correct solution for private connectivity to Google services from on-premises networks.

Question 97

Your application uses WebSocket connections and requires session affinity. Which load balancer should you use?

A) External HTTP(S) Load Balancer

B) External TCP/UDP Network Load Balancer

C) Internal HTTP(S) Load Balancer

D) Internal TCP/UDP Load Balancer

Answer: A

Explanation:

The External HTTP(S) Load Balancer is the appropriate choice for applications using WebSocket connections that require session affinity. This Layer 7 load balancer supports WebSocket protocol through HTTP connection upgrades and provides multiple session affinity options to ensure subsequent connections from the same client reach the same backend instance. The load balancer’s application-layer capabilities enable proper handling of WebSocket handshakes and long-lived connections while maintaining the benefits of global load balancing.

WebSocket protocol begins as a standard HTTP request with an Upgrade header indicating the client wishes to establish a WebSocket connection. The External HTTP(S) Load Balancer recognizes these upgrade requests and properly forwards them to backend instances. Once the WebSocket handshake completes, the connection upgrades to the WebSocket protocol, and the load balancer maintains the long-lived bidirectional connection between client and backend. The load balancer’s timeout settings can be configured to accommodate WebSocket connections that may remain open for extended periods, significantly longer than typical HTTP request-response cycles.

Session affinity, also called sticky sessions, ensures requests from the same client consistently route to the same backend instance, essential for stateful applications including many WebSocket implementations. The External HTTP(S) Load Balancer supports multiple affinity methods. Client IP affinity hashes the client’s IP address to select a backend, directing all connections from the same source IP to the same backend. Cookie-based affinity uses HTTP cookies where the load balancer generates a cookie identifying the selected backend, and subsequent requests with that cookie return to the same backend. Generated cookie affinity creates a load balancer cookie automatically, while HTTP cookie affinity uses application-defined cookies. For WebSocket specifically, cookie-based affinity works during the initial HTTP handshake phase. The combination of WebSocket support and session affinity makes this load balancer suitable for real-time applications like chat systems, gaming, collaborative tools, and financial trading platforms that require persistent bidirectional connections.
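
A small sketch of the relevant backend service settings is shown below (Python calling gcloud); the backend service name and the timeout value are hypothetical, and the timeout is raised because WebSocket connections typically outlive the default.

```python
# Minimal sketch: cookie-based session affinity plus a longer timeout on a
# global backend service used behind an External HTTP(S) Load Balancer.
# Backend service name and timeout are hypothetical.
import subprocess

subprocess.run(
    ["gcloud", "compute", "backend-services", "update", "ws-backend",
     "--global",
     "--session-affinity", "GENERATED_COOKIE",  # load balancer issues the affinity cookie
     "--timeout", "3600"],                      # seconds; accommodates long-lived WebSockets
    check=True,
)
```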

Option B, External TCP/UDP Network Load Balancer, operates at Layer 4 and can handle WebSocket traffic but provides only basic session affinity through 5-tuple hashing. It lacks application-layer features and is regional rather than global.

Option C, Internal HTTP(S) Load Balancer, supports WebSockets and session affinity but is designed for internal traffic within a VPC network, not external traffic from the internet as typically required for WebSocket applications.

Option D, Internal TCP/UDP Load Balancer, is an internal Layer 4 load balancer that can pass WebSocket traffic but lacks application-layer capabilities and is intended for internal use cases.

The External HTTP(S) Load Balancer provides the global reach, WebSocket support, and sophisticated session affinity required for WebSocket applications.

Question 98

You need to ensure that traffic between VMs in the same VPC is encrypted. What is the recommended approach?

A) Configure VPC firewall rules to require encryption

B) Implement application-level TLS encryption

C) Use Cloud VPN between VMs

D) Rely on automatic VPC network encryption to handle this

Answer: B

Explanation:

Implementing application-level TLS encryption is the recommended approach for ensuring traffic between VMs in the same VPC is encrypted when explicit encryption verification is required. While Google Cloud automatically encrypts all traffic between VMs at the physical network layer, implementing TLS at the application layer provides end-to-end encryption that applications can verify cryptographically and ensures data remains encrypted through all network layers. This approach aligns with defense-in-depth security principles and meets compliance requirements that mandate verifiable encryption.

Google Cloud automatically encrypts all traffic between virtual machines using authenticated encryption at the physical network layer. This encryption occurs transparently without user configuration and protects data as it traverses Google’s network infrastructure between VMs, whether in the same zone, different zones, or different regions. The encryption uses AES with 128-bit keys and includes authentication to prevent tampering. However, this physical layer encryption operates below the guest operating system level and cannot be directly verified by applications. For regulatory compliance or security policies requiring verifiable encryption, application-level encryption provides cryptographic proof that only the intended endpoints can decrypt the data.

Application-level TLS encryption involves configuring services to communicate over HTTPS or other TLS-protected protocols. Web servers are configured with TLS certificates and applications connect using HTTPS URLs. Database connections use TLS modes where supported by the database client and server. Microservices architectures implement mutual TLS where both client and server authenticate each other using certificates. Service mesh technologies like Istio can automatically manage TLS for microservices communication. Application-level encryption provides several benefits beyond the automatic physical layer encryption. It creates an encrypted tunnel that protects data through all network layers including inside the guest OS. Applications can verify peer identities through certificate validation. Encryption keys and certificates are managed at the application level where they can be rotated and audited. The encryption remains under application control rather than relying solely on infrastructure capabilities. This layered approach where both infrastructure and application provide encryption follows security best practices.
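
To make the idea concrete, the sketch below shows a plain Python TLS client that validates the server’s certificate against an internal CA before exchanging data; the hostname, port, and CA bundle path are hypothetical.

```python
# Minimal sketch: application-level TLS with explicit certificate verification.
# Hostname, port, and CA bundle path are hypothetical.
import socket
import ssl

context = ssl.create_default_context(cafile="internal-ca.pem")  # trust only the internal CA

with socket.create_connection(("orders.backend.internal", 8443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="orders.backend.internal") as tls:
        # The handshake has validated the peer certificate; traffic is now
        # encrypted end to end inside the guest OS, not only on the wire.
        print("negotiated", tls.version())
        tls.sendall(b"GET /healthz HTTP/1.1\r\nHost: orders.backend.internal\r\n\r\n")
        print(tls.recv(4096).decode(errors="replace"))
```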

Option A, VPC firewall rules, control which traffic is allowed or denied but do not provide encryption. Firewall rules operate at Layer 3/4 and examine packet headers, not payload encryption.

Option C, Cloud VPN between VMs, introduces unnecessary complexity and overhead. VPN is designed for site-to-site or remote access connectivity, not for encrypting traffic between VMs in the same VPC.

Option D suggests VPC network encryption automatically handles encryption, which is partially true but incomplete. While physical layer encryption occurs automatically, it is not verifiable at the application level and may not meet all compliance requirements without explicit TLS.

Application-level TLS encryption provides verifiable end-to-end protection meeting the strictest security and compliance requirements.

Question 99

Your organization wants to connect to Google Cloud using a dedicated physical connection for consistent high bandwidth. Which service should you use?

A) Cloud VPN

B) Direct Peering

C) Dedicated Interconnect

D) Partner Interconnect

Answer: C

Explanation:

Dedicated Interconnect is the correct service for connecting to Google Cloud using dedicated physical connections providing consistent high bandwidth. Dedicated Interconnect establishes private, direct connections between an organization’s on-premises network and Google Cloud through physical fiber cables in colocation facilities, offering predictable performance, higher throughput, and potentially lower costs compared to internet-based connectivity. This service is ideal for organizations with substantial bandwidth requirements and workloads that benefit from direct physical connectivity.

Dedicated Interconnect operates through physical cross-connects in supported colocation facilities where both the customer’s network equipment and Google’s network edge devices are present. The customer orders circuits from the colocation facility connecting their network equipment to Google’s dedicated interconnect equipment. Each connection provides 10 Gbps or 100 Gbps of dedicated bandwidth per circuit, and multiple connections can be provisioned for higher aggregate bandwidth or redundancy. These physical circuits connect to VLAN attachments which map to Cloud Routers in VPC networks. The Cloud Router establishes BGP sessions over the physical connection, advertising routes for VPC subnets to on-premises and learning routes for on-premises networks.
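
A sketch of the VLAN attachment step is shown below (Python calling gcloud); the attachment, router, and Interconnect names are hypothetical, and the physical Interconnect connection is assumed to already exist.

```python
# Minimal sketch: attach a VPC's Cloud Router to an existing Dedicated
# Interconnect connection via a VLAN attachment. All names are hypothetical.
import subprocess

subprocess.run(
    ["gcloud", "compute", "interconnects", "attachments", "dedicated", "create",
     "prod-attachment-1",
     "--region", "us-central1",
     "--router", "interconnect-router",      # Cloud Router that will run BGP
     "--interconnect", "dc1-interconnect"],  # existing Dedicated Interconnect
    check=True,
)
```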

Dedicated Interconnect provides several advantages for enterprise connectivity. The dedicated bandwidth is not shared with internet traffic, providing consistent performance and predictable latency. High-throughput applications benefit from 10 Gbps or 100 Gbps circuits, with the ability to bundle up to eight 10 Gbps circuits (80 Gbps) or two 100 Gbps circuits (200 Gbps) per Interconnect connection. Egress pricing for traffic through Interconnect is typically lower than internet egress rates, providing cost savings for workloads with substantial data transfer. The private connection avoids traversing the public internet, reducing exposure to internet-based attacks. The service supports SLA guarantees when configured with redundant connections in recommended topologies. Organizations can connect multiple VPCs to Interconnect using VLAN attachments. For maximum availability, Google recommends provisioning at least two Interconnect connections in different colocation facilities, creating geographic redundancy.

Option A, Cloud VPN, provides connectivity over the public internet using encrypted tunnels. While suitable for many use cases, VPN does not offer dedicated physical connectivity or the same high bandwidth and consistent performance as Interconnect.

Option B, Direct Peering, establishes direct connections to Google’s edge network for accessing Google services but does not connect into GCP VPC networks. Peering is for high-volume access to Google services, not for hybrid cloud architectures.

Option D, Partner Interconnect, provides connectivity through supported service providers when the organization cannot meet Dedicated Interconnect requirements for physical colocation. While similar, Dedicated Interconnect provides direct physical connections without intermediary providers.

Dedicated Interconnect is the premier service for high-bandwidth, dedicated physical connectivity between on-premises networks and Google Cloud.

Question 100

You need to route traffic through a next-generation firewall appliance in GCP. What is the recommended architecture?

A) VPC firewall rules with priority settings

B) Network tags for selective routing

C) Custom static routes with next hop as firewall instance

D) Cloud Armor policies

Answer: C

Explanation:

Custom static routes with the next hop configured as the firewall instance is the recommended architecture for routing traffic through next-generation firewall appliances in GCP. This approach uses VPC routing capabilities to direct traffic to virtual firewall appliances, enabling advanced security inspection including intrusion prevention, malware detection, URL filtering, and application control that go beyond capabilities of VPC firewall rules. The architecture creates traffic paths where packets traverse the firewall before reaching their destination.

The implementation involves deploying firewall virtual appliances from partners like Palo Alto Networks, Fortinet, or Check Point into the VPC network. The firewall instances are configured with multiple network interfaces, typically one for untrusted traffic and one for trusted traffic, or separate interfaces for ingress and egress. Custom static routes are created in the VPC with destination IP ranges that should be inspected, setting the next hop to the firewall instance’s IP address. When packets match the custom route’s destination, they are forwarded to the firewall instance for inspection. After inspection, the firewall forwards permitted traffic toward the actual destination using its own routing configuration or additional VPC routes.
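
As a rough example of the routing step (Python calling gcloud, hypothetical names throughout), the route below sends all internet-bound traffic from instances carrying a specific network tag through a firewall appliance instance.

```python
# Minimal sketch: force internet-bound traffic from tagged instances through a
# firewall appliance. Route, network, instance, zone, and tag are hypothetical.
import subprocess

subprocess.run(
    ["gcloud", "compute", "routes", "create", "egress-via-fw",
     "--network", "prod-vpc",
     "--destination-range", "0.0.0.0/0",
     "--next-hop-instance", "ngfw-appliance-1",
     "--next-hop-instance-zone", "us-central1-a",
     "--priority", "800",        # lower value wins over the default route (priority 1000)
     "--tags", "inspected"],     # only instances with this tag use the route
    check=True,
)
```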

Several architectural patterns implement firewall insertion. For north-south traffic between the internet and VPC resources, an external load balancer or external IP addresses direct inbound traffic to the firewall, which then forwards to internal resources. For outbound traffic, custom routes with destination 0.0.0.0/0 next-hopping to the firewall force internet-bound traffic through inspection. For east-west traffic between separate VPC networks in a hub-and-spoke design, custom routes in each spoke direct traffic for other networks’ ranges through the firewall in the hub. High availability is achieved by deploying multiple firewall instances behind an internal TCP/UDP load balancer and using that load balancer as the route’s next hop, since first-hop redundancy protocols like VRRP are not supported in VPC networks. Traffic symmetry must be maintained ensuring both directions of a flow traverse the same firewall instance to maintain connection state. Network tags applied to custom routes can implement selective routing where only tagged workloads’ traffic routes through firewalls while other traffic follows default paths, optimizing cost and performance.

Option A, VPC firewall rules with priority settings, provides basic Layer 3/4 packet filtering but does not enable advanced security features requiring deep packet inspection like application awareness, IPS, or malware detection.

Option B, Network tags for selective routing, describes a mechanism for applying routes to specific instances but does not itself create the routing to firewall appliances. Tags are used in conjunction with custom routes, not as a standalone solution.

Option D, Cloud Armor policies, protects applications at the edge against DDoS and web attacks but does not provide general network traffic inspection or routing through firewall appliances.

Custom static routes directing traffic to firewall appliances enable advanced security capabilities through established next-generation firewall solutions.

Question 101

Your application requires communication between Compute Engine VMs and Cloud Run services. What networking approach should you use?

A) VPC Peering between Compute Engine and Cloud Run

B) Serverless VPC Access connector

C) Cloud VPN between environments

D) External HTTP(S) Load Balancer

Answer: B

Explanation:

A Serverless VPC Access connector is the correct networking approach for enabling communication between Compute Engine VMs and Cloud Run services when private connectivity is required. Serverless VPC Access creates a connection between serverless environments like Cloud Run, Cloud Functions, and App Engine, and VPC networks, allowing serverless services to send requests to resources with internal IP addresses. This capability enables private communication with Compute Engine VMs, Cloud SQL instances, Memorystore, and other VPC resources without exposing them to the public internet.

Serverless VPC Access connectors work by creating a dedicated connector resource in a specific region and VPC network. The connector is configured with an IP address range from the VPC, typically using a /28 subnet that is not used by other resources. Cloud Run services or other serverless products are configured to route requests through the connector. When a Cloud Run container makes requests to IP addresses in the VPC ranges, traffic flows through the connector into the VPC network where it can reach internal resources. The connector handles network address translation and routing, making VPC resources accessible to serverless services as if they were in the same network.
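
The sketch below (Python calling gcloud) creates a connector with a hypothetical /28 range and then attaches it to a hypothetical Cloud Run service; the sample container image is Google’s public hello image.

```python
# Minimal sketch: create a Serverless VPC Access connector and route a Cloud Run
# service's egress through it. Names, region, and the /28 range are hypothetical.
import subprocess

subprocess.run(
    ["gcloud", "compute", "networks", "vpc-access", "connectors", "create",
     "run-connector",
     "--region", "us-central1",
     "--network", "prod-vpc",
     "--range", "10.8.0.0/28"],     # unused /28 inside the VPC reserved for the connector
    check=True,
)

subprocess.run(
    ["gcloud", "run", "deploy", "orders-api",
     "--image", "us-docker.pkg.dev/cloudrun/container/hello",
     "--region", "us-central1",
     "--vpc-connector", "run-connector"],   # send requests to internal IPs via the connector
    check=True,
)
```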

The connector enables several important architectural patterns. Cloud Run services can connect to Cloud SQL databases using private IP addresses rather than Cloud SQL proxy or public IPs, improving security and reducing latency. Serverless applications can call internal APIs hosted on Compute Engine VMs without exposing those VMs to the internet. Access to internal services like Memorystore Redis instances, internal load balancers, or legacy systems running in VPC networks becomes possible from serverless platforms. The connector can be configured to route all egress traffic through the VPC or only specific routes, providing flexibility in traffic management. Multiple connectors can be created for different regions or networks. Security is enhanced because serverless services access internal resources without requiring public IP addresses, external load balancers, or complex authentication mechanisms. Firewall rules in the VPC control which traffic from the connector is permitted, maintaining security boundaries.

Option A, VPC Peering, connects two VPC networks but Cloud Run is a serverless platform not contained within a VPC that can be peered. Peering is not applicable to serverless-to-VPC connectivity.

Option C, Cloud VPN, is designed for site-to-site or remote access connectivity, not for connecting serverless services to VPC resources. VPN would introduce unnecessary complexity and cost.

Option D, External HTTP(S) Load Balancer, could expose Compute Engine services to Cloud Run but requires resources to have public exposure and does not provide true private connectivity within the VPC.

Serverless VPC Access connectors are the purpose-built solution for private connectivity between serverless services and VPC networks.

Question 102

You need to implement a network security control that blocks traffic based on geolocation of the source. Which service should you use?

A) VPC firewall rules

B) Cloud Armor security policies

C) Identity-Aware Proxy

D) Packet Mirroring

Answer: B

Explanation:

Cloud Armor security policies are the correct service for implementing network security controls that block traffic based on the geolocation of the source. Cloud Armor provides Layer 7 security capabilities including geographic-based access control, IP allowlisting and denylisting, rate limiting, and protection against DDoS and web application attacks. These policies apply to services behind Google Cloud load balancers, protecting applications at the edge before traffic reaches backend resources.

Cloud Armor policies consist of rules that match on various traffic characteristics and specify actions to take on matching traffic. Geographic-based rules use Google’s understanding of IP address geolocation to identify where requests originate. Rules can be configured to allow or deny traffic from specific countries or regions, enabling compliance with data sovereignty requirements or blocking traffic from regions associated with attacks. The evaluation occurs at Google’s edge, preventing unwanted traffic from ever reaching your infrastructure. Multiple rules can be combined in a single policy with priorities determining evaluation order, allowing complex access control logic such as allowing specific countries while denying all others.
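
The sketch below (Python calling gcloud, with a hypothetical policy name, backend service, and a placeholder country code) creates a policy, adds a geolocation deny rule, and attaches the policy to a backend service.

```python
# Minimal sketch: Cloud Armor policy with a geolocation-based deny rule.
# Policy name, backend service, and the 'XX' country code are placeholders.
import subprocess

commands = [
    ["gcloud", "compute", "security-policies", "create", "geo-policy",
     "--description", "block selected source regions"],
    ["gcloud", "compute", "security-policies", "rules", "create", "1000",
     "--security-policy", "geo-policy",
     "--expression", "origin.region_code == 'XX'",   # ISO 3166-1 alpha-2 placeholder
     "--action", "deny-403"],
    ["gcloud", "compute", "backend-services", "update", "web-backend",
     "--global", "--security-policy", "geo-policy"],
]
for cmd in commands:
    subprocess.run(cmd, check=True)
```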

Cloud Armor integrates with several load balancing services. Security policies attach to backend services of External HTTP(S) Load Balancers, protecting web applications and APIs. The same policies can also protect hybrid and multi-cloud backends, such as services reached through internet network endpoint groups behind the load balancer. For applications using these load balancers, Cloud Armor provides a first line of defense at the Google Cloud edge. Additional Cloud Armor capabilities beyond geolocation include custom rules using Common Expression Language for sophisticated matching logic, preconfigured WAF rules protecting against OWASP Top 10 vulnerabilities, rate-based banning to prevent abuse, and adaptive protection using machine learning to detect and respond to attacks. Logging integration sends security events to Cloud Logging for analysis and alerting. For applications with global user bases, geographic controls can implement regional access restrictions while allowing legitimate traffic, balancing security and usability.

Option A, VPC firewall rules, operate at Layer 3/4 based on IP addresses, protocols, and ports. Firewall rules cannot directly evaluate geolocation and would require maintaining extensive lists of IP address ranges by country, which is impractical and inaccurate.

Option C, Identity-Aware Proxy, provides context-aware access control based on user identity and device context but does not provide network-layer geographic filtering. IAP operates after traffic reaches your infrastructure.

Option D, Packet Mirroring, copies traffic for analysis and monitoring but does not enforce security policies or block traffic. Mirroring is for visibility, not protection.

Cloud Armor security policies provide comprehensive Layer 7 protection including geographic-based access control for applications behind Google Cloud load balancers.

Question 103

Your organization needs to ensure that DNS queries from VMs resolve internal hostnames. What should you configure?

A) Cloud DNS private zones

B) External DNS forwarder

C) hosts file on each VM

D) Public Cloud DNS zones

Answer: A

Explanation:

Cloud DNS private zones are the correct solution for ensuring DNS queries from VMs resolve internal hostnames within your GCP environment. Private zones provide authoritative DNS resolution for internal domain names, allowing you to create DNS records for resources using private IP addresses without exposing that information to the public internet. This capability is essential for service discovery, application communication, and maintaining internal naming conventions in cloud deployments.

Cloud DNS private zones are created with specific domain names and associated with one or more VPC networks. When VMs in those VPC networks query DNS names within the private zone’s domain, Cloud DNS authoritatively responds with the configured records. Record types supported include A records mapping hostnames to IP addresses, CNAME records for aliases, PTR records for reverse DNS, and others. The private zones are completely separate from public DNS, so internal names and IP addresses remain private within your VPC. VMs automatically use the Cloud DNS resolver at 169.254.169.254 which handles both private and public DNS queries appropriately.
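
A short sketch of the setup (Python calling gcloud, with a hypothetical zone, domain, network, and address) is shown below: it creates a private zone bound to one VPC and adds a single A record.

```python
# Minimal sketch: private Cloud DNS zone attached to one VPC plus one A record.
# Zone name, domain, network, and IP address are hypothetical.
import subprocess

commands = [
    ["gcloud", "dns", "managed-zones", "create", "corp-internal",
     "--dns-name", "corp.internal.",
     "--visibility", "private",
     "--networks", "prod-vpc",
     "--description", "Internal names for prod-vpc"],
    ["gcloud", "dns", "record-sets", "create", "db.corp.internal.",
     "--zone", "corp-internal",
     "--type", "A",
     "--ttl", "300",
     "--rrdatas", "10.10.0.5"],
]
for cmd in commands:
    subprocess.run(cmd, check=True)
```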

Private zones enable several important capabilities for cloud networking. Internal services can be addressed by meaningful hostnames rather than IP addresses, simplifying configuration and improving readability. As services move or scale, DNS records can be updated to reflect changes while applications continue using consistent names. Multi-tier applications can use DNS for service discovery where frontend tiers resolve backend service names to find current IP addresses. Private DNS integrates with hybrid architectures through Cloud DNS inbound and outbound forwarding policies. Inbound policies allow on-premises systems to query Cloud DNS for GCP resource names. Outbound policies forward DNS queries for on-premises domains from GCP to on-premises DNS servers. Split-horizon DNS is supported where the same domain name resolves differently depending on whether the query comes from inside or outside the VPC. DNS peering allows private zones to be shared across VPC networks, useful in Shared VPC or multi-VPC architectures.

Option B, External DNS forwarder, could technically provide DNS resolution but introduces unnecessary complexity, additional management overhead, and potential security concerns. Cloud DNS private zones provide native, managed DNS capability.

Option C, hosts files on each VM, could map names to IP addresses but requires manual maintenance on every VM, does not scale, and lacks central management. This approach is impractical for cloud environments.

Option D, Public Cloud DNS zones, resolve names from the public internet but expose internal infrastructure information publicly. Public zones are inappropriate for internal-only hostnames and IP addresses.

Cloud DNS private zones provide the managed, scalable, and secure solution for internal DNS resolution in GCP.

Question 104

You need to control which API requests are allowed to reach resources in a VPC Service Controls perimeter. What should you configure?

A) VPC firewall rules

B) Ingress and egress policies in the service perimeter

C) IAM policies on resources

D) Cloud Armor rules

Answer: B

Explanation:

Configuring ingress and egress policies in the service perimeter is the correct approach for controlling which API requests are allowed to reach resources protected by VPC Service Controls. VPC Service Controls creates security perimeters around Google Cloud resources, restricting API-based access to data to prevent exfiltration and control access based on contextual factors like originating network and identity. Ingress and egress policies define the conditions under which API requests can cross the perimeter boundary, providing fine-grained data protection.

VPC Service Controls perimeters enclose Google Cloud services and projects, creating a security boundary that restricts data movement. By default, a perimeter blocks all API requests crossing the boundary in either direction. Ingress policies specify conditions under which external requests can access resources inside the perimeter, controlling inbound data flow. Egress policies define when requests from inside the perimeter can access resources outside, controlling outbound data flow. These policies evaluate multiple contextual factors including the identity making the request, the originating network using Access Levels, the target resource, and specific API methods being called.

Ingress policies typically address scenarios like allowing specific identities from authorized networks to access protected resources, permitting access from other trusted perimeters, or enabling API calls from specific projects or VPCs. For example, an ingress policy might allow data analysts from corporate networks to query BigQuery datasets in the perimeter while blocking access from other locations. Egress policies handle scenarios like allowing protected resources to call external APIs for specific purposes, permitting data export to designated external buckets for backup, or enabling integration with third-party services. An egress policy might allow Cloud Functions in the perimeter to invoke external APIs while preventing direct data export to arbitrary locations.
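
As a hedged sketch of what such a rule can look like, the snippet below writes an ingress policy similar to the analyst example above and applies it to a perimeter; the access policy number, access level, identity, project number, and perimeter name are all hypothetical placeholders, and the exact YAML schema should be confirmed against the current VPC Service Controls documentation.

```python
# Minimal sketch: ingress policy letting one identity, from an approved access
# level, call BigQuery on a project inside the perimeter. All identifiers are
# hypothetical placeholders.
import subprocess

INGRESS_YAML = """\
- ingressFrom:
    identities:
    - user:analyst@example.com
    sources:
    - accessLevel: accessPolicies/123456789/accessLevels/corp_network
  ingressTo:
    operations:
    - serviceName: bigquery.googleapis.com
      methodSelectors:
      - method: "*"
    resources:
    - projects/987654321
"""

with open("ingress.yaml", "w") as f:
    f.write(INGRESS_YAML)

subprocess.run(
    ["gcloud", "access-context-manager", "perimeters", "update", "data_perimeter",
     "--policy", "123456789",
     "--set-ingress-policies", "ingress.yaml"],
    check=True,
)
```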

The policies use Access Levels to define network context requirements. Access Levels can specify IP address ranges, VPC networks via Private Google Access, device attributes, and geographic locations. Combining identity-based IAM permissions with context-aware VPC Service Controls creates defense-in-depth where both who the user is and where they are accessing from determine permissions. This approach helps organizations meet compliance requirements for data sovereignty, prevent insider threats, and protect against compromised credentials. The policies support both allow and deny actions with detailed logging of all perimeter crossing attempts for audit and security monitoring.

Option A, VPC firewall rules, control network-layer traffic between compute resources but do not control API access to managed services or provide data exfiltration protection. Firewall rules operate at Layer 3/4, not the API layer.

Option C, IAM policies on resources, control who can perform actions but do not restrict based on context like network location or enforce perimeter boundaries. IAM and VPC Service Controls work together, with IAM defining who and VPC Service Controls defining under what conditions.

Option D, Cloud Armor rules, protect applications against web attacks and DDoS but do not control API access to Google Cloud services or enforce data perimeters.

Ingress and egress policies in VPC Service Controls perimeters provide sophisticated, context-aware control over API access to protected resources.

Question 105

Your application requires a static external IP address that persists even when the underlying VM is deleted. What type of IP address should you use?

A) Ephemeral external IP address

B) Reserved static external IP address

C) Internal IP address

D) Alias IP address

Answer: B

Explanation:

A reserved static external IP address is the correct type to use when your application requires a static external IP address that persists even when the underlying VM is deleted or stopped. Reserved static IPs are standalone resources that exist independently of compute instances, allowing them to be attached and detached from VMs as needed while maintaining the same IP address value. This capability is essential for applications that require consistent addressing, DNS records pointing to fixed IPs, or external services configured with allowlisted IP addresses.

Reserved static external IP addresses are created as regional or global resources depending on intended use. Regional static IPs are used for Compute Engine VM instances, regional external load balancers such as external passthrough Network Load Balancers, and Cloud NAT gateways within a specific region. Global static IPs are used for global load balancers providing anycast services accessible from anywhere. When you reserve a static IP, Google Cloud allocates an address from the available pool and associates it with your project. This address remains allocated to your project until explicitly released, even if no resources currently use it. The IP can be assigned to a VM instance during creation or attached to an existing instance. If the instance is deleted, the static IP remains in your project’s inventory available for immediate reuse.
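
A brief sketch of the reservation flow is below (Python calling gcloud); the address, instance, region, and zone names are hypothetical, and the reserved address can later be reattached to a replacement VM if this one is deleted.

```python
# Minimal sketch: reserve a regional static external IP and assign it to a new
# VM at creation time. Resource names and locations are hypothetical.
import subprocess

commands = [
    ["gcloud", "compute", "addresses", "create", "app-frontend-ip",
     "--region", "us-central1"],
    ["gcloud", "compute", "instances", "create", "app-vm-1",
     "--zone", "us-central1-a",
     "--address", "app-frontend-ip"],   # reserved address name (or literal IP)
]
for cmd in commands:
    subprocess.run(cmd, check=True)
```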

Reserved static IPs address several common requirements and use cases. Applications that rely on IP allowlisting by partner systems or external services need consistent source IP addresses that do not change with instance recreation. DNS records for public-facing services should point to stable IP addresses to avoid cache invalidation and propagation delays associated with IP changes. Disaster recovery and high availability scenarios benefit from quickly reassigning a static IP from a failed instance to a replacement without waiting for DNS updates. Regulatory or audit requirements sometimes mandate fixed IP addresses for tracking and accountability. Organizations can document and track reserved static IPs as assets, facilitating IP address management and planning.

Option A, Ephemeral external IP addresses, are temporary addresses assigned to VMs for the duration of their existence. When a VM is stopped or deleted, the ephemeral IP is released back to the pool and may be assigned elsewhere, failing to meet the persistence requirement.

Option C, Internal IP addresses, are used for private communication within VPC networks and cannot be accessed from the public internet. Internal IPs do not provide external connectivity required by the question.

Option D, Alias IP addresses, are additional internal IP addresses assigned to VM network interfaces for multi-IP scenarios like containerized workloads. Alias IPs are internal addresses and do not provide persistent external addressing.

Reserved static external IP addresses provide the persistent, stable public addressing required for applications needing consistent external IP addresses.