Question 76
An organization needs to connect their on-premises network to multiple VPC networks in Google Cloud. Which connectivity option provides the most scalable solution?
A) Cloud VPN to each VPC
B) Partner Interconnect with VLAN attachments to each VPC
C) Direct Peering
D) Carrier Peering
Answer: B
Explanation:
Partner Interconnect with VLAN attachments provides the most scalable solution for connecting on-premises networks to multiple VPC networks because each VLAN attachment can connect to a different VPC network while sharing the same underlying physical connection. This architecture enables organizations to establish dedicated, high-bandwidth connectivity to Google Cloud once, then efficiently extend that connectivity to multiple VPC networks through logical VLAN segmentation.
Partner Interconnect leverages service provider networks to establish Layer 2 connections between on-premises locations and Google Cloud. Organizations work with supported service providers who already have established connectivity to Google’s network, eliminating the need to provision physical cross-connects directly to Google facilities. This approach reduces deployment complexity and potentially lowers costs while providing enterprise-grade connectivity.
VLAN attachments represent the logical connections between Partner Interconnect and individual VPC networks. Each VLAN attachment uses a specific 802.1Q VLAN tag to isolate traffic for a particular VPC network. Organizations can create multiple VLAN attachments on the same Partner Interconnect connection, with each attachment connecting to a different VPC network or even different projects. This multi-tenancy capability enables clean separation of production, development, and testing environments.
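As a rough sketch of this pattern, the gcloud commands below create two Partner Interconnect VLAN attachments, each on a Cloud Router in a different VPC, so both ride the same physical connection through the service provider. All names (cr-prod, prod-vpc, and so on), the region, and the ASNs are placeholders, and the pairing key each command returns still has to be given to the service provider to activate the attachment.

```shell
# Cloud Router in the production VPC (names and ASNs are hypothetical)
gcloud compute routers create cr-prod \
    --network=prod-vpc --region=us-west1 --asn=65001

# Partner VLAN attachment for the production VPC;
# the returned pairing key goes to the service provider
gcloud compute interconnects attachments partner create prod-attachment \
    --region=us-west1 --router=cr-prod \
    --edge-availability-domain=availability-domain-1

# A second attachment, on a router in the development VPC, reuses the
# same underlying physical connectivity through the provider
gcloud compute routers create cr-dev \
    --network=dev-vpc --region=us-west1 --asn=65002
gcloud compute interconnects attachments partner create dev-attachment \
    --region=us-west1 --router=cr-dev \
    --edge-availability-domain=availability-domain-1
```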
The scalability advantages are significant. Adding connectivity to additional VPC networks requires only creating new VLAN attachments without provisioning additional physical circuits or modifying on-premises hardware. Each VLAN attachment provides dedicated bandwidth allocation and independent routing configuration. The approach supports many VLAN attachments per interconnect connection (exact limits depend on project quotas and connection capacity), accommodating complex multi-VPC architectures common in large enterprises.
Cloud VPN to each VPC creates separate encrypted tunnels which can become operationally complex and may not provide sufficient bandwidth for demanding workloads. Direct Peering and Carrier Peering connect to Google services rather than VPC networks and do not provide the same level of integration. Only Partner Interconnect with VLAN attachments delivers the combination of high bandwidth, dedicated connectivity, and multi-VPC scalability required for enterprise hybrid cloud architectures.
Question 77
A company wants to implement a hub-and-spoke network topology in Google Cloud. Which feature should they use?
A) VPC Network Peering between all VPCs
B) Cloud VPN tunnels in a mesh configuration
C) Network Connectivity Center with spoke attachments
D) Shared VPC with service projects
Answer: C
Explanation:
Network Connectivity Center provides the purpose-built solution for implementing hub-and-spoke network topologies in Google Cloud by creating a centralized hub that connects multiple spoke resources including VPC networks, on-premises sites, and other cloud environments. This managed service simplifies complex network architectures by providing a single control plane for connectivity management while enabling transitive routing between spokes through the hub.
The hub-and-spoke topology addresses common enterprise networking requirements where a central location provides shared services, security enforcement, or connectivity aggregation while branch locations or workload-specific networks connect through that central point. Network Connectivity Center implements this pattern at scale, with spoke attachments representing connections from various sources to the central hub, including Cloud VPN tunnels, VLAN attachments from Dedicated Interconnect or Partner Interconnect, and Router appliance instances.
Spoke attachments establish the logical connections from edge resources to the Network Connectivity Center hub. Each spoke declares the IP address ranges it can reach, and the hub automatically propagates these routes to other spokes based on configured policies. This automatic route exchange enables any-to-any connectivity between spokes without requiring manual route configuration in each location, dramatically simplifying network management as the topology grows.
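A minimal sketch of this setup, with hypothetical names (my-hub, vpn-spoke, tunnel-1) and region, creates a hub and attaches an existing Cloud VPN tunnel as a spoke; the site-to-site data transfer flag is what enables traffic between sites through the hub:

```shell
# Create the central hub
gcloud network-connectivity hubs create my-hub \
    --description="Central routing hub"

# Attach an existing Cloud VPN tunnel as a spoke; enabling
# site-to-site data transfer allows spoke-to-spoke traffic via the hub
gcloud network-connectivity spokes linked-vpn-tunnels create vpn-spoke \
    --hub=my-hub --region=us-central1 \
    --vpn-tunnels=tunnel-1 --site-to-site-data-transfer
```

Interconnect VLAN attachments and router appliance instances attach through the analogous `linked-interconnect-attachments` and `linked-router-appliance-instances` spoke types.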
The transitive routing capability distinguishes Network Connectivity Center from alternatives. Spokes can communicate with each other through the hub even though they do not have direct connections. For example, an on-premises site connected via Cloud VPN can reach another on-premises site connected through Partner Interconnect by routing through the hub. This eliminates the need for complex mesh configurations where every site connects to every other site.
VPC Network Peering creates direct connections between VPCs but does not provide transitive routing and becomes unmanageable at scale. Cloud VPN mesh configurations require extensive manual configuration. Shared VPC addresses project-level resource sharing but not hub-and-spoke topology. Only Network Connectivity Center provides the centralized management, automatic route propagation, and transitive routing capabilities that define efficient hub-and-spoke network architectures.
Question 78
An organization needs to control egress traffic from their VPC network and apply URL filtering. What should they implement?
A) Cloud NAT
B) Cloud Firewall with egress rules
C) Cloud Armor
D) Secure Web Proxy
Answer: D
Explanation:
Secure Web Proxy provides comprehensive egress traffic control with URL filtering capabilities, enabling organizations to inspect and control outbound HTTP and HTTPS traffic from VPC networks based on URL categories, custom URL lists, and other Layer 7 criteria. This managed proxy service offers advanced security controls beyond basic IP and port filtering, addressing requirements for preventing data exfiltration, blocking access to malicious sites, and enforcing acceptable use policies.
The Secure Web Proxy architecture intercepts outbound traffic from VPC resources before it reaches the internet. Traffic is transparently redirected to the proxy service which performs SSL/TLS inspection to examine encrypted HTTPS traffic, applies URL filtering policies to determine whether requests should be allowed or blocked, logs all traffic for security monitoring and compliance, and enforces additional security controls like malware scanning or data loss prevention.
URL filtering capabilities enable sophisticated access control policies based on website categories defined by Google including productivity tools, social media, gaming, adult content, and thousands of other classifications. Organizations can allow or deny entire categories, create custom URL lists for specific domains that require exceptions, and implement time-based policies that vary access based on business hours. The categorization database updates automatically ensuring protection against newly identified threats.
Policy configuration uses a familiar firewall-like rule structure where administrators define match conditions based on source identifiers like service accounts or network tags, destination URL patterns or categories, and actions to allow, deny, or redirect traffic. Multiple policies can be applied with priority ordering determining which policy takes precedence when requests match multiple rules. Integration with Cloud Logging provides visibility into policy enforcement and traffic patterns.
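The rule structure can be sketched with gcloud as follows; the policy and rule names, region, and matched hostname are placeholders, and the session matcher uses the CEL-style expression syntax Secure Web Proxy rules accept:

```shell
# Create an empty gateway security policy for the proxy
gcloud network-security gateway-security-policies create egress-policy \
    --location=us-central1

# Allow traffic only to an approved host; requests that match no
# allow rule are denied by the proxy
gcloud network-security gateway-security-policies rules create allow-updates \
    --gateway-security-policy=egress-policy --location=us-central1 \
    --priority=100 --enabled \
    --session-matcher="host() == 'packages.example.com'" \
    --basic-profile=ALLOW
```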
Cloud NAT provides network address translation for outbound connectivity but no URL filtering. Cloud Firewall operates at Layer 3/4 with IP and port filtering only. Cloud Armor protects inbound traffic to load balancers. Only Secure Web Proxy delivers the Layer 7 inspection and URL filtering capabilities required for advanced egress traffic control.
Question 79
A company wants to distribute incoming traffic across multiple backend instances based on client IP address affinity. Which load balancing feature should they configure?
A) Round robin distribution
B) Session affinity based on client IP
C) Least connections algorithm
D) Random distribution
Answer: B
Explanation:
Session affinity based on client IP, also called client IP affinity or sticky sessions, ensures that requests from the same client IP address consistently route to the same backend instance for the duration of the session. This load balancing feature addresses application requirements where session state is stored locally on backend instances rather than in shared storage, maintaining user experience continuity by routing subsequent requests from a client to the backend that holds their session data.
The session affinity mechanism works by creating an association between client IP addresses and specific backend instances. When the first request arrives from a client, the load balancer selects a backend instance using the configured distribution algorithm. For subsequent requests from that same client IP address, the load balancer routes traffic to the same backend instance as long as that instance remains healthy and available. This affinity typically persists for a configured duration or until the backend becomes unavailable.
Google Cloud load balancers support multiple session affinity options including client IP affinity which hashes the client’s IP address to select a backend, generated cookie affinity where the load balancer creates a cookie to track affinity, and HTTP cookie affinity using application-generated cookies. Client IP affinity is the simplest approach requiring no application modifications and working with any protocol, though it may result in uneven distribution when many clients share IP addresses through NAT.
Use cases for session affinity include stateful applications that store shopping cart data or user preferences in memory on backend instances, legacy applications that cannot be easily modified to use shared session storage, and workloads where maintaining affinity improves cache hit rates by routing similar requests to instances that have cached relevant data. However, session affinity can reduce the effectiveness of load balancing if traffic is not evenly distributed across client IP addresses.
Round robin, least connections, and random distribution algorithms do not maintain affinity and may route subsequent requests from the same client to different backends. While these algorithms provide better load distribution, they do not address session state requirements. Only session affinity based on client IP ensures requests from the same client consistently reach the same backend instance.
Question 80
An organization needs to implement DDoS protection for their application hosted on Google Cloud. What should they use?
A) Cloud Firewall rules
B) Cloud Armor security policies
C) VPC Service Controls
D) Identity-Aware Proxy
Answer: B
Explanation:
Cloud Armor security policies provide comprehensive DDoS protection for applications hosted on Google Cloud by leveraging Google’s global infrastructure to absorb and mitigate distributed denial of service attacks before they impact backend resources. This managed security service combines volumetric attack mitigation, protocol-based filtering, and application-layer protection to defend against the full spectrum of DDoS attack types while maintaining legitimate traffic flow.
Cloud Armor operates at Google’s network edge, filtering malicious traffic before it reaches application load balancers or backend services. The service automatically detects and mitigates large-scale volumetric attacks that attempt to overwhelm network bandwidth or infrastructure capacity. Google’s vast global network and anycast architecture distribute attack traffic across multiple points of presence, preventing any single location from being overwhelmed while specialized mitigation systems filter attack packets.
Adaptive protection, a key Cloud Armor feature, uses machine learning to establish baseline traffic patterns and automatically detect anomalous traffic that may indicate attacks. When anomalous patterns are detected, Cloud Armor can automatically deploy rate limiting rules and other countermeasures to block attack traffic while allowing legitimate requests through. This automation provides protection against zero-day attacks and evolving attack techniques without requiring manual policy updates.
Security policy rules enable administrators to define custom protections including rate limiting to prevent abuse from individual sources, geographic restrictions to block traffic from regions where the application does not serve users, allow lists and deny lists for known good or bad IP addresses, and preconfigured rules to block common attack patterns. Integration with Google’s threat intelligence provides continuously updated signatures for known attack sources.
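A sketch of such a policy, with hypothetical names (edge-policy, web-backend) and an illustrative 100-requests-per-minute threshold, combines a per-client rate limit with attachment to a load-balanced backend service:

```shell
# Create the security policy
gcloud compute security-policies create edge-policy \
    --description="DDoS and L7 protection"

# Rate-limit each client IP: allow 100 requests per minute,
# then respond 429 to the excess
gcloud compute security-policies rules create 1000 \
    --security-policy=edge-policy \
    --src-ip-ranges="*" \
    --action=throttle \
    --rate-limit-threshold-count=100 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow --exceed-action=deny-429 \
    --enforce-on-key=IP

# Attach the policy to a backend service behind the external load balancer
gcloud compute backend-services update web-backend \
    --global --security-policy=edge-policy
```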
Cloud Firewall rules provide basic IP and port filtering but lack DDoS-specific protections. VPC Service Controls restrict API access. Identity-Aware Proxy manages application access for authenticated users. Only Cloud Armor delivers the comprehensive, edge-deployed DDoS protection capabilities including volumetric mitigation, adaptive protection, and application-layer filtering required to defend against modern distributed attacks.
Question 81
A company needs to monitor network traffic for compliance and security analysis. Which Google Cloud service provides packet mirroring capabilities?
A) VPC Flow Logs
B) Packet Mirroring
C) Cloud IDS
D) Network Intelligence Center
Answer: B
Explanation:
Packet Mirroring provides the capability to capture and inspect complete copies of network traffic flowing through VPC networks, enabling deep packet inspection for security analysis, compliance monitoring, threat detection, and troubleshooting. This feature clones packets from specified sources and forwards them to collector destinations where security tools can perform detailed analysis without impacting production traffic or application performance.
Packet Mirroring operates by configuring mirroring policies that define which traffic to capture based on source criteria including specific VM instances, subnets, or network tags, traffic direction such as ingress, egress, or both, and optionally protocol and port filters to reduce mirror volume. The service captures complete packet payloads including headers and data, providing full visibility into communication patterns and content unlike flow logs which capture only metadata.
Mirrored packets are encapsulated and forwarded to collector instances running packet analysis tools. Common collector destinations include intrusion detection systems for threat detection, network monitoring tools for performance analysis, forensics platforms for security investigations, and compliance tools for regulatory requirement verification. The collector instances can be in the same VPC, a different VPC, or in separate projects enabling centralized security monitoring architectures.
The service handles the complexity of packet duplication and delivery without requiring tap hardware or network reconfiguration. Packet Mirroring integrates with Cloud Load Balancing to mirror traffic to and from load-balanced backends, operates across zones and regions enabling flexible collector placement, and scales automatically to handle varying traffic volumes. Filtering capabilities minimize collector load by capturing only relevant traffic.
VPC Flow Logs capture metadata summaries, not complete packets. Cloud IDS inspects traffic for threats but is a managed detection service rather than a raw packet capture mechanism. Network Intelligence Center provides network monitoring insights but not raw packet capture. Only Packet Mirroring provides the complete packet capture and forwarding capabilities required for deep inspection and detailed security analysis.
Question 82
An organization wants to implement a private connection between their VPC and Google APIs without traversing the internet. What should they configure?
A) Cloud NAT
B) Private Google Access
C) VPC Network Peering
D) Cloud VPN
Answer: B
Explanation:
Private Google Access enables VM instances with only internal IP addresses to reach Google APIs and services through Google’s internal network without requiring external IP addresses or internet access. This capability addresses security requirements for keeping resources completely private while still accessing managed services like Cloud Storage, BigQuery, and Cloud Pub/Sub, ensuring traffic remains within Google’s network infrastructure.
The Private Google Access configuration is enabled at the subnet level within VPC networks. When enabled for a subnet, instances in that subnet with only internal IP addresses can send traffic to Google APIs using private routing paths. The traffic uses private Google IP address ranges and routes through Google’s internal network rather than traversing the public internet, providing better security, potentially improved latency, and avoiding internet gateway costs.
Private Google Access supports two special address ranges for accessing Google services. The restricted.googleapis.com range (199.36.153.4/30) reaches only the APIs that support VPC Service Controls, making it suitable for locked-down environments. The private.googleapis.com range (199.36.153.8/30) provides broader access to most Google and Google Cloud APIs. Organizations configure Cloud DNS to resolve API hostnames to these private IP ranges, directing traffic through the appropriate network path.
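The subnet-level switch plus the DNS redirection can be sketched as follows; subnet, zone, and network names are placeholders, and the A records shown are the documented private.googleapis.com addresses:

```shell
# Enable Private Google Access on the subnet
gcloud compute networks subnets update app-subnet \
    --region=us-central1 --enable-private-ip-google-access

# Private DNS zone so googleapis.com hostnames resolve to private ranges
gcloud dns managed-zones create googleapis \
    --dns-name=googleapis.com. --visibility=private \
    --networks=prod-vpc --description="Route googleapis.com privately"

# A records for private.googleapis.com (199.36.153.8/30)
gcloud dns record-sets create private.googleapis.com. \
    --zone=googleapis --type=A --ttl=300 \
    --rrdatas=199.36.153.8,199.36.153.9,199.36.153.10,199.36.153.11

# Send all other API hostnames through the same path
gcloud dns record-sets create "*.googleapis.com." \
    --zone=googleapis --type=CNAME --ttl=300 \
    --rrdatas=private.googleapis.com.
```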
Use cases include VMs in private subnets that need to write logs to Cloud Logging, store data in Cloud Storage, query BigQuery datasets, or access other managed services without exposing the VMs to the internet. This architecture supports defense-in-depth security strategies where application tiers have no internet access while still consuming necessary cloud services. Private Google Access also enables hybrid architectures where on-premises systems access Google APIs through Cloud VPN or Interconnect connections.
Cloud NAT provides outbound internet access but not private access to Google APIs. VPC Network Peering connects VPC networks. Cloud VPN connects on-premises networks. While these services complement Private Google Access in hybrid architectures, only Private Google Access specifically enables private communication between VPC resources and Google APIs without internet traversal.
Question 83
A company needs to implement network segmentation with different firewall rules for development, staging, and production environments. What is the best approach?
A) Use separate VPC networks for each environment
B) Use network tags and tag-based firewall rules
C) Use different subnets with identical CIDR ranges
D) Use separate projects without network connectivity
Answer: B
Explanation:
Network tags combined with tag-based firewall rules provide the most flexible and manageable approach for implementing network segmentation within a VPC network while maintaining connectivity between environments when needed. This method enables granular security control by applying different firewall policies to instances based on their role or environment without requiring separate VPC networks or complex network architectures.
Network tags are labels attached to VM instances that identify characteristics like environment type, application tier, or security zone. Tags are simple text strings such as development, staging, production, web-server, or database. Multiple tags can be applied to individual instances enabling classification across multiple dimensions. Tags propagate automatically with instances during operations like creating machine images or instance templates.
Firewall rules use tags in both source and target specifications. Target tags determine which instances the rule applies to, enabling environment-specific rules like allowing port 8080 access only to instances tagged development. Source tags enable rules based on traffic origin like allowing database access only from instances tagged application-tier. This tag-based approach creates logical security zones that follow workloads regardless of their subnet or IP address.
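Both patterns can be sketched in two rules; the network name, CIDR range, and ports below are illustrative placeholders:

```shell
# Allow port 8080 only on instances tagged "development"
gcloud compute firewall-rules create allow-dev-8080 \
    --network=prod-vpc --direction=INGRESS --allow=tcp:8080 \
    --source-ranges=10.10.0.0/16 --target-tags=development

# Allow database access only from instances tagged "application-tier"
gcloud compute firewall-rules create allow-app-to-db \
    --network=prod-vpc --direction=INGRESS --allow=tcp:5432 \
    --source-tags=application-tier --target-tags=database
```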
The management advantages are significant. New instances automatically receive appropriate firewall treatment when tagged correctly during creation. Environment-specific rules are centrally managed and consistently applied across all tagged instances. Security policies can evolve by modifying firewall rules without reconfiguring individual instances. The approach supports complex scenarios like allowing all production instances to communicate while isolating development instances.
Separate VPC networks per environment create operational complexity and may hinder testing workflows requiring production data. Different subnets with identical CIDR ranges create routing conflicts. Separate projects without connectivity prevent necessary inter-environment communication. Only network tags with tag-based firewall rules provide the granular control, operational flexibility, and management simplicity needed for effective multi-environment network segmentation.
Question 84
An organization needs to provide temporary, time-limited access to specific Google Cloud resources without creating permanent service accounts. What should they use?
A) Service account keys
B) Workload Identity Federation
C) Short-lived credentials from Security Token Service
D) OAuth 2.0 client credentials
Answer: C
Explanation:
Security Token Service provides short-lived credentials that enable temporary, time-limited access to Google Cloud resources without creating permanent service accounts or managing long-lived keys. This approach implements the principle of least privilege by granting access only when needed for specific durations, reducing the attack surface from credential compromise and simplifying credential lifecycle management.
STS generates OAuth 2.0 access tokens with limited validity periods, typically one hour by default, that can be used to authenticate to Google Cloud APIs. The tokens provide the same level of access as the identity they represent but expire automatically after their validity period, eliminating the need for manual credential revocation. This automatic expiration significantly reduces risk from stolen or leaked credentials which become useless once expired.
Common use cases include providing temporary access for batch jobs or CI/CD pipelines that need credentials only during execution, granting time-bound access to external contractors or partners without creating permanent accounts, implementing just-in-time access where credentials are generated on-demand for specific tasks, and enabling emergency access scenarios where temporary elevated privileges are needed for incident response.
The credential request process involves authenticating with an existing identity such as a service account, user account, or workload identity, specifying the target service account to impersonate, and requesting token generation with appropriate scopes and lifetime. The returned short-lived token can then be used to authenticate API requests. Token exchange and refresh mechanisms enable long-running processes to obtain new tokens before expiration.
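In practice this flow is often driven through service account impersonation; the sketch below assumes the caller already holds roles/iam.serviceAccountTokenCreator on a hypothetical target service account:

```shell
# Mint a short-lived access token by impersonating the service account;
# the token expires on its own (roughly an hour by default)
TOKEN=$(gcloud auth print-access-token \
    --impersonate-service-account=batch-job@my-project.iam.gserviceaccount.com)

# Use the token for an API call; no long-lived key ever exists
curl -s -H "Authorization: Bearer ${TOKEN}" \
    "https://storage.googleapis.com/storage/v1/b?project=my-project"
```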
Service account keys are long-lived and require careful management. Workload Identity Federation maps external identities into Google Cloud but still relies on token exchange for access. The OAuth 2.0 client credentials grant authenticates an application to an authorization server rather than issuing time-limited access to Google Cloud resources. While these mechanisms have their uses, only Security Token Service specifically provides the short-lived, temporary credential generation capability that enables time-limited resource access without permanent credential creation.
Question 85
A company wants to analyze network traffic patterns and identify anomalies. Which Network Intelligence Center feature should they use?
A) Connectivity Tests
B) Network Topology
C) Performance Dashboard
D) Network Analyzer
Answer: C
Explanation:
The Performance Dashboard within Network Intelligence Center provides comprehensive network traffic analysis and anomaly detection capabilities by collecting, aggregating, and visualizing network performance metrics across VPC networks, Cloud VPN connections, and Cloud Interconnect attachments. This unified monitoring interface enables network engineers to identify performance issues, traffic anomalies, and capacity constraints through interactive dashboards and automated insights.
Performance Dashboard presents metrics including packet loss rates indicating network quality issues or congestion, latency measurements showing delay characteristics across network paths, throughput statistics revealing bandwidth utilization and capacity limits, and connection counts tracking the number of active flows. These metrics are collected continuously and displayed with time-series graphs enabling trend analysis and correlation of events across the network infrastructure.
Anomaly detection algorithms automatically analyze historical traffic patterns to establish baselines for normal behavior and generate alerts when metrics deviate significantly from expected ranges. For example, sudden spikes in latency, unexpected drops in throughput, or abnormal increases in packet loss trigger notifications enabling rapid response to potential issues. The automated detection reduces the manual effort required to monitor complex networks continuously.
The dashboard supports filtering and grouping by various dimensions including VPC network, subnet, region, connection type, and time range enabling focused analysis of specific network segments or timeframes. Drill-down capabilities allow investigating anomalies by examining detailed metrics for affected paths. Integration with Cloud Logging and Cloud Monitoring enables correlation with application metrics and system events for comprehensive troubleshooting.
Connectivity Tests validates network path reachability. Network Topology visualizes network architecture. Network Analyzer recommends configuration improvements. While these features provide valuable network management capabilities, only Performance Dashboard delivers the traffic analysis, metric visualization, and anomaly detection functionality required for identifying network performance issues and traffic pattern anomalies.
Question 86
An organization needs to ensure that traffic between specific VM instances in different VPC networks is encrypted. What should they implement?
A) VPC Network Peering with encryption enabled
B) Cloud VPN tunnels between the VPCs
C) Application-level encryption
D) Private Google Access
Answer: C
Explanation:
Application-level encryption provides the most reliable method for ensuring traffic confidentiality between VM instances across different VPC networks because it operates independently of the network infrastructure, guaranteeing that data remains encrypted throughout transit regardless of the underlying network path or configuration changes. This approach implements end-to-end encryption where data is encrypted at the source application and decrypted only at the destination application.
Application-level encryption using protocols like TLS/SSL ensures that sensitive data remains confidential even if network infrastructure is compromised or misconfigured. The encryption is controlled entirely by the applications, not dependent on network services that may be modified by network administrators or affected by configuration errors. Certificate-based authentication can provide mutual verification ensuring that applications communicate only with intended counterparts.
Implementation typically involves configuring applications to use encrypted communication protocols such as HTTPS for web services, TLS for database connections, or SSH for remote access. Many application frameworks and libraries provide built-in encryption support simplifying implementation. The encryption overhead is generally acceptable for most workloads and the security benefits outweigh performance considerations for sensitive data.
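For illustration only, a self-signed certificate pair can be generated and the handshake verified from the peer VPC; production deployments should use a real CA (or a service mesh handling mutual TLS), and the hostname and port here are placeholders:

```shell
# Self-signed server certificate for testing (not for production)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt -subj "/CN=app.internal"

# From a client VM in the other VPC, confirm the TLS handshake succeeds
openssl s_client -connect app.internal:8443 -CAfile server.crt </dev/null
```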
In Google Cloud’s network architecture, traffic between VMs typically traverses Google’s physical network infrastructure which Google encrypts at the physical layer, but this infrastructure encryption does not guarantee that traffic cannot be observed by systems with appropriate permissions within the same organization. Application-level encryption ensures that even administrators with network access cannot view unencrypted data in transit.
VPC Network Peering does not provide encryption and cannot be configured to encrypt traffic. Cloud VPN tunnels encrypt traffic but add complexity and may not support all traffic patterns efficiently. Private Google Access addresses API access not inter-VM encryption. Only application-level encryption provides guaranteed end-to-end confidentiality for traffic between specific instances regardless of network configuration.
Question 87
A company wants to automatically discover and map their entire Google Cloud network topology. Which tool should they use?
A) VPC Flow Logs
B) Network Intelligence Center Topology
C) Cloud Asset Inventory
D) Cloud Monitoring
Answer: B
Explanation:
Network Intelligence Center Topology provides automated discovery and visualization of complete Google Cloud network architectures including VPC networks, subnets, VM instances, load balancers, Cloud VPN connections, Cloud Interconnect attachments, and VPC Network Peering relationships. This interactive visualization tool creates comprehensive network maps without requiring manual documentation, enabling network engineers to understand complex architectures and troubleshoot connectivity issues efficiently.
The Topology feature automatically discovers all network resources within the organization or selected projects by querying Google Cloud APIs and building a unified view of network relationships. The visualization displays hierarchical relationships showing how VPC networks contain subnets, which subnets contain VM instances, how VPC networks connect through peering or VPN, and how on-premises networks integrate through hybrid connectivity. This complete view enables understanding of traffic paths and dependencies.
Interactive controls allow filtering and focusing on specific network segments or resource types. Users can zoom into particular VPC networks to examine details, highlight specific traffic flows to understand routing, and filter by region, project, or resource type to reduce visual complexity. Color coding and iconography provide quick identification of resource types and status. The topology updates automatically as network configurations change maintaining accuracy.
Use cases include onboarding new team members who need to understand network architecture quickly, troubleshooting connectivity issues by visualizing complete traffic paths, planning network changes by understanding impact on connected resources, and compliance documentation requiring accurate network diagrams. The automated nature eliminates the maintenance burden of manual network documentation which often becomes outdated.
VPC Flow Logs capture traffic metadata not topology. Cloud Asset Inventory tracks resource configurations but does not visualize network relationships. Cloud Monitoring provides metrics and alerting. While these tools provide valuable data about networks, only Network Intelligence Center Topology delivers the automated discovery and interactive visualization specifically designed for comprehensive network architecture mapping.
Question 88
An organization needs to implement fine-grained access control for their application based on user identity and context. What should they use?
A) Cloud Firewall rules
B) Cloud Armor
C) Identity-Aware Proxy
D) VPC Service Controls
Answer: C
Explanation:
Identity-Aware Proxy provides application-level access control based on user identity and contextual attributes, enabling organizations to protect applications without a VPN by verifying user identity and device characteristics before granting access. This zero-trust security model replaces traditional network perimeter security with identity- and context-based decisions, ensuring that only authorized users accessing from appropriate contexts can reach protected applications.
IAP operates as a centrally managed access proxy that sits in front of applications. When users attempt to access protected applications, IAP intercepts the request and redirects to Google’s authentication service. After verifying user identity through organizational identity providers like Google Workspace, Azure AD, or other SAML/OIDC providers, IAP evaluates access policies to determine whether the authenticated user should be granted access based on identity, group membership, and context.
Context-aware access policies extend simple identity checks with additional criteria including device security posture through endpoint verification, geographic location restrictions limiting access to specific regions or countries, IP address requirements allowing access only from corporate networks, and access level conditions combining multiple criteria. These policies implement defense-in-depth where multiple factors must align for access approval.
The proxy forwards authorized requests to backend applications while injecting signed headers containing verified user identity information. Applications can trust these headers because they originate from IAP and are cryptographically signed, preventing forgery. This enables applications to implement additional authorization logic based on user identity without handling authentication complexity: IAP handles authentication once, centrally, rather than in each application.
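The access decision described above can be sketched as a small model: identity, group membership, geography, network origin, and device posture must all pass before a request is proxied to the backend. This is a minimal illustration of the evaluation logic only; the class and field names (`AccessLevel`, `Request`) are hypothetical and not a Google API.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    groups: frozenset        # verified group memberships
    country: str             # resolved geographic location
    source_ip_trusted: bool  # request came from a corporate network
    device_compliant: bool   # endpoint verification passed

@dataclass
class AccessLevel:
    allowed_groups: frozenset
    allowed_countries: frozenset
    require_trusted_ip: bool = True
    require_compliant_device: bool = True

    def allows(self, req: Request) -> bool:
        # Defense in depth: every condition must hold for access approval.
        return (
            bool(self.allowed_groups & req.groups)
            and req.country in self.allowed_countries
            and (req.source_ip_trusted or not self.require_trusted_ip)
            and (req.device_compliant or not self.require_compliant_device)
        )

policy = AccessLevel(
    allowed_groups=frozenset({"eng"}),
    allowed_countries=frozenset({"US", "CA"}),
)

ok = Request("alice@example.com", frozenset({"eng"}), "US", True, True)
bad_geo = Request("bob@example.com", frozenset({"eng"}), "RU", True, True)
print(policy.allows(ok))       # True
print(policy.allows(bad_geo))  # False: wrong geography
```

The point of the sketch is that a single failed condition denies access even when the user's identity is valid, which is what distinguishes context-aware access from plain authentication.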
Cloud Firewall rules filter by IP address and port without identity awareness. Cloud Armor protects against attacks but does not provide identity-based access control. VPC Service Controls restrict API access, not application access. Only Identity-Aware Proxy delivers comprehensive identity- and context-based access control with authentication integration and fine-grained policy enforcement for protecting applications.
Question 89
A company needs to connect multiple VPC networks while maintaining isolated routing domains. What should they use?
A) VPC Network Peering with import/export route filters
B) Shared VPC
C) Cloud VPN with static routing
D) Multiple VPC networks without connectivity
Answer: A
Explanation:
VPC Network Peering with import and export route filters enables connecting multiple VPC networks while maintaining control over which routes are exchanged between networks, effectively creating isolated routing domains that share some connectivity. This approach provides the low-latency, high-bandwidth benefits of VPC peering while preventing automatic propagation of all routes, addressing scenarios where selective connectivity is required between networks.
VPC Network Peering establishes direct network connectivity between two VPC networks using Google’s internal network infrastructure. Without route filters, all subnet routes are automatically exchanged enabling any resource in either VPC to communicate with any resource in the peered VPC. This default behavior may be excessive when networks should remain partially isolated or when specific routing policies need to be enforced.
Route import and export settings control which routes are exchanged over the peering. Enabling custom route export on one VPC advertises its static and dynamic routes to the peer; enabling custom route import on the other VPC determines whether those advertised routes are accepted and installed. Subnet routes for private IP ranges are always exchanged, while exchange of subnet routes that use privately reused public IP ranges can be controlled separately. These settings are configured per peering, for example with the `--export-custom-routes` and `--import-custom-routes` flags of `gcloud compute networks peerings`.
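The exchange semantics can be reduced to a simple rule: a route lands in the peer's routing table only if the advertising side exports it and the receiving side imports it. The toy model below illustrates that intersection with stdlib `ipaddress`; the CIDRs and the predicate-based policies are illustrative, not how Google Cloud expresses peering configuration.

```python
import ipaddress

def exchanged_routes(advertised, export_filter, import_filter):
    """Return the routes installed in the peer VPC.

    advertised    -- CIDR strings the first VPC could share
    export_filter -- predicate: does the first VPC advertise this route?
    import_filter -- predicate: does the second VPC accept it?
    """
    installed = []
    for cidr in advertised:
        net = ipaddress.ip_network(cidr)
        # Both sides must agree for the route to be exchanged.
        if export_filter(net) and import_filter(net):
            installed.append(cidr)
    return installed

# Example policy: export only 10.0.0.0/8 space; import only /24 prefixes.
routes = ["10.1.0.0/24", "10.2.0.0/16", "192.168.1.0/24"]
result = exchanged_routes(
    routes,
    export_filter=lambda n: n.subnet_of(ipaddress.ip_network("10.0.0.0/8")),
    import_filter=lambda n: n.prefixlen == 24,
)
print(result)  # ['10.1.0.0/24']
```

Only the route that satisfies both policies survives, which is exactly how selective exchange keeps the two routing domains partially isolated.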
Common use cases include connecting production and development VPCs where production should access some development services but development should not access production, implementing hub-and-spoke topologies where spoke networks should not communicate with each other, and creating segmented architectures where different trust zones share limited connectivity. The filtering provides security through network isolation while maintaining necessary communication paths.
Shared VPC connects service projects to a host project's network but centralizes subnets in that host project rather than connecting separate routing domains. Cloud VPN adds latency and operational complexity for VPC-to-VPC connectivity. Completely isolated VPC networks prevent any communication. Only VPC Network Peering with route filters provides the balance of direct connectivity, performance, and controlled route exchange needed for maintaining isolated routing domains with selective connectivity.
Question 90
An organization wants to implement centralized internet egress for multiple VPC networks. What architecture should they use?
A) Cloud NAT in each VPC
B) Proxy VM in a shared VPC with routing
C) Cloud Router advertising default routes
D) External IP addresses on all VMs
Answer: B
Explanation:
A proxy VM deployed in a shared VPC with appropriate routing configuration provides centralized internet egress control for multiple VPC networks, enabling organizations to implement consistent security policies, traffic inspection, URL filtering, and logging for all outbound internet traffic through a single control point. This architecture consolidates egress management, reducing operational complexity while enabling advanced security controls not available with distributed egress approaches.
The architecture uses a shared VPC network that hosts one or more proxy VMs running Squid, NGINX, or commercial proxy software. Service project VPC networks connect to the shared VPC through VPC Network Peering or through VPN tunnels. Routing configuration in the service project VPCs uses custom static routes whose next hop is the proxy VM's internal IP address (or an internal load balancer fronting the proxies), directing internet-bound traffic through the proxy rather than directly to the internet.
The proxy VMs perform traffic inspection, apply filtering policies, log all requests, and can cache frequently accessed content. Multiple proxy VMs behind an internal load balancer provide high availability and horizontal scaling. The proxy VMs reach the internet through Cloud NAT or external IP addresses, creating a controlled egress point. Security policies applied at the proxy affect all traffic from connected VPC networks, ensuring consistent enforcement.
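The routing behavior that makes this work is ordinary longest-prefix matching: specific routes (the local subnet, the peered shared VPC) win over the custom `0.0.0.0/0` route, so only genuinely internet-bound traffic falls through to the proxy. The sketch below models that lookup with stdlib `ipaddress`; all addresses, range choices, and route names are invented for illustration.

```python
import ipaddress

# Hypothetical route table for a service-project VPC. The default route's
# next hop is the proxy VM's internal IP in the shared VPC.
ROUTES = [
    ("10.10.0.0/16", "local-subnet"),           # VPC subnet route
    ("10.20.0.0/16", "peering-to-shared-vpc"),  # route toward shared VPC
    ("0.0.0.0/0", "proxy-vm 10.20.0.5"),        # custom default route
]

def next_hop(dest_ip: str) -> str:
    dest = ipaddress.ip_address(dest_ip)
    candidates = [
        (ipaddress.ip_network(cidr), hop)
        for cidr, hop in ROUTES
        if dest in ipaddress.ip_network(cidr)
    ]
    # Longest-prefix match: the most specific matching route wins.
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

print(next_hop("10.10.3.7"))      # local-subnet (internal, stays direct)
print(next_hop("93.184.216.34"))  # proxy-vm 10.20.0.5 (internet-bound)
```

Internal traffic never touches the proxy, while every destination outside the known ranges is steered through the single controlled egress point.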
Benefits include centralized security policy management where URL filtering, malware scanning, or data loss prevention applies uniformly across all workloads, comprehensive audit logging of all internet access for compliance and security monitoring, potential cost optimization through caching and bandwidth management, and simplified network architecture where only proxy VMs need internet access while all other VMs remain fully private.
Cloud NAT in each VPC creates distributed egress requiring separate management. Cloud Router advertises routes but does not provide traffic control. External IPs on all VMs eliminate centralized control. Only the proxy VM architecture with centralized routing provides the unified internet egress control point enabling comprehensive security policy enforcement across multiple VPC networks.