Question 91
A network engineer needs to configure a vPC between two Nexus switches. Which component is required for proper vPC operation?
A) vPC peer-keepalive link only
B) vPC peer link only
C) Both vPC peer link and peer-keepalive link
D) Neither link is required
Answer: C
Explanation:
Both the vPC peer link and peer-keepalive link are essential components required for proper virtual Port Channel operation between Cisco Nexus switches. The peer link carries data traffic and synchronization information between vPC peer switches, while the peer-keepalive link provides out-of-band heartbeat monitoring to detect peer switch failures and prevent split-brain scenarios. Together, these links enable the redundant, loop-free topology that vPC provides for connecting downstream devices.
The vPC peer link is a high-bandwidth connection, typically implemented as a port-channel using multiple 10GbE or higher speed interfaces between the two vPC peer switches. This link serves multiple critical functions including carrying traffic destined for the vPC peer switch, synchronizing MAC address tables and IGMP states between peers, forwarding orphan port traffic from one peer to the other, and transmitting vPC control protocol messages. The peer link must have sufficient bandwidth to handle these functions, especially orphan port traffic that must traverse this link.
Configuration of the vPC peer link involves creating a port-channel interface on both switches, assigning member interfaces to the port-channel, and designating the port-channel as the vPC peer link using the vpc peer-link command. The peer link should use dedicated interfaces rather than sharing with other traffic when possible. Best practices recommend using at least two 10GbE interfaces in the peer link port-channel to provide adequate bandwidth and redundancy.
The vPC peer-keepalive link provides independent health monitoring between vPC peers through a separate management or dedicated link. This link carries only small keepalive messages, typically using a routed connection through the management VRF or a separate Layer 3 link. The peer-keepalive link detects when the peer switch has failed versus when only the peer link has failed, enabling appropriate failover behavior and preventing both switches from becoming active simultaneously.
Peer-keepalive configuration specifies the source and destination IP addresses for keepalive messages, the VRF or interface to use, and optionally keepalive interval and timeout parameters. The peer-keepalive link should use a path completely independent of the peer link to ensure it remains operational even when the peer link fails. Using the management network or a dedicated out-of-band link provides this independence.
The interaction between the peer link and peer-keepalive link enables proper vPC failure handling. When the peer link fails but peer-keepalive messages still succeed, both switches know the peer is alive; the operational secondary suspends its vPC member ports while the primary continues forwarding, preventing a dual-active condition. If the peer-keepalive also fails while the peer link is down, the switches can no longer verify each other's state, which is exactly the split-brain scenario the keepalive link exists to detect. When only the peer-keepalive fails but the peer link works, vPC continues operating normally. This coordinated failure detection ensures predictable behavior.
vPC domain configuration ties these components together. The vPC domain ID must match on both peers. Within the domain configuration, administrators specify the peer-keepalive destination, role priority determining primary and secondary roles, and other domain-wide parameters. Consistent domain configuration across both peers is essential for proper vPC establishment and operation.
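As a concrete reference, a minimal NX-OS sketch pulling these pieces together might look like the following; the domain ID, port-channel numbers, member interfaces, and keepalive addresses are hypothetical values chosen for illustration only.
feature vpc
feature lacp
!
vpc domain 10
  role priority 100
  peer-keepalive destination 192.168.100.2 source 192.168.100.1 vrf management
!
! Peer link: dedicated port-channel between the two vPC peers
interface port-channel1
  switchport mode trunk
  vpc peer-link
interface Ethernet1/1-2
  switchport mode trunk
  channel-group 1 mode active
!
! Downstream vPC member port toward an access switch or server
interface port-channel20
  switchport mode trunk
  vpc 20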
Additional vPC components beyond peer link and peer-keepalive include member ports configured as vPCs that connect to downstream devices, orphan ports that connect only to one vPC peer, and the vPC primary and secondary role determination. Understanding how all components interact enables successful vPC deployment and troubleshooting.
Using only the peer-keepalive link without a peer link would prevent data synchronization and traffic forwarding between peers, making vPC non-functional. The keepalive link alone cannot carry the data traffic and state synchronization that vPC requires. Both links are necessary for complete vPC functionality.
Using only the peer link without peer-keepalive would prevent proper failure detection and could lead to split-brain scenarios where both switches believe they are active after certain failure modes. Without keepalive verification, switches cannot distinguish between peer failure and peer link failure, preventing correct failover behavior. Both links are required for reliable operation.
Neither link being required would make vPC impossible as there would be no mechanism for control communication, data forwarding, or failure detection between peers. vPC fundamentally requires communication between peer switches, which these links provide. Both are mandatory components of vPC architecture.
Question 92
An engineer is troubleshooting VXLAN connectivity issues. Which command shows the VXLAN VNI to VLAN mapping?
A) show vlan
B) show interface nve
C) show nve vni
D) show vxlan tunnel
Answer: C
Explanation:
The show nve vni command displays the mapping between VXLAN Network Identifiers and VLANs, showing which VNI is associated with which VLAN, the replication mode configured, and the operational state of each VNI. This command is essential for verifying VXLAN configuration and troubleshooting connectivity issues by confirming that VNI-to-VLAN mappings are correctly established on Network Virtualization Edge interfaces.
Understanding VNI-to-VLAN mapping is fundamental to VXLAN operation. Each VLAN that will be extended across the VXLAN overlay must be mapped to a unique VNI. This mapping occurs on the NVE interface configuration where each VNI is associated with its corresponding VLAN. The show nve vni command output confirms these associations are active and shows additional details about each VNI’s operational status.
The command output includes several important fields. The VNI column shows the VXLAN Network Identifier number. The Associated VLAN column displays the mapped VLAN ID. The Multicast Group column shows the multicast group used for BUM traffic replication if using multicast mode. The State column indicates whether the VNI is up and operational. The Mode column shows whether the VNI uses multicast or ingress replication.
Multicast versus ingress replication configuration appears in the show nve vni output, helping engineers verify the intended replication method. Multicast replication uses multicast groups for efficient BUM traffic distribution across the VXLAN fabric. Ingress replication uses head-end replication where the source VTEP replicates packets to all destination VTEPs. The output confirms which method is active for each VNI.
Troubleshooting VXLAN connectivity frequently involves verifying VNI mappings are consistent across all VTEPs participating in the overlay. Each VTEP extending a particular VLAN should use the same VNI for that VLAN. Inconsistent VNI mappings between VTEPs prevent proper communication. Using show nve vni on all VTEPs and comparing results identifies mapping inconsistencies.
The command also reveals VNI operational state issues. If a VNI shows as down, configuration problems exist preventing VNI activation. Common causes include the associated VLAN not existing, the NVE interface being down, or multicast group configuration issues. The operational state indication directs troubleshooting toward specific problem areas.
VNI configuration on Nexus switches involves mapping a VLAN to a VNI with the vn-segment command, creating the VNI under the NVE interface, and specifying the replication mode. The member vni command under interface nve1 defines each VNI, with sub-commands such as mcast-group, ingress-replication protocol static (with peer-ip entries), ingress-replication protocol bgp, or associate-vrf for Layer 3 VNIs configuring specific VNI properties. After configuration, show nve vni verifies that the configuration activated correctly.
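For illustration, a minimal flood-and-learn style sketch on NX-OS might look like this; the VLAN, VNI, loopback, and multicast group values are hypothetical.
feature nv overlay
feature vn-segment-vlan-based
!
vlan 10
  vn-segment 10010
!
interface nve1
  no shutdown
  source-interface loopback1
  member vni 10010
    mcast-group 239.1.1.1
!
! Verify the VNI-to-VLAN mapping, replication mode, and state
show nve vni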
Additional VXLAN verification commands complement show nve vni including show nve peers showing discovered VTEPs, show interface nve1 displaying NVE interface status, show vxlan showing overall VXLAN configuration, and show l2route evpn mac-ip all for EVPN control plane information. Using multiple commands provides comprehensive VXLAN troubleshooting visibility.
The show vlan command displays standard VLAN information including VLAN IDs, names, and member ports but does not show VXLAN-specific information like VNI mappings or overlay configuration. While useful for basic VLAN verification, show vlan does not provide the VXLAN overlay details needed for troubleshooting VNI mapping issues.
The show interface nve command shows the status and configuration of the NVE interface itself including the source interface, encapsulation type, and whether the interface is up, but it does not display individual VNI mappings. This command verifies NVE interface operation but not the specific VNI-to-VLAN associations needed for mapping verification.
The show vxlan tunnel command is not a standard Cisco NX-OS command syntax. While various show commands provide VXLAN tunnel information, this specific command format does not exist on Nexus platforms. The correct command for VNI mapping information is show nve vni, making this option both incorrect and non-existent.
Question 93
A data center uses VXLAN EVPN for overlay networking. What is the primary function of the Route Distinguisher (RD) in EVPN?
A) To encrypt VXLAN traffic
B) To make routes unique across different VRFs or tenants
C) To compress routing updates
D) To authenticate BGP neighbors
Answer: B
Explanation:
The Route Distinguisher in EVPN makes routes unique across different VRFs or tenants by prepending a unique identifier to route prefixes, enabling BGP to carry multiple instances of the same IP prefix for different customers or VRFs without conflicts. In VXLAN EVPN deployments, RDs allow the same MAC or IP addresses to exist in different tenant networks while being advertised through the same BGP control plane, essential for multi-tenancy support in data center overlay networks.
Route Distinguishers are 8-byte values prepended to route prefixes creating VPN-IPv4 or VPN-IPv6 address families in BGP. The RD transforms a regular IPv4 prefix like 10.1.1.0/24 into a unique VPNv4 prefix like RD:10.1.1.0/24. Since different VRFs can use different RDs, the same IP prefix in multiple VRFs becomes distinguishable in BGP. This mechanism is fundamental to VPN and multi-tenant operation over shared infrastructure.
In EVPN, RDs serve similar purposes for Layer 2 information. EVPN advertises MAC addresses, IP-to-MAC bindings, and IP prefixes using BGP EVPN routes. The RD makes these advertisements unique per tenant or VRF. Two tenants using the same MAC address in different VRFs have their advertisements distinguished by different RDs, preventing confusion and enabling proper multi-tenant isolation.
RD format typically follows two common patterns: ASN:nn format using a two-byte AS number followed by a four-byte number, or IP-address:nn format using a four-byte IP address followed by a two-byte number. For example, 65000:1 or 192.168.1.1:1 are valid RDs. The choice of format is flexible as long as RD uniqueness is maintained where needed.
RD assignment strategies vary based on design requirements. Per-VRF unique RDs assign different RDs to each VRF on each device, providing maximum flexibility. Per-VRF common RDs use the same RD for a VRF across all devices, simplifying configuration but requiring careful route target management. Automatic RD assignment can simplify configuration in some implementations. Each strategy has tradeoffs between simplicity and flexibility.
The relationship between RDs and Route Targets is important but distinct. RDs make routes unique so BGP can carry them without conflicts. Route Targets control which routes are imported into which VRFs. A route has one RD that makes it unique but can have multiple Route Targets controlling its distribution. Understanding this distinction clarifies each component’s role in VPN routing.
EVPN uses RDs in all route types including Type 2 MAC/IP advertisement routes, Type 3 Inclusive Multicast routes, Type 5 IP prefix routes, and others. Each route type includes an RD in its NLRI making the route unique. This consistent use of RDs across EVPN route types enables comprehensive multi-tenant support for both Layer 2 and Layer 3 forwarding.
Configuration of RDs on Cisco Nexus switches occurs within VRF definitions using the rd command. For EVPN, RDs are configured under the VNI configuration within the EVPN context or under VRF definitions for L3VNI. Proper RD configuration is essential for EVPN route advertisement to function correctly. Misconfigured or duplicate RDs where uniqueness is required causes routing problems.
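A minimal NX-OS sketch of RD and route-target configuration might look like the following; the tenant name, VNIs, and RD/RT values are hypothetical, and rd auto lets the switch derive a unique value automatically.
vrf context TENANT-A
  vni 50001
  rd 65000:1
  address-family ipv4 unicast
    route-target both 65000:1
    route-target both 65000:1 evpn
!
evpn
  vni 10010 l2
    rd auto
    route-target import auto
    route-target export auto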
Encryption of VXLAN traffic is handled by technologies like MACsec or IPsec, not by Route Distinguishers. RDs are BGP route identifiers with no encryption functionality. They operate at the control plane routing level, while encryption operates at the data plane transmission level. RDs do not provide any security or encryption functions.
Compression of routing updates is not a function of Route Distinguishers. BGP has various mechanisms for efficient update handling like route refresh and update packing, but RDs are not involved in compression. RDs actually slightly increase routing update size by adding 8 bytes to each route prefix, opposite of compression.
Authentication of BGP neighbors uses MD5 signatures or more modern TCP-AO mechanisms configured through BGP neighbor commands, not Route Distinguishers. RDs are route identifiers within the BGP NLRI and have no role in BGP session authentication or security. Authentication is a session-level security function unrelated to RD operation.
Question 94
An engineer needs to verify that a Nexus switch is properly forwarding traffic for a specific VLAN. Which command shows the MAC address table for that VLAN?
A) show mac address-table vlan [vlan-id]
B) show vlan id [vlan-id]
C) show interface status
D) show spanning-tree vlan [vlan-id]
Answer: A
Explanation:
The show mac address-table vlan command followed by the specific VLAN ID displays all MAC addresses learned in that VLAN along with the interfaces where each MAC is located and the entry type, providing essential information for verifying traffic forwarding and troubleshooting connectivity issues. This command shows which MAC addresses the switch has learned on which ports for the specified VLAN, confirming that the switch is properly learning endpoints and can forward traffic to them.
MAC address table verification is fundamental to Layer 2 troubleshooting. When connectivity issues occur, checking whether the switch has learned the source and destination MAC addresses in the appropriate VLAN immediately reveals whether the problem is MAC learning or elsewhere. If MAC addresses appear on expected ports, the switch can forward traffic properly. If MAC addresses are missing or on wrong ports, addressing that resolves forwarding issues.
The command output includes several key fields. The VLAN column shows which VLAN the entry belongs to. The MAC Address column displays the learned MAC address. The Type column indicates whether the entry is dynamic, static, or other types. The Age column shows how long since the entry was refreshed. The Ports column displays which interface the MAC was learned on. This comprehensive information enables detailed forwarding path analysis.
Dynamic MAC address learning occurs when the switch receives frames and learns the source MAC address on the ingress interface. These learned addresses populate the MAC address table enabling the switch to forward subsequent frames to those MAC addresses out the correct interface. The aging time, 1800 seconds by default on NX-OS, removes entries not refreshed by receiving frames, preventing stale entries from persisting indefinitely.
Static MAC addresses configured manually remain in the table until explicitly removed, useful for ensuring critical devices’ MAC addresses remain stable regardless of traffic patterns. Static entries can enforce that specific MAC addresses only appear on designated ports, providing basic MAC-based security. The Type field distinguishes static from dynamic entries enabling appropriate management of each.
Port channel member interfaces often show MAC addresses learned on the port-channel interface rather than individual member interfaces, reflecting that the MAC address table uses the port-channel as the logical forwarding destination. Understanding this behavior prevents confusion when troubleshooting port-channel environments where MAC addresses may not appear on expected individual physical interfaces.
VLAN-specific filtering using the vlan parameter focuses output on relevant entries for troubleshooting specific VLAN connectivity. Without VLAN filtering, the show mac address-table command displays all VLANs’ entries, which can be overwhelming in large networks. Filtering to the VLAN in question makes analysis manageable and identifies the specific MAC learning behavior for that VLAN.
Additional MAC address table troubleshooting commands include show mac address-table address showing entries for a specific MAC across all VLANs, show mac address-table interface showing entries learned on a specific interface, and show mac address-table count showing entry counts per VLAN or overall. These variations provide different perspectives on MAC learning for comprehensive troubleshooting.
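The following short examples illustrate these variations along with an optional static entry; the MAC address, VLAN, and interface values are hypothetical.
show mac address-table vlan 10
show mac address-table address 0050.56aa.0001
show mac address-table interface ethernet 1/1
show mac address-table count
!
! Optional: pin a known MAC to a specific port
mac address-table static 0050.56aa.0001 vlan 10 interface ethernet 1/5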
The show vlan id command displays configuration information about a specific VLAN including its name, state, and member ports but does not show the MAC address table or learned MAC addresses. While useful for verifying VLAN configuration and port membership, this command does not provide the MAC forwarding information needed to verify traffic forwarding paths.
The show interface status command displays interface operational states, speed, duplex, and VLAN information but does not show MAC addresses learned on interfaces. This command verifies interface operational state and basic configuration but not the MAC address learning that determines forwarding behavior. Interface status and MAC learning are complementary but separate troubleshooting areas.
The show spanning-tree vlan command displays spanning-tree topology information for a VLAN including root bridge, port states, and costs but does not show the MAC address table. Spanning-tree information explains which ports forward or block for loop prevention but not which MAC addresses the switch has learned. Both spanning-tree and MAC information are valuable but serve different troubleshooting purposes.
Question 95
A network administrator is configuring QoS on Nexus switches. Which QoS model uses classification, marking, queuing, and scheduling to manage traffic?
A) Best-effort only
B) IntServ (Integrated Services)
C) DiffServ (Differentiated Services)
D) FIFO (First-In-First-Out)
Answer: C
Explanation:
Differentiated Services is the scalable QoS model that uses classification, marking, queuing, and scheduling to provide differentiated treatment for different traffic types without per-flow state maintenance. DiffServ operates by classifying traffic into behavior aggregates based on DSCP values in IP headers, marking packets appropriately, queuing different traffic classes separately, and scheduling how queues are serviced, enabling priority treatment for important traffic while maintaining scalability across large networks.
Classification in DiffServ identifies traffic types based on various criteria including source/destination addresses, protocol types, port numbers, or existing markings. Classification policies examine packet headers and match criteria to determine which traffic class each packet belongs to. On Nexus switches, classification uses class-maps defining match criteria for identifying different traffic types requiring different QoS treatment.
Marking assigns QoS values to packets based on classification results, typically by setting DSCP values in the IP header or CoS values in the 802.1Q header. Marking at network edges enables core network devices to provide appropriate treatment based on markings without deep packet inspection. Consistent marking across the network enables end-to-end QoS with simple, scalable implementation at each network node.
Queuing separates classified and marked traffic into different queues receiving different treatment. Nexus switches support multiple queues per interface enabling separation of voice, video, critical data, and best-effort traffic. Queue depth and drop policies can be configured per queue, providing control over how much buffer space each traffic class receives and how congestion is managed within each class.
Scheduling determines how queues are serviced including strict priority for latency-sensitive traffic, weighted fair queuing for proportional bandwidth sharing, and shaped queuing for rate-limiting specific traffic classes. Nexus switches support priority queuing for delay-sensitive traffic like voice, ensuring it is transmitted before other traffic while preventing starvation of lower-priority queues through bandwidth guarantees.
DiffServ scalability comes from per-hop behaviors rather than per-flow state. Each router independently determines packet treatment based on DSCP markings without maintaining flow state or signaling end-to-end reservations. This stateless operation enables DiffServ to scale to internet-size networks where maintaining per-flow state would be impractical. Network devices need only maintain classification and queue service policies, not individual flow information.
Implementation on Nexus switches uses MQC-style configuration with class-maps for classification, policy-maps defining actions for each class, and service-policies applying policies to interfaces. The queuing type commands configure queue characteristics, while system qos or qos commands enable and configure global QoS settings. This hierarchical configuration provides flexibility while maintaining clarity.
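A minimal NX-OS MQC sketch for classification and marking might look like this; the class name, DSCP value, qos-group, and interface are hypothetical, and queuing behavior is configured separately with type queuing policy maps.
class-map type qos match-any VOICE
  match dscp 46
!
policy-map type qos CLASSIFY-IN
  class VOICE
    set qos-group 5
!
interface Ethernet1/1
  service-policy type qos input CLASSIFY-IN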
Common DiffServ PHBs include Expedited Forwarding for low-latency traffic like voice, Assured Forwarding classes for different priority levels of data traffic with drop precedence, and Default Forwarding for best-effort traffic. These standard PHBs enable consistent QoS treatment across multi-vendor networks when all devices honor the same DSCP-to-behavior mappings.
Best-effort only provides no QoS differentiation, treating all traffic equally. While simple, best-effort cannot prioritize important traffic over bulk traffic or provide latency guarantees for delay-sensitive applications. For networks with mixed traffic types and different application requirements, best-effort alone is insufficient and DiffServ provides necessary differentiation.
IntServ requires per-flow resource reservation using RSVP signaling with admission control and guaranteed resource allocation for each flow. While IntServ provides strong QoS guarantees, it does not scale to large networks due to per-flow state maintenance requirements. IntServ is rarely used in data center networks where DiffServ’s scalability is essential for handling large numbers of flows.
FIFO queuing transmits packets in order received without classification or prioritization, providing no QoS differentiation. While simple and fair for similar traffic, FIFO cannot prioritize important traffic or manage congestion effectively. DiffServ with multiple queues and sophisticated scheduling provides far better traffic management than simple FIFO, making FIFO inadequate for modern data center requirements.
Question 96
An engineer is configuring OSPF on a Nexus switch. Which network type should be used for Ethernet interfaces to avoid DR/BDR election?
A) Broadcast
B) Point-to-point
C) NBMA
D) Point-to-multipoint
Answer: B
Explanation:
Point-to-point network type on OSPF interfaces avoids Designated Router and Backup Designated Router election, forming direct neighbor relationships without the overhead and convergence delays of DR/BDR election. For Ethernet interfaces in data center environments where only two OSPF neighbors exist on each link, configuring point-to-point network type simplifies operation, speeds convergence, and eliminates unnecessary DR/BDR election overhead that provides no benefit on point-to-point links.
DR and BDR election occurs on broadcast and NBMA network types to reduce the number of adjacencies on multi-access segments. On segments with many routers, forming full mesh adjacencies between all routers creates scaling issues. DR and BDR reduce this by having all routers form adjacencies only with the DR and BDR, reducing adjacency count from n squared to n. However, this optimization is unnecessary on point-to-point links with only two routers.
Point-to-point network type eliminates DR/BDR election because only two routers exist on the link, making the election pointless. Routers on point-to-point links form direct OSPF adjacencies without electing a DR. This simplifies configuration, speeds initial convergence because election does not occur, and provides cleaner OSPF operation. The default Hello and Dead intervals (10 and 40 seconds) are the same as on broadcast networks, and they can be tuned lower for quicker failure detection.
Configuring point-to-point network type on Nexus switches uses the ip ospf network point-to-point command under interface configuration mode. This overrides the default network type, which is typically broadcast for Ethernet interfaces. Applying this configuration to all Ethernet interfaces in point-to-point topology scenarios provides consistent behavior and optimal OSPF operation.
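For example, a leaf uplink configured as a routed point-to-point OSPF interface on NX-OS might look like this; the process name, router ID, interface, and addressing are hypothetical.
feature ospf
!
router ospf UNDERLAY
  router-id 10.0.0.11
!
interface Ethernet1/49
  no switchport
  ip address 192.168.0.1/31
  ip router ospf UNDERLAY area 0.0.0.0
  ip ospf network point-to-point
  no shutdown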
Data center spine-leaf topologies particularly benefit from point-to-point OSPF configuration. Each leaf connects to each spine forming point-to-point links where DR/BDR election is unnecessary. Configuring all spine-leaf links as point-to-point simplifies OSPF operation across the fabric. Combined with appropriate timer tuning, point-to-point configuration optimizes OSPF for data center use cases.
The convergence advantage of point-to-point comes from eliminating DR/BDR election delay. On broadcast networks, routers must complete DR/BDR election before forming adjacencies and exchanging routing information. This election process adds delay to initial convergence. Point-to-point networks skip election, proceeding directly to adjacency formation and routing exchange, reducing convergence time.
Troubleshooting OSPF adjacencies is simpler with point-to-point configuration. Broadcast network issues often involve DR/BDR election problems, wrong DR selection, or DR priority configuration. Point-to-point eliminates these potential issues. Troubleshooting focuses on basic OSPF configuration, reachability, timers, and authentication without DR/BDR complexity.
Best practices for OSPF in data centers include using point-to-point for all interfaces in spine-leaf topologies, tuning hello and dead timers for fast convergence, enabling OSPF authentication for security, and considering unnumbered interfaces to conserve IP addresses. These practices combined optimize OSPF for modern data center requirements.
Broadcast network type performs DR/BDR election on multi-access networks, which is the default for Ethernet interfaces but unnecessary and undesirable for point-to-point links. Election adds delay and complexity without benefit in two-router scenarios. For avoiding DR/BDR election on Ethernet interfaces, broadcast is the wrong network type to use.
NBMA network type requires manual neighbor configuration and performs DR/BDR election, introducing even more complexity than broadcast. NBMA is designed for non-broadcast multi-access networks like Frame Relay, not modern Ethernet networks. For Ethernet interfaces in point-to-point scenarios, NBMA adds unnecessary complexity and is inappropriate.
Point-to-multipoint network type is designed for scenarios where one router connects to multiple routers but broadcast is not available, treating the segment as a collection of point-to-point links. While avoiding DR/BDR election, point-to-multipoint is more complex than pure point-to-point and is unnecessary for simple two-router scenarios. Standard point-to-point is simpler and more appropriate.
Question 97
A data center is implementing Cisco ACI. What is the primary function of the Application Policy Infrastructure Controller (APIC)?
A) To perform packet forwarding
B) To provide centralized policy management and configuration
C) To act as a default gateway
D) To provide end-user access
Answer: B
Explanation:
The Application Policy Infrastructure Controller provides centralized policy management and configuration for the entire ACI fabric, serving as the single point of automation, management, and monitoring for the software-defined network. APIC defines application-centric policies that the fabric enforces, manages the lifecycle of fabric configuration, provides visibility into fabric operation, and abstracts infrastructure complexity through an application-focused model, enabling administrators to define network requirements in business terms rather than low-level networking constructs.
APIC operates as a clustered system with multiple controllers providing high availability and scalability. A minimum of three APICs in a cluster is recommended for production environments. The controllers maintain synchronized configuration databases and elect a leader for certain functions, but all controllers can accept API calls and GUI connections. This clustering provides resilience against individual controller failures while maintaining centralized policy management.
Policy definition in APIC uses an application-centric model where administrators define application requirements including which endpoints can communicate, what security policies apply, what quality of service is needed, and where applications are deployed. APIC translates these high-level policies into low-level switch configurations pushed to fabric leaf and spine switches. This abstraction allows network configuration to align with application needs rather than requiring application design to conform to network constraints.
The APIC communicates with fabric switches using OpFlex protocol, which separates policy definition from policy enforcement. APIC defines policies centrally, while leaf switches enforce policies locally on data plane traffic. This distributed enforcement model provides scalability as the data plane operates at line rate on switch ASICs while the control plane centralized on APIC provides consistent policy across the fabric.
Configuration management through APIC provides version control, rollback capabilities, and configuration validation. All changes are tracked with change logs identifying who made changes and when. Administrators can create configuration snapshots and rollback to previous configurations if issues arise. Configuration validation checks policies for conflicts or errors before applying to the fabric, preventing configuration mistakes.
Monitoring and troubleshooting capabilities in APIC provide comprehensive visibility into fabric health, performance, and events. APIC collects statistics, errors, and events from all fabric elements presenting unified dashboards and detailed analytics. Features like atomic counters track specific traffic flows, health scores quantify component status, and fault management identifies and prioritizes issues requiring attention.
Integration with external systems extends APIC value beyond fabric management. APIC provides REST APIs enabling integration with orchestration platforms, cloud management systems, and custom automation tools. This API-first design allows ACI to integrate into broader data center automation workflows. APIC also integrates with VMware, Microsoft, Kubernetes, and other platforms for comprehensive policy enforcement across virtual and container environments.
Multitenancy support in APIC enables multiple isolated tenants sharing the same physical fabric. Each tenant’s policies remain completely separate with no visibility between tenants. This isolation enables service providers or large enterprises to consolidate multiple networks onto shared ACI fabric while maintaining security and policy independence. APIC manages all tenants from a single interface while enforcing strict isolation.
Packet forwarding occurs on leaf and spine switches in the ACI fabric, not on APIC. APIC is purely a control plane component that defines policies and configures switches, but actual traffic forwarding happens entirely on fabric switches at line rate using hardware ASICs. APIC never sits in the data path and does not forward packets.
Default gateway functionality is provided by leaf switches in ACI fabric, not by APIC. Leaf switches serve as distributed default gateways using anycast gateway addressing where all leaf switches use the same gateway IP for a subnet. APIC manages this configuration but does not act as a gateway itself.
End-user access occurs through ports on leaf switches in the ACI fabric, not through APIC. Users and servers connect to leaf switch ports configured according to policies defined in APIC. APIC is a management component not involved in direct user or server connectivity beyond configuration management.
Question 98
Which routing protocol is commonly used in Cisco ACI fabric between spine and leaf switches?
A) OSPF
B) EIGRP
C) IS-IS
D) MP-BGP
Answer: D
Explanation:
Multiprotocol BGP is the routing protocol used in Cisco ACI fabric between spine and leaf switches, carrying overlay routing information in an EVPN-based architecture. MP-BGP provides scalability, stability, and multi-tenant support necessary for large-scale data center fabrics. The protocol operates in a spine-leaf topology where each leaf switch forms BGP peerings with all spine switches, creating a simple, predictable routing architecture that scales linearly.
The ACI fabric overlay uses MP-BGP to distribute reachability information learned by leaf switches to all other leaf switches through the spines. When a leaf switch learns new reachability information, such as external routes on a border leaf, it advertises it to the spine switches using BGP routes; the spines, acting as route reflectors, redistribute these advertisements to all other leaves, providing fabric-wide awareness. This overlay-underlay separation enables multi-tenancy and policy-based forwarding.
BGP EVPN route types in ACI include Type-2 routes for MAC and MAC+IP advertisement, Type-3 routes for BUM traffic handling, and Type-5 routes for IP prefix advertisement. These route types enable the fabric to support both Layer-2 and Layer-3 forwarding with optimal efficiency. EVPN provides control plane learning replacing data plane flooding with protocol-based distribution, improving scalability and convergence.
The spine-leaf BGP topology uses each leaf as a BGP route reflector client peering with all spines acting as route reflectors. This design eliminates the need for full mesh BGP peerings between leaves, dramatically simplifying configuration and scaling. Adding new leaves requires only configuring peerings to existing spines, not to all existing leaves. This architectural simplicity enables fabrics to scale to hundreds of leaves.
Autonomous system numbering in the ACI fabric uses a single BGP AS number, typically a private AS, for the entire fabric, configured through the APIC BGP route reflector policy; leaves peer with the spine route reflectors over iBGP sessions within that AS. BGP path attributes and route policies control route selection and loop prevention in the fabric topology.
BGP configuration in ACI is automated by APIC, which generates and deploys BGP configurations to all spine and leaf switches based on fabric membership discovery. Administrators do not manually configure BGP on individual switches. APIC determines appropriate BGP parameters including AS numbers, router IDs, peer relationships, and address families, then pushes complete configurations to switches. This automation ensures consistency and eliminates configuration errors.
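Although APIC generates this configuration automatically inside ACI, the equivalent spine-as-route-reflector pattern in a standalone NX-OS VXLAN EVPN fabric is a useful mental model of what the generated configuration accomplishes; the AS number and neighbor addresses below are hypothetical.
router bgp 65001
  router-id 10.0.0.101
  address-family l2vpn evpn
  neighbor 10.0.0.11
    remote-as 65001
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      route-reflector-client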
BGP stability and scalability characteristics make it ideal for ACI fabric. BGP handles large routing tables efficiently, converges predictably, supports extensive policy control through route maps and communities, and provides well-understood operational behavior from decades of internet routing use. These proven characteristics transfer to data center fabrics where similar scale and stability requirements exist.
Fast convergence in ACI BGP uses aggressive timers and BFD for rapid failure detection. When links or nodes fail, BFD detects failures within milliseconds, triggering BGP to remove affected routes and converge to alternate paths. This fast convergence combined with ECMP load balancing provides the high availability and performance required for modern data center applications.
OSPF is not used in ACI fabric for spine-leaf routing. While OSPF works well in many environments, its link-state nature and flooding behavior do not provide the scale, multi-tenancy, and policy capabilities that BGP EVPN offers. ACI architecture specifically selected BGP for its advantages in large-scale, multi-tenant environments.
EIGRP is not used in ACI fabric and is proprietary to Cisco with limited applicability in modern software-defined networks. EIGRP lacks the multi-tenant isolation and policy capabilities needed for ACI. The protocol is primarily used in campus and branch networks, not data center fabrics where BGP’s proven internet-scale characteristics are preferred.
IS-IS does run in the ACI infra underlay, where it provides loopback and TEP reachability between spine and leaf switches, but it is not the protocol that carries tenant and overlay routing information. ACI uses MP-BGP EVPN for its overlay routing because IS-IS lacks the native overlay capabilities and multi-tenancy support that EVPN provides, making MP-BGP the intended answer for this question.
Question 99
A Nexus switch has multiple VLANs configured but one VLAN is not passing traffic. Which command helps verify that the VLAN is active and has member ports?
A) show vlan brief
B) show ip interface brief
C) show running-config
D) show interface status
Answer: A
Explanation:
The show vlan brief command provides a concise overview of all VLANs on the switch including VLAN IDs, names, status, and member ports, enabling quick verification that a specific VLAN exists, is active, and has appropriate port memberships. This command is the first troubleshooting step for VLAN connectivity issues, immediately revealing whether the VLAN is configured, active, and has the expected ports assigned, any of which could cause traffic to fail.
The command output includes VLAN ID, Name, Status, and Ports columns. The VLAN ID identifies each VLAN numerically. The Name field shows configured VLAN names aiding human identification. The Status column indicates whether the VLAN is active or suspended. The Ports column lists all switch ports assigned to each VLAN. This comprehensive information in brief format enables rapid assessment of VLAN configuration.
VLAN status is particularly important for troubleshooting. A VLAN showing as suspended will not forward traffic even if ports are properly assigned. VLAN suspension occurs when the VLAN is administratively placed in the suspend state (the state suspend command) rather than the active state. If a problem VLAN shows suspended status, investigating the suspension reason becomes the focus rather than port membership or other configuration.
Port membership verification through show vlan brief confirms that all expected ports are members of the VLAN. If a server should be in VLAN 10 but its port does not appear in VLAN 10’s port list, traffic will not flow regardless of other configuration. Missing port membership is a common VLAN connectivity problem quickly identified through this command output.
Trunk port representation in show vlan brief varies by platform and release. Trunk ports may not appear in a VLAN's port list the same way access ports do, so understanding how they are displayed prevents misinterpretation; the show interface trunk command confirms which VLANs are allowed and forwarding on each trunk.
Comparing actual port membership to expected membership identifies discrepancies. If documentation indicates certain ports should be in a VLAN but show vlan brief does not list them, either configuration is incorrect or documentation is outdated. This comparison between expected and actual configuration quickly identifies configuration drift or errors that cause connectivity problems.
The brief format provides essential information without overwhelming detail, making it ideal for quick troubleshooting and verification. More detailed VLAN information is available through show vlan id or show vlan name commands when deeper investigation is needed, but show vlan brief suffices for most initial troubleshooting by answering the fundamental questions of whether the VLAN exists, is active, and has member ports.
Systematic VLAN troubleshooting uses show vlan brief as a starting point. If the VLAN is missing, configuration is incomplete. If suspended, focus on reasons for suspension. If active but lacking expected ports, investigate port configuration. If active with correct ports but still not passing traffic, proceed to other troubleshooting areas like spanning-tree, trunk configuration, or inter-VLAN routing. This methodical approach efficiently identifies problem root causes.
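A short sketch of the creation-and-verification flow described above; the VLAN ID, name, and interface are hypothetical.
vlan 10
  name Servers
  state active
!
interface Ethernet1/5
  switchport
  switchport mode access
  switchport access vlan 10
  no shutdown
!
! Confirm the VLAN is active and the port appears in its member list
show vlan brief
show interface trunk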
The show ip interface brief command displays IP addressing and status of Layer-3 interfaces including SVIs, routed interfaces, and management interfaces but does not show VLAN membership or Layer-2 port assignments. While useful for Layer-3 troubleshooting, this command does not verify VLAN existence, status, or port membership needed for diagnosing VLAN connectivity issues.
The show running-config command displays the complete switch configuration including VLAN definitions and port assignments but provides overwhelming detail making quick verification difficult. While running-config contains all VLAN information, parsing thousands of configuration lines to verify VLAN status and membership is inefficient compared to show vlan brief’s focused output.
The show interface status command displays port operational status, speed, duplex, and VLAN for access ports but presents information per-interface rather than per-VLAN. This interface-centric view makes verifying all ports in a specific VLAN difficult. While useful for troubleshooting specific port issues, show interface status is less efficient than show vlan brief for VLAN-wide verification.
Question 100
An engineer needs to configure port channels on Nexus switches. Which protocol should be used for automatic port channel negotiation?
A) STP
B) LACP
C) HSRP
D) OSPF
Answer: B
Explanation:
Link Aggregation Control Protocol automatically negotiates port channel formation between switches, dynamically determining which interfaces should bundle into a port channel and detecting configuration mismatches or failures that would prevent proper operation. LACP provides standards-based link aggregation that works across multi-vendor environments, automatically handles member link failures, and verifies consistent configuration across port channel ends, making it the preferred protocol for port channel implementation on Nexus switches.
LACP operates by exchanging protocol messages called LACPDUs between potential port channel members. Each interface sends LACPDUs advertising its willingness to form a channel, configuration parameters, and system and port priorities. Receiving switches compare LACPDU information against local configuration to determine compatibility. When both ends agree on port channel membership and parameters, interfaces bundle into a functioning port channel.
Port channel modes determine LACP behavior. Active mode sends LACPDUs continuously and actively attempts to form port channels. Passive mode listens for LACPDUs but does not initiate formation. For LACP to form a port channel, at least one end must be active. Best practice uses active mode on both ends ensuring port channel formation succeeds and enabling fastest detection of configuration changes or failures.
System priority and port priority control LACP negotiations. System priority identifies which switch controls port channel decisions when disagreements occur. Port priority determines which interfaces become active members when more interfaces are LACP-enabled than can be active in the channel. Understanding priority operation helps design predictable port channel behavior and troubleshoot unexpected member selections.
Member interface requirements ensure proper port channel operation. All member interfaces must have matching speed and duplex settings. Configuration parameters like spanning-tree, VLANs on trunks, and other settings should be consistent. LACP detects many mismatches and refuses to form channels with incompatible configurations, preventing forwarding loops or misconfigurations that would cause network problems.
LACP load balancing distributes traffic across port channel members based on configurable hash algorithms. Common algorithms include source-destination IP, source-destination MAC, source-destination port, or combinations. Load balancing occurs per-flow rather than per-packet preserving packet order within flows. Proper load balancing configuration optimizes bandwidth utilization across member links.
Failure detection and recovery in LACP provides resilience. When member interfaces fail, LACP quickly detects failure through lost LACPDUs and removes failed members from the channel. Remaining members continue forwarding traffic with reduced bandwidth. When failed members recover, LACP automatically adds them back to the channel. This dynamic membership adjustment provides high availability without manual intervention.
Configuration on Nexus switches involves creating a port-channel interface, configuring it with desired VLANs or routing parameters, then assigning physical interfaces to the channel with channel-group and LACP mode commands. The channel-group number ties physical interfaces to their port-channel, while the mode command specifies LACP active or passive. This configuration automatically enables LACP negotiation on assigned interfaces.
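A minimal NX-OS sketch of this configuration might look like the following; the port-channel number and member interfaces are hypothetical.
feature lacp
!
interface port-channel10
  switchport mode trunk
!
interface Ethernet1/1-2
  switchport mode trunk
  channel-group 10 mode active
!
! Verify LACP negotiation and bundled members
show port-channel summary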
Spanning-tree protocol prevents Layer 2 loops but does not negotiate or form port channels. STP is complementary to port channels, treating the entire channel as a single logical link for loop prevention. While both are important for Layer 2 network design, STP does not provide port channel aggregation functionality.
HSRP provides first-hop redundancy for default gateway high availability but has no role in port channel formation. HSRP enables multiple routers to share a virtual IP address for gateway redundancy, completely unrelated to aggregating multiple physical links into port channels. These protocols serve different purposes in network design.
OSPF is a routing protocol for exchanging routing information between routers but does not aggregate links or negotiate port channels. While OSPF might run over port channel interfaces treating them as single logical links, OSPF does not create or manage port channels. Link aggregation and routing are separate network layer functions.
Question 101
A data center uses BGP for routing between leaf switches and external networks. Which BGP attribute is used to prefer one path over another when all other attributes are equal?
A) AS-Path length
B) Local Preference
C) MED
D) Router ID
Answer: D
Explanation:
Router ID is the final tiebreaker in BGP’s path selection process when all other attributes are equal, with the path from the neighbor with the lower router ID being preferred. This deterministic tiebreaker ensures BGP always selects a single best path even when multiple paths are identical in all other aspects. Understanding BGP’s complete path selection algorithm including the router ID tiebreaker is essential for predicting and controlling BGP routing behavior in data center networks.
BGP path selection follows a specific sequence evaluating multiple attributes in order. The process includes checking path validity, preferring highest weight, highest local preference, locally originated routes, shortest AS-path length, lowest origin type, lowest MED among routes from the same AS, external over internal paths, lowest IGP metric to next hop, oldest external path, and finally lowest router ID. This complete algorithm must be understood for effective BGP design and troubleshooting.
Router ID serves as the ultimate tiebreaker ensuring deterministic path selection. When multiple BGP paths survive all previous comparison steps, the router selects the path learned from the BGP neighbor with the numerically lowest router ID. This breaks ties reliably and repeatably. Without this final tiebreaker, BGP might oscillate between equal paths or select randomly, causing instability.
Router ID configuration on Nexus switches can be explicit using the router-id command under BGP configuration or implicit using the highest loopback IP address if no explicit router ID is configured. Explicit configuration is recommended for predictability and to prevent changes when loopback addresses are modified. Consistent router ID assignment following a planned scheme aids troubleshooting and network understanding.
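A minimal sketch of explicit router ID configuration on NX-OS; the AS number, router ID, and prefix are hypothetical.
feature bgp
!
router bgp 65001
  router-id 10.255.0.1
!
! Lists the available paths for a prefix and marks which one was selected as best
show bgp ipv4 unicast 10.1.1.0/24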
In data center leaf-spine architectures, understanding path selection is crucial for traffic engineering. When leaves have multiple equal-cost paths to destinations through different spines, BGP’s path selection determines which paths are used. If all other attributes are equal, the spine with the lower router ID will be preferred. This behavior might need to be counteracted through other attributes if traffic engineering requires different path selection.
The question specifically asks about equal attributes, making router ID the correct answer among the options. AS-Path length is compared earlier in the selection process and would determine path selection before reaching the router ID tiebreaker if paths have different lengths. Local Preference and MED are also compared before router ID. When these are equal, router ID provides the final decision.
BGP path selection manipulation uses various attributes to influence routing. Administrators typically prefer using Local Preference within an AS, AS-Path prepending between AS boundaries, MED for influencing inbound traffic, or communities for signaling policies. Router ID is rarely manipulated for traffic engineering as it is the last resort tiebreaker and changing it affects many other aspects of BGP operation.
Predictable path selection in BGP requires understanding the complete algorithm and ensuring desired paths win comparisons at appropriate steps. If administrators want to prefer specific paths, they should manipulate earlier attributes like Local Preference or AS-Path rather than relying on router ID comparisons. However, knowing router ID is the final tiebreaker helps predict behavior when earlier attributes are equal.
AS-Path length is evaluated earlier in BGP’s path selection process, specifically preferring shorter AS-Path before considering router ID. While AS-Path length is an important attribute, the question asks about the tiebreaker when all other attributes are equal. If AS-Path lengths differ, that determines path selection before router ID is considered.
Local Preference is evaluated very early in BGP path selection, before AS-Path and long before router ID. Higher local preference values are preferred. While local preference is a powerful attribute for influencing path selection within an AS, when all attributes including local preference are equal, router ID becomes the tiebreaker.
MED is evaluated after AS-Path but before router ID in BGP’s path selection process. Lower MED values are preferred among routes from the same neighboring AS. When MEDs are equal or not applicable, subsequent attributes including eventually router ID determine path selection. MED is not the final tiebreaker when all attributes are equal.
Question 102
A Nexus switch is experiencing high CPU utilization. Which command shows processes consuming CPU resources?
A) show processes cpu
B) show system resources
C) show interface counters
D) show logging
Answer: A
Explanation:
The show processes cpu command displays detailed information about CPU utilization including overall CPU usage, individual process CPU consumption, and historical averages, enabling identification of which processes are consuming excessive CPU resources and causing performance problems. This command is essential for troubleshooting high CPU scenarios by revealing whether CPU load is distributed across many processes or concentrated in specific processes, guiding appropriate remediation actions.
Command output includes multiple sections providing comprehensive CPU analysis. The summary section shows overall CPU utilization as a percentage for the current moment, one-minute average, and five-minute average. These averages help distinguish between transient spikes and sustained high utilization. Process list sections show all running processes with their individual CPU percentages, enabling identification of CPU-hungry processes.
Process-specific information includes process ID, CPU utilization percentage, memory usage, process name, and runtime. Sorting processes by CPU consumption immediately reveals top CPU consumers. Normal processes typically use minimal CPU during steady state, so processes showing high CPU percentages warrant investigation to determine if behavior is expected or indicates problems like software bugs or attacks.
Common causes of high CPU include routing protocol instability causing excessive recalculations, spanning-tree convergence during topology changes, control plane traffic from attacks or misconfigurations, software bugs causing infinite loops or excessive processing, and management operations like backups or monitoring using excessive resources. Identifying which process consumes CPU helps determine which category of problem exists.
Expected versus unexpected CPU usage requires understanding normal operations. During routing protocol convergence after failures, routing processes legitimately consume CPU temporarily. During spanning-tree convergence, STP processes use CPU briefly. Management operations like SNMP polling or configuration synchronization cause predictable CPU usage. Problems arise when CPU usage is sustained, excessive, or occurs without corresponding network events.
Remediation strategies depend on root cause. For control plane attacks, implementing control plane policing protects CPU. For routing instability, dampening or fixing instability sources reduces CPU load. For software bugs, upgrading software may be necessary. For excessive management traffic, reducing polling frequency or optimizing monitoring approaches helps. Appropriate remediation requires understanding which process consumes CPU and why.
Historical CPU data helps identify patterns. If show processes cpu reveals high CPU only at specific times, correlating timing with other events identifies triggers. Sustained high CPU degrading performance requires immediate attention while periodic brief spikes may be acceptable. Understanding CPU behavior over time distinguishes normal variance from problems requiring action.
Additional troubleshooting commands complement show processes cpu, including show system resources for overall system resource usage, show logging for errors correlating with CPU spikes, and show interface counters errors for traffic anomalies potentially causing CPU load. Using multiple commands provides complete problem understanding and enables effective resolution.
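A typical command set for this kind of investigation on NX-OS is shown below; output detail and formatting depend on platform and release.
show processes cpu sort
show processes cpu history
show system resources
show logging last 50
show interface counters errors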
The show system resources command displays overall system resource utilization including memory, CPU summary, and process counts but provides less detailed per-process CPU information than show processes cpu. While useful for overall system health assessment, show system resources does not identify specific processes consuming CPU making it less effective for diagnosing high CPU issues where identifying the culprit process is essential.
The show interface counters command displays traffic statistics and errors on interfaces but does not show CPU utilization or process information. While interface counters help diagnose traffic-related problems that might indirectly cause CPU load, the command does not directly identify CPU-consuming processes. For diagnosing high CPU, show interface counters is supplementary rather than primary.
The show logging command displays system log messages that may contain errors or events correlating with CPU problems, but it does not directly show CPU utilization or process consumption. Logs are valuable for understanding the context around CPU issues, yet they do not quantify CPU usage or identify which processes consume resources. The command complements but does not replace show processes cpu for CPU troubleshooting.
Question 103
Which Cisco technology provides automated fabric deployment and management in data center networks?
A) VXLAN
B) POAP
C) Cisco DCNM
D) OTV
Answer: C
Explanation:
Cisco Data Center Network Manager provides automated fabric deployment and management for data center networks including VXLAN fabrics, traditional Layer 2/3 networks, and storage networks, offering centralized configuration management, visualization, monitoring, and automation capabilities. DCNM simplifies complex data center network operations through policy-based automation, reduces configuration errors through templates and validation, and provides comprehensive visibility into fabric health and performance.
DCNM supports multiple fabric types including VXLAN EVPN fabrics for overlay networking, traditional Layer 2/3 networks, FCoE and FC SAN fabrics for storage, and hybrid deployments combining multiple technologies. This versatility makes DCNM suitable for diverse data center environments and migration scenarios where multiple network types coexist during technology transitions.
Fabric deployment automation through DCNM significantly reduces the time and effort required to build new networks. DCNM provides wizards and templates for defining fabric parameters, generates configurations for all fabric devices based on those templates, pushes the configurations to switches automatically, and verifies successful deployment. This automation replaces the manual entry of hundreds or thousands of CLI commands per switch, reducing human error and deployment time.
Configuration management in DCNM maintains consistency across fabric devices through templates ensuring uniform configuration, compliance checking validating configurations against standards, change management tracking configuration modifications, and rollback capabilities enabling recovery from problematic changes. These capabilities provide governance and control over network configuration preventing drift and misconfiguration.
Monitoring and visibility features provide real-time and historical views of network operation including topology visualization showing fabric structure and relationships, health monitoring identifying devices or links with issues, performance analytics revealing utilization and trends, and fault management correlating and prioritizing network problems. This comprehensive visibility enables proactive problem identification and resolution.
Integration with orchestration and automation platforms extends DCNM value beyond standalone operation. DCNM provides REST APIs enabling integration with cloud management platforms, IT service management systems, and custom automation tools. This integration allows network provisioning to be embedded in broader IT automation workflows, supporting Infrastructure-as-Code and DevOps practices.
Multi-site management capability in DCNM enables managing multiple data center fabrics from a single interface. Large enterprises with many data center locations benefit from centralized management providing consistent policies and configuration across sites. Multi-site visibility identifies configuration differences between sites and provides unified monitoring of distributed infrastructure.
Compliance and audit capabilities track who made configuration changes, when changes occurred, and what was modified. Compliance checking validates configurations against defined standards and security policies. These capabilities support regulatory requirements, internal governance, and troubleshooting by providing comprehensive configuration history and change tracking.
VXLAN is a network virtualization technology creating Layer 2 overlay networks across Layer 3 infrastructure but is not a management or automation platform. While DCNM manages VXLAN fabrics, VXLAN itself does not provide automated deployment or management. VXLAN is a data plane technology while DCNM is a management and orchestration platform.
POAP provides zero-touch provisioning for initial switch deployment, automatically loading software and configuration when new switches boot, but is not a comprehensive fabric management platform. While POAP simplifies initial deployment, it does not provide ongoing management, monitoring, or policy enforcement that DCNM provides. POAP is a component of automated deployment but not a complete management solution.
OTV extends Layer 2 networks across Layer 3 boundaries for data center interconnect but is not a management or automation platform. OTV is a specific technology for DCI scenarios while DCNM is a management platform that can manage networks using OTV among many other technologies. OTV addresses a specific connectivity requirement while DCNM provides comprehensive management.
Question 104
A network engineer is implementing multi-tenancy in a data center. Which technology provides Layer 2 and Layer 3 isolation between tenants?
A) VLANs only
B) VRFs
C) Access Control Lists
D) Port security
Answer: B
Explanation:
Virtual Routing and Forwarding instances provide complete Layer 2 and Layer 3 isolation between tenants by creating separate routing tables and forwarding contexts on shared physical infrastructure, ensuring each tenant’s traffic, routing information, and policies remain completely segregated. VRF technology enables secure multi-tenancy in data centers where multiple customers or departments share network infrastructure while maintaining privacy, security, and independence equivalent to having separate physical networks.
VRF operation creates isolated routing instances on switches and routers where each VRF maintains its own routing table completely separate from other VRFs and the global routing table. Routes in one VRF are never visible in other VRFs unless explicitly leaked through inter-VRF routing configuration. This isolation ensures tenant routing information remains private and prevents routing conflicts between tenants using overlapping address spaces.
Layer 2 isolation in VRF environments uses VLANs assigned to specific VRFs where each VLAN-to-VRF association ensures Layer 2 traffic remains within the tenant’s VRF. SVIs or routed interfaces assigned to VRFs provide Layer 3 routing within tenant contexts. The combination of VLANs for Layer 2 and VRFs for Layer 3 provides complete isolation across both layers.
Address space overlap support is a key VRF benefit enabling different tenants to use identical IP address ranges without conflicts. Each tenant’s address space exists in its own VRF namespace preventing collisions. This capability is essential in service provider and cloud environments where administrators cannot control customer address space selections and must accommodate whatever addressing schemes customers bring.
VRF configuration on Nexus switches involves creating VRF instances using the vrf context command, assigning interfaces or SVIs to VRFs, and optionally configuring routing protocols within VRFs. Interface assignment to VRFs is explicit using the vrf member command under interface configuration. Routing protocols can run per-VRF creating separate protocol instances for each tenant.
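As a minimal sketch, assuming a tenant named TENANT-A mapped to VLAN 100 (the tenant name, VLAN, and addresses are purely illustrative), the per-tenant configuration might look like this:

feature interface-vlan
feature ospf
! Create the tenant routing instance
vrf context TENANT-A
! Place the tenant SVI into the VRF, then address it inside the tenant namespace
interface Vlan100
  vrf member TENANT-A
  ip address 10.1.1.1/24
  no shutdown
! Optionally run a per-tenant routing protocol instance
router ospf 1
  vrf TENANT-A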
Inter-VRF routing enables controlled communication between tenants when business requirements dictate. Route leaking imports specific routes from one VRF to another, route targets in MP-BGP control route distribution across VRFs in MPLS VPNs, and firewall integration provides security enforcement for inter-tenant traffic. These mechanisms allow flexibility in designing tenant connectivity while maintaining default isolation.
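One hedged example of the route-target approach, assuming an MP-BGP control plane (such as EVPN or MPLS VPN) and using the illustrative values 65000:100 and 65000:200, exports TENANT-A routes while importing selected routes tagged by another tenant:

vrf context TENANT-A
  address-family ipv4 unicast
    route-target export 65000:100
    ! Import routes carrying the other tenant's export target to permit controlled leaking
    route-target import 65000:200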
VRF-lite is the VRF implementation used without MPLS and is common in data center environments where MPLS is unnecessary. VRF-lite provides the same isolation benefits through simple VRF configuration without the complexity of MP-BGP or MPLS. This simplified approach suits many data center multi-tenancy requirements where tenants connect within a single administrative domain.
Management plane separation through VRFs extends to management traffic where management VRF isolates management traffic from user traffic, providing security and preventing management plane attacks from user data planes. Separate VRFs for management, production, and customer traffic create defense-in-depth security architecture.
VLANs provide Layer 2 isolation but do not inherently provide Layer 3 isolation. Different VLANs can share the same routing table allowing inter-VLAN routing in the global routing context. While VLANs are often used with VRFs, VLANs alone do not provide the complete Layer 2 and Layer 3 tenant isolation that VRFs enable. Multi-tenancy requires VRFs for comprehensive isolation.
Access Control Lists filter traffic based on rules but do not create isolated routing contexts. ACLs can enforce security policies between tenants but require careful configuration to prevent all inter-tenant communication and do not prevent routing table visibility or address space conflicts. ACLs complement VRFs for security but do not replace VRF isolation capabilities.
Port security restricts MAC addresses on switch ports preventing unauthorized devices from connecting but does not provide tenant isolation. Port security is a security feature preventing spoofing or unauthorized access at individual ports. It does not create routing isolation or enable address space overlap required for multi-tenancy. Port security serves different security purposes than tenant isolation.
Question 105
A data center network uses VXLAN with multicast for BUM traffic replication. What is configured on spine switches to support this?
A) PIM
B) IGMP snooping
C) HSRP
D) LACP
Answer: A
Explanation:
Protocol Independent Multicast must be configured on spine switches to support multicast-based VXLAN BUM traffic replication, providing the multicast routing functionality needed to distribute broadcast, unknown unicast, and multicast traffic across the VXLAN overlay. PIM creates multicast distribution trees from source VTEPs to destination VTEPs through the underlay network, enabling efficient replication of overlay BUM traffic without requiring ingress replication that would create scaling limitations.
Multicast-based VXLAN operation assigns each VNI or bridge domain to a multicast group. When a VTEP needs to send BUM traffic for a VNI, it encapsulates the traffic and sends it to the associated multicast group address. The underlay multicast infrastructure, built using PIM, replicates and forwards this traffic to all VTEPs that have registered interest in that multicast group by having endpoints in the corresponding VNI.
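On a leaf VTEP, this VNI-to-group mapping is configured under the NVE interface; a minimal flood-and-learn sketch with illustrative VNI and group values:

! Enable VXLAN on the leaf (VTEP)
feature nv overlay
feature vn-segment-vlan-based
interface nve1
  no shutdown
  source-interface loopback0
  ! Map the VNI to the underlay multicast group used for its BUM traffic
  member vni 10100
    mcast-group 239.1.1.100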
PIM configuration on spines enables multicast forwarding by defining a rendezvous point for shared-tree operation, building distribution trees with PIM sparse mode, maintaining multicast routing state, and forwarding multicast packets out the appropriate interfaces based on group membership. Spines act as multicast routers forming the core of the multicast distribution infrastructure.
PIM sparse mode is the typical deployment mode for VXLAN fabrics, using explicit joins to build multicast distribution trees only where needed. Sparse mode scales better than dense mode for data center environments where not all segments need all multicast traffic. Rendezvous points centralize initial tree building, with optimization to source-based trees occurring for high-volume flows.
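A minimal spine-side sketch, assuming a statically defined rendezvous point at 10.254.254.1 and fabric links on Ethernet1/1 and Ethernet1/2 (addresses and interfaces are illustrative):

! Enable PIM and point the fabric at the RP for the VXLAN multicast range
feature pim
ip pim rp-address 10.254.254.1 group-list 239.1.0.0/16
! Run PIM sparse mode on the underlay fabric interfaces
interface Ethernet1/1
  ip pim sparse-mode
interface Ethernet1/2
  ip pim sparse-mode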
Leaf switches join the multicast group associated with each VNI in which they have local endpoints, and the resulting joins propagate toward the rendezvous point, causing the spines' PIM processes to build distribution state. When source VTEPs send encapsulated BUM traffic to a group, the spines replicate and forward it along the resulting distribution trees, ensuring all interested VTEPs receive the traffic.
The multicast group design for VXLAN typically uses a range of multicast addresses allocated to VNIs, either one group per VNI for maximum isolation or shared groups for multiple VNIs to reduce multicast state. Design tradeoffs balance multicast state scalability against failure domain isolation. Documentation and IP address management track multicast group allocations preventing conflicts.
Alternative VXLAN replication modes include ingress replication where source VTEPs replicate BUM traffic to each remote VTEP individually, suitable for smaller deployments or when multicast is unavailable but creating scaling challenges in large fabrics. Multicast replication provides superior scaling for BUM traffic in large VXLAN deployments making PIM configuration essential.
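For comparison, the ingress-replication alternative is enabled per VNI on the leaf NVE interface and removes the need for PIM in the underlay; a hedged sketch assuming a BGP EVPN control plane:

interface nve1
  host-reachability protocol bgp
  member vni 10100
    ! Send unicast copies of BUM traffic to each remote VTEP learned through BGP EVPN
    ingress-replication protocol bgp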
Underlay-overlay interaction requires PIM on spine switches forming the underlay while VTEP functions operate on leaf switches at the overlay. Understanding this separation clarifies why spines need PIM configuration even though VXLAN endpoint functions reside on leaves. Spines provide transport infrastructure while leaves provide overlay services.
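Verification reflects this split: multicast state is checked on the spines while overlay peering is checked on the leaves, for example (the group value follows the earlier illustration):

! Spine: confirm PIM adjacencies and multicast routing state for a VNI's group
show ip pim neighbor
show ip mroute 239.1.1.100
! Leaf: confirm discovered VTEP peers and VNI status in the overlay
show nve peers
show nve vni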
IGMP snooping operates on Layer 2 switches to optimize multicast forwarding by forwarding multicast traffic only to ports with interested receivers rather than flooding, but IGMP snooping does not route multicast between subnets. Spine switches operating at Layer 3 need PIM for multicast routing, not just IGMP snooping for local multicast optimization. Both technologies serve multicast but at different layers.
HSRP provides first-hop redundancy for default gateway availability but has no relationship to multicast replication or VXLAN operation. HSRP enables active-standby router pairs for gateway redundancy, completely unrelated to the multicast distribution infrastructure needed for VXLAN BUM traffic. These protocols serve different purposes in network design.
LACP aggregates physical links into port channels but does not relate to multicast or VXLAN replication. LACP provides link-level redundancy and bandwidth aggregation for Layer 2 links. While port channels might carry VXLAN traffic, LACP does not provide the multicast routing functionality that PIM delivers for VXLAN BUM traffic distribution. These are complementary but separate technologies.