Question 121
A data center engineer is configuring VXLAN EVPN and needs to understand the role of route targets. What is the primary purpose of route targets in VXLAN EVPN deployments?
A) To encrypt VXLAN traffic
B) To control the import and export of routes between VRFs
C) To compress VXLAN headers
D) To provide QoS marking
Answer: B
Explanation:
Route targets control the import and export of routes between Virtual Routing and Forwarding instances in VXLAN EVPN deployments. This mechanism enables selective route distribution, allowing network administrators to control which routes are shared between VRFs and implement complex multi-tenancy and service isolation scenarios within the data center fabric.
In VXLAN EVPN architectures, route targets function as BGP extended community attributes attached to routing updates. When a VRF exports routes, it tags them with an export route target. Other VRFs configured to import that route target will receive and install those routes in their routing tables. This selective route sharing mechanism provides the foundation for implementing hub-and-spoke topologies, shared services, and controlled inter-tenant communication.
Route target configuration involves specifying both export and import values for each VRF. A simple configuration might use matching route targets where a VRF exports and imports the same value, creating isolated routing domains. More sophisticated designs use asymmetric route targets where VRFs export one value but import different values, enabling complex routing policies. For example, a shared services VRF might export routes that multiple tenant VRFs import, but tenant VRFs might not import each other’s routes, maintaining isolation while allowing common service access.
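The following NX-OS-style sketch illustrates this pattern under stated assumptions; the tenant names, VNIs, and the route-target values 65000:50001 and 65000:50999 are illustrative, not taken from any particular design:

    vrf context TENANT-A
      vni 50001
      rd auto
      address-family ipv4 unicast
        ! import/export the tenant's own routes plus the EVPN-derived values
        route-target both auto evpn
        route-target export 65000:50001
        route-target import 65000:50001
        ! also import routes exported by the shared-services VRF
        route-target import 65000:50999
    vrf context SHARED-SERVICES
      vni 50999
      rd auto
      address-family ipv4 unicast
        route-target both auto evpn
        route-target export 65000:50999
        ! import tenant routes so return traffic can be routed
        route-target import 65000:50001

Because TENANT-A never imports another tenant's export value, tenants stay isolated from one another while both can reach the shared services.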
The relationship between route targets and VXLAN Network Identifiers is important to understand. While VNIs provide Layer 2 segment isolation and data plane encapsulation, route targets control Layer 3 routing information exchange. A single VRF may contain multiple VNIs for different Layer 2 segments, and all routing information for that VRF uses the configured route targets. This separation of concerns between data plane segmentation and control plane route distribution provides deployment flexibility.
EVPN uses route targets for both Layer 2 and Layer 3 route distribution. EVPN Route Type 2 messages, which carry MAC and IP address information, are tagged with route targets allowing selective MAC/IP route distribution between VTEPs. EVPN Route Type 5 messages, which carry IP prefix information for inter-subnet routing, also use route targets for distribution control. This unified approach simplifies policy configuration across Layer 2 and Layer 3 services.
Best practices for route target design include using consistent numbering schemes that align with VRF purposes, documenting route target assignments to prevent conflicts, and planning for scalability as the number of VRFs grows. Automated provisioning systems should generate route targets systematically to maintain consistency. Regular audits verify that route target configurations implement intended routing policies without unintended leaks between isolation domains.
Route targets do not encrypt traffic, making option A incorrect. They do not compress headers, making option C incorrect. They do not provide QoS marking, making option D incorrect. Route targets specifically control route import and export between VRFs in VXLAN EVPN environments.
Question 122
An administrator is troubleshooting connectivity in a Cisco ACI fabric and needs to verify endpoint learning. Which command shows learned endpoints on a leaf switch?
A) show endpoint
B) show mac address-table
C) show arp
D) show interface
Answer: A
Explanation:
The show endpoint command displays learned endpoints on Cisco ACI leaf switches, providing comprehensive information about endpoint location, identity, and associated policy elements. This command is essential for troubleshooting connectivity issues, verifying endpoint learning, and understanding how the ACI fabric maps endpoints to Bridge Domains, EPGs, and VLANs.
Endpoint learning in ACI differs from traditional switching because the fabric maintains a distributed endpoint database that includes not only MAC addresses but also IP addresses, locations, and policy associations. When endpoints communicate, leaf switches learn their MAC and IP addresses through data plane traffic inspection. This learning information is reported to the Application Policy Infrastructure Controller, which distributes it to other leaf switches, creating a fabric-wide view of endpoint locations.
The show endpoint command output contains multiple fields providing detailed endpoint information. The MAC address identifies the endpoint at Layer 2. The IP address shows Layer 3 identity when available. The Interface field indicates the physical or virtual port where the endpoint was learned. The Endpoint Group shows which EPG the endpoint belongs to, determining applicable policies. The VLAN or VXLAN VNI indicates the encapsulation used. Flags show endpoint characteristics like local versus remote, static versus dynamic, or whether the endpoint is a virtual machine.
Endpoint types in ACI include local endpoints learned on directly connected ports, remote endpoints learned through VXLAN from other leaf switches, and external endpoints reached through Layer 3 Outs. Understanding these distinctions helps administrators troubleshoot connectivity problems. Local endpoints should appear on the leaf switch where devices physically connect. Remote endpoints indicate proper VXLAN communication between leaves. Missing or incorrectly classified endpoints suggest learning, policy, or fabric connectivity issues.
Endpoint learning troubleshooting often involves verifying that endpoints appear in the correct EPG and Bridge Domain. Misclassified endpoints may result from incorrect VLAN-to-EPG mappings, VMM integration issues, or policy configuration errors. The show endpoint command combined with policy verification confirms that endpoints are properly classified and can communicate according to contract definitions.
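As a rough illustration, the command can usually be narrowed to a single endpoint when chasing a specific problem; treat the exact keyword set as an assumption, since it varies by ACI software release:

    show endpoint
    show endpoint ip 10.1.10.25
    show endpoint mac 0050.56aa.bb01

Comparing the EPG and encapsulation reported for the endpoint against the intended policy quickly exposes VLAN-to-EPG mapping mistakes.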
ACI’s distributed endpoint learning provides operational benefits including rapid convergence when endpoints move, elimination of flooding for unicast traffic through proactive learning distribution, and support for large-scale environments through efficient endpoint database management. The COOP protocol distributes endpoint information between leaf switches and spines, enabling the fabric to forward traffic optimally without traditional MAC learning limitations.
The show mac address-table command exists on some Cisco platforms but is not the primary endpoint verification command in ACI, making option B less suitable. Show arp displays ARP cache but not comprehensive endpoint information, making option C incorrect. Show interface displays interface status but not endpoint details, making option D incorrect. The show endpoint command specifically displays ACI endpoint learning information.
Question 123
A network engineer is implementing First Hop Redundancy Protocol in a data center. Which protocol provides sub-second failover and load balancing across multiple default gateways?
A) HSRP
B) VRRP
C) GLBP
D) FHRP
Answer: C
Explanation:
Gateway Load Balancing Protocol provides both sub-second failover and load balancing across multiple default gateways, making it uniquely suited for data center environments requiring both redundancy and efficient utilization of available gateway resources. Unlike HSRP and VRRP which provide active-standby gateway redundancy, GLBP enables active-active configurations where multiple routers simultaneously forward traffic.
GLBP operates by electing one router as the Active Virtual Gateway, which is responsible for answering ARP requests for the virtual IP address. However, instead of returning the same MAC address in every ARP response, the AVG assigns a different virtual MAC address to each participating router and rotates among those addresses when replying, so different hosts resolve the virtual IP to different next-hop MAC addresses. This distribution creates load balancing because traffic from different hosts is spread across multiple physical routers while maintaining a single virtual IP address.
The GLBP architecture includes two primary roles. The Active Virtual Gateway handles ARP requests and assigns virtual MAC addresses to group members. Active Virtual Forwarders actually forward traffic based on the virtual MAC addresses they were assigned. A GLBP group can have one AVG and up to four AVFs, enabling traffic distribution across four routers. If an AVF fails, other group members assume its virtual MAC address, maintaining connectivity for hosts using that MAC address.
Load balancing algorithms in GLBP determine how virtual MAC addresses are distributed in ARP responses. Round-robin distribution assigns virtual MAC addresses sequentially, providing equal distribution assuming even ARP request patterns. Weighted distribution assigns virtual MAC addresses based on configured weights, allowing more powerful routers to handle more traffic. Host-dependent distribution assigns the same virtual MAC address to a specific host consistently, maintaining flow symmetry for return traffic.
GLBP provides rapid failover through failure detection and virtual MAC address reassignment; with hello and hold timers tuned to millisecond values, convergence can be sub-second. When an AVF fails, other group members detect the failure, typically within a few hold-timer intervals, and assume the failed router's virtual MAC address. Hosts using that MAC address experience only brief disruption during failover and automatically resume communication without waiting for an ARP cache timeout. This rapid convergence meets data center requirements for minimal downtime.
Configuration considerations for GLBP include planning virtual IP addressing, configuring authentication to prevent unauthorized routers from joining the group, tuning hello and hold timers to balance failure detection speed with stability, and setting appropriate interface tracking to ensure routers only participate when uplinks are available. GLBP integrates with other redundancy mechanisms like link aggregation and routing protocol fast convergence for comprehensive resilience.
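A minimal IOS-style sketch of such a configuration follows; GLBP is configured on IOS/IOS-XE routers rather than NX-OS, and the group number, addresses, timers, and tracked object here are assumptions for illustration only:

    interface GigabitEthernet0/1
     ip address 10.1.1.2 255.255.255.0
     ! virtual gateway address shared by the GLBP group
     glbp 10 ip 10.1.1.1
     glbp 10 priority 110
     glbp 10 preempt
     ! give each host a consistent virtual MAC for flow symmetry
     glbp 10 load-balancing host-dependent
     ! aggressive timers for faster failure detection
     glbp 10 timers msec 250 msec 750
     glbp 10 authentication md5 key-string GLBP-KEY
     ! reduce forwarding weight when tracked object 1 (an uplink) fails
     glbp 10 weighting 100 lower 70 upper 90
     glbp 10 weighting track 1 decrement 40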
HSRP provides active-standby redundancy without native load balancing, making option A incorrect for the requirement. VRRP similarly provides active-standby operation, making option B incorrect. FHRP is a generic term for first hop redundancy protocols rather than a specific protocol, making option D incorrect. Only GLBP provides both sub-second failover and load balancing across gateways.
Question 124
An administrator is configuring Cisco Nexus switches and needs to implement a loop prevention mechanism for access layer connections. Which feature provides Layer 2 loop prevention without using Spanning Tree Protocol?
A) BPDU Guard
B) Loop Guard
C) Port Security
D) Storm Control
Answer: A
Explanation:
BPDU Guard provides Layer 2 loop prevention for access layer connections by disabling ports that receive Bridge Protocol Data Units, preventing loops caused by unauthorized switches or misconfigurations without requiring Spanning Tree Protocol operation on those ports. This protective mechanism is particularly valuable in data center access layers where end devices should never send BPDUs.
BPDU Guard operates on the principle that access ports connecting to end devices should never receive BPDUs because end devices do not run Spanning Tree Protocol. If a port configured with BPDU Guard receives a BPDU, the feature immediately places the port into err-disabled state, effectively shutting it down. This rapid response prevents loops before they can impact network operation, protecting the broader infrastructure from topology changes or broadcast storms.
The relationship between BPDU Guard and PortFast enhances access layer deployment. PortFast immediately transitions ports to forwarding state without passing through STP listening and learning states, providing instant connectivity for end devices. However, PortFast on a port mistakenly connected to another switch can allow a forwarding loop to form before spanning tree reacts. BPDU Guard protects against this risk by shutting down PortFast-enabled ports that receive BPDUs, combining fast connectivity for legitimate devices with protection against misconfiguration.
BPDU Guard configuration can be applied per interface or globally for all PortFast-enabled interfaces. Interface-level configuration using the spanning-tree bpduguard enable command applies protection to specific ports. Global configuration using spanning-tree portfast bpduguard default automatically enables BPDU Guard on all PortFast interfaces, simplifying deployment for access layers with many ports. The global approach ensures consistent protection without requiring per-interface configuration.
Recovery from err-disabled state caused by BPDU Guard requires administrative intervention to ensure the loop condition is resolved. The show interface status err-disabled command identifies ports in err-disabled state and the reason. After resolving the underlying issue like removing an unauthorized switch, administrators can manually re-enable the port using shutdown followed by no shutdown commands, or configure automatic recovery using errdisable recovery cause bpduguard with a recovery interval.
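A brief NX-OS-style sketch combining these steps is shown below; the interface number and recovery interval are assumptions, and on Nexus platforms the PortFast role is expressed as an edge port type:

    ! global: BPDU Guard on every edge (PortFast-equivalent) port
    spanning-tree port type edge bpduguard default
    ! per-interface alternative
    interface ethernet 1/10
      switchport
      switchport mode access
      spanning-tree port type edge
      spanning-tree bpduguard enable
    ! optional automatic recovery from err-disabled state
    errdisable recovery cause bpduguard
    errdisable recovery interval 300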
BPDU Guard best practices include enabling it on all access ports connecting to end devices, using global configuration to ensure comprehensive coverage, implementing err-disabled recovery with appropriate intervals to balance automatic recovery with security, and monitoring err-disabled events to identify misconfigurations or security issues. Combining BPDU Guard with other protective features like Root Guard and Loop Guard creates comprehensive Layer 2 protection.
Loop Guard prevents loops caused by unidirectional link failures but requires STP, making option B incorrect for the requirement of operating without STP. Port Security limits MAC addresses but does not prevent loops, making option C incorrect. Storm Control limits broadcast traffic but does not prevent loops, making option D incorrect. Only BPDU Guard provides loop prevention specifically for access connections without requiring STP operation.
Question 125
A data center architect is designing storage networking and needs to understand N_Port ID Virtualization. What is the primary benefit of NPIV in Fibre Channel SAN environments?
A) Increased bandwidth
B) Multiple virtual WWPNs per physical port
C) Reduced latency
D) Encryption capability
Answer: B
Explanation:
N_Port ID Virtualization (NPIV) enables multiple virtual World Wide Port Names per physical Fibre Channel port, allowing multiple logical devices to share a single physical HBA connection while maintaining unique identities for zoning, LUN masking, and storage management. This capability is essential for virtualized environments where multiple virtual machines share physical server resources including storage connectivity.
NPIV addresses the challenge of virtual machine storage connectivity in SAN environments. Traditional Fibre Channel assigns one WWPN per physical HBA port, limiting each physical connection to a single Fibre Channel identity. In virtualized servers running multiple virtual machines, this creates a problem because all VMs would appear as the same initiator to the storage array, preventing individual zoning and LUN presentation. NPIV solves this by allowing the hypervisor to request multiple virtual WWPNs from the fabric, with each VM receiving a unique WWPN.
The NPIV architecture involves several components. The physical HBA port must be NPIV-capable and logs into the fabric with its own physical WWPN. The hypervisor or virtualization layer then generates additional virtual WWPNs and performs further fabric logins through the same physical port. The Fibre Channel switch must have NPIV enabled so that it accepts these additional logins, assigning each virtual WWPN its own N_Port ID and registering it with the fabric name server. Each virtual machine is assigned a virtual WWPN, allowing it to log into the fabric as an independent N_Port.
Virtual WWPN assignment can be static or dynamic. Static assignment involves pre-configuring specific virtual WWPNs for each virtual machine, ensuring consistent identity across reboots and migrations. Dynamic assignment allows the virtualization platform to allocate virtual WWPNs from a pool as needed, simplifying initial configuration but requiring careful management to maintain consistency. Most production environments use static assignment to ensure predictable zoning and LUN masking behavior.
NPIV enables important virtualization capabilities. Virtual machines can have dedicated storage zones ensuring isolation and security. Each VM can have specific LUN assignments independent of other VMs on the same host. Virtual machine migration between physical hosts can preserve storage connectivity by moving the virtual WWPN with the VM. These capabilities make NPIV essential for enterprise virtualization deployments using Fibre Channel storage.
Configuration considerations for NPIV include ensuring switches support NPIV functionality, planning virtual WWPN allocation schemes, configuring zoning to include both physical and virtual WWPNs appropriately, and implementing monitoring to track virtual WWPN usage. Storage arrays must support multiple initiator logins from the same physical port. Hypervisors require NPIV-capable HBAs and proper configuration to request and manage virtual WWPNs.
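On the switch side the fabric-facing piece of this is small; a hedged NX-OS/MDS-style sketch might look like the following, keeping in mind that the virtual WWPNs themselves are requested by the hypervisor, not configured on the switch:

    ! allow multiple fabric logins on a single switch port
    feature npiv
    ! verify that several WWPNs appear behind one physical interface
    show npiv status
    show flogi database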
NPIV does not directly increase bandwidth, making option A incorrect. It does not reduce latency, making option C incorrect. It does not provide encryption, making option D incorrect. NPIV specifically enables multiple virtual WWPNs per physical port for virtualization support.
Question 126
An engineer is configuring Quality of Service on Cisco Nexus switches and needs to understand queuing mechanisms. Which queuing method services queues based on configured bandwidth percentages?
A) Priority Queuing
B) Weighted Fair Queuing
C) First In First Out
D) Round Robin
Answer: B
Explanation:
Weighted Fair Queuing services queues based on configured bandwidth percentages, allocating link capacity proportionally among different traffic classes according to their assigned weights. This mechanism ensures that each class receives its fair share of bandwidth during congestion while allowing unused bandwidth to be shared among active classes, providing both minimum guarantees and efficient utilization.
WFQ operates by assigning weights to different traffic classes, with weights representing the relative bandwidth allocation each class should receive. During congestion when aggregate traffic demand exceeds link capacity, WFQ services queues proportionally to their weights. A class with weight 60 receives approximately 60 percent of bandwidth, while a class with weight 20 receives approximately 20 percent. When classes are not fully utilizing their allocation, the excess bandwidth is distributed among classes with pending traffic.
The weighted nature of WFQ provides important fairness characteristics. High-priority traffic receives preferential treatment through higher weight assignments, ensuring critical applications get necessary bandwidth. Lower-priority traffic still receives some bandwidth based on its weight, preventing complete starvation. This balance between prioritization and fairness makes WFQ suitable for diverse traffic mixes where different applications have different importance levels but all require some minimal service.
Class-Based Weighted Fair Queuing extends basic WFQ by allowing explicit traffic classification and weight assignment. Network administrators define traffic classes using various match criteria like DSCP values, IP precedence, or access lists. Each class receives a configured bandwidth percentage or explicit rate. CBWFQ integrates with other QoS mechanisms like policing and marking, creating comprehensive QoS policies that classify, mark, police, and queue traffic.
WFQ scheduling algorithms determine packet transmission order. Simple WFQ assigns each packet a finish time based on packet size and queue weight, then transmits packets in finish time order. This calculation ensures bandwidth is distributed according to weights over time. Deficit-based algorithms track bandwidth deficits for each queue, prioritizing queues that are behind their allocated rate. These algorithms ensure short-term fairness while maintaining long-term bandwidth distribution.
Configuration considerations for WFQ include determining appropriate weights based on application requirements, ensuring total configured bandwidth does not exceed link capacity, allowing headroom for overhead, and testing under load to verify desired behavior. Weight assignments should reflect organizational priorities and application service level agreements. Monitoring queue depths and drop rates validates that WFQ provides intended service levels.
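As a point of reference, the classic IOS-style CBWFQ policy below shows percentage-based allocation in its simplest form; the class names, DSCP values, and percentages are illustrative, and Nexus platforms express the same idea through policy-map type queuing with platform-specific class names:

    class-map match-any VOICE
     match dscp ef
    class-map match-any BUSINESS
     match dscp af31
    policy-map EDGE-QUEUING
     class VOICE
      priority percent 10
     class BUSINESS
      bandwidth percent 40
     class class-default
      bandwidth percent 30
    interface GigabitEthernet0/1
     service-policy output EDGE-QUEUING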
Priority Queuing provides strict priority without bandwidth sharing, making option A incorrect for percentage-based allocation. FIFO provides no differentiation, making option C incorrect. Round Robin services queues equally without bandwidth weights, making option D incorrect. Only Weighted Fair Queuing provides bandwidth allocation based on configured percentages.
Question 127
A network administrator is implementing VRF-Lite to provide routing separation between different tenants on a shared infrastructure. What is a key characteristic of VRF-Lite compared to full MPLS VPN implementations?
A) VRF-Lite requires MPLS labels
B) VRF-Lite operates without MPLS
C) VRF-Lite supports only IPv4
D) VRF-Lite requires BGP
Answer: B
Explanation:
VRF-Lite operates without MPLS, providing Virtual Routing and Forwarding functionality on standard IP networks without requiring Multiprotocol Label Switching infrastructure. This makes VRF-Lite accessible for organizations needing routing separation and multi-tenancy without the complexity and infrastructure requirements of full MPLS VPN deployments, making it particularly popular in enterprise data centers.
VRF technology creates multiple independent routing table instances on a single router or switch, with each VRF maintaining its own forwarding table, routing protocols, and interfaces. Full MPLS VPNs use VRFs in provider networks with MPLS labels to transport traffic between sites. VRF-Lite implements the VRF functionality without MPLS, using standard IP routing to forward traffic. This simplification eliminates MPLS configuration and label distribution protocols while retaining routing separation benefits.
The VRF-Lite architecture involves assigning interfaces to specific VRFs, creating routing protocol instances within VRFs, and potentially using route leaking or inter-VRF routing for controlled communication. Each VRF is completely isolated from others by default, with packets in one VRF unable to reach destinations in another VRF without explicit inter-VRF routing configuration. This isolation provides the foundation for multi-tenant environments where different customers or departments require routing separation.
VRF-Lite deployment patterns in data centers include per-tenant VRFs for hosting providers, per-application VRFs for security separation, management VRFs for out-of-band access, and shared services VRFs for common resources. Inter-VRF communication when needed can be implemented through route leaking where specific routes are redistributed between VRFs, through routing on a stick where a router connects to multiple VRFs, or through firewall integration where firewalls enforce policies between VRFs.
Configuration for VRF-Lite involves creating VRF instances with unique names, assigning interfaces to VRFs using vrf member commands, configuring routing protocols within VRF contexts, and implementing route distinguishers for overlapping address spaces when using BGP for route distribution. Each VRF can run independent routing protocols including OSPF, EIGRP, BGP, or static routing. Multiple VRFs can use the same IP address space because they maintain separate routing tables.
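A compact NX-OS-style sketch under assumed names and addresses follows; note the deliberately overlapping subnets, which work because each VRF keeps its own routing table:

    feature ospf
    vrf context TENANT1
    vrf context TENANT2
    interface ethernet 1/1
      no switchport
      vrf member TENANT1
      ip address 10.10.1.1/24
    interface ethernet 1/2
      no switchport
      vrf member TENANT2
      ip address 10.10.1.1/24
    router ospf 1
      vrf TENANT1
        router-id 192.0.2.1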
Operational considerations for VRF-Lite include management complexity from multiple routing tables, careful planning of inter-VRF communication policies, monitoring and troubleshooting requiring VRF awareness, and scale limits on the number of supported VRFs varying by platform. Commands like show ip route vrf tenant1 display routing information for specific VRFs. Ping and traceroute require VRF specification to test connectivity within VRF contexts.
VRF-Lite explicitly does not require MPLS labels, making option A incorrect. VRF-Lite supports both IPv4 and IPv6, making option C incorrect. VRF-Lite does not require BGP though BGP can be used, making option D incorrect. The key distinguishing characteristic is operation without MPLS.
Question 128
An administrator is configuring Cisco FabricPath and needs to understand the switch ID assignment. What is the valid range for FabricPath switch IDs?
A) 1-255
B) 1-1024
C) 1-4094
D) 1-4095
Answer: D
Explanation:
The valid range for FabricPath switch IDs is 1 through 4095, providing a 12-bit address space that allows unique identification of up to 4095 switches within a FabricPath domain. This addressing scheme enables large-scale FabricPath deployments while ensuring each switch has a unique identifier used in the FabricPath forwarding process.
FabricPath switch IDs serve as Layer 2 addresses in the FabricPath forwarding architecture, analogous to how IP addresses identify devices in Layer 3 networks. When classical Ethernet frames enter a FabricPath domain at an edge port, the ingress switch encapsulates them with a FabricPath header containing the source switch ID and destination switch ID. Core FabricPath switches forward frames based on these switch IDs rather than MAC addresses, enabling routing-like behavior at Layer 2.
Switch ID assignment can be configured manually or allocated dynamically. Manual assignment involves explicitly configuring each switch’s ID using the fabricpath switch-id command, providing administrative control over the numbering scheme. Dynamic allocation uses the Dynamic Resource Allocation Protocol to automatically assign switch IDs from the available range, simplifying deployment in large environments. Most production environments use manual assignment for predictability and documentation purposes.
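A short NX-OS-style sketch of manual assignment is shown below, with the switch ID, VLAN, and interface numbers assumed for illustration:

    install feature-set fabricpath
    feature-set fabricpath
    ! predictable, documented switch ID for this switch
    fabricpath switch-id 101
    vlan 100
      mode fabricpath
    interface ethernet 1/1
      switchport mode fabricpath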
The relationship between switch IDs and MAC address learning affects FabricPath operation. FabricPath maintains a conversational learning table mapping MAC addresses to switch IDs rather than physical ports. When a switch learns a MAC address, it associates that MAC with the switch ID of the FabricPath switch where the endpoint resides. This mapping enables efficient forwarding because switches only need to know how to reach other switches via IS-IS, not individual MAC addresses throughout the fabric.
Switch ID planning considerations include reserving ranges for different data center locations or PODs, documenting assignments for operational clarity, considering growth requirements when allocating IDs, and ensuring uniqueness across the FabricPath domain. Switch IDs must be unique within the domain; duplicate IDs cause forwarding problems and topology instabilities. Systematic numbering schemes like using building and floor numbers in the ID help maintain clarity.
Operational commands for switch ID management include show fabricpath switch-id to display the local switch ID and learned remote switch IDs, show fabricpath route to display IS-IS routes to other switches, and show fabricpath isis adjacency to verify neighbor relationships. These commands help verify proper switch ID configuration and FabricPath topology formation.
The range 1-255 would be insufficient for large FabricPath deployments, making option A incorrect. The range 1-1024 similarly limits scale, making option B incorrect. The range 1-4094 is close but excludes the valid value 4095, making option C incorrect. The correct range 1-4095 provides the full 12-bit address space.
Question 129
A data center engineer is implementing Cisco Tetration for workload protection and segmentation. What type of segmentation does Tetration primarily provide?
A) Physical segmentation
B) Micro-segmentation
C) VLAN segmentation
D) Subnet segmentation
Answer: B
Explanation:
Cisco Tetration primarily provides micro-segmentation, implementing granular security policies at the workload level to control communication between individual applications, processes, or containers. This fine-grained approach moves beyond traditional network-based segmentation to create zero-trust security zones around specific workloads, significantly reducing attack surfaces and limiting lateral movement within data centers.
Micro-segmentation represents a fundamental shift from perimeter-based security to workload-centric protection. Traditional segmentation uses VLANs, subnets, or firewalls to create zones containing multiple workloads, with security policies applied at zone boundaries. All workloads within a zone can typically communicate freely, creating risk if one workload is compromised. Micro-segmentation instead creates individual security contexts for each workload, enforcing policies on every communication regardless of network location.
Tetration implements micro-segmentation through several mechanisms. Application dependency mapping automatically discovers communication patterns between workloads by analyzing network flows, creating a comprehensive map of actual application behavior. Policy generation uses machine learning to recommend segmentation policies based on observed behavior, simplifying policy creation. Enforcement occurs through either host-based agents that implement policies at the operating system level, or network-based enforcement using switch ACLs or firewall rules derived from Tetration policies.
The Tetration architecture includes sensors that collect telemetry from workloads including process information, network flows, user context, and software packages. This telemetry flows to analytics nodes that process data using big data platforms, performing behavior analysis, anomaly detection, and policy generation. The policy engine translates high-level application policies into specific enforcement rules distributed to agents or network devices. This distributed architecture scales to hundreds of thousands of workloads.
Application-centric policy definition in Tetration uses application workspaces that group related workloads. Within workspaces, administrators define policies specifying which workloads can communicate, which ports and protocols are allowed, and which processes can establish connections. Policies follow workloads automatically as they move between physical hosts or cloud environments, maintaining consistent protection regardless of network location. This mobility support is essential for dynamic virtualized and containerized environments.
Tetration integrates with various enforcement points to implement micro-segmentation. Software agents on servers can implement policies in the operating system firewall, providing protection even within a subnet. Integration with Cisco ACI enables policy enforcement through contract translation, leveraging fabric capabilities. Integration with firewalls or cloud security groups extends policies across hybrid environments. This flexible enforcement approach adapts to different infrastructure characteristics.
Physical segmentation uses separate hardware, making option A incorrect for Tetration’s approach. VLAN segmentation provides network-layer separation but not workload-level granularity, making option C incorrect. Subnet segmentation similarly operates at network layer, making option D incorrect. Tetration specifically provides workload-level micro-segmentation.
Question 130
An administrator is troubleshooting VXLAN connectivity and needs to verify VTEP functionality. Which command displays VXLAN tunnel endpoints on a Cisco Nexus switch?
A) show nve peers
B) show vxlan tunnels
C) show overlay peers
D) show tunnel status
Answer: A
Explanation:
The show nve peers command displays Network Virtualization Edge peers, which are the VXLAN Tunnel Endpoints in Cisco Nexus switches. This command provides essential information about VTEP neighbor relationships, tunnel status, and VNI associations, making it the primary troubleshooting tool for VXLAN connectivity verification.
Network Virtualization Edge is Cisco’s implementation of VXLAN tunnel endpoints. The NVE interface serves as the logical interface through which VXLAN encapsulation and decapsulation occur. Each Nexus switch participating in VXLAN has an NVE interface with an associated source IP address used as the VTEP address. The show nve peers command displays all remote VTEPs that the local switch has learned through the control plane, showing the operational state of VXLAN tunneling.
The output from show nve peers contains critical troubleshooting information. The Peer-IP column shows the remote VTEP IP addresses, which should correspond to configured or learned peer VTEPs. The State column indicates whether the peer relationship is up or down, with up meaning the tunnel is established and operational. The LearnType shows how the peer was discovered, such as through Control-Plane learning via EVPN BGP or through Data-Plane multicast learning. The Uptime shows how long the peer relationship has been established.
VXLAN control plane options affect how peers are discovered and displayed. In multicast-based VXLAN, VTEPs discover peers dynamically through multicast group membership, with all VTEPs for a VNI joining a common multicast group. The show nve peers output in this mode shows peers learned through IGMP and multicast traffic. In EVPN-based VXLAN, BGP distributes VTEP information through EVPN routes, and peers appear as learned through the control plane. This distinction helps identify which VXLAN deployment model is active.
Troubleshooting VXLAN connectivity using show nve peers involves verifying that expected peers appear in the output, checking that peer states are up, confirming learning types match the deployment model, and investigating missing or down peers. Missing peers may indicate routing problems preventing VTEP IP reachability, BGP issues in EVPN deployments, or multicast problems in flood-and-learn deployments. Peers showing down states require investigation of underlay connectivity.
Additional related commands complement show nve peers for comprehensive VXLAN troubleshooting. The show nve vni command displays configured VNIs and their status. The show nve interface command shows NVE interface configuration and state. The show bgp l2vpn evpn summary command verifies EVPN BGP sessions in control-plane deployments. Together, these commands provide complete visibility into VXLAN operation.
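A minimal EVPN-based NVE configuration and the related verification commands might look like the sketch below; the loopback, VNI, and feature set are assumptions and vary with the deployment model:

    feature nv overlay
    feature vn-segment-vlan-based
    interface nve1
      no shutdown
      source-interface loopback1
      ! learn remote VTEPs through BGP EVPN rather than multicast
      host-reachability protocol bgp
      member vni 10100
        ingress-replication protocol bgp

    show nve peers
    show nve vni
    show nve interface nve1
    show bgp l2vpn evpn summary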
There is no show vxlan tunnels command in Cisco Nexus syntax, making option B incorrect. There is no show overlay peers command, making option C incorrect. There is no show tunnel status command, making option D incorrect. The show nve peers command is the correct Cisco Nexus command for displaying VTEP information.
Question 131
A network engineer is implementing Switched Port Analyzer (SPAN) on a Cisco Nexus switch for traffic monitoring. What is the maximum number of SPAN sessions supported on most Nexus platforms?
A) 2
B) 4
C) 8
D) 16
Answer: B
Explanation:
Most Cisco Nexus platforms support a maximum of 4 SPAN sessions, though the exact number varies by platform and may be divided between local SPAN, RSPAN, and ERSPAN sessions. Understanding these limits is essential for planning monitoring architectures and ensuring that monitoring requirements can be met within platform capabilities.
Switched Port Analyzer sessions enable network monitoring by copying traffic from source ports or VLANs to destination ports where monitoring tools can analyze the traffic. SPAN provides visibility into network communications for troubleshooting, security analysis, performance monitoring, and compliance auditing. The limited number of concurrent SPAN sessions requires careful planning to ensure critical monitoring needs are addressed.
Different types of SPAN sessions serve various monitoring scenarios. Local SPAN copies traffic from sources to destinations on the same switch, useful for monitoring traffic with locally connected analysis tools. Remote SPAN extends monitoring across the network by transporting copied traffic over a dedicated VLAN to remote analysis tools. Encapsulated Remote SPAN encapsulates copied traffic in GRE tunnels, allowing monitor traffic to traverse routed networks. Platform SPAN limits often apply separately to each type or collectively across all types.
SPAN session configuration involves defining source interfaces or VLANs, specifying destination interfaces, and optionally filtering traffic. Source configuration determines what traffic is copied, with options including specific interfaces, port channels, VLANs, or combinations. Destination configuration specifies where copied traffic is sent, typically a dedicated monitoring port. Filters limit copied traffic to specific criteria like direction, VLANs, or protocols, reducing the volume of copied traffic when monitoring specific issues.
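A representative local SPAN session on NX-OS, with interface and VLAN numbers assumed, might be configured as follows:

    monitor session 1
      source interface ethernet 1/5 both
      source vlan 100 rx
      destination interface ethernet 1/48
      no shut
    interface ethernet 1/48
      switchport
      ! required on NX-OS so the port can act as a SPAN destination
      switchport monitor

    show monitor session 1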
SPAN performance considerations affect both monitored and monitoring traffic. Extensive SPAN configuration can impact switch performance by consuming resources for traffic copying. Destination ports must have sufficient bandwidth to receive all copied traffic or packets may be dropped. Oversubscription occurs when copied traffic volume exceeds destination port capacity, causing incomplete monitoring. Planning SPAN sessions requires understanding traffic volumes and ensuring destination capacity.
Alternative monitoring approaches complement or replace SPAN in certain scenarios. Embedded packet capture uses CPU-based packet capture for targeted troubleshooting when SPAN is unavailable. NetFlow provides statistical traffic analysis without copying entire packets, using fewer resources than SPAN. Tap aggregation using external network taps concentrates traffic for monitoring without consuming switch SPAN resources. Understanding these alternatives helps design monitoring architectures within SPAN limitations.
Platform variations in SPAN capabilities require consulting specific documentation. Some Nexus platforms support more than 4 sessions, while others support fewer. Virtual SPAN sessions in virtual Nexus switches have different limits than physical switches. Enhanced SPAN features on certain platforms provide capabilities beyond basic SPAN. Verifying specific platform capabilities ensures monitoring designs are feasible.
The limit of 2 sessions would be restrictive for most monitoring needs, making option A incorrect for most Nexus platforms. The limit of 8 sessions exceeds typical Nexus capabilities, making option C incorrect. The limit of 16 sessions is not standard, making option D incorrect. Most Nexus platforms support 4 SPAN sessions.
Question 132
An administrator is configuring fabric extenders and needs to understand port channels between the Nexus parent switch and FEX. What is the minimum number of links required for a FEX host interface port channel?
A) 1
B) 2
C) 4
D) 8
Answer: A
Explanation:
A Fabric Extender host interface port channel requires a minimum of only 1 link, allowing single-link configurations while still providing port channel benefits like simplified management and support for future expansion. This flexibility enables FEX deployment in scenarios with limited connectivity while maintaining operational consistency with multi-link configurations.
Fabric Extenders extend the Nexus parent switch’s fabric to remote locations, providing simplified management where FEX units are controlled entirely by the parent switch. The connection between parent switch and FEX can use individual links or port channels, with port channels preferred for providing redundancy, increased bandwidth, and protection against link failures. Understanding FEX port channel requirements ensures proper design and configuration.
The architecture of FEX connectivity involves two types of interfaces. Fabric interfaces connect the FEX to the parent switch, carrying all traffic between FEX ports and the parent’s forwarding engine. Host interfaces on the FEX connect to end devices like servers. When host interfaces are configured as port channels, they appear as regular port channels to end devices and to the administrator, but the FEX transparently forwards traffic over fabric interfaces to the parent switch for switching decisions.
Single-link port channels, while technically possible with the minimum of 1 link, are uncommon in production because they provide no redundancy benefit over individual interfaces. However, they enable operational consistency by using port channel configuration syntax throughout the environment, simplify future expansion by allowing additional links to be added to existing port channels, and support migration scenarios where multi-link configurations are planned but not initially deployed. Most production deployments use at least 2 links for redundancy.
FEX port channel configuration differs slightly from standard port channel configuration because the FEX module must be specified. The configuration associates physical interfaces on the FEX with the port channel, and the parent switch manages the port channel state based on member link status. Unlike standard port channels which use LACP for dynamic configuration, FEX port channels use static configuration because the FEX does not run independent protocols.
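A hedged example of such a configuration, assuming FEX 101 and arbitrary channel and VLAN numbers, is shown below; note the static channel-group with no LACP mode keyword:

    interface ethernet 101/1/1
      channel-group 11
    interface port-channel 11
      switchport
      switchport access vlan 10

    show interface port-channel 11
    show fex 101 detail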
Maximum link counts for FEX port channels vary by platform and configuration, typically supporting 8 or 16 links per port channel. This allows significant bandwidth aggregation for high-throughput server connections. Planning FEX deployments requires understanding both minimum and maximum link requirements to properly size connections based on bandwidth needs and redundancy requirements.
Troubleshooting FEX port channels involves verifying that all member links are properly connected and operational, checking that FEX software versions are compatible with parent switch versions, confirming port channel configurations match on the FEX and parent switch, and monitoring link utilization to ensure adequate bandwidth. Commands like show interface port-channel and show fex detail provide visibility into FEX connectivity status.
While 2 links provide redundancy and are common, they are not the minimum, making option B incorrect. Four links and eight links are valid configurations but exceed the minimum, making options C and D incorrect. The actual minimum for FEX host interface port channels is 1 link.
Question 133
A data center engineer is implementing Dynamic Fabric Automation on Cisco ACI. What protocol does DFA use to discover and configure fabric topology?
A) LLDP
B) CDP
C) POAP
D) DHCP
Answer: C
Explanation:
Dynamic Fabric Automation uses Power-On Auto Provisioning as the foundation protocol for discovering and configuring fabric topology, enabling zero-touch deployment of ACI fabric infrastructure. POAP automates the process of loading software images and configuration files onto new switches, dramatically simplifying initial fabric deployment and expansion.
Power-On Auto Provisioning operates when a new switch boots without configuration. The switch sends DHCP requests including vendor-specific information identifying itself as a Cisco device. The DHCP server responds with IP addressing and, critically, the location of a configuration script. The switch downloads and executes this script, which can install operating system images, apply base configurations, and integrate the switch into management systems. In ACI contexts, POAP enables automatic fabric integration.
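A generic sketch of the DHCP side, expressed as an IOS-style pool with assumed addresses and script name, is shown below; real deployments may use an external DHCP server and different option numbers:

    ip dhcp pool POAP
     network 192.0.2.0 255.255.255.0
     default-router 192.0.2.1
     ! options 66/67 point the booting switch at the POAP script location
     option 66 ascii 192.0.2.10
     option 67 ascii poap_script.py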
The POAP process for ACI fabric deployment follows a specific sequence. New leaf or spine switches boot and obtain IP addresses via DHCP. The DHCP response includes options pointing to the APIC cluster. Switches contact the APIC using the fabric discovery process, registering themselves as new fabric members. The APIC identifies the switch type, assigns a node ID, and provisions the appropriate configuration including VXLAN VTEP addresses, IS-IS parameters, and fabric policies. This automation eliminates manual configuration of infrastructure elements.
APIC cluster preparation for POAP-based fabric deployment involves configuring discovery policies that define IP address pools for fabric infrastructure, DHCP relay configurations on existing fabric members to forward DHCP requests to the APIC, and node profiles that associate serial numbers with desired node IDs and names. When properly configured, administrators can simply connect and power on new switches, with the fabric automatically integrating them according to policies.
The benefits of POAP-based fabric automation include rapid deployment enabling quick fabric expansion, reduced configuration errors by eliminating manual configuration, consistency across fabric members through centralized policy, and simplified operations by abstracting infrastructure details. Large-scale ACI deployments particularly benefit from automation because manual configuration of hundreds of switches would be impractical.
Security considerations for POAP include controlling DHCP server access to prevent unauthorized devices from obtaining configuration, using secure communication channels for script and image downloads, validating device identity before provisioning to prevent rogue device integration, and implementing fabric authentication to ensure only authorized switches join. These measures protect against attacks during the vulnerable initial provisioning phase.
While LLDP and CDP assist in neighbor discovery after provisioning, they are not the primary protocols for DFA, making options A and B incorrect. DHCP is used within the POAP process but POAP is the specific protocol framework, making option D less precise. POAP is the correct protocol that DFA uses for zero-touch provisioning.
Question 134
An administrator is configuring Multicast routing in a data center and needs to understand PIM modes. Which PIM mode builds source-based shortest path trees?
A) PIM Sparse Mode
B) PIM Dense Mode
C) PIM Source-Specific Multicast
D) PIM Bidirectional
Answer: C
Explanation:
PIM Source-Specific Multicast builds source-based shortest path trees directly without requiring shared tree infrastructure, optimizing multicast delivery for applications where receivers know the source address of desired multicast streams. This simplification eliminates rendezvous points and shared tree complexity while providing efficient forwarding paths from sources to receivers.
PIM SSM represents an evolution in multicast design addressing limitations of traditional sparse mode. In sparse mode, receivers initially join a shared tree rooted at a rendezvous point, then optionally switch to source trees for specific sources. This process requires RP infrastructure, RP placement planning, and shared tree maintenance. SSM eliminates these requirements by having receivers specify both the multicast group and source address they want to join, enabling direct source tree construction.
The SSM model relies on IGMPv3 or MLDv2 for group membership, as these protocol versions support source filtering where hosts specify which sources they want to receive from. When a receiver uses IGMPv3 to join a group with source specification, the last-hop router builds a source tree toward that specific source using PIM join messages. The tree forms along the shortest path from receiver to source based on unicast routing, creating efficient distribution without shared tree infrastructure.
SSM uses a dedicated multicast address range, specifically 232.0.0.0/8 for IPv4, exclusively for source-specific operation. Applications using addresses in this range must be designed for SSM, with receivers knowing source addresses in advance. This requirement fits applications like video distribution where content sources are well-known, financial data feeds from specific providers, or software distribution from identified servers. SSM is unsuitable for applications requiring any-source multicast where receivers don’t know sources in advance.
Configuration for PIM SSM involves enabling PIM sparse mode on interfaces, as SSM is technically a mode of sparse mode operation, then configuring the SSM range to identify which groups use source-specific behavior. Default SSM range is 232.0.0.0/8, but custom ranges can be configured. When groups in the SSM range are joined with source specification, routers automatically build source trees without RP involvement.
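An NX-OS-style sketch of that configuration, with addresses and interface numbers assumed, follows:

    feature pim
    ! groups in this range are treated as source-specific
    ip pim ssm range 232.0.0.0/8
    interface ethernet 1/1
      no switchport
      ip address 10.0.0.1/30
      ip pim sparse-mode
    interface ethernet 1/2
      no switchport
      ip address 10.0.1.1/24
      ip pim sparse-mode
      ! receiver-facing interface must support source filtering
      ip igmp version 3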
Advantages of SSM include simplified deployment without RP planning, elimination of RP failure scenarios, reduced protocol overhead from not maintaining shared trees, prevention of source registration overhead, and improved security because receivers explicitly specify accepted sources. These benefits make SSM preferred for appropriate applications, though it cannot replace any-source multicast for all use cases.
PIM Sparse Mode can build source trees but requires RP infrastructure first, making option A less precise. PIM Dense Mode floods initially rather than building shortest path trees efficiently, making option B incorrect. PIM Bidirectional uses shared trees exclusively without source trees, making option D incorrect. PIM SSM specifically builds source-based shortest path trees directly.
Question 135
A network engineer is implementing unified fabric in a data center. What transport protocol does Cisco Unified Fabric use to carry both Ethernet and Fibre Channel traffic?
A) TCP
B) FCoE
C) iSCSI
D) SCSI
Answer: B
Explanation:
Fibre Channel over Ethernet serves as the transport protocol that Cisco Unified Fabric uses to carry both Ethernet and Fibre Channel traffic over the same physical infrastructure. FCoE enables convergence by encapsulating Fibre Channel frames within Ethernet frames, allowing traditional Fibre Channel storage traffic to share network infrastructure with IP-based data traffic while maintaining the lossless characteristics required for storage protocols.
FCoE convergence provides significant benefits for data center infrastructure. Traditional architectures require separate networks for Ethernet data traffic and Fibre Channel storage traffic, duplicating cabling, switches, and adapters. Unified fabric combines these networks onto shared infrastructure, reducing hardware costs, simplifying cabling, decreasing power and cooling requirements, and reducing management complexity. Converged Network Adapters in servers provide both Ethernet and Fibre Channel connectivity through single ports.
The FCoE protocol encapsulates Fibre Channel frames within Ethernet frames using a dedicated EtherType value 0x8906. This encapsulation preserves Fibre Channel addressing and protocol characteristics while enabling transport over Ethernet infrastructure. FCoE operates at Layer 2 only, requiring bridged connectivity between FCoE devices without IP routing. This limitation means FCoE networks must be carefully designed with lossless Ethernet characteristics throughout the path.
Data Center Bridging extensions enable the lossless Ethernet required for FCoE. Priority Flow Control provides per-class flow control, preventing frame loss for FCoE traffic during congestion. Enhanced Transmission Selection allocates bandwidth among traffic classes ensuring FCoE receives adequate capacity. Data Center Bridging Exchange negotiates capabilities between devices. Together, these extensions create an Ethernet environment suitable for storage traffic that cannot tolerate frame loss.
Unified fabric architecture in Cisco data centers typically uses Nexus switches with FCoE support. Nexus 5000 and 7000 series switches provide FCoE capabilities, with virtual Fibre Channel interfaces mapping to physical Ethernet interfaces. The FCoE Initialization Protocol discovers FCoE-capable devices and establishes virtual links. Fibre Channel forwarder functionality enables connection between FCoE networks and traditional Fibre Channel SAN infrastructure.
Configuration for unified fabric involves enabling FCoE features on Nexus switches, configuring VLANs dedicated to FCoE traffic, creating virtual Fibre Channel interfaces bound to Ethernet interfaces, implementing Data Center Bridging for lossless Ethernet, configuring Fibre Channel zoning through the unified fabric, and connecting to traditional FC SANs through FCF capabilities. This configuration creates an integrated environment supporting both protocols.
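The sketch below, with illustrative VLAN, VSAN, and interface numbers, shows the usual shape of that configuration on a Nexus 5000-class switch:

    feature fcoe
    vsan database
      vsan 10
    vlan 100
      fcoe vsan 10
    interface ethernet 1/10
      switchport mode trunk
      switchport trunk allowed vlan 100
    interface vfc 10
      ! the virtual FC interface rides on the converged Ethernet port
      bind interface ethernet 1/10
      switchport trunk allowed vsan 10
      no shutdown
    vsan database
      vsan 10 interface vfc 10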
TCP is a transport layer protocol for general IP traffic but does not carry Fibre Channel, making option A incorrect. iSCSI carries SCSI over IP networks but is a different approach than FCoE, making option C incorrect. SCSI is the storage protocol but not the transport for unified fabric, making option D incorrect. FCoE specifically transports Fibre Channel over Ethernet for unified fabric.