Question 16
What is the default priority value for HSRP?
A) 50
B) 100
C) 150
D) 200
Answer: B
Explanation:
The default priority value for Hot Standby Router Protocol is 100. HSRP priority determines which router in an HSRP group becomes the active router, with the router having the highest priority value assuming the active role and forwarding traffic for the virtual IP address. Understanding HSRP priority is essential for designing predictable failover behavior and ensuring that preferred routers handle traffic under normal operating conditions.
HSRP priority can be configured with values ranging from 0 to 255, with higher values indicating higher priority. When multiple routers in an HSRP group have the same priority, the router with the highest IP address on the HSRP interface becomes active. This tie-breaking mechanism ensures deterministic active router selection even when priorities are identical. Administrators typically configure different priority values on routers to establish a clear preference for which router should be active, making failover behavior predictable and aligned with network design intent.
Priority configuration works in conjunction with preemption to control failover behavior. By default, HSRP does not preempt, meaning that once a router becomes active, it remains active even if another router with a higher priority comes online. This prevents unnecessary failover disruptions when routers reboot or recover from failures. However, enabling preemption (the preempt command under the HSRP group on NX-OS, or standby preempt in IOS) causes a higher-priority router to reclaim the active role when it becomes available. Priority can also be dynamically adjusted using interface or object tracking, where HSRP automatically decrements priority when monitored interfaces or objects fail, triggering failover to the standby router. This tracking capability enables HSRP to respond to upstream connectivity failures that would otherwise leave the active router forwarding traffic into a failed path.
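For illustration, the following is a minimal NX-OS configuration sketch combining priority, preemption, and object tracking; the interface numbers, addresses, group number, and decrement value are hypothetical placeholders, and available options can vary by platform and release.

feature hsrp
! Hypothetical tracked object monitoring an uplink interface
track 1 interface ethernet 1/49 line-protocol
interface ethernet 1/1
  no switchport
  ip address 10.1.1.2/24
  hsrp 10
    ! Virtual IP shared by the HSRP group
    ip 10.1.1.1
    ! Higher than the default of 100, so this router is preferred as active
    priority 150
    ! Allow this router to reclaim the active role after recovery
    preempt
    ! Decrement priority by 60 if the tracked uplink fails, triggering failover
    track 1 decrement 60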
Option A, 50, is not the default HSRP priority. While 50 is a valid configurable priority value, it is not what HSRP uses by default when no explicit priority is configured.
Option C, 150, is not the default priority. A value of 150 would be configured explicitly to give a router higher priority than the default, making it more likely to become the active router.
Option D, 200, is not the default priority. This high value would be configured intentionally to strongly prefer a particular router as the active HSRP device.
Understanding HSRP priority and its interaction with preemption and interface tracking is fundamental to designing resilient first-hop redundancy in data center networks.
Question 17
Which protocol does NX-OS use for discovering and learning about neighboring Cisco devices?
A) LLDP
B) CDP
C) STP
D) ARP
Answer: B
Explanation:
Cisco NX-OS uses Cisco Discovery Protocol for discovering and learning about directly connected neighboring Cisco devices. CDP is a Cisco proprietary Layer 2 protocol that enables devices to advertise their identity, capabilities, and addresses to directly connected neighbors, providing valuable topology discovery and troubleshooting information. CDP operates at the data link layer, sending periodic advertisements out all active interfaces to directly connected devices, which collect and store this information for administrative visibility.
CDP advertisements contain extensive information about the sending device including device ID (typically the hostname), platform model, capabilities (such as router, switch, or IP phone), software version, native VLAN, management IP address, and detailed interface information. This information is invaluable during network troubleshooting, documentation, and topology mapping. Network administrators use CDP to verify physical connectivity, identify device types and capabilities, determine software versions across the infrastructure, and validate cabling. The protocol operates independently of Layer 3 configuration, so it works even when IP connectivity is not established, making it particularly useful during initial device configuration and troubleshooting scenarios.
CDP is enabled by default on most Cisco devices including Nexus switches. The protocol sends advertisements every 60 seconds by default with a holdtime of 180 seconds, meaning neighbor information expires if three consecutive advertisements are missed. Administrators can view CDP information using commands like show cdp neighbors for a summary view or show cdp neighbors detail for comprehensive information including IP addresses and software versions. CDP can be disabled globally with the no cdp enable command in global configuration mode, or on individual interfaces with the same command under interface configuration. While CDP is useful for management and troubleshooting, security-conscious environments often disable it on untrusted interfaces to prevent information disclosure to potential attackers.
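The following command sketch summarizes typical CDP verification and tuning on NX-OS; the interface is a hypothetical example, and the timer values simply restate the defaults described above.

show cdp neighbors
show cdp neighbors detail
! Global configuration: adjust advertisement interval and holdtime (defaults shown)
cdp timer 60
cdp holdtime 180
! Disable CDP on an untrusted interface
interface ethernet 1/10
  no cdp enable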
Option A, LLDP or Link Layer Discovery Protocol, is an IEEE standard protocol (802.1AB) that provides similar neighbor discovery functionality but is vendor-neutral. While NX-OS supports LLDP and it may be used in multi-vendor environments, CDP is the Cisco-native protocol more commonly associated with Cisco device discovery.
Option C, STP or Spanning Tree Protocol, prevents Layer 2 loops by blocking redundant paths. While STP devices exchange information, its purpose is loop prevention rather than neighbor discovery.
Option D, ARP or Address Resolution Protocol, maps IP addresses to MAC addresses at Layer 3. ARP operates at a different layer and serves a completely different purpose than device discovery protocols.
CDP provides essential neighbor discovery capabilities for Cisco network management and troubleshooting.
Question 18
What is the maximum number of VLANs supported on Cisco Nexus 9000 series switches?
A) 1024
B) 2048
C) 4094
D) 8192
Answer: C
Explanation:
The maximum number of VLANs supported on Cisco Nexus 9000 series switches is 4094, which represents the limit imposed by the 802.1Q VLAN tagging standard. The 802.1Q standard uses a 12-bit VLAN ID field in the Ethernet frame header, providing 4096 possible values (0-4095). However, VLAN 0 is reserved and not used for normal traffic and VLAN 4095 is reserved by the standard, leaving VLANs 1-4094 usable, a total of 4094 VLANs. VLAN 1 is the default VLAN that cannot be deleted, so VLANs 2-4094 remain available for user-defined configuration.
This VLAN limit applies to traditional 802.1Q VLAN tagging used in legacy Layer 2 networks and is a fundamental constraint inherited from the Ethernet standard. In modern data center environments, this limitation has become significant as virtualization and multi-tenancy drive requirements for greater network segmentation. Organizations deploying large-scale virtualized environments with thousands of tenants or applications may approach or exceed this VLAN limit. This constraint was one of the driving factors behind the development of overlay networking technologies like VXLAN, which uses a 24-bit identifier supporting over 16 million unique segments, far exceeding traditional VLAN limitations.
Nexus 9000 switches operating in NX-OS standalone mode support the full range of 4094 VLANs across the system. However, practical deployment considerations may limit the number of active VLANs. Each VLAN consumes switch resources including memory for MAC address tables, spanning tree instances, and control plane processing. The number of VLANs carrying active traffic simultaneously depends on the specific hardware platform, forwarding mode, and feature set enabled. When Nexus 9000 switches operate in ACI mode as part of an Application Centric Infrastructure fabric, the traditional VLAN concept is abstracted into endpoint groups and VXLAN network identifiers, effectively eliminating the 4094 VLAN limitation through overlay networking technology.
Option A, 1024, was a limitation on some older switch platforms but is not the maximum for Nexus 9000 switches. Modern data center switches support the full 802.1Q range.
Option B, 2048, is not the VLAN limit for Nexus 9000 switches. This value does not correspond to any standard VLAN limitation in Cisco data center switching.
Option D, 8192, exceeds the 802.1Q standard limitation. While overlay technologies can support more segments, the traditional VLAN limit remains 4094.
Understanding VLAN limitations and when to consider overlay networking technologies is important for large-scale data center network design.
Question 19
Which Cisco Nexus feature allows a physical server to use multiple links to different switches for increased bandwidth and redundancy?
A) EtherChannel
B) Virtual Port Channel
C) Port Security
D) SPAN
Answer: B
Explanation:
Virtual Port Channel allows a physical server or other downstream device to use multiple links connected to different Nexus switches for increased bandwidth and redundancy, presenting those switches as a single logical device from the server’s perspective. This capability eliminates the spanning tree blocked ports that would normally occur when a device connects to two separate switches, enabling active-active forwarding across all links and providing seamless failover if one switch or link fails.
The vPC architecture consists of two peer Nexus switches connected by a dedicated vPC peer link and peer-keepalive link. The peer link carries control plane synchronization traffic and serves as a backup data path when needed. The peer-keepalive link provides an independent heartbeat mechanism to detect peer switch failures. From the connected device’s perspective, the two separate physical switches appear as a single logical switch, allowing standard link aggregation (port channel) configuration without special knowledge of the vPC topology. When the server sends traffic across its port channel, frames are distributed across links to both switches based on the configured load-balancing algorithm, effectively doubling available bandwidth compared to single-switch connectivity.
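A heavily abbreviated configuration sketch for one of the two vPC peers follows; the domain number, keepalive addresses, and interface numbers are hypothetical, a matching configuration is required on the second peer, and consistency checks apply before the vPC forms.

feature vpc
feature lacp
vpc domain 10
  ! Independent heartbeat between the peers, typically over the mgmt0 network
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
! Peer link carrying control plane synchronization between the two Nexus switches
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! Downstream port channel toward the dual-homed server
interface port-channel 20
  switchport mode trunk
  vpc 20
interface ethernet 1/20
  ! LACP toward the server's bonded NICs
  channel-group 20 mode active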
vPC provides significant benefits for data center design. It eliminates spanning tree blocked ports, utilizing all available bandwidth between access and distribution layers. It enables hitless failover because both paths are active and forwarding, so failure of one switch or link causes traffic to seamlessly shift to the remaining path without convergence delays. vPC supports dual-homing for servers, storage devices, and other infrastructure, dramatically improving availability. The technology works with standard Ethernet and does not require special server-side configuration beyond normal link aggregation, making it compatible with any device supporting 802.3ad LACP or static port channels. However, vPC requires careful configuration of the peer link, consistent settings between peers, and proper planning for failure scenarios.
Option A, EtherChannel, bundles multiple links between two devices into a logical channel for increased bandwidth and redundancy. However, traditional EtherChannel connects to a single switch, not multiple switches like vPC enables.
Option C, Port Security, restricts which MAC addresses can access specific switch ports for security purposes. It does not provide multi-switch connectivity or bandwidth aggregation.
Option D, SPAN (Switched Port Analyzer), copies traffic from monitored ports to analysis tools. SPAN is used for traffic monitoring and troubleshooting, not for providing redundant connectivity.
Virtual Port Channel is essential for building highly available, full-bandwidth server connectivity in Cisco data center networks.
Question 20
What is the purpose of the spine-leaf architecture in data center networks?
A) To create hierarchical core, distribution, and access layers
B) To provide predictable latency and equal-cost paths between endpoints
C) To eliminate the need for routing protocols
D) To reduce the number of required switches
Answer: B
Explanation:
The purpose of the spine-leaf architecture in modern data center networks is to provide predictable latency and equal-cost paths between any two endpoints, ensuring consistent performance for east-west traffic flows. This architecture contrasts with traditional three-tier hierarchical designs by creating a non-blocking, low-latency fabric optimized for the massive east-west traffic patterns characteristic of modern distributed applications, virtualization, and storage architectures.
In a spine-leaf topology, every leaf switch connects to every spine switch in a full mesh, but leaf switches do not connect to other leaf switches, and spine switches do not connect to other spine switches. This design ensures that traffic between endpoints on different leaf switches traverses exactly two hops (leaf to spine to leaf) regardless of which specific endpoints are communicating. The consistent hop count provides predictable, deterministic latency, which is crucial for performance-sensitive applications. Additionally, because each leaf has multiple equal-cost paths to every other leaf through different spine switches, traffic can be load-balanced across these paths using equal-cost multipath routing, maximizing bandwidth utilization and providing resilience against link or spine switch failures.
The spine-leaf architecture scales horizontally by adding more spine switches to increase total fabric bandwidth or adding more leaf switches to accommodate more endpoints. This scale-out approach avoids the oversubscription problems common in traditional hierarchies where aggregation layer uplinks become bottlenecks. The architecture naturally supports modern protocols like VXLAN for overlay networking and works well with routing protocols like BGP that provide flexible policy control and efficient convergence. Spine-leaf designs have become the standard for modern data centers because they align with the requirements of cloud-scale infrastructure, supporting massive virtualization, containerization, and hyper-converged architectures that generate substantial east-west traffic.
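To illustrate the equal-cost multipath behavior, the sketch below shows a hedged fragment of a leaf switch's underlay BGP configuration; the autonomous system numbers, neighbor addresses, and path count are hypothetical, and many fabrics use OSPF or IS-IS for the underlay instead.

feature bgp
router bgp 65001
  address-family ipv4 unicast
    ! Install up to four equal-cost paths, one per spine uplink
    maximum-paths 4
  ! One session per spine switch
  neighbor 10.0.1.1 remote-as 65000
    address-family ipv4 unicast
  neighbor 10.0.2.1 remote-as 65000
    address-family ipv4 unicast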
Option A describes traditional three-tier hierarchical design with core, distribution, and access layers. Spine-leaf is fundamentally different, using a two-tier Clos fabric topology rather than a hierarchy.
Option C incorrectly suggests eliminating routing protocols. Spine-leaf architectures rely heavily on routing protocols like BGP or IS-IS to establish connectivity and load balance traffic across multiple equal-cost paths.
Option D suggests reducing switch count as a goal, which is not accurate. Spine-leaf may actually require more switches than collapsed traditional designs, but provides superior performance and scalability characteristics.
The spine-leaf architecture has become the foundational design pattern for building scalable, high-performance data center networks.
Question 21
Which command displays the MAC address table on a Cisco Nexus switch?
A) show ip route
B) show mac address-table
C) show vlan
D) show interface status
Answer: B
Explanation:
The show mac address-table command displays the MAC address table on a Cisco Nexus switch, showing the mapping between MAC addresses, VLAN IDs, interface associations, and entry types. The MAC address table is fundamental to Layer 2 switching operation, enabling the switch to make forwarding decisions by looking up destination MAC addresses and determining which interface should receive each frame. Understanding how to view and interpret the MAC address table is essential for troubleshooting connectivity issues and verifying switch learning behavior.
The MAC address table output includes several important columns. The VLAN column shows which VLAN the MAC address belongs to, as MAC addresses are learned per VLAN rather than globally. The MAC Address column displays the 48-bit hardware address of the learned device. The Type column indicates how the entry was created, with “dynamic” indicating normal learning from received frames, “static” for manually configured entries, and other types like “secure” for port security configurations. The Age column shows how long since the switch last saw traffic from this MAC address, with entries aging out after the configured aging time (typically 300 seconds by default). The Ports column identifies which physical or logical interface the MAC address is associated with.
The command supports various filtering options to narrow results. Adding a specific MAC address shows only that entry, useful when troubleshooting a particular device. Specifying a VLAN with show mac address-table vlan displays only addresses in that VLAN. The interface keyword filters to show only addresses learned on a specific port. Additional options include count to show table statistics, dynamic or static to filter by entry type, and address filters using partial MAC addresses. The clear mac address-table dynamic command removes dynamically learned entries, forcing the switch to relearn MAC addresses, which can be useful when troubleshooting or after network topology changes.
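The following are representative variations of the command, using hypothetical VLAN, interface, and MAC address values.

show mac address-table
show mac address-table vlan 100
show mac address-table interface ethernet 1/5
show mac address-table address 0050.5699.1a2b
show mac address-table count
clear mac address-table dynamic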
Option A, show ip route, displays the Layer 3 routing table showing IP network destinations and next-hop information. This command relates to routing rather than Layer 2 MAC address learning.
Option C, show vlan, displays VLAN configuration and status information showing which VLANs exist and which ports are members. While related to Layer 2 operation, it does not show MAC addresses.
Option D, show interface status, provides summary information about interface states, speed, duplex, and VLAN assignments. It shows interface configuration rather than learned MAC addresses.
The show mac address-table command is essential for verifying and troubleshooting Layer 2 switching operations.
Question 22
What protocol does Cisco Nexus use in ACI fabric for the underlay network?
A) OSPF
B) EIGRP
C) IS-IS
D) RIP
Answer: C
Explanation:
Cisco Nexus switches in an ACI fabric use Intermediate System to Intermediate System as the underlay routing protocol to establish connectivity between leaf and spine switches. IS-IS was selected for ACI because of its scalability, rapid convergence characteristics, stability, and efficient operation in large-scale data center environments. The protocol runs on every leaf and spine switch to build the Layer 3 underlay network that transports VXLAN-encapsulated overlay traffic throughout the fabric.
IS-IS operates as a link-state routing protocol where each switch maintains a complete topology database of the fabric, enabling calculation of optimal paths using the Dijkstra shortest path first algorithm. In the ACI fabric, IS-IS establishes adjacencies between directly connected leaf and spine switches, with each switch advertising its reachability information. The full mesh connectivity between leaf and spine layers means each leaf switch has multiple equal-cost paths to every other leaf through the various spine switches. IS-IS efficiently manages these multiple paths, enabling equal-cost multipath load balancing that distributes traffic across all available spine switches, maximizing fabric bandwidth utilization and providing resilience against failures.
The specific IS-IS implementation in ACI is optimized for the fabric environment. The protocol runs in a single Level 1 domain, appropriate for the relatively flat spine-leaf topology that does not require IS-IS’s hierarchical Level 1/Level 2 capabilities. IS-IS operates directly over Layer 2 rather than IP, which provides some efficiency advantages in the controlled fabric environment. The protocol’s fast convergence capabilities minimize traffic disruption during link or node failures, typically converging in subsecond timeframes. IS-IS also has minimal control plane overhead, conserving switch resources for data plane operations. The APIC controller automatically configures IS-IS on all fabric switches during initialization, eliminating manual protocol configuration and ensuring consistent settings across the entire fabric.
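Because the APIC provisions IS-IS automatically, operators normally only verify it; the commands below are a sketch of such verification from a fabric switch CLI, where overlay-1 is the infrastructure VRF carrying the underlay (exact command availability can vary by software version).

show isis adjacency vrf overlay-1
show isis database vrf overlay-1
show ip route vrf overlay-1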
Option A, OSPF, is another widely-used link-state routing protocol that could theoretically work in data centers. However, Cisco chose IS-IS for ACI because of specific technical advantages in the fabric environment.
Option B, EIGRP, is a Cisco proprietary advanced distance-vector routing protocol. While capable and efficient, it is not used as the ACI underlay protocol.
Option D, RIP or Routing Information Protocol, is a legacy distance-vector protocol with poor scalability and slow convergence. RIP is not suitable for modern data center fabrics.
IS-IS provides the robust, scalable underlay routing foundation essential for ACI fabric operations.
Question 23
In FCoE, what is the purpose of the FIP protocol?
A) To encrypt Fibre Channel traffic
B) To discover FCoE-capable devices and establish virtual links
C) To compress Fibre Channel data
D) To convert Fibre Channel to iSCSI
Answer: B
Explanation:
The FCoE Initialization Protocol is used to discover FCoE-capable devices on an Ethernet network and establish virtual links between FCoE endpoints and FCF switches. FIP provides the discovery and initialization services necessary to create Fibre Channel over Ethernet connections in a shared Ethernet environment, ensuring that FCoE traffic is properly isolated and that only authorized devices participate in the FCoE fabric. FIP operates at Layer 2 using dedicated Ethernet types to maintain separation from regular Ethernet traffic.
FIP performs several critical functions during FCoE initialization. The VLAN discovery phase allows an FCoE endpoint (typically a server with a Converged Network Adapter) to discover which VLAN is being used for FCoE traffic on the Ethernet network, as FCoE traffic must be carried on a dedicated VLAN separate from normal IP traffic. The FCF discovery phase enables the endpoint to locate available Fibre Channel Forwarder switches that can provide connectivity to the Fibre Channel fabric. Once an FCF is discovered, FIP facilitates virtual link establishment between the endpoint and the FCF, creating a point-to-point virtual connection that emulates the direct physical connections used in native Fibre Channel networks.
FIP also provides ongoing virtual link maintenance through keep-alive mechanisms that verify continued connectivity between endpoints and FCFs. If keepalives fail, the virtual link is torn down, allowing the endpoint to discover and connect to an alternate FCF if available. FIP’s security features help prevent unauthorized devices from accessing the FCoE network by validating that discovered devices are legitimately part of the fabric. The protocol uses distinct Ethernet types (0x8914 for FIP) that differ from the Ethernet type used for FCoE data traffic (0x8906), ensuring clear separation between control and data planes. This separation enables Ethernet switches to apply different handling to FIP frames versus FCoE data frames when necessary.
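For context, a condensed FCoE enablement sketch on a Nexus switch that supports FCoE is shown below; the VLAN, VSAN, and interface numbers are hypothetical, and FIP itself runs automatically once the virtual Fibre Channel interface is brought up (the required DCBX and PFC configuration is omitted here).

feature fcoe
vsan database
  vsan 100
! Dedicated FCoE VLAN mapped to the VSAN
vlan 100
  fcoe vsan 100
! Virtual Fibre Channel interface bound to the server-facing port
interface vfc 11
  bind interface ethernet 1/11
  no shutdown
vsan database
  vsan 100 interface vfc 11
! Server-facing Ethernet port must trunk the FCoE VLAN
interface ethernet 1/11
  switchport mode trunk
  switchport trunk allowed vlan 100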
Option A incorrectly suggests encryption as FIP’s purpose. FIP handles discovery and initialization, not encryption. FCoE can use standard Fibre Channel security mechanisms, but encryption is not FIP’s function.
Option C describes compression, which is not related to FIP. FIP establishes connections rather than modifying data content.
Option D mentions protocol conversion, which is not FIP’s role. FCoE encapsulates Fibre Channel in Ethernet; it does not convert between Fibre Channel and iSCSI, which are different storage protocols.
FIP is essential for proper FCoE operation in converged data center networks carrying both storage and IP traffic.
Question 24
Which QoS mechanism is used to classify and mark traffic on a Cisco Nexus switch?
A) Policing
B) Shaping
C) Classification and marking
D) Queuing
Answer: C
Explanation:
Classification and marking is the QoS mechanism used to identify different types of traffic and assign priority values that subsequent QoS mechanisms can use for differentiated treatment. Classification examines packet headers or contents to categorize traffic based on criteria like IP addresses, protocols, port numbers, or application signatures. Marking writes priority values into packet headers, typically using the IP Precedence or DSCP field in the IP header or the CoS field in the 802.1Q VLAN tag. This classification and marking typically occurs at the network edge where traffic enters the network, establishing QoS treatment that is honored throughout the infrastructure.
Classification can use various packet fields and matching criteria to identify traffic types. Access control lists provide flexible classification based on Layer 3 and Layer 4 information like source and destination IP addresses, protocols, and TCP/UDP port numbers. Class maps group multiple match criteria together to identify specific traffic classes such as voice, video, critical data, or best-effort. On Nexus switches, classification policies are defined using Modular QoS CLI with class maps specifying match criteria and policy maps defining actions to take on matched traffic. The classification engine examines packets as they enter the switch and determines which traffic class each packet belongs to.
Once traffic is classified, marking assigns priority values that downstream devices use for QoS enforcement. For Layer 2 marking, the 802.1p CoS field in the VLAN tag uses three bits providing eight priority levels (0-7). For Layer 3 marking, the DSCP field in the IP header uses six bits providing 64 possible values, with standardized values like EF (Expedited Forwarding) for voice and AF (Assured Forwarding) classes for different data priorities. Marking decisions should align with organizational QoS policies and be consistent across the network. Trust boundaries determine where marking is trusted versus where traffic is reclassified, with typical designs trusting marks from IP phones and servers but remarking traffic from user workstations to prevent priority escalation attacks.
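The following is a minimal Modular QoS CLI sketch for NX-OS classification and marking; the class name, ACL, and DSCP value are hypothetical, and the set actions supported (set dscp, set cos, set qos-group) vary by Nexus platform.

ip access-list VOICE-ACL
  permit udp any any range 16384 32767
class-map type qos match-any VOICE
  match access-group name VOICE-ACL
policy-map type qos CLASSIFY-MARK
  class VOICE
    ! Mark voice traffic as Expedited Forwarding (DSCP 46)
    set dscp 46
interface ethernet 1/1
  ! Apply classification and marking at the trust boundary, on ingress
  service-policy type qos input CLASSIFY-MARK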
Option A, policing, enforces rate limits by dropping or remarking traffic that exceeds configured thresholds. Policing is a separate QoS mechanism that uses marked values rather than creating them.
Option B, shaping, delays excess traffic to conform to rate limits rather than dropping it. Like policing, shaping acts on traffic based on existing classifications rather than creating them.
Option D, queuing, places packets into different queues for transmission scheduling based on their priority markings. Queuing depends on classification and marking but is a distinct mechanism.
Classification and marking form the foundation of QoS by identifying and tagging traffic for differential treatment throughout the network.
Question 25
What is the function of a Fabric Extender (FEX) in a Cisco data center network?
A) To provide independent switching capabilities
B) To extend the I/O of the parent switch to remote locations
C) To replace spine switches in the fabric
D) To provide Layer 3 routing between VLANs
Answer: B
Explanation:
A Fabric Extender extends the I/O capabilities of a parent Nexus switch to remote locations such as server racks, providing connectivity for end devices while centralizing switching intelligence and management in the parent switch. The FEX operates as a remote line card of the parent switch rather than as an independent switching device, simplifying management and reducing the number of devices requiring individual configuration. This architecture is particularly useful in data centers where simplified operations and consistent policy enforcement across distributed access layer connectivity is desired.
The FEX architecture uses host interfaces for connecting end devices and fabric interfaces for uplinks to the parent switch. End devices like servers connect to FEX host interfaces using standard Ethernet connections. The FEX fabric interfaces, also called uplink interfaces, connect back to the parent Nexus switch using 10 Gigabit or faster links. All forwarding decisions, spanning tree operations, and management functions occur on the parent switch, with the FEX acting purely as a remote physical layer extension. This means the FEX does not perform local switching between devices connected to the same FEX; all traffic must traverse the fabric link to the parent switch even for communication between devices on the same FEX.
FEX provides several operational benefits. It reduces management overhead because FEX devices do not require individual IP addresses, software upgrades, or configuration files; they are managed entirely through the parent switch. Configuration is simplified as administrators define policies once on the parent switch that automatically apply to all connected FEXs. The architecture supports consistent policy enforcement because all traffic passes through the parent switch where security policies, QoS, and other features are centrally applied. FEXs can be dual-homed to two parent switches in a vPC configuration for redundancy, providing resilient connectivity for attached servers. However, the FEX architecture creates a dependence on the parent switch, and FEXs have limited local intelligence, forwarding all traffic including traffic between locally connected devices to the parent switch.
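A brief sketch of associating a FEX with its parent switch follows, using a hypothetical FEX number and uplink interfaces; once associated, the FEX host ports appear on the parent as interfaces such as ethernet 101/1/1.

feature fex
fex 101
  description Rack-7-FEX
! Parent switch uplinks toward the FEX fabric interfaces
interface ethernet 1/47-48
  channel-group 101
interface port-channel 101
  switchport mode fex-fabric
  fex associate 101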
Option A incorrectly suggests FEX provides independent switching. FEX devices specifically do not perform independent switching decisions; they rely entirely on the parent switch for all forwarding intelligence.
Option C suggests FEX replaces spine switches, which is incorrect. FEX operates at the access layer extending leaf switches, while spine switches form the fabric backbone in spine-leaf architectures.
Option D claims FEX provides Layer 3 routing, which is wrong. FEX operates at Layer 2 only, with all routing functions performed by the parent switch if needed.
Fabric Extenders simplify data center operations by extending switch I/O while centralizing management and control.
Question 26
Which NX-OS command enables a feature that is disabled by default?
A) enable feature
B) feature enable
C) feature
D) start feature
Answer: C
Explanation:
The feature command in NX-OS enables features that are disabled by default on Cisco Nexus switches. Unlike traditional IOS where most features are available immediately, NX-OS requires explicit feature enablement before many protocols and functionalities can be configured or used. This design philosophy reduces resource consumption by loading only necessary features into memory, improves security by minimizing attack surface, and provides clearer visibility into which capabilities are active on each switch.
The feature command syntax is straightforward: feature feature-name enables the specified feature, while no feature feature-name disables it. Common features that require enablement include routing protocols (feature ospf, feature eigrp, and feature bgp for Open Shortest Path First, Enhanced Interior Gateway Routing Protocol, and Border Gateway Protocol respectively), Layer 2 features such as feature lacp for Link Aggregation Control Protocol support, network virtualization features such as feature vn-segment-vlan-based for VXLAN functionality, and high availability features such as feature vpc for Virtual Port Channel configuration. The feature interface-vlan command enables the creation of switched virtual interfaces for inter-VLAN routing.
When a feature is enabled, NX-OS loads the necessary software modules, allocates required memory and resources, and makes associated configuration commands available. Before enabling a feature, attempting to configure related functionality results in error messages indicating the feature must first be enabled. After enabling a feature, the show feature command displays which features are currently enabled on the switch, useful for auditing configurations and troubleshooting. Disabling a feature removes its configuration and deallocates associated resources, though some features cannot be disabled if they are currently in use or if other enabled features depend on them. Understanding feature management is essential for NX-OS administration and differs significantly from traditional Cisco IOS operation.
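A few representative examples of the syntax discussed above:

feature ospf
feature bgp
feature lacp
feature vpc
feature interface-vlan
! Verify which features are currently enabled
show feature
! Disable a feature that is no longer needed
no feature lacp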
Option A, enable feature, reverses the correct syntax. While the intent is clear, this is not the proper NX-OS command structure.
Option B, feature enable, also uses incorrect syntax. The feature keyword is followed directly by the feature name without an enable keyword.
Option D, start feature, is not valid NX-OS syntax. The feature command is the correct way to enable features, not a start command.
The feature command is fundamental to NX-OS operation, enabling administrators to activate only needed functionality on Nexus switches.
Question 27
What is the purpose of Priority Flow Control in Data Center Bridging?
A) To provide routing between VLANs
B) To create a lossless Ethernet service for specific traffic classes
C) To compress network traffic
D) To encrypt data in transit
Answer: B
Explanation:
Priority Flow Control creates a lossless Ethernet service for specific traffic classes by implementing a per-priority pause mechanism that prevents frame drops due to buffer overflow. PFC is a critical component of Data Center Bridging, enabling Ethernet networks to carry storage traffic like FCoE that requires lossless transmission similar to traditional Fibre Channel networks. By providing selective flow control on a per-class basis, PFC allows lossless and best-effort traffic to coexist on the same physical infrastructure without the lossless requirements impacting other traffic types.
PFC extends the IEEE 802.3x Ethernet pause mechanism by adding per-priority granularity. Traditional Ethernet pause stops transmission on all priorities when congestion occurs, which is too blunt for converged networks carrying mixed traffic types. PFC leverages the eight priority levels defined by IEEE 802.1p, allowing independent pause control for each priority. When a switch experiences congestion in a queue associated with a specific priority, it sends a PFC pause frame to the upstream device for only that priority class, requesting temporary cessation of transmission for that class while other priorities continue forwarding normally. This selective pause capability is essential for FCoE, which is mapped to a specific priority and must not drop frames, while allowing normal IP traffic on other priorities to experience standard drop behavior during congestion.
Configuring PFC requires coordination across all switches in the path between FCoE initiators and targets. The no-drop class of service must be configured consistently, typically using CoS 3 for FCoE traffic. All switches must have PFC enabled on relevant interfaces, and endpoints must mark FCoE traffic with the correct priority value. The interaction between PFC and other QoS mechanisms like queuing and scheduling must be carefully managed to prevent head-of-line blocking where paused traffic blocks other traffic classes. Enhanced Transmission Selection, another DCB component, works alongside PFC to provide bandwidth guarantees and prevent starvation. Proper PFC configuration is essential for stable FCoE operation, as misconfigurations can lead to frame loss causing severe storage performance degradation or connection failures.
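As an illustration only, the fragment below sketches a no-drop class in an NX-OS network-qos policy; the class name, qos-group, and MTU are hypothetical, and the exact keywords (for example pause no-drop versus platform-specific pause options, or an interface-level priority-flow-control mode on command) differ between Nexus platforms.

class-map type network-qos NO-DROP
  ! FCoE traffic is assumed to have been classified into qos-group 1
  match qos-group 1
policy-map type network-qos LOSSLESS
  class type network-qos NO-DROP
    pause no-drop
    mtu 2158
system qos
  service-policy type network-qos LOSSLESS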
Option A describes routing functionality, which is unrelated to PFC. PFC operates at Layer 2 providing flow control, not Layer 3 routing between networks.
Option C suggests compression, which is not PFC’s function. PFC manages traffic flow to prevent drops, not to reduce data size.
Option D mentions encryption, which is a security function unrelated to PFC. PFC provides lossless transmission through flow control, not data confidentiality through encryption.
Priority Flow Control is fundamental to creating converged networks that reliably carry both storage and IP traffic on shared Ethernet infrastructure.
Question 28
Which command shows the current software version running on a Cisco Nexus switch?
A) show version
B) show software
C) show running-config
D) show system
Answer: A
Explanation:
The show version command displays the current software version running on a Cisco Nexus switch, along with extensive additional system information including hardware platform details, uplink module information, CPU utilization, memory statistics, flash storage usage, system uptime, and boot image location. This command is essential for verifying software versions during upgrades, troubleshooting compatibility issues, and documenting network infrastructure. Understanding show version output is fundamental for Nexus switch administration.
The show version output provides several critical pieces of information. The software version is displayed near the top, showing the NX-OS release number such as 9.3(5), where the first number indicates the major release, the second indicates the minor release, and the number in parentheses indicates the maintenance release. The Hardware section identifies the specific Nexus platform model and chassis type. Bootflash shows the filesystem containing the operating system image and available storage space. The System uptime field indicates how long the switch has been running since the last reload, useful for identifying recent reboots. Memory information shows total and used RAM, helping assess whether the switch has sufficient memory for its workload.
Additional details in show version output include CPU utilization percentages showing processor usage, which can indicate resource constraints if consistently high. The Kernel uptime shows operating system uptime, while Process uptime might differ if certain processes were restarted. The kickstart and system image fields show which software images are currently running, important during ISSU procedures or troubleshooting boot issues. The BIOS version indicates the firmware level, which must meet minimum requirements for certain NX-OS versions. Plugin information shows any licensed features or add-on software modules. Administrators frequently use show version as a first step in troubleshooting to verify the software version, check for recent reboots, and assess resource utilization.
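Because the full output is long, administrators often filter it with the standard NX-OS output modifiers, as in the examples below.

show version
show version | include uptime
show version | include BIOS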
Option B, show software, is not a standard NX-OS command for displaying software version. While show install all status displays upgrade information in some contexts, show version is the primary command.
Option C, show running-config, displays the active configuration including all configured commands. While essential for configuration verification, it does not show software version information.
Option D, show system, is not a complete command in NX-OS. Various show system subcommands exist for specific purposes, but show version is the standard command for displaying software version and system information.
The show version command is one of the most frequently used commands for verifying and documenting Nexus switch software and hardware characteristics.
Question 29
What is the maximum hop count in VXLAN encapsulation before reaching the destination?
A) It depends on the underlay network routing
B) Always 2 hops
C) Always 3 hops
D) No hop limit
Answer: A
Explanation:
The maximum hop count in VXLAN encapsulation depends entirely on the underlay network routing, as VXLAN creates an overlay network that rides on top of an IP-based underlay infrastructure. The VXLAN header itself does not impose hop count limitations; instead, the number of Layer 3 hops traversed is determined by the IP routing topology between the source and destination VXLAN Tunnel Endpoints. Understanding this relationship between overlay and underlay is fundamental to VXLAN architecture and data center network design.
VXLAN operates by encapsulating the original Layer 2 Ethernet frame in a UDP/IP packet for transmission across the underlay network. When an endpoint sends traffic that must traverse the VXLAN overlay, the source VTEP encapsulates the frame, adding a VXLAN header, UDP header, IP header, and outer Ethernet header. The source IP address in the outer header is the VTEP’s interface address, and the destination IP address is the remote VTEP where the destination endpoint resides. This encapsulated packet is then routed through the underlay network using standard IP routing protocols like OSPF, BGP, or IS-IS, traversing as many Layer 3 hops as the underlay routing determines is the optimal path between the two VTEPs.
In a typical spine-leaf architecture commonly used with VXLAN, the underlay routing is optimized for minimal hop count. If both source and destination endpoints connect to leaf switches in the same fabric, the underlay path is typically two hops: source leaf to spine to destination leaf. However, in multi-site deployments, data center interconnect scenarios, or when traversing external routers, the underlay hop count can be significantly higher. The VXLAN architecture does not inherently limit this hop count; it relies on the underlay IP network to deliver packets between VTEPs regardless of distance. The only limitation is the IP TTL field in the outer header, which typically starts at 64 or higher, allowing many hops before expiration. Understanding underlay path characteristics is important for predicting latency and planning capacity.
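As a sketch of the VTEP side of this process, the fragment below enables VXLAN on an NX-OS leaf and maps a VLAN to a VNI; the loopback address, VLAN, VNI, and multicast group are hypothetical, and EVPN-based deployments replace the multicast group with BGP EVPN control-plane configuration.

feature nv overlay
feature vn-segment-vlan-based
vlan 100
  ! Map the VLAN to VXLAN network identifier 10100
  vn-segment 10100
interface loopback 0
  ip address 10.1.1.1/32
interface nve 1
  ! The loopback address becomes the VTEP source in the outer IP header
  source-interface loopback 0
  member vni 10100 mcast-group 239.1.1.1
  no shutdown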
Option B incorrectly suggests a fixed 2-hop limit. While spine-leaf architectures often result in 2-hop underlay paths, this is not a VXLAN requirement or limitation but rather a characteristic of that specific topology design.
Option C similarly imposes an incorrect fixed limit. VXLAN does not enforce any specific hop count; the underlay routing determines the path length.
Option D suggests no hop limit, which is technically incorrect because IP TTL provides an upper bound. However, this limit is not specific to VXLAN but rather a general IP characteristic.
VXLAN’s flexibility in working over various underlay topologies is one of its key strengths for data center and WAN deployment scenarios.
Question 30
Which protocol is used for orchestration and management in Cisco ACI?
A) SNMP
B) OpFlex
C) NETCONF
D) Telnet
Answer: B
Explanation:
OpFlex is the protocol used for orchestration and management communication between the Application Policy Infrastructure Controller and the fabric switches in Cisco ACI. OpFlex enables the APIC to push policy information to switches and receive operational state updates from them, providing the communication framework that makes ACI’s policy-driven automation possible. Understanding OpFlex’s role is essential for comprehending how ACI translates high-level business intent into low-level network configurations across the entire fabric.
OpFlex is a declarative policy protocol developed to support software-defined networking architectures. Rather than the APIC telling each switch exactly what commands to execute, OpFlex communicates the desired end state and policies, allowing switches to determine the appropriate local configurations needed to achieve that state. The APIC maintains the policy repository containing all endpoint groups, contracts, bridge domains, VRFs, and other policy objects. When policies are created or modified, the APIC uses OpFlex to distribute relevant policy information to affected leaf switches. Each leaf switch receives only the policy information necessary for the endpoints it hosts, avoiding unnecessary policy distribution and scaling efficiently even in very large fabrics.
The communication between APIC and switches occurs through a bidirectional OpFlex channel. The APIC pushes policy updates to switches when administrators make changes through the APIC interface. Simultaneously, leaf switches report endpoint discovery, operational status, statistics, and health information back to the APIC through OpFlex. This bidirectional communication enables the APIC to maintain an accurate view of the fabric’s actual state and detect when reality diverges from the desired configuration. The APIC can then automatically remediate discrepancies, ensuring policy compliance. OpFlex runs over TCP with TLS encryption for security, protecting the sensitive policy information transmitted between controller and switches. The protocol is designed for high scalability, supporting thousands of switches in a single fabric and enabling rapid policy deployment across the entire infrastructure.
Option A, SNMP, is a traditional network management protocol used for monitoring and retrieving operational statistics. While ACI supports SNMP for integration with existing management systems, it is not the primary orchestration protocol between APIC and switches.
Option C, NETCONF, is an IETF standard network configuration protocol that could theoretically be used for device management. However, ACI specifically uses OpFlex rather than NETCONF for its policy distribution and orchestration.
Option D, Telnet, is an insecure remote access protocol for command-line interaction with network devices. It is not used for automated orchestration and is generally disabled in modern secure environments.
OpFlex is central to ACI’s architecture, enabling the policy-driven automation that differentiates ACI from traditional network management approaches.