Question 151
An administrator is configuring Cisco Nexus switches and needs to verify which VLANs are allowed on a trunk port. Which command displays this information?
A) show vlan
B) show interface trunk
C) show running-config interface
D) show vlan brief
Answer: B
Explanation:
Trunk port configuration and verification are fundamental tasks in data center network management. Understanding which VLANs traverse trunk links is essential for troubleshooting connectivity issues, validating segmentation, and ensuring proper traffic flow between switches.
The show interface trunk command displays comprehensive information about trunk ports including which VLANs are allowed, active, and forwarding. This command provides a dedicated view of all trunk interfaces showing the native VLAN, allowed VLAN list, VLANs in spanning tree forwarding state, and active VLANs. The allowed VLAN list shows which VLANs are permitted on the trunk based on configuration using the switchport trunk allowed vlan command. The active VLANs list shows which allowed VLANs actually exist in the VLAN database. The forwarding VLANs list shows which active VLANs are in spanning tree forwarding state rather than blocking. This consolidated output makes troubleshooting trunk issues efficient by presenting all relevant information in one display. Administrators can quickly identify configuration problems such as VLANs missing from allowed lists, VLANs not created in the database, or spanning tree blocking unexpected VLANs. The command output also shows encapsulation type and native VLAN configuration for each trunk interface.
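A hedged sketch of the output follows; interface numbers, VLAN lists, and exact column layout vary by NX-OS release:

```
switch# show interface trunk

--------------------------------------------------------------------
Port          Native  Status        Port
              Vlan                  Channel
--------------------------------------------------------------------
Eth1/1        1       trunking      --

--------------------------------------------------------------------
Port          Vlans Allowed on Trunk
--------------------------------------------------------------------
Eth1/1        1,10,20,30

--------------------------------------------------------------------
Port          STP Forwarding
--------------------------------------------------------------------
Eth1/1        1,10,20
```

A VLAN that appears in the allowed list but not in the STP forwarding list is the first clue to check the VLAN database and spanning tree state.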
The show vlan command displays VLAN database information including VLAN IDs, names, status, and assigned access ports. While useful for understanding which VLANs exist and which access ports belong to each VLAN, this command does not show trunk port configuration or which VLANs are allowed on trunks. The show vlan output focuses on VLAN definitions rather than trunk behavior.
The show running-config interface command displays the complete configuration for specified interfaces including trunk settings. While this shows the configured allowed VLAN list, it does not show active or forwarding VLANs, nor does it provide the consolidated trunk-specific view. Administrators would need to manually correlate configuration with VLAN database and spanning tree state. This approach is less efficient than using the dedicated trunk display command.
The show vlan brief command provides a summary of VLAN information similar to show vlan but in condensed format. This command lists VLANs with their names, status, and ports but does not specifically address trunk configuration or allowed VLAN lists. The brief output focuses on VLAN membership rather than trunk behavior.
The show interface trunk command provides the targeted, comprehensive view needed for verifying and troubleshooting VLAN configuration on trunk ports.
Question 152
A storage administrator needs to implement a protocol that allows servers to boot from SAN storage. Which technology enables this capability?
A) NFS boot
B) iSCSI boot
C) HTTP boot
D) TFTP boot
Answer: B
Explanation:
Boot from SAN technology enables servers to load operating systems from remote storage arrays rather than local disks. This capability provides benefits including centralized boot image management, diskless server deployment, rapid server provisioning, and simplified disaster recovery.
iSCSI boot enables servers to boot from SAN storage over Ethernet networks. Servers equipped with iSCSI-capable network adapters or host bus adapters use option ROM boot firmware to establish iSCSI sessions before the operating system loads, passing connection parameters to the OS through the iSCSI Boot Firmware Table (iBFT). During the boot process, the adapter firmware initializes network connectivity, discovers iSCSI targets, authenticates to the storage array, and presents boot LUNs as local disks to the BIOS. The server then boots from the remote storage device as if it were a local disk. iSCSI boot requires configuration of adapter firmware with iSCSI initiator name, target portal IP addresses, authentication credentials, and boot LUN identifiers. The storage array must be configured with appropriate LUN masking to present boot volumes only to authorized initiators. iSCSI boot provides advantages over local storage including eliminating single points of failure through multipath redundancy, simplifying hardware replacement by decoupling server identity from local storage, enabling rapid provisioning through boot volume cloning, and centralizing patch management with golden images. Organizations implementing blade servers or hyperconverged infrastructure frequently use SAN boot to minimize hardware costs and operational complexity.
NFS boot allows diskless workstations or thin clients to mount root filesystems from NFS servers during boot. While NFS boot provides network-based booting, it operates at the file system level rather than presenting block devices to the BIOS. NFS boot is typically used for lightweight clients rather than enterprise servers and does not provide the same capabilities as SAN boot for enterprise applications.
HTTP boot is part of UEFI specifications enabling systems to download boot images from web servers. This technology supports network-based OS deployment and thin client scenarios but does not provide persistent SAN storage that servers boot from continuously. HTTP boot downloads images rather than accessing ongoing block storage.
TFTP boot uses Trivial File Transfer Protocol to download boot images from network servers during PXE boot processes. TFTP commonly supports network installation of operating systems and diskless workstation boot but provides temporary image transfer rather than persistent SAN storage access. TFTP boot serves deployment scenarios rather than ongoing SAN boot operations.
iSCSI boot technology provides enterprise servers with the reliability, performance, and management benefits of SAN storage while leveraging standard Ethernet infrastructure.
Question 153
An engineer is designing a data center network and needs to ensure all traffic between leaf switches traverses only one spine switch. What is this characteristic called?
A) Oversubscription
B) Non-blocking architecture
C) Spine redundancy
D) Clos topology
Answer: D
Explanation:
Data center network topologies determine traffic flow patterns, performance characteristics, and scalability properties. Understanding architectural patterns helps engineers design networks that meet application requirements while efficiently utilizing infrastructure.
Clos topology describes the characteristic where all traffic between leaf switches traverses only one spine switch. The Clos network architecture, originally designed for telephone switching, has been adapted for modern data center networks. In a Clos-based spine-leaf design, leaf switches connect to all spine switches but never to other leaf switches, and spine switches connect to all leaf switches but never to other spine switches. This creates a bipartite graph where any leaf-to-leaf communication requires exactly two hops through one spine. The consistent path length provides predictable latency and bandwidth characteristics. Clos topologies support horizontal scaling by adding leaf switches for more server capacity or adding spine switches for more bandwidth without changing the fundamental two-hop architecture. The design provides multiple equal-cost paths between any two leaf switches, enabling ECMP load distribution. Clos networks are inherently non-blocking when properly provisioned, meaning sufficient spine bandwidth exists to support simultaneous full-rate communication from all leaf ports. This architecture has become the standard for modern data center networks replacing traditional three-tier hierarchies.
Oversubscription refers to the ratio of downlink bandwidth to uplink bandwidth at any network layer. For example, a leaf switch with 48 ports at 10 Gbps to servers (480 Gbps of downlink) and four uplinks at 40 Gbps to spines (160 Gbps of uplink) has 3:1 oversubscription. While related to architecture design, oversubscription is a capacity characteristic rather than the fundamental topology pattern.
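The ratio is simple arithmetic, sketched below; the port counts and speeds are illustrative, not taken from any particular platform:

```python
def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Return the oversubscription ratio: total downlink / total uplink bandwidth."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# 48 server-facing ports at 10 Gbps, 4 spine uplinks at 40 Gbps:
ratio = oversubscription(48, 10, 4, 40)
print(f"{ratio:.0f}:1")  # 480 Gbps down / 160 Gbps up = 3:1
```

A ratio of 1:1 corresponds to a non-blocking design; higher ratios trade spine bandwidth for cost.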
Non-blocking architecture describes networks with sufficient bandwidth that any port can communicate at full line rate simultaneously with any other port without congestion. While Clos topologies can be designed as non-blocking, the term describes capacity provisioning rather than the specific two-hop architectural pattern.
Spine redundancy refers to having multiple spine switches for fault tolerance. Redundancy is a design goal that Clos topology supports, but the term does not specifically describe the two-hop traffic pattern characteristic of Clos networks.
Clos topology represents the mathematical and architectural foundation for modern spine-leaf data center designs that provide predictable performance at scale.
Question 154
A network administrator needs to configure a Cisco Nexus switch to automatically recover from error-disabled state after a specific time period. Which feature should be enabled?
A) Port security
B) Error-disable recovery
C) Storm control
D) UDLD
Answer: B
Explanation:
Network switches place interfaces into error-disabled state when they detect conditions that might harm the network such as port security violations, BPDU guard triggers, or excessive errors. Understanding recovery mechanisms helps balance security protection with operational availability.
Error-disable recovery should be enabled to automatically recover interfaces from error-disabled state after a specific time period. When switches detect potentially dangerous conditions, they shut down the offending interface and place it in err-disabled state to prevent network problems from spreading. Without automatic recovery, interfaces remain disabled until administrators manually investigate and re-enable them using shutdown followed by no shutdown commands. The errdisable recovery feature allows switches to automatically attempt interface recovery after a configurable interval, typically 300 seconds by default. Administrators can enable recovery globally and specify which error conditions should support automatic recovery such as bpduguard, link-flap, security-violation, or udld. The recovery timer provides time for transient issues to resolve while automating recovery from temporary problems. Automatic recovery is valuable in large environments where manual intervention for every error-disable event would be operationally burdensome. However, administrators should monitor recovery events because interfaces repeatedly entering and recovering from error-disabled state indicate underlying problems requiring investigation. The feature balances protection from network issues with operational efficiency.
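A minimal NX-OS configuration sketch, assuming recovery is wanted for BPDU guard and link-flap events (cause keywords vary by platform and release):

```
! Enable automatic recovery for selected error-disable causes
errdisable recovery cause bpduguard
errdisable recovery cause link-flap
! Attempt recovery every 300 seconds (the typical default)
errdisable recovery interval 300
```

The interfaces affected and the causes pending recovery can then be checked with show interface status err-disabled.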
Port security is a feature that restricts which MAC addresses can access a switchport and can cause error-disable state when violations occur. Port security is a cause of error-disabled state rather than a recovery mechanism. Configuring port security without error-disable recovery means violations require manual intervention.
Storm control detects and mitigates broadcast, multicast, or unknown unicast storms by rate-limiting traffic or shutting down interfaces. Like port security, storm control can trigger error-disabled state but does not provide automatic recovery. Storm control addresses a specific problem type rather than general error recovery.
UDLD is UniDirectional Link Detection that identifies cabling problems where traffic flows only one direction. UDLD can place interfaces in error-disabled state when unidirectional links are detected. UDLD detects a specific failure condition but does not provide automatic recovery from error-disabled state caused by any condition.
Error-disable recovery provides the automated restoration capability that reduces operational overhead while maintaining protection from recurring network problems.
Question 155
An administrator is configuring HSRP on Cisco Nexus switches for gateway redundancy. Which virtual MAC address format does HSRP version 2 use?
A) 0000.0c07.acXX
B) 0000.5e00.01XX
C) 0007.b400.XXYY
D) 0000.0c9f.fXXX
Answer: D
Explanation:
First Hop Redundancy Protocols provide default gateway redundancy through virtual IP and MAC addresses shared between multiple physical routers. Understanding the MAC address formats helps with troubleshooting, packet analysis, and capacity planning.
HSRP version 2 uses the virtual MAC address format 0000.0c9f.fXXX where XXX represents the HSRP group number in hexadecimal. This expanded format supports 4096 HSRP groups compared to version 1’s limit of 256 groups. The MAC address prefix 0000.0c9f.f is reserved by Cisco specifically for HSRPv2. When routers form an HSRP group, they create a virtual MAC address based on the group number that the active router uses to respond to ARP requests for the virtual IP address. Connected hosts learn this virtual MAC as their default gateway rather than the physical MAC of any specific router. When active router failure occurs and the standby assumes the active role, it begins using the same virtual MAC address, ensuring host ARP caches remain valid without requiring updates. The consistent virtual MAC address across failover events enables transparent gateway redundancy. HSRPv2 provides additional improvements over version 1 including millisecond timers for faster convergence, IPv6 support, and improved authentication. The expanded group number range in version 2 allows more granular gateway redundancy deployment in large environments with many VLANs.
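The derivation of the virtual MAC from the group number can be sketched as a small helper (the function name is illustrative; the 0000.0c9f.f000 base and 0-4095 range are from the HSRPv2 format described above):

```python
def hsrpv2_virtual_mac(group):
    """Build the HSRPv2 IPv4 virtual MAC 0000.0c9f.fXXX for a group (0-4095)."""
    if not 0 <= group <= 4095:
        raise ValueError("HSRPv2 group numbers range from 0 to 4095")
    mac = 0x00000C9FF000 | group          # base address plus group number
    s = f"{mac:012x}"                     # 12 hex digits
    return f"{s[0:4]}.{s[4:8]}.{s[8:12]}" # dotted triplet notation

print(hsrpv2_virtual_mac(10))    # 0000.0c9f.f00a
print(hsrpv2_virtual_mac(4095))  # 0000.0c9f.ffff
```

Recognizing this prefix in a packet capture immediately identifies HSRPv2 gateway traffic and the group number.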
The MAC address format 0000.0c07.acXX is used by HSRP version 1 where XX represents the group number in hexadecimal. This format limits HSRP to 256 groups numbered 0 through 255. Organizations requiring more than 256 HSRP groups must use version 2 with its expanded address space.
The MAC address format 0000.5e00.01XX is used by VRRP (Virtual Router Redundancy Protocol), which is the IETF standard alternative to Cisco’s proprietary HSRP. VRRP uses this IEEE-assigned MAC address prefix. While functionally similar to HSRP, VRRP is a distinct protocol with different packet formats and behaviors.
The MAC address format 0007.b400.XXYY is used by GLBP (Gateway Load Balancing Protocol) where multiple virtual MAC addresses exist per group. GLBP provides both redundancy and load balancing by directing different clients to different physical routers. The GLBP AVG assigns virtual MAC addresses to AVFs enabling active-active gateway operation.
Understanding HSRP version 2 MAC address format is essential for network analysis, troubleshooting failover issues, and capacity planning for large-scale gateway redundancy deployments.
Question 156
A storage engineer is troubleshooting Fibre Channel connectivity and needs to identify the worldwide name of an HBA. Where is the WWN typically found?
A) Burned into the HBA hardware
B) Configured by the administrator
C) Assigned by the FC switch
D) Generated randomly at boot
Answer: A
Explanation:
Fibre Channel addressing uses World Wide Names to uniquely identify devices in the SAN fabric. Understanding WWN characteristics and assignment is essential for zoning configuration, troubleshooting, and maintaining stable SAN operations.
The WWN is typically burned into the HBA hardware by the manufacturer, similar to Ethernet MAC addresses. World Wide Names follow IEEE naming standards using 64-bit identifiers displayed as 16 hexadecimal digits separated by colons. Each FC HBA receives one or more unique WWNs including a World Wide Node Name identifying the physical device and one or more World Wide Port Names identifying individual ports on multi-port HBAs. The manufacturer assigns these identifiers during production ensuring global uniqueness. WWNs remain constant throughout the device lifetime regardless of which server contains the HBA or which switch port it connects to. This persistence is critical for SAN zoning because zones reference devices by WWN rather than physical location. When administrators replace failed HBAs, the new HBA has different WWNs requiring zone updates. Some enterprise HBAs support WWN modification through vendor utilities allowing administrators to preserve WWNs when replacing hardware, but factory-assigned WWNs remain the default. Administrators can display WWNs using operating system utilities, HBA management software, or by examining labels on the physical adapters.
Configured by the administrator would mean WWNs change based on administrative settings rather than being unique hardware identifiers. While some HBAs support administrative WWN override, this is not the typical source. Factory-assigned WWNs provide guaranteed uniqueness that administrator configuration cannot reliably ensure.
Assigned by the FC switch would make WWNs dependent on fabric connectivity and switch assignment algorithms. This approach would cause WWN changes when devices move between switches or ports, breaking persistent zoning configurations. FC switches assign Fibre Channel IDs dynamically but not WWNs.
Generated randomly at boot would create different WWNs each time the device powers on, making persistent configuration impossible. Random generation would also risk WWN collisions where multiple devices might generate identical identifiers. The FC addressing scheme requires stable, unique identifiers that random generation cannot provide.
Factory-burned WWNs provide the stable, globally unique identifiers essential for Fibre Channel addressing, zoning, and fabric operations.
Question 157
An administrator needs to configure a port channel using static configuration without dynamic negotiation. Which port channel mode should be used?
A) Active
B) Passive
C) On
D) Desirable
Answer: C
Explanation:
Port channel implementations support multiple modes determining how member interfaces negotiate channel formation. Selecting appropriate modes based on requirements and connected device capabilities ensures reliable link aggregation.
The on mode configures static port channels without dynamic negotiation protocols. When interfaces are configured in on mode, they unconditionally attempt to form a port channel without exchanging LACP or PAgP protocol messages. Both ends of the connection must be configured identically in on mode for the channel to function. The switch immediately adds interfaces to the port channel bundle without verifying compatibility or negotiating parameters with the remote device. Static configuration using on mode was common before LACP standardization and remains useful when connecting to devices that do not support negotiation protocols or when administrators want explicit control without protocol overhead. However, on mode lacks the safety features that LACP provides including configuration verification, automatic member removal when failures occur, and graceful handling of misconfigurations. Mismatched configurations with on mode can create loops or unstable connectivity. Modern best practice recommends LACP over static configuration for its superior error detection and compatibility verification.
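A minimal NX-OS sketch of a static port channel; interface and channel numbers are illustrative, and the identical configuration must exist on the remote device:

```
! Bundle two interfaces statically, with no LACP/PAgP negotiation
interface Ethernet1/1-2
  channel-group 10 mode on

! The logical interface inherits trunk settings configured here
interface port-channel 10
  switchport mode trunk
```

With mode on, the switch bundles the members unconditionally, which is exactly why a mismatch at the far end can go undetected.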
Active mode enables LACP with the interface actively initiating negotiation by sending LACP PDUs. Active mode interfaces establish port channels with other interfaces configured as either active or passive. This dynamic negotiation verifies configuration compatibility and provides ongoing monitoring. Active mode uses protocol exchange rather than static configuration.
Passive mode enables LACP but the interface waits for the remote side to initiate negotiation rather than actively sending PDUs. Passive interfaces respond to LACP messages from active partners. At least one end must be active for negotiation to occur. Passive mode involves protocol negotiation rather than static operation.
Desirable mode belongs to PAgP (Port Aggregation Protocol), which is Cisco’s proprietary predecessor to the standard LACP protocol. Desirable mode actively sends PAgP messages to negotiate channel formation. PAgP is largely obsolete, and Cisco Nexus switches do not support it at all; NX-OS offers only LACP (active and passive) and static on mode. Desirable is not a static mode.
The on mode provides static port channel configuration for scenarios requiring explicit control or compatibility with devices lacking negotiation protocol support.
Question 158
A network engineer is implementing BGP in a spine-leaf fabric. Which BGP attribute is used to prevent routing loops by tracking the autonomous systems a route has traversed?
A) Local preference
B) AS path
C) MED
D) Weight
Answer: B
Explanation:
Border Gateway Protocol uses path attributes to make routing decisions and prevent loops in complex internetworks. Understanding these attributes is essential for proper BGP configuration and troubleshooting in data center fabrics.
AS path is the BGP attribute used to prevent routing loops by tracking the autonomous systems a route has traversed. When a BGP router advertises a route to an external BGP peer, it prepends its own AS number to the AS_PATH attribute. As the route propagates through multiple autonomous systems, each AS adds its number creating a list of ASNs the route has passed through. Before accepting a route, BGP routers check whether their own AS number appears in the AS_PATH. If found, the router rejects the route because accepting it would create a loop where traffic could circle back to the originating AS. This loop prevention mechanism functions similarly to TTL in IP packets but operates at the AS level. In data center spine-leaf fabrics using BGP, each leaf and spine switch typically has a unique AS number. When a leaf advertises routes to a spine, the spine sees the leaf’s AS in the path. When that spine advertises to other leaves, they see both the spine’s AS and the originating leaf’s AS. The receiving leaf does not send the route back to the spine that sent it because it detects the loop. AS_PATH also influences route selection because shorter paths are preferred over longer paths in BGP best path selection.
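The loop-prevention logic described above can be sketched in a few lines; the AS numbers and function names are illustrative, not part of any BGP implementation:

```python
def advertise(local_asn, as_path):
    """Prepend our ASN to AS_PATH before advertising to an eBGP peer."""
    return [local_asn] + as_path

def accept_route(local_asn, as_path):
    """Reject a received route if our own ASN already appears in AS_PATH."""
    return local_asn not in as_path

# Leaf 65001 originates a route; spine 65100 prepends and re-advertises.
path = advertise(65100, advertise(65001, []))   # [65100, 65001]
print(accept_route(65002, path))  # True  - another leaf accepts the route
print(accept_route(65001, path))  # False - originating leaf detects the loop
```

The same list also feeds best-path selection: with other attributes equal, the shorter AS_PATH wins.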
Local preference is a BGP attribute used within an autonomous system to influence outbound routing decisions. Higher local preference values are preferred. Local preference affects path selection but does not prevent loops. This attribute is not propagated between autonomous systems.
MED (Multi-Exit Discriminator) is a BGP attribute used to suggest to neighboring autonomous systems which entry point to use when multiple connections exist. MED influences inbound traffic patterns but does not prevent loops. The attribute has limited scope and lower priority than AS path in decision processes.
Weight is a Cisco-proprietary BGP attribute that influences path selection locally on a single router. Higher weight values are preferred for outbound traffic. Weight provides local control but is not advertised to any BGP neighbors and does not prevent loops.
The AS_PATH attribute serves the critical function of preventing routing loops while enabling BGP to scale across thousands of autonomous systems in the global Internet and data center fabrics.
Question 159
An administrator is configuring VRF instances on Cisco Nexus switches for tenant isolation. What is the primary purpose of VRF?
A) To increase routing table size
B) To provide Layer 3 isolation between tenants
C) To enable faster convergence
D) To reduce memory consumption
Answer: B
Explanation:
Multi-tenant data center environments require strong isolation between different customers or business units sharing common infrastructure. Virtual routing and forwarding instances provide Layer 3 separation while efficiently utilizing physical resources.
Providing Layer 3 isolation between tenants is the primary purpose of VRF. Virtual Routing and Forwarding creates separate routing table instances within a single physical router or switch, enabling multiple isolated routing domains on shared hardware. Each VRF maintains its own routing table, forwarding table, and routing protocol instances. Traffic in one VRF cannot directly communicate with traffic in another VRF without explicit policy allowing it, typically through route leaking or firewall services. This isolation enables service providers to host multiple customers on common infrastructure while ensuring customer traffic remains private and separate. In enterprise data centers, VRFs separate production, development, and management networks. Each VRF can use overlapping IP address spaces without conflict because routing decisions occur within VRF context. Interfaces are assigned to specific VRFs, determining which routing table processes traffic received on that interface. Routing protocols like BGP, OSPF, or EIGRP can run independently within each VRF with separate neighbor relationships and route advertisements. VRF-lite implementations provide VRF functionality without requiring MPLS, making the technology accessible in standard data center environments.
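A minimal NX-OS sketch of placing an interface into a tenant VRF; the VRF name, interface, and addressing are examples:

```
! Create the tenant routing instance
vrf context TENANT-A

! Assign a routed interface to it (vrf member clears any existing IP config,
! so it is applied before the address)
interface Ethernet1/10
  no switchport
  vrf member TENANT-A
  ip address 10.1.1.1/24
```

Traffic arriving on Ethernet1/10 is then looked up only in the TENANT-A routing table, never in the global table or any other VRF.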
Increasing routing table size is not the purpose of VRF but rather a consequence of creating multiple routing instances. Each VRF has its own routing table which increases total memory consumption compared to a single global table. This is a cost of achieving isolation rather than a benefit.
Enabling faster convergence is not a VRF function. Convergence speed depends on routing protocol timers, network topology, and protocol characteristics. VRF creates isolation but does not inherently improve convergence. In some cases, separate protocol instances per VRF might converge independently, but this is not the primary purpose.
Reducing memory consumption is opposite to VRF’s effect. Multiple routing table instances consume more memory than a single global table. VRF trades increased resource consumption for the benefit of isolation and multi-tenancy support.
VRF technology enables data centers to support multiple isolated tenants on shared infrastructure while maintaining security and address space independence.
Question 160
A storage administrator needs to configure LUN masking on a storage array. What is the purpose of LUN masking?
A) To encrypt data on storage volumes
B) To control which hosts can access specific LUNs
C) To compress data to save space
D) To replicate LUNs to remote sites
Answer: B
Explanation:
Storage area networks require access controls at multiple layers to ensure data security and prevent unauthorized access. LUN masking provides critical access control at the storage array level complementing fabric-level zoning.
Controlling which hosts can access specific LUNs is the purpose of LUN masking. Storage arrays present Logical Unit Numbers representing storage volumes to connected hosts. Without access controls, any host discovering the array could potentially access all LUNs, creating security risks and data corruption possibilities. LUN masking creates access control lists on the storage array mapping specific LUNs to authorized host initiators. The array identifies hosts by their HBA World Wide Names or iSCSI initiator names. When a host attempts to discover or access LUNs, the array consults masking configuration and presents only authorized LUNs to that host. Other LUNs remain invisible to unauthorized hosts. This access control prevents production servers from accessing development storage, database servers from accessing web server volumes, and Windows hosts from accessing Linux filesystems. LUN masking provides defense-in-depth when combined with SAN zoning which controls fabric-level connectivity. Both mechanisms should be configured consistently to ensure comprehensive access control. Masking operates at the storage array and can provide finer granularity than zoning alone.
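The array-side logic amounts to a lookup table keyed by initiator identity. A toy sketch, with made-up WWPNs and LUN IDs:

```python
# Masking table: initiator WWPN -> set of LUN IDs the array presents to it.
masking_table = {
    "10:00:00:00:c9:aa:bb:01": {0, 1},   # production host: boot + data LUNs
    "10:00:00:00:c9:aa:bb:02": {2},      # dev host: dev LUN only
}

def visible_luns(initiator_wwpn):
    """Return the LUNs exposed to this initiator; unknown hosts see nothing."""
    return masking_table.get(initiator_wwpn, set())

print(visible_luns("10:00:00:00:c9:aa:bb:02"))  # {2}
print(visible_luns("10:00:00:00:c9:aa:bb:99"))  # set()
```

Real arrays layer this behind host groups and storage groups, but the effect is the same: discovery and I/O succeed only for LUNs mapped to the requesting initiator.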
Encrypting data on storage volumes is a data protection function separate from access control. Storage encryption protects data confidentiality against physical theft of disks or unauthorized access to storage components. Encryption operates on data content while LUN masking controls access visibility. Both are security features serving different purposes.
Compressing data to save space is a capacity optimization feature that reduces storage consumption. Many arrays support inline or post-process compression to store more data on available capacity. Compression addresses storage efficiency rather than access control. Data compression and LUN masking are independent features.
Replicating LUNs to remote sites provides disaster recovery and business continuity capabilities. Storage replication creates copies of data at secondary locations for protection against site failures. Replication ensures data availability while LUN masking controls access permissions. These features address different aspects of storage management.
LUN masking implements essential access control that prevents unauthorized storage access and supports multi-tenant environments on shared storage infrastructure.
Question 161
An engineer is designing QoS policies for a converged data center network carrying storage, voice, and data traffic. How many bits are available in the IP DSCP field for traffic classification?
A) 3 bits
B) 6 bits
C) 8 bits
D) 16 bits
Answer: B
Explanation:
Quality of Service implementations rely on packet marking to classify traffic and enable differentiated treatment across networks. Understanding the encoding and capacity of QoS marking fields is essential for designing effective policies.
The IP DSCP field uses 6 bits for traffic classification within the IP header’s Type of Service byte. Differentiated Services Code Point replaced the older IP Precedence scheme, utilizing 6 bits instead of the previous 3 bits for finer-grained classification. The 6-bit field provides 64 possible DSCP values ranging from 0 to 63, though not all values are commonly used. Standard DSCP values include Default (0) for best effort traffic, Expedited Forwarding (46) for low-latency traffic like voice, and the Assured Forwarding classes (for example AF11=10, AF21=18, AF31=26, AF41=34) for different priority data categories. The remaining 2 bits in the ToS byte are reserved for Explicit Congestion Notification. Network devices examine DSCP markings to make queuing, scheduling, and drop decisions. Consistent DSCP marking across administrative domains enables end-to-end QoS. Data center QoS policies typically assign specific DSCP values to traffic types, such as CS4 or AF41 for iSCSI storage, EF for voice, AF21 for critical applications, and Default for best effort. The 6-bit field provides sufficient granularity for complex QoS deployments while remaining simple enough for practical implementation.
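The bit layout is easy to verify directly; this sketch just shifts a 6-bit DSCP into the upper bits of the ToS byte:

```python
def tos_byte(dscp, ecn=0):
    """Combine a 6-bit DSCP and 2-bit ECN value into the 8-bit ToS byte."""
    if not (0 <= dscp <= 63 and 0 <= ecn <= 3):
        raise ValueError("DSCP is 6 bits (0-63), ECN is 2 bits (0-3)")
    return (dscp << 2) | ecn

EF = 46  # Expedited Forwarding, used for voice
print(hex(tos_byte(EF)))  # 0xb8 - the classic ToS value seen in captures for EF
print(2 ** 6)             # 64 possible DSCP code points
```

This also explains why EF traffic shows up as ToS 0xB8 in packet captures: 46 shifted left two bits is 184.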
Three bits describes the IP Precedence field used in the original ToS byte before DiffServ. IP Precedence provided only 8 priority levels (0-7) which proved insufficient for modern QoS requirements. The 3-bit field has been superseded by the 6-bit DSCP field though some legacy systems still use precedence values.
Eight bits would represent the entire ToS byte including both DSCP and ECN bits. While the full byte is 8 bits, only 6 bits are allocated to DSCP for traffic classification. The distinction between the full byte and the DSCP portion is important for understanding field capacity.
Sixteen bits would require 2 bytes and is not used for IP DSCP. No QoS marking field uses 16 bits in standard IP headers. This represents a fundamental misunderstanding of IP header structure and QoS encoding.
The 6-bit DSCP field provides the classification granularity modern data centers require for implementing sophisticated QoS policies across diverse traffic types.
Question 162
A network administrator needs to configure an interface on a Cisco Nexus switch to operate in Layer 3 mode. Which command converts a switchport to a routed interface?
A) no switchport
B) ip routing
C) routed-port
D) layer3-interface
Answer: A
Explanation:
Cisco Nexus switches support both Layer 2 switching and Layer 3 routing on the same platform. Interfaces can operate in switchport mode for Layer 2 bridging or in routed mode for Layer 3 forwarding. Understanding how to configure interface modes enables flexible network designs.
The no switchport command converts a switchport to a routed Layer 3 interface on Cisco Nexus switches. By default, most Nexus interfaces operate in Layer 2 switchport mode handling Ethernet frames. The no switchport command removes Layer 2 functionality, converting the interface to a routed port capable of having an IP address assigned and participating in routing protocols. After conversion, the interface no longer participates in VLANs, spanning tree, or Layer 2 forwarding. Instead, it forwards packets based on IP routing tables. Routed interfaces are used for switch-to-switch connections in spine-leaf fabrics where Layer 3 is preferred over Layer 2, for connections to upstream routers, or for providing default gateways directly on the switch. The interface configuration changes from switchport commands like switchport mode and switchport access vlan to Layer 3 commands like ip address and ip router ospf. Converting interfaces to routed mode is essential when implementing pure Layer 3 data center fabrics or when segmenting networks using routing rather than VLANs.
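A minimal configuration sequence for the conversion described above might look like the following; the interface number and addressing are illustrative:

```
! Convert a Layer 2 switchport to a routed interface
interface Ethernet1/1
  no switchport
  ip address 10.0.0.1/30
  no shutdown
```

After the change, show interface Ethernet1/1 should report the port as a routed interface rather than a switchport.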
The ip routing command enables IP routing globally on Cisco IOS-based switches rather than configuring individual interfaces. On NX-OS this global command does not exist at all: Layer 3 forwarding is available by default, and routing protocols are activated with feature commands. In either case, the command does not convert specific interfaces from Layer 2 to Layer 3 mode.
Routed-port is not a valid command on Cisco platforms. This represents incorrect syntax that would produce configuration errors. The proper command uses the negative form of switchport to remove Layer 2 functionality.
Layer3-interface is also not valid syntax on Cisco Nexus switches. While descriptive of the desired outcome, this is not an actual command. Interface mode conversion requires the no switchport command using standard NX-OS syntax.
The no switchport command provides the simple, standard method for converting Nexus interfaces from Layer 2 switching to Layer 3 routing mode.
Question 163
An administrator is troubleshooting VXLAN overlay connectivity. Which protocol is commonly used as the control plane for VXLAN to distribute MAC and IP address reachability information?
A) OSPF
B) EIGRP
C) EVPN
D) RIP
Answer: C
Explanation:
VXLAN overlay networks require control plane protocols to distribute endpoint reachability information between VTEPs. The control plane determines how VTEPs learn about remote MAC addresses and their associated VTEP locations, affecting scalability and efficiency.
EVPN is the protocol commonly used as the control plane for VXLAN to distribute MAC and IP address reachability information. Ethernet VPN uses MP-BGP to exchange Layer 2 and Layer 3 reachability information between VXLAN Tunnel Endpoints. EVPN defines new BGP address families specifically for carrying MAC addresses, IP addresses, and their bindings to VTEPs. When an endpoint connects to a VTEP, that VTEP advertises the endpoint’s MAC address, IP address, and associated VNI through EVPN route types. Remote VTEPs receive these advertisements and populate their forwarding tables, enabling them to send traffic directly to the correct VTEP without flooding. EVPN eliminates the need for data plane learning that would require flooding unknown destinations across the overlay. This control plane approach dramatically reduces bandwidth consumption on the underlay network by converting multicast and broadcast traffic into unicast VXLAN tunnels. EVPN also supports advanced features including active-active multi-homing, MAC mobility detection for VM migration, and integrated routing and bridging for optimal traffic forwarding. The BGP foundation provides scalability to thousands of VTEPs in large data centers.
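A hedged sketch of the NX-OS configuration that brings up the EVPN control plane follows; the feature names are standard NX-OS, but the AS number and neighbor address are illustrative:

```
! Enable the VXLAN EVPN control plane (values are examples)
nv overlay evpn
feature bgp
feature nv overlay
feature vn-segment-vlan-based

router bgp 65001
  neighbor 10.1.1.1 remote-as 65001
    address-family l2vpn evpn
      send-community extended
```

With the l2vpn evpn address family established between VTEPs (or via route reflectors), MAC/IP reachability is exchanged as EVPN routes instead of being learned by flooding in the data plane.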
OSPF is a link-state Interior Gateway Protocol designed for routing IP packets within autonomous systems. While OSPF might run on the underlay network providing reachability between VTEPs, it does not carry MAC address reachability information for overlay networks. OSPF routes IP prefixes rather than individual MAC addresses.
EIGRP is Enhanced Interior Gateway Routing Protocol that provides advanced distance vector routing for IP networks. Like OSPF, EIGRP might run on underlay infrastructure but does not serve as a VXLAN control plane. EIGRP cannot distribute the MAC and IP binding information needed for overlay operations.
RIP is Routing Information Protocol, a legacy distance vector protocol with severe scalability limitations. RIP is unsuitable for modern data centers and cannot distribute overlay reachability information. RIP routes IP networks using hop counts and does not support MAC address advertisement.
EVPN has emerged as the standard control plane for VXLAN overlays, providing the scalability and functionality required for software-defined data center networks.
Question 164
A storage administrator needs to configure flow control on interfaces connected to FCoE devices. Which mechanism should be enabled to provide lossless Ethernet?
A) PAUSE frames (802.3x)
B) Priority Flow Control (PFC)
C) Spanning Tree Protocol
D) LACP
Answer: B
Explanation:
Fibre Channel over Ethernet requires lossless transport to maintain storage protocol reliability. Standard Ethernet flow control mechanisms must be enhanced to support converged networks where storage and data traffic coexist.
Priority Flow Control should be enabled to provide lossless Ethernet for FCoE devices. PFC is part of the Data Center Bridging standards that enable Ethernet to support lossless transport required by Fibre Channel. Unlike traditional 802.3x PAUSE frames that stop all traffic on a link, PFC operates on a per-Class of Service basis allowing selective flow control. FCoE traffic is assigned to a specific CoS value, and PFC protects only that traffic class from drops while allowing other traffic classes to continue using standard Ethernet drop behavior. When a receiving interface’s buffers for the FCoE CoS approach capacity, it sends PFC PAUSE frames requesting the sender to temporarily stop transmission for that specific class. The sender halts only FCoE traffic while continuing to send other traffic. This class-based approach enables true convergence where lossless storage and lossy data traffic coexist without interference. All devices in the FCoE path from CNAs through switches to storage arrays must support and enable PFC for the designated CoS. PFC configuration includes mapping FCoE to CoS 3 by default and enabling no-drop behavior for that class.
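Enabling PFC on a Nexus interface is typically a single interface-level command; the interface number here is illustrative:

```
! Enable Priority Flow Control on an FCoE-facing interface
interface Ethernet1/10
  priority-flow-control mode on
```

Operation can then be checked with show interface priority-flow-control, which reports whether PFC is operational and counts PFC PAUSE frames sent and received per interface.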
PAUSE frames (802.3x) provide link-level flow control but affect all traffic on the link without class differentiation. When 802.3x PAUSE is received, all transmission stops regardless of traffic type or priority. This all-or-nothing approach is unsuitable for converged networks where only storage traffic requires lossless treatment. Using 802.3x would impact data traffic unnecessarily.
Spanning Tree Protocol prevents Layer 2 loops and has no relationship to flow control or lossless transmission. STP blocks redundant paths to create loop-free topologies but does not prevent frame drops due to congestion. STP and PFC address completely different network challenges.
LACP dynamically negotiates port channel formation for link aggregation. While LACP provides increased bandwidth and redundancy, it does not implement flow control or lossless behavior. Link aggregation complements but does not replace the need for PFC in FCoE environments.
Priority Flow Control provides the class-based lossless transport essential for FCoE storage traffic while maintaining standard Ethernet behavior for data traffic in converged networks.
Question 165
An engineer is configuring Cisco ACI and needs to define the scope of Layer 3 communication and route separation. Which construct provides this functionality?
A) Tenant
B) VRF
C) Bridge Domain
D) Endpoint Group
Answer: B
Explanation:
Cisco Application Centric Infrastructure uses hierarchical policy constructs to model application requirements and network behavior. Understanding each construct’s role enables proper policy design and network segmentation.
VRF provides Layer 3 communication scope and route separation in Cisco ACI. The Virtual Routing and Forwarding construct, called a context or private network in ACI terminology, creates isolated Layer 3 routing domains within a tenant. Each VRF maintains separate routing tables, forwarding instances, and can run independent routing protocol processes. Bridge domains associate with specific VRFs, and endpoints in bridge domains within the same VRF can route between subnets while endpoints in different VRFs cannot communicate without explicit policy. VRFs enable overlapping IP address spaces between different contexts since routing lookups occur within VRF scope. Organizations use multiple VRFs to separate production from development environments, isolate different business units, or implement customer separation in multi-tenant scenarios. Routes can be selectively leaked between VRFs when controlled inter-context communication is required, typically through contracts with route leaking enabled. VRFs also define the scope for route distribution in BGP or other routing protocols when ACI integrates with external networks. Each VRF can have unique route policies, external connectivity options, and service integration.
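In the APIC object model, the VRF described above is the fvCtx object inside a tenant, and each bridge domain binds to a VRF through a relation. The following is a hedged sketch of the XML a tenant configuration might post; the tenant, VRF, and bridge domain names are illustrative:

```
<!-- A VRF (fvCtx) inside a tenant, with a bridge domain bound to it -->
<fvTenant name="Prod">
  <fvCtx name="Prod-VRF"/>
  <fvBD name="Web-BD">
    <fvRsCtx tnFvCtxName="Prod-VRF"/>
  </fvBD>
</fvTenant>
```

Endpoints in bridge domains that reference the same fvCtx share a routing table; bridge domains referencing different VRFs are isolated unless routes are leaked through policy.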
Tenant is the top-level organizational container providing complete administrative and resource isolation. While tenants contain VRFs and provide high-level separation, the VRF construct specifically handles Layer 3 routing scope and table separation. Multiple VRFs can exist within a single tenant to segment different applications or environments.
Bridge Domain provides Layer 2 flood domain and subnet definition but operates within the Layer 3 context defined by a VRF. Bridge domains handle Layer 2 forwarding, unknown unicast flooding, and act as default gateways for attached subnets. Multiple bridge domains can exist within a VRF with routing between them controlled by the VRF.
Endpoint Group groups endpoints with common policy requirements but does not define routing scope. Each EPG is associated with a single bridge domain and operates within the Layer 2 and Layer 3 context provided by that bridge domain and its VRF. EPGs handle security and service policy rather than routing separation.
VRF constructs provide the essential Layer 3 isolation and routing scope definition required for multi-tenant environments and network segmentation in ACI fabrics.