Cisco 350-601 Implementing and Operating Cisco Data Center Core Technologies (DCCOR) Exam Dumps and Practice Test Questions, Set 3 (Questions 31–45)


Question 31:

An engineer is configuring Cisco Nexus switches in a VPC domain and needs to ensure control plane communication between peers. Which link provides keepalive messages between VPC peer switches?

A) VPC peer-link

B) VPC peer-keepalive link

C) VPC member port

D) Management interface

Answer: B

Explanation:

The VPC peer-keepalive link is a dedicated Layer 3 connection between VPC peer switches that carries keepalive heartbeat messages to monitor peer health and prevent split-brain scenarios. This link operates independently of the peer-link and typically uses the management interfaces or dedicated out-of-band connections to ensure reliable communication even when the peer-link fails. The peer-keepalive mechanism is critical for VPC operational stability and failover behavior.

Keepalive messages exchange health status information between peers every second by default. These lightweight UDP packets on port 3200 verify that both peers remain operational and can communicate. When keepalive messages stop arriving, the receiving peer knows its partner has failed or lost connectivity, triggering appropriate failover actions.

The peer-keepalive link must be configured in a different failure domain than the peer-link to provide independent failure detection. Using separate physical paths ensures that a single cable, switch, or network failure does not simultaneously disrupt both links. Common implementations use management interfaces connected through separate management switches.

Configuration requires specifying the destination IP address of the peer switch, the source address or interface for keepalive packets, and optionally the VRF used for management network isolation. Under vpc domain configuration mode, the command takes the form peer-keepalive destination ip-address source ip-address vrf management. Proper VRF configuration ensures keepalive traffic stays on the management network.
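The paragraph above maps to NX-OS configuration along these lines. This is a sketch: the IP addresses, domain number, and interface numbers are hypothetical, and exact syntax varies slightly by platform and release.

```
! Enable the vPC feature and define the domain (on both peers)
feature vpc

vpc domain 10
  ! Keepalive to the peer's mgmt0 address, isolated in the management VRF
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

! Dedicated port-channel carrying the peer-link (control sync plus data)
interface port-channel1
  switchport mode trunk
  vpc peer-link
```

The mirror-image configuration on the second peer swaps the source and destination addresses.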

When the peer-keepalive link fails but the peer-link remains operational, VPC operation continues normally because peers can still synchronize through the peer-link. If instead the peer-link fails while keepalives are still arriving, the secondary peer suspends its VPC member ports to prevent loops, since the keepalives confirm the primary is still operational. If both links fail simultaneously, neither peer can verify the other's state, and a dual-active or split-brain condition can result where both peers believe they are primary, which is precisely why the two links belong in separate failure domains.

The keepalive link carries only control plane traffic and requires minimal bandwidth, typically just a few kilobits per second. Unlike the peer-link which carries both control and data plane traffic, the peer-keepalive link is purely for health monitoring and does not forward user traffic.

The VPC peer-link carries control plane synchronization including MAC address tables, IGMP snooping, and ARP information, but the primary keepalive heartbeat mechanism uses the separate peer-keepalive link. The peer-link serves multiple purposes but the dedicated keepalive link provides more reliable health monitoring.

VPC member ports are the interfaces configured as part of port-channels that connect to downstream devices. These ports carry user traffic and do not participate in peer health monitoring. Member ports depend on properly functioning peer-link and peer-keepalive links for VPC operation.

Management interfaces can be used as the source or destination for peer-keepalive traffic but referring to them generically as management interface does not capture the specific peer-keepalive link configuration and purpose. The peer-keepalive link is the logical construct that uses physical interfaces for keepalive messaging.

Question 32:

A data center network uses Cisco ACI fabric. Which protocol does ACI use for forwarding information between leaf and spine switches?

A) VXLAN

B) TRILL

C) FabricPath

D) SPB

Answer: A

Explanation:

Cisco ACI fabric uses VXLAN as the data plane encapsulation protocol for forwarding traffic between leaf and spine switches. VXLAN encapsulates Layer 2 Ethernet frames within UDP packets, enabling Layer 2 overlay networks across Layer 3 IP infrastructure. In ACI architecture, all traffic between leaf switches traverses the spine using VXLAN tunnels, creating a scalable and flexible fabric architecture.

ACI implements VXLAN with hardware-based encapsulation and decapsulation in leaf switch ASICs, providing line-rate performance without software processing overhead. Each leaf switch acts as a VXLAN Tunnel Endpoint creating and terminating VXLAN tunnels to other leaf switches. The spine switches forward VXLAN-encapsulated traffic based on outer IP headers without inspecting inner encapsulated frames.

The underlay network between leaf and spine switches uses standard IP routing with protocols like IS-IS or OSPF providing reachability between all leaf switch loopback addresses. These loopback addresses serve as VXLAN tunnel endpoints. The separation between underlay IP routing and overlay VXLAN encapsulation provides scalability and operational simplicity.

VXLAN Network Identifiers in ACI map to endpoint groups and bridge domains, providing tenant isolation and traffic segmentation. Each VNI represents a unique network segment, and ACI can support up to 16 million VNIs theoretically, though practical deployments use far fewer. This massive scale accommodates large multi-tenant environments.
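The 16 million figure follows directly from the width of the VNI field:

```
VNI field in the VXLAN header = 24 bits
2^24 = 16,777,216 distinct segment identifiers (~16 million)

Compare the 12-bit 802.1Q VLAN ID: 2^12 = 4,096 segments
```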

ACI extends standard VXLAN with additional functionality including policy enforcement, service insertion, and microsegmentation. The APIC controller programs leaf switches with VXLAN configurations based on declared application policies, automating complex network provisioning and ensuring consistency across the fabric.

The combination of VXLAN overlay with IP underlay enables ACI benefits including any-to-any connectivity, optimal traffic paths through equal-cost multipath, and integration with external networks. VXLAN provides the transport mechanism while ACI policy model provides the intelligence for traffic handling.

TRILL is a Layer 2 multipathing protocol that uses IS-IS for shortest path calculations. While TRILL was proposed for data center networks, Cisco ACI does not use TRILL. ACI chose VXLAN for its standards-based approach and broad industry support.

FabricPath is a Cisco proprietary Layer 2 multipathing technology used in Nexus switches outside of ACI fabric. FabricPath provides similar multipathing benefits as TRILL but ACI architecture specifically implements VXLAN rather than FabricPath for overlay networking.

SPB or Shortest Path Bridging is an IEEE standard for Layer 2 multipathing using IS-IS. While SPB addresses similar problems as TRILL and FabricPath, Cisco ACI does not implement SPB. ACI’s selection of VXLAN aligns with industry standardization and provides proven scalability.

Question 33:

An administrator is configuring QoS on a Nexus switch and needs to limit bandwidth for a specific traffic class. Which QoS mechanism restricts traffic rate and drops excess packets?

A) Shaping

B) Policing

C) Queuing

D) Marking

Answer: B

Explanation:

Policing is the QoS mechanism that restricts traffic rate to a specified limit and drops or remarks packets exceeding the configured rate. Policers enforce bandwidth limits by measuring traffic rates against configured thresholds and taking action on traffic exceeding those thresholds. The typical action is dropping excess packets, though policers can alternatively remark packets to lower priority classes.

Policing operates using token bucket algorithms that define permitted burst sizes and sustained rates. Tokens are added to the bucket at the configured rate, and each packet consumes tokens based on its size. When sufficient tokens exist, packets are transmitted. When tokens are exhausted, packets are dropped or remarked according to policy configuration.

The key characteristic distinguishing policing from shaping is that policing drops excess traffic immediately without buffering. This aggressive enforcement creates a hard rate limit but can result in packet loss and TCP retransmissions when traffic exceeds limits. Policing is typically deployed at network edges to enforce service level agreements or protect network resources from excessive traffic.

Nexus switches implement policing in hardware ASICs enabling line-rate enforcement without performance impact. Policers can be applied to individual interfaces, VLANs, or traffic classes defined through classification. Multiple policers can coexist on a single interface for different traffic types enabling granular bandwidth management.

Single-rate two-color policers define one rate threshold and mark packets as conforming or exceeding. Dual-rate three-color policers define committed and peak rates with three possible outcomes: conforming, exceeding, or violating. Three-color policers provide more sophisticated traffic conditioning suitable for complex SLA enforcement.

Configuration involves defining policy maps with police statements specifying rates in bits per second, burst sizes, and actions for conforming and exceeding traffic. These policies apply to interfaces or classes within service policies, integrating with broader QoS configurations.
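As a sketch of how this looks on NX-OS (the class name, DSCP value, rate, burst, and interface are hypothetical, and supported rate and burst units differ by platform):

```
! Classify traffic, then police it to 100 Mbps, dropping the excess
class-map type qos match-any BULK-DATA
  match dscp 10

policy-map type qos LIMIT-BULK
  class BULK-DATA
    police cir 100 mbps bc 200 ms conform transmit violate drop

interface Ethernet1/1
  service-policy type qos input LIMIT-BULK
```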

Shaping is a QoS mechanism that restricts traffic rate but buffers excess packets rather than dropping them. Shapers delay packets to conform traffic to specified rates, smoothing bursty traffic into steady streams. Unlike policing which drops excess traffic, shaping queues packets for later transmission reducing packet loss but introducing delay.

Queuing determines how packets are selected for transmission from multiple queues and does not directly limit bandwidth. Queuing mechanisms like weighted fair queuing or priority queuing manage relative bandwidth allocation and latency but do not enforce hard rate limits like policing.

Marking modifies packet headers to indicate QoS treatment by setting DSCP, CoS, or other fields. Marking categorizes traffic and signals desired treatment but does not restrict bandwidth or drop packets. Marking typically precedes enforcement mechanisms like policing or queuing.

Question 34:

A network engineer is troubleshooting connectivity issues in a data center using VXLAN. Which UDP port does VXLAN use by default for encapsulation?

A) 4789

B) 8472

C) 4739

D) 6081

Answer: A

Explanation:

VXLAN uses UDP destination port 4789 by default for encapsulation as defined by RFC 7348, the IANA-assigned standard port for VXLAN. This port number is used in the outer UDP header of VXLAN packets, allowing network devices and firewalls to identify VXLAN traffic and apply appropriate handling. Understanding the default port is essential for configuring firewalls, troubleshooting connectivity, and analyzing packet captures.

The outer UDP header contains the destination port 4789 while the source port is typically derived from a hash of the inner packet headers. This source port entropy enables equal-cost multipath load distribution across multiple paths in the underlay network. Different flows receive different source ports ensuring traffic spreads across available links.

VXLAN encapsulation wraps the original Layer 2 Ethernet frame in a VXLAN header containing the VNI, followed by UDP and IP headers. The resulting packet is a standard UDP/IP packet that traverses Layer 3 networks using normal routing. This encapsulation enables Layer 2 connectivity across Layer 3 boundaries.
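Put together, an encapsulated frame on the wire stacks up roughly as follows, outermost header first:

```
Outer Ethernet header     (underlay next hop)
Outer IP header           (source and destination VTEP addresses)
Outer UDP header          (destination port 4789, source port = flow hash)
VXLAN header              (8 bytes: flags + 24-bit VNI + reserved fields)
Original Ethernet frame   (the encapsulated Layer 2 payload)
```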

Network devices including firewalls, load balancers, and monitoring tools must recognize UDP port 4789 as VXLAN traffic. Firewalls should permit this traffic between VXLAN Tunnel Endpoints. Monitoring and troubleshooting tools use port 4789 to identify VXLAN flows and potentially decode inner headers for application visibility.

Earlier implementations before RFC 7348 standardization used different port numbers including the Linux kernel default of 8472. For interoperability, administrators should verify all VXLAN devices use consistent port numbers. Most enterprise equipment now defaults to the standard port 4789.

Some VXLAN implementations allow configuring alternative destination ports for specific use cases including coexistence with other UDP applications or working around firewall restrictions. However, using non-standard ports reduces interoperability and complicates troubleshooting.

Port 8472 was used by early Linux VXLAN implementations before IANA standardization. While some systems may still use this port, modern implementations should use the standardized port 4789. Mixing port numbers between devices causes connectivity failures.

Port 4739 is not associated with VXLAN and does not represent any standard or legacy VXLAN implementation. This port number is not relevant to VXLAN troubleshooting or configuration.

Port 6081 is not the standard VXLAN port. While various protocols use UDP ports in this range, VXLAN specifically uses port 4789 according to RFC standards and industry implementations.

Question 35:

An administrator needs to configure a Nexus switch to allow only specific MAC addresses on an interface. Which security feature provides this functionality?

A) DHCP Snooping

B) Port Security

C) Dynamic ARP Inspection

D) IP Source Guard

Answer: B

Explanation:

Port Security provides functionality to restrict which MAC addresses can send traffic through a switch interface. This security feature limits the number of MAC addresses allowed on a port and can specify exactly which MAC addresses are permitted. Port security prevents unauthorized devices from connecting to the network and protects against MAC flooding attacks that attempt to overflow switch MAC address tables.

Configuration options include setting maximum MAC address limits, specifying static MAC addresses that are always permitted, and defining violation actions when unauthorized MAC addresses attempt to use the port. Administrators can configure secure MAC addresses statically or allow the switch to learn them dynamically up to the configured maximum.

Port security supports multiple violation modes determining how the switch responds to security violations. Shutdown mode disables the interface when violations occur, requiring manual recovery. Restrict mode drops frames from violating MAC addresses while allowing traffic from authorized addresses and generates notifications. Protect mode drops violating traffic silently without notifications.

MAC addresses can be configured as static secure addresses that remain in configuration permanently, dynamic secure addresses that are learned but lost at reboot, or sticky secure addresses that are learned dynamically then saved to running configuration. Sticky learning combines the convenience of dynamic learning with the persistence of static configuration.
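A hedged sketch of the configuration described above (the interface, the maximum of two addresses, and the feature-enable step are hypothetical choices; feature enablement syntax varies across Nexus families):

```
! Enable port security globally (required on some Nexus families)
feature port-security

interface Ethernet1/10
  switchport
  switchport mode access
  ! Allow at most two MACs, learn them stickily, shut down on violation
  switchport port-security
  switchport port-security maximum 2
  switchport port-security mac-address sticky
  switchport port-security violation shutdown
```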

Port security integrates with other security features creating defense-in-depth protection. Combining port security with DHCP snooping, dynamic ARP inspection, and IP source guard provides comprehensive access control preventing various network attacks.

Common use cases include securing access ports in conference rooms or public areas where physical port access cannot be fully controlled, enforcing device compliance by allowing only approved MAC addresses, and limiting wireless access point connections to expected infrastructure devices.

DHCP Snooping protects against rogue DHCP servers and DHCP-based attacks by validating DHCP messages and building a binding database of IP to MAC address mappings. While DHCP snooping provides security, it does not directly restrict which MAC addresses can use an interface based on administrator-defined lists.

Dynamic ARP Inspection validates ARP packets against DHCP snooping bindings to prevent ARP spoofing attacks. DAI protects against man-in-the-middle attacks but does not provide general MAC address filtering. DAI depends on DHCP snooping database rather than explicitly configured MAC address lists.

IP Source Guard prevents IP address spoofing by validating source IP addresses against DHCP snooping bindings. While IPSG provides security, it operates at Layer 3 validating IP addresses rather than filtering Layer 2 MAC addresses. Port security is the appropriate feature for MAC address restriction.

Question 36:

A data center uses Fibre Channel over Ethernet (FCoE). Which class of service value should be used for FCoE traffic to ensure lossless operation?

A) CoS 3

B) CoS 5

C) CoS 7

D) Any CoS value

Answer: A

Explanation:

FCoE traffic should use Class of Service 3 to ensure lossless operation in converged network environments. CoS 3 is the industry-standard priority level designated for FCoE traffic, and networks implementing FCoE must configure Priority Flow Control for CoS 3 to prevent frame loss. Lossless operation is mandatory for FCoE because the Fibre Channel protocol does not implement error recovery at the transport layer like TCP does.

Priority Flow Control is an extension to standard Ethernet flow control specified in IEEE 802.1Qbb. PFC operates on a per-priority basis allowing specific CoS values to be paused while other traffic continues flowing. Configuring PFC for CoS 3 ensures that when congestion occurs, switches can pause FCoE traffic transmission temporarily rather than dropping frames.

Enhanced Transmission Selection defined in IEEE 802.1Qaz works with PFC to guarantee bandwidth allocation for different traffic classes. ETS configuration reserves minimum bandwidth percentages for CoS 3 FCoE traffic ensuring storage traffic receives necessary resources even during network congestion from other applications.

Data Center Bridging is the collective term for technologies including PFC and ETS that enable lossless Ethernet suitable for converged networks carrying both storage and data traffic. DCB extensions transform standard Ethernet into a transport appropriate for storage protocols that require reliability and predictability.

FCoE frame structure includes the CoS value in the IEEE 802.1Q VLAN tag, allowing switches to identify FCoE traffic and apply appropriate treatment. All devices in the FCoE path must recognize CoS 3 and maintain lossless configuration to ensure end-to-end reliability required by storage applications.

Misconfiguring CoS values or failing to enable PFC for CoS 3 results in FCoE frame loss during congestion. Dropped FCoE frames cause upper-layer protocol errors, I/O failures, and potential data corruption. Proper CoS 3 configuration with PFC is non-negotiable for production FCoE deployments.
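On Nexus platforms that support FCoE, the default network-qos policy typically already treats CoS 3 as a no-drop class. A sketch of applying and verifying this; the policy name shown is a default on some families and may differ on yours:

```
! Apply the no-drop network-qos policy system-wide
system qos
  service-policy type network-qos default-nq-policy

! Verify that CoS 3 is a no-drop class and that PFC is operational
show policy-map system type network-qos
show interface ethernet 1/1 priority-flow-control
```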

CoS 5 is traditionally used for voice traffic and requires low latency but not necessarily lossless operation. Voice protocols tolerate occasional packet loss through error concealment and jitter buffers. Using CoS 5 for FCoE would not provide required lossless behavior unless explicitly configured.

CoS 7 typically designates network control traffic such as routing protocols and should not be used for FCoE. Control traffic priority should remain separate from storage traffic to maintain network stability. Using CoS 7 for FCoE deviates from standards and best practices.

While technically any CoS value could be configured for FCoE if PFC is enabled for that value, industry standards and interoperability requirements dictate CoS 3. Using non-standard CoS values complicates configuration and troubleshooting while providing no benefits.

Question 37:

An engineer is configuring Nexus switches and needs to save the running configuration permanently. Which command saves the running configuration to the startup configuration?

A) write memory

B) copy running-config startup-config

C) save config

D) commit

Answer: B

Explanation:

The command copy running-config startup-config saves the current running configuration to the startup configuration file on Cisco Nexus switches. This command ensures that configuration changes persist across reboots by writing the in-memory running configuration to non-volatile storage. Without saving, all configuration changes are lost when the switch restarts or loses power.

The copy command follows Cisco IOS syntax conventions where the source and destination are specified explicitly. Running-config represents the active configuration in memory, while startup-config represents the configuration file loaded at boot. The copy operation transfers contents from source to destination preserving all configuration statements.

Nexus switches maintain separate running and startup configurations unlike some platforms that automatically save changes. This separation provides safety by allowing administrators to test configurations before committing them permanently. If changes cause problems, rebooting restores the previous working configuration.

After copying running to startup, both configurations are identical until additional changes are made to the running configuration. Administrators can compare configurations using show running-config and show startup-config to identify unsaved changes. The reload command uses startup-config, so any unsaved running-config changes are lost.

Best practices recommend saving configurations after verifying that changes work correctly. In production environments, testing changes during maintenance windows and confirming functionality before saving prevents accidental persistence of problematic configurations that could cause issues after future reboots.

Nexus switches also support configuration checkpoints and rollback features providing additional safety. Administrators can create named checkpoints before changes, then rollback to those checkpoints if problems occur. This versioning capability enhances change management beyond simple running and startup configurations.
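In practice the save and the checkpoint safety net look like this (the checkpoint name is arbitrary):

```
switch# copy running-config startup-config
[########################################] 100%
Copy complete.

! Create a named restore point before risky changes; roll back if needed
switch# checkpoint before-qos-change
switch# rollback running-config checkpoint before-qos-change
```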

The write memory command is the traditional Cisco IOS syntax that functions identically to copy running-config startup-config. While some Nexus software versions accept write memory for backward compatibility, the copy command is the modern standard syntax. Both commands achieve the same result of saving the configuration.

The command save config is not valid Cisco Nexus syntax. While the concept of saving configuration is correct, the specific command syntax on Nexus platforms uses copy running-config startup-config. Attempting to use save config results in command not found errors.

The commit command is used in configuration management systems and some network operating systems with candidate configurations, but it is not the standard Nexus command for saving configuration. Nexus uses the copy running-config startup-config syntax following traditional Cisco conventions.

Question 38:

A data center fabric uses MP-BGP EVPN for control plane. Which route type in EVPN is used to advertise MAC address and IP address bindings?

A) Route Type 1

B) Route Type 2

C) Route Type 3

D) Route Type 5

Answer: B

Explanation:

EVPN Route Type 2 is specifically designed to advertise MAC address and IP address bindings in EVPN networks. These MAC/IP Advertisement routes enable VTEPs to learn host MAC addresses, associated IP addresses, and the VTEP where those hosts are located. Route Type 2 provides the foundation for both Layer 2 and Layer 3 forwarding in EVPN-VXLAN fabrics.

Route Type 2 advertisements include multiple important fields including the MAC address of the host, optionally the IP address of the host, the VNI or VLAN identifier, and the next hop which is typically the originating VTEP loopback address. This comprehensive information enables remote VTEPs to build complete forwarding tables mapping MAC addresses to destination VTEPs.

When a VTEP learns a MAC address locally through normal bridge learning, it generates a BGP EVPN Route Type 2 update advertising that MAC to other VTEPs in the fabric. Remote VTEPs receive these updates and install forwarding entries pointing traffic for that MAC address toward the advertising VTEP through VXLAN tunnels.

Route Type 2 also carries IP address information enabling integrated routing and bridging. When a VTEP learns both MAC and IP for a host, it advertises both in the same Route Type 2 update. This allows distributed anycast gateway implementations and optimal routing within the fabric without requiring separate ARP learning floods.

The combination of MAC and IP advertisement in Route Type 2 enables silent host tracking where hosts are learned without flooding. Traditional networks require ARP broadcasts to discover hosts, but EVPN learns host information through BGP distribution eliminating unnecessary flooding and improving efficiency.

Route Type 2 advertisements also support host mobility by including MAC Mobility extended communities. When hosts move between VTEPs, the new location advertises Route Type 2 with updated next hop information and higher sequence numbers, causing remote VTEPs to update their forwarding tables directing traffic to the new location.
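On NX-OS these advertisements can be inspected directly; output formats vary by release:

```
! All Type-2 MAC/IP routes carried in BGP EVPN
show bgp l2vpn evpn route-type 2

! The Layer 2 forwarding entries programmed from EVPN learning
show l2route evpn mac all
show l2route evpn mac-ip all
```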

Route Type 1 is the Ethernet Auto-Discovery route used for fast convergence and split-horizon filtering in multihoming scenarios. While important for EVPN operation, Route Type 1 does not carry MAC address and IP address bindings for individual hosts.

Route Type 3 is the Inclusive Multicast Ethernet Tag route used for building multicast distribution trees and discovering all VTEPs participating in a VNI. Route Type 3 enables BUM traffic handling but does not advertise specific host MAC or IP addresses.

Route Type 5 is the IP Prefix Advertisement route used for distributing IP routing information including external subnets. Route Type 5 carries IP prefixes rather than individual host MAC/IP bindings and is used for inter-subnet routing and external connectivity.

Question 39:

An administrator is configuring LACP on a Nexus switch. Which LACP mode actively initiates negotiation with the remote device?

A) On

B) Active

C) Passive

D) Desirable

Answer: B

Explanation:

LACP Active mode actively initiates negotiation with the remote device by sending LACP packets to establish the port-channel. In Active mode, the switch proactively communicates with the connected device to form the aggregation, exchanging LACP Protocol Data Units that negotiate parameters and confirm both sides support link aggregation. Active mode ensures the port-channel formation process begins immediately.

LACP defines two modes for port-channel member interfaces: Active and Passive. Active mode sends LACP packets and responds to received LACP packets, taking initiative in establishing the aggregation. This proactive approach is typically configured on at least one end of the link to ensure negotiation occurs.

Link Aggregation Control Protocol standardized in IEEE 802.3ad and updated in 802.1AX provides dynamic port-channel formation with negotiation and monitoring. Unlike static port-channels configured with mode On, LACP dynamically detects configuration mismatches, monitors link health, and adjusts the aggregation when member links fail or recover.

LACP negotiation involves exchanging information about system priority, system MAC address, port priority, port number, and operational key. Both devices must agree on these parameters for successful aggregation. Active mode initiates this exchange ensuring the handshake completes even if the remote device is in Passive mode.

Best practices recommend configuring at least one end of an LACP port-channel in Active mode. Configurations with both ends in Passive mode never form a port-channel because neither side initiates negotiation. Configuring both ends Active or one Active and one Passive both work correctly.
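A sketch of an Active-mode LACP bundle on NX-OS (interface range and channel-group number are hypothetical):

```
feature lacp

! Active mode: this side initiates LACP negotiation
interface Ethernet1/1-2
  channel-group 10 mode active

interface port-channel10
  switchport mode trunk
```

Pairing this with a remote side in either active or passive mode forms the channel; passive on both ends never does.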

LACP provides faster failure detection and recovery compared to static port-channels. Member link failures are detected through missing LACP packets triggering immediate removal of failed links from the aggregation. This dynamic membership adjustment maintains connectivity through remaining healthy links.

Mode On configures static port-channeling without LACP negotiation. In On mode, the switch unconditionally aggregates configured member interfaces without protocol exchange. While simpler, On mode lacks LACP’s dynamic detection capabilities and can create misconfigurations if both ends are not perfectly matched.

Passive mode responds to received LACP packets but does not actively initiate negotiation. A port in Passive mode waits for the remote device to send LACP packets before participating in aggregation formation. Passive mode alone cannot establish a port-channel without an Active peer.

Desirable is a mode from Port Aggregation Protocol, a Cisco proprietary predecessor to LACP. PAgP uses modes Desirable and Auto analogous to LACP’s Active and Passive. Modern implementations use LACP rather than PAgP, and Desirable is not a valid LACP mode.

Question 40:

A network engineer is implementing first-hop redundancy in a data center. Which virtual MAC address format does HSRP version 2 use?

A) 0000.0C07.ACxx

B) 0000.0C9F.Fxxx

C) 0000.5E00.01xx

D) 0007.B400.xxyy

Answer: B

Explanation:

HSRP version 2 uses the virtual MAC address format 0000.0C9F.Fxxx where xxx represents the HSRP group number in hexadecimal. This address format differs from HSRP version 1 and provides support for expanded group numbers ranging from 0 to 4095. The new MAC address range was necessary because version 2 expanded beyond version 1’s limitation of 256 groups.

The virtual MAC address is shared among HSRP group members with the active router responding to ARP requests with this address. End devices use the virtual MAC as their default gateway MAC address, enabling transparent failover when the active router changes. The consistent MAC address prevents end devices from needing to update ARP caches during failover.

HSRP version 2 provides several improvements over version 1 including support for IPv6, millisecond timers for faster convergence, and the expanded group number range. The new MAC address format supports the larger group numbers while maintaining the Cisco OUI in the vendor portion of the address.

Group numbers in the MAC address are represented in hexadecimal, so group 1 uses MAC 0000.0C9F.F001, group 10 uses 0000.0C9F.F00A, group 255 uses 0000.0C9F.F0FF, and group 4095 uses 0000.0C9F.FFFF. Understanding hexadecimal conversion is important when identifying HSRP groups from captured traffic or MAC address tables.
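A minimal HSRP version 2 sketch on an NX-OS SVI (the VLAN, addresses, and group number are hypothetical):

```
feature hsrp

interface Vlan100
  no shutdown
  ip address 10.1.100.2/24
  hsrp version 2
  hsrp 10
    ip 10.1.100.1

! Group 10 = 0x00A, so the virtual MAC is 0000.0C9F.F00A
```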

Virtual MAC addresses appear in switch MAC address tables associated with the interface toward the active HSRP router. When failover occurs and a new router becomes active, switches learn the virtual MAC on the interface toward the new active router. This learning happens through gratuitous ARP sent by the new active router.

HSRP configuration should ensure all routers in a group use the same version. Mixing HSRP version 1 and version 2 in the same group causes interoperability problems. Version 2 is recommended for new deployments due to its enhanced capabilities and support for modern requirements.

The MAC address format 0000.0C07.ACxx is used by HSRP version 1 where xx represents the group number in hexadecimal. Version 1 supports only 256 groups due to the single-byte group number field. Modern deployments typically use version 2 for its expanded capabilities.

The MAC address format 0000.5E00.01xx is used by Virtual Router Redundancy Protocol, a standards-based alternative to HSRP. VRRP and HSRP are different protocols with different virtual MAC address ranges. VRRP is defined in RFC 5798 while HSRP is Cisco proprietary.

The MAC address format 0007.B400.xxyy is used by Gateway Load Balancing Protocol, another Cisco first-hop redundancy protocol. GLBP differs from HSRP by providing load balancing across multiple routers rather than active-standby operation. GLBP uses different virtual MAC addresses for each group member.

Question 41:

An administrator needs to configure a Nexus switch interface as a Layer 3 routed port. Which command converts a Layer 2 switchport to a Layer 3 interface?

A) switchport mode routed

B) no switchport

C) ip routing

D) interface mode layer3

Answer: B

Explanation:

The command no switchport converts a Layer 2 switchport to a Layer 3 routed interface on Cisco Nexus switches. By default, most Nexus switch interfaces operate in Layer 2 switchport mode participating in VLANs and Layer 2 forwarding. The no switchport command removes Layer 2 functionality and enables Layer 3 capabilities including IP address assignment and routing protocol participation.

After issuing no switchport, the interface functions as a router interface similar to interfaces on traditional routers. Administrators can assign IP addresses directly to the interface using ip address commands, enable routing protocols, and configure Layer 3 features. The interface no longer participates in VLANs or Layer 2 switching operations.

Layer 3 interfaces are essential for building routed network topologies where each link between switches represents a different IP subnet. This approach is common in modern data center designs using protocols like BGP for fabric routing. Leaf-spine architectures typically configure all inter-switch links as Layer 3 interfaces.

The conversion from Layer 2 to Layer 3 is reversible. Issuing the switchport command converts the interface back to Layer 2 mode. This flexibility allows administrators to adjust interface modes based on network design requirements without hardware changes.

When converting interfaces, administrators should ensure no Layer 2 configurations like VLAN assignments or spanning tree settings remain that could conflict with Layer 3 operation. Clean configuration practices recommend removing Layer 2 settings before or immediately after issuing no switchport.

Some Nexus platforms have dedicated Layer 3 interfaces that operate in routed mode by default without requiring no switchport. These platforms may use different syntax, but the no switchport command represents the standard approach across most Nexus switch families.

The command switchport mode routed is not valid Cisco syntax. While the concept of setting a port to routed mode is correct, the actual command is no switchport. Switchport mode commands include access and trunk but not routed.

The ip routing command enables IP routing globally on the switch allowing the device to forward packets between different IP subnets. While necessary for Layer 3 functionality, ip routing does not convert individual interfaces from Layer 2 to Layer 3. Interface mode must be changed with no switchport.

The command interface mode layer3 is not valid Nexus syntax. Interface mode changes use the switchport or no switchport commands rather than mode keywords. This command would not be recognized by the Nexus command parser.

Question 42:

A data center network engineer is troubleshooting OSPF adjacency issues between Nexus switches. Which OSPF network type should be used on point-to-point links to avoid unnecessary DR/BDR election?

A) Broadcast

B) Point-to-point

C) Non-broadcast

D) Point-to-multipoint

Answer: B

Explanation:

The OSPF point-to-point network type should be configured on point-to-point links to avoid unnecessary Designated Router and Backup Designated Router elections. On true point-to-point connections where only two routers exist, DR/BDR election serves no purpose and adds unnecessary overhead. Point-to-point network type eliminates this election process and speeds OSPF adjacency formation.

Point-to-point network type is appropriate for direct connections between two routers including physical Ethernet links, GRE tunnels, and some serial interfaces. OSPF forms full adjacencies directly between neighbors without the intermediate steps required for multi-access networks. This simplification reduces convergence time and configuration complexity.

When configured as point-to-point, OSPF skips DR/BDR election entirely, and the two routers proceed directly through the database exchange and loading states to the full adjacency state. Hello and dead timers still apply, but the adjacency process completes faster without waiting for an election. This behavior is optimal for modern data center leaf-spine topologies where all inter-switch links are point-to-point.

Configuration requires explicitly setting the network type using the ip ospf network point-to-point command under the interface configuration. Without this command, Ethernet interfaces default to broadcast network type which triggers unnecessary DR/BDR election even on point-to-point connections.

Point-to-point network type also affects LSA generation. On point-to-point networks, routers generate Type 1 Router LSAs describing the connection without generating Type 2 Network LSAs which are only relevant for multi-access networks with DRs. This reduces LSDB size slightly in large deployments.

Modern data center OSPF designs heavily utilize point-to-point network types on fabric links. Combined with features like BFD for fast failure detection and ECMP for load balancing, point-to-point OSPF provides efficient routing suitable for spine-leaf architectures.

Broadcast network type is the default for Ethernet interfaces and performs DR/BDR election. While functional on point-to-point links, broadcast type introduces unnecessary election overhead and slower convergence. On true point-to-point connections, explicitly configuring the point-to-point network type avoids this overhead.

Non-broadcast network type is designed for NBMA networks like Frame Relay or ATM where broadcast capability does not exist but multiple routers share the same network segment. NBMA networks perform DR/BDR election and require neighbor statements. This type is inappropriate for modern Ethernet point-to-point links.

Point-to-multipoint network type treats NBMA networks as collections of point-to-point links avoiding DR/BDR election. While eliminating election, point-to-multipoint is designed for one-to-many topologies not true point-to-point connections between two routers. Point-to-point type is more appropriate for simple dual-router links.

Question 43:

An engineer is configuring Nexus switches and needs to verify the current VLAN configuration. Which command displays all configured VLANs and their associated names?

A) show vlan

B) show vlan brief

C) show running-config vlan

D) display vlan

Answer: B

Explanation:

The command show vlan brief displays a concise summary of all configured VLANs including VLAN IDs, names, status, and associated ports. This command provides the most commonly needed VLAN information in an easy-to-read format ideal for quick verification and troubleshooting. The brief keyword formats output for optimal readability showing essential information without excessive detail.

Output includes columns for VLAN ID, VLAN name, status showing active or suspended, and ports assigned to each VLAN. This comprehensive yet concise view allows administrators to quickly verify VLAN configuration, identify which ports belong to which VLANs, and confirm VLAN status across the switch.

VLAN names shown in the output help identify the purpose of each VLAN beyond numeric IDs. Descriptive names like USERS_VLAN10 or SERVERS_VLAN20 make configurations more understandable and reduce errors during troubleshooting. Default VLAN names like VLAN0010 indicate VLANs created without custom names.

The show vlan brief command is among the most frequently used VLAN verification commands during initial configuration, troubleshooting connectivity issues, and auditing network configurations. The summary format balances completeness with readability making it suitable for daily operations.

For more detailed VLAN information including spanning tree parameters, private VLAN relationships, or specific VLAN properties, administrators can use show vlan id number or show vlan without the brief keyword. These variations provide additional details when needed for deeper troubleshooting.

Nexus switches support thousands of VLANs allowing extensive network segmentation. The show vlan brief command displays all configured VLANs though on switches with many VLANs the output can be lengthy. Filtering options like show vlan brief | include pattern help locate specific VLANs in large configurations.

The show vlan command without brief keyword displays more detailed information including parameters like ring numbers, parent VLANs for private VLANs, and state change reasons. While comprehensive, the additional details make output longer and sometimes harder to read for basic verification tasks.

The show running-config vlan command displays VLAN configuration as it appears in the running configuration file using configuration syntax rather than operational state format. This view shows configured parameters but does not indicate operational status or port assignments making it less useful for operational verification.

The command display vlan is not valid Cisco Nexus syntax. While some network operating systems from other vendors use display commands, Cisco platforms use show commands for viewing operational information. This command would result in an error on Nexus switches.

Question 44:

A data center uses Cisco Nexus switches with VPC. Which statement correctly describes VPC behavior when the peer-link fails but the peer-keepalive link remains operational?

A) Both VPC peers continue forwarding traffic normally

B) The secondary peer suspends its VPC member ports

C) Both peers suspend VPC member ports to prevent loops

D) The primary peer suspends its VPC member ports

Answer: B

Explanation:

When the VPC peer-link fails while the peer-keepalive link remains operational, the secondary peer suspends its VPC member ports to prevent loops and split-brain scenarios. This behavior is a critical safety mechanism that ensures network stability by maintaining a single active path for each VPC connection. The secondary peer recognizes it can still communicate with the primary peer through keepalive but lacks peer-link synchronization, so it conservatively suspends its VPC ports.

The VPC primary and secondary roles are determined during initial peer establishment based on configured priority values or system MAC addresses when priorities match. The role assignment becomes critical during peer-link failures because only the primary peer continues forwarding traffic on VPC member ports ensuring consistent operation.

This failover behavior protects against split-brain conditions where both peers might simultaneously forward traffic for the same VPC creating loops and MAC address flapping. By suspending the secondary peer’s VPC ports, the network maintains loop-free topology even when peer-link synchronization is lost.

The peer-keepalive link’s continued operation is essential for this controlled failover. If both peer-link and peer-keepalive fail simultaneously, each peer cannot determine the other’s status creating more complex failure scenarios. Proper design ensures peer-link and peer-keepalive use independent failure domains.

Traffic from downstream devices connected via VPC automatically fails over to the primary peer when the secondary suspends its ports. Devices detect link failure on the secondary peer’s ports and redirect traffic through remaining active links connected to the primary peer. This failover typically completes within seconds.

To recover from peer-link failure, administrators must restore the peer-link connectivity. Once the peer-link is reestablished and synchronization completes, the secondary peer automatically unsuspends its VPC member ports and normal redundant operation resumes. No manual intervention is required beyond fixing the peer-link failure.

Both VPC peers do not continue forwarding normally when peer-link fails. The loss of peer-link prevents MAC address and state synchronization between peers. Allowing both peers to forward without synchronization would create loops and inconsistent forwarding behavior. The secondary suspension prevents these problems.

Both peers do not suspend VPC member ports during peer-link failure. If both peers suspended their ports, all downstream devices would lose connectivity even though the infrastructure remains partially functional. Keeping the primary peer active maintains connectivity through the surviving path.

The primary peer does not suspend its VPC member ports during peer-link failure. The primary role specifically indicates this peer should continue forwarding to maintain network connectivity. Suspending the primary’s ports would create unnecessary outage when alternative paths exist through the primary.

Question 45:

An administrator is configuring SNMPv3 on a Nexus switch for secure monitoring. Which SNMPv3 security level provides authentication and encryption?

A) noAuthNoPriv

B) authNoPriv

C) authPriv

D) privAuth

Answer: C

Explanation:

The SNMPv3 security level authPriv provides both authentication and encryption delivering the highest security for SNMP communications. Authentication verifies the identity of SNMP messages ensuring they originate from legitimate sources and have not been tampered with during transmission. Encryption protects message confidentiality preventing eavesdropping on network management traffic that might contain sensitive configuration or status information.

AuthPriv implements authentication using protocols like HMAC-MD5 or HMAC-SHA to create cryptographic message digests that validate message integrity and origin. The authentication process ensures that messages come from authorized users and have not been modified. If authentication fails, the switch discards the message preventing unauthorized access.

Privacy or encryption under authPriv uses protocols like DES or AES to encrypt SNMP message payloads. Encryption transforms readable data into ciphertext that cannot be understood without the proper decryption key. This protects sensitive information in SNMP messages from network sniffing and packet capture analysis.

SNMPv3 with authPriv requires configuring user accounts with authentication passwords and privacy passwords or keys. These credentials are used by both the SNMP agent on the switch and the network management system to establish secure communications. Strong password practices are essential to maintain security.

The security improvements in SNMPv3 address significant vulnerabilities in earlier SNMP versions. SNMPv1 and v2c use community strings transmitted in clear text providing minimal security. SNMPv3’s authentication and encryption make it suitable for production networks where security is important.

Configuration complexity is higher for authPriv compared to less secure modes but the security benefits justify the additional effort. Administrators must manage authentication and privacy credentials, select appropriate algorithms, and ensure consistent configuration across network devices and management systems.

NoAuthNoPriv provides no authentication and no encryption representing the lowest SNMPv3 security level. This mode offers minimal improvement over SNMPv2c and should not be used in production environments where security matters. NoAuthNoPriv exists primarily for basic compatibility or testing.

AuthNoPriv provides authentication without encryption validating message origin and integrity but transmitting message content in clear text. While better than noAuthNoPriv, authNoPriv exposes SNMP data to eavesdropping. This mode might be acceptable in trusted networks but authPriv is preferable when encryption capabilities exist.

PrivAuth is not a valid SNMPv3 security level. The correct terminology is authPriv, indicating authentication with privacy. The naming convention is standardized: auth denotes authentication and priv denotes privacy or encryption.