Question 1
A network engineer is designing a data center network using the spine-leaf architecture. How many hops are required for any server to communicate with any other server in this topology?
A) One hop through a leaf switch
B) Two hops maximum through one spine switch
C) Three hops through multiple spine switches
D) Four hops through leaf and spine switches
Answer: B
Explanation:
The spine-leaf architecture represents a fundamental shift in data center network design that addresses the limitations of traditional three-tier hierarchies. Understanding traffic flow patterns in this topology is essential for capacity planning, latency prediction, and performance optimization.
Two hops maximum through one spine switch is the correct answer for server-to-server communication in spine-leaf architecture. In this design, every leaf switch maintains full mesh connectivity to every spine switch, but leaf switches never connect directly to other leaf switches, and spine switches never connect to other spine switches. When a server connected to one leaf switch needs to communicate with a server on a different leaf switch, traffic traverses exactly two hops: from the source leaf up to any spine switch, then from that spine down to the destination leaf. This consistent path length ensures predictable latency regardless of which servers are communicating. The architecture eliminates the variable hop counts present in traditional designs where communication between different aggregation blocks might traverse core, aggregation, and access layers multiple times. The deterministic two-hop pattern simplifies capacity planning because network engineers can calculate exactly how much bandwidth each spine-to-leaf connection must support.
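To make the rule concrete, here is a minimal Python sketch. The server and leaf names are invented; the model simply encodes the full-mesh leaf-to-spine assumption described above:

    # Minimal sketch of spine-leaf hop counting (hypothetical attachments).
    server_to_leaf = {
        "srv1": "leaf1",
        "srv2": "leaf1",
        "srv3": "leaf2",
    }

    def fabric_hops(src, dst):
        """Switch hops between two servers; every leaf uplinks to every spine."""
        if server_to_leaf[src] == server_to_leaf[dst]:
            return 1   # both servers on the same leaf: local switching
        return 2       # leaf -> any spine -> leaf

    assert fabric_hops("srv1", "srv2") == 1
    assert fabric_hops("srv1", "srv3") == 2   # never more than two hops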
One hop through a leaf switch would only be possible for servers connected to the same leaf switch communicating with each other. While this represents optimal local switching, it does not describe the general case of any server communicating with any other server across the fabric. Most inter-server traffic in modern data centers involves servers on different leaf switches.
Three hops through multiple spine switches would indicate a design flaw. Properly implemented spine-leaf architectures never require traffic to traverse multiple spine switches because leaf switches connect directly to all spines. If traffic needed to go from one spine to another spine, the architecture would no longer be true spine-leaf and would introduce unnecessary latency and complexity.
Four hops through leaf and spine switches represents even more severe architectural problems. Standard spine-leaf designs guarantee two-hop paths between any two servers on different leaves. Four hops would indicate either misunderstanding of the topology or a broken implementation that violates spine-leaf principles.
The consistent two-hop maximum path length is a defining characteristic of spine-leaf architecture that provides predictable performance and simplified traffic engineering.
Question 2
An administrator is configuring Cisco Nexus switches and needs to enable a feature before it can be used. Which command enables the LACP feature?
A) enable lacp
B) feature lacp
C) service lacp
D) protocol lacp
Answer: B
Explanation:
The Cisco NX-OS operating system uses a modular architecture in which features must be explicitly enabled before they can be configured and used. This design reduces memory consumption and attack surface by loading only the necessary functionality. Understanding feature enablement is fundamental to NX-OS administration.
The feature lacp command enables the Link Aggregation Control Protocol feature on Cisco Nexus switches. NX-OS requires administrators to enable features using the feature command followed by the feature name. Once enabled, LACP functionality becomes available for configuring port channels that use dynamic negotiation rather than static configuration. Enabling LACP loads the necessary code modules into memory, makes LACP configuration commands available, and allows the switch to send and receive LACP protocol data units. The feature remains enabled across reboots once the configuration is saved. Administrators can verify enabled features using the show feature command, which displays all available features and their current states. Disabling features when they are no longer needed conserves system resources. The feature-based architecture provides granular control over switch capabilities and helps administrators understand exactly which protocols and services are active.
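For illustration, here is a hedged sketch of pushing this command remotely with the NAPALM library (covered later in this set). The hostname and credentials are placeholders, and it assumes NAPALM is installed and management connectivity exists:

    # Sketch: staging and committing "feature lacp" with NAPALM.
    from napalm import get_network_driver

    driver = get_network_driver("nxos")
    device = driver("switch1.example.com", "admin", "password")  # placeholders
    device.open()
    device.load_merge_candidate(config="feature lacp")  # stage the change
    print(device.compare_config())                      # review the diff first
    device.commit_config()                              # apply it
    device.close()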
The enable lacp command uses incorrect syntax for NX-OS. While some network operating systems use enable as a command verb, NX-OS specifically uses feature for enabling major functionality. The enable keyword is associated with entering privileged EXEC mode on IOS platforms rather than with activating features.
The service lacp command also represents incorrect syntax. Some IOS platforms use service commands for certain global settings, but NX-OS does not use this syntax for feature enablement. The service keyword serves different purposes in different operating systems and does not activate features in NX-OS.
The protocol lacp command is not valid NX-OS syntax. While LACP is indeed a protocol, NX-OS does not use protocol as the command verb for feature activation. This represents a conceptual misunderstanding of NX-OS command structure.
Understanding the feature command syntax is essential for configuring and managing Cisco Nexus switches effectively across all NX-OS platforms.
Question 3
A storage administrator needs to implement a Fibre Channel topology that provides the highest availability by connecting each device to the fabric through two separate paths. Which topology should be implemented?
A) Point-to-point
B) Arbitrated loop
C) Dual-fabric topology
D) Single switched fabric
Answer: C
Explanation:
Fibre Channel storage area networks require careful topology design to balance performance, availability, and cost. High availability configurations eliminate single points of failure through redundant components and multiple data paths.
Dual-fabric topology provides the highest availability by connecting each device to two completely separate fabrics through independent paths. In this configuration, each host and storage array has at least two host bus adapters or target ports, with each connecting to a different fabric. The two fabrics operate independently with separate switches, zoning configurations, and management domains. If one fabric fails due to switch failure, cable damage, or misconfiguration, traffic continues flowing through the second fabric without interruption. Multipathing software on hosts manages both paths, distributing I/O across fabrics for load balancing and automatically redirecting traffic when failures occur. This architecture provides both high availability and increased aggregate bandwidth since both fabrics actively carry traffic. The redundancy extends beyond just switches to include cables, HBAs, and storage controller ports, creating a truly fault-tolerant design. Organizations requiring maximum uptime for mission-critical applications implement dual-fabric designs despite the increased cost of duplicate infrastructure.
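The failover behavior that multipathing software provides can be sketched in a few lines of Python. This is an illustrative model only, with invented path names, not an actual MPIO implementation:

    # Toy model of multipath I/O across two independent fabrics.
    from itertools import cycle

    paths = ["fabricA/hba0", "fabricB/hba1"]   # one path per fabric
    failed = {"fabricA/hba0"}                  # e.g., a fabric A switch failure
    rr = cycle(paths)

    def next_path():
        """Round-robin across healthy paths, skipping any failed fabric."""
        for _ in range(len(paths)):
            p = next(rr)
            if p not in failed:
                return p
        raise RuntimeError("all paths down")

    print(next_path())   # -> fabricB/hba1: I/O continues on the surviving fabric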
Point-to-point topology connects devices directly without fabric switches, creating a dedicated connection between one initiator and one target. While simple and low-latency, point-to-point provides no path redundancy. If the single connection fails, communication ceases completely. This topology suits specific use cases like direct server-to-storage attachments but cannot provide the availability required for enterprise environments.
Arbitrated loop topology connects multiple devices in a logical loop where devices arbitrate for loop access to communicate. This older FC topology has largely been replaced by switched fabrics. Arbitrated loop suffers from performance issues when many devices share the loop, and a single device failure can disrupt the entire loop. Loop configurations do not provide the high availability needed for critical applications.
Single switched fabric provides centralized connectivity through FC switches and supports many devices within a single fault domain. While switches can be redundant within the single fabric, a fabric-wide issue such as a zoning misconfiguration or a disruptive fabric reconfiguration affects all devices. A single fabric lacks the complete isolation that dual fabrics provide.
Dual-fabric topology represents the gold standard for high-availability SAN designs by eliminating single points of failure through complete infrastructure redundancy.
Question 4
An engineer is troubleshooting VXLAN connectivity issues. Which UDP port does VXLAN use by default for encapsulation?
A) 4789
B) 8472
C) 4739
D) 1024
Answer: A
Explanation:
Virtual Extensible LAN operates as an overlay network technology that encapsulates Layer 2 Ethernet frames within UDP packets for transport across Layer 3 infrastructure. Understanding VXLAN’s transport mechanism including port assignments is essential for firewall configuration, troubleshooting, and network design.
UDP port 4789 is the IANA-assigned default port for VXLAN encapsulation. When VXLAN Tunnel Endpoints encapsulate Ethernet frames, they wrap them in a VXLAN header, then a UDP header using destination port 4789, followed by an IP header for routing across the underlay network. The UDP port selection allows VXLAN traffic to traverse existing IP infrastructure including routers and firewalls. Network administrators must ensure that firewalls and access control lists permit UDP port 4789 between VTEPs for VXLAN communication to function. The well-known port assignment enables consistent configuration across vendors and simplifies troubleshooting. Source ports are typically randomized based on hash values calculated from the inner frame to provide entropy for ECMP load distribution across underlay paths. Some early VXLAN implementations used different port numbers before standardization, but modern deployments consistently use 4789. Traffic analysis and packet captures should show UDP 4789 when examining VXLAN traffic between VTEPs.
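The header layout and port check are small enough to show directly. Here is a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348; the VNI value is arbitrary:

    # Sketch: packing the RFC 7348 VXLAN header and the UDP port a filter matches.
    import struct

    VXLAN_PORT = 4789      # IANA-assigned destination port
    I_FLAG = 0x08          # "VNI is valid" flag in the first header byte

    def vxlan_header(vni):
        """Pack Flags(8) | Reserved(24) | VNI(24) | Reserved(8)."""
        return struct.pack("!II", I_FLAG << 24, vni << 8)

    hdr = vxlan_header(10100)
    assert len(hdr) == 8 and hdr[0] == 0x08

    def permits_vxlan(udp_dst_port):
        """What a firewall rule permitting VXLAN between VTEPs must match."""
        return udp_dst_port == VXLAN_PORT

    assert permits_vxlan(4789)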
UDP port 8472 was used by early Linux VXLAN implementations before IANA standardized port 4789. Some legacy systems might still use this port, but it is not the standard default. Using non-standard ports creates interoperability issues when integrating equipment from different vendors. Modern implementations should use the standardized port for compatibility.
UDP port 4739 is not associated with VXLAN and represents an incorrect port number. This value might appear in misconfigured systems or documentation errors but is not valid for VXLAN encapsulation. Using wrong port numbers in firewall rules or configuration would break VXLAN connectivity.
UDP port 1024 falls at the bottom of the registered port range and is not related to VXLAN; it is simply an arbitrary distractor. VXLAN specifically requires port 4789 for proper operation according to RFC 7348, which defines the VXLAN specification.
Understanding the correct VXLAN UDP port is critical for successful overlay network deployment and troubleshooting connectivity issues in software-defined data centers.
Question 5
A data center is implementing Cisco ACI. Which protocol does the Application Policy Infrastructure Controller use to communicate policies to leaf switches?
A) SNMP
B) NetFlow
C) OpFlex
D) NETCONF
Answer: C
Explanation:
Cisco Application Centric Infrastructure relies on a distributed policy model where centralized controllers define intent and distributed switches enforce policies. The communication protocol between controllers and switches must support declarative policy distribution and state synchronization.
OpFlex is the protocol the Application Policy Infrastructure Controller uses to communicate policies to leaf switches in ACI fabric. OpFlex is a declarative policy protocol where the APIC declares desired policy outcomes and leaf switches determine how to implement those policies based on local conditions. This approach differs from imperative protocols where controllers send specific commands that switches must execute exactly. The APIC compiles high-level application policies into concrete configurations including endpoint groups, contracts, bridge domains, and VRFs. These compiled policies are transmitted to leaf switches via OpFlex messages. Leaf switches maintain local policy repositories synchronized with the APIC and make autonomous forwarding decisions based on received policies. OpFlex supports policy resolution, endpoint reporting, and state synchronization between the centralized policy controller and distributed enforcement points. The protocol enables the APIC to manage thousands of leaf switches efficiently by declaring intent rather than micromanaging individual switch configurations. When policies change, OpFlex distributes updates to affected switches, ensuring consistent policy enforcement across the fabric.
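OpFlex itself is not a public scripting API, but the declarative pattern it embodies can be illustrated with a toy reconciliation loop in Python: the controller declares desired state, and the switch works out what to change locally. All names and values below are invented:

    # Toy illustration of declarative policy (not the OpFlex wire protocol).
    desired = {"epg-web": {"vlan": 110}, "epg-db": {"vlan": 120}}   # declared by controller
    local   = {"epg-web": {"vlan": 110}}                            # state on one leaf

    def reconcile(desired, local):
        """The leaf decides locally how to realize the declared policy."""
        for name, policy in desired.items():
            if local.get(name) != policy:
                local[name] = policy
                print(f"programming {name} -> {policy}")
        for name in set(local) - set(desired):
            del local[name]        # withdraw policy no longer declared

    reconcile(desired, local)      # -> programming epg-db -> {'vlan': 120}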
SNMP is the Simple Network Management Protocol used for monitoring and basic device management through GET and SET operations on management information bases. While SNMP might be used for monitoring ACI components, it is not the policy distribution protocol. SNMP’s request-response model and limited data structures cannot efficiently handle complex policy distribution at the scale required by ACI.
NetFlow is a network telemetry protocol that exports flow records for traffic analysis, capacity planning, and security monitoring. NetFlow provides visibility into network traffic patterns but does not distribute policies or configurations. The protocol operates unidirectionally sending flow data from network devices to collectors.
NETCONF is a network configuration protocol using XML-encoded data and RPC-based operations for device management. While NETCONF can configure network devices and is used in various SDN implementations, Cisco ACI specifically uses OpFlex for policy communication between APIC and leaf switches. NETCONF serves different purposes in network automation contexts.
OpFlex’s declarative policy model enables ACI’s scalable, intent-based infrastructure where business requirements translate automatically into distributed network policies.
Question 6
An administrator needs to configure VPC on Cisco Nexus switches. What is the purpose of the VPC peer-keepalive link?
A) To synchronize MAC address tables between peers
B) To forward data traffic between peer switches
C) To detect peer switch failures and prevent split-brain scenarios
D) To provide additional bandwidth for VPC traffic
Answer: C
Explanation:
Virtual Port Channel implementations require mechanisms to coordinate between peer switches and handle failure scenarios gracefully. The peer-keepalive link serves a critical monitoring function distinct from data forwarding or state synchronization.
Detecting peer switch failures and preventing split-brain scenarios is the purpose of the VPC peer-keepalive link. This dedicated Layer 3 connection between VPC peer switches carries periodic heartbeat messages that verify the peer is alive. The peer-keepalive link operates independently from the peer-link, using a separate network path, typically the management network or an out-of-band connection. Heartbeat messages, transmitted every second by default with configurable timeout values, enable rapid failure detection. By combining peer-keepalive status with peer-link status, a switch can determine whether its partner has failed entirely or whether only the peer-link has failed. This distinction is critical for preventing split-brain scenarios where both switches believe they are the primary and continue forwarding VPC traffic independently, potentially causing duplicate frames and network instability. If the peer-link fails but peer-keepalive messages continue, the secondary peer suspends its VPC VLANs, allowing only the primary to forward traffic and maintaining a loop-free topology. The peer-keepalive link carries only keepalive protocol messages, not data traffic or state synchronization information. Proper peer-keepalive configuration using a reliable, independent network path is essential for VPC stability.
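The decision logic just described can be summarized in a short Python sketch. The inputs stand in for state a real switch derives from its keepalive timers and peer-link status; this is an illustration, not Cisco's implementation:

    # Sketch of the vPC failure-handling decision described above.
    def vpc_action(peer_link_up, keepalive_alive, is_primary):
        if peer_link_up:
            return "normal forwarding"
        if keepalive_alive:
            # Peer is alive but the peer-link failed: avoid split-brain.
            return "keep forwarding" if is_primary else "suspend vPC VLANs"
        # No keepalives and no peer-link: assume the peer has died.
        return "assume peer failure, keep forwarding"

    assert vpc_action(False, True, is_primary=False) == "suspend vPC VLANs"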
Synchronizing MAC address tables between peers is the function of the VPC peer-link, not the peer-keepalive link. The peer-link carries Cisco Fabric Services protocol messages that synchronize forwarding information including MAC addresses, ARP tables, and IGMP snooping data between VPC peers. This synchronization ensures both switches have consistent forwarding information.
Forwarding data traffic between peer switches also occurs over the peer-link rather than the peer-keepalive link. When frames arrive on VPC member ports destined for hosts connected to the peer switch, they traverse the peer-link for delivery. The peer-keepalive carries only control plane keepalive messages.
Providing additional bandwidth for VPC traffic is not the peer-keepalive purpose. Bandwidth for VPC data forwarding comes from the peer-link, which should be sized appropriately for expected traffic volumes. The peer-keepalive link carries minimal traffic consisting only of heartbeat messages requiring very little bandwidth.
The peer-keepalive link’s monitoring function provides the critical failure detection capability that enables VPC to handle failures gracefully without creating network loops or extended outages.
Question 7
A storage engineer is configuring FCoE on Cisco Nexus switches. Which technology ensures lossless Ethernet for FCoE traffic?
A) Spanning Tree Protocol
B) Priority Flow Control
C) LACP
D) HSRP
Answer: B
Explanation:
Fibre Channel over Ethernet convergence requires Ethernet networks to provide lossless transport characteristics similar to native Fibre Channel. Standard Ethernet allows frame drops during congestion, which is unacceptable for storage traffic. Data Center Bridging extensions address this requirement.
Priority Flow Control ensures lossless Ethernet for FCoE traffic by providing per-priority pause functionality. PFC, standardized as IEEE 802.1Qbb, extends IEEE 802.3x flow control by enabling selective pause on a per-class-of-service basis rather than pausing all traffic on a link. FCoE traffic is assigned to a specific CoS value, typically CoS 3, and PFC protects only that priority class. When a switch's ingress buffers for the FCoE priority class approach capacity, it sends PFC pause frames to the upstream device requesting a temporary transmission halt for that specific class. Other traffic classes continue flowing without interruption. This selective flow control prevents FCoE frame drops due to buffer overflow while allowing non-storage traffic to use standard Ethernet behaviors. PFC operates on a hop-by-hop basis throughout the network path between FCoE devices. All switches in the FCoE forwarding path must support and enable PFC for the designated CoS to maintain lossless characteristics end-to-end. Priority Flow Control is one component of Data Center Bridging alongside Enhanced Transmission Selection for bandwidth allocation and Data Center Bridging Exchange protocol for capability discovery and configuration.
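A toy Python model of per-priority pause makes the selectivity clear. The buffer occupancy numbers are invented, and CoS 3 carries FCoE as described above:

    # Toy model: only the congested class is paused, others keep flowing.
    PAUSE_THRESHOLD = 0.9                  # fraction of ingress buffer in use
    buffers = {0: 0.40, 3: 0.95, 5: 0.20}  # invented occupancy per CoS queue

    def pfc_pause_classes(buffers):
        """Return only the CoS values that need a pause sent upstream."""
        return [cos for cos, used in buffers.items() if used >= PAUSE_THRESHOLD]

    print(pfc_pause_classes(buffers))   # -> [3]: FCoE pauses, CoS 0 and 5 continue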
Spanning Tree Protocol prevents Layer 2 loops by blocking redundant paths and does not relate to lossless characteristics. STP determines active forwarding paths and blocks alternates to prevent broadcast storms. While important for loop-free topologies, STP provides no flow control or congestion management for ensuring lossless delivery.
LACP is the Link Aggregation Control Protocol that dynamically negotiates port channel formation between devices. LACP provides link bundling for increased bandwidth and redundancy but does not implement flow control or prevent frame drops during congestion. Link aggregation complements but does not replace the need for lossless transport.
HSRP is the Hot Standby Router Protocol providing first-hop redundancy for default gateways. HSRP operates at Layer 3 for gateway redundancy and has no relationship to Ethernet flow control or lossless characteristics. HSRP addresses availability rather than loss prevention.
Priority Flow Control implementation is mandatory for FCoE deployments to provide the lossless transport that storage protocols require for reliable operation.
Question 8
An administrator is configuring a Cisco Nexus switch and needs to save the running configuration to startup configuration. Which command accomplishes this?
A) write memory
B) copy running-config startup-config
C) save config
D) Both A and B
Answer: D
Explanation:
Configuration persistence ensures that changes made to network devices survive reboots and power cycles. Cisco NX-OS provides multiple command syntaxes for saving configurations to accommodate different administrative preferences and maintain compatibility with various Cisco operating systems.
Both write memory and copy running-config startup-config save the running configuration to the startup configuration on Cisco Nexus switches. The copy running-config startup-config syntax is the modern, explicit method that clearly indicates the source and destination for the copy operation. This command copies the current active configuration stored in RAM to the startup configuration file in NVRAM that the switch reads during boot. The write memory command provides the same functionality using shorter legacy syntax inherited from older Cisco IOS versions. Many administrators prefer write memory for its brevity during interactive configuration sessions. NX-OS supports both syntaxes for flexibility and compatibility. Without saving, configuration changes exist only in the running configuration and disappear when the switch reloads. Best practice involves saving configurations after making and verifying changes, creating a recovery point for the new configuration. Administrators can verify a successful save by comparing the running and startup configurations with the show running-config and show startup-config commands.
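Saving the configuration can also be scripted. Below is a hedged sketch using NAPALM's generic cli() method; the hostname and credentials are placeholders, and it assumes the platform accepts the copy command over its management API:

    # Sketch: issuing the save command remotely via NAPALM.
    from napalm import get_network_driver

    device = get_network_driver("nxos")("nx1.example.com", "admin", "password")
    device.open()
    output = device.cli(["copy running-config startup-config"])  # or "write memory"
    print(output)
    device.close()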
Using write memory alone is correct but incomplete since copy running-config startup-config also works. Limiting the answer to only one command ignores NX-OS’s support for multiple valid syntaxes accomplishing the same task.
Using copy running-config startup-config alone is similarly correct but incomplete. While this is the preferred modern syntax, write memory remains valid and widely used. The question asks which command accomplishes the task without restricting to only one option.
The save config command does not exist in NX-OS syntax. This represents incorrect command structure that would produce a syntax error. NX-OS uses copy or write commands rather than save for configuration management.
Understanding that NX-OS accepts multiple command syntaxes for common operations helps administrators work efficiently regardless of their background with different Cisco operating systems.
Question 9
A network engineer needs to implement a routing protocol in the data center that provides fast convergence and uses a link-state algorithm. Which protocol should be selected?
A) RIP
B) EIGRP
C) OSPF
D) BGP
Answer: C
Explanation:
Data center routing protocols must provide rapid convergence to minimize disruption when network changes occur, efficient resource utilization, and scalability to support growing infrastructures. The algorithm type fundamentally affects protocol behavior and suitability for different environments.
OSPF is the routing protocol that provides fast convergence using a link-state algorithm. Open Shortest Path First builds a complete topology database describing all routers and links within an area by exchanging Link State Advertisements. Each router independently runs the Dijkstra shortest path first algorithm against this database to calculate optimal paths to all destinations. When topology changes occur, routers flood LSAs describing the change throughout the area, and all routers recalculate paths using the updated topology information. This link-state approach enables rapid convergence because routers have complete topology knowledge and can immediately compute alternate paths when failures occur. OSPF supports equal-cost multipath routing, hierarchical design through areas, and authentication for security. In data center environments, OSPF provides deterministic behavior and fast failover critical for maintaining application availability. The protocol scales well with proper area design and works effectively in spine-leaf architectures when configured appropriately.
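Each OSPF router effectively runs the calculation below against its link-state database. Here is a minimal Dijkstra SPF sketch in Python over an invented four-router topology with equal link costs:

    # Minimal Dijkstra SPF, mirroring what each router computes from its LSDB.
    import heapq

    lsdb = {                                   # router -> {neighbor: cost}
        "R1": {"R2": 10, "R3": 10},
        "R2": {"R1": 10, "R4": 10},
        "R3": {"R1": 10, "R4": 10},
        "R4": {"R2": 10, "R3": 10},
    }

    def spf(root):
        dist = {root: 0}
        pq = [(0, root)]
        while pq:
            d, node = heapq.heappop(pq)
            if d > dist.get(node, float("inf")):
                continue                       # stale queue entry
            for nbr, cost in lsdb[node].items():
                if d + cost < dist.get(nbr, float("inf")):
                    dist[nbr] = d + cost
                    heapq.heappush(pq, (d + cost, nbr))
        return dist

    print(spf("R1"))   # -> {'R1': 0, 'R2': 10, 'R3': 10, 'R4': 20}

Note that R4 is reachable at equal cost through R2 or R3, which is exactly the situation OSPF's equal-cost multipath support exploits.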
RIP is the Routing Information Protocol using a distance-vector algorithm rather than link-state. RIP determines paths based on hop count metrics and has slow convergence due to counting-to-infinity problems and periodic update intervals. Maximum hop count limitations and slow convergence make RIP unsuitable for modern data centers. RIP represents legacy technology superseded by more capable protocols.
EIGRP is Enhanced Interior Gateway Routing Protocol, which uses a hybrid algorithm combining distance-vector and link-state characteristics. While EIGRP provides fast convergence through the DUAL algorithm and feasible successors, it uses an advanced distance-vector approach rather than pure link-state. EIGRP is Cisco proprietary, though RFC 7868 documents the protocol. For multi-vendor environments or strict link-state requirements, OSPF is preferred.
BGP is Border Gateway Protocol using a path-vector algorithm designed for inter-domain routing. While BGP is increasingly used in data center fabrics for its scalability and policy flexibility, it is not a link-state protocol. BGP exchanges path attributes and uses best path selection algorithms different from link-state shortest path first calculations.
OSPF’s link-state algorithm provides the fast convergence and deterministic behavior required for modern data center routing implementations.
Question 10
An administrator is configuring QoS on Cisco Nexus switches to prioritize storage traffic. Which QoS model does Cisco recommend for data center environments?
A) Best effort only
B) IntServ with RSVP
C) DiffServ with class-based marking
D) Priority queuing only
Answer: C
Explanation:
Quality of Service in data center environments must balance competing traffic demands including storage, voice, video, and data applications. The QoS model determines how traffic is classified, marked, queued, and scheduled to ensure critical applications receive necessary resources.
DiffServ with class-based marking is the QoS model Cisco recommends for data center environments. Differentiated Services provides a scalable framework for traffic classification and prioritization without per-flow state maintenance. Traffic is classified and marked at network edges or by applications using Differentiated Services Code Point values in IP headers or Class of Service values in Ethernet frames. Network devices examine these markings and apply appropriate queuing, scheduling, and congestion management behaviors. Class-based marking enables administrators to define traffic classes such as storage, voice, mission-critical data, and best effort, then assign DSCP or CoS values to each class. Switches use these markings to place packets into different queues with varying service levels. DiffServ scales efficiently because core devices make forwarding decisions based on simple header markings without tracking individual flows. Data centers typically implement multiple traffic classes with distinct treatment including lossless FCoE storage using Priority Flow Control, low-latency voice and video using priority queuing, guaranteed bandwidth for critical applications using weighted scheduling, and best-effort treatment for non-critical traffic.
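A minimal sketch of class-based marking in Python follows. The class names are local conventions, while the DSCP values (EF = 46, AF31 = 26) are the standard code points:

    # Sketch: mapping locally defined traffic classes to standard DSCP values.
    class_to_dscp = {
        "voice": 46,           # EF: low-latency priority queue
        "critical-data": 26,   # AF31: guaranteed-bandwidth queue
        "best-effort": 0,      # default class
    }

    def mark(traffic_class):
        """Return the DSCP value to write into the packet's IP header."""
        return class_to_dscp.get(traffic_class, 0)  # unknown classes fall to best effort

    assert mark("voice") == 46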
Best effort only provides no QoS differentiation, treating all traffic equally. This model is insufficient for data centers running diverse applications with varying performance requirements. Without QoS, high-priority storage or voice traffic can be delayed behind bulk file transfers or backup traffic, causing performance degradation and application failures.
IntServ with RSVP is the Integrated Services model using Resource Reservation Protocol for per-flow resource reservation. While IntServ provides strong guarantees, it requires per-flow state maintenance throughout the network path, creating scalability challenges. The signaling overhead and complexity make IntServ impractical for large-scale data centers with thousands of concurrent flows.
Priority queuing only uses a single strict priority queue for high-priority traffic with all other traffic in lower queues. While simple, pure priority queuing can starve lower-priority traffic if high-priority traffic volume is excessive. DiffServ’s multi-class approach with appropriate scheduling algorithms provides better balance between protecting critical traffic and serving all applications fairly.
DiffServ’s scalable class-based approach provides the flexibility and performance required for complex data center environments running diverse application workloads.
Question 11
A storage administrator needs to implement zoning in a Fibre Channel SAN. What is the primary purpose of zoning?
A) To increase bandwidth between devices
B) To control which initiators can communicate with which targets
C) To compress data during transmission
D) To provide encryption for storage traffic
Answer: B
Explanation:
Fibre Channel storage area networks require access control mechanisms to ensure data security, prevent unauthorized storage access, and isolate different environments sharing common infrastructure. Zoning provides fundamental security and segmentation capabilities essential for multi-tenant and enterprise SANs.
Controlling which initiators can communicate with which targets is the primary purpose of zoning. Zoning creates logical boundaries within the FC fabric that restrict device visibility and communication. Administrators define zones containing specific initiators from servers and targets from storage arrays. Devices can only discover and establish sessions with other devices in the same zone. This access control prevents servers from accessing storage not allocated to them, protects production data from development servers, and enables multiple customers or business units to share SAN infrastructure while maintaining isolation. Zoning operates at the fabric level, implemented by FC switches. Two primary zoning types exist: soft zoning based on World Wide Names, which identifies devices by their unique identifiers, and hard zoning based on physical switch ports, which provides stronger enforcement. Zone sets activate collections of zones simultaneously, ensuring consistent policy enforcement. Zoning complements LUN masking implemented on storage arrays, providing defense-in-depth where both fabric and array enforce access controls. Proper zoning design minimizes zone membership to required devices, uses hard zoning for security-sensitive environments, and documents zone purposes for operational clarity.
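The core zoning rule reduces to a simple membership test, sketched below in Python with invented WWPNs and zone names:

    # Sketch: two WWPNs may communicate only if they share a zone in the
    # active zone set. All identifiers here are hypothetical.
    active_zoneset = {
        "zone_web_to_array": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:e0:11:22"},
        "zone_db_to_array":  {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:3b:e0:11:22"},
    }

    def can_communicate(wwpn_a, wwpn_b):
        return any(wwpn_a in members and wwpn_b in members
                   for members in active_zoneset.values())

    assert can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:e0:11:22")
    assert not can_communicate("10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02")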
Increasing bandwidth between devices is not a zoning function. Bandwidth is determined by link speeds, number of paths, and traffic load. While zoning can affect traffic patterns by controlling which devices communicate, it does not directly increase available bandwidth. Zoning is an access control mechanism rather than a performance enhancement feature.
Compressing data during transmission is not provided by zoning. Some storage protocols and devices support compression to reduce bandwidth consumption and storage capacity requirements, but this functionality is separate from zoning. Zoning controls access without modifying transmitted data.
Providing encryption for storage traffic is not a zoning capability. FC fabrics can support encryption through FC Security Protocol or by using encryption-capable switches and storage arrays, but these security services are independent of zoning. Zoning controls which devices can communicate while encryption protects data confidentiality during transmission.
Zoning provides the essential access control foundation for secure, multi-tenant Fibre Channel storage area network implementations.
Question 12
An engineer is troubleshooting a port channel that is not forming correctly. Which protocol should be verified to ensure dynamic port channel negotiation is working?
A) STP
B) LACP
C) HSRP
D) VTP
Answer: B
Explanation:
Port channel implementations can use static configuration or dynamic negotiation protocols. When troubleshooting port channel formation issues, understanding which negotiation protocol is configured and verifying its operation is essential for identifying configuration mismatches or compatibility problems.
LACP should be verified to ensure dynamic port channel negotiation is working. Link Aggregation Control Protocol is the IEEE 802.3ad standard protocol that dynamically negotiates port channel formation between devices. LACP-enabled interfaces exchange protocol data units to negotiate channel parameters, verify configuration compatibility, and monitor member link status. Both ends of a port channel must be configured for LACP mode using either active mode which initiates negotiation or passive mode which responds to negotiation. At least one end must be active for the channel to form. LACP verifies that member interfaces have compatible configuration including speed, duplex, and VLAN assignments. The protocol provides ongoing monitoring, removing failed links from the bundle and adding recovered links back automatically. When troubleshooting port channel issues, administrators should verify LACP is enabled on interfaces, check mode compatibility between devices, examine LACP PDU exchange using debug commands, and review port channel protocol logs for negotiation failures. Common problems include mode mismatches where both sides are passive, configuration inconsistencies between member ports, and physical layer issues preventing PDU exchange.
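The mode-compatibility rule is easy to encode. Here is a small Python sketch of when an LACP channel can negotiate (the static on mode is deliberately excluded because it bypasses LACP entirely):

    # Sketch: a channel negotiates only if at least one side actively initiates.
    def channel_forms(local_mode, peer_mode):
        modes = {local_mode, peer_mode}
        if not modes <= {"active", "passive"}:
            raise ValueError("LACP modes are 'active' or 'passive'")
        return "active" in modes        # passive/passive never negotiates

    assert channel_forms("active", "passive")
    assert channel_forms("active", "active")
    assert not channel_forms("passive", "passive")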
STP is Spanning Tree Protocol that prevents Layer 2 loops but does not negotiate port channel formation. STP operates independently from port channel protocols, treating the logical port channel as a single link. While STP configuration can affect port channel behavior, verifying STP does not troubleshoot port channel negotiation issues.
HSRP is Hot Standby Router Protocol providing Layer 3 gateway redundancy. HSRP operates at the router level and has no relationship to port channel formation. HSRP addresses default gateway high availability rather than link aggregation.
VTP is VLAN Trunking Protocol that synchronizes VLAN databases between switches. VTP propagates VLAN information but does not participate in port channel negotiation. While VLAN configuration must be consistent across port channel member links, VTP itself does not control channel formation.
LACP verification is the critical troubleshooting step for dynamic port channel formation issues in modern data center networks.
Question 13
A data center is implementing automated network provisioning using Python scripts. Which library provides a multi-vendor abstraction layer for network device interaction?
A) Paramiko
B) NAPALM
C) Requests
D) NumPy
Answer: B
Explanation:
Network automation requires programmatic interfaces to configure devices, gather state information, and validate operations. Multi-vendor environments benefit from abstraction layers that provide consistent APIs across different equipment manufacturers and operating systems.
NAPALM provides a multi-vendor abstraction layer for network device interaction. Network Automation and Programmability Abstraction Layer with Multivendor support offers a unified Python API for interacting with different network vendors including Cisco IOS, NX-OS, Arista EOS, Juniper Junos, and others. NAPALM abstracts vendor-specific differences behind common methods for operations like loading configurations, retrieving state information, and performing validation. Developers write code once using NAPALM methods and the library handles translation to vendor-specific commands and output parsing. Core NAPALM functions include get_facts for device information, get_interfaces for interface states, get_bgp_neighbors for routing protocol status, and configuration management through load_merge_candidate, compare_config, and commit_config methods. This abstraction significantly reduces development effort in heterogeneous environments and enables configuration templates that work across vendors. NAPALM provides both operational consistency and validation capabilities ensuring configuration changes have intended effects. The library integrates with automation frameworks like Ansible, Salt, and StackStorm enabling sophisticated network automation workflows.
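Here is a brief sketch of the unified API in use. The hostname and credentials are placeholders, and reachable devices of each platform type are assumed:

    # Sketch: the same NAPALM calls work across vendors by swapping the driver.
    from napalm import get_network_driver

    for driver_name in ("nxos", "eos", "junos"):        # multi-vendor, one API
        driver = get_network_driver(driver_name)
        device = driver("device.example.com", "admin", "password")  # placeholders
        device.open()
        facts = device.get_facts()                      # normalized across vendors
        print(driver_name, facts["hostname"], facts["os_version"])
        device.close()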
Paramiko is a Python SSH library providing low-level SSH connectivity and command execution. While useful for network automation, Paramiko does not provide vendor abstraction or high-level device interaction methods. Developers using Paramiko must handle vendor-specific command syntax and output parsing manually. Paramiko serves as a transport layer that libraries like NAPALM can build upon.
Requests is a Python HTTP library for making web requests to APIs. While useful for interacting with REST APIs including those provided by network controllers, Requests does not specifically target network devices or provide multi-vendor abstraction. Requests handles HTTP protocol details but does not understand network device configuration or operational commands.
NumPy is a numerical computing library for Python providing array operations and mathematical functions. NumPy serves scientific computing and data analysis applications but has no relationship to network device automation or multi-vendor abstraction.
NAPALM’s unified API approach simplifies network automation development and reduces the complexity of managing heterogeneous network infrastructures.
Question 14
An administrator is implementing Cisco ACI and needs to create a construct that groups endpoints with common policy requirements. Which construct should be used?
A) Bridge Domain
B) Endpoint Group
C) VRF
D) Tenant
Answer: B
Explanation:
Cisco Application Centric Infrastructure uses logical constructs to abstract network configuration from application requirements. Understanding these constructs and their relationships is fundamental to designing and implementing ACI policies correctly.
Endpoint Group is the construct that groups endpoints with common policy requirements. EPGs represent collections of endpoints that share security policies, QoS requirements, and other network services. Endpoints within an EPG might include virtual machines running the same application tier, physical servers providing similar services, or network-attached storage with common access requirements. EPGs are application-centric rather than network-centric, meaning they model application architecture rather than network topology. For example, a three-tier application might have web-tier-EPG, app-tier-EPG, and database-tier-EPG. By default, endpoints within the same EPG can communicate freely while communication between different EPGs requires explicit contracts. This default-deny approach implements zero-trust security principles. Endpoints can be assigned to EPGs dynamically based on attributes like VLAN, IP subnet, or VMware port group, or statically by administrator assignment. EPGs associate with bridge domains for Layer 2 connectivity and can span multiple leaf switches, enabling workload mobility. The EPG abstraction decouples application policy from physical infrastructure, allowing applications to move without reconfiguration.
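The default-deny contract model can be illustrated with a toy Python check; the EPG names, endpoint names, and contract below are all invented:

    # Toy model: intra-EPG traffic is allowed, inter-EPG traffic needs a contract.
    epg_of = {"vm-web1": "web-epg", "vm-web2": "web-epg",
              "vm-app1": "app-epg", "vm-db1": "db-epg"}
    contracts = {("app-epg", "db-epg")}   # only the app tier may reach the database

    def allowed(src, dst):
        s, d = epg_of[src], epg_of[dst]
        return s == d or (s, d) in contracts or (d, s) in contracts

    assert allowed("vm-web1", "vm-web2")      # same EPG: free communication
    assert allowed("vm-app1", "vm-db1")       # permitted by contract
    assert not allowed("vm-web1", "vm-db1")   # no contract: default deny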
Bridge Domain provides Layer 2 flood domain and default gateway functionality but does not group endpoints by policy requirements. Bridge domains define IP addressing and routing parameters. Multiple EPGs can exist within a single bridge domain while requiring different policies. Bridge domains handle forwarding behavior rather than policy grouping.
VRF is Virtual Routing and Forwarding instance providing Layer 3 isolation and routing contexts. VRFs separate routing tables and forwarding behavior but do not directly group endpoints by policy. Multiple bridge domains can exist within a VRF, and EPGs in those bridge domains can communicate subject to contract policies.
Tenant is the top-level organizational container providing complete administrative isolation. Tenants contain all other constructs including VRFs, bridge domains, EPGs, and contracts. While tenants group related resources, they provide isolation rather than policy-based endpoint grouping. EPGs within tenants provide the granular grouping based on policy requirements.
Endpoint Groups implement the application-centric policy model that distinguishes ACI from traditional network-centric configuration approaches.
Question 15
A network engineer needs to implement a protocol that provides loop-free Layer 2 multipathing in the data center. Which protocol should be configured?
A) RSTP
B) TRILL
C) VTP
D) LLDP
Answer: B
Explanation:
Traditional Layer 2 networks rely on Spanning Tree Protocol to prevent loops by blocking redundant paths, which wastes bandwidth and creates suboptimal traffic patterns. Modern data center technologies provide loop-free multipathing that utilizes all available links for improved performance and resilience.
TRILL should be configured to provide loop-free Layer 2 multipathing. Transparent Interconnection of Lots of Links is an IETF standard protocol that brings Layer 3 routing benefits to Layer 2 networks. TRILL uses the IS-IS link-state routing protocol to build a complete topology map of the network, then applies shortest path calculations to determine optimal forwarding paths. TRILL encapsulates Ethernet frames with a TRILL header containing a hop count and other forwarding information, then carries each frame inside a fresh outer Ethernet header between RBridges at every hop. This approach enables equal-cost multipath routing where traffic distributes across all available equal-cost paths rather than blocking redundant links as Spanning Tree does. TRILL provides benefits including full link utilization, optimal traffic distribution, fast convergence when failures occur, and elimination of Spanning Tree limitations. The protocol operates transparently to attached devices, which continue using standard Ethernet without modification. TRILL competes with other Layer 2 multipathing technologies like Cisco FabricPath and IEEE SPB but provides standards-based interoperability. Data centers implementing TRILL gain the ability to build larger Layer 2 domains with better performance characteristics than traditional Spanning Tree networks.
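A short Python sketch of the multipath idea follows: a per-flow hash selects one of several equal-cost paths (the path names are invented), so all links carry traffic instead of standing blocked:

    # Sketch: hashing a flow onto one of the equal-cost paths; the flow
    # stays on that path, preserving in-order delivery.
    import hashlib

    equal_cost_paths = ["via-rbridge-A", "via-rbridge-B"]   # hypothetical names

    def pick_path(src_mac, dst_mac):
        digest = hashlib.sha256(f"{src_mac}-{dst_mac}".encode()).digest()
        return equal_cost_paths[digest[0] % len(equal_cost_paths)]

    print(pick_path("00:11:22:33:44:55", "66:77:88:99:aa:bb"))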
RSTP is Rapid Spanning Tree Protocol that improves convergence time compared to original STP but still operates by blocking redundant paths to prevent loops. RSTP provides faster failover than STP but does not enable multipath forwarding. Only one path between any two points is active at a time with RSTP.
VTP is VLAN Trunking Protocol that synchronizes VLAN databases between switches. VTP simplifies VLAN management in large switched networks but does not address loop prevention or multipath forwarding. VTP operates independently from protocols that control traffic forwarding paths.
LLDP is Link Layer Discovery Protocol that enables devices to advertise their identity and capabilities to directly connected neighbors. LLDP provides topology discovery and inventory information but does not control forwarding behavior or provide loop prevention. LLDP is a management protocol rather than a forwarding protocol.
TRILL represents a modern approach to Layer 2 networking that overcomes Spanning Tree limitations through intelligent multipath forwarding based on routing protocols.