Question 211:
What is the purpose of the Cisco Nexus vPC (Virtual Port Channel) feature?
A) Provide Layer 3 routing only
B) Allow a device to use a port channel across two separate switches for redundancy
C) Replace spanning tree protocol
D) Configure VLANs automatically
Answer: B
Explanation:
Virtual Port Channel (vPC) is a Cisco Nexus feature that allows a downstream device to form a single port channel (link aggregation) across two separate physical Nexus switches, providing active-active Layer 2 connectivity with enhanced redundancy and bandwidth utilization. From the downstream device’s perspective, it appears to be connected to a single logical switch, but the connection actually spans two independent switches operating in a vPC domain. This technology eliminates the traditional single point of failure associated with connecting to a single switch and enables full utilization of all links without relying on Spanning Tree Protocol to block redundant paths. vPC is commonly deployed in data center architectures to connect servers, storage devices, or other network switches with maximum availability and performance.
The vPC architecture requires careful configuration of both vPC peer switches. Each vPC domain consists of two peer switches connected through a dedicated vPC peer link, which is typically a high-bandwidth connection (often 2x10GE or higher) that carries control traffic and data traffic for orphaned ports and multicast. A vPC peer keepalive link, usually a separate management connection, monitors the health of the peer switch to prevent split-brain scenarios. Both peers should run the same software version and have synchronized configurations for vPC VLANs. The downstream device creates a standard port channel (using LACP or static configuration) with links distributed across both vPC peers. vPC provides several operational benefits including elimination of Spanning Tree blocked ports, simplified network topology, increased bisection bandwidth, deterministic failover behavior, and the ability to perform maintenance on one peer switch while the other continues forwarding traffic. vPC is compatible with various data center protocols and features including FabricPath, OTV, and FCoE.
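As a minimal sketch only, the core vPC configuration on one Nexus peer might look like the following, with the domain ID, keepalive addresses, VLANs, and interface numbers chosen purely for illustration; a mirrored configuration (with its own keepalive source and destination) would be applied on the second peer.

feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management

interface port-channel1
  description vPC peer link between the two peers
  switchport mode trunk
  vpc peer-link

interface port-channel20
  description vPC 20 toward the downstream switch or server
  switchport mode trunk
  vpc 20

interface Ethernet1/1
  channel-group 1 mode active
interface Ethernet1/2
  channel-group 1 mode active
interface Ethernet1/10
  channel-group 20 mode active

The downstream device simply configures an ordinary LACP port channel across its two uplinks; it has no awareness that those links terminate on different physical switches.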
Providing Layer 3 routing only does not describe vPC functionality. While Nexus switches running vPC certainly support Layer 3 routing capabilities, vPC specifically addresses Layer 2 redundancy and link aggregation across two physical switches. Layer 3 routing can operate alongside vPC, and features like anycast gateway can complement vPC deployments, but routing is not the primary purpose of vPC.
Replacing Spanning Tree Protocol is not accurate, although vPC does minimize STP’s role in the network. vPC still uses STP as a loop prevention mechanism, but because vPC enables active-active forwarding on all links, STP typically does not block any ports in a properly configured vPC environment. STP runs in the background as a safety mechanism but is not actively blocking traffic under normal conditions.
Configuring VLANs automatically is not a function of vPC. VLAN configuration must be manually performed or automated through management tools, and the same VLANs must be configured and allowed on both vPC peer switches for proper operation. vPC requires consistent VLAN configuration across peers but does not provide automatic VLAN provisioning.
Question 212:
Which Cisco Nexus feature provides automated rollback of configuration changes if connectivity is lost during the change window?
A) Checkpoint
B) Rollback
C) Configuration Replace
D) Configuration Session
Answer: D
Explanation:
Configuration Session (also known as configure session or config session) is a Cisco Nexus feature that provides a safe mechanism for making configuration changes with automatic rollback capability if the administrator loses connectivity during the configuration process or if the changes cause unexpected problems. When using configuration session mode, all commands are staged in a temporary session buffer rather than being immediately applied to the running configuration. The administrator can review all pending changes, verify their correctness, and then commit them atomically as a single transaction. If the administrator does not explicitly commit the changes within a specified timeout period, or if connectivity is lost before commitment, all changes are automatically discarded and the switch returns to its previous configuration state.
Configuration sessions provide multiple safety features for managing network changes. The administrator can create a named session, enter configuration commands that are buffered but not active, verify the pending configuration with show commands, and either commit all changes simultaneously or abort the session to discard everything. A critical safety feature is the commit timer, which can be set when committing changes (for example, commit 300 for a 5-minute timer). If the administrator does not confirm the commit within the timer period by issuing a second commit command, all changes automatically roll back. This prevents situations where a configuration error causes loss of connectivity, leaving the switch in a broken state. Configuration sessions support the verify command to check for syntax errors and configuration conflicts before committing. Multiple administrators can create separate configuration sessions simultaneously, and the system tracks which session belongs to which administrator. When changes are committed, they are merged with the running configuration, and any conflicts are detected before application.
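A typical session workflow, shown here only as an illustrative sketch (the session name and the OSPF commands are placeholders, and the OSPF feature is assumed to be already enabled), stages the change, verifies it, and then commits it atomically:

switch# configure session ospf-change
switch(config-s)# router ospf 1
switch(config-s-router)# router-id 10.0.0.1
switch(config-s-router)# exit
switch(config-s)# verify
switch(config-s)# commit

Issuing abort instead of commit discards everything staged in the session, and show configuration session displays the pending commands before they are applied.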
Checkpoint is a related but different feature that creates a snapshot of the current configuration at a specific point in time. While checkpoints enable administrators to roll back to a previous configuration state manually, they do not provide automatic rollback based on connectivity loss or timeout. Checkpoints must be created explicitly before making changes and rolled back manually if problems occur.
Rollback refers to the general concept of reverting to a previous configuration state, which can be accomplished through various mechanisms including checkpoints. However, rollback itself is not the specific feature that provides automatic rollback during configuration changes. Rollback requires manual intervention to restore a previous configuration.
Configuration Replace is a feature that allows wholesale replacement of the running configuration with a saved configuration file, but it does not provide the automatic rollback safety mechanism during active configuration sessions. Configuration replace is typically used for initial device provisioning or complete configuration restoration scenarios.
Question 213:
What is the function of the Cisco Nexus OTV (Overlay Transport Virtualization) feature?
A) Provide server virtualization
B) Extend Layer 2 networks across geographically separated data centers over IP infrastructure
C) Replace BGP routing
D) Manage storage arrays
Answer: B
Explanation:
Overlay Transport Virtualization (OTV) is a Cisco technology designed to extend Layer 2 networks across geographically separated data centers over an IP-based infrastructure, enabling seamless workload mobility, disaster recovery, and data center interconnection while maintaining network isolation and optimal routing. OTV creates a Layer 2 overlay on top of a Layer 3 transport network, encapsulating Layer 2 Ethernet frames within IP packets for transmission across the data center interconnect. This approach allows organizations to stretch VLANs between sites for virtual machine mobility while avoiding the scaling and stability issues associated with traditional Layer 2 extension methods like VPLS or long-distance spanning tree. OTV is particularly valuable for disaster recovery scenarios where applications need to fail over between data centers while maintaining IP addresses and network state.
OTV provides intelligent features that optimize data center interconnection. The technology uses a control plane protocol (based on IS-IS) to discover remote MAC addresses and advertise local MAC address reachability, significantly reducing flooding of unknown unicast and multicast traffic across the WAN. OTV implements multihoming and site redundancy, allowing multiple edge devices at each site to provide redundant connectivity to remote sites. The feature includes a concept called failure isolation, which prevents spanning tree topology changes in one data center from impacting remote sites, maintaining independence between data center failures. OTV supports multicast optimization by using IP multicast in the overlay to efficiently replicate Layer 2 multicast and broadcast traffic without flooding every frame across the interconnect. The technology integrates with routing protocols to ensure that inter-subnet traffic flows optimally, preventing traffic tromboning where packets unnecessarily traverse the data center interconnect. OTV edge devices participate in AED (Authoritative Edge Device) election, which designates one edge device per extended VLAN as responsible for forwarding that VLAN, and use VLAN extension lists to control which VLANs are stretched to remote sites; FHRP filtering can also be applied at the edge so that each data center keeps a local default gateway.
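A simplified OTV edge-device configuration over a multicast-enabled transport might resemble the sketch below; the site identifier, site VLAN, group addresses, join interface, and extended VLAN range are illustrative values only.

feature otv

otv site-identifier 0x100
otv site-vlan 99

interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110
  no shutdown

In this sketch, Ethernet1/1 is the routed interface facing the IP transport between data centers, while VLANs 100-110 are the Layer 2 segments being stretched across the overlay.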
Providing server virtualization is the function of hypervisor platforms like VMware ESXi, Microsoft Hyper-V, or KVM, not OTV. While OTV facilitates network connectivity for virtualized workloads and enables virtual machine mobility between data centers, it does not provide the compute virtualization layer itself. OTV operates at the network level to support virtualized environments.
Replacing BGP routing fundamentally misunderstands OTV’s purpose. OTV typically operates in conjunction with BGP rather than replacing it. The WAN or IP transport network connecting data centers usually runs BGP for routing between sites, while OTV creates Layer 2 overlays on top of this Layer 3 infrastructure. OTV and BGP serve complementary roles in data center interconnect architectures.
Managing storage arrays is accomplished through storage management software and protocols like SMI-S, not through network extension technologies like OTV. While OTV can extend storage networks (like extending FCoE VLANs for storage traffic), it does not manage the storage arrays themselves. Storage management is a separate domain from network connectivity.
Question 214:
Which protocol does Cisco UCS use for discovery and communication with I/O Modules and blade servers?
A) SNMP
B) SSH
C) Chassis Management Controller protocol
D) Telnet
Answer: C
Explanation:
Cisco UCS uses the Chassis Management Controller (CMC) protocol for internal communication between Fabric Interconnects, I/O Modules (Fabric Extenders), and the management controllers on blade servers within UCS chassis. The CMC protocol enables UCS Manager running on Fabric Interconnects to discover, inventory, configure, and monitor all components within the UCS domain. When a chassis is connected to Fabric Interconnects through its I/O Modules, the discovery process begins automatically with the Fabric Interconnects detecting the chassis, identifying all installed blade servers, and inventorying their components including CPUs, memory, adapters, and storage. The CMC protocol facilitates the push of service profiles to blade servers, updating firmware on components, monitoring hardware health and environmental conditions, and collecting fault and event information.
The CMC architecture within UCS provides a unified management framework that abstracts physical hardware into manageable objects within UCS Manager’s Management Information Tree. Each blade server contains a Baseboard Management Controller that communicates through the I/O Modules back to the Fabric Interconnects using the CMC protocol. This communication path is completely independent of the data plane network traffic, ensuring management functions remain available even if network connectivity is disrupted. The protocol supports secure, authenticated communication to protect management traffic from unauthorized access or tampering. CMC enables critical UCS capabilities including stateless computing where blade servers can be quickly reconfigured by associating different service profiles, centralized firmware management where all components can be updated from UCS Manager, and comprehensive health monitoring with detailed fault reporting and environmental sensor data. The protocol operates over the backplane connections within the chassis and over the chassis uplinks to the Fabric Interconnects.
SNMP (Simple Network Management Protocol) is used for monitoring and managing network devices including some aspects of UCS external communication, but it is not the primary protocol for internal UCS component discovery and communication. UCS Manager can be monitored via SNMP by external management systems, but the internal discovery and control mechanisms use CMC protocol.
SSH (Secure Shell) provides encrypted remote access to the UCS Manager CLI and to individual switch management interfaces, but it is not the protocol used for internal component discovery and communication. SSH is an administrative access method rather than the underlying discovery and control protocol.
Telnet is an insecure remote access protocol that can be used for CLI access to network devices but is not recommended due to security concerns and is not the protocol used for UCS internal component discovery. Modern UCS deployments disable Telnet in favor of SSH for any remote CLI access needs.
Question 215:
What is the purpose of a Tenant in Cisco ACI architecture?
A) Physical server only
B) Logical container for policies and network resources providing isolation and administration boundaries
C) Hardware component
D) Cabling specification
Answer: B
Explanation:
A Tenant in Cisco Application Centric Infrastructure is a logical container that provides complete isolation and administrative boundaries for policies, network resources, and security constructs within the ACI fabric. Tenants represent the highest level of policy hierarchy and enable multi-tenancy by allowing multiple independent organizations, business units, or applications to share the same physical ACI fabric infrastructure while maintaining complete separation of their networking and security policies. Each tenant contains its own private network constructs including VRFs (Virtual Routing and Forwarding instances), bridge domains, subnets, EPGs (Endpoint Groups), contracts, filters, and Layer 4-7 service integration. This isolation ensures that network policies, IP address spaces, and traffic flows in one tenant cannot interfere with or access resources in another tenant.
The tenant architecture provides flexible deployment models for various organizational needs. In service provider environments, each customer might have a dedicated tenant with complete isolation from other customers. In enterprise environments, different business units, applications, or geographic regions might be assigned separate tenants for administrative delegation and security isolation. ACI includes several built-in tenants: the common tenant contains shared resources accessible by all tenants such as shared services, the infra tenant manages the fabric infrastructure itself including the underlay network and fabric protocols, and the mgmt tenant handles out-of-band and in-band management connectivity. Administrators can create custom tenants and delegate management responsibilities using role-based access control, allowing different teams to manage their tenants without affecting others. Tenants support overlapping IP address spaces, meaning multiple tenants can use the same private IP ranges without conflicts because each tenant’s VRF provides routing isolation. Communication between tenants requires explicit configuration through shared services or inter-tenant contracts.
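Conceptually, a tenant can be pictured as the following outline of nested policy objects; the names and subnets are invented purely for illustration.

Tenant: Customer-A
  VRF: prod-vrf
  Bridge Domain: web-bd   (VRF: prod-vrf, subnet 10.1.10.1/24)
  Bridge Domain: app-bd   (VRF: prod-vrf, subnet 10.1.20.1/24)
  Application Profile: Web-App
    EPG: web-epg   (bridge domain: web-bd)
    EPG: app-epg   (bridge domain: app-bd)
  Contract: web-to-app    (consumed by web-epg, provided by app-epg)

Another tenant on the same fabric could reuse the exact same IP ranges inside its own VRF without any conflict, because routing and policy are isolated per tenant.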
Physical server only represents an endpoint that might connect to the ACI fabric but is not what a tenant represents. Servers are discovered and assigned to EPGs within tenants, but the tenant itself is a logical policy container, not a physical server. Multiple servers across many physical locations can belong to EPGs within a single tenant.
Hardware component mischaracterizes tenants as physical infrastructure. Tenants are entirely logical constructs within the ACI policy model and exist as configuration objects in the APIC Management Information Tree. They do not represent physical hardware like switches, servers, or cables.
Cabling specification has no relationship to tenants. Cabling refers to the physical connectivity between devices using copper or fiber optic cables, while tenants are logical policy containers. The physical cabling of the ACI fabric is independent of tenant configuration.
Question 216:
Which Cisco Nexus command displays the current vPC status and configuration?
A) show vpc
B) show vlan
C) show interface
D) show ip route
Answer: A
Explanation:
The show vpc command on Cisco Nexus switches displays comprehensive information about the Virtual Port Channel configuration and operational status, including vPC domain ID, peer status, peer keepalive status, vPC peer link status, individual vPC interface status, consistency checks, and role information. This command is essential for verifying vPC operation, troubleshooting connectivity issues, and ensuring both peer switches are operating correctly. The output includes critical information such as whether the vPC peer is alive and reachable through the keepalive link, whether the peer link is operational, the role of each switch (primary or secondary), consistency parameter status showing if configurations match between peers, and the status of individual vPC member ports.
Understanding the show vpc output is crucial for maintaining healthy vPC deployments. The command displays the vPC domain ID which must match on both peer switches, the source and destination IP addresses used for peer keepalive messages, and the status of these keepalive messages indicating if the peer switch is responsive. The peer link information shows if the physical connection between vPC peers is up and passing traffic correctly. The role section indicates which switch is primary and which is secondary, determined by priorities and system MAC addresses. Consistency parameters section shows if critical configuration elements like STP mode, port channel mode, MTU, and allowed VLANs match between peers, which is required for proper vPC operation. The individual vPC listings show each configured vPC number, its status (up or down), and which physical interfaces are participating. Additional variations like show vpc brief provide summarized information, show vpc consistency-parameters displays detailed configuration matching status, and show vpc statistics shows traffic and protocol message counters.
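An abridged, illustrative show vpc output is sketched below; the exact fields and formatting vary by NX-OS release, and the values shown are examples only.

switch# show vpc
vPC domain id                     : 10
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
vPC role                          : primary
Number of vPCs configured         : 1

vPC Peer-link status
id   Port   Status   Active vlans
--   ----   ------   ------------
1    Po1    up       1,10,20

vPC status
id   Port   Status   Consistency   Reason    Active vlans
--   ----   ------   -----------   -------   ------------
20   Po20   up       success       success   10,20

A healthy deployment shows the peer adjacency formed, the keepalive reporting the peer as alive, consistency checks succeeding, and every configured vPC in the up state.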
The show vlan command displays VLAN configuration and status information including which VLANs exist, their names, status, and which ports are assigned to each VLAN. While VLAN information is relevant to vPC operation (VLANs must be configured consistently across vPC peers), this command does not show vPC-specific status like peer health, keepalive status, or vPC port channel information.
The show interface command displays detailed information about individual physical or logical interfaces including operational status, line protocol status, packet counters, error statistics, and configuration parameters. While useful for troubleshooting individual links that participate in vPC port channels, this command does not provide vPC domain status or peer relationship information.
The show ip route command displays the IP routing table showing learned and configured routes, next hops, and routing protocol information. This Layer 3 routing information is unrelated to vPC, which operates at Layer 2 for link aggregation and redundancy. vPC and IP routing are separate functions that can operate simultaneously on Nexus switches.
Question 217:
What is the function of the First Hop Redundancy Protocol (FHRP) in data center networks?
A) Compress data
B) Provide redundant default gateway functionality for endpoints
C) Encrypt traffic
D) Manage DNS services
Answer: B
Explanation:
First Hop Redundancy Protocols (FHRP) provide redundant default gateway functionality for endpoints by allowing multiple routers or Layer 3 switches to work together presenting a single virtual IP address and MAC address as the default gateway. If the primary gateway device fails, another device in the FHRP group automatically assumes the virtual gateway role, maintaining network connectivity for endpoints without requiring any reconfiguration. Common FHRP implementations include HSRP (Hot Standby Router Protocol, Cisco proprietary), VRRP (Virtual Router Redundancy Protocol, industry standard), and GLBP (Gateway Load Balancing Protocol, Cisco proprietary). In data center environments, FHRP ensures high availability for server and storage connectivity by eliminating the default gateway as a single point of failure.
FHRP deployment in data centers involves careful planning of gateway redundancy architecture. In HSRP deployments, one router is elected as active and forwards traffic while standby routers monitor the active router through hello messages, ready to take over if the active router fails. VRRP operates similarly but with some protocol differences and is often chosen for multi-vendor environments. GLBP provides load balancing in addition to redundancy by allowing multiple routers to simultaneously forward traffic for the same virtual IP address, with different endpoints using different physical routers based on virtual MAC address assignment. Modern data center designs often implement anycast gateway or distributed gateway architectures in VXLAN EVPN fabrics or ACI environments, where every leaf switch can serve as the default gateway for endpoints using the same virtual IP and MAC address, eliminating the need for traditional FHRP. However, traditional FHRP remains common in many data center designs, particularly at the network edge or for legacy application compatibility.
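As a minimal illustration of a traditional FHRP on a Nexus SVI, an HSRP configuration might look like the sketch below (VLAN, addresses, group number, and priority are placeholders); the second gateway would be configured identically except for its own physical IP address and a lower priority.

feature hsrp
feature interface-vlan

interface Vlan10
  no shutdown
  ip address 10.1.10.2/24
  hsrp 10
    priority 110
    preempt
    ip 10.1.10.1

Endpoints in VLAN 10 simply use 10.1.10.1 as their default gateway; whichever switch currently holds the active HSRP role answers for that virtual IP and its associated virtual MAC address.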
Compressing data is accomplished through compression algorithms implemented in applications, storage systems, or specialized network appliances, not through FHRP. Data compression reduces the size of data for storage or transmission efficiency but has no relationship to gateway redundancy protocols.
Encrypting traffic is handled by security protocols like IPsec, TLS/SSL, or MACsec, not by FHRP. Encryption protects data confidentiality by making information unreadable without proper decryption keys. While encrypted traffic might pass through gateways running FHRP, the encryption function is separate from gateway redundancy.
Managing DNS services involves DNS server software that translates domain names to IP addresses and is unrelated to FHRP. DNS provides name resolution services, while FHRP provides gateway redundancy. These are independent network services that serve different purposes.
Question 218:
Which Cisco ACI construct defines the communication rules between Endpoint Groups?
A) VLAN
B) Contract
C) Subnet
D) Route map
Answer: B
Explanation:
Contracts in Cisco Application Centric Infrastructure define the communication rules and policies between Endpoint Groups, specifying which traffic is permitted to flow from one EPG to another. Contracts implement a whitelist security model where all traffic between EPGs is denied by default unless explicitly permitted through a contract. A contract contains one or more subjects, and each subject contains one or more filters that specify the protocols, ports, and other traffic characteristics that are allowed. EPGs establish relationships with contracts as either providers (offering services) or consumers (accessing services), creating a clear application-aware policy model. For example, a web EPG might consume a contract that an application EPG provides, allowing HTTP/HTTPS traffic from web servers to application servers.
The contract-based policy model in ACI provides powerful capabilities for implementing microsegmentation and zero-trust networking. Contracts support bidirectional and unidirectional traffic policies, allowing granular control over traffic flow direction. Filters within contracts can match on Layer 2 through Layer 4 parameters including Ethernet type, IP protocol, TCP/UDP ports, and ICMP types. Contracts can include quality of service markings, traffic redirection through service graphs for Layer 4-7 services like firewalls or load balancers, and logging directives for traffic monitoring. The taboo contract feature allows explicit denial of specific traffic patterns even if other contracts would permit the traffic, providing exception handling. Contract scopes control the contract’s visibility and applicability—context scope limits the contract to EPGs within the same VRF, global scope allows usage across VRFs, and tenant scope limits to the same tenant. This abstraction of security policy from network addressing means applications can move between subnets or even data centers while maintaining consistent security policies through EPG membership and contracts.
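The provider/consumer relationship can be summarized with the following illustrative outline; the contract, subject, filter, and EPG names, the port number, and the scope are invented for the example.

Contract: web-to-app   (scope: VRF)
  Subject: app-traffic
    Filter: allow-tcp-8080   (EtherType IP, protocol TCP, destination port 8080)

EPG app-tier : provides web-to-app
EPG web-tier : consumes web-to-app

With this in place, web-tier endpoints may initiate TCP sessions to port 8080 on app-tier endpoints, while all other traffic between the two EPGs remains denied by the default whitelist behavior.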
VLAN is a traditional Layer 2 segmentation technology that groups ports into broadcast domains but does not define inter-group communication policies in ACI. While VLANs might be used for endpoint connectivity to the ACI fabric (VLAN encapsulation on access ports), communication policies between EPGs are controlled through contracts, not VLAN configurations.
Subnet defines an IP address range and is configured within bridge domains in ACI for IP addressing purposes. Subnets specify the IP space available to endpoints but do not control communication policies between different EPGs. Subnets and contracts serve different functions in the ACI policy model.
Route map is a traditional routing policy tool used to manipulate routing protocol behavior, match traffic patterns, and set attributes for route redistribution. Route maps are not used in ACI to define EPG-to-EPG communication rules, which are handled exclusively through the contract model.
Question 219:
What is the purpose of QoS (Quality of Service) in data center networks?
A) Increase physical bandwidth
B) Prioritize and manage network traffic to ensure critical applications receive adequate resources
C) Replace routing protocols
D) Provide authentication
Answer: B
Explanation:
Quality of Service (QoS) in data center networks provides mechanisms to prioritize and manage network traffic, ensuring that critical applications and traffic types receive adequate bandwidth, low latency, and minimal packet loss even during network congestion. QoS implements policies that classify traffic into different classes, assign priorities, allocate bandwidth, and manage queue depths to deliver predictable performance for latency-sensitive applications like VoIP, video conferencing, or real-time database replication. In modern data centers handling diverse workloads including production applications, backup traffic, storage replication, and management traffic, QoS prevents lower-priority bulk data transfers from impacting time-sensitive application traffic. Data center QoS often implements standards like IEEE 802.1p for Layer 2 prioritization and DSCP (Differentiated Services Code Point) for Layer 3 QoS marking.
Data center QoS deployments require end-to-end design across the entire traffic path. Classification mechanisms identify traffic based on various criteria including source/destination IP addresses, port numbers, protocol types, VLAN tags, or application signatures. Marking applies QoS values to packets using CoS (Class of Service) bits in the 802.1Q header for Layer 2 or DSCP values in the IP header for Layer 3, allowing downstream devices to recognize priority. Queuing strategies like priority queuing, weighted fair queuing, or deficit weighted round robin allocate buffer space and transmission opportunities based on traffic class. Policing and shaping mechanisms enforce bandwidth limits and smooth traffic bursts. In lossless Ethernet environments supporting FCoE or RDMA, Priority Flow Control (PFC) and Enhanced Transmission Selection (ETS) from the Data Center Bridging standards ensure zero packet loss for storage traffic. Cisco Nexus switches support sophisticated QoS features including MQC (Modular QoS CLI) for policy configuration, network-qos policies for system-wide settings such as MTU and no-drop classes, and queuing policies for per-interface buffer and scheduling configuration.
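As a rough NX-OS MQC illustration (class, policy, and interface names, the DSCP value, and the qos-group are placeholders), traffic could be classified and internally marked like this:

class-map type qos match-any CRITICAL-APPS
  match dscp 46

policy-map type qos CLASSIFY-IN
  class CRITICAL-APPS
    set qos-group 4

interface Ethernet1/1
  service-policy type qos input CLASSIFY-IN

Separate network-qos and queuing policies would then define how that qos-group is treated system-wide and how it is queued and scheduled on egress interfaces.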
Increasing physical bandwidth involves adding more network links, upgrading to higher-speed interfaces, or implementing link aggregation, not QoS. While QoS helps optimize use of available bandwidth, it does not create additional physical capacity. QoS makes efficient use of existing bandwidth by prioritizing important traffic.
Replacing routing protocols fundamentally misunderstands QoS purpose. Routing protocols like OSPF, EIGRP, or BGP determine network paths and exchange reachability information, while QoS manages traffic treatment along those paths. QoS and routing protocols serve complementary but distinct functions in network operations.
Providing authentication is a security function handled by protocols like 802.1X, RADIUS, TACACS+, or certificate-based authentication systems. Authentication verifies user or device identity before granting network access. QoS focuses on traffic management and prioritization, not identity verification.
Question 220:
Which Cisco UCS component connects blade servers in a chassis to the Fabric Interconnects?
A) TOR switch
B) I/O Module (Fabric Extender)
C) Management switch
D) Console server
Answer: B
Explanation:
The I/O Module, also known as Fabric Extender (FEX) or IOM (Input/Output Module), is the critical component in Cisco UCS blade server chassis that connects blade servers to the Fabric Interconnects. Each UCS chassis typically contains two I/O Modules for redundancy, with each module connected to both Fabric Interconnects through multiple uplink ports. The I/O Modules extend the unified fabric from the Fabric Interconnects into the chassis, providing each blade server with connectivity for LAN, SAN, and management traffic over a converged network. From an architectural perspective, I/O Modules function as remote line cards of the Fabric Interconnects, with the FIs controlling their configuration and operation. The I/O Modules handle the physical connectivity to blade server mezzanine cards but perform minimal local switching intelligence.
I/O Modules support various bandwidth configurations depending on the model and blade server density. Each blade server connects to both I/O Modules through its mezzanine adapter cards, providing redundant paths to the Fabric Interconnects. The I/O Modules aggregate traffic from all blade servers in the chassis and forward it to the Fabric Interconnects over the uplink ports. Modern I/O Modules support features like fabric failover where if one uplink path fails, traffic automatically reroutes through alternate paths without disrupting server connectivity. The modules participate in UCS Manager’s discovery process, reporting blade server presence and hardware configuration when new servers are installed. Different IOM models support varying numbers of uplink ports and different speeds (1GE, 10GE, 25GE, 40GE, or 100GE), allowing organizations to scale chassis bandwidth based on workload requirements. The I/O Modules are hot-swappable, enabling replacement without powering down the chassis, though active traffic on that module’s connections will be disrupted during replacement.
TOR (Top of Rack) switch refers to a traditional data center architecture where switches are placed at the top of each rack to provide connectivity for servers in that rack. UCS blade chassis do not use TOR switches; instead, they use I/O Modules that extend the Fabric Interconnect fabric into the chassis. TOR switches are typically used for rack-mounted servers, not blade servers in UCS.
Management switch is not a UCS component. UCS uses dedicated management interfaces and an out-of-band management network for administrative access, but there is no separate management switch within the chassis. Management traffic flows through the same I/O Modules and Fabric Interconnects as data traffic, using separate VLANs and interfaces for isolation.
Console server is a device that provides centralized access to serial console ports of multiple devices, but it is not part of the UCS chassis architecture. UCS blade servers are accessed through UCS Manager or directly through KVM (Keyboard, Video, Mouse) connections, not through separate console servers.
Question 221:
What is the function of the Cisco Nexus Scheduler feature?
A) Schedule automated tasks like configuration backups or software upgrades
B) Schedule VLAN creation
C) Schedule power on/off of switches
D) Schedule user password expiration
Answer: A
Explanation:
The Cisco Nexus Scheduler feature provides the ability to schedule automated execution of CLI commands or scripts at specific times or recurring intervals, enabling administrators to automate routine maintenance tasks, configuration backups, log collection, or operational commands. The scheduler supports one-time job execution or recurring schedules using cron-like syntax for flexible timing specifications. Common use cases include scheduling nightly configuration backups to remote servers, periodic collection of diagnostic information, automated cleanup of log files, execution of health checks during maintenance windows, or running show commands to capture operational data for trending analysis. The scheduler enhances operational efficiency by automating repetitive tasks that would otherwise require manual intervention or external automation tools.
Scheduler configuration involves creating scheduler jobs that define what commands to execute and scheduler schedules that define when those jobs run. Jobs can execute single commands, multiple commands in sequence, or invoke VSH (Virtual Shell) scripts containing complex command sequences. Schedules specify timing using parameters like start time, recurring patterns (daily, weekly, monthly), time ranges, and specific dates. For example, an administrator might create a job called backup-config that executes copy running-config tftp://server/backup-$(TIMESTAMP).cfg and schedule it to run daily at 2 AM. The scheduler maintains logs of job execution including start time, completion time, and command output, allowing administrators to verify successful execution. The feature supports environment variables like TIMESTAMP, HOSTNAME, and SWITCHNAME that can be embedded in commands for dynamic file naming or context-aware execution. Scheduler jobs can be configured to send command output to files, syslog, or email recipients for automated reporting.
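A hedged sketch of such a job and schedule follows the pattern described above; the job and schedule names, TFTP server address, VRF, and run time are placeholders.

feature scheduler

scheduler job name backup-config
  copy running-config tftp://192.0.2.10/backup-$(TIMESTAMP).cfg vrf management
exit

scheduler schedule name nightly-backup
  job name backup-config
  time daily 02:00

Once configured, the switch runs the copy command every night at 2:00 AM, with the $(TIMESTAMP) variable expanding to the execution time so each backup file receives a unique name.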
Scheduling VLAN creation is not a typical use case for the scheduler feature. While technically possible to schedule VLAN configuration commands, VLAN provisioning is usually event-driven based on application deployment needs rather than time-based. Network automation tools or orchestration platforms typically handle dynamic VLAN provisioning.
Scheduling power on/off of switches would be disruptive to production operations and is not a common or recommended use of the scheduler. Data center switches run continuously to provide services, and scheduled power cycling would cause outages. Power management in data centers focuses on redundancy and efficiency, not scheduled shutdowns.
Scheduling user password expiration is handled through AAA (Authentication, Authorization, and Accounting) systems and password aging policies configured on authentication servers (TACACS+, RADIUS, or local user database settings). The scheduler does not manage user password policies or expiration.
Question 222:
Which Cisco ACI feature allows external Layer 3 connectivity to networks outside the ACI fabric?
A) External EPG (L3Out)
B) Internal EPG
C) Bridge Domain only
D) Access port
Answer: A
Explanation:
External EPG (L3Out) in Cisco Application Centric Infrastructure provides Layer 3 connectivity to networks outside the ACI fabric, enabling communication with external routers, WAN connections, internet gateways, or traditional data center networks. L3Out configuration involves designating border leaf switches that connect to external routers, configuring routing protocols (BGP, OSPF, or EIGRP) to exchange routes with external networks, and creating External EPGs that represent external subnets or routing domains. External EPGs function similarly to internal EPGs in that they participate in the ACI contract model—communication between internal EPGs and External EPGs requires explicit contracts defining permitted traffic. This integration allows ACI to extend its policy-based security model to include external network resources.
L3Out configuration encompasses several components that work together to provide external connectivity. The L3Out object defines the routing domain including which VRF it belongs to, which border leaf switches participate, and which routing protocols are used. Node profiles specify which leaf switches serve as border leaves and their router IDs for routing protocols. Interface profiles define the physical or sub-interfaces connected to external routers and their IP addressing. Routing protocol configuration includes OSPF areas and authentication, BGP autonomous systems and neighbor relationships, or EIGRP autonomous system numbers. External EPG configuration specifies which external subnets or routes are represented by each External EPG, either through explicit subnet specification or route filtering. The L3Out supports features like route maps for routing policy control, BGP peer templates for simplified configuration, transit routing where the ACI fabric can pass traffic between different external networks, route summarization to reduce routing table size, and VRF route leaking for inter-VRF communication. Contracts between External EPGs and internal EPGs implement security policy, while contract preferred groups can simplify policy for trusted external networks.
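Pulling those pieces together, an L3Out can be pictured with this illustrative outline; the tenant, VRF, leaf, contract, and prefix values are invented for the example.

Tenant: Customer-A
  L3Out: wan-out   (VRF: prod-vrf, OSPF area 0)
    Node Profile: border-leafs   (leaf 101 and leaf 102 with their router IDs)
      Interface Profile: routed sub-interfaces toward the external WAN routers
    External EPG: all-external   (subnet 0.0.0.0/0 classified as external)
      consumes contract: web-to-external
  EPG web-epg : provides web-to-external

With that contract in place, the web-epg endpoints can be reached from the external networks represented by the External EPG, while all other external-to-internal traffic remains blocked.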
Internal EPG represents endpoints within the ACI fabric such as virtual machines, containers, or physical servers. Internal EPGs cannot provide external Layer 3 connectivity as they are designed for endpoints residing within the fabric. Communication with external networks requires External EPGs configured through L3Out.
Bridge Domain only provides Layer 2 forwarding within the ACI fabric and does not enable external Layer 3 connectivity. While Bridge Domains can have associated subnets for IP addressing, they do not provide routing to external networks. External connectivity requires L3Out configuration in addition to Bridge Domains and VRFs.
Access port connects endpoints like servers or storage to the ACI fabric at Layer 2 but does not provide external Layer 3 routing capabilities. Access ports are configured with EPG associations for internal endpoints, while external routing requires dedicated L3Out configuration on border leaf switches.
Question 223:
What is the purpose of Port Security in Cisco Nexus switches?
A) Encrypt all traffic
B) Limit and control MAC addresses allowed on a port to prevent unauthorized device connection
C) Configure port speed
D) Enable spanning tree
Answer: B
Explanation:
Port Security on Cisco Nexus switches provides a mechanism to limit and control which MAC addresses are allowed to transmit traffic on a specific port, preventing unauthorized devices from connecting to the network and protecting against MAC address spoofing or flooding attacks. Port Security can be configured to allow a specific number of MAC addresses on a port, statically define which MAC addresses are permitted, or dynamically learn MAC addresses up to a configured limit. When an unauthorized MAC address attempts to use the port, Port Security triggers a security violation that can result in various actions including shutting down the port, dropping frames from the violating MAC address, or generating logging and SNMP alerts while still allowing traffic from authorized MAC addresses.
Security implementation involves several configuration options and violation responses. Administrators can set the maximum number of MAC addresses allowed on a port, ranging from a single MAC (for connecting a single server or desktop) to higher limits for connections to other switches or virtualization hosts. MAC addresses can be learned dynamically where the switch remembers the first MAC addresses seen up to the configured limit, configured statically where specific MAC addresses are explicitly allowed, or use sticky learning where dynamically learned addresses are converted to static configuration. Violation actions include shutdown mode which err-disables the interface requiring manual recovery, restrict mode which drops violating traffic but keeps the port operational for allowed MAC addresses, and protect mode which silently drops violating traffic without logging. Port Security integrates with other security features like DHCP Snooping and IP Source Guard to provide comprehensive access control, preventing attacks like MAC flooding that attempt to overflow the switch CAM table or spoofing attacks where unauthorized devices impersonate legitimate MAC addresses.
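A hedged NX-OS example combining a MAC address limit, sticky learning, and the restrict violation action is sketched below; the interface number and the maximum value are chosen only for illustration.

feature port-security

interface Ethernet1/5
  switchport
  switchport port-security
  switchport port-security maximum 2
  switchport port-security violation restrict
  switchport port-security mac-address sticky

With this configuration the port learns up to two MAC addresses, converts them to sticky entries in the configuration, and drops frames from any additional source MAC addresses while keeping the port up and counting the violations.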
Encrypting all traffic is accomplished through technologies like MACsec for Layer 2 encryption, IPsec for Layer 3 encryption, or TLS for application-layer encryption, not through Port Security. Port Security controls which devices can connect to ports based on MAC addresses but does not encrypt traffic passing through those ports.
Configuring port speed is accomplished through interface configuration commands like speed 1000 or auto-negotiation settings, not through Port Security. Port speed determines the transmission rate of the interface (10/100/1000 Mbps, 10GE, 25GE, etc.) while Port Security controls access based on MAC addresses.
Enabling spanning tree is done through STP configuration commands and operates independently of Port Security. Spanning Tree Protocol prevents Layer 2 loops in networks with redundant paths by blocking certain ports. Port Security and STP are separate features that can operate simultaneously on the same ports.
Question 224:
Which Cisco Nexus feature provides automated discovery and configuration of new switches joining the fabric?
A) DHCP
B) POAP (Power-On Auto Provisioning)
C) SNMP
D) Telnet
Answer: B
Explanation:
Power-On Auto Provisioning (POAP) is a Cisco Nexus feature that automates the initial deployment and configuration of new switches by allowing them to discover and download the appropriate software image and configuration file when first powered on. POAP eliminates manual intervention during switch installation, reducing deployment time from hours to minutes and minimizing configuration errors. When a new switch with no startup configuration boots up, it enters POAP mode and sends DHCP requests to obtain an IP address and the location of a configuration script. The DHCP server provides IP addressing and specifies the TFTP, HTTP, or FTP server location where the POAP script resides. The switch downloads and executes this script, which typically downloads the desired NX-OS software version, installs it, downloads the switch configuration file, and applies it automatically.
POAP workflows provide flexible deployment options for different data center environments. The POAP script can be customized using Python or TCL to implement organization-specific logic, such as determining which configuration to apply based on switch serial number, model, or network location. Scripts can perform tasks like downloading and applying licenses, configuring initial management access, establishing connectivity to centralized management systems like DCNM, or joining the switch to ACI fabric. POAP supports zero-touch provisioning where switches can be deployed by installation technicians without networking expertise—they simply rack, cable, and power on the switches, with all configuration applied automatically. The feature integrates with configuration management systems, enabling network automation workflows that track switch inventory, maintain configuration versions, and ensure compliance with standards. POAP can also be used for disaster recovery scenarios where failed switches are replaced with new hardware that automatically configures itself to match the failed switch. The process is logged for audit and troubleshooting purposes.
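For illustration only, the DHCP side of a POAP deployment is often an ordinary scope that hands the new switch an address plus the location and name of the POAP script; in ISC dhcpd syntax (all addresses and the file name are placeholders) it might resemble:

subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.100 192.0.2.150;
  option routers 192.0.2.1;
  # server hosting the POAP script and the script file the switch downloads
  option tftp-server-name "192.0.2.10";
  option bootfile-name "poap_script.py";
}

The downloaded script then drives the rest of the process, fetching the appropriate NX-OS image and per-device configuration from whatever locations the organization has defined.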
DHCP (Dynamic Host Configuration Protocol) is a component used within POAP to provide IP addressing and bootstrap information to the new switch, but DHCP itself does not provide automated configuration. DHCP can supply IP addresses and basic network parameters but cannot download software images or apply complete switch configurations without POAP orchestration.
SNMP (Simple Network Management Protocol) is used for monitoring and managing network devices through polling and traps but does not provide automated provisioning capabilities. SNMP allows management systems to collect operational data and configuration information but does not automate initial device deployment.
Telnet is an insecure remote access protocol that could theoretically be used to manually configure switches but does not provide automation or auto-provisioning capabilities. POAP specifically addresses the need for automated, touchless deployment that Telnet cannot provide.
Question 225:
What is the function of the Cisco ACI Application Profile?
A) Monitor CPU usage
B) Logical container that groups EPGs and defines application network requirements
C) Physical server component
D) Storage configuration
Answer: B
Explanation:
An Application Profile in Cisco Application Centric Infrastructure is a logical container within a tenant that groups related Endpoint Groups and defines the network and policy requirements for a complete application. Application Profiles organize EPGs based on application tiers or functional components, providing a clear hierarchical structure that reflects application architecture. For example, a three-tier application might have an Application Profile called “Web-App” containing three EPGs: web-tier, app-tier, and db-tier. This logical grouping makes it easier to visualize, manage, and troubleshoot application connectivity while ensuring all components of an application are organized together. Multiple Application Profiles can exist within a single tenant, each representing a different application or service.
Application Profiles serve as organizational containers but also provide important context for policy management and service integration. Contracts that define communication between EPGs within the same Application Profile can be scoped appropriately to ensure security policies align with application requirements. Application Profiles integrate with VMware vCenter, Microsoft SCVMM, Kubernetes, and other orchestration platforms to dynamically assign virtual machines, containers, or workloads to the appropriate EPGs based on application attributes or tagging. Service graphs for Layer 4-7 services like load balancers, firewalls, or application delivery controllers are associated with Application Profiles to insert these services into the application traffic path. The Application Profile construct enables application-centric management where network operations align with application delivery teams’ understanding of application architecture. This alignment improves collaboration between network and application teams, reduces miscommunication, and enables faster troubleshooting by providing clear visibility into which network policies apply to which application components. Application Profiles can be cloned or templated to rapidly deploy multiple instances of the same application with consistent networking and security policies.
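Using the three-tier example above, the resulting hierarchy can be sketched as follows; the bridge domain and contract names are invented for illustration.

Tenant: Customer-A
  Application Profile: Web-App
    EPG: web-tier   (bridge domain: web-bd)   consumes web-to-app
    EPG: app-tier   (bridge domain: app-bd)   provides web-to-app, consumes app-to-db
    EPG: db-tier    (bridge domain: db-bd)    provides app-to-db

Cloning this Application Profile for a second instance of the application would reproduce the same EPG structure and contract relationships under a new name.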
Monitoring CPU usage is a system monitoring function typically handled by switch or server management interfaces, SNMP monitoring, or specialized monitoring tools. While ACI provides some infrastructure health monitoring, this is not the purpose of Application Profiles, which are logical policy containers for organizing EPGs.
Physical server component mischaracterizes Application Profiles as hardware. Application Profiles are entirely logical constructs within the ACI policy model that organize EPGs and policies. They exist as configuration objects in APIC and have no physical hardware representation.
Storage configuration involves setting up SAN zoning, LUN provisioning, and storage array management, which is separate from Application Profiles. While ACI can extend to storage networks and EPGs might represent storage traffic, Application Profiles themselves do not configure storage systems. They organize network policies for applications.