Question 76:
What is the primary function of Cisco Fabric Extenders (FEX) in a data center environment?
A) Provide standalone switching functionality
B) Extend the fabric of parent switches to remote locations with simplified management
C) Replace core switches entirely
D) Function as wireless access points
Answer: B
Explanation:
Cisco Fabric Extenders extend the fabric of parent switches to remote locations, enabling simplified management by presenting multiple physical switches as a single logical device while reducing operational complexity and cost in data center architectures. FEX technology addresses the challenge of managing numerous access layer switches by centralizing control and configuration at the parent switch level while distributing physical connectivity through extender units. The architecture consists of parent switches, typically Nexus series devices that provide control plane and management functions, and FEX units that function as remote line cards extending parent switch connectivity to endpoints. FEX operates as a fabric extension rather than standalone switch, forwarding all traffic to the parent switch for switching decisions and policy application without maintaining local MAC address tables or making independent forwarding decisions. This parent-child relationship creates several operational benefits including single point of management where administrators configure the parent switch and settings propagate to all attached FEX units automatically, unified software image with one operating system version across parent and extenders eliminating version compatibility issues, centralized troubleshooting providing consistent diagnostic tools and visibility, and reduced device count in management systems since FEX units are not individually managed entities. The FEX connectivity model supports various topologies including straight-through where each FEX connects to a single parent switch for cost-effective deployment, and dual-homed where FEX connects to two parent switches in virtual port channel configuration providing high availability and load balancing. Communication between parent switch and FEX occurs over fabric links using specialized encapsulation that carries control traffic, data plane traffic, and management communications within the same physical connections. FEX units support multiple interface types including 1GE, 10GE, 25GE, 40GE, and 100GE for server connectivity and fabric uplinks enabling flexible deployment in various scenarios. Power over Ethernet capabilities in some FEX models enable deployment in locations requiring endpoint power delivery like IP phones or wireless access points. The architecture scales efficiently supporting numerous FEX units per parent switch with ratios depending on specific models and bandwidth requirements. FEX deployment is particularly common in top-of-rack scenarios where FEX units installed in server racks provide local connectivity while parent switches reside in centralized network locations, reducing structured cabling requirements and improving cable management. Quality of Service policies, VLANs, and security configurations defined on parent switches apply consistently across all FEX ports ensuring uniform treatment. FEX does not provide standalone switching as it requires parent switch connectivity. It complements rather than replaces core infrastructure. Wireless functionality is unrelated to FEX purpose.
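To make the parent-and-extender relationship concrete, the following is a minimal NX-OS sketch of associating a FEX with its parent switch. The FEX ID 101, interface numbers, and description are arbitrary examples, and the exact feature-enablement command varies by platform:

! Enable FEX support (feature fex on some platforms, install/feature-set fex on others)
feature fex
fex 101
  description Rack-12-FEX
interface ethernet 1/1-2
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
interface port-channel 101
  switchport mode fex-fabric
  fex associate 101
! Host-facing FEX ports then appear on the parent as Ethernet101/1/x and are configured there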
Question 77:
Which Cisco Nexus feature provides automated discovery and configuration of network devices?
A) SPAN
B) Power On Auto Provisioning (POAP)
C) ERSPAN
D) Port Security
Answer: B
Explanation:
Power On Auto Provisioning automates the initial configuration and software image installation process for Cisco Nexus switches, enabling zero-touch deployment where devices automatically discover configuration servers, download appropriate configurations and software, and become operational without manual intervention. POAP addresses the operational challenge of deploying numerous switches in large-scale data centers where manual configuration of each device is time-consuming, error-prone, and inconsistent. The POAP process initiates when a new or reset Nexus switch boots without a startup configuration, triggering automatic configuration discovery through a multi-step workflow. The switch first obtains an IP address through DHCP, with the DHCP server providing not only network addressing but also critical parameters including TFTP or HTTP server location, configuration script filename, and software image location. The switch then downloads and executes a configuration script, typically written in Python or TCL, that contains logic for identifying the specific switch based on serial number, MAC address, or other unique identifiers and applying appropriate configuration. The script can perform conditional actions including downloading the correct software image for the switch model, installing the software if different from current version, downloading the specific configuration file for this switch’s role and location, and applying the configuration automatically. POAP supports complex deployment scenarios through scripting flexibility enabling customized workflows such as downloading configurations from central repositories like configuration management databases, validating switch identity against inventory systems, configuring management interfaces and enabling remote access, setting up initial VLAN, routing, and security configurations, and registering the switch with network management systems. The automation reduces deployment time from hours to minutes per switch while ensuring consistency across all devices in the environment. POAP scripts can incorporate error handling, logging deployment progress to syslog servers, and sending notifications upon successful completion or failure. Security considerations include authenticating configuration sources to prevent unauthorized configuration injection, encrypting configuration transfers to protect sensitive information, and validating script integrity before execution. POAP is particularly valuable during initial data center builds when numerous switches need deployment simultaneously, during refresh cycles when replacing older equipment, and for remote site deployments where sending skilled personnel for basic configuration is inefficient. The feature integrates with orchestration platforms enabling automated network provisioning as part of broader infrastructure automation. POAP supports various switch models across the Nexus product line including 9000, 7000, 5000, and 3000 series. Organizations typically maintain POAP infrastructure including DHCP servers, script repositories, configuration templates, and software image libraries as part of their automated deployment environment. SPAN and ERSPAN are traffic mirroring features. Port Security restricts MAC addresses on interfaces.
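On the switch side there is little to configure, since POAP triggers automatically when no startup configuration exists. The sketch below shows how it is commonly re-triggered in a lab; the boot poap enable command is available only on platforms that support forcing POAP while keeping a saved configuration:

! POAP starts automatically when a switch boots without a startup configuration
switch# write erase
switch# reload
! Some platforms can also force POAP on the next reload while keeping the saved configuration:
switch(config)# boot poap enable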
Question 78:
What is the purpose of the Cisco Nexus vPC (Virtual Port Channel) technology?
A) Provide wireless connectivity
B) Enable a device to form a port channel across two separate switches for redundancy and load balancing
C) Configure virtual machines
D) Manage storage networks
Answer: B
Explanation:
Virtual Port Channel enables a downstream device to form a port channel across two separate physical Nexus switches, providing both redundancy and load balancing while eliminating Spanning Tree Protocol blocked ports and enabling full utilization of available bandwidth. vPC addresses fundamental data center networking challenges including the Spanning Tree limitation where only one path can be active between switches leaving backup links idle, and the single point of failure created when devices connect to only one switch. The vPC architecture involves several key components including vPC peers, the two Nexus switches that function as a single logical port channel endpoint; the vPC peer link connecting the two peers and carrying control traffic and data plane synchronization; the vPC peer keepalive link, typically a separate management network connection that monitors peer health; and vPC member ports, the physical interfaces on each peer that together form the logical port channel to downstream devices. From the downstream device perspective, the vPC appears as a standard port channel to a single logical switch, enabling compatibility with devices that do not have vPC awareness including servers, storage systems, or switches from other vendors. The dual-active nature allows both vPC peers to forward traffic simultaneously, effectively doubling available bandwidth compared to traditional Spanning Tree architectures where one link would be blocked. High availability is achieved through peer redundancy: if one vPC peer fails, the surviving peer continues forwarding traffic for all vPC member ports with minimal disruption. The vPC peer link serves multiple critical functions including synchronizing MAC address tables between peers ensuring both switches know which MAC addresses are reachable through which interfaces, carrying traffic from one peer to the other when the destination is known to be on the remote peer’s member ports, and transporting multicast and broadcast traffic to ensure proper delivery. The peer keepalive link provides an independent heartbeat mechanism for peer health monitoring, with careful design ensuring this link remains available even when the peer link fails to prevent split-brain scenarios. vPC supports various attachment models including dual-homed devices that connect to both vPC peers through the port channel, single-homed (orphan) devices that connect to only one peer and therefore do not benefit from vPC redundancy, and vPC in combination with FEX where Fabric Extenders dual-home to both vPC peers. Configuration consistency requirements mandate that certain parameters be identical on both vPC peers including port channel mode and parameters, VLAN configurations for VLANs carried on vPC, and Spanning Tree settings to ensure proper operation. Quality of Service, security, and routing configurations should also be synchronized between peers. Advanced vPC features include vPC peer-gateway, which allows either peer to route traffic destined to its partner’s router MAC address, and the vPC domain, which identifies the peer pair and groups its vPC parameters under a common domain ID. Wireless connectivity uses different technologies. VM configuration occurs in hypervisors. Storage networks may use vPC but it serves broader purposes.
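A minimal configuration sketch for one vPC peer follows; the domain ID, keepalive addresses, and interface numbers are arbitrary examples, and the second peer needs a mirrored configuration:

feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
interface port-channel 10
  switchport mode trunk
  vpc peer-link
interface port-channel 20
  switchport mode trunk
  vpc 20
interface ethernet 1/20
  switchport mode trunk
  channel-group 20 mode active
! The downstream device simply bundles one uplink to each peer into a standard LACP port channel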
Question 79:
Which protocol does Cisco ACI use for communication between the APIC controller and leaf switches?
A) SNMP
B) OpFlex
C) SMTP
D) FTP
Answer: B
Explanation:
OpFlex is the southbound protocol that Cisco Application Centric Infrastructure uses for communication between the Application Policy Infrastructure Controller and leaf switches, enabling declarative policy distribution where the APIC specifies desired network behavior without dictating specific implementation details. OpFlex represents a policy-based approach fundamentally different from traditional imperative configuration models where administrators configure individual devices with explicit commands. In the OpFlex model, the APIC maintains the policy repository defining application requirements, security rules, quality of service parameters, and network services, then distributes relevant policy subsets to leaf switches which autonomously resolve these policies into concrete forwarding and filtering configurations appropriate for their local context. This declarative approach provides several advantages including abstraction where application teams define requirements without needing detailed network knowledge, scalability with centralized policy management even as the fabric grows, consistency ensuring uniform policy enforcement across the entire fabric, and flexibility allowing the fabric to optimize policy implementation based on local conditions. The OpFlex communication operates in a client-server model with APIC functioning as the policy repository and authority, and leaf switches acting as policy elements that request relevant policies and report status. When endpoints connect to leaf switches, the switches send endpoint identity and location information to APIC, which responds with applicable policies including EPG memberships, contracts defining allowed communication, and quality of service requirements. The leaf switch then programs its hardware forwarding tables to implement these policies in the data plane using ASIC capabilities for line-rate policy enforcement. OpFlex supports policy resolution at the point of enforcement meaning leaf switches receive policy intent and translate it into specific actions matching their hardware capabilities and local topology, enabling vendor-neutral policy distribution where different switch models can implement the same policy using their native capabilities. The protocol includes bidirectional communication with APIC pushing policy updates when changes occur and leaf switches reporting endpoint events, health status, and statistics. OpFlex operates over TLS-encrypted connections ensuring confidentiality and integrity of policy communications. The protocol handles various policy elements including endpoint groups defining collections of endpoints with common requirements, contracts specifying communication rules between EPGs, filters defining specific protocols and ports allowed in contracts, and bridge domains providing layer 2 connectivity properties. OpFlex enables multi-tenancy with each tenant’s policies isolated and distributed only to leaf switches hosting that tenant’s endpoints. The protocol supports policy updates with transactional semantics ensuring consistent policy application across the fabric and handling failure scenarios gracefully. OpFlex complements VXLAN in ACI where OpFlex handles control plane policy distribution and VXLAN provides data plane encapsulation for multi-tenant traffic. The separation of policy distribution through OpFlex and data plane operation through VXLAN enables clean architectural layering. 
OpFlex has been published as an IETF draft, opening the door to multi-vendor adoption, though Cisco ACI remains the primary implementation. SNMP is for monitoring. SMTP handles email. FTP transfers files and is not used for ACI control.
Question 80:
What is the primary benefit of Cisco Nexus Multi-Site Orchestrator in ACI deployments?
A) Manage single data center only
B) Orchestrate policy and network configuration across multiple ACI fabrics in different locations
C) Replace APIC controllers
D) Provide wireless management
Answer: B
Explanation:
Cisco Nexus Multi-Site Orchestrator provides centralized orchestration of policy and network configuration across multiple ACI fabrics located in different data centers or geographic regions, enabling consistent application deployment, seamless workload mobility, and unified management while maintaining local autonomy for each fabric. MSO addresses the challenge of operating multiple independent ACI fabrics where managing each fabric separately creates configuration inconsistencies, application deployment complexity, and operational overhead. The Multi-Site architecture operates as a management layer above individual APIC clusters, with MSO maintaining a superset of policies and configurations that span multiple sites while each APIC continues to manage its local fabric. MSO orchestrates several critical multi-site capabilities including stretched EPGs that exist across multiple fabrics allowing application components in different sites to communicate as if in the same fabric, inter-site contracts defining security policies for communication between fabrics, stretched bridge domains providing layer 2 connectivity across sites when required, and coordinated external connectivity ensuring consistent external network integration. The orchestrator automates complex multi-site tasks such as synchronizing tenant configurations across fabrics, managing inter-site routing and VXLAN tunnels, coordinating policy changes to prevent inconsistencies, and orchestrating disaster recovery failover scenarios. Each fabric maintains operational independence continuing to function if MSO becomes unreachable or if inter-site connectivity fails, with local APIC controllers managing their fabric autonomously. MSO supports various deployment models including active-active where workloads run simultaneously in multiple sites with load distribution, active-standby where one site serves as disaster recovery for another, and hybrid scenarios combining stretched and site-local resources. The platform enables application mobility allowing workloads to migrate between sites while maintaining network connectivity and policy enforcement. Inter-site policy management includes defining which EPGs extend across sites, specifying contracts between sites, configuring quality of service for inter-site traffic, and establishing security policies for cross-site communication. MSO provides unified visibility showing application topology spanning multiple fabrics, displaying inter-site connectivity status, and monitoring cross-site traffic flows. The orchestrator integrates with automation and orchestration platforms through APIs enabling programmatic multi-site management. Configuration management includes templates for common multi-site scenarios, validation preventing configurations that would cause operational issues, and audit logging tracking all multi-site policy changes. MSO supports disaster recovery workflows including automated failover when sites become unavailable, controlled migration for planned maintenance, and traffic engineering for load distribution. Best practices for MSO deployment include designing multi-site topology carefully considering latency requirements and bandwidth capacity, planning EPG and bridge domain placement balancing locality with mobility, implementing robust connectivity between sites with redundant paths, and testing failover scenarios validating automated recovery. MSO does not replace APIC but works with multiple APICs. It is specific to data center ACI fabric management. 
Wireless management uses different systems.
Question 81:
Which Cisco Nexus feature provides the ability to create multiple virtual device contexts (VDCs)?
A) VRF
B) Virtual Device Context
C) VLAN
D) Port Channel
Answer: B
Explanation:
Virtual Device Context is a Cisco Nexus feature that partitions a single physical Nexus switch into multiple logical devices, each functioning as an independent virtual switch with dedicated resources, separate management domains, and isolated failure boundaries. VDC technology enables multi-tenancy within a single chassis providing the operational isolation of multiple physical switches while reducing hardware costs, power consumption, and rack space requirements. Each VDC operates as a separate logical entity with its own dedicated resources including allocated physical interfaces that belong exclusively to that VDC, guaranteed CPU and memory resources ensuring one VDC cannot starve others, independent management plane with separate CLI, SSH, and SNMP access, distinct routing and switching tables maintaining isolation, and separate configuration files allowing different administrative teams. VDC benefits include resource consolidation using one physical device for multiple logical functions, administrative separation enabling different teams to manage their VDCs independently, failure isolation where issues in one VDC do not affect others, and operational flexibility allowing different software features or versions in different VDCs. Common VDC use cases include service provider environments where different customers receive dedicated VDCs, enterprise deployments separating production, development, and test environments, multi-department organizations where each department operates its own virtual network, and security segmentation isolating sensitive workloads from general traffic. VDC architecture includes a default VDC that always exists and manages the physical chassis, and user-created VDCs for specific purposes or tenants. The default VDC retains certain management responsibilities including hardware monitoring, environmental control, and physical interface allocation to other VDCs. VDC configuration involves creating the VDC, allocating physical interfaces from the default VDC to the new VDC, assigning resource limits including maximum routes and VLANs, and configuring independent settings for routing protocols, VLANs, and features within the VDC. Resource allocation can be modified dynamically allowing rebalancing as requirements change. VDC limitations include that allocated interfaces cannot be shared between VDCs without reconfiguration, some advanced features may not be available in all VDCs depending on licenses, and the total resource pool is finite requiring planning to avoid over-subscription. High availability considerations include that VDC failover occurs independently when switches operate in HA pairs with VDCs failing over as units. Not all Nexus models support VDC with the feature typically available on modular chassis like the 7000 series rather than fixed-configuration switches. VDC management includes monitoring resource utilization per VDC, troubleshooting isolated to specific VDCs, and coordinating between VDC administrators when integration is required. Security benefits include attack surface reduction where compromise of one VDC does not grant access to others, compliance support through logical separation of regulated workloads, and administrative least privilege where admins only access their assigned VDCs. VRF provides routing instance separation within a VDC. VLANs provide layer 2 segmentation. Port Channel bundles links for bandwidth and redundancy.
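From the default VDC on a chassis that supports the feature, creating and entering a new VDC might look roughly like this; the VDC name, interface range, and resource limits are arbitrary examples:

switch(config)# vdc DEV
switch(config-vdc)# allocate interface ethernet 2/1-8
switch(config-vdc)# limit-resource vlan minimum 16 maximum 256
switch(config-vdc)# exit
switch# switchto vdc DEV
! The new VDC runs its own setup dialog and keeps a completely separate configuration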
Question 82:
What is the purpose of Cisco Data Center Network Manager (DCNM)?
A) Manage wireless networks only
B) Provide centralized management, monitoring, and provisioning for Nexus switches
C) Configure individual server operating systems
D) Manage SAN storage exclusively
Answer: B
Explanation:
Cisco Data Center Network Manager provides centralized management, monitoring, and provisioning capabilities for Cisco Nexus switches and data center network infrastructure, offering a unified platform for configuring, operating, and troubleshooting data center networks at scale. DCNM addresses the complexity of managing numerous switches across potentially multiple data centers by providing consistent management interface, automated provisioning workflows, comprehensive monitoring and analytics, and topology visualization. The platform supports multiple operational modes including LAN Fabric mode for managing VXLAN EVPN fabrics, SAN mode for managing Fibre Channel SAN environments, and classic Ethernet mode for traditional Layer 2/3 networks. DCNM functionality encompasses several key areas with configuration management providing centralized configuration creation, validation, and deployment across multiple switches simultaneously, using templates for consistency and compliance with organizational standards, and maintaining configuration version control with rollback capabilities. Monitoring capabilities include real-time visibility into network health showing interface status, utilization, and errors, topology discovery automatically mapping network connectivity and displaying graphical representations, performance analytics tracking trends and identifying anomalies, and alerting when thresholds are exceeded or failures occur. DCNM supports fabric lifecycle management for VXLAN EVPN deployments including fabric creation with automated underlay and overlay configuration, fabric expansion adding new switches to existing fabrics seamlessly, fabric operations managing ongoing configuration changes, and fabric monitoring tracking fabric health and performance. The platform provides automation features including zero-touch provisioning for new switches through integration with POAP, consistent policy enforcement across the fabric, and programmability through REST APIs enabling integration with broader orchestration systems. DCNM’s topology visualization displays network connectivity in intuitive graphical formats showing physical connections, logical relationships, and overlay networks with the ability to drill down from high-level views to detailed interface information. The platform performs configuration compliance checking comparing actual device configurations against defined standards and highlighting deviations requiring remediation. Change management workflow capabilities include staging configuration changes, validating changes before deployment, scheduling maintenance windows, and tracking change history for audit purposes. DCNM supports multi-tenancy in VXLAN fabrics managing tenant network configurations, VNI assignments, and routing policies. Integration with Cisco APIC enables hybrid management scenarios where ACI fabrics and Nexus-based VXLAN fabrics coexist. Performance and capacity planning features analyze utilization trends, forecast capacity needs, and recommend infrastructure expansion. Troubleshooting tools include flow analysis tracing packet paths through the network, and diagnostic data collection from multiple switches simultaneously. DCNM deployment options include on-premises installation on dedicated servers or virtual machines, and SaaS delivery through Cisco Intersight for cloud-based management. The platform scales to manage thousands of switches across multiple sites from a single interface. 
Role-based access control enables delegation of management tasks to appropriate teams while maintaining security separation. While DCNM includes SAN management capabilities, its primary focus is broader network infrastructure management. Server OS configuration is outside DCNM scope. Wireless management uses different platforms.
Question 83:
Which Cisco Nexus command displays the current vPC status and configuration?
A) show vpc
B) show vlan
C) show interface
D) show ip route
Answer: A
Explanation:
The show vpc command displays comprehensive information about Virtual Port Channel status and configuration including vPC domain parameters, peer link status, peer keepalive connectivity, vPC member ports, and consistency checks, providing essential visibility for operating and troubleshooting vPC environments. This command is the primary diagnostic tool for vPC deployments revealing the operational state of all vPC components and identifying configuration inconsistencies that could cause operational issues. The command output includes multiple sections with vPC domain information showing domain ID identifying the vPC relationship, role indicating whether the switch is primary or secondary peer, peer status reflecting whether the peer is reachable and operational, and peer keepalive status showing the health monitoring link state. The peer link section displays physical interfaces comprising the peer link, operational status indicating whether the peer link is up, and VLAN information showing which VLANs are allowed across the peer link. The vPC member ports section lists all port channels configured as vPC, showing their vPC numbers, member interfaces on the local switch, operational status, and consistency status indicating whether configurations match between peers. Consistency checking is critical for vPC operation with the command displaying global consistency parameters that must match between peers including STP mode and settings, port channel mode and parameters, and VLAN configuration for VLANs on vPC, as well as interface-level consistency for parameters on vPC member ports. The command reveals type-1 inconsistencies that prevent vPC from coming up such as mismatched global parameters, and type-2 inconsistencies that allow vPC operation but may cause unexpected behavior like mismatched interface configurations. Additional information includes vPC statistics showing packets forwarded across the peer link, orphan ports listing interfaces that are not part of vPC, and vPC role indicating which peer is primary based on configured priority. Troubleshooting with show vpc enables identifying common issues including peer keepalive failures indicating connectivity problems on the keepalive link, peer link down conditions preventing proper vPC operation, consistency check failures revealing configuration mismatches requiring correction, and individual vPC member ports in down state despite physical connectivity suggesting configuration issues. The command supports various options including show vpc brief providing condensed summary, show vpc consistency-parameters displaying detailed consistency check results, show vpc peer-keepalive showing keepalive link details, and show vpc statistics displaying packet counters. Regular monitoring of vPC status through this command enables proactive identification of problems before they impact services. Best practices include verifying vPC status after configuration changes, checking consistency parameters when adding new vPC member ports, monitoring peer link utilization to ensure adequate capacity, and reviewing vPC statistics to understand traffic patterns. The command is available on both vPC peers providing visibility into the relationship from each switch’s perspective. Understanding command output is essential for data center network operations teams managing Nexus environments with vPC deployments. Show vlan displays VLAN information. Show interface shows interface status. Show ip route displays routing table entries.
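To make the show vpc workflow concrete, a typical health check run from either peer might use the following exec commands (output omitted):

switch# show vpc
switch# show vpc brief
switch# show vpc consistency-parameters global
switch# show vpc peer-keepalive
switch# show vpc orphan-ports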
Question 84:
What is the primary function of Cisco Nexus FabricPath?
A) Provide routing between VLANs
B) Create a layer 2 multi-path network using IS-IS routing protocol
C) Manage QoS policies
D) Configure access control lists
Answer: B
Explanation:
Cisco Nexus FabricPath creates a layer 2 multi-path network that uses the IS-IS routing protocol to build loop-free topologies enabling all available paths for forwarding, dramatically increasing bisectional bandwidth and eliminating Spanning Tree Protocol limitations in data center layer 2 networks. FabricPath addresses traditional Ethernet challenges including Spanning Tree blocking redundant paths leaving bandwidth unused, limited scalability with large layer 2 domains, and slow convergence during topology changes. The technology combines layer 2 forwarding simplicity with layer 3 routing benefits providing the any-to-any connectivity applications expect from layer 2 while leveraging multiple paths like layer 3 routing. FabricPath operation uses IS-IS as the control plane protocol running between FabricPath-enabled switches to discover topology, exchange switch and endpoint information, and calculate optimal forwarding paths. IS-IS builds a complete network map enabling each switch to compute loop-free paths to all destinations using equal-cost multi-path when multiple paths exist. The data plane uses MAC-in-MAC encapsulation where original Ethernet frames are encapsulated within FabricPath headers containing source and destination switch IDs rather than MAC addresses, enabling routing based on switch topology rather than endpoint MAC addresses. This approach prevents MAC address table scalability issues because switches only need to learn MAC addresses for locally connected endpoints plus switch IDs for the FabricPath network rather than all MAC addresses in the entire domain. FabricPath supports ECMP distributing traffic across multiple equal-cost paths based on conversation hashing ensuring all paths carry traffic while maintaining per-flow ordering. The technology provides fast convergence with IS-IS detecting topology changes quickly and reconverging in milliseconds, significantly faster than Spanning Tree which can take seconds. FabricPath enables creation of large layer 2 domains supporting thousands of endpoints while maintaining stability through the distributed control plane. The architecture supports hierarchical designs with FabricPath cores connecting to access switches running traditional Ethernet or vPC, enabling incremental deployment without requiring full network replacement. FabricPath-to-classical Ethernet boundaries called edge ports connect traditional Ethernet devices to the FabricPath network with the edge switch learning MAC addresses from classical Ethernet side and advertising them into FabricPath. Multi-destination traffic like broadcasts, unknown unicast, and multicast use multi-destination trees calculated by IS-IS to deliver traffic efficiently across the fabric. FabricPath supports virtual networks through conversational learning and forwarding tables per VLAN maintaining multi-tenant separation. Configuration simplicity is a FabricPath benefit with minimal configuration required to enable the feature and automatic topology discovery eliminating manual path configuration. FabricPath is primarily deployed on Nexus 7000 and some 5000/6000 series switches though its use has declined with the emergence of VXLAN EVPN as the preferred layer 2 overlay solution offering additional benefits including better multi-site support and broader ecosystem. Inter-VLAN routing uses SVIs or routers. QoS management uses separate mechanisms. ACLs provide security filtering independent of FabricPath.
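A minimal FabricPath enablement sketch on a supported switch follows; the switch-id, VLAN, and interface numbers are arbitrary examples:

install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 11
vlan 100
  mode fabricpath
interface ethernet 1/1
  switchport mode fabricpath
! IS-IS adjacencies, topology discovery, and multi-destination trees form automatically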
Question 85:
Which Cisco ACI object represents a collection of endpoints that require similar network policies?
A) Bridge Domain
B) Endpoint Group (EPG)
C) Tenant
D) Context
Answer: B
Explanation:
Endpoint Groups in Cisco Application Centric Infrastructure represent logical collections of endpoints that require similar network policies including connectivity rules, quality of service, and security controls, forming the fundamental unit for policy application in the ACI fabric. EPGs group endpoints based on application tiers, functions, or security zones regardless of their physical location or network topology, enabling policy-based networking where administrators define what endpoints need rather than how to configure individual network devices. EPG membership can be determined through various methods including static assignment where administrators explicitly associate specific interfaces or VLANs to EPGs, dynamic assignment using VMware vCenter or Microsoft SCVMM integration where virtual machines are automatically placed in EPGs based on attributes like port groups or network adapters, and IP-based assignment where endpoints are classified into EPGs based on source IP addresses. The EPG abstraction enables application-centric thinking where network policies align with application architecture rather than network topology, allowing teams to specify that web tier endpoints in an EPG can communicate with application tier endpoints in another EPG without configuring individual switch ACLs or firewall rules. EPG-to-EPG communication is controlled through contracts which define the protocols, ports, and directionality of permitted traffic, implementing micro-segmentation where each EPG pair requires explicit contract authorization. This default-deny security posture ensures that only specified communication is allowed. EPGs belong to bridge domains determining layer 2 properties including MAC learning, flooding behavior, and unicast routing enablement, and to application profiles that group related EPGs representing a complete application. Multiple EPGs can exist in a single bridge domain enabling communication within the layer 2 domain while contracts control cross-EPG traffic even within the same subnet. EPGs map to VLANs or VXLAN network identifiers when traffic enters or leaves the ACI fabric, with encapsulation managed automatically by the fabric. Quality of service policies can be assigned to EPGs ensuring consistent treatment for all endpoints in the group. EPG monitoring provides visibility into endpoint members, contracts applied, and traffic statistics aggregated at the EPG level. EPG design considerations include balancing granularity where too few EPGs reduce security effectiveness through overly broad grouping while too many EPGs create management complexity, aligning EPGs with application architecture for intuitive policy definition, and planning for multi-tier applications where each tier receives its own EPG. EPG stretch across multiple leaf switches enables endpoint mobility where workloads can move while retaining their EPG membership and associated policies. Microsegmentation within EPGs is possible through uSeg EPGs that further subdivide based on attributes like IP addresses or VM attributes. EPG analytics track member counts, policy hits, and health scores supporting operational visibility and troubleshooting. Best practices include naming EPGs to reflect their application context, documenting EPG purposes and membership criteria, and regularly reviewing EPG membership as applications evolve. Bridge Domains provide layer 2 properties. Tenants provide administrative isolation. Contexts are private layer 3 spaces equivalent to VRFs.
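Purely as an illustration of the abstraction, an EPG might be created through the APIC's NX-OS-style CLI roughly as follows; exact keywords vary by release, and the tenant, application profile, bridge domain, and contract names here are hypothetical:

apic1# configure
apic1(config)# tenant Prod
apic1(config-tenant)# application Web-App
apic1(config-tenant-app)# epg web-tier
apic1(config-tenant-app-epg)# bridge-domain member Prod-BD
apic1(config-tenant-app-epg)# contract consumer web-to-app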
Question 86:
What is the purpose of Cisco Nexus Scheduler feature?
A) Schedule employee work shifts
B) Schedule configuration changes and maintenance tasks to execute at specified times
C) Manage meeting room reservations
D) Schedule backups of virtual machines
Answer: B
Explanation:
Cisco Nexus Scheduler enables scheduling of configuration changes and maintenance tasks to execute automatically at specified times, supporting operational workflows that require actions during maintenance windows, regular recurring tasks, or precise timing coordination across multiple devices. The scheduler addresses operational needs including executing changes during low-traffic periods minimizing user impact, performing regular maintenance tasks without manual intervention, coordinating changes across multiple switches for consistent timing, and ensuring critical tasks execute reliably without depending on administrator availability. Scheduler functionality includes defining jobs that contain commands to execute, scheduling when jobs run using time-based triggers, and logging job execution results for audit and troubleshooting. Job definitions specify one or more CLI commands that execute sequentially, enabling complex multi-step operations like changing configurations, clearing counters, or generating reports. Time specifications use flexible formats including one-time execution at a specific date and time for unique maintenance events, recurring execution with daily, weekly, or monthly patterns for regular tasks, and time-delta execution running at specific intervals. Scheduler supports various use cases including configuration deployment scheduling complex configuration changes for maintenance windows, log collection gathering diagnostic information at regular intervals, interface flapping executing interface shut/no shut sequences for cable testing, and housekeeping performing routine maintenance like clearing log files. The scheduler maintains job execution history recording when jobs ran, whether they completed successfully, and command output enabling verification that scheduled tasks executed as intended. Multiple jobs can be scheduled independently with the scheduler managing execution queues and resolving conflicts when jobs overlap. Scheduler configuration includes creating job definitions with command sequences, defining schedules specifying execution timing, and associating jobs with schedules. Job commands execute in the context of the user who created the job inheriting that user’s privileges and access controls. Scheduler management commands include displaying configured jobs and schedules, showing job execution history, manually triggering jobs outside their normal schedule for testing, and deleting obsolete jobs or schedules. Operational considerations include testing jobs before scheduling to ensure commands execute correctly, scheduling jobs during maintenance windows when change risk is acceptable, monitoring job execution results through logs or alerts, and coordinating scheduled jobs across multiple switches when synchronization is important. Common scheduled tasks include VLAN pruning removing unused VLANs periodically, log rotation managing log file sizes, configuration backups copying running configurations to archive locations, and statistics collection gathering performance data for analysis. The scheduler is not a comprehensive automation platform but provides essential scheduling capabilities for routine operational tasks. Complex automation workflows typically use external orchestration tools like Ansible or Python scripts that can leverage scheduler for time-based triggering. Scheduler persistence ensures schedules survive switch reboots reactivating upon system restart. 
Security considerations include restricting scheduler job creation to authorized administrators and auditing scheduled job creation and modification. Employee scheduling is an HR function. Meeting room reservations use facility management systems. VM backups use hypervisor or backup software.
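As a minimal sketch of the job-plus-schedule model described above (the job name, destination file, and time are arbitrary examples):

feature scheduler
scheduler job name nightly-backup
  copy running-config bootflash:nightly-backup.cfg
exit
scheduler schedule name nightly
  job name nightly-backup
  time daily 02:00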
Question 87:
Which Cisco Nexus feature provides the ability to mirror traffic for monitoring and analysis?
A) Port Channel
B) SPAN (Switched Port Analyzer)
C) VDC
D) VRF
Answer: B
Explanation:
Switched Port Analyzer provides traffic mirroring capabilities on Cisco Nexus switches, enabling traffic from source interfaces to be copied to destination interfaces where monitoring tools, network analyzers, or intrusion detection systems can capture and analyze the traffic for troubleshooting, security monitoring, or performance analysis. SPAN addresses the operational requirement to observe network traffic without disrupting data flows, supporting use cases including troubleshooting application issues by capturing traffic between endpoints, security monitoring detecting suspicious activities or policy violations, performance analysis measuring response times and throughput, and compliance monitoring documenting specific communications for regulatory requirements. SPAN operates by configuring sessions that define traffic sources and monitoring destinations. Source configuration specifies what traffic to mirror including ingress traffic arriving on interfaces, egress traffic leaving interfaces, or both directions, and can include physical interfaces, port channels, VLANs, or specific VLAN traffic on interfaces. Destination configuration defines where mirrored traffic is sent, typically a physical interface connected to monitoring equipment. SPAN maintains separate sessions allowing multiple simultaneous monitoring activities with different sources and destinations. The feature supports filtering to mirror only traffic matching specific criteria including VLAN filters mirroring only traffic in particular VLANs, and ACL-based SPAN mirroring only traffic matching access control list criteria enabling precise selection of interesting traffic. Nexus switches support various SPAN types including local SPAN where source and destination are on the same switch, and RSPAN (Remote SPAN) extending mirroring across multiple switches by carrying mirrored traffic over special RSPAN VLANs enabling monitoring of traffic from remote switches. ERSPAN (Encapsulated Remote SPAN) provides even greater flexibility by encapsulating mirrored traffic in GRE enabling transmission across routed networks to monitoring tools anywhere in the infrastructure. SPAN session configuration includes creating the session, defining source interfaces or VLANs, specifying the destination interface, and optionally configuring filters. Session limits vary by Nexus platform with switches supporting specific maximum numbers of SPAN sessions simultaneously. SPAN impact considerations include ensuring destination interfaces have sufficient bandwidth to handle mirrored traffic volume which can be substantial especially when mirroring multiple high-speed sources, monitoring destination interface buffer utilization to prevent packet drops, and understanding that SPAN is best effort without guarantees of complete packet capture during congestion. Operational best practices include using filters to limit mirrored traffic to relevant flows reducing volume, selecting appropriate destinations that match or exceed source bandwidth, avoiding continuous SPAN sessions when not actively monitoring to conserve resources, and documenting SPAN configurations for troubleshooting and compliance. SPAN differs from network taps which physically split signals providing completely passive monitoring, offering the advantage of flexibility without requiring additional hardware but with the tradeoff of potential packet drops under heavy load. 
Advanced monitoring scenarios combine SPAN with inline network packet brokers that aggregate, filter, and distribute mirrored traffic to multiple analysis tools optimizing monitoring infrastructure. SPAN supports troubleshooting workflows including isolating problematic application traffic, validating QoS marking and policy enforcement, detecting unauthorized protocols or applications, and capturing traffic samples for baseline establishment. Security monitoring uses SPAN to feed intrusion detection systems, data loss prevention tools, and security information event management platforms enabling threat detection without inline performance impact. Compliance scenarios leverage SPAN for recording specific communications as audit evidence. SPAN configuration persists across reboots maintaining monitoring continuity. Role-based access control restricts SPAN configuration to authorized administrators preventing unauthorized monitoring. Port Channel bundles links for bandwidth and redundancy. VDC partitions switches logically. VRF separates routing instances.
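A minimal local SPAN sketch follows; the interface numbers are arbitrary examples, and the destination port connects to the analyzer rather than carrying production traffic:

interface ethernet 1/48
  switchport monitor
monitor session 1
  source interface ethernet 1/1 both
  destination interface ethernet 1/48
  no shut
! VLAN sources (for example, source vlan 100 rx) and ACL filters can narrow what is mirrored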
Question 88:
What is the primary purpose of Cisco Nexus Object Tracking?
A) Track physical inventory of network devices
B) Monitor the state of network objects and trigger actions based on state changes
C) Track user login sessions
D) Monitor shipping packages
Answer: B
Explanation:
Cisco Nexus Object Tracking monitors the state of various network objects and triggers configured actions when state changes occur, providing dynamic responsiveness to network conditions and enabling automated failover, route manipulation, and high availability scenarios. Object tracking addresses situations requiring network behavior to adapt based on changing conditions such as interface status, routing protocol states, or reachability to critical destinations. The tracking framework supports multiple object types including interface line protocol states monitoring whether interfaces are operationally up or down, IP SLA operations tracking reachability and performance to destinations, routing protocol states monitoring protocol neighbor relationships, and boolean combinations enabling complex tracking conditions based on multiple inputs. Tracked objects maintain binary states of up or down based on monitored conditions, with state transitions triggering notifications to registered clients that take appropriate actions. Common use cases include floating static routes where static routes track reachability objects and are installed only when tracking objects are up, enabling automatic failover from primary to backup paths, VRRP or HSRP priority manipulation where gateway priority depends on tracked objects creating dynamic master election based on upstream connectivity, policy routing adaptation where next-hop selection tracks object states enabling intelligent path selection, and interface dampening where interfaces track objects preventing flapping during instability. Object tracking configuration involves defining track objects specifying what to monitor and conditions for up state, and configuring clients to reference track objects and define actions when state changes. Track objects can monitor single items like a specific interface or IP SLA operation, or combine multiple items using boolean logic with track list all requiring all members to be up, track list any requiring at least one member to be up, or threshold specifications requiring minimum numbers of members to be up. IP SLA integration enables sophisticated reachability tracking beyond simple ping, including measuring response times, verifying specific application behaviors, and tracking through intermediate hops. The tracking system provides dampening mechanisms preventing rapid state transitions during instability by requiring conditions to persist for minimum durations before declaring state changes. Track state change notifications propagate immediately to clients enabling rapid convergence with failover occurring within seconds of detecting failures. Object tracking supports high availability architectures where multiple paths exist with primary paths tracked and backup paths activated automatically upon tracking object failure. Network designs leverage tracking for intelligent routing where static or policy routes select optimal paths dynamically based on performance or reachability metrics from tracked IP SLA operations. Security scenarios use tracking to detect critical infrastructure failures and trigger protective responses like isolating compromised segments. Troubleshooting object tracking involves verifying tracked object states, reviewing state change history, and validating client responses to state transitions. 
Best practices include defining appropriate tracking intervals balancing responsiveness with overhead, implementing dampening to prevent reaction to transient failures, testing failover scenarios validating that tracking triggers expected behaviors, and monitoring tracking object states as part of operational visibility. Object tracking complements routing protocols enabling policy-based path selection beyond standard routing metrics. Physical inventory tracking is asset management. User session tracking is authentication monitoring. Package tracking is logistics management.
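A minimal sketch combining interface tracking, IP SLA reachability tracking, and a floating static route follows; the object numbers, probe target, and next-hop addresses are arbitrary examples:

feature sla sender
ip sla 5
  icmp-echo 203.0.113.1
ip sla schedule 5 life forever start-time now
track 10 interface ethernet 1/1 line-protocol
track 20 ip sla 5 reachability
! The primary default route is withdrawn if the probe fails; the backup route (AD 250) then takes over
ip route 0.0.0.0/0 203.0.113.1 track 20
ip route 0.0.0.0/0 198.51.100.1 250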
Question 89:
Which Cisco ACI component provides external connectivity to networks outside the ACI fabric?
A) Spine Switch
B) Leaf Switch
C) APIC
D) Border Leaf
Answer: D
Explanation:
Border Leaf switches in Cisco Application Centric Infrastructure provide external connectivity to networks outside the ACI fabric including traditional data center networks, WAN connections, internet services, and other ACI fabrics, functioning as the integration point between the ACI policy-based fabric and external Layer 2 and Layer 3 networks. Border leafs are standard ACI leaf switches configured with additional external connectivity capabilities, maintaining all normal leaf functions while also supporting external network connections. External connectivity configuration differs from internal fabric connectivity by requiring creation of Layer 3 Outside or Layer 2 Outside connections that extend the ACI policy model to external networks. Layer 3 Outside connections provide routed connectivity running dynamic routing protocols including OSPF, BGP, or EIGRP with external routers exchanging routes between the ACI fabric and external networks. The border leaf acts as the routing boundary establishing neighbor relationships with external routers, advertising fabric internal routes to external networks, and importing external routes into the fabric. Layer 2 Outside connections extend Layer 2 domains from the ACI fabric to external switches supporting scenarios where broadcast domains must span between ACI and traditional networks. Border leaf configuration involves creating external routed networks or Layer 2 external networks in the APIC defining the logical external network, configuring physical or port channel interfaces on border leafs connecting to external devices, establishing routing protocol parameters when using Layer 3 connectivity, and associating external networks with contracts enabling EPG-to-external-network communication through ACI policy model. External EPGs represent destinations outside the fabric enabling policy definition for traffic between internal EPGs and external networks. Transit routing through the ACI fabric is supported allowing ACI to route between different external networks providing data center network aggregation. Border leaf high availability uses standard ACI mechanisms including border leaf redundancy where multiple border leafs provide external connectivity with routing protocol failover, vPC for Layer 2 external connections ensuring link-level redundancy, and routing protocol features like BFD for fast failure detection. Multi-site ACI deployments use border leafs for inter-site connectivity establishing VXLAN tunnels between fabrics enabling stretched EPGs and inter-site routing. Internet edge scenarios place border leafs at network perimeter connecting to internet service providers and implementing security policies for internet-facing applications. WAN integration connects border leafs to WAN routers enabling branch connectivity or cloud on-ramps. Border leaf traffic flow shows external traffic entering ingress border leaf, being classified into external EPG, having contracts applied between external EPG and destination internal EPG, being forwarded to destination leaf, and returning through egress border leaf potentially the same or different from ingress. Quality of service policies applied to traffic entering or leaving the fabric are enforced on border leafs. Security policies including contracts and filters apply to external traffic identically to internal traffic maintaining consistent policy model. 
Border leaf placement considerations include geographic distribution when external connections terminate in different locations, capacity planning to ensure border leafs can handle the aggregate external traffic, and deciding between dedicated border leafs and leafs that combine border and compute roles. Spine switches provide the fabric interconnect. Standard leaf switches connect endpoints. APIC is the controller.
Question 90:
What is the function of Cisco Nexus Rollback feature?
A) Physically roll back network cables
B) Revert configuration to a previous checkpoint
C) Roll back software versions
D) Undo physical changes to the data center
Answer: B
Explanation:
Cisco Nexus Rollback enables reverting switch configuration to a previous checkpoint, providing safety net for configuration changes by allowing quick recovery when changes cause unexpected problems or need to be undone. The rollback feature addresses operational challenges including configuration errors that disrupt services requiring rapid restoration, changes that have unintended side effects needing reversal, and the need to verify changes in production with ability to back out quickly. Rollback operates through checkpoint mechanism where configuration snapshots are captured at specific points in time creating restore points that can be referenced later. Checkpoint creation can be manual where administrators explicitly create checkpoints before making changes, or automatic with the system creating checkpoints at defined intervals or events. Each checkpoint receives a unique name or identifier enabling specific checkpoint selection during rollback operations. The rollback process compares the current running configuration with the checkpoint configuration, determines the differences, and generates the CLI commands necessary to revert to the checkpoint state. Rollback application can be immediate where configuration changes take effect instantly, or preview mode showing the commands that would execute without applying them enabling verification before actual rollback. Atomic rollback ensures either complete success with all changes applied or complete failure with no changes applied, preventing partial rollback states that could leave configuration inconsistent. Rollback supports various scopes including specific features where rollback affects only certain configuration sections like VLAN or routing protocol configuration, or complete rollback reverting all configuration differences. Checkpoint management includes creating checkpoints manually before planned changes, listing available checkpoints showing creation dates and descriptions, displaying checkpoint contents viewing configuration at checkpoint creation time, comparing checkpoints identifying differences between configuration states, and deleting old checkpoints managing storage space. Typical rollback workflow involves creating checkpoint before making changes establishing restore point, implementing configuration changes through normal processes, validating changes ensuring they work as expected, and either deleting checkpoint if changes are successful or rolling back to checkpoint if problems occur. Rollback limitations include that it affects only configuration with no impact on operational state like learned MAC addresses or routing protocol neighbors which must re-establish after rollback, and that some features may not support rollback requiring manual configuration reversal. Best practices include always creating checkpoints before significant changes, using descriptive checkpoint names indicating the configuration state or change purpose, testing rollback procedures in non-production environments, maintaining limited checkpoint numbers to avoid consuming excessive storage, and documenting which checkpoints represent known-good configurations. Rollback differs from configuration archive which maintains historical configuration versions for audit and comparison but may not provide one-command restoration. Change management processes incorporate rollback as risk mitigation enabling rapid recovery from failed changes. 
Rollback is particularly valuable during maintenance windows where time pressure demands quick recovery if changes fail. The feature complements configuration validation tools by providing a recovery mechanism when validation misses issues. Physical cable management is unrelated. Software version changes use separate upgrade procedures. Physical infrastructure changes cannot be automated through configuration rollback.
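A typical checkpoint-and-rollback workflow, shown as a minimal sketch (the checkpoint name is arbitrary):

switch# checkpoint pre-change
! ... apply and validate the planned configuration changes ...
! Show the commands needed to take the running config back to the checkpoint:
switch# show diff rollback-patch running-config checkpoint pre-change
! Revert the running configuration to the checkpoint if the change must be backed out:
switch# rollback running-config checkpoint pre-change atomic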