Question 61:
Which Cisco Nexus feature provides automated network discovery and topology visualization in the data center?
A) LLDP
B) CDP
C) DCNM
D) Spanning Tree Protocol
Answer: C
Explanation:
Cisco Data Center Network Manager (DCNM) is a comprehensive management solution that provides automated network discovery, topology visualization, configuration management, and monitoring for Cisco data center networks. DCNM automatically discovers network devices, their connections, and relationships, then creates visual topology maps that help administrators understand the network infrastructure. The platform supports both LAN and SAN fabrics, providing a unified management interface for the entire data center network. DCNM uses protocols like CDP, LLDP, and SNMP to discover devices and build topology maps, while also offering features like configuration compliance, performance monitoring, and troubleshooting tools.
DCNM is essential for managing complex data center environments with numerous switches, routers, and network devices. It provides role-based access control, allowing different teams to manage their respective portions of the network. The platform supports both VXLAN EVPN fabrics and traditional data center networks, offering automation capabilities through templates and policies. DCNM can automatically provision VLANs, VRFs, and network policies across multiple switches, reducing manual configuration errors and deployment time. The topology visualization feature helps identify redundant paths, single points of failure, and network bottlenecks. Integration with Cisco Application Centric Infrastructure and other Cisco technologies makes DCNM a central management platform for modern data centers.
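To make the discovery step concrete, here is a minimal Python sketch (not DCNM code, and not its API) that builds a simple topology map from the kind of CDP/LLDP neighbor records a manager like DCNM collects; the device names and links are invented for illustration.

```python
# Minimal sketch: building a topology map from LLDP/CDP-style neighbor tables,
# the same kind of data a manager like DCNM uses to draw its fabric views.
# The device names and adjacency records below are invented for illustration.
from collections import defaultdict

# (local_device, local_port, remote_device, remote_port) as reported by LLDP/CDP
neighbor_records = [
    ("leaf-101", "Eth1/49", "spine-201", "Eth1/1"),
    ("leaf-101", "Eth1/50", "spine-202", "Eth1/1"),
    ("leaf-102", "Eth1/49", "spine-201", "Eth1/2"),
    ("leaf-102", "Eth1/50", "spine-202", "Eth1/2"),
]

topology = defaultdict(set)
for local_dev, local_port, remote_dev, remote_port in neighbor_records:
    # Store links in both directions so the map can be walked from either end.
    topology[local_dev].add((local_port, remote_dev, remote_port))
    topology[remote_dev].add((remote_port, local_dev, local_port))

for device, links in sorted(topology.items()):
    for local_port, peer, peer_port in sorted(links):
        print(f"{device} {local_port} <--> {peer} {peer_port}")
```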
LLDP (Link Layer Discovery Protocol) is an industry-standard protocol that allows network devices to advertise their identity and capabilities to neighbors on the local network. While LLDP enables device discovery, it is a neighbor discovery protocol only and does not provide topology visualization or comprehensive management capabilities. LLDP operates at Layer 2 and exchanges information such as device name, port description, VLAN information, and capabilities with directly connected neighbors.
CDP (Cisco Discovery Protocol) is a Cisco proprietary protocol similar to LLDP that discovers directly connected Cisco devices and gathers information about them. Like LLDP, CDP is a discovery protocol but not a complete management solution. It provides information about neighboring devices including platform type, capabilities, and interface details, but requires additional tools for topology mapping and visualization.
Spanning Tree Protocol prevents Layer 2 loops in switched networks by blocking redundant paths and is not used for network discovery or topology visualization. STP ensures a loop-free topology in networks with redundant links but does not provide management or monitoring capabilities.
Question 62:
What is the primary function of the Cisco Nexus 9000 Series switches in ACI mode?
A) Traditional switching only
B) Provide hardware for the ACI fabric and implement policies from APIC
C) Function as standalone switches
D) Replace the APIC controller
Answer: B
Explanation:
When Cisco Nexus 9000 Series switches operate in Application Centric Infrastructure (ACI) mode, they function as the hardware foundation of the ACI fabric and implement policies centrally defined and pushed from the Application Policy Infrastructure Controller (APIC). In ACI mode, these switches operate as leaf switches or spine switches within the fabric, executing the policy-based forwarding and segmentation defined in the APIC. The switches run ACI firmware and use protocols like VXLAN for overlay networking, IS-IS for underlay routing, and MP-BGP EVPN for control plane operations. All configuration and policy enforcement is controlled through the APIC, making the fabric operate as a single integrated system rather than individual switches.
The ACI fabric architecture uses a spine-leaf topology where leaf switches connect to endpoints (servers, storage, network services) and spine switches provide full-mesh connectivity between all leaf switches. When operating in ACI mode, Nexus 9000 switches maintain the ACI policy model, implementing endpoint groups, contracts, bridge domains, and VRFs as defined in the APIC. The switches handle local switching decisions and policy enforcement while the APIC maintains the central policy repository and fabric-wide visibility. This separation of control plane (APIC) and data plane (Nexus switches) enables scalable, automated network provisioning. The switches support hardware-based VXLAN encapsulation and decapsulation, providing high-performance overlay networking that enables microsegmentation and workload mobility across the data center.
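The full-mesh wiring rule described above can be sketched in a few lines of Python; the switch names and counts are arbitrary examples, and real fabrics size these to bandwidth and port requirements.

```python
# Minimal sketch of the spine-leaf wiring rule described above: every leaf
# connects to every spine, so any two leaves are always two hops apart.
# Switch names and counts are arbitrary examples.
from itertools import product

spines = [f"spine-{n}" for n in range(201, 203)]   # 2 spines
leaves = [f"leaf-{n}" for n in range(101, 105)]    # 4 leaves

fabric_links = list(product(leaves, spines))       # full mesh: leaves x spines
print(f"{len(fabric_links)} fabric links")         # 4 leaves * 2 spines = 8 links

# Any leaf-to-leaf path crosses exactly one spine, giving len(spines) equal-cost paths.
def paths(src_leaf, dst_leaf):
    return [(src_leaf, spine, dst_leaf) for spine in spines]

print(paths("leaf-101", "leaf-104"))
```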
Traditional switching only represents NX-OS standalone mode operation, not ACI mode. In standalone mode, Nexus 9000 switches operate like traditional Cisco switches with local configuration through CLI or management interfaces. This mode provides standard switching features but does not include ACI policy model, centralized management through APIC, or automated fabric capabilities.
Functioning as standalone switches contradicts the fundamental architecture of ACI mode. ACI mode requires integration with APIC and operation as part of a fabric. Standalone operation means each switch is independently configured and managed, which is the opposite of the centralized policy model that defines ACI mode.
Replacing the APIC controller is not possible because the Nexus 9000 switches and APIC serve different architectural roles. The APIC provides centralized policy definition, configuration management, and fabric-wide visibility, while the switches implement those policies in the data plane. The APIC runs on dedicated appliances or virtual machines and cannot be replaced by switching hardware.
Question 63:
Which protocol does VXLAN use for encapsulation in overlay networks?
A) GRE
B) UDP
C) TCP
D) ICMP
Answer: B
Explanation:
VXLAN (Virtual Extensible LAN) uses UDP (User Datagram Protocol) for encapsulating Layer 2 Ethernet frames within Layer 3 IP packets, enabling overlay networking across IP networks. The VXLAN header is placed between the outer UDP header and the original Ethernet frame, creating a MAC-in-UDP encapsulation. VXLAN uses UDP destination port 4789 (the IANA-assigned port) or sometimes port 8472 in certain implementations. The UDP-based encapsulation allows VXLAN packets to traverse existing IP networks, taking advantage of equal-cost multipath routing and standard IP routing capabilities. This approach enables the creation of logical Layer 2 networks over Layer 3 infrastructure without requiring changes to the underlying network.
The choice of UDP for VXLAN encapsulation provides several benefits for data center networking. UDP is connectionless and stateless, reducing overhead compared to connection-oriented protocols. The UDP source port can be varied based on a hash of the inner frame’s header fields, creating entropy that allows underlying network devices to perform effective load balancing across multiple paths using ECMP. VXLAN adds a 24-bit VXLAN Network Identifier (VNI) which provides over 16 million unique network segments, far exceeding the 4096 VLAN limitation. The encapsulation format includes an 8-byte VXLAN header, outer UDP header, outer IP header, and outer Ethernet header, adding approximately 50 bytes of overhead to each packet. VXLAN Tunnel Endpoints (VTEPs) perform the encapsulation and decapsulation operations, maintaining mappings between MAC addresses and VTEP IP addresses.
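As a worked illustration of the encapsulation format, the sketch below packs the 8-byte VXLAN header defined in RFC 7348 and totals the outer-header overhead; the VNI value is an arbitrary example.

```python
# Minimal sketch of the VXLAN outer headers discussed above (RFC 7348 format).
# The VNI value is an example; 4789 is the IANA-assigned destination port.
import struct

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags (0x08 = VNI valid), 24 reserved bits,
    a 24-bit VNI, and 8 reserved bits."""
    flags = 0x08 << 24                 # I flag set, following 24 bits reserved = 0
    return struct.pack("!II", flags, vni << 8)

hdr = vxlan_header(vni=10100)
assert len(hdr) == 8

# Encapsulation overhead added in front of the original Ethernet frame:
overhead = {
    "outer Ethernet": 14,
    "outer IPv4": 20,
    "outer UDP (dst 4789)": 8,
    "VXLAN": 8,
}
print(sum(overhead.values()), "bytes of overhead")   # 50
```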
GRE (Generic Routing Encapsulation) is a different tunneling protocol that can encapsulate various network layer protocols but is not used by VXLAN. GRE creates point-to-point tunnels and lacks the built-in support for multitenancy through network identifiers like VXLAN’s VNI. While GRE can be used for overlay networks, it does not provide the same scale or features as VXLAN for data center applications.
TCP (Transmission Control Protocol) is not used for VXLAN encapsulation because it is connection-oriented and would add significant overhead and complexity. TCP requires three-way handshake establishment, acknowledgments, and state maintenance, which would be inefficient for encapsulating large numbers of flows in a data center environment. The stateless nature of UDP is better suited for the high-performance requirements of data center overlay networking.
ICMP (Internet Control Message Protocol) is used for diagnostic and error reporting purposes in IP networks, not for encapsulation. ICMP handles functions like ping and traceroute but does not provide tunneling or encapsulation capabilities needed for overlay networking.
Question 64:
In Cisco ACI, what is an Endpoint Group (EPG)?
A) A collection of physical ports
B) A logical grouping of endpoints with similar policy requirements
C) A VLAN configuration
D) A routing protocol instance
Answer: B
Explanation:
An Endpoint Group (EPG) in Cisco Application Centric Infrastructure is a fundamental policy construct that represents a logical collection of endpoints (such as virtual machines, physical servers, or containers) that share common policy requirements and security posture. EPGs enable application-centric network configuration by grouping endpoints based on their application tier, security requirements, or functional role rather than network location or VLAN assignment. For example, a three-tier application might have separate EPGs for web servers, application servers, and database servers, each with appropriate security policies. Endpoints are dynamically assigned to EPGs based on various criteria including VLAN, IP subnet, VM attributes, or physical port connections.
EPGs interact through contracts which define the communication policies between different EPGs. A contract specifies which protocols, ports, and directions of traffic are permitted between EPGs, implementing microsegmentation and zero-trust security models. For instance, a web EPG might have a contract allowing it to communicate with an application EPG on specific ports, while the application EPG has a separate contract for database EPG communication. This contract-based model provides granular security control and makes network policies application-aware rather than infrastructure-dependent. EPGs are associated with bridge domains for Layer 2 connectivity and VRFs for Layer 3 isolation, providing complete network segmentation capabilities. The APIC automatically programs the necessary forwarding rules and access control policies across all fabric switches based on EPG definitions and contracts.
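For a concrete, deliberately simplified view of these constructs, the hedged sketch below shows how an EPG that consumes a contract could be pushed through the APIC REST API using Python. The controller address, credentials, and tenant/profile/EPG/contract names are invented, and the example assumes the tenant and application profile already exist.

```python
# Hedged sketch: posting an EPG that consumes a contract to the APIC REST API.
# The APIC address, credentials, and tenant/profile/EPG/contract names are
# invented; the object classes (fvAEPg, fvRsBd, fvRsCons) follow the ACI object
# model. This illustrates the policy constructs, it is not a hardened script.
import requests

APIC = "https://apic.example.com"        # placeholder controller address
session = requests.Session()
session.verify = False                   # lab-style; use proper certificates in practice

# Authenticate; the APIC token cookie is kept on the session automatically.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login)

# An EPG "web" in application profile "shop" of tenant "demo", bound to bridge
# domain "bd-web" and consuming a contract "web-to-app" provided by another EPG.
epg_payload = {
    "fvAEPg": {
        "attributes": {"name": "web"},
        "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": "bd-web"}}},
            {"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}},
        ],
    }
}
resp = session.post(f"{APIC}/api/node/mo/uni/tn-demo/ap-shop.json", json=epg_payload)
print(resp.status_code)
```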
A collection of physical ports represents a traditional port-group or port-channel configuration, not an EPG. While endpoints connected to physical ports can be assigned to an EPG, the EPG itself is a logical construct that can span multiple physical locations, switches, and even data centers. Physical connectivity is just one method of identifying endpoints for EPG membership.
A VLAN configuration is a Layer 2 segmentation technique from traditional networking, not an EPG. While VLANs can be used as one criterion for assigning endpoints to EPGs, EPGs provide much more flexibility and functionality. EPGs can include endpoints from multiple VLANs, and a single VLAN can contain endpoints from multiple EPGs. The policy-based nature of EPGs transcends simple VLAN-based segmentation.
A routing protocol instance refers to processes like OSPF, EIGRP, or BGP that exchange routing information, which is unrelated to EPGs. EPGs operate at the policy and endpoint management level, while routing protocols handle path determination and reachability. ACI fabrics use IS-IS in the underlay and MP-BGP EVPN for overlay control plane, but these are separate from EPG functionality.
Question 65:
What is the purpose of a Bridge Domain in Cisco ACI?
A) To connect spine and leaf switches
B) To provide Layer 2 forwarding within the fabric
C) To configure BGP routing
D) To manage APIC controllers
Answer: B
Explanation:
A Bridge Domain in Cisco Application Centric Infrastructure provides Layer 2 forwarding and represents a Layer 2 flood domain within the ACI fabric. It defines the unique Layer 2 MAC address space and handles Layer 2 forwarding behavior including learning, flooding, and unknown unicast handling. Bridge Domains are associated with one or more EPGs and provide the Layer 2 connectivity framework for those EPGs. Each Bridge Domain is linked to a VRF (Virtual Routing and Forwarding) instance, establishing the Layer 3 context, and can have one or more subnets configured for IP addressing. The Bridge Domain controls whether unicast routing is enabled, allowing inter-subnet communication, and configures ARP flooding behavior to optimize network traffic.
Bridge Domains in ACI implement several important features for data center networking. They support multiple EPGs sharing the same Layer 2 domain while maintaining separate policy enforcement through contracts. The Bridge Domain configuration includes settings for limiting IP learning to subnet boundaries, enabling ARP flooding optimization to reduce broadcast traffic, and configuring Layer 2 unknown unicast flooding behavior. Bridge Domains can be extended across multiple leaf switches in the fabric, providing Layer 2 adjacency for workloads regardless of physical location. This enables virtual machine mobility and flexible workload placement without network reconfiguration. The relationship between Bridge Domains, EPGs, and VRFs creates a hierarchical structure where VRFs provide Layer 3 isolation, Bridge Domains provide Layer 2 domains within VRFs, and EPGs provide policy-based endpoint grouping within Bridge Domains.
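The hierarchy described above can be illustrated with a hedged sketch of the JSON object tree the APIC accepts for a Bridge Domain: the BD references its VRF and carries a gateway subnet. The tenant, VRF, and subnet values are invented for the example.

```python
# Hedged sketch of the bridge-domain hierarchy described above, expressed as the
# kind of JSON object tree the APIC accepts: a BD linked to a VRF (fvRsCtx) with
# one gateway subnet (fvSubnet). VRF and subnet values are invented examples.
bridge_domain = {
    "fvBD": {
        "attributes": {
            "name": "bd-web",
            "unicastRoute": "yes",        # enable routing for inter-subnet traffic
            "unkMacUcastAct": "proxy",    # hardware-proxy unknown unicast handling
            "arpFlood": "no",             # optimized ARP instead of flooding
        },
        "children": [
            {"fvRsCtx": {"attributes": {"tnFvCtxName": "vrf-prod"}}},   # Layer 3 context
            {"fvSubnet": {"attributes": {"ip": "10.10.10.1/24"}}},      # pervasive gateway
        ],
    }
}
print(bridge_domain["fvBD"]["attributes"]["name"])
```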
Connecting spine and leaf switches is accomplished through the physical fabric cabling and the underlay routing protocols (IS-IS), not through Bridge Domains. The spine-leaf architecture uses direct physical connections with each leaf connected to every spine switch, creating a non-blocking fabric. Bridge Domains operate at a logical level above this physical infrastructure.
Configuring BGP routing refers to setting up Border Gateway Protocol for external routing or MP-BGP EVPN for the ACI fabric control plane. While ACI uses MP-BGP EVPN for distributing endpoint information, this is separate from Bridge Domain functionality. Bridge Domains focus on Layer 2 forwarding behavior, while BGP handles routing protocol operations.
Managing APIC controllers is an infrastructure management function unrelated to Bridge Domains. APIC management involves clustering controllers, managing high availability, and configuring administrative access. Bridge Domains are tenant network constructs configured through the APIC but do not manage the APIC itself.
Question 66:
Which Cisco UCS component provides centralized management for the UCS domain?
A) Fabric Interconnect
B) UCS Manager
C) IOM (I/O Module)
D) BMC (Baseboard Management Controller)
Answer: B
Explanation:
Cisco UCS Manager is the centralized management and policy engine for the Cisco Unified Computing System domain, providing a single point of management for all UCS components including servers, fabric interconnects, chassis, and I/O modules. UCS Manager runs on the Fabric Interconnects in a high-availability configuration and provides both GUI and CLI interfaces for managing the entire UCS infrastructure. It implements a model-based architecture where administrators create service profiles containing server configuration, network connectivity, storage access, and firmware policies that can be applied to physical servers. This abstraction of configuration from hardware enables stateless computing where server identity and configuration are separated from physical hardware.
UCS Manager’s policy-based management approach significantly simplifies data center operations and ensures consistency across hundreds or thousands of servers. Administrators create pools of resources (MAC addresses, WWN addresses, IP addresses, UUIDs) and policies (boot order, BIOS settings, network adapter configuration) that are automatically applied when service profiles are associated with servers. This automation reduces deployment time from hours to minutes and eliminates manual configuration errors. UCS Manager handles firmware updates across the domain, ensuring all components run compatible versions and supporting non-disruptive rolling upgrades. The platform provides comprehensive monitoring, alerting, and reporting capabilities for hardware health, performance metrics, and fault conditions. Integration with Cisco Intersight extends management capabilities to the cloud for multi-domain management and analytics.
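As a hedged illustration of that single point of management, the sketch below uses Cisco's ucsmsdk Python library to log in to UCS Manager and list the blades it has discovered; the address and credentials are placeholders and error handling is omitted.

```python
# Hedged sketch using Cisco's ucsmsdk Python library (pip install ucsmsdk) to log
# in to UCS Manager and list discovered blades, the kind of domain-wide inventory
# view it exposes. The address and credentials below are placeholders.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")
if handle.login():
    # Query all blade server objects known to the domain.
    blades = handle.query_classid("ComputeBlade")
    for blade in blades:
        print(blade.dn, blade.model, blade.serial)
    handle.logout()
```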
Fabric Interconnect is the physical infrastructure device that provides network connectivity and houses the UCS Manager software, but it is not itself the management component. Fabric Interconnects are deployed in pairs for high availability and provide unified fabric capabilities combining LAN, SAN, and management traffic. While critical infrastructure components, they serve as the platform for UCS Manager rather than being the management system themselves.
IOM (I/O Module) or Fabric Extender modules are installed in UCS chassis to provide connectivity between blade servers and the Fabric Interconnects. They extend the unified fabric into the chassis but do not provide management capabilities. IOMs are managed by UCS Manager but do not manage the domain themselves.
BMC (Baseboard Management Controller) is an embedded processor on individual servers that provides low-level hardware management, monitoring, and remote access capabilities. While BMCs enable remote server management and hardware monitoring, they operate at the individual server level and do not provide domain-wide management. UCS Manager communicates with BMCs to manage servers but provides the centralized control layer above individual BMCs.
Question 67:
What is the function of a Service Profile in Cisco UCS?
A) Monitor network traffic
B) Define server identity and configuration that can be applied to physical hardware
C) Configure storage arrays
D) Manage VLAN assignments only
Answer: B
Explanation:
A Service Profile in Cisco Unified Computing System is a comprehensive definition of server identity and configuration that encapsulates all aspects of server personality including network identity (MAC addresses, WWN addresses), firmware versions, BIOS settings, boot order, network adapter configuration, and storage controller settings. Service Profiles enable the fundamental UCS concept of stateless computing by completely abstracting server configuration from physical hardware. When a Service Profile is associated with a physical server, UCS Manager automatically configures that server according to all policies and settings defined in the profile. If hardware fails, the Service Profile can be quickly migrated to a different physical server, which then assumes the exact identity and configuration of the failed server, dramatically reducing recovery time.
Service Profiles leverage UCS Manager’s policy-based architecture through templates and pools. Service Profile Templates allow administrators to create standardized configurations that can be instantiated multiple times, ensuring consistency across server deployments. Templates can be updating templates (where changes to the template propagate to all derived Service Profiles) or initial templates (where derived profiles become independent after creation). Resource pools for UUIDs, MAC addresses, and WWN addresses ensure uniqueness while enabling reuse when servers are decommissioned. Policies within Service Profiles include the boot policy defining boot device order and priorities, the local disk configuration policy, the scrub policy for data sanitization, the maintenance policy controlling firmware update behavior, and the power policy managing power consumption. This comprehensive approach eliminates manual server configuration, reduces deployment time from hours to minutes, and ensures compliance with organizational standards.
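Continuing the same hedged ucsmsdk illustration, the sketch below creates a bare Service Profile object under the root organization; the names are placeholders, and a production profile would normally be derived from a template with pools and policies attached rather than created empty.

```python
# Hedged sketch, continuing with ucsmsdk: creating a bare service profile object
# under the root org. Names and connection details are placeholders; a production
# profile would be instantiated from a template with pools and policies (boot,
# BIOS, vNIC/vHBA) attached rather than built empty like this.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.ls.LsServer import LsServer

handle = UcsHandle("ucsm.example.com", "admin", "password")
if handle.login():
    sp = LsServer(parent_mo_or_dn="org-root", name="web-server-01",
                  descr="demo profile created via the Python SDK")
    handle.add_mo(sp, modify_present=True)   # idempotent add
    handle.commit()
    handle.logout()
```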
Monitoring network traffic is accomplished through network analysis tools, NetFlow, SPAN/ERSPAN, or packet capture solutions, not through Service Profiles. While Service Profiles define network connectivity and adapter settings, they do not perform traffic monitoring or analysis functions. UCS Manager provides some visibility into traffic statistics, but this is separate from Service Profile functionality.
Configuring storage arrays is managed through storage array management interfaces and protocols like SMI-S, not through UCS Service Profiles. While Service Profiles can define storage connectivity through vHBA configuration and boot from SAN settings, they do not configure the storage arrays themselves. The Service Profile handles the server-side storage access configuration.
Managing VLAN assignments only represents a small subset of Service Profile capabilities. While Service Profiles do include network policies that specify VLAN assignments for virtual NICs (vNICs), they encompass far more than just VLAN configuration. Service Profiles define complete server identity, all adapter settings, storage configuration, firmware versions, BIOS settings, and many other aspects of server configuration.
Question 68:
Which protocol does Cisco ACI use for the underlay network routing?
A) OSPF
B) EIGRP
C) IS-IS
D) RIP
Answer: C
Explanation:
Cisco Application Centric Infrastructure uses IS-IS (Intermediate System to Intermediate System) as the routing protocol for the underlay network, providing the Layer 3 connectivity between all spine and leaf switches in the fabric. IS-IS runs directly over the data link layer rather than over IP and is highly efficient for data center environments due to its fast convergence, scalability, and minimal overhead. In the ACI fabric, IS-IS runs on the physical interfaces between spine and leaf switches, establishing the underlay IP connectivity that VXLAN tunnels utilize for overlay networking. The protocol quickly learns all available paths and enables ECMP (Equal-Cost Multi-Path) routing, ensuring optimal traffic distribution across the spine-leaf topology.
The choice of IS-IS for ACI underlay provides several advantages for modern data center operations. IS-IS converges quickly during topology changes, minimizing disruption when links or switches fail. The protocol supports graceful restart and other high-availability features essential for production data centers. IS-IS’s behavior is deterministic and well-suited to the symmetrical spine-leaf architecture where each leaf has equal-cost paths to all other leaves through the spine layer. The underlay network remains simple and focused solely on providing IP connectivity between VTEPs (VXLAN Tunnel Endpoints), while all application awareness and policy enforcement occurs in the overlay network. ACI configures IS-IS automatically during fabric initialization, and administrators rarely need to interact directly with the underlay routing protocol. The separation of underlay (providing connectivity) and overlay (providing services and policy) enables the fabric to scale efficiently while maintaining operational simplicity.
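The ECMP behavior mentioned above can be illustrated with a simplified Python sketch: hashing a flow's fields selects one of the equal-cost spine uplinks, so a given flow stays on one path while different flows spread across spines. Real switches use hardware hash functions; the uplink names here are invented.

```python
# Minimal sketch of the ECMP behavior described above: a leaf hashes flow fields
# and uses the result to pick one of its equal-cost spine uplinks, so packets of
# the same flow stay on one path while different flows spread across spines.
# Real switches use hardware hash functions; this is a simplified illustration.
import hashlib

SPINE_UPLINKS = ["Eth1/49->spine-201", "Eth1/50->spine-202",
                 "Eth1/51->spine-203", "Eth1/52->spine-204"]

def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(SPINE_UPLINKS)
    return SPINE_UPLINKS[index]

print(pick_uplink("10.1.1.10", "10.2.2.20", "tcp", 51000, 443))
print(pick_uplink("10.1.1.11", "10.2.2.20", "tcp", 51001, 443))
```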
OSPF (Open Shortest Path First) is a widely used link-state routing protocol but is not the one chosen for the ACI underlay. While OSPF could provide similar functionality, IS-IS runs directly over Layer 2, converges quickly, and is easily extended through its TLV structure, which better suits the automatically provisioned ACI fabric underlay. OSPF is more commonly used in enterprise campus and WAN environments than in data center fabrics.
EIGRP (Enhanced Interior Gateway Routing Protocol) is a Cisco proprietary advanced distance-vector routing protocol used in many enterprise networks. However, it is not suitable for ACI underlay due to its convergence characteristics and operational model. ACI’s architecture benefits from the specific characteristics of link-state protocols like IS-IS.
RIP (Routing Information Protocol) is a legacy distance-vector routing protocol with slow convergence and limited scalability. RIP is unsuitable for modern data center environments and is not used in ACI fabrics. The protocol’s hop-count limitation and slow convergence would severely impact ACI fabric performance and reliability.
Question 69:
What is the maximum number of spine switches supported in a single Cisco ACI fabric?
A) 4
B) 8
C) 12
D) 16
Answer: C
Explanation:
A single Cisco Application Centric Infrastructure fabric supports a maximum of 12 spine switches in current ACI releases, providing the interconnection layer for all leaf switches in the fabric. The spine switches create a non-blocking fabric where every leaf switch connects to every spine switch, ensuring any-to-any connectivity with multiple equal-cost paths. This spine-leaf architecture eliminates traditional three-tier network designs and their associated complexities, providing predictable latency and consistent performance regardless of which leaf switches communicate. The 12-spine limitation provides substantial scalability while maintaining the architectural simplicity and operational benefits of the spine-leaf topology.
The spine layer in ACI fabrics serves as the backplane for the entire data center network, forwarding traffic between leaf switches without performing any endpoint learning or policy enforcement. All spines are identical from a functionality perspective, and traffic load-balances across all available spine switches using ECMP routing in the underlay network. Adding more spine switches increases fabric bandwidth proportionally, as each new spine provides additional paths between all leaves. The fabric can operate with fewer than 12 spines and scale up as bandwidth requirements grow. Spine switches in ACI are dedicated to their role and cannot function as leaf switches or border leaf switches. The separation of spine and leaf functions ensures clean architecture and optimal traffic flow patterns. When planning fabric capacity, administrators must consider both the number of leaf switches (which determines endpoint capacity) and the number of spine switches (which determines inter-leaf bandwidth).
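A small worked example of the capacity point above, assuming 100 Gbps leaf uplinks (an illustrative value), shows how per-leaf fabric bandwidth scales with the spine count:

```python
# Worked example for the capacity point above: each spine added gives every leaf
# one more uplink, so per-leaf fabric bandwidth scales linearly with spine count.
# The 100 Gbps uplink speed is an assumed example value.
UPLINK_GBPS = 100

for spine_count in (2, 4, 6, 12):
    per_leaf_bandwidth = spine_count * UPLINK_GBPS
    print(f"{spine_count:>2} spines -> {per_leaf_bandwidth} Gbps of uplink bandwidth per leaf")
```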
A limit of 4 spine switches would significantly restrict fabric bandwidth and scalability. While small ACI deployments might start with 2 or 4 spine switches, the architecture supports scaling to 12 spines for larger deployments requiring higher bandwidth between leaf switches.
A limit of 8 spine switches represents an intermediate value but is not the actual maximum. Some earlier ACI documentation referenced different limits, but current ACI fabric specifications support 12 spine switches per fabric for maximum scale deployments.
A limit of 16 spine switches exceeds the current ACI fabric capabilities. While future releases might increase this limit, the current architecture specifies 12 as the maximum number of spine switches in a single fabric. Organizations requiring capacity beyond a 12-spine fabric would need to implement multiple ACI fabrics with Multi-Site orchestration.
Question 70:
Which Cisco Nexus feature allows multiple VLANs to use the same IP subnet?
A) VRF
B) Private VLAN
C) VXLAN
D) SVI
Answer: B
Explanation:
Private VLANs (PVLANs) allow multiple VLANs to share the same IP subnet while providing Layer 2 isolation between specific ports or groups of ports. Private VLANs segment a primary VLAN into secondary VLANs, creating isolated, community, or promiscuous port types that control which ports can communicate with each other. This feature is particularly useful in service provider environments, data centers with customer isolation requirements, or situations where IP address space is limited but Layer 2 isolation is needed. A primary VLAN contains one or more secondary VLANs, where isolated VLANs prevent communication between ports in the same isolated VLAN, community VLANs allow communication within the community but not with other communities, and promiscuous ports can communicate with all other port types.
Private VLANs solve the problem of IP address exhaustion while maintaining security and isolation requirements. For example, a hosting provider might have limited public IP addresses but needs to provide isolated environments for multiple customers. Using Private VLANs, all customers can use the same subnet, but isolated ports ensure customers cannot communicate with each other at Layer 2, while promiscuous ports (typically connected to routers or firewalls) can reach all customer ports. The configuration requires a primary VLAN and one or more secondary VLANs associated with it, along with appropriate port mappings. Private VLANs work with features like DHCP snooping, Dynamic ARP Inspection, and IP Source Guard to provide comprehensive security. The technology is defined in RFC 5517 and is widely supported across Cisco Nexus platforms.
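The port-type rules just described can be captured in a short Python sketch; the community names are illustrative.

```python
# Minimal sketch encoding the private VLAN rules described above: which secondary
# port types may communicate with which. Community names are illustrative.
def pvlan_can_talk(src, dst):
    """src/dst are tuples of (port_type, community_name_or_None)."""
    src_type, src_comm = src
    dst_type, dst_comm = dst
    if "promiscuous" in (src_type, dst_type):
        return True                      # promiscuous ports reach everything
    if src_type == "isolated" or dst_type == "isolated":
        return False                     # isolated ports talk only to promiscuous
    if src_type == dst_type == "community":
        return src_comm == dst_comm      # same community only
    return False

print(pvlan_can_talk(("isolated", None), ("promiscuous", None)))        # True
print(pvlan_can_talk(("isolated", None), ("isolated", None)))           # False
print(pvlan_can_talk(("community", "cust-a"), ("community", "cust-a"))) # True
print(pvlan_can_talk(("community", "cust-a"), ("community", "cust-b"))) # False
```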
VRF (Virtual Routing and Forwarding) provides Layer 3 routing instance separation, creating multiple independent routing tables on the same physical device. While VRFs enable the use of overlapping IP address spaces across different VRFs, they do not allow multiple VLANs to share the same subnet. Each VRF maintains separate routing information and cannot route between VLANs using identical IP subnets.
VXLAN (Virtual Extensible LAN) is an overlay networking technology that encapsulates Layer 2 Ethernet frames in Layer 3 UDP packets, enabling large-scale multi-tenancy and network virtualization. While VXLAN provides segmentation and can support overlapping IP address spaces across different VNIs, it does not specifically enable multiple VLANs to share the same IP subnet in the way Private VLANs do.
SVI (Switched Virtual Interface) is a virtual Layer 3 interface associated with a VLAN, providing routing functionality for that VLAN. SVIs enable inter-VLAN routing and serve as the default gateway for hosts in a VLAN. However, SVIs do not allow multiple VLANs to share the same IP subnet; each SVI requires a unique subnet for proper routing operation.
Question 71:
What is the purpose of the Application Policy Infrastructure Controller (APIC) in Cisco ACI?
A) Provide hardware switching only
B) Serve as the centralized policy and management controller for the ACI fabric
C) Replace all spine switches
D) Function as a load balancer
Answer: B
Explanation:
The Application Policy Infrastructure Controller (APIC) serves as the centralized policy and management controller for the Cisco Application Centric Infrastructure fabric, providing a single point of automation, management, monitoring, and programmability for the entire ACI environment. APIC maintains the desired state configuration for the fabric, automatically programming all spine and leaf switches to implement defined policies and network services. The controller uses a declarative model where administrators specify what the network should accomplish (the desired state) rather than how to configure individual devices, enabling intent-based networking. APIC provides multiple interfaces including a graphical user interface, REST API, CLI, and integrations with automation tools like Ansible, Terraform, and Python libraries.
APIC implements several critical functions for ACI fabric operations. It maintains the Management Information Tree (MIT), a hierarchical object-oriented database containing all configuration and operational state information for the fabric. When administrators create policies, EPGs, bridge domains, or contracts, APIC translates these high-level constructs into the specific configurations needed on each switch and pushes them automatically. The APIC cluster (typically 3 or 5 controllers for redundancy) distributes policy information to all fabric switches and maintains synchronization across all controllers. APIC provides comprehensive visibility into fabric health, performance, and traffic flows through built-in analytics and visualization tools. It performs fabric discovery, initializing new switches added to the fabric and automatically configuring underlay connectivity. APIC also manages fabric upgrades, handles fault detection and remediation, and provides audit logs for all configuration changes. Integration with VMware vCenter, Microsoft SCVMM, and container orchestrators enables automatic endpoint discovery and dynamic policy application.
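As one hedged example of that fabric-wide visibility, the sketch below logs in to the APIC REST API and runs a class query for topSystem objects, which return one entry per fabric node with its name, role, and TEP address; the controller address and credentials are placeholders.

```python
# Hedged sketch of the fabric-wide visibility described above: after logging in,
# a class query against the APIC REST API returns one object per fabric node.
# The controller address and credentials are placeholders; topSystem carries
# each node's name, role, and TEP address.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False

login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login)

nodes = session.get(f"{APIC}/api/class/topSystem.json").json()
for item in nodes.get("imdata", []):
    attrs = item["topSystem"]["attributes"]
    print(attrs["name"], attrs["role"], attrs["address"])
```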
Providing hardware switching only describes the function of spine and leaf switches in the fabric, not the APIC. The Nexus 9000 series switches in ACI mode perform the actual packet forwarding and policy enforcement in hardware, while APIC provides the control plane and management functions. The separation of control and data planes enables scalable, centralized management.
Replacing all spine switches misunderstands APIC’s architectural role. APIC is a controller that manages spine and leaf switches but does not replace them. The spine switches are essential fabric infrastructure providing the backplane connectivity between all leaf switches. APIC controllers are separate appliances or virtual machines that communicate with the fabric switches through the management network.
Functioning as a load balancer is not the primary purpose of APIC. While ACI fabrics can integrate with load balancers as Layer 4-7 services, APIC itself is the fabric controller. Load balancing services in ACI are provided through service graphs that chain together network services, which APIC orchestrates but does not directly perform.
Question 72:
Which Cisco UCS feature allows server configuration to be decoupled from physical hardware?
A) VLAN trunking
B) Service Profiles
C) Port channels
D) VTP
Answer: B
Explanation:
Service Profiles in Cisco Unified Computing System enable complete decoupling of server configuration from physical hardware, embodying the concept of stateless computing where server identity and settings exist independently of the physical server. A Service Profile contains all configuration elements that define a server including network identity (MAC addresses, WWN addresses, UUIDs), firmware versions, BIOS settings, boot order, local disk configuration, network adapter settings, storage controller configuration, and management settings. When a Service Profile is associated with a physical server, UCS Manager automatically configures that server to match the profile specification, downloading firmware, configuring adapters, and setting all parameters. This abstraction provides unprecedented flexibility and agility in data center operations.
The business value of Service Profiles extends across multiple operational areas. Hardware failures that traditionally required hours of manual reconfiguration can be resolved in minutes by associating the failed server’s Service Profile with a new physical server, which immediately assumes the identity and configuration of the failed system. This capability dramatically reduces downtime and simplifies disaster recovery. Service Profiles enable true server pooling where spare servers can quickly assume any role based on organizational needs. Standardization through Service Profile Templates ensures all servers deployed for specific applications have identical configurations, eliminating human error and configuration drift. Resource pools for identities mean MAC addresses, WWN addresses, and UUIDs can be reused as servers are decommissioned and redeployed, optimizing resource utilization. The stateless approach also simplifies compliance and auditing because server configurations are centrally defined and version-controlled in UCS Manager rather than distributed across individual servers.
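The failover workflow described above can be sketched in plain Python, with no UCS APIs involved: the profile object carries the identity, and recovery is simply re-associating it with a spare blade. All identities below are invented examples.

```python
# Pure-Python sketch of the stateless-computing idea described above: the server
# identity lives in a profile object, so recovery is just re-associating the
# profile with a spare blade. All identities below are invented examples.
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    name: str
    macs: list = field(default_factory=list)
    wwpns: list = field(default_factory=list)
    uuid: str = ""
    associated_blade: str | None = None

    def associate(self, blade: str):
        # The blade assumes every identity carried by the profile.
        self.associated_blade = blade
        print(f"{blade} now boots as {self.name} "
              f"(UUID {self.uuid}, MACs {self.macs}, WWPNs {self.wwpns})")

profile = ServiceProfile("web-server-01",
                         macs=["00:25:B5:00:00:1A"],
                         wwpns=["20:00:00:25:B5:00:00:1A"],
                         uuid="c3f0a1d2-0000-0000-0000-000000000001")

profile.associate("chassis-1/blade-3")   # initial deployment
profile.associate("chassis-2/blade-5")   # blade-3 fails; spare assumes the identity
```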
VLAN trunking is a network configuration technique that allows multiple VLANs to traverse a single physical link, but it does not provide server configuration decoupling. VLAN trunking is one aspect that might be configured through a Service Profile’s network policy, but it is not the feature that enables stateless computing.
Port channels aggregate multiple physical links into a single logical link for increased bandwidth and redundancy. While port channels can be configured as part of UCS networking, they do not provide the capability to decouple server configuration from hardware. Port channels are network infrastructure configurations rather than server identity abstractions.
VTP (VLAN Trunking Protocol) is a Cisco protocol for distributing VLAN configuration information across switches in a network. VTP is unrelated to server configuration decoupling and is primarily used in traditional switching environments, not in UCS service profile management.
Question 73:
What is the function of the MP-BGP EVPN protocol in Cisco ACI?
A) Provide physical connectivity
B) Distribute endpoint and routing information across the fabric
C) Configure VLANs
D) Manage power consumption
Answer: B
Explanation:
MP-BGP EVPN (Multiprotocol Border Gateway Protocol Ethernet VPN) serves as the control plane protocol in Cisco Application Centric Infrastructure, distributing endpoint MAC and IP address information, routing information, and reachability data across the fabric. EVPN enables the fabric to maintain awareness of where endpoints are located, which VTEPs they connect to, and how to reach them efficiently. The protocol uses BGP’s robust and scalable architecture to exchange EVPN routes between leaf switches through the spine switches acting as route reflectors. EVPN route types include Type-2 routes for MAC and IP advertisement, Type-3 (inclusive multicast) routes for delivering broadcast, unknown unicast, and multicast (BUM) traffic, and Type-5 routes for IP prefix advertisements, providing comprehensive information distribution for both Layer 2 and Layer 3 forwarding.
The use of MP-BGP EVPN provides several significant advantages for ACI fabric operations. It enables distributed learning of endpoint information where each leaf switch learns about endpoints locally and advertises this information to other leaf switches through the control plane. This approach eliminates the need for data plane learning and flooding, significantly reducing unnecessary traffic in the fabric. EVPN supports host routing where individual host routes can be advertised throughout the fabric, enabling optimal forwarding and microsegmentation. The protocol facilitates seamless mobility as endpoints move between leaf switches, with EVPN quickly updating the fabric about the new endpoint location. EVPN also provides the foundation for multi-tenancy by carrying tenant context information with each route, ensuring proper traffic isolation. Integration with external networks through border leaf switches uses EVPN to exchange routing information between the ACI fabric and external routers, supporting hybrid cloud and data center interconnect scenarios.
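A simplified Python sketch of this control-plane behavior models a Type-2 (MAC/IP) advertisement and an endpoint move; the route fields are reduced to essentials, with a sequence number standing in for EVPN's MAC mobility extended community.

```python
# Minimal sketch of the control-plane behavior described above: an EVPN Type-2
# (MAC/IP) advertisement tells every leaf which VTEP an endpoint sits behind, and
# a host move is just a newer advertisement replacing the old entry. Route fields
# are simplified; the sequence number stands in for EVPN's MAC mobility community.
endpoint_table = {}   # (vni, mac) -> {"ip": ..., "vtep": ..., "seq": ...}

def receive_type2(vni, mac, ip, vtep, seq):
    key = (vni, mac)
    current = endpoint_table.get(key)
    if current is None or seq > current["seq"]:
        endpoint_table[key] = {"ip": ip, "vtep": vtep, "seq": seq}

# Host first learned behind leaf-101's VTEP, then moves behind leaf-103.
receive_type2(10100, "00:50:56:aa:bb:cc", "10.10.10.20", "10.0.0.101", seq=0)
receive_type2(10100, "00:50:56:aa:bb:cc", "10.10.10.20", "10.0.0.103", seq=1)

print(endpoint_table[(10100, "00:50:56:aa:bb:cc")]["vtep"])   # 10.0.0.103
```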
Providing physical connectivity is accomplished through the physical cabling of the fabric and the underlay network running IS-IS. MP-BGP EVPN operates as a control plane protocol on top of this physical infrastructure and does not provide the actual physical connectivity between switches.
Configuring VLANs is a traditional network management task that in ACI is handled through APIC’s policy model using EPGs, bridge domains, and application profiles. While EVPN might carry information related to Layer 2 domains, it does not configure VLANs. ACI abstracts away traditional VLAN configuration in favor of policy-based networking.
Managing power consumption is an infrastructure management function handled by UCS Manager in compute environments or by individual switch management functions. EVPN is a network control plane protocol and has no role in power management. ACI does provide some power monitoring capabilities, but these are separate from EVPN functionality.
Question 74:
Which type of port on a Cisco Nexus switch in ACI connects to endpoints like servers or storage?
A) Fabric port
B) Access port
C) Spine port
D) Console port
Answer: B
Explanation:
Access ports on Cisco Nexus switches operating in ACI mode are the interfaces that connect to endpoints such as physical servers, storage devices, hypervisors, bare-metal systems, or network appliances. These ports are configured with EPG assignments, VLAN encapsulation, and port policies that define how endpoints connect to the fabric. Access ports on leaf switches represent the edge of the ACI fabric where endpoints attach and where policy enforcement begins. The ports can be configured in various modes including access mode for untagged traffic, trunk mode for VLAN-tagged traffic, or as part of a port-channel for link aggregation. Static or dynamic VLAN assignment determines how traffic from these ports is classified into the appropriate EPGs for policy application.
Access port configuration in ACI involves creating interface policies that define settings like speed, duplex, MTU, CDP/LLDP behavior, and storm control parameters. These ports are associated with EPGs either statically through explicit configuration or dynamically through VM manager integration where virtual machine attributes determine EPG membership. Access ports support various deployment scenarios including connecting to standalone servers, hypervisor hosts running multiple virtual machines, storage arrays requiring specific QoS treatment, or legacy network devices requiring VLAN-based connectivity. The ACI fabric automatically programs the necessary VXLAN encapsulation, policy enforcement, and forwarding rules based on the EPG configuration associated with access ports. Physical domain configuration specifies which VLANs and policies are available on which access ports, providing administrative control over endpoint connectivity.
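As a hedged illustration of static EPG assignment, the sketch below shows the shape of a static path binding (fvRsPathAtt) that maps a leaf access port and VLAN encapsulation into an EPG; the pod, node, interface, and VLAN values are invented.

```python
# Hedged sketch of the static EPG-to-port assignment described above: a
# fvRsPathAtt child of the EPG maps a specific leaf access port and VLAN
# encapsulation into that EPG. Pod, node, interface, and VLAN values are invented.
static_binding = {
    "fvRsPathAtt": {
        "attributes": {
            # leaf 101, interface Eth1/10
            "tDn": "topology/pod-1/paths-101/pathep-[eth1/10]",
            "encap": "vlan-100",     # traffic tagged VLAN 100 on this port joins the EPG
            "mode": "regular",       # trunk; "untagged" would make it an access-style port
        }
    }
}
print(static_binding["fvRsPathAtt"]["attributes"]["tDn"])
```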
Fabric ports are the interfaces on leaf switches that connect to spine switches, forming the internal fabric infrastructure. These ports carry VXLAN-encapsulated traffic between leaf switches through the spine layer and run IS-IS routing protocol in the underlay. Fabric ports do not connect to endpoints and are automatically configured when switches join the ACI fabric.
Spine ports exist on spine switches and connect exclusively to leaf switches’ fabric ports, creating the full-mesh connectivity pattern in the spine-leaf architecture. Spine switches have no access ports and never connect directly to endpoints. All endpoint traffic enters through leaf switch access ports.
Console ports provide out-of-band management access to switches for initial configuration, troubleshooting, or emergency access. Console ports do not carry production traffic and are not used for endpoint connectivity. They provide serial terminal access for direct switch management.
Question 75:
What is the primary benefit of using VXLAN in data center networks?
A) Reduce power consumption
B) Overcome the 4096 VLAN limitation and enable network virtualization
C) Eliminate the need for switches
D) Provide console access
Answer: B
Explanation:
The primary benefit of VXLAN (Virtual Extensible LAN) is overcoming the traditional 4096 VLAN limitation imposed by the 12-bit VLAN ID field in 802.1Q tagging and enabling large-scale network virtualization in modern data centers. VXLAN uses a 24-bit VXLAN Network Identifier (VNI) providing over 16 million unique network segments, which is essential for cloud-scale multi-tenant environments, large enterprise data centers, and service provider networks. This massive expansion in available network identifiers enables true network virtualization where each tenant, application, or service can have isolated network segments without concern for VLAN ID exhaustion. VXLAN creates Layer 2 overlay networks on top of existing Layer 3 infrastructure, enabling virtual machine mobility, workload flexibility, and simplified network operations.
VXLAN provides numerous additional benefits beyond addressing the VLAN limitation. It enables workload mobility across data centers by extending Layer 2 networks over Layer 3 infrastructure, allowing virtual machines to migrate between physical locations while maintaining their IP addresses and network connections. The technology leverages existing IP networks and uses UDP encapsulation to enable equal-cost multipath routing, optimizing traffic distribution across multiple paths. VXLAN supports network segmentation and multi-tenancy by providing isolated network domains for different customers, applications, or security zones. The overlay model decouples logical network topology from physical infrastructure, simplifying network design and enabling rapid deployment of new network services without physical reconfiguration. VXLAN also facilitates hybrid cloud connectivity by creating consistent network abstractions between on-premises data centers and public cloud environments. The protocol has become foundational for software-defined networking, network function virtualization, and container networking solutions.
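The scale comparison above in numbers (a 12-bit VLAN ID versus a 24-bit VNI):

```python
# The scale comparison in numbers: a 12-bit VLAN ID versus a 24-bit VNI.
vlan_ids = 2 ** 12          # 4096 (minus reserved values in practice)
vxlan_vnis = 2 ** 24        # 16,777,216
print(vlan_ids, vxlan_vnis)
```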
Reducing power consumption is not a function of VXLAN. Power management in data centers involves hardware efficiency, cooling systems, power distribution, and workload consolidation strategies. While network virtualization might indirectly contribute to overall data center efficiency by improving resource utilization, VXLAN itself is a network encapsulation and virtualization protocol, not a power management technology.
Eliminating the need for switches fundamentally misunderstands network architecture. VXLAN requires switches or software virtual switches to perform encapsulation, decapsulation, and forwarding of VXLAN packets. VTEPs implemented on physical switches or hypervisors are essential components of VXLAN deployments. VXLAN changes how networks are architected but does not eliminate switching infrastructure.
Providing console access is an administrative and management function unrelated to VXLAN. Console access involves serial or network-based connections to network devices for configuration and troubleshooting purposes. VXLAN is a data plane technology for network virtualization and has no relationship to management console access.