Cisco 350-601 Implementing and Operating Cisco Data Center Core Technologies (DCCOR) Exam Dumps and Practice Test Questions Set 12 Q166 – 180

Question 166

Which storage protocol uses TCP/IP as its transport mechanism?

A) Fibre Channel

B) FCoE

C) iSCSI

D) FICON

Answer: C

Explanation:

iSCSI uses TCP/IP as its transport mechanism, encapsulating SCSI commands within TCP/IP packets for transmission over standard Ethernet networks. This approach allows storage traffic to traverse existing IP networks without requiring specialized Fibre Channel infrastructure, making iSCSI an economical alternative for organizations wanting to implement networked storage using commodity networking equipment. Understanding iSCSI’s transport characteristics is important for designing storage networks and troubleshooting performance issues.

iSCSI operates by mapping SCSI protocol operations, which were originally designed for parallel bus connections, into TCP/IP packets that can traverse routed networks. An iSCSI initiator, typically a server running iSCSI driver software or using an iSCSI HBA, generates SCSI commands and encapsulates them in iSCSI PDUs. These PDUs are then encapsulated in TCP segments for reliable delivery, which are further encapsulated in IP packets for routing across the network. The iSCSI target, typically a storage array or storage server, receives these packets, extracts the SCSI commands, executes them against attached storage, and returns responses using the same encapsulation process in reverse.

The use of TCP/IP provides several advantages and considerations. TCP ensures reliable, ordered delivery of storage commands and data through acknowledgments, retransmissions, and flow control, which is essential for data integrity. The protocol can traverse routers and operate over wide area networks, enabling storage replication and disaster recovery across geographic distances. Standard Ethernet switches and IP routing infrastructure can carry iSCSI traffic, avoiding the cost of Fibre Channel switches and specialized cabling. However, TCP processing overhead and IP routing latency can impact performance compared to native Fibre Channel, particularly for small random I/O patterns. Network design considerations include implementing jumbo frames to reduce overhead, using dedicated VLANs or networks for storage traffic isolation, implementing QoS to prioritize storage traffic, and ensuring sufficient bandwidth to meet throughput requirements.
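As an illustration of the design considerations above, a minimal NX-OS-style sketch of a dedicated storage VLAN with jumbo frames might look like the following. The VLAN number, interface, and names are hypothetical:

```
! Hypothetical dedicated iSCSI VLAN with jumbo frames (NX-OS style)
vlan 100
  name iSCSI-Storage

interface Ethernet1/10
  switchport
  switchport access vlan 100
  mtu 9216
  no shutdown
```

Isolating iSCSI traffic on its own VLAN simplifies QoS classification, and a 9216-byte MTU lets a full 8 KB SCSI data block travel in a single frame instead of several.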

Option A, Fibre Channel, uses its own specialized transport protocol running over dedicated Fibre Channel networks. FC does not use TCP/IP but instead uses FC-2 through FC-4 layers in the Fibre Channel protocol stack.

Option B, FCoE or Fibre Channel over Ethernet, encapsulates native Fibre Channel frames in Ethernet frames but does not use TCP/IP. FCoE operates at Layer 2 over lossless Ethernet and requires DCB capabilities.

Option D, FICON, is an IBM mainframe storage protocol that uses Fibre Channel transport but with mainframe-specific command sets. Like FC, it does not use TCP/IP.

iSCSI’s use of TCP/IP makes it accessible and economical while requiring careful network design to achieve adequate storage performance.

Question 167

What is the purpose of an EPG in Cisco ACI?

A) To define physical port configurations

B) To group endpoints with similar policy requirements

C) To configure routing protocols

D) To manage power distribution

Answer: B

Explanation:

An Endpoint Group in Cisco ACI groups endpoints such as virtual machines, physical servers, or containers that have similar policy requirements, allowing administrators to apply consistent security, QoS, and connectivity policies to all members of the group. EPGs represent collections of application components that share common characteristics and require similar network treatment, forming the fundamental building block of ACI’s application-centric policy model. Understanding EPGs is essential for designing and implementing ACI policy architecture.

EPGs operate at a higher abstraction level than traditional networking constructs like VLANs or IP subnets. Rather than grouping endpoints by network location or addressing, EPGs group them by application function or role. For example, a three-tier application might use a web EPG for web servers, an application EPG for application servers, and a database EPG for database servers. Each EPG contains endpoints performing similar functions regardless of their physical location, IP addresses, or underlying infrastructure. This application-centric approach aligns network policy with application architecture, making policies more intuitive to define and easier to maintain as applications evolve.

Endpoints become members of EPGs through various mechanisms. Static assignment associates specific switch ports, VLANs, or encapsulation values with an EPG, useful for physical servers or network devices. Dynamic assignment uses integration with hypervisors like VMware vCenter or container orchestrators like Kubernetes to automatically place virtual machines or containers into appropriate EPGs based on metadata. Attribute-based assignment uses endpoint attributes like MAC addresses or IP addresses to determine EPG membership. Once an endpoint is associated with an EPG, it inherits all policies applied to that EPG including contracts defining allowed communication, QoS settings, and security posture. EPGs are associated with bridge domains for Layer 2 connectivity and indirectly with VRFs through the bridge domain association, establishing the complete networking context for endpoints.
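For illustration, EPGs are typically created on the APIC through the REST API using the fvAEPg managed object. The sketch below shows a hypothetical web EPG associated with a bridge domain; the EPG and bridge domain names are invented, and real payloads carry additional attributes:

```
{
  "fvAEPg": {
    "attributes": { "name": "web-epg" },
    "children": [
      { "fvRsBd": { "attributes": { "tnFvBDName": "web-bd" } } }
    ]
  }
}
```

The fvRsBd child object expresses the EPG-to-bridge-domain association described above, which in turn places the EPG's endpoints into the VRF that the bridge domain references.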

Option A incorrectly focuses on physical port configuration. While EPGs can include physical ports through static binding, their purpose is logical grouping for policy application rather than physical configuration management.

Option C suggests routing protocol configuration, which is unrelated to EPGs. Routing protocols in ACI are configured separately at the VRF and Layer 3 Out levels, not within EPG definitions.

Option D mentions power distribution management, which is completely unrelated to EPGs. EPGs are policy constructs for network and application management, not infrastructure power management.

Endpoint Groups are central to ACI’s policy model, enabling application-aligned network policies that are flexible, scalable, and easy to manage.

Question 168

Which command verifies VXLAN tunnel status on a Cisco Nexus switch?

A) show vxlan

B) show nve peers

C) show tunnel status

D) show overlay

Answer: B

Explanation:

The show nve peers command verifies VXLAN tunnel status on a Cisco Nexus switch by displaying information about Network Virtualization Edge peers, showing which remote VTEPs have been discovered and whether tunnels to those peers are operational. NVE refers to the VXLAN tunnel endpoint functionality, and this command provides visibility into the overlay network connectivity between VTEPs. Understanding how to verify VXLAN tunnel status is essential for troubleshooting overlay network issues and ensuring proper connectivity between leaf switches or other VXLAN-capable devices.

The show nve peers output displays several important pieces of information about each discovered peer VTEP. The Peer IP address shows the underlay IP address of the remote VTEP, which is used as the destination for encapsulated VXLAN traffic. The State column indicates whether the tunnel to that peer is up and operational or if there are connectivity issues preventing communication. The Up Time field shows how long the tunnel has been established, useful for identifying recent flaps or connectivity problems. Some implementations also display the number of VNIs or network segments being shared with each peer, indicating the scope of overlay connectivity.
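A representative output sketch is shown below; the peer addresses, uptimes, and MAC values are illustrative, and exact columns vary by platform and NX-OS release:

```
switch# show nve peers
Interface Peer-IP          State LearnType Uptime   Router-Mac
--------- ---------------- ----- --------- -------- -----------------
nve1      10.1.1.2         Up    CP        01:23:45 5254.0012.3456
nve1      10.1.1.3         Up    CP        2d01h    5254.0065.4321
```

A LearnType of CP indicates the peer was learned through the control plane (EVPN), while DP would indicate data plane flood-and-learn discovery.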

VXLAN tunnels between NVE peers are established dynamically based on endpoint learning and control plane protocols. When a VTEP learns that an endpoint in a particular VXLAN segment exists behind a remote VTEP, it establishes a tunnel to that peer if one does not already exist. In EVPN-based VXLAN deployments, MP-BGP control plane distribution automatically informs VTEPs about remote endpoints and their associated VTEPs, triggering tunnel establishment. The show nve peers command helps verify that these dynamic tunnel establishment processes are working correctly. If expected peers do not appear in the output, it indicates problems with the control plane, underlay routing, or VXLAN configuration. The absence of tunnels prevents overlay communication even if underlay connectivity exists.

Option A, show vxlan, is not a complete command on most Nexus platforms. Various show commands with vxlan keywords exist for different purposes, but show nve peers is the specific command for verifying tunnel peer status.

Option C, show tunnel status, is not a valid NX-OS command for VXLAN verification. While the concept is correct, this is not the actual command syntax used on Nexus switches.

Option D, show overlay, similarly is not a standard NX-OS command for VXLAN tunnel verification. The NVE interface and related show nve commands are the proper way to examine VXLAN overlay status.

The show nve peers command is essential for verifying and troubleshooting VXLAN overlay connectivity in data center networks.

Question 169

What is the default administrative distance for EIGRP internal routes?

A) 90

B) 110

C) 120

D) 170

Answer: A

Explanation:

The default administrative distance for EIGRP internal routes is 90, making EIGRP routes more preferred than OSPF routes (administrative distance 110) or RIP routes (administrative distance 120) when the same destination is learned through multiple routing protocols. Administrative distance is a measure of routing protocol trustworthiness, with lower values indicating more trusted sources. Understanding administrative distance is important for predicting route selection behavior in networks running multiple routing protocols.

Administrative distance provides a mechanism for routers to choose between routes learned from different sources or protocols when multiple paths to the same destination exist with equal prefix lengths. Each routing protocol and route source has a default administrative distance value. Connected interfaces have administrative distance 0, static routes default to 1, EIGRP internal routes use 90, IGRP routes use 100, OSPF routes use 110, IS-IS routes use 115, RIP routes use 120, and EIGRP external routes use 170. When a router learns the same destination network from multiple protocols, it installs only the route with the lowest administrative distance in the routing table, regardless of metric values within each protocol.

EIGRP distinguishes between internal and external routes, assigning different administrative distances to each. Internal routes with administrative distance 90 are learned from EIGRP neighbors within the same autonomous system, representing networks that EIGRP itself is routing to. External routes with administrative distance 170 are networks learned from other routing protocols and redistributed into EIGRP, such as routes originally from OSPF, BGP, or static routes that were redistributed into EIGRP. This distinction allows EIGRP to prefer its own natively learned routes over redistributed routes, reducing the likelihood of routing loops during mutual redistribution scenarios. Administrators can manually adjust administrative distance values when needed to influence route selection, though this should be done carefully with full understanding of the implications.
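In NX-OS routing table output, the administrative distance appears as the first number in the bracketed [AD/metric] pair, making the internal versus external distinction easy to spot. The sketch below uses invented prefixes, next hops, and metrics:

```
switch# show ip route eigrp
10.1.0.0/16, ubest/mbest: 1/0
    *via 192.168.1.2, Eth1/1, [90/130816], 01:00:00, eigrp-1, internal
172.16.0.0/16, ubest/mbest: 1/0
    *via 192.168.1.2, Eth1/1, [170/25600], 01:00:00, eigrp-1, external
```

Here 10.1.0.0/16 is a natively learned internal route at distance 90, while 172.16.0.0/16 was redistributed into EIGRP and carries distance 170.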

Option B, 110, is the administrative distance for OSPF routes, not EIGRP. This value makes OSPF routes less preferred than EIGRP internal routes when both protocols advertise the same destination.

Option C, 120, is the administrative distance for RIP routes. RIP routes are trusted less than either EIGRP or OSPF routes according to default administrative distance values.

Option D, 170, is the administrative distance for EIGRP external routes, not internal routes. This higher value distinguishes redistributed routes from natively learned EIGRP routes.

Understanding administrative distance, particularly for commonly used protocols like EIGRP, is essential for predicting routing behavior in multi-protocol environments.

Question 170

Which Cisco Nexus feature allows traffic mirroring for network analysis?

A) NetFlow

B) SPAN

C) sFlow

D) RMON

Answer: B

Explanation:

SPAN or Switched Port Analyzer allows traffic mirroring for network analysis by copying packets from source ports or VLANs to a destination port where monitoring tools can capture and analyze the traffic. SPAN is essential for network troubleshooting, security monitoring, application performance analysis, and protocol debugging, as it provides visibility into actual network traffic without disrupting production flows. Understanding SPAN configuration and limitations is important for effective network monitoring and troubleshooting.

SPAN operates by configuring a session that specifies traffic sources and a monitoring destination. Source traffic can come from physical ports, port channels, VLANs, or in some cases specific traffic matching certain criteria. The switch replicates packets matching the source specification and forwards copies to the destination port where a network analyzer, packet capture device, intrusion detection system, or other monitoring tool connects. The original traffic continues to its intended destination unaffected, allowing non-intrusive monitoring of production traffic. Multiple SPAN sessions can run simultaneously on a switch, subject to hardware limitations, enabling monitoring of different traffic sources for different purposes.
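A minimal local SPAN sketch on NX-OS follows this pattern; the session number and interfaces are hypothetical. Note that on Nexus switches the destination interface must be configured with switchport monitor before the session will come up:

```
interface Ethernet1/48
  switchport
  switchport monitor

monitor session 1
  source interface ethernet 1/1 both
  destination interface ethernet 1/48
  no shut
```

The both keyword mirrors traffic in each direction on the source port; rx or tx can be used instead to capture only received or only transmitted frames.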

Several SPAN variants address different monitoring scenarios. Local SPAN mirrors traffic to a destination port on the same switch as the source, the most common configuration for troubleshooting and monitoring. Remote SPAN or RSPAN extends monitoring across multiple switches by transporting mirrored traffic over a special RSPAN VLAN to a destination switch where monitoring tools connect. Encapsulated Remote SPAN or ERSPAN further extends capabilities by encapsulating mirrored packets in GRE tunnels, allowing transport across Layer 3 networks and even across WAN connections for centralized monitoring. SPAN has limitations including potential packet drops if mirrored traffic exceeds destination port bandwidth, inability to capture certain control plane traffic, and performance impacts if excessive traffic is mirrored on resource-constrained platforms.

Option A, NetFlow, is a different traffic analysis technology that collects flow statistics rather than copying actual packets. NetFlow provides aggregated information about traffic patterns but does not mirror packets for detailed protocol analysis.

Option C, sFlow, is a sampling-based traffic monitoring technology similar to NetFlow but using a different architecture. Like NetFlow, sFlow provides statistical information rather than complete packet copies.

Option D, RMON or Remote Monitoring, is an SNMP-based network monitoring standard that collects various statistics and alarms. RMON provides monitoring data but does not mirror traffic like SPAN.

SPAN is the fundamental tool for packet-level network traffic analysis and troubleshooting on Cisco switches.

Question 171

What is the purpose of multicast routing in a data center network?

A) To replicate traffic efficiently to multiple receivers

B) To provide redundant unicast paths

C) To encrypt broadcast traffic

D) To compress network traffic

Answer: A

Explanation:

The purpose of multicast routing in a data center network is to replicate traffic efficiently to multiple receivers, delivering a single stream from a source to many destinations without consuming bandwidth proportional to the number of receivers. Multicast is essential for applications like video streaming, financial data distribution, software updates, and database replication where the same content must reach multiple endpoints simultaneously. Efficient multicast implementation reduces network bandwidth consumption and server load compared to using multiple unicast streams.

Multicast operates using special IP addresses in the 224.0.0.0 to 239.255.255.255 range (Class D addresses), where each multicast group address represents a logical set of receivers interested in particular content. Sources send traffic to a multicast group address rather than individual receiver addresses. Routers and Layer 3 switches use multicast routing protocols such as PIM (Protocol Independent Multicast) to build distribution trees that efficiently deliver traffic from sources to all interested receivers. The network replicates packets only where paths diverge to reach different receivers, avoiding the bandwidth multiplication that would occur if separate unicast streams were sent to each receiver.

Data centers commonly use multicast for several purposes. Storage replication applications use multicast to efficiently copy data to multiple storage nodes simultaneously, critical for distributed storage systems and backup operations. Market data distribution in financial environments uses multicast to deliver real-time trading information to thousands of client systems. Virtual machine management platforms use multicast for certain cluster communication and synchronization tasks. Video distribution systems, whether for surveillance, corporate communications, or media production, rely heavily on multicast to deliver streams to multiple viewers without overwhelming source servers or network bandwidth. Proper multicast implementation requires configuring PIM on all Layer 3 devices in the path and enabling IGMP snooping on Layer 2 switches so that traffic within a VLAN is delivered only to ports with interested receivers rather than flooded throughout the broadcast domain.
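A minimal PIM sparse-mode sketch on NX-OS might look like the following; the static RP address, group range, and interface are hypothetical, and production designs often use Anycast-RP or other RP redundancy mechanisms instead of a single static RP:

```
feature pim

ip pim rp-address 10.0.0.1 group-list 239.0.0.0/8

interface Ethernet1/1
  ip pim sparse-mode
```

Sparse mode must be enabled on every Layer 3 interface in the multicast path; a missing ip pim sparse-mode on a transit interface is a common cause of broken multicast forwarding.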

Option B incorrectly describes unicast redundancy mechanisms. Multicast is about efficient one-to-many delivery, not providing redundant paths for unicast traffic.

Option C suggests encryption of broadcast traffic, which is unrelated to multicast’s purpose. Multicast provides efficient replication; encryption would be applied separately if needed for security.

Option D mentions compression, which is not multicast’s function. Multicast reduces bandwidth through efficient forwarding topology, not by compressing data.

Multicast routing is essential for efficiently delivering one-to-many traffic patterns common in modern data center applications.

Question 172

Which field in the VXLAN header identifies the virtual network segment?

A) VRF ID

B) VLAN ID

C) VNI

D) VSAN ID

Answer: C

Explanation:

The VNI or VXLAN Network Identifier field in the VXLAN header identifies the virtual network segment, providing the network segmentation and isolation necessary for multi-tenancy in overlay networks. The VNI is a 24-bit field allowing over 16 million unique network segments, dramatically exceeding the 4094 VLAN limitation of traditional 802.1Q tagging. Understanding the VNI and its role in VXLAN encapsulation is fundamental to working with overlay networking technologies.

The VNI serves as the network identifier within VXLAN encapsulation, conceptually similar to a VLAN ID but operating at the overlay layer rather than being constrained by physical network limitations. When a VTEP encapsulates an Ethernet frame for transmission across the VXLAN overlay, it inserts a VXLAN header containing the VNI that identifies which virtual network segment the frame belongs to. All endpoints within the same VNI can communicate at Layer 2 as if they were on the same VLAN, regardless of their physical location or the underlay network topology. Different VNIs provide complete traffic isolation, enabling secure multi-tenancy where different customers, applications, or security zones use distinct VNIs without visibility into each other’s traffic.

The mapping between VNIs and traditional VLANs occurs at the VTEP, which bridges between the overlay and underlay networks. On the access side facing endpoints, the VTEP receives frames on VLANs, and based on configuration, maps those VLANs to specific VNIs for encapsulation. The reverse process occurs when receiving VXLAN traffic, where the VTEP extracts the VNI from the VXLAN header, decapsulates the original frame, and forwards it on the appropriate local VLAN. In ACI deployments, VNIs are automatically allocated when EPGs are created, with the APIC managing VNI assignment and ensuring consistency across the fabric. The large VNI space allows for massive scale-out of network segments, supporting cloud-scale infrastructure with thousands of tenants or micro-segmentation strategies that create dedicated network segments for individual applications or security policies.
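On a standalone NX-OS VXLAN switch (as opposed to ACI, where the APIC allocates VNIs automatically), the VLAN-to-VNI mapping described above is configured explicitly. The sketch below uses hypothetical VLAN and VNI numbers:

```
feature nv overlay
feature vn-segment-vlan-based

vlan 100
  vn-segment 10100

interface nve1
  no shutdown
  source-interface loopback0
  member vni 10100
    ingress-replication protocol bgp
```

The vn-segment command binds local VLAN 100 to VNI 10100, and the member vni statement under the NVE interface enables encapsulation and decapsulation for that segment using the loopback as the VTEP source address.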

Option A, VRF ID, relates to virtual routing and forwarding instances providing Layer 3 isolation. While VRFs and VNIs both provide segmentation, the VXLAN header specifically uses VNI for network segment identification.

Option B, VLAN ID, identifies traditional 802.1Q VLANs at Layer 2. VXLAN specifically uses VNI rather than VLAN ID in its encapsulation header to overcome VLAN scaling limitations.

Option D, VSAN ID, identifies virtual storage area networks in Fibre Channel environments. VSAN is unrelated to VXLAN, which operates in Ethernet networking contexts.

The VNI is the fundamental identifier enabling scalable network segmentation in VXLAN overlay networks.

Question 173

What is the purpose of BGP route reflectors?

A) To reduce the number of IBGP peering sessions required

B) To convert OSPF routes to BGP

C) To provide Layer 2 redundancy

D) To encrypt BGP updates

Answer: A

Explanation:

The purpose of BGP route reflectors is to reduce the number of IBGP peering sessions required in an autonomous system, simplifying BGP deployment and reducing configuration overhead in large networks. Without route reflectors, IBGP requires a full mesh of peering sessions between all BGP routers within the autonomous system to ensure that all routers learn all routes, which becomes unmanageable as the number of routers grows. Route reflectors eliminate the full mesh requirement by allowing certain routers to redistribute IBGP-learned routes to other IBGP peers, dramatically reducing the number of sessions needed.

BGP operates differently for external and internal peers due to loop prevention mechanisms. When a BGP router receives a route from an EBGP peer in another autonomous system, it can advertise that route to both EBGP and IBGP peers. However, when receiving a route from an IBGP peer, the router does not advertise it to other IBGP peers by default to prevent routing loops. This behavior necessitates the full mesh IBGP topology where every router peers with every other router to ensure complete route distribution. In a network with N BGP routers, a full mesh requires N*(N-1)/2 peering sessions, which quickly becomes impractical as the network scales.

Route reflectors modify this behavior in a controlled way to enable scalable IBGP topologies. A route reflector is allowed to readvertise IBGP-learned routes to other IBGP peers called route reflector clients. The route reflector forms IBGP sessions with its clients and potentially with other route reflectors or non-client IBGP routers. Clients only need to peer with the route reflector rather than with all other routers, reducing session counts dramatically. The route reflector adds special attributes like Originator ID and Cluster List to prevent loops. Multiple route reflectors can be deployed for redundancy, with clients peering to multiple route reflectors. This hierarchical topology is commonly used in data center spine-leaf architectures where spine switches act as route reflectors for leaf switches running BGP for EVPN control plane in VXLAN deployments.
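As a sketch of the spine-as-route-reflector pattern described above, a standalone NX-OS spine might carry configuration along these lines for each leaf client; the AS number and neighbor address are hypothetical:

```
router bgp 65000
  neighbor 10.0.0.11
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      route-reflector-client
```

The route-reflector-client statement under the address family is what permits the spine to readvertise this leaf's IBGP-learned EVPN routes to the other leaf clients, removing the need for leaf-to-leaf peering.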

Option B incorrectly suggests protocol conversion. Route reflectors work within BGP and do not convert between routing protocols. Route redistribution would be used for importing routes from other protocols.

Option C mentions Layer 2 redundancy, which is unrelated to route reflectors. Route reflectors operate at Layer 3 in the BGP control plane.

Option D suggests encryption, which is not the purpose of route reflectors. BGP sessions can be secured with MD5 authentication or TCP-AO, but route reflectors are about topology optimization.

Route reflectors are essential for scaling IBGP deployments in enterprise and service provider networks.

Question 174

Which protocol provides the control plane for VXLAN in ACI fabrics?

A) OSPF

B) IS-IS

C) MP-BGP with EVPN

D) RIP

Answer: C

Explanation:

Multiprotocol BGP with Ethernet VPN extensions provides the control plane for VXLAN in ACI fabrics, enabling automated distribution of endpoint reachability information, MAC address learning, and VTEP discovery across the overlay network. EVPN is a standards-based control plane that eliminates the need for data plane learning and flooding in VXLAN networks, providing superior scalability, faster convergence, and enhanced features compared to flood-and-learn VXLAN implementations. Understanding EVPN’s role in ACI is essential for comprehending how the fabric distributes endpoint information and builds optimal forwarding paths.

EVPN uses MP-BGP to distribute several types of reachability information through different route types. Type 2 routes advertise MAC and IP address information for specific endpoints, informing all VTEPs about endpoint locations and their associated VNIs. Type 3 routes advertise VTEP addresses and the VNIs they support, enabling VTEPs to discover each other and determine which remote VTEPs host endpoints in particular network segments. Type 5 routes carry IP prefix information for inter-subnet routing, supporting distributed anycast gateway implementations. This control plane distribution allows VTEPs to build their forwarding tables proactively based on advertised information rather than reactively through data plane learning and flooding.

The benefits of using EVPN as the VXLAN control plane are substantial. It eliminates unknown unicast flooding in the overlay by ensuring all VTEPs learn endpoint locations through BGP advertisements before traffic arrives. It provides optimal forwarding where traffic flows directly from source to destination VTEP without intermediate hops or flooding. EVPN enables advanced features like multi-homing and fast failover through active-active VTEP connectivity. It supports seamless integration with external networks through BGP route distribution. The protocol scales efficiently to very large fabrics because BGP is designed for internet-scale routing with proven scalability characteristics. In ACI, the spine switches act as BGP route reflectors distributing EVPN information between leaf switches, which operate as EVPN PE routers hosting endpoints and VTEPs.
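In ACI the APIC automates all of this, but the same control plane can be seen explicitly on a standalone NX-OS VXLAN EVPN leaf. A minimal sketch, with hypothetical AS number, spine address, and VNI:

```
feature bgp
feature nv overlay
nv overlay evpn

router bgp 65000
  neighbor 10.0.0.1
    remote-as 65000
    address-family l2vpn evpn
      send-community extended

evpn
  vni 10100 l2
    rd auto
    route-target both auto
```

The nv overlay evpn command enables the EVPN address family as the VXLAN control plane, and the per-VNI route distinguisher and route targets control how the leaf's Type 2 and Type 3 routes are tagged and imported across the fabric.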

Option A, OSPF, is a link-state interior gateway protocol used for IP routing. OSPF is not designed for distributing MAC address reachability information or serving as a VXLAN control plane.

Option B, IS-IS, is used as the underlay routing protocol in ACI fabrics to establish IP connectivity between switches. However, IS-IS is not the overlay control plane; EVPN serves that function.

Option D, RIP, is a legacy distance-vector routing protocol with limited scalability. RIP lacks any capability to serve as a VXLAN control plane and is not used in modern data center fabrics.

MP-BGP with EVPN provides the sophisticated control plane necessary for scalable, efficient VXLAN overlay networking in ACI.

Question 175

What is the purpose of a contract in Cisco ACI?

A) To configure physical interfaces

B) To define communication rules between EPGs

C) To establish VPN tunnels

D) To manage power consumption

Answer: B

Explanation:

The purpose of a contract in Cisco ACI is to define communication rules between Endpoint Groups, specifying what traffic is permitted to flow from one EPG to another and under what conditions. Contracts implement the security policy model in ACI, providing a declarative way to express application communication requirements without configuring individual access control lists on every interface. Understanding contracts and their provider-consumer relationship model is fundamental to implementing ACI security policies.

Contracts work through an explicit permission model where communication between EPGs is denied by default unless a contract permits it. One EPG provides a contract, indicating it offers certain services or is willing to receive certain traffic types. Another EPG consumes the contract, indicating it needs to access those services. Only when both relationships exist does the fabric permit traffic flow according to the rules defined within the contract. This model aligns naturally with application architectures where service providers offer APIs or services that consumers access, making security policy intuitive and application-centric.

A contract contains one or more subjects, which group related policy elements. Each subject contains filters that define the specific traffic allowed, typically specifying protocols, port numbers, and other Layer 4 information. For example, a web contract might have a subject with a filter permitting TCP port 443 for HTTPS traffic. Subjects can also specify additional policies beyond basic permit/deny including quality of service settings, service graph insertion for directing traffic through security appliances or load balancers, traffic logging requirements, and traffic redirection. Contracts support both bidirectional and unidirectional communication patterns. A single contract can be provided by one EPG and consumed by many, supporting one-to-many service models. Similarly, an EPG can consume multiple contracts to access different services from different provider EPGs, supporting many-to-many communication patterns with granular control.
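For illustration, a contract is represented in the APIC object model by vzBrCP, with vzSubj children referencing filters. The REST sketch below shows a hypothetical HTTPS contract; the contract, subject, and filter names are invented:

```
{
  "vzBrCP": {
    "attributes": { "name": "web-contract" },
    "children": [
      { "vzSubj": {
          "attributes": { "name": "https" },
          "children": [
            { "vzRsSubjFiltAtt": { "attributes": { "tnVzFilterName": "allow-https" } } }
          ]
        }
      }
    ]
  }
}
```

The referenced filter (here allow-https) would separately define the Layer 4 match, such as TCP destination port 443; the contract then takes effect only once one EPG provides it and another consumes it.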

Option A incorrectly suggests physical interface configuration as the contract purpose. Contracts are logical policy constructs operating at the EPG level, completely separate from physical interface configuration.

Option C mentions VPN tunnels, which are configured through different ACI mechanisms when needed for external connectivity. Contracts define EPG-to-EPG communication within or across fabrics, not VPN establishment.

Option D refers to power management, which is unrelated to contracts. ACI contracts are network security and communication policies, not infrastructure power management tools.

Contracts are central to ACI’s security model, enabling intuitive, application-aligned access control policies.

Question 176

Which command configures a VLAN on a Cisco Nexus switch?

A) vlan database

B) vlan configuration

C) vlan

D) create vlan

Answer: C

Explanation:

The vlan command configures a VLAN on a Cisco Nexus switch, entering VLAN configuration mode where additional VLAN parameters can be set. The command syntax is straightforward: entering vlan followed by the VLAN ID or range of IDs creates the VLAN if it does not exist or enters configuration mode for an existing VLAN. This command represents NX-OS’s modern approach to VLAN configuration, differing from legacy IOS methods. Understanding basic VLAN configuration is fundamental to managing Nexus switches.

After entering VLAN configuration mode with the vlan command, administrators can configure various VLAN properties. The name command assigns a descriptive name to the VLAN, making configurations more readable and manageable than using only numeric IDs. The state command can set the VLAN to active or suspend state, controlling whether the VLAN can pass traffic. The no shutdown command ensures the VLAN is operational, though VLANs are active by default unless explicitly shut down. Configuration changes take effect immediately, and VLANs can be created and modified without interrupting existing traffic on other VLANs.

NX-OS supports VLAN ranges for efficient configuration of multiple VLANs simultaneously. The command vlan 10-20 creates or configures VLANs 10 through 20 in a single operation, useful when provisioning many VLANs for a new deployment. After VLAN creation, the VLANs must be associated with interfaces through switchport configuration, specifying whether interfaces operate in access mode for a single VLAN or trunk mode for multiple VLANs. The show vlan command displays configured VLANs, their names, states, and associated interfaces, providing verification of VLAN configuration. Unlike legacy IOS, which kept VLANs in a separate VLAN database file, NX-OS stores VLAN definitions as part of the running configuration, so show running-config vlan displays them alongside the rest of the switch configuration.
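
As an illustration (the VLAN numbers, name, and interface are hypothetical), the steps described above might be combined as follows on a Nexus switch:

```
switch# configure terminal
switch(config)# vlan 10
switch(config-vlan)# name WEB-SERVERS
switch(config-vlan)# state active
switch(config-vlan)# exit
switch(config)# vlan 11-20
switch(config-vlan)# exit
switch(config)# interface ethernet 1/1
switch(config-if)# switchport
switch(config-if)# switchport mode access
switch(config-if)# switchport access vlan 10
switch(config-if)# exit
switch(config)# show vlan brief
```

The switchport command is included because interfaces on some Nexus platforms default to Layer 3 mode and must be converted to Layer 2 before access or trunk settings apply.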

Option A, vlan database, was used in older Cisco IOS versions to enter a separate VLAN configuration mode. This command is not used in NX-OS, which uses the simpler vlan command directly in global configuration mode.

Option B, vlan configuration, does not create VLANs. NX-OS does provide a vlan configuration mode for pre-provisioning VLAN feature settings, but VLAN creation itself uses the vlan command.

Option D, create vlan, is not the correct syntax. NX-OS uses vlan followed by the VLAN ID, not a create keyword, to establish VLANs.

The vlan command provides the straightforward VLAN creation and configuration interface in Cisco NX-OS.

Question 177

What is the primary function of the Application Policy Infrastructure Controller in ACI?

A) To forward data plane traffic

B) To provide centralized policy definition and fabric management

C) To replace all leaf switches

D) To provide wireless controller functionality

Answer: B

Explanation:

The primary function of the Application Policy Infrastructure Controller in ACI is to provide centralized policy definition and fabric management, serving as the single source of truth for all network policies and configurations across the entire ACI fabric. The APIC translates high-level business intent and application requirements into the detailed network configurations deployed to individual switches, automating complex operations and ensuring consistency. Understanding the APIC’s role is fundamental to comprehending how ACI implements policy-driven networking.

The APIC performs several critical functions that distinguish ACI from traditional network management approaches. It provides a unified interface, whether GUI, CLI, or REST API, where administrators define policies describing how applications should communicate, what security rules apply, what QoS is required, and other networking requirements. The APIC maintains a logical model of the desired network state independent of physical topology, allowing administrators to work with application-centric abstractions like endpoint groups and contracts rather than low-level switch configurations. When policies are created or modified, the APIC automatically calculates the necessary configurations for all affected switches and pushes those configurations through the OpFlex protocol.

The APIC also provides comprehensive visibility and monitoring capabilities. It collects health scores, statistics, fault information, and operational state from all fabric elements, correlating this data with defined policies to provide insight into how well the fabric is meeting application requirements. The APIC’s cluster architecture ensures high availability, with typically three or more controllers forming a cluster where policy data is replicated. If one APIC fails, others continue providing full functionality. Importantly, the APIC operates out-of-band from the data plane, so fabric switches continue forwarding traffic even if all APICs become temporarily unavailable, though policy modifications cannot be made until APIC connectivity is restored. The controller handles various lifecycle management tasks including firmware upgrades across the fabric, backup and restore operations, and disaster recovery procedures.
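
As a small illustration of the REST interface mentioned above, the following Python sketch builds the JSON body a client might POST to the APIC's /api/mo/uni.json endpoint to create a tenant object. The tenant name is hypothetical, the attribute layout follows common ACI REST conventions, and no live controller is contacted:

```python
import json

def tenant_payload(name: str) -> dict:
    """Build the JSON body a REST client would POST to /api/mo/uni.json
    to create or update a tenant. fvTenant is the ACI object-model class
    representing a tenant under the policy universe (uni)."""
    return {
        "fvTenant": {
            "attributes": {
                "name": name,               # tenant name is illustrative
                "status": "created,modified"
            }
        }
    }

# Serialize the payload as it would appear on the wire.
body = json.dumps(tenant_payload("Prod"))
print(body)
```

In a real deployment the client would first authenticate against /api/aaaLogin.json and send the resulting token with each request; the APIC then expands this single declarative object into the per-switch configuration it pushes to the fabric.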

Option A incorrectly suggests the APIC forwards data plane traffic. The APIC is a management and control plane system only; all data forwarding occurs in the leaf and spine switches independently.

Option C claims the APIC replaces leaf switches, which is wrong. The APIC is a separate controller that manages switches but does not replace them. The switches handle all packet forwarding.

Option D mentions wireless controller functionality, which is not the APIC’s purpose. ACI is focused on data center networking; wireless management would require separate controllers.

The APIC’s centralized management and policy automation capabilities are what enable ACI’s transformative approach to data center networking.

Question 178

Which storage topology connects servers directly to storage without a network?

A) NAS

B) SAN

C) DAS

D) iSCSI

Answer: C

Explanation:

Direct Attached Storage connects servers directly to storage devices without an intervening network, providing a simple storage architecture where disk drives, JBODs, or storage arrays connect directly to server host bus adapters through interfaces like SATA, SAS, or direct-attach SAS cables. DAS represents the most basic storage architecture, contrasting with networked storage approaches like SAN and NAS that enable storage sharing across multiple servers. Understanding DAS characteristics helps in selecting appropriate storage architectures for different use cases.

DAS operates by connecting storage devices directly to individual servers using cable connections rather than through network switches or fabrics. Internal DAS includes hard drives or SSDs installed inside server chassis connected to internal SATA, SAS, or NVMe interfaces, the most common form for boot drives and local storage. External DAS uses external storage enclosures connected to servers through external SAS cables, providing additional capacity beyond internal drive bays. This direct connection architecture means each server has exclusive access to its attached storage, with no ability for other servers to directly access that storage without server cooperation at the application layer.

DAS provides several advantages in appropriate scenarios. It offers the lowest latency and highest performance because data does not traverse network infrastructure, eliminating network-related bottlenecks and latency sources. Configuration is simple with no need for SAN switches, Fibre Channel infrastructure, or complex storage network design. Cost is typically lower than networked storage for small deployments because it avoids network infrastructure expenses. However, DAS has significant limitations for larger or more complex environments. Storage capacity is isolated to individual servers, making it difficult to pool storage resources or move capacity between applications. Data protection and disaster recovery are more complex because backup systems cannot directly access storage, requiring server-level backup agents. Server failure means attached storage becomes inaccessible until the server recovers. DAS is commonly used for server boot drives, applications requiring maximum I/O performance, small deployments where storage sharing is unnecessary, or edge locations where simplicity outweighs sharing benefits.

Option A, NAS or Network Attached Storage, provides file-level storage access over network protocols like NFS or SMB. NAS specifically uses network infrastructure rather than direct attachment.

Option B, SAN or Storage Area Network, provides block-level storage access over dedicated storage networks using Fibre Channel or iSCSI. SAN explicitly requires network infrastructure.

Option D, iSCSI, is a protocol for accessing block storage over IP networks. iSCSI operates over Ethernet networks rather than direct server-to-storage connections.

Direct Attached Storage serves important roles in storage architectures despite the prevalence of networked storage in modern data centers.

Question 179

What is the purpose of QoS marking in a data center network?

A) To encrypt traffic

B) To classify and prioritize traffic for differential treatment

C) To compress packets

D) To route traffic between VLANs

Answer: B

Explanation:

The purpose of QoS marking in a data center network is to classify and prioritize traffic by writing priority values into packet headers that subsequent network devices can use to provide differential treatment based on application requirements. QoS marking establishes traffic classes that distinguish latency-sensitive applications like voice and video from bulk data transfers, enabling network devices to allocate resources appropriately and ensure critical applications receive necessary service levels. Understanding QoS marking is essential for implementing effective quality of service policies.

QoS marking operates by modifying specific fields in packet headers to carry priority information. At Layer 2, the 802.1p Class of Service field in the VLAN tag uses three bits to provide eight priority levels from 0 (lowest) to 7 (highest). Common practices map voice to CoS 5, video to CoS 4, critical data to CoS 3, and best-effort traffic to CoS 0. At Layer 3, the DiffServ Code Point field in the IP header uses six bits providing 64 possible values. Standard DSCP values include EF or Expedited Forwarding (typically value 46) for voice, AF or Assured Forwarding classes (like AF31, AF32, AF33) for different data priorities, and default DSCP 0 for best-effort traffic. These markings travel with packets throughout the network, allowing each device to identify traffic classes and apply appropriate treatment.
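
The relationship between a DSCP value and the byte actually written into the IP header can be sketched in Python: the six DSCP bits occupy the upper portion of the legacy ToS byte, so the header byte is the DSCP shifted left by two. Applications can request a marking through the standard IP_TOS socket option, though whether the network honors or remarks it depends on the trust boundary:

```python
import socket

def dscp_to_tos(dscp: int) -> int:
    """DSCP occupies the upper six bits of the IP ToS byte, so the
    value written to the header is the DSCP shifted left by 2."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value (0-63)")
    return dscp << 2

EF, AF31, BEST_EFFORT = 46, 26, 0  # standard DSCP values

# An application can ask the OS to mark its outgoing packets as EF.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(EF))
sock.close()

print(dscp_to_tos(EF))           # 184
print(dscp_to_tos(AF31))         # 104
print(dscp_to_tos(BEST_EFFORT))  # 0
```

This also explains why older tools sometimes show "ToS 184" for voice traffic: it is simply EF (DSCP 46) expressed as the full ToS byte.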

The marking process typically occurs at the network edge when traffic enters the data center, establishing classifications that remain consistent throughout the infrastructure. Trust boundaries determine where markings are accepted as-is versus where traffic is reclassified. Common designs trust markings from IP phones and application servers that are known to mark traffic appropriately, but remark traffic from user workstations to prevent priority escalation. Once marked, packets can receive preferential treatment through various QoS mechanisms. Priority queuing places marked traffic into different output queues with higher-priority queues serviced first. Weighted fair queuing allocates bandwidth proportionally based on markings. Traffic shaping and policing enforce rate limits differently based on priority classes. Congestion avoidance mechanisms like WRED drop lower-priority traffic before affecting higher priorities during congestion. Proper QoS marking and enforcement ensures that latency-sensitive applications maintain acceptable performance even during network congestion.

Option A incorrectly suggests encryption as the purpose of QoS marking. Marking writes priority values for traffic classification, not cryptographic protection. Encryption would be provided by separate security mechanisms.

Option C mentions compression, which is unrelated to QoS marking. Marking classifies traffic for prioritization; compression reduces data size, a completely different function.

Option D describes routing functionality, which is separate from QoS. While QoS policies may influence path selection in some scenarios, marking specifically identifies priority levels for differential treatment.

QoS marking is fundamental to implementing effective quality of service policies that ensure critical applications receive appropriate network treatment.

Question 180

Which Cisco technology provides automated fabric deployment and Day-2 operations for data center networks?

A) Cisco DNA Center

B) Cisco DCNM

C) Cisco Prime

D) Cisco ISE

Answer: B

Explanation:

Cisco Data Center Network Manager provides automated fabric deployment and Day-2 operations for data center networks running NX-OS, including VXLAN-EVPN fabrics, traditional Layer 2/Layer 3 networks, and storage networking configurations. DCNM serves as the management and orchestration platform for Nexus-based data centers not using ACI, offering visibility, configuration management, fabric provisioning, and lifecycle operations. Understanding DCNM’s role is important for organizations deploying Nexus switches in traditional mode rather than ACI mode.

DCNM provides comprehensive fabric lifecycle management starting from initial deployment through ongoing operations. During initial deployment, DCNM can discover existing Nexus switches, inventory their configurations, and bring them under management. For greenfield deployments, DCNM offers fabric templates and wizards that automate the configuration of VXLAN-EVPN overlays, underlay routing protocols, and required features across all fabric switches. Administrators define fabric-wide policies and parameters through the DCNM interface, and DCNM generates and deploys the appropriate configurations to individual switches, ensuring consistency and eliminating manual configuration errors.

For Day-2 operations, DCNM provides ongoing network management capabilities. It offers topology visualization showing switch interconnections, vPC relationships, and fabric structure. Configuration compliance monitoring ensures switches maintain desired configurations and alerts administrators to drift or unauthorized changes. Performance monitoring collects statistics, health scores, and capacity information across the fabric. DCNM supports network segmentation through VXLAN VNI provisioning, making it easy to extend network segments across the fabric. The platform handles software lifecycle management including firmware upgrades coordinated across multiple switches with rollback capabilities if issues occur. DCNM also provides integration points with orchestration platforms and can export data to external analytics systems. While sharing some conceptual similarities with the APIC in ACI environments, DCNM operates differently as a traditional network management system rather than a policy-driven controller.

Option A, Cisco DNA Center, is the management platform for enterprise campus and branch networks, not data center networks. DNA Center manages Catalyst switches and wireless infrastructure rather than Nexus switches and data center fabrics.

Option C, Cisco Prime, is a legacy network management platform that has been largely replaced by more specialized tools like DNA Center for campus and DCNM for data centers.

Option D, Cisco ISE or Identity Services Engine, provides network access control and policy enforcement based on user and device identity. ISE handles authentication and authorization, not fabric deployment and operations management.

DCNM is the essential management platform for organizations deploying Nexus-based data center networks in non-ACI configurations.