Cisco 350-601 Implementing and Operating Cisco Data Center Core Technologies (DCCOR) Exam Dumps and Practice Test Questions Set 8, Q106–120

Question 106

Which protocol is used by Cisco Nexus switches to provide loop-free Layer 2 topology in a VXLAN EVPN fabric?

A) Spanning Tree Protocol

B) TRILL

C) IS-IS

D) FabricPath

Answer: C

Explanation:

In a VXLAN EVPN fabric, IS-IS is the protocol Cisco Nexus switches commonly use as the underlay routing protocol, and it is this routed underlay that keeps the fabric loop-free without Spanning Tree. IS-IS is a link-state interior gateway protocol that runs directly over the data link layer rather than over IP, which makes it simple to deploy and well suited to building the underlay of a data center fabric.

VXLAN EVPN fabrics typically use a two-tier protocol architecture. The underlay network uses IS-IS or OSPF as the IGP to provide IP reachability between all VTEP endpoints. IS-IS is favored in many data center deployments because it converges quickly, carries little overhead, and does not depend on IP addressing to form adjacencies. Because the underlay is routed rather than switched, no links need to be blocked to prevent loops, and equal-cost multipath routing distributes traffic across all available paths.

The overlay network in VXLAN EVPN uses MP-BGP EVPN as the control plane protocol to exchange MAC and IP address information between VTEPs. This separation of underlay and overlay protocols allows for better scalability and flexibility in the network design.
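
As a rough illustration, the NX-OS sketch below shows how the two layers might be configured. The process tag, NET address, interface numbers, IP addresses, and AS number are placeholders, and a complete fabric also requires the NVE/VTEP configuration covered in Question 113.

! Underlay: IS-IS provides IP reachability between the VTEP loopbacks (illustrative values)
feature isis
router isis UNDERLAY
  net 49.0001.0000.0000.0001.00
  is-type level-2
interface loopback0
  ip address 10.0.0.1/32
  ip router isis UNDERLAY
interface ethernet 1/1
  no switchport
  ip address 10.1.1.1/30
  ip router isis UNDERLAY
  medium p2p
! Overlay: MP-BGP EVPN exchanges MAC and IP reachability between VTEPs
feature bgp
nv overlay evpn
router bgp 65001
  neighbor 10.0.0.254 remote-as 65001
    update-source loopback0
    address-family l2vpn evpn
      send-community extended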

Spanning Tree Protocol is a legacy Layer 2 protocol used in traditional Ethernet networks to prevent loops, but it is not used in VXLAN EVPN fabrics because the underlay is routed, not switched. STP blocks redundant paths, which is inefficient in modern data center networks.

TRILL is a protocol designed to replace Spanning Tree Protocol in data center networks, but it is not commonly used in VXLAN EVPN deployments with Cisco Nexus switches.

FabricPath is Cisco’s proprietary Layer 2 multipathing technology that was popular before VXLAN became the standard. While FabricPath provides a loop-free topology, it is not the protocol used in VXLAN EVPN fabrics. Modern data centers have largely migrated to VXLAN EVPN for its superior scalability and standards-based approach.

Question 107

What is the default BGP keepalive timer value in Cisco NX-OS?

A) 30 seconds

B) 60 seconds

C) 90 seconds

D) 180 seconds

Answer: B

Explanation:

The default BGP keepalive timer value in Cisco NX-OS is 60 seconds. BGP uses keepalive messages to maintain and verify the connection between BGP peers. These timers are critical for ensuring that BGP sessions remain active and that any failures are detected promptly.

BGP uses two timers to manage peer sessions: the keepalive timer and the hold timer. The keepalive timer determines how frequently keepalive messages are sent to BGP neighbors. The hold timer specifies how long a BGP router will wait to receive a keepalive or update message from a peer before declaring the peer down.

In Cisco NX-OS, the default keepalive timer is set to 60 seconds, and the default hold timer is set to 180 seconds, which is three times the keepalive interval. This ratio ensures that at least three keepalive messages can be missed before the BGP session is considered down, providing tolerance for occasional network issues.

These timer values can be adjusted based on network requirements. In environments where faster convergence is needed, administrators might reduce these timers to detect failures more quickly. However, setting timers too low can cause BGP sessions to flap unnecessarily due to temporary network congestion or delays.
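
The sketch below shows how per-neighbor timers might be tuned on NX-OS and how to verify them; the AS numbers, neighbor address, and timer values are arbitrary examples, not recommendations.

feature bgp
router bgp 65001
  neighbor 192.0.2.2 remote-as 65002
    ! Example only: 10-second keepalive and 30-second hold time instead of the 60/180 defaults
    timers 10 30
! Verify the configured and negotiated timers for the session
show bgp ipv4 unicast neighbors 192.0.2.2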

The 30-second value is not the NX-OS default. Keepalive timers can be lowered manually in scenarios that require faster failure detection, but 30 seconds is not what the system uses out of the box.

The 90-second and 180-second options are incorrect for the keepalive timer. The 180-second value corresponds to the default hold timer, not the keepalive timer. Understanding the relationship between these two timers is important for BGP troubleshooting and optimization in data center environments.

Question 108

Which command displays the MAC address table on a Cisco Nexus switch?

A) show mac address-table

B) show mac-address-table

C) show cam dynamic

D) show bridge table

Answer: A

Explanation:

The correct command to display the MAC address table on a Cisco Nexus switch is show mac address-table. This command provides comprehensive information about all MAC addresses learned by the switch, including the VLAN, MAC address, type, age, and the interface where each MAC address was learned.

The MAC address table, also known as the Content Addressable Memory table, is fundamental to switch operation. When a switch receives a frame, it learns the source MAC address and associates it with the ingress port. The switch then uses this table to make forwarding decisions by looking up the destination MAC address and forwarding the frame out the appropriate port.

The show mac address-table command can be used with various options to filter the output. For example, you can specify a particular VLAN, interface, or MAC address to narrow down the results. You can also use keywords such as dynamic or static to show only dynamically learned or statically configured entries, as in the examples below.
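
The following examples illustrate common filtering options; the VLAN number, interface, and MAC address are arbitrary placeholder values.

! Display the entire table
show mac address-table
! Only dynamically learned entries
show mac address-table dynamic
! Entries in a specific VLAN
show mac address-table vlan 10
! Entries learned on a specific interface
show mac address-table interface ethernet 1/1
! Look up a single MAC address
show mac address-table address 0000.1111.2222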

The hyphenated show mac-address-table form is legacy syntax carried over from Cisco IOS and early NX-OS releases. Current NX-OS software documents and parses the command as show mac address-table, with a space rather than a hyphen.

The show cam dynamic command is not valid in Cisco NX-OS. While CAM refers to Content Addressable Memory, which is the hardware used to store the MAC address table, this command syntax is not used in NX-OS.

The show bridge table command is not a valid command in Cisco NX-OS. This command format might be found in other networking devices or older platforms, but it is not applicable to Nexus switches running NX-OS.

Question 109

In Cisco ACI, what is the function of a Bridge Domain?

A) To provide routing between EPGs

B) To represent a Layer 2 forwarding domain

C) To define security policies

D) To configure external connectivity

Answer: B

Explanation:

In Cisco ACI, a Bridge Domain represents a Layer 2 forwarding domain within the fabric. It defines the unique Layer 2 MAC address space and flooding domain for broadcast, unknown unicast, and multicast traffic. Bridge Domains are fundamental building blocks in the ACI policy model and work in conjunction with EPGs to provide network segmentation and forwarding.

A Bridge Domain is associated with one VRF instance and can contain one or more subnets. It defines how Layer 2 traffic is handled within the ACI fabric, including settings for flooding, ARP handling, and unicast routing. Each Bridge Domain can be configured with specific parameters such as whether to enable unicast routing, limit IP learning to subnets, or enable ARP flooding.

When multiple EPGs are associated with the same Bridge Domain, endpoints in those EPGs can communicate at Layer 2 without requiring a contract, as they share the same broadcast domain. However, if unicast routing is enabled on the Bridge Domain, endpoints can also communicate at Layer 3 through the distributed anycast gateway.

The option about providing routing between EPGs is incorrect because routing between EPGs is actually handled by contracts and the VRF, not the Bridge Domain directly. While a Bridge Domain can enable unicast routing, the policy enforcement between EPGs requires contracts.

Defining security policies is the function of contracts in ACI, not Bridge Domains. Contracts define the rules that govern communication between EPGs, specifying which protocols and ports are allowed.

Configuring external connectivity is handled by External EPGs and Layer 3 Outside connections, not Bridge Domains. While Bridge Domains participate in the overall connectivity architecture, they do not directly configure external connections to networks outside the ACI fabric.

Question 110

Which feature allows Cisco UCS to present multiple vNICs or vHBAs to the operating system?

A) Port Channels

B) Adapter FEX

C) Virtual Interface Cards

D) Fabric Extenders

Answer: B

Explanation:

Adapter FEX is the feature that allows Cisco UCS to present multiple vNICs or vHBAs to the operating system. Adapter FEX technology extends the fabric from the Fabric Interconnects all the way to the Virtual Interface Card in each server, allowing the VIC to function as a remote line card of the Fabric Interconnect.

When Adapter FEX is enabled, the VIC adapter in each UCS server becomes an extension of the fabric. This allows administrators to create multiple virtual network interface cards and virtual host bus adapters that appear as separate physical devices to the server’s operating system. Each vNIC or vHBA can be configured with its own MAC address, VLAN assignments, QoS policies, and network settings.

This capability provides tremendous flexibility in server connectivity. A single physical adapter can support dozens of vNICs and vHBAs, eliminating the need for multiple physical network cards. Administrators can dynamically add or remove virtual adapters through the UCS Manager without physically touching the server, enabling true stateless computing.

Port Channels are used to aggregate multiple physical links into a single logical link for increased bandwidth and redundancy, but they do not present multiple virtual interfaces to the operating system. Port Channels operate at a different layer of the network stack.

Virtual Interface Cards are the physical adapters installed in UCS servers, but the VIC itself is not the feature that enables multiple virtual adapters. Rather, it is the Adapter FEX technology running on the VIC that provides this capability.

Fabric Extenders refer to the UCS 2200 or 2300 series IOM modules that connect blade servers to the Fabric Interconnects in a blade chassis, or to rack-mount FEX devices. While FEX technology is related, it is not specifically the feature that presents multiple vNICs to the OS.

Question 111

What is the maximum number of Fabric Interconnects that can be used in a single Cisco UCS domain?

A) 2

B) 4

C) 6

D) 8

Answer: A

Explanation:

The maximum number of Fabric Interconnects that can be used in a single Cisco UCS domain is 2. This design provides high availability and redundancy for the UCS infrastructure while maintaining a simple and manageable architecture.

In a Cisco UCS domain, the two Fabric Interconnects operate in a redundant configuration where each FI provides independent connectivity to the servers and upstream network. The dual-FI design ensures that if one Fabric Interconnect fails, the other continues to provide full connectivity to all servers in the domain. Each server typically has connections to both Fabric Interconnects, ensuring no single point of failure.

The two Fabric Interconnects work in an active-active configuration for data plane traffic, meaning both FIs can simultaneously forward traffic. However, for management and control plane functions, they operate in an active-passive mode where one FI is designated as the primary and the other as the subordinate. The primary FI manages the UCS domain configuration, and changes are synchronized to the subordinate.

This two-FI limitation is by design and ensures consistent configuration across the domain. All policies, service profiles, and network configurations are managed through UCS Manager running on the Fabric Interconnects, and having exactly two FIs simplifies the synchronization and failover mechanisms.

Options suggesting 4, 6, or 8 Fabric Interconnects are incorrect. While you can deploy multiple UCS domains each with their own pair of Fabric Interconnects, a single UCS domain is limited to exactly two FIs. If you need to scale beyond what a single domain can support, you would deploy multiple independent UCS domains, each with its own pair of Fabric Interconnects.

Question 112

Which Cisco Nexus feature provides the ability to run multiple instances of the control plane on a single physical switch?

A) VRF

B) VDC

C) vPC

D) FabricPath

Answer: B

Explanation:

Virtual Device Context is the Cisco Nexus feature that provides the ability to run multiple instances of the control plane on a single physical switch. VDC technology essentially partitions a physical Nexus switch into multiple logical switches, each with its own independent control plane, management interface, and dedicated resources.

Each VDC operates as if it were a separate physical device with its own configuration, routing tables, forwarding tables, and management domain. This allows different administrative groups to manage their own VDC independently without affecting other VDCs on the same physical switch. VDCs provide complete isolation between partitions, making them ideal for multi-tenant environments or for separating production and development networks.

VDCs are available on specific Cisco Nexus platforms, particularly the Nexus 7000 series switches. When you configure VDCs, you can allocate specific interfaces, modules, and resources to each VDC. Each VDC can run different feature sets and can be managed separately with its own user accounts and access controls.
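
The fragment below sketches the idea on a Nexus 7000; the VDC name and interface range are placeholders, and real deployments also involve resource templates and licensing considerations.

! From the default (admin) VDC: create a VDC and allocate physical ports to it
vdc PROD
  allocate interface ethernet 2/1-8
! Open a session into the new VDC to configure it independently (switchback returns)
switchto vdc PROD
! Verify VDC status and interface membership from the default VDC
show vdc
show vdc membership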

VRF is a technology that provides routing table isolation, allowing multiple routing instances on a single router or switch. While VRF creates separate routing tables, it does not create multiple instances of the entire control plane. All VRFs share the same control plane and management interface.

vPC is virtual Port Channel technology that allows links that are physically connected to two different Nexus switches to appear as a single port channel to a downstream device. vPC provides Layer 2 redundancy but does not create multiple control plane instances.

FabricPath is Cisco’s Layer 2 multipathing technology that enables equal-cost multipath forwarding in Layer 2 networks. While FabricPath changes how the data plane operates, it does not provide multiple control plane instances on a single switch.

Question 113

In VXLAN, what is the purpose of the VTEP?

A) To encrypt traffic between sites

B) To encapsulate Layer 2 frames in UDP packets

C) To provide QoS marking

D) To perform load balancing

Answer: B

Explanation:

In VXLAN, the VTEP serves the purpose of encapsulating Layer 2 Ethernet frames in UDP packets for transport across the Layer 3 network. VTEP stands for VXLAN Tunnel Endpoint, and it is a fundamental component of the VXLAN architecture that enables Layer 2 network extension over Layer 3 infrastructure.

When a VTEP receives a Layer 2 frame from a local host, it encapsulates the entire Ethernet frame within a VXLAN header, which is then placed inside a UDP packet. This UDP packet is further encapsulated in an IP packet for routing across the Layer 3 underlay network. The VXLAN header includes a 24-bit VXLAN Network Identifier that allows for up to 16 million unique Layer 2 segments, far exceeding the 4096 VLAN limitation.

At the destination, another VTEP receives the UDP packet, strips off the outer IP and UDP headers along with the VXLAN header, and delivers the original Layer 2 frame to the destination host. This encapsulation and decapsulation process is transparent to the endpoints, which believe they are on the same Layer 2 network even though they may be separated by a routed infrastructure.

VTEPs can be implemented in hardware on physical switches or in software on hypervisors. On Cisco Nexus switches, the VTEP function is configured through a network virtualization edge (NVE) interface, typically interface nve1, which sources its encapsulated traffic from a loopback address reachable across the underlay.
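
A minimal NX-OS sketch of a BGP EVPN based VTEP is shown below. The VNI, VLAN, and loopback values are placeholders, and a flood-and-learn deployment would use a multicast group instead of BGP-based host reachability and ingress replication.

! Enable VXLAN and create the NVE interface that performs the VTEP encapsulation
feature nv overlay
feature vn-segment-vlan-based
interface nve1
  no shutdown
  source-interface loopback0
  host-reachability protocol bgp
  member vni 10100
    ingress-replication protocol bgp
! Map a local VLAN to the VXLAN network identifier
vlan 100
  vn-segment 10100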

Encrypting traffic between sites is not the primary purpose of VTEP. While VXLAN can be used with encryption technologies, the VTEP itself does not provide encryption functionality. Encryption would require additional protocols like IPsec or MACsec.

Providing QoS marking and performing load balancing are not primary functions of the VTEP. While these functions can be implemented in VXLAN networks, they are handled by other components and features of the network infrastructure.

Question 114

Which protocol does Cisco ACI use for its control plane communication within the fabric?

A) BGP

B) OSPF

C) IS-IS

D) OpFlex

Answer: D

Explanation:

Cisco ACI uses OpFlex as its control plane protocol for communication within the fabric. OpFlex is a southbound protocol that allows the Application Policy Infrastructure Controller to communicate policy information to the leaf and spine switches in the ACI fabric.

OpFlex is a declarative control protocol that was developed to support policy-driven networking. Instead of the APIC directly programming each switch with specific forwarding rules, OpFlex allows the APIC to declare the desired policy state, and the switches then resolve this policy into concrete forwarding rules. This approach provides flexibility and scalability because the switches can make local decisions about how to implement policies.

The OpFlex protocol operates between the APIC controllers and the leaf switches. The APIC defines policies in terms of EPGs, contracts, and other ACI constructs, and uses OpFlex to push this policy information to the leaf switches. The leaf switches then use this information along with endpoint learning to program their forwarding tables appropriately.

Within the ACI fabric, the spine and leaf switches also use IS-IS as the underlay routing protocol to exchange infrastructure reachability information, and MP-BGP for overlay routing. However, these protocols handle infrastructure and overlay routing rather than policy distribution, which is the role of the OpFlex control plane.

BGP is used in ACI for specific purposes such as route reflection and external routing with Layer 3 Outside connections, but it is not the primary control plane protocol for policy distribution within the fabric.

OSPF can be used for external routing connections in ACI but is not used as the internal control plane protocol for the fabric itself.

IS-IS is used as the underlay routing protocol in the ACI fabric infrastructure but is not the policy control plane protocol between APIC and the switches.

Question 115

What is the default administrative distance for EIGRP internal routes in Cisco NX-OS?

A) 90

B) 100

C) 110

D) 170

Answer: A

Explanation:

The default administrative distance for EIGRP internal routes in Cisco NX-OS is 90. Administrative distance is a value that routers use to determine which routing protocol to trust when multiple routing protocols provide routes to the same destination network. Lower administrative distance values are preferred over higher values.

EIGRP uses different administrative distances for different types of routes. Internal EIGRP routes, which are routes learned from EIGRP neighbors within the same autonomous system, have an administrative distance of 90. This makes EIGRP internal routes more preferred than OSPF routes, which have an administrative distance of 110, but less preferred than directly connected interfaces and static routes.

EIGRP also assigns different administrative distances to summary routes and external routes. EIGRP summary routes have an administrative distance of 5, making them highly preferred. EIGRP external routes, which are routes redistributed into EIGRP from other routing protocols, have an administrative distance of 170, making them less preferred than most other routing sources.

Understanding administrative distance is crucial for network design and troubleshooting, especially in environments where multiple routing protocols coexist. When configuring route redistribution or designing network topologies, administrators must consider how administrative distance affects routing decisions.

The value of 100 is not used by EIGRP but is instead the default administrative distance for IGRP, an older predecessor to EIGRP that is now largely obsolete.

The value of 110 is the default administrative distance for OSPF, not EIGRP. This makes OSPF routes less preferred than EIGRP internal routes when both protocols are running.

The value of 170 is the administrative distance for EIGRP external routes, not internal routes. External routes are those redistributed into EIGRP from other routing sources.

Question 116

Which command enables LACP on a port channel interface in Cisco NX-OS?

A) channel-group mode active

B) channel-protocol lacp

C) lacp mode active

D) port-channel lacp

Answer: A

Explanation:

The command that enables LACP on a port channel interface in Cisco NX-OS is channel-group mode active. This command is applied to the physical interfaces that will be members of the port channel and configures them to use LACP for negotiation.

LACP is the Link Aggregation Control Protocol defined in IEEE 802.3ad, which provides a standards-based method for bundling multiple physical links into a single logical link. In Cisco NX-OS, when you configure a port channel, you must specify the channel-group number and the LACP mode on each member interface.

The mode active keyword indicates that the interface will actively send LACP packets to negotiate the port channel formation. Another option is mode passive, where the interface waits to receive LACP packets before responding. For LACP negotiation to succeed, at least one side must be in active mode.

The complete configuration involves enabling LACP globally with the feature lacp command and then configuring the physical interfaces with the channel-group command; NX-OS creates the corresponding port-channel interface automatically if it does not already exist. For example, entering channel-group 10 mode active on each physical member bundles those links into interface port-channel 10 using LACP, as shown in the sketch below.
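
A minimal sketch of that configuration and its verification follows; the interface and port-channel numbers are arbitrary examples.

! LACP must be enabled as a feature before the active or passive modes are accepted
feature lacp
! Physical members: the same channel-group number bundles them into port-channel 10
interface ethernet 1/1-2
  switchport
  switchport mode trunk
  channel-group 10 mode active
! The logical interface is created automatically and can then be tuned directly
interface port-channel 10
  switchport mode trunk
! Verify negotiation and member state
show port-channel summary
show lacp neighbor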

The command channel-protocol lacp is not required in NX-OS. While this command existed in some older Cisco IOS versions to explicitly specify the protocol, NX-OS determines the protocol from the mode specified in the channel-group command.

The command lacp mode active is not valid syntax in Cisco NX-OS. The correct syntax requires the channel-group command with the mode parameter.

The command port-channel lacp is not a valid command in NX-OS for enabling LACP on interfaces. Port-channel configuration uses different command structures.

Question 117

In Cisco UCS, what is a Service Profile?

A) A template for creating VLANs

B) A definition of server hardware and connectivity

C) A backup configuration file

D) A monitoring tool for server health

Answer: B

Explanation:

In Cisco UCS, a Service Profile is a definition of server hardware configuration and connectivity requirements that can be applied to physical servers. Service Profiles are a cornerstone of the UCS stateless computing model and enable the separation of server identity from physical hardware.

A Service Profile contains all the information needed to configure a server, including BIOS settings, firmware versions, network adapter configurations, storage configurations, boot order, and identity information such as MAC addresses, WWNs, and UUIDs. When a Service Profile is associated with a physical server, UCS Manager automatically configures that server according to the specifications in the profile.

This abstraction provides tremendous operational benefits. If a server fails, the Service Profile can be moved to another physical server in minutes, and the new server will assume the exact same identity and configuration. This eliminates the need for manual reconfiguration and reduces downtime significantly.

Service Profiles can be created individually or derived from templates. Service Profile Templates allow administrators to define common configurations that can be applied to multiple servers, ensuring consistency across the infrastructure. Templates can be updating templates, where changes to the template automatically propagate to derived profiles, or initial templates, where the profile becomes independent after creation.

A template for creating VLANs is not what a Service Profile represents. VLAN configuration in UCS is handled through network policies and VLAN definitions that are separate from Service Profiles, though Service Profiles may reference these policies.

A backup configuration file is not a Service Profile. While Service Profiles can be exported for backup purposes, their primary function is server configuration and provisioning, not backup.

A monitoring tool for server health is not a Service Profile. UCS provides separate monitoring and management tools for server health tracking.

Question 118

Which feature in Cisco Nexus switches allows for the separation of control plane and data plane traffic?

A) CoPP

B) QoS

C) ACLs

D) VRF

Answer: A

Explanation:

Control Plane Policing is the feature in Cisco Nexus switches that allows for the separation and protection of control plane traffic from data plane traffic. CoPP is a critical security and stability feature that prevents the control plane from being overwhelmed by excessive traffic, whether from legitimate sources or malicious attacks.

The control plane is responsible for running routing protocols, managing the switch itself, and building the routing and forwarding tables. The data plane, also called the forwarding plane, forwards packets based on the tables the control plane has built. If the control plane becomes overwhelmed with traffic, it can lead to routing protocol failures, management access issues, and overall network instability.

CoPP works by classifying traffic destined for the control plane into different classes and applying rate limiting policies to each class. For example, you might configure CoPP to allow routing protocol traffic at higher rates while limiting ICMP traffic or unknown protocols to lower rates. This ensures that critical control plane functions always have the resources they need.

In Cisco NX-OS, CoPP uses a default policy that provides baseline protection, but administrators can customize these policies based on their specific network requirements. CoPP policies are defined using class maps to classify traffic and policy maps to specify the rate limiting actions.
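
The sketch below shows the simplest way this is typically handled on NX-OS; profile names and availability vary by platform and software release, so treat it as illustrative rather than definitive.

! Apply one of the built-in CoPP profiles (strict, moderate, lenient, or dense where supported)
copp profile strict
! Custom policies are built from class-map type control-plane and policy-map type control-plane
! definitions and are attached with service-policy input under the control-plane context.
! Verify which policy is applied and view per-class statistics
show copp status
show policy-map interface control-plane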

Quality of Service is used to prioritize different types of data plane traffic based on business requirements, but it is not specifically designed to separate control plane from data plane traffic. QoS marks and queues packets but does not provide the same level of control plane protection.

Access Control Lists filter traffic based on packet headers but do not specifically separate control and data plane traffic or provide rate limiting for control plane protection.

Virtual Routing and Forwarding provides routing table isolation for different routing domains but does not separate control plane from data plane traffic.

Question 119

What is the purpose of the Cisco Discovery Protocol in a data center environment?

A) To discover and display information about directly connected Cisco devices

B) To provide dynamic routing updates

C) To encrypt management traffic

D) To load balance traffic across multiple paths

Answer: A

Explanation:

The purpose of Cisco Discovery Protocol in a data center environment is to discover and display information about directly connected Cisco devices. CDP is a Layer 2 proprietary protocol that runs on all Cisco devices and allows them to advertise their existence and capabilities to neighboring devices.

CDP operates by sending periodic advertisements out of all active interfaces. These advertisements contain information such as device hostname, IP addresses, platform type, software version, capabilities, native VLAN, and port identifiers. Neighboring Cisco devices receive these advertisements and store the information, which can be viewed using show cdp neighbors commands.

In data center environments, CDP is particularly useful for network discovery and documentation. When you connect to a switch, you can use CDP to quickly determine what devices are connected to which ports, helping with troubleshooting, cable tracing, and network mapping. This is invaluable when dealing with complex data center topologies with hundreds or thousands of connections.

CDP also plays a role in certain automation and management functions. For example, network control policies in Cisco UCS can enable CDP on vNICs so that hypervisors and management tools can verify which uplinks a server is using, and network management tools can use CDP information to automatically build network topology maps.

However, CDP should be used carefully in production environments. Because it advertises detailed device information, it can present a security risk if enabled on ports that connect to untrusted networks. Best practice is to disable CDP on user-facing or external-facing ports while keeping it enabled on infrastructure links.
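
The commands below show typical CDP usage on NX-OS; the interface number is an arbitrary example of an external-facing port.

! View directly connected Cisco neighbors and the local ports on which they are seen
show cdp neighbors
show cdp neighbors detail
! Disable CDP on an untrusted or external-facing port
interface ethernet 1/48
  no cdp enable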

Providing dynamic routing updates is not a function of CDP. Routing updates are handled by routing protocols such as OSPF, EIGRP, or BGP.

Encrypting management traffic is not a function of CDP. CDP messages are sent in clear text and do not provide any encryption capabilities.

Question 120

Which technology allows Cisco Nexus switches to present themselves as a single logical switch to downstream devices?

A) StackWise

B) VSS

C) vPC

D) MLAG

Answer: C

Explanation:

Virtual Port Channel is the technology that allows Cisco Nexus switches to present themselves as a single logical switch to downstream devices. vPC enables a device to use a port channel across two separate Nexus switches, eliminating Spanning Tree Protocol blocked ports and providing active-active Layer 2 connectivity.

In a vPC configuration, two Nexus switches form a vPC peer relationship and coordinate their operation through a dedicated peer-keepalive link and a peer link. From the perspective of a downstream device, these two switches appear as a single logical switch. The downstream device can configure a standard port channel that connects to both vPC peer switches, and traffic will be load balanced across both links without any blocked ports.

vPC provides several significant benefits in data center networks. It enables full utilization of available bandwidth by eliminating STP blocking, improves convergence times during failures, and provides seamless redundancy without complex protocols. vPC is commonly used to connect access switches, servers, and other network devices to a pair of distribution or core switches.

The vPC technology requires careful configuration including vPC domain IDs, peer-keepalive links for control plane communication, peer links for data synchronization, and identical configurations on both vPC peer switches for consistent operation. Each vPC member port must be configured with the same port channel settings.
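
A condensed sketch of one vPC peer's configuration is shown below; the domain ID, addresses, VRF, and port-channel numbers are placeholders, and the same member vPC number must be configured on both peers.

! Enable vPC and define the domain with its peer-keepalive parameters
feature vpc
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
! The peer link carries synchronization traffic between the two vPC peers
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! Member port channel toward the downstream device
interface port-channel 20
  switchport mode trunk
  vpc 20
! Verify peer status and configuration consistency
show vpc
show vpc consistency-parameters global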

StackWise is a Cisco Catalyst switching technology that allows multiple physical switches to operate as a single logical switch, but it is not used in Nexus switches. StackWise creates a true single management and control plane across stacked switches.

VSS is Virtual Switching System technology used on Cisco Catalyst 6500 and similar platforms, not on Nexus switches. While VSS provides similar functionality to vPC by creating a single logical switch from two physical chassis, it merges the two control planes into one and uses different mechanisms.

MLAG is the generic, vendor-neutral term for multi-chassis link aggregation. vPC is Cisco's MLAG implementation on Nexus switches, so while the concepts overlap, MLAG is not the specific Cisco Nexus technology the question asks for.