Question 76:
Which feature in enterprise networks allows a switch to detect and prevent loops at Layer 2 by selectively blocking redundant paths while allowing backup links to remain available?
A) Spanning Tree Protocol
B) HSRP
C) NAT
D) QoS
Answer:
A) Spanning Tree Protocol
Explanation:
Spanning Tree Protocol (STP) is a Layer 2 network protocol that is fundamental in enterprise networks to prevent switching loops while maintaining redundant paths for fault tolerance. In a switched Ethernet network, redundant paths are often intentionally deployed to provide resiliency in case of link or switch failure. However, redundant paths without loop prevention can lead to broadcast storms, multiple frame copies, and MAC table instability, which can disrupt network operations. STP addresses these issues by selectively blocking certain paths while leaving others active, ensuring a single loop-free logical topology.
STP elects a root bridge based on bridge IDs, which consist of a configurable priority and the switch’s MAC address. The root bridge becomes the central reference point for path selection, and all other switches calculate the shortest path to the root bridge using the cost of each link. Ports are then assigned roles such as root port, designated port, or blocked port based on their position relative to the root bridge and the network topology. Root ports carry traffic toward the root bridge, designated ports forward traffic away from the root bridge, and blocked ports prevent loops while remaining available as backup paths.
STP operates by exchanging Bridge Protocol Data Units (BPDUs), which carry information about bridge IDs, path costs, and port roles. These BPDUs are sent periodically to detect topology changes, determine the current network state, and reconfigure the network if a failure occurs. When a link or switch fails, STP recalculates the spanning tree, activates previously blocked ports as needed, and maintains uninterrupted network connectivity. Variants of STP, such as Rapid Spanning Tree Protocol (RSTP) and Multiple Spanning Tree Protocol (MSTP), provide faster convergence and support for multiple VLAN instances in larger enterprise networks.
RSTP significantly reduces convergence times compared to traditional STP by immediately transitioning ports to forwarding or discarding states based on port roles and link types. MSTP allows multiple VLANs to share the same spanning tree instance, reducing CPU and memory overhead while maintaining loop-free paths for each VLAN. Both RSTP and MSTP are critical for modern enterprise networks that require fast recovery from failures, high availability, and scalable VLAN designs.
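As an illustrative sketch, the RSTP behavior described above is typically enabled on Cisco switches with rapid per-VLAN spanning tree, and root bridge placement is controlled through the bridge priority (the VLAN ID and priority value below are assumptions, not from the source):

```
! Run the rapid (802.1w-based) per-VLAN variant of spanning tree
spanning-tree mode rapid-pvst
! Lower priority wins the root election; 4096 makes this switch
! the likely root for VLAN 10
spanning-tree vlan 10 priority 4096
```

`show spanning-tree vlan 10` then confirms the elected root bridge, the port roles (root, designated, alternate), and the port states.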
Other options provide different functionalities. HSRP ensures gateway redundancy but does not prevent Layer 2 loops. NAT translates IP addresses for connectivity across networks but does not control loops. QoS prioritizes traffic based on classification and policies but does not prevent loops.
For Cisco 350-401 ENCOR exam candidates, understanding STP involves knowledge of root bridge election, port roles, path cost calculation, BPDU structure, timers, convergence behavior, RSTP and MSTP enhancements, VLAN integration, network design considerations for redundancy, failure handling, troubleshooting tools, and loop prevention best practices. Candidates should be able to design, configure, verify, and troubleshoot STP in enterprise networks to maintain loop-free Layer 2 topologies, provide resilient connectivity, support VLAN segmentation, and ensure stable network operation in environments with multiple redundant links. Mastery of STP enables network engineers to optimize Layer 2 topology, maintain operational efficiency, prevent broadcast storms, and provide predictable network behavior under failure conditions.
Question 77:
Which enterprise network technology allows multiple IP networks to share the same physical interface while maintaining isolation between routing instances and overlapping address spaces?
A) VRF
B) VLAN
C) NAT
D) HSRP
Answer:
A) VRF
Explanation:
Virtual Routing and Forwarding (VRF) is a crucial enterprise network technology that allows multiple IP routing instances to coexist on the same physical device without interfering with one another. Each VRF maintains its own routing table, interfaces, and forwarding decisions, enabling logical separation of traffic for different departments, customers, or services. This separation allows overlapping IP addresses to exist in different VRF instances, providing flexibility in large networks or multi-tenant environments.
VRF can be implemented on routers, Layer 3 switches, or MPLS-enabled devices. By assigning interfaces or subinterfaces to VRFs, administrators can control which routing instance handles traffic on each interface. This enables enterprises to isolate sensitive traffic, enforce specific routing policies, and implement separate security domains within the same physical infrastructure. VRF also supports route leaking, where selective routes from one VRF can be imported into another to allow controlled communication between routing instances, often using route targets, route maps, and import/export policies.
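A minimal sketch of VRF creation and interface assignment on IOS (the VRF name, route targets, and addresses are illustrative; older releases use `ip vrf` instead of `vrf definition`):

```
vrf definition CUSTOMER-A
 rd 65000:1
 address-family ipv4
  route-target export 65000:1
  route-target import 65000:1
!
! Assigning the interface moves it out of the global routing table
interface GigabitEthernet0/1
 vrf forwarding CUSTOMER-A
 ip address 10.1.1.1 255.255.255.0
```

`show ip route vrf CUSTOMER-A` displays the isolated routing table; importing another VRF's route target is the basis of the route leaking described above.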
In practice, VRF is used in enterprise WANs, data centers, and service provider environments. In WAN designs, VRF allows multiple customer or department networks to share the same physical infrastructure while keeping routing completely separate. In data centers, VRF enables logical separation of tenant traffic in multi-tenant cloud deployments. VRF is also critical for integrating with MPLS VPNs, where VRF instances map to VPNs to maintain traffic isolation across a service provider network.
Other options provide different functionalities. VLAN provides Layer 2 segmentation but does not create separate Layer 3 routing tables. NAT translates addresses for external connectivity but does not provide multiple independent routing instances. HSRP ensures default gateway redundancy but does not support multiple routing domains.
For Cisco 350-401 ENCOR exam candidates, understanding VRF involves knowledge of VRF creation, interface assignment, routing table isolation, route leaking, route targets, import/export policies, interaction with routing protocols within VRFs, integration with VLANs and trunking, MPLS VPN interoperability, configuration best practices, security considerations, and troubleshooting techniques. Candidates should be able to implement VRF instances to isolate routing domains, manage overlapping address spaces, maintain traffic separation, enforce security policies, and optimize network resource usage. Mastery of VRF allows network engineers to provide scalable, flexible, and secure enterprise network designs that support complex multi-tenant, departmental, or service provider environments while maintaining independent routing and forwarding behavior on shared physical devices.
Question 78:
Which protocol in enterprise networks provides automatic assignment of IP addresses, DNS information, and gateway information to end devices, simplifying network management?
A) DHCP
B) NAT
C) HSRP
D) QoS
Answer:
A) DHCP
Explanation:
Dynamic Host Configuration Protocol (DHCP) is a vital service in enterprise networks that automates the process of assigning IP addresses and associated network configuration parameters to end devices. Without DHCP, administrators would have to manually configure every device, which is time-consuming, error-prone, and difficult to maintain in large networks. DHCP provides IP addresses, subnet masks, default gateways, DNS server information, and optionally other parameters such as lease times and option codes. This automation allows devices to connect to the network with minimal administrative intervention.
DHCP operates using a client-server model. When a device joins the network, it sends a DHCP Discover message to identify available DHCP servers. Servers respond with DHCP Offer messages that include configuration information. The client then sends a DHCP Request to accept an offer, and the server responds with a DHCP Acknowledgment, completing the lease assignment. DHCP leases can be dynamic, manual, or automatic, giving administrators flexibility in managing IP address allocation. Dynamic leases allow addresses to be temporarily assigned and reclaimed, ensuring efficient use of IP space. Manual and automatic assignments provide fixed addressing for critical devices or servers while still benefiting from DHCP management.
In enterprise networks, DHCP simplifies IP address management across multiple subnets and VLANs. DHCP relay agents can forward requests across different subnets to centralized DHCP servers, reducing the need to deploy a server on each subnet. This centralization allows for consistent address management, reduces administrative overhead, and provides a single point for monitoring and controlling IP allocations. DHCP also integrates with security and monitoring systems, providing auditing and logging capabilities to track address assignment and device activity.
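The pool and relay behavior can be sketched on IOS as follows (addresses, pool name, and lease time are assumptions):

```
! Reserve addresses for static devices before defining the pool
ip dhcp excluded-address 10.10.10.1 10.10.10.10
ip dhcp pool VLAN10-POOL
 network 10.10.10.0 255.255.255.0
 default-router 10.10.10.1
 dns-server 10.10.10.5
 lease 0 8 0
!
! On a gateway in another subnet, relay client broadcasts
! to the centralized DHCP server as unicast
interface Vlan20
 ip helper-address 10.10.10.5
```

`show ip dhcp binding` lists active leases, and the relay address lets one central server serve every subnet that points a helper at it.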
Other options provide different functionalities. NAT translates private addresses to public addresses for external communication but does not automatically assign IP configurations to end devices. HSRP ensures gateway redundancy but does not manage IP addressing for clients. QoS prioritizes traffic based on policies but does not provide IP configuration services.
For Cisco 350-401 ENCOR exam candidates, understanding DHCP involves knowledge of lease types, DHCP message types, client-server interactions, relay agent configurations, scope management, option assignment, address pools, network segmentation, integration with VLANs and subnets, conflict detection, logging, and troubleshooting. Candidates should be able to design, implement, and maintain DHCP services that provide efficient and reliable IP addressing, reduce administrative errors, ensure proper configuration of client devices, and support large-scale enterprise networks with multiple VLANs, subnets, and locations. Mastery of DHCP enables network engineers to simplify IP address management, maintain network connectivity for diverse devices, enforce consistent network configurations, and support scalable enterprise network operations.
Question 79:
Which technology allows enterprise networks to implement policy-based traffic prioritization, ensuring that mission-critical applications like voice and video receive higher transmission priority?
A) QoS
B) HSRP
C) NAT
D) VRF
Answer:
A) QoS
Explanation:
Quality of Service (QoS) is a set of techniques used in enterprise networks to manage bandwidth, control latency, and prioritize critical traffic over less important traffic. Modern enterprise networks carry multiple types of traffic simultaneously, including voice, video, database replication, cloud applications, and general data. Without QoS, congestion in network links can lead to packet loss, jitter, and latency, negatively affecting time-sensitive applications such as voice over IP (VoIP) and video conferencing. QoS mechanisms enable administrators to define policies that ensure critical applications receive guaranteed resources while less time-sensitive traffic is managed according to network conditions.
QoS operates at multiple layers, including Layer 2, Layer 3, and Layer 4, allowing classification, marking, queuing, policing, shaping, and congestion management. Classification identifies packets based on criteria such as source and destination IP addresses, ports, VLAN tags, or application types. Once classified, packets can be marked using Differentiated Services Code Point (DSCP) or Class of Service (CoS) values to indicate priority levels. Marked packets are then handled by queuing mechanisms that control the order and bandwidth allocation for forwarding. Common queuing strategies include priority queuing, weighted fair queuing, and class-based weighted fair queuing, each providing different levels of traffic prioritization and fairness.
Policing and shaping further control traffic rates to prevent congestion and enforce network policies. Policing drops or remarks packets that exceed configured thresholds, whereas shaping buffers excess traffic to smooth bursts and maintain steady flow. Congestion management ensures that queues are serviced according to priority and available resources, preventing high-priority applications from being affected by lower-priority traffic. In addition, QoS mechanisms can work across Layer 2 trunk links, Layer 3 routed paths, and WAN connections, providing end-to-end prioritization.
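On Cisco IOS these steps map onto the Modular QoS CLI: a class map for classification, a policy map for queuing, and a service policy to attach it to an interface. The class names and percentage below are illustrative:

```
! Classify voice by its DSCP marking (EF = expedited forwarding)
class-map match-any VOICE
 match dscp ef
!
! Give voice a strict-priority queue capped at 20% of the link,
! and apply fair queuing to everything else
policy-map WAN-EDGE
 class VOICE
  priority percent 20
 class class-default
  fair-queue
!
interface GigabitEthernet0/0
 service-policy output WAN-EDGE
```

`show policy-map interface GigabitEthernet0/0` verifies packet matches, queue depth, and drops per class.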
Other options provide different functionalities. HSRP ensures gateway redundancy but does not prioritize traffic. NAT translates IP addresses for connectivity but does not manage application prioritization. VRF isolates routing instances but does not enforce traffic policies.
For Cisco 350-401 ENCOR exam candidates, understanding QoS involves knowledge of traffic classification, marking, queuing strategies, policing, shaping, DSCP and CoS values, interface configuration, end-to-end traffic management, WAN and LAN QoS deployment, voice and video optimization, troubleshooting tools, congestion detection, policy verification, integration with routing protocols, and ensuring QoS consistency across multi-device environments. Candidates should be able to design, implement, verify, and troubleshoot QoS policies that guarantee application performance, maintain predictable latency and jitter, optimize bandwidth utilization, and ensure reliable delivery of mission-critical enterprise services. Mastery of QoS enables network engineers to maintain high-performance enterprise networks capable of supporting converged traffic types while avoiding congestion, packet loss, and performance degradation.
Question 80:
Which enterprise network protocol enables routers to dynamically advertise routes, converge quickly, and scale to large networks by sharing link-state information instead of distance vectors?
A) OSPF
B) RIP
C) EIGRP
D) HSRP
Answer:
A) OSPF
Explanation:
Open Shortest Path First (OSPF) is a link-state routing protocol designed for enterprise networks that require fast convergence, loop-free routing, and scalability. Unlike distance-vector protocols, which rely on neighbor-advertised metrics (such as RIP’s hop count) and periodic updates, OSPF maintains a complete view of the network topology by exchanging link-state advertisements (LSAs) among all routers within an area. Each router independently calculates the shortest path to every destination using Dijkstra’s algorithm, ensuring consistent routing decisions and avoiding loops.
OSPF is structured hierarchically using areas to improve scalability and reduce the size of routing tables and LSAs exchanged. Area 0, known as the backbone area, connects all other areas, creating a structured network design that allows efficient path calculation and route summarization. Non-backbone areas communicate through the backbone, and OSPF supports multiple area types such as standard, stub, and not-so-stubby areas (NSSA) to optimize routing behavior according to network needs. OSPF allows administrators to assign metrics, configure cost values based on interface bandwidth, and influence routing decisions to optimize traffic paths for performance and reliability.
OSPF includes authentication mechanisms to ensure that only authorized routers participate in routing exchanges. It can operate over IPv4 and IPv6, and it supports route redistribution with other protocols such as EIGRP, RIP, or BGP. OSPF rapidly detects topology changes, recalculates routes, and converges efficiently to maintain network stability. This is critical for enterprise networks that support real-time applications, data replication, and high-availability services.
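A sketch of a multi-area OSPF configuration reflecting the design above (the process ID, router ID, network statements, and cost value are assumptions):

```
router ospf 1
 router-id 1.1.1.1
 ! Backbone links in area 0, a branch segment in stub area 1
 network 10.0.0.0 0.0.0.255 area 0
 network 10.1.0.0 0.0.0.255 area 1
 area 1 stub
!
! Override the bandwidth-derived cost to prefer this link
interface GigabitEthernet0/1
 ip ospf cost 10
```

`show ip ospf neighbor` and `show ip route ospf` confirm adjacencies and learned routes; lowering the interface cost steers the SPF calculation toward that link.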
Other options provide different functionalities. RIP uses distance-vector routing and is limited in scalability and convergence speed. EIGRP converges rapidly but is an advanced distance-vector protocol rather than a link-state protocol, and it is historically Cisco-proprietary. HSRP ensures gateway redundancy but does not handle routing between devices.
For Cisco 350-401 ENCOR exam candidates, understanding OSPF involves knowledge of link-state operation, LSA types, SPF algorithm, area hierarchy, backbone area design, route summarization, cost assignment, authentication, IPv4 and IPv6 support, redistribution, stub and NSSA areas, fast convergence, integration with other routing protocols, and troubleshooting techniques. Candidates should be able to design, implement, verify, and maintain OSPF in enterprise networks, ensuring scalable, loop-free, and fast-converging routing that supports multiple areas, optimizes path selection, and maintains operational reliability across large and complex enterprise topologies. Mastery of OSPF enables network engineers to support high-performance enterprise networks, maintain predictable routing behavior, optimize link utilization, and provide reliable end-to-end connectivity for critical applications.
Question 81:
Which feature allows enterprise networks to provide a backup default gateway, ensuring that end devices maintain connectivity even if the primary gateway fails?
A) HSRP
B) VRF
C) NAT
D) DHCP
Answer:
A) HSRP
Explanation:
Hot Standby Router Protocol (HSRP) is a Cisco-proprietary protocol that provides gateway redundancy in enterprise networks, allowing end devices to maintain continuous connectivity if the primary default gateway becomes unavailable. HSRP creates a virtual IP address that multiple routers share, with one router acting as the active gateway and another as the standby. Devices on the network use the virtual IP address as their default gateway, ensuring seamless failover when the active router fails.
HSRP operates by electing an active router and a standby router based on configured priorities and timers. The active router forwards traffic destined for the virtual IP address, while the standby router monitors the active router and takes over forwarding if the active router becomes unavailable. HSRP uses hello messages to maintain awareness of the state of participating routers and ensures rapid failover with minimal disruption to traffic. Enterprise networks often configure multiple HSRP groups for different VLANs, enabling redundancy across multiple Layer 3 segments and supporting high-availability designs.
HSRP provides load sharing by configuring multiple HSRP groups with different active routers for each VLAN. This allows traffic to be distributed across multiple physical devices while maintaining redundancy. In addition, HSRP timers can be tuned to adjust convergence times, ensuring fast failover in critical environments such as voice, video, and real-time application traffic. HSRP is commonly deployed in combination with routing protocols, VLAN segmentation, and QoS policies to maintain high availability, optimize traffic distribution, and ensure service continuity across enterprise networks.
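A sketch of the two-group load-sharing design described above, as configured on one of the two routers (VLANs, addresses, and priorities are illustrative; the peer router mirrors the priorities so each router is active for one group):

```
! Active for VLAN 10 (priority 110 beats the peer's default 100)
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Standby for VLAN 20, where the peer is configured as active
interface Vlan20
 ip address 10.10.20.2 255.255.255.0
 standby 20 ip 10.10.20.1
 standby 20 priority 90
```

`show standby brief` summarizes group state, priority, and the virtual IP for each group.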
Other options provide different functionalities. VRF creates multiple routing instances but does not provide default gateway redundancy. NAT translates addresses for connectivity but does not handle gateway failover. DHCP assigns IP addresses but does not provide gateway redundancy.
For Cisco 350-401 ENCOR exam candidates, understanding HSRP involves knowledge of active and standby roles, priority configuration, timers, virtual IP and MAC addresses, multiple HSRP group configurations, VLAN integration, load-sharing design, failover behavior, interactions with routing protocols, network convergence, troubleshooting, and verification commands. Candidates should be able to configure, monitor, and troubleshoot HSRP to provide high-availability gateway redundancy, maintain uninterrupted connectivity for end devices, and ensure operational stability across enterprise networks. Mastery of HSRP enables network engineers to deploy resilient Layer 3 designs, optimize failover performance, integrate redundancy with VLANs and routing protocols, and support critical enterprise services that require continuous access to the network.
Question 82:
Which enterprise network protocol allows secure management access to devices by encrypting the session and authentication credentials?
A) SSH
B) Telnet
C) SNMPv1
D) HTTP
Answer:
A) SSH
Explanation:
Secure Shell (SSH) is a network protocol that provides secure administrative access to network devices by encrypting both the communication channel and authentication credentials. In enterprise networks, remote device management is critical for maintaining operational efficiency, monitoring, configuration, and troubleshooting. Using insecure protocols such as Telnet exposes passwords and configuration information to potential interception by attackers on the network. SSH addresses these risks by implementing strong encryption methods and secure key exchange mechanisms.
SSH operates at Layer 7 of the OSI model and establishes a secure, encrypted session between a client and a server. During the session initiation, SSH performs authentication using methods such as password authentication, public key authentication, or a combination of both. Public key authentication involves generating cryptographic key pairs and configuring the public key on the device to allow only authorized clients to connect. Once authentication is successful, all subsequent traffic, including command inputs, outputs, and configuration files, is encrypted using negotiated ciphers such as AES (legacy deployments used 3DES, which is now deprecated).
SSH supports features such as session multiplexing, tunneling, port forwarding, and secure file transfer using SCP or SFTP. Session multiplexing allows multiple virtual sessions over a single TCP connection, reducing overhead and providing flexible management options. Tunneling and port forwarding enable administrators to securely transmit traffic from other applications through the encrypted SSH session. Secure file transfer using SCP or SFTP allows network engineers to safely upload or download configuration files, firmware updates, and device backups without exposing sensitive information to interception.
SSH is often deployed in combination with AAA (Authentication, Authorization, and Accounting) services to provide centralized management of user credentials, role-based access control, and detailed auditing of device access. Enterprise networks implement AAA with RADIUS or TACACS+ to enforce consistent security policies across all managed devices while tracking administrative actions for compliance and operational monitoring. SSH also integrates with logging and monitoring systems to detect unauthorized access attempts, enforce password policies, and maintain records for operational or security audits.
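A minimal SSH hardening sketch on IOS (the hostname, domain, and credentials are placeholders; a hostname and domain name must exist before the RSA keypair can be generated):

```
hostname R1
ip domain-name example.com
! Generate the RSA keypair used for the SSH key exchange
crypto key generate rsa modulus 2048
ip ssh version 2
username admin privilege 15 secret StrongLocalPass
!
line vty 0 4
 transport input ssh
 login local
```

`show ip ssh` confirms the version, and `transport input ssh` disables Telnet on the vty lines. With AAA deployed, `login local` would be replaced by an authentication method list pointing at RADIUS or TACACS+.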
Other options provide different functionalities. Telnet provides unencrypted remote access and is considered insecure in modern networks. SNMPv1 allows device monitoring but does not provide secure administrative access or encryption. HTTP provides web access but does not encrypt device management sessions unless secured with HTTPS, which is different from SSH.
For Cisco 350-401 ENCOR exam candidates, understanding SSH involves knowledge of session establishment, key exchange algorithms, encryption ciphers, authentication methods, integration with AAA services, tunneling and port forwarding, file transfer protocols, secure configuration practices, network security implications, and troubleshooting connectivity issues. Candidates should be able to configure SSH on enterprise devices, verify secure connectivity, enforce security policies, and manage access controls to prevent unauthorized administrative access while maintaining operational efficiency. Mastery of SSH enables network engineers to provide secure device management, prevent credential compromise, maintain encrypted communications, and integrate security best practices into enterprise network operations while supporting critical administrative and operational tasks.
Question 83:
Which technology allows enterprise networks to carry multiple VLANs over a single physical link, preserving VLAN tagging for Layer 2 traffic?
A) Trunking
B) Access ports
C) VRF
D) HSRP
Answer:
A) Trunking
Explanation:
Trunking is a fundamental Layer 2 technology in enterprise networks that enables multiple VLANs to traverse a single physical link between switches or between a switch and a router. This is critical in enterprise network designs where separating traffic into different VLANs for departments, services, or security zones is necessary, but deploying separate physical links for each VLAN would be inefficient, costly, and impractical. Trunking preserves VLAN tags using standards such as IEEE 802.1Q, allowing devices at both ends of the trunk to identify which VLAN each frame belongs to.
In 802.1Q trunking, a four-byte VLAN tag is inserted into the Ethernet frame, containing VLAN identification and priority information. This tagging allows switches to segregate traffic into different VLANs, enforce policies, and forward frames to the correct VLAN endpoints. Native VLAN configurations are used to define untagged traffic, allowing devices that do not support VLAN tagging to communicate while maintaining proper isolation of other VLANs. Because the 802.1Q VLAN ID field is 12 bits, a single trunk can carry up to 4,094 VLANs across the same physical interface, enabling enterprise networks to efficiently scale Layer 2 domains across multiple switches.
Trunking also integrates with VLAN pruning, which reduces unnecessary broadcast traffic by preventing certain VLANs from being sent across trunks that do not require them. Cisco’s Dynamic Trunking Protocol (DTP) allows interfaces to negotiate trunking automatically, reducing configuration overhead, although many designs disable negotiation on fixed trunks for security. Administrators must manage trunk configurations carefully to prevent misconfigurations, such as mismatched native VLANs or inconsistent VLAN IDs, which can cause spanning tree issues, broadcast storms, and traffic leakage between VLANs.
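A typical 802.1Q trunk configuration sketch (the VLAN IDs and native VLAN are illustrative; the `encapsulation` command appears only on platforms that also support ISL):

```
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 ! Untagged frames map to the native VLAN; it must match on both ends
 switchport trunk native vlan 99
 ! Manual pruning: only these VLANs cross the trunk
 switchport trunk allowed vlan 10,20,30
 ! Disable DTP negotiation on a fixed trunk
 switchport nonegotiate
```

`show interfaces trunk` verifies the mode, native VLAN, and allowed VLAN list on each trunk link.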
Other options provide different functionalities. Access ports carry traffic for a single VLAN and do not support multiple VLANs. VRF isolates routing instances at Layer 3 but does not tag Layer 2 VLAN traffic. HSRP provides gateway redundancy but does not transport VLAN information across a link.
For Cisco 350-401 ENCOR exam candidates, understanding trunking involves knowledge of 802.1Q frame tagging, native VLANs, DTP operation, VLAN pruning, integration with spanning tree protocols, interaction with Layer 3 routing, verification commands, troubleshooting mismatched trunk issues, operational considerations for multi-switch environments, VLAN scaling, inter-VLAN routing, and ensuring traffic isolation and security. Candidates should be able to configure trunk links, verify VLAN propagation, manage native VLAN assignments, troubleshoot connectivity and broadcast issues, and ensure VLAN traffic is correctly segmented while optimizing link utilization. Mastery of trunking enables network engineers to efficiently manage enterprise Layer 2 networks, support multi-VLAN environments, enforce traffic policies, and maintain operational stability across complex switching topologies.
Question 84:
Which enterprise network technology provides high-availability routing by allowing multiple routers to share a single IP address for end devices while maintaining continuous connectivity during failures?
A) HSRP
B) VRF
C) NAT
D) DHCP
Answer:
A) HSRP
Explanation:
Hot Standby Router Protocol (HSRP) is a Cisco-proprietary protocol designed to provide high-availability default gateway redundancy in enterprise networks. By allowing multiple routers to share a single virtual IP address, HSRP ensures that end devices maintain uninterrupted connectivity even if the primary router fails. The protocol operates by designating one router as active, which forwards traffic destined to the virtual IP, while another router is placed in standby mode, ready to take over if the active router becomes unavailable.
HSRP maintains network availability by monitoring the state of participating routers through hello messages. These periodic messages allow routers to detect failures quickly and initiate a failover process. The protocol supports configuration of priorities to influence which router becomes active, as well as preemption to allow a higher-priority router to regain the active role once it recovers from a failure. HSRP can be configured per VLAN, enabling multiple Layer 3 segments to have independent redundancy, which enhances reliability and supports enterprise designs that require segmented traffic for departments or applications.
HSRP also supports load sharing through the deployment of multiple HSRP groups. Different routers can be active for different HSRP groups, distributing traffic across multiple devices while still maintaining redundancy. Timers can be tuned to optimize convergence times and ensure fast failover, which is critical for applications that require low latency and minimal disruption, such as VoIP, video conferencing, and transactional systems. Integration with routing protocols, VLANs, and QoS mechanisms ensures that HSRP operates seamlessly in complex enterprise networks, providing reliable gateway redundancy without affecting traffic performance.
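Beyond the basic active/standby election, the preemption and failover tuning described above can be sketched with object tracking (the tracked interface, decrement value, and timers are assumptions):

```
! Track the uplink; if it goes down, the decrement below lowers
! this router's HSRP priority so the peer can take over
track 1 interface GigabitEthernet0/1 line-protocol
!
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt
 standby 10 track 1 decrement 20
 ! Faster hellos for quicker failure detection
 standby 10 timers 1 3
```

With preemption enabled, the router reclaims the active role once its tracked uplink recovers and its priority rises back above the peer's.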
Other options provide different functionalities. VRF provides isolated routing instances but does not manage gateway redundancy. NAT translates addresses for external connectivity but does not provide failover. DHCP assigns IP addresses dynamically but does not maintain gateway availability.
For Cisco 350-401 ENCOR exam candidates, understanding HSRP involves knowledge of active and standby roles, priority configuration, timers, virtual IP and MAC addresses, multiple HSRP groups per VLAN, preemption, load-sharing strategies, integration with routing protocols, verification commands, failover detection, network convergence, troubleshooting, and ensuring continuous connectivity for end devices. Candidates should be able to configure HSRP to provide high-availability gateway redundancy, optimize failover performance, maintain operational stability, and ensure uninterrupted access to enterprise network resources. Mastery of HSRP enables network engineers to deploy resilient Layer 3 designs, provide predictable failover behavior, support multiple VLANs and segments, and maintain network reliability for critical enterprise applications and services.
Question 85:
Which feature allows enterprise networks to prioritize certain types of traffic based on DSCP or CoS markings to ensure consistent performance for time-sensitive applications like voice and video?
A) QoS
B) HSRP
C) NAT
D) VRF
Answer:
A) QoS
Explanation:
Quality of Service (QoS) in enterprise networks is essential for managing and optimizing traffic flow, particularly for applications that are sensitive to latency, jitter, and packet loss such as voice over IP (VoIP), video conferencing, and real-time collaboration tools. QoS provides mechanisms to classify, mark, queue, and schedule packets in a network to ensure that critical traffic receives appropriate priority over less time-sensitive traffic. By leveraging DSCP (Differentiated Services Code Point) or CoS (Class of Service) markings, QoS enables network administrators to implement end-to-end traffic prioritization policies, ensuring consistent application performance across Layer 2 and Layer 3 infrastructure.
The first step in implementing QoS is traffic classification, which involves examining packet headers to determine the type of traffic. This can be done based on Layer 2, Layer 3, or Layer 4 attributes, such as VLAN tags, IP addresses, TCP/UDP ports, or application signatures. Once traffic is classified, it can be marked with DSCP values in the IP header or CoS values in the 802.1Q VLAN tag. Marked packets indicate their priority level to all devices along the network path, enabling switches and routers to enforce forwarding behavior based on predefined QoS policies.
Queuing mechanisms are fundamental in QoS, as they determine how packets are buffered and transmitted when network congestion occurs. Common queuing strategies include priority queuing (PQ), which ensures high-priority traffic is always sent first, weighted fair queuing (WFQ), which allocates bandwidth proportionally based on class weights, and class-based weighted fair queuing (CBWFQ), which allows granular bandwidth allocation per traffic class. These mechanisms help prevent high-priority traffic from being delayed or dropped while allowing best-effort traffic to be delivered when resources are available.
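A CBWFQ policy with a low-latency priority queue can be sketched as follows; the class name, bandwidth value, and egress interface are illustrative assumptions:

```
! Match traffic already marked DSCP EF at the network edge
class-map match-any VOICE-EF
 match dscp ef
!
! LLQ: voice gets a strict-priority queue capped at 1000 kbps during
! congestion; everything else shares the remainder via fair queuing
policy-map QUEUE-EGRESS
 class VOICE-EF
  priority 1000
 class class-default
  fair-queue
!
interface GigabitEthernet0/2
 service-policy output QUEUE-EGRESS
```

The cap on the priority queue matters: without it, a flood of EF-marked traffic could starve every other class.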
Traffic policing and shaping are also integral components of QoS. Policing enforces traffic rate limits by dropping or remarking packets that exceed configured thresholds, which prevents bandwidth abuse and ensures fair resource allocation. Traffic shaping, on the other hand, buffers excess packets and transmits them at a controlled rate, smoothing traffic bursts and maintaining predictable network behavior. Both policing and shaping work together to control congestion and optimize resource utilization across WAN and LAN links.
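The policing-versus-shaping distinction can be shown with two small MQC policies; the rates and interfaces are illustrative assumptions:

```
! Policing: drop traffic that exceeds a 2 Mbps committed rate
policy-map POLICE-EDGE
 class class-default
  police cir 2000000 conform-action transmit exceed-action drop
!
! Shaping: buffer bursts and transmit at a smoothed 10 Mbps average
policy-map SHAPE-WAN
 class class-default
  shape average 10000000
!
! Shaping is typically applied outbound on a bandwidth-limited WAN link
interface Serial0/0/0
 service-policy output SHAPE-WAN
```

A common pattern is to shape outbound at the enterprise edge to the provider's contracted rate, while the provider polices inbound to enforce it.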
QoS policies can be applied at interfaces, VLANs, and routing paths, providing end-to-end prioritization. Integration with routing protocols allows QoS to influence path selection, ensuring that high-priority traffic takes optimal routes. For example, voice traffic can be mapped to a low-latency path using RSVP or MPLS TE, while bulk file transfers use best-effort paths. QoS also integrates with monitoring tools such as NetFlow, SNMP, and telemetry to measure traffic performance, validate policy enforcement, and detect congestion points.
Other options provide different functionalities. HSRP ensures gateway redundancy but does not prioritize traffic. NAT translates private addresses to public addresses for connectivity but does not manage priority levels. VRF separates routing instances but does not control traffic prioritization or packet treatment.
For Cisco 350-401 ENCOR exam candidates, understanding QoS involves detailed knowledge of traffic classification, DSCP and CoS markings, queuing algorithms, bandwidth allocation, policing, shaping, congestion management, integration with routing protocols, end-to-end traffic engineering, troubleshooting dropped packets or jitter, verification commands, interface configuration, and scalability considerations. Candidates should be able to design, implement, verify, and maintain QoS policies to guarantee performance for mission-critical applications, optimize network resource utilization, maintain predictable latency and jitter, and ensure high reliability for time-sensitive enterprise traffic. Mastery of QoS enables network engineers to provide consistent application performance, manage bandwidth effectively, prevent congestion, and maintain a high level of operational efficiency across complex enterprise networks.
Question 86:
Which protocol allows multiple routers in an enterprise network to share a single virtual IP address as the default gateway for hosts, providing redundancy in case of a router failure?
A) HSRP
B) VRF
C) NAT
D) DHCP
Answer:
A) HSRP
Explanation:
Hot Standby Router Protocol (HSRP) is widely used in enterprise networks to provide default gateway redundancy for end devices, allowing uninterrupted connectivity in case of a router or link failure. HSRP functions by creating a virtual IP address that hosts use as their default gateway. Among participating routers, one is elected as the active router that forwards traffic for the virtual IP, while another router assumes the standby role to take over if the active router becomes unavailable. This redundancy ensures that critical services remain operational without manual intervention or IP reconfiguration on hosts.
HSRP routers communicate using hello messages to monitor the status of the active and standby devices. If hello messages from the active router are not received before the hold timer expires, the standby router transitions to the active state and begins forwarding traffic to maintain connectivity. Priority values can be configured to influence which router becomes active, and preemption can be enabled to allow a higher-priority router to resume the active role when it comes back online. Multiple HSRP groups can be configured for different VLANs to achieve redundancy and load sharing, distributing traffic across multiple routers while maintaining high availability.
Timers, such as the hello interval and hold time (3 and 10 seconds by default in HSRP version 1), determine how quickly failover occurs and therefore how fast the network converges. Tuning these timers is critical in environments where low latency and minimal downtime are required, such as VoIP, video, and real-time application environments. Integration with Layer 2 VLANs ensures that HSRP functions across segmented networks, while interaction with routing protocols supports optimal path selection during failover.
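The behaviors above can be sketched in a short interface configuration; the VLAN, group number, addresses, and timer values are illustrative assumptions:

```
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby version 2
 standby 10 ip 10.1.10.1       ! virtual gateway address that hosts use
 standby 10 priority 110       ! above the default 100, so this router becomes active
 standby 10 preempt            ! reclaim the active role after recovering
 standby 10 timers 1 4         ! hello 1 s, hold 4 s for faster failover
!
! Verify roles, priorities, and the virtual MAC with:
! show standby brief
```

A second router on the same VLAN would keep the default priority of 100 and automatically assume the standby role.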
Other options provide different functionalities. VRF isolates routing instances but does not provide gateway redundancy. NAT translates addresses for external communication but does not maintain continuous gateway availability. DHCP dynamically assigns IP addresses but does not provide gateway failover functionality.
For Cisco 350-401 ENCOR exam candidates, understanding HSRP involves knowledge of active and standby router roles, priority configuration, preemption, virtual IP and MAC address assignments, multiple HSRP groups, VLAN integration, failover detection, timers, verification commands, operational behavior during failures, troubleshooting techniques, interaction with routing protocols, load balancing, and optimization of convergence times. Candidates should be able to configure HSRP to provide highly available gateways, verify failover behavior, optimize network resilience, maintain connectivity for all end devices, and ensure network stability in multi-router and multi-VLAN enterprise environments. Mastery of HSRP enables network engineers to design and operate resilient Layer 3 networks capable of maintaining uninterrupted service during failures, distributing traffic across multiple routers, and supporting enterprise requirements for continuous availability.
Question 87:
Which technology allows multiple Layer 3 routing instances to coexist on the same device, enabling overlapping IP address spaces for different departments or customers in an enterprise network?
A) VRF
B) VLAN
C) NAT
D) HSRP
Answer:
A) VRF
Explanation:
Virtual Routing and Forwarding (VRF) is a key technology in enterprise networks that enables multiple independent Layer 3 routing instances to operate on a single physical device. VRF allows traffic from different departments, customers, or services to remain isolated while sharing the same physical infrastructure. Each VRF maintains its own routing table, interfaces, and forwarding decisions, enabling overlapping IP address spaces without conflict. This capability is particularly valuable in large enterprise networks, multi-tenant data centers, and service provider environments where isolation, security, and scalability are essential.
VRF operates by logically partitioning the routing table of a router or Layer 3 switch, ensuring that packets belonging to one VRF do not interfere with or traverse another VRF unless explicitly configured through route leaking. Interfaces or subinterfaces are assigned to VRF instances, controlling which routing table handles the traffic. VRF allows for selective route import and export, enabling controlled communication between routing instances when necessary. Policies such as route maps, route targets, and redistribution rules can be applied to manage interactions between VRFs and integrate with existing enterprise routing protocols.
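A minimal VRF definition might look like the sketch below (newer IOS syntax); the VRF names, route distinguisher and route-target values, and addressing are assumptions for illustration:

```
! Each VRF gets its own route distinguisher; route targets control
! which routes are exported to and imported from MP-BGP (values illustrative)
vrf definition RED
 rd 65000:1
 address-family ipv4
  route-target export 65000:1
  route-target import 65000:1
!
vrf definition BLUE
 rd 65000:2
 address-family ipv4
!
! Binding an interface to a VRF moves its routes into that VRF's table,
! so RED and BLUE can reuse overlapping address space without conflict
interface GigabitEthernet0/0
 vrf forwarding RED
 ip address 10.0.0.1 255.255.255.0
!
! Each instance is verified against its own table:
! show ip route vrf RED
```

Older IOS releases use `ip vrf RED` and `ip vrf forwarding RED` instead, but the partitioning model is the same.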
VRF is commonly used in enterprise WANs, data centers, and cloud environments to provide multi-tenant segmentation, separate routing domains, and allow overlapping private address spaces. For example, different departments within a corporation may use the same private IP ranges without conflict, or multiple tenants in a shared data center may have isolated routing while using identical IP addressing schemes. Integration with MPLS VPNs further extends VRF capabilities across wide-area networks, enabling service providers to maintain complete separation of customer traffic while providing efficient resource usage.
Other options provide different functionalities. VLAN provides Layer 2 segmentation but does not create separate Layer 3 routing tables. NAT translates addresses for connectivity but does not maintain multiple routing instances. HSRP provides gateway redundancy but does not create isolated routing tables.
For Cisco 350-401 ENCOR exam candidates, understanding VRF involves knowledge of VRF creation, interface assignment, routing table separation, route leaking, route targets, import/export policies, integration with routing protocols, VLANs, MPLS VPNs, configuration best practices, security isolation, scalability considerations, troubleshooting routing conflicts, and verification techniques. Candidates should be able to implement VRF to isolate traffic, manage overlapping IP spaces, enforce security policies, maintain multiple routing domains, integrate with enterprise routing strategies, and provide flexible, scalable network designs. Mastery of VRF allows network engineers to design multi-tenant environments, maintain traffic isolation, optimize routing, provide secure segmentation, and operate complex enterprise networks with multiple independent routing instances on shared physical infrastructure.
Question 88:
Which enterprise network protocol allows for the dynamic assignment of IP addresses to hosts, simplifying IP address management and reducing configuration errors?
A) DHCP
B) HSRP
C) NAT
D) OSPF
Answer:
A) DHCP
Explanation:
Dynamic Host Configuration Protocol (DHCP) is a crucial service in enterprise networks that automates the assignment of IP addresses, subnet masks, default gateways, DNS servers, and other network configuration parameters to end devices. This automation is essential in networks with a large number of hosts, as it reduces manual configuration errors, simplifies administrative tasks, and ensures that devices can join the network quickly and reliably. DHCP operates using a client-server model in which the DHCP server manages pools of available IP addresses and configuration parameters, and the client requests this information when connecting to the network.
The DHCP process begins with the client broadcasting a DHCP Discover message to locate available servers. Servers respond with a DHCP Offer message, indicating an available IP address and configuration options. The client selects one offer and broadcasts a DHCP Request message to confirm acceptance. The chosen server responds with a DHCP Acknowledgment, completing the exchange (often remembered as DORA) and allowing the client to configure its network interface with the provided information. This lease-based approach ensures efficient use of IP addresses by reclaiming addresses from devices that leave the network or no longer require them.
DHCP supports a range of features that enhance enterprise network management. Address reservation allows specific devices to consistently receive the same IP address, which is useful for servers, printers, and network devices that require static addresses for routing or service configuration. Options such as default gateway, DNS server, NTP server, domain name, and classless static routes can be delivered to clients automatically, providing consistent network configuration and reducing operational overhead. DHCP relay agents enable communication between clients and DHCP servers across Layer 3 boundaries, ensuring centralized management even in segmented enterprise networks.
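A server pool with common options, plus the relay command used across Layer 3 boundaries, can be sketched as follows; the pool name, subnets, server address, and lease length are illustrative assumptions:

```
! Keep static infrastructure addresses out of the dynamic pool
ip dhcp excluded-address 10.1.20.1 10.1.20.10
!
ip dhcp pool USERS
 network 10.1.20.0 255.255.255.0
 default-router 10.1.20.1      ! default gateway handed to clients
 dns-server 10.0.0.53
 domain-name example.com
 lease 0 8                     ! 0 days, 8 hours
!
! On a remote VLAN's gateway interface, relay client broadcasts
! as unicasts to the central DHCP server
interface Vlan20
 ip helper-address 10.0.0.100
```

The `ip helper-address` relay is what lets one centrally managed server serve many segmented subnets.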
DHCP also supports security mechanisms to prevent unauthorized access and address allocation conflicts. DHCP snooping, for example, allows network devices to filter and monitor DHCP messages, ensuring only trusted servers assign IP addresses. Integration with AAA services, such as RADIUS and TACACS+, can enforce authentication and authorization policies to control which devices are allowed to receive DHCP leases. DHCP logging and monitoring tools provide visibility into address allocation, lease utilization, and potential misconfigurations, enabling proactive network management and troubleshooting.
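DHCP snooping, mentioned above, is enabled per VLAN on the access switch; the VLAN and port numbers here are illustrative assumptions:

```
! Enable snooping globally and for the user VLAN
ip dhcp snooping
ip dhcp snooping vlan 20
!
! Only the uplink toward the legitimate DHCP server is trusted;
! server-originated messages (Offer, Ack) are dropped on all other ports
interface GigabitEthernet1/0/24
 ip dhcp snooping trust
!
! Rate-limit DHCP messages on an access port to blunt starvation attacks
interface GigabitEthernet1/0/1
 ip dhcp snooping limit rate 10
```

The binding table that snooping builds (MAC, IP, lease, port, VLAN) is also the data source for related protections such as Dynamic ARP Inspection.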
Other options provide different functionalities. HSRP ensures gateway redundancy but does not assign IP addresses. NAT translates private addresses to public addresses but does not automate host configuration. OSPF is a routing protocol that propagates routing information but does not manage host addressing.
For Cisco 350-401 ENCOR exam candidates, understanding DHCP involves knowledge of the DHCP client-server process, lease management, address pools, options and parameters, reservations, relay agent configuration, security features, integration with AAA services, network segmentation, logging and monitoring, troubleshooting lease failures, IP address utilization, and configuration best practices. Candidates should be able to configure DHCP, verify operation, manage address allocation efficiently, secure the service, and ensure seamless IP assignment for end devices across enterprise networks. Mastery of DHCP enables network engineers to reduce manual configuration errors, simplify network administration, maintain operational efficiency, and support enterprise devices in dynamic and large-scale environments.
Question 89:
Which technology allows multiple VLANs to be connected through a single Layer 3 device, enabling inter-VLAN communication in an enterprise network?
A) Router-on-a-stick
B) Access ports
C) HSRP
D) NAT
Answer:
A) Router-on-a-stick
Explanation:
Router-on-a-stick is a network design technique that allows multiple VLANs to communicate through a single physical interface on a Layer 3 device, such as a router or Layer 3 switch. This approach is widely used in enterprise networks where multiple VLANs are deployed for segmentation, security, and traffic isolation, but physical interfaces are limited. The technique leverages 802.1Q trunking to encapsulate VLAN tags, allowing the router to differentiate and route traffic between VLANs.
In a router-on-a-stick configuration, the router's physical interface is divided into multiple subinterfaces, each associated with a specific VLAN. Each subinterface is assigned an IP address within the VLAN's subnet and acts as the default gateway for hosts in that VLAN. The physical interface carries 802.1Q-tagged traffic for all VLANs, allowing a single link to support multiple Layer 2 domains. The router receives a tagged frame, identifies the source VLAN from its tag, routes the packet using its routing table, re-tags it for the destination VLAN, and forwards it back out the same trunk interface. This enables hosts in different VLANs to communicate while keeping unicast, multicast, and broadcast traffic isolated within each VLAN.
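The design can be sketched on both sides of the trunk; interface names, VLAN IDs, and addresses are illustrative assumptions:

```
! Switch side: carry VLANs 10 and 20 over a single 802.1Q trunk
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20
!
! Router side: one subinterface per VLAN, each acting as
! the default gateway for its subnet
interface GigabitEthernet0/0.10
 encapsulation dot1Q 10
 ip address 10.1.10.1 255.255.255.0
!
interface GigabitEthernet0/0.20
 encapsulation dot1Q 20
 ip address 10.1.20.1 255.255.255.0
```

A host in VLAN 10 sends traffic for VLAN 20 to 10.1.10.1; the router routes it and sends it back down the same physical link tagged for VLAN 20.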
Router-on-a-stick supports features such as inter-VLAN routing, access control, QoS, and security policies. ACLs can be applied on subinterfaces to control which VLANs or hosts are allowed to communicate, providing granular security enforcement. Traffic shaping and QoS policies can prioritize specific VLAN traffic, ensuring critical applications receive the necessary bandwidth and low latency. This design allows network engineers to implement cost-effective Layer 3 solutions without requiring multiple physical interfaces while maintaining scalability and traffic management capabilities.
Other options provide different functionalities. Access ports carry traffic for a single VLAN and cannot route between VLANs. HSRP provides gateway redundancy but does not enable inter-VLAN routing. NAT translates addresses for external connectivity but does not facilitate Layer 3 communication between VLANs.
For Cisco 350-401 ENCOR exam candidates, understanding router-on-a-stick involves knowledge of subinterface creation, VLAN tagging, 802.1Q trunk configuration, IP addressing for subinterfaces, inter-VLAN routing, ACL application, QoS policies, traffic prioritization, troubleshooting inter-VLAN connectivity, verifying subinterface operation, router interface management, VLAN assignment, and integration with enterprise network topology. Candidates should be able to configure router-on-a-stick, verify traffic forwarding, implement policies for security and performance, troubleshoot connectivity issues, and maintain efficient Layer 3 inter-VLAN communication. Mastery of this technique enables network engineers to provide scalable, cost-effective, and secure routing solutions for enterprise networks with multiple VLANs while ensuring reliable inter-VLAN connectivity and operational efficiency.
Question 90:
Which enterprise network protocol provides loop-free Layer 2 connectivity by electing a root bridge and calculating shortest paths to prevent broadcast storms?
A) STP
B) HSRP
C) OSPF
D) NAT
Answer:
A) STP
Explanation:
Spanning Tree Protocol (STP) is a Layer 2 protocol used in enterprise networks to prevent loops in Ethernet topologies. Loops in a switched network can lead to broadcast storms, multiple frame copies, MAC table instability, and network outages. STP maintains loop-free Layer 2 connectivity by electing a root bridge and calculating the shortest path from all switches to the root. The protocol blocks redundant links while keeping backup paths available to maintain connectivity in case of failures.
STP operates using Bridge Protocol Data Units (BPDUs), which are exchanged between switches to share information about bridge IDs, path costs, port states, and topology changes. The election of a root bridge is based on the lowest bridge ID, combining priority and MAC address values. Once the root is elected, each switch determines the root port, which is the interface with the lowest path cost to the root bridge. Designated ports are assigned for each segment to forward traffic, and other ports are placed in a blocking state to prevent loops.
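Because the root election prefers the lowest bridge ID, administrators normally force a deterministic root by lowering the priority on the intended switch. The VLAN number below is an illustrative assumption:

```
! Force this switch to win the root election for VLAN 10
! (priority must be a multiple of 4096; the default is 32768)
spanning-tree vlan 10 priority 4096
!
! Alternatively, let IOS pick a priority below the current root's
spanning-tree vlan 10 root primary
!
! Verify the root bridge identity, port roles, and port states with:
! show spanning-tree vlan 10
```

Leaving the election to default priorities means the switch with the lowest MAC address wins, which is rarely the switch you want at the center of the topology.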
Multiple STP variants provide improved performance and faster convergence. Rapid Spanning Tree Protocol (RSTP) reduces convergence times by introducing port roles such as alternate and backup ports, allowing the network to adapt quickly to topology changes. Multiple Spanning Tree Protocol (MSTP) allows mapping of VLANs to different spanning tree instances, optimizing link usage and providing load balancing in multi-VLAN environments. STP integrates with VLANs, QoS policies, and Layer 3 routing to maintain operational efficiency, prevent loops, and ensure predictable traffic forwarding.
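Switching to the faster variants is a mode change; the region name, revision, and VLAN-to-instance mappings below are illustrative assumptions:

```
! Run Rapid PVST+ (802.1w behavior, one instance per VLAN)
spanning-tree mode rapid-pvst
!
! Or use MST: map groups of VLANs to a small number of instances
! so different instances can block different redundant links
spanning-tree mode mst
spanning-tree mst configuration
 name REGION1
 revision 1
 instance 1 vlan 10,20
 instance 2 vlan 30,40
```

With MST, all switches in a region must agree on the name, revision, and VLAN-to-instance mapping, or they treat each other as separate regions.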
Other options provide different functionalities. HSRP provides default gateway redundancy but does not prevent loops. OSPF propagates routing information at Layer 3 and does not address Layer 2 broadcast loops. NAT translates addresses for connectivity but does not manage Layer 2 paths.
For Cisco 350-401 ENCOR exam candidates, understanding STP involves knowledge of root bridge election, path cost calculation, port roles, port states, BPDU structure, STP timers, RSTP and MSTP variants, VLAN integration, loop prevention techniques, convergence behavior, troubleshooting topology changes, redundant link management, and verification commands. Candidates should be able to configure STP for loop prevention, optimize topology, verify operational status, troubleshoot blocked or forwarding ports, integrate STP with VLAN designs, and ensure stable Layer 2 connectivity. Mastery of STP enables network engineers to design resilient Layer 2 topologies, prevent broadcast storms, maintain consistent traffic flow, support multi-VLAN environments, and provide high availability in enterprise switched networks.