Question 166:
What type of testing evaluates application security by analyzing source code?
A) Dynamic testing
B) Static testing
C) Penetration testing
D) Behavioral testing
Answer: B) Static testing
Explanation:
Static testing, also called static application security testing or SAST, evaluates application security by analyzing source code, bytecode, or binaries without executing programs to identify security vulnerabilities, coding errors, and deviations from secure coding practices. This white-box testing approach examines all code paths, logic branches, and data flows comprehensively including error handling and edge cases that might not be exercised during runtime testing. Static analysis enables early vulnerability detection during development before code reaches production environments, when remediation is least expensive and disruptive.
Analysis techniques employ various methods to identify security issues. Pattern matching identifies code constructs matching known vulnerability patterns such as SQL injection opportunities, cross-site scripting potential, or hardcoded credentials. Data flow analysis tracks how data moves through applications from input sources to sensitive operations, identifying paths where untrusted input reaches dangerous functions without proper validation or sanitization. Control flow analysis examines program execution paths, detecting logical flaws, unreachable code, or improper error handling. Taint analysis specifically tracks the propagation of untrusted data, marking input sources as “tainted” and flagging cases where tainted data reaches sensitive operations without sanitization.
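To make the data-flow and taint-analysis ideas concrete, here is a minimal Python sketch of the source-to-sink pattern a static analyzer typically flags: untrusted input concatenated into a SQL statement, contrasted with a parameterized query that breaks the tainted path. The function names and table layout are illustrative assumptions, not output from any particular tool.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # "username" originates from an untrusted source (tainted).
    # String concatenation lets the tainted value reach the SQL sink
    # unmodified; this is the classic pattern a SAST tool reports as SQL injection.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameter binding sanitizes the data path: the driver treats the
    # value strictly as data, not SQL, so the taint never reaches the sink as code.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```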
Benefits include comprehensive code coverage examining all code paths regardless of whether they execute during testing, early detection identifying vulnerabilities during development when fixes are cheapest, developer-friendly results providing specific file names and line numbers where issues exist with remediation guidance, automation enabling efficient analysis of large codebases without manual review, and integration capabilities embedding security analysis into development workflows and continuous integration pipelines. These advantages make static testing valuable for building security into applications from the beginning.
Integration with software development lifecycles enables “shift left” security where vulnerabilities are caught early. Developers run static analysis on their workstations before committing code, catching issues immediately. Continuous integration pipelines execute automated static analysis on code commits, preventing vulnerable code from progressing through build processes. Nightly comprehensive scans analyze entire codebases, identifying issues for prioritization and remediation. This continuous security testing throughout development dramatically improves application security postures while reducing costs compared to finding vulnerabilities in production.
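As one illustration of running analysis before code progresses, the following sketch wraps a scanner in a simple pre-commit style check. Bandit is used only as an example of an open source Python analyzer, and the src directory is an assumed project layout; any SAST tool that signals findings through its exit code would fit the same pattern.

```python
import subprocess
import sys

def run_static_analysis(target: str = "src") -> int:
    # Run the analyzer against the target directory; a non-zero exit code
    # indicates findings, which we use to block the commit or build step.
    result = subprocess.run(["bandit", "-r", target])
    if result.returncode != 0:
        print("Static analysis reported issues; fix them before committing.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_static_analysis())
```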
Limitations include false positives where tools flag code as vulnerable when it actually isn’t due to analysis limitations in understanding full context, false negatives missing actual vulnerabilities through incomplete analysis or novel vulnerability types, difficulty with some language features like dynamic code generation or reflection that complicate static analysis, and potential developer fatigue from excessive alerts particularly with poorly tuned tools generating noise. Effective static testing requires tuning tools, prioritizing findings, and combining with other testing approaches for comprehensive coverage.
Different static analysis tools specialize in various aspects. Commercial enterprise tools offer comprehensive vulnerability detection, extensive language support, integration capabilities, and support services. Open source tools provide accessible analysis though potentially with fewer features or language coverage. IDE plugins provide real-time analysis as developers write code enabling immediate feedback. Purpose-built tools target specific vulnerability types like cryptographic misuse or authentication flaws. Organizations often employ multiple tools for complementary coverage.
Supported languages and frameworks vary by tool. Some tools support multiple languages enabling consistent analysis across polyglot codebases. Others specialize in specific languages providing deeper analysis for their target platforms. Framework-specific analysis understands particular application frameworks detecting framework-specific vulnerability patterns. Organizations should select tools supporting their technology stacks.
Remediation workflows connect static analysis findings with developer activities. Integration with issue tracking systems creates tickets for identified vulnerabilities assigning them for remediation. Code review tools display findings during reviews ensuring security issues are addressed before code merges. Trend analysis tracks vulnerability metrics over time measuring security improvement or degradation. These workflows ensure findings drive actual improvements rather than generating reports that nobody acts upon.
Dynamic testing analyzes running applications. Penetration testing simulates attacks. Behavioral testing observes runtime behaviors. Static testing specifically analyzes source code without executing it, enabling comprehensive early vulnerability detection; examining code artifacts rather than running applications is what distinguishes it from the other approaches.
Question 167:
Which attack technique uses malicious code hidden within legitimate-looking files?
A) Virus
B) Trojan
C) Worm
D) Rootkit
Answer: B) Trojan
Explanation:
Trojans, named after the ancient Greek Trojan horse deception, are malicious code hidden within legitimate-looking files or programs that trick users into voluntarily installing and executing them. Unlike viruses that infect files or worms that self-replicate, Trojans rely entirely on social engineering and deception to convince users that malicious software is actually useful, entertaining, or necessary. Once executed, Trojans perform hidden malicious activities while often providing the expected legitimate functionality to avoid arousing suspicion. This dual nature makes Trojans effective at bypassing user caution since victims deliberately run them believing they’re installing genuine software.
Disguise methods vary widely depending on target audiences and distribution channels. Software cracks and keygens promise free access to paid applications attracting users seeking to avoid license purchases. Fake security software falsely claims to detect infections then demands payment for bogus cleaning services. Gaming cheats or mods appeal to gamers seeking competitive advantages. Pirated media files appear as movies, music, or games. Fake productivity utilities mimic system optimizers or file converters. Email attachments masquerade as invoices, shipping notifications, or important documents. These diverse disguises exploit different user motivations and trust assumptions.
Distribution channels reach potential victims through multiple paths. Malicious websites offer trojanized software downloads prominently ranked in search results through search engine optimization. Peer-to-peer networks distribute infected files mixed with legitimate content. Social media campaigns spread links to malicious downloads. Email campaigns attach trojans or link to download sites. Compromised legitimate websites serve trojans to trusting visitors. Supply chain attacks inject trojans into legitimate software development or distribution processes. Each channel exploits different trust relationships or user behaviors.
Malicious payloads accomplish various attacker objectives once trojans execute. Remote access trojans provide complete system control enabling attackers to execute commands, access files, capture screenshots, or activate webcams and microphones. Banking trojans steal financial credentials intercepting online banking sessions or capturing payment card information. Information stealers collect passwords, browsing history, documents, or other sensitive data. Downloaders fetch additional malware installing more sophisticated attacks after initial compromise. Ransomware trojans encrypt files demanding payment. Cryptocurrency miners hijack system resources. Each payload type serves different criminal monetization or espionage purposes.
Evasion techniques help trojans avoid detection and analysis. Code obfuscation makes programs difficult to reverse engineer. Encryption hides malicious components from antivirus scanning. Packers compress code in ways that complicate analysis. Anti-analysis checks detect virtual machines, debuggers, or sandboxes used by security researchers and abort execution in those analysis environments. Polymorphism changes code structure with each infection, making signature-based detection harder. These sophisticated techniques extend trojan effectiveness against security defenses.
Question 168:
What security measure isolates different network segments to limit breach propagation?
A) Encryption
B) Network segmentation
C) Authentication
D) Data backup
Answer: B) Network segmentation
Explanation:
Network segmentation divides networks into separate isolated segments or subnets with controlled communication between them, limiting breach propagation by containing security incidents within specific segments and preventing attackers who compromise one area from easily moving laterally to access other network parts. This defense-in-depth strategy creates security boundaries beyond perimeter defenses, recognizing that perimeter breaches will occur and designing internal architecture to minimize damage through isolation. Effective segmentation implements the principle of least privilege at the network level, ensuring systems and users can only communicate with resources necessary for their functions.
Segmentation strategies employ various criteria for dividing networks. Functional segmentation isolates different business functions or departments, such as separating human resources, finance, sales, and operations systems, supporting each department’s needs while preventing unnecessary cross-department access. Security zone segregation creates distinct zones including public DMZs for internet-facing systems, internal trusted networks for user workstations and business applications, restricted zones for sensitive data and critical infrastructure, management networks for administrative access, and guest networks for visitor devices. System type separation isolates servers from workstations, production from development environments, operational technology from information technology, and database systems from application servers.
Technical implementation uses multiple technologies. Virtual LANs create logical network segments over shared physical infrastructure, providing basic segmentation at layer 2 using switch configurations. Firewalls between segments enforce security policies controlling traffic flow, examining packets and permitting only authorized communications based on rules defining allowed sources, destinations, ports, and protocols. Router access control lists filter traffic based on IP addresses, ports, and protocols. Software-defined networking enables dynamic segmentation policies adapting to changing security requirements through centralized policy management. Microsegmentation in virtualized or cloud environments applies fine-grained policies down to individual workloads or applications providing extremely granular control.
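The sketch below illustrates the zone-to-zone policy idea behind firewalling between segments: traffic passes only if an explicit rule permits the source zone, destination zone, and port, and everything else is implicitly denied. The zone names and ports are illustrative assumptions rather than any vendor's configuration syntax.

```python
# Hypothetical zone-based policy: (source zone, destination zone, port) -> allowed.
ALLOWED_FLOWS = {
    ("user_lan", "dmz", 443),        # workstations may reach the DMZ web tier over HTTPS
    ("dmz", "restricted_db", 5432),  # DMZ app servers may reach the database tier
    ("mgmt", "dmz", 22),             # management network may administer DMZ hosts
}

def is_flow_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    # Implicit deny: anything not explicitly listed is blocked,
    # which is how segmentation limits lateral movement.
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

print(is_flow_allowed("user_lan", "restricted_db", 5432))  # False: no direct path to the data tier
print(is_flow_allowed("dmz", "restricted_db", 5432))       # True: explicitly permitted
```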
Benefits include breach containment limiting lateral movement as attackers who compromise initial systems face additional barriers preventing easy progression to other segments. Reduced attack surface minimizes what attackers can reach from compromised segments. Compliance support helps meet regulatory requirements mandating isolation of sensitive data like payment card information under PCI DSS or protected health information under HIPAA. Improved monitoring focuses security tools on specific segments enabling better anomaly detection within known traffic patterns. Simplified security policy enforcement applies appropriate controls based on segment security requirements. Performance optimization can result from reduced broadcast domains and optimized traffic patterns.
Implementation planning requires careful analysis of communication patterns and dependencies. Network mapping documents existing infrastructure topology. Traffic analysis identifies actual communication patterns between systems. Business process understanding determines necessary connections for operations. Security assessment evaluates risks and determines appropriate segmentation boundaries. Segmentation design balances security isolation with operational requirements avoiding overly complex architectures that become difficult to manage or so restrictive they impede business functions.
Challenges include complexity in managing multiple segments and security policies requiring careful planning and documentation. Potential performance impacts from additional inspection of inter-segment traffic must be considered in capacity planning. User resistance may arise from restrictions that complicate accessing resources across segments. Difficulty mapping all interdependencies in complex environments risks disrupting applications during implementation. Ongoing maintenance is required as systems and requirements evolve demanding continuous segmentation governance. These challenges necessitate executive support, clear policies, gradual implementation, and adequate resources.
Zero trust architectures extend segmentation principles assuming internal networks are hostile and requiring verification for all connections regardless of source location. This approach treats every network as potentially compromised, authenticating and authorizing all access attempts rather than trusting based on network position. Zero trust represents the evolution of traditional network segmentation into comprehensive continuous verification models.
Common segmentation patterns include three-tier architectures separating presentation, application, and data layers; data classification-based segmentation isolating data by sensitivity levels; user role-based segmentation limiting access based on job functions; and geographic segmentation separating locations. Organizations often combine multiple patterns creating layered segmentation aligned with diverse requirements.
Encryption protects data confidentiality. Authentication verifies identity. Data backup enables recovery. Network segmentation specifically isolates network segments, limiting breach propagation through controlled communication boundaries and providing containment when compromises occur despite other security controls.
Question 169:
Which protocol provides secure file transfer over SSH?
A) FTP
B) TFTP
C) SFTP
D) HTTP
Answer: C) SFTP
Explanation:
SFTP, which stands for SSH File Transfer Protocol or Secure File Transfer Protocol, provides secure file transfer capabilities by operating over SSH connections, encrypting both authentication credentials and transferred data. This protocol addresses security weaknesses in traditional file transfer protocols by leveraging SSH’s strong encryption, authentication, and integrity verification protecting files during transmission over untrusted networks. SFTP has become the preferred secure file transfer method, replacing insecure protocols like FTP that transmit credentials and data in plaintext vulnerable to interception and eavesdropping.
Protocol operation leverages SSH infrastructure establishing encrypted tunnels between clients and servers before file transfer begins. SFTP uses SSH’s connection multiplexing running file transfer operations over authenticated encrypted SSH sessions on port 22 by default. All commands, responses, and file data travel through encrypted channels protecting confidentiality and integrity. This tight integration with SSH means SFTP inherits SSH security properties including strong encryption algorithms, multiple authentication methods, and connection integrity verification through message authentication codes.
Functionality extends beyond basic file upload and download to provide comprehensive remote file system operations. Users can list directory contents, create and delete directories, rename and delete files, modify file permissions and attributes, and perform other file management tasks remotely. SFTP supports resuming interrupted transfers, preserving file timestamps and permissions, and batch operations transferring multiple files efficiently. These capabilities make SFTP suitable for backup operations, application deployment, data synchronization, and general file management beyond simple file exchange.
Authentication methods provide flexibility for different security requirements. Password authentication allows users to authenticate with usernames and passwords though this remains vulnerable to password-based attacks. Public key authentication using SSH key pairs provides stronger security eliminating password transmission and enabling automated file transfers without embedding credentials in scripts. Certificate-based authentication employs digital certificates for scalable authentication in enterprise environments. Multi-factor authentication can be implemented requiring additional verification beyond primary credentials.
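For illustration, this hedged sketch uses the third-party Paramiko library to open an SFTP session authenticated with an SSH key pair rather than a password; the hostname, username, key path, and file names are placeholder assumptions.

```python
import paramiko

def upload_report(host: str, user: str, key_path: str) -> None:
    # Establish an SSH session authenticated with a private key
    # (no password is transmitted), then open an SFTP channel over it.
    client = paramiko.SSHClient()
    client.load_system_host_keys()                         # verify the server against known hosts
    client.set_missing_host_key_policy(paramiko.RejectPolicy())
    client.connect(hostname=host, username=user, key_filename=key_path)
    try:
        sftp = client.open_sftp()
        sftp.put("report.csv", "/uploads/report.csv")      # file data travels over the encrypted channel
        sftp.close()
    finally:
        client.close()

# upload_report("sftp.example.com", "svc_transfer", "/home/svc/.ssh/id_ed25519")
```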
Advantages over alternatives include strong encryption protecting all transmitted data from eavesdropping and tampering, authentication security preventing unauthorized access through multiple supported authentication methods including strong cryptographic options, single port operation simplifying firewall configuration compared to FTP’s multiple port requirements, platform independence supporting diverse operating systems and devices, and SSH infrastructure reuse leveraging existing SSH servers without requiring separate file transfer services. These benefits make SFTP appropriate for transferring sensitive files across untrusted networks.
Common use cases include secure file exchange with business partners, automated data feeds between systems, backup file transfers to remote storage, application deployment distributing software to servers, compliance requirements mandating encrypted data transmission, and systems administration for managing server files remotely. Organizations across industries rely on SFTP for secure file operations where confidentiality and integrity are important.
Implementation considerations include server configuration specifying SFTP settings, access controls, and user permissions; client selection from numerous available SFTP client applications supporting various platforms; automation through scripting for scheduled or triggered file transfers; monitoring and logging tracking file transfer activities for audit and troubleshooting; and performance optimization for large file transfers or high-volume operations. Proper implementation ensures secure reliable file transfer operations.
Security best practices include disabling password authentication in favor of key-based authentication eliminating password vulnerabilities, restricting SFTP access through firewall rules limiting connection sources, implementing key management policies for generating, distributing, rotating, and protecting SSH keys, enabling comprehensive logging of SFTP activities for security monitoring and audit, using chroot jails or similar mechanisms to restrict users to specific directories preventing unauthorized file system access, and keeping SFTP server software updated to address security vulnerabilities. These practices maximize SFTP security.
Alternatives include FTPS which adds SSL/TLS encryption to traditional FTP providing security but with additional complexity from multiple ports and active/passive mode considerations. SCP, another SSH-based file transfer protocol, offers simpler functionality focused on file copying without SFTP’s comprehensive file management capabilities. HTTP-based file transfer can be secured with TLS but lacks purpose-built file transfer features.
FTP transmits data unencrypted. TFTP is a simplified protocol without authentication. HTTP transfers web content. SFTP specifically provides secure file transfer over SSH, encrypting all communications and offering comprehensive remote file management; this combination of security and functionality distinguishes it from the alternatives.
Question 170:
What is the primary purpose of implementing certificate revocation lists?
A) To issue new certificates
B) To identify certificates that are no longer valid
C) To encrypt network traffic
D) To authenticate users
Answer: B) To identify certificates that are no longer valid
Explanation:
Certificate Revocation Lists, commonly abbreviated as CRLs, identify digital certificates that certificate authorities have revoked before their scheduled expiration dates and should no longer be trusted by systems verifying certificates. This critical public key infrastructure component enables the PKI ecosystem to communicate that previously valid certificates are now invalid due to various circumstances including private key compromise, certificate holder information changes, policy violations, or certificate authority compromise. Without revocation mechanisms, compromised certificates would remain trusted until natural expiration potentially enabling months or years of malicious use by attackers who obtained certificate private keys.
Revocation reasons vary, requiring different response urgencies. Private key compromise represents the most critical scenario, where unauthorized parties gained access to certificate private keys enabling impersonation of legitimate certificate holders or decryption of communications intended for rightful key owners. Organizations must immediately revoke compromised certificates preventing their misuse for fraudulent websites, man-in-the-middle attacks, code signing malware, or email impersonation. Certificate holder information changes such as company name changes, domain ownership transfers, or employee terminations from organizations issuing employee certificates necessitate revoking old certificates that reflect outdated information and issuing new certificates with current details. Policy violations where certificate holders fail to comply with certificate authority requirements or acceptable use policies may trigger revocation as an enforcement mechanism.
CRL operation follows straightforward publication and checking processes. Certificate authorities maintain authoritative lists of revoked certificates identified by serial numbers along with revocation dates and reason codes. These lists are digitally signed by certificate authorities to prevent tampering and published at well-known locations specified in certificate extensions. Applications and systems verifying certificates download current CRLs from published locations and check whether certificates being validated appear on revocation lists by comparing serial numbers. Certificates found on CRLs should be rejected as invalid even though their expiration dates have not yet been reached. This process ensures revocation decisions propagate to all relying parties.
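A minimal sketch of the relying-party side of that check, using the Python cryptography package: confirm the CRL's signature against the issuing CA and look up the certificate's serial number. Obtaining the PEM data (for example by downloading the CRL from the distribution point listed in the certificate) is assumed to have happened already.

```python
from cryptography import x509

def is_revoked(cert_pem: bytes, crl_pem: bytes, issuer_pem: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(cert_pem)
    crl = x509.load_pem_x509_crl(crl_pem)
    issuer = x509.load_pem_x509_certificate(issuer_pem)

    # Confirm the CRL was signed by the issuing CA before trusting its contents.
    if not crl.is_signature_valid(issuer.public_key()):
        raise ValueError("CRL signature is invalid; do not trust it")

    # A certificate is revoked if its serial number appears on the list.
    entry = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
    return entry is not None
```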
Scalability challenges arise as certificate authorities issue more certificates, causing CRLs to grow larger and consume increasing bandwidth and processing time. Some CRLs contain thousands or millions of entries, creating multi-megabyte files that slow downloads, particularly over limited-bandwidth connections or in regions with poor internet connectivity. Delta CRLs containing only changes since previous full CRLs partially address this by reducing download sizes, though they add implementation complexity. Partitioned CRLs split large lists into smaller segments that can be managed independently.
Question 171:
Which attack involves creating a fraudulent website that mimics a legitimate one to steal user credentials?
A) Pharming
B) Phishing
C) Spoofing
D) Whaling
Answer: B
Explanation:
Phishing involves creating fraudulent websites that closely mimic legitimate ones to deceive users into entering their credentials, personal information, or financial details. Attackers design these fake websites to appear identical to trusted sites such as banks, email providers, social media platforms, or e-commerce sites. When victims enter their information believing they are on legitimate sites, attackers capture this data for identity theft, financial fraud, or unauthorized account access.
These attacks typically begin with deceptive communications directing victims to fraudulent sites. Attackers send emails, text messages, or social media messages containing links to fake websites. These messages often create urgency claiming account problems, security alerts, or limited-time offers requiring immediate action. The fraudulent URLs may closely resemble legitimate addresses using slight misspellings, different domain extensions, or subdomains designed to appear authentic at quick glance.
The fake websites replicate legitimate site appearances including logos, color schemes, layouts, and functionality. Some sophisticated phishing sites even display valid security indicators or use encryption certificates to appear more trustworthy. Attackers may register domain names resembling target organizations or use URL shortening services to obscure actual destinations.
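As a small illustration of why slight misspellings work, the sketch below compares a link's hostname against a list of trusted domains using simple string similarity; the trusted list and threshold are assumptions, and real anti-phishing tooling relies on far richer signals such as reputation feeds and certificate data.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = ["example-bank.com", "mail.example.com"]  # assumed allow list

def looks_like_phish(url: str, threshold: float = 0.85) -> bool:
    # Extract the hostname from the link the user is about to click.
    host = urlparse(url).hostname or ""
    for trusted in TRUSTED_DOMAINS:
        ratio = SequenceMatcher(None, host, trusted).ratio()
        # Very similar but not identical: likely a lookalike such as
        # "examp1e-bank.com" impersonating "example-bank.com".
        if host != trusted and ratio >= threshold:
            return True
    return False

print(looks_like_phish("https://examp1e-bank.com/login"))   # True (lookalike domain)
print(looks_like_phish("https://example-bank.com/login"))   # False (exact trusted match)
```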
A) Pharming redirects users to fraudulent sites through DNS manipulation rather than deceptive links, operating at infrastructure level without requiring user interaction with malicious messages.
B) Phishing specifically creates fake websites mimicking legitimate ones to harvest credentials through social engineering and deception, making this the correct answer.
C) Spoofing involves falsifying information like email addresses or IP addresses but does not specifically describe creating fake credential-harvesting websites.
D) Whaling targets high-profile executives with sophisticated phishing attacks but represents a phishing variant rather than the general technique of creating fraudulent websites.
Organizations defend against phishing through user awareness training, email filtering, browser warnings for suspicious sites, and multi-factor authentication protecting accounts even when credentials are compromised.
Question 172:
What security control prevents data from leaving the organization through unauthorized channels?
A) Intrusion detection
B) Data loss prevention
C) Access control
D) Encryption
Answer: B
Explanation:
Data loss prevention, commonly abbreviated as DLP, prevents sensitive data from leaving organizations through unauthorized channels by monitoring, detecting, and blocking potential data exfiltration attempts. DLP solutions examine data in motion across networks, data at rest in storage systems, and data in use on endpoints, applying policies that identify and protect sensitive information from unauthorized transmission, copying, or exposure regardless of whether the action is malicious or accidental.
DLP systems identify sensitive data using various techniques including content inspection examining actual data content for patterns matching sensitive information types like credit card numbers, social security numbers, or confidential keywords. Contextual analysis considers metadata, file types, user behaviors, and transmission destinations. Exact data matching compares content against fingerprints of known sensitive documents or database records. These combined approaches enable comprehensive sensitive data identification across diverse formats and locations.
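Here is a hedged sketch of the content-inspection technique: regular expressions for common sensitive-data formats scan outbound text before it leaves. The patterns are deliberately simplified assumptions; production DLP engines add checksum validation such as the Luhn test, contextual analysis, and document fingerprinting.

```python
import re

# Simplified illustrative patterns; production DLP rules are far stricter.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_outbound(text: str) -> list[str]:
    # Return the names of any sensitive-data patterns found in the content.
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

message = "Invoice attached. Card: 4111 1111 1111 1111, SSN: 123-45-6789"
hits = scan_outbound(message)
if hits:
    print(f"Blocking transmission: matched {hits}")  # e.g. ['credit_card', 'us_ssn']
```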
When DLP detects policy violations, it can respond through multiple actions depending on configuration and violation severity. Blocking prevents unauthorized transmissions entirely. Encryption automatically protects sensitive data before transmission. Quarantining holds content for security review before release. Alerting notifies security teams of potential incidents. Logging records activities for audit and investigation purposes.
A) Intrusion detection identifies suspicious network activities and potential attacks but focuses on detecting threats entering networks rather than preventing data from leaving.
B) Data loss prevention specifically monitors and blocks unauthorized data exfiltration through various channels, making this the correct answer for preventing data leaving organizations.
C) Access control restricts who can access resources but does not specifically address preventing authorized users from transmitting data through unauthorized channels.
D) Encryption protects data confidentiality by making it unreadable without keys but does not prevent data transmission through unauthorized channels.
Organizations implement DLP as critical protection for intellectual property, customer data, financial information, and compliance with regulations requiring data protection.
Question 173:
Which security principle ensures systems continue operating during component failures?
A) Confidentiality
B) Integrity
C) Availability
D) Non-repudiation
Answer: C
Explanation:
Availability ensures systems and data remain accessible and operational when needed by authorized users, including during component failures through redundancy, fault tolerance, and resilience mechanisms. This fundamental security principle recognizes that information and systems have value only when accessible for legitimate purposes. Availability protection addresses threats ranging from hardware failures and software crashes to natural disasters and denial of service attacks that could prevent access to critical resources.
Achieving availability during failures requires implementing redundant components that continue operations when primary systems fail. Redundant power supplies, network connections, storage systems, and servers ensure single component failures do not cause complete service outages. Load balancing distributes workloads across multiple systems so failure of individual servers does not eliminate capacity entirely. Clustering configurations enable automatic failover where backup systems assume operations when primary systems become unavailable.
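To illustrate failover in miniature, the sketch below tries a primary endpoint and falls back to a replica when it is unreachable. The URLs and timeout are assumptions, and real high-availability designs rely on health checks, load balancers, and clustering rather than ad hoc client retries.

```python
import urllib.error
import urllib.request

# Hypothetical redundant endpoints serving the same application.
ENDPOINTS = [
    "https://app-primary.example.com/health",
    "https://app-replica.example.com/health",
]

def fetch_with_failover(urls: list[str], timeout: float = 2.0) -> bytes:
    last_error = None
    for url in urls:
        try:
            # The first endpoint that answers wins; a single component failure
            # does not take the service down, which is the availability goal.
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            last_error = exc          # try the next redundant component
    raise RuntimeError(f"All endpoints failed: {last_error}")

# data = fetch_with_failover(ENDPOINTS)
```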
Disaster recovery planning addresses larger-scale availability threats through backup facilities, data replication, and recovery procedures enabling restoration of operations following significant incidents. Business continuity planning ensures critical business functions continue during disruptions. Regular testing validates that availability mechanisms function correctly when actually needed during emergencies.
A) Confidentiality protects information from unauthorized disclosure but does not address maintaining operations during failures.
B) Integrity ensures data accuracy and prevents unauthorized modification but focuses on data correctness rather than operational continuity.
C) Availability specifically ensures systems continue operating and remain accessible during component failures and other disruptions, making this the correct answer.
D) Non-repudiation prevents denial of actions and provides proof of origin but does not relate to maintaining system operations during failures.
Organizations measure availability through metrics like uptime percentages and service level agreements specifying required availability levels for critical systems.
Question 174:
What type of attack exploits human psychology rather than technical vulnerabilities?
A) Buffer overflow
B) SQL injection
C) Social engineering
D) Cross-site scripting
Answer: C
Explanation:
Social engineering exploits human psychology rather than technical vulnerabilities by manipulating individuals into performing actions that compromise security or divulging confidential information. These attacks target the human element of security, recognizing that people often represent the weakest link in security chains regardless of how robust technical controls may be. Social engineers leverage fundamental human traits including trust, helpfulness, authority respect, fear, and curiosity to influence victim behavior.
Social engineering techniques vary widely depending on attack objectives and target characteristics. Phishing uses fraudulent communications appearing to come from trusted sources requesting sensitive information or directing victims to malicious websites. Pretexting creates fabricated scenarios establishing trust before requesting information, such as impersonating IT support technicians or company executives. Baiting offers attractive items like free software or USB drives containing malware. Quid pro quo attacks promise services in exchange for information. Tailgating physically follows authorized personnel through secured doors.
These attacks succeed because they exploit human nature rather than requiring technical expertise to discover and exploit software vulnerabilities. Even organizations with sophisticated technical security can be compromised through employees manipulated into revealing passwords, granting access, or installing malicious software.
A) Buffer overflow exploits programming flaws that corrupt memory, representing a technical vulnerability rather than psychological manipulation.
B) SQL injection exploits database vulnerabilities through malicious queries, targeting technical weaknesses in application code rather than human psychology.
C) Social engineering specifically exploits human psychology and manipulation rather than technical vulnerabilities, making this the correct answer.
D) Cross-site scripting exploits web application vulnerabilities to inject malicious scripts, representing technical rather than psychological attacks.
Defense requires comprehensive security awareness training educating employees about social engineering tactics and creating cultures where questioning suspicious requests is encouraged.
Question 175:
Which encryption algorithm is currently considered the industry standard for symmetric encryption?
A) DES
B) 3DES
C) AES
D) RC4
Answer: C
Explanation:
Advanced Encryption Standard, commonly known as AES, is currently considered the industry standard for symmetric encryption, providing strong security with excellent performance across diverse applications. AES was selected through a rigorous public competition conducted by the National Institute of Standards and Technology, which evaluated numerous candidate algorithms based on security strength, computational efficiency, memory requirements, and implementation flexibility. The winning Rijndael algorithm became the AES standard in 2001, replacing the aging DES algorithm.
AES operates on fixed 128-bit data blocks using key sizes of 128, 192, or 256 bits. Longer key sizes provide additional security strength at slight performance cost, with AES-256 offering extremely strong protection suitable for highly sensitive applications including government classified information. The algorithm employs multiple transformation rounds including substitution, permutation, mixing, and key addition operations that thoroughly scramble data making cryptanalysis extremely difficult.
Modern processors include hardware acceleration for AES through dedicated instruction sets enabling encryption and decryption at speeds measured in gigabytes per second. This hardware support makes AES practical for high-throughput applications including disk encryption, database encryption, network protocols, and real-time communications without significant performance penalties.
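A minimal sketch of AES in an authenticated mode (AES-GCM) using the Python cryptography package. Generating and using the key in one process is an assumption made for brevity, not a key-management recommendation.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a random 256-bit key (AES-256); in practice keys come from a KMS or HSM.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                        # 96-bit nonce, unique per encryption
plaintext = b"Quarterly results: confidential"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # third argument: optional associated data

# Decryption verifies the authentication tag; any tampering raises an exception.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```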
A) DES uses only 56-bit keys now vulnerable to brute force attacks with modern computing power, making it obsolete for security purposes.
B) 3DES applies DES three times providing better security than single DES but with significantly slower performance than AES and decreasing support.
C) AES is the current industry standard symmetric encryption algorithm offering strong security with excellent performance, making this the correct answer.
D) RC4 is a stream cipher with known vulnerabilities that has been deprecated in major protocols including TLS, no longer recommended for security applications.
Organizations should use AES for all symmetric encryption needs, selecting appropriate key lengths based on data sensitivity and protection duration requirements.
Question 176:
What is the primary purpose of implementing multi-factor authentication?
A) To simplify login processes
B) To require multiple credentials from different categories for stronger verification
C) To eliminate the need for passwords
D) To increase network bandwidth
Answer: B
Explanation:
Multi-factor authentication requires multiple credentials from different authentication factor categories for stronger identity verification, significantly improving security over single-factor methods like passwords alone. This approach ensures that compromising one authentication factor does not grant unauthorized access since attackers must obtain multiple independent credentials. MFA has become essential for protecting sensitive systems against credential theft, phishing, brute force attacks, and other threats targeting authentication mechanisms.
Authentication factors fall into distinct categories. Something you know includes passwords, PINs, and security questions relying on memorized information. Something you have involves physical objects like smart cards, security tokens, or mobile devices generating one-time codes. Something you are encompasses biometric characteristics including fingerprints, facial recognition, and iris scans unique to individuals. Strong MFA combines factors from different categories ensuring diverse credential types must all be compromised for unauthorized access.
The security improvement from MFA is substantial. Even if attackers obtain passwords through phishing, credential database breaches, or guessing, they cannot access accounts protected by MFA without also possessing required additional factors. Physical tokens or biometric characteristics are significantly harder to steal remotely than knowledge-based credentials.
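As an example of the “something you have” factor, this sketch uses the third-party pyotp library to generate and verify a time-based one-time password of the kind produced by an authenticator app; the freshly generated shared secret is a throwaway assumption.

```python
import pyotp

# Shared secret provisioned to the user's authenticator app (e.g. via QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app computes the current 6-digit code from the secret and the clock.
code_from_app = totp.now()

# The server verifies the submitted code in addition to the password factor.
if totp.verify(code_from_app):
    print("Second factor accepted")
else:
    print("Second factor rejected")
```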
A) MFA adds authentication steps rather than simplifying login processes, though modern implementations minimize user friction while improving security.
B) Multi-factor authentication specifically requires credentials from different categories providing stronger verification through layered security, making this the correct answer.
C) MFA supplements rather than eliminates passwords in most implementations, though passwordless authentication using multiple alternative factors is emerging.
D) Authentication methods do not affect network bandwidth, which relates to data transmission capacity rather than identity verification.
Organizations should implement MFA for all systems containing sensitive information, particularly internet-facing applications, privileged accounts, and remote access services.
Question 177:
Which security control monitors network traffic for suspicious activities?
A) Firewall
B) Intrusion detection system
C) Proxy server
D) Load balancer
Answer: B
Explanation:
Intrusion detection systems, commonly abbreviated as IDS, monitor network traffic for suspicious activities by analyzing packets, connections, and behaviors to identify potential security threats. These systems examine network communications comparing observed patterns against known attack signatures, established behavioral baselines, and protocol specifications to detect malicious activities, policy violations, or anomalous behaviors warranting investigation. IDS provides visibility into network security status enabling security teams to identify and respond to threats.
Detection mechanisms employ multiple complementary approaches. Signature-based detection compares traffic against databases of known attack patterns identifying specific exploits, malware communications, or malicious activities based on recognized characteristics. This approach effectively detects known threats but cannot identify novel attacks lacking existing signatures. Anomaly-based detection establishes baselines of normal network behavior then identifies deviations suggesting potential threats even when specific attack patterns are unknown.
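A toy sketch of signature-based matching: packet payload bytes are compared against a small set of known-bad patterns. The signatures are invented placeholders; real IDS rule languages such as those used by Snort or Suricata are far more expressive and also inspect headers, ports, and flow state.

```python
import re

# Hypothetical payload signatures; real rules also match ports, flags, and flow state.
SIGNATURES = {
    "sql_injection_probe": re.compile(rb"union\s+select", re.IGNORECASE),
    "directory_traversal": re.compile(rb"\.\./\.\./"),
}

def inspect_payload(payload: bytes) -> list[str]:
    # Return the names of any signatures matched by this packet payload.
    return [name for name, sig in SIGNATURES.items() if sig.search(payload)]

packet = b"GET /index.php?id=1 UNION SELECT password FROM users HTTP/1.1"
for alert in inspect_payload(packet):
    print(f"ALERT: {alert}")    # an IDS logs the alert; an IPS could also drop the packet
```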
IDS deployments position sensors strategically within network architectures. Network-based IDS monitors traffic at network boundaries or critical segments examining packets flowing through monitored points. Host-based IDS runs on individual systems monitoring local activities including system calls, file modifications, and process behaviors on specific hosts.
A) Firewalls filter traffic based on rules allowing or blocking connections but focus on access control rather than comprehensive traffic monitoring for suspicious activities.
B) Intrusion detection systems specifically monitor network traffic for suspicious activities using signatures and behavioral analysis, making this the correct answer.
C) Proxy servers intermediate communications between clients and servers providing caching and filtering but not comprehensive security monitoring.
D) Load balancers distribute traffic across servers for availability and performance rather than monitoring for security threats.
Organizations deploy IDS as part of defense-in-depth strategies providing threat visibility complementing preventive controls like firewalls with detection capabilities enabling incident response.
Question 178:
What type of malware spreads automatically across networks without user interaction?
A) Virus
B) Trojan
C) Worm
D) Spyware
Answer: C
Explanation:
Worms spread automatically across networks without requiring user interaction, distinguishing them from other malware types that depend on user actions or host files for propagation. Worms exploit vulnerabilities in operating systems, applications, or network protocols to replicate themselves from system to system autonomously. Once a worm infects a system, it scans for other vulnerable systems and propagates automatically, potentially spreading across entire networks or the internet rapidly without any user involvement.
The self-propagating nature of worms makes them particularly dangerous in networked environments. A single infected system can quickly compromise thousands of others as the worm spreads exponentially through vulnerable populations. Famous worms like Code Red, Slammer, and WannaCry demonstrated devastating potential by spreading across hundreds of thousands of systems globally within hours, causing widespread disruptions, consuming network bandwidth, and delivering malicious payloads while overwhelming organizational ability to respond.
Worms typically exploit specific vulnerabilities to spread. Some target operating system services accessible over networks. Others exploit application vulnerabilities in web servers, database systems, or email software. Network protocol vulnerabilities enable spreading through standard communication channels. Once exploitation succeeds, worms copy themselves to newly compromised systems and continue scanning for additional targets.
A) Viruses attach to legitimate files or programs and require user action like running infected programs to execute and spread, not automatically propagating across networks.
B) Trojans disguise themselves as legitimate software relying on social engineering to trick users into installation, not self-replicating across networks automatically.
C) Worms specifically spread automatically across networks without user interaction through vulnerability exploitation, making this the correct answer.
D) Spyware monitors user activities and collects information but does not inherently self-replicate or spread automatically across networks.
Defending against worms requires prompt patching of vulnerabilities, network segmentation limiting spread, intrusion prevention systems detecting worm activity, and endpoint protection identifying and blocking worm infections.
Question 179:
Which protocol provides secure web browsing through encryption?
A) HTTP
B) FTP
C) HTTPS
D) SMTP
Answer: C
Explanation:
HTTPS, which stands for Hypertext Transfer Protocol Secure, provides secure web browsing through encryption by establishing encrypted connections between web browsers and servers using Transport Layer Security. This protocol protects web communications from eavesdropping, tampering, and man-in-the-middle attacks by encrypting all data transmitted between clients and servers. HTTPS has become essential for all websites, not just those handling sensitive transactions, with modern browsers actively warning users about unencrypted HTTP connections.
HTTPS operation begins with TLS handshakes establishing secure connections. Browsers request secure communication with servers. Servers respond with digital certificates proving their identities and providing public keys for encryption. Browsers verify certificates against trusted certificate authorities confirming server authenticity. Both parties negotiate encryption algorithms and generate session keys. This handshake establishes encrypted tunnels protecting all subsequent data transmission invisibly to users who simply see lock icons indicating secure connections.
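A small sketch of the client side of that handshake using Python's standard ssl module: the default context validates the server certificate against the system's trusted certificate authorities and checks the hostname before any application data is sent. The hostname used here is only an example.

```python
import socket
import ssl

hostname = "www.example.com"                 # assumed target site
context = ssl.create_default_context()       # trusted CA store plus hostname checking

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the TLS handshake: certificate verification,
    # cipher negotiation, and session key establishment.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Negotiated protocol:", tls.version())            # e.g. 'TLSv1.3'
        print("Server certificate subject:", tls.getpeercert()["subject"])
        request = b"GET / HTTP/1.1\r\nHost: " + hostname.encode() + b"\r\nConnection: close\r\n\r\n"
        tls.sendall(request)
        print(tls.recv(200))     # encrypted on the wire, plaintext to the caller
```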
The encryption protects both transmitted content and metadata from observation by network intermediaries. User credentials, personal information, financial data, and private communications remain confidential during transmission. Integrity protection detects any tampering attempts during transit. Server authentication prevents connection to fraudulent sites impersonating legitimate services.
A) HTTP transmits web content without encryption, leaving all data visible to anyone monitoring network traffic between clients and servers.
B) FTP transfers files but does not provide encrypted web browsing, operating as separate protocol with its own security considerations.
C) HTTPS specifically provides secure web browsing through TLS encryption protecting communications between browsers and servers, making this the correct answer.
D) SMTP handles email transmission between servers rather than web browsing and does not inherently provide encryption.
Modern websites universally implement HTTPS, with search engines favoring encrypted sites and browsers warning about unencrypted connections to encourage widespread adoption protecting user privacy and security.
Question 180:
What is the primary purpose of implementing access control lists on network devices?
A) To encrypt network traffic
B) To filter traffic based on defined rules
C) To increase network speed
D) To provide wireless connectivity
Answer: B
Explanation:
Access control lists, commonly abbreviated as ACLs, filter network traffic based on defined rules specifying which communications are permitted or denied. Network administrators configure ACLs on routers, switches, and firewalls to control traffic flow between network segments, implementing security policies through examination of packet characteristics including source and destination addresses, port numbers, and protocols. ACLs serve as fundamental network security controls restricting unauthorized access while allowing legitimate business communications.
ACL operation examines packets against ordered rule lists. Each rule specifies conditions defining traffic characteristics and actions indicating whether matching traffic should be permitted, denied, or logged. Standard ACLs filter based primarily on source IP addresses, providing basic filtering. Extended ACLs consider multiple criteria including source and destination addresses, ports, protocols, and other packet attributes, enabling precise traffic control. Rules are processed sequentially, with the first matching rule determining the packet’s fate.
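A hedged sketch of the first-match evaluation described above: rules are checked in order, the first match decides the packet's fate, and an implicit deny catches everything else. The addresses, ports, and rules are illustrative assumptions, not a device configuration.

```python
from ipaddress import ip_address, ip_network

# Ordered rule list: (action, source network, destination port or None for any port).
ACL = [
    ("deny",   ip_network("203.0.113.0/24"), None),  # block a known-bad source range
    ("permit", ip_network("10.0.0.0/8"),     443),   # internal clients to HTTPS
    ("permit", ip_network("10.0.0.0/8"),     22),    # internal clients to SSH
]

def evaluate(src_ip: str, dst_port: int) -> str:
    src = ip_address(src_ip)
    for action, network, port in ACL:
        if src in network and (port is None or port == dst_port):
            return action            # first matching rule wins
    return "deny"                    # implicit deny at the end of every ACL

print(evaluate("10.1.2.3", 443))       # permit
print(evaluate("203.0.113.7", 443))    # deny (matched the blocking rule first)
print(evaluate("192.0.2.5", 443))      # deny (no rule matched; implicit deny)
```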
Common ACL applications include protecting sensitive network segments by restricting which systems can communicate with critical servers or infrastructure. Blocking known malicious traffic sources prevents connections from IP addresses associated with attacks. Restricting services limits which applications can traverse network boundaries. Implementing compliance requirements enforces network isolation mandated by regulations.
A) Encryption protects data confidentiality through cryptographic transformation, not ACL functionality which filters traffic based on rules.
B) Access control lists specifically filter network traffic based on defined rules controlling which communications are permitted or denied, making this the correct answer.
C) ACLs add processing overhead as devices examine packets against rules, not increasing network speed which depends on infrastructure capacity.
D) Wireless connectivity requires access points and controllers, not ACLs which filter traffic regardless of physical medium.
Effective ACL implementation requires careful rule design, regular review, and documentation ensuring policies remain current as network environments evolve.