Passing IT certification exams can be tough, but the right exam prep materials make all the difference. ExamLabs provides 100% real and updated EMC E20-807 exam dumps, practice test questions, and answers that equip you with the knowledge required to pass. Our EMC E20-807 exam dumps, practice test questions, and answers are reviewed constantly by IT experts to ensure their validity and help you pass without putting in hundreds of hours of studying.
The E20-807 Exam, leading to the "Expert - VMAX All Flash and VMAX3 Solutions" certification, was a pinnacle credential for storage professionals specializing in high-end enterprise storage. This expert-level exam was designed for individuals responsible for the design, deployment, management, and performance of VMAX3 and VMAX All Flash storage arrays. It represented a deep, specialized knowledge of one of the industry's most powerful and resilient storage platforms. Passing this exam was a significant achievement, validating a candidate's ability to handle the complex demands of mission-critical application environments. The E20-807 Exam was situated within a comprehensive certification framework for storage professionals. It was an expert-level test, meaning it assumed a significant amount of prior knowledge and hands-on experience with storage area networks (SANs), storage provisioning, and data protection concepts. The exam's focus was not on general principles but on the specific, intricate details of the VMAX architecture, its HYPERMAX operating system, and its rich suite of data services. It was a test of mastery over a platform known for its "five nines" (99.999%) availability. Preparing for the E20-807 Exam was a rigorous undertaking. It required a thorough understanding of both the hardware and software components of the VMAX family, the theory behind its data placement and protection schemes, and proficiency in using its management tools, Unisphere and the Solutions Enabler command line interface (SYMCLI). While the specific product names have evolved, the architectural principles and technologies covered in the E20-807 Exam remain foundational to modern high-end enterprise storage design.
A professional who passed the E20-807 Exam was recognized as a VMAX Solutions Expert. This role involves far more than just routine storage administration. An expert is expected to act as a trusted advisor and a technical lead for all aspects of the VMAX environment. Their responsibilities include translating business requirements for performance and availability into a technical storage design, planning and executing complex data migrations, and optimizing the performance of the storage array to meet the demands of critical applications like large databases and virtualization platforms. The expert role also encompasses the entire data services portfolio. A key part of the job is designing and implementing robust data protection and disaster recovery solutions using the VMAX's native replication technologies, such as TimeFinder for local replication and SRDF for remote replication. The E20-807 Exam heavily tested the ability to choose the right replication technology for a given business requirement and to manage complex, multi-site replication topologies. Furthermore, a VMAX expert is the highest level of technical escalation for troubleshooting. When complex performance or availability issues arise, the expert is responsible for leading the investigation, analyzing performance data, and identifying the root cause. This requires a deep understanding of the VMAX's internal data flow and the ability to interpret detailed performance metrics. The E20-807 Exam was designed to validate this elite level of technical skill and problem-solving ability.
The E20-807 Exam centered on two specific families of the VMAX platform: the VMAX3 and its successor, the VMAX All Flash. The VMAX3 was a hybrid array, meaning it could be configured with a mix of high-performance flash drives and high-capacity spinning disk drives. This allowed for a tiered storage approach within a single system. The VMAX All Flash, as its name implies, was designed from the ground up to be an all-flash array, leveraging the massive performance and low latency of solid-state drives. A key architectural innovation introduced with VMAX3 and carried forward in the VMAX All Flash was the HYPERMAX operating system. This was a significant departure from previous generations. HYPERMAX was a purpose-built operating system that combined the core storage functions with embedded data services. This meant that services like file access (eNAS), embedded management, and data mobility could run as virtualized applications directly on the VMAX's own controllers, simplifying the infrastructure. The E20-807 Exam required a deep understanding of this new software-defined architecture. Both platforms were designed for massive scale and performance. They were built around a modular, scale-out architecture based on building blocks called V-Bricks or engines. This allowed the system to scale from a small initial configuration to a massive multi-engine system capable of supporting thousands of hosts and providing millions of IOPS (Input/Output Operations Per Second). The E20-807 Exam tested the candidate's knowledge of these hardware building blocks and their performance characteristics.
At the heart of the VMAX3 and VMAX All Flash platforms is the HYPERMAX operating system, a core topic of the E20-807 Exam. HYPERMAX is a true multitasking storage operating system that runs on the VMAX director engines. A key feature of HYPERMAX is that it includes a built-in hypervisor. This allows the system to run other data services, such as file gateways or replication managers, as self-contained virtual machines directly on the storage array's hardware. This embedding of services simplified the data center by reducing the need for external physical servers. HYPERMAX manages all the core functions of the array, including the global cache, data placement, and all the data services. It is designed for extreme resilience, with all its services running in an active-active, fully redundant configuration across the director engines. The E20-807 Exam required a detailed understanding of how HYPERMAX managed the system's resources to ensure high performance and availability. The architecture that HYPERMAX manages is known as the Dynamic Virtual Matrix. This is the interconnect that allows all the director engines in a multi-engine VMAX to communicate with each other and to access the shared global cache. It is a high-speed, low-latency, fully redundant fabric that allows the system to scale out by adding more engines. The Dynamic Virtual Matrix ensures that any director can access any piece of data in the cache, regardless of which engine is physically connected to the host, a fundamental architectural concept for the E20-807 Exam.
The E20-807 Exam required a detailed knowledge of the physical hardware components that make up a VMAX array. The fundamental building block is the VMAX Engine. An engine is a self-contained unit that includes two redundant director boards, front-end host connectivity ports, back-end storage connectivity, and a portion of the system's global cache memory. All components within an engine are fully redundant. In VMAX All Flash, this building block was referred to as a V-Brick. Each director board within an engine is essentially a powerful server with multiple processor cores and a large amount of memory. The directors run the HYPERMAX operating system and are responsible for all the data processing. The E20-807 Exam expected a candidate to know the different types of directors and their capabilities. For example, some directors were dedicated to front-end connectivity, while others handled back-end storage tasks and data services. The storage itself is housed in Drive Array Enclosures (DAEs). These are shelves of disk drives that are connected to the back-end ports of the directors. In a VMAX All Flash array, these DAEs would be populated with flash drives. The physical layout of the system, including the cabling between the engines and the DAEs, is designed for full redundancy. The E20-807 Exam required an understanding of this physical topology and how it contributes to the array's high availability.
The global cache is one of the most critical components of the VMAX architecture and a key topic for the E20-807 Exam. The VMAX is a cache-centric array, meaning that almost all read and write operations are serviced directly from the large DRAM cache that is distributed across all the director engines. This is what gives the VMAX its extremely high performance and low latency. The total size of the global cache can be multiple terabytes in a large system. When a host writes data to the VMAX, the data is first written to the cache of two different director boards simultaneously, a process known as cache mirroring. Once the data is safely in the mirrored cache, the VMAX sends an acknowledgement back to the host, completing the write operation from the host's perspective. This write-back cache mechanism makes write operations incredibly fast. The E20-807 Exam requires a deep understanding of this write process. The HYPERMAX operating system then "de-stages" this new data from the cache to the physical flash drives in the background. For a read operation, if the requested data is already in the cache (a "read hit"), it is served directly to the host at memory speed. If the data is not in the cache (a "read miss"), the VMAX will read it from the flash drives, place it in the cache, and then send it to the host. Advanced algorithms are used to proactively pre-fetch data into the cache to maximize the read hit rate.
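The write-back, mirrored-cache flow described above can be sketched as a small conceptual model. This is an illustration of the idea only, not the HYPERMAX implementation; all class and variable names here are invented.

```python
# Conceptual sketch of a mirrored write-back cache (illustrative only --
# not the actual HYPERMAX data path; names are invented).

class Director:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # block address -> data held in DRAM


class MirroredCache:
    def __init__(self, directors):
        self.directors = directors
        self.backend = {}        # stand-in for the flash drives
        self.dirty = set()       # blocks written but not yet de-staged

    def write(self, addr, data):
        # Write lands in the cache of two different directors
        # before the host receives its acknowledgement.
        for d in self.directors[:2]:
            d.cache[addr] = data
        self.dirty.add(addr)
        return "ack"             # the host sees the write as complete here

    def destage(self):
        # Background task: flush dirty blocks from cache to flash.
        for addr in list(self.dirty):
            self.backend[addr] = self.directors[0].cache[addr]
            self.dirty.discard(addr)

    def read(self, addr):
        for d in self.directors:
            if addr in d.cache:              # read hit: served at memory speed
                return d.cache[addr]
        data = self.backend[addr]            # read miss: fetch from flash...
        self.directors[0].cache[addr] = data # ...and populate the cache
        return data


cache = MirroredCache([Director("dir-1A"), Director("dir-2A")])
cache.write(100, b"payload")
cache.destage()
print(cache.read(100))
```

The key behavior the sketch captures is that the host acknowledgement depends only on the mirrored cache write, while de-staging to flash happens asynchronously.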
A major paradigm shift introduced with the VMAX3 and a central topic for the E20-807 Exam was the move from traditional LUN-centric provisioning to Service Level Objective (SLO) based provisioning. In the old model, a storage administrator would have to manually create RAID groups, select specific disk types, and build LUNs with specific performance characteristics. This was a complex and time-consuming process that required deep knowledge of the array's internal workings. SLO-based provisioning completely automates and simplifies this process. The administrator no longer has to worry about the underlying physical disks. Instead, they simply define the performance requirement for an application by selecting from a predefined menu of service levels, such as Diamond, Platinum, Gold, Silver, or Bronze. Each of these service levels is associated with a specific performance target, typically measured in terms of average response time. When an application is provisioned with a specific SLO, the HYPERMAX operating system takes full responsibility for automatically placing the data on the correct storage resources and dynamically managing it to ensure that the performance target is met. This automated, policy-based approach dramatically simplified storage administration and was a core concept that a candidate for the E20-807 Exam had to master.
Preparing for an expert-level certification like the E20-807 Exam requires a significant commitment of time and effort. The journey should begin with a thorough review of the official exam description and topics. This document is the definitive guide to the content of the exam. It will outline the major domains, such as architecture, management, and local and remote replication, and the specific skills and knowledge that are tested in each domain. Because this is an expert-level exam, it is assumed that you already have a strong foundation in storage networking and have significant hands-on experience with VMAX or a similar enterprise storage platform. Your study plan should focus on filling in the gaps in your knowledge and gaining a deeper understanding of the advanced features and the "why" behind the architecture. Official training courses, while an investment, are highly recommended for this level of certification. Your preparation must include extensive hands-on practice with the Unisphere GUI and, most importantly, the Solutions Enabler (SYMCLI) command line interface. The E20-807 Exam is known for testing detailed knowledge of SYMCLI commands and their syntax. Access to a lab environment, either physical or virtual, is essential for practicing the complex configuration and management tasks covered in the exam, such as setting up SRDF replication or managing SnapVX snapshots.
The primary graphical management tool for the VMAX platform, and a key area of knowledge for the E20-807 Exam, is Unisphere. Unisphere for VMAX is a web-based management interface that provides a centralized point of control for all aspects of the storage array. It offers a dashboard-centric view that gives administrators at-a-glance visibility into the health, capacity, and performance of their entire VMAX environment, even across multiple arrays. Unisphere is designed to simplify the complex task of managing a high-end storage system. It provides intuitive, wizard-driven workflows for all the common administrative tasks, such as provisioning new storage, creating snapshots, and setting up remote replication. The E20-807 Exam requires candidates to be proficient in navigating the Unisphere interface and using it to perform these core tasks. This includes understanding the different dashboards for storage, performance, and data protection. A major focus of Unisphere, and a key theme for the E20-807 Exam, is its integration with the Service Level Objective (SLO) provisioning model. Unisphere is the primary interface for creating and managing storage groups and associating them with the desired SLO. It also provides detailed performance monitoring and reporting capabilities that allow an administrator to verify that the VMAX is meeting the performance targets defined by the SLOs.
While Unisphere provides a user-friendly graphical interface, the E20-807 Exam places a very strong emphasis on the command line interface, Solutions Enabler, which is more commonly known as SYMCLI. SYMCLI is an extremely powerful and scriptable interface that provides access to every function of the VMAX array. For many experienced administrators and for automation purposes, SYMCLI is the preferred management tool. The exam requires a deep and practical knowledge of SYMCLI commands and syntax. SYMCLI commands, which all start with the prefix sym, are used to manage all aspects of the array, from discovering the hardware (symcfg list) to provisioning storage (symaccess create) and managing replication (symsnapvx establish). The E20-807 Exam is known for its detailed, scenario-based questions that require the candidate to choose the correct SYMCLI command and its options to accomplish a specific task. A key concept in SYMCLI is the use of device groups and composite groups to manage collections of storage devices. This allows an administrator to perform an operation, such as creating a snapshot or failing over a replication group, on hundreds of devices with a single command. This ability to manage storage at scale is a core competency for a VMAX expert and a major focus of the E20-807 Exam.
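The value of device groups is that one group-level command fans out to every member device. The toy model below illustrates that idea in Python; the class, group name, and device IDs are invented for the example and are not SYMCLI objects.

```python
# Illustrative sketch of the device-group concept behind SYMCLI: a single
# group-level operation acts on every device in the group, the way one
# 'symsnapvx ... establish' covers an entire storage group. Names invented.

class DeviceGroup:
    def __init__(self, name):
        self.name = name
        self.devices = []

    def add(self, *devs):
        self.devices.extend(devs)

    def establish_snapshot(self, snap_name):
        # One call, many devices: this is what makes group-based
        # management practical at the scale of hundreds of volumes.
        return [f"snapshot {snap_name} established on {d}" for d in self.devices]


dg = DeviceGroup("prod_db")
dg.add("0A1B", "0A1C", "0A1D")
for line in dg.establish_snapshot("hourly"):
    print(line)
```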
The foundation of the automated provisioning model in VMAX3 and VMAX All Flash is the Storage Resource Pool, or SRP. The E20-807 Exam requires a thorough understanding of this concept. An SRP is a large, consolidated pool of raw storage capacity that is aggregated from all the physical drives in the array. It is the single source from which all virtual or thin storage volumes are created. This is a significant departure from the traditional model of creating many small, isolated RAID groups. The SRP abstracts the physical disks from the storage administrator. When provisioning storage, the administrator no longer needs to worry about which specific drives the data will be placed on. They simply request capacity from the SRP. The HYPERMAX operating system then takes care of all the data placement and RAID protection automatically in the background. The default RAID protection level used within the SRP is RAID 5. In a VMAX3 hybrid array, the SRP would contain multiple tiers of storage, based on the type of drive (e.g., Flash, SAS, NL-SAS). The Fully Automated Storage Tiering (FAST) feature would then dynamically move data between these tiers based on its activity level to optimize performance and cost. In a VMAX All Flash array, the SRP consists of a single tier of flash drives. The E20-807 Exam requires a solid understanding of the SRP as the basis for SLO provisioning.
The E20-807 Exam requires a deep understanding of thin provisioning, which is referred to as Virtual Provisioning in the VMAX context. Thin provisioning is a technology that allows you to present a storage volume (a LUN, or thin device) to a host that is much larger than the amount of physical storage that has actually been allocated to it from the Storage Resource Pool. When a thin device is created, it consumes almost no space from the SRP. Physical storage is only allocated from the SRP on-demand, in small chunks called thin device extents, as the host application actually writes new data to the volume. This "just-in-time" allocation of storage has significant benefits. It improves storage utilization by eliminating the wasted space that is common with traditional "thick" provisioning, where all the space is allocated upfront. The E20-807 Exam emphasizes these benefits. Thin provisioning is the default and recommended method for all storage provisioning on the VMAX3 and VMAX All Flash platforms. The management of the underlying thin pools and the allocation of extents is completely automated by the HYPERMAX OS. The E20-807 Exam requires an understanding of the key concepts, including how to monitor the subscription rate of the SRP to ensure that you do not over-provision the available capacity.
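The on-demand allocation behavior can be made concrete with a minimal model: a thin device advertises a large size but consumes SRP capacity only as extents are first written. The extent size and all names below are illustrative, not VMAX internals.

```python
# Minimal model of virtual (thin) provisioning. A thin device advertises a
# large capacity but draws extents from the SRP only on first write.
# Extent size and names are illustrative, not VMAX internals.

EXTENT = 1024 * 1024  # illustrative 1 MiB extent


class StorageResourcePool:
    def __init__(self, capacity_extents):
        self.capacity = capacity_extents
        self.allocated = 0

    def allocate_extent(self):
        # Over-subscription is safe only while writes stay below this limit.
        if self.allocated >= self.capacity:
            raise RuntimeError("SRP exhausted: over-subscription hit the physical limit")
        self.allocated += 1


class ThinDevice:
    def __init__(self, srp, advertised_bytes):
        self.srp = srp
        self.advertised = advertised_bytes
        self.extents = set()   # extent numbers that are physically backed

    def write(self, offset, length):
        # Allocate extents on demand, only for regions actually written.
        for ext in range(offset // EXTENT, (offset + length - 1) // EXTENT + 1):
            if ext not in self.extents:
                self.srp.allocate_extent()
                self.extents.add(ext)


srp = StorageResourcePool(capacity_extents=100)
tdev = ThinDevice(srp, advertised_bytes=1024 * EXTENT)  # 1 GiB advertised
tdev.write(0, 3 * EXTENT)   # host writes 3 MiB
print(srp.allocated)        # 3 -- only three extents consumed, not 1024
```

The same model also shows why monitoring the subscription rate matters: the `RuntimeError` branch is exactly the condition an administrator must never let the real SRP reach.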
The E20-807 Exam requires mastery of the Service Level Objective (SLO) based provisioning model. As introduced earlier, this is a policy-based approach that automates and simplifies storage provisioning. The process begins with the administrator creating a Storage Group. A Storage Group is a container for a set of storage volumes that are all associated with a single application or host. The administrator then associates this Storage Group with one of the predefined SLOs (e.g., Diamond, Platinum, Gold). This single action tells the VMAX everything it needs to know about the performance requirements for that application. The E20-807 Exam expects you to know the typical response time targets for each of the SLO levels. For example, the Diamond SLO is designed for the most mission-critical applications and has a sub-millisecond response time target. Once the Storage Group is associated with an SLO, the HYPERMAX operating system takes over. It uses its internal intelligence to manage the data placement and I/O prioritization to ensure that the SLO's performance target is consistently met. This includes managing the cache residency of the data and, in a hybrid array, placing the most active data on the fastest (flash) tier of storage. This automated, workload-aware management is a core concept for the E20-807 Exam.
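The provisioning contract can be expressed in a few lines: the administrator supplies only a storage group name and an SLO, and the array owns everything below that. This is a sketch; the response-time figures are commonly cited approximations for illustration, and the class is invented, not a VMAX API.

```python
# Sketch of SLO-based provisioning: a storage group is tagged with a service
# level and the array owns data placement. The millisecond targets below are
# illustrative approximations, not official specifications.

SLO_TARGETS_MS = {
    "Diamond": 0.8,    # sub-millisecond: most mission-critical workloads
    "Platinum": 3.0,
    "Gold": 5.0,
    "Silver": 8.0,
    "Bronze": 14.0,
}


class StorageGroup:
    def __init__(self, name, slo):
        if slo not in SLO_TARGETS_MS:
            raise ValueError(f"unknown SLO: {slo}")
        self.name = name
        self.slo = slo

    @property
    def target_ms(self):
        return SLO_TARGETS_MS[self.slo]


sg = StorageGroup("oracle_prod_sg", "Diamond")
print(sg.target_ms)   # 0.8 -- from here on, meeting the target is the array's job
```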
The Storage Group is the central object for managing application storage in the SLO-based model. The E20-807 Exam requires proficiency in creating and managing these groups using both Unisphere and SYMCLI. A Storage Group can be a standalone group, or it can be part of a parent-child hierarchy. This allows for very granular control. For example, you could have a parent Storage Group for a large database application, and then create child Storage Groups within it for the database logs, the data files, and the index files. This hierarchical structure is powerful because you can apply different SLOs to the different child groups. You might assign the Diamond SLO to the write-intensive database log volumes, and the Gold SLO to the less critical data file volumes. The E20-807 Exam requires an understanding of how to use these cascaded storage groups to align the storage performance with the specific needs of the different components of an application. Managing a Storage Group involves adding or removing volumes, changing its associated SLO, or associating it with a host. All of these operations are simple, policy-based actions. The E20-807 Exam will test your ability to perform these tasks using the appropriate Unisphere workflows or SYMCLI commands.
Once storage has been provisioned in a Storage Group, it must be made available to a host server. The E20-807 Exam covers the process of managing host connectivity, which is known as masking. Masking is the process that controls which hosts are allowed to see which storage volumes. It is a critical security function that prevents a host from accidentally or maliciously accessing another host's data. On the VMAX, this process is simplified and automated through the use of Auto-provisioning Groups. An Auto-provisioning Group is a container that brings together the three key elements of storage provisioning: the hosts that need access (an Initiator Group), the storage they need access to (a Storage Group), and the front-end director ports they will connect through (a Port Group). The E20-807 Exam requires a deep understanding of these three components. By creating a single Auto-provisioning Group, also known as a Masking View, an administrator can provision storage to a host or a cluster of hosts in a single, simple operation. You create the view, add the initiator, port, and storage groups, and the VMAX automatically creates all the necessary masking records. This is a much simpler and less error-prone process than manually creating individual masking records, a key concept for the E20-807 Exam.
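The three-way relationship a masking view enforces can be modeled directly: a host sees a volume only when its initiator, the front-end port, and the volume all belong to the same view. The WWN, port, and device names below are invented for the example.

```python
# Sketch of an auto-provisioning (masking) view: access exists only where an
# initiator group, a port group, and a storage group intersect. All names
# (WWNs, ports, device IDs) are invented for illustration.

class MaskingView:
    def __init__(self, name, initiator_group, port_group, storage_group):
        self.name = name
        self.initiators = set(initiator_group)   # host HBA WWNs
        self.ports = set(port_group)             # front-end director ports
        self.volumes = set(storage_group)        # device IDs

    def can_access(self, initiator_wwn, port, volume):
        # A host sees a volume only if all three memberships line up.
        return (initiator_wwn in self.initiators
                and port in self.ports
                and volume in self.volumes)


view = MaskingView(
    "esx_cluster_mv",
    initiator_group={"10:00:00:00:c9:aa:bb:01"},
    port_group={"FA-1D:4"},
    storage_group={"0123", "0124"},
)
print(view.can_access("10:00:00:00:c9:aa:bb:01", "FA-1D:4", "0123"))  # True
print(view.can_access("10:00:00:00:c9:ff:ff:99", "FA-1D:4", "0123"))  # False
```

The second call fails because the initiator is not in the view, which is precisely the security property masking provides: an unknown HBA cannot reach another host's volumes.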
A VMAX expert must be able to continuously monitor the array to ensure it is healthy and performing optimally. The E20-807 Exam covers the tools and metrics used for this monitoring. Unisphere is the primary tool for real-time and historical performance analysis. It provides detailed dashboards that show key performance indicators (KPIs) for the entire array, for individual storage groups, and even for individual directors and ports. The exam requires familiarity with the most important performance metrics. These include IOPS (Input/Output Operations Per Second), which measures the number of I/O requests the array is handling; throughput (or bandwidth), which is measured in megabytes per second; and response time (or latency), which is measured in milliseconds and is the most important indicator of the performance that the application is experiencing. The E20-807 Exam also expects an administrator to know how to monitor the system to ensure that it is meeting its Service Level Objectives. Unisphere provides a specific SLO compliance dashboard that shows how well each Storage Group is performing relative to its SLO target. If a storage group is not meeting its SLO, the administrator can use the performance analysis tools to drill down and identify the cause of the contention, a critical skill for a VMAX expert.
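The SLO compliance check described above reduces to comparing observed average response time against the target. A minimal sketch, with invented sample data and a hypothetical Gold target of 5 ms:

```python
# Sketch of SLO compliance checking: compare observed average response time
# for a storage group against its target. Samples and target are invented.

def slo_compliance(samples_ms, target_ms):
    """Return (average_ms, compliant) for a list of response-time samples."""
    avg = sum(samples_ms) / len(samples_ms)
    return avg, avg <= target_ms


# Hypothetical hourly response-time samples for a Gold (5 ms target) group:
samples = [3.2, 4.1, 4.8, 6.0, 3.9]
avg, ok = slo_compliance(samples, target_ms=5.0)
print(f"avg={avg:.2f} ms, compliant={ok}")   # avg=4.40 ms, compliant=True
```

Note that one sample (6.0 ms) exceeds the target yet the group is still compliant, because the check is against the average; in practice this is why an administrator drills into per-interval data when chasing intermittent contention.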
Data protection is a critical function of any enterprise storage array, and the E20-807 Exam dedicates a significant portion of its objectives to the VMAX's native replication capabilities. The suite of software products that provides local replication on a VMAX is called TimeFinder. Local replication is the process of creating a copy of a set of data within the same physical storage array. These copies can be used for a wide variety of business purposes, including backups, application testing and development, and data analytics. The E20-807 Exam focuses on the modern implementation of TimeFinder that was introduced with the HYPERMAX OS, which is called SnapVX. SnapVX is a highly advanced and efficient snapshot technology that is integrated directly into the core of the operating system. It was designed to be extremely space-efficient and to have a minimal impact on the performance of the production application. In addition to the snapshot capabilities of SnapVX, the TimeFinder family also includes TimeFinder Clone, which provides the ability to create full, independent, point-in-time copies of data. The E20-807 Exam requires a deep, technical understanding of both SnapVX and Clone, including their underlying mechanisms, their use cases, and the Unisphere and SYMCLI commands used to manage them. A VMAX expert must be proficient in using these tools to create robust data protection and data repurposing solutions.
The E20-807 Exam requires a detailed understanding of the architecture of TimeFinder SnapVX. SnapVX is a snapshot technology that uses a redirect-on-write mechanism. When you take a snapshot of a source volume, the VMAX does not immediately create a copy of the data. Instead, it simply creates a set of pointers that reference the original data blocks on the source volume. This process is nearly instantaneous and consumes almost no additional storage space. The snapshot only begins to consume space when the data on the source volume is changed. When a host wants to write a new block of data to the source volume, the VMAX redirects this new write to a new location in the Storage Resource Pool. It then updates the pointer for the source volume to point to this new location. The original data block is left untouched, and the snapshot's pointers continue to point to it. This is the redirect-on-write process, a core concept for the E20-807 Exam. A single source volume can have up to 256 snapshots associated with it, all sharing the same backend storage pool for their changed data. This architecture is extremely efficient and allows for the creation of frequent, low-impact snapshots, which is ideal for providing a very granular recovery point objective for operational recovery. The E20-807 Exam expects a candidate to be able to explain this underlying mechanism.
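The redirect-on-write mechanism is easy to see in a toy model: taking a snapshot copies only a pointer table, and a later host write is redirected to a fresh pool location while the snapshot keeps pointing at the untouched original block. This is a conceptual illustration, not the HYPERMAX data structures.

```python
# Toy model of SnapVX redirect-on-write. A snapshot is a copy of the pointer
# table; new writes go to fresh pool locations and the source repoints,
# leaving the original block pinned by the snapshot. Conceptual only.

class Pool:
    def __init__(self):
        self.blocks = {}
        self.next_loc = 0

    def store(self, data):
        loc = self.next_loc
        self.blocks[loc] = data
        self.next_loc += 1
        return loc


class Volume:
    def __init__(self, pool):
        self.pool = pool
        self.pointers = {}          # logical block -> pool location

    def write(self, lba, data):
        # Redirect-on-write: always store to a NEW location, then repoint.
        self.pointers[lba] = self.pool.store(data)

    def read(self, lba):
        return self.pool.blocks[self.pointers[lba]]


def take_snapshot(volume):
    # Near-instant: just copy the pointer table; no data moves.
    return dict(volume.pointers)


pool = Pool()
src = Volume(pool)
src.write(0, "v1")
snap = take_snapshot(src)        # snapshot pins the "v1" block
src.write(0, "v2")               # redirected to a new pool location
print(src.read(0))               # v2
print(pool.blocks[snap[0]])      # v1 -- original block untouched
```

Because `take_snapshot` copies only pointers, its cost is independent of the volume size, which is why frequent, low-impact snapshots are practical.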
The practical application of SnapVX is a key focus of the E20-807 Exam. The primary object for managing snapshots is the Storage Group. All snapshot operations are performed at the level of the Storage Group, which ensures that you get a consistent, point-in-time copy of all the volumes that make up an application. To create a snapshot, you simply select the source Storage Group and issue the snapshot command, giving the snapshot a unique name. This can be done easily through a wizard in Unisphere or with a single SYMCLI command, such as symsnapvx -sg MyStorageGroup -name MySnapshot establish. The E20-807 Exam requires proficiency in this command. Once created, the snapshot exists as a point-in-time image of the source data. You can view a list of all the snapshots that exist for a given Storage Group and see their creation time and how much space they are consuming. Managing the lifecycle of snapshots is also an important task. As new snapshots are created, older ones may no longer be needed. The E20-807 Exam covers the process of terminating or deleting a snapshot. This is done by specifying the source Storage Group and the name of the snapshot to be terminated. The VMAX will then release the storage space that was being held by that snapshot in the background, a key management task for a VMAX expert.
A snapshot on its own is a point-in-time image, but it is not directly accessible to a host. To use the data in a snapshot, for example, for a backup or for application testing, you must "link" it to a host. The E20-807 Exam requires a deep understanding of the linking process. When you link a snapshot, the VMAX creates a new set of target volumes and presents the snapshot's point-in-time data through these volumes. The target volumes are then masked to a host, just like any other storage volume. The host will see the target volumes as standard, read-write LUNs that contain a perfect copy of the source data as it existed at the moment the snapshot was taken. This allows a backup server to mount the volumes and perform a backup without impacting the live production application. The E20-807 Exam covers the commands and workflows for this linking process. After the task is complete, for example, after the backup has finished, the snapshot should be "unlinked." The unlink operation removes the target volumes and severs the connection between the snapshot and the host. The snapshot itself remains as a point-in-time image. This process of linking and unlinking provides a very flexible and efficient way to repurpose data for a variety of use cases, a core concept for the E20-807 Exam.
While SnapVX is ideal for space-efficient, point-in-time images, there are some use cases that require a full, independent, physical copy of the data. The E20-807 Exam covers TimeFinder Clone for these scenarios. A Clone session creates a full-copy replica of a source volume onto a target volume of the same size. Unlike a snapshot, a clone is not a pointer-based image; it is a true bit-for-bit copy of the data. The process begins by creating a session between the source and target devices. When the session is activated, the VMAX begins a background copy process to synchronize the data from the source to the target. Once the initial synchronization is complete, the clone is a fully independent volume. It does not share any backend storage with the source. This makes it suitable for use cases where you need a long-term, independent copy of the data for development or for offloading intensive reporting workloads from the production array. The E20-807 Exam requires an understanding of the differences between a snapshot and a clone and when to use each technology. A clone consumes much more storage space than a snapshot but provides a fully independent copy. The management of clone sessions, including creating, activating, and splitting the sessions, is a key skill for a VMAX expert.
For more advanced data protection requirements, such as continuous data protection (CDP), the VMAX can be integrated with another technology called RecoverPoint. The E20-807 Exam requires a high-level awareness of this integration. RecoverPoint is a separate appliance-based solution that provides CDP, which means it captures every single write that occurs on the production volumes. This allows for recovery to any point in time, not just to the specific times when snapshots were taken. The integration with VMAX allows RecoverPoint to use the VMAX's built-in splitter technology to get a copy of all the write I/O. RecoverPoint then stores this I/O in a journal. For recovery, an administrator can simply select a specific point in time from the journal, and RecoverPoint will automatically roll the data back to that exact moment. This provides an extremely granular recovery point objective (RPO) of near-zero. While the detailed management of RecoverPoint is outside the scope of the E20-807 Exam, a VMAX expert is expected to understand what it is and how it integrates with the VMAX to provide advanced CDP capabilities for mission-critical applications that cannot tolerate any data loss.
A VMAX expert must be proficient in managing all aspects of local replication using both Unisphere and SYMCLI. The E20-807 Exam will test this proficiency with practical, scenario-based questions. Unisphere provides a dedicated Data Protection dashboard and a set of intuitive wizards for managing SnapVX and TimeFinder Clone. From Unisphere, an administrator can easily create and manage snapshot policies to automate the creation and retention of snapshots on a predefined schedule. For example, you could create a policy to take a snapshot of a critical application's Storage Group every hour and to keep the last 24 hourly snapshots. Unisphere will automatically manage this entire process. It also provides a simple, graphical interface for linking and unlinking snapshots for backup or testing purposes. The E20-807 Exam expects familiarity with these automated policy management capabilities. While Unisphere is excellent for routine tasks, SYMCLI is essential for advanced scripting and for managing very large environments. The symsnapvx and symclone command sets provide complete control over all aspects of local replication. The E20-807 Exam requires a deep knowledge of these commands, including the syntax for establishing, terminating, linking, and unlinking replication sessions on storage groups.
The E20-807 Exam is not just about knowing the technology, but also about knowing when and why to use it. Local replication with TimeFinder has a wide range of use cases that are critical for business operations. The most common use case is operational recovery. If a logical data corruption event occurs, such as a user accidentally deleting a file or a database becoming corrupted, a SnapVX snapshot can be used to instantly restore the data to a point in time just before the corruption occurred. Another major use case is backup and recovery. By linking a snapshot to a backup server, you can perform a full backup of the production data without any performance impact on the production application. This is a much more efficient and less disruptive method than trying to run a backup agent directly on the production server. This is often referred to as off-host backup. Data repurposing is another key driver for local replication. A snapshot or a clone can be created and provided to the application development and test teams. This gives them a recent, realistic copy of the production data to use for testing new application versions, which leads to higher quality software. Similarly, a copy can be used to run data warehouse or business analytics queries without impacting the performance of the primary production database. The E20-807 Exam requires an understanding of all these critical business use cases.
While local replication with TimeFinder protects against data corruption and is used for operational recovery, it does not protect against a complete site disaster, such as a fire or a flood. For this, you need remote replication. The E20-807 Exam covers the Symmetrix Remote Data Facility (SRDF) in extreme detail. SRDF is the gold standard for storage-based remote replication and is a core feature of the VMAX platform. It provides the ability to replicate data from a VMAX array in one data center to another VMAX array in a separate geographic location. SRDF is a host-independent, array-based replication solution, meaning it is completely transparent to the application servers connected to the array. The replication is managed entirely by the HYPERMAX operating system on the VMAX arrays themselves, which provides a robust, high-performance replication solution that works with any host operating system or application. The E20-807 Exam requires a deep, architectural understanding of SRDF. The fundamental components of an SRDF configuration are a source device (called the R1) on the primary VMAX array and a target device (called the R2) on the secondary VMAX array. These two devices are paired together, and SRDF ensures that data written to the R1 is replicated to the R2. Device pairs are organized into SRDF groups, which define the replication relationship and link resources between the two arrays; consistency groups can then be layered on top to guarantee that all of an application's devices are replicated together in a dependent-write-consistent state.
The E20-807 Exam covers the different modes of SRDF operation. The first and most robust of these is SRDF Synchronous mode, or SRDF/S. In synchronous mode, when a host writes data to the R1 device on the primary array, the VMAX does not send the acknowledgement back to the host immediately. Instead, it first sends the write over the network link to the secondary VMAX array. The secondary array then writes the data to its cache and sends an acknowledgement back to the primary array. Only after the primary array receives this acknowledgement from the secondary array will it send the final acknowledgement back to the host application. This process ensures that before the host is notified that its write is complete, the data is safely stored in the cache of both the primary and the secondary VMAX arrays. The E20-807 Exam requires a deep understanding of this write I/O path. The major benefit of SRDF/S is that it provides a zero data loss recovery point objective (RPO). Because the writes are committed in both locations before being acknowledged, in the event of a disaster at the primary site, there is a guarantee that no data will be lost. The secondary site has an exact, up-to-the-millisecond copy of the data. The trade-off is that this mode is limited by distance due to the latency of the network link, as every write operation has to wait for a round trip to the remote site.
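The SRDF/S write ordering can be summarized in a short Python sketch. The classes here are hypothetical models for illustration, not the actual HYPERMAX implementation:

```python
class Array:
    """Toy model of one VMAX array's write cache."""
    def __init__(self, name):
        self.name = name
        self.cache = {}          # simplified write cache: LBA -> data

    def commit(self, lba, data):
        self.cache[lba] = data   # data is now safe in this array's cache

def srdf_s_write(primary, secondary, lba, data):
    """SRDF/S: the host's write is acknowledged only after BOTH arrays
    hold the data, which is what gives the zero-data-loss RPO."""
    primary.commit(lba, data)     # 1. write lands in primary cache
    secondary.commit(lba, data)   # 2. sent over the SRDF link and
                                  #    committed in secondary cache
    return "ack"                  # 3. only now is the host acknowledged

r1, r2 = Array("R1"), Array("R2")
srdf_s_write(r1, r2, lba=100, data=b"payload")
assert r1.cache[100] == r2.cache[100]   # both sites hold identical data
```

The latency cost is visible in step 2: every host write pays a full round trip to the remote array, which is why SRDF/S is distance-limited.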
For disaster recovery over longer distances where the network latency makes synchronous replication impractical, the E20-807 Exam covers SRDF Asynchronous mode, or SRDF/A. In asynchronous mode, when a host writes data to the R1 device, the primary VMAX acknowledges the write immediately after it is saved to the local cache. The replication of the data to the secondary site happens in the background, independently of the host I/O. The primary VMAX collects a group of writes that have occurred over a short period of time (a "delta set") and then transmits this entire group to the secondary site as a single, consistent batch. This mode decouples the host application's performance from the network latency, allowing for replication over virtually any distance. The E20-807 Exam requires a candidate to understand this delta set-based replication mechanism. The trade-off with SRDF/A is that it does not provide a zero RPO. There will always be a small amount of data that has been written and acknowledged at the primary site but has not yet been transmitted to the secondary site. This means that in a disaster, there is the potential for a small amount of data loss. However, SRDF/A is designed to maintain a very low RPO, typically measured in seconds, while still providing full data consistency at the remote site.
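A minimal sketch of the delta-set mechanism, again using hypothetical Python classes rather than anything from the real product:

```python
class SrdfAsync:
    """Toy model of SRDF/A delta sets: host writes are acknowledged at the
    primary immediately; a whole batch (the "delta set") is later shipped
    to the remote array as one consistent unit."""

    def __init__(self):
        self.primary = {}
        self.secondary = {}
        self.capture_set = {}    # writes collected since the last cycle

    def host_write(self, lba, data):
        self.primary[lba] = data       # committed locally ...
        self.capture_set[lba] = data   # ... and remembered for the next cycle
        return "ack"                   # host acked without waiting on the link

    def cycle_switch(self):
        # Transmit the whole delta set; the remote image jumps from one
        # consistent point in time to the next.
        self.secondary.update(self.capture_set)
        self.capture_set = {}

a = SrdfAsync()
a.host_write(1, "A")
a.host_write(2, "B")
assert a.secondary == {}               # before the cycle: non-zero RPO
a.cycle_switch()
assert a.secondary == {1: "A", 2: "B"} # after: a consistent remote image
```

The exposure window is whatever sits in the capture set when disaster strikes, which is why SRDF/A's RPO is small but never zero.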
A more advanced and powerful application of SRDF, and a key topic for the E20-807 Exam, is SRDF Metro. SRDF Metro is a solution designed to provide active-active, high-availability data access across two data centers that are located within a metropolitan area (typically less than 100 km apart). In an SRDF Metro configuration, the R2 device at the secondary site is not a read-only copy; it is a live, read-write accessible device. The host servers at both data centers can read and write to their local VMAX array simultaneously, and SRDF Metro uses a combination of synchronous replication and advanced lock management to ensure that the data on both arrays is always identical and consistent. The two arrays effectively act as a single, virtual storage device that is stretched across two sites. The E20-807 Exam requires an understanding of this unique active-active capability. This allows for true workload mobility and continuous availability. If one of the data centers fails, the applications can continue to run at the surviving data center without any interruption or data loss. The host's multipathing software will automatically fail over the I/O path to the surviving VMAX array. This provides a zero RPO and a near-zero recovery time objective (RTO), the highest level of business continuity.
The E20-807 Exam requires a VMAX expert to be proficient in the practical aspects of configuring and managing SRDF. The process begins with zoning the SRDF director ports of the two arrays to each other over the SAN or a dedicated replication network. Then, using SYMCLI or Unisphere, you create the device pairs between the source (R1) and target (R2) devices; the target device must be the same size as (or larger than) the source. Once the pairs are created, they should be placed into a consistency group. A consistency group ensures that all the devices that belong to an application are managed as a single entity. This is critical for applications like databases that write to multiple volumes, as it guarantees that the replicated copy of the data at the remote site is always in a consistent, crash-recoverable state. The E20-807 Exam emphasizes the importance of consistency. After the configuration is complete, the initial synchronization of the data from the R1 to the R2 devices must be performed. Once the devices are fully synchronized, the SRDF link is in an active, replicating state. The E20-807 Exam requires knowledge of the commands to create these pairs and groups, and to verify their status, such as symrdf query.
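A typical SYMCLI workflow might look like the following sketch. The array ID, SRDF group number, file contents, and device group name are invented for illustration, and option syntax varies across Solutions Enabler releases:

```shell
# Device pairs are listed in a plain-text file, one "R1-dev R2-dev" per line
# (example pairs.txt contents:  00123 00456)

# Create the R1/R2 pairs in SRDF group 10 and start the initial synchronization
symrdf createpair -sid 000197800123 -rdfg 10 -file pairs.txt -type RDF1 -establish

# Verify the state of the pairs in a device group
symrdf -g ProdDB_dg query
```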
An expert must be able to manage the SRDF environment during a disaster recovery test or a real disaster. The E20-807 Exam covers the key SRDF operations. A failover is the process of making the secondary (R2) devices read-write accessible to the hosts at the disaster recovery site after the primary site has failed. This is a declared disaster operation that allows the business to resume operations at the secondary site. After the primary site has been repaired and is back online, a failback operation is performed. This involves synchronizing the changes that occurred at the DR site back to the original primary site, and then gracefully switching the production workload back to the primary site. The E20-807 Exam requires a detailed understanding of the steps and commands involved in these controlled failover and failback procedures. Other common operations include split and resume. A split operation temporarily suspends the replication between the R1 and R2 devices, making the R2 device read-write accessible for a temporary purpose, such as a DR test. A resume operation will then re-establish the replication and synchronize the changes that occurred while the link was split. The E20-807 Exam tests the ability to perform all these essential lifecycle operations on an SRDF pair.
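In SYMCLI these lifecycle operations map onto the symrdf command. The device group name below is an example, and a real DR runbook would wrap each step in additional verification:

```shell
# Declared disaster: make the R2 devices read-write at the DR site
symrdf -g ProdDB_dg failover

# Primary site repaired: copy changes back and return production to the R1s
symrdf -g ProdDB_dg failback

# DR test: suspend replication and make the R2s temporarily read-write
symrdf -g ProdDB_dg split

# Re-establish replication, resynchronizing changes made during the split
symrdf -g ProdDB_dg establish
```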
The E20-807 Exam also covers more advanced SRDF topologies that can be used to meet complex business requirements. One such topology is SRDF/Star. This is a three-site configuration where a primary site replicates synchronously to a nearby secondary site (for zero data loss protection) and also replicates asynchronously to a third, more distant site (for regional disaster protection). If any one site is lost, SRDF/Star allows replication to be quickly re-established between the two surviving sites, providing a very high level of protection against multiple types of disasters. Another important feature is SRDF consistency group technology (SRDF/CG). A consistency group extends dependent-write consistency across device pairs that may span multiple SRDF groups or even multiple arrays, guaranteeing that the remote image is always in a crash-recoverable state. To protect against logical data corruption as well as site failure, SRDF is commonly combined with TimeFinder at the remote site, so that the remote copy can be "rolled back" to a point in time before the corruption occurred. The E20-807 Exam requires a conceptual understanding of these advanced capabilities and the types of business problems they are designed to solve. An expert-level professional is expected to be able to design a solution that leverages these features to meet even the most demanding recovery point and recovery time objectives.
The technology of SRDF is only one part of a successful disaster recovery solution. The E20-807 Exam emphasizes that this technology must be part of a comprehensive, documented, and regularly tested disaster recovery plan. A VMAX expert is often a key contributor to the development of this plan. The plan should detail the step-by-step procedures for failing over the applications to the disaster recovery site. This includes not just the storage failover steps using SRDF, but also the steps for starting the servers at the DR site, reconfiguring the network, and bringing the applications online. The plan should define the roles and responsibilities of all the team members involved in the recovery process. The E20-807 Exam stresses the importance of this holistic, business-centric view of disaster recovery. Regular testing of the DR plan is absolutely essential to ensure that it will work in a real disaster. This involves performing controlled failover tests to the DR site. These tests validate the technical procedures, the documentation, and the readiness of the staff. The lessons learned from these tests are then used to improve the plan. The E20-807 Exam positions this regular testing as a critical part of the DR lifecycle.
While the VMAX is primarily a block storage array, the E20-807 Exam covers its ability to provide unified storage capabilities through the use of Embedded NAS, or eNAS. The eNAS feature leverages the hypervisor built into the HYPERMAX operating system to run a set of virtual machines directly on the VMAX director engines. These virtual machines, known as Data Movers, run a dedicated operating system that provides rich network attached storage (NAS) functionality. This allows the VMAX to serve files to clients using standard file-sharing protocols like NFS for Unix/Linux clients and SMB/CIFS for Windows clients. The underlying storage for these file systems is carved out from the VMAX's main storage resource pool, just like any block storage device. This provides a unified solution where a single VMAX array can serve both block and file storage needs, which simplifies the infrastructure and management. The E20-807 Exam requires a solid understanding of this unified architecture. The management of the eNAS feature is done through a separate Unisphere for NAS interface or via its own command line. An expert is expected to know how to perform the basic configuration of eNAS, which includes configuring the network interfaces for the virtual Data Movers, creating file systems, and provisioning exports (for NFS) or shares (for SMB). The E20-807 Exam tests the conceptual understanding and management of this integrated file service.
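On the eNAS side, basic provisioning is performed with VNX-File-style commands from the Control Station. The pool, file system, Data Mover, client, and share names below are examples, and exact option syntax varies by release:

```shell
# Create a file system from an eNAS storage pool and mount it on a Data Mover
nas_fs -name eng_fs -create size=100G pool=enas_pool
server_mount server_2 eng_fs /eng_fs

# Export it over NFS for Unix/Linux clients ...
server_export server_2 -Protocol nfs -option rw=client1 /eng_fs

# ... and create an SMB/CIFS share for Windows clients
server_export server_2 -Protocol cifs -name eng_share /eng_fs
```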