Passing IT certification exams can be tough, but the right exam prep materials make it manageable. ExamLabs provides 100% real and updated Dell DEA-2TT3 exam dumps, practice test questions, and answers that equip you with the knowledge required to pass the exam. Our Dell DEA-2TT3 exam dumps, practice test questions, and answers are reviewed constantly by IT experts to ensure their validity and to help you pass without putting in hundreds of hours of studying.
The DEA-2TT3 exam is the assessment for the "Associate - Cloud Infrastructure and Services Version 3.0" certification. This credential is part of the Dell EMC Proven Professional program and serves as a foundational certification for anyone looking to build a career in cloud computing and modern data center technologies. The exam is not focused on a specific product but rather on the concepts, principles, and technologies that underpin cloud infrastructure and services. It is designed for students, IT professionals, and managers who need to develop a strong understanding of the cloud ecosystem.
Passing the DEA-2TT3 exam validates that an individual possesses the fundamental knowledge of cloud computing, virtualization, software-defined infrastructure, and the key security and management considerations involved. The exam covers a broad range of topics, from the business drivers of cloud adoption to the technical details of compute, storage, and network virtualization. Earning this associate-level certification is a crucial first step that demonstrates a solid grasp of the modern IT landscape and prepares a candidate for more advanced, specialized roles.
A core concept that provides the context for the DEA-2TT3 exam is digital transformation. Digital transformation is the process by which businesses fundamentally change how they operate and deliver value to their customers by adopting modern digital technologies. It is not just about using new tools; it is about a complete shift in business strategy, culture, and processes, with technology acting as the key enabler. This transformation is driven by the need to be more agile, innovative, and responsive to customer demands in a rapidly changing market.
Cloud computing is the primary engine of digital transformation. It provides the on-demand, scalable, and flexible infrastructure that allows businesses to rapidly develop and deploy new applications, analyze vast amounts of data, and reach their customers in new ways. The DEA-2TT3 exam requires you to understand this business context. The technologies you will be tested on are not just abstract concepts; they are the tools that are enabling this massive shift in the global economy.
The National Institute of Standards and Technology (NIST) defines cloud computing through five essential characteristics. A deep understanding of these characteristics is a fundamental requirement for the DEA-2TT3 exam. The first is on-demand self-service, which means that a user can automatically provision computing resources like server time and network storage as needed, without requiring human interaction with the service provider.
The second is broad network access, meaning capabilities are available over the network and accessed through standard mechanisms. The third is resource pooling, where the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model. The fourth is rapid elasticity, which allows resources to be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward with demand. The final characteristic is measured service, where resource usage is monitored, controlled, and reported, providing transparency for both the provider and consumer.
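To make measured service concrete, here is a minimal Python sketch of a usage meter (the resource names and per-hour rates are invented for illustration; real providers publish their own pricing):

```python
from dataclasses import dataclass, field

# Hypothetical per-hour rates; real providers publish their own pricing.
RATES = {"vm_hours": 0.05, "storage_gb_hours": 0.0001}

@dataclass
class UsageMeter:
    """Conceptual 'measured service': usage is recorded, then billed."""
    usage: dict = field(default_factory=lambda: {k: 0.0 for k in RATES})

    def record(self, resource: str, amount: float) -> None:
        self.usage[resource] += amount

    def invoice(self) -> float:
        return sum(RATES[r] * amt for r, amt in self.usage.items())

meter = UsageMeter()
meter.record("vm_hours", 720)                 # one VM for a month
meter.record("storage_gb_hours", 100 * 720)   # 100 GB for a month
print(f"Monthly charge: ${meter.invoice():.2f}")
```

This transparency works in both directions: the consumer can verify the bill, and the provider can enforce quotas from the same metered data.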
Cloud computing services are typically delivered in one of three service models, and the DEA-2TT3 exam requires you to know the difference between them. The first is Infrastructure as a Service (IaaS). In this model, the cloud provider offers fundamental computing resources such as virtual machines, storage, and networking. The consumer does not manage the underlying infrastructure but has control over the operating systems, storage, and deployed applications.
The second model is Platform as a Service (PaaS). In this model, the provider offers a platform that allows customers to develop, run, and manage applications without the complexity of building and maintaining the underlying infrastructure. This typically includes the operating system, middleware, and database. The final model is Software as a Service (SaaS). Here, the provider offers a complete software application that is delivered over the network, such as email or a CRM system. The consumer simply uses the application.
In addition to the service models, the DEA-2TT3 exam covers the four primary cloud deployment models. A public cloud is one where the cloud infrastructure is provisioned for open use by the general public. It is owned, managed, and operated by a business, academic, or government organization. The major cloud providers offer public cloud services. A private cloud is where the cloud infrastructure is provisioned for exclusive use by a single organization. It can be managed by the organization or a third party and can exist on or off-premises.
A hybrid cloud is a composition of two or more distinct cloud infrastructures (private or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability. A community cloud is a model where the infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns, such as a group of universities or government agencies.
Adopting the cloud is not a simple technical decision; it is a journey that requires careful planning and a clear strategy. The DEA-2TT3 exam expects you to understand the typical phases of this journey. It often begins with a discovery and assessment phase, where the organization inventories its existing applications and infrastructure and evaluates their suitability for the cloud. This is followed by a proof-of-concept phase, where a non-critical application is migrated to the cloud to gain experience and validate the technology.
Once the concept is proven, the organization moves into a migration and modernization phase, where they begin to move more workloads to the cloud. This might involve simply rehosting existing applications or rearchitecting them to take advantage of cloud-native features. A key part of the strategy is defining the governance, security, and financial management models that will be used to control the new cloud environment.
The DEA-2TT3 exam also introduces two major technology trends that are closely linked with cloud computing: Big Data and the Internet of Things (IoT). Big Data refers to the massive volumes of structured and unstructured data that are generated by modern applications and systems. Cloud computing provides the scalable and cost-effective storage and compute resources that are needed to store and analyze these huge datasets, enabling businesses to gain valuable insights.
The Internet of Things (IoT) refers to the vast network of physical devices, vehicles, and other items that are embedded with sensors and software that allow them to connect and exchange data over the internet. These devices generate a constant stream of data that must be collected, processed, and analyzed. Cloud platforms provide the backend infrastructure that is essential for building and managing these large-scale IoT solutions. The DEA-2TT3 exam requires a high-level understanding of these concepts.
The concept of the Software-Defined Data Center, or SDDC, is a central theme of the DEA-2TT3 exam. The SDDC is an architectural approach to IT infrastructure that extends virtualization concepts—such as abstraction, pooling, and automation—to all of the data center's resources. In an SDDC, all elements of the infrastructure, including compute, storage, and networking, are virtualized and delivered as a service.
The entire environment is controlled by a sophisticated management and automation software layer. This allows for the programmatic provisioning and management of the entire infrastructure, making it much more agile, flexible, and efficient than a traditional, hardware-defined data center. The SDDC is the foundational technology that enables a true private cloud, and understanding its components is a core requirement for the DEA-2TT3 exam.
The journey to the SDDC begins with compute virtualization. This is the process of creating a virtual, software-based representation of a physical computer. This was one of the most transformative technologies in the history of IT and is a critical topic for the DEA-2TT3 exam. Compute virtualization allows you to run multiple virtual machines (VMs), each with its own operating system and applications, on a single physical server.
This is made possible by a piece of software called a hypervisor, which is installed on the physical server. The hypervisor is responsible for abstracting the server's physical hardware resources (CPU, memory, networking) and allocating them to the various virtual machines that are running on top of it. This allows for a massive consolidation of servers, leading to significant savings in hardware costs, power, and cooling.
The DEA-2TT3 exam requires you to know the difference between the two main types of hypervisors. A Type 1 hypervisor, also known as a bare-metal hypervisor, is installed directly onto the physical hardware of the host server. There is no underlying operating system. This is the type of hypervisor that is used in enterprise data centers and cloud environments because it is highly efficient and secure.
A Type 2 hypervisor, also known as a hosted hypervisor, is installed as a software application on top of an existing host operating system, such as Windows or macOS. This is the type of hypervisor that is often used on a desktop or laptop computer to run a different operating system in a virtual machine. While it is easier to set up, a Type 2 hypervisor is less performant than a Type 1 hypervisor because it has the additional overhead of the host operating system.
The next pillar of the SDDC is storage virtualization. This is a key concept for the DEA-2TT3 exam. Storage virtualization is the process of pooling the physical storage from multiple storage devices into what appears to be a single, logical storage device that is managed from a central console. This abstraction of the physical storage from the logical representation of that storage provides numerous benefits.
It simplifies management, as administrators no longer need to manage a large number of individual storage arrays. It also allows for more advanced storage features, such as thin provisioning, which allows you to allocate more logical storage to an application than you currently have physically available, and automated storage tiering, which can automatically move data between different types of storage (e.g., fast SSDs and slower HDDs) based on its access patterns.
Software-Defined Storage (SDS) is the practical implementation of storage virtualization and a core component of the SDDC. The DEA-2TT3 exam requires a solid understanding of this concept. In an SDS environment, the software that manages the storage services (the control plane) is completely decoupled from the underlying physical storage hardware (the data plane). This allows the storage services to be delivered by a software layer that can run on any commodity, off-the-shelf server hardware.
This is a major shift from a traditional Storage Area Network (SAN), where the management software is tightly integrated with the proprietary hardware from a single vendor. SDS provides much greater flexibility, reduces hardware costs, and simplifies management through a unified, policy-driven control plane. It is the key to creating an agile and scalable storage infrastructure for a cloud environment.
The final pillar of the SDDC is network virtualization, which is achieved through an architectural approach called Software-Defined Networking, or SDN. This is another critical topic for the DEA-2TT3 exam. Similar to SDS, SDN decouples the network's control plane from its data plane. In a traditional network, the control plane (which makes the decisions about where to forward traffic) and the data plane (which actually forwards the traffic) are combined on each individual switch and router.
In an SDN model, the control plane is centralized in a software-based SDN controller. This controller has a global view of the entire network and can make much more intelligent and dynamic decisions. It can then push these decisions down to the simple data plane devices (the switches), which are only responsible for executing the forwarding instructions. This makes the network much more agile, programmable, and easier to manage.
To truly understand SDN for the DEA-2TT3 exam, you must have a clear grasp of the distinction between the control plane and the data plane. The data plane, also known as the forwarding plane, is the part of the network device that is responsible for the actual work of moving packets from an input port to an output port. It is optimized for high-speed performance.
The control plane is the brains of the device. It is responsible for building the routing tables or the switching tables that the data plane uses to make its forwarding decisions. In a traditional network, every device has its own independent control plane. In an SDN network, the control planes of all the devices are logically centralized into the SDN controller. This centralization is what gives SDN its power and flexibility.
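The split can be illustrated with a short, purely conceptual Python sketch (this is not a real SDN protocol such as OpenFlow; the class names and port labels are invented): the controller holds the global view and computes forwarding rules, while the switches only look up and forward.

```python
class Switch:
    """Data plane: holds a forwarding table and moves packets, nothing more."""
    def __init__(self, name):
        self.name = name
        self.table = {}  # destination -> output port

    def install_rule(self, destination, port):
        self.table[destination] = port

    def forward(self, destination):
        port = self.table.get(destination, "drop")
        print(f"{self.name}: packet to {destination} -> {port}")

class Controller:
    """Control plane: global view of the topology, computes and pushes rules."""
    def __init__(self, switches):
        self.switches = switches

    def program_route(self, destination, path):
        # path is a list of (switch, output_port) hops chosen centrally
        for switch, port in path:
            switch.install_rule(destination, port)

s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller([s1, s2])
ctrl.program_route("10.0.0.5", [(s1, "port2"), (s2, "port1")])
s1.forward("10.0.0.5")  # s1: packet to 10.0.0.5 -> port2
```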
The DEA-2TT3 exam will expect you to be able to articulate the business and technical benefits of a fully software-defined data center. The primary benefit is agility. Because the entire infrastructure is controlled by software, new resources can be provisioned in minutes through automated workflows, rather than the days or weeks it can take to provision physical hardware. This allows the business to respond much more quickly to new opportunities.
Other benefits include increased efficiency, as virtualization allows for much higher utilization of hardware resources, and reduced costs, as the SDDC can be built on commodity, industry-standard hardware rather than expensive proprietary systems. It also simplifies operations, as the entire data center can be managed and automated from a single, unified software interface. The SDDC is the foundation that enables the agility and efficiency of a true cloud operating model.
A key part of the DEA-2TT3 exam is a deep understanding of the compute layer of the cloud infrastructure. As discussed, the foundation of this is compute virtualization, which allows a single physical server to be carved up into multiple, isolated virtual machines. A physical server has its own dedicated CPU, memory, storage, and network interfaces. These resources are managed by a single operating system.
A virtual machine, on the other hand, has virtual hardware. The hypervisor presents a set of virtual CPUs, a certain amount of virtual RAM, and a virtual network card to the guest operating system running inside the VM. The hypervisor is responsible for managing the scheduling of these virtual resources onto the underlying physical hardware. The DEA-2TT3 exam requires you to understand this relationship and the benefits that virtualization brings in terms of resource utilization and workload portability.
To simplify the deployment of cloud infrastructure, the industry has developed new architectural approaches like Converged Infrastructure (CI) and Hyper-Converged Infrastructure (HCI). These are important concepts for the DEA-2TT3 exam. Converged Infrastructure is an approach where a vendor pre-integrates compute, storage, and networking components into a single, pre-validated solution. This simplifies procurement and deployment but still maintains the separate components.
Hyper-Converged Infrastructure takes this a step further. In an HCI system, the compute and storage functions are combined into a single, software-defined platform that runs on commodity server hardware. The storage from all the servers in a cluster is pooled together and managed by a software layer. HCI provides a simple, scalable, building-block approach to deploying a private cloud. The DEA-2TT3 exam expects you to understand the differences and use cases for these modern infrastructure models.
A cloud infrastructure requires a robust and scalable storage foundation. The DEA-2TT3 exam requires a solid understanding of the two primary types of networked storage: Storage Area Networks (SAN) and Network Attached Storage (NAS). A SAN is a dedicated, high-speed network that provides block-level access to storage. When a server connects to a SAN, the storage appears to the server's operating system as if it were a locally attached disk drive. This makes SAN the ideal choice for high-performance, transactional workloads like databases.
A NAS, on the other hand, is a storage device that is connected to a shared network and provides file-level access to storage. When a server or a client connects to a NAS, they access the storage as a shared network folder. NAS is ideal for unstructured data and for easily sharing files among multiple users. The DEA-2TT3 exam will expect you to know the difference between these two technologies and their primary use cases.
Each type of networked storage uses specific protocols for communication. A candidate for the DEA-2TT3 exam should be familiar with the most common of these. The traditional protocol for SANs is Fibre Channel (FC). Fibre Channel is a very high-performance and reliable protocol that runs on its own dedicated network infrastructure, separate from the regular Ethernet LAN. This makes it very fast but also complex and expensive to implement.
A more modern and cost-effective alternative for SANs is iSCSI. iSCSI is a protocol that allows block-level storage commands to be sent over a standard Ethernet network. This allows an organization to build a SAN using the same Ethernet switches and network adapters that they use for their regular data traffic. For NAS, the most common protocol in a virtualized environment is the Network File System (NFS). It is a simple and robust file-sharing protocol.
A third and increasingly important type of storage, and a key topic for the DEA-2TT3 exam, is object storage. Unlike a file system that organizes data in a hierarchical structure of folders, an object storage system manages data as discrete units called objects. Each object consists of the data itself, a variable amount of metadata, and a globally unique identifier. The objects are stored in a flat address space, often referred to as a storage pool.
This flat structure makes object storage incredibly scalable, capable of storing trillions of objects and exabytes of data. It is accessed via a simple API, typically over HTTP. Object storage is the dominant storage paradigm in the public cloud and is ideal for storing massive amounts of unstructured data, such as photos, videos, backups, and log files. The DEA-2TT3 exam requires you to understand the unique characteristics and use cases for this modern storage architecture.
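For example, the S3 API has become a de facto standard for object access over HTTP. A minimal sketch using the real boto3 library follows; the bucket name, object key, and metadata are hypothetical, and valid credentials are assumed to be configured in the environment:

```python
import boto3

s3 = boto3.client("s3")  # credentials come from the environment/config

# Store an object: data + metadata + a unique key in a flat namespace.
s3.put_object(
    Bucket="example-bucket",          # hypothetical bucket name
    Key="backups/2024/app-logs.gz",   # the object's unique identifier
    Body=b"...compressed log data...",
    Metadata={"source-host": "web01"},
)

# Retrieve it again by key; no directory hierarchy is traversed.
obj = s3.get_object(Bucket="example-bucket", Key="backups/2024/app-logs.gz")
print(obj["Metadata"], len(obj["Body"].read()))
```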
As the DEA-2TT3 exam is part of the Dell EMC Proven Professional program, it is important to have a high-level familiarity with the types of infrastructure solutions that Dell EMC provides. This does not mean you need to be a product expert, but you should understand how their products map to the concepts you are learning. For example, you should be aware that Dell EMC offers a market-leading portfolio of converged and hyper-converged infrastructure systems.
You should also be aware that they have a comprehensive portfolio of storage products that includes traditional SAN arrays, modern all-flash arrays, scale-out NAS systems, and object storage platforms. Having this context helps to ground the theoretical concepts of the DEA-2TT3 exam in the real-world products and solutions that are used to build modern cloud infrastructures.
Once a storage system is in place, the administrator must provision storage to the applications and servers that need it. The DEA-2TT3 exam covers the basic concepts of this process. In a traditional model, known as thick provisioning, all the storage capacity for a given volume is allocated upfront, even if it is not yet used. This is simple but can be wasteful of storage space.
A more modern approach is thin provisioning. With thin provisioning, you can create a logical volume that appears to be a certain size, but the physical storage is only allocated as data is actually written to the volume. This allows for much more efficient use of the storage pool. Another key concept is storage tiering, where the storage system can automatically move data between different tiers of storage (e.g., fast, expensive SSDs and slower, cheaper HDDs) based on how frequently it is accessed.
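The following Python sketch models thin provisioning conceptually (the volume names and sizes are invented): logical capacity is promised up front, but physical capacity is consumed only on write, which also shows why an over-subscribed pool must be monitored.

```python
class ThinPool:
    """Conceptual thin-provisioned pool: logical size is promised up front,
    physical blocks are consumed only as data is actually written."""
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.allocated_gb = 0
        self.volumes = {}  # name -> logical size promised to the consumer

    def create_volume(self, name, logical_gb):
        # Nothing is allocated yet; the size is only a promise.
        self.volumes[name] = logical_gb

    def write(self, name, gb):
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: add capacity or reclaim space")
        self.allocated_gb += gb  # physical space is consumed on first write

pool = ThinPool(physical_gb=500)
pool.create_volume("db01", logical_gb=400)   # 400 GB promised
pool.create_volume("web01", logical_gb=400)  # 800 GB promised vs 500 physical
pool.write("db01", 120)
print(pool.allocated_gb)  # 120 -> only written data uses real capacity
```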
The network is the glue that holds the entire cloud infrastructure together. The DEA-2TT3 exam requires a solid understanding of both the physical and virtual networking components. The physical network consists of the switches, routers, and cabling that provide the underlying connectivity for all the servers and storage in the data center. A key design principle is to build a highly redundant and high-bandwidth physical network with no single points of failure.
On top of this physical network runs the virtual network. In a virtualized environment, each virtual machine has one or more virtual network interface cards (vNICs). These vNICs are connected to a virtual switch that runs inside the hypervisor on the physical host. The virtual switch is responsible for directing traffic between the VMs on the same host and for connecting them to the physical network. The DEA-2TT3 exam will test your knowledge of how these physical and virtual layers work together.
Modern applications are rarely built as a single, monolithic unit. Instead, they are typically designed using a multi-tier (or n-tier) architecture. This is a fundamental application design pattern that you should be familiar with for the DEA-2TT3 exam. In a classic three-tier architecture, the application is broken down into three logical layers: the presentation tier, the application tier, and the data tier.
The presentation tier is the user interface, which is what the user interacts with (e.g., a web server). The application tier contains the core business logic of the application. The data tier is the database where all the application's data is stored. By separating the application into these tiers, you can scale and manage each tier independently. A key part of cloud infrastructure design is to create a network topology with different subnets and security policies for each of these tiers.
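As a toy illustration of this separation of concerns, the Python sketch below models the three tiers as independent components with invented names; in a real deployment each tier would run on its own servers and subnet:

```python
# Data tier: owns persistence; nothing else touches storage directly.
class DataTier:
    def __init__(self):
        self._orders = {}

    def save_order(self, order_id, item):
        self._orders[order_id] = item

# Application tier: business logic only; talks to the data tier's API.
class AppTier:
    def __init__(self, data_tier):
        self.data = data_tier

    def place_order(self, order_id, item):
        if not item:
            raise ValueError("empty order")
        self.data.save_order(order_id, item)
        return f"order {order_id} accepted"

# Presentation tier: user-facing; delegates all logic to the app tier.
def handle_request(app, order_id, item):
    return app.place_order(order_id, item)

app = AppTier(DataTier())
print(handle_request(app, 1, "laptop"))
```

Because each tier exposes only a narrow interface to the next, each can be scaled, secured, or replaced independently.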
In addition to virtualizing servers, the same virtualization technology can be used to virtualize user desktops and applications. This is an important concept for the DEA-2TT3 exam. Desktop virtualization, also known as Virtual Desktop Infrastructure (VDI), is a technology that hosts a user's desktop operating system, such as Windows 10, as a virtual machine running on a server in the data center. The user can then access their desktop from any device, anywhere.
Application virtualization is a related technology that allows an application to be run in a virtual environment on a server and then be delivered to the user's endpoint device. The user interacts with the application as if it were running locally, but all the processing is happening on the server. These technologies provide benefits in terms of centralized management, improved security, and enhanced user mobility.
While the DEA-2TT3 exam focuses heavily on the infrastructure layer, it also requires an understanding of the higher-level services that are built on top of it. One of the most important of these is Platform as a Service (PaaS). PaaS provides a complete development and deployment environment in the cloud, allowing developers to build, test, deploy, and manage applications without having to worry about the underlying infrastructure.
A PaaS offering typically includes the operating system, middleware, database, and other development tools, all managed by the cloud provider. This allows developers to be much more productive, as they can focus solely on writing their application code rather than on managing servers. The DEA-2TT3 exam expects you to understand the benefits of PaaS and how it fits into the overall cloud service model landscape.
A more modern and lightweight approach to application virtualization is the use of containers. This is a very important topic for the DEA-2TT3 exam. Unlike a virtual machine, which virtualizes an entire computer including a full guest operating system, a container virtualizes at the operating system level. Multiple containers can run on a single host, and they all share the host's operating system kernel. This makes containers incredibly lightweight, fast to start, and very efficient in their use of resources.
The most popular container technology is Docker. Docker provides a simple and standardized way to package an application and all its dependencies into a single, portable unit called a container image. This image can then be run on any machine that has the Docker engine installed, ensuring that the application will always run the same way, regardless of the environment. Containers are a key building block for modern, cloud-native applications.
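As a sketch of this workflow using the real Docker SDK for Python (it assumes a local Docker engine is running; the container name and port mapping are chosen arbitrarily):

```python
import docker

client = docker.from_env()  # connect to the local Docker engine

# Run a containerized web server; the image bundles the app + dependencies.
container = client.containers.run(
    "nginx:latest",           # portable image: same behavior everywhere
    detach=True,              # run in the background
    ports={"80/tcp": 8080},   # map container port 80 to host port 8080
    name="demo-web",
)
print(container.status)  # e.g. 'created' or 'running'

container.stop()
container.remove()
```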
While Docker provides a way to run a single container, in a real production environment you will often need to run and manage hundreds or even thousands of containers across a fleet of servers. The tool for managing this at scale is a container orchestrator. The dominant container orchestration platform in the industry today is Kubernetes. This is an advanced but important concept to be aware of for the DEA-2TT3 exam.
Kubernetes is an open-source platform that automates the deployment, scaling, and management of containerized applications. It can automatically schedule containers to run on the available servers in a cluster, restart them if they fail, and scale them up or down based on demand. It provides a robust and resilient platform for running modern, microservices-based applications. Kubernetes has become the de facto standard for container orchestration.
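As an illustrative sketch using the real Kubernetes Python client (it assumes a reachable cluster configured in your local kubeconfig, and the Deployment name "web" is hypothetical):

```python
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig (assumes cluster access)
apps = client.AppsV1Api()

# List Deployments and their desired vs. available replica counts.
for dep in apps.list_namespaced_deployment(namespace="default").items:
    print(dep.metadata.name, dep.spec.replicas, dep.status.available_replicas)

# Scale a (hypothetical) Deployment named 'web' up to 5 replicas;
# Kubernetes then converges the cluster toward that desired state.
apps.patch_namespaced_deployment_scale(
    name="web", namespace="default", body={"spec": {"replicas": 5}}
)
```

The declarative pattern shown in the last call is the heart of Kubernetes: you state the desired replica count, and the orchestrator does whatever scheduling and restarting is needed to maintain it.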
The availability of cloud infrastructure and technologies like containers and Kubernetes has led to the rise of a new architectural style for building applications: cloud-native. The DEA-2TT3 exam requires you to have a high-level understanding of this modern approach. A cloud-native application is designed from the ground up to take full advantage of the cloud computing model.
This often involves using a microservices architecture. In this architecture, a large, complex application is broken down into a collection of small, independent services. Each microservice is responsible for a single business function, is developed and deployed independently, and communicates with other services over a well-defined API. This architectural style, when combined with containers and an orchestrator like Kubernetes, allows for incredible agility, scalability, and resilience.
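A single microservice can be sketched in a few lines with the real Flask library; the service name, endpoint, and stock data below are invented for illustration:

```python
from flask import Flask, jsonify

# One microservice = one small, independently deployable business function.
inventory_service = Flask("inventory")
_STOCK = {"laptop": 12, "monitor": 4}  # toy in-memory state

@inventory_service.route("/stock/<item>")
def stock(item):
    # Other services call this well-defined HTTP API rather than
    # reaching into this service's data store directly.
    return jsonify({"item": item, "available": _STOCK.get(item, 0)})

if __name__ == "__main__":
    inventory_service.run(port=5001)  # each service runs and scales on its own
```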
Security is a paramount concern in any IT environment, and it is a critical domain of the DEA-2TT3 exam. A fundamental concept that you must understand is the shared responsibility model. In the cloud, security is a partnership between the cloud service provider and the customer. The provider is responsible for the security of the cloud, which means securing the underlying infrastructure, such as the physical data centers and the virtualization platform.
The customer is responsible for security in the cloud. This means that the customer is responsible for securing their own data, applications, and operating systems. This includes tasks like properly configuring network security controls, managing user access and permissions, and encrypting their data. The DEA-2TT3 exam requires a clear understanding of where the provider's responsibility ends and where the customer's responsibility begins for each of the different cloud service models.
A candidate for the DEA-2TT3 exam should be aware of the common security threats that affect cloud environments. These threats can come from both external and internal sources. External threats include things like denial-of-service (DoS) attacks, which attempt to overwhelm a service and make it unavailable, and data breaches, where an attacker attempts to gain unauthorized access to sensitive data. Internal threats can include malicious insiders or accidental misconfigurations by an administrator.
To mitigate these threats, a defense-in-depth strategy is required. This involves layering multiple security controls to create a robust security posture. These controls can include network security measures like firewalls, identity and access management controls to enforce the principle of least privilege, data encryption to protect data at rest and in transit, and continuous monitoring to detect and respond to security incidents. The DEA-2TT3 exam will test on these fundamental security concepts.
Controlling who can access your cloud resources and what they are allowed to do is the job of the Identity and Access Management (IAM) system. This is a critical security function and a key topic for the DEA-2TT3 exam. A robust IAM system is built on the principle of least privilege. It should provide a way to create and manage user identities and to assign them fine-grained permissions.
Modern IAM systems also support advanced features like multi-factor authentication (MFA), which requires a user to provide a second form of verification in addition to their password, significantly increasing security. They also support federation, which allows you to integrate the cloud's IAM system with your organization's existing identity provider, such as Active Directory, to enable single sign-on. A solid grasp of these IAM concepts is essential.
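Least privilege can be illustrated with a small, purely conceptual Python sketch (the roles and action names are invented): access is denied by default and granted only for the specific actions a role needs.

```python
# Conceptual least-privilege check: a user's role grants only the
# specific actions that role needs, and everything else is denied.
ROLE_PERMISSIONS = {
    "storage-auditor": {"storage:read", "logs:read"},
    "vm-operator": {"vm:start", "vm:stop", "vm:read"},
}

def is_allowed(role: str, action: str) -> bool:
    # Default deny: an action is permitted only if explicitly granted.
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("storage-auditor", "storage:read"))    # True
print(is_allowed("storage-auditor", "storage:delete"))  # False -> denied
```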
Protecting the data itself is a core part of any security strategy. The DEA-2TT3 exam covers the key concepts of data protection. This starts with a robust backup and recovery plan to protect against data loss due to hardware failure, data corruption, or a ransomware attack. It is critical to have a well-defined and regularly tested plan for restoring data in the event of a disaster.
Another key component of data protection is encryption. Encryption is the process of converting data into a code to prevent unauthorized access. The DEA-2TT3 exam requires you to understand the difference between encrypting data at rest, which means encrypting it while it is stored on a disk, and encrypting data in transit, which means encrypting it as it travels over a network. Both are essential for a comprehensive data protection strategy.
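As a minimal sketch of encrypting data at rest, using the real cryptography library's Fernet recipe (in practice the key would be held in a key management service, and data in transit would typically be protected by TLS rather than by application code like this):

```python
from cryptography.fernet import Fernet

# Encrypting data at rest: in practice the key lives in a key
# management service, never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"customer records")   # what gets stored on disk
plaintext = cipher.decrypt(ciphertext)             # recovered with the key
assert plaintext == b"customer records"
```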
The DEA-2TT3 exam also covers the tools and processes that are used to manage a cloud environment. As a cloud environment grows, manual management becomes impossible. Therefore, automation is key. A cloud management platform provides a unified portal for managing the entire cloud infrastructure. It typically includes a self-service catalog, where users can request and automatically provision resources from a pre-approved set of templates.
Behind the scenes, a cloud orchestration engine is responsible for automating the complex workflows involved in provisioning and configuring these resources. Orchestration allows you to define a multi-step process, such as "deploy a new virtual machine, configure its network settings, install the necessary software, and then add it to a load balancer," and have that entire process be executed automatically. This is the key to achieving the agility and efficiency of the cloud.
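Conceptually, an orchestration workflow is just an ordered sequence of automated steps executed against shared state. The Python sketch below models that idea with invented step names; a production engine would add error handling, retries, and rollback:

```python
# Conceptual orchestration: a workflow is an ordered list of automated
# steps; the engine executes them in sequence against a shared context.
def deploy_vm(ctx):
    ctx["vm"] = "vm-042"            # provision the compute resource

def configure_network(ctx):
    ctx["ip"] = "10.0.1.15"         # attach and configure networking

def install_software(ctx):
    ctx["app"] = "web-server"       # lay down the application stack

def add_to_load_balancer(ctx):
    ctx["in_service"] = True        # start taking user traffic

WORKFLOW = [deploy_vm, configure_network, install_software,
            add_to_load_balancer]

def run_workflow(steps):
    context = {}
    for step in steps:
        print(f"running {step.__name__} ...")
        step(context)  # a real engine would retry or roll back on failure
    return context

print(run_workflow(WORKFLOW))
```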
To effectively manage a cloud environment, you must have visibility into its health and performance. The DEA-2TT3 exam requires an understanding of the key concepts of cloud monitoring. This involves collecting and analyzing a wide variety of telemetry data, including performance metrics (like CPU and memory utilization), log files from applications and operating systems, and dependency mapping information that shows how different components are connected.
This data is then used to create dashboards that provide a real-time view of the health of the environment. It is also used to configure an alerting system that can automatically notify an administrator when a problem occurs, such as a server going down or an application running out of memory. This proactive monitoring is essential for maintaining the service levels and availability of the applications running in the cloud.
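A monitoring and alerting loop can be sketched conceptually in a few lines of Python (the hosts, metric values, and threshold are invented; a real system would collect telemetry from agents and notify an on-call administrator):

```python
# Conceptual monitoring loop: collect a metric, compare it against a
# threshold, and raise an alert when the threshold is breached.
CPU_ALERT_THRESHOLD = 90.0  # percent; chosen here purely for illustration

def collect_cpu_samples():
    # Stand-in for a real telemetry source (agent, API, log pipeline).
    return {"web01": 42.5, "web02": 95.1, "db01": 60.0}

def evaluate(samples):
    alerts = []
    for host, cpu in samples.items():
        if cpu > CPU_ALERT_THRESHOLD:
            alerts.append(f"ALERT: {host} CPU at {cpu:.1f}%")
    return alerts

for alert in evaluate(collect_cpu_samples()):
    print(alert)  # a real system would page an admin or open a ticket
```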
As you prepare for the DEA-2TT3 exam, it is important to focus on the most heavily weighted topics. The core of the exam revolves around the concepts of the Software-Defined Data Center (SDDC). You must have a crystal-clear understanding of compute, storage, and network virtualization. Be able to explain what a hypervisor is, the difference between a SAN and a NAS, and the basic principles of Software-Defined Networking (SDN).
You should also be very comfortable with the essential characteristics and the different service and deployment models of cloud computing. Review the modern infrastructure concepts, such as converged and hyper-converged infrastructure, object storage, and containers. Finally, do not neglect the security and management sections. A solid, well-rounded knowledge of all the exam domains is the best strategy for success on the DEA-2TT3 exam.
Choose ExamLabs to get the latest and updated Dell DEA-2TT3 practice test questions and exam dumps with verified answers to pass your certification exam. Try our reliable DEA-2TT3 exam dumps, practice test questions, and answers for your next certification exam. Premium exam files with questions and answers for Dell DEA-2TT3 help you pass quickly.