
Certified Data Engineer Professional Premium Bundle
- Premium File 227 Questions & Answers
- Last Update: Aug 26, 2025
- Training Course 33 Lectures
You save $34.99
Passing IT certification exams can be tough, but with the right exam prep materials the task becomes manageable. ExamLabs provides 100% real and updated Databricks Certified Data Engineer Professional exam dumps, practice test questions, and answers that equip you with the knowledge required to pass the exam. Our Databricks Certified Data Engineer Professional exam dumps, practice test questions, and answers are reviewed constantly by IT experts to ensure their validity and help you pass without putting in hundreds of hours of studying.
Breaking Down the Certified Data Engineer Professional Exam: Skills, Strategy, and Success
The role of a certified data engineer professional extends beyond basic data manipulation. This credential demonstrates a comprehensive understanding of how to build, optimize, and maintain scalable data processing pipelines in a distributed environment. Data engineers with this certification are expected to be proficient in handling massive datasets, automating ETL workflows, applying best practices in Delta Lake management, and ensuring pipeline robustness across production systems.
The focus of the certification is practical. Instead of relying heavily on theory, it tests real-world scenarios involving Apache Spark, Delta Lake, Databricks Workflows, and Structured Streaming. A professional-level engineer must be prepared to build scalable, high-performance systems that meet organizational data requirements and governance policies.
Databricks has emerged as a leading unified analytics platform combining the best of data lakes and data warehouses. This has made it essential for data engineers to familiarize themselves with the Databricks environment, including its Workflows, SQL Analytics, Unity Catalog, and Delta Lake features. The certification aligns closely with the architecture and ecosystem of the platform.
Engineers are expected to understand how Databricks integrates with cloud environments and supports data engineering workflows through robust APIs, CLI tools, and version-controlled code management. Familiarity with Databricks-specific tools is essential to perform well in the exam.
The certification exam covers six major domains. Each domain includes a mix of theoretical knowledge and practical expertise, especially around Spark optimization and Databricks orchestration. Below is an outline of the critical components:
Data Processing
Databricks Tooling
Data Modeling
Security and Governance
Monitoring and Logging
Testing and Deployment
Understanding how these topics interrelate within production environments is key to answering scenario-based questions during the exam.
Data processing forms the foundation of the exam. This section evaluates knowledge of Delta Lake operations, Spark transformations, and the use of streaming data within structured workflows. Engineers must understand how to handle stateful streaming, data deduplication, error handling, and schema evolution.
It is essential to practice the use of MERGE, UPDATE, DELETE, and VACUUM statements in Delta Lake, along with partitioning strategies to manage large datasets efficiently. You must also develop an understanding of how Delta Lake manages transaction logs and ensures ACID compliance through optimistic concurrency control.
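To make this concrete, here is a minimal PySpark sketch of these operations, assuming a running Databricks cluster with a SparkSession named spark, a hypothetical Delta table silver.orders, and a staged DataFrame updates_df containing new and changed rows:

from delta.tables import DeltaTable

target = DeltaTable.forName(spark, "silver.orders")

# Upsert: update rows that match on the key, insert the rest
(target.alias("t")
    .merge(updates_df.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Targeted update and delete
spark.sql("UPDATE silver.orders SET status = 'closed' WHERE status = 'shipped'")
spark.sql("DELETE FROM silver.orders WHERE order_date < '2020-01-01'")

# Remove data files no longer referenced by the transaction log (default 7-day retention)
spark.sql("VACUUM silver.orders")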
A common area of confusion among exam candidates is the role of change data capture, specifically using the Delta Change Data Feed. It is important to know how to enable it and how it integrates with downstream systems for near-real-time analytics or auditing.
Another critical section is Databricks tooling. Candidates need hands-on knowledge of managing clusters, libraries, and configurations through both the UI and APIs. You must understand how Jobs and Workflows are defined and triggered, how to manage task dependencies, and how to automate deployment pipelines using CLI commands.
This area also assesses the use of Databricks utilities (dbutils) to interact with the file system, pass secrets securely, and manage job parameters. Familiarity with the REST APIs is also expected for operations such as creating jobs programmatically or pulling logs for auditing.
Many engineers overlook the importance of workspace management, including using Databricks Repos for version control and branch isolation. Knowing how to integrate Git workflows directly into the Databricks environment is essential for ensuring code stability and reproducibility.
The exam requires a clear understanding of the Medallion Architecture, which consists of Bronze (raw), Silver (cleaned), and Gold (aggregated) layers. Candidates must know how to transition data across layers, ensuring optimal performance and data quality.
You should be able to model Slowly Changing Dimensions within Delta Lake, especially Types 1 and 2. Knowing how to apply schema enforcement, column mapping, and append/merge strategies is fundamental.
Another commonly tested area is query optimization. This includes selecting between Z-Ordering and partitioning strategies to improve read performance. Candidates must understand how file sizes and cluster configuration impact the physical plan generated by Spark, and how to debug performance issues using the Spark UI.
Security and governance account for a smaller portion of the exam but are no less important. You need to know how to implement table access control, manage permissions via Unity Catalog, and apply dynamic views for row-level security.
Data engineers must ensure that sensitive information is masked or redacted, and that audit logs are maintained for access tracking. It is important to understand how GDPR and data deletion policies are implemented in Databricks, especially when working in multi-tenant environments.
Knowledge of secure access patterns using service principals, token management, and role-based access control is also expected.
Another key focus is monitoring and logging. You must know how to leverage the Spark UI to identify and resolve performance bottlenecks. This includes analyzing DAGs, shuffle operations, skewed joins, and resource allocation.
Databricks generates event logs and audit logs, which are useful for monitoring job execution and user activity. Understanding how to integrate logging into Databricks workflows using structured logging patterns is beneficial for managing large deployments.
Most importantly, knowing how to correlate Spark metrics with cloud provider monitoring tools helps in maintaining the overall health of the system, especially for production workloads.
The final section deals with the deployment of tested and validated code into production. You must understand CI/CD integration with Databricks Repos, unit testing using frameworks like pytest, and test orchestration through Jobs or external tools.
The exam assesses your ability to organize pipeline deployment patterns such as fan-out (parallelism), funnel (consolidation), and sequential execution. These patterns help in controlling resource usage and maintaining data consistency across stages.
Another important area is version control. You must know how to manage code revisions, rollback changes, and maintain reproducibility across environments.
Time management is critical during the exam. The questions are a mix of conceptual, scenario-based, and code interpretation. Candidates must practice with realistic mock tests to build speed and accuracy. It is important not to spend too much time on a single question and instead flag it for review if unsure.
While the multiple-choice format may seem simple, many questions include subtle traps such as slight variations in code syntax or nuanced details about Spark’s behavior. Developing a habit of reading every option carefully can help avoid these pitfalls.
Focus particularly on Delta Lake and Spark code questions, as they form a major chunk of the test. Familiarity with command syntax and behavior of configuration parameters can significantly improve your confidence and accuracy during the exam.
One of the most valuable tips for the Databricks Certified Data Engineer Professional exam is to rely on hands-on practice rather than rote memorization. Practicing with actual Databricks clusters, submitting jobs, and analyzing job metrics helps build intuition that is crucial when dealing with scenario-based questions.
Try building a mini data pipeline with Structured Streaming, applying Delta Lake operations, simulating schema evolution, and automating the job using the CLI or REST API. Doing so will deepen your understanding and prepare you for the diversity of questions in the exam.
Even though you can prepare with notes and guides, nothing replaces the value of executing commands, reading logs, and troubleshooting errors in real-time.
Delta Lake is a foundational element of the Databricks platform, and a core component of the Certified Data Engineer Professional exam. Candidates are expected to understand its mechanics, especially when working with large-scale batch and streaming data pipelines. Delta Lake introduces reliability and scalability to data lakes by providing ACID transactions, scalable metadata handling, and a unified approach to streaming and batch data processing.
Understanding Delta Lake’s transaction log is crucial. The _delta_log directory stores JSON files that track all operations, from data inserts to schema changes. Every commit creates a new file in the log, and Delta Lake uses Optimistic Concurrency Control to prevent conflicts between multiple writes. You must be able to identify how a table’s state is reconstructed from the log files.
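As a brief illustration, the following sketch inspects the commit history recorded in _delta_log and reads an earlier state of a hypothetical table silver.orders using time travel:

# One row per commit recorded in the _delta_log directory
spark.sql("DESCRIBE HISTORY silver.orders") \
     .select("version", "timestamp", "operation") \
     .show(truncate=False)

# Time travel: reconstruct the table as of an earlier version or timestamp
v3 = spark.sql("SELECT * FROM silver.orders VERSION AS OF 3")
old = spark.sql("SELECT * FROM silver.orders TIMESTAMP AS OF '2024-01-01'")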
Change Data Feed (CDF) is another topic that frequently appears in scenario-based questions. It allows you to track changes (inserts, updates, deletes) between versions. You should know how to enable CDF on a Delta table and query changes using the table_changes function, which can be particularly useful in downstream data consumption layers.
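A minimal sketch, again using the hypothetical silver.orders table, shows how CDF is typically enabled and consumed (the version numbers are placeholders):

# Enable the Change Data Feed on an existing Delta table
spark.sql("ALTER TABLE silver.orders SET TBLPROPERTIES (delta.enableChangeDataFeed = true)")

# Batch read of row-level changes between two commit versions
changes = spark.sql("SELECT * FROM table_changes('silver.orders', 5, 10)")
changes.select("order_id", "_change_type", "_commit_version").show()

# The same feed can be consumed incrementally as a stream
cdf_stream = (spark.readStream.format("delta")
              .option("readChangeFeed", "true")
              .option("startingVersion", 5)
              .table("silver.orders"))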
The exam often tests your understanding of merge operations in the context of CDC. You must understand how to perform upserts using MERGE INTO, and how to handle schema evolution within these operations. Additionally, maintenance operations such as OPTIMIZE (with ZORDER BY) and VACUUM are important for managing performance and storage. For example, ZORDER BY improves the performance of selective queries by co-locating related data, while VACUUM reclaims storage by removing data files that are no longer referenced.
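A short sketch of these maintenance commands against the same hypothetical table might look like this:

# Compact small files and co-locate rows that share a filter column
spark.sql("OPTIMIZE silver.orders ZORDER BY (customer_id)")

# Remove unreferenced data files older than the retention window (7 days here)
spark.sql("VACUUM silver.orders RETAIN 168 HOURS")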
Structured Streaming is another core domain. Unlike batch pipelines, streaming systems are continuous and sensitive to latency, state management, and event-time vs. processing-time considerations. In Databricks, the Auto Loader mechanism simplifies ingestion by incrementally processing new files from cloud storage.
In file notification mode, Auto Loader can detect new files without repeatedly listing the source directory, and it tracks progress through checkpoints, which allows you to build efficient, incremental data pipelines. The exam assesses your familiarity with the nuances of the cloudFiles source, and how Auto Loader handles schema inference and schema evolution.
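The sketch below shows a typical Auto Loader ingestion stream; the source path, schema location, checkpoint location, and target table are placeholders rather than values prescribed by the exam:

bronze_stream = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders_schema")
    .load("/mnt/raw/orders/"))

(bronze_stream.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/orders_bronze")
    .trigger(availableNow=True)   # process all files available now, then stop
    .toTable("bronze.orders"))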
A key part of mastering Structured Streaming includes understanding watermarking and windowing. Watermarking deals with late-arriving data, while windowing groups records over a time interval. You should be comfortable using withWatermark, groupBy(window()), and understanding how late data affects aggregations.
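As an illustration, the sketch below tolerates events arriving up to ten minutes late and counts clicks per five-minute window; the table bronze.clicks and its event_time column are assumptions for the example:

from pyspark.sql import functions as F

events = spark.readStream.table("bronze.clicks")

windowed = (events
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "page")
    .count())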
Another frequent concept is streaming sinks, especially writing to Delta tables with appropriate triggers (Trigger.ProcessingTime, Trigger.Once, etc.). Know the distinctions between append, complete, and update output modes. Misconfiguration of output modes often results in errors, so understanding their implications is critical for troubleshooting.
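Continuing the windowed aggregation sketch above, writing the result to a Delta sink could look like the following (the checkpoint path and target table are placeholders):

(windowed.writeStream
    .outputMode("append")                     # emits each window only once the watermark closes it
    .option("checkpointLocation", "/mnt/checkpoints/click_counts")
    .trigger(processingTime="1 minute")       # or availableNow=True for a one-shot incremental run
    .toTable("gold.click_counts"))

Switching to update mode would re-emit windows as they change, which only makes sense for sinks that can handle in-place updates.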
The second-largest section of the exam focuses on Databricks Tooling. As a candidate, you must be proficient with various interface tools that support development, orchestration, and automation. This includes notebooks, jobs, Repos, the Databricks CLI, REST APIs, and the workspace UI.
Databricks Workflows are an important topic, encompassing job orchestration through tasks, dependencies, and cluster configurations. You should be able to distinguish between single-task and multi-task jobs, and understand when to use shared vs. job clusters. The ability to specify retry policies, timeouts, and email alerts is often tested.
Notebooks play a central role in pipeline development. You should know how to use %run to modularize code and pass parameters between notebooks. This becomes essential in orchestrating pipelines where logic is split across multiple notebooks.
The use of dbutils is also a recurring subject. It includes operations like mounting storage, listing files, and managing secrets. For example, understanding how to read credentials from a secret scope using dbutils.secrets.get is important when accessing external systems securely. Another commonly tested function is dbutils.fs, which handles file manipulation tasks.
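A few representative calls are sketched below; the mount path, secret scope, key name, and widget name are placeholders, and dbutils itself is only available inside a Databricks notebook or job:

files = dbutils.fs.ls("/mnt/raw/orders/")                          # list files in a mounted location
token = dbutils.secrets.get(scope="prod-scope", key="api-token")    # read a secret without exposing it in code
dbutils.widgets.text("run_date", "2024-01-01")                      # define a job parameter with a default
run_date = dbutils.widgets.get("run_date")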
The Databricks CLI is powerful for automating deployments and job submissions. Candidates should be able to use commands for uploading files, managing clusters, creating jobs, and running workflows. Familiarity with JSON configurations for job definitions and running jobs through CLI commands is essential.
REST APIs offer similar capabilities but are often used in CI/CD pipelines. Understanding token-based authentication, endpoints for jobs and runs, and how to read API responses is valuable for real-world automation tasks.
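As a hedged sketch, the snippet below creates a multi-task job through the Jobs API 2.1 using a personal access token; the workspace URL, token, notebook path, cluster settings, and names are all placeholders:

import requests

host = "https://<your-workspace>.cloud.databricks.com"
headers = {"Authorization": "Bearer <personal-access-token>"}

job_spec = {
    "name": "nightly-orders-pipeline",
    "tasks": [{
        "task_key": "silver_load",
        "notebook_task": {"notebook_path": "/Repos/prod/pipeline/silver_load"},
        "job_cluster_key": "etl_cluster",
    }],
    "job_clusters": [{
        "job_cluster_key": "etl_cluster",
        "new_cluster": {
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "i3.xlarge",     # cloud-specific instance type
            "num_workers": 2,
        },
    }],
}

response = requests.post(f"{host}/api/2.1/jobs/create", headers=headers, json=job_spec)
print(response.json())   # returns the new job_id on success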
Data modeling on Databricks is guided by Lakehouse architecture principles. This part of the exam focuses on optimizing storage layout, schema management, and query performance. Candidates must understand the Medallion Architecture and how it relates to data freshness, quality, and usability.
The Medallion Architecture divides data into Bronze, Silver, and Gold layers. Bronze is raw ingestion with minimal processing, Silver applies cleaning and transformation, and Gold serves business-level aggregations. Knowing when and how to transition data between these layers is essential.
Performance tuning through physical design is another key topic. This includes selecting appropriate partitioning strategies and using Z-Ordering for better pruning of files during query execution. Z-Ordering works best when your query workload frequently filters on specific columns.
Schema evolution and enforcement are particularly important in streaming pipelines. Delta Lake supports automatic schema updates with options like mergeSchema, but also allows schema enforcement to avoid data corruption. You should understand how schema changes are logged and how they affect downstream queries.
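For instance, the sketch below appends a batch containing new columns to a hypothetical table, contrasting schema evolution with the default enforcement behavior (new_batch_df is an assumed DataFrame):

# Schema evolution: additive columns in new_batch_df are added to the table schema
(new_batch_df.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("silver.orders"))

# Without mergeSchema (the default), a mismatched schema fails the write,
# which is the enforcement behavior that protects downstream consumers.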
Slowly Changing Dimensions (SCDs) also appear in the modeling section, particularly in the context of Delta Lake. You may be asked to implement Type 1 and Type 2 SCDs using MERGE and UPDATE statements, and manage historical versions through timestamps or versioning.
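A simplified Type 2 sketch is shown below; the dimension table gold.dim_customer, the staged_updates DataFrame, and the column names are illustrative, and a production implementation would normally combine both steps into a single MERGE:

from delta.tables import DeltaTable
from pyspark.sql import functions as F

dim = DeltaTable.forName(spark, "gold.dim_customer")

# Step 1: expire the current row when a tracked attribute has changed
(dim.alias("d")
    .merge(staged_updates.alias("s"),
           "d.customer_id = s.customer_id AND d.is_current = true")
    .whenMatchedUpdate(condition="d.address <> s.address",
                       set={"is_current": "false", "end_date": "s.effective_date"})
    .execute())

# Step 2: append the new current versions of the changed rows
(staged_updates
    .withColumn("is_current", F.lit(True))
    .withColumn("end_date", F.lit(None).cast("date"))
    .write.format("delta").mode("append").saveAsTable("gold.dim_customer"))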
Security and governance are central to building production-grade pipelines. You need to understand both platform-level and data-level controls. One of the key areas is Access Control Lists (ACLs), which are used to manage access to notebooks, clusters, and tables.
Row-level and column-level security can be implemented using dynamic views. For example, using IS_MEMBER() in a SQL view to restrict rows based on group membership is a common scenario. You should understand how these views behave in combination with Unity Catalog or external metastores.
Data masking is also relevant for protecting sensitive fields. This can be done using conditional logic in views, ensuring that unauthorized users cannot access raw PII values. You should also be familiar with the current_user() function, which helps tailor view access.
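A sketch of a dynamic view that combines row-level filtering with column masking might look like the following; the group names, columns, and table are placeholders:

spark.sql("""
    CREATE OR REPLACE VIEW silver.orders_restricted AS
    SELECT
        order_id,
        region,
        CASE WHEN is_member('pii_readers') THEN email ELSE 'REDACTED' END AS email
    FROM silver.orders
    WHERE is_member('admins') OR region = 'EMEA'
""")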
Compliance-related topics like GDPR and data deletion are tested through case scenarios. For example, you may need to demonstrate how to delete a customer’s data completely from a Delta table, including version history, by using VACUUM after a retention override. This requires understanding retention periods and implications on data recovery.
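A minimal sketch of that pattern is shown below, assuming a hypothetical silver.customers table; bypassing the retention check is shown only to illustrate the mechanism and should be a deliberate, documented decision in practice:

# Remove the customer's rows from the current table version
spark.sql("DELETE FROM silver.customers WHERE customer_id = 'C-123'")

# Purge the underlying files so older versions can no longer resurrect the data
spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")
spark.sql("VACUUM silver.customers RETAIN 0 HOURS")
spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "true")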
Operational monitoring is essential for ensuring pipeline health. The exam includes questions on how to analyze and interpret logs, both from the Spark UI and event logs. You must be comfortable with key components like DAGs, stages, and tasks.
The Spark UI helps identify issues like skewed partitions or expensive shuffles. Understanding task duration, stage retries, and input size can reveal performance bottlenecks. You may be asked to troubleshoot a slow job based on a description of its Spark UI.
Event logs, which capture metadata about job runs, are useful for auditing and debugging. These logs can be stored in a persistent location and processed using structured queries. You should understand their schema and how to join them with application-level logs for end-to-end visibility.
Audit logs provided by the platform or cloud providers capture user actions, job access, and data reads. These are critical for compliance and security audits. You may be asked to trace an action back to a user or determine whether a dataset was accessed within a certain period.
Metrics for monitoring pipelines include throughput, latency, and error rates. Tools like Ganglia or built-in metrics dashboards provide visibility into these indicators. Understanding when and how to scale clusters based on metrics is a practical skill assessed by the exam.
One of the central themes of the Certified Data Engineer Professional exam is the ability to construct reliable, scalable workflows using the Lakehouse paradigm. Candidates are expected to know how the Bronze-Silver-Gold layered architecture facilitates incremental processing, maintains data quality, and ensures efficiency.
In a Bronze layer, raw data is ingested as-is, often using tools like Auto Loader for file ingestion. This layer may include duplicate records, malformed rows, or incomplete entries. Candidates must understand how to process such data with fault tolerance and schema evolution.
The Silver layer introduces filtering, cleaning, and business rule enforcement. Records here are typically deduplicated and conform to a structured format. Techniques like merge operations for upserts and capturing slowly changing dimensions become essential. In this context, understanding data versioning and time travel capabilities offered by Delta Lake adds significant depth to your implementation.
The Gold layer serves refined, query-optimized data ready for analytics or machine learning. Optimizing queries using Z-Ordering and appropriate partitioning strategies is critical here. As performance impacts exam scenarios, your knowledge of caching strategies and query optimization mechanisms plays a role in solving complex use cases.
Real-world scenarios often involve fan-out or fan-in workflows where one task branches into multiple downstream processes or vice versa. For instance, you may need to enrich data from several sources before aggregation and model scoring. The ability to orchestrate such flows using the platform's native task chaining or CLI/API-based triggers is an advanced topic tested in the exam.
These complex workflows also bring idempotency concerns. You must design pipelines that are retry-safe and avoid duplicate records. Understanding how to use structured streaming checkpoints and delta write options is critical. At the orchestration level, conditional executions, retries, and timeout policies reflect mature pipeline design.
Beyond scheduling, managing dependencies across jobs and environments using Repos and version control becomes important. The exam also includes scenarios requiring you to modify pipeline logic in response to failed data quality checks or schema drift, so it's important to simulate such conditions during preparation.
A Certified Data Engineer Professional is expected to implement data governance controls that align with organizational policies and compliance requirements. The exam explores this from both a policy enforcement and architecture standpoint.
Access control can be granular, down to the column level using dynamic views and masking. For example, specific teams may only access anonymized data while audit teams can see full logs. The challenge lies in configuring roles and SQL-based access control expressions correctly. A nuanced understanding of the underlying execution context helps in implementing secure architectures.
Another recurring theme is secure data deletion. Candidates must know how to delete GDPR-sensitive data and manage table retention through vacuum policies without violating compliance. This includes knowing when and how to enable Delta change data feed (CDF) and ensuring deleted records are purged from both table versions and transaction logs within set timelines.
Token management, credential pass-through, and integration with cloud-specific identity and access management systems are also evaluated. You may face questions requiring you to troubleshoot token scopes or configure secure mounts for external storage sources.
An often-underestimated topic in preparation is observability. The exam does not just test if you can write code or configure workflows; it expects you to ensure their stability and performance over time.
Key areas include tracking Spark jobs via the Spark UI, interpreting stage DAGs, identifying skewed joins, and optimizing wide transformations. Many questions frame underperforming pipelines with symptoms like long shuffle times or out-of-memory errors. To answer correctly, you must correlate resource configurations, data volume characteristics, and code efficiency.
Databricks generates event logs, cluster logs, and job metrics which can be integrated with alerting systems. The ability to set up log sinks or monitor streaming progress via metrics helps maintain SLAs. In some advanced scenarios, you'll need to use custom metrics or log parsing for debugging.
Audit logging is also tested in relation to security. You might be asked to identify unauthorized actions or anomalies using event logs or to design monitoring layers that detect unexpected job behavior. These questions combine multiple skills across orchestration, security, and observability.
The final piece of the preparation journey is deployment. You’re expected to promote your work from development to production using robust processes. This includes local testing, CI/CD integrations, and environmental parity.
Unit testing using pytest or equivalent frameworks is often simulated in questions that ask you to validate pipeline logic before deployment. Beyond this, integration testing at the data level—verifying row counts, column presence, and data types—adds confidence to production readiness.
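As a sketch, a unit test for a pure transformation function can run locally with pytest and a small local SparkSession; deduplicate_orders is a hypothetical function under test:

from pyspark.sql import SparkSession

def deduplicate_orders(df):
    # Keep one row per order_id
    return df.dropDuplicates(["order_id"])

def test_deduplicate_orders():
    spark = SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
    df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "b")], ["order_id", "payload"])
    assert deduplicate_orders(df).count() == 2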
Deployment pipelines may involve Repos synced with source control, automated testing frameworks, and job deployment via CLI/API. You should understand how to use Git branches for feature isolation and merge them into staging or main branches after reviews and validation.
Versioning plays a key role in production safety. Knowing how to tag stable versions of a job, revert to previous configurations, and debug using commit history ensures long-term pipeline reliability.
Also important are deployment patterns like blue-green deployments and canary testing, especially when releasing schema or logic changes that may affect downstream consumers. You may face scenario-based questions that ask you to roll back a faulty pipeline or isolate a misbehaving task.
The Certified Data Engineer Professional role extends into resource optimization. While the platform abstracts many low-level details, you still need to make smart decisions about compute usage, cluster sizing, and job configurations.
Clusters can be job-scoped, interactive, or shared. Each has cost implications and affects performance. Choosing the right instance type, autoscaling settings, and spot instance usage helps reduce costs. You may be asked to troubleshoot high-cost pipelines or improve execution efficiency using configuration changes.
Caching strategies like in-memory caching, disk spilling, or even pinning intermediate results help balance latency and cost. Query tuning, especially join strategies and shuffle operations, can significantly alter performance. Mastery over Spark configurations like parallelism, shuffle partitions, and executor memory is key for handling large datasets under budget constraints.
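A few commonly adjusted settings are sketched below; the values are illustrative starting points, not recommendations for every workload:

spark.conf.set("spark.sql.shuffle.partitions", "400")        # match shuffle parallelism to data volume
spark.conf.set("spark.sql.adaptive.enabled", "true")         # let AQE coalesce partitions and mitigate skew
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))  # broadcast tables under ~64 MB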
While cost is not a dominant topic in the exam, its connection to architectural decisions makes it a recurring secondary theme.
Structured streaming is deeply embedded in the exam. Unlike batch pipelines, streaming introduces nuances like watermarking, late data handling, and checkpointing. You’re tested on your understanding of exactly-once semantics, stateful processing, and how to handle evolving schemas.
One topic that often catches candidates off-guard is managing aggregations in streaming. Knowing when to use update mode versus append mode and understanding the implications on output sinks can make or break your pipeline's reliability.
Also, real-time ingestion tools like Auto Loader have unique behaviors such as schema inference, schema evolution, and support for backfilling historical data. The certification tests whether you can design for both low-latency processing and long-term reliability.
Finally, managing streaming jobs with failover capabilities, alerting, and recovery workflows ensures your pipelines are production-grade. You may be presented with scenarios where checkpoint corruption or data duplication occurs and asked to diagnose or recover from these failures.
Change Data Capture is one of the most technically complex yet essential skills for certified data engineers. The use of Delta Change Data Feed (CDF) enables tracking row-level changes for downstream consumers.
Questions on CDC typically center around maintaining slowly changing dimensions, synchronizing systems with source-of-truth tables, or building audit trails. You may have to select appropriate CDC methods or reason through performance and storage implications.
CDF can operate in streaming or batch mode. The exam tests your ability to configure it correctly, handle schema changes, and combine it with transactional integrity. For instance, ensuring idempotent processing or avoiding out-of-order updates is key in real-time applications.
When working with updates or deletes from sources like event logs or relational systems, the ability to merge CDF output with target tables using appropriate keys is essential. Understanding the implications of merge condition mismatches and null values can help avoid silent errors.
Many experienced exam takers have observed that there are often two categories of wrong answers: obviously incorrect and deceptively incorrect. Your task is to spot the second kind. If two options seem technically correct, ask yourself which one is more aligned with scalability, best practices, and Databricks-recommended designs. This principle is particularly important when dealing with questions about data ingestion versus transformation logic.
Also, remember that the correct answer may not be the most efficient in terms of cost or time. The exam often tests best practice and reliability over optimization.
In the final stretch, many candidates make the mistake of learning new frameworks or tools not mentioned in the blueprint. The scope of the exam is specific, and trying to study beyond what’s relevant often leads to unnecessary cognitive load. Instead, focus on reinforcing your understanding of Databricks Workflows, Delta Lake, Spark performance tuning, and deployment best practices.
Avoid switching resources frequently in the last few days. Jumping from one guide to another in search of a magical insight is counterproductive. Stick to your core notes and revisit documentation of topics you feel unsure about. It’s about consolidating, not expanding, your knowledge.
Do not attempt full-day study sessions before the exam. Instead, take frequent breaks and do short 60- to 90-minute reviews. If you are sleep-deprived or overwhelmed, your recall and reasoning during the exam will drop significantly.
The exam goes beyond rote memorization. If a question asks about optimizing a pipeline with 10 billion records that need to be joined from multiple sources, think about the implications. You must understand broadcast joins, skew optimization, partition management, and caching — but not just as buzzwords. Understand when and why they are needed.
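For example, a broadcast join hint avoids shuffling a large fact table when the other side is small; large_facts and small_dim below are assumed DataFrames:

from pyspark.sql import functions as F

# Ship the small dimension table to every executor instead of shuffling both sides
joined = large_facts.join(F.broadcast(small_dim), on="customer_id", how="left")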
Try to relate each concept to a real scenario. Imagine ingesting raw clickstream data to a bronze table, transforming it in a silver table with CDC, and aggregating it for business dashboards in a gold table. This mental model helps you remember each concept's place and value in a pipeline.
Databricks also values job orchestration as a critical skill. While knowing how to configure job clusters or retries is essential, it’s equally important to understand how dependencies, retries, and failure handling should be orchestrated for production workloads.
Many candidates walk into the exam hall with technical proficiency but lose marks due to stress or overthinking. In the final 48 hours, reduce your study intensity. Spend time reviewing summaries, diagrams, or mental maps rather than diving deep into new material.
Get a full night’s rest before exam day. Hydrate well, eat a balanced meal, and log in early. Make sure your testing environment is calm, your ID is ready, and all technical setups are functioning. You don’t want to start the exam with adrenaline from technical glitches.
If you get stuck on a tough question during the test, close your eyes for a second, breathe, and move on. Building mental resilience is just as important as building technical knowledge.
After completing the exam, take time to reflect on which areas you were most and least confident in. Whether you pass or not, the process of preparing for this exam elevates your skills tremendously.
If successful, you now possess one of the most respected certifications in the data engineering world. Use this to your advantage by contributing more actively in design meetings, leading discussions on streaming data architectures, or proposing new Lakehouse integrations in your workplace.
This credential is also a signal to hiring managers and technical leads that you can handle real-world, complex data pipelines. It opens doors to senior roles in data platform teams, cloud engineering, and scalable architecture design.
For those who narrowly miss the passing mark, consider the experience as a diagnostic tool. You now know the exam format and your blind spots. You can retake the exam after some time with a much higher chance of success.
Passing this exam is not the end, but the beginning of a broader journey in scalable data engineering. Continue to refine your understanding of Spark 3 optimizations, storage formats like Iceberg or Hudi if relevant, and explore orchestration tools that go beyond Databricks-native solutions. Build a Git-based deployment strategy that scales with organizational growth. Experiment with cross-cloud lakehouse implementations.
Eventually, the knowledge gained during this certification lays the foundation for becoming a data architect or a platform engineer. These roles require an even deeper understanding of performance engineering, security boundaries, cost optimization, and data lineage.
Keep participating in community discussions, forums, and architecture reviews. Share lessons from your certification journey with others and stay up-to-date as Databricks evolves. Features like serverless compute, AI integrations, and native versioning are emerging areas worth exploring.
The Certified Data Engineer Professional exam is more than just a technical test. It’s a deep assessment of how well you understand the practical and theoretical aspects of designing, deploying, and optimizing scalable data platforms. It requires not just knowledge but also judgment, clarity, and experience-based reasoning.
By approaching the exam with a focused strategy, disciplined revision, and the right mindset, you not only increase your chances of passing but also gain valuable expertise that will pay dividends throughout your career. Whether you’re working in real-time analytics, large-scale batch processing, or building reliable data lakes, this certification affirms your ability to handle complex challenges in modern data ecosystems.
Choose ExamLabs to get the latest and updated Databricks Certified Data Engineer Professional practice test questions and exam dumps with verified answers to pass your certification exam. Try our reliable Certified Data Engineer Professional exam dumps, practice test questions, and answers for your next certification exam. Our premium exam files, questions, and answers for Databricks Certified Data Engineer Professional help you pass quickly.