Essential Practice Questions for the Docker Certified Associate Exam

Embarking on the journey to become a Docker Certified Associate (DCA) requires a solid grasp of containerization principles and hands-on Docker proficiency. This certification, aimed at practitioners with real-world Docker experience, validates your expertise in building, deploying, and managing applications with Docker. To bolster your confidence and preparedness for the actual examination, this compilation of practice questions, mirroring the style and complexity of the DCA exam, will prove invaluable. Each question is accompanied by a detailed explanation, shedding light on critical concepts and reinforcing key objectives of the certification.

Deep Dive into Docker Orchestration: Understanding Swarm Mode and Service Deployment

Question 1: Global Service Task Distribution

A global service in Docker Swarm mode is designed to execute one task on every node that satisfies the specified placement and resource constraints.

Correct Answer: B

Explanation: Global services ensure that a single instance of a service’s task runs on every eligible node within the swarm. This is distinct from replicated services, which maintain a desired number of tasks distributed across the cluster, potentially having multiple tasks on some nodes and none on others.

Crafting a Pervasive Monitoring Solution with Docker Swarm: A Deep Dive

The contemporary digital landscape necessitates robust and ubiquitous monitoring to maintain the operational integrity of distributed systems. Organizations frequently leverage containerization technologies like Docker to deploy applications with unparalleled agility and consistency. When it comes to pervasive monitoring, the objective is often to ensure that a dedicated monitoring agent or application is actively functioning on every single node within a compute cluster, encompassing both existing infrastructure and any newly provisioned nodes. This comprehensive coverage guarantees no blind spots in data collection, providing a holistic view of system health and performance.

Consider the scenario where an organization requires a specialized custom monitoring application, identified as examplecorp/stats-collector from Docker Hub, to diligently operate across all nodes within their Docker Swarm environment. This includes a crucial mandate: any new nodes joining the Swarm must automatically inherit this monitoring capability. Furthermore, this critical monitoring system has a fundamental dependency on an environment variable, ENDPOINT_ADDRESS, which must be precisely configured to service.example.com for the accurate submission of collected metrics. This intricate requirement underscores the need for a command that not only deploys the application universally but also correctly injects the necessary configuration parameter.

Let’s dissect the various approaches and clarify why a specific command emerges as the optimal solution for this sophisticated deployment challenge. Understanding the nuances of Docker Swarm service creation, particularly concerning deployment modes and environment variable injection, is paramount to achieving the desired operational state.

Unpacking Docker Swarm Service Deployment Modalities

Docker Swarm offers distinct deployment modalities that dictate how tasks, or instances of a service, are distributed across the nodes in a cluster. These modes are fundamental to achieving specific operational outcomes, such as high availability, load balancing, or, as in our case, pervasive deployment.

The first key modality is the replicated mode. This is the default setting when creating a Docker Swarm service. In replicated mode, you explicitly define a fixed number of tasks that Docker Swarm will strive to maintain. If a task fails, Docker Swarm will attempt to reschedule it on an available node to preserve the desired replica count. However, the exact nodes where these tasks reside are not necessarily predetermined; Swarm’s orchestrator intelligently distributes them to optimize resource utilization and availability. While suitable for many stateless or horizontally scalable applications where a specific number of instances is sufficient, replicated mode falls short when the requirement is for every single node to run a specific service instance. For instance, if you specify --replicas=3, Swarm will ensure three instances are running somewhere in the cluster, but not necessarily one on each of your five swarm nodes. This inherent limitation makes it unsuitable for the pervasive monitoring scenario where every node needs to be monitored.

In stark contrast, the global mode fundamentally alters the deployment paradigm. When a service is configured with --mode=global, Docker Swarm ensures that exactly one task of that service is launched and maintained on every single node within the Swarm cluster. This includes manager nodes, worker nodes, and, crucially, any new nodes that are subsequently added to the Swarm. As soon as a new node joins the Swarm, Docker Swarm’s orchestrator automatically deploys an instance of the global service onto it. This automatic provisioning on new nodes is a cornerstone of achieving truly pervasive coverage for applications like monitoring agents, security scanners, or log collectors, where every node’s contribution is vital for a complete operational picture. This characteristic perfectly aligns with the organizational requirement for the examplecorp/stats-collector to operate on all swarm nodes, including newly added ones.
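The contrast between the two modes can be sketched with a pair of commands (a minimal illustration, assuming a running Swarm and using nginx as a stand-in image; the service names are hypothetical):

```shell
# Replicated (default): Swarm keeps exactly three tasks running,
# placed wherever the scheduler decides -- not necessarily one per node.
docker service create --name web --replicas 3 nginx

# Global: exactly one task on every eligible node, including any
# node that joins the swarm later.
docker service create --name agent --mode global nginx
```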

Injecting Environmental Configuration into Containerized Services

Beyond deployment modality, the ability to configure applications dynamically through environment variables is a cornerstone of containerization best practices. Environment variables offer a flexible and robust mechanism for passing configuration data into containers without modifying the container image itself. This promotes immutability of images and allows for different deployments (e.g., development, staging, production) to utilize the same image with varying configurations.

In Docker, the -e flag, or its more verbose counterpart --env, is the standardized method for injecting environment variables into a container. When you use -e KEY="VALUE", Docker ensures that within the running container’s environment, a variable named KEY will be set to the specified VALUE. This is critical for applications that rely on external configuration, such as database connection strings, API keys, or, in our current scenario, the ENDPOINT_ADDRESS for metric submission. Misconfiguring or failing to set these environment variables can lead to application malfunction or an inability to communicate with external services.
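A quick sketch (assuming a running Docker daemon and the public alpine image) shows that a variable passed with -e is visible inside the container's environment while the image itself is unchanged:

```shell
# The variable exists only in this container's environment.
docker run --rm -e ENDPOINT_ADDRESS="service.example.com" alpine \
  sh -c 'echo "$ENDPOINT_ADDRESS"'
# prints: service.example.com
```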

It’s imperative to distinguish the -e flag from other Docker command-line options that might seem superficially similar but serve entirely different purposes. For instance, the --entrypoint flag is used to override the default ENTRYPOINT instruction defined within a Dockerfile. The ENTRYPOINT specifies the command that will be executed when a container starts. While you can pass arguments to the entrypoint, you cannot set environment variables using the --entrypoint flag itself. Attempting to use --entrypoint ADDRESS="service.example.com" would replace the container’s entrypoint with the nonsensical command ADDRESS=service.example.com, rather than setting an environment variable. This fundamental distinction is often a source of confusion for those new to Docker service deployment.

Evaluating the Proposed Deployment Commands

Let’s meticulously analyze the given options in light of our understanding of Docker Swarm deployment modes and environment variable injection:

Option A: docker service create --name stats-collector --replicas=1 --entrypoint ADDRESS="service.example.com" examplecorp/stats-collector

This command suffers from two critical flaws. Firstly, --replicas=1 instructs Docker Swarm to maintain only a single instance of the stats-collector service across the entire cluster. This directly contradicts the requirement for the monitoring application to operate on all swarm nodes. A single replica provides no guarantee of pervasiveness. Secondly, the use of --entrypoint ADDRESS="service.example.com" is fundamentally incorrect for setting an environment variable. The --entrypoint flag is designed to modify the container’s entry point command, not to inject configuration values as environment variables. This misuse would likely lead to an error or the application failing to find its crucial ENDPOINT_ADDRESS configuration. Therefore, Option A is unequivocally unsuitable.

Option B: docker service create --name stats-collector --replicas=auto --entrypoint ADDRESS="service.example.com" examplecorp/stats-collector

Option B introduces a non-existent and invalid value for the --replicas flag: auto. The --replicas option exclusively accepts a numerical value specifying the desired number of tasks. There is no concept of “auto” scaling or automatic replica determination in this context without additional tools or configurations beyond the basic docker service create command itself. Furthermore, similar to Option A, it incorrectly employs the --entrypoint flag to attempt to set an environment variable. These combined inaccuracies render Option B entirely unviable for the stated requirements.

Option C: docker service create --name stats-collector --mode=global -e ENDPOINT_ADDRESS="service.example.com" examplecorp/stats-collector

This command precisely aligns with all the specified requirements, making it the unequivocally correct solution. The --mode=global flag is the linchpin here. It dictates that Docker Swarm will deploy exactly one instance of the stats-collector service on every single node within the Swarm cluster. This ensures comprehensive coverage, encompassing both existing nodes and any new nodes that may join the Swarm in the future. As new nodes are integrated, Swarm’s orchestration capabilities will automatically provision the stats-collector on them, maintaining the desired pervasive monitoring footprint.

Equally important is the use of -e ENDPOINT_ADDRESS="service.example.com". This is the correct and standard method for injecting the ENDPOINT_ADDRESS environment variable into the examplecorp/stats-collector container. The application running inside the container will then be able to access this variable and use service.example.com as the destination for submitting its collected metrics. The combination of global deployment and accurate environment variable injection makes this command the perfect fit for the described scenario.
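Once the service is created, its placement and configuration can be verified from a manager node (a sketch, assuming the stats-collector service from the scenario above):

```shell
# One task should be listed for every node in the swarm.
docker service ps stats-collector \
  --format 'table {{.Name}}\t{{.Node}}\t{{.CurrentState}}'

# Confirm the global mode and the injected environment variable.
docker service inspect stats-collector \
  --format '{{json .Spec.Mode}} {{json .Spec.TaskTemplate.ContainerSpec.Env}}'
```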

Option D: docker service create --name stats-collector --mode=replicated -e ENDPOINT_ADDRESS="service.example.com" examplecorp/stats-collector

While Option D correctly utilizes the -e flag for setting the environment variable, its fatal flaw lies in the --mode=replicated flag. As previously discussed, replicated mode, which is the default, creates a specified number of tasks (by default, one if --replicas is not explicitly set) and distributes them across the swarm. It does not guarantee that a task will run on every single node. If the organization has, for example, ten swarm nodes, and replicated mode is used without specifying a replica count of ten, only a small subset of nodes would run the monitoring application. This directly violates the core requirement for the monitoring application to operate on all swarm nodes, including newly joined ones. Therefore, despite the correct environment variable injection, Option D fails to meet the fundamental deployment objective.

Precision in Swarm Service Management

The meticulous selection of Docker Swarm command-line arguments is paramount for achieving desired application deployment and operational outcomes. For a global monitoring application like examplecorp/stats-collector, the core requirement is to ensure its presence on every single node within the Swarm, accommodating future infrastructure growth. This necessitates the --mode=global flag. Simultaneously, providing essential configuration, such as the metric submission endpoint, demands the correct application of environment variables using the -e flag.

The example presented vividly illustrates that a thorough understanding of Docker Swarm’s capabilities, particularly its deployment modes and configuration mechanisms, is indispensable for effective container orchestration. Misinterpretations of command-line arguments, such as confusing --entrypoint with -e for environment variables, can lead to failed deployments or applications operating suboptimally. In the dynamic world of containerized applications, precision in deployment commands translates directly into system reliability, comprehensive monitoring, and ultimately, enhanced operational efficiency. DCA candidates often encounter scenarios requiring such precise command crafting, underscoring the importance of internalizing these fundamental Docker Swarm concepts.

Question 3: Discerning Truths about Docker Swarm Mode

Which of the following statements about Docker Swarm mode is NOT true?

  A. You can deploy both manager and worker nodes using the Docker Engine.
  B. For each service, you can specify the desired number of tasks, and the swarm manager automatically adjusts to maintain this state.
  C. The swarm manager automatically assigns addresses to containers on the overlay network during application initialization or updates.
  D. Docker Swarm mode is a plugin that needs to be installed alongside Docker to manage a cluster of Docker Engines.

Correct Answer: D

Explanation:

  • Options A, B, and C are all accurate descriptions of Docker Swarm mode’s capabilities. Docker Engine natively supports the deployment of both manager and worker nodes. Swarm managers indeed manage task scaling for services, and they handle IP address assignment on overlay networks.
  • Option D is incorrect. Docker Swarm mode is an integral feature built directly into the Docker Engine as of Docker Engine 1.12. It does not require a separate plugin installation, making cluster management capabilities natively available with a basic Docker installation. Additionally, communication between swarm nodes is secured through TLS mutual authentication and encryption.

Mastering Docker Images: Creation, Management, and Registry Interactions

Question 4: Excluding Python Byte-Code Files During Image Creation

Which pattern effectively prevents all Python byte-code files (.pyc) from being copied into a Docker image during its creation process?

  A. **.pyc
  B. **/*.pyc
  C. *.pyc
  D. /*.pyc

Correct Answer: B

Explanation: The correct pattern for excluding files across all directories is **/*.pyc.

  • The ** wildcard matches any number of directories (including zero).
  • The / separates the directory portion of the pattern from the filename portion, so the match applies at any directory depth.
  • *.pyc matches any file ending with .pyc. Therefore, **/*.pyc correctly targets all Python byte-code files located in any directory within the build context. This syntax is based on filepath.Match rules with Docker’s extension for the ** wildcard.
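In practice the pattern goes into a .dockerignore file at the root of the build context. A minimal sketch (the __pycache__ entry is an additional common exclusion, not part of the question):

```shell
# Create a .dockerignore next to the Dockerfile.
cat > .dockerignore <<'EOF'
# Exclude Python byte-code in every directory of the context
**/*.pyc
# Directories of cached byte-code, commonly excluded as well
**/__pycache__
EOF
```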

Question 5: Understanding the Relationship Between Images and Containers

Which two statements accurately describe the relationship between Docker images and containers?

  A. An image is a collection of immutable layers, whereas a container is a running instance of an image.
  B. A container can exist without an image, but an image cannot exist without a container.
  C. Only one container can be spawned from a given image at a time.
  D. If multiple containers are spawned from the same image, they all utilize the same copy of the image in memory.

Correct Answers: A and D

Explanation:

  • Option A is correct. Docker images are composed of read-only, immutable layers, with each layer representing a Dockerfile instruction. A container is the executable, runtime instantiation of an image, adding a writable layer on top of the image’s read-only layers.
  • Option B is incorrect. Containers are always created from images; an image is their blueprint. A container cannot exist independently without an underlying image.
  • Option C is incorrect. Docker is designed for scalability. Multiple isolated containers can be concurrently spawned from a single image.
  • Option D is correct. When multiple containers are launched from the same image, Docker optimizes resource utilization by loading a single copy of the image’s read-only layers into memory. Each container then receives its own distinct, writable layer for specific runtime changes, ensuring efficient memory usage.

Question 6: Building a Docker Image from a Custom Dockerfile Name

You are in a directory containing a file named Dockerfile-app. To build a Docker image using this specific file without renaming it to Dockerfile, which command is correct?

  A. docker build -d Dockerfile-app
  B. docker build -f Dockerfile-app
  C. docker build --dockerfile Dockerfile-app
  D. docker build --from-file Dockerfile-app

Correct Answer: B

Explanation:

  • Option A is incorrect as -d is not a valid flag for docker build in this context.
  • Option B is correct. The -f (or --file) flag is specifically used with docker build to designate an alternative Dockerfile name.
  • Options C and D are incorrect; --dockerfile and --from-file are not valid flags for this operation.
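A minimal invocation might look like this (the tag myapp:latest and the trailing context path . are illustrative):

```shell
# -f points at the alternative Dockerfile; '.' is the build context.
docker build -f Dockerfile-app -t myapp:latest .
```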

Docker Installation and Configuration: Setting Up Your Environment

Question 7: Default Logging Driver in a Fresh Docker Installation

Bob performs a fresh installation of Docker on his new Linux server, without any custom configurations. He then runs a new container using docker run -d nginx. Which logging driver will this container utilize by default?

  A. Syslog
  B. Logentries
  C. Json-file
  D. Journald

Correct Answer: C

Explanation: The json-file logging driver is the default logging mechanism employed by Docker when no other logging driver is explicitly configured. It stores container logs in JSON format.
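Both the daemon-wide default and the driver of a particular container can be checked (a sketch, assuming a running daemon; <container> is a placeholder):

```shell
# Daemon-wide default logging driver -- "json-file" on a fresh install.
docker info --format '{{.LoggingDriver}}'

# Driver actually used by one specific container.
docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container>
```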

Question 8: Web Dashboard for Docker Cluster Management

Which of the following provides a web-based dashboard for managing a Docker cluster?

  A. Docker Swarm mode
  B. Docker UCP
  C. Docker-compose
  D. DTR

Correct Answer: B

Explanation:

  • Docker Swarm mode is the clustering capability built into Docker Engine, but it doesn’t inherently provide a web dashboard for management.
  • Docker Universal Control Plane (UCP) is a component of Docker Enterprise that includes a comprehensive web-based graphical user interface (GUI) and a command-line interface (CLI) for effectively managing Docker Swarm clusters.
  • Docker Compose is a tool for defining and running multi-container Docker applications on a single host.
  • Docker Trusted Registry (DTR) is an enterprise-grade image storage and management solution, also part of Docker Enterprise, but it’s focused on registry functionalities, not overall cluster management.

Docker Networking: Connecting Containers and Hosts

Question 9: Behavior of Containers Using Host Network Mode

Bob runs a container with the --net=host option, and multiple other containers are already running on the host. Which of the following statements will NOT be true regarding this container’s networking?

  A. The container will utilize the host’s network namespace, including its network interfaces and IP stack.
  B. All containers sharing the host network are capable of communicating with each other on the host interfaces.
  C. Because they are using the host networking namespace, two containers are able to bind to the same TCP port.
  D. From a networking standpoint, this is equivalent to multiple processes running directly on a host without containers.

Correct Answer: C

Explanation:

  • Options A, B, and D are true. When a container uses --net=host, it shares the network namespace of the host machine. This means it directly uses the host’s network interfaces and IP addresses, allowing direct communication with other processes (containerized or not) on the host. Network-wise, it’s as if the application is running directly on the host.
  • Option C is incorrect. In any given network namespace (including the host’s), only one process or container can bind to a specific TCP port at a time. If two containers or processes attempt to bind to the same port on the host’s network namespace, a port conflict will occur.

Question 21: Attaching an Existing Network to a Running Container

Which command can be used to attach an existing network named net1 to a container named container1, which is currently running in a network named net2?

  A. docker network connect net1 net2 container1
  B. docker network connect net1 container1
  C. docker connect network net1 net2
  D. docker connect network net1 container1
  E. docker connect network net1 net2 container1

Correct Answer: B

Explanation: The correct syntax for connecting a running container to an existing network is docker network connect <network_name> <container_name>. The fact that container1 is already on net2 is irrelevant to the command for connecting it to net1. Docker will handle the necessary adjustments.
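A short sketch, assuming net1 already exists and container1 is running:

```shell
# container1 keeps its interface on net2 and gains one on net1.
docker network connect net1 container1

# Verify: both network names should be listed.
docker inspect container1 \
  --format '{{range $name, $v := .NetworkSettings.Networks}}{{$name}} {{end}}'
```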

Question 22: Assigning Static IP to a Container

Which of the following is a valid command to assign a static IP address to a container?

  A. docker run --static-ip 172.18.0.22 <image>
  B. docker run --ip 172.18.0.22 <image>
  C. None of the above
  D. docker run --network-ip 172.18.0.22 <image>

Correct Answer: C

Explanation: Options A and D use flags that do not exist, and Option B, although --ip is a real docker run flag, omits the prerequisite custom network. While you can assign a static IP to a container, it requires a custom network to be created first, as static IPs are not supported on Docker’s default bridge network; none of the listed commands works on its own. The correct sequence is:

  1. Create a custom bridge network with a specified subnet: docker network create --subnet=172.18.0.0/16 mycustomnet
  2. Then, run the container and assign a static IP within that custom network: docker run --net mycustomnet --ip 172.18.0.22 -it ubuntu bash
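Put together, with a verification step (the container name box is hypothetical, and nginx stands in for the image):

```shell
# 1. Custom bridge network with an explicit subnet.
docker network create --subnet=172.18.0.0/16 mycustomnet

# 2. Run the container with a fixed address on that network.
docker run -d --net mycustomnet --ip 172.18.0.22 --name box nginx

# 3. Confirm the assigned address.
docker inspect box \
  --format '{{(index .NetworkSettings.Networks "mycustomnet").IPAddress}}'
```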

Docker Security: Safeguarding Your Containerized Applications

Question 10: Components of Grants in Docker UCP

Grants in Docker Universal Control Plane (UCP) are composed of which elements?

  A. Subject, role, and resource set
  B. Subject, role, and containers
  C. Nodes, role, and containers
  D. Subject, node, and containers
  E. Images, containers, and nodes

Correct Answer: A

Explanation: A grant in Docker UCP is fundamentally an Access Control List (ACL) that defines who (the subject – user or team) can access what (the resource set – a collection of resources like services, volumes, or nodes) in what manner (the role – defining permissions like viewer, editor, or administrator). This granular control allows administrators to define comprehensive access policies.

Question 11: Understanding SELinux and Docker Compatibility

Which of the following statements about SELinux is NOT true?

  A. SELinux stands for Security-Enhanced Linux.
  B. SELinux provides a mechanism for supporting access control security policies.
  C. SELinux comes bundled with Docker.
  D. SELinux is a set of kernel modifications and user-space tools integrated into various Linux distributions.

Correct Answer: C

Explanation:

  • Options A, B, and D are true statements. SELinux (Security-Enhanced Linux) is a Linux kernel security module that provides mechanisms for supporting access control security policies, enhancing the security posture of the operating system. It is a set of kernel modifications and user-space tools that are typically integrated into various Linux distributions.
  • Option C is incorrect. SELinux is independent of Docker. While Docker is designed to be compatible with SELinux, SELinux itself is a separate security subsystem of the Linux kernel and is not “bundled” with Docker. You enable and configure SELinux at the operating system level.

Question 23: Limiting Container Memory Usage

Bob needs to test an untrusted Docker image that has a memory leak, causing it to consume memory rapidly and crash other system programs. Bob wants to run this container with a maximum memory limit of 512MB. Which command option should Bob use?

  A. docker run --limit 512m
  B. docker run --limit 512
  C. docker run -m 512m
  D. docker run -m 512

Correct Answer: C

Explanation:

  • Options A and B are incorrect because --limit is not a valid flag for specifying memory limits in docker run.
  • Option C is correct. The -m (or --memory) flag sets the memory limit for a container. The value can include a unit suffix (e.g., m for megabytes, g for gigabytes), so 512m correctly sets a limit of 512 megabytes.
  • Option D is incorrect because -m 512 without a unit is interpreted as 512 bytes, which falls below Docker’s minimum allowed memory limit of 6 MB, so the daemon rejects the command outright.
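Bob's run, plus a check of the applied limit, might look like this (untrusted/leaky-app and the container name leaky are placeholders):

```shell
# Cap the container at 512 MB; the kernel OOM-kills it if the leak exceeds that.
docker run -d -m 512m --name leaky untrusted/leaky-app

# The limit is stored in bytes: 512 * 1024 * 1024 = 536870912.
docker inspect leaky --format '{{.HostConfig.Memory}}'
```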

Question 24: Addressing Root Key Loss in Docker Content Trust (DCT)

What is the recommended action when facing a loss of the root key in Docker Content Trust (DCT)?

  A. Regenerate a new root key.
  B. Sign existing user certificates with a new root key.
  C. Contact Docker support.
  D. Create a new DCT cluster.

Correct Answer: C

Explanation: The root key in Docker Content Trust is critically important. Its loss effectively compromises the entire trust chain. There are no simple user-level commands or procedures to recover from or unilaterally resolve the loss of a DCT root key. In such a severe security incident, the recommended and often only viable course of action is to contact Docker support (or the vendor of your Docker Enterprise solution) for assistance in recovery or establishing a new trust hierarchy. Attempting to regenerate or re-sign keys without proper procedures can further compromise the system.

Docker Storage and Volumes: Managing Data Persistence

Question 12: Updating Secrets in Docker Swarm

Bob intends to update a secret currently utilized by one of his services. What is the correct and safest sequence of actions for him to perform?

  A. Update the existing secret using docker secret update.
  B. Update the existing secret, then restart all services using this secret.
  C. Create a new secret, update the service to use this new secret, then delete the old secret.
  D. Create a new secret, create a new service with this new secret, then delete the old secret and old service.

Correct Answer: C

Explanation:

  • Options A and B are incorrect. Secrets in Docker Swarm are immutable, meaning they cannot be directly modified after creation.
  • Option C is correct. To “update” a secret, you must:
    1. Create a new secret with the desired updated content.
    2. Update the existing service to utilize this newly created secret. Docker Swarm will automatically handle the rolling update, restarting tasks to incorporate the new secret.
    3. Once the service has successfully adopted the new secret, the old secret can be safely deleted.
  • Option D is overly complex and unnecessary, as services can be updated to reference new secrets without requiring the creation of an entirely new service.
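The rotation can be performed without recreating the service (a sketch; the secret and service names are hypothetical):

```shell
# 1. Create the replacement secret (contents read from stdin).
printf 'new-password' | docker secret create db_pass_v2 -

# 2. Swap secrets on the running service; Swarm performs a rolling restart.
docker service update \
  --secret-rm db_pass_v1 \
  --secret-add db_pass_v2 \
  myservice

# 3. Delete the old secret once no service references it.
docker secret rm db_pass_v1
```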

Question 16: Characteristics of Docker Image Layers

Docker images are composed of read-only layers, where each layer corresponds to a Dockerfile instruction. These layers are stacked, and each represents a delta of changes from the preceding layer.

Correct Answer: C

Explanation: Docker images are built from a series of read-only layers. When a container is launched from an image, a new, writable layer is added on top of these read-only image layers. This architecture allows for efficient sharing of image layers among multiple containers and ensures image immutability.

Question 17: Inaccuracies in Multi-Stage Builds

Which of the following statements about multi-stage builds is NOT TRUE?

  A. Multi-stage builds eliminate the need for separate Dockerfiles.
  B. Multi-stage builds aid in the creation of smaller image sizes.
  C. You cannot select which step you want to begin your build process in a multi-stage build once all steps have been defined.
  D. With multi-stage builds, you can create images tailored for different purposes, such as development and production.

Correct Answer: C

Explanation:

  • Option A is true. Multi-stage builds consolidate multiple build steps (e.g., compiling code, then packaging) into a single Dockerfile, eliminating the need for separate build and runtime Dockerfiles.
  • Option B is true. By allowing intermediate build artifacts to be discarded, multi-stage builds enable the creation of lean, production-ready images that contain only the necessary runtime components, significantly reducing image size.
  • Option C is incorrect. You can select a target build stage using the --target flag with docker build. This allows you to build up to a specific intermediate stage, which is useful for debugging or creating different purpose-built images from the same Dockerfile.
  • Option D is true. Multi-stage builds are excellent for creating specialized images. For example, one stage might include development tools and source code, while a later stage creates a slim image containing only the compiled application for deployment.
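A sketch of a two-stage Dockerfile and a --target build (the Go application and image tags are illustrative):

```shell
cat > Dockerfile <<'EOF'
# Stage 1: build environment with the full toolchain.
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN go build -o /app .

# Stage 2: slim runtime image containing only the binary.
FROM alpine AS production
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
EOF

# Build only up to the "builder" stage (e.g., for debugging).
docker build --target builder -t myapp:build .

# A full build runs all stages and yields the slim production image.
docker build -t myapp:prod .
```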

Question 18: Default Tag for Docker Image Pulls

If no tag is explicitly specified when executing the docker pull command, which tag is pulled by convention?

  A. Production
  B. Staging
  C. Latest
  D. Master

Correct Answer: C

Explanation: By default, when no tag is provided during a docker pull operation, the latest tag is assumed and the image associated with that tag is retrieved from the registry. While “production,” “staging,” or “master” tags can be manually created and used, “latest” is the established default convention.

Question 19: Default Location of Secrets within a Docker Container

What is the default location for secrets inside a Docker container that is part of a Docker Swarm service?

  A. /run/secrets/
  B. /secrets/
  C. /var/run/
  D. /var/secrets/

Correct Answer: A

Explanation: When secrets are mounted into a Docker service’s containers, they are typically made available as files within the /run/secrets/ directory by default. This is a temporary filesystem that only exists while the container is running, providing a secure way to manage sensitive data.

Question 20: Configuring the Default Docker Daemon Logging Driver

How do you configure the Docker daemon’s default logging driver to be the syslog driver?

  A. On /etc/docker/daemon.yaml or C:\ProgramData\docker\config\daemon.yaml, just add: log-driver: "syslog"
  B. On /etc/docker/daemon.json or C:\ProgramData\docker\config\daemon.json, just add: { "log-driver": "syslog" }
  C. On /etc/docker/daemon.cfg or C:\ProgramData\docker\config\daemon.cfg, just add: { "log-driver": "syslog" }
  D. On /etc/docker/daemon.cfg or C:\ProgramData\docker\config\daemon.cfg, just add: log-driver: "syslog"
  E. On /etc/docker/daemon.conf or C:\ProgramData\docker\config\daemon.conf, just add: log-driver: "syslog"
  F. On /etc/docker/daemon.conf or C:\ProgramData\docker\config\daemon.conf, just add: { "log-driver": "syslog" }

Correct Answer: B

Explanation: The Docker daemon’s configuration is managed through a JSON file named daemon.json. This file is typically located at /etc/docker/daemon.json on Linux systems and C:\ProgramData\docker\config\daemon.json on Windows. The correct way to specify the default logging driver is to add the "log-driver": "syslog" key-value pair within the JSON object.
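On a Linux host the change might be applied as follows (a sketch; note that tee overwrites any existing daemon.json, so merge the setting by hand if the file already exists):

```shell
# Write the daemon configuration (requires root).
echo '{ "log-driver": "syslog" }' | sudo tee /etc/docker/daemon.json

# The daemon must be restarted for the setting to take effect.
sudo systemctl restart docker

# Newly created containers now default to syslog.
docker info --format '{{.LoggingDriver}}'
```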

Question 25: Persistence of Data in Writable Container Layers

Which of the following statements is NOT TRUE? By default, all files created inside a container are stored on a writable container layer. This means that:

  A. The data persists when that container no longer exists.
  B. Two different containers cannot share the data present in their writable layer.
  C. A container’s writable layer is tightly coupled to the host machine where the container is running. You cannot easily move the data somewhere else.
  D. Writing into a container’s writable layer requires a storage driver to manage the filesystem.

Correct Answer: A

Explanation:

  • Option A is incorrect. Data stored on a container’s writable layer is ephemeral; it is lost when the container is deleted. This is a fundamental characteristic that necessitates the use of volumes or bind mounts for persistent data storage.
  • Option B is true. Each container gets its own isolated writable layer, ensuring that changes made by one container do not affect others running from the same image. This isolation is achieved through separate mount namespaces.
  • Option C is true. The writable layer is tied to the specific host where the container is running. Moving a container (and its writable layer data) to another host is not straightforward without specialized tools or strategies (like committed images or external volumes).
  • Option D is true. Docker uses a storage driver (e.g., OverlayFS, AUFS) to manage the union file system that comprises the read-only image layers and the writable container layer.
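The standard remedy for the ephemeral writable layer is a named volume, which outlives any container that mounts it (a sketch; the volume and container names are hypothetical):

```shell
# Data written under /data lands in the volume, not the writable layer.
docker volume create appdata
docker run -d -v appdata:/data --name app1 nginx
docker rm -f app1          # the container's writable layer is gone...

# ...but a new container mounting the same volume sees the same data.
docker run -d -v appdata:/data --name app2 nginx
```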

Summary

These practice questions provide a robust foundation for your Docker Certified Associate exam preparation. By meticulously reviewing these questions and their detailed explanations, you have gained valuable insights into the core concepts of Docker orchestration, image management, networking, security, and storage. To solidify your understanding and ensure complete readiness for the actual certification, it is highly recommended to engage in additional practice tests and hands-on exercises. Continued learning and practical application of these principles will undoubtedly enhance your proficiency and confidence.