Docker packages your application and its dependencies into a portable, self-contained unit called a container. Containers share the host OS kernel but are isolated from each other through Linux namespaces and cgroups—lighter than virtual machines but just as reproducible. Whether you are running a single Spring Boot service or orchestrating a dozen microservices with Compose, Docker ensures that what works on your laptop works in production. This page walks you through everything from pulling your first image to building multi-stage Dockerfiles and pushing to a private registry.

Core Concepts

Images, containers, and registries

An image is a read-only, layered filesystem template—think of it as a class in object-oriented programming. A container is a running instance of an image—the object. You can start many containers from the same image, each with its own isolated process space, filesystem, and network. Images are built from layers stacked on top of a base image using the Union File System. Each Dockerfile instruction adds a new layer. Layers are cached and reused across builds and across images that share the same base, which keeps storage costs low.
Your App Layer         ← COPY / RUN your code
OpenJDK 21 Layer       ← FROM eclipse-temurin:21
Ubuntu 22.04 Layer     ← base image
bootfs                 ← uses host kernel
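Layer reuse can be modeled loosely in plain shell: Docker addresses each layer by a digest of its content, so rebuilding identical content reproduces the identical layer ID and the build gets a cache hit. This is a simplified illustration, not Docker's actual digest scheme:

```shell
# Simplified model of content-addressed layers: identical input → identical ID.
layer_id() { printf '%s' "$1" | sha256sum | cut -c1-12; }

base=$(layer_id "ubuntu-22.04-rootfs")
jdk=$(layer_id "openjdk-21-on-$base")    # each layer's ID depends on the one below it

# Rebuilding the same content reproduces the same ID, so the layer is reused.
rebuilt=$(layer_id "ubuntu-22.04-rootfs")
[ "$rebuilt" = "$base" ] && echo "cache hit: layer reused"
```

Because each layer's ID incorporates the layer beneath it, changing a low layer (the base image) invalidates every layer above it, while changes near the top leave the lower layers cached.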
A registry stores and distributes images. Docker Hub is the public default. You can also run a private registry inside your own infrastructure (covered in the final section).

Docker daemon and client

The Docker daemon (dockerd) runs as a background service and manages images, containers, networks, and volumes. The Docker client (docker) is the CLI you interact with—it sends API requests to the daemon over a local socket or TCP.

Essential Commands

Service management

systemctl start docker    # start the daemon
systemctl stop docker     # stop the daemon
systemctl restart docker  # restart after config changes
systemctl status docker   # check daemon health
systemctl enable docker   # start on boot

Working with images

docker images              # list local images
docker images -q           # list only image IDs
docker search nginx        # search Docker Hub
docker pull nginx:1.25     # pull a specific version (omit tag for latest)
docker rmi nginx:1.25      # remove an image by name:tag
docker rmi abc123          # remove by image ID

# convert a running container to an image
docker commit <container-id> my-app:v1

# export/import as a tar archive
docker save -o my-app.tar my-app:v1
docker load -i my-app.tar

Managing containers

# run interactively (keeps stdin open, allocates a terminal)
docker run -it --name web centos:7 /bin/bash

# run in the background (detached)
docker run -d --name web nginx:1.25

# port mapping: host 8080 → container 80
docker run -d -p 8080:80 --name web nginx:1.25

# volume mount: host path → container path
docker run -d -v /data/mysql:/var/lib/mysql --name db mysql:8

# view running containers
docker ps

# view all containers (including stopped)
docker ps -a

# enter a running container (exit without stopping it)
docker exec -it web /bin/bash

# view real-time logs
docker logs -f web

# view logs since a timestamp
docker logs --since 30m web

# start / stop / remove
docker start web
docker stop web
docker rm web

# remove all stopped containers (running containers error out and are skipped)
docker rm $(docker ps -aq)

# inspect container metadata (IP, mounts, env, etc.)
docker inspect web

# copy a file between host and container
docker cp dump.sql my-db:/tmp/dump.sql
docker cp my-db:/tmp/output.csv ./output.csv

Common flags

Flag                 Meaning
-i                   Keep stdin open
-t                   Allocate a pseudo-terminal
-d                   Run in the background (detached)
-p host:container    Publish a port
-v host:container    Mount a volume
--name               Assign a name to the container
--network            Connect to a specific network
--restart=always     Restart automatically on daemon start or container crash
Enable auto-restart for a container that already exists:
docker update --restart=always my-container

Writing Dockerfiles

A Dockerfile is a text file containing instructions that build an image layer by layer. Each instruction creates a new layer; layers are cached so unchanged steps are skipped on subsequent builds.
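Because a changed file in the build context invalidates the cache for every COPY or ADD that includes it, it also pays to keep the context small. A .dockerignore file at the project root excludes files from the context entirely; a minimal sketch (the entries are examples—adjust to your project, and don't exclude anything your Dockerfile copies):

```
.git
.idea/
*.log
node_modules/
```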

Core instructions

# Base image
FROM eclipse-temurin:21-jre-alpine

# Set working directory (created if it doesn't exist)
WORKDIR /app

# Copy files from the build context into the image
COPY target/app.jar app.jar

# Run a command during build (installs packages, compiles, etc.)
RUN apk add --no-cache curl    # Alpine base image, so apk rather than apt-get

# Expose a port (documentation only—does not publish)
EXPOSE 8080

# Environment variable available at runtime
ENV SPRING_PROFILES_ACTIVE=prod

# Default command run when container starts
CMD ["java", "-jar", "app.jar"]

CMD vs ENTRYPOINT

  • CMD sets the default command. It can be overridden by arguments passed to docker run.
  • ENTRYPOINT sets the executable that always runs. Arguments from docker run are appended to it rather than replacing it.
A common pattern combines both: ENTRYPOINT sets the executable, CMD provides default arguments that you can override:
ENTRYPOINT ["java", "-jar", "app.jar"]
CMD ["--spring.profiles.active=prod"]
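How Docker combines the two can be sketched in plain shell, outside Docker entirely: the effective command is the ENTRYPOINT followed by either the docker run arguments (if any) or the default CMD. A simplified model for illustration:

```shell
# Simplified model: assemble the effective command the way Docker does.
assemble() {
  entrypoint="java -jar app.jar"
  default_cmd="--spring.profiles.active=prod"
  if [ $# -gt 0 ]; then
    echo "$entrypoint $*"            # docker run args replace CMD
  else
    echo "$entrypoint $default_cmd"  # no args: CMD supplies the defaults
  fi
}

assemble                                 # java -jar app.jar --spring.profiles.active=prod
assemble --spring.profiles.active=dev   # java -jar app.jar --spring.profiles.active=dev
```

To replace the ENTRYPOINT itself rather than just its arguments, docker run has a separate --entrypoint flag.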

Multi-stage builds

Multi-stage builds let you compile code in one image and copy only the output into a smaller runtime image. This keeps your final image lean and free of build tools:
# Stage 1: build
FROM maven:3.9-eclipse-temurin-21 AS builder
WORKDIR /build
COPY pom.xml .
RUN mvn dependency:go-offline            # cache dependencies
COPY src ./src
RUN mvn package -DskipTests

# Stage 2: runtime (no Maven, no source code)
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
COPY --from=builder /build/target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
Build the image:
docker build -t my-app:1.0 .
docker build -t my-app:1.0 -f path/to/Dockerfile .

Deploying a Spring Boot app

FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
COPY springboot.jar app.jar    # COPY is preferred over ADD for plain local files
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
docker build -t my-spring-app:latest .
docker run -d -p 8080:8080 --name spring-app my-spring-app:latest

Docker Compose

Running one container is simple. Running a whole microservice stack—app server, database, cache, reverse proxy—requires coordinating startup order, shared networks, and volume mounts. Docker Compose manages all of this from a single docker-compose.yml file.

Example: nginx + Spring Boot app

version: '3.8'
services:
  app:
    image: my-spring-app:latest
    expose:
      - "8080"
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://db:3306/mydb
      - SPRING_DATASOURCE_PASSWORD=secret
    depends_on:
      - db

  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: mydb
    volumes:
      - db-data:/var/lib/mysql

  nginx:
    image: nginx:1.25
    ports:
      - "80:80"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - app

volumes:
  db-data:
A minimal nginx reverse-proxy config (./nginx/conf.d/default.conf):
server {
    listen 80;
    location / {
        proxy_pass http://app:8080;
    }
}
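One caveat with the file above: depends_on only controls start order, not readiness—the app may start before MySQL accepts connections. Compose can gate startup on a healthcheck instead; a sketch of that pattern (assuming mysqladmin is available in the mysql:8 image, which it is by default):

```yaml
services:
  db:
    image: mysql:8
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      retries: 10

  app:
    image: my-spring-app:latest
    depends_on:
      db:
        condition: service_healthy   # wait until the healthcheck passes
```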

Core Compose commands

docker compose up          # start all services (foreground)
docker compose up -d       # start in the background
docker compose down        # stop and remove containers and networks
docker compose logs -f     # stream logs from all services
docker compose ps          # list running services
docker compose build       # rebuild images
docker compose pull        # pull latest images

Named volumes

Volumes persist data beyond the lifecycle of a container. Declare them explicitly so Compose manages them:
services:
  db:
    image: mysql:8
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:          # Docker-managed volume
Reference an externally created volume:
volumes:
  db-data:
    external: true  # must already exist; Compose won't create it

Networking

Bridge network (default)

Containers on the same default bridge network can reach each other by IP address. However, the IP address of a container can change between restarts, so hard-coding IPs in application config is fragile.

Custom bridge network

Custom networks assign DNS names equal to container names. Any container on the same custom network can reach another by its container name or alias:
# create a custom network
docker network create my-net

# run containers on it
docker run -d --name db --network my-net mysql:8
docker run -d --name app --network my-net my-spring-app

# now 'app' can connect to mysql at hostname 'db'
With aliases, a container can be reached by multiple names:
docker network connect --alias db my-net mysql-container

Host network

A container with --network host shares the host’s network stack. There is no isolation—the container binds directly to host ports. This maximizes performance but prevents running multiple containers that use the same port.
docker run --network host -d my-app

Overlay network

Overlay networks span multiple Docker hosts (used with Docker Swarm or as a building block for Kubernetes). They let containers on different machines communicate as if they were on the same local network.

Network management commands

docker network ls                          # list networks
docker network create my-net              # create a bridge network
docker network inspect my-net             # show network details and connected containers
docker network connect my-net my-container
docker network disconnect my-net my-container
docker network rm my-net                  # remove a network
docker network prune                      # remove all unused networks

Private Registry

Docker Hub is public by default. For proprietary images, you can run a private registry using the official registry image.

Set up the registry

# pull the registry image (pin the major version, as with other images)
docker pull registry:2

# start the registry on port 5000
docker run -d --name registry -p 5000:5000 registry:2

# verify: browse to http://<server-ip>:5000/v2/_catalog
# expected response: {"repositories":[]}
Tell the Docker daemon to trust your insecure (HTTP) registry by editing /etc/docker/daemon.json:
{
  "insecure-registries": ["192.168.1.100:5000"]
}
Restart the daemon:
systemctl restart docker
docker start registry

Push an image to the private registry

# tag the image with the registry address prefix
docker tag my-app:latest 192.168.1.100:5000/my-app:latest

# push it
docker push 192.168.1.100:5000/my-app:latest
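The registry a push goes to is determined entirely by the image name's prefix: if the part before the first slash contains a dot or a colon, Docker treats it as a registry host; otherwise the image goes to Docker Hub. A simplified shell sketch of that rule (hostnames are examples, and the real rule also handles references without any slash):

```shell
# Sketch of Docker's registry-resolution rule for image references.
push_target() {
  prefix="${1%%/*}"    # everything before the first slash
  case "$prefix" in
    *.*|*:*) echo "$prefix" ;;     # contains '.' or ':' → explicit registry host
    *)       echo "docker.io" ;;   # otherwise → Docker Hub
  esac
}

push_target "192.168.1.100:5000/my-app:latest"   # 192.168.1.100:5000
push_target "youruser/my-app:latest"             # docker.io
```

This is why the docker tag step above is required: retagging does not copy any data, it just records a second name whose prefix points the push at your private registry.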

Pull from the private registry

docker pull 192.168.1.100:5000/my-app:latest

Authenticate with Docker Hub before pushing

If you are pushing to Docker Hub (not a private registry), you must tag images with your Docker Hub username and log in first:
docker login
# enter your Docker Hub username and password

docker tag my-app:latest youruser/my-app:latest
docker push youruser/my-app:latest
The image name on Docker Hub must start with your username. Pushing without the correct prefix will fail with denied: requested access to the resource is denied.