How to Build Docker Images You Can Trust
Introduction
Docker has become the de facto standard for containerizing applications, enabling developers to package code, dependencies, and configurations into portable, consistent units. However, with widespread adoption comes increased risk. Not all Docker images are created equal. Many public images on Docker Hub or other registries contain outdated packages, hidden malware, unnecessary binaries, or misconfigurations that can compromise entire systems. Building a Docker image you can trust isn't just about functionality; it's about security, integrity, and reliability. This article outlines the top 10 proven methods to build Docker images you can trust, ensuring your deployments are secure, auditable, and production-ready. Whether you're a DevOps engineer, a security analyst, or a developer managing containerized workloads, these practices will help you eliminate guesswork and establish a foundation of trust in your container ecosystem.
Why Trust Matters
Trust in Docker images is not a luxury; it's a necessity. A single compromised image can lead to data breaches, unauthorized access, regulatory violations, and service outages. In 2023, over 60% of containerized environments experienced at least one security incident tied to a vulnerable or untrusted base image, according to the Cloud Native Computing Foundation. Many organizations assume that because an image is publicly available or labeled "official," it is safe. This assumption is dangerously flawed.
Untrusted images often contain:
- Outdated operating system packages with known CVEs
- Hidden backdoors or cryptocurrency miners embedded during build time
- Excessive permissions granted to the root user
- Unnecessary tools like curl, wget, or ssh that expand the attack surface
- Missing or improperly configured security scanners
Building a trusted image requires a shift in mindset: from "it works" to "it's secure, minimal, and verifiable." Trust is earned through process, not reputation. It's built through reproducible builds, signed artifacts, dependency scanning, and strict access controls. Organizations that prioritize image trust reduce their mean time to remediation (MTTR) by up to 70% and significantly lower their exposure to supply chain attacks. The goal is not just to run containers; it's to run containers you can confidently audit, monitor, and defend.
Top 10 Ways to Build a Docker Image You Can Trust
1. Use Official or Verified Base Images
The foundation of every trusted Docker image is its base layer. Always start with official images published by trusted vendors, such as Docker's official repositories on Docker Hub, Red Hat's UBI (Universal Base Image), or Debian's official images. These images undergo rigorous security reviews, are updated regularly with patched packages, and are digitally signed. Avoid third-party or user-submitted images unless they have been explicitly verified through your organization's internal approval process.
When selecting a base image, prioritize minimalism. For example, prefer python:3.11-slim over python:3.11, or node:20-alpine over node:20. Smaller images mean fewer packages, fewer vulnerabilities, and less surface area for exploitation. Verify the image's digest (SHA256 hash) before pulling it. Use the digest instead of a tag to lock your build to a specific, immutable version:
docker pull python@sha256:8a4a7e2f9a1b5c8d7e6f5a4b3c2d1e0f9a8b7c6d5e4f3a2b1c0d9e8f7a6b5c4d3
This prevents accidental upgrades or tampering through tag manipulation. Always cross-reference the image's digest with the official source to ensure authenticity.
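The same pinning can be enforced directly in the Dockerfile. A minimal sketch (the digest below is a placeholder; resolve the real one from your registry first):

```dockerfile
# Pin the base image to an immutable digest rather than a mutable tag.
# Resolve the current digest first, e.g.:
#   docker pull python:3.11-slim
#   docker inspect --format='{{index .RepoDigests 0}}' python:3.11-slim
FROM python:3.11-slim@sha256:<digest-resolved-from-the-registry>

# The rest of the build proceeds as usual; the base layer can no longer
# change underneath you through tag manipulation.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
```

Because the digest is content-addressed, any modification to the base image yields a different digest and the build fails loudly instead of silently pulling altered content.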
2. Implement Multi-Stage Builds to Reduce Attack Surface
Multi-stage builds are one of the most effective techniques for minimizing the final image size and removing unnecessary tools. In traditional builds, developers often install compilers, build tools, and debug utilities inside the same container that runs in production. These tools are not needed at runtime and only increase risk.
With multi-stage builds, you separate the build environment from the runtime environment. For example:
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp .
FROM alpine:3.18
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/myapp .
CMD ["./myapp"]
In this example, the final image contains only the compiled binary and the minimal CA certificates required for HTTPS. The Go compiler, source code, and build dependencies are discarded. This approach reduces image size by up to 90% and eliminates potential exploit vectors like shell access, package managers, or development libraries.
Use distinct stages for different purposes: one for testing, one for linting, one for compiling, and one for deployment. This not only improves security but also enhances build reproducibility and CI/CD pipeline efficiency.
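Extending the Go example above, a sketch of that stage layout (stage names and the test commands are illustrative):

```dockerfile
# Stage 1: shared build environment
FROM golang:1.21 AS base
WORKDIR /app
COPY . .

# Stage 2: run static analysis and tests; fails the build on error
FROM base AS test
RUN go vet ./... && go test ./...

# Stage 3: compile the binary
FROM base AS builder
RUN go build -o myapp .

# Stage 4: minimal runtime image; none of the tooling above survives
FROM alpine:3.18
RUN apk --no-cache add ca-certificates
COPY --from=builder /app/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

One caveat: BuildKit only builds stages the final stage depends on, so the test stage must be invoked explicitly in CI, for example with docker build --target test .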
3. Scan Images for Vulnerabilities Before Deployment
Automated vulnerability scanning is non-negotiable. Even official images can contain outdated or unpatched packages. Use trusted scanning tools like Trivy, Clair, or Snyk to analyze your images before pushing them to production registries. Integrate these tools into your CI/CD pipeline to block builds that exceed your organization's vulnerability thresholds.
For example, with Trivy:
trivy image --severity HIGH,CRITICAL myapp:latest
This command scans for high and critical CVEs. Configure your pipeline to fail if any critical vulnerabilities are detected. Additionally, scan for misconfigurations using Trivy's config scan mode:
trivy config --severity HIGH,CRITICAL .
Use policies to enforce compliance. For example, disallow images with CVEs older than 30 days, or images running as root. Store scan results in a centralized dashboard for audit trails. Some organizations integrate scanning with GitOps workflows, where image deployment is gated until a scan report is signed off by a security reviewer.
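As a sketch of such a pipeline gate, a CI job in GitHub Actions syntax (the image name is illustrative, and Trivy is assumed to be preinstalled on the runner or installed in a prior step):

```yaml
# Illustrative GitHub Actions job: build the image, then gate on a Trivy scan.
jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan image (fail the job on HIGH/CRITICAL findings)
        run: |
          trivy image --exit-code 1 --severity HIGH,CRITICAL \
            myapp:${{ github.sha }}
```

Trivy's --exit-code flag makes the scan result a hard gate: the job fails, and the image never reaches the registry, when findings at or above the threshold exist.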
4. Sign Images with Cosign or Notary
Image signing ensures that the image you're deploying is the same image that was built and approved by your team. Without signing, an attacker who compromises your registry could replace a legitimate image with a malicious one without detection.
Use Cosign (by Sigstore) to cryptographically sign your Docker images. Cosign uses public-key cryptography to generate a signature that is stored alongside the image in the registry. To sign an image:
cosign sign --key cosign.key myregistry.example.com/myapp:latest
To verify it before deployment:
cosign verify --key cosign.pub myregistry.example.com/myapp:latest
Public keys can be stored in a secure key management system or embedded in your CI/CD pipeline. This creates a chain of trust: you know the image came from your build system and hasn't been altered. Notary is another option, though Cosign has become the de facto standard due to its simplicity and integration with modern toolchains.
Combine image signing with image scanning. A signed image with critical vulnerabilities is still dangerous. Signing ensures authenticity; scanning ensures safety.
5. Run Containers as Non-Root Users
By default, Docker containers run as the root user inside the container. This means that if an attacker exploits a vulnerability in your application, they gain full root privileges within the container, potentially allowing them to escape to the host system.
Always create a non-root user in your Dockerfile and switch to it before running your application:
# Continuing the multi-stage example above (the builder stage is defined earlier)
FROM alpine:3.18
RUN apk --no-cache add ca-certificates
RUN addgroup -g 1001 -S appuser && adduser -u 1001 -S appuser -G appuser
COPY --from=builder /app/myapp /home/appuser/
USER appuser
CMD ["./myapp"]
This limits the damage an attacker can do if they compromise the process. Even if they gain shell access, they cannot modify system files, install packages, or access sensitive host resources.
Ensure your application doesn't require root privileges to function. Most modern applications (Node.js, Python, Go, Java) can run perfectly well as non-root users. If your app needs to bind to low-numbered ports (like 80 or 443), use a reverse proxy like Nginx or Traefik running on the host, or use Linux capabilities like CAP_NET_BIND_SERVICE with fine-grained permissions instead of root.
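If the binary genuinely must bind a privileged port as a non-root user, one option is to grant only that single capability with setcap. A sketch, assuming a builder stage from a multi-stage build (the myapp binary and user IDs are illustrative; the libcap package provides the setcap tool on Alpine):

```dockerfile
FROM alpine:3.18
RUN apk --no-cache add ca-certificates libcap
COPY --from=builder /app/myapp /usr/local/bin/myapp
# Grant only the capability needed to bind ports below 1024,
# instead of running the whole process as root.
RUN setcap 'cap_net_bind_service=+ep' /usr/local/bin/myapp
RUN addgroup -g 1001 -S appuser && adduser -u 1001 -S appuser -G appuser
USER appuser
CMD ["myapp"]
```

The process then runs as an unprivileged user with exactly one extra capability, rather than the full root privilege set.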
6. Pin All Dependencies and Avoid Latest Tags
Using floating tags like latest, stable, or latest-alpine is one of the most dangerous practices in Docker. These tags change over time, meaning your build today may produce a different image tomorrow. This breaks reproducibility and introduces unpredictability.
Always pin versions explicitly:
- Use node:20.12.1 instead of node:20
- Use python:3.11.6-slim instead of python:3.11
- Pin package versions in package managers:
pip install requests==2.31.0
In multi-stage builds, pin the base image in every stage. In your application's dependency files (like package.json, Pipfile, or go.mod), lock dependencies using tools like npm ci, pip-tools, or go mod tidy. This ensures every build uses the exact same dependency tree.
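For a Node.js service, for example, the pinned pattern looks like this sketch (the base tag and app layout are illustrative):

```dockerfile
# Pin the base image in every stage, and install from the lockfile only.
FROM node:20.12.1-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
# npm ci installs the exact tree recorded in package-lock.json
# and fails if the lockfile and package.json disagree.
RUN npm ci --omit=dev
```

Unlike npm install, npm ci never mutates the lockfile, so two builds from the same commit always resolve the same dependency tree.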
Use tools like Renovate or Dependabot to automate dependency updates, but only after automated tests and vulnerability scans pass. Never allow automatic updates to bypass security checks.
7. Use .dockerignore to Exclude Sensitive Files
The .dockerignore file works like .gitignore but for Docker builds. It prevents unnecessary or sensitive files from being copied into the build context. Many developers overlook this file, inadvertently including secrets, configuration files, or source code that should never be baked into the image.
Example .dockerignore:
.env
node_modules
.git
README.md
Dockerfile*
tests/
*.log
secret.key
This ensures that environment variables, API keys, and test files are never included in the image. Even if you use multi-stage builds, the build context is still uploaded to the Docker daemon during the build process. If a file is in the context, it can be accessed by malicious layers or intermediate containers.
Additionally, avoid using COPY . /app unless you've explicitly filtered what's included. Instead, copy only what's needed:
COPY package*.json ./
RUN npm install --only=production
COPY . .
This minimizes the build context size and reduces the risk of accidental exposure.
8. Enable Docker Content Trust and Registry Policies
Docker Content Trust (DCT) is a feature that enforces image signing at the registry level. When enabled, Docker will refuse to pull or run unsigned images. This is especially useful in team environments where multiple developers push images.
Enable DCT globally:
export DOCKER_CONTENT_TRUST=1
Or set it in your shell profile for persistent enforcement. When DCT is enabled, Docker will only pull images with valid signatures and will refuse to pull tags that are unsigned.
Combine DCT with registry-level policies. If you're using Docker Hub, Harbor, or Amazon ECR, configure image scanning and signing policies at the registry level. For example, in Harbor, you can enforce that only signed images can be deployed to production namespaces. In ECR, use image scanning with Amazon Inspector and set lifecycle policies to auto-delete untagged or unscanned images.
These policies create a governance layer that ensures compliance across teams and environments. They prevent developers from bypassing security checks and make it impossible to deploy untrusted images by accident.
9. Audit and Rebuild Images Regularly
Security is not a one-time task. New vulnerabilities are discovered daily. An image that was secure last month may be compromised today due to a newly disclosed CVE in a base package.
Establish a regular image audit cycle. For critical applications, rebuild and rescan images weekly. For less critical systems, do so monthly. Automate this process using your CI/CD pipeline:
- Trigger a rebuild when a base image is updated
- Run vulnerability scans on the new image
- Compare the new scan report with the previous one
- Only deploy if no new critical vulnerabilities are introduced
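The cycle above can be sketched as a scheduled CI job (GitHub Actions syntax; the schedule, image name, and preinstalled Trivy are illustrative assumptions):

```yaml
# Illustrative weekly rebuild-and-rescan job.
on:
  schedule:
    - cron: "0 6 * * 1"   # every Monday, 06:00 UTC
jobs:
  rebuild:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Rebuild, forcing a fresh pull of the pinned base image
        run: docker build --pull -t myapp:${{ github.sha }} .
      - name: Rescan before anything is pushed
        run: |
          trivy image --exit-code 1 --severity CRITICAL \
            myapp:${{ github.sha }}
```

The --pull flag ensures the build picks up the latest patched content for the pinned base tag rather than reusing a stale local copy.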
Use tools like Renovate or Snyk to monitor base image updates. For example, Snyk can notify you when a new version of alpine:3.18 is released and automatically open a pull request to update your Dockerfile.
Keep a changelog of image rebuilds, including the base image version, scan results, and signing status. This creates an audit trail for compliance and incident response. If a breach occurs, you can trace back which image version was deployed and whether it had been scanned and signed.
10. Use Immutable Image Tags and Avoid Pushing to Latest
Tagging images with :latest is a common anti-pattern that undermines trust. It implies that the image is always up-to-date, but it also makes it impossible to roll back to a known-good version. If a new build introduces a bug or vulnerability, you can't easily revert because :latest points to the newest version.
Instead, use immutable tags based on:
- Git commit hash: myapp:5f3a8b1
- Build number: myapp:build-245
- Semantic version: myapp:v1.2.3
Never push to :latest in production. Reserve :latest for development or testing environments only. In production, every deployment should reference an immutable tag. This ensures:
- Reproducibility: You can redeploy the exact same image
- Traceability: You know exactly which code version was deployed
- Rollback capability: If something breaks, revert to a previous tag
Use GitOps tools like Argo CD or Flux to automate deployment based on immutable tags. These tools monitor your Git repository and automatically deploy new images when a new tag is pushed and verified. This creates a fully auditable, version-controlled deployment pipeline.
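In CI, the immutable tag can be derived automatically from the commit. A sketch in GitHub Actions syntax (the registry name is illustrative):

```yaml
# Illustrative: every build is tagged with the commit SHA, never :latest.
steps:
  - uses: actions/checkout@v4
  - name: Build and push with an immutable tag
    run: |
      TAG="registry.example.com/myapp:${GITHUB_SHA::7}"
      docker build -t "$TAG" .
      docker push "$TAG"
```

Because the tag encodes the commit, every deployed image maps back to an exact source revision, which is what GitOps tools like Argo CD or Flux key their rollbacks on.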
Comparison Table
The table below summarizes the top 10 practices for building trusted Docker images, including their impact on security, ease of implementation, and recommended priority level.
| Practice | Security Impact | Implementation Difficulty | Priority | Key Tool/Command |
|---|---|---|---|---|
| Use Official or Verified Base Images | High | Low | Essential | docker pull python:3.11-slim@sha256:... |
| Implement Multi-Stage Builds | High | Medium | Essential | COPY --from=builder |
| Scan Images for Vulnerabilities | High | Medium | Essential | trivy image myapp:latest |
| Sign Images with Cosign | Very High | Medium | Essential | cosign sign --key cosign.key |
| Run Containers as Non-Root Users | High | Low | Essential | USER appuser |
| Pin All Dependencies | Medium | Low | Essential | pip install requests==2.31.0 |
| Use .dockerignore | Medium | Low | High | .env in .dockerignore |
| Enable Docker Content Trust | High | Low | High | export DOCKER_CONTENT_TRUST=1 |
| Audit and Rebuild Regularly | High | Medium | High | Renovate, Snyk |
| Use Immutable Tags | High | Low | Essential | myapp:v1.2.3 |
Priority levels are categorized as:
- Essential: Must be implemented in all production environments.
- High: Strongly recommended; critical for regulated industries.
FAQs
Can I trust Docker Hub's official images?
Official images from Docker Hub are generally more trustworthy than user-submitted ones because they are maintained by the software vendors or Docker's team. However, they are not immune to vulnerabilities. Always scan them for CVEs, pin to specific digests, and avoid using :latest. Treat even official images as untrusted until verified.
What's the difference between Docker Content Trust and image signing?
Docker Content Trust (DCT) is a client-side enforcement mechanism that prevents Docker from pulling images that lack a valid signature. Image signing (for example, with Cosign or Notary) is the actual cryptographic process of attaching a signature to an image. Note that DCT verifies Notary signatures specifically; Cosign signatures are checked separately, with cosign verify or a policy controller. You can sign images without enabling DCT, but enabling DCT ensures every image Docker pulls has been signed.
Do I need to rebuild my image every time a base image is updated?
You don't need to rebuild immediately, but you should have a process in place to trigger rebuilds when critical vulnerabilities are patched. Use automation tools to monitor base image updates and notify your team. Rebuild and rescan at least monthly for production images.
How do I handle secrets in Docker builds?
Never hardcode secrets into Dockerfiles. Use Docker BuildKit's secret mounting feature for sensitive files during build time:
# syntax=docker/dockerfile:1
RUN --mount=type=secret,id=mysecret,dst=/run/secrets/mysecret \
    cat /run/secrets/mysecret
Then build with: DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=./secret.txt .
For runtime secrets, use environment variables injected at container startup or secret management systems like HashiCorp Vault or Kubernetes Secrets.
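For the Kubernetes case, a minimal sketch of injecting a secret as an environment variable at startup (all names are illustrative, and the myapp-secrets Secret is assumed to exist already):

```yaml
# Pod spec fragment: inject a secret value at runtime, not build time.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: registry.example.com/myapp:v1.2.3
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: myapp-secrets   # a pre-created Kubernetes Secret
              key: db-password
```

The secret value never appears in an image layer, so rotating it requires no rebuild.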
Can I use Alpine as a base image for all applications?
Alpine is lightweight and secure due to its minimal package set, but it uses musl libc instead of glibc. Some applications (especially those compiled with glibc dependencies) may not run correctly on Alpine. Test thoroughly. For Java or .NET applications, consider using distroless or Microsoft's official .NET images instead.
How do I verify an image hasn't been tampered with after it's pushed?
Use image signing and verification. When you pull an image, verify its signature using Cosign or your registrys built-in verification tools. Combine this with checksum validation of the image digest. If the digest changes after push, it indicates tampering.
Is it safe to use multi-stage builds with proprietary code?
Yes. Multi-stage builds are ideal for proprietary code because they exclude source files from the final image. Only the compiled output is copied into the final stage. Ensure your build environment is secure and your CI system is not exposed to external attackers.
What should I do if a critical vulnerability is found in my image?
Immediately rebuild the image using the patched base image and updated dependencies. Rescan, re-sign, and redeploy using an immutable tag. Notify stakeholders and document the remediation. Use this as a trigger to improve your scanning frequency or update your policies.
Can I automate all of these practices in CI/CD?
Yes. Modern CI/CD platforms like GitHub Actions, GitLab CI, Jenkins, and CircleCI support all these practices. Integrate Trivy, Cosign, and dependency scanners into your pipeline. Fail builds on vulnerability detection, enforce signing, and use immutable tags. Automate everything except final approval for critical deployments.
What's the most common mistake teams make when building Docker images?
The most common mistake is assuming that if the image runs, it's safe. Teams focus on functionality and neglect security, auditability, and reproducibility. They use :latest, skip scanning, run as root, and ignore .dockerignore. Security must be baked in from the start, not added as an afterthought.
Conclusion
Building a Docker image you can trust is not a single action; it's a disciplined, repeatable process that spans development, security, and operations. The top 10 practices outlined in this article form a comprehensive framework for ensuring your containers are secure, minimal, and verifiable. From using signed, immutable images to running as non-root users and scanning for vulnerabilities at every stage, each step reduces risk and increases confidence in your deployments.
Trust is earned through consistency. It's not about using the latest tools or following trends; it's about applying proven, repeatable practices that have stood the test of time and real-world incidents. Organizations that adopt these practices see fewer breaches, faster incident response, and greater compliance with standards like NIST, CIS, and SOC 2.
Start by implementing the essential practices: pin your base images, use multi-stage builds, scan for vulnerabilities, sign your images, and run as non-root users. Then layer on the high-priority practices: enable Docker Content Trust, audit regularly, and use immutable tags. Automate everything you can. Make security the default, not the exception.
In a world where supply chain attacks are rising and regulatory scrutiny is intensifying, the cost of untrusted images is no longer acceptable. The time to build Docker images you can trust is now, before the next breach makes it a mandatory requirement.