What is a Docker multi-stage build?

Sarvesh Mishra
3 min read · Sep 19, 2023


Although Docker eliminates the need to install dependencies on the client system, building an image inefficiently can leave you with a heavy image. Many developers are unaware of multi-stage builds, and the resulting oversized images drive up data transfer costs for their organizations. In this blog, I will demonstrate how to use Docker’s multi-stage builds to structure the build cleanly and reduce the image size.

By utilizing Docker multi-stage builds, you can create more efficient and simplified container images by consolidating multiple build stages into one Dockerfile. This feature was introduced in Docker 17.05 and is particularly useful for applications whose build process or build-time dependencies are not needed in the final runtime image. Here’s how Docker multi-stage builds work:

Multiple Build Stages:

You define multiple build stages in your Dockerfile, each with its own base image and set of instructions. Each stage is like a separate container image during the build process.
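For example, here is a bare skeleton of two stages (the details are filled in by the full example later in this post):

# every FROM instruction starts a new stage; AS gives the stage a name
FROM node:14 AS builder
# ...build instructions for the first stage go here...

# the final stage starts from a fresh base image
FROM nginx:alpine
# ...only what you explicitly copy in ends up in this image...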

Copy Artifacts:

You can copy files or artifacts from one build stage to another using the COPY --from=<stage> instruction. This allows you to selectively include only the necessary files in the final image.
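The paths below are just illustrative, but they show the three ways a source can be referenced: a stage by its name, a stage by its zero-based index, or another image entirely:

# copy from a stage named "builder"
COPY --from=builder /app/build /usr/share/nginx/html

# stages can also be referenced by their zero-based index
COPY --from=0 /app/build /usr/share/nginx/html

# or copy a file straight out of another image
COPY --from=nginx:latest /etc/nginx/nginx.conf /etc/nginx/nginx.conf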

Reduce Image Size:

By using multi-stage builds, you can avoid including build tools, development dependencies, and intermediate files in the final container image. This results in smaller and more efficient images that are easier to distribute and deploy.
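Once you have built an image (the commands come later in this post), you can see the effect yourself. In the example below, the final image is based on nginx:alpine, which is only a fraction of the size of the node:14 image used for the build stage; exact sizes vary by application and version:

# list the tagged image along with its size
docker images myapp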

If you’re new to Docker, start by reading these posts before continuing.

Here’s how you can effectively use multi-stage builds in Docker:

Create a Dockerfile with Multiple Stages:

To use multi-stage builds, create a Dockerfile that defines multiple stages. Each stage represents a step in the build process and can have its own base image and set of instructions.

# Stage 1: Build the application
FROM node:14 AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Create the production image
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

In this example, there are two stages: one for building a Node.js application and another for creating an Nginx-based production image.

Use Different Base Images: Each stage can use a different base image that is suitable for the specific task. In the example above, the first stage uses a Node.js base image for building the application, and the second stage uses an Nginx base image for the production image.
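As a sketch of a variation (not part of the example above): if your application needs Node at runtime, say an API server rather than static files, the final stage can use a slimmer Node image instead of Nginx. The entry point build/server.js below is purely illustrative:

# Stage 1: Build the application
FROM node:14 AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Slim runtime image with production dependencies only
FROM node:14-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install --production
COPY --from=builder /app/build ./build
# "build/server.js" is an illustrative entry point
CMD ["node", "build/server.js"]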

Copy Artifacts Between Stages: To copy artifacts (e.g., compiled code or static files) from one stage to another, use the COPY --from instruction, as shown in the example. This allows you to keep only the necessary files in the final image.

Keep Only What’s Necessary: In the final stage, include only the necessary files and dependencies needed to run your application. This helps reduce the size of the container image and minimize security risks.
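On a related note, a .dockerignore file keeps unnecessary files out of the build context in the first place; a minimal one for a Node project might look like this:

node_modules
npm-debug.log
.git
Dockerfile
.dockerignore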

Build the Docker Image: To build the Docker image, use the docker build command with the -t flag to specify a tag for the image. For example:

docker build -t myapp:v1 .
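If you ever want to build only up to a particular stage, for example to debug the build stage on its own, you can pass the --target flag; the tag myapp:build here is just an example name:

docker build --target builder -t myapp:build .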

Run the Docker Container: Once the image is built, you can run a container from it:

docker run -p 8080:80 myapp:v1

This will start your application in a container.
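Because port 8080 on the host is mapped to port 80 in the container, a quick request should return the built app (assuming curl is installed):

curl http://localhost:8080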

Using multi-stage builds helps create more efficient and smaller Docker images, making them easier to distribute and deploy. It’s particularly useful for building applications that require compilation or build steps, as it allows you to keep only the necessary artifacts in the final image while discarding build tools and intermediate files.

Happy Dockerizing! 🐋 See you in the next post. Don’t forget to comment your thoughts on this post. Share knowledge with others…

Originally published at https://sarvesh.xyz.


Sarvesh Mishra

Sarvesh is a Full Stack Developer at carwale.com, specializing in backend development, with experience in software architecture and machine learning.