
How to containerize your development environment

I prefer to keep my OS clean. Installing tools (such as npm and Astro for this blog) would modify my OS environment, and it would be hard, if not impossible, to revert.

More specifically, if I test multiple tools that depend on different versions of the same package, I’d have to manually install the correct version before working with either of them.

And eventually I might succumb to conflicts anyway. Thus it’s best to keep the packages installed on the OS to a minimum!

Containerize

I don’t know a good solution to this, so the best thing is to avoid the situation altogether: just use a different machine for each tool you test.

And this is where containerization comes in; effectively it does just that. It provides a private environment for your tool while still sharing the underlying machine. So the changes you make on your host machine don’t affect the tool’s environment, and vice versa. This is a great separation, and you can spin up as many different environments as you like!

Note: I’ll be using Docker. This post assumes you have a rough idea of what it is. For further background I have more notes here: Docker for non-docker people. They don’t add up to a full Docker guide, but should still give a useful angle.

Case study of this blog

To get a development preview running, I’d need to install a version of Astro. To install Astro, I’d need a version of npm. I want neither of those on my host.

Plan

The repository for the blog (Astro code + posts) will be stored on the host. Astro, with its development server, will live in the container. This means the container needs access to the project on the host, and the host needs access to a port on the container to reach the dev website.

1. Define the environment

Dockerfile:

# Use base image with Node.js
FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy dependency manifests (package.json and, if present, the lockfile) to container
COPY package*.json ./
# Install the dependencies
RUN npm install
# Declare that port 4321 is needed (for Astro dev server)
EXPOSE 4321
# Command to execute at container start
# Starts the Astro dev server
CMD ["npm", "run", "dev", "--", "--host"]

In human terms: these are instructions for building a Docker image. It grabs a base Linux image that has Node installed (thus npm is available). When I build from this Dockerfile in the directory of my Astro project, it grabs the package.json file and downloads and installs all the dependencies specified there. This is nice, as they get baked into the image, and I don’t need to install them manually when later running the container. It also defines the default command to execute once someone runs the container: start the Astro dev server.
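Once the image is built (next step), you can sanity-check that the dependencies really were baked in. A quick sketch, assuming the your-image-name:latest tag introduced below:

# list the top-level packages installed inside the built image
docker run --rm your-image-name:latest npm ls --depth=0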

2. Build the environment

Run docker build ., or don’t be a barbarian and add a tag, so you can tell the built image apart from all the other images on your machine: docker build -t your-image-name:latest .

However, this is now a bit much to remember, especially if you’d want to rebuild the image later. It would be better if this build configuration were written down somewhere. That’s where docker compose (coming up next) helps: there you can handle the tag, and also the runtime configuration.

In docker-compose.yml, add an image: your-image-name:latest property to your service. Now all you need to remember for builds is docker compose build.
P.S. You might not even need that property: docker compose uses the folder name and service name to generate a reasonable default tag.

3. Run the environment

docker-compose.yml

services:
  ytho-dev: # just name your service, can be anything.
    # image: your-image-name:latest # <- this would be the tag for image
    # Build the image using the Dockerfile in the current directory (.)
    build: .
    # Name the container for easier reference (optional)
    container_name: ytho-astro-blog
    # Map port 4321 on your host to port 4321 in the container
    ports:
      - "4321:4321"
    # Mount your local source code directory into the container's /app directory
    # This is the key part for live development
    volumes:
      - .:/app # Mount current directory (where compose file is) to /app

      # The bind mount above makes the host's version of the app folder
      # override everything in the container, but we still want
      # /app/node_modules to come from the container, not the host.
      # To fix that, we add one more mapping that overlays the previous
      # one, so /app/node_modules contains the container's files.
      - node_modules:/app/node_modules
      # Notice the path of node_modules: it doesn't start with / or .,
      # meaning it "doesn't exist" in the host filesystem; it's a named volume.

volumes:
  # Docker will create, own and manage this volume
  # for the mapping trick above to work
  node_modules:

With this file in your repo, all you need to do to run the container is docker compose up, and to stop it, docker compose down. It even builds the image if it hasn’t been built yet. Much better.
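Some variants worth knowing (a sketch, but these are standard docker compose flags):

# run in the background instead of occupying the terminal
docker compose up -d
# follow the dev server output
docker compose logs -f
# stop and remove the container
docker compose down
# after changing package.json: rebuild, and recreate the named
# node_modules volume so it gets repopulated from the new image
docker compose down -v && docker compose build && docker compose up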

Note: if you didn’t use docker compose, every time you’d have to execute something like docker run --name ytho-astro-blog -p 4321:4321 -v .:/app -v node_modules:/app/node_modules your-image-name:latest, and remember every flag correctly.

I’ve added some comments, but effectively this file handles the parts that the host is concerned with: which host port exactly maps to the 4321 exposed in the container, which host directory maps to /app in the container, and so on.
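A quick way to confirm the mapping from the host side, assuming the container is up:

# show the running service and its port mapping
docker compose ps
# the dev site should answer on the mapped host port
curl http://localhost:4321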

Recap of niceties of using a container

The host OS stays clean: npm, Astro and all the dependencies live only inside the image. Those dependencies are baked in at build time, so the container starts ready to go. Each tool gets its own private environment, so version conflicts between tools disappear. And the whole setup is written down in the Dockerfile and docker-compose.yml, so spinning it up is a single docker compose up.

Bonus: How to build the docker image when you don’t have a package.json file

All is golden, unless you want to start with a blank Astro project. In that case you cannot build the above image, because package.json doesn’t exist yet: there’s nothing to COPY, nor anything to install.

To generate the package.json, I’d need to run npm create astro@latest first. But like I mentioned, I want none of that on my host machine.

However, the solution is still the same: I need to run that npm create in a container first. Since it’s a one-off task, a lengthy command suffices; no need to start specifying Dockerfiles:

docker run --rm -it -v .:/app -w /app node:20-alpine sh

In human terms: starts an ephemeral container of Alpine Linux with Node 20, mounts the host’s current working directory as /app inside the container, and connects you to a shell inside the container.

run - start a container
--rm - remove the container after it stops
-it - interactive (opens standard input) and tty (starts a pseudo-terminal). Both are required for sh (the shell) later.
-v .:/app - maps the host’s working directory (as a volume) to the /app dir in the container (see the note after this list)
-w /app - sets the working directory in the container to /app
node:20-alpine - the Docker image to use
sh - the command to execute at container start, which gives you a shell prompt (otherwise, with this particular image, it would execute node for you)
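One caveat on -v .:/app: older versions of docker run only accept absolute host paths (relative paths have long worked in compose files, but their support in docker run is newer). A portable variant of the same command:

docker run --rm -it -v "$PWD":/app -w /app node:20-alpine sh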

In this container you can do the interactive setup part of the Astro project, and it will write everything to the host’s working directory. Once you are ready to leave, run exit or just signal Ctrl+D.
Now you do have a package.json on your host, along with the blank project.
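For reference, the session inside the container looks roughly like this (a sketch; the create-astro prompts are interactive and may differ by version):

# inside the container: run the interactive Astro scaffolder
npm create astro@latest
# when asked where to create the project, point it at . (the mounted /app)
# ...answer the remaining prompts...
# leave the container; the files persist on the host
exit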

Note: during research I learned that VS Code has an extension called Dev Containers that does what I did here, maybe in a nicer way.
However, since my approach stays tool-agnostic, I prefer it.


