Good tech adventures start with some frustration, a need, or a requirement. This is the story of how I simplified the management and access of my local web applications with the help of Traefik and dnsmasq. The reasoning applies just as well to a production server using Docker.
My dev environment is composed of a growing number of web applications self-hosted on my laptop. Such applications include several websites, tools, editors, registries, … They use databases, REST APIs, or more complex backends. Take the example of Supabase, the Docker Compose file includes the Studio, the Kong API gateway, the authentication service, the REST service, the real-time service, the storage service, the meta service, and the PostgreSQL database.
The result is a growing number of containers started on my laptop, accessible at `localhost` on various ports. Some of them use default ports and cannot run in parallel without conflicting. For example, port `8000` is the default for a lot of containers present on my machine. To circumvent the issue, some containers use custom ports, which I often happen to forget.
The solution is to create local domain names which are easy to remember and use a web proxy to route the requests to the correct container. Traefik helps in the routing and the discovery of those services and dnsmasq provides a custom top-level domain (pseudo-TLD) to access them.
A similar setup applies to a production server using multiple Docker Compose files for various websites and web applications. The containers communicate inside an internal network and are exposed through a proxy service, in our case implemented with Caddy.
Out of many, let’s take 3 web applications running locally. All of them are managed with Docker Compose:
- Adaltas website, 1 container, Gatsby-based static website
- Alliage website, 10 containers, Next.js frontend, Node.js backend, and Supabase
- Penpot, 6 containers, Penpot frontend, backend services plus Inbucket for email testing (personal addition)
By default, those containers expose the following ports on `localhost`:
- `8000`: Gatsby server in dev mode
- `9000`: Gatsby service to serve a built website
- `3000`: Next.js website, both dev and build mode
- `3001`: Node.js custom API
- `2500`: Inbucket SMTP server (Alliage)
- `9000`: Inbucket web interface (Alliage)
- `1100`: Inbucket POP3 server (Alliage)
- `2500`: Inbucket SMTP server (Penpot)
- `9000`: Inbucket web interface (Penpot)
- `1100`: Inbucket POP3 server (Penpot)
Note, depending on your environment and preferences, some of these ports might be restricted while others might be exposed.
As you can see, many ports collide with each other. It is not just the 2 instances of Inbucket running in parallel. For example, port `8000` is used both by Gatsby and Kong; it is a common default port for several applications. The same goes for port `9000`, used both by the Gatsby build server and the Inbucket web interface.
One solution is to assign a distinct port to each service. However, this approach is not scalable. Soon enough, I forget which port each service is assigned to.
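For illustration, here is what manually deduplicating ports could look like in a Docker Compose file; the `8001` host port is an arbitrary value I would then have to memorize:

```yaml
services:
  www:
    ports:
      # Arbitrary host port 8001 forwards to Gatsby's default port 8000
      - "8001:8000"
```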
A better solution is the usage of a reverse proxy with hostnames easy to remember. Here is what we expect:
- `www.adaltas.local`: Gatsby server in dev mode
- `build.adaltas.local`: Gatsby service to serve a built website
- `www.alliage.local`: Next.js website, both dev and build mode
- `api.alliage.local`: Node.js custom API
- `smtp.alliage.local`: Inbucket SMTP server
- `mail.alliage.local`: Inbucket web interface
- `pop3.alliage.local`: Inbucket POP3 server
- `smtp.penpot.local`: Inbucket SMTP server
- `mail.penpot.local`: Inbucket web interface
- `pop3.penpot.local`: Inbucket POP3 server
In a traditional setting, the reverse proxy is configured with one or multiple configuration files containing all the routing information. However, a central configuration is not so convenient. It is preferable to have each service declare which hostname it answers to.
Automatic routing registration
All my web services are managed with Docker Compose. Ideally, I expect this information to be present inside the Docker Compose file. Traefik is cloud-native in the sense that it configures itself using cloud-native workflows. The application provides some instructions in its `docker-compose.yml` file and its containers are automatically exposed.
The way Traefik works with Docker, it plugs into the Docker socket, detects new services, and creates the routes for you.
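To get a feel for what Traefik watches, you can query the Docker socket yourself; the API version in the path below is an assumption, adjust it to your Docker Engine:

```bash
# List running containers, with their labels, straight from the Docker socket.
# This is the same data Traefik relies on for service discovery.
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json
```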
Starting Traefik inside Docker is straightforward (never say easy). The `docker-compose.yml` file is:
```yaml
version: '3'
services:
  reverse-proxy:
    image: traefik:v2.9
    command: --api.insecure=true --providers.docker
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```
Registering new services
Let’s consider an additional service. The Adaltas website is a single container based on Gatsby. In development mode, it starts a web server on port `8000`. I expect it to be accessible with the hostname `www.adaltas.local` on port `80`.
Following Traefik’s getting started with Docker, the integration is made with the `traefik.http.routers.router_name.rule` property present in the `labels` field of the Docker service. It defines the hostname under which our website is accessible on port `80`. It is set to `www.adaltas.localhost` because the `.localhost` TLD resolves locally by default. Since I prefer to use the `.local` domain, we will switch to `www.adaltas.local` later using dnsmasq. The traffic is then routed to the container IP on port `8000`. The container port is obtained by Traefik from the Docker Compose `ports` property.
```yaml
version: '3'
services:
  www:
    container_name: adaltas-www
    # ...
    labels:
      - "traefik.http.routers.adaltas-www.rule=Host(`www.adaltas.localhost`)"
    ports:
      - "8000:8000"
```
This works when both the Traefik and the Adaltas services are defined in the same Docker Compose file. Fire `docker-compose up` and you can:
- http://localhost:8080: access the Traefik web UI
- http://localhost:8080/api/rawdata: access Traefik’s raw API data
- http://www.adaltas.localhost: access the Adaltas website in development mode
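As a sanity check, assuming the stack above is running, the same endpoints can be exercised from the terminal; the exact output depends on your services:

```bash
# List the routers and services Traefik has registered
curl -s http://localhost:8080/api/rawdata

# Fetch the website through Traefik; the Host header is what the router rule matches
curl -s -H 'Host: www.adaltas.localhost' http://localhost/
```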
There are 3 limitations we need to deal with:
- Internal networking
It only works because all the services are declared inside the same Docker Compose file. With separate Docker Compose files, an internal network must be used for the Traefik container and the targeted containers to communicate.
- Domain name
I wish to use a pseudo top-level domain (TLD), for example, `.local`. Since this TLD does not yet resolve locally, a local DNS server must be configured.
- Port label
The port of Adaltas is defined inside the Docker Compose file. Thus, it is exposed on the host machine and it collides with other services. Port forwarding must be disabled and Traefik must be instructed about the port with another mechanism than the `ports` property.
When defined across separate files, the containers cannot communicate. Each Docker Compose file generates a dedicated network. The targeted service is visible inside the Traefik UI; however, requests fail to be routed.
The containers must share a common network to communicate. When the Traefik container is started, a `traefik_default` network is created, see `docker network list`. Instead of creating a new network, let’s reuse it. Enrich the Docker Compose file of the targeted container, the Adaltas website in our case, with the following `networks` configuration:
```yaml
version: '3'
services:
  www:
    container_name: adaltas-www
networks:
  default:
    name: traefik_default
```
After starting the 2 Docker Compose setups with `docker-compose up`, the Traefik and the website containers start communicating.
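To confirm that both containers ended up attached to the same network, you can inspect it; the container names depend on your setup:

```bash
# Print the names of the containers attached to the shared network
docker network inspect traefik_default \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```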
It is time to tackle the FQDN of our services. The current TLD in use, `.localhost`, is perfectly fine. It works by default and it is officially reserved for this usage. However, I wish to use my own top-level domain (pseudo-TLD name); we’ll use `.local` in this example.
Disclaimer: using a pseudo-TLD name is not recommended. The `.local` TLD is used by multicast DNS / zero-configuration networking. In practice, I haven’t encountered any issues. To mitigate the risk of conflicts, RFC 2606 reserves the following TLD names:
- `.test`
- `.example`
- `.invalid`
- `.localhost`
A local DNS server is used to resolve the `*.local` addresses. I had some experience with Bind in the past. A simpler and more lightweight option is dnsmasq. The instructions below cover the installation on macOS and Ubuntu Desktop. In both cases, dnsmasq is installed and configured so as not to interfere with the current DNS settings.
macOS instructions with Homebrew:

```bash
brew install dnsmasq
mkdir -pv $(brew --prefix)/etc/
echo 'address=/.local/127.0.0.1' >> $(brew --prefix)/etc/dnsmasq.conf
sudo brew services start dnsmasq
sudo mkdir -v /etc/resolver
# The resolver file name must match the TLD, `.local` in our case
sudo bash -c 'echo "nameserver 127.0.0.1" > /etc/resolver/local'
scutil --dns
```
Linux instructions with NetworkManager (e.g. Ubuntu Desktop):

```bash
sudo systemctl disable systemd-resolved
sudo systemctl stop systemd-resolved
sudo unlink /etc/resolv.conf
cat <<CONF | sudo tee /etc/NetworkManager/conf.d/00-use-dnsmasq.conf
[main]
dns=dnsmasq
CONF
cat <<CONF | sudo tee /etc/NetworkManager/dnsmasq.d/00-dns-public.conf
server=220.127.116.11
CONF
cat <<CONF | sudo tee /etc/NetworkManager/dnsmasq.d/00-address-local.conf
address=/.local/127.0.0.1
CONF
sudo systemctl restart NetworkManager
```
Use `dig` to validate that any FQDN using our pseudo-TLD resolves to the local loopback address.
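For example, with dnsmasq configured as above, any name under `.local` should answer; the hostname below is arbitrary:

```bash
# Query the local dnsmasq server directly; the address=/.local/127.0.0.1
# rule answers 127.0.0.1 for every name under the pseudo-TLD
dig +short anything.adaltas.local @127.0.0.1
```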
With the introduction of a reverse proxy like Traefik, exposing the container port on the host machine is no longer necessary, thus eliminating the risk of collision between the exposed port and those of other services.
One label is already present to define the hostname of the website service. Traefik comes with a lot of complementary labels. The `traefik.http.services.service_name.loadbalancer.server.port` property tells Traefik to use a specific port to connect to a container.
The final Docker Compose file looks like this:
```yaml
version: '3'
services:
  www:
    container_name: adaltas-www
    image: node:18
    volumes:
      - .:/app
    user: node
    working_dir: /app
    command: bash -c "yarn install && yarn run develop"
    labels:
      - "traefik.http.routers.adaltas-www.rule=Host(`www.adaltas.local`)"
      - "traefik.http.services.adaltas-www.loadbalancer.server.port=8000"
networks:
  default:
    name: traefik_default
```
With Traefik, I like the idea of my container services registering themselves automatically, in a cloud-native philosophy. It provides me with comfort and simplicity. Also, dnsmasq has proved to be well-documented and quick to adjust to my various requirements.