8 Lessons Learned Using docker-compose

Nowadays, when people set up and configure services, hardly anyone enjoys doing it manually.

This raises a question: how do you automate the process, and make it fast and reliable?

Wrap up some ssh scripts? Leverage CM (configuration management) tools like Chef, Ansible, or Puppet? Use docker run? It’s great that we now have many options. But, as you may guess, not all of them are equally good.

You want a headache-free solution, right? And you want it quickly, don’t you? Then you can’t miss docker-compose! Here are some useful tips and lessons learned using docker-compose.


I’m not here to argue that docker-compose is the best for all scenarios. There are definitely some services which we cannot or should not dockerize. Still, I insist it’s capable of handling most cases.

CM tools are good, but docker/docker-compose is just better for environment setup. (Read: 5 Common Failures Of Package Installation). Not convinced, my friends? Give it a try and challenge me!

With plain docker, we can set up environments like below.

docker run --rm -p 8080:8080 \
 -h mytest --name my-test \
 -v /var/run/docker.sock:/var/run/docker.sock \
 --group-add=$(stat -c %g /var/run/docker.sock) \
 <your-image>

But… the command is just too long, isn’t it? What if you need to start multiple different containers? You could easily mess it up, right?

It’s show time for docker-compose, an advanced version of docker run.

Just one single command: docker-compose up -d! We can run the same deployment process against a brand-new environment within just a few seconds. Beautiful, isn’t it?
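For instance, the long docker run command above can be captured in a docker-compose.yml roughly like this. The image name is a placeholder, and note that group_add takes a literal group name or GID (the value you would get from stat -c %g /var/run/docker.sock on the host), since compose cannot run shell substitutions:

```yaml
version: '2'
services:
  my-test:
    container_name: my-test
    hostname: mytest
    image: your-image:latest   # placeholder -- use your real image
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # Literal GID of the docker socket on the host, e.g. from:
    #   stat -c %g /var/run/docker.sock
    group_add:
      - "999"
```

With this file checked in, docker-compose up -d replaces the whole command.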


Here are my lessons learned using docker-compose.

1. [Infrastructure-as-code] Host all setup and configuration logic in a git repo

Host all the necessary pieces in a git repo: docker-compose.yml, .env, and docker volume files.

  • Everything you need can be, and should be, found in the git repo.
  • People can easily review and comment on your changes via PRs.
  • You can audit the change history and understand past issues.
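For example, the .env file can feed credentials into the compose file via variable interpolation (docker-compose reads .env from the project directory automatically). The variable name below is just an example; keep real secrets out of public repos:

```yaml
# .env (in the project directory):
#   MYSQL_ROOT_PASSWORD=changeme
#
# docker-compose.yml:
version: '2'
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
```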

2. [Port Mapping] Default docker port mapping is dangerous on public clouds

You may be surprised how insecure docker’s port mapping feature is! Let me show you.

Imagine you have a mysql instance running with the docker-compose.yml below.

version: '2'
services:
  db:
    container_name: db
    image: mysql
    ports:
      - "3306:3306"
    volumes:
      - db_home:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_USER: user1
      MYSQL_PASSWORD: password1

volumes:
  db_home:


With the default settings, anyone on the internet can access your db on port 3306.

root@denny:# docker-compose up -d
root@denny:# iptables -L -n | grep 3306
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:3306
root@denny:# telnet $YOUR_IP 3306

Why? Docker adds wide-open iptables rules FOR YOU. (Sweet, isn’t it?) Anyone, from anywhere, can connect.

So let’s limit the source IP for the access. Good idea! Unfortunately, it’s impossible: that rule is added by docker, and to make it worse, docker provides no hook points for us to change this behavior.

With some more thinking, you may wonder: how about I delete the rules created by docker and add my own? I’m an iptables expert; it’s easy. Well, it’s not easy. The tricky part is that your customized iptables rules won’t be recognized or managed by docker. Things easily get messed up when the service or machine reboots. Worse, when we restart the container, its IP will change.

So instead of the default port mapping, I bind the port to a specific IP address, like below.

    ports:
      - "127.0.0.1:3306:3306"

3. [Data Volume] Separate data from application using docker volumes

Make sure containers don’t hold any unrecoverable application data.

Consequently, we can safely recreate docker environments, or even migrate to another environment, completely and easily.

docker-compose: mount local folders and named volumes

services:
  db:
    container_name: db
    image: mysql
    networks:
      - network_application
    volumes:
      # Named volumes
      - db_home:/var/lib/mysql
      # Local folders
      - ./scripts:/var/lib/mysql/scripts


docker-compose: overwrite an existing file

services:
  ss:
    image: denny/ss:v1
    ports:
      - "6187:6187"
    volumes:
      - ./shadowsock.json:/etc/shadowsocks.json:rw
    entrypoint: ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]

4. [Migration Rehearsal] Run a rehearsal for the docker-compose migration

It’s always a good idea to run a migration rehearsal.

Ideally, there should be no more than 3 steps:

  • scp the data volume folders from the old env to the new env
  • Install docker-compose and run “docker-compose up -d”
  • Very few manual steps; mostly it’s about credentials.
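The steps above can be sketched as a script. The host name, repo URL, and paths here are all placeholder assumptions, not values from the original setup:

```shell
#!/bin/sh
# Migration rehearsal sketch -- OLD_HOST, paths, and repo URL are placeholders
set -e
OLD_HOST="old-env.example.com"

# 1. Copy the data volume folders from the old env
scp -r "root@${OLD_HOST}:/data/db_home" /data/db_home

# 2. Check out the infrastructure repo and bring the stack up
git clone "$YOUR_GIT_REPO" app && cd app
docker-compose up -d

# 3. Few manual steps remain: fill in credentials (.env), then verify
docker-compose ps
```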

5. [Backup] Enforce weekly backups for data volumes

All critical data should live in volumes only. To back up the system, we just need to back up the data volume folders.

When the data volumes are not very big, we can:

  • Enforce periodic folder backups for data volumes, e.g. weekly.
  • To reduce out-of-disk issues, remember to rotate very old backup sets. (Read: Reduce Support Effort Of Low Free Disk Issues).
  • To survive local VM failures, back up to a remote VM or AWS S3.
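A minimal sketch of such a backup with rotation, as a shell function; the paths and 28-day retention are assumptions to adjust for your environment:

```shell
#!/bin/sh
# Weekly backup with rotation for a docker volume folder (sketch).
backup_volume_dir() {
  data_dir="$1"; backup_dir="$2"; keep_days="${3:-28}"
  mkdir -p "$backup_dir"
  # Timestamped tarball of the volume folder
  tar -czf "$backup_dir/backup-$(date +%Y%m%d%H%M%S).tar.gz" -C "$data_dir" .
  # Rotate: drop backup sets older than keep_days to avoid out-of-disk
  find "$backup_dir" -name 'backup-*.tar.gz' -mtime +"$keep_days" -delete
}

# Example: backup_volume_dir /var/lib/docker/volumes/db_home/_data /backup/db_home
```

Run it from cron (or a systemd timer) for the weekly cadence, and point the backup directory at a remote mount or sync it to S3.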

6. [Docker Image] Build your own docker image

During deployment, any request to an external service is a failure point.

To make the deployment go more smoothly, I always build my own docker images that pre-download packages/files, where necessary.
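As a sketch, baking files into your own image can be as simple as a short Dockerfile; the base image and the scripts/ folder are hypothetical examples:

```dockerfile
# Hypothetical Dockerfile: pre-bake files at build time so deploys
# don't depend on external downloads (each one is a failure point)
FROM mysql:5.7
COPY scripts/ /docker-entrypoint-initdb.d/
```

Then reference your built image in docker-compose.yml instead of the upstream one.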

7. [Monitoring] Use docker healthcheck and external SaaS monitoring

Monitoring and alerting are good.

  • Define a docker healthcheck, so troubleshooting is as simple as below
docker-compose ps
docker ps | grep unhealthy

  • External monitoring is easy and useful.

Try uptimerobot.com. It can run a URL check or port check every 5 minutes. If a check fails, we easily get slack or email notifications.

Did I mention that uptimerobot.com is totally free? I’ve been a happy, loyal customer for more than 5 years.
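The healthcheck from the first bullet can be declared right in the compose file. A minimal sketch for the mysql example (healthcheck requires compose file format 2.1 or later; the mysqladmin probe is one common choice, not the only one):

```yaml
version: '2.1'
services:
  db:
    image: mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 1m
      timeout: 10s
      retries: 3
```

With this in place, docker ps shows the container as healthy or unhealthy.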
