How I use Docker and Deployer together
I thought I’d write about this because I’m using Deployer in a way that doesn’t really seem to be supported.
After the work I’ve been doing with Python lately, I can see that the way I’ve been using Docker with PHP is comparable to how venv is used there.
On my production host, my docker-compose setup all lives in a directory called tmky. There are four containers: caddy, talapoin (PHP-FPM), db (the database server), and search (the search engine, currently Meilisearch).
There is no installation of PHP aside from that talapoin container. There is no MySQL client software on the server outside of the db container.
I guess the usual way of deploying in this situation would be to rebuild the PHP-FPM container, but what I do is just treat that container as a runtime environment; the PHP code that it runs is mounted from a directory on the server, outside the container. It’s in ${HOME}/tmky/deploy/talapoin (which I’ll call ${DEPLOY_PATH} from now on). ${DEPLOY_PATH}/current is a symlink to something like ${DEPLOY_PATH}/release/5.
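Concretely, that means a layout on the host along these lines (a rough sketch, not a literal listing; the release numbers are just whatever the most recent deploys happen to be):
# a rough picture of the layout, using the paths above
ls "${DEPLOY_PATH}/release"        # 3  4  5 ...
readlink "${DEPLOY_PATH}/current"  # -> release/5 (the live code)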
The important bits from the docker-compose.yml look like:
services:
  talapoin:
    image: jimwins/talapoin
    volumes:
      - ./deploy/talapoin:${DEPLOY_PATH}
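One detail worth calling out: for that ${DEPLOY_PATH} interpolation to work, docker-compose has to be able to see the variable, either from the shell environment or from an .env file next to docker-compose.yml. Something along these lines would do it (the export is my assumption about the wiring, not part of the compose file itself):
# make DEPLOY_PATH visible to docker-compose for interpolation in the
# volumes: entry above (an .env file next to docker-compose.yml also works)
export DEPLOY_PATH="${HOME}/tmky/deploy/talapoin"
docker-compose -f "${HOME}/tmky/docker-compose.yml" up -d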
This means that within the container, the files still live within a path that looks like ${HOME}/tmky/deploy/talapoin. (It’s running under a different UID/GID, so it can’t even write into any directories there.) The caddy container has the same volume setup, so the relevant Caddyfile config looks like:
trainedmonkey.com {
  log

  # compress stuff
  encode zstd gzip

  # our root is a couple of levels down
  root * {$DEPLOY_PATH}/current/site

  # pass everything else to php
  php_fastcgi talapoin:9000 {
    resolve_root_symlink
  }

  file_server
}
(I like how compact this is; Caddy has a very it-just-works spirit to it that I dig.)
So when a request hits Caddy, it sees a URL like /2024/03/09, figures out there is no static file for it, and throws it over to the talapoin container to handle, giving it a SCRIPT_FILENAME of ${DEPLOY_PATH} and a REQUEST_URI of /2024/03/09.
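That handoff is easy to poke at from outside; a date-archive URL like the one above has no static file behind it, so it has to go through PHP-FPM:
# a date-archive URL has no static file, so Caddy hands it to PHP-FPM
curl -I https://trainedmonkey.com/2024/03/09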
When I do a new deployment, ${DEPLOY_PATH}/current will get relinked to the new release directory, the resolve_root_symlink from the Caddyfile will pick up the change, and new requests will seamlessly roll right over to the new deployment. (Requests already being processed will complete unmolested, which I guess is kind of my rationale for avoiding deployment via an updated Docker container.)
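Deployer does the relinking itself as part of a deploy, but the effect on the host is roughly this (sketch only, with a made-up next release number):
# roughly what a successful deploy does: atomically repoint the symlink
# at the new release directory (release/6 here is hypothetical)
ln -nfs "${DEPLOY_PATH}/release/6" "${DEPLOY_PATH}/current"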
Here is what my deploy.php file looks like:
<?php
namespace Deployer;
require 'recipe/composer.php';
require 'contrib/phinx.php';
// Project name
set('application', 'talapoin');
// Project repository
set('repository', 'https://github.com/jimwins/talapoin.git');
// Host(s)
import('hosts.yml');
// Copy previous vendor directory
set('copy_dirs', [ 'vendor' ]);
before('deploy:vendors', 'deploy:copy_dirs');
// Tasks
after('deploy:cleanup', 'phinx:migrate');
// If deploy fails automatically unlock.
after('deploy:failed', 'deploy:unlock');
Pretty normal for a PHP application; the only real additions here are using Phinx for the data migrations and using deploy:copy_dirs to copy the vendor directory from the previous release so we are less likely to have to download stuff.
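Since deploy:copy_dirs is hooked in before deploy:vendors, the new release starts out with the previous release’s vendor tree already in place; the effect is roughly this (again just a sketch, with a made-up release number, not Deployer’s actual code):
# roughly what deploy:copy_dirs buys us before `composer install` runs:
# seed the new release with the previous release's vendor directory
cp -a "${DEPLOY_PATH}/current/vendor" "${DEPLOY_PATH}/release/6/"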
That hosts.yml is where it gets tricky, because when we are running PHP tools like composer and phinx, we have to run them inside the talapoin container.
hosts:
  hanuman:
    bin/php: docker-compose -f "${HOME}/tmky/docker-compose.yml" exec --user="${UID}" -T --workdir="${PWD}" talapoin
    bin/composer: docker-compose -f "${HOME}/tmky/docker-compose.yml" exec --user="${UID}" -T --workdir="${PWD}" talapoin composer
    bin/phinx: docker-compose -f "${HOME}/tmky/docker-compose.yml" exec --user="${UID}" -T --workdir="${PWD}" talapoin ./vendor/bin/phinx
    deploy_path: ${HOME}/tmky/deploy/{{application}}
    phinx:
      configuration: ./phinx.yml
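Those bin/* overrides are just docker-compose invocations, so they can be sanity-checked by hand on the host; something like this, with --workdir spelled out as the current release path (which is what ${PWD} ends up being when Deployer runs a command there):
# run composer inside the talapoin container by hand, the same way the
# bin/composer override does (paths assumed to match the setup above)
docker-compose -f "${HOME}/tmky/docker-compose.yml" \
  exec --user="${UID}" -T --workdir="${HOME}/tmky/deploy/talapoin/current" \
  talapoin composer --version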
Now when it’s not being pushed to an OCI host that likes to fall flat on its face, I can just run dep deploy and out goes the code.
I’m also running Deployer in a Docker container on my development machine, thanks to my fork of docker-deployer. Here’s my dep script:
#!/bin/sh
exec \
  docker run --rm -it \
    --volume $(pwd):/project \
    --volume ${SSH_AUTH_SOCK}:/ssh_agent \
    --user $(id -u):$(id -g) \
    --volume /etc/passwd:/etc/passwd:ro \
    --volume /etc/group:/etc/group:ro \
    --volume ${HOME}:${HOME} \
    -e SSH_AUTH_SOCK=/ssh_agent \
    jimwins/docker-deployer "$@"
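With that wrapper on my PATH as dep (wherever such scripts live for you), a deploy from the development machine looks the same as it would with a locally-installed Deployer:
# same invocation as a native install, just running inside the container
dep deploy
The SSH agent socket gets mounted into the container and SSH_AUTH_SOCK points at it, which is what lets the containerized Deployer reach the production host over SSH without any extra key juggling.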
Anyway, I’m sure there are different and maybe better ways I could be doing this. I wanted to write this down because I had to fight with some of these tools a lot to figure out how to make them work how I envisioned, and just going through the process of writing this has led me to refine it a little more. It’s one of those classic cases of putting in a lot of hours to end up with relatively few lines of code.
I’m also just deploying to a single host; deployment to a real cluster of machines would require more thought and tinkering.