Entries tagged 'docker'
The oldest sins the newest kind of ways
When I wanted to get away from using Discord to participate in the online PHP and IndieWeb communities, I still wanted a web-based interface that gave me access to the backlog of conversations from when I was offline, which IRC servers generally don’t do on their own.
I landed on using The Lounge, which has worked out very well.
I run it on my home server in Docker, and it is exposed to my Tailscale tailnet so that if I’m ever on the road, I can still access it. The configuration is pretty straightforward. There’s a docker-compose.yml file:
version: '3.9'

services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: thelounge
    env_file: ./.env
    environment:
      - TS_SERVE_CONFIG=/config/thelounge.json
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ts_state:/var/lib/tailscale
      - ./config/tailscale:/config
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - net_admin
      - sys_module
    restart: always

  backend:
    image: ghcr.io/thelounge/thelounge:latest
    env_file: ./.env
    volumes:
      - lounge_state:/var/opt/thelounge
    expose:
      - "9000/tcp"
    restart: always

volumes:
  ts_state:
  lounge_state:
And config/tailscale/thelounge.json:
{
  "TCP": {
    "443": {
      "HTTPS": true
    }
  },
  "Web": {
    "${TS_CERT_DOMAIN}:443": {
      "Handlers": {
        "/": {
          "Proxy": "http://backend:9000"
        }
      }
    }
  },
  "AllowFunnel": {
    "${TS_CERT_DOMAIN}:443": false
  }
}
There is an .env file that sets TS_AUTHKEY and TS_EXTRA_ARGS. It looks kind of like this:
TS_AUTHKEY="tskey-client-{something}?ephemeral=false"
TS_EXTRA_ARGS="--advertise-tags=tag:container --reset"
Certifiable
Because I am paying for a “LinkedIn Premium” membership during my job search, which has mainly been useful for letting me see who is looking at my profile, I decided to see what other benefits I could wring out of it. Access to their library of online courses was an obvious one to check out, and after a day of going through some lessons on Docker, I am the official holder of a “Docker Foundations Professional Certificate.”
As a software engineer with a fancy degree from a fancy school, the idea of certification feels a little down-market to me. It’s on the other side of that strange divide between computer science and information technology. That line has gotten a lot fuzzier over the years, with “DevOps” trying to give a name to that overlapping space between developer and system administrator.
(Look at me, calling myself an engineer instead of just a developer. Funny that the most prestige seems to have gotten attached to engineer instead of scientist. My fancy degree is in “Computer Science” but I don’t know that I’ve ever seen someone call themselves a computer scientist outside of academia.)
Another time I found myself facing that sort of divide was in high school when I mentioned to a classmate that I was thinking of taking a “shop” class as an elective and they were horrified. I was supposed to be on the college-bound, AP-class-taking trajectory.
Did I learn anything from the training on LinkedIn that I didn’t already know? Not particularly.
Is the certification that I earned on my profile now? Of course.
Now I’ve got my eye on that “Career Essentials in GitHub Professional Certificate.”
How I use Docker and Deployer together
I thought I’d write about this because I’m using Deployer in a way that doesn’t really seem to be supported.
After the work I’ve been doing with Python lately, I can see that the way I have been using Docker with PHP is sort of comparable to how venv is used there.
On my production host, my docker-compose setup all lives in a directory called tmky. There are four containers: caddy, talapoin (PHP-FPM), db (the database server), and search (the search engine, currently Meilisearch).
There is no installation of PHP aside from that talapoin container. There is no MySQL client software on the server outside of the db container.
I guess the usual way of deploying in this situation would be to rebuild the PHP-FPM container, but instead I just treat that container as a runtime environment: the PHP code it runs is mounted from a directory on the server outside the container.
It’s in ${HOME}/tmky/deploy/talapoin (which I’ll call ${DEPLOY_PATH} from now on). ${DEPLOY_PATH}/current is a symlink to something like ${DEPLOY_PATH}/release/5.
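Since the whole scheme hinges on swapping that current symlink, here’s a minimal sketch of the layout, with throwaway temp-directory paths rather than my real ones:

```shell
# illustrative sketch of the release/current symlink dance
DEPLOY_PATH=$(mktemp -d)
mkdir -p "$DEPLOY_PATH/release/5" "$DEPLOY_PATH/release/6"

# "current" points at the live release
ln -sfn "$DEPLOY_PATH/release/5" "$DEPLOY_PATH/current"

# a new deployment just repoints the symlink (Deployer does the
# equivalent of this when it activates a release)
ln -sfn "$DEPLOY_PATH/release/6" "$DEPLOY_PATH/current"
readlink "$DEPLOY_PATH/current"
```

Because replacing the symlink happens in one step, anything resolving paths through current sees either the old release or the new one, never a half-updated tree.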
The important bits from the docker-compose.yml look like:
services:
  talapoin:
    image: jimwins/talapoin
    volumes:
      - ./deploy/talapoin:${DEPLOY_PATH}
This means that within the container, the files still live within a path that looks like ${HOME}/tmky/deploy/talapoin. (It’s running under a different UID/GID, so it can’t even write into any directories there.) The caddy container has the same volume setup, so the relevant Caddyfile config looks like:
trainedmonkey.com {
  log

  # compress stuff
  encode zstd gzip

  # our root is a couple of levels down
  root * {$DEPLOY_PATH}/current/site

  # pass everything else to php
  php_fastcgi talapoin:9000 {
    resolve_root_symlink
  }

  file_server
}
(I like how compact this is; Caddy has a very it-just-works spirit to it that I dig.)
So when a request hits Caddy, it sees a URL like /2024/03/09, figures out there is no static file for it, and throws it over to the talapoin container to handle, giving it a SCRIPT_FILENAME of ${DEPLOY_PATH} and a REQUEST_URI of /2024/03/09.
When I do a new deployment, ${DEPLOY_PATH}/current gets relinked to the new release directory, the resolve_root_symlink option from the Caddyfile picks up the change, and new requests seamlessly roll right over to the new deployment. (Requests already being processed complete unmolested, which I guess is kind of my rationale for avoiding deployment via an updated Docker container.)
Here is what my deploy.php file looks like:
<?php
namespace Deployer;
require 'recipe/composer.php';
require 'contrib/phinx.php';
// Project name
set('application', 'talapoin');
// Project repository
set('repository', 'https://github.com/jimwins/talapoin.git');
// Host(s)
import('hosts.yml');
// Copy previous vendor directory
set('copy_dirs', [ 'vendor' ]);
before('deploy:vendors', 'deploy:copy_dirs');
// Tasks
after('deploy:cleanup', 'phinx:migrate');
// If deploy fails automatically unlock.
after('deploy:failed', 'deploy:unlock');
Pretty normal for a PHP application; the only real additions here are using Phinx for the data migrations and using deploy:copy_dirs to copy the vendor directory from the previous release so we are less likely to have to download stuff.
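What deploy:copy_dirs accomplishes is roughly this (a hand-rolled sketch with throwaway paths, not what Deployer literally runs):

```shell
# sketch: seed the new release's vendor directory from the previous one
DEPLOY_PATH=$(mktemp -d)
mkdir -p "$DEPLOY_PATH/release/5/vendor" "$DEPLOY_PATH/release/6"
touch "$DEPLOY_PATH/release/5/vendor/autoload.php"

# copy the old vendor tree so the subsequent "composer install"
# mostly finds everything already in place
cp -a "$DEPLOY_PATH/release/5/vendor" "$DEPLOY_PATH/release/6/vendor"
ls "$DEPLOY_PATH/release/6/vendor"
```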
That hosts.yml is where it gets tricky, because when we are running PHP tools like composer and phinx, we have to run them inside the talapoin container.
hosts:
  hanuman:
    bin/php: docker-compose -f "${HOME}/tmky/docker-compose.yml" exec --user="${UID}" -T --workdir="${PWD}" talapoin
    bin/composer: docker-compose -f "${HOME}/tmky/docker-compose.yml" exec --user="${UID}" -T --workdir="${PWD}" talapoin composer
    bin/phinx: docker-compose -f "${HOME}/tmky/docker-compose.yml" exec --user="${UID}" -T --workdir="${PWD}" talapoin ./vendor/bin/phinx
    deploy_path: ${HOME}/tmky/deploy/{{application}}
    phinx:
      configuration: ./phinx.yml
Now, when it’s not being pushed to an OCI host that likes to fall flat on its face, I can just run dep deploy and out goes the code.
I’m also running Deployer in a Docker container on my development machine, thanks to my fork of docker-deployer. Here’s my dep script:
#!/bin/sh

exec \
  docker run --rm -it \
    --volume "$(pwd)":/project \
    --volume "${SSH_AUTH_SOCK}":/ssh_agent \
    --user "$(id -u):$(id -g)" \
    --volume /etc/passwd:/etc/passwd:ro \
    --volume /etc/group:/etc/group:ro \
    --volume "${HOME}":"${HOME}" \
    -e SSH_AUTH_SOCK=/ssh_agent \
    jimwins/docker-deployer "$@"
Anyway, I’m sure there are different and maybe better ways I could be doing this. I wanted to write this down because I had to fight with some of these tools a lot to figure out how to make them work the way I envisioned, and just going through the process of writing this has led me to refine it a little more. It’s one of those classic cases of putting in a lot of hours to end up with relatively few lines of code.
I’m also just deploying to a single host; deployment to a real cluster of machines would require more thought and tinkering.
Docker, Tailscale, and Caddy, oh my
I do my web development on a server under my desk, and the way I had it set up was with a wildcard DNS entry for *.muck.rawm.us, so requests would hit nginx on that server, which was configured to handle various incarnations of whatever I was working on. The IP address was originally just a private-network one, and eventually I migrated it to a Tailscale tailnet address. It was still published to public DNS, but that’s not a big deal since those addresses weren’t routable.
A reason I liked this is that I find it easier to deal with hostnames like talapoin.muck.rawm.us and scat.muck.rawm.us than to run things on different ports and try to keep those straight.
One annoyance was that I had to maintain an active SSL certificate for the wildcard. Not a big deal, and I had that nearly automated, but a bigger hassle was that whenever I wanted to set up another service, it required mucking about in the nginx configuration.
Something I have wanted to play around with for a while was using Tailscale with Docker to make each container (or docker-compose setup, really) its own host on my tailnet.
So I finally buckled down, watched this video deep dive into using Tailscale with Docker, and got it all working.
I even took on the additional complication of throwing Caddy into the mix. That ended up being really straightforward once I finally wrapped my head around how to set up the file paths so Caddy could serve up the static files and pass the PHP off to the php-fpm container. Almost too easy, which is probably why it took me so long.
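For the curious, the development-side Caddyfile ends up being almost nothing. Here’s a hypothetical sketch (the root path is a placeholder, and it assumes the Tailscale sidecar terminates HTTPS and proxies plain HTTP to Caddy):

```
# hypothetical: the tailscale sidecar handles HTTPS and proxies plain
# HTTP to this container, so caddy just listens on :80
:80 {
  # the same source tree is mounted into both the caddy and php-fpm
  # containers at the same path
  root * /srv/talapoin/site
  php_fastcgi php-fpm:9000
  file_server
}
```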
Now I can just start this up, it’s accessible at talapoin.{tailnet}.ts.net, and I can keep on tinkering.
While it works the way I have it set up for development, it will need tweaking for “production” use, since I won’t need Tailscale there.
big, heavy, and wood
justin mason flagged this article about "The log/event processing pipeline you can't have" a while back, and it has been on my mind ever since. our digital infrastructure is split across a few machines (virtual and not) and i often wish that i had a more cohesive way of collecting logs and doing even minimally interesting things with them.
i think the setup there is probably overkill for what i want, but i love the philosophy behind it. small, simple tools that fit together in an old-school unix way.
i set up an instance of graylog to play with a state-of-the-art log management tool, and it is actually pretty nice. the documentation around it is kind of terrible right now because the latest big release broke a lot of the recipes for processing logs.
right now, the path i am using for getting logs from nginx in a docker container to graylog involves nginx outputting JSON that gets double-encoded. it’s all very gross.
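one way to cut out the double encoding would be to have nginx emit JSON itself. this hypothetical log_format is the kind of thing i have in mind (escape=json needs nginx 1.11.8 or later, and the graylog host and port are placeholders):

```
# hypothetical: log access entries as single-encoded JSON
log_format graylog_json escape=json '{'
    '"timestamp": "$time_iso8601", '
    '"remote_addr": "$remote_addr", '
    '"request": "$request", '
    '"status": "$status", '
    '"bytes_sent": "$body_bytes_sent"'
  '}';

# ship the entries off to a graylog syslog input
access_log syslog:server=graylog.internal:1514 graylog_json;
```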
i think i am having a hard time finding the correct tooling for the gap between “i run everything on a single box” and “i have a lot of VC money to throw at an exponentially scalable system”. (while also avoiding AWS.)
(the very first post to this blog was the same ren & stimpy reference as the title of this post.)
the state of things
just over seven years ago, i mentioned that i had decided to switch over to using scat, the point of sale software that i had been knocking together in my spare time. it happened, and we have been using it while i continue to work on it in that copious spare time. the project page says “it is currently a very rough work-in-progress and not suitable for use by anyone.” and that's still true. perhaps even more true. (and absolutely true if you include the online store component.)
it is currently a frankenstein monster as i am (slowly) transforming it from an old-school php application to being built on the slim framework. i am using twig for templating, and using idiorm and paris as a database abstraction thing.
i am using docker containers for some things, but i have very mixed emotions about it. i started doing that because i was doing development on my macbook pro (15-inch, early 2008), which is stuck on el capitan, but i found it convenient to keep using them once i transitioned to doing development on the same local server where i'm running our production instance.
the way that docker gets stdout and stderr wrong constantly vexes me. (there may be reasonable technical reasons for it.)
i have been reading jamie zawinski’s blog for a very long time, all the way back to when it was on livejournal, and also the blog for dna lounge, the nightclub he owns. his most recent post, about writing his own user account system for the club website, sounded very familiar.