trainedmonkey

migrating from cloudflare

i have been migrating away from cloudflare. domain name registrations are moving to namesilo, ddos shielding/caching is now on fastly, and i am trying out segment instead of cloudflare zaraz.

flipping switches

cloudflare zaraz is a great concept: manage the third-party code for your website sort of like google tag manager, but run as much of the code as possible in the cloud instead of the browser. but the execution is still rough around the edges, especially when it comes to the ecommerce functionality.

each of the platforms where we publish our catalog (and can use that to advertise) has its own way of collecting performance metrics. the way i had hacked support for each into our old website was messy and fragile. zaraz steps in here with a simple zaraz.ecommerce(event, data) call that pushes the data out to each of those third-party tools.
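the call itself is tiny. here is a hedged sketch of how a page might guard it so nothing breaks when zaraz isn't loaded (or is blocked); the event name and fields follow the usual ecommerce-spec shape, which is my assumption about what zaraz expects, not something from its docs:

```javascript
// guard the zaraz call so pages that load without zaraz don't throw.
function trackEcommerce(event, data) {
  if (typeof zaraz !== 'undefined' && typeof zaraz.ecommerce === 'function') {
    zaraz.ecommerce(event, data);
    return true;
  }
  return false; // zaraz not available; caller can fall back or drop the event
}

// e.g. after checkout (field names are illustrative):
// trackEcommerce('Order Completed', {
//   order_id: '1234',
//   total: 27.5,
//   currency: 'USD',
//   products: [{ product_id: 'sku-99', price: 27.5, quantity: 1 }],
// });
```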

the problem is that the way zaraz maps its simplified interface to those various systems is undocumented and, as near as the community can figure out, not always correct. i also found that if i enabled the ecommerce integration for facebook, it broke all of the ecommerce reporting everywhere.

i am still hopeful that they can work through the bugs and issues, add support for some of the other platforms that would be useful for us (like pinterest), and we can collect the data we need with a minimized impact on site performance.

the worst case is that i can just drop in my own implementation to turn those zaraz.ecommerce() calls back into the old browser-side integrations, and it will still be more streamlined than it used to be.
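that fallback could be as small as defining the same entry point myself and fanning events out to the browser-side tags directly. a hedged sketch, assuming facebook pixel (fbq) and google (gtag) are the tags in play; the event-name translation here is my guess, not zaraz's actual internal mapping:

```javascript
// drop-in replacement: keep the zaraz.ecommerce() call sites, but fan
// the events out to browser-side tags instead of the cloud.
globalThis.zaraz = globalThis.zaraz || {};
globalThis.zaraz.ecommerce = function (event, data) {
  if (typeof fbq === 'function') {
    // facebook pixel has its own event vocabulary (assumed mapping)
    fbq('track', event === 'Order Completed' ? 'Purchase' : event, data);
  }
  if (typeof gtag === 'function') {
    gtag('event', event, data);
  }
};
```

the nice part is that the rest of the site never knows the difference: the call sites stay zaraz-shaped either way.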

dipping my toes in go

one of the very first things i noticed when i migrated our website to a new server was that someone was running a vulnerability scanner against us, which was annoying. i cranked up the bot-fighting tools on cloudflare, but i also got fail2ban running pretty quickly so it would add the IP addresses behind obviously bad requests to an IP list on cloudflare, locking those addresses out of the site for a while. not a foolproof measure, of course, but maybe it makes us a slightly harder target so they move on to someone else.

but fail2ban is a very old tool with a pretty gross configuration system. i was poking around for a more modern take on the problem, and i found a simple application written in go called silencer that i decided to work with. i forked it so i could integrate it with cloudflare, and that was very straightforward. i also had to update one of the dependencies so it actually handles log file rotation. when i get time to hack on it some more, i'll add handling for ipv6 as well as ipv4 addresses.
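the cloudflare side of that integration boils down to one authenticated POST per ban. the fork itself is go, but here is a hedged sketch of the request shape in javascript for illustration; the account id, list id, and token are placeholders, and the endpoint follows cloudflare's rules lists api:

```javascript
// build the request that adds a banned address to an account-level
// ip list on cloudflare. returns a plain object so the network call
// can be made separately (e.g. with fetch on node 18+).
function buildBanRequest(ip, { accountId, listId, token }) {
  return {
    method: 'POST',
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/rules/lists/${listId}/items`,
    headers: {
      'Authorization': `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    // the lists api takes an array of items; comment is free-form
    body: JSON.stringify([{ ip, comment: 'banned by silencer' }]),
  };
}

// usage:
// const req = buildBanRequest('203.0.113.7', cfg);
// await fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
```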

go is an interesting language. obviously i don’t have my head wrapped around the customs and community, so it seems a little rough to me, but it’s also not so different that i couldn’t feel my way around pretty quickly to solve my problem at hand.

another three years

another three years between entries. some stuff has happened. the store is still going, and i am still finding excuses to code and learn new things.

i wrote before about how i was converting scat from a frankenstein monster to a more modern php application built on a framework, which has more or less happened. there's just a little bit of the monster left in there, and i need to work up the proper motivation to finish rooting it out.

i also took what was a separate online store application built on a different php framework and made it a different face of scat. it is still evolving and there are bits that make it work that aren't really reflected in the repository, but it's in production and seems to sort of work, which has been gratifying to get accomplished. the interface for the online store doesn't use any javascript or css frameworks. between that and running everything behind cloudflare, it's much faster than it used to be.

big, heavy, and wood

justin mason flagged this article about "The log/event processing pipeline you can't have" a while back, and it has been on my mind ever since. our digital infrastructure is split across a few machines (virtual and not) and i often wish that i had a more cohesive way of collecting logs and doing even minimally interesting things with them.

i think the setup there is probably overkill for what i want, but i love the philosophy behind it. small, simple tools that fit together in an old-school unix way.

i set up an instance of graylog to play with a state-of-the-art log management tool, and it is actually pretty nice. the documentation around it is kind of terrible right now because the latest big release broke a lot of the recipes for processing logs.

right now, the path i am using for getting logs from nginx in a docker container to graylog involves nginx outputting JSON that gets double-encoded. it’s all very gross.
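nginx can emit clean json in one pass, which should remove the double-encoding step entirely. a sketch, assuming a recent nginx (escape=json needs 1.11.8+); the field list is just a guess at what's useful, and wiring the output into a graylog input is still a separate step:

```nginx
# define a json access log format; escape=json escapes the variable
# values so the output is valid json without a second encoding pass.
log_format graylog_json escape=json
  '{'
    '"timestamp":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":$status,'
    '"body_bytes_sent":$body_bytes_sent,'
    '"http_user_agent":"$http_user_agent"'
  '}';

# in a docker container, logging to stdout keeps the usual plumbing
access_log /dev/stdout graylog_json;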

i think i am having a hard time finding the correct tooling for the gap between “i run everything on a single box” and “i have a lot of VC money to throw at an exponentially scalable system”. (while also avoiding AWS.)

(the very first post to this blog was the same ren & stimpy reference as the title of this post.)

the state of things

just over seven years ago, i mentioned that i had decided to switch over to using scat, the point of sale software that i had been knocking together in my spare time. it happened, and we have been using it while i continue to work on it in that copious spare time. the project page says “it is currently a very rough work-in-progress and not suitable for use by anyone.” and that's still true. perhaps even more true. (and absolutely true if you include the online store component.)

it is currently a frankenstein monster as i am (slowly) transforming it from an old-school php application to being built on the slim framework. i am using twig for templating, and using idiorm and paris as a database abstraction thing.

i am using docker containers for some things, but i have very mixed emotions about it. i started doing that because i was doing development on my macbook pro (15-inch, early 2008), which is stuck on el capitan, but i found it convenient to keep using them once i transitioned to doing development on the same local server where i'm running our production instance.

the way that docker gets stdout and stderr wrong constantly vexes me. (there may be reasonable technical reasons for it.)

i have been reading jamie zawinski’s blog for a very long time, all the way back to when it was on livejournal, and also the blog for dna lounge, the nightclub he owns. the most recent post, about writing his own user account system for the club website, sounded very familiar.

character encoding is still hard?

email2webhook is nice in theory, but it fell flat when handling basic character encoding.

that i still have to fight the same sort of issues that i was dealing with about 16(!) years ago is somehow not at all surprising.

i’ve switched to using zapier’s email to webhook features. it will probably bring a different set of challenges.

😎

now i’ve done it

i noodled around a little more, and came up with an ugly way to supply and extract tags in my postings via email. the next step will be extracting attachments so i can post images again.

incremental

i wanted to play with email2webhook, so that gave me an excuse to knock together a quick and dirty way to post here via email. i didn't add a way to tag posts, but it's a start. more to come.

progress

yes, this is still on. and now it is on a new server with new code, even though it looks the same.

i still need to knock together the "writing a post" bit of code so that i can post without using manual SQL queries. details.

stirrings

it turns out that this thing is still on. it is kind of funny to me that it still just chugs along, but such is the joy of writing your own software running on your own (virtual) server. it is long past time to rebuild the infrastructure here, which may or may not happen. but it is something that i am thinking about again.

let’s encrypt

this domain is now using a certificate from the let’s encrypt project. they are scheduled for general availability in a couple of weeks.

it’s a great idea. getting free domain certificates has always been more annoying than i was ready to deal with, and once this is all fully baked, it will make it easier to just make everything encrypted by default.