New site
I have archived my old blog at blog.aqwari.net
and moved the content here. The old blog will remain in perpetuity, because
Cool URIs don't change. The old
domain, aqwari.net, was difficult to communicate to people; I would have
to spell it out and I would get questions like "what does it mean?" I came
up with the name on a whim because I thought it looked cool. The new domain
is just my name.
In the process, I've completely changed the scripts and hosting I was using for the old site. The previous site was generated by Hugo, and uploaded to Google Cloud Storage (GCS). There were a couple of problems with the old setup:
- I built the templates with a very old version (0.17) of Hugo, and they didn't work with newer versions. It would have been less work to fix the templates, but I wasn't interested in keeping them up to date; the ancient version of Hugo did everything I needed. However, I am worried about the old version disappearing from the internet.
- It was expensive. While GCS itself was very cheap, serving HTTPS from a custom domain required me to set up an HTTPS Load Balancer, which is very expensive, with a minimum cost of about 18 USD per month.
I moved the hosting for the new and old site to SourceHut
Pages, which costs me about 20 USD per year. In a pinch,
I can host the same content on GitHub Pages
as a backup, for free. I also moved the top-level aqwari.net
domain to SourceHut Pages. This took a little work, as I was
serving paths for some Go modules
from a custom server running on
a VM. Luckily, the format that the go tool expects allowed me to
generate those responses in advance, so
I was able to shut down the load balancer. I'm still using GCP for some
uptime checks
to alert me when the site is unreachable.
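The go tool only fetches an HTML page at the module path and reads a `go-import` meta tag from it, so those pages can be generated ahead of time as plain static files. A minimal sketch; the module path and repository URL here are only an example:

```
# Sketch: pre-generate the static page the go tool fetches for
# "go get". The module path and repo URL below are examples only.
mkdir -p build/net/styx
cat > build/net/styx/index.html <<'EOF'
<meta name="go-import" content="aqwari.net/net/styx git https://github.com/droyo/styx">
EOF
```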
In the future I might have enough projects and services running to justify some form of dedicated hosting, where my blog can coexist. But for now, I just want to write.
The new site is generated by a custom OCaml program I wrote, named
didi. It's messy, it's ugly,
but it's mine and I can extend it exactly the way I want to as the
need arises. In the future I plan to extend it to generate output
suitable for print in magazine/journal formats. While didi takes
care of parsing and formatting the article content, it is driven by
the mk utility according to a mkfile.
Mk is a successor to make, mostly used
in Plan 9. It's very similar,
but has a few niceties that I appreciated:
- Variables are passed to recipes through the environment, instead of by text substitution, and a recipe is a single shell script, rather than a list of lines (see the sketch after this list).
- A target's recipe has access to all of its dependencies in the `$prereq` variable, whereas in Make, the recipe only has access to the prerequisites of the rule immediately preceding it.
- Special variables like `$@` and `$^` from Make are replaced by human-readable names like `$target` and `$prereq`:

```
# make
%.o : %.c
	cc -c $< -o $@

# mk
%.o : %.c
	cc -c $stem.c -o $target
```
- You can provide a command to decide whether a file is out of date, instead of relying on the file modification time:

```
didi-index:Pcmp -s: didi-index.new
	cp $prereq $target
```

This example will only update the target if the command `cmp -s didi-index didi-index.new` has a non-zero exit status. In effect, it will only update `didi-index` if it has changed. This is useful to avoid unnecessarily rebuilding files that depend on `didi-index`.
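As a small illustration of the first point (this rule is not from my actual mkfile), a multi-line recipe runs as one shell script, with `$target` and `$prereq` arriving as ordinary environment variables:

```
# Illustrative rule: the whole recipe is one shell script, so the
# loop needs no backslash continuations; $prereq is expanded by
# the shell from the environment. The V attribute marks the
# target as virtual (not a file).
all:V: a.html b.html
	for f in $prereq; do
		echo built $f
	done
```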
Currently I push site updates directly from my workstation. In the future I could set up some automation to build and push whenever the content is updated, but this works for now.
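Publishing amounts to tarring up the built output and uploading it; something like the sketch below, assuming the hut CLI is configured (the domain and output directory are placeholders, not my real values):

```
# Sketch: package the built site and publish it to SourceHut Pages.
tar -C build -cvzf site.tar.gz .
hut pages publish -d example.srht.site site.tar.gz
```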
I hope that this frees me up to write more. In the past, writing has been an effective way to keep me focused and motivated to actually finish projects.
Update: 2025-09-01
I noticed, when visiting my site with Firefox, that non-index pages were being
downloaded instead of displayed. After some experimentation with curl, I found that
SourceHut Pages, where I was hosting the files, was not inferring the content type
of the files when the client signals compression support via the Accept-Encoding
HTTP header, so it would not provide a Content-Type header at all. Firefox, by
default, will assume a MIME type of application/octet-stream. Google Chrome
will try to detect the MIME type of each page itself.
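The difference is easy to see with curl; a repro along these lines, with a placeholder URL:

```
# Without Accept-Encoding, the response carried a Content-Type header:
curl -sI https://example.com/article | grep -i '^content-type'

# Advertising compression support made the header disappear:
curl -sI -H 'Accept-Encoding: gzip' https://example.com/article | grep -i '^content-type'
```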
I reached out
to the sr.ht-discuss list about the issue, but they weren't willing
to revert to the old behavior. I implemented the suggested workaround,
moving ${article} to ${article}/index.html to use SourceHut Pages'
existing redirect.
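Mechanically, the workaround is a rename pass over the built output; a sketch, with a placeholder output directory:

```
# Sketch: turn build/foo.html into build/foo/index.html so the
# server redirects /foo to /foo/ and serves it with a known type.
# "build" stands in for the real output directory.
for f in build/*.html; do
	[ "$f" = build/index.html ] && continue
	d=${f%.html}
	mkdir -p "$d"
	mv "$f" "$d/index.html"
done
```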