
Notes from underground

An Amiga Workbench desktop with this article being written in a text editor

Having decided that I'm unlikely to get the Aston Martin DB5 in Gunmetal grey I wanted as a kid, I got the next best thing and bought an Amiga 4000. Normally when I tell people this, I get one of two reactions:

  • You jammy, jammy sod
  • You bought a what?

For those who've never experienced the Amiga first-hand, number 2 is understandable. Most people who have are in the first camp.

For those in the second camp, the Amiga 4000 is the final set of models in the classic Amiga series. This is the Amiga equivalent of a Ferrari F40: sleek, spectacular, crazy expensive to run for what it is, and entirely impractical. There are some mildly insane design choices and bugs, and because Commodore cheaped out at the last minute, several pretty fatal things can happen to it if it's not taken care of exceptionally well over its lifetime.

Owning an Amiga 4000 is not like owning an Amiga 1200 or 500. This isn't a machine built for gaming. After all, I can game just fine with near-perfect emulation thanks to WHDLoad, and a Raspberry Pi is pretty much the fastest gaming Amiga you can get.

I'm using the Amiga 4000 for productivity, mostly creative work. Yes, you read that right. No, I'm not insane. It's 2019, and I've bought an Amiga to use for actual day-to-day creative things. As shown in the screenshot above, this article was even written on the Amiga.

To be creative I need to move files back and forth. The Amiga 4000 is the only regularly used device I have with a floppy drive, so that's out as a medium.

Thankfully the Amiga 4000 has a DVD-rewriter. Transferring files over DVD/CD works well. The Amiga uses the Joliet filesystem rather than UDF, and has some slight preferences for odd CD writing configurations. On the whole, it works.

Burning CDs for small amounts of data gets old after a while though, and I'd prefer some sort of network connectivity, at least till the MNT ZZ9000 comes online. The easiest and cheapest way to do this is with an X-Surf 100, an Ethernet card available for around 100 Euro. As I won't have a use for the X-Surf after my ZZ9000 arrives, I'm trying a serial link to a Raspberry Pi instead. Here's how it's set up.

Hardware

First you'll need some hardware. You can build some of this yourself, or buy the parts and salvage things lying around like I did. You will need:

  • A Raspberry Pi, power cable, Micro SD card, Raspbian etc.
  • An Amiga with a 25-pin serial port
  • A 9-pin to 25-pin serial adapter. I used this one from Amigakit
  • A USB-Serial cable

Stage 1, Basic Connectivity

To start, connect the Raspberry Pi to the USB cable, the USB cable to the 25 pin adapter, and the 25 pin adapter to the Amiga. Congrats, your Amiga is now physically linked to the Pi!

I'm using Term v4.8 from Aminet to get basic terminal emulation running. You'll want to configure serial settings as follows:

Amiga Term configured for 115200 xfer with 8/n1 and no flow control.

You'll be asked about enabling RTS/CTS; things seem to work fine with it switched off.

Note: Term needs paths defined (Settings –> Paths from the Menu) or your files won't be saved. Also make sure that you save your settings from the pulldown Settings menu.

On the Raspberry Pi you'll need to install screen via apt-get. The sz and rz commands used for file transfers later come from the lrzsz package, so it's worth installing both at once:
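sudo apt-get install screen lrzsz

Once installed, attach screen to the serial device in a console: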

screen /dev/ttyUSB0 115200

The device name might vary depending on the USB-serial converter you're using or how you're connected. It could be /dev/ttyAMA0 or /dev/ttyACM0 in some cases. Check dmesg and the contents of /dev/ if you get stuck.
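For example:

dmesg | grep -i tty
ls /dev/ttyUSB* /dev/ttyACM*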

Hello From the Amiga on the PC

If you type in the screen window, you should see the text echoed in the Term session on the Amiga. If you type on the Amiga you should see text show up on the screen session on the Pi.

Hello from the PC on the Amiga

It'd be a little boring if this was all you could do. Let's transfer some files. I downloaded DeliTracker and the update to 2.34 onto the Raspberry Pi with wget. In the screen session, I pressed Ctrl-A and typed in the following:

: exec sz -b delitracker232.lha

The Term session should spring to life and start receiving a file, which will be saved in the path you specified earlier. Extract DeliTracker, run the installer and repeat with the update file. Of course, now that you have a mod music player, it's only fair that you should go to UnExoticA and get some tunes to play.

Sending files back

Sending files back to the Raspberry Pi is pretty easy; getting screen to receive them is only slightly more involved. Drag and drop the file you want to send onto Term's “term Upload queue” icon. On the Raspberry Pi's screen session, press Ctrl-A and enter : exec !! rz.

Transfer config on the Amiga side

The Amiga's term window will ask you about the file you're about to send. Set it to binary transfer and it'll land in the directory where you originally launched screen.

PC Receiving a file

Going further

You could run a full login terminal on the Raspberry Pi over serial and use that to log into the Pi via Term. While it's certainly cute, it reduces a very expensive Amiga 4000 to a dumb terminal. Instead I plan on using the Pi as a support system for the Amiga, where it does things that are menial, boring or just too slow for the Amiga to take care of. The next thing for me to do is to get TCP/IP networking via PPP, which I'll cover in another post.
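If you do fancy trying the dumb terminal route, a minimal sketch on a systemd-based Raspbian is to start a getty on the same serial device used earlier:

sudo systemctl start serial-getty@ttyUSB0.service

Log in from Term and you have a shell on the Pi.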

In the meantime, here's the Amiga in its home, on the right of this picture.

My Battlestation setup, with the Amiga on the right

Gotify

In a previous article I wrote about how I've changed my relationship with my phone. One of the benefits of degoogling is that your device is a little less spied upon. The downside of running LineageOS for microG is that certain functionality is a bit harder to come by.

I like minimal notifications, but there are things happening I want to know about. On iOS I used Prowl to tell me about reboots. As there's no F-Droid client I found myself without an emergency notification system. I saw Gotify in the F-Droid app and thought I'd give it a go. So far, I'm pretty happy with it.

I recently rebuilt an old unused box to self-host low-priority services. I'm a big fan of self-hosting, having been burnt several times by online services. I'm not against online services making a living, but I'd rather own my stuff than rent.

The box was rebuilt to use docker and docker-compose. I find docker a double-edged sword: you either have to maintain your own docker repository or trust someone else's. As this box only runs low-priority services, I'm ok running images from other people's repositories.

Installing Gotify Server With Docker-Compose

I set up Caddy as a front-end service to manage LetsEncrypt. I prefer nginx, but for docker, Caddy's fine. I also use Ouroboros to auto-update images when new ones come out. If I'm going to use other people's repos I may as well get some value out of it.

Creating a gotify docker-compose entry was easy. I've included ouroboros and the caddy frontend in mine below:

version: '3'
services:
  ouroboros:
    container_name: ouroboros
    hostname: ouroboros
    image: pyouroboros/ouroboros
    environment:
      - CLEANUP=true
      - INTERVAL=300
      - LOG_LEVEL=info
      - SELF_UPDATE=true
      - IGNORE=mongo influxdb postgres mariadb
      - TZ=Europe/London
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  caddy:
    container_name: caddy
    image: abiosoft/caddy:no-stats
    restart: unless-stopped
    volumes:
      - ./caddy/Caddyfile:/etc/Caddyfile
      - ./caddy/caddycerts:/etc/caddycerts
      - ./caddy/data:/data:ro
    ports:
      - "80:80"
      - "443:443"
    env_file:
      - ./caddy/caddy.env

  gotify:
    container_name: gotify
    image: gotify/server
    restart: unless-stopped
    volumes:
      - ./apps/gotify/data:/app/data

My caddy config needed an additional section for the new host:

gotify.steves.box {
  root /data
  log stdout
  errors stdout
  tls s@steves.box
  gzip
  proxy / gotify:80 {
    transparent
    websocket
  }
}

Hostnames have been changed to protect the innocent. When using caddy, specify websocket in the proxy section. The Android app uses websockets to handle notifications.

A quick docker-compose up -d and I was up and running. The default username and password is admin/admin. Change that first, then create a user account to receive notifications.

After creating the user account, log out of admin, and log back in as the new user. Notifications are per-application and per-user, so you'll have to send notifications to each user separately. I hope group notifications will be possible at some point.

Gotify notifications

I added a cute puppy picture to my app, making unexpected reboots all the more cute. Then I installed the Gotify app from F-Droid and added my server. I checked the app and server logs for HTTP 400 errors, which would stop notifications from working.

A Portable Commandline Notification Tool

I wrote a quick python-based tool to send notifications from the command line. You can use the official gotify client tool, or even curl. I wanted something portable that would work without 3rd-party libraries.
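For comparison, sending a message with curl looks something like this, using the same placeholder hostname and token as the script below:

curl "https://gotify.steves.box/message?token=changeme" \
	-F "title=Test" -F "message=Hello from curl" -F "priority=5"

Here's the script: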

#!/usr/bin/env python
# gotipy.py - A python gotify client using only built-in modules
	
import json, urllib, urllib2, argparse
	
parser = argparse.ArgumentParser(description='gotify python client')
	
parser.add_argument('-p','--priority', help="priority number (higher's more intrusive)", type=int, required=True)
parser.add_argument('-t','--title', help="title notification", required=True)
parser.add_argument('-m','--message', help="message to display", required=True)
parser.add_argument('-v','--verbose', help="print response", action='store_true')
args = parser.parse_args()
	
# Change the token below to one generated for your app in the gotify UI
url = 'https://gotify.steves.box/message?token=changeme'
data = urllib.urlencode({"message": args.message,
			"priority": args.priority,
			"title": args.title})

# POST the form-encoded payload to gotify's /message endpoint
req = urllib2.Request(url, data)
resp = urllib2.urlopen(req)
if args.verbose:
	print resp.read()

If you use the script, don't forget to change the token value in the url variable to one for your app.
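With the token changed, sending a test notification looks like this:

python2 gotipy.py -p 5 -t "Test" -m "Hello from gotipy"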

The final thing to do is to set up a reboot notification for the chargen.one box. We can do this on OpenBSD using a cron job. I've copied gotipy.py into /usr/local/bin and set up a cron job as a normal user to run on reboot:

@reboot python2 /usr/local/bin/gotipy.py -p 8 -t "Chargen.one" -m "Rebooted at `date`"

Now if we reboot the system, we can check that it's working by looking in /var/cron/log:

Apr 6 16:11:10 chargen cron[41858]: (asdf) CMD (python2 /usr/local/bin/gotipy.py -p 8 -t "Chargen.one" -m "Rebooted at `date`")

Please note that some OSes only run @reboot jobs for root. If you're having trouble, check your cron daemon supports non-root @reboot jobs.

If you're wondering what else I plan to use this for, the answer is: not much. I like only having serious event notifications and want to keep things minimal. Some of the things I'll use this for include:

  • Reboot notifications across servers
  • A new device connecting to the home network
  • MotionEye detecting movement in the conservatory

For pretty much everything else, there's email and I can pick that up in slow time.

Smartphone manipulation

Smartphones and Social Media are ruining our lives, say the press. But it's not smartphones or social media themselves; it's that we're biologically unprepared to deal with them. Smartphones and Social Media are designed to hijack the attention and reward centres of the brain. They do it so well we don't notice how our brains are being altered.

In this post I write about how I changed my smartphone use, and how this helped me reclaim my time and my attention span. It's about my ability to be present in the moment: something I lost, then regained.

I won't talk about everything I do. Instead, I'll focus on things you can do. There's no talk of compiling your own firmware, mainlining F-Droid or microG here. That's for some other time.

Instead, this post is about what I did that you can do without changing your phone.

I never felt more connected to friends through my phone, but so absent in their presence.

How Things Got Out Of Control

A few years ago, I had an iPhone 6. It was the digital tool I used more than anything else. If you could think of a pointless app, it'd be on there. When a notification came in I'd hear a noise. The screen would light up and, in Pavlovian style, so would my neurons. My phone went to bed with me, it woke up with me, it went to work with me.

The phone takes over our lives, like boiling a frog

I started to find that I was getting less happy. I felt less able to concentrate. I never felt more connected to friends through my phone, but so absent in their presence. My attention span shrivelled. I couldn't watch whole films. Reading books was impossible. In pockets of free time I'd check Facebook, Twitter, Instagram and email, ad nauseam. I missed buses and trains because I was so absorbed in something I don't even remember reading. I had forgotten boredom. There was no time for my mind to wander. I became an angelheaded hipster, burning for the ancient heavenly connection to the starry dynamo in the machinery of night.

I lost the buzz of low-effort connection, but gained the ability to connect with purpose.

The Flashpoint

When Apple pulled the plug on the headphone jack, I realised my time with Apple's products was over. I didn't want to jump from Apple's walled garden into Google's. Instead I tried to degoogle my life (which is definitely another post in itself).

In the process I found ways to make my phone work for me rather than against me. I found a whole new world of ethical social media. So far, I've gained happiness, time and space for myself. I lost the buzz of low-effort connection, but gained the ability to connect with purpose.

I wanted a sustainable phone experience. This isn't a minimalist experience. This isn't a phone pared back to the basics. It's a phone experience that works for me, not against me. Everyone's sustainable phone experience is different. It's a journey, not a goal. A journey I encourage readers to travel.

Stage 1: Do Not Disturb

I'm not waking this pupper up, and neither is my phone

The first thing I did was reduce the volume and timing of notifications I receive. One of the best features on both Android and iOS is Do Not Disturb. It isn't enough alone, but combined with sane rules it makes the break between you and your phone.

Since the early days of BlackBerry, people have been chained to notifications. Notifications have many problems, the worst of which is the impact on sleep. Do Not Disturb helps you take back your sleep. It also lets you take back your time.

Here's how I use my Do Not Disturb settings:

  1. No calls, messages or notifications from 8pm – 10am
  2. Notifications from Tusky, Signal and QKSMS, plus calls, on Saturday and Sunday daytime
  3. Exceptions for specific contact groups over specific services

Using contact groups to manage exceptions lets family and friends reach you in your own time. Calls also come through if someone calls 3 times in 5 minutes. This works on both iOS and Android.

Stage 2: Notifications

Putting the no in No-tifications

I also restrict notifications: which apps can send them, and when.

On my iPhone I used IHG's app to book hotels. The app used notifications to update me about bookings. It also advertised to me. Many apps use notifications for adverts. This doesn't happen on my current phone.

The only apps that can trigger notifications on my phone are:

  • Tusky for Mastodon notifications
  • Mail notifications
  • Calendar notifications
  • Signal Messenger for messages
  • App update notifications from F-Droid and Yalp

For everything else, I can check the app when I feel like it.

Imagine caring what people you barely know are up to while the most amazing person in the world lies next to you.

Stage 3: Reduce Interaction

Oh god, no.

A major step was to get social media to notify me by email. This increases the number of steps needed to respond. I use a dedicated mail account for low-value mail such as notifications and sign-ups. I now respond to notifications in my own time, not when a light pops up.

If I have an email notification, the app will still show it as unread when I visit. I set aside time to use social networks. I try to use them with purpose instead of passively scrolling through every 15 minutes.

Marizel and I noticed we used our phones when we woke up, and used them in bed before sleeping. Imagine caring what people you barely know are up to while the most amazing person in the world lies next to you.

Our phones stay outside of the bedroom now. In fact we use no technology in the bedroom beyond a light and a radiator. The room is now only used for about 3 things, none of which need complex technology.

Stage 4: Trimming Apps

Wanna see my home screen?

My home screen

I deleted Facebook in light of its continuous commitment to violating privacy. The Cambridge Analytica scandal was the last straw for me. I understand that for many people that's not an option. For example, I'm still a heavy Twitter user even though it often makes me sad. That's why I keep it off my home screen.

Reducing the number of apps I have helped a lot. Most of the time, you don't need an app. I started by removing apps I hadn't used in 6 months. I removed apps that had functioning mobile sites and bookmarked those sites on my home screen instead. I switched to lighter, non-official social media clients that didn't bug or track me.

There are alternatives that will help you get your time back. If you can't delete Facebook, remove the app from your phone. If that's too much, replace it with a dedicated browser app only used for Facebook. Set Facebook's mobile site as the home page in that browser. Bonus points if your dedicated browser app supports ad-blocking.

If you find social media makes you angry or upset, consider using it from your laptop only. Laptops tend not to stay online, unlike phones and tablets. Using a laptop requires a conscious decision to engage instead of a passive default. You can still catch up with friends and family on Facebook, but you need to make a little effort to do so. Friction is the best tool for controlling social media use.

Setting up ad-blockers on a laptop is often easier than on a phone. Having said that, there are great apps like Better that are worth looking at, and Firefox on Android supports add-ons such as uBlock Origin.

Stage 5: Seasonal Cleaning

It's a journey, not a destination

To keep things light, I created a 3 month folder on my home screen. Every month I go through my installed apps. If I haven't used an app that month, it goes in the 3 month folder. This means the app is still on my home screen, but not taking up space.

If I use it, it comes out of the folder and off my home screen. If I don't use the app in 3 months, I uninstall it. This keeps my phone light, quick and clean.

I'm pretty brutal about my home screen. My wallpaper is black, I use dark mode where I can and I keep the screen brightness low. I have 11 icons on my home screen, along with two folders:

  • The 3 month folder discussed earlier
  • A folder named “Don't”

The Don't folder holds apps I want to use less. Don't doesn't mean “Don't use this”. It means “Don't make this your default action”. In my Don't folder currently, I have the following apps:

  • Red Reader
  • SimplyWall.st
  • Tusky

Once I feel my relationship with an app is back on track, I take it out of Don't and decide where to put it next. If it doesn't improve, I'll consider removing it. I don't have to remove it. I just have to make an active decision about that app's future.

As I mentioned, my wallpaper is black, but I've found some great options for lockscreens.

An Aside: Kinder, Gentler Social Media

Ethical Social Media exists. You should try it

You might've wondered what Tusky and Mastodon are. Well, I used to use Facebook, Twitter and Instagram. I found that these apps encouraged me to vomit thoughts, argue with people and share things that upset me. I decided to find alternatives, and I'm glad I did.

I use Mastodon as a friendlier, happier alternative to Twitter. It's not the same, but that's a good thing. Instead of Instagram I use Pixelfed, but that's still new, so I'm waiting for an Android app. For writing I use WriteFreely. You're using it now to read this.

These applications are all part of something called the Fediverse. It's a non-commercial, open way of sharing with each other. Nobody's incentivised to get you to like or share. Likewise, nobody's incentivised to like or share your stuff. These spaces tend to be smaller and sometimes less active, but are way healthier.

Ethical social media is less invasive. It avoids the dopamine-feedback loop you get with commercial networks. People can still contact me via social networks on Mastodon and Pixelfed. Of course, there are plenty of options for email.

Stage 6: Making Social Media an Active Choice

There are no wrong answers, just take the time to choose

I've got rid of most of the more evil social media around, so how do I reclaim my life? Well, I start by setting particular times to use social media. I check social media on my phone mostly at the start of the day, and for about an hour before bed. The rest of the time it needs to be a conscious decision to use it on my laptop.

It takes time to reclaim your attention span. I've found Kindles to be amazing devices for this. I just wish I could find a more open alternative that did what I wanted. I've also found that little things help.

Instead of using a phone when I get up, I try to make sure Marizel is the first thing I see. If I'm up first I'll spend a few minutes watching her sleep. Sometimes I think about random things. Other times I just watch her. I find this helps me focus on what's important.

I usually make us coffee first thing in the morning and I'll look out of the kitchen window while the kettle boils. It's not an amazing view, but the phone stays in the living room. It gives me time every day for my mind to wander. It's only 5 minutes while I wake up, but it makes a real difference to my perspective.

Final Thoughts

The biggest thing I've had to accept is that this is a work in progress. Sometimes I'm going to fail. I'm going to get into arguments on twitter. I'm going to spend too much time on an app for no good reason. There will be times when I'm physically with people, but mentally absent. It's ok. What's important is that I recognise it, and try to stop it happening next time.

But in a life surrounded by bells and flashing lights I can find the time to be present with those I care about. That's worth more than all the likes and shares in the world.

Now that my OpenBSD.Amsterdam VPS is up and running, and I have working backups, I thought I'd migrate some static sites over to this host and free up another dedicated server I'm using. Adding extra static HTML won't add to the VPS' general load and won't introduce new risks to #Chargen.One.

To do this, I need to implement name-based Virtual Hosting. I'm going to show how this is done for one site, hackingforfoodbanks.org, then build upon it for multiple hosts. Finally, I'll modularize elements of the configuration to make things more manageable, including HTTPS support.

To make Name-based virtual hosting work, it's necessary to update /etc/acme-client.conf, the DNS Records for the domain in question, and the nginx configuration.

Moving DNS

This is the simplest part of the job. Log into the DNS provider or server, point the relevant 'A' and/or 'CNAME' records at the HTTP server's IP address, and be prepared to wait up to 24 hours for the changes to propagate.

Now DNS is out of the way, the next thing is to clean up the nginx config from earlier.

Segregating the Nginx config

The config as-is is fine for just hosting Chargen.One but could get a bit unwieldy if I move all of my static sites across. I created a subdirectory in /etc/nginx/ called sites, into which I can add server blocks for each site I want to host. This splits the configuration up into more manageable per-site blocks.

Before adding a new host, I split out the default chargen.one site config into a new file, /etc/nginx/sites/default.conf. This is a copy of the main /etc/nginx/nginx.conf site config, with everything from the opening server { to the closing } included. It looks like this:

server {
	listen       80 default_server;
	listen       [::]:80 default_server;
	server_name  _;
	root         /var/www/htdocs/c1;
	
	include acme.conf;
	
	#access_log  logs/host.access.log  main;
	#error_page  404              /404.html;
	
	# redirect server error pages to the static page /50x.html
	error_page   500 502 503 504  /50x.html;
	location = /50x.html {
	    root  /var/www/htdocs/c1;
	}
	
	# For reading content
	location ~ ^/(css|img|js|fonts)/ {
	        root /var/www/htdocs/c1;
	        # Optionally cache these files in the browser:
	        # expires 12M;
	}

	
	location ~ ^/.well-known/(webfinger|nodeinfo|host-meta) {
	    proxy_set_header Host $host;
	    proxy_set_header X-Real-IP $remote_addr;
	    proxy_set_header X-Forwarded-For $remote_addr;
	    proxy_pass http://127.0.0.1:8080;
	    proxy_redirect off;
	}
	
	location / {
	    proxy_set_header Host $host;
	    proxy_set_header X-Real-IP $remote_addr;
	    proxy_set_header X-Forwarded-For $remote_addr;
	    proxy_pass http://127.0.0.1:8080;
	    proxy_redirect off;
	}
}

# HTTPS server
#
server {
	listen       443 default_server;
	server_name  _;
	root         /var/www/htdocs/c1;
	include /etc/nginx/acme.conf;
	
	ssl                  on;
	ssl_certificate      /etc/ssl/chargen.one.fullchain.pem;
	ssl_certificate_key  /etc/ssl/private/chargen.one.key;
	ssl_session_timeout  5m;
	ssl_session_cache    shared:SSL:1m;
	ssl_ciphers  HIGH:!aNULL:!MD5:!RC4;
	ssl_prefer_server_ciphers   on;
	
	location ~ ^/.well-known/(webfinger|nodeinfo|host-meta) {
	    proxy_set_header Host $host;
	    proxy_set_header X-Real-IP $remote_addr;
	    proxy_set_header X-Forwarded-For $remote_addr;
	    proxy_pass http://127.0.0.1:8080;
	    proxy_redirect off;
	}
		
	location ~ ^/(css|img|js|fonts)/ {
	    root /var/www/htdocs/c1;
	    # Optionally cache these files in the browser:
	    # expires 12M;
	}
		
	location / {
	    proxy_set_header Host $host;
	    proxy_set_header X-Real-IP $remote_addr;
	    proxy_set_header X-Forwarded-For $remote_addr;
	    proxy_pass http://127.0.0.1:8080;
	    proxy_redirect off;
	}
}

With that entire block removed from the main config, below the line server_tokens off;, there's just the following remaining in /etc/nginx/nginx.conf:

include /etc/nginx/sites/*.conf;

If I want to disable a site, I change the file extension from .conf to .dis and restart nginx. That way I can easily see which sites are enabled and which sites aren't without having to mess with the ln command or symbolic links.
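For example, with a hypothetical site config called somesite.conf:

doas mv /etc/nginx/sites/somesite.conf /etc/nginx/sites/somesite.dis
doas rcctl restart nginx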

Adding a new virtual host

The first host is the hardest, but once it's up and running it provides a template for any future hosts. I keep things fairly minimal, but adding support for PHP-based sites is as simple as copying from the default OpenBSD nginx config. The TLS config still points to the chargen.one certificate, as only the certificate's associated hostnames change, not the filename.

    server {
        listen       80;
        server_name  hackingforfoodbanks.org www.hackingforfoodbanks.org;
        root         /var/www/htdocs/hackingforfoodbanks;

        include /etc/nginx/acme.conf;
        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root  /var/www/htdocs/hackingforfoodbanks;
        }

        location / {
            try_files $uri $uri/ =404;
            # Optionally cache these files in the browser:
            # expires 12M;
        }

    }

    # HTTPS server
    #
    server {
        listen 443;
        server_name  hackingforfoodbanks.org www.hackingforfoodbanks.org;
        root         /var/www/htdocs/hackingforfoodbanks;
        include /etc/nginx/acme.conf;

        ssl                  on;
        ssl_certificate      /etc/ssl/chargen.one.fullchain.pem;
        ssl_certificate_key  /etc/ssl/private/chargen.one.key;

        ssl_session_timeout  5m;
        ssl_session_cache    shared:SSL:1m;

        ssl_ciphers  HIGH:!aNULL:!MD5:!RC4;
        ssl_prefer_server_ciphers   on;

	location / {
	    root /var/www/htdocs/hackingforfoodbanks;
	    # Optionally cache these files in the browser:
	    # expires 12M;
	}

}

The only major differences are the removal of default_server in the listen directives, the changes to server_name and root to point to the correct spot, and the removal of all of the dynamic parts associated with Chargen.One. Check whether or not there are problems with the nginx config before restarting by using the following command:

nginx -t -c /etc/nginx/nginx.conf

Providing the syntax is ok, restart nginx with rcctl restart nginx as root, or via doas.

Adding domains to acme-client

The final part of the puzzle is to add LetsEncrypt support for the new domain. The easiest way to add domains to acme-client is through the alternative names feature. Here's what I've added to /etc/acme-client.conf in order to support the hackingforfoodbanks.org URL.

alternative names { hackingforfoodbanks.org www.hackingforfoodbanks.org }

After adding that, and deleting the existing /etc/ssl/chargen.one.crt file, acme-client can be called to add the new domain.

rm /etc/ssl/chargen.one.crt
acme-client -vFAD chargen.one

Note that the alternative names for our new domains are under the chargen.one domain section. The domain section name is passed to acme-client, not the domain itself.
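For clarity, the full domain block ends up looking something like this (the key and certificate paths are the ones created when chargen.one was first set up):

domain chargen.one {
        alternative names { hackingforfoodbanks.org www.hackingforfoodbanks.org }
        domain key "/etc/ssl/private/chargen.one.key"
        domain certificate "/etc/ssl/chargen.one.crt"
        domain full chain certificate "/etc/ssl/chargen.one.fullchain.pem"
        sign with letsencrypt
}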

With a fully functioning certificate and nginx setup, run rcctl restart nginx to finish things off, and test the new site in a browser.

Adding HTTPS redirects

You might want to redirect some of your sites to HTTPS rather than serve an HTTP version of your site. While often touted as a panacea, this brings a mix of advantages and drawbacks.

  • The content being delivered will be wrapped in transport layer encryption, making it harder for someone eavesdropping to identify the content being transferred (confidentiality)
  • As the transfer is encrypted, it becomes hard to interfere with the content.
  • HTTPS relies on a permission-based trust model that is routinely (if temporarily) broken, and often abused by companies and nation states. So while it's useful, it shouldn't be relied on for bulletproof 100% security.
  • The TLS versions currently used in HTTPS aren't supported by most browsers available for legacy Operating Systems. This means your site may be inaccessible over Windows XP and older versions of Android.

I'm not saying don't use HTTPS for a static site. There is no harm in supporting both, especially for a static web site. Just consider the site's audience and make a reasoned, deliberate decision as to whether or not to support accessing your content over HTTP before proceeding.

This site is accessible over HTTP and HTTPS precisely so users of older systems can still access the content via the reader, but authenticated access only works over HTTPS, and no mixed content is loaded.

As people accessing hackingforfoodbanks.org may not have access to current technology (e.g. foodbank users), I made a conscious decision to leave HTTP access open. For another site, rawhex.com, there's less of a requirement to leave HTTP access open, so I'll redirect that to HTTPS.

It's always annoying when a doc doesn't show the whole config for something complicated, so here's the /etc/nginx/sites/rawhex.conf file in full:

    server {
        listen       80;
        server_name  rawhex.com www.rawhex.com;
        return 301 https://$server_name$request_uri;

    }

    # HTTPS server
    #
    server {
        listen 443;
        server_name  rawhex.com www.rawhex.com;
        root         /var/www/htdocs/www.rawhex.com;
        include /etc/nginx/acme.conf;

        ssl                  on;
        ssl_certificate      /etc/ssl/chargen.one.fullchain.pem;
        ssl_certificate_key  /etc/ssl/private/chargen.one.key;

        ssl_session_timeout  5m;
        ssl_session_cache    shared:SSL:1m;

        ssl_ciphers  HIGH:!aNULL:!MD5:!RC4;
        ssl_prefer_server_ciphers   on;

        # add HSTS header to ensure we don't hit the redirect again
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;


        location / {
            root /var/www/htdocs/www.rawhex.com;
            # Optionally cache these files in the browser:
            # expires 12M;
        }

}

The HTTP 301 redirect shrinks the port 80 block to almost nothing. The HSTS header ensures that once redirected, a browser will only make requests over HTTPS, even if the user clicks on an HTTP link. The end result is an A+ score from Qualys' SSL Labs. There are things that could be done to improve the score further, but they come at the cost of compatibility with older browsers and Operating Systems such as Windows Vista and 7.

Modularizing further

You might've noticed in the above that I'm repeating a lot of SSL settings. For HTTPS sites, it's best to keep things consistent, so I've moved my SSL settings (aside from HSTS) into a separate file, /etc/nginx/https.conf. This means I only have to change one file for all HTTPS site configs. The current version of the file looks like this:

        ssl                  on;
        ssl_certificate      /etc/ssl/chargen.one.fullchain.pem;
        ssl_certificate_key  /etc/ssl/private/chargen.one.key;

        ssl_session_timeout  30m;
        ssl_session_cache    shared:SSL:2m;

        ssl_ciphers  HIGH:!aNULL:!MD5:!RC4;
        ssl_prefer_server_ciphers   on;

I set a higher SSL session timeout and cache size for performance reasons. People should be able to use a single SSL session to cover a full visit to and around the site. Visitors rarely spend longer than 30 minutes there unless they leave a tab open, at which point I'm happy to reinitialize.

Please don't confuse SSL Sessions with HTTP or application sessions. They're different things. If in doubt, the defaults are probably fine.

Now all I have to do is add include /etc/nginx/https.conf; below include /etc/nginx/acme.conf; to my sites, and any changes to ciphers or timeouts will be picked up systemwide with a single change.
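With those includes in place, a minimal static HTTPS site block shrinks to something like this (a hypothetical example.org shown here):

    server {
        listen 443;
        server_name  example.org www.example.org;
        root         /var/www/htdocs/example;
        include /etc/nginx/acme.conf;
        include /etc/nginx/https.conf;

        location / {
            root /var/www/htdocs/example;
        }
    }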

Conclusion

Now that I can add static sites to the Chargen.One system, I'll migrate the rest of my content over. With a clean, modular nginx config, content is served speedily and thanks to OpenBSD, to a level of security I'm comfortable with. I still need to find somewhere to move my git repos to, and I'm not sure chargen.one is right for that, but roman has a few ideas that I might borrow from.

Sometimes I miss things on the Internet the first time round. I'm not aware they were things until I stumble across them randomly some time later. A week ago I came across the writing of Bronnie Ware, a palliative nurse in Australia who, in 2009, documented the 5 most common regrets she encountered from the dying.

The regrets themselves weren't surprising. I've encountered all of them at some point. What surprised me was that the ones I'd encountered were the common ones. In her post, The Regrets of the Dying, Bronnie lists 5 regrets she commonly encountered from her patients:

  1. I wish I’d had the courage to live a life true to myself, not the life others expected of me.
  2. I wish I hadn’t worked so hard.
  3. I wish I’d had the courage to express my feelings.
  4. I wish I had stayed in touch with my friends.
  5. I wish that I had let myself be happier.

Bronnie went on to write a book about this, The Top Five Regrets of the Dying – A Life Transformed by the Dearly Departing. Her blog post touched a lot of people at the time, including Y Combinator founder Paul Graham.

Graham viewed these regrets as errors, in this case of omission. Not everyone has had the opportunities Graham has had in life, and while I can understand his rationale, I'm not sure I agree with it. I believe these regrets have an internal element, but rarely arise in a vacuum.

I looked for signs of these regrets in myself and those around me. I found examples everywhere amongst my neighbours, family and friends:

  1. The regret of the women who married the men who knocked them up, because it was expected at the time.
  2. The regret of the men who try so hard to provide for their children that they never get to grow close to them.
  3. The gay men and trans women who attempted suicide because they couldn't reconcile their identity with their devout cultural or religious beliefs.
  4. The old man living alone in his house, slowly forgetting everything and everyone he knew.
  5. The women who spend their lives looking after everyone else, barely, if ever, making time for themselves.

I've experienced all 5 forms of Bronnie's regrets. Thankfully I've always had the ability to do something about it. I don't pretend that others have that capability. In fact I doubt most people are aware of these regrets until long after they've formed.

What I can do when I see this in others is be kind, be patient, encourage them to open up, and listen. But I felt I should find a way to identify the first signs of these regrets in myself.

Graham inverted the regrets to create a list of 5 commands, but I found these to be very negative. Perhaps that's ok for him. It's not really for me. Instead, I chose 5 questions to periodically ask myself. Their purpose is to help me become more mindful of things that make me sad:

  1. Have I been authentic throughout the month, or was there a moment where I became a version of me to meet the expectations of others?
  2. Have I made enough time and space this week to be with those I love?
  3. This week, have I been continuously open and honest with myself and those around me?
  4. When did I last talk about something other than work to those I really care for?
  5. What did I do for me this week?

I've put these 5 questions up here, so I can check in on myself now and again. I also have them in a notes folder so I can go through the list once a week.

If the answer to a question is no, I make a note in Joplin about why the answer is no, and what I'll do to address it. It's ok for there to be a no response to a question, but I should at least make a conscious decision about it when it arises. There are no wrong answers; the thinking alone is often enough to kick me into gear.

My hope is that by asking these questions regularly, I can avoid things before they become regrets, instead of fixing them later on. That way, whenever it's time to die, I can do so with no regrets.

*The approach and scripts discussed here use mysqldump to back up a database at one point. Yes, I know this isn't in OpenBSD base, but it was added just for one specific system. It's easy to leave out, and everything else is done through tools from the base system.

With any system, it's important that backups and restores work properly. With #Chargen.One, I wanted to protect user data and be able to restore easily. The important requirements for the backup were:

  • Single timestamped backup file
  • No additional software installed above what's already on the box
  • Backups are pulled from a central server, not pushed to one
  • Portable script
  • Use privilege separation so local accounts can't access backups

Most backup tools, like Borg, work best with a backup server visible from the server being backed up. I use a huge NAS to store my backups, and a separate server to store backups of backups. The NAS is behind a firewall, and the other server can see the NAS but not the Internet. As such, I need a backup system that lets me pull from Chargen.One onto the NAS, and then lets my isolated backup server pull from the NAS. If that sounds a little paranoid, at least you understand why I use OpenBSD.

Backups are taken by root on a nightly basis and put into a folder belonging to a dedicated backup user. Early in the morning, the backup is pulled by a remote system, which then deletes the backup from chargen.one. The remote backup system stores 30 days' worth of backups.

Configuring a backup account

The /home partition is the largest on the VPS, so I created an account there to hold backups. The backup process stores content in a temporary folder, creates an archive, deletes the temporary folder, and changes permissions so the remote backup system can pull and remove the backup archive.

To set up the backup user account, use the following commands (as root):

# useradd -m backup
# chmod 700 /home/backup
# su - backup
$ mkdir -p .ssh
$ cd .ssh

On the backup system, generate an ssh keypair using ssh-keygen -t ed25519. Copy the contents of id_ed25519.pub from the backup system into /home/backup/.ssh/authorized_keys on the server being backed up.

SSH into the backup account on the server from the NAS to make sure everything works.
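From the NAS, that check looks something like this (assuming the keypair was generated in the default location):

ssh backup@chargen.one uname -a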

Each backup archive is created by a script that creates and stores content in /home/backup/backup/. Once everything is backed up, the script creates a timestamped archive file and deletes the /home/backup/backup/ directory. The script starts off very simply:

#!/bin/sh

mkdir /home/backup/backup
# Add stuff below here


# Don't add stuff below here
rm -rf /home/backup/backup

Backing up MySQL data

If you want to implement my backup scheme but don't run MariaDB or MySQL, skip this section and back up using commands from base only.

Because MySQL is configured to use passwords, a /root/.my.cnf file containing credentials for the mysqldump command is needed.

[mysqldump]
user=root
password=your_password_here

The mysqldump command fully backs up all mysql databases, routines, events and triggers.

Add the following to the backup script (all one line):

mysqldump -A -R -E --triggers --single-transaction | gzip -9 > /home/backup/backup/mysql.gz

The --single-transaction option causes the backup to take place without locking tables.

Backing up a package list

OpenBSD uses its own package management tools, pkg_add and friends. To create a backup of the installed package list, add the following to the backup script:

pkg_info -mz > /home/backup/backup/packages.txt

This can then be restored from a backup using pkg_add -l packages.txt.

Backing up files

The following files and directories should be backed up:

  • /etc
  • /root
  • /var/www
  • /var/log
  • /var/cron
  • /home, excluding /home/backup
  • /usr/local/bin/writefreely
  • /usr/local/share/writefreely

Use the tar command to create backups. A discussion of the tar command is best left to man tar, but as the backup isn't very large, I'm not using incremental backups, which keeps things simple...up to a point.

OpenBSD's tar implementation doesn't have the --exclude option, as it's a GNU extension. Other BSDs such as FreeBSD do add the option, but the OpenBSD team prefer not to. I could've added the GNU tar package, but one of the stated goals of the script is to not require additional software, to keep things portable. Paths such as /home/backup are excluded using shell expansion instead.

To test this, try the following command:

# tar cvf bk.tar /home/!(backup)

The exclamation mark means exclude anything in the parentheses. For multiple directories, separate the names with a pipe symbol, e.g. !(backup|user) to exclude both backup and user directories.
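So, to exclude both a backup and a user directory under /home, the test command becomes:

# tar cvf bk.tar /home/!(backup|user)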

There's a complication: error messages will be shown on each backup if absolute paths are passed to the tar command. This means an email would be generated every night, even if the backup succeeds. Ain't nobody got time for that.

As a workaround, changing to the root directory at the start makes all paths relative and allows the shell expansion to work. The -C switch could be used instead, but it breaks shell expansion.

The final commands to go in the script look like this:

cd /
tar cf /home/backup/backup/files.tar etc/!(spwd.db) root \
	var/www var/log home/!(backup) var/cron \
	usr/local/bin/writefreely usr/local/bin/backup.sh \
	usr/local/share/writefreely 

I've used backslashes to break up the lines for readability, but all the paths could be put on a single line if preferred.

I've excluded /etc/spwd.db from the backup because OpenBSD's built-in tar uses a feature called pledge that restricts access to certain files. The file isn't particularly important to this specific backup, but contains the shadow password database, which I'm happy to recreate as part of the restore process.

At this point you might wonder why gzip compression isn't being used in the tar archive. This is because the final archive will be compressed, and there's no point in compressing twice.

Creating the final archive

To distinguish between backups by date, I use a timestamp generated by the date command. By default the output contains spaces and colons, neither of which are good for interoperability across Operating Systems and filesystems. Use date +%F_%H%M%S to generate a more reasonable format. Using tar's -C switch changes the tar working directory to /home/backup and stops a leading / error message appearing in the backup.
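The timestamp format comes out like this (the time shown is just an arbitrary run):

$ date +%F_%H%M%S
2019-04-06_161110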

The final tar command in the backup script should look like this:

tar zcf /home/backup/c1_$(date +%F_%H%M%S).tgz -C /home/backup backup

It's also important to change the file ownership to the backup user so the remote system can delete the backup after it's been created.

chown backup:backup /home/backup/c1_*

The full backup script on chargen.one looks like this:

#!/bin/sh

mkdir /home/backup/backup
# Add stuff below here

# MySQL Backup
mysqldump -u root -A -R -E --triggers --single-transaction | gzip -9 > /home/backup/backup/mysql.gz

# Packages backup
pkg_info -mz > /home/backup/backup/packages.txt

# Files backup
cd /
tar cf /home/backup/backup/files.tar etc/!(spwd.db) root \
	var/www var/log home/!(backup) var/cron \
	usr/local/bin/writefreely usr/local/bin/backup.sh \
	usr/local/share/writefreely 

# Final archive
tar zcf /home/backup/c1_$(date +%F_%H%M%S).tgz \
	-C /home/backup backup

# Fix permissions
chown backup:backup /home/backup/c1_*

# Don't add stuff below here
rm -rf /home/backup/backup

Automating the backup

As root (via su -, not doas), use crontab -e and add the following entry:

0 1 * * * /usr/local/bin/backup.sh

A new backup is created at 1am every morning. On the remote server, a cron job calls a script at 3am to pull the backup down via scp:

#!/bin/sh

find /Backups/c1/ -mtime +30 -exec rm {} \;
scp -q backup@chargen.one:./c1_*.tgz /Backups/c1/
ssh backup@chargen.one rm c1_*.tgz

And that's it! The secondary backup server pulls down the contents of /Backups from the NAS, so there's nothing left to do.

I'll write a separate post about restoring, as this post is already getting long, but hopefully it's useful to people who want pull, rather than push backups.

Chargen.one is a little different to most writing experiences. It's still very, very much in development, so things are in places where you might not ordinarily expect them to be, and some things aren't there at all.

I've configured everything to support multiple blogs per user. As this is a themed instance, some people may want to have more than one blog. Your first blog uses your username by default, but each author can create up to 5.

For example, I'm interested in personal finance and investing, but by putting any thoughts into a separate blog, I can spare my less interested readers the boredom that comes with such a thing.

You might also notice the lack of an image upload function. This is deliberate. If you want to use images, I'd suggest using your own hosting or something like PixelFed. It's all markdown here, and easy to export too.

When you first hit publish, instead of going to your blog the post goes to drafts. You can then move it to a blog. This prevents you from accidentally publishing something you didn't intend to. Your account is also private by default. It can be made public in the settings.

I've written this in case anyone else wants to try to build their own instance of #Chargen.One on OpenBSD. If you're looking for the easy route to getting WriteFreely working, then docker is the way. This is the hard way, but as a federated blogging site with the *BSD community in mind, I felt it important that it runs on a BSD of some sort.

There are two systems involved in Chargen.One: a build system and a deployment system. (There's also a test environment, but it's identical to the deployment environment.) The build environment is called c0; the deployment is c1. Both run OpenBSD 6.4 at the time of writing.

This post covers setting up the initial webserver with nginx and LetsEncrypt. Part 2 will cover the MySQL config, part 3 the build, and part 4 my deployment process.

Initial housekeeping

On the build system, start by following the process detailed in man afterboot.

I used an OpenBSD.Amsterdam VM for the deployment system, so there are some tweaks to implement before you start.

Installing Nginx and Lets Encrypt

On both c0 and c1 the process is the same. As root, run pkg_add nginx. Update /etc/newsyslog.conf as per the info in /usr/local/share/doc/pkg-readmes/nginx.

Preparing Nginx for LetsEncrypt

Add the line include acme.conf; to c1's port 80 server block, below root /var/www/htdocs;.
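The port 80 server block should end up looking something like this (trimmed to the relevant lines):

    server {
        listen       80;
        server_name  chargen.one;
        root         /var/www/htdocs;
        include      acme.conf;
    }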

Now create a /etc/nginx/acme.conf file with the following:

location ^~ /.well-known/acme-challenge {
    alias /var/www/acme;
    try_files $uri =404;
}

Preparing LetsEncrypt

We'll configure c1 to use LetsEncrypt. All content will be served over HTTPS, with only the reader accessible over HTTP for older systems.

Create a domain entry at the bottom of the /etc/acme-client.conf file, like the following:

domain chargen.one {
        domain key "/etc/ssl/private/chargen.one.key"
        domain certificate "/etc/ssl/chargen.one.crt"
        domain full chain certificate "/etc/ssl/chargen.one.fullchain.pem"
        sign with letsencrypt
}

Getting certs and a working HTTPS setup

Restart nginx, run acme-client -vAD chargen.one, and you should have working certs. Now it's time to configure HTTPS. The commented-out defaults are reasonably sane at the time of writing; just change things to point to your certs. Here's what I had set up. We'll change this later.

    server {
        listen       443;
        server_name  chargen.one;
        root         /var/www/htdocs;

        ssl                  on;
        ssl_certificate      /etc/ssl/chargen.one.fullchain.pem;
        ssl_certificate_key  /etc/ssl/private/chargen.one.key;

        ssl_session_timeout  5m;
        ssl_session_cache    shared:SSL:1m;

        ssl_ciphers  HIGH:!aNULL:!MD5:!RC4;
        ssl_prefer_server_ciphers   on;
    }