The latest posts from Chargen.One.

from steve

An Amiga Workbench desktop with this article being written in a text editor

Having decided that I'm unlikely to get the Aston Martin DB5 in Gunmetal grey I wanted as a kid, I got the next best thing and bought an Amiga 4000. Normally when I tell people this, I get one of two reactions:

  • You jammy, jammy sod
  • You bought a what?

For those who've never experienced the Amiga first-hand, number 2 is understandable. Most people who have are in the first camp.

For those in the 2nd camp, the Amiga 4000 is the final set of models in the classic Amiga series. This is the Amiga equivalent of a Ferrari F40. Sleek, spectacular, crazy expensive to run for what it is, and entirely impractical. There are some mildly insane design choices and bugs, and because Commodore cheaped out at the last minute, several pretty fatal things can happen to it if it's not taken care of exceptionally well over its lifetime.

Owning an Amiga 4000 is not like owning an Amiga 1200 or 500. This isn't a machine built for gaming. After all, I can game just fine with near perfect emulation thanks to WHDLoad, and a Raspberry Pi is pretty much the fastest gaming Amiga you can get.

I'm using the Amiga 4000 for productivity, mostly creative. Yes, you read that right. No, I'm not insane. It's 2019, and I've bought an Amiga to use for actual day to day creative things. As shown in the screenshot above, this article was even written on the Amiga.

To be creative I need to move files back and forth. The Amiga 4000 is the only regularly used device I have with a floppy drive, so that's out as a medium.

Thankfully the Amiga 4000 has a DVD-rewriter. Transferring files over DVD/CD works well. The Amiga uses the Joliet filesystem rather than UDF, and has some slight preferences for odd CD writing configurations. On the whole, it works.

Burning CDs for small amounts of data gets old after a while though, and I'd prefer some sort of network connectivity, at least till the MNT ZZ9000 comes online. The easiest and cheapest way to do this is with an X-Surf 100, an Ethernet card available for around 100 Euro. As I won't have a use for the X-Surf after my ZZ9000 arrives, I'm trying a serial link to a Raspberry Pi instead. Here's how it's set up.


First you'll need some hardware. Some of this you can build yourself, or you can buy the parts and salvage things lying around like I did. You will need:

  • A Raspberry Pi, power cable, Micro SD card, Raspbian etc.
  • An Amiga with a 25-pin serial port
  • A 9-pin to 25-pin serial adapter. I used this one from Amigakit
  • A USB-Serial cable

Stage 1, Basic Connectivity

To start, connect the Raspberry Pi to the USB cable, the USB cable to the 25 pin adapter, and the 25 pin adapter to the Amiga. Congrats, your Amiga is now physically linked to the Pi!

I'm using Term v4.8 from Aminet to get basic terminal emulation running. You'll want to configure serial settings as follows:

Amiga Term configured for 115200 xfer with 8/n1 and no flow control.

You'll be asked about enabling RTS/CTS; things seem to work fine with it switched off.

Note: Term needs paths defined (Settings -> Paths from the menu) or your files won't be saved. Also make sure that you save your settings from the pulldown Settings menu.

On the Raspberry Pi you'll need to install screen via apt-get. In a console, enter the following:

screen /dev/ttyUSB0 115200

The device name might vary dependent upon the USB-Serial converter you're using or how you're connected. It could be /dev/ttyAMA0 or /dev/ttyACM0 in some cases. Check dmesg and the contents of /dev/ if you get stuck.

Hello From the Amiga on the PC

If you type in the screen window, you should see the text echoed in the Term session on the Amiga. If you type on the Amiga you should see text show up on the screen session on the Pi.

Hello from the PC on the Amiga

It'd be a little boring if this was all you could do. Let's transfer some files. I downloaded Delitracker and the update to 2.34 onto the Raspberry Pi with wget. In the screen session, I pressed Ctrl-A and typed in the following:

: exec sz -b delitracker232.lha

The Term session should spring to life and start receiving a file, which will be saved in the path you specified earlier. Extract delitracker, run the installer and repeat with the update file. Of course, now you have a mod music player, it's only fair that you should go to UnExoticA and get some tunes to play.

Sending files back.

Sending files back to the Raspberry Pi is pretty easy, getting screen to receive them is only slightly more involved. Drag and drop the file you want to send onto Term's “term Upload queue” icon. On the Raspberry Pi's screen session, press Ctrl-A and enter : exec !! rz.

Transfer config on the Amiga side

The Amiga's term window will ask you about the file you're about to send. Set it to binary transfer and it'll land in the directory where you originally launched screen.

PC Receiving a file

Going further

You could run a full login terminal on the Raspberry Pi over serial and use that to log into the Pi via Term. While it's certainly cute, it reduces a very expensive Amiga 4000 to a dumb terminal. Instead I plan on using the Pi as a support system for the Amiga, where it does things that are menial, boring or just too slow for the Amiga to take care of. The next thing for me to do is to get TCP/IP networking via PPP, which I'll cover in another post.

In the meantime, here's the Amiga in its home, on the right of this picture.

My Battlestation setup, with the Amiga on the right


from h3artbl33d

You might have noticed that if you run NextCloud on OpenBSD with the chroot option enabled in php-fpm.conf, the occ command and cronjob fail miserably. That can be fixed!

You might have stumbled upon the following errors when trying to run the occ command, or noticed that the cronjob fails:

Your data directory is invalid
Ensure there is a file called ".ocdata" in the root of the data directory.

Cannot create "data" directory
This can usually be fixed by giving the webserver write access to the root directory. See

That is due to the chroot option in php-fpm.conf. Both the occ command and the cronjob use the cli interpreter, rather than fpm. Disabling that feels like giving a piece of your sanity to the devil. So, let's fix that! Fire up your favorite editor and open config/config.php in the NextCloud docroot. You specifically want to edit the datadirectory variable:

'datadirectory' => '/ncdata',

Change this one to:

'datadirectory' => ((php_sapi_name() == 'cli') ? '/var/www' : '') . '/ncdata',

...and it's fixed! Basically, what this does is prepend /var/www if a NextCloud function is called from the command line.


from steve


In a previous article I wrote about how I've changed my relationship with my phone. One of the benefits of degoogling is that your device is a little less spied upon. The downside of running LineageOS for microG is that certain functionality is a bit harder to come by.

I like minimal notifications, but there are things happening that I want to know about. On iOS I used Prowl to tell me about reboots. As there's no F-Droid client, I found myself without an emergency notification system. I saw Gotify in the F-Droid app and thought I'd give it a go. So far, I'm pretty happy with it.

I recently rebuilt an old unused box to self-host low-priority services. I'm a big fan of self-hosting having been burnt several times by online services. I'm not against online services making a living, but I'd rather own my stuff than rent.

The box was rebuilt to use Docker and docker-compose. I find Docker a double-edged sword: you either have to maintain your own Docker repository or trust someone else's. As this box only runs low-priority services, I'm OK running images from other people's repositories.

Installing Gotify Server With Docker-Compose

I set up Caddy as a front-end service to manage Let's Encrypt. I prefer nginx, but for Docker, Caddy's fine. I also use Ouroboros to auto-update images when new ones come out. If I'm going to use other people's repos, I may as well get some value out of it.

Creating a gotify docker-compose entry was easy. I've included ouroboros and the caddy frontend in mine below:

version: '3'
services:
  ouroboros:
    container_name: ouroboros
    hostname: ouroboros
    image: pyouroboros/ouroboros
    environment:
      - CLEANUP=true
      - INTERVAL=300
      - LOG_LEVEL=info
      - SELF_UPDATE=true
      - IGNORE=mongo influxdb postgres mariadb
      - TZ=Europe/London
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  caddy:
    container_name: caddy
    image: abiosoft/caddy:no-stats
    restart: unless-stopped
    volumes:
      - ./caddy/Caddyfile:/etc/Caddyfile
      - ./caddy/caddycerts:/etc/caddycerts
      - ./caddy/data:/data:ro
    ports:
      - "80:80"
      - "443:443"
    env_file:
      - ./caddy/caddy.env
  gotify:
    container_name: gotify
    image: gotify/server
    restart: unless-stopped
    volumes:
      - ./apps/gotify/data:/app/data

My Caddy config needed an additional section for the new host:

{
  root /data
  log stdout
  errors stdout
  proxy / gotify:80 {
    websocket
  }
}

Hostnames have been changed to protect the innocent. When using Caddy, specify websocket in the proxy section; the Android app uses websockets to handle notifications.

A quick docker-compose up -d and I was up and running. The default username and password is admin/admin. Change that first, then create a user account to receive notifications.

After creating the user account, log out of admin, and log back in as the new user. Notifications are per-application and per-user. You'll have to send notifications for each user. I hope group notifications will be possible at some point.

Gotify notifications

I added a cute puppy picture to my app, making unexpected reboots all the more cute. I then installed the Gotify app from F-Droid and added my server. I checked the app and server logs for HTTP 400 errors, which would stop notifications from working.

A Portable Commandline Notification Tool

I wrote a quick Python-based tool to send notifications from the command line. You can use the official Gotify client tool, or even curl. I wanted something portable that would work without 3rd-party libraries.

#!/usr/bin/env python
# - A python gotify client using only built-in modules (Python 2)
import urllib, urllib2, argparse
parser = argparse.ArgumentParser(description='gotify python client')
parser.add_argument('-p','--priority', help="priority number (higher's more intrusive)", type=int, required=True)
parser.add_argument('-t','--title', help="title notification", required=True)
parser.add_argument('-m','--message', help="message to display", required=True)
parser.add_argument('-v','--verbose', help="print response", action='store_true')
args = parser.parse_args()
url = ''
data = urllib.urlencode({"message": args.message,
			"priority": args.priority,
			"title": args.title})
req = urllib2.Request(url, data)
resp = urllib2.urlopen(req)
if args.verbose:

If you use the script, don't forget to change the token value in the url variable to one for your app.

The final thing to do is to set up a reboot notification for the box. We can do this on OpenBSD using a cron job. I've copied the script into /usr/local/bin and set up a cron job as a normal user to run on reboot:

@reboot python2 /usr/local/bin/ -p 8 -t "" -m "Rebooted at `date`"

Now if we reboot the system, we can check that it's working by looking in /var/cron/log:

Apr 6 16:11:10 chargen cron[41858]: (asdf) CMD (python2 /usr/local/bin/ -p 8 -t "" -m "Rebooted at `date`")

Please note that some OSes only run @reboot jobs for root. If you're having trouble, check your cron daemon supports non-root @reboot jobs.

If you're wondering what else I plan to use this for, it's not really much. I like only having serious event notifications and want to keep it minimal. Some of the things I'll use this for include:

  • Reboot notifications across servers
  • New device connected to home networks
  • Motioneye detected movement in the conservatory

For pretty much everything else, there's email and I can pick that up in slow time.


from V6Shell (Jeff)

the latest release as of Thursday, 2019/03/28 =^)

#original #UNIX command interpreter ( #cli aka shell )

Links to all immediately relevant files for this release are available via the primary Sources page at . has everything a person might need/want to get started with etsh-5.4.0; other useful files include:

... There is a screenshot below this paragraph (see caption); the shells/etsh OpenBSD package/port installs using PREFIX=/usr/local and SYSCONFDIR=/etc by default.

pev-example.png caption: etsh-5.4.0 running as an interactive login shell and executing the pev script for fun

Enjoy! =)

Jeff ( for short )


from High5!

Recently we wrote a post on Moving back to Lighttpd and Michael Dexter thought I could spend my time wisely and do a short write-up on our use of dehydrated with Lighttpd.

In order to start with dehydrated we of course need to install it:

# pkg install dehydrated

Once it's all installed you can find the dehydrated configuration in /usr/local/etc/dehydrated

Your hosts and domains you want to get certificates for need to be added to domains.txt. For example:

The first host/domain listed will be used as the filename to store the keys and certificates. There are a number of examples in the file itself if you want to get funky.


If you want to restart services or do anything special, for example when new certificates are generated, there is a hook file. This script allows you to hook into any part of the process and run commands at that stage.

The hook we are using is deploy_cert(). We are going to use this hook for:

  • creating a PEM certificate for Lighttpd
  • changing the owner to www
  • restarting Lighttpd

What that looks like is something like this:

deploy_cert() {
    cat "${KEYFILE}" "${CERTFILE}" > "${BASEDIR}/certs/${DOMAIN}/combined.pem"
    chown -R www "${KEYFILE}" "${FULLCHAINFILE}" "${BASEDIR}/certs/${DOMAIN}/combined.pem"
    service lighttpd restart
}

The last part that is needed is to make sure this is run every day with cron.

@daily  root /usr/local/bin/dehydrated -c

In most cases this will be all that is needed to get going with dehydrated.


You will need to let Lighttpd know about dehydrated and point it to the acme-challenge in the .well-known directory. You can do this with an alias like:

alias.url += ("/.well-known/acme-challenge/" => "/usr/local/www/dehydrated/")

The Lighttpd config we are using for SSL/TLS is the following:

$SERVER["socket"] == ":443" {
  ssl.engine = "enable"
  ssl.pemfile = "/usr/local/etc/dehydrated/certs/" = "/usr/local/etc/dehydrated/certs/"
  ssl.dh-file = "/usr/local/etc/ssl/dhparam.pem" = "secp384r1"
  setenv.add-response-header = (
    "Strict-Transport-Security" => "max-age=31536000; includeSubdomains",
    "X-Frame-Options" => "SAMEORIGIN",
    "X-XSS-Protection" => "1; mode=block",
    "X-Content-Type-Options" => "nosniff",
    "Referrer-Policy" => "no-referrer",
    "Feature-Policy" => "geolocation none; midi none; notifications none; push none; sync-xhr none; microphone none; camera none; magnetometer none; gyroscope none; speaker none; vibrate none; fullscreen self; payment none; usb none;"
  )
}

To finish it all you can now run dehydrated, which in most cases would be:

# dehydrated -c

The complete Lighttpd config can be found in our Git Repository.


from OpenBSD Amsterdam

The post written about rdist(1) on sparked us to write one as well. It's a great, underappreciated tool. And we wanted to show how we wrapped doas(1) around it.

There are two services in our infrastructure for which we were looking to keep the configuration in sync and to reload the process when the configuration had indeed changed. There is a pair of nsd(8)/unbound(8) hosts and a pair of hosts running relayd(8)/httpd(8) with carp(4) between them.

We didn't have a requirement to go full configuration management with tools like Ansible or Salt Stack. And there wasn't any interest in building additional logic on top of rsync or repositories.

Enter rdist(1), rdist is a program to maintain identical copies of files over multiple hosts. It preserves the owner, group, mode, and mtime of files if possible and can update programs that are executing.

The only tricky part with rdist(1) is that copying files and restarting services owned by a privileged user has to be done as root. Our solution to the problem was to wrap doas(1) around rdist(1).

We decided to create a separate user account for rdist(1) to operate with on the destination host, for example:

ns2# useradd -m rupdate

Create an ssh key on the source host where you want to copy from:

ns1# ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_rdist

Copy the public key to the destination host for the rupdate user in .ssh/authorized_keys.

In order to wrap doas(1) around rdistd(1) we have to rename the original file. It's the only way we were able to do this.

Move rdistd to rdistd-orig on the destination host:

ns2# mv /usr/bin/rdistd /usr/bin/rdistd-orig

Create a new shell script rdistd with the following:

#!/bin/sh
/usr/bin/doas /usr/bin/rdistd-orig -S

Make it executable:

ns2# chmod 555 /usr/bin/rdistd

Add rupdate to doas.conf(5) like:

permit nopass rupdate as root cmd /usr/bin/rdistd
permit nopass rupdate as root cmd /usr/bin/rdistd-orig

Once that is all done we can create the files needed for rdist(1).

To copy the nsd(8) and unbound(8) configuration we created a distfile like:

HOSTS = ( )

FILES = ( /var/nsd )

EXCL = ( nsd.conf *.key *.pem )

${FILES} -> ${HOSTS}
	install ;
	except /var/nsd/db ;
	except /var/nsd/etc/${EXCL} ;
	except /var/nsd/run ;
	special "logger rdist update: $REMFILE" ;
	cmdspecial "rcctl reload nsd" ;

/var/unbound/etc/unbound.conf -> ${HOSTS}
	install ;
	special "logger rdist update: $REMFILE" ;
	cmdspecial "rcctl reload unbound" ;

The distfile describes the destination HOSTS, the FILES which need to be copied and need to be EXCLuded. When it runs it will copy the selected FILES to the destination HOSTS, except the directories listed.

The install command is used to copy out-of-date files and/or directories.

The except command is used to update all of the files in the source list except for the files listed in name list.

The special command is used to specify sh(1) commands that are to be executed on the remote host after the file in name list is updated or installed.

The cmdspecial command is similar to the special command, except it is executed only when the entire command is completed instead of after each file is updated.

In our case the unbound(8) config doesn't change very often, so we used a label to only update it when needed:

ns1# rdist unbound

To keep our relayd(8)/httpd(8) in sync we did something like:

HOSTS = ( )

FILES = ( /etc/acme /etc/ssl /etc/httpd.conf /etc/relayd.conf /etc/acme-client.conf )

${FILES} -> ${HOSTS}
	install ;
	special "logger rdist update: $REMFILE" ;
	cmdspecial "rcctl restart relayd httpd" ;

If you want cron(8) to pick this up via the daily(8) system script you can save the file as /etc/Distfile.

To make sure the correct username and key are used you can add this to your .ssh/config file:

	User rupdate
	IdentityFile ~/.ssh/id_ed25519_rdist

When you don't store the distfile in /etc you can add the following to your .profile:

alias rdist='rdist -f ~/distfile'

Running rdist will result in the following type of logging on the destination host:

==> /var/log/daemon <==
Nov 13 09:59:15 name2 rdistd-orig[763]: ns2: startup for

==> /var/log/messages <==
Nov 13 09:59:15 ns2 rupdate: rdist update: /var/nsd/zones/reverse/

==> /var/log/daemon <==
Nov 13 09:59:16 ns2 nsd[164]: zone read with success                     

You can follow us on Twitter and Mastodon.


from High5!

There are some FreeBSD machines in our infrastructure which run NGINX. After the recent announcement on the F5 purchase of NGINX we decided to move back to Lighttpd.

We have not seen a lot of open source projects doing well after the parent company got acquired. We used Lighttpd in the past, before the project stalled; that no longer seems to be the case, so we decided to check it out again.

The configuration discussed here is roughly what we used NGINX for.

A lot of the options within Lighttpd are enabled by using modules. These are the modules we have enabled on all our Lighttpd servers.

server.modules = (

Which IP and port Lighttpd listens on can be defined in a couple of different ways. For IPv4, server.port and server.bind are used. For IPv6 you have to use $SERVER["socket"]. The same is true for the SSL config.

server.port = "80"
server.bind = ""
$SERVER["socket"] == "[::]:80" { }
$SERVER["socket"] == "[::]:443" { }
$SERVER["socket"] == ":443" {
  ssl.engine = "enable"
  ssl.pemfile = "/usr/local/etc/ssl/certs/" = "/usr/local/etc/ssl/certs/"
  ssl.dh-file = "/usr/local/etc/ssl/certs/dhparam.pem" = "secp384r1"
  setenv.add-response-header = (
    "Strict-Transport-Security" => "max-age=31536000; includeSubdomains",
    "X-Frame-Options" => "SAMEORIGIN",
    "X-XSS-Protection" => "1; mode=block",
    "X-Content-Type-Options" => "nosniff",
    "Referrer-Policy" => "no-referrer",
    "Feature-Policy" => "geolocation none; midi none; notifications none; push none; sync-xhr none; microphone none; camera none; magnetometer none; gyroscope none; speaker none; vibrate none; fullscreen self; payment none; usb none;"
  )
}

Lighttpd requires a PEM certificate, which you can easily create with: # cat domain.key domain.crt > combined.pem

You can create the dhparam.pem file with: # openssl dhparam -out dhparam.pem 4096

These are the global settings we are using on FreeBSD related to the server settings.

server.username = "www"
server.groupname = "www" = "/var/run/"
server.event-handler = "freebsd-kqueue"
server.stat-cache-engine = "disable"
server.max-write-idle = 720
server.tag = "unknown"
server.document-root = "/usr/local/www/default/"
server.error-handler-404 = "/404.html"
accesslog.filename = "/usr/local/www/logs/lighttpd.access.log"
server.errorlog = "/usr/local/www/logs/lighttpd.error.log"
server.dir-listing = "disable"

Some global settings which apply to all the websites served by Lighttpd.

index-file.names = ("index.php", "index.html", "index.htm")
url.access-deny = ("~", ".inc", ".sh", "sql", ".htaccess")
static-file.exclude-extensions = (".php", ".pl", ".fcgi")

Alias for Let's Encrypt.

alias.url += ("/.well-known/acme-challenge/" => "/usr/local/www/acme/")

Enable compression for certain filetypes.

compress.cache-dir = "/tmp/lighttpdcompress/"
compress.filetype = ("text/plain", "text/css", "text/xml", "text/javascript")

When authentication is needed you can specify this as below. Different backends are supported.

auth.backend = "htpasswd"
auth.backend.htpasswd.userfile = "/usr/local/etc/lighttpd/htpasswd"

General Expire and Cache-Control headers for certain filetypes.

$HTTP["url"] =~ "\.(js|css|png|jpg|jpeg|gif|ico)$" {
  expire.url = ( "" => "access plus 1 months" )
}

When you are running Wordpress sites you might want to deny access to certain urls.

$HTTP["url"] =~ "/(?:uploads|files|wp-content|wp-includes).*\.(php|phps|txt|md|exe)$" {
  url.access-deny = ("")
}
$HTTP["url"] =~ "/(wp-config|xmlrpc)\.php$" {
  url.access-deny = ("")
}

Define for which host and url the authentication is needed.

$HTTP["host"] =~ "" {
  auth.require = ( "/admin/" => (
    "method" => "basic",
    "realm" => "Restricted",
    "require" => "valid-user" )
  )
}

Redirect certain hosts from http to https.

$HTTP["host"] =~ "(www\.)?" {
  url.redirect = ("^/(.*)" => "$1")
}

There is a module available which helps to assign the correct server.document-root for virtual hosts. This can be done with mod_evhost and we are using the following pattern:

$HTTP["host"] =~ "^(www.)?[^.]+\.[^.]+$" {
  evhost.path-pattern = "/usr/local/www/www.%2.%1/"
}

To be able to use pretty urls with Wordpress you can use the following mod_rewrite rules.

url.rewrite = (
  "^/(wp-.+).*/?" => "$0",
  "^/(.*)\.(.+)$" => "$0",
  "^/(.+)/?$" => "/index.php/$1"
)

The final piece of the puzzle: when you are using PHP-FPM, the following config can be used.

fastcgi.server = ( ".php" =>
  ( "localhost" =>
    (
      "host" => "",
      "port" => 9000
    )
  )
)

The complete config can be found in our Git Repository.


from V6Shell (Jeff)

It looks like my 1st (or 0th) post was blank. I was rather lost when I saw “Write ...” the first time around, but I get it now (and fixed it). Hahaha :^D

Post 1

Suppose my posts as they are and will be are like in C, ...

int
main(int argc, char *argv[])
{

    /* do nothing successfully */
    return 0;
}

.. where if (argv[0] == NULL || *argv[0] == '\0') can only be true if you don't want execve(2) to (be able to) execute your program successfully and return 0. If memory serves, you'll end up with a core dump too in all cases?? Could be! I don't remember the last time I tried; I have an old test case somewhere.

Either way, good fun all around it is :^)


from V6Shell (Jeff)

Post 0 (or void)

The next 1 is better, and the other side of (the) void might help to liberate us all.

... I'll stop there .. Buenas noches por ahora .


from OpenBSD Amsterdam

OpenBSD Amsterdam was in search of a lightweight toolset to keep track of resource usage, at a minimum the CPU load generated by the vmm(4)/vmd(8) hosts and the traffic from and to the hosts. A couple of weeks ago we ended up with a workable MRTG setup. While it worked, it didn't look very pretty.

In a moment of clarity, we thought about using RRDtool. Heck, why shouldn't we give it a try? From the previous tooling, we already had some required building blocks in place to make MRTG understand the CPU Cores and uptime from OpenBSD.

Before we start:

# pkg_add rrdtool

We decided to split the collection of the different OIDs (SNMP Object Identifiers) into three different scripts, which cron(1) calls from a wrapper script.


test -n "$1" || exit 1
TICKS=$(snmpctl snmp get ${HOST} community ${COMMUNITY} oid hrSystemUptime.0 | cut -d= -f2)
DAYS=$(echo "${TICKS}/8640000" | bc -l)
HOURS=$(echo "0.${DAYS##*.} * 24" | bc -l)
MINUTES=$(echo "0.${HOURS##*.} * 60" | bc -l)
SECS=$(echo "0.${MINUTES##*.} * 60" | bc -l)
test -n "$DAYS" && printf '%s days, ' "${DAYS%.*}" > ${UPTIMEINFO}
printf '%02d\\:%02d\\:%02d\n' "${HOURS%.*}" "${MINUTES%.*}" "${SECS%.*}" >> ${UPTIMEINFO}

This is a separate script because the uptime is used in the graphs for both hosts.

The origins of this script are detailed in our MRTG Setup.

test -n "$1" || exit 1
WATERMARK="OpenBSD Amsterdam -"
UPTIME=$(cat /tmp/${HOST}-uptime.txt)
NOW=$(date "+%Y-%m-%d %H:%M:%S %Z" | sed 's/:/\\:/g')

if ! test -f "${RRDFILES}/${HOST}-cpu.rrd"; then
echo "Creating ${RRDFILES}/${HOST}-cpu.rrd"
${RRDTOOL} create ${RRDFILES}/${HOST}-cpu.rrd \
        --step 300 \
        DS:ds0:GAUGE:600:U:U \
        RRA:AVERAGE:0.5:1:600 \
        RRA:AVERAGE:0.5:6:700 \
        RRA:AVERAGE:0.5:24:775 \
        RRA:AVERAGE:0.5:288:797
fi

snmpctl snmp walk ${HOST} community ${COMMUNITY} oid hrProcessorLoad | cut -d= -f2 > ${CPUINFO}
CORES=$(grep -cv "^0$" ${CPUINFO})
CPU_LOAD_SUM=$(awk '{sum += $1} END {print sum}' ${CPUINFO})
CPU_LOAD=$(echo "scale=2; ${CPU_LOAD_SUM}/${CORES}" | bc -l)

${RRDTOOL} update ${RRDFILES}/${HOST}-cpu.rrd N:${CPU_LOAD}

${RRDTOOL} graph ${IMAGES}/${HOST}-cpu.png \
        --start -43200 \
        --title "${HOST} - CPU" \
        --vertical-label "% CPU Used" \
        --watermark "${WATERMARK}" \
        DEF:CPU=${RRDFILES}/${HOST}-cpu.rrd:ds0:AVERAGE \
        AREA:CPU#FFCC00 \
        LINE2:CPU#CC0033:"CPU" \
        GPRINT:CPU:MAX:"Max\:%2.2lf %s" \
        GPRINT:CPU:AVERAGE:"Average\:%2.2lf %s" \
        GPRINT:CPU:LAST:" Current\:%2.2lf %s\n" \
        COMMENT:"\\n" \
        COMMENT:"  SUM CPU Load / Active Cores = % CPU Used\n" \
        COMMENT:"  Up for ${UPTIME} at ${NOW}"

On the first run, RRDtool will create the .rrd file. On every subsequent run, it will update the file with the collected values and update the graph.

The origins of this script are detailed in our MRTG Setup.

test -n "$1" || exit 1                                                                             
test -n "$2" || exit 1                                                                             
WATERMARK="OpenBSD Amsterdam -"
UPTIME=$(cat /tmp/${HOST}-uptime.txt)
NOW=$(date "+%Y-%m-%d %H:%M:%S %Z" | sed 's/:/\\:/g')                                              

if ! test -f "${RRDFILES}/${HOST}-${INTERFACE}.rrd"; then
echo "Creating ${RRDFILES}/${HOST}-${INTERFACE}.rrd"
${RRDTOOL} create ${RRDFILES}/${HOST}-${INTERFACE}.rrd \
        --step 300 \
        DS:ds0:COUNTER:600:0:1250000000 \
        DS:ds1:COUNTER:600:0:1250000000 \
        RRA:AVERAGE:0.5:1:600 \
        RRA:AVERAGE:0.5:6:700 \
        RRA:AVERAGE:0.5:24:775 \
        RRA:AVERAGE:0.5:288:797 \
        RRA:MAX:0.5:1:600 \
        RRA:MAX:0.5:6:700 \
        RRA:MAX:0.5:24:775 \
        RRA:MAX:0.5:288:797
fi

IN=$(snmpctl snmp get ${HOST} community ${COMMUNITY} oid ifInOctets.${INTERFACE} | cut -d= -f2)    
OUT=$(snmpctl snmp get ${HOST} community ${COMMUNITY} oid ifOutOctets.${INTERFACE} | cut -d= -f2)  
DESCR=$(snmpctl snmp get ${HOST} community ${COMMUNITY} oid ifDescr.${INTERFACE} | cut -d= -f2 | tr -d '"')

${RRDTOOL} update ${RRDFILES}/${HOST}-${INTERFACE}.rrd N:${IN}:${OUT}                              

${RRDTOOL} graph ${IMAGES}/${HOST}-${INTERFACE}.png \                                              
        --start -43200 \
        --title "${HOST} - ${DESCR}" \
        --vertical-label "Bits per Second" \
        --watermark "${WATERMARK}" \
        DEF:IN=${RRDFILES}/${HOST}-${INTERFACE}.rrd:ds0:AVERAGE \                                  
        DEF:OUT=${RRDFILES}/${HOST}-${INTERFACE}.rrd:ds1:AVERAGE \                                 
        CDEF:IN_CDEF="IN,8,*" \
        CDEF:OUT_CDEF="OUT,8,*" \
        AREA:IN_CDEF#00FF00:"In " \
        GPRINT:IN_CDEF:MAX:"Max\:%5.2lf %s" \
        GPRINT:IN_CDEF:AVERAGE:"Average\:%5.2lf %s" \                                              
        GPRINT:IN_CDEF:LAST:" Current\:%5.2lf %s\n" \                                              
        LINE2:OUT_CDEF#0000FF:"Out" \
        GPRINT:OUT_CDEF:MAX:"Max\:%5.2lf %s" \
        GPRINT:OUT_CDEF:AVERAGE:"Average\:%5.2lf %s" \                                             
        GPRINT:OUT_CDEF:LAST:" Current\:%5.2lf %s\n" \                                             
        COMMENT:"\\n" \
        COMMENT:"  Up for ${UPTIME} at ${NOW}"

To pinpoint the network interface you want to measure the bandwidth for, this command prints the available interfaces:

snmpctl snmp walk [host] community [community] oid ifDescr

This will output a list like:


The number behind ifDescr is the one that you need to feed to, for example:

# 5

Finally, the wrapper script calls all the aforementioned scripts:

for i in $(jot 2 1); do ${SCRIPTS}/ host${i}.domain.tld; done
for i in $(jot 2 1); do ${SCRIPTS}/ host${i}.domain.tld; done
${SCRIPTS}/ host1.domain.tld 12
${SCRIPTS}/ host2.domain.tld 11

The resulting graphs:

To serve the graphs we use httpd(8) with the following config:

server "default" {
        listen on * port 80
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        location * {
                block return 302 "https://$HTTP_HOST$REQUEST_URI"
        }
}

server "default" {
        listen on * tls port 443
        tls {
                certificate "/etc/ssl/default-fullchain.pem"
                key "/etc/ssl/private/default.key"
        }
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        root "/htdocs"
}

All the scripts can be found in our Git Repository.

You can follow us on Twitter and Mastodon.


from h3artbl33d


You and I need to have a serious talk about email. I have liberated my email and want to share the experience with you, so you are informed enough to decide whether you want to do the same.

The bad

Currently, the top 10 of all MX records mainly consists of Google, with GoDaddy in second position, as can be gathered from these statistics*:

Mailserver    Count         % of total
              22,989,327    2.53%
              22,984,706    2.54%
              10,141,392    1.11%
               9,878,764    1.09%
               9,800,303    1.08%
               5,607,263    0.62%
               5,477,548    0.60%
               4,449,479    0.49%
               4,121,725    0.45%
               4,057,221    0.45%

This is bad for a couple of reasons:

  • Neither Google, nor GoDaddy give a flying f*ck about your privacy
  • It's centralization at its worst
  • Everything stored at a few parties, really?

Let's walk through these arguments:

The first argument, privacy, should be obvious. Facebook is very hostile towards user privacy, but Google is even worse. Gmail is offered free of charge, since you are the product. You are an awesome human being – you deserve better. Way better.

And so do the human beings you exchange messages with! Perhaps you haven't thought of this before, but by using Gmail, you also made that choice for the other parties. Every message they send to you – a Gmail user – gets stored on the servers of the big bad G, only to be kept for an indefinite amount of time. And logically, this also goes for every message you send to them.

The second argument, centralization, is against the design of the world wide web. It's supposed to be a place to share knowledge, collaborate and to be used to heighten the efficiency of our daily lives. It sure as hell wasn't meant to be controlled by a handful of commercial parties.

Furthermore, while perhaps convenient, it's bad that a few select parties have a huge amount of data, that combined and intertwined is your whole digital persona.

The ugly

Email itself is an old-fashioned protocol. It was never designed to mitigate modern threats, nor was it designed to prevent eavesdropping. While more and more mailservers use traffic encryption (TLS) to exchange messages, this is still optional.

A different initiative, GPG – which allows encrypting the content of the message itself – has failed miserably, because it's too hard to use for the average user. It's easy to make mistakes, especially with frequent usage. And while it allows encryption of the message content, it doesn't do anything about the metadata (to, from, subject, etc).

The good

Last, but certainly not least: this is not the end. It sure as hell isn't too late. The tide can still be turned! And even easier: you can still reclaim the ownership of your mailbox and make sure that your privacy – and the privacy of your contacts – is still respected.

There are a couple of ways, none of them hard, to reclaim your inbox:

  • Host your own email server; probably the hardest, but also the most efficient. You could set up your own server at home, throw OpenBSD on it along with Dovecot and OpenSMTPd – or use a script like Caesonia to help you with the installation.
  • Go with a privacy-friendly provider; much less of a hassle. Popular providers include Mailbox, Mailfence, Fastmail, Tutanota and Protonmail – with the latter two not supporting IMAP, POP3 and SMTP directly.
  • Get yourself a Helm; store your email in the comfort of your home without the hassle of setting up and maintaining your own server. It does require setup and maintenance via a mobile app, uses Docker containers internally and comes in at 299 USD, plus 99 USD/year from the second year.

Closing thought

Over the next weeks, I'll be writing more articles and insights into liberating your mailbox, hosting your own server and reclaiming your inbox. Feel free to ask me for help, via mail (prefer to mail with non-Gmail addresses, haha), via Twitter or Mastodon.

Statistics about mailserver/mx usage come from


from Poems

I found you with these eyes exploring your perfection everything identified aren't you just perfection every little record eager to obtain later recollected I fall in love again

everything just perfect everything's alright oh, that day... the day I lost my sight

now given another vision now given another view this one is more empty this one can't see you


from Poems

It's to late Let to fate A retrospective request Please slow down I've not done my best Growing years Growing fears Losing energy Time will pass watching the colour in my life turn glass I wanna go home I've been gone so long That moment at the end of a false smile They'll see I've been gone too long


from steve

Smartphone manipulation

Smartphones and Social Media are ruining our lives, say the press. It's not smartphones or social media itself. It's that we're biologically unprepared for how to deal with them. Smartphones and Social Media are designed to hijack the attention and reward centres of the brain. They do it so well we don't notice how our brains are being altered.

In this post I write about how I changed my Smartphone use, and how this helped me reclaim my time and my attention span. It's about my ability to be present in the moment. Something I lost, then regained.

I won't talk about everything I do. Instead, I'll focus on things you can do. There's no talk of compiling your own firmware, mainlining F-droid or micro-g here. That's for some other time.

Instead, this post is about what I did that you can do without changing your phone.

I never felt more connected to friends through my phone, but so absent in their presence.

How Things Got Out Of Control

A few years ago, I had an iPhone 6. It was the digital tool I used more than anything else. If you could think of a pointless app, it'd be on there. When a notification came I'd hear a noise. The screen would light up, and, in pavlovian style so would my neurons. My phone went to bed with me, it woke up with me, it went to work with me.

The phone takes over our lives, like boiling a frog

I started to find that I was getting less happy. I felt less able to concentrate. I never felt more connected to friends through my phone, but so absent in their presence. My attention span shrivelled. I couldn't watch whole films. Reading books was impossible. In pockets of free time I'd check Facebook, Twitter, Instagram, Email ad nauseum. I've missed buses and trains because I was so absorbed in something I don't even remember reading. I had forgotten boredom. There was no time for my mind to wander. I became an angelheaded hipster, burning for the ancient heavenly connection to the starry dynamo in the machinery of night.

I lost the buzz of low-effort connection, but gained the ability to connect with purpose.

The Flashpoint

When Apple pulled the plug on the headphone jack, I realised my time with Apple's products was over. I didn't want to jump from Apple's walled garden into Google's. Instead I tried to degoogle my life (which is definitely another post in itself).

In the process I found ways to make my phone work for me rather than against me. I found a whole new world of ethical social media. So far, I've gained happiness, time and space for myself. I lost the buzz of low-effort connection, but gained the ability to connect with purpose.

I wanted a sustainable phone experience. This isn't a minimalist experience. This isn't a phone pared back to the basics. It's a phone experience that works for, not against me. Everyone's sustainable phone experience is different. It's a journey, not a goal. A journey I encourage readers to travel.

Stage 1: Do Not Disturb

I'm not waking this pupper up, and neither is my phone

The first thing I did was reduce the volume and timing of notifications I receive. One of the best features on both Android and iOS is Do Not Disturb. This isn't enough alone, but combined with sane rules makes the break between you and your phone.

Since the early days of Blackberry, people have been chained to notifications. Notifications have many problems, the worst of which is the impact on sleep. Do Not Disturb helps you take back your sleep. It also lets you take back your time.

Here's how I use my Do Not Disturb settings:

  1. No calls, messages or notifications from 8pm – 10am
  2. Notifications from Tusky, Signal, QKSMS and calls on Saturday and Sunday daytime
  3. Exceptions for specific contact groups over specific services

Using contact groups to manage exceptions lets family and friends reach you in your own time. Calls also come through if someone calls 3 times in 5 minutes. This works on both iOS and Android.

Stage 2: Notifications

Putting the no in No-tifications

I also restrict notifications. I restrict which apps can send notifications. I restrict when apps can send notifications.

On my iPhone I used IHG's app to book hotels. The app used notifications to update me about bookings. It also advertised to me. Many apps use notifications for adverts. This doesn't happen on my current phone.

The only apps that can trigger notifications on my phone are:

  • Tusky for Mastodon notifications
  • Mail notifications
  • Calendar notifications
  • Signal Messenger for messages
  • App update notifications from F-Droid and Yalp

For everything else, I can check the app when I feel like it.

Imagine caring what people you barely know are up to while the most amazing person in the world lies next to you.

Stage 3: Reduce Interaction

Oh god, no.

A major step: I try to get social media to notify me by email. This increases the number of steps needed to respond. I use a dedicated mail account for low-value mail such as notifications and sign-ups. I now respond to notifications in my own time, not when a light pops up.

If I have an email notification, the app will still show it as unread when I visit. I set aside time to use social networks. I try to use them with purpose instead of passively scrolling through every 15 minutes.

Marizel and I noticed we used our phones when we woke up, and used them in bed before sleeping. Imagine caring what people you barely know are up to while the most amazing person in the world lies next to you.

Our phones stay outside of the bedroom now. In fact we use no technology in the bedroom beyond a light and a radiator. The room is now only used for about 3 things, none of which need complex technology.

Stage 4: Trimming Apps

Wanna see my home screen?

My home screen

I deleted Facebook in light of its continuous commitment to violating privacy. The Cambridge Analytica scandal was the last straw for me. I understand that for many people that's not an option. For example, I'm still a heavy Twitter user but it often makes me sad. That's why I keep it off my home screen.

Reducing the amount of apps I have helped a lot. Most of the time, you don't need an app. I started by removing apps I hadn't used in 6 months. I removed apps that had functioning mobile sites and bookmarked them on my home screen. I switched to lighter non-official social media clients that didn't bug or track me.

There are alternatives that will help you get your time back. If you can't delete Facebook, remove the app from your phone. If that's too much, replace it with a dedicated browser app only used for Facebook. Set Facebook's mobile site as the home page in that browser. Bonus points if your dedicated browser app supports ad-blocking.

If you find social media makes you angry or upset, consider using it from your laptop only. Laptops tend not to stay online, unlike phones and tablets. Using a laptop requires a conscious decision to engage instead of a passive default. You can still catch up with friends and family on Facebook, but need to make a little effort to do so. Friction is the best tool to control social media use.

Setting up ad-blockers on a laptop is often easier than on a phone. Having said that, there are great apps like Better that are worth looking at. Firefox on Android supports add-ons, such as uBlock Origin.

Stage 5: Seasonal Cleaning

It's a journey, not a destination

To keep things light, I created a 3 month folder on my home screen. Every month I go through my installed apps. If I haven't used an app that month, it goes in the 3 month folder. This means the app is on my home screen but not taking up space.

If I use it, it comes out of the drawer and off my home screen. If I don't use the app in 3 months, I uninstall it. This keeps my phone light, quick and clean.

I'm pretty brutal about my home screen. My wallpaper is black, I use dark mode where I can and I keep the screen brightness low. I have 11 icons on my home screen, along with two folders:

  • The 3 months folder discussed earlier
  • A folder named “Don't”.

The Don't folder holds apps I want to use less. Don't doesn't mean, “Don't use this”. It means “Don't make this your default action”. In my Don't folder currently, I have the following apps:

  • Red Reader
  • Tusky

Once I feel my relationship with an app is back on track, I take it out of don't and decide where to put it next. If it doesn't improve, I'll consider removing it. I don't have to remove it. I just have to make an active decision about that app's future.

As I mentioned, my wallpaper is black, but I've found some great options for lockscreens.

An Aside: Kinder, Gentler Social Media

Ethical Social Media exists. You should try it

You might've wondered what Tusky and Mastodon are. Well, I used to use Facebook, Twitter and Instagram. I found that these apps encouraged me to vomit thoughts, argue with people or share things that upset me. I decided to find alternatives, and I'm glad I did.

I use Mastodon as a much happier alternative to Twitter. Mastodon is a bit like a friendlier, happier twitter. It's not the same, but that's a good thing. Instead of Instagram I use Pixelfed but that's still new, so I'm waiting for an Android app. For writing I use writefreely. You're using it now to read this.

These applications are all part of something called the Fediverse. It's a non-commercial, open way of sharing with each other. Nobody's incentivised to get you to like or share. Likewise, nobody's incentivised to like or share your stuff. These spaces tend to be smaller and sometimes less active, but are way healthier.

Ethical social media is less invasive. It avoids the dopamine-feedback loop you get with commercial networks. People can still contact me via social networks on Mastodon and Pixelfed. Of course, there are plenty of options for email.

Stage 6: Making Social Media an Active Choice

There are no wrong answers, just take the time to choose

I've got rid of most of the more evil social media around, so how do I reclaim my life? Well, I start by setting particular times to use social media. I check social media on my phone mostly at the start of the day, and about an hour before bed. The rest of the time it needs to be a conscious decision to use it on my laptop.

It takes time to reclaim your attention span. I've found Kindles to be amazing devices for this. I just wish I could find a more open alternative that did what I wanted. I've also found little things to reclaim my attention span.

Instead of using a phone when I get up, I try to make sure Marizel is the first thing I see. If I'm up first I'll spend a few minutes watching her sleep. Sometimes I think about random things. Other times I just watch her. I find this helps me focus on what's important.

I usually make us coffee first thing in the morning and I'll look out of the kitchen window while the kettle boils. It's not an amazing view, but the phone stays in the living room. It gives me time every day for my mind to wander. It's only 5 minutes while I wake up, but it makes a real difference to my perspective.

Final Thoughts

The biggest thing I've had to accept is that this is a work in progress. Sometimes I'm going to fail. I'm going to get into arguments on twitter. I'm going to spend too much time on an app for no good reason. There will be times when I'm physically with people, but mentally absent. It's ok. What's important is that I recognise it, and try to stop it happening next time.

But in a life surrounded by bells and flashing lights I can find the time to be present with those I care about. That's worth more than all the likes and shares in the world.


from h3artbl33d


I have been running a number of mailservers for the past few years – mainly for my firm (as an entrepreneur). A small part is personal, eg, runs its own mailserver.

Over time, I outgrew the scenario where a single server (or two, with a fallback) is feasible. Rather than throwing more resources at it, or moving to a more powerful server, I deliberately chose to add additional servers. Not only does this help in setting up a more resilient mail infrastructure, segmentation also benefits security.

In a very early stage, I implemented technologies like SPF, DKIM and DMARC. Most likely, those abbreviations do ring a bell. If not, here is a small explanation:

  • SPF is a technique built on DNS records; it's basically a list of the mailservers that are allowed to send mail for a certain domain.
  • DKIM adds cryptographic signatures on top of that. It allows receivers to verify whether the sender is allowed to send from a particular domain, using public key cryptography.
  • DMARC is the newest addition; it not only adds another layer of sender verification, but also specifies what action should be taken once a sender fails verification and to whom failures should be reported.
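All three live in DNS as TXT records. As a sketch, this is what they could look like for a hypothetical example.org – the selector name, policy and key are illustrative placeholders, not values from any real domain:

```
example.org.                     IN TXT "v=spf1 mx -all"
selector._domainkey.example.org. IN TXT "v=DKIM1; k=rsa; p=<base64-encoded public key>"
_dmarc.example.org.              IN TXT "v=DMARC1; p=reject; rua=mailto:postmaster@example.org"
```

Here "v=spf1 mx -all" allows only the domain's MX hosts to send, "p=reject" tells receivers to refuse messages that fail verification, and "rua" is where aggregate reports get mailed.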

These three techniques are a tremendous help in mitigating spoofing. Let's take my domain as an example:

If SPF, DKIM and/or DMARC aren't set up at all, anyone could spoof that domain and pretend to be me – eg, use as the sender.

This goes for virtually any domain. Eg, without these techniques and some provider-level filtering, anyone could spoof messages as if they were sent from,,, etc.

Occasionally, I like to experiment with technology. The same goes for email spoofing. In order to have some fun, I stripped an old, deprecated e-mail domain of SPF, DKIM and DMARC. Additionally, the domain I am referring to produces quite a few hits on HIBP (Have I Been Pwned).

Next, I configured a catch-all on the domain – meaning every single address would be valid and routed to a single inbox – a “Pandora's Box” if you will. This setup catches around 500 messages a day – all spam. The messages vary from offers of prescription drugs to SEO offers; from viagra to so-called 'lost contacts'.
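With OpenSMTPD, such a catch-all can be sketched with a virtual users table that maps any address at the domain to one local user. The domain and user names below are made up for illustration:

```
# /etc/mail/virtuals – the bare "@" entry is the catch-all
@: pandora

# /etc/mail/smtpd.conf (fragment)
table virtuals file:/etc/mail/virtuals
action "catchall" maildir virtual <virtuals>
match from any for domain "olddomain.example" action "catchall"
```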

Sometimes, I start an effort to scam the scammers – mainly inspired by James Veitch – by replying and pretending to be an actual victim.

Over time, I received quite a number of e-mails like this one:

Though the phrasing varies, it always boils down to the claim that the victim has been hacked. The webcam was supposedly turned on, and all digital activities were tracked and logged – including passwords, porn viewing, etc.

While it might be peanuts for a tech-savvy person to prevent this, or to see in the blink of an eye that it's a scam, the same cannot be said for regular users. Heck, it might be really scary to receive such an e-mail.

To put it in perspective, I received a phone call last week from an alarmed customer who received one of these e-mails. The customer in question uses an e-mail address supplied by an ISP that has a pretty shitty mailserver setup.

The thing that set off the alarm bells was the mention that the webcam was hacked – the customer in question doesn't have a webcam, so it was all sorted out pretty quickly. But nevertheless – receiving such emails can almost cause a heart attack if you are not able to tell whether it's a scam.

The reason I am writing this blogpiece is to raise awareness. If you are managing a mailserver – or if you know folks that do – please implement (or ask the person responsible to do so) SPF, DKIM and DMARC. It isn't something you'll likely get done within five minutes the first time – but having these techniques in place can save you from quite the headache!

Let's make the web great again!


from h3artbl33d

Whether you are a pentester or do some occasional auditing, most likely you are familiar with Metasploit – or have heard of it. It's considered to be an essential tool for offensive security. I have always been a little stunned by the fact that Metasploit is often run from Kali. Linux is far from secure; Kali takes this to the next level by running everything as UID 0 (root). Offensive and defensive security ought to go hand-in-hand. So, obviously, let's combine these two and install Metasploit on OpenBSD. Puffy for the win!

Preparing the dependencies

Metasploit has some dependencies that we have to install beforehand; it needs these applications and settings in order to function correctly.


Install Ruby 2.6 by issuing pkg_add ruby and choosing version 2.6. Upon successful installation, a notice is shown that you can set some subapplications as the default version. Unless you are currently running Ruby applications – or intend to do so in the future – setting 2.6 as the default Ruby is safe. Execute these commands to set version 2.6 and its subapplications as the system default:

doas ln -sf /usr/local/bin/ruby26 /usr/local/bin/ruby
doas ln -sf /usr/local/bin/erb26 /usr/local/bin/erb
doas ln -sf /usr/local/bin/irb26 /usr/local/bin/irb
doas ln -sf /usr/local/bin/rdoc26 /usr/local/bin/rdoc
doas ln -sf /usr/local/bin/ri26 /usr/local/bin/ri
doas ln -sf /usr/local/bin/rake26 /usr/local/bin/rake
doas ln -sf /usr/local/bin/gem26 /usr/local/bin/gem
doas ln -sf /usr/local/bin/bundle26 /usr/local/bin/bundle
doas ln -sf /usr/local/bin/bundler26 /usr/local/bin/bundler


Metasploit requires a database to store information. The recommended DBMS is PostgreSQL, with which I am pretty happy. Installing it is pretty straightforward: pkg_add postgresql-server.

Some additional configuration is necessary before running it:

su - _postgresql
mkdir /var/postgresql/data
initdb -D /var/postgresql/data -U postgres -A scram-sha-256 -E UTF8 -W
exit
rcctl start postgresql

Now, we need to create a database and user to store everything in:

psql -U postgres
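Inside that psql session, a role and database can be created to match the database.yml values used later in this guide – the password here is a placeholder; pick your own:

```sql
-- create a dedicated role and database for Metasploit
CREATE ROLE sploit LOGIN PASSWORD 'password';
CREATE DATABASE metasploit OWNER sploit;
\q
```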

Setting up Metasploit

In the previous steps we prepared the dependencies; in this step we set up Metasploit itself.

doas useradd -b /usr/local -m -s /sbin/nologin metasploit
doas -u metasploit git clone ~metasploit/app

More dependencies

Metasploit itself needs some Ruby 'gems' (extensions). Install them with:

cd ~metasploit/app
bundle install

Editing the database

Copy over the configuration and open it with your favorite editor, eg:

cp /usr/local/metasploit/app/config/database.yml.example /usr/local/metasploit/app/config/database.yml
vi /usr/local/metasploit/app/
chown metasploit:metasploit /usr/local/metasploit/app/config/database.yml

The configuration might speak for itself; if not, you want to edit lines 9, 10 and 11:

  database: metasploit
  username: sploit
  password: password
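Assuming PostgreSQL on the same machine with the values from earlier, the relevant section might end up looking like this sketch (the password is a placeholder, and the host/port/pool lines reflect PostgreSQL defaults rather than anything Metasploit-specific):

```yaml
production:
  adapter: postgresql
  database: metasploit
  username: sploit
  password: password
  host: 127.0.0.1
  port: 5432
  pool: 5
```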

That's it. Now you have set up Metasploit! Happy and safe pentesting!