How good is Pentiment? Good enough that I’m writing a video game review on my weblog for the first time in a decade.
I described this to a friend as “The Oregon Trail for grown‐ups.” It’s a game written by someone with expertise in, and obvious love for, a specific place and time in history: Bavaria in the year 1518, at the cusp of the Reformation.
Over a generation, the player meets the entire population of the fictional town of Tassing, and observes how they live their lives. They choose spouses, or have spouses chosen for them. They worship God, yet keep some pagan traditions going on the side. They raise children and speak fondly of those who have died.
They are pious people because their lives are hard. They lose children to illness, wives to childbirth, husbands to conflict. They navigate harsh winters and the demands of feudal lords. What gives them strength to keep going amid loss, in most cases, is their own conception of their life’s purpose. They find comfort in Tassing’s role in the empire, and in their own roles in Tassing.
When you play this game, keep this in mind: some choices will advance time, and some will not. You’ll be able to tell them apart. Before you advance time, make sure you’ve explored the entire map and availed yourself of all the no‐time‐elapsed actions. My only regret in my first play‐through is that I mismanaged my time through the entire first act before understanding this.
Beyond that, don’t stress things. Hardly any of the major decisions you make are “right” or “wrong”; just kick back and enjoy how they affect the narrative. You will want to play it again just to appreciate how those choices affect the outcome of the story.
Josh Sawyer, Pentiment’s director, was inspired in part by his own family’s origins and his undergraduate study of early modern history. To be a smart‐ass: this is why you make people take humanities classes.
Because while you, the player, are traveling between the town and the nearby Kiersau Abbey, piecing together the clues of three interconnected mysteries, Pentiment is asking you good, thoughtful questions about the purpose and value of art. Early in the game, a character says, “Art is illusion, storytelling; but in their most sublime form, these images illuminate a path to truth.”
Eventually, the game will ask you to choose how Tassing should be portrayed. It will ask you whether you can balance art’s commitment to truth against the mythos that gives its residents comfort. The game does not grade your efforts; all it asks is that you reflect on the question before answering.
(Score: 94/100)
I also run Paperless (paperless-ng these days) on a Raspberry Pi attached to the side of my printer. It’s my document database. If I’m honest, it’s better described as “the place where I forget to put my documents.”
So there are two things I want to improve from my scanner:
I want it to be easier to scan something straight to the folder that Paperless monitors for new documents. My printer’s touchscreen interface offers me the options “Scan to Dropbox” (hasn’t worked since Dropbox retired their v1 API) and “Scan to Network” (hasn’t worked since 2017 for reasons I never determined). My fallback option is “Scan to USB,” meaning a USB thumb drive, but even that is becoming a pain because of USB-C and dongles. I delude myself into thinking that if this process were more straightforward, I’d remember to use Paperless more often.
I want it to be easier to scan a random page (like an old ad from a magazine) and have that image ready to edit/save/attach to a post as quickly as possible on whatever computer I happen to be working from at that moment. If the Dropbox integration still worked, this would be pretty simple, but since it doesn’t, my printer has no way of knowing which computer wants the resulting image.
It took me three tries to get to the right solution here, probably because it was always a problem I had to solve in order to do something else. If I’d been less distracted, I’d like to think I would’ve figured it out sooner.
The easiest part of all this was using SANE to control my scanner over the network. The first search result for installing SANE on a Raspberry Pi served me quite well. A bit of trial and error allowed me to figure out the correct parameters for my particular scanner.
For instance, here’s how I can scan a color image from my flatbed and get a JPEG:
```
scanimage --device "airscan:e0:HP OfficeJet Pro 8740 [005A96] (USB)" \
    --mode=Color \
    --format=jpeg \
    --resolution=150 \
    -y 279.40 > test.jpg
```
The `--device` value is one of the four options emitted by `scanimage -L`; specifically, it’s the one that worked after trial and error. The `-y` parameter specifies how tall the resulting image should be, and assumes that we’re scanning a letter-sized sheet of paper. (11 inches equals 279.4 millimeters.)
Since I omitted the `--source` parameter, `scanimage` defaults to scanning from the flatbed. I could specify `--source ADF` to use the document feeder instead, but then I’d need to add a `--batch` parameter to specify the format of the output filenames, since that command will produce as many files as there are pages in the feeder.
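To make the flag juggling concrete, here’s a small Python sketch that assembles the argument list for either source. The helper name, the placeholder device string, and the `--batch` filename pattern are my own inventions, not taken from my actual scripts:

```python
def build_scanimage_args(device, source="Flatbed", resolution=150):
    """Assemble a scanimage command line for a letter-sized color scan."""
    args = [
        "scanimage",
        "--device", device,
        "--mode=Color",
        "--format=jpeg",
        f"--resolution={resolution}",
        "-y", "279.40",  # 11 inches == 279.4 mm
    ]
    if source == "ADF":
        # The feeder can produce many pages, so name the output files
        # with a --batch pattern instead of redirecting stdout.
        args += ["--source", "ADF", "--batch=scan-%03d.jpg"]
    return args

print(" ".join(build_scanimage_args("airscan:e0:...", source="ADF")))
```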
Anyway, a couple quick scripts for the common use cases meant that a scan was only a quick SSH session away. Since Paperless was running on the same Pi, the script for scanning to Paperless also moved the resulting files into the Paperless consumption folder, and then I was done.
Scanning an image for other purposes wasn’t as straightforward. It was easy enough to grab a color JPEG with `scanimage`, but then I’d need to pull it down to whatever computer I was on via `scp`. To cut down on the boilerplate, I wrote a `get-latest-scan` script that would just `scp` the newest file in the output directory. That script had to exist on both my home laptop and my work laptop.
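The core of that script is just two commands. Here’s a Python sketch that builds them; the host, directory, and function names are placeholders of my own, not the actual script:

```python
def latest_scan_commands(host="pi@printer.local", remote_dir="/home/pi/scans", dest="."):
    """Build the two commands get-latest-scan needs: one to find the
    newest file in the Pi's output directory, one to copy it here."""
    # `ls -t` sorts by modification time, newest first.
    find_newest = ["ssh", host, f"ls -t {remote_dir} | head -1"]

    def copy(filename):
        return ["scp", f"{host}:{remote_dir}/{filename}", dest]

    return find_newest, copy

find_newest, copy = latest_scan_commands()
print(find_newest)
print(copy("scan-001.jpg"))
```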
I was heartened by the simplicity of my add‐to‐Paperless workflow — run one script and you’re done. The most complicated part was having to SSH into the Pi. I decided that I wanted a one‐button solution instead.
I have a number of these small buttons lying around. I’d just need to put one in some sort of 3D-printed enclosure and then connect it to some GPIO pins on the Pi. I could then write a simple daemon that would listen for button presses and run the `scan-to-paperless` script.
The software was pretty simple, but I added a requirement: the daemon should turn off the LED in the button while the scan was happening, then turn it back on at the end to indicate that the scan had finished.
The script ended up being a simplified version of my volume knob daemon.
```python
#!/usr/bin/env python
from time import sleep
import os
import signal
import subprocess
import sys
import threading

import RPi.GPIO as GPIO
import queue

GPIO_BUTTON = 10
GPIO_LED = 8

# Use a queue (of max size 1) to debounce. The first trigger will fill the
# queue and set off the scanning process; subsequent triggers will be ignored
# because the queue is full.
#
# Behavior-wise, this means that the button won't do anything while the scan
# script is running, which we emphasize by dimming the LED around the button.
QUEUE = queue.Queue(1)

# The event is used to signal our loop to trigger.
event = threading.Event()

def set_led(state):
    if state:
        GPIO.output(GPIO_LED, GPIO.HIGH)
    else:
        GPIO.output(GPIO_LED, GPIO.LOW)

def on_exit(_signo, _stack_frame):
    GPIO.cleanup()
    sys.exit(0)

signal.signal(signal.SIGINT, on_exit)

def button_callback(channel):
    try:
        QUEUE.put(1, block=False)
        event.set()
    except queue.Full:
        # Ignore.
        pass

def trigger_scan():
    p = subprocess.run(['/home/pi/bin/scan-to-paperless'])

def consume_queue():
    while not QUEUE.empty():
        set_led(False)
        trigger_scan()
        set_led(True)
        # Don't get the queue item until we're done; otherwise the queue will
        # be empty and something can worm its way in.
        QUEUE.get()
        QUEUE.task_done()

GPIO.setmode(GPIO.BOARD)
GPIO.setup(GPIO_BUTTON, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(GPIO_LED, GPIO.OUT, initial=GPIO.LOW)
GPIO.add_event_detect(GPIO_BUTTON, GPIO.RISING, callback=button_callback)

set_led(True)
while True:
    event.wait(1200)
    consume_queue()
    event.clear()
```
Physical buttons are liable to trigger more than once any time they’re pressed, which is why libraries exist to debounce button presses. I like the queue that holds only one item; for this script, it’s the easiest possible way to debounce, and it’s also the easiest way to ignore the button if the user tries to press it again while a scan is in progress.
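The pattern is easy to demonstrate without any GPIO hardware. Here’s a minimal sketch (the names are mine, purely illustrative) of the size-one-queue debounce:

```python
import queue

# A queue of max size 1, the same debouncing trick the daemon uses.
presses = queue.Queue(1)

def button_callback():
    """Register a press only if no scan is already pending or running."""
    try:
        presses.put(1, block=False)
        return True    # press accepted; a scan would start
    except queue.Full:
        return False   # bounce or mid-scan press; ignored

# A physical press often fires the callback several times in quick
# succession. Only the first one gets through:
print([button_callback() for _ in range(5)])  # [True, False, False, False, False]

# Once the scan finishes and the item is consumed, the button works again.
presses.get()
print(button_callback())  # True
```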
The `scan-to-paperless` script assumes that I’ll want to scan from the automatic document feeder. This is fine; it works equally well for adding one page or several, as long as the pages fit in the ADF.
The button works great. It is still hooked up to my printer and I still use it. But it only solved one of the use cases I needed, and I found myself wishing I had added a second button to the enclosure for scanning to Paperless from the flatbed. And then a third, for scanning an image and then doing, uh, something with it.
I am no stranger to making weird single‐purpose web sites that run on Raspberry Pis and are only accessible within my home network. The solution was staring me in the face the whole time. Web pages have buttons that can do arbitrary tasks. And web pages can display images. And browsers let you save images that appear on web pages.
Web pages also run anywhere, like on phones and tablets. And a good web page would be even easier to use than the printer’s own touchscreen, and would have a high likelihood of being adopted by other members of one’s household.
The backend has two tasks, and the first is easy:

1. Serve the scanned JPEGs and PDFs as static files.
2. Kick off scan commands on request and report back when they finish.

I did the backend in Node. Task 1 was solved with `serve-static`; task 2 was solved with WebSockets and the `ws` library.
```javascript
import { createServer } from 'http';
import { readFileSync } from 'fs';
import { WebSocketServer } from 'ws';
import serveStatic from 'serve-static';
import finalhandler from 'finalhandler';
import { exec } from 'child-process-promise';

// COMMANDS
// Map codes like `paperless-scan-from-bed` to specific terminal commands.
const COMMANDS = {
  'image-scan-from-bed': ['/home/pi/bin/image-scan-from-bed.sh'],
  'image-scan-from-adf': ['/home/pi/bin/image-scan-from-adf.sh'],
  'paperless-scan-from-bed': ['/home/pi/bin/paperless-scan-from-bed.sh'],
  'paperless-scan-from-adf': ['/home/pi/bin/paperless-scan-from-adf.sh']
};

let suspending = false;

function suspend (ws) {
  ws.send('WAIT');
  suspending = true;
}

function resume (ws, result) {
  ws.send(`RESULT:${(result.stdout || "").toString()}`);
  ws.send('OK');
  suspending = false;
}

function handleError (ws, message) {
  ws.send(`ERROR:${message}`);
}

const staticHandler = serveStatic('/home/pi/scans');
const server = createServer((req, res) => {
  staticHandler(req, res, finalhandler(req, res));
});

const wss = new WebSocketServer({ server });

wss.on('connection', (ws) => {
  ws.on('message', (data) => {
    let message = data.toString('utf-8');
    console.debug('Message received: ', message);
    if (message.startsWith('SCAN:')) {
      // Shouldn't happen, but handle it just in case.
      if (suspending) {
        ws.send('ERROR:busy');
        return;
      }
      message = message.replace(/^SCAN:(\s*)(?=\w)/, '');
      if (!(message in COMMANDS)) {
        handleError(ws, 'No such command!');
        return;
      }
      let args = [...COMMANDS[message]];
      let command = args.shift();
      console.debug('Running command:', command, args);
      suspend(ws);
      exec(command, args).then((result) => {
        resume(ws, result);
      });
    }
  });
  ws.send('OK');
});

server.listen(8081);
console.log('Listening on port 8081');
```
I decided on a dead-simple “protocol” for communicating between client and server:

- `WAIT`/`OK` are sent by the server to indicate whether the scanner is busy.
- `SCAN:foo` is sent by the client and indicates that the server should run whatever command is aliased to `foo`.
- `RESULT:bar` is sent by the server to report the output of the latest scan attempt: namely, the path to the new PDF or JPEG.
- `ERROR:baz` is sent by the server to report that an error of type `baz` happened.

I’d never worked with Node’s `http.createServer` directly (I’d always used a framework like Express or Koa), but I was pleasantly surprised at how easy it was to operate both a static file server and a WebSocket listener on the same port.
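As a sketch of how a client could pick those messages apart, here’s a tiny parser in Python (the function is mine, purely illustrative, not taken from the actual frontend):

```python
def parse_message(raw):
    """Split a protocol message into (kind, payload).

    WAIT and OK carry no payload; SCAN, RESULT, and ERROR carry one
    after the first colon.
    """
    kind, sep, payload = raw.partition(":")
    return (kind, payload if sep else None)

print(parse_message("OK"))                     # ('OK', None)
print(parse_message("SCAN:image-scan-from-bed"))
print(parse_message("RESULT:/scans/out.pdf"))  # ('RESULT', '/scans/out.pdf')
print(parse_message("ERROR:busy"))             # ('ERROR', 'busy')
```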
I tend to treat little projects like these as opportunities to audition frameworks that I haven’t yet worked with. I chose Preact for the frontend because create‐react‐app was starting to feel like overkill for the tiny intranet sites I was making for myself. If I had to do it over again, I probably would’ve chosen Lit just to stretch myself a bit further.
I used the CLI and kept a lot of the defaults that Preact gave me, ending up with a Bootstrap-looking kind of site from 2013. Doesn’t matter. I needed a few things: a button for each scan command, a way to show that a scan was in progress, and a place for the result, whether an `img` tag with the scanned image or else a link to download the PDF.

I changed scarcely more than `home/index.js` and `home/style.css`, but here’s the source for `home/index.js` just so you can get an idea.
From there, `npm run build` generated a bunch of files that went into a `./build` directory, which itself was easily copied over to my Paperless server. If I ever need to make changes to it, I’ll do the work of writing a script to `rsync` everything properly.
I set up nginx in the way I described toward the end of my last article. I don’t recall why I didn’t have the scanner backend serving up these files, or why I didn’t just configure nginx to serve up the JPEGs and PDFs from the scan output directory, but Past Andrew isn’t always thoughtful about documenting rationales.
I’m more than satisfied with how this turned out. Here’s what it looks like to scan a PDF to Paperless:
And here’s what it looks like to scan an image:
Experience tells me that there are maybe a dozen of you weirdos out there that these projects really speak to, and I’ll be glad if this inspires someone to do something similar.
But it wouldn’t be worth doing if it didn’t have personal benefits, also. I did this about three months ago and promptly forgot everything about it. I’d forgotten that I’d chosen Preact. I’d forgotten whether I used Node or Ruby for the scanner backend. I unearthed, in the process of writing this, at least six bugs that needed to be fixed, including the fact that I don’t yet handle the hypothetical case of scanning multiple color pages from the ADF.
I’ve informally resolved to get better at project hygiene, even for things that nobody else on Earth will ever use. Simon Willison has spoken on this topic, and a particular idea resonates with me: the tactics that software developers use in the workplace to share knowledge carry over very well to a hobbyist coder who juggles projects.
In the latter case, the process of researching this article was an ad‐hoc knowledge transfer from myself (three months ago) to myself (now). September Andrew should really have done better at writing down his thoughts and decisions; it would’ve made this article easier for December Andrew to write.
I have a Raspberry Pi in my house that functions as a home automation server. It’s on the network as `home.local` and has an IP address of (let’s say) `192.168.1.99`. No Home Assistant (not yet, at least), but there are a handful of things I use to coordinate my home automation. Four of them have a web presence. Node-RED, for instance, listens on port 1880 by default and can be accessed at `http://home.local:1880`; a couple of the others each listen on port 8080 by default and would both want to be accessed at `http://home.local:8080`.

I can configure these port numbers so that they don’t clash, but is that the limit of my imagination? I don’t care about the port that something runs on, and I don’t want to have to remember it. These things have names; I want to give them URLs that leverage those names.
I want these services to have “subdomains” of `home.local`. I want `node-red.home.local` to take me to the Node-RED frontend. Likewise with `homebridge.home.local`, `zigbee2mqtt.home.local`, and `dashboard.home.local`.

“Subdomain” is in quotes because it’s not the right word here, but for now just think of them as four separate sites with four separate domain names.
How do I pull this off? Split the problem into two parts:

1. Get `dashboard.home.local` and the others to resolve to `192.168.1.99`, just like `home.local` does.
2. Get the web server at that address to route each name to the service it belongs to.

This first part will apply to people like me who prefer to use Multicast DNS instead of running their own DNS server to handle name resolution.
Let’s get this out of the way. Here are my arguments in favor of mDNS:

- It’s zero-configuration. If you name a Raspberry Pi `foo` during installation, on first boot you’ll be able to SSH into it at `foo.local` without having to plug it into a monitor first or figure out its IP address.
- It’s widely supported. Even microcontrollers can broadcast their own names with the `ESP8266mDNS` library, and can resolve mDNS hostnames with mDNSResolver.
- The names look like names. I don’t like URLs of the form `http://thing/foo`, even if my network can resolve `thing` to an IP address. What the hell is that? Of course, I could add a faux-TLD like `.dev`, but it doesn’t exactly feel safe to invent a TLD in this crazy future where new TLDs are introduced all the time. The `.local` TLD(-ish) thing is part of RFC 6762; it’s “safe” in the sense that it would be a stupid idea for ICANN to introduce a `.local` TLD, stupid even by ICANN’s standards.

It’s easy enough to add a DNS server to a spare Pi. Somewhat fashionable, too, judging from the popularity of Pi-Hole. And your router can be configured to use your internal DNS server so that you don’t have to change your DNS settings on all of your network devices.
I’ll give you the short answer: mDNS is a parallel, alternative way to resolve names, whereas a local DNS server is a wrapper around whatever DNS server you prefer for the internet at large. I can easily set up a Pi as my DNS server and tell it to resolve what it can, delegating everything else to `8.8.8.8` or whatever. But I’ve now made the Pi the most likely point of failure, and when it does fail, all my DNS lookups will fail, not just the local ones.
When I set up a Pi‐Hole, I had two DNS outages the first day. Bad luck? Probably. I could’ve tracked down the root cause, but by definition you will only notice a DNS failure when you’re in the middle of something else.
Anyway, this is not to say that an mDNS approach is better — only that name servers are essential infrastructure for network devices, and running my own name server on a $40 credit‐card‐sized computer was not a hassle‐free experience. Maybe if I owned the house I live in, I could put ethernet into the walls and replace my mesh Wi‐Fi network with something that has fewer possible points of failure, but you go to war with the network you have.
They don’t.
Well, they work just fine as domain names, but they don’t have the semantics of subdomains that you’d come to expect from the DNS world. To quote the mDNS RFC:
Multicast DNS domains are not delegated from their parent domain via use of NS (Name Server) records, and there is also no concept of delegation of subdomains within a Multicast DNS domain. Just because a particular host on the network may answer queries for a particular record type with the name “example.local.” does not imply anything about whether that host will answer for the name “child.example.local.”, or indeed for other record types with the name “example.local.”.
Outside-world DNS describes a hierarchy where `foo.example.com` is a subdomain of `example.com`, and `example.com` is (technically) a subdomain of `com`. Web browsers build features upon this implied hierarchy, such as allowing cookies set on `example.com` to be sent on requests for `foo.example.com`.
In mDNS, `foo.local` is not a subdomain of `local`. That `.local` is just a tag on the end meant to act as a namespace. So `bar.foo.local` can exist, but it’s just `bar.foo` with `.local` added on, and is not understood by mDNS to be a subdomain of `foo.local`. The two labels can coexist within the same network, and they could point to different machines or to the same machine.

The RFC says that any UTF-8 string (more or less) is a valid name in mDNS, and that includes the period. So I can publish `homebridge.home.local` and have it resolve to the same machine as `home.local`.
It feels right, I guess?
The only advantage of the “subdomain” here is in my brain and its desire to treat these as child-sites of my server at `home.local`. In a minute I’ll give you a compelling reason to use a simpler scheme instead. But this is how I did it.
There is a tiny downside, yes, almost too small to mention: Windows doesn’t support it at all.
Before Windows 10, to get mDNS support in Windows, you could install Bonjour Print Services for Windows. Since Windows 10, there’s built‐in mDNS support from the OS, except it’s bad.
Trying `ping` from the command line illustrates the problem:
```
C:\Users\Andrew>ping bar.local
Ping request could not find host bar.local. Please check the name and try again.

C:\Users\Andrew>ping foo.bar.local
Ping request could not find host foo.bar.local. Please check the name and try again.
```
Here I’m pinging two names on my network, both of them nonexistent. The first one returns an error message after about two seconds; it tried to resolve `bar.local` and failed. The second one returns an error message immediately, as though it didn’t even try. Windows does not support mDNS resolution of names with more than one period.
This is flat-out wrong behavior (`foo.bar.local` is a valid name in mDNS), but there you have it. I suspect it’s because `.local` has been used by Microsoft products in the past in some non-mDNS contexts; maybe there’s a heuristic somewhere in Windows that thinks `foo.bar.local` is one of those usages and can’t be convinced otherwise.
When I discovered this, I attempted to disable the built‐in mDNS support and reinstall Bonjour Print Services, but I failed. Maybe there’s a brilliant way to make it work right, but I haven’t found it.
Since I have exactly one Windows machine in my house, I’m satisfied with a low-tech workaround: the venerable `hosts` file. On my one Windows machine, all subdomain-style mDNS aliases go in there:
```
192.168.1.99 dashboard.home.local
192.168.1.99 homebridge.home.local
```
…and so on. Keeping this file updated when I add aliases is barely a chore; it’s not like there’s a new alias every week.
So this isn’t a deal-breaker for me. But if it’s one for you, there’s an easy workaround: don’t use subdomains. Instead of `dashboard.home.local`, use `dashboard-home.local`, or just `dashboard.local` if you prefer simplicity. As long as the name has exactly one `.` in it, Windows handles it fine.
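That one-dot constraint is easy to encode. Here’s a quick sanity-check helper (my own, purely illustrative) that matches the `ping` experiment above:

```python
def windows_mdns_safe(name):
    """True if Windows' built-in mDNS resolver can handle the name:
    a .local name containing exactly one dot."""
    return name.endswith(".local") and name.count(".") == 1

for name in ("dashboard.local", "dashboard-home.local", "dashboard.home.local"):
    print(name, windows_mdns_safe(name))
# dashboard.local True
# dashboard-home.local True
# dashboard.home.local False
```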
OK, all that’s out of the way. Back to the problem: we want `dashboard.home.local` and the rest to resolve to the same IP address as `home.local`. How hard could that be?
On Linux, Avahi is in charge of broadcasting the Pi’s hostname as `[hostname].local`. Could it broadcast the other names we want? Let’s dig into its config directory… aha! There’s a file called `/etc/avahi/hosts`!
Quoth its man page:
The file format is similar to the one of /etc/hosts: on each line an IP address and the corresponding host name. The host names should be in FQDN form, i.e. with appended .local suffix.
Couldn’t be simpler. So we just need to put them into this file, right?
I’ll save you the trouble of trying. To illustrate the problem more directly, let’s use the `avahi-publish` utility to try to broadcast an mDNS name:

```
sudo apt install avahi-utils
avahi-publish -a foo.home.local 192.168.1.99
```
We get the response:

```
Failed to add address: Local name collision
```
This happens because, by default, Avahi expects each IP address to have exactly one name on the network. Just as it says “`home.local` resolves to `192.168.1.99`,” it wants to be able to say “`192.168.1.99` is called `home.local`” without ambiguity. Since this IP address already has a name, it won’t let us add a second, unless we add the `--no-reverse` (or `-R`) parameter:

```
avahi-publish -a foo.home.local -R 192.168.1.99
```
Now we get what we wanted:

```
Established under name 'foo.home.local'
```
So if I can do it with `avahi-publish`, I can do it in `/etc/avahi/hosts`, right? Well, no. There’s no way to specify the no-reverse option within the hosts file; that’s the downside of imitating the simplicity of `/etc/hosts`.

Maybe it’ll behave differently in the future, but for now we’ll have to publish these aliases a different way. If we can’t just put them into the config file, we’ll have to write a startup script and make a systemd service out of it.
Now that we know about `avahi-publish`, the obvious approach would be to write a script that looks like this:

```bash
#!/bin/bash
/usr/bin/avahi-publish -a homebridge.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a node-red.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a zigbee2mqtt.home.local -R 192.168.1.99 &
```
(Wait, am I writing a Bash script in public? I’ve had nightmares like this.)
Did you notice that `avahi-publish` is still running from earlier, and will run indefinitely until we terminate the process with ^C or the like? That’s why those ampersands are needed in the script: they fork each command into a background process.
This will work fine as a one‐shot script, but not as a daemon. We want this script to run indefinitely and to clean up those child processes when the daemon is stopped.
Let’s make it wait indefinitely:

```bash
#!/bin/bash
/usr/bin/avahi-publish -a homebridge.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a node-red.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a zigbee2mqtt.home.local -R 192.168.1.99 &

while true; do sleep 10000; done
```
(This is ugly, but portable. `sleep infinity` works just fine on Linux, but not on macOS.)
We’re getting somewhere, but I think we need to do something else. We’re creating one new child process for each alias we’re publishing, and I’m fairly sure those processes will stick around if this script terminates. Let’s trap SIGTERM and make sure that those child processes also get terminated:
```bash
#!/bin/bash

function _term {
  pkill -P $$
}
trap _term SIGTERM

/usr/bin/avahi-publish -a homebridge.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a node-red.home.local -R 192.168.1.99 &
/usr/bin/avahi-publish -a zigbee2mqtt.home.local -R 192.168.1.99 &

while true; do sleep 10000; done
```
Playing around with this, I’m pretty sure it’ll do the right thing, and will daemonize nicely when we run it later on. If you were satisfied with this, you could save it to someplace like `/home/pi/scripts/publish-mdns-aliases.sh` and skip down to the systemd section, but I think you should keep reading.
I fumbled my way through the writing of that Bash script, but I would prefer not to have to manage these child processes at all. I’d like a setup where I create only one process no matter how many aliases I’m publishing.
Hey, there’s a Python package that does what we want! Let’s install it:

```
pip install mdns-publisher
which mdns-publish-cname
```
On my machine (Raspbian 11, or bullseye), that works just fine, and `which` outputs `/home/pi/.local/bin/mdns-publish-cname`. If `which` doesn’t find it, you might need to add `PATH="$HOME/.local/bin:$PATH"` to `.profile` or `.bashrc`. Or, if you’d rather install it globally, run `sudo pip install mdns-publisher` instead, and `which` will return `/usr/local/bin/mdns-publish-cname`.
The `mdns-publish-cname` binary is great because it accepts any number of aliases as arguments. Run it yourself and see:

```
mdns-publish-cname dashboard.home.local homebridge.home.local node-red.home.local zigbee2mqtt.home.local
```
All four of those hostnames should now respond to `ping`.
Perfect! It does just what we want in a single process. And it assumes we want to publish these aliases for ourselves, rather than for another machine, so we don’t even need to hard‐code an IP address.
To me, this is clearly superior to Option 1. Sure, I had to install a `pip` package first, but I had to install `avahi-utils` via APT before I could use `avahi-publish`, so I think that’s a wash.
Having solved our problem with elegance, we’re still not done: the mdns-publisher repo also includes a sample systemd service file that we’ll use as a starting point for our own.
There are a few lines worth reflecting on:
```
After=network.target avahi-daemon.service
```
This is important because we can’t publish aliases if Avahi hasn’t started yet.
If you went with Script Option 1, there’s another thing to take care of: the page we cribbed this from says it needs to restart if Avahi itself restarts. To make that happen, you’d need one extra line in the `[Unit]` section:

```
PartOf=avahi-daemon.service
```
If you’re using Script Option 2, this doesn’t seem to be necessary. If I restart Avahi while I monitor the mdns-publisher output, I can see that `mdns-publish-cname` somehow knows to republish the aliases.

```
Restart=no
```
I’d usually go with `on-failure` in my service files, but maybe `no` is OK here. I think it depends on whether the daemon would exit non-successfully for intermittent reasons, or for a reason that’s likely to persist. If it’s the latter, you’ll just end up in a restart loop that you’d have to manage with other systemd options like `StartLimitBurst`. I’ll keep this as-is.

```
ExecStart=/usr/bin/mdns-publish-cname --ttl 20 vhost1.local vhost2.local
```
You’ll need to point this to `/home/pi/.local/bin/mdns-publish-cname`, or `/usr/local/bin/mdns-publish-cname` if you installed the package with `sudo`. If you do the former, you’ll want to set `User=pi` instead of `User=nobody`, because the `nobody` user won’t be able to run a binary located inside your `pi` user’s home folder.

Of course, you’ll want to change `vhost1.local vhost2.local` to the actual aliases you want to publish. But instead of doing that, let’s go one step further:
It feels more intuitive to me to put my aliases inside a config file. After all, a Pi’s primary hostname isn’t specified in the text of some systemd unit config file; it lives at `/etc/hostname`.

So I created `/home/pi/.mdns-aliases` for keeping a list of aliases, one per line:
```
dashboard.home.local
node-red.home.local
homebridge.home.local
zigbee2mqtt.home.local
```
I then wrote a Python script to read from that list:
```python
#!/usr/bin/env python
import os

args = ['mdns-publish-cname']

with open('/home/pi/.mdns-aliases', 'r') as f:
    for line in f.readlines():
        line = line.strip()
        if line:
            args.append(line)

os.execv('/home/pi/.local/bin/mdns-publish-cname', args)
```
We read each line, strip whitespace, toss out blank lines (like if I inadvertently add a newline to the end), and pass along the rest as arguments to `mdns-publish-cname`. The `os.execv` call works like `exec` in the shell: it replaces the current process with the one specified. That’s perfect for our purposes.
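To see exactly what the script feeds to `os.execv`, here’s the same parsing as a pure function (the function name is mine) that you can poke at without a Pi:

```python
def build_publish_args(alias_file_text):
    """Mirror the startup script: strip whitespace, drop blank lines,
    and prepend the argv[0] that os.execv will pass along."""
    args = ['mdns-publish-cname']
    for line in alias_file_text.splitlines():
        line = line.strip()
        if line:
            args.append(line)
    return args

print(build_publish_args("dashboard.home.local\n\nnode-red.home.local\n"))
# ['mdns-publish-cname', 'dashboard.home.local', 'node-red.home.local']
```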
Save that script somewhere (I saved mine at `/home/pi/scripts/publish-mdns-aliases.py`), make it executable with `chmod +x`, and run it as a test. Make sure the output is what you expect, that you can ping each of your aliases while the script is running, and that you can no longer ping them once you quit the script with ^C.
Now we can simplify the service file:
[Unit]
Description=Avahi/mDNS CNAME publisher
After=network.target avahi-daemon.service
[Service]
User=pi
Type=simple
WorkingDirectory=/home/pi
ExecStart=/home/pi/scripts/publish-mdns-aliases.py
Restart=no
PrivateTmp=true
PrivateDevices=true
[Install]
WantedBy=multi-user.target
Save it in your home directory as `mdns-publisher.service`, then run:

```
chmod 755 mdns-publisher.service
sudo chown root:root mdns-publisher.service
sudo mv mdns-publisher.service /etc/systemd/system
sudo systemctl enable mdns-publisher
```
Now we’ll start it up and monitor its output:

```
sudo systemctl start mdns-publisher.service && sudo journalctl -u mdns-publisher -f
```
The output should look quite similar to what you saw earlier when you ran the script directly.
I promise that was the hard part. Now that we’ve got a persistent way to give our Pi more than one mDNS name, we can move on to the other half of this task.
If I type `node-red.home.local` into my browser’s address bar, what do I want to happen?

1. The browser should resolve `node-red.home.local` to `192.168.1.99`. (We just proved it.)
2. The Pi should respond with the site that’s otherwise reachable at `home.local:1880`.

This is more or less a reverse proxy, so let’s attempt to handle this with nginx.
```
sudo apt install nginx
cd /etc/nginx/sites-available
```
Nginx allows modular per-site configuration in a similar way to Apache: inside its config directory there are directories called `sites-available` and `sites-enabled`. You can define a config file in `sites-available`, then symlink it into `sites-enabled` to enable it.

```
sudo nano node-red.conf
```
Now we’ll look at the nginx documentation for a few minutes and throw something together. No, I’m kidding; that’s a very funny joke. In truth, we’ll google “nginx proxy config file example” or something like that, click on results until we find something that’s close to what we want, then tweak it until it’s exactly what we want.
server {
listen 80; # for IPv4
listen [::]:80; # for IPv6
server_name node-red.home.local;
access_log /var/log/nginx/node-red.access.log;
location / {
proxy_pass http://home.local:1880;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_cache_bypass 1;
proxy_no_cache 1;
port_in_redirect on;
}
}
Let’s call out some stuff:
The server_name
directive means that this server
block should only apply when the Host
request header matches node-red.home.local
.
location /
will match any request for this host, since it’s the only location
directive in the file. Every path starts with /
, and nginx will use whichever block has the longest matching prefix.
proxy_pass
will transparently serve the contents of its URL without a redirect.
We use proxy_set_header
to make sure that the software serving up the proxied site sees a Host
header of node-red.home.local
; if we omitted this, it’d see a Host
header of home.local
. This one is tricky; it may not always be necessary, and in some cases might even break stuff.
Remember that you’re forwarding to a different web server in this example. Consider what that web server would expect to see, if anything, and whether it would behave incorrectly if Host
were different from its expectation.
Node‐RED seems to serve everything up with absolute or relative URLs, and therefore doesn’t need to care about its own hostname. I’m leaving this line in because it’s easier to keep it around (and comment it out if you don’t need it) than to have to look it up in the cases where you do need it.
I use Node‐RED to make some data available via WebSockets, so we also have to make sure nginx can handle those requests. First, we make a point to pass along any Upgrade
and Connection
headers; by default they wouldn’t survive to the next “hop” in the chain, but WebSockets use those headers to switch protocols. We skip the proxy cache because Google told me to.
Anything I didn’t explain just now is something I don’t fully understand but am too nervous to remove just in case it’s important.
Anyway, let’s save this file and return to the shell.
Now we create our symbolic link:
cd ../sites-enabled
sudo ln -s ../sites-available/node-red.conf .
Run an ls -l
for sanity’s sake and make sure you see your symlinked node-red.conf
. If you’re really paranoid, run cat node-red.conf
and make sure you see the contents of the file.
Now we’ll restart nginx and monitor the startup log:
sudo systemctl restart nginx && sudo journalctl -u nginx -f
You want to watch the logs after you enable a site because, if you’re like me, you will have screwed up the config somehow, even if you directly copied and pasted it from someplace and barely changed it. If nginx can’t parse the config file, it’ll explain what you did wrong.
Did it work? Type node-red.home.local
into your address bar and find out:
Beautiful. Buoyed by this success, we’ll create similar files in sites-available
called homebridge.conf
and zigbee2mqtt.conf
, and point them to their corresponding ports.
The last one is a hell of a lot easier than this: my dashboard site is just a JAMstack (ugh) app that requires nothing more complicated from the server than the ability to serve static files. So here’s what dashboard.conf
looks like:
server {
listen 80;
listen [::]:80;
server_name dashboard.home.local;
root /home/pi/dashboard;
index index.html;
}
Of course, make sure that root
points to the actual place where your flat files can be found. That should take care of it. If you use something like create-react-app
to build and deploy these flat files, it’ll just work as‐is.
I won’t do a deep‐dive here, but Traefik Proxy is a good option if you prefer to containerize things. If you were already running these various services in different Docker containers, you could map them to the URLs you want by adding labels like traefik.http.routers.router0.rule=Host(`node-red.home.local`)
to those containers. Traefik will handle the rest.
I use Traefik on my other Pi server, the one that handles tasks other than home automation. After some experience with both approaches, I’ve decided that containerization involves an equal number of pain points, but in new and interesting places.
No, that’s a joke. This is a one‐parter. I wrote this down mainly so I can refer back to it in a couple years after I’ve forgotten all this stuff, but if you’ve made it this far, you probably also found it helpful or interesting. Let’s see if I can’t write a few more things like that.
In this article — the finale to the series, I promise — you’ll actually install your laundry spy and calibrate it to your particular washer and dryer. Once you’ve got them working reliably, I’ll show you a few ESP8266 libraries that you can drop into this sketch (or others) if you crave more features.
We haven’t actually installed the hardware near your washer and dryer yet because it’s not much fun to write code while sitting in your laundry room. It’s loud and stuffy and the ergonomics are all wrong. But now that we’ve demonstrated that the sketch is working well enough to measure vibration, it’s time to subject the spy to some real‐world data.
The ideal place to install your laundry spy is on the wall behind your washer and dryer at an equal distance from both. I tend to use velcro tape for applications like these where I might need to detach the thing later.
If you’d rather not attach anything to your walls — godspeed to the fellow renters out there trying to make their homes smarter in non‐destructive fashion — then you can mount the head unit on one of your actual machines.
The two sensors themselves should be attached firmly to their machines, but it doesn’t really matter where. Put them wherever the tether will allow them to reach and where they won’t interfere with operation. I’ve got my sensors attached to the back corner of each machine nearest the spy itself.
I mounted mine with foam mounting tape, but VHB tape would work just as well. Hell, based on my quick research, you could even use a glued‐on magnet to keep the sensor case in place on the back of each machine.
The spy will get power from the USB micro port we incorporated into the design. You’ve probably got an extra wall wart charger sitting in a drawer somewhere, and if you don’t, cheap ones are available all over the place. The maximum power draw of an ESP8266 is under the 500mA you’d get from a powered USB port on your computer, and any AC-to-USB charger will provide at least that much current, so you don’t even have to be choosy. A spare Raspberry Pi power supply would also more than suffice.
As soon as it’s plugged in, the spy should run its firmware. Within a few seconds it’ll connect to WiFi and start broadcasting over MQTT.
The firmware publishes to four different MQTT topics — two each for the washer and dryer.
laundry-spy/washer/state
and laundry-spy/dryer/state
publish the integer value of the machine’s current state: 0
for Idle, 1
for Maybe On, 2
for On, 3
for Maybe Done, and 4
for Done. This happens as soon as the state changes.
laundry-spy/washer/force
and laundry-spy/dryer/force
publish vibration scores. Not every vibration score; after all, we sample the score as often as 20 times each second. Instead, roughly every two seconds we’ll publish the most recent vibration score we got.
The purpose of publishing the force data directly is to help us get a sense of what vibration scores we can expect in various states. In this spirit, let’s listen in on the washing machine. Did you install mosquitto in Part 2? Run this from the command line:
mosquitto_sub -v \
-h YOUR_MQTT_SERVER_IP -p YOUR_MQTT_SERVER_PORT \
-t "laundry-spy/washer/force"
Do this when your washer is idle as a mere sanity check. You should get values very close to zero.
laundry-spy/washer/force 0.01
laundry-spy/washer/force 0.01
laundry-spy/washer/force 0.01
laundry-spy/washer/force 0.01
laundry-spy/washer/force 0.00
laundry-spy/washer/force 0.01
laundry-spy/washer/force 0.00
If you see an occasional spike, that’s fine. Our firmware should recognize those as false positives.
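Incidentally, the roughly-every-two-seconds publishing cadence you’re watching is plain throttling: sample as fast as you like, publish only when enough time has passed. Here’s a toy Python sketch of that logic (the real firmware does this with millis(); the names here are invented):

```python
PUBLISH_INTERVAL_MS = 2000  # publish roughly every two seconds

class ForcePublisher:
    def __init__(self):
        self.last_publish_ms = 0
        self.published = []  # stand-in for an MQTT client

    def update(self, now_ms, score):
        # Called for every sample (up to ~20x per second), but only the
        # most recent score from each two-second window gets published.
        if now_ms - self.last_publish_ms >= PUBLISH_INTERVAL_MS:
            self.published.append((now_ms, score))
            self.last_publish_ms = now_ms

pub = ForcePublisher()
for t in range(0, 5000, 50):  # a sample every 50 ms for five seconds
    pub.update(t, 0.01)
print(len(pub.published))  # 2 publishes: at t=2000 and t=4000
```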
When you start a load of laundry, run that command again. Here’s what mine looks like when the washer is on:
laundry-spy/washer/force 0.03
laundry-spy/washer/force 0.07
laundry-spy/washer/force 0.06
laundry-spy/washer/force 0.03
laundry-spy/washer/force 0.14
laundry-spy/washer/force 0.02
laundry-spy/washer/force 0.10
laundry-spy/washer/force 0.10
laundry-spy/washer/force 0.08
laundry-spy/washer/force 0.10
laundry-spy/washer/force 0.19
laundry-spy/washer/force 0.03
laundry-spy/washer/force 0.08
laundry-spy/washer/force 0.15
laundry-spy/washer/force 0.08
laundry-spy/washer/force 0.05
# ...
The vibration scores you see when the machine is running should be, well, higher than the ones you got when it was idle. Sure, there are a couple of outliers in there; that 0.02
score is probably something we’d get once in a while when the machine is idle. But that’s why we wrote code that’s robust enough to consider scores over a longer window of time. A single high score won’t flip the machine into the On state, and a single low score won’t flip it back to Idle.
So you’ve got two kinds of scores: the sort you’ll get when your machine is idle, and the sort you’ll get when your machine is running. Use these to determine your threshold: pick a value that would routinely get exceeded when the machine is on, but not when it’s idle. In my case I picked 0.08
.
You might need to come back to tune this value later. If you get through an entire cycle without the spy noticing and switching to the On state, your threshold is probably too high. If you get notifications telling you a cycle has finished, yet you have no memory of putting clothes in the washer or dryer, then your threshold is probably too low, or else you took Ambien and had a spate of household productivity right before you fell asleep.
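If you’d rather compute a starting point than eyeball one, here’s one possible heuristic — my own invention, not part of the firmware: split the difference between the worst idle spike and a typical running score.

```python
# Sample scores collected with mosquitto_sub while idle and while running.
idle =    [0.01, 0.01, 0.00, 0.01, 0.02, 0.00, 0.01]
running = [0.03, 0.07, 0.06, 0.14, 0.02, 0.10, 0.08, 0.10, 0.19]

def suggest_threshold(idle_scores, running_scores):
    # Halfway between the worst idle spike and the median running score.
    ceiling = max(idle_scores)
    typical = sorted(running_scores)[len(running_scores) // 2]
    return round((ceiling + typical) / 2, 2)

print(suggest_threshold(idle, running))  # → 0.05
```

Treat the result as a first guess and tune from there, just as described above.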
Speaking of false positives… while the washer is running, let’s check on the vibration scores for the dryer.
mosquitto_sub -v \
-h YOUR_MQTT_SERVER_IP -p YOUR_MQTT_SERVER_PORT \
-t "laundry-spy/dryer/force"
laundry-spy/dryer/force 0.01
laundry-spy/dryer/force 0.03
laundry-spy/dryer/force 0.01
laundry-spy/dryer/force 0.00
laundry-spy/dryer/force 0.02
laundry-spy/dryer/force 0.04
laundry-spy/dryer/force 0.00
These scores are low, but not as low as they’d be if the washer were idle. Some of the washer’s vibration is transferring to the dryer through the floor. (If your machines are sitting on a slab foundation, this probably won’t be a big factor, but our laundry room is in the back of a house built on a pier‐and‐beam foundation.) You want your threshold to be high enough that activity from the adjacent machine won’t generate a false positive.
My dryer’s threshold is 0.12
because it rumbles more violently than my washing machine — but I could probably move it up or down a bit without making a practical difference to its behavior. In fact, this version of the laundry spy has been running concurrently with the proof‐of‐concept version I made last year. Though the two machines vary in their accelerometer hardware and in the exact way they measure vibration changes, they behave almost identically, and notify me about finished laundry cycles at nearly the exact same time.
Once you decide on good values for your washer threshold and dryer threshold, put them back into the sketch as the new values of WASHER_THRESHOLD
and DRYER_THRESHOLD
, then flash the new firmware onto the spy the way we did in Part 3.
The next time you do laundry, you can take the laid‐back approach and simply wait to see if you get a notification from Pushover… or you can spy on the machine’s MQTT feeds before and during the beginning of the cycle to see if it’s switching states the way we expect.
Just before you start a cycle, subscribe to both the state
and force
topics for your machine by using the #
wildcard:
mosquitto_sub -v \
-h YOUR_MQTT_SERVER_IP -p YOUR_MQTT_SERVER_PORT \
-t "laundry-spy/washer/#"
(This is why we’ve been using the -v
switch this whole time; without “verbose” mode, you won’t see the name of the topic in the output.)
Since we set the “retain” flag when we publish a machine state, you should immediately get back a response:
laundry-spy/washer/state 0
The 0
corresponds to an Idle state. So far, so good.
Go turn your washing machine on, but leave the mosquitto_sub
command running.
By the time you get back to your computer, the state
value should change to 1
(Maybe On), because it picked up on the vibrating machine almost instantly. (If this didn’t happen, look at the force
readings being published and make sure they’re in the range that you expect. It’s possible that your threshold is way too high.)
After another 30 seconds, state
should change to 2
(On). (If it goes back to 0
instead, that means at least three seconds elapsed without your threshold being exceeded. It’s possible that your threshold is slightly too high.)
If you get all the way to the On state, congratulations! Your spy will probably work just fine.
Monitoring it all the way to the Done state is possible, albeit a bit boring. Leave the command running in the background while you do other things. When you hear the cycle finish, verify that the state changed to 3
(Maybe Done). After about five minutes, it should change to 4
(Done) and then straight back to 0
(Idle).
(If the spy won’t progress to 3
, or will get to 3
but never to 4
, it means that your threshold is too low, and is being exceeded on a consistent basis even when your machine is idle.)
If all of this happens but you still don’t get a notification, then the problem is somewhere in your Node‐RED workflow, and you should test the Pushover integration by itself.
I want to revisit that workflow just so we all understand what’s going on. This is what I had you import in Part 2. You can double‐click on each node if need be to make sure it’s doing exactly what I say it’s doing.
The first node subscribes to laundry-spy/washer/state
and laundry-spy/dryer/state
, just like we’ve been doing manually with Mosquitto.
The next node filters out every message except those whose payload is 4
(our value for the Done state).
The node after that composes a message with topic
and payload
values corresponding to the title and text of the Pushover notification we want to send. Then it’ll forward that message along to…
…the Pushover node, which delivers the notification to our phones.
Easy, right?
If you wanted to build a laundry sensor with a minimal amount of effort, you’d probably be reading someone else’s tutorial right now.
On one hand, it sometimes bothers me that most Arduino‐esque tutorials I read — on Instructables and similar sites — are written in a rigid, recipe‐like fashion: do these exact things, then run this exact code. On the other hand, recipes do have their place — if you follow a recipe you’ll at least end up with edible food when you’re done.
My first version of the laundry spy didn’t need a local MQTT server — it published data to Adafruit IO, and then explicitly triggered a Zapier workflow via webhook when a cycle was done.
Ultimately, I decided to bring those functions in‐house when I began to do other IoT things. I wanted all my home automation logic in one place, and I didn’t want that place to be somewhere in the cloud when I could do it just as easily myself without a monthly fee.
Right now, the laundry spy is just one of about ten different devices in my house that communicate over MQTT. Node‐RED isn’t just my gateway to Pushover; it’s how I tell my backyard string lights to turn off at 10pm every night, and it’s how I tell my nightstand fan to turn off if it’s been on for at least three hours.
So this four‐part series wasn’t just about laundry. If you’ve read this far, you’re probably interested in doing other home automation–type things around the house, in which case the stuff we did in Part 2 isn’t overengineering; it’s just prudent foundational prep work.
At this point, you’ve got a pretty solid, straightforward firmware that does a small number of things well, and which you can likely leave running indefinitely without any trouble. The WiFi
library will automatically attempt to reconnect if your WiFi drops out, and losses of power are no big deal because the firmware will just start from scratch whenever it regains power.
When the spy has failed to notify me, it’s almost always been the fault of my IoT server, rather than the spy itself. In addition to Node‐RED, I’m running Homebridge (for HomeKit integration) and Redis (for various tasks that require persistence), and something I’m doing is causing either a hard freeze or a loss of network connectivity every couple weeks. So far I’ve been too lazy to hook up the headless machine to a display to diagnose it.
You might want to add some stuff to this sketch for your own convenience — in fact, the first two I’ll talk about are things that I left out of the sketch for simplicity’s sake.
ESP8266s are starting to supplant Arduinos around my house even for tasks that don’t absolutely require internet connectivity, and it’s for one major reason: built‐in networking means firmware updates are easy. ArduinoOTA makes this possible: it’ll let you flash new firmware over the air instead of through a serial connection.
This is a godsend when you’ve got ten oddball devices doing things around your house and you’ve got to change something on each one. The caveat is that this process works only for sketches that are in correct working order; if your sketch is crashing, or otherwise not hitting its intended code path, you’ll have to tether to it the old‐fashioned way.
ArduinoOTA is built into the ESP8266/Arduino core, and its API is dead simple. Observe:
#include <ArduinoOTA.h>
void setup () {
// other setup code, then...
ArduinoOTA.setHostname(HOST);
ArduinoOTA.onStart([]() {
Serial.println("[OTA] Starting update...");
});
ArduinoOTA.onEnd([]() {
Serial.println("[OTA] ...update finished.");
});
ArduinoOTA.onError([](ota_error_t error) {
Serial.printf("OTA: Error[%u]: ", error);
if (error == OTA_AUTH_ERROR) Serial.println("Auth Failed");
else if (error == OTA_BEGIN_ERROR) Serial.println("Begin Failed");
else if (error == OTA_CONNECT_ERROR) Serial.println("Connect Failed");
else if (error == OTA_RECEIVE_ERROR) Serial.println("Receive Failed");
else if (error == OTA_END_ERROR) Serial.println("End Failed");
});
ArduinoOTA.begin();
}
void loop () {
// other loop code, then...
ArduinoOTA.handle();
}
That’s it.
The Arduino IDE will list network update targets in the Tools → Port menu (under the “Network Ports” heading). And PlatformIO users can specify an update address in their platformio.ini
file or via command‐line switch.
ArduinoOTA relies on mDNS, which Macs support natively, and which Linux machines support through Avahi (which is bundled by default in many distros, and usually installable via a package manager otherwise). Windows users can install Bonjour for Windows.
Oh, have I not talked about mDNS yet?
mDNS, otherwise known as Bonjour, is part of the glue that makes IoT easy. Instead of having to know each other’s IP addresses — and instead of making you implement your own home network DNS — it lets machines on the same network multicast the names they want to be called and advertise the services they provide. Computers also use mDNS to auto‐discover things like printers and scanners on your network.
Good news: the ESP8266/Arduino core comes with built‐in mDNS broadcast support. Our firmware declares a HOST
constant (laundry-spy
by default) and we advertise ourselves on our network by that name through a call to MDNS.begin(HOST)
.
Bad news: the built‐in support is limited to mDNS broadcast, not mDNS resolution. When configuring MQTT, we had to tell the laundry spy what our MQTT server’s IP address was, even though from my Mac I can refer to that same machine as home.local
when using SSH.
Because hard‐coding IP addresses creates work for your future self, consider using the mDNSResolver library instead. It’s a focused library that can resolve mDNS‐style .local
domains without any of the service discovery stuff you don’t need.
Here’s how I use it in my sketches:
#define MQTT_SERVER "home.local"
#include <WiFiUdp.h>
#include <mDNSResolver.h>
WiFiUDP udp;
mDNSResolver::Resolver mdnsResolver(udp);
char serverName[26];
void MQTT_resolve () {
IPAddress ip;
if ( ip.fromString(MQTT_SERVER) ) {
// We were given an IP address.
strcpy(serverName, MQTT_SERVER);
} else {
// We need to resolve this value via mDNS to get an IP.
ip = mdnsResolver.search(MQTT_SERVER);
if (ip == INADDR_NONE) {
// We can’t resolve this address via mDNS. It might point to an
// external server, so just copy it over to `serverName` and make it
// someone else’s problem.
Serial.print("Couldn't resolve mDNS server: ");
Serial.println(MQTT_SERVER);
strcpy(serverName, MQTT_SERVER);
} else {
// We have an IP address. PubSubClient expects it as a string, though.
strcpy( serverName, ip.toString().c_str() );
}
}
}
void MQTT_connect() {
Serial.print("Connecting to MQTT... ");
// Have we resolved our server name yet?
if (strlen(serverName) == 0) {
MQTT_resolve();
client.setServer(serverName, MQTT_SERVER_PORT);
}
// ...
}
When we first try to connect to the MQTT server, we’ll attempt to turn our configured MQTT_SERVER
constant into an IP address; if that doesn’t work, we’ll just assume it’s not on our home network, in which case it’s a DNS server’s job to turn it into an IP address later on. We only have to do this work once.
Hey, your ESP8266 can serve up pages over HTTP! And the API is even pleasant to use; if you’ve ever used Sinatra or Flask, you’ll feel at home.
#include <ESP8266WebServer.h>
ESP8266WebServer server(80);
static const char HELLO_WORLD[] PROGMEM = {
"<!DOCTYPE html>"
"<html><head>"
"<title>Hello world!</title>"
"</head><body>"
"Hello world!"
"</body></html>"
};
char tempJsonString[101];
void setup() {
server.on("/", HTTP_GET, []() {
// Send a string straight from PROGMEM.
server.send_P(200, "text/html", HELLO_WORLD);
});
server.on("/settings/get", []() {
int someValue = getSomeValue(); // [pretend this is real]
snprintf(tempJsonString, sizeof(tempJsonString), "{ \"foo\": %d }", someValue);
server.send(200, "application/json", tempJsonString);
});
server.begin();
}
void loop() {
server.handleClient();
}
But! Don’t go overboard here.
Serving HTTP is glorified string‐building, and that’s not what embedded devices excel at. If you want to put a fancy interface on your data, consider writing a web app that lives on your IoT server and communicates with your device via MQTT or something. Remember that your device can subscribe to MQTT as well as publish to it — some ESP8266 tools use this method to expose an API for changing device state and/or settings.
I don’t want to overstate it — your ESP8266 can serve up even complex responses as long as you’re smart about how you build your strings. Like if you know your response will never be larger than X characters, you can allocate the space for that string once and then reuse it every time.
If you want your device to be able to parse and generate arbitrary JSON, and I haven’t talked you out of it, then take a look at the ArduinoJson library, and strongly consider buying the exhaustive guide as a way of supporting its author. It’s really quite nice to work with.
Go wash your clothes.
Well, I’ll probably take a break before I start another series like this, but maybe next time I’ll talk more about what I’ve been doing with Node‐RED. I’ve found that it excels at turning one sort of interface into another: making MQTT data available via web sockets, wrapping an HTTP API around a proprietary smart switch, and so on.
Maybe I’ll even write the whole thing ahead of time so I can tell you at the beginning how much reading I’m making you do.
In Part 1, we built the hardware for our laundry spy, combining two cheap accelerometers with a cheap ESP8266 module to make something capable of sensing vibration and communicating over WiFi.
In Part 2, we set up a home automation server with Node‐RED that’s capable of receiving messages over MQTT and pushing notifications to our phone via Pushover.
Today, in Part 3, we’ll bridge the gap: it’s time to write the firmware for the hardware we made in Part 1. It’ll turn the raw acceleration data into determinations about when our machines are running and report its findings over MQTT to our home automation server.
If you’ve never worked with an Arduino or similar microcontroller before, I strongly suggest that you start with a simpler project to get your feet wet. Blinking an LED might not be exciting, but at least it’s not baptism by fire.
Thanks to the work of some really smart people over the last couple of years, the same toolchain that’s used to program Arduinos can be used to program an ESP8266. In fact, most libraries written for Arduino hardware can work on ESP8266 with little or no modification.
If you are comfortable with the Arduino IDE, your best bet is to follow this SparkFun guide for installing the ESP8266 addon. This will allow you to treat a plugged‐in ESP8266 board (like the NodeMCU we’re using) just like an Arduino, with all the features you’re used to: one‐click uploading, built‐in serial monitor, and the like. (If you have no experience with Arduino or ESP8266, this is still probably the best option.)
If you prefer to use your own IDE, you’ll likely be much happier using PlatformIO. The website emphasizes the integrations with Atom and VSCode, but in my mind its real upside is the IDE‐agnostic command‐line tooling so that you can upload and debug from the terminal regardless of your editor. I installed it via pip
…
sudo pip install platformio
…but there are other installation methods, including Homebrew and an ordinary installer script.
Wow, that was tedious! No wonder I put off this writeup until Part 3.
But back to the actual software we’re trying to write. Where do we even start with this? What should we think about before we decide how to architect this code?
The sensor we’re using features three‐axis detection. I don’t want three values; I want one value that quantifies how much the machine is vibrating.
Hypothetically, let’s hook an accelerometer up to a microcontroller and then place it so it’s resting flat on a table. If we were to ask it for its acceleration values, it would tell us something like
X: 0.0
Y: 0.0
Z: 1.0
because the Z‐axis is receiving all of gravity’s pull. (The unit here is gs, where 1g = the amount of acceleration imposed by gravity on Earth at sea level).
If we were to flip the sensor up on its side and stand it perfectly straight somehow, then we’d get something like
X: 0.0
Y: 1.0
Z: 0.0
because a different axis is now feeling the pull of gravity. You get the idea.
This is useful for stuff like orientation detection, of course, but I don’t need orientation detection. I don’t actually care about gravity at all. I just care about the acceleration imposed by the washing machine, and how much it changes over time. But how can I remove gravity from the math without mandating that the sensor be oriented a certain way?
Let’s try this: I’ll take an initial force reading on each axis for each machine. We’ll treat that as the baseline against which all other force readings will be compared. Then, when I want an update, I’ll take the current acceleration value on each axis and calculate how much it varies from our initial reading for that axis. It’ll look something like this:
// (assume `accel` is an already-initialized accelerometer driver)
float initialX;
float initialY;
float initialZ;
void setup () {
initialX = accel.getX();
initialY = accel.getY();
initialZ = accel.getZ();
}
void loop () {
// (`fabs` gives the absolute value of a float)
float deltaX = fabs(accel.getX() - initialX);
float deltaY = fabs(accel.getY() - initialY);
float deltaZ = fabs(accel.getZ() - initialZ);
// Distill the deltas down into one value representing
// net vibration.
float force = deltaX + deltaY + deltaZ;
// ...
}
Now we’ve got a way to turn the three‐dimensional data from the accelerometer into a simple vibration score. At any point, we can retrieve the score as a way of asking, “how much is my washer or dryer vibrating right now?” If the answer is “a little,” we’ll get back a small number like 0.03
, and if it’s “quite a lot indeed,” maybe we’ll get something like 0.5
. Without trying it, I don’t know how much the value we get during vibration will vary from the nearly‐zero value we get when the appliance is idle, but I’m not expecting big acceleration changes from a machine that needs two people to lift.
Now, a single force reading by itself won’t tell us much. One force spike could be triggered by, say, running down the hall, or briefly standing on the machine while fishing a lightbulb out of the cabinet above. (Don’t do this.)
But the vibration of a washer or dryer is caused by steady and consistent oscillation. If I measure acceleration several times a second, the scores I get back will probably be all over the place — because each time I ask I’m catching the machine at a different point in the oscillation.
Suppose this is a graph of vibration scores over time. The red line is how I imagine the vibration score of an idle washing machine: nearly zero, with occasional spikes that are caused by red herrings. The blue line is how I imagine the vibration score of a washing machine in a cycle: oscillating, with consistent and predictable spikes.
The orange line is a hypothetical score threshold that lets us distinguish these two scenarios. To detect an active machine, we don’t just want a vibration score that exceeds the threshold once; we want one that exceeds that threshold regularly over some window of time.
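In practice, the firmware’s test boils down to “did any sample beat the threshold within the last few seconds?” A toy version of that check (an illustration only, with invented sample data):

```python
def exceeded_recently(samples, threshold, window):
    # True if any of the last `window` samples beat the threshold.
    return any(s > threshold for s in samples[-window:])

idle    = [0.01, 0.01, 0.30, 0.01, 0.00, 0.01]  # one red-herring spike, long ago
running = [0.03, 0.12, 0.08, 0.14, 0.02, 0.10]  # consistent recent activity

THRESHOLD = 0.08
print(exceeded_recently(idle, THRESHOLD, 3))     # False: the spike aged out
print(exceeded_recently(running, THRESHOLD, 3))  # True
```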
I like state machines for a few reasons: they force you to enumerate every situation your device can be in, and they keep the “what do I do next?” logic explicit and easy to follow.
So we’ve got three definitive states — Idle, On, and Done. We’ve also decided that we can’t go from Idle straight to On based on a single reading — and that’s also true for transitioning from On to Done — so we’ve also got two in‐between states we use when we think the state has changed but we’re not yet sure.
The visualization helps us realize that a state machine is a good approach. We know which states to draw arrows between, and we have a good idea of how each state transition will get triggered. We can also see that this is naturally a “modal” situation; our “what do I do next?” logic is almost entirely dependent on what state we’re currently in.
Here’s how I’d translate the above brain dump into code:
// How long (in milliseconds) do we have to be MAYBE_ON before we decide we’re
// actually ON?
#define TIME_UNTIL_ON 30000
// How long (in milliseconds) do we have to be MAYBE_DONE before we decide
// we’re actually DONE?
#define TIME_UNTIL_DONE 90000
// How long (in milliseconds) of a vibration lull would convince us that the
// machine isn’t really ON and we should return to IDLE?
#define TIME_WINDOW 3000
enum ApplianceState {
IDLE, // Nothing is happening.
MAYBE_ON, // Recent vibration, but we're not sure yet if it means anything.
ON, // Consistent vibration for a while; we're on.
MAYBE_DONE, // Vibration stopped very recently. Are we done?
DONE // Vibration stopped a while ago. We're definitely done.
};
class Appliance {
private:
LIS3DH accel;
// When were we last in the idle state? (unsigned long to match millis())
unsigned long lastIdleTime = 0;
// When did the vibration score last exceed our threshold, regardless of state?
unsigned long lastActiveTime = 0;
// The last vibration score.
float force = 0.0;
// The threshold for this machine.
float threshold;
// The original acceleration readings for each axis.
float initialX;
float initialY;
float initialZ;
// The most recent acceleration readings for each axis.
float lastX;
float lastY;
float lastZ;
void readAccelerometer() {
float total = 0;
lastX = accel.readFloatAccelX();
lastY = accel.readFloatAccelY();
lastZ = accel.readFloatAccelZ();
total += fabs(lastX - initialX);
total += fabs(lastY - initialY);
total += fabs(lastZ - initialZ);
force = total;
}
public:
// The name of the appliance ("Washer" or "Dryer").
String name;
ApplianceState state;
void setup () {
// NOT PICTURED: Accelerometer setup.
// Take our initial force readings.
initialX = accel.readFloatAccelX();
initialY = accel.readFloatAccelY();
initialZ = accel.readFloatAccelZ();
}
void setState (ApplianceState s) {
state = s;
// NOT PICTURED: Publishing the state via MQTT when it changes.
}
void update () {
readAccelerometer();
unsigned long now = millis();
if (force > threshold) {
lastActiveTime = now;
}
// Did we exceed our threshold at any time in the last three seconds?
bool wasRecentlyActive = (now - lastActiveTime) < TIME_WINDOW;
switch (state) {
case IDLE:
if (wasRecentlyActive) {
setState(MAYBE_ON);
} else {
lastIdleTime = now;
}
break;
case MAYBE_ON:
if (wasRecentlyActive) {
// How long have we been in this state?
if (now > (lastIdleTime + TIME_UNTIL_ON)) {
// For a while now! We must be in a cycle!
setState(ON);
} else {
// Wait and see.
}
} else {
// No vibration in the last three seconds. False alarm!
setState(IDLE);
}
break;
case ON:
if (wasRecentlyActive) {
// This matches our expectation, so we must be in the right state.
} else {
// We stopped vibrating. We might be off.
setState(MAYBE_DONE);
}
break;
case MAYBE_DONE:
if (wasRecentlyActive) {
// We thought we were done, but we’re vibrating again. False alarm!
setState(ON);
} else if (now > (lastActiveTime + TIME_UNTIL_DONE)) {
// We’ve been in this state for a while now. We must be done with a cycle.
setState(DONE);
}
break;
case DONE:
// Nothing to do except reset now.
setState(IDLE);
break;
}
}
};
This isn’t the whole sketch, or even the whole Appliance class; it’s just the parts having to do with state logic. But hopefully it’s enough to paint a picture. The washer and the dryer share most of their code through being instances of an Appliance class. We’ll use instance members for the things that will vary between the two machines, like so:
// We've got two sensors at two different I2C addresses.
LIS3DH accelWasher(I2C_MODE, 0x19);
LIS3DH accelDryer(I2C_MODE, 0x18);
// We can pass each sensor instance into an `Appliance` constructor along with
// the appliance name, the MQTT topics we want it to use, and a vibration
// threshold.
Appliance washer("Washer", "/laundry/washer/state", "/laundry/washer/force", accelWasher, 0.12);
Appliance dryer("Dryer", "/laundry/dryer/state", "/laundry/dryer/force", accelDryer, 0.08);
By convention, Arduinos run a setup function when your sketch starts running, then a loop function over and over indefinitely. Here, we can instantiate the two Appliances globally, do any setup work in the setup function, then update each one in the loop function:
void setup () {
washer.setup();
dryer.setup();
}
void loop () {
washer.update();
dryer.update();
}
Here’s the whole sketch as a gist. Look it over and fill in your own values in the “config” section: host name for the spy, IP address of your MQTT server, and so on.
Pick whatever values you want for WASHER_THRESHOLD and DRYER_THRESHOLD, or just keep them as they are for now. In the next installment you’ll be monitoring the force data reported by your own laundry spy in order to figure out good thresholds for your specific machines.
We’re relying on libraries to do a lot of the work here. Some of them are built into the Arduino/ESP8266 toolkit and some of them require their own installation. Here are the ones you’ll need to install:
platformio lib install PubSubClient
platformio lib install "Sparkfun LIS3DH Breakout"
platformio lib install SimpleTimer
Every other library we use in this sketch comes built‐in with the ESP8266/Arduino integration.
Click on that ✔ button in the Arduino IDE’s toolbar (or run platformio run from your project root if you’re on PlatformIO). After a long meditation, it’ll decide whether your project compiles. If it doesn’t, then I’ve done poorly in this tutorial and missed a necessary step, or else you’ve made major changes to the sketch and introduced errors.
Hook up your hardware to your computer via the USB micro port on the side. Once plugged in, it ought to show up in the Arduino IDE under Tools → Ports. The CP2104 chipset will have a name containing SLAB_USBtoUART, whereas a CH340G chipset will have a name containing wchusbserial. (PlatformIO users don’t have to choose a port unless they’ve got more than one serial device hooked up at once; it’ll figure out which one is correct.)
Now upload the sketch with the → button in the Arduino IDE, or by running platformio run -t upload. Twiddle your thumbs for a minute as the upload proceeds. When it’s done, go to Tools → Serial Monitor (PlatformIO: platformio device monitor) to view the serial output. Verify that the sketch is connecting to your WiFi network and finding your MQTT server, and that it can read from the two accelerometers and turn that data into appliance states.
If all of this is working so far, then all we’ve got left is to tune your washer and dryer to figure out good force threshold values, and then iron out the inevitable complications. Part 3 is long enough already, so let’s save it for the big finale.
Next time we’ll be analyzing the vibration data your washer and dryer report so that you can tune your spy for accuracy. Once everything is working properly, we’ll also look at some other things you can throw into this sketch to make your life easier.
]]>To refresh your memory: the thing we built last time is what I’ll be calling a “laundry spy” for short. It’s an ESP8266 (an Arduino‐like microcontroller with built‐in WiFi) with two accelerometers tethered to it — one for the washing machine and one for the dryer. It will make intelligent guesses about when a load of laundry goes through those appliances based on how much vibration it detects from those two sensors.
To help the laundry spy notify me when a cycle is done, I’m going to enlist an intermediary: a Raspberry Pi running Node‐RED home automation software. This will take some work off the laundry spy’s hands; instead of needing to know how to notify my phone, it can simply report data to the Pi using a simple messaging protocol called MQTT. In effect, all the laundry spy will do is say “hey, Pi, I think the dryer’s done,” and then the Pi can decide what the next steps are.
Today I’m going to try to give you an abridged primer on Node‐RED and MQTT. The seasoned IoT veterans might want to skim this article.
Let’s go back to the beginning. How do I make my own phone buzz programmatically? I’ve got options. I could use Twilio and send my phone an SMS, for instance. But instead I landed on Pushover, which does one thing incredibly well: it shows arbitrary notifications of your choosing on your iPhone, Android phone, or computer (or all three).
It’s got an API, but odds are good that your platform of choice already has some sort of Pushover integration. Sending notifications is free; you pay a one‐time fee of $4.99 for the app that receives them on a particular platform. (For instance: spending $4.99 on iOS Pushover enables notifications across any iOS devices you own, and spending $4.99 on Pushover’s website for the desktop app enables them across any desktop computers you own.)
Notifications themselves are free as long as you don’t send more than 7,500 a month, and if you’re using Pushover just for household life hacks like these, you won’t come anywhere near this limit.
So now the problem becomes: how do we connect the laundry spy to Pushover? Well, IFTTT or Zapier would be one way to go. In fact, the very first version of the laundry spy used a Zapier workflow that was triggered by a webhook; the ESP8266 would request a specific URL with one parameter (“washer” or “dryer”) that Zapier would use to build the text of a notification.
This approach isn’t as flexible, but if you’re already an enthusiastic user of one of these services, it might be closer to your wheelhouse.
It’s even possible (if not ideal) for the ESP8266 to talk to the Pushover API directly — it can make HTTP requests, after all. Later on I’ll elaborate on why I decided against this, but it’s a legitimate option.
But if you’re like me, you’ve already got a tiny computer somewhere in your house acting as an IoT hub — and if you’re not like me, you ought to be. I’ve got a Raspberry Pi 3 in my office running Node‐RED. I love it.
Node‐RED, as the name implies, is written in JavaScript, but you don’t need to know JavaScript to use it. You can build IFTTT‐like workflows inside it with a flowchart‐like UI that makes me nostalgic for the departed Yahoo! Pipes.
If it’s not your cup of tea, or if you’ve already thrown your lot in with a different IoT platform, then the implementation details will be different, but the approach will be very similar. You’ll be able to do all this in HomeAssistant, OpenHAB, Domoticz, or even just ~50 lines of your favorite programming language running as a daemon.
OK. Fine. The goalposts move once again: how will the laundry spy talk to our IoT hub?
I don’t know how or why MQTT got itself established as the go‐to protocol for IoT things, but I do know it’s easy as hell to use. On ESP8266 you can use the PubSubClient library to connect to an arbitrary MQTT server and publish data to a certain channel that other clients can subscribe to.
Why use MQTT instead of, say, HTTP? After all, with Node‐RED you could pretty easily build a simple API for the laundry spy to make requests to. But MQTT is somehow even easier to use. I’ll illustrate.
Say I’m running an MQTT server at 10.0.0.1, and that I’ve configured the server to want a username of some_user_name and a password of some_password. (Don’t do this.) For testing purposes, I can install Mosquitto on my Mac with Homebrew (brew install mosquitto) and then publish a message to my server with a command like this:
mosquitto_pub -p 1883 -h 10.0.0.1 \
-u some_user_name -P some_password \
-t "some/channel" -m "aha" -r
That’ll publish the string aha to the channel named some/channel. Nothing will happen because nothing is subscribed to that channel, but if it doesn’t error out on you, you know that your MQTT server is up and running.
Now open a second terminal window and run this command:
mosquitto_sub -p 1883 -h 10.0.0.1 \
-u some_user_name -P some_password \
-t "some/channel"
You should immediately see the aha you sent a minute ago. (That’s what the -r flag was for: it marked the message as retained, so the broker delivers it even to clients that subscribe after the fact.)
Arrange your two terminal windows so you can see both at the same time. Then, in the first window, run:
mosquitto_pub -p 1883 -h 10.0.0.1 \
-u some_user_name -P some_password \
-t "some/channel" -m "oho"
That oho message should appear instantaneously in the second window.
Node‐RED has a specific documentation page for installation on Raspberry Pi. If you use a recent version of Raspbian, it’ll already be installed on your Pi, and you’ll merely need to schedule it to run on boot.
The visual environment of Node‐RED makes it intuitive to connect triggers to outcomes. Let’s play around a little bit.
MQTT support is built into Node‐RED. In the sidebar, under the “Input” heading, you’ll see an “MQTT” node. Drag it into the workflow, then double‐click it to configure it:
- Server: localhost, with the default port of 1883, assuming you didn’t change it.
- Topic: some/channel in our example.
This node has a handle on its right side, which means it can output data. When messages come across the configured topic, this node will pass them along to other nodes in the workflow. One node can be connected to any number of other nodes.
For Pushover integration, we need to install a plugin. Click on the hamburger menu at top‐right, select “Manage palette,” click on the “install” tab, and search for pushover. After installing the node-red-node-pushover package, click on Done and you’ll have a new node in your palette under the “mobile” heading.
Drag it into your workflow, then double‐click on it.
You’ll need to configure two things here: your Pushover user key and your application’s API token.
You can notify several different users with the same app. For instance, I’m notifying two people in my household, so I’d create two Pushover nodes — one for me and one for my girlfriend. The nodes will have different user tokens but can use the same API key. (It doesn’t matter which of us owns the app.)
Now you’ve got two nodes: one that produces output and one that accepts input. Do you see where I’m going with this? Connect the MQTT node’s output to the Pushover node’s input. The Deploy button in the corner of the screen will activate the workflow. Once it’s deployed, you can publish text to the topic you configured and make it show up on screens of your choosing!
Node‐RED workflows can be serialized and shared, so let me just dump some JSON in your lap. Here’s a workflow that describes what we want to happen. To import it, go to hamburger menu → Import → Clipboard… and paste the contents into the text field. Then you can double‐click on each node to change the values as necessary.
Some notes:
- The flow only acts on messages whose payload is 4. Why 4? Because the state machine we’ll build in part three has five states, numbered from 0 to 4, and the last state is “Done.” Just go with it for now.
- The function node hands you a msg object which represents the incoming message, and if you return an object, it’ll be sent to the next node in the flow. You can modify the existing message or build one from scratch as we’re doing here.
- The Pushover node uses the topic and payload properties of the message it receives as the title and body of the notification, respectively. We don’t need to set any of the other fields. But don’t forget to fill in your user key and API token as we discussed earlier.
Once you’ve got it configured, click on Deploy and the workflow will be live. Node‐RED will subscribe to two topics named laundry/washer/state and laundry/dryer/state. Now, we haven’t set up anything to publish to those topics yet, but that’s fine. MQTT lets you listen to a topic even if nothing has ever published to it before, and even if it never will, and that’s pretty sad and poignant if you think about it.
But I digress! You can test whether the workflow does what you expect by publishing a message to one of those channels, much like we did before:
mosquitto_pub -p 1883 -h 10.0.0.1 \
-u some_user_name -P some_password \
-t "laundry/washer/state" -m "4"
If you’re one of those annoying people who doesn’t screw anything up, running this command will make your phone buzz. If it doesn’t, then you can wade back into the workflow to find out what went wrong. There’s a “debug” node in the palette; attach it to various places in the workflow to see if something is getting stuck. The “Debug” tab in the right‐hand pane will display any messages that arrive at active debug nodes. (Once you connect a debug node to the workflow, you have to click Deploy for it to take effect. I still forget to do it half the time.)
The more pragmatic among you might think I’m overcomplicating this task. I mean, the ESP8266 can see the internet, can’t it? Why have an intermediary at all?
A few reasons:
Now that we’ve got all this infrastructure in place, we’re ready to give our laundry spy a brain. Next time we’ll do the hard work of turning raw vibration data into meaningful inferences about whether our clothes are clean. Part Three will have all the stuff I had promised would be in Part Two, so I suppose there’s also the suspense of finding out if I’ve lied to you again.
]]>After I graduated from Arduino to ESP8266, my first project last year was also my most complex to date: detecting when a washer or dryer cycle has finished and sending a notification to my phone. Other people have had this idea, and by no means am I saying that this approach is the best, but it’s the first one I tried, and it’s worked wonderfully for me.
Now that I’ve got a few more projects under my belt, I felt like taking another swing at the laundry spy, so I’m documenting the process for revision 2. This is part one of another tedious multi‐part series!
Let’s talk about the hardware.
For the uninitiated: the ESP8266 is a microcontroller with built‐in WiFi made by Espressif. The work of some kind souls over the past couple of years means that the ESP8266 has been brought into the Arduino ecosystem. You can program it with the Arduino IDE or PlatformIO, and many libraries written for the Arduino will work with the ESP8266 with little or no modification.
As if that weren’t enough, the ESP8266 is also cheaper than most Arduinos. For less than the price of an Arduino Uno you can get an ESP8266 with built‐in voltage regulation and a USB interface.
Revision one of the Laundry Spy used an Adafruit HUZZAH. But my preferred board these days is NodeMCU; even the knock‐off versions are reliable, and can be programmed without holding down a button on reset. Three bucks each!
How do we sense when the washer or dryer is in use? Several approaches will work. Current sensing? Damn near foolproof, but I don’t want to mess with 120V AC power. A photo resistor looking at an LED? Neither my washer nor my dryer lights up an LED during a cycle. The one I decided on is vibration detection. We can attach an accelerometer to each machine and read the force values several times per second; fluctuating force values signify vibration, and vibration signifies an active cycle. My washer and dryer vibrate like mad.
Revision one used the MMA8452Q accelerometer, but I went with the cheaper LIS3DH for revision two. You’ll need two of them — one for each machine.
The ESP8266 will sit inside of a small 3D‐printed box. The accelerometers will sit inside two tiny cases, each one attached to the rear of its machine. We’ll connect them to the main unit via umbilical.
Each accelerometer needs four wires: ground, 3.3V, and 2 wires for I2C (SCL and SDA). These pairs of four‐wire pigtails are perfect for the task.
One important thing: the two accelerometers need unique addresses. Take one of the boards and solder together the two pads that determine which I2C address it’ll use.
Let’s start with the accelerometers. Each one gets four pins soldered onto it. Then join it to your pigtail by crimping Dupont connectors onto the bare wire ends. Or, if you’re as bad at crimping as I am, solder the wires together with some pre‐crimped wires like these, then add a four‐connector housing block onto the end.
The accelerometer enclosures are adapted from this parametric box. We’ll get into my sloppy 3D‐printing methodology in another installment, but I’ll often start with someone else’s design, tweak it in OpenSCAD, then use TinkerCAD if I need to place some more complex features.
The ESP8266 sits in an ersatz socket on perfboard. We’ve got eight wires running into the head unit, so there are eight screw terminal blocks along one of the edges. On the bottom of the perfboard, we take each pair of terminal blocks and run it to the appropriate pin.
The perfboard is mounted into another 3D‐printed case based on another favorite design of mine from Thingiverse. I added text in OpenSCAD, recessed into the cover by a fraction of a millimeter; this lets me paint the letters into the cover later on by slathering some acrylic paint across the text and then wiping off any excess.
We’ll use a USB cable for power. I also want to be able to reprogram the MCU via USB if need be without taking it out of the case. I’ve got a bunch of these USB micro breakouts lying around; I like using them because it’s easier to incorporate it into a case design than it is to design the case so that the NodeMCU’s own USB port is accessible from the outside.
For strain relief, I used a cable gland because I had some lying around. But this is overkill; a zip tie or something similar will do.
The body of the case has nut traps at the corners. Put an M3 nut into each trap and you’ll be able to attach the cover to the body with M3 socket cap screws. If you’re careful and the contents of your box don’t exert any pressure on your cover, you can probably get away with M3 grub screws like I did; the lack of screw heads makes for a cleaner look.
Break out your multimeter and put it in continuity test mode. Test each connection from the accelerometer to the MCU; if anything fails to beep, fix it now while all your enclosures are still open.
If everything works correctly, go drink a beer and wait for part two of the series.
Believe it or not, this was the easy part. Next we’ll figure out how to take raw force data and turn it into a laundry state machine. Part two will be exactly as exciting as I just made it sound.
]]>I was very damp when I took this.
]]>I stopped because I realized that there was probably a better solution out there than “wire your own outlet and hope you got it right.” And there is. So that’s how we’ll end this eleven‐part series.
Here’s where we are:
Get an IoT relay. I found mine on Amazon, but Adafruit also has them. Or go with a Powerswitch if you can manage to find one. The goal is to find something already built that allows you to toggle an outlet based on a logic signal. (Europeans will need to find something rated for 230V AC; sorry.)
Let’s refer back to our dear friend pinout.xyz. You need one ground and one pin for logic — when set high, it’ll turn the outlet on, and when set low it’ll turn the outlet off.
The ideal solution is one of the 5V power pins — physical pin 2 or 4 — plus the ground at physical pin 6. Adjacent pins are always simpler. But physical pin 1 or 17 — the 3.3V pins — will also work, as will any other ground pin. The logic voltage can be as low as 3V.
The green module on the side of the relay can be removed. Pull it out. It’s a terminal block! Take two jumper wires, cut and strip one end of each, and put the stripped end into the terminal block and screw it down. One of those wires plugs into your voltage pin and the other plugs into your ground pin; note the + and – symbols printed on top of the relay near where the terminal block plugs in.
Plug the relay directly into the outlet box, then plug the monitor and marquee light into the two “normally OFF” outlets on the relay.
That’s it. That’s all you should need. This is the setup that’s running in my cabinet, having replaced the thing I kludged together. It’s simpler than what I had built and costs far less than what you’d pay for an Arduino Nano, two relay modules, an outlet, an outlet cover, and two free‐standing receptacles to hold everything.
When your Pi is on, the other devices should be on too; when the Pi finishes shutting down and kills its own power, the other devices should get their power cut as well. Look for the LED in the middle of the relay, too; it should light up when the Pi is on. If it’s not working, check all your connections and make sure you’re using the right GPIO pins on the Pi.
Having done all this, you might find yourself where I am right now: looking at ways to augment the cabinet you’ve already got. If, as I did, you want to make your cabinet play Dragon’s Lair and Space Ace, you can install Daphne; there’s a fine guide for that.
I’m not sure what to do next. My instinct is to add guns in order to open up a whole new genre of arcade gaming. But the technology behind light‐gun games requires a CRT monitor. MAME lets you use a mouse (or mouse‐like device) to replicate the process of aiming and shooting, and devices like the AimTrak leverage this approach.
But I’ve monkeyed around with an AimTrak for a few hours and I’m afraid it’s just not the same. The on‐screen reticle is too twitchy for light‐gun games like Area 51 and Police Trainer, and fixed gun games like Terminator 2 or Revolution X expect more precision than you can muster with a hand‐held gun. I’m punting on gun games for now; hopefully I’ll revisit the issue in a year or so and find that someone smart has found ways around these problems.
Aside from this, the main item on my wish list is better support for some of the more demanding arcade games of the late 90s. Killer Instinct was hugely popular when I was growing up, and my cabinet feels naked without it, but it doesn’t run at a playable frame rate even on a Pi 3. Same with NFL Blitz. Somewhat heretically, my favorite arcade basketball game is not NBA Jam but rather the much later NBA Showtime — which MAME doesn’t even emulate yet. For these I’ll have to wait for the next model of Pi — or for a giant step forward in MAME performance on the Pi — and perhaps longer than that.
Meanwhile, if I had infinite time and money, and if my girlfriend had infinite patience, I’d probably be building myself a Crazy Taxi cabinet. But I’ll quit while I’m ahead.
]]>It is with great ambivalence that I share this fact with you: EmulationStation stores its game metadata in a plaintext, human‐readable format known as XML. I’ll explain what XML is for those of you who don’t know. For those of you who do know, this will be like listening to someone describe the plot of Rocky V if you were unfortunate enough to have seen Rocky V yourself.
XML was handed down from the gods back in the early twenty‐aughts back when people thought that the problem with existing serialization formats was that they weren’t verbose enough. To completely neutralize the main upside of verbosity — the ability to resolve ambiguity from context — they also decreed that an XML parser must fail irrecoverably at the first sign of non‐well‐formedness.
For its faults, XML is at least easy to read and write in a text editor. I’ll take XML any day over a more opaque format. The fact that it can be read and written rather easily by both machines and humans is what made possible the scripts I’ll explain below.
EmulationStation’s gamelists exist in /home/pi/.emulationstation/gamelists/. Within that directory are subdirectories for each system, and in each such subdirectory is a file named gamelist.xml. Hence the arcade game list can be found at /home/pi/.emulationstation/gamelists/arcade/gamelist.xml.
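For reference, here’s roughly the shape of a gamelist.xml file. The game entry below is made up, but the element names — gameList, game, path, name, image, genre — are the ones EmulationStation uses and the ones the scripts later in this post rely on:

```xml
<?xml version="1.0"?>
<gameList>
  <game>
    <path>./somegame.zip</path>
    <name>Some Game</name>
    <image>/home/pi/screens/arcade/somegame.png</image>
    <genre>Shooter / Flying Vertical</genre>
  </game>
</gameList>
```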
First, a caveat. The gamelist XML can always safely be read. But it can’t safely be written to unless EmulationStation isn’t running. If you edit any gamelist files while ES is open, your changes will be reverted when ES exits or when the system reboots.
In the last post we talked about a script called quit-emulationstation that I wrote for this very reason. I’ll put it into this post’s accompanying gist for convenience.
Anyway, memorize these two rules for editing gamelist XML: you can read it whenever you like, but write to it only while EmulationStation isn’t running.
Your RetroPie theme will likely have a way to show artwork for a particular game. It’s up to the individual what kind of artwork they want to use — game marquee? game logo? a picture of the cabinet? — but I chose to use a representative screen capture from each game. Other kinds of artwork vary wildly in aspect ratio and are thus hard to harmonize inside an EmulationStation theme. (Screen captures also vary, just not quite as wildly.)
The RetroPie docs on screenshots cover two scenarios: taking your own screenshots and using a utility to scrape screenshots off the web for the games in your list. I went a third way.
I got a full set of MAME screenshots from progettosnaps.net. Like all other MAME-related enthusiast sites, the design has not been updated since the late 90s, and that’s how you know this site is legit. Anyway, once it was downloaded, I was staring at a folder full of PNGs, each one with a filename corresponding to its ROM name.
Apparently there’s no problem I can’t solve by creating a new folder in my home directory. So I created /home/pi/screens, then created subfolders arcade and daphne for the two systems I was emulating. (I’ve got only three Daphne games; I’ll find their screenshots manually.) The PNGs corresponding to the games I needed went into /home/pi/screens/arcade.
At this point, for an arcade game whose ROM is named foo, you can confidently state that its artwork exists at /home/pi/screens/arcade/foo.png.
So all that’s left to do is to update the EmulationStation metadata for each game. You can use this script to do it in bulk.
#!/usr/bin/env ruby
require 'pathname'
begin
require 'nokogiri'
rescue LoadError => e
puts "This script requires nokogiri:"
puts " $ gem install nokogiri"
exit 1
end
SYSTEM = 'arcade'
def screenshot_path_for_game(system, game)
Pathname.new("/home/pi/screens/#{system}/#{game}.png")
end
GAMELIST_DIR = Pathname.new("/home/pi/.emulationstation/gamelists/#{SYSTEM}")
GAMELIST_BACKUP_PATH = GAMELIST_DIR.join("gamelist.xml.#{Time.now.to_i}.bak")
GAMELIST_CURRENT_PATH = GAMELIST_DIR.join('gamelist.xml')
# Backup current gamelist.xml.
GAMELIST_BACKUP_PATH.open('w') do |f|
f.write(GAMELIST_CURRENT_PATH.read)
end
GAMELIST_XML = Nokogiri::XML(GAMELIST_CURRENT_PATH.read)
# Traverse the gamelist.
GAMELIST_XML.css('gameList > game').each do |g|
path_node = g.at_css('path')
basename = Pathname.new(path_node.content).basename('.zip').to_s
art_node = g.at_css('image')
current_art = art_node.content rescue nil
new_art = screenshot_path_for_game(SYSTEM, basename)
puts "GAME: #{basename}"
if new_art.to_s == current_art
puts " no change needed"
next
end
# Does the screenshot exist?
if new_art.file?
# Update the artwork's modified time so we can more easily tell which
# artwork files we aren't using later on.
path.touch
else
puts " could not find screenshot at #{new_art}"
next
end
# Create the `image` element if needed.
if !art_node
art_node = Nokogiri::XML::Node.new('image', GAMELIST_XML)
g.add_child(art_node)
end
art_node.content = new_art.to_s
puts " changed art path to #{new_art}"
end
# Write out the XML.
GAMELIST_CURRENT_PATH.open('w') do |f|
GAMELIST_XML.write_xml_to(f)
end
puts "...done."
This script loads the gamelist for a certain system, enumerates all the games in the list, and updates (or creates) pointers to the artwork file for that game.
A couple things:
- New ROM files won’t appear in gamelist.xml until you (re)launch EmulationStation, at which point ES will notice the new files and create barebones metadata entries for them.
- The script backs up gamelist.xml to a unique filename each time it’s run. I trust you to clear out the clutter manually once you’ve verified that your bulk edits didn’t screw anything up.

We can use a similar approach to categorize our arcade games. RetroPie’s built‐in scraper tool is a good abstract strategy for getting metadata about a game, but MAME’s advantage is its community of pedants. For any meaningful piece of metadata you can think of, someone’s already maintaining a file containing that metadata for every game MAME emulates, even the really obscure ones.
The aforementioned progettosnaps.net also hosts a file called catver.ini. It’s a pretty damned impressive taxonomy for a dataset as weird and varied as arcade games. Ever think about what genre Gauntlet is? Now you don’t have to: it’s “Maze / Shooter Large.” And Windjammers, a sports game about a sport that doesn’t exist, is properly labeled as “Sports / Misc.”
Using this list, it’s easy to bulk‐edit your game categories to match what’s defined in catver.ini.
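If you’re curious what the script below is actually looking at: catver.ini is a plain INI file whose [Category] section maps ROM names to genre strings. Here’s a stdlib‐only sketch of that lookup — the real script uses the inifile gem instead, and the ROM names and genres here are just illustrative examples, not guaranteed to match your copy of the file:

```ruby
# Sketch of the catver.ini [Category] lookup, using a tiny hand-rolled
# parser. (Illustrative only; the script below uses the inifile gem.)
SAMPLE_INI = <<~INI
  [Category]
  gauntlet=Maze / Shooter Large
  wjammers=Sports / Misc.
INI

def parse_categories(text)
  section = nil
  text.each_line.with_object({}) do |line, out|
    line = line.strip
    next if line.empty? || line.start_with?(';')
    if line =~ /\A\[(.+)\]\z/
      # Track which INI section we're inside.
      section = Regexp.last_match(1)
    elsif section == 'Category' && line.include?('=')
      rom, genre = line.split('=', 2)
      out[rom.strip] = genre.strip
    end
  end
end

categories = parse_categories(SAMPLE_INI)
puts categories['gauntlet']  # prints "Maze / Shooter Large"
```

With the gem, IniFile.load('catver.ini')['Category'] should get you the same hash in one line, which is exactly what the script below does.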
#!/usr/bin/env ruby
require 'pathname'
begin
require 'inifile'
require 'nokogiri'
rescue LoadError => e
puts "This script requires nokogiri and inifile:"
puts " $ gem install nokogiri inifile"
exit 1
end
CATVER_PATH = Pathname.new(ARGV[0] || '/home/pi/catver.ini')
CATVER = IniFile.load(CATVER_PATH.to_s)
CATEGORIES = CATVER['Category']
GAMELIST_DIR = Pathname.new('/home/pi/.emulationstation/gamelists/arcade')
GAMELIST_BACKUP_PATH = GAMELIST_DIR.join("gamelist.xml.#{Time.now.to_i}.bak")
GAMELIST_CURRENT_PATH = GAMELIST_DIR.join('gamelist.xml')
# Backup current gamelist.xml.
GAMELIST_BACKUP_PATH.open('w') do |f|
f.write(GAMELIST_CURRENT_PATH.read)
end
GAMELIST_XML = Nokogiri::XML(GAMELIST_CURRENT_PATH.read)
# Traverse the gamelist.
GAMELIST_XML.css('gameList > game').each do |g|
path_node = g.at_css('path')
basename = Pathname.new(path_node.content).basename('.zip').to_s
genre_node = g.at_css('genre')
current_genre = genre_node.content rescue "(none)"
new_genre = CATEGORIES[basename]
puts "GAME: #{basename}"
# Do we need to do anything for this game?
if new_genre == current_genre
puts " genre is up to date"
next
end
# Do we have a genre to give it?
unless new_genre && !new_genre.empty?
puts " no genre found in INI"
next
end
# This game has a genre. Write it to the XML.
# Create the node if it didn't exist before.
if !genre_node
genre_node = Nokogiri::XML::Node.new('genre', GAMELIST_XML)
g.add_child(genre_node)
end
genre_node.content = new_genre
puts " Current genre: #{current_genre}"
puts " New genre: #{new_genre}"
end
# Write out the XML.
GAMELIST_CURRENT_PATH.open('w') do |f|
GAMELIST_XML.write_xml_to(f)
end
puts "...done."
Some notes:
- The script looks for catver.ini at /home/pi/catver.ini. If it’s elsewhere, change the script or else specify the right path in the first argument: assign-categories ~/custom/path/catver.ini.
- The script will always make a game’s genre whatever catver.ini says it is. If you think, for instance, that Windjammers should belong to a category of your invention (like “Sports / Pong‐like”) and change the XML accordingly, then that change will get overwritten the next time you run this script. Beware.

Finally: let’s liberate all your game metadata from its XML prison.
80% of the point of an arcade cabinet is that it gets used at parties. Don’t throw parties? Now you have a reason to throw parties. I wanted a way to say “here are the games I’ve got installed; let me know if you want me to add any.” So I wrote a script to build a simple web page from the metadata in my gamelist.xml files.
My own version has a few more bells and whistles, but I’ve made a simpler version that spits out a single HTML file that you can drop on any server you own. It uses CDN‐hosted jQuery and Bootstrap and fonts loaded from Google Fonts so that you don’t have to manage any local dependencies. You can click on any game name to get a pop‐over with its description.
#!/usr/bin/env ruby
require 'nokogiri'
require 'optparse'
require 'pathname'

SYSTEMS = ARGV

NAME_MAP = {
  'arcade' => 'Arcade Games',
  'daphne' => 'Laserdisc Games'
}

GAMELIST_ROOT = Pathname.new('/home/pi/.emulationstation/gamelists/')

output = []

$options = {
  require: ['name', 'genre', 'developer']
}

opts = OptionParser.new do |o|
  o.banner = "Usage: make-game-list [options] [systems]"
  o.separator ""

  o.on('-r', '--require=FOO,BAR', "Skip games that lack any of these fields (default: name, genre, developer)") do |value|
    $options[:require] = value.split(',')
  end
end

begin
  opts.parse!
rescue OptionParser::InvalidArgument => e
  STDERR.puts("#{e.message}\n\n")
  STDERR.puts(opts)
  exit 1
end

def fails_requirements?(meta)
  $options[:require].any? { |k| meta[k.to_sym].nil? }
end

def html_for_game(game)
  id = File.basename( game.at_css('path').content, '.zip' )
  name = game.at_css('name').content
  date = game.at_css('releasedate').content rescue nil
  year = date.nil? ? nil : date[0..3]
  genre = game.at_css('genre').content rescue nil
  developer = game.at_css('developer').content rescue nil
  description = game.at_css('desc').content rescue nil

  return '' if fails_requirements?({
    name: name,
    year: year,
    genre: genre,
    developer: developer
  })

  %Q[
    <tr>
      <td data-game="#{id}" data-value="#{name}">
        <a href="#" class="game-link" data-toggle="popover" data-title="#{name}">#{name}</a>
        <div class="game-description">#{description}</div>
      </td>
      <td>#{year || '?'}</td>
      <td>#{genre || '?'}</td>
      <td>#{developer || '?'}</td>
    </tr>
  ]
end

def html_for_system(path)
  xml = Nokogiri::XML( path.open )
  system = path.dirname.basename.to_s
  games = xml.css('gameList > game')
  rows = games.map { |game| html_for_game(game) }

  %Q(
    <h2 class="system-title">#{NAME_MAP[system] || system}</h2>
    <table class="table table-bordered table-collapsed table-striped sortable">
      <thead>
        <tr>
          <th>Game</th>
          <th>Year</th>
          <th>Genre</th>
          <th>Manufacturer</th>
        </tr>
      </thead>
      <tbody>
        #{rows.join("\n")}
      </tbody>
    </table>
  )
end

SYSTEMS.each do |system|
  path = GAMELIST_ROOT.join(system, 'gamelist.xml')
  output << html_for_system(path)
end

output = output.join("\n")

content = <<-HTML
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Nostalgia-Tron Games List</title>
  <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=News+Cycle:700|Oxygen">
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js"></script>
  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
  <style type="text/css" media="screen">
    h1, h2, h3:not(.popover-title) {
      font-family: 'News Cycle', sans-serif;
    }
    h3.popover-title {
      font-weight: bold;
    }
    h2.system-title {
      margin: 2.5rem 0 2rem;
    }
    body {
      font-family: 'Oxygen', sans-serif;
      padding-top: 3rem;
      padding-bottom: 3rem;
    }
    .popover {
      font-family: 'Oxygen', sans-serif;
    }
    .game-description {
      display: none;
    }
  </style>
</head>
<body>
  <div class="container">
    #{output}
  </div>
  <script type="text/javascript">
    $(function () {
      // When we open a popover, hide any others that may be open.
      $('a.game-link').click(function (e) {
        e.preventDefault();
        var $others = $('[data-toggle="popover"]').not(e.target);
        $others.popover('hide');
        $(e.target).popover('toggle');
      });

      // Make popovers wider.
      $('a.game-link').on('show.bs.popover', function () {
        $(this).data("bs.popover").tip().css("max-width", "600px");
      });

      // Hide a popover whenever someone clicks off.
      $('body').click(function (e) {
        var $anchor = $(e.target).closest('a.game-link');
        var $popover = $(e.target).closest('.popover');
        if ($anchor.length > 0 || $popover.length > 0) return;
        $('[data-toggle="popover"]').popover('hide');
      });

      $('[data-toggle="popover"]').popover({
        html: true,
        trigger: 'manual',
        container: 'body',
        placement: 'auto bottom',
        content: function () {
          var text = $(this).closest('td').find('.game-description').text();
          text = "<p>" + text + "</p>";
          text = text.replace(/\\n\\s*\\n/g, "</p>\\n<p>");
          return text;
        }
      });
    });
  </script>
</body>
</html>
HTML

puts content
Call it with a list of systems that you want to include on the page in the order they should be shown, and redirect STDOUT to a file to save it to disk — e.g., make-game-list arcade daphne > gamelist.html. If you want the heading above a particular system to say something nicer than its bare name, add a key‐value pair to NAME_MAP near the top. (For instance, without its entry in NAME_MAP, the daphne section would have a heading of “daphne” rather than “Laserdisc Games.”)
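The heading logic is just a hash lookup that falls back to the bare system name, so there’s no harm in passing a system that has no NAME_MAP entry. Isolated as a sketch (the 'fba' system here is a hypothetical example, not one from the script):

```ruby
NAME_MAP = {
  'arcade' => 'Arcade Games',
  'daphne' => 'Laserdisc Games'
}

# Fall back to the raw gamelist directory name when no friendly name is mapped.
def heading_for(system)
  NAME_MAP[system] || system
end

puts heading_for('daphne')  # "Laserdisc Games"
puts heading_for('fba')     # "fba" -- no mapping, so the bare name is used
```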
Once you’ve got an HTML file, you can do whatever you please with it. My version of this script also scps the HTML file to andrewdupont.net so that I can update the game list with one command.
Here’s a gist with all three scripts.
One more installment and I’ll be done with this series and on to writing about a different stupid hardware project of mine. Next time I’ll finally cover a safe, idiot‐proof way to power up your monitor and marquee light when your Pi is on and power them off when your Pi is off.