uv for Python management
There’s a trick here-
$ uv python pin cpython-3.12.4-linux-x86_64-gnu
Updated `.python-version` from `3.12` -> `cpython-3.12.4-linux-x86_64-gnu`
Looks like it should have taken effect immediately, but no-
$ python --version
Python 3.12.5
You have to re-run uv venv to make sure that it gets picked up & made the working python in your virtualenv-
$ uv venv
Using Python 3.12.4
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
$ python --version
Python 3.12.4
Toddler Proofing systemd-boot
If you let your toddler bang on your keyboard while you boot, you might be surprised to find that it’s not booting right afterwards.
Many modern Linux distributions aren’t using LILO or GRUB anymore- they’re using systemd-boot. It’s nice: fast, lots of security considerations, etc. It’s also very minimalist by default; it doesn’t even show you a prompt. So, to break out of the boot loop fugue state, hold down the spacebar as the box boots.
You don’t have to hit space at the right time, or for the right length- you just have to get it down while systemd-boot is going. So the easiest thing to do is to hit that power button and weight the spacebar down.
Now you’ll be treated to a state of the art, 3-5 line TUI. Fantastic. Pick the normal boot option, log in as you normally would, breathe a sigh of relief. But wait, there’s more.
While normally systemd-boot remembers the last thing you picked, there is some sort of Toddler Superpower that makes selections sticky. I have not discovered what this is in the man pages. To undo this feat, you need to manually fiddle with the boot list. Run bootctl list to get a list of magic ids-
# sudo bootctl list
Boot Loader Entries:
title: Normal Boot
id: current.conf
source: /boot/efi/loader/entries/current.conf
linux: /EFI/64f4c214-26c0-40ed-a6a4-630aff85d8f7/vmlinuz.efi
initrd: /EFI/64f4c214-26c0-40ed-a6a4-630aff85d8f7/initrd.img
options: root=UUID=64f4c214-26c0-40ed-a6a4-630aff85d8f7 ro quiet splash
title: Oldboi boot
id: oldkern.conf
source: /boot/efi/loader/entries/oldkern.conf
linux: /EFI/64f4c214-26c0-40ed-a6a4-630aff85d8f7/vmlinuz-previous.efi
initrd: /EFI/64f4c214-26c0-40ed-a6a4-630aff85d8f7/initrd.img-previous
options: root=UUID=64f4c214-26c0-40ed-a6a4-630aff85d8f7 ro quiet splash
title: Reboot Into Firmware Interface
id: auto-reboot-to-firmware-setup
source: /sys/firmware/efi/efivars/LoaderEntries-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f
and then run bootctl set-default with the id field-
# sudo bootctl set-default current.conf
to quit rebooting into the EFI configuration screen.
Redirect Rot
Things decay over time. It’s true! Life is hard. Sometimes, though, you see a mess that, really, someone ought to have cleaned up already.
:~$ curl https://gmail.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/gmail/">here</A>.
</BODY></HTML>
Oh right, the cute name isn’t the real name anymore. Gosh it’s almost 20 years old now, I remember being so eager and excited for an invite.
:~$ curl https://www.google.com/gmail/
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="https://mail.google.com/mail/">here</A>.
</BODY></HTML>
Oh well that’s silly. /gmail/, /mail, who cares. Oh wait, no- we’re also switching subdomains.
:~$ curl https://mail.google.com/mail/
<HTML>
<HEAD>
<TITLE>Moved Temporarily</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Moved Temporarily</H1>
The document has moved <A HREF="https://mail.google.com/mail/u/0/">here</A>.
</BODY>
</HTML>
Ah yes, the framework nonsense.
:~$ curl https://mail.google.com/mail/u/0/
<HTML>
<HEAD>
<TITLE>Moved Temporarily</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Moved Temporarily</H1>
The document has moved <A HREF="https://accounts.google.com/ServiceLogin?service=mail&passive=1209600&osid=1&continue=https://mail.google.com/mail/u/0/&followup=https://mail.google.com/mail/u/0/&emr=1">here</A>.
</BODY>
</HTML>
oh bother.
:~$ curl 'https://accounts.google.com/ServiceLogin?service=mail&passive=1209600&osid=1&continue=https://mail.google.com/mail/u/0/&followup=https://mail.google.com/mail/u/0/&emr=1'
<HTML>
<HEAD>
<TITLE>Moved Temporarily</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Moved Temporarily</H1>
The document has moved <A HREF="https://accounts.google.com/v3/signin/identifier?dsh=S-855590345%3A1676005353432758&continue=https%3A%2F%2Fmail.google.com%2Fmail%2Fu%2F0%2F&emr=1&followup=https%3A%2F%2Fmail.google.com%2Fmail%2Fu%2F0%2F&osid=1&passive=1209600&service=mail&flowName=WebLiteSignIn&flowEntry=ServiceLogin&ifkv=AWnogHfOR48lViCet7ZKsjn3ecmnActCnGb49kRIlHFBAVaRU-xG2WnwyugATyMKz6dVwioB0QPzZQ">here</A>.
</BODY>
</HTML>
Oh double bother. I’m going to have to break out a real urldecoder.
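For the actual urldecoding, Python’s standard library is enough- this is just the continue= parameter from the redirect above, unwrapped-

```python
from urllib.parse import unquote

# The URL-encoded continue= parameter from the redirect above
encoded = "https%3A%2F%2Fmail.google.com%2Fmail%2Fu%2F0%2F"
print(unquote(encoded))  # → https://mail.google.com/mail/u/0/
```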
:~$ curl 'https://accounts.google.com/v3/signin/identifier?dsh=S-855590345:1676005353432758&continue=https://mail.google.com/mail/u/0/&emr=1&followup=https://mail.google.com/mail/u/0/&osid=1&passive=1209600&service=mail&flowName=WebLiteSignIn&flowEntry=ServiceLogin&ifkv=AWnogHfOR48lViCet7ZKsjn3ecmnActCnGb49kRIlHFBAVaRU-xG2WnwyugATyMKz6dVwioB0QPzZQ'
<html lang=en><meta charset=utf-8><meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width"><title>Error 404 (Not Found)!!1</title><style nonce="z6OTySjnDZtx-o03oZxP8A">*{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{color:#222;text-align:unset;margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px;}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}pre{white-space:pre-wrap;}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}</style><main id="af-error-container" role="main"><a href=//www.google.com><span id=logo aria-label=Google role=img></span></a><p><b>404.</b> <ins>That’s an error.</ins><p>The requested URL was not found on this server. <ins>That’s all we know.</ins></main>
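For the record, curl can walk a chain like this in one shot- -L follows redirects, and -sI keeps it to quiet HEAD requests, so you see every hop’s Location header at once-

```shell
# Follow every redirect and print each hop's Location header
curl -sIL https://gmail.com | grep -i '^location:'
```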
Hell Is Other People's Deployments
Or things I fear in k8s operators and helm charts
I’ve spent a lot more time going over other people’s deployments in Kubernetes recently, and I’m developing a set of prejudices.
Custom docker images
For lots of popular server software, like Redis, there is an official build supported by both Redis the upstream and Docker Inc with a special, privileged spot in the namespace.
For others, the primary corporate sponsor will have an official image. So if the chart needs a special rebuild- presumably also six months out of date and lighting up all the CVE scanners like a Christmas tree- that’s a minor demerit. Use an init container like everyone else.
CRDs that are literally just a statefulset with a fancy name
I don’t know- what’s the point?
Weird TLS support
Configuring TLS should look more or less like configuring TLS on an ingress resource. Weirder structures with different names mean that probably everything is going to get weird fast.
Bad string handling
If I pass a password with special characters and it breaks everything because the gubbins are doing ad-hoc YAML generation in a series of nested bash scripts, there’s probably a long tail of other dumb, sloppy language-security things I’m going to have to start worrying about, and maybe I should bail early.
Requiring me to directly manage the entire config file of a product from deployment
Listen, I realize I’m spoiled here, but if you’re going to abstract something, abstract it.
Operators
There are a few operators that are amazing, magical things. Most of them, though, are less than a quarter finished, have no integration tests, and provide no real value over a Deployment or a Statefulset outside of the “yeet example YAML into a running service” use case. In fact, they have negative value, as they have fun sharp edges where they break things.
Theme Update
Did a little site maintenance- theme update, new hugo version, all that kind of stuff.
Also it turns out that all my draft posts were being built & shipped in production, which is terrible and terrifying. Really not what I want. Believe I’ve fixed that.
Make Linux Fast Again
https://make-linux-fast-again.com/ has big promises, boy oh boy. I was wondering how much I should blame the recent spate of hardware vulnerability mitigations for my laptop being pokey, so I decided to turn it off. Slamming all of those options blindly into my kernel command line worked OK- but nothing perceptible happened.
/proc/cmdline is the place to go to check your changes worked end-to-end, by the way.
I was, however, oddly locked out of Flatpak- but was seeing this in the kernel logs-
flatpak-portal[1967]: segfault at c ip 000055665f85f8d0 sp 00007ffce0ffa800 error 4 in flatpak-portal[55665f83f000+d4000]
seems bad, right? Cutting back the kernel flags to just mitigations=off fixed Flatpak, and I’m still Living Dangerously-
$ grep . /sys/devices/system/cpu/vulnerabilities/*
/sys/devices/system/cpu/vulnerabilities/itlb_multihit:KVM: Vulnerable
/sys/devices/system/cpu/vulnerabilities/l1tf:Mitigation: PTE Inversion; VMX: vulnerable
/sys/devices/system/cpu/vulnerabilities/mds:Vulnerable; SMT vulnerable
/sys/devices/system/cpu/vulnerabilities/meltdown:Vulnerable
/sys/devices/system/cpu/vulnerabilities/spec_store_bypass:Vulnerable
/sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable, IBPB: disabled, STIBP: disabled
/sys/devices/system/cpu/vulnerabilities/srbds:Vulnerable
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort:Not affected
So, I’m an old web dork, and I don’t have any go-to benchmarks, but I vaguely remember Octane being important like ten years ago, it doesn’t require installing anything- and it’s a CPU benchmark, which is sort of relevant. So here’s a baseline, with mitigations on-
Octane Score: 10622 (mitigations=on, firefox)
Octane Score: 10814 (mitigations=on, firefox)
Octane Score: 24399 (mitigations=on, chrome)
Octane Score: 24858 (mitigations=on, chrome)
Wow! Chrome is a lot better at this than Firefox. I mean, it’s their benchmark, but wow!
With all of the Spectre, Meltdown, etc mitigations off-
Octane Score: 12341 (mitigations=off, firefox)
Octane Score: 12602 (mitigations=off, firefox)
Octane Score: 28102 (mitigations=off, chrome)
Octane Score: 28221 (mitigations=off, chrome)
🥳 Running at ~115% of normal capacity sounds pretty good.
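Back-of-the-envelope on those Octane runs, averaging each pair of scores-

```python
# Average each browser's Octane runs and compute the speedup
firefox_on = (10622 + 10814) / 2
firefox_off = (12341 + 12602) / 2
chrome_on = (24399 + 24858) / 2
chrome_off = (28102 + 28221) / 2
print(f"firefox: {firefox_off / firefox_on:.0%}")  # 116%
print(f"chrome:  {chrome_off / chrome_on:.0%}")    # 114%
```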
What about something closer to what I care about, like GUI interactivity? I kind of flailed around a bit on this, trying to find something that measured jank specifically, and came up short. There is, however, good ol’ GtkPerf, which at least measures something close to what I care about. Fullscreen runs stabilized at around 8.9 seconds with mitigations on and 8.7 seconds with mitigations off. Is that an improvement? Is it good we can draw rubber ducks slightly faster?
Developing inside of a Dark Forest
I have these- and other- news items on my mind-
https://lwn.net/Articles/853717/
For decades now, we as developers have all really benefited from a strong culture of sharing. From experience reports to polished open source projects, from Stack Overflow answers to professionals livecoding, it’s been almost all love and rainbows¹.
Unfortunately, the kind of people who break things for a living look at all of the rainbows and say “lol there are no gates here”, and are very busily and aggressively poisoning all of the wells they can find.
Instead of a freewheeling community zone, we’re entering something that’s going to look more like a Dark Forest. This is one of the less appealing answers to the question of “why, if we can see so much of the universe, can we not see any evidence of other life?” In the Dark Forest model, it’s because anything that’s noisy gets eaten.
The next steps are grim and unavoidable. We’re going to need to lock down our repos, and be suspicious of newcomers. We are going to have a rise of new, restrictive corporate policies at work, and we are going to have to share less.
Incompetent proprietary companies like SolarWinds will be exempt from the new, more highly scrutinized routine- it will only negatively affect people trying to be generous with each other. Everything will get worse, but it will definitely feel like we are being more responsible, especially at first.
Even worse, the logical next step is probably more anti-immigrant paranoia and team firewalling. It’s going to be a rough few years.
I’m feeling pretty down about it.
¹ I already know, don’t bug me about it
How to do basic shell completion
So you wrote some dumb tool for your coworkers. It works, but no one can remember the dang arguments. Half of them use zsh, the other half use fish, the grumpy loudmouth uses bash, and the new hire uses something that sounds like a Tolkien reference. The documentation for everything is garbage, Stack Overflow is filled with lies, and most of the available examples take up six pages. Can’t you do this simply?
Yes.
Here are some minimum-viable shell completions for popular shells- just enough to make the <major command> <minor command> pattern work, you know, like git $command or aws $product $verb.
bash
The big kahuna, the default on Linux distributions, and the old default on OS X, before the GPL v3 and Apple’s Great Terror. There are, as you might expect, Many Ways To Do It, but my favorite, because it is simple & bloody-minded, is good ol’ complete-
:~$ complete -W "fee fi fo fum" giant
:~$ giant <tab>
fee fi fo fum
zsh
The New Hotness in 2005 (I know it dates to 1990), and the default on macOS for the past few years. Here there are also Many Ways To Do It, but look, I’m not going to lie- I like the old, deprecated compctl, because I am old and deprecated. Also, you can bang this out in two lines instead of two paragraphs, and brevity is brilliance.
% declare -a _giant_commands=(fee fi fo fum)
% compctl -k _giant_commands giant
% giant <tab><tab>
fee fi fo fum
fish
If you were too cool in 2005 to use something that worked, fish was the shell for you. That is still sort of true, because dealing with cross-shell compatibility is still slightly more of a hassle than you really want to put up with. It also uses the complete command for completion, but the documentation is better, and it’s totally different from bash’s.
> complete -c giant -a "fee fi fo fum"
> giant <tab>
fee fi fo fum
elvish
If you like to get in fights on message boards lamenting the lack of structured streams in the shell, Elvish is the shell for you. It is very neat, and has a truly wild feature set, including a built-in multi-pane file manager. Definitely worth a look if you’re feeling like changing your workflow up.
> edit:completion:arg-completer[giant] = [@args]{
possibilities = [fee fi fo fum]
put $@possibilities
}
> giant <tab>
COMPLETING argument
fee fi fo fum
I’m really feeling the namespacing, if I’m being honest. It makes introspection of the shell configuration a lot less like dipping your face into a firehose.
Podman Notes
more niche container system usage notes
Podman is a great alternative to Docker for your laptop- it’s daemonless, so when you aren’t using it, it’s not wrecking your damn battery. It also doesn’t require sudo, which feels pretty nice.
wait why doesn’t this work?
Hurl in --log-level=debug anywhere to figure out why things go bad. The logs are good!
pull from dockerhub by default
Update your registries.conf-
$ cat ~/.config/containers/registries.conf
[registries.search]
registries = ['docker.io']
[registries.insecure]
registries = ['docker.io']
Insecure, as far as I can tell, is about the Notary vs GPG schism- Docker went with an experimental new system called Notary, Red Hat has always used GPG and continues to use GPG, and DockerHub images are not GPG signed.
It’s all kind of dumb, to be honest- the real protection is from TLS. I am not going to try to work through a random third party’s adventures with key management.
rejected by policy
error on pull
Error: Source image rejected: Running image docker://ubuntu:latest is rejected by policy.
or you know, whatever
My default policy.json-
$ cat /etc/containers/policy.json
{
"default": [
{
"type": "reject"
}
],
That’s quite a policy!
You can fix this in a local override, easy peasy-
$ cat ~/.config/containers/policy.json
{
"default": [
{
"type": "insecureAcceptAnything"
}
],
"transports":
{
"docker-daemon":
{
"": [{"type":"insecureAcceptAnything"}]
}
}
}
I feel very… secure now… I think.
Aerc
Speaking of CLI mail programs, this is one I’ve been meaning to tour. It’s actively developed, seems a bit more ambitious about reaching for the future, and the original author is Drew DeVault, whose outspoken software freedom stuff speaks to me, as an old dork who cares about that sort of thing.
Anyway, there are no binary releases, so you gotta go get it the old-fashioned way-
git clone https://git.sr.ht/~sircmpwn/aerc
cd aerc
PREFIX=$HOME/.local make install
~/.local/bin/aerc
There is a first-time-setup wizard, which was pretty great. The tutorial help that pops up immediately is very useful, and you can get it back at any time with :help tutorial or by running man aerc-tutorial in some other terminal window.
jk or ↑↓ go up and down through messages, JK go up and down through folders in the sidebar. A archives messages, D deletes them.
Anyway- aerc is fast, stays fast with large IMAP mailboxes, integrates well with vim, which makes me happy, and generally has safe & sane defaults that you don’t need to configure. I did have to set this in my ~/.config/aerc/accounts.conf-
copy-to = [Gmail]/Sent Mail
but uh, that was it.
HTML emails
For person-to-person emails, plain text is fine, but a lot of automated emails, newsletters, etc., are… very HTML-heavy. That also seems like the general trend of things, which makes it ever harder to do mail outside of & unrelated to a web browser. There’s a knob to turn on w3m for text/html mails, but I don’t know- it isn’t amazing. It’s not going to be an experience that sparks joy.
tiny software soapbox
It’s kind of a grim time to be a computer user. Lightweight UIs are all TUIs now, because the terminal is documented, stable, well understood, and portable. Heavyweight UIs are really heavyweight, and bundle a whole damn browser. Gtk+ now wants to be the toolkit of choice for a tiny network of friends, and I haven’t seen a Qt project in years. Apple doesn’t even document their desktop APIs now- and you couldn’t port any of that if you wanted to. Microsoft’s UI things get abandoned every 2 years.
So anyway… if you want a portable program, not a website, you have two options:
- Write Electron apps. People who buy a new $3,000 MacBook every other year will love it.
- Write terminal apps. People with slower computers will love it. It will not be accessible to non-turbonerds, but you know, what are you gonna do.
Trapped inside of this paradigm, aerc is pretty great.
Mutt2
Mutt 2.0 got released, so I figured it was time to take it for another spin. The Gmail web client has gotten much slower and less pleasant over the years, and hopefully the rough edges on Mutt + IMAP have been sanded down. There are a ton of instructions floating around, and I got a little scared about “well, what if this wasn’t 2.0 compatible”, so I figured I’d write down the steps again with ‘Mutt 2.0’ at the top for SEO purposes and maybe help the next poor soul.
Anyway, first, go get a mutt-specific password here- https://security.google.com/settings/security/apppasswords
Then go plug it into your muttrc-
set realname = "$USERNAME"
set from = "$EMAIL"
set use_from = yes
set envelope_from = yes
set smtp_url = "smtps://${EMAIL}@smtp.gmail.com:465/"
set smtp_pass = "${APP_PASSWORD}"
set imap_user = "${EMAIL}"
set imap_pass = "${APP_PASSWORD}"
set folder = "imaps://imap.gmail.com:993"
set spoolfile = "+INBOX"
set ssl_force_tls = yes
From here, you should be able to type mutt, and see your inbox.
Luckily, Mutt is not breaking-change happy, so all of the 2 year old instructions in the wiki still work-
set spoolfile = "+INBOX"
set postponed = "+[Gmail]/Drafts"
set record = "+[Gmail]/Sent Mail"
set trash = "+[Gmail]/Trash"
# You need the "noop" bind so that the line editor accepts IMAP
# folders with spaces in their names. The gi, ga, gs and gd shortcuts help
# get around the "navigation quirks" mentioned above too.
bind editor <space> noop
# a steps over the default alias management 'a', but I am too lazy to use aliases,
# so I don't mind
macro index,pager a "<save-message>=[Gmail]/All Mail<enter><enter>" "Archive"
macro index gi "<change-folder>=INBOX<enter>" "Go to inbox"
macro index ga "<change-folder>=[Gmail]/All Mail<enter>" "Go to all mail"
Anyway… it still has the problems it used to. If you open up a large folder, like ‘all mail’, it blocks until it downloads headers for everything- over 90k emails in my case, at ‘maybe go get a cup of coffee and come back’ speeds. Now, maybe I should have a smaller inbox, and get rid of mailing list archives from 2004, but I’m not sure I want to do a bunch of maybe-scary data deletion & organization just for a mail client that doesn’t do windowing. I also don’t want to sign up for syncing emails to a maildir again; it really hasn’t worked out in the past. So it’s not going to be a Mutt year for me, still :/
which where what?
Finding a binary in your $PATH can sometimes be confusing. Especially when which mybin
and whereis mybin
don’t find it, but command -v mybin
does, and worse, your shell finds it- so what is wrong with which
?
It has to do with how you define your path.
export PATH=~/bin/:$PATH
will work with bash and command -v, but which and whereis aren’t hip to shell metacharacters, and won’t pick up anything in ~/bin/. Solving this is pretty easy, too-
export PATH=$HOME/bin:$PATH
Caddy and Cloudflare
self signed certs for running Caddy behind Cloudflare
I saw some goofy logs this morning-
acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Cannot negotiate ALPN protocol "acme-tls/1" for tls-alpn-01 challenge, url:
[ERROR] Renewing: acme: Error -> One or more domains had a problem:
[INFO] Unable to deactivated authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4017030008
[INFO] acme: Trying to solve TLS-ALPN-01
[INFO] acme: use tls-alpn-01 solver
[INFO] AuthURL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4017030008
[INFO] acme: Obtaining bundled SAN certificate
[INFO] acme: Trying renewal with -3768 hours remaining
I had a Caddy server doing ACME challenges behind Cloudflare, it turned out. That wasn’t really working. I restarted Caddy, and then it just sat there trying to do its ACME challenge and not serving any pages.
Anyway, in case it ever helps anyone else, the magic Caddyfile incantation is
mydomain.com:443 {
proxy / localhost:1234 {
}
tls self_signed
}
That :443 is the real trick, because without it, Caddy wants to run self-signed domains on port 2015, which doesn’t do anyone any good.
Systems administration. The gift that keeps on giving.
Really Canonical
* Overheard at KubeCon: "microk8s.status just blew my mind".
https://microk8s.io/docs/commands#microk8s.status
Last login: Thu Nov 21 08:36:04 2019 from 192.168.1.2
hank@tinyserver:~$ microk8s.status
microk8s is running
addons:
cilium: disabled
dashboard: enabled
dns: enabled
fluentd: disabled
gpu: disabled
helm: disabled
ingress: enabled
istio: disabled
jaeger: disabled
juju: disabled
knative: disabled
kubeflow: disabled
linkerd: disabled
metallb: disabled
metrics-server: disabled
prometheus: disabled
rbac: disabled
registry: enabled
storage: enabled
I mean, really? That blew your mind?
I resent folksy advertising. Just plop a static ‘upgrade here’ link in the MOTD and leave me alone.
Weekend Update
I don’t know what I’m doing with my life but this doesn’t really seem like it should be it. Crouched with my head aching over a podcast and a mug of cold coffee staring at the sun outside dreading the resumption of duties and obligations. Happy Saturday everybody.
Self Care Dont Care
The thing I really need to do this week is take care of myself. Watch my sleep schedule, get my exercise, conduct my business in a fulfilling and sustainable way. This is something I have told myself a lot. It’s my own little Mount Everest. Let’s give it another shot.
Code Reviews
I’m a big believer in code reviews, possibly more than is warranted. It’s also true that it is hard work, and it is often hard to get started. So here is a list of the things I try to do; maybe they will help someone else.
Say nice things about nice code. We all have our ups and downs. Code reviews are often about preventing things from going wrong in the future- and it can be hard on the people in the present. I value people doing the work, so I want to communicate that. If a function looks clean, say so. If there is a workaround for an ugly wart in the language or framework, commiserate. Think of it like NBA players high-fiving their teammates after free throw attempts- everyone is a professional, no one technically needs it, it’s a little rote- but even perfunctory social gestures help. Even if everyone knows they’re a little forced.
File tickets for tech debt. As a reviewer, it’s more polite to create real tickets for focused followup work than to unload a dump truck of scope creep. I’m sort of in “treat the new hires well” mode in my personal life, but the same applies to long-time developers. The exception is when someone is doing a ton of work in the same section of code and just kind of creating a mess. If your org is too dysfunctional to allow you to work on tech debt tickets occasionally… I don’t know how to help, to be honest. That sounds like its own problem.
Double check that people aren’t re-implementing existing code. New developers on a project are particularly susceptible to this- they don’t know the codebase yet. It’s fine, it’s just about communicating.
Double check the names for naming conventions. This is the right time to make sure that everything isn’t called FooBar except for the one new feature where everything is BarFoo.
Run through the security checklist. Don’t build strings and send them to the shell, especially if they have user input in them- use your language’s execve(2). Don’t build strings and send them to a SQL database; use parameterized queries. User-supplied data has got to get escaped in the template. Etc.
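To make the first two concrete, here’s a sketch in Python- the checklist is language-agnostic, sqlite3 and subprocess are just stand-ins-

```python
import sqlite3
import subprocess

# Parameterized query: the driver handles quoting, so user input
# can't break out of the SQL statement.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
evil = "Robert'); DROP TABLE users;--"
conn.execute("INSERT INTO users (name) VALUES (?)", (evil,))
print(conn.execute("SELECT name FROM users").fetchone()[0])  # stored verbatim

# Argument list instead of a shell string: the arguments go straight
# to execve(2), so shell metacharacters in user input are inert.
out = subprocess.run(["echo", "$(whoami)"], capture_output=True, text=True)
print(out.stdout)  # the literal string "$(whoami)", not a username
```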
Run through the test checklist. Is it tested? Where is it tested? Do the tests exercise the usual boring edge cases- too much, too little, garbage input.
Run through the integration input. Does this require new monitoring? Is there a companion change to the monitoring configuration somewhere?
Anyway, once you’ve run through this list, you’ve read the code a couple times, you’ve thought about it a little, and you’re probably in a better place to think about it as a holistic thing. And all of that needs to get done anyway.
CSS Grid
I don’t really do frontend work any more, so I never got around to actually using it until now. It’s amazing though-
<form>
<label for="title">Title:</label> <input id="title" name="title">
<label for="question">Question:</label> <input id="question" name="question">
<label for="answer">Answer:</label> <input id="answer" name="answer">
</form>
form {
display: grid;
grid-template-columns: 100px 1fr;
grid-gap: 10px;
padding: 10px;
background-color: #eee;
}
form label {
grid-column: 1; /* put the labels on the left */
text-align: right;
}
form input {
grid-column: 2; /* put the inputs on the right */
}
And bam, a totally passable form.
That would have been… either a ton of extra markup to put it inside a table, or a fiddly float hellscape ten years ago.
The web is nice, I’ve missed it in my long night of operations plumbing.
a quick Hugo plugin for gnome builder
I whipped this together to try and work on my blog less in vim. It wasn’t that bad, I should write more of these.
rkt beginner notes
Since I collect abandonware container systems:
getting started with rkt
from Quay, the CoreOS DockerHub competitor-
# sudo rkt fetch quay.io/coreos/alpine-sh
# sudo rkt run --interactive quay.io/coreos/alpine-sh --exec=/bin/sh
from dockerhub-
# sudo rkt --insecure-options=image fetch docker://alpine
# sudo rkt run --interactive docker://alpine --exec=/bin/sh
the dockerhub stuff also creates a fake rkt registry for docker-
# sudo rkt run --interactive registry-1.docker.io/library/alpine --exec=/bin/sh
will also work.
Finally, Quay mirrors the default Docker library under quay.io/dockerlibrary, so
# sudo rkt fetch quay.io/dockerlibrary/debian:9
gets you Debian.
mounted volumes
The syntax got me for a while; the key is to realize the inside/outside distinction-
# rkt run --volume logs,kind=host,source=/srv/logs \
example.com/app1 --mount volume=logs,target=/var/log \
example.com/app2 --mount volume=logs,target=/opt/log
“volume” is outside, “mount” is inside, easy-peasy.
quality of life things
Clean everything up real quick-
rkt gc --grace-period=0s
scratch directories with overlayfs
One of the nice things about Concourse is that everything gets a normal, read-write directory tree to work in, but changes made aren’t persisted, so you don’t have to worry about temporary files, scratch work, mistakes, etc., interfering with other jobs down the line. It turns out you can do this yourself, and it’s not super hard.
overlayfs is a newer Linux filesystem, the new default Concourse filesystem driver, and pretty cool.
To use it, you don’t need anything fancier than good old mount and mkdir. There is, however, some setup, so let’s walk through the steps.
First, let’s get a git repo-
$ git clone https://github.com/hfinucane/jsonproc.git
Then, we’ll make some directories for overlayfs to do its work in-
$ mkdir Lower Upper Work
and finally, let’s make the overlayed directory we’re going to be using-
$ mkdir ScratchBuildDir
And we’ll put it together-
$ sudo mount -t overlay overlay -o lowerdir=jsonproc:Lower -o upperdir=Upper -o workdir=Work ScratchBuildDir
The error message situation isn’t great, so make sure to run dmesg | tail if something goes wrong. That said, let’s look at what we can do now-
$ cd ScratchBuildDir
$ go build -o jproc
$ ls
jproc LICENSE main.go main_test.go README.md
Now let’s go look at our original directory-
$ cd ..
$ ls jsonproc
LICENSE main.go main_test.go README.md
It’s not there- all our work is isolated in the scratch build directory. Other directories, however, have been affected-
$ ls Upper
jproc
When you tear down the overlay, they will remain-
$ sudo umount ScratchBuildDir
$ ls Upper
jproc
so if you’re building your own system with safe, ephemeral working directories, you’ll need unique Upper directories, or you’ll need to clear them out between uses.
I haven’t really touched on the Lower directory. overlayfs doesn’t want to let you run without doing any overlaying, so you have to overlay something on something, even if it’s just an empty directory for this example. If you were writing a container-based build system, Lower might be the OS tree, and you’d want the git checkout to be nested inside of var/tmp/build or something.
You’re not limited to two directories either- jsonproc:Lower:Lowest will stack jsonproc on top of Lower on top of Lowest.
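Here’s a sketch of that stacking in a throwaway directory. It assumes a kernel new enough (roughly 5.11+) to allow overlayfs mounts inside an unprivileged user namespace- on older kernels, do the mount with sudo as above.

```shell
# Build two lower layers and stack them.
cd "$(mktemp -d)"
mkdir top bottom Upper Work merged
echo "from top"    > top/a
echo "from bottom" > bottom/a
echo "only bottom" > bottom/b
# The leftmost lowerdir ends up on top, so top/a shadows bottom/a.
merged_view="$(unshare --mount --map-root-user sh -ec '
  mount -t overlay overlay -o lowerdir=top:bottom,upperdir=Upper,workdir=Work merged
  cat merged/a merged/b
')"
printf '%s\n' "$merged_view"
```

This should print "from top" and then "only bottom", and the mount vanishes with the namespace when the subshell exits.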
a quick pitch for Concourse
Software development is hard; working with other people is harder. Making sure you never skip any steps is hard; reminding other people not to skip any steps is harder. If you are really strict about it, you are a jerk; if you are not, you are a vindictive jerk. The airline industry solves this with checklists, but something about office workers really resists a checklist. It is an admission that you do not know everything, or that your contribution is fungible. Gloomy assertions aside, I have had good results with computer-run checklists.
Continuous integration, or ‘making a computer run the tests’, is table stakes for responsible professional software development. This will work, and be reliable, and is a reasonable place to stop, but there are a bunch of things around the edges that can overwhelm. The most common problems are testing too late, unspecified or underspecified build environments, and management of the build system.
The problem dearest to my heart is that testing the ‘master’ branch is testing too late. That branch is where other developers start when they go to fix a bug, or add a feature. Starting from a broken place can waste an enormous amount of time, and is the sort of thing that leads to hurt feelings. Instead, we should test every branch, as soon as possible, and publish those results.
At the most basic level, this gives developers a responsible robot that never forgets to run the tests. There are social benefits too- it means that when another developer goes to review a pending change, they don’t have to go and double-check that the checklist has been followed. This makes it easier to concentrate on higher-level concerns, and prevents embarrassing “that can’t work” moments that erode a team’s trust.
The management and evolution of the build system should not be an afterthought. It is a crucial part of your workflow, it can be an enormous force-multiplier, and it should be treated with the same care and process as a production environment. Many build systems were built with manual administration through a web UI as a first-class citizen and automation as a second-class citizen, and it shows when you try to be rigorous with them. I can be a little dogmatic on this point, but all of your configuration should come from source control, and the build system is no exception. Your team knows- or really should know- how to use source control to see why things changed, to revert changes precisely, and to review them before they land; changes to the build code deserve exactly that treatment.
Now that I have staked out a vague, philosophical stance, what about concrete recommendations? You should look at Concourse. It is not an out of the box solution for all build problems, but it is quite serious about solving the build management problem, and the rest falls out of that.
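As a concrete sketch- a minimal Concourse pipeline that tests every commit, with a made-up repository URL and job name, living in source control next to the code it builds:

```yaml
resources:
- name: source
  type: git
  source:
    uri: https://github.com/example/project.git   # hypothetical repo
    branch: master

jobs:
- name: unit-tests
  plan:
  - get: source
    trigger: true        # run as soon as a new commit lands
  - task: test
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: golang}
      inputs:
      - name: source
      run:
        path: sh
        args: ["-ec", "cd source && go test ./..."]
```

You load it with fly set-pipeline -p project -c pipeline.yml, so changing the build is just another reviewable commit.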
maslow's hierarchy of needs
After self-actualization, a little-known fact is that the next, tiniest part of the pyramid is the ability to find your cell phone before you leave. The normal solution is to have someone call it for you- sound being a part of the hierarchy of senses- but if you’re alone, trying to leave, and you can’t find the thing, what then?
I bought this little board for almost nothing, got a trial account at Twilio, and hacked up a little program.
Calling Twilio was most of the work-
type TwilioPlay struct {
	// encoding/xml only honors the element name if the field is called XMLName
	XMLName xml.Name `xml:"Response"`
	Say     string   `xml:",omitempty"`
	Play    string   `xml:",omitempty"`
}
type TwilioResponse struct {
	Sid         *string      `json:"sid"`
	DateCreated *string      `json:"date_created"`
	DateUpdated *string      `json:"date_updated"`
	DateSent    *interface{} `json:"date_sent"`
	AccountSid  *string      `json:"account_sid"`
	To          *string      `json:"to"`
	From        *string      `json:"from"`
	Body        *string      `json:"body"`
	Status      string       `json:"status"`
	Flags       *[]string    `json:"flags"`
	APIVersion  *string      `json:"api_version"`
	Price       *interface{} `json:"price"`
	URI         *string      `json:"uri"`
	// Optional exception params
	Message  *string `json:"message"`
	MoreInfo *string `json:"more_info"`
	Code     *int    `json:"code"`
}
func Call(toNumber, fromNumber, sid, token, twimlURL string) {
	u := url.URL{
		Scheme: "https",
		Host:   "api.twilio.com",
		Path:   path.Join("2010-04-01/Accounts/", sid, "/Calls.json"),
	}
	q := u.Query()
	q.Set("To", toNumber)
	q.Set("From", fromNumber)
	q.Set("Url", twimlURL)
	r, err := http.NewRequest("POST", u.String(), strings.NewReader(q.Encode()))
	if err != nil {
		panic(err)
	}
	r.SetBasicAuth(sid, token)
	r.Header.Add("Accept", "application/json")
	r.Header.Add("Content-Type", "application/x-www-form-urlencoded")
	resp, err := (&http.Client{}).Do(r)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tresp TwilioResponse
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if err := json.Unmarshal(body, &tresp); err != nil {
		panic(err)
	}
	if resp.StatusCode >= 200 && resp.StatusCode < 300 {
		// fine
	} else {
		panic(*tresp.Message)
	}
}
Possibly not best practice, but my theory here is to just crash the program when things go wrong, and let systemd restart it. So there’s no real error handling anywhere, just unconditional panics.
To trigger the thing, I hooked up a button to the board’s GPIO pin 18. linux-sunxi.org has good documentation, but as someone without much experience with the conventions of little single board computers, it took me a while to figure out how to map their GPIO table to the pins on the board. The trick is that there is a little ▶ character next to pin 1- count from there left to right, top to bottom.
The GPIO filesystem is very unixy, and made it really easy to test that the wiring was working. Hold the button down, cat /sys/class/gpio/gpio18/value. Let go of the button, cat /sys/class/gpio/gpio18/value. I spent some time wandering around looking for a Go library, and had some bad luck, so I went ahead and wrote my own. The documentation mentions that
If the pin can be configured as interrupt-generating interrupt
and if it has been configured to generate interrupts (see the
description of "edge"), you can poll(2) on that file and
poll(2) will return whenever the interrupt was triggered. If
you use poll(2), set the events POLLPRI and POLLERR. If you
use select(2), set the file descriptor in exceptfds. After
poll(2) returns, either lseek(2) to the beginning of the sysfs
file and read the new value or close the file and re-open it
to read the value.
So I went ahead and did that-
func poller(pin string, values chan []byte) {
	fds := make([]unix.PollFd, 1)
	fd, err := os.Open(pin)
	if err != nil {
		panic(err)
	} // everything is busted
	fds[0] = unix.PollFd{Fd: int32(fd.Fd()), Events: unix.POLLPRI | unix.POLLERR}
	for {
		n, err := unix.Poll(fds, 10000)
		if err != nil {
			panic(err)
		} // everything is definitely busted
		if n == 1 {
			contents, err := ioutil.ReadAll(fd)
			if err != nil {
				panic(err)
			}
			values <- contents
			// lseek back to the start, per the sysfs docs, so the
			// next read sees the fresh value
			if _, err := fd.Seek(0, 0); err != nil {
				panic(err)
			}
		}
	}
}
Throwing it all together-
func main() {
	var pin string = "/sys/class/gpio/gpio18/value"
	throttle := time.Tick(time.Second * 30)
	values := make(chan []byte)
	go poller(pin, values)
	for {
		edge := <-values
		// we're doing pulldown
		if bytes.Equal(edge, []byte{'0', '\n'}) {
			Call(call_number, from_number, os.Getenv("SID"), os.Getenv("TOKEN"), twimlURL)
			<-throttle
		}
	}
}
I needed the goroutine and channel for the initial implementation, but since I switched to poll(2), I guess I could just have one loop.
systemd, as always, makes the whole “I would like a supervised process” problem trivial-
[Unit]
Description=Monitor a GPIO port, and call a phone when it goes low

[Service]
Environment=SID='{{sid}}' TOKEN='{{token}}'
# Turn on the GPIO port. ExecStart lines don't get shell redirection, so
# wrap the echo in a shell; the leading "-" ignores the error if the pin
# is already exported
ExecStartPre=-/bin/sh -c 'echo 18 > /sys/class/gpio/export'
# Set high/low behavior
ExecStartPre=/bin/sh -c 'echo both > /sys/class/gpio/gpio18/edge'
ExecStart=/usr/local/bin/findphone
Restart=always

[Install]
WantedBy=multi-user.target
Hopefully this is helpful to someone somehow.
writing a gnome builder plugin
I like Gnome Builder, and I had the copious free time required to be the change I wanted to see in the world. So I looked into writing a Go plugin.
The minimum-viable plugin
A directory, in /home/hank/.local/share/gnome-builder/plugins/${plugin}
, an
empty python file, called ${plugin}.py
, and a .plugin
file, maybe
${plugin}.plugin
. Maybe something like this:
[Plugin]
Name=Go Plugin
Module=go
Loader=python3
X-Project-File-Filter-Pattern=*.go
X-Project-File-Filter-Name=Go Project
This is enough of a skeleton to convince Gnome Builder that folders with Go files constitute a Go project, and will make it easier to navigate the ‘Open Project’ dialog.
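Scripted out- using go as the plugin name, as in the stanza above- the whole skeleton is:

```shell
plugin=go
dir="$HOME/.local/share/gnome-builder/plugins/$plugin"
mkdir -p "$dir"
# the (for now) empty python file
touch "$dir/$plugin.py"
# the .plugin metadata
cat > "$dir/$plugin.plugin" <<'EOF'
[Plugin]
Name=Go Plugin
Module=go
Loader=python3
X-Project-File-Filter-Pattern=*.go
X-Project-File-Filter-Name=Go Project
EOF
```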
Running builds
For this, we’ll fill out ${plugin}.py:
#!/usr/bin/env python3
import gi

from gi.repository import Ide


class BobBuildPipelineAddin(Ide.Object, Ide.BuildPipelineAddin):
    def do_load(self, pipeline):
        context = pipeline.get_context()
        srcdir = pipeline.get_srcdir()

        # Register a BUILD action, which will get run when the user hits 'build'
        get_launcher = pipeline.create_launcher()
        get_launcher.set_cwd(srcdir)
        get_launcher.push_argv("go")
        get_launcher.push_argv("build")
        get_stage = Ide.BuildStageLauncher.new(context, get_launcher)
        self.track(pipeline.connect(Ide.BuildPhase.BUILD, 0, get_stage))

    def do_unload(self, application):
        pass

    def _query(self, stage, pipeline, cancellable):
        stage.set_completed(False)
And that’s it- click ‘build’ in Builder, and get a build.
There are a bunch of different build phases. I haven’t done much with them, although DOWNLOAD seems to map well to go get, INSTALL with go install, and go generate seems like it would map well to AUTOGEN. The Build Phase announcement is sort of tantalizing.
I pieced this together from the official docs, which are good, if incomplete. The upstream plugins are also a great resource.