git log --grep "Résumé"

For a while now, it has been clear that a useful and important piece of data about how a future colleague might work out is their open source contributions. While the conditions of open source work are often somewhat different from those of paid work, the way a person expresses themselves (both interpersonally, on issue trackers for example, and in code) is likely to tell you more about their personality than you can learn in the fake environment of an interview.

I am currently starting a search for a full-time job in the Lausanne, Switzerland area (including as far as Geneva and Neuchâtel) with a start date of September 1, 2016. Touching up my résumé is of course on my todo list. But it is more fun to look at and talk about code, so as a productive procrastination exercise, here’s a guided tour of my open source contributions.

I currently work with Vanadium, a new way of making secure RPC calls with interesting naming, delegation and network traversal properties. I believe wide-scale adoption of Vanadium would make IoT devices less broken, which seems like an important investment we need to be making in infrastructure right now.

I have been an occasional contributor to the Go standard library since 2011. Some examples I am particularly proud of include:

  • I profiled the bzip2 decoder and realized that the implementation was cache thrashing. The fixed code is smaller and faster.
  • I noticed that the HTTP implementation (both client and server) was generating more garbage than necessary, and prototyped several solutions. The chosen solution is in this checkin. This was, for a while, the most intensely exercised code I’ve written in my career; it saved one allocation per header for most of the headers processed by the billions of HTTP transactions that Go programs do all around the world. It was later replaced with a simpler implementation, made possible by an optimization in the compiler. The same allocation-reduction technique appears to have been carried over into Go’s HTTP/2 implementation as well.
  • I improved TLS client authentication so that it actually works. I needed this for a project at work.
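The header allocation trick can be sketched like this. This is a simplified illustration of the interning idea, not the actual checkin: for the keys that appear in nearly every request, hand back one shared canonical string instead of building a new one per header (the map contents here are my own abbreviated stand-in).

```go
package main

import (
	"fmt"
	"net/textproto"
)

// commonHeader maps frequently seen header keys, in the wire forms that
// actually show up, to one shared canonical string. Looking up a string we
// already hold in a map[string]string does not allocate, so the common case
// returns the interned copy with zero allocations.
var commonHeader = map[string]string{
	"accept":         "Accept",
	"Accept":         "Accept",
	"content-type":   "Content-Type",
	"Content-Type":   "Content-Type",
	"content-length": "Content-Length",
	"Content-Length": "Content-Length",
}

// canonicalKey returns the canonical form of a header key, interned for
// common keys and computed (with an allocation) for rare ones.
func canonicalKey(k string) string {
	if v, ok := commonHeader[k]; ok {
		return v
	}
	// Rare keys fall back to the general, allocating path.
	return textproto.CanonicalMIMEHeaderKey(k)
}

func main() {
	fmt.Println(canonicalKey("content-type")) // interned: Content-Type
	fmt.Println(canonicalKey("x-custom-key")) // computed: X-Custom-Key
}
```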

I wrote an MQTT server in Go in order to learn the protocol for work. It is by no means the fastest, but several people have found that the simplicity and idiomaticity of its implementation is attractive. It ships as part of the device software for the Ninja Sphere.

  • I found an interesting change to reduce network overhead in the MQTT serializer I used. Copies = good, more syscalls and more packets = bad.

I worked on control software for a drone in Go for a while. It still crashes instead of flying, but it was an interesting way to learn about control systems, soft realtime systems, websockets, and Go on ARM.

I have written articles about Go on this blog since I started using it in 2010. Some interesting ones in particular are:

I am also active on the Golang sub-reddit, where I like answering newbie requests for code reviews. I also posed a challenge for the Golang Challenge.

My earliest open source project that’s still interesting to look at today was a more scalable version of MRTG called Cricket. Today tools like OpenTSDB and Graphite are used for the same job, it seems. Cricket still works to the best of my knowledge, but development is dead because of lack of interest from users. Cricket is written in Perl.

My earliest open source project was Digger, from Bunyip. If you can unearth archeological evidence of it from the 1996 strata of the Internet geology, you’re a better digger than I am. It was written in C.

Seeking around in an HTTP object

Imagine there’s a giant tar file on an HTTP server, and you want to know what’s inside it. You don’t know if it’s got what you are looking for, and you don’t want to download the whole thing. Is it possible to do something like “tar tvf”?

This is not a theoretical problem just to demonstrate something in Go. In fact, I wasn’t looking to write an article at all, except that I needed to know the structure of the bulk patent downloads from the USPTO, hosted at Google.

So, how do we go about this task? We’ve got a couple of things to check, and then we can plan a way forward. First, do the Google servers support the Range header? That’s easy enough to check using curl and an HTTP HEAD request:

$ curl -I
HTTP/1.1 200 OK
X-GUploader-UploadID: AEnB2UrryncnFxyuHvzmUvVADjSRQJXRogilQ-lE9-pFZwJwkU5jRYC27PBdIgFe4f18Xgol-YOBxi5QqGM5yhGgso2Rd_NsZQ
Expires: Sun, 17 Jan 2016 07:52:14 GMT
Date: Sun, 17 Jan 2016 06:52:14 GMT
Cache-Control: public, max-age=3600
Last-Modified: Sat, 07 Feb 2015 11:40:47 GMT
ETag: "558992439ef9046c834f9dab709e6b1a"
x-goog-generation: 1423309247028000
x-goog-metageneration: 1
x-goog-stored-content-encoding: identity
x-goog-stored-content-length: 2600867840
Content-Type: application/octet-stream
x-goog-hash: crc32c=GwtHNA==
x-goog-hash: md5=VYmSQ575BGyDT52rcJ5rGg==
x-goog-storage-class: STANDARD
Accept-Ranges: bytes
Content-Length: 2600867840
Server: UploadServer
Alternate-Protocol: 443:quic,p=1
Alt-Svc: quic=":443"; ma=604800; v="30,29,28,27,26,25"

Note the “Accept-Ranges” header in there, which says that we can send byte ranges to it. Range headers let you implement the HTTP equivalent of the operating system’s random-access reads (i.e. the io.ReaderAt interface).

So it would theoretically be possible to pick and choose which bytes we download from the web server, in order to download only the parts of a tarfile that have the metadata (table of contents) in it.

As an aside: the fact that Google is serving raw tar files here, and not tar.gz files, makes this possible. A tar.gz file does not have metadata available at predictable places, because the location of file n’s metadata depends on how well or badly file n-1’s data was compressed. Why is Google serving tar files? Because the majority of the content of the files is pre-compressed, so compressing the tar file wouldn’t gain much. And maybe they were thinking about my exact situation… though that’s less likely.

Now we need an implementation of the tar file format that will let us replace the “read next table of contents header” part with an implementation of read that reads only the metadata, using an HTTP GET with a Range header. And that is where Go’s archive/tar package comes in!

However, we immediately find a problem. The tar.NewReader function takes an io.Reader. The problem with an io.Reader is that it does not give us random access to the resource, the way io.ReaderAt does. It is implemented that way because it makes the tar package more adaptable. In particular, you can hook the Go tar package directly up to the compress/gzip package and read tar.gz files — as long as you are content to read them sequentially and not jump around in them, as we wish to.

So what to do? Use the source, Luke! Go dig into the Next method, and look around. That’s where we’d expect it to go find the next piece of metadata. Within a few lines, we find an intriguing function call, to skipUnread. And there, we find something very interesting:

// skipUnread skips any unread bytes in the existing file entry, as well as any alignment padding.
func (tr *Reader) skipUnread() {
    nr := tr.numBytes() + tr.pad // number of bytes to skip
    tr.curr, tr.pad = nil, 0
    if sr, ok := tr.r.(io.Seeker); ok {
        if _, err := sr.Seek(nr, os.SEEK_CUR); err == nil {
            return
        }
    }
    _, tr.err = io.CopyN(ioutil.Discard, tr.r, nr)
}

The type assertion in there says, “if the io.Reader is actually capable of seeking as well, then instead of reading and discarding, we seek directly to the right place”. Eureka! We just need to send an io.Reader into tar.NewReader that also satisfies io.Seeker (thus, it is an io.ReadSeeker).

So now go check out the package and see how we accomplish that.

This package implements not only io.ReadSeeker, but also io.ReaderAt. Why? Because after investigating a remote tarfile, I realized I also wanted to look at a remote zip file, and to do that I needed to call zip.NewReader, which takes both an io.ReaderAt and the size of the file. Why? Because the zipfile format has its table of contents at the end of the file, so the first thing it does is to seek to the end of the file to find the TOC.

A command-line tool to get the table of contents of tar and zip files remotely is in remote-archive-ls.

Dynamic DNS circa 2016

In the old days, if you had an ISP that changed your IP address all the time but you wanted to run a server, you used dynamic DNS, i.e. a hacky script talking to a hacky API on a hacky DNS provider.

These days, if you bring up a cloud server from time to time to work, it is likely to get a different IP address. But you might want a DNS record pointing at it so that it is convenient to talk to.

Same problem, different century.

Here’s my solution for a GCE server and a domain fronted by CloudFlare.

It has values hard-coded in it, so YMWDV (your mileage will definitely vary).

The most important thing when go-fuzzing

The most important thing to know, when you are using go-fuzz, is that the cover metric should be increasing.

I didn’t know that, and I wasted one 12-hour run of fuzzing because my fuzzing function was misbehaving in a way that made it return the same useless error for every input. That meant that no matter what go-fuzz mutated in the input, it could not find a way to explore more code, and so could not find any interesting bugs. It was trying to tell me this by not incrementing the cover metric it was reporting.

Don’t do what I did. Watch that cover is going up before you leave go-fuzz to run for hours and hours.
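For reference, a go-fuzz target looks like this. The parse function is an invented stand-in, but the shape of Fuzz and the meaning of its return value are go-fuzz's: return 1 when the input got somewhere interesting, 0 otherwise, which gives the mutator a gradient to climb.

```go
package main

import "fmt"

// parse is a stand-in for whatever the fuzz target exercises. The important
// property is that different inputs take different code paths; my setup bug
// made every input fail identically before reaching this point, so coverage
// never grew.
func parse(data []byte) error {
	if len(data) < 2 {
		return fmt.Errorf("short input")
	}
	if data[0] != 'M' || data[1] != 'Z' {
		return fmt.Errorf("bad magic")
	}
	return nil
}

// Fuzz has the signature go-fuzz expects: func(data []byte) int.
func Fuzz(data []byte) int {
	if err := parse(data); err != nil {
		return 0
	}
	return 1
}

func main() {
	fmt.Println(Fuzz([]byte("MZ...")), Fuzz([]byte("x"))) // 1 0
}
```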

Learning Swift, sans Xcode

Say you are learning Swift. And like a good fanboi, the first thing you do is update to the latest and greatest, because that’s what you do when you are a nerd.

But you live in Osh, Kyrgyzstan. You have bitchin’ FTTH from Unilink, but access outside of Kyrgyzstan is still limited by the great firewall that Putin has put up in Moscow or whatever. I don’t know, but it’s slow as hell.

So you want to learn Swift, but Xcode is out of commission because it is upgrading. Well, sort of out of commission. It gives an “I’m upgrading” message, but it is still there in /Applications/

So you can use this script to call Swift as an interpreter, and then you can learn Swift in Emacs, where you should be programming anyway, YOU FOOL.



#!/bin/sh
# xc is the Xcode app bundle; this is the usual location, adjust to taste.
xc=/Applications/Xcode.app/Contents

$xc/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift \
 -frontend \
 -interpret "$1" \
 -target x86_64-apple-darwin15.2.0 \
 -enable-objc-interop \
 -sdk $xc/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk \
 -module-name "$(basename "$1" .swift)"
rc=$?

echo "Exit code: $rc"

HTTP/2: Thanks Cloudflare and Go!

Look what happened today:

2015/12/04 11:38:07 fetching
2015/12/04 11:38:08 {200 OK 200 HTTP/2.0 2 0 map[Server:[cloudflare-nginx] Date:[Fri, 04 Dec 2015 05:38:08 GMT] Content-Type:[text/html] Set-Cookie:[__cfduid=d3a3ea49ee46eb6a6803e2eb7f597e26e1449207488; expires=Sat, 03-Dec-16 05:38:08 GMT; path=/;; HttpOnly] Vary:[Accept-Encoding] Cf-Ray:[24f529d18893372c-ARN]] 0xc8203bbf60 -1 [] false map[] 0xc8200be000 0xc8206cc420}

Thank you Go 1.6 and Cloudflare. You guys are bringing my website into the bright future of 2016 with no help at all from me. 🙂

Industrial-scale power storage and waste heat

There will, eventually, be a giant wind farm above my house. I say eventually because though Switzerland is not immune from NIMBYism, our court system deals efficiently enough with oppositions so that if something is allowed by law (zoning laws, eco-protection laws, etc) then it does go through. The opposition (and there’s always opposition) does a few court challenges, it goes up a couple layers, sometimes to the supreme court, and the court rather quickly says, “It’s legal, shut up. If you don’t like it, change the laws, don’t come begging us to do so.”

The opposition have two complaints, and they choose which one to talk about depending on the context. If they are going for “shock and awe” they use Photoshopped pictures to show how “ugly” the windmills will be. I’m suspicious of their Photoshopping, because though I think the relative sizes are correct, I don’t think the visibility (i.e. the brightness of the windmills themselves and the reduction in visibility due to natural haze) in their pictures is correct. Whatever; it’s true, they are giant industrial installations in areas previously used only for grazing and milking cows. But we should recall that raising and milking cows is itself a giant industrial operation, with a twice-daily milk run by a diesel-powered truck through this scenic wonderland. Some of the milking barns even run off of polluting diesel generators…

If you tell them, “I don’t mind the windmills, they look like progress to me” (and I have!), then the opposition falls back onto their second line of defense. They say, “windmills do not produce energy when it is needed, so they can’t replace nuclear”.

So, first, that’s a straw-man. No one is talking about nuclear here; if we were we wouldn’t agree anyway because I’m pro-nuclear. I consider nuclear power to be green energy (and the founder of Greenpeace does as well). What I’m interested in is eliminating fossil fuels from electrical generation, and from transport use.

The Tesla battery technology for utilities is the missing part of the equation. Batteries shift energy from the peak generation time to the time when the energy is needed, making it possible for windmills and solar to contribute to the baseline load that existing dirty electric plants provide. But so far they have been tested in giant, ugly, industrial installations, which even I would not like to see here in my backyard.

So that got me thinking about how the Mollendruz windmills could be hooked up to batteries.

Batteries heat up when they are charged, and part of what’s special about Tesla’s innovations is integrating cooling and fire protection into the heart of their battery packs. So a utility-scale battery installation will create utility-scale waste heat. In current utility-scale batteries, this appears to be dumped into the atmosphere. But district heating is a mature and well-respected technology in Switzerland. Wouldn’t it be interesting to put that waste heat to use heating our schools and government buildings?

I don’t know how near batteries need to be to windmills to be efficient. Windmills generate alternating current, because they are a rotating power source, and AC travels at lower loss than DC. So it seems that putting the batteries where the heat is needed would be OK.

But the batteries are still ugly. What can we do about that? If you are harvesting the heat from them, and they are already engineered to be installed into moving cars and houses, the utility scale batteries can probably be installed indoors. In Switzerland, when we need to put things indoors and we want the land to remain pretty, we put them underground. Near the Col du Mollendruz there’s an old military fort called Petra Felix. I wonder if there’s enough room inside of it to hold the batteries?

Hacking cars and fixing them

A few years ago, I read an academic paper on how to hack cars. Today news came out that what was previously demonstrated via direct access is also possible over the air.

I thought it would be fun to look at the firmware update file that fixes this, to see what format it is in, what’s in it, etc. To get an update for 2014 Jeep Cherokees, you need a VIN. It turns out a used car sales website posted the VINs of their inventory on their website, so I found one: 1C4PJMDB6EW255433

Then you put it into the UConnect website, which is a typical late-2000s travesty of over-engineering. It wants you to use some plugin from Akamai to download the file, but in small print tells you that you can also click on this link. Of course, there’s JavaScript insanity to prevent you from finding out what the link is. It is delivered via TLS, which is interesting. It is a 456 MB zip file. It also has a user-specific token on the end of it, and without that you get a 404 when you try to fetch it.

The zip file has an ISO inside of it:

$ unzip -l 
  Length     Date   Time    Name
 --------    ----   ----    ----
583661568  06-23-15 14:48   swdl.iso
 --------                   -------
583661568                   1 file

The ISO file is slightly bigger than the zip file, at 583 megs:

$ ls -l swdl.iso 
-rw-r--r--  1 jra  staff  583661568 Jun 23 14:48 swdl.iso

Inside the ISO file is:

dr-xr-xr-x  2 jra  staff  2048 Jun 23 16:47 bin
dr-xr-xr-x  2 jra  staff  2048 Jun 23 16:47 etc
dr-xr-xr-x  2 jra  staff  2048 Jun 23 16:47 lib
-r-xr-xr-x  1 jra  staff  1716 Jun 23 16:47 manifest
dr-xr-xr-x  4 jra  staff  2048 Jun 23 16:47 usr

And that manifest file? It is Lua, which is apparently read into the updater via execution.

So right. The updater itself apparently gives an attacker execute privs in the address space of the Lua interpreter via an unsigned file.

Jeeze, Chrysler, that’s like Game Set and Match, and I haven’t even looked into bin/ yet. WTF?

Update after reading some more…

Well, something interesting happens in ioschk.lua, where the second block of 64 bytes from the ISO is read and then fed to “openssl rsautl”, using a public key that is on the device. But ioschk.lua is loaded from the ISO itself, and is called by, from the ISO. So it seems like if you want to make your own ISO, you need to remember to make’s call to ioschk.lua a no-op.

Other interesting things I found while trolling around… they have the Helvetica Neue font, and right next to it a license file saying, “for evaluation only”. Jeeze, sure hope that Harman have paid up, or else they might have a bill in the mail.

There’s a file called which does the necessary to put the device on the Ethernet if a Linksys USB300M adapter is plugged in. It has some checks in it for an internal development mode, but those would be easy to bypass if you can in fact edit the ISO image at will.

So, all in all, it would be fun to play with if I had a Jeep. But I’m still planning on getting a Tesla.