The Podcast/Spam nexus

I listen to a lot of podcasts. They are virtually all sponsored by either MailChimp or Emma (some new MailChimp clone).

What I want to know is why spammers (even opt-in, targeted email marketing solutions are spammers as far as I can tell) find that podcasts are listened to by their target market (i.e. other spammers).

Hmm. Maybe I should be spamming more…

Golang on the Geode processor

If you are trying to use Golang on a PC Engines Alix board, you need to be careful that all the Go code you are using is compiled with the GO386 environment variable set to 387. The Geode processor does not support the SSE instructions.

If you have Linux directly on the Alix, you’d not run into this, because if GO386 is not set, the compiler auto-detects the right thing. But if you are, for example, running OpenWRT on the device and cross-compiling on another machine, you might run into this if some of your Go code (for example, the standard library) was compiled without GO386 set.

The symptom that you’ve given yourself this problem is a traceback like this:

# ./sms.test
SIGILL: illegal instruction

goroutine 1 [running, locked to thread]:
/home/jra/go/src/math/pow10.go:34 +0x19 fp=0x1862df64 sp=0x1862df60

The solution is that when you initialize cross compilation, you need to do it like this:

$ cd $HOME/go/src
$ GO386=387 GOARCH=386 ./make.bash --no-clean

And all of your compiles, like go test -c, need to have the GO386 environment variable set as well, thus:

$ cd $GOROOT/src/
$ GO386=387 GOARCH=386 go test -c

(If your code doesn’t use floating point, you might dodge a bullet if you forget to set GO386. But don’t say I didn’t warn you…)

Type safety saves the day again

Recently, I was writing some code to check the SHA256 of a buffer. Simple, right? All you have to do is take the hash you have, get the hash of the data, and compare them. But then you think, “oh, what a drag, I gotta compare two []byte values for equality, and == doesn’t work on them”.

And then you think, “oh, I’ll use reflection!” And now you have two problems.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"reflect"
)

func main() {
	in := []byte("a test string")
	hashHex := "b830543dc5d1466110538736d35c37cc61d32076a69de65c42913dfbb1961f46"
	hash1, _ := hex.DecodeString(hashHex)
	hash2 := sha256.Sum256(in)
	fmt.Println("equal?", reflect.DeepEqual(hash1, hash2))
}


When you run that, you find out that even though the hash is most assuredly right, reflect.DeepEqual returns “false”.

If you’ve just written this as new code, in the context of some other new code that gets and parses the hash, you might not go looking at that reflect.DeepEqual until you’ve checked and rechecked everything else. And by then you’ll be really annoyed.

You, the reader of this blog, can probably tell you’re being set up by now. And you are.

The thing about the reflect package is that it is all about runtime type checking. And the thing about runtime is that it happens later, when you, the programmer, and the compiler are not available to apply your respective unique talents. So bad things can happen, and the compiler can’t save you (and it goes without saying that you can’t save yourself, because you are a fallible human).

It turns out the type of hash2 is not []byte, as I led you to think. Go read the docs and you’ll see that its type is [32]byte.

While you are in the docs, you might notice bytes.Compare. It is the solution to our problem. Because it uses static types instead of interface{} like the reflect package, the compiler is going to be able to help you use it correctly. And when you try to use it:

package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"reflect"
)

func main() {
	in := []byte("a test string")
	hashHex := "b830543dc5d1466110538736d35c37cc61d32076a69de65c42913dfbb1961f46"
	hash1, _ := hex.DecodeString(hashHex)
	hash2 := sha256.Sum256(in)
	fmt.Println("equal?", reflect.DeepEqual(hash1, hash2))
	fmt.Println("equal?", bytes.Compare(hash1, hash2) == 0) // does not compile!
}


This gives the helpful error message cannot use hash2 (type [32]byte) as type []byte in argument to bytes.Compare. Which, at a stroke, explains why reflect.DeepEqual is screwing up: the first check it is making is “are these the same type?” And the answer is no, so hash1 and hash2 are not equal. Even though they are.

In order to turn hash2 into a []byte so that it can be the second argument to bytes.Compare, you just need to take a full slice of it, changing it to hash2[:].

The final, working hash compare is here.

AR.Drone 2 camera access

There is lots of information out on the net about how to access the camera in the AR.Drone, but it is all for the original model.

In the AR.Drone 2, the cameras have been replaced and upgraded, so the V4L settings that worked to get data out of the original camera need to be updated as well.

The front camera is on /dev/video1. If you are talking to V4L directly via ioctls and all that jazz, you need to request format V4L2_PIX_FMT_UYVY, width 1280 and height 720. UYVY format uses 2 bytes per pixel, so a full image is 1843200 bytes. fwrite those bytes from the mmap buffer into a file.

Or, from the command line, use yavta: yavta -c1 -F -f UYVY -s 1280x720 /dev/video1

Bring the raw file back to your Ubuntu laptop using FTP. Use “apt-get install dirac” to get UYVYtoRGB. Then use “UYVYtoRGB 1280 720 1 < in.uyvy | RGBtoBMP out .bmp 3 1 1 1280 720” to turn in.uyvy into out001.bmp.
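If you’d rather not install dirac, the pixel-format conversion is simple enough to do yourself. Here’s a sketch in Go; the integer BT.601-style coefficients are my choice, not anything from the drone docs:

```go
package main

import "fmt"

func clamp(v int) byte {
	if v < 0 {
		return 0
	}
	if v > 255 {
		return 255
	}
	return byte(v)
}

// uyvyToRGB converts a UYVY buffer (2 bytes/pixel) into packed RGB
// (3 bytes/pixel). Each 4-byte group U Y0 V Y1 encodes two
// horizontally adjacent pixels that share one chroma sample.
func uyvyToRGB(in []byte) []byte {
	out := make([]byte, 0, len(in)/2*3)
	for i := 0; i+3 < len(in); i += 4 {
		u := int(in[i]) - 128
		y0 := int(in[i+1])
		v := int(in[i+2]) - 128
		y1 := int(in[i+3])
		for _, y := range []int{y0, y1} {
			out = append(out,
				clamp(y+(351*v)/256),      // R
				clamp(y-(179*v+86*u)/256), // G
				clamp(y+(443*u)/256))      // B
		}
	}
	return out
}

func main() {
	// One UYVY group: two gray pixels (Y=100, chroma centered).
	fmt.Println(uyvyToRGB([]byte{128, 100, 128, 100})) // prints [100 100 100 100 100 100]
}
```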

You can’t get an image from the camera while program.elf is running. You need to kill the respawner and program.elf itself with “kill -9”.

The downward facing camera is on /dev/video2. It is the same format, but 320x240. It gives bad data when you first connect to it, so you need to skip at least one frame. Here's a command that worked for me: "yavta -c5 --skip 4 -F -f UYVY -s 320x240 /dev/video2". The data ends up in frame-000004.bin. You need to adjust the width and height arguments to UYVYtoRGB and RGBtoBMP too, of course.

When I get time, I'll work on the next steps to automating this into Godrone.

Dual scheme URLs

I just made this blog HTTP and HTTPS, thanks to Cloudflare.

But that made me realize that lots and lots of internal links in the HTML output of my WordPress point back to the HTTP version of the site.

Part of the solution to this is to install the HTTP plugin in WordPress, which fixes some of the mess. But some of the URLs in my posts need to be fixed too.

The best practice is for links inside of a website to “inherit” the context where they are eventually found, by keeping them as relative as possible. Thus it’s better to use “/tags/geeking” than the same URL written out absolutely, because if you want a test and a production version, or you rename the blog, or whatever, you’ll be happier later if the links are not absolute when they are first typed.

And if you want your website to adapt to having both an HTTP and an HTTPS version, you really want relative links because that means that the web browser will choose the correct scheme, keeping HTTPS sessions inside the HTTPS version of the website.

But what if you want to refer to an off-site resource? And that resource exists in both HTTP and HTTPS versions? Then you need to give a hostname and path (because it is no longer relative to your hostname), but not a scheme (so that the scheme is relative to the context where the relative URL is found).

Such beasts exist. They look weird, but they exist and are handled correctly by modern browsers (I guess there are some old browsers that choke on them). They look like “//hostname/path”. That says, “if you found this on an HTTP page, go get it from hostname port 80. If you found this on an HTTPS page, go get it from hostname port 443.” Whee!

Which reminds me of the HP Apollo lab in Harvey Mudd (where I was working on NeXT and Ultrix machines, not HP ones, thankfully). And it also reminds me of a taxi ride to a conference center in San Jose where the guy who invented the HP network filesystem syntax told me that his invention of //host/path accidentally ended up inside of HTTP URL’s.

Cloudflare Universal SSL totally, completely rocks

Cloudflare was already my favorite non-Google internet company because I’m a Golang fan boi and Cloudflare is vocal about their love of Go.

So when I heard that Cloudflare was willing to help me, for free, put SSL on my website (which I’ve been meaning to do since like forever), I was ready to go for it.

First the rocks. Well, it’s free, for little dinky domains like mine. And that’s a hell of a great way to teach techies like me how their system works. I’d happily sell it to a client who asked me about it. The signup process is fast and easy and interesting. And it works: this site loads via SSL.

But it sucks too: after turning Cloudflare on, OpenID login on my blog stopped working.

But it rocks again: within seconds of turning it off from their great control panel UI (which sucks in one small way: the “submit” button is way down the page, a long long way from where you edit the first few entries on your DNS) my blog let me log in with my OpenID URL.

So then I looked a little bit and I discovered that there’s a solution to my problem. It worked great, and this site is back to SSL via Cloudflare right now.

Thanks Cloudflare!

Unzip -c is a thing, and it’s good (as long as you use -q too)

I just fetched a Raspbian disk image via BitTorrent. It is a .zip instead of the .gz I would have chosen myself.

If you have a .zip and you don’t want to do a temporary uncompress of it to get the .img to use with dd, you can use “unzip -q -c” to get the contents of the zip file sprayed onto stdout. Then you can pipe it into dd.

The -c argument of Linux unzip is only documented in the man page. And they neglect to mention that unless you also use the -q option, it will mix filenames and other useless info into stdout, making your disk image useless.

So “unzip -q -c image.zip | dd bs=4M of=/dev/sdb” for the win. (Unless your boot disk was /dev/sdb, in which case… umm, sorry.)

Strange characters in IP addresses

A long time ago, I worked for WebTV. The part of WebTV doing filtering for parental control was comparing IP addresses as strings. I managed to evade the parental controls when I noticed that the IP address parser was using an atoi that treated leading 0’s as octal and leading 0x’s as hex. By converting the octets of one of the blocked IP addresses into octal, I tricked the blacklist checker into letting me access the naughty bits.

(It was another time when it made sense to be blocking by IP address at all. But this was 1996, so, it was by definition another time.)

Today while reading some source code at work, I noticed that Cisco IOS accepts IP addresses of the form (int(0-255), dot) * 4. Which is correct, except that (probably later) someone defined int(0-255) as “zero or one plus character, followed by digits 0-9 one or more times”. Which means that IOS thinks “10.+20.30.40” is a valid IP address.


Dell and the NSA

While I was reading this blog about how NSA’s bad-BIOS malware probably works, I was struck by a “coincidence”: Dell does a significant amount of government contracting work. In fact, Ed Snowden worked for Dell at one point. And NSA’s bad-BIOS targets the RAID cards in Dell servers.

Now, Dell servers are widely deployed. I’ve used them in several jobs, for example. So it’s not unreasonable that NSA would target them, to get the best bang for the buck. But it also seems possible that in order to achieve the things Dell’s executives promised to NSA executives in fancy sales calls, some Dell engineers would find themselves using what they know about Dell servers to write bad-BIOS malware to attack those very servers.

Which made me think about my company, Cisco. We publicly said we don’t put in backdoors. But we also have a big sales organization staffed with people with clearances who make special products for government organizations. It isn’t hard to imagine, especially with the revolving door between military, intelligence and defense contractors, that some of those people would find their allegiances split between intelligence people asking them for hints from the source code, and Cisco’s Code of Business Conduct.

As Bruce Schneier reminds us, once you start wondering if you can trust your suppliers, it is very hard to stop wondering.

Medium, what’s up with comments?

Why do you require me to use Twitter or Facebook to comment? With all your respect for language, ideas, and design, is it really possible that you think people who choose not to use either of those services don’t have anything useful or interesting to add to your conversations?