Doing it the hard way

In my last post I offered to point out some things in Golang Challenge #2 submissions that struck me as “worthy of receiving a (polite) rebuke in code review”, otherwise known as WTFs.

This is opt-in abuse. I don’t mind abusing my colleagues, when I know I can take them out for lunch later and buy them a beer. Hassling random Golang Challenge entrants is not my style. But some have decided they are up for it, even if I’m remote and can’t buy them a beer afterwards.

io.Toys

First up is Jason Clark. Jason’s entry is quite good, from an idiomatic Go point of view. He fell prey to the “under-testing” problem I mentioned, in that his entry does not handle message framing, or what happens if the buffer passed to Read is smaller than the incoming message. But what I’m really looking for, for the purposes of this post, are the WTFs. And seeing io.TeeReader tipped me off that something interesting was about to happen:

// Write encrypts the contents of p and writes it to the underlying io.Writer.
func (w *SecureWriter) Write(p []byte) (int, error) {
	// Each message starts with a generated nonce. Only write the nonce once.
	if w.nonce == nil {
		var nonce [24]byte
		if _, err := io.ReadFull(io.TeeReader(rand.Reader, w.w), nonce[:]); err != nil {
			return 0, ErrNonceWrite
		}
		w.nonce = &nonce
	}

	// encrypt and send the message
	_, err := w.w.Write(box.SealAfterPrecomputation(nil, p, w.nonce, w.key))
	if err != nil {
		return 0, err
	}
	return len(p), nil
}

So, the first problem here is that this is just wrong. The docs for NaCl are clear that security is compromised if you use the same nonce for two different messages. But second, the use of io.TeeReader here is overly clever. I didn’t even know what io.TeeReader did, but the docs are clear enough: “TeeReader returns a Reader that writes to w what it reads from r”. So what that line does is get the nonce from crypto/rand, and send the nonce down the unencrypted channel (correct!) while also sending it into nonce, to be saved for later as w.nonce (not correct!). To reason about that line as a reader, you need to keep two unrelated processes in mind at once (saving the nonce for later, and sending it down now), and then ask yourself if the error handling is correct in both of those cases.

So, if ErrNonceWrite is returned, was a nonce sent down the connection anyway? What was it?

The simpler way of doing this is in two steps. First read into a local called nonce, and check the errors on reading. Once you are sure you have a good nonce, send it down with w.Write(). Checking errors correctly here is critical, because if you don’t manage to get a new nonce, you must not send the message, since you might be sending it with a nonce of all zeros. The second time you’ve done this, you’ve compromised your security.

There’s nothing wrong with using the nifty toys in the io package. And in fact, later in his submission, Jason uses io.MultiWriter in an interesting way. I can’t say that I’m personally 100% confident in the technique of using io.Copy to implement an echo server. I understand the theory, but it seems to me you become dependent on the right magic happening inside of io.Copy, instead of making sure that the logic of your program is front and center, where your reader can see it.

Always remember, code is for reading. When you work on a big system, with lots of coworkers, they will read and re-read your code for months and years after you’ve written it. Code is written once and read a thousand times. Make sure what’s there to read explains what you are doing and what you are depending on.

And now for a random global problem…

Trevor Arjeski got in touch and asked if he had any WTFs. By chance, the first file I clicked on was secure_io.go, and within the first few lines I knew, “we have a winner!”

In a crypto system, the quality of your random numbers matters. This has been known by professional cryptographers for decades, and to the rest of us since 1996. So when I saw that Trevor was importing “math/rand” instead of “crypto/rand”, I knew that he had a big problem coming.

Worse, I read down a few lines and saw the global “seeded”. The problem with this global is that access to it is not protected from a data race. There are three options for how to fix this:

  • Nuke it: This problem calls for using the “crypto/rand” package, which does not need seeding by the Go programmer.
  • Move it to init: Lazy initialization, controlled by a global flag can sometimes be more simply moved to explicit initialization, before main starts running. Go guarantees that init functions are run in a single-threaded context (the spec says: “Package initialization—variable initialization and the invocation of init functions—happens in a single goroutine, sequentially, one package at a time.”)
  • Use sync.Once to arrange for something to be lazily initialized once and only once.
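For the third option, here is a sketch of what sync.Once looks like in practice (the seedCount variable is only there to demonstrate that the initialization body runs exactly once, even under concurrent use):

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

var (
	seedOnce  sync.Once
	seedCount int // how many times the init body actually ran
)

// randInt seeds the generator exactly once, no matter how many
// goroutines call it concurrently.
func randInt() int {
	seedOnce.Do(func() {
		rand.Seed(time.Now().UnixNano())
		seedCount++ // safe: Do guarantees mutual exclusion and a happens-before edge
	})
	return rand.Int()
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); randInt() }()
	}
	wg.Wait()
	fmt.Println("init ran", seedCount, "time(s)") // init ran 1 time(s)
}
```

(Of course, for this particular challenge the first option is still the right one: use crypto/rand and delete the seeding entirely.)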

I’ve said before that good testing matters. But to catch this data race, Trevor should have written a test that concurrently created multiple SecureWriters and written into them. Then he would have had to run “go test -race”. It is wonderful that Go has the race detector, but it is better to never write data races to begin with. And a good way to not write them is to not allocate any shared data.

Get in the habit of asking hard questions about every global you see: when is it initialized? Are there writes to it? How are they sequenced with respect to reads?

Want some practice? Move on to lines 14 and 15… :)

Addicted to encoding/binary?

Kristoffer Berdal got in touch and asked me to take a look. Kristoffer made a choice for readability that would eventually be a performance problem, if he wanted to benchmark and tune his implementation.

In his Writer we see a nice symmetrical list of calls to binary.Write. Reading through it, it makes the exact format of the protocol completely clear. So score points for self-documenting code!

But there’s a cost: check out the signature of binary.Write: func Write(w io.Writer, order ByteOrder, data interface{}) error

That means that each time we call into binary.Write, we’ll be doing a type conversion from the fundamental type of the third argument into an interface{}. This may involve an allocation, and once control gets inside of binary.Write, it will end up in the “fall back onto reflection” code path, because the nonce and (usually) the message are not less than 8 bytes long. Reflection is powerful and magical, and a strong point of Go. But it is not fast, when the alternative is not using reflection.

A better choice for writing the nonce and the message, which are simple []byte types, would have been to use the Write method of the underlying io.Writer. (Also, this saves the reader of puzzling over what the meaning of the little endian representation of a [24]byte is…)

One thing that’s interesting about Go is that it can be highly friendly and easy like Python, or highly precise and speedy like C. Choosing when to do one or the other (or finding a way to do both) is the trick that requires experience. It is always important to first get it right, then get it clear, then measure, and only if measurement says to do so, make it fast. In a case like this, Kristoffer has written it right, and clearly. But by turning to a function that uses interface{} where there was another option, he left some performance on the table. Worse, he might have left behind a trap that a colleague will need to deal with later when the prototype goes to production and starts handling gigabits of data per second.

Thanks

Thanks to those who gave me permission to pick on them. I hope that they’ve enjoyed the 15 minutes of fame, and that we’ve all learned something useful.

Golang Challenge 2 comments

I’ve just finished evaluating 40 of the 105 entries to the Golang Challenge #2. The organizer, Satish, asked me to write up my thoughts.

The main similarity I noticed in the entries was not enough testing. The vast majority of the entries used the tests provided with the challenge unmodified. Taking the given tests without thinking critically about them led people to make a number of serious mistakes, over and over again. The majority of the entries I graded passed the tests, but would not have stood up to production use. Of the 40 I graded, only 2 or 3 would have received a “ship it” from me in my day job. Those were (not by chance) the ones with extra tests beyond the ones provided in the challenge.

A key part of growing into a mature programmer is to understand that the problem as it arrives in your “todo basket” is rarely correctly specified. This challenge is no exception. The challenge asked participants to “make the given tests pass”, but didn’t mention if you could or should write extra tests. The bottom line is that you can always write extra tests, and by doing so, you will discover where the spec is weak, or wrong. You will also decide for yourself where the “edge” of the task is; thinking about tests you can and should write will help fix the size of the job at hand.

What I’m describing here is test-driven development. Go’s tools make it easy to stay on the golden path of TDD; you just need to decide that you believe in its benefits, and then teach yourself the discipline to do it. In TDD, the tests push the implementation away from the hacky thing that seems to work towards the robust thing that actually does work, that lives alongside of proof that it works (the tests) and that will keep working even as the world changes around it (because you’ll fix the code when the tests start failing).

Two deficiencies in the given tests were that they did not test legitimate network behaviors like short reads, and they gave the impression that the protocol should be one-shot (i.e. set up a TCP connection, exchange public keys, echo one message and close). Given that the problem statement asked for an io.Reader and an io.Writer that encrypted data, it stands to reason that a good solution would be able to handle more than one message, since io.Reader/io.Writer are not one-shot interfaces.

A common problem in TCP servers is short reads. Just because the sender put 1000 bytes into the TCP stream via one write system call does not mean you are guaranteed to receive those same 1000 bytes in a matching read call. To get that behavior, you’d need a reliable transaction protocol like SCTP. With TCP, all you are guaranteed is that the same bytes will arrive at the far end, in the same order.

To see where a short read can come from, we need to look at it one packet at a time. Imagine what happens if the sender gives the kernel 2000 bytes, then sleeps on the read of a reply. If the MTU on the link is 1500 bytes, the TCP implementation might send out one packet with about 1500 bytes in it, and a second with 500 (the “about” is hand waving for the TCP/IP overhead we’re ignoring here). Now imagine that the second packet is lost, and the replacement is lost as well. On the third try it gets through. On the receiving end, what we see is 1500 bytes which arrive, followed by a pause and 500 more. If the pause is long enough, the kernel will decide to stop making the application wait around for the first 1500 bytes and will pass them up. The reader, blocked in the recv() system call, will get unblocked and receive the 1500 bytes. The next time it blocks on recv(), it will get the next 500 bytes. This is one of many explanations for how you can get a short read. There are no guarantees, and code that expects them is wrong and will fail in real life (but not in your fast, non-lossy testing environment; in particular, if you test via localhost, you’re very unlikely to ever see short reads).

But short reads could come from someplace much simpler… from the tests. Consider how your challenge entry would have behaved if it were reading from an io.Reader that intentionally returned only one byte at a time.

Given the reality of short reads, it is up to the application to handle framing the bytes into messages. The majority of entries did not take this into consideration. The solution is that each message needs to be sent with its length. And when SecureReader reads from the underlying io.Reader, it needs to use something like ioutil.ReadAll to make certain all the bytes arrive, no matter how many calls to the underlying io.Reader it takes to get them all. But then that opens up the question of what to do when the connection hangs. Do you time out? How can you cause ioutil.ReadAll to return early?

In Go, different pieces of the language and the standard library can be composed to make something better than any of them alone. A great way to read frames from the network is to prepend each frame with its length. You read the length using encoding/binary.Read, make an io.LimitedReader which is limited to that length, then give that LimitedReader to ioutil.ReadAll. You can handle timeouts by checking whether the underlying io.Reader satisfies an interface with SetReadDeadline in it (as the various connection types in package net do), and then using it to set a deadline. After the deadline passes, the underlying io.Reader will return a timeout error, which the LimitedReader will return to ioutil.ReadAll, which will cause it to return early, telling you that the read timed out.

The other thing I noticed is that even in a language designed for simplicity, there are a myriad of ways to do things the hard way. In other words, Go is not WTF-proof. *

The only solution to this problem is that you need to read lots and lots of code (way, way more than you write!). Read the highest quality code you can find. Code from the Go core team is a very good choice (the standard library itself and side projects by Andrew, Russ, and Brad). You can also read the reviews of potential new code for the Go core (read the golang-dev mailing list to find out where the reviews are). By watching other people get their screw ups corrected, you can avoid making them in the first place! It is much easier on the ego too, to watch other people get corrected for a mistake you would have made. :)

* In order to not single people out here in public I’m not going to talk about the specific WTF’s I saw. But if you’d be willing to have your code publicly mocked by me, drop me a line with a pointer to your challenge solution. If there’s a WTF in there, I’ll post about it and you’ll be infamous, and we can all have a good laugh… :)

Building Go 1.4 when the linker doesn’t know about build-id

Today at work, on a Redhat 5.5 machine, I tried to build Go 1.4.

This happened:


$ cd go1.4/src
$ ./all.bash
...snip...
# runtime/cgo
/usr/bin/ld: unrecognized option '--build-id=none'
/usr/bin/ld: use the --help option for usage information
collect2: ld returned 1 exit status

The solution is to retry without the “–build-id=none” option:

diff --git a/src/cmd/go/build.go b/src/cmd/go/build.go
index ad03239..ca45217 100644
--- a/src/cmd/go/build.go
+++ b/src/cmd/go/build.go
@@ -2436,13 +2436,21 @@ func (b *builder) cgo(p *Package, cgoExe, obj string, pcCFLAGS, pcLDFLAGS, cgofi
 	// --build-id=none.  So that is what we do, but only on
 	// systems likely to support it, which is to say, systems that
 	// normally use gold or the GNU linker.
+	retryWithoutBuildId := false
 	switch goos {
 	case "android", "dragonfly", "linux", "netbsd":
 		ldflags = append(ldflags, "-Wl,--build-id=none")
+		retryWithoutBuildId = true
 	}
 
 	if err := b.gccld(p, ofile, ldflags, gccObjs); err != nil {
-		return nil, nil, err
+		if retryWithoutBuildId {
+			ldflags = ldflags[0:len(ldflags)-1]
+			err = b.gccld(p, ofile, ldflags, gccObjs)
+		}
+		if err != nil {
+			return nil, nil, err
+		}
 	}
 
 	// NOTE(rsc): The importObj is a 5c/6c/8c object and on Windows

Just in case someone else is looking for it… :)

Go does not officially support RHEL before version 6, but it does seem to sort of work. This post explains why it doesn’t really work.

Go will make you a better programmer

The last line of Dave Cheney’s Gophercon 2015 India keynote is the best: “Go will make you a better programmer.”

It’s true. When I am programming Go, I never think, “OK, is this an OK shortcut to take here? Is this a play context? Is this a work context, but I can push off bounds checking on some other layer? Is this just a little local server, and I don’t care about hackers?”

These questions don’t exist, because the way Go and its standard library and its idioms work together, the right way — the easy way! — to write it is simply, clearly, and securely.

Go makes me a better programmer.

And it makes me increasingly intolerant of C and Python. This week alone at work I struggled with a crashing router from a missing C bounds check and late- and strangely-failing Python code from dynamic typing. Idiomatic Go code does not have those two failure modes, and my patience for them is waning fast as a result.

A Quick Go hack: lines-per-second

Today I wanted to compare how fast two builds were on two different machines. As a rough estimate, I just wanted to see how many lines per second were going into the log file. This is what I came up with:

package main

import (
        "bufio"
        "fmt"
        "os"
        "time"
)

func main() {
        scn := bufio.NewScanner(os.Stdin)

        lines := 0

        go func() {
                for {
                        fmt.Fprintf(os.Stderr, "%v lps\n", lines)
                        lines = 0
                        time.Sleep(time.Second)
                }
        }()

        for scn.Scan() {
                lines++
        }

}

This is a quick hack. It’s not production quality, because there is a data race on lines. I think if I had to fix that race up, I’d choose to use sync/atomic’s ops, because I’d only need to touch the two places where lines is accessed concurrently. A channel-based solution would be bigger and messier, not in tune with the minimal nature of this little program.

The equivalent program in Python or Perl or whatever would have been as short and sweet. That’s not the point. The point is that it was so easy to do in Go, I just did it in Go without really even considering an alternative.

The Podcast/Spam nexus

I listen to a lot of podcasts. They are virtually all sponsored by either MailChimp or Emma (some new MailChimp clone).

What I want to know is why spammers (even opt-in, targeted email marketing solutions are spammers as far as I can tell) find that podcasts are listened to by their target market (i.e. other spammers).

Hmm. Maybe I should be spamming more…

Golang on the Geode processor

If you are trying to use Golang on a PC-Engines Alix board, you need to be careful that all the Go code you are using is compiled while the GO386 environment variable is set to 387. The Geode processor does not support the SSE instructions.

If you run Linux directly on the Alix, you would not run into this, because when GO386 is not set, the compiler auto-detects the right thing. But if you are, for example, running OpenWRT on the device and cross-compiling on another machine, you might run into this if some of your Go code (for example, the standard library) was compiled without GO386.

The symptom that you’ve given yourself this problem is a traceback like this:

# ./sms.test
SIGILL: illegal instruction
PC=0x813db39

goroutine 1 [running, locked to thread]:
math.init·1()
/home/jra/go/src/math/pow10.go:34 +0x19 fp=0x1862df64 sp=0x1862df60
math.init()

The solution is that when you initialize cross compilation, you need to do it like this:


$ cd $HOME/go/src
$ GO386=387 GOARCH=386 ./make.bash --no-clean

And all of your compiles, like go test -c need to have the GO386 environment variable set as well, thus:


$ cd $GOROOT/src/github.com/jeffallen/linkio
$ GO386=387 GOARCH=386 go test -c

(If your code doesn’t use floating point, you might dodge a bullet if you forget to set GO386. But don’t say I didn’t warn you…)

Type safety saves the day again

Recently, I was writing some code to check the SHA256 of a buffer. Simple, right? All you have to do is take the hash you have, get the hash of the data, and compare them. But then you think, “oh, what a drag, I gotta compare two []byte values for equality, and == doesn’t work on them”.

And then you think, “oh, I’ll use reflection!” And now you have two problems.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"reflect"
)

func main() {
	in := []byte("a test string")
	hashHex := "b830543dc5d1466110538736d35c37cc61d32076a69de65c42913dfbb1961f46"
	hash1, _ := hex.DecodeString(hashHex)
	hash2 := sha256.Sum256(in)
	fmt.Println("equal?", reflect.DeepEqual(hash1, hash2))
}

The code on play.golang.org.

When you run that, you find out that even though the hash is most assuredly right, reflect.DeepEqual returns “false”.

If you’ve just written this as new code, in the context of some other new code that gets and parses the hash, you might not go looking at that reflect.DeepEqual until you’ve checked and rechecked everything else. And by then you’ll be really annoyed.

You, the reader of this blog, can probably tell you’re being set up by now. And you are.

The thing about the reflect package is that it is all about runtime type checking. And the thing about runtime is that it happens later, when you, the programmer, and the compiler are not available to apply your respective unique talents. So bad things can happen, and the compiler can’t save you (and it goes without saying that you can’t save yourself, because you are a fallible human).

It turns out the type of hash2 is not []byte as I led you to think. Go read the docs and you’ll see that its type is [32]byte.

While you are in the docs, you might notice bytes.Compare. It is the solution to our problem. Because it uses static types instead of interface{} like the reflect package, the compiler is going to be able to help you use it correctly. And when you try to use it:

package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"reflect"
)

func main() {
	in := []byte("a test string")
	hashHex := "b830543dc5d1466110538736d35c37cc61d32076a69de65c42913dfbb1961f46"
	hash1, _ := hex.DecodeString(hashHex)
	hash2 := sha256.Sum256(in)
	fmt.Println("equal?", reflect.DeepEqual(hash1, hash2))
	fmt.Println("equal?", bytes.Compare(hash1, hash2) == 0)
}

The code on play.golang.org.

This gives the helpful error message cannot use hash2 (type [32]byte) as type []byte in argument to bytes.Compare. Which, at a stroke, explains why reflect.DeepEqual is screwing up: the first check it is making is “are these the same type?” And the answer is no, so hash1 and hash2 are not equal. Even though they are.

In order to turn hash2 into a []byte so that it can be the second argument to bytes.Compare, you just need to take a full slice of it, changing it to hash2[:].

The final, working, hash compare is here.

AR.Drone 2 camera access

There is lots of information out on the net about how to access the camera in the AR.Drone, but it is all for the original model.

In the AR.Drone 2, the cameras have been replaced and upgraded. So settings for v4l that worked to get data out of the camera need to be updated as well.

The front camera is on /dev/video1. If you are talking to V4L directly via ioctls and all that jazz, you need to request format V4L2_PIX_FMT_UYVY, width 1280 and height 720. UYVY format uses 2 bytes per pixel, so a full image is 1843200 bytes. fwrite those bytes from the mmap buffer into a file.

Or, from the command line, use yavta: yavta -c1 -F -f UYVY -s 1280x720 /dev/video1

Bring the raw file back to your Ubuntu laptop using FTP. Use “apt-get install dirac” to get UYVYtoRGB. Then use “UYVYtoRGB 1280 720 1 < in.uyvy | RGBtoBMP out .bmp 3 1 1 1280 720” to turn in.uyvy into out001.bmp.

You can’t get an image from the camera while program.elf is running. You need to kill the respawner and then program.elf itself with “kill -9”.

The downward facing camera is on /dev/video2. It is the same format, but 320x240. It gives bad data when you first connect to it, so you need to skip at least one frame. Here’s a command that worked for me: “yavta -c5 --skip 4 -F -f UYVY -s 320x240 /dev/video2”. The data ends up in frame-000004.bin. You need to adjust the width and height arguments to UYVYtoRGB and RGBtoBMP too, of course.

When I get time, I’ll work on the next steps to automate this in Godrone.