Building Go 1.4 when the linker doesn’t know about build-id

Today at work, on a Red Hat 5.5 machine, I tried to build Go 1.4.

This happened:


$ cd go1.4/src
$ ./all.bash
...snip...
# runtime/cgo
/usr/bin/ld: unrecognized option '--build-id=none'
/usr/bin/ld: use the --help option for usage information
collect2: ld returned 1 exit status

The solution is to retry without the “--build-id=none” option:

diff --git a/src/cmd/go/build.go b/src/cmd/go/build.go
index ad03239..ca45217 100644
--- a/src/cmd/go/build.go
+++ b/src/cmd/go/build.go
@@ -2436,13 +2436,21 @@ func (b *builder) cgo(p *Package, cgoExe, obj string, pcCFLAGS, pcLDFLAGS, cgofi
 	// --build-id=none.  So that is what we do, but only on
 	// systems likely to support it, which is to say, systems that
 	// normally use gold or the GNU linker.
+	retryWithoutBuildId := false
 	switch goos {
 	case "android", "dragonfly", "linux", "netbsd":
 		ldflags = append(ldflags, "-Wl,--build-id=none")
+		retryWithoutBuildId = true
 	}
 
 	if err := b.gccld(p, ofile, ldflags, gccObjs); err != nil {
-		return nil, nil, err
+		if retryWithoutBuildId {
+			ldflags = ldflags[0:len(ldflags)-1]
+			err = b.gccld(p, ofile, ldflags, gccObjs)
+		}
+		if err != nil {
+			return nil, nil, err
+		}
 	}
 
 	// NOTE(rsc): The importObj is a 5c/6c/8c object and on Windows

Just in case someone else is looking for it… :)

Go does not officially support RHEL before version 6, but it does seem to sort of work. This post explains why it doesn’t really work.

Go will make you a better programmer

The last line of Dave Cheney’s Gophercon 2015 India keynote is the best: “Go will make you a better programmer.”

It’s true. When I am programming Go, I never think, “OK, is this an OK shortcut to take here? Is this a play context? Is this a work context, but I can push off bounds checking on some other layer? Is this just a little local server, and I don’t care about hackers?”

These questions don’t exist, because of the way Go and its standard library and its idioms work together: the right way (the easy way!) to write it is also the simple, clear, and secure way.

Go makes me a better programmer.

And it makes me increasingly intolerant of C and Python. This week alone at work I struggled with a router crashing from a missing bounds check in C, and with Python code failing late and strangely thanks to dynamic typing. Idiomatic Go code does not have those two failure modes, and my patience for them is waning fast as a result.

A Quick Go hack: lines-per-second

Today I wanted to compare how fast two builds were on two different machines. As a rough estimate, I just wanted to see how many lines per second were going into the log file. This is what I came up with:

package main

import (
        "bufio"
        "fmt"
        "os"
        "time"
)

func main() {
        scn := bufio.NewScanner(os.Stdin)

        lines := 0

        // Once a second, report the count and reset it.
        // (Note: there is a data race on lines; see below.)
        go func() {
                for {
                        fmt.Fprintf(os.Stderr, "%v lps\n", lines)
                        lines = 0
                        time.Sleep(time.Second)
                }
        }()

        for scn.Scan() {
                lines++
        }
}

This is a quick hack. It’s not production quality, because there is a data race on lines. If I had to fix that race up, I’d choose sync/atomic’s operations, because I’d only need to touch the two places where lines is accessed concurrently. A channel-based solution would be bigger and messier, not in tune with the minimal nature of this little program.
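For the curious, a sketch of that sync/atomic version might look like this (an illustration, still not production code):

package main

import (
        "bufio"
        "fmt"
        "os"
        "sync/atomic"
        "time"
)

func main() {
        scn := bufio.NewScanner(os.Stdin)

        var lines int64

        go func() {
                for {
                        // Swap atomically reads the current count and resets it to zero.
                        fmt.Fprintf(os.Stderr, "%v lps\n", atomic.SwapInt64(&lines, 0))
                        time.Sleep(time.Second)
                }
        }()

        for scn.Scan() {
                atomic.AddInt64(&lines, 1)
        }
}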

The equivalent program in Python or Perl or whatever would have been as short and sweet. That’s not the point. The point is that it was so easy to do in Go, I just did it in Go without really even considering an alternative.

The Podcast/Spam nexus

I listen to a lot of podcasts. They are virtually all sponsored by either MailChimp or Emma (some new MailChimp clone).

What I want to know is why spammers (even opt-in, targeted email marketing solutions are spammers as far as I can tell) find that podcasts are listened to by their target market (i.e. other spammers).

Hmm. Maybe I should be spamming more…

Golang on the Geode processor

If you are trying to use Golang on a PC Engines ALIX board, you need to be careful that all the Go code you are using is compiled while the GO386 environment variable is set to 387. The Geode processor does not support the SSE instructions.

If you have Linux directly on the ALIX, you’d not run into this, because when GO386 is not set, the compiler auto-detects the right thing. But if you are, for example, running OpenWrt on the device and cross-compiling from another machine, you might run into this if some of your Go code (for example, the standard library) was compiled without GO386 set.

The symptom that you’ve given yourself this problem is a traceback like this:

# ./sms.test
SIGILL: illegal instruction
PC=0x813db39

goroutine 1 [running, locked to thread]:
math.init·1()
/home/jra/go/src/math/pow10.go:34 +0x19 fp=0x1862df64 sp=0x1862df60
math.init()

The solution is that when you initialize cross compilation, you need to do it like this:


$ cd $HOME/go/src
$ GO386=387 GOARCH=386 ./make.bash --no-clean

And all of your compiles, like go test -c, need to have the GO386 environment variable set as well, thus:


$ cd $GOPATH/src/github.com/jeffallen/linkio
$ GO386=387 GOARCH=386 go test -c

(If your code doesn’t use floating point, you might dodge a bullet if you forget to set GO386. But don’t say I didn’t warn you…)

Type safety saves the day again

Recently, I was writing some code to check the SHA256 of a buffer. Simple, right? All you have to do is take the hash you have, get the hash of the data, and compare them. But then you think, “oh, what a drag, I gotta compare two []byte values for equality, and == doesn’t work on them”.

And then you think, “oh, I’ll use reflection!” And now you have two problems.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"reflect"
)

func main() {
	in := []byte("a test string")
	hashHex := "b830543dc5d1466110538736d35c37cc61d32076a69de65c42913dfbb1961f46"
	hash1, _ := hex.DecodeString(hashHex)
	hash2 := sha256.Sum256(in)
	fmt.Println("equal?", reflect.DeepEqual(hash1, hash2))
}

The code on play.golang.org.

When you run that, you find out that even though the hash is most assuredly right, reflect.DeepEqual returns “false”.

If you’ve just written this as new code, in the context of some other new code that gets and parses the hash, you might not go looking at that reflect.DeepEqual until you’ve checked and rechecked everything else. And by then you’ll be really annoyed.

You, the reader of this blog, can probably tell you’re being set up by now. And you are.

The thing about the reflect package is that it is all about runtime type checking. And the thing about runtime is that it happens later, when you, the programmer, and the compiler are not available to apply your respective unique talents. So bad things can happen, and the compiler can’t save you (and it goes without saying that you can’t save yourself, because you are a fallible human).

It turns out the type of hash2 is not []byte, as I led you to think. Go read the docs and you’ll see that its type is [32]byte.

While you are in the docs, you might notice bytes.Compare. It is the solution to our problem. Because it uses static types instead of interface{} like the reflect package, the compiler is going to be able to help you use it correctly. And when you try to use it:

package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"reflect"
)

func main() {
	in := []byte("a test string")
	hashHex := "b830543dc5d1466110538736d35c37cc61d32076a69de65c42913dfbb1961f46"
	hash1, _ := hex.DecodeString(hashHex)
	hash2 := sha256.Sum256(in)
	fmt.Println("equal?", reflect.DeepEqual(hash1, hash2))
	fmt.Println("equal?", bytes.Compare(hash1, hash2) == 0)
}

The code on play.golang.org.

This gives the helpful error message cannot use hash2 (type [32]byte) as type []byte in argument to bytes.Compare. Which, at a stroke, explains why reflect.DeepEqual is screwing up: the first check it is making is “are these the same type?” And the answer is no, so hash1 and hash2 are not equal. Even though they are.

In order to turn hash2 into a []byte so that it can be the second argument to bytes.Compare, you just need to take a full slice of it, changing it to hash2[:].
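Concretely, the working comparison comes out something like this (a sketch along the lines of the final version linked below):

package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	in := []byte("a test string")
	hashHex := "b830543dc5d1466110538736d35c37cc61d32076a69de65c42913dfbb1961f46"
	hash1, _ := hex.DecodeString(hashHex)
	hash2 := sha256.Sum256(in)
	// hash2[:] is a []byte covering the whole [32]byte array, so the
	// compiler accepts it and the comparison behaves as expected.
	fmt.Println("equal?", bytes.Compare(hash1, hash2[:]) == 0)
}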

The final, working, hash compare is here.

AR.Drone 2 camera access

There is lots of information out on the net about how to access the camera in the AR.Drone, but it is all for the original model.

In the AR.Drone 2, the cameras have been replaced and upgraded. So the V4L settings that worked to get data out of the camera need to be updated as well.

The front camera is on /dev/video1. If you are talking to V4L directly via ioctls and all that jazz, you need to request format V4L2_PIX_FMT_UYVY, width 1280 and height 720. UYVY format uses 2 bytes per pixel, so a full image is 1843200 bytes. fwrite those bytes from the mmap buffer into a file.

Or, from the command line, use yavta: yavta -c1 -F -f UYVY -s 1280x720 /dev/video1

Bring the raw file back to your Ubuntu laptop using FTP. Use “apt-get install dirac” to get UYVYtoRGB. Then use “UYVYtoRGB 1280 720 1 < in.uyvy | RGBtoBMP out .bmp 3 1 1 1280 720" to turn in.uyvy into out001.bmp.
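Since we’re Go people here, a sketch of the same conversion in Go (writing a PNG instead of a BMP) might look like the following. The filenames are made up, and the YUV-to-RGB coefficients are the standard BT.601 approximations, so the colors may not exactly match what the dirac tools produce:

package main

import (
	"image"
	"image/color"
	"image/png"
	"io/ioutil"
	"log"
	"os"
)

const w, h = 1280, 720 // front camera; the downward camera is 320x240

func clamp(v float64) uint8 {
	if v < 0 {
		return 0
	}
	if v > 255 {
		return 255
	}
	return uint8(v)
}

func main() {
	// One raw frame as captured above: w*h*2 bytes of UYVY.
	raw, err := ioutil.ReadFile("in.uyvy")
	if err != nil {
		log.Fatal(err)
	}

	img := image.NewRGBA(image.Rect(0, 0, w, h))
	// Each 4-byte group U, Y0, V, Y1 encodes two horizontally adjacent pixels.
	for i := 0; i+3 < len(raw); i += 4 {
		u := float64(raw[i]) - 128
		v := float64(raw[i+2]) - 128
		for j, y := range []float64{float64(raw[i+1]), float64(raw[i+3])} {
			p := i/2 + j // pixel index: UYVY is 2 bytes per pixel
			img.Set(p%w, p/w, color.RGBA{
				clamp(y + 1.402*v),
				clamp(y - 0.344*u - 0.714*v),
				clamp(y + 1.772*u),
				255,
			})
		}
	}

	out, err := os.Create("out.png")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := png.Encode(out, img); err != nil {
		log.Fatal(err)
	}
}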

You can’t get an image from the camera while program.elf is running. You need to kill both the respawner and program.elf with “kill -9”.

The downward-facing camera is on /dev/video2. It is the same format, but 320x240. It gives bad data when you first connect to it, so you need to skip at least one frame. Here’s a command that worked for me: “yavta -c5 --skip 4 -F -f UYVY -s 320x240 /dev/video2”. The data ends up in frame-000004.bin. You need to adjust the width and height arguments to UYVYtoRGB and RGBtoBMP too, of course.

When I get time, I’ll work on the next steps: automating this into Godrone.

Dual scheme URLs

I just made this blog available over both HTTP and HTTPS, thanks to Cloudflare.

But that made me realize that lots and lots of internal links in the HTML output of my WordPress point back to the HTTP version of the site.

Part of the solution to this is to install the HTTP plugin in WordPress, which fixes some of the mess. But some of the URLs in my posts need to be fixed too.

The best practice is for links inside a website to “inherit” the context where they are eventually found, by keeping them as relative as possible. Thus it’s better to use “/tags/geeking” than “http://blog.nella.org/tags/geeking”, because if you want a test and production version, or you rename the blog, or whatever, you’ll be happier later if the links are not absolute when they are first typed.

And if you want your website to adapt to having both an HTTP and an HTTPS version, you really want relative links because that means that the web browser will choose the correct scheme, keeping HTTPS sessions inside the HTTPS version of the website.

But what if you want to refer to an off-site resource? And that resource exists in both HTTP and HTTPS versions? Then you need to give a hostname and path (because it is no longer relative to your hostname), but not a scheme (so that the scheme is relative to the context where the relative URL is found).

Such beasts exist. They look weird, but they exist and are handled correctly by modern browsers (I guess there are some old browsers that choke on them). They look like “//hostname.domain.com/path/path/file.html”. That says, “if you found this on an HTTP page, go get it from hostname port 80. If you found this on an HTTPS page, go get it from hostname port 443.” Whee!

Which reminds me of the HP Apollo lab at Harvey Mudd (where I was working on NeXT and Ultrix machines, not HP ones, thankfully). And it also reminds me of a taxi ride to a conference center in San Jose, where the guy who invented the HP network filesystem syntax told me that his invention of //host/path accidentally ended up inside of HTTP URLs.

Cloudflare Universal SSL totally, completely rocks

Cloudflare was already my favorite non-Google internet company because I’m a Golang fan boi and Cloudflare is vocal about their love of Go.

So when I heard that Cloudflare was willing to help me, for free, put SSL on my website (which I’ve been meaning to do since like forever), I was ready to go for it.

First the rocks. Well, it’s free, for little dinky domains like mine. And that’s a hell of a great way to teach techies like me how their system works. I’d happily sell it to a client who asked me about it. The signup process is fast and easy and interesting. And it works: nella.org via SSL

But it sucks too: after turning Cloudflare on, OpenID login on my blog stopped working.

But it rocks again: within seconds of turning it off from their great control panel UI (which sucks in one small way: the “submit” button is way down the page, a long long way from where you edit the first few entries in your DNS), my blog let me log in with my OpenID URL.

So then I looked a little bit and I discovered that there’s a solution to my problem. It worked great, and this site is back to SSL via Cloudflare right now.

Thanks Cloudflare!

Unzip -c is a thing, and it’s good (as long as you use -q too)

I just fetched a Raspbian disk image via BitTorrent. It is a .zip instead of the .gz I would have chosen myself.

If you have a .zip and you don’t want to do a temporary uncompress of it to get the .img to use with dd, you can use “unzip -q -c foo.zip” to get the contents of the zip file sprayed onto stdout. Then you can pipe it into dd.

The -c argument of Linux unzip is only documented in the man page. And it neglects to mention that unless you also use the -q option, unzip will mix filenames and other useless info into stdout, making your disk image useless.

So “unzip -q -c foo.zip | dd bs=4M of=/dev/sdb” for the win. (Unless your boot disk was /dev/sdb, in which case… umm, sorry.)