AR.Drone 2 camera access

There is lots of information out on the net about how to access the camera in the AR.Drone, but it is all for the original model.

In the AR.Drone 2, the cameras have been replaced and upgraded, so the V4L settings that worked to get data out of the original camera need to be updated as well.

The front camera is on /dev/video1. If you are talking to V4L directly via ioctls and all that jazz, you need to request format V4L2_PIX_FMT_UYVY, width 1280 and height 720. UYVY format uses 2 bytes per pixel, so a full image is 1843200 bytes. fwrite those bytes from the mmap buffer into a file.
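
For reference, here is a rough sketch of that ioctl-and-mmap dance in C. It is the standard V4L2 single-buffer capture sequence rather than code lifted from the drone, so treat the details (one mmap'd buffer, minimal error handling, the frame.uyvy output name) as illustrative:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void) {
        int fd = open("/dev/video1", O_RDWR);   /* front camera */
        if (fd < 0) { perror("open"); return 1; }

        /* Ask for 1280x720 UYVY, as described above. */
        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 1280;
        fmt.fmt.pix.height = 720;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
        fmt.fmt.pix.field = V4L2_FIELD_NONE;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

        /* One mmap'd buffer is enough for a single still frame. */
        struct v4l2_requestbuffers req;
        memset(&req, 0, sizeof(req));
        req.count = 1;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = 0;
        if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { perror("VIDIOC_QUERYBUF"); return 1; }

        void *mem = mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED,
                         fd, buf.m.offset);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }

        /* Queue the buffer, start streaming, and wait for one frame. */
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) { perror("VIDIOC_QBUF"); return 1; }
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) { perror("VIDIOC_STREAMON"); return 1; }
        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) { perror("VIDIOC_DQBUF"); return 1; }

        /* 1280 * 720 pixels * 2 bytes/pixel = 1843200 bytes. */
        FILE *out = fopen("frame.uyvy", "wb");
        fwrite(mem, 1, buf.bytesused, out);
        fclose(out);

        ioctl(fd, VIDIOC_STREAMOFF, &type);
        munmap(mem, buf.length);
        close(fd);
        return 0;
    }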

Or, from the command line, use yavta: yavta -c1 -F -f UYVY -s 1280x720 /dev/video1

Bring the raw file back to your Ubuntu laptop using FTP. Use “apt-get install dirac” to get UYVYtoRGB, then use “UYVYtoRGB 1280 720 1 < in.uyvy | RGBtoBMP out .bmp 3 1 1 1280 720” to turn in.uyvy into out001.bmp.

One gotcha: you can't get an image from the camera while program.elf is running. You need to kill the respawner and program.elf with "kill -9".

The downward-facing camera is on /dev/video2. It is the same format, but 320x240. It gives bad data when you first connect to it, so you need to skip at least one frame. Here's a command that worked for me: "yavta -c5 --skip 4 -F -f UYVY -s 320x240 /dev/video2". The data ends up in frame-000004.bin. You need to adjust the width and height arguments to UYVYtoRGB and RGBtoBMP too, of course.

When I get time, I'll work on the next steps to automate this in Godrone.
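
As an aside, if you are curious what the UYVYtoRGB step actually does: UYVY packs two pixels into four bytes (U, Y0, V, Y1), with one U/V chroma pair shared between two luma samples. Here is a rough sketch of the conversion in C, assuming the usual BT.601 coefficients (the dirac tool may differ in the exact math), with a made-up uyvy_to_rgb helper:

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t clamp(double v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

    /* Convert one UYVY macropixel (4 bytes, 2 pixels) into 6 bytes of RGB.
     * BT.601 coefficients assumed; the dirac tools may round differently. */
    static void uyvy_to_rgb(const uint8_t in[4], uint8_t out[6]) {
        double u = in[0] - 128.0, v = in[2] - 128.0;
        double y[2] = { in[1], in[3] };   /* two luma samples share one U/V pair */
        for (int i = 0; i < 2; i++) {
            out[i * 3 + 0] = clamp(y[i] + 1.402 * v);                   /* R */
            out[i * 3 + 1] = clamp(y[i] - 0.344136 * u - 0.714136 * v); /* G */
            out[i * 3 + 2] = clamp(y[i] + 1.772 * u);                   /* B */
        }
    }

    int main(void) {
        /* Read raw UYVY on stdin, write raw 24-bit RGB on stdout. */
        uint8_t in[4], out[6];
        while (fread(in, 1, 4, stdin) == 4) {
            uyvy_to_rgb(in, out);
            fwrite(out, 1, 6, stdout);
        }
        return 0;
    }

Compiled, it reads raw UYVY on stdin and writes raw 24-bit RGB on stdout, which is roughly the job UYVYtoRGB does before RGBtoBMP wraps the result up as a BMP.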

Dual scheme URLs

I just made this blog available over both HTTP and HTTPS, thanks to Cloudflare.

But that made me realize that lots and lots of internal links in the HTML output of my WordPress point back to the HTTP version of the site.

Part of the solution to this is to install the HTTP plugin in WordPress, which fixes some of the mess. But some of the URLs in my posts need to be fixed too.

The best practice is for links inside of a website to “inherit” the context where they are eventually found by keeping them as relative as possible. Thus it’s better to use “/tags/geeking” than “http://blog.nella.org/tags/geeking”, because if you want a test and a production version, or you rename the blog, or whatever, you’ll be happier later if the links are not absolute when they are first typed.

And if you want your website to adapt to having both an HTTP and an HTTPS version, you really want relative links because that means that the web browser will choose the correct scheme, keeping HTTPS sessions inside the HTTPS version of the website.

But what if you want to refer to an off-site resource? And that resource exists in both HTTP and HTTPS versions? Then you need to give a hostname and path (because it is no longer relative to your hostname), but not a scheme (so that the scheme is relative to the context where the relative URL is found).

Such beasts exist. They look weird, but they exist and are handled correctly by modern browsers (I guess there are some old browsers that choke on them). They look like “//hostname.domain.com/path/path/file.html”. That says, “if you found this on an HTTP page, go get it from hostname port 80. If you found this on an HTTPS page, go get it from hostname port 443.” Whee!

Which reminds me of the HP Apollo lab in Harvey Mudd (where I was working on NeXT and Ultrix machines, not HP ones, thankfully). And it also reminds me of a taxi ride to a conference center in San Jose where the guy who invented the HP network filesystem syntax told me that his invention of //host/path accidentally ended up inside of HTTP URL’s.

Cloudflare Universal SSL totally, completely rocks

Cloudflare was already my favorite non-Google internet company because I’m a Golang fan boi and Cloudflare is vocal about their love of Go.

So when I heard that Cloudflare was willing to help me, for free, put SSL on my website (which I’ve been meaning to do since like forever), I was ready to go for it.

First the rocks. Well, it’s free, for little dinky domains like mine. And that’s a hell of a great way to teach techies like me how their system works. I’d happily sell it to a client who asked me about it. The signup process is fast and easy and interesting. And it works: nella.org via SSL.

But it sucks too: after turning Cloudflare on, OpenID login on my blog stopped working.

But it rocks again: within seconds of turning it off from their great control panel UI (which sucks in one small way: the “submit” button is way down the page, a long, long way from where you edit the first few entries in your DNS), my blog let me log in with my OpenID URL again.

So then I looked around a little and discovered that there’s a solution to my problem. It worked great, and this site is back on SSL via Cloudflare right now.

Thanks Cloudflare!

Unzip -c is a thing, and it’s good (as long as you use -q too)

I just fetched a Raspbian disk image via BitTorrent. It is a .zip instead of the .gz I would have chosen myself.

If you have a .zip and you don’t want to do a temporary uncompress of it to get the .img to use with dd, you can use “unzip -q -c foo.zip” to get the contents of the zip file sprayed onto stdout. Then you can pipe it into dd.

The -c argument of Linux unzip is only documented in the man page. And even the man page neglects to mention that unless you also use the -q option, unzip will mix filenames and other useless info into stdout, making your disk image useless.

So “unzip -q -c foo.zip | dd bs=4M of=/dev/sdb” for the win. (Unless your boot disk was /dev/sdb, in which case… umm, sorry.)