My first ever Rust program!

Here, for posterity, is my first ever Rust program. It checks the key log on the Upspin Key Server.

extern crate sha2;
extern crate reqwest;

use std::io::Read;
use sha2::{Sha256, Digest};

fn main() {
    let mut resp = match reqwest::get("https://key.upspin.io/log") {
        Ok(resp) => resp,
        Err(e) => panic!("{}", e),
    };
    assert!(resp.status().is_success());

    let mut content = String::new();
    match resp.read_to_string(&mut content) {
        Ok(_) => {}
        Err(e) => panic!("{}", e),
    }

    let mut hasher = Sha256::default();

    let mut ct = 0;
    let mut last_hash = "".to_string();
    for line in content.lines() {
        if line.starts_with("SHA256:") {
            let mut fields = line.split(":");

            // skip first token
            match fields.next() {
                Some(_) => {}
                _ => {
                    println!("Bad SHA256 line: {}", line);
                    continue;
                }
            };

            last_hash = fields.next().unwrap().to_string();
            let expected = string_to_u8_array(&last_hash);
            // result() consumes the hasher, so take the hash of a clone
            let output = hasher.clone().result();
            assert_eq!(output.as_slice(), expected.as_slice());
        } else {
            hasher = Sha256::default();
            hasher.input(line.as_bytes());
            let newline = "\n".as_bytes();
            hasher.input(newline);
            if last_hash != "" {
                hasher.input(last_hash.as_bytes());
            }
        }

        ct += 1;
        println!("Line {}", ct);
    }
}

fn string_to_u8_array(hex: &String) -> Vec<u8> {
    // Make vector of bytes from octets
    let mut bytes = Vec::new();
    for i in 0..(hex.len() / 2) {
        let res = u8::from_str_radix(&hex[2 * i..2 * i + 2], 16);
        match res {
            Ok(v) => bytes.push(v),
            Err(e) => {
                println!("Problem with hex: {}", e);
                return bytes;
            }
        };
    }
    return bytes;
}
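For comparison, here is a more idiomatic sketch of the same hex decoder, iterator-based and returning a Result instead of silently stopping at the first bad octet. (The name hex_to_bytes is mine.)

```rust
use std::num::ParseIntError;

// Same behavior as string_to_u8_array, but any parse error is
// surfaced to the caller rather than truncating the output.
fn hex_to_bytes(hex: &str) -> Result<Vec<u8>, ParseIntError> {
    (0..hex.len() / 2)
        .map(|i| u8::from_str_radix(&hex[2 * i..2 * i + 2], 16))
        .collect()
}

fn main() {
    assert_eq!(hex_to_bytes("ff00").unwrap(), vec![255, 0]);
    assert!(hex_to_bytes("zz").is_err());
    println!("{:?}", hex_to_bytes("ff00").unwrap());
}
```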

I found myself sprinkling mut’s and unwrap()’s here and there like the mutation unwrapping fairy, hoping something would work. I don’t think that’s how you are supposed to do it, but we’ll see.

People give Go a bad time about the size of its compiled files, so I expected this little Rust program to be much smaller. But not so:

$ ls -lh ../target/{debug,release}/upspin
-rwxrwxr-x 2 jra jra  23M Jul 21 15:12 ../target/debug/upspin
-rwxrwxr-x 2 jra jra 5.1M Jul 21 15:14 ../target/release/upspin

People also say, “Go is fat because it is statically linked”. Pure Rust code is also self-contained in the program’s ELF file. But the 5.1 meg release image does not count several quite large shared libraries that it depends on:

$ ldd ../target/release/upspin
	linux-vdso.so.1 =>  (0x00007fff319c2000)
	libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f3d7a6cc000)
	libcrypto.so.1.0.0 => /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007f3d7a288000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f3d7a083000)
	librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f3d79e7b000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f3d79c5e000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f3d79a47000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3d7967d000)
	/lib64/ld-linux-x86-64.so.2 (0x00005555d33b8000)

From a code safety point of view, I find it really troubling that even after we work so hard to make the compiler happy and prove our code is safe, we are still depending on libc and the platform’s OpenSSL. I remember the first time I started digging down into Go and realized that it wasn’t even linking libc, and it gave me a fantastic feeling of finally cutting the cord to the old, dangerous way of programming. With Rust (and hyper) we’re right back to depending on C again. Yuck.

Finally, the incredible speedup from debug to release images was surprising:

$ time ../target/debug/upspin > /dev/null

real	0m3.631s
user	0m0.888s
sys	0m0.020s
$ time ../target/release/upspin > /dev/null

real	0m2.961s
user	0m0.068s
sys	0m0.016s

Look at user, not real. Real is dominated by waiting on the far-end server. User is more a measure of the TLS decryption, the parsing, and the hashing.

Read it and weep

I searched for “how do I make an HTTP request in Rust?”. I’m a newbie, we do things like that. Don’t judge.

I found this. I still don’t know how, because the answer marked correct refers to a library that a comment from 2016 informs me is no longer supported. There’s also a helpful comment pointing me at cURL, which is written in C, which is the opposite of safe.

It does appear that the right answer, in 2017, is hyper.

I do not have warm fuzzies from this experience.

Update: Headache-inducing head slap from this (answer with 1 vote: “It is now 2017, and rustc-serialize is now deprecated as well…”), which happened 30 minutes after the above.

A Go programmer continues to learn Rust

I went looking for the equivalent of goimports and didn’t find it. Sad.

I wanted to use std::fmt to do the same thing as sprintf or fmt.Sprintf. I got stuck on “expected &str, found struct `std::string::String`”. I found a blog posting trying to explain it, but I don’t know enough yet to understand it. What I do understand is that it is highly suspicious that a language has two incompatible types for string. WTF, Rust? I’m trying to write a program here and you want me to sprinkle magic to_string()s in it? How about you STFU and let me be a human, and you be the computer doing the boring crap for me?

So back to basics, how about search for “rust format a string”. Top 3 hits are 3 different pages from the Rust reference manual, with 3 different import paths for apparently different ways to format strings? Well, just try one. Compiler says add a &. Fixed! So “give stuff to macros format!, get back a std::string::String, borrow a reference to it (which somehow magically does some type conversion between the 2 kinds of strings?), and give that to expect”. Right, got it.
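For future me, here is my current understanding sketched as code (the function name is mine):

```rust
fn takes_str(msg: &str) {
    // &str is a borrowed view of string data; String owns its data.
    println!("{}", msg);
}

fn main() {
    // format! allocates and returns an owned std::string::String.
    let s: String = format!("Hello, {}!", "world");
    // The & is the "magic" part: &String coerces to &str automatically
    // (deref coercion), which is why adding one & fixes the type error.
    takes_str(&s);
    // String literals are already &str, so they need no &.
    takes_str("a literal");
}
```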

I want to detect if nothing was read from io::stdin().read_line(). Hmm, how about `if guess.trim() == ""`? Bingo! Maybe this isn’t so hard after all!
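It turns out read_line also returns Ok(n), where n is the number of bytes read, and 0 means end-of-file. A sketch of the distinction (the classify helper and its name are mine, just for illustration):

```rust
use std::io::{self, BufRead};

// Tell "nothing was read" (EOF) apart from "a blank line was read".
fn classify(bytes_read: usize, buf: &str) -> &'static str {
    if bytes_read == 0 {
        "eof"
    } else if buf.trim().is_empty() {
        "blank"
    } else {
        "content"
    }
}

fn main() {
    let stdin = io::stdin();
    let mut guess = String::new();
    let n = stdin.lock().read_line(&mut guess).expect("read_line failed");
    println!("{}", classify(n, &guess));
}
```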

Ugh, so many semi-colons. Bro, do you even semi-colon?

Aiee, the tutorial ran out and now I’m into a boring reference book. Let’s switch to this.

What’s #[]? Is that a macro? A compiler directive? A pre-processor directive? Tutorials that don’t give links to fundamental docs are annoying. I’m not a child…
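(What I eventually pieced together: #[...] is an attribute — structured metadata attached to the item directly below it and interpreted by the compiler, not a textual pre-processor. A quick sketch of two common ones:)

```rust
// #[derive(...)] asks the compiler to generate trait impls for the
// item below it.
#[derive(Debug, Clone, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

// #[allow(...)] scopes a lint exception to one item.
#[allow(dead_code)]
fn unused_helper() {}

fn main() {
    let p = Point { x: 1, y: 2 };
    // {:?} works only because Debug was derived above.
    println!("{:?}", p);
    assert_eq!(p.clone(), p);
}
```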

Well, let’s go see what kind of crypto libraries are available. Hmm. Curious, the TLS connector for Hyper uses Rust Native TLS, which uses OpenSSL. Rust: fast, modern, and safe. Unless you are doing security critical work, then we hand that off to C programmers from 1990, because they never made any mistakes. Doh.

So, search for a pure Rust TLS, and I find rustls. Which uses ring for crypto. Which… wait for it… is made from Rust and C, with the C copied from OpenSSL.

Come on guys, this is not very impressive so far.

However, ring uses something called “untrusted” to parse attacker-controlled input more safely. That’s really interesting.

A Go programmer’s first day with Rust

Where is the tutorial? The first Google hit gives a redirect to a page explaining that I should read the book. The first page of the book explains that I should read the Second Edition unless I want to go deep, then I should later read the First Edition also (and presumably ignore the things that had been changed later in the Second Edition?)

OK, let’s do this thing! Get rust-mode installed in Emacs, and get it to call rustfmt on save, because Go taught me that the tab button is from 1960 and I don’t use it anymore.

Search for “why is rustc hanging while compiling rustfmt”. Find closed issue from someone who did the same thing. He says, “in the 6 minutes it took me to make this issue, it finally finished compiling”. I go back to my compilation window and find the same thing. Why is Rust’s compiler so slow?

OK, let’s do this thing! Wait, why is rustfmt making different format than the tutorial? Try some stuff, find out rustfmt wants me to upgrade my compiler to nightly, tell it, “No, my friend, that’s not going to happen. I rode that train with Go and it made me crazy.” Give up and accept that rustfmt is fighting the tutorial to teach me different formats for Rust. I miss gofmt.

Play with the example code in the tutorial, make syntax errors on purpose. The compiler error messages are… trying to be helpful, but as a result are too big to fit in my window. Rust developers must have bigger monitors than me. It is surprisingly difficult to say the minimal helpful thing. “I would have written a shorter error message, but I didn’t have time.”

Read about Results and how they encapsulate the return value. This is really elegant and pretty.
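A minimal sketch of why it feels elegant (the example and names are mine, not from the book):

```rust
use std::num::ParseIntError;

// The possible failure is part of the return type, so a caller
// can't simply forget to check it.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // Handle both arms explicitly...
    match parse_port("8080") {
        Ok(p) => println!("port {}", p),
        Err(e) => println!("bad port: {}", e),
    }
    // ...or collapse the error with a combinator when you don't care why.
    let port = parse_port("not a number").unwrap_or(80);
    println!("fallback {}", port);
}
```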

Tried making some code with and without mut and I think I’ve got it more or less. I understand why this is important, but I’m still pretty suspicious that it should be my job as a programmer to keep track of this. Wonder if there should be a rustfmt plugin where it adds in the mut’s that I’m not interested in keeping track of myself?
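For my own notes, the distinction in a nutshell (trivial sketch):

```rust
fn main() {
    let frozen = 1;
    // frozen += 1; // compile error: cannot assign twice to immutable variable
    let mut counter = 1;
    counter += 1; // fine: counter was declared mut
    // Shadowing is the other way out: a brand-new binding reusing the name.
    let frozen = frozen + 10;
    println!("frozen = {}, counter = {}", frozen, counter);
}
```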

I would rate this 60 minute session of Rusting as about a 3 out of 10 on the “that was fun and I’d like to learn more of that” scale. Let’s see how the next hour goes…

Python keyword “finally” trumps “return”

Here is a piece of Python that I did not expect to surprise me:

def a():
    try:
        raise Exception("boom")
        return 1
    except:
        print("exception!")
        return 2
    finally:
        return 3

print(a())

In both Python 2.7 and Python 3, that prints:

exception!
3

It seems that in Python, return is just a casual suggestion to the interpreter, and finally gets to decide, “Nope! Your return is canceled and I will do my return instead!”

Now, to be fair, in Go a defer can change the return value of a function as well. But the defer keyword literally does what it says on the tin, it defers some work until after the return, so you can reason about why it might be changing the return value.

The SRE book

I gave a Lightning Talk at SREcon16 and I was lucky enough to win the SRE book from Google while I was there.

Here are some notes of things I was thinking while reading it.

First, this is a phenomenal piece of work that really marks a special point in time: the dawn of the possibility of wide adoption of SRE principles. I say “possibility” because after getting exposed to the deepest details of what makes SRE work, I think that there are lots of organizations that won’t be willing or able to make it work.

Even though I’ve been in IT for decades at this point, I’d fallen into the trap, as an outside observer of Google, of imagining that there was some magic bullet they possessed that made it possible to deliver such enormous services with such high reliability. If someone had asked me to explain that viewpoint before reading this book, I would have bashfully admitted that it was ridiculous to imagine that there are silver bullets in IT operations. Fred Brooks already taught us that.

Now that I’ve read the SRE book, I’ve figured out what the silver bullet is: it’s sweat.

Over and over while reading it, I thought to myself, “well, yeah, I knew that was the solution to the problems I was facing growing Tellme in 2001, but we just weren’t in a position to put in that work”. I’d also think while reading, “man, I can see how that would totally work, but you’d really need an immense amount of goodwill, dedication, and good leadership to make it happen; I’ve been in teams where there’s not a critical mass of team members who sweat the details to make that work.”

So that’s my 10000 meter take-away from the SRE book: Wow, man, that looks like a lot of work. It is as if Thomas Edison came back to life and restated his maxim: “Reliability is 99.999% perspiration and 0.0001% downtime”.

But that’s not the end of the story. The other thing I felt time and time again reading the book was a sense of longing to get back in the game. It made me ready to sweat. The book, told as it is from passionate, proud, smart people who have been sweating in the trenches, is as intoxicating as a Crossfit Promotional Video.

To outsiders, who think they understand IT operations or Software Development based just on the English-language definitions of the constituent terms, SRE might look easy. But what I really liked about the SRE book was how, time and again, it talked about how the values of SRE inform the successful approach to a certain problem. When a team needs to introspect on its values in order to choose a way forward, you are no longer in the realm of technology: that’s about culture.

In my last job, from the first moment, my colleagues looked to me to guide the culture of the team. My title was not “tech lead”, but there were some behaviors I knew we needed to be encouraging and I knew how to model them. Reading the SRE book triggered the same instincts in me again. A lot of the info in the SRE book I already had learned in my own way, from my own experiences. But lots of the information was a new take on the old problems I knew about, and inspired me to say, “wow, yes, of course that’s the answer, I’d like to be in a team that was acting like that!”

But the fact that integrating SRE into an organization is a cultural, not technical, affair dooms it to partial, spotty uptake. There will be organizations that don’t have the right kind of cultural flexibility, or leadership able to bring people around to SRE. They will carry on with what they are doing, but they will pay the price by forgoing the benefits that Google has shown that SRE can bring to an organization. Their dev teams and ops teams will forever be locked in battle, and the only action item from their postmortems will continue to be “we need more change control meetings”.

I pity the fools.

A Kafkaesque Experiment

As part of my interview prep, last night I challenged myself to do the following:

  • Make a Kubernetes cluster (on Google Cloud Platform)
  • …running Dockerized Zookeeper (1) and Kafka (2)
  • …with Kafka reporting stats into Datadog
  • Send in synthetic load from a bunch of Go programs moving messages around on Kafka
  • Then run an experiment to kill the Kafka master and watch how the throughput/latencies change.

Since that’s a lot of stuff I’ve never touched before (though I’ve read up on it, and it uses all the same general concepts I’ve worked with for 15 years) it should not be too surprising that I didn’t get it done. Yet.

The surprising thing is where I got stuck. I found a nice pair of Docker containers for Zookeeper and Kafka. I got Zookeeper up and running, and I could see its name in the Kubernetes DNS. My two Kafkas were up and running, and they found the Zookeeper via service discovery. So far so good. But then something went wrong with the place where I was going to run clients from; it could not talk to either of the Kafkas via TCP, connection timed out. What’s more, I couldn’t be sure that both of my Kafkas were even being advertised by Kubernetes DNS.

(Shower thought after writing this: perhaps my client container was started before the Kafka one, and as a result, it didn’t have the correct container-to-container networking magic set up. It would be interesting to read up on how that works and then debug it to see if I can see the exact problem. Or it might go away the next time I start the containers, this time in the right order. But… how can order matter? This would make it very difficult to operate these things.)

Learning how to debug in the container environment is one of the hardest things. It’s like walking around in a brewery in the dark armed only with a keychain flashlight and your nose, looking for the beer leak.

I think it is time to take a break from container-ville and use a small, local Kafka on my Mac to develop the synthetic load generator. That will also be interesting, because I’m hoping to be able to generate spiky, floody flows of messages using feedback from producers to consumers. It is actually something I’ve had in mind for years, and never had the right situation calling on me to finally try it out.

Update: Well the load generator was fun hacking/learning. The final step would be to put it all together. That may come in the future, but for now I’m busy with a trip to New York.