tldr

TL;DR:

  • Task switch frequently. Do 2 or 3 things at once if it’s productive. Walk and chew gum at the same time.
  • Diversify tasks and mediums. Treat them like the stocks in your retirement fund. Or like the clothes in your closet: you want to be able to mix and match.
  • Invest in work environments. Chairs. Monitors.
  • Pay someone to do your chores.
  • Retrospect. Optimize. Repeat.
  • Plan for reusability. What you produce will get leveraged in different mediums by different stakeholders. Work once, use many.
  • Have a vision of how products will get re-leveraged. Do the small, easy piece first. Minimum Viable Product.
  • Work evenings, weekends, holidays.
  • Stay healthy.
  • Talk with your significant other. Develop a shared vision. Accept shared sacrifice. Enjoy shared rewards.
  • If you’re good at code, differentiate by working on communication and influence.

Link to How I optimised my life to make my job redundant

automation, docker, linux, pluralsight

The first explanation that crystallized into a mental model was Carl Franklin’s from the .NET Rocks Docker podcast. It helps that he’s repeated it on subsequent podcasts whenever the subject has come up.

Docker is like a VM with the weight of a process.

This is true as far as it goes. They’re both isolation mechanisms that allow you to host more than one sandboxed environment on a machine.

This weekend I watched Nigel Poulton’s Docker Deep Dive on Pluralsight. It was nice to see someone step through all the Linux commands for creating and maintaining Docker images and instances. But above and beyond that, it was helpful to flesh out my anemic mental model. Here are a few tidbits that came to light.

## Build Process

I’d heard that Dockerfiles allow you to version your servers just like you version your source code. From Nigel’s course, I learned that the contents of that Dockerfile are transformed into a final server image by executing the Docker build process. Given a base image as a starting point, each line in the Dockerfile modifies the filesystem, then the build process takes a snapshot image of the changes and moves on to the next instruction in the Dockerfile.
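
To make that concrete, here’s a rough sketch of the kind of Dockerfile I have in mind (the base image, package, and paths are just illustrative):

```dockerfile
# Start from a base image pulled from Docker Hub
FROM ubuntu:14.04

# Each instruction modifies the filesystem and gets committed
# as a new image layer by the build process
RUN apt-get update && apt-get install -y nginx

# Copy site content into the image (hypothetical path)
COPY ./site /usr/share/nginx/html

# The process to run when a container starts from this image
CMD ["nginx", "-g", "daemon off;"]
```

Running `docker build -t my-site .` in the directory holding that Dockerfile walks the instructions in order, snapshotting after each one.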

## Image Layers

I already knew that you can download a Docker image from the Docker Hub repository. It had the look and feel of downloading a package using a package manager but for an OS and the rest of the environment. A lot like my first experience of npm actually. What I hadn’t grasped fully until I saw Nigel’s course was that beneath the surface, each of those images is composed of many different layers during the Docker build process.

The similarities to git helped inform my mental model. In both tools, we’re taking snapshots as we go. You can see a history of all snapshots and how they relate to each other using the built-in tools. If you do, the history looks like a directed graph where each node is a step from a past state into a future state.
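
The built-in tools I’m thinking of look something like this (the image name is just an example):

```bash
# List the layers (snapshots) that make up a Docker image
docker history ubuntu:14.04

# The rough git equivalent: the commit graph of a repository
git log --oneline --graph
```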

## Union Mounts

Docker relies on a union mounted filesystem to merge together all these image layers. Something like aufs on Linux allows Docker to superimpose the files of one layer upon another so that when the OS opens a file, it gets the most recently modified version. If I’ve got 7 layers in my favorite Docker image, then it’ll search the top-most layer for my file and if it doesn’t find it, the OS will search the next layer down.

It’s always safe, and relatively fast, to read such a filesystem but what about writing to it? All filesystem layers except the top-most one are set to read-only so when we want to write to a file the OS makes a copy of it at the top-most layer and edits that. We call this behavior copy-on-write. Ayende Rahien has a good description on his blog.
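
OverlayFS, a cousin of aufs, makes the mechanics easy to see outside of Docker; here’s a rough sketch with made-up paths:

```bash
# lowerdir: the read-only layers (leftmost sits on top)
# upperdir: the writable top layer where copy-on-write happens
# workdir:  scratch space the overlay driver needs for itself
mount -t overlay overlay \
  -o lowerdir=/layers/2:/layers/1,upperdir=/layers/rw,workdir=/layers/work \
  /merged

# Appending to a file that lives in a lower layer copies it up
# into /layers/rw first, then modifies the copy
echo "change" >> /merged/etc/config
```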

## PID 1

Docker assumes that you’ll only be using one process inside the container. That process is assigned a process ID of 1 (PID 1). In Unix-land it seems that PID 1 is usually an init process that bootstraps and orchestrates other processes, but in Docker-land that’s a little less true. Still, Docker starts each container with something at PID 1 and will shut the container down when that process exits. Picking which process to use is a whole lot easier when there’s only one to choose from; for anything more complicated you should either reconsider your direction or do your homework.
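
You can see this for yourself with a throwaway container (assuming a Linux host with Docker installed):

```bash
# The command handed to docker run becomes PID 1 inside the container
docker run --rm ubuntu:14.04 sh -c 'echo "I am PID $$"'
# -> I am PID 1

# When that process exits, the container stops along with it
docker ps   # the container no longer shows up here
```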

## A Network Switch in the Kernel

This one made sense to me from my time with Hyper-V. To get network packets from the host to the client (and back) you need a virtualized switch. Docker has one built into the Linux kernel to support this level of transport. Only the kernel has the level of visibility necessary to mediate communication with the sandboxed containers.
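
On a Linux host you can poke at the pieces directly; docker0 is the name of the default bridge Docker sets up:

```bash
# The Linux bridge that acts as Docker's virtual switch
ip addr show docker0

# Each running container plugs into it through a veth pair;
# the host-side ends show up as veth* interfaces
ip link | grep veth
```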

## Where Next?

Getting the lowdown on the Linux implementation of Docker provokes some interesting questions about the Microsoft implementation. The network virtualization elements seem really well baked into Hyper-V, so I’m assuming that’ll be a piece of cake to build into the next Windows OS. The union mount filesystem seems harder to pull off. I’m not sure if Microsoft can add that feature to NTFS or if they need to build something from scratch. Time will tell; they’ve announced that they’ll support Docker containers, so I’m sure they have a plan.

builds, automation, ruby, jekyll, travis-ci, linux

I can’t help it. I know that I should be writing blog entries. I’ve got a ton of ideas stuck in my head. But every time I sit down to write, I notice that the build for this site is broken.

Build Status

Having a working build is almost stupidly important to me. It’s like software isn’t software until an automated build runs some unit tests and declares success. Even for a web page.

Jekyll pages on GitHub can be integrated pretty cleanly with Travis-CI. There’s even helpful documentation to get you started. The HTML-Proofer gem crawls your generated static HTML site and points out things that you should fix like images without alt tags, bad CSS references, etc.
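
My .travis.yml is roughly the sketch below; the html-proofer command name and flags have shifted between versions, so treat it as a starting point rather than gospel:

```yaml
language: ruby
rvm:
  - 2.2
script:
  - bundle exec jekyll build
  # Crawl the generated site and fail the build on broken links,
  # missing alt tags, bad CSS references, and the like
  - bundle exec htmlproofer ./_site
```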

I’ll admit that it took me weeks of trying off and on to even get to a build with failing tests. All hail build #14. It really just took time to learn how to install and setup Ruby and Jekyll in Linux. Piece of cake. No, really. Cake.
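
For posterity, the gist of that setup on a Debian-flavored distro was something like this (package names vary by distro and Ruby version):

```bash
# Ruby plus headers and a compiler for native gem extensions
sudo apt-get install ruby ruby-dev build-essential

# Bundler, then the site's own gems from its Gemfile
gem install bundler
bundle install

# Build and preview the site locally
bundle exec jekyll serve
```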

So now I’ve got 62 errors in the markup generated from my markdown (and Jekyll templates) to fix. Does that make me a happy camper? Well, kind of. It makes me less unhappy. That should count for something. And I suspect I’ll feel another sense of accomplishment when I get a clean build. Red, Green, Refactor is supposed to work like that.

chromebook, linux

My birthday came and went a couple of days ago. To splurge, I bought myself a nice little 11” Acer Chromebook.

CB3-111-C670

Chromebooks are by definition (the Pixel being the exception that proves the rule) cheap, low-spec ultrabooks. I got mine for about $150 with the following less than impressive stats:

  • Intel Celeron 2.16 GHz processor, 2 cores
  • 2 GB DDR3L SDRAM
  • 16 GB SSD storage; no optical drive
  • 11.6 inch, 1366 x 768 pixel, LED-lit screen
  • Chrome operating system; Moonstone White
  • 2.4 pounds
  • 8 hours battery life

It’s cramped in a lot of ways but it just works so well for what I really want to do. And what is that, you ask? Squeaking every minute out of my day. The Chromebook boots in a few seconds and resumes from sleep in a handful of milliseconds. That kind of responsiveness means I can pop it open to do some research, draft some prose, or hack some code in the five minutes between catching my morning bus and arriving at the transfer station. I tried that exercise with my work-issued Windows 7 machine and it couldn’t finish logging me in before the time was up.

You’d think that being tied to the internet during the commute (it is Chrome) would limit the fun but I’ve found two ways to keep things flowing. First, I’ve downloaded crouton which allows me to host a Linux environment alongside ChromeOS. That gives me access to Node.js, Ruby, Python and a host of other fun toys. Second, I piggy-back on my phone’s 4G data connection by turning it into a wifi hotspot. I’m glad I didn’t spring for LTE built into my laptop because sharing works so well.
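
For the curious, the crouton setup boils down to something like this from a crosh shell in developer mode (the target names are from memory, so check the crouton README):

```bash
# Install a minimal Ubuntu chroot alongside ChromeOS
sudo sh ~/Downloads/crouton -t cli-extra

# Drop into the chroot whenever I want a real Linux shell
sudo enter-chroot
```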

So, we’ll see what comes of having an underpowered Linux box with me every day. My hope is that I’ll be able to write more. Blogging and ssh were equally painful when it was just my thumbs on a 5-inch screen.

powershell, testing

I’ve started a small hobby project inspired by my favorite code koans. I first ran into the Ruby koans years back and have found them a fun and accessible way to describe a language. I’ve enjoyed my walk through the Javascript koans (mrdavidlaing and liammclennan versions) and started in on the Python koans by Greg Malcolm. So when I wanted to know more about Powershell I searched Github for a set of koans to walk me through the features and syntax of the language. To my disappointment I didn’t find anything. So I decided to write my own.

The challenge of course is finishing them. I’ve found inspiration in the structure and content of Greg Malcolm’s work. He wrote the original Python koans and is also a primary maintainer of the mrdavidlaing version of the JS koans. Stepping through the code from the producer side instead of the consumer side has given me an appreciation for how to structure lessons that teach themselves.

The framework that is not a framework

A koans project is easiest to stand up when leveraging a nice unit testing framework. In Powershell’s case that’s clearly Pester. It provides BDD-style Should syntax and chains together very fluently.

Describe "AboutAsserts" {
	It "should expect true" {
		# We shall contemplate truth by testing reality, via asserts.
		$true | Should Be $false # This should be true
	}
}

In this example, we’re piping $true to the Should function which then asserts equality using the Be function. The thing is, you don’t need to know how the functions are implemented or much of anything about Powershell to get started. Even for a novice, the syntax has sufficient context to lead you to the correct solution.

powershell koans

Deeper down the rabbit hole

All code koans have a few things in common:

  • An entry point to run the tests
  • Report all test successes
  • Stop running tests at the first failure
  • Report which test failed with a helpful stack trace
  • Add a nice zen saying somewhere

The Python koans split each of these out into a complex set of modules. I started down that path but reversed course and simplified down to a single file. Powershell makes it easy to use the objects exported by Pester and pretty up the output using the native Write-Host function. This is my first stable version:

$ScriptDir = Split-Path -parent $MyInvocation.MyCommand.Path
Import-Module $ScriptDir\..\lib\Pester


#helpful defaults
$__FILL_ME_IN__ = "FILL ME IN"


#run koans, results ordered by file name then by order within file
$allKoans = Invoke-Pester -PassThru -Quiet


#output results
$about = ""
$karma = $true
$i = 0
While ($karma) {
	$koan = $allKoans.TestResult[$i]
	
	if ($about -ne $koan.Describe) {
		$about = $koan.Describe
		Write-Host "Thinking $about" -ForegroundColor Magenta
	}
	
	$name = $koan.Name
	
	if ($koan.Passed) {
		Write-Host "    $name has expanded your awareness." -ForegroundColor Green
	} else {
		$failed = $koan.FailureMessage
		$stackTrace = $koan.StackTrace
		
		Write-Host "    $name has damaged your karma." -ForegroundColor Red
		Write-Host ""
		Write-Host "You have not yet reached enlightenment ..."
		Foreach ($str in $failed -split "\n") {
			Write-Host "    $str" -ForegroundColor Red 
		}  
		Write-Host ""
		Write-Host "Please meditate on the following code:"
		Foreach ($str in $stackTrace -split "\n") {
			Write-Host "    $str" -ForegroundColor Yellow
		}
		Write-Host ""
		Write-Host ""
	}
	
	$i += 1
	$karma = $koan.Passed -and $i -lt $allKoans.TestResult.Length
}
Write-Host "Flat is better than nested." -ForegroundColor Cyan

Pretty straightforward, eh? Next steps would be to add more cryptic zen sayings to the last Cyan bit. That and build out more koans. And maybe create an automated build on appveyor.com.

meta

It Was Fun While It Lasted

One reason for setting up my blog was to try out Azure. I’ve got an MSDN subscription through work so I have enough free monthly credits to cover the cost. On the whole, the experience was fun and educational. It makes the barrage of Azure feature updates a little easier to digest when you’ve got some skin in the game.

A point that’s been driven home repeatedly of late is that I don’t really need cloud-scale infrastructure to handle my blog. In fact, even maintaining a PaaS solution with my meagre time budget was a challenge. Even with a month’s notice I just barely managed to squeak in the re-up of my SSL cert before it expired. Updating the blogging code for Ghost took a similarly long time. None of these tasks were very challenging but I found myself questioning why I should be investing my rare idle time in useless infrastructure work.

I needed a lower maintenance solution.

Return of the Octocat

I once had a Github blog based on the Jekyll engine. It suffered from lack of content but I got a lot of enjoyment from learning a bit about Ruby and how Liquid templates work. Now that I’m back on the Github platform things are a bit lower key. I’ve borrowed the site design from the excellent Phil Haack with a few tweaks of my own.

With a template in place I can comfortably focus on writing. Not that I have a lot of time for that between work and family. Over the years, I’ve gotten sufficiently comfortable with markdown that I can draft this in Notepad++ and be reasonably confident that I got it all correct. Scratch that, I screwed up the link syntax. Heh. Ok, moving to StackEdit. It’s a better editor than Ghost anyway.


Tough to Grok

Yeah, CSS is a language that many developers (myself included) struggle with. It isn’t fun to play with and it often punishes you for so much as looking the wrong way at it. I’ve been trying to learn what I can about CSS, especially best practices but also the flashy new things that are coming with CSS3. To that end, I read CSS-Tricks religiously and the occasional Smashing Magazine article that piques my interest. A short while back, Chris Coyier linked to an interesting article (the subject of this post) from Harry Roberts on graphing CSS specificity. But enough with the blabbing, let’s look at some pictures.

Bring on the Pretty Pictures

Here’s a graph of pemcostyle.css from the Pemco.com home page. The graph is generated from a GitHub project by Jonas Ohlsson.

css specificity

And here’s what Harry thinks a good graph looks like (sloping up and to the right) from his blog post.

specificity graph

So, what are we supposed to get out of visualizing a CSS specificity graph? Harry Roberts suggests that a spiky graph indicates a less maintainable codebase. CSS style sheets are challenging to write in part because they always read top to bottom (left to right on the graph), so a CSS selector that sits at the top of the style sheet and has high specificity will limit our ability to style other elements later in the doc. Overriding such a selector may not be possible, or if it is, it may require undesirable hacks like !important that make further changes even more challenging. Misuse of CSS specificity creates a slippery slope.

So Now What

It’s always nice to pair a problem with a potential solution. In this case, the suggestion from Harry is two-fold. Reorder your CSS selectors so that low-specificity ones come first and high-specificity ones come last. Then reduce the specificity of your selectors where you can. For example, the selector in the above graphic is from the following snippet:

.detailAccordion .panel-group .panel-heading + .panel-collapse .panel-body {
    border:0px;
}

That’s awfully specific just to remove a border. Instead it might be possible to attach a new class to the markup and use a selector like this:

.no-border {
    border:0px;
}

Or perhaps use BEM syntax like so:

.panel-heading__panel-body--collapsed {
    border:0px;
}

The idea is the same: a single (perhaps wordy) class is preferable to any combination of classes because it has a lower specificity. And lower specificity leads to increased re-use and decreased frustration extending classes.

For an example of absurdly low specificity in CSS and an elegant solution to a real problem, check out the lobotomized owl selector.
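
If memory serves, the selector itself is just this (the margin value is whatever suits your rhythm):

```css
/* Every element that directly follows another element gets a top margin */
* + * {
    margin-top: 1.5rem;
}
```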

node.js, javascript, testing, education

Because it’s a new technology that solves the same old problems in new ways. JavaScript on the server has to work pretty hard to be maintainable, so you’ll have to learn about npm for package management, require for includes/dependencies, and testing frameworks like jasmine to make sure your code is kosher. These are the same powerful patterns that have influenced modern software development on other platforms.
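
A toy sketch of what those pieces look like together (the module and spec are made up):

```javascript
// greeter.js - a module exported the CommonJS way
module.exports = function greet(name) {
  return 'Hello, ' + name;
};

// greeter.spec.js - a jasmine-style test that requires the module
var greet = require('./greeter');

describe('greet', function () {
  it('says hello by name', function () {
    expect(greet('node')).toBe('Hello, node');
  });
});
```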

Learning node.js also allows you to polish your JavaScript. When asked what languages a new or returning developer should learn to get up to speed on development today, JavaScript has to be at the top of the heap. It is the language of the web (for better or for worse).

Like many of us, my JS has been decidedly client side for the bulk of my career. I grew up despising its spaghetti complexity, spurning it for more stable languages like C# with things like type safety and compiler warnings. I first found that JavaScript could be beautiful when I discovered jQuery. The use of a fluent API to chain multiple behaviors together was elegant compared to the Vanilla.js that I could muster.

I read Crockford’s JavaScript: The Good Parts around the same time I worked through the JavaScript koans of Liam McLennan and David Laing. All of that impressed me with the beauty and power of the language. I never had a chance to learn the language in a formal setting but it reminds me of how I wrote Scheme (a variant of Lisp) in college.

Which gets me back to my original thought on node.js. Maybe it’s just wishful thinking but I like to imagine that the kids these days won’t have it as bad as I did when it comes to JavaScript. I see node as an opportunity to introduce JavaScript to those that don’t know it well. And to do it in the right way, with the very best patterns, practices and community we have. For those that know their way around a === it’s a way to provide feedback and guidance to a rapidly evolving domain.

So, because I’m closer to the former than the latter, I’ve started picking up node.js at Node School. The creators of Node School have structured educational modules that walk the student through a series of increasingly complex programming challenges. Along the way you learn about the core concepts of node.js and pick up skills that can aid your JavaScript elsewhere as well. The whole thing is fully automated with a unit test suite hidden behind the colorful console-based UI. When you drift off course, the system can point out which test cases you aren’t passing and maybe drop a hint about what to do to make them work correctly.
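
The workshops themselves are npm packages; learnyounode is the usual starting point:

```bash
# Install the introductory Node School workshop globally
npm install -g learnyounode

# Launch the menu of exercises
learnyounode

# Check a solution against the hidden test suite
learnyounode verify myprogram.js
```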

meta

So I’m having a tough time managing my time. There are a boatload of legitimate needs that take priority over writing about technical subjects. It’s tough to argue with a two-month-old that you just need to scribble down a few more thoughts before you can get her more milk.

So now I’m hunting for blogging methodologies that minimize time commitment. It limits my ability to create robust long-form content but I’m probably not up to much of that anyway at this stage in my blogging career. Let’s examine a few of my favorite examples.

First up is Chris Coyier. He has a nice mix of content lengths at http://css-tricks.com/. Occasionally, he drops an ultra-short post about something interesting that he’s read.

It’s a fun little soundbite to talk about how the web is responsive right out of the box. With no authored CSS at all, a website will flow to whatever screen width is available. If your site isn’t responsive, you broke it.

Well that’s almost true, but as Adam Morse says in this new project:

HTML is almost 100% responsive out of the box. These 115 bytes of css fix the ‘almost’ part.

Things like images and tables can have set widths that would force a layout wider than a viewport. And of course, the meta tag.

And look at that TLD!

Direct Link

This is super-quick content curation at its best. The whole post is about 100 words and probably took less than 15 minutes to put together. It also has a lot of value because Chris is raising awareness of a useful trick as well as an interesting author.

Second on my list of interesting authors is Ayende Rahien. The size and polish of his posts varies; sometimes he’s got really clean long-form content but oftentimes he’s willing to just throw out something in a more stream-of-consciousness style. His series on Go-Raft is a good example of that.

Ayende makes it clear that he’s exploring a new project, he outlines his methodology (reading through everything from A to Z, top to bottom), then dives in with his analysis.

http_transporter.go is next, and is a blow to my hope that this will do a one way messaging system. I’m thinking about doing Raft over ZeroMQ or NanoMSG. Here is the actual process of sending data over the wire:

// Sends an AppendEntries RPC to a peer.
func (t *HTTPTransporter) SendAppendEntriesRequest(server Server, peer *Peer, req *AppendEntriesRequest) *AppendEntriesResponse {
    var b bytes.Buffer
    if _, err := req.Encode(&b); err != nil {
        traceln("transporter.ae.encoding.error:", err)
        return nil
    }

    url := joinPath(peer.ConnectionString, t.AppendEntriesPath())
    traceln(server.Name(), "POST", url)

    t.Transport.ResponseHeaderTimeout = server.ElectionTimeout()
    httpResp, err := t.httpClient.Post(url, "application/protobuf", &b)
    if httpResp == nil || err != nil {
        traceln("transporter.ae.response.error:", err)
        return nil
    }
    defer httpResp.Body.Close()

    resp := &AppendEntriesResponse{}
    if _, err = resp.Decode(httpResp.Body); err != nil && err != io.EOF {
        traceln("transporter.ae.decoding.error:", err)
        return nil
    }

    return resp
}

This is very familiar territory for me, I have to say :). Although, again, there is a lot of wasted memory here by encoding the data multiple times, instead of streaming it directly.

He writes as he thinks and with minimal editing. I can see him, in my mind’s eye, typing up the blog post as he reads through the codebase. This keeps the effort to O(1) and allows him to produce ~1200 words of content. Not a fantastic fit for my lifestyle at home, since I don’t have the dedicated time to draft that kind of analysis as I go, but it might do for blogging about work-related things. I’m assuming that this was done in a single evening sometime after dinner given his closing:

And I think that this is enough for now… it is close to 9 PM, and I need to do other things as well. I’ll get back to this in my next post.

And with that, I’m going to follow in Ayende’s footsteps and get back to other things that need doing.

meta

… that is the question.

Some of us are writers and some of us just aren’t. I’ve always put myself in the latter category. This blog has been an effort to explore the former.

So far it hasn’t been working out.

I’ve had plenty of ideas but finding time to put them to paper has been the challenge. I have a little time to write on the bus while commuting to work but I’ve found that the small screen isn’t conducive to a lot of prose.

I’ve tried to write a little at work while on my lunch break but as those of you who know me are aware, my lunch lasts 15 minutes at most. I tend to work while I’m at work; anything else feels uncomfortably unethical.

I’ve tried to write in the evenings but I find that it distracts from my relationship with my wife. We’re very close and enjoy spending what little time we can together.

I’ve tried to write on the weekends but that kind of me time takes a back seat to the necessary chores that I have to do on the weekend. I’m writing this as my wife takes a shower and I watch my two month old daughter sleep. It ain’t gonna last.

So, the moral of the story seems to be that when I write I have to finish fast. I need a full keyboard and all my thoughts in order. Then I might be able to crank out an article in 15-30 minutes. Otherwise I’m stuck with a folder full of drafts that won’t have the time to develop.

I’m interested in how others structure their time when doing a technical blog. Drop me a line if you have brilliant ideas.

security

Diceware passwords (the correct horse battery staple mentioned previously) should be much longer than 4-5 words. That’s because hackers use hordes of compromised computers to throw billions of guesses per second at the problem. So the number of possibilities in your funkalicious password-choosing algorithm has to be substantially more than what an attacker can guess. That’s the ratio you’re managing.

Taking the 350 billion guesses per second in the article above, one of my 5-word passwords would be cracked in about 22 hours. Adding a sixth word extends the life to about 5 years. Expanding the dictionary to the 7776-word Diceware list and adding a seventh word would make the password last until about 2030.

Security professionals measure the strength of a password differently than computing the ratio that I’ve outlined above. This helps because the password-cracking power of attackers is constantly increasing. Professionals measure password strength in bits of entropy. You can compute the entropy bits by multiplying the number of words by log2(N), where N is the number of words in the dictionary. My dictionary gives about 10.9 bits of entropy per word and the Diceware list gives 12.9, whereas normal written English is estimated at 1.3 bits per letter.
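
A quick sanity check of those numbers (in PowerShell, since that’s what I’ve been playing with lately):

```powershell
# Entropy per word is log2 of the dictionary size
[Math]::Log(1949, 2)       # ~10.9 bits per word (passphra.se list)
[Math]::Log(7776, 2)       # ~12.9 bits per word (Diceware list)

# Total entropy of a five-word passphrase from the smaller list
5 * [Math]::Log(1949, 2)   # ~54.7 bits
```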

security

I like password managers. I’ve been using KeePass for a couple years now and it has never done me wrong. Storing passwords in a password manager allows you to pick passwords like sir35zf58u without having to remember all that gobbledygook. The catch is that you have to remember the password to your password manager which is where the Correct Horse Battery Staple comes in.

Password Strength - xkcd.com

The method goes something like this:

  1. Find a dictionary.
  2. Pick any four words at random
  3. Profit

I’ve used Preshing’s xkcd passphrase generator which does exactly the above. The dictionary is 1949 common words, which is important for a number of reasons. First, so you don’t have to remember something like ternion or spell something like sesquipedalian every time you log in. And second, because the size of the dictionary you use and the words in it tell you how long a hacker would have to work to get your password. If we got Correct Horse Battery Staple from passphra.se then it’d be one out of 1949^4=14,429,369,557,201 or about 14 trillion.

For a really important password like the master password on a password safe like KeePass you’d want to add in some additional complexity to make it harder to guess. Picking a fifth word (e.g. Correct Horse Battery Staple Supply) would put you at 1949^5=28,122,841,266,984,749 or about one in twenty eight quadrillion. You could also try mis-spelling one word (e.g. Batery), substituting letters and numbers (e.g. [email protected]), adding in some secret sauce of your own with random letters and numbers at the front, back or middle of your password (e.g. Correct Horse Battery Staple Supply 4ti8R).

Once you’ve gotten a complex but memorable password on your password safe then you can start generating complex passwords that you could never remember. The thing that makes a password like sir35zf58u strong is its length and its character set. This one is 10 characters long and uses lower case letters and numbers for a total of 36 characters to choose from. For those that remember their combinatorics, that’s 36^10=3,656,158,440,062,976 or three and a half quadrillion possibilities. I wanted to show you something with upper case letters as well but the number was so much longer that I couldn’t make sense of it.

That makes it tougher for hackers to get your password after they’ve gotten into Adobe/Gawker/Forbes/Snapchat/Sony/Yahoo’s database. Even big, professional companies get compromised so it’s important to make your passwords complex, secure, and to never re-use them. I’ve got over one hundred websites stored in my password safe and I probably add another handful each month.

meta

Everyone seems to have their own reason to start blogging. I don’t know mine yet. We’ll see if I can keep with this long enough to find out.

I’ve been starting to blog for the last few years now. It began with my career as a computer programmer in 2008. The technology industry moves so fast that you need to be learning constantly to not fall behind as fast.

It’s a truism that good programmers are always learning. By always, I mean every day. Perhaps even every hour. Searching the internet for someone else’s solution to your quandary becomes a reflex. You find sources of information that can be discarded because they’re unhelpful or inaccurate. You find people that you come to know and trust. After tripping across bloggers who wrote down exactly what I needed to read time and again, I began watching them more closely to see what they’d write next.

Over years of reading technical blogs you begin to resonate with some authors. I like Scott Hanselman for his personality and the breadth of his angle-bracket-ish concerns. I like Phil Haack for his quirky personality and diverse nerdiness. I’ve recently started following Troy Hunt because of the strong personality that comes out in his writing on security-related subjects. I’ve found that half of why I read what I read is the personality behind who’s writing it.

Which leads me to this point. I’ve decided to start writing because I admire it so much in the others that I look up to. And it isn’t lost on me that I wouldn’t be able to look up to these people if they hadn’t been brave enough to start sharing in the first place. Perhaps one day I’ll find myself carrying that torch forward. For now I’ll focus on Jeff Atwood’s ultimate one-step solution. The hows of it might make for a good blog post.