math, prime, binary

We always use base-10 for our number lines. It helps us think about things. Last night I couldn't sleep because I was imagining that each prime number had its own number line where it was the base. The beautiful part was how it tied back to the definition of a prime number: a number divisible only by itself and 1.

For example, two is a prime number. In base-2 (aka binary) that is 0010. All other even numbers aren't prime because they're divisible by two (e.g. 0100, 1000, 0001 0000, etc.). The pattern I liked here was those zeros padding the right side of the number. Each one represented another number that the Sieve of Eratosthenes knocked out.
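
Here's a quick PowerShell sketch (my own, knocked together after the fact rather than part of the original midnight math) that prints the primes from the table below in decimal and eight-bit binary:

$primes = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
foreach ($p in $primes) {
    # [Convert]::ToString(n, 2) gives the binary form; pad to eight bits like the table
    '{0,3}  {1}' -f $p, [Convert]::ToString($p, 2).PadLeft(8, '0')
}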

Here's a handful of primes in decimal, binary, and their own base:

Decimal binary base-prime
2 0000 0010 0 1
3 0000 0011 0 2
5 0000 0101 0 4
7 0000 0111 0 6
11 0000 1011 0 10
13 0000 1101 0 12
17 0001 0001 0 16
19 0001 0011 0 18
23 0001 0111 0 22
29 0001 1101 0 28
31 0001 1111 0 30
37 0010 0101 0 36
41 0010 1001 0 40
43 0010 1011 0 42
47 0010 1111 0 46
53 0011 0101 0 52
59 0011 1011 0 58
61 0011 1101 0 60
67 0100 0011 0 66
71 0100 0111 0 70
73 0100 1001 0 72
79 0100 1111 0 78
83 0101 0011 0 82
89 0101 1001 0 88
97 0110 0001 0 96
101 0110 0101 0 100
103 0110 0111 0 102
107 0110 1011 0 106
109 0110 1101 0 108
113 0111 0001 0 112
127 0111 1111 0 126
131 1000 0011 0 130

I like the symmetry of shifting each prime number into its own base because, except for the first prime number, they're all even numbers. In the regular list of primes everything is odd. My brain wants there to be a pattern. Heck, maybe there is a pattern but I didn't take enough math in school to get to it.

resume, cv, firing

A close friend of mine was let go at work today and I'm still feeling raw about it. The metaphors that jumped to mind were funerals and deaths. I'm sure he'll do alright; he's a super-smart guy and was a ton of fun to work with. But I'll miss him.

The thing is, people stay here at PEMCO for years and years. It's not uncommon to celebrate 10, 15, and 20+ year anniversaries regularly. Which means you might be a bit stale when you leave. Am I stale? Am I marketable? Am I battle-tested?

Zanshin

When I was in my 20s and first living on my own I felt the need for some socialization and structure. My high school friends had mostly gone away to college and I was working through two years of community college with a plan to transfer... somewhere later. I took karate as a PE elective since I needed the credit and found I liked what it offered. Just one of the things I took away was the idea of zanshin, which I understood as the follow-through of a punch.

Picture yourself punching a punching bag. Do you reach out and tap it with your knuckles? A movement with follow-through extends through the punching bag with the expectation that your movement cannot be stopped and will continue on the other side of the bag. Visualizing this way in a sparring match gives you more power against your opponent.

Perhaps it's hubris, but I've come to think that I can overcome many of the challenges in my way by being committed to the cause and giving my whole self to it. I think that an employer, and even moreso a friend, wants someone with them who will sink or swim as a team. Someone who leaves it all on the field. Someone who cares.

The opposite of this is apathy and a paycheck.

Sharpening the Saw

I was transferred out of my Development team and into our DevOps team in February of 2023. It was scary. I'd built my identity around developing software and all of a sudden I was surrounded by Operations guys that swore a lot more than the old team and just had a totally different flow. It was all about tickets. And everyone was in their silo.

It's been eight months now and we've gone through the storming and norming to get to the performing. I write a lot of terraform and occasionally some PowerShell. The problems now are cloud infrastructure problems but the science of troubleshooting them is substantially similar. I'm comfortable enough to share my personality with my new teammates and they haven't rejected me.

Hanselman suggested in a blog post to keep sharpening the saw, by which he meant going out of your way to keep learning. Find that edge of discomfort and occasionally tip into that feeling. Read about the Kubernetes ecosystem and feel queasy. Write some JavaScript and get angry at prototypal inheritance until it starts to gel.

The opposite of this is fragility.

Consultant Mindset

I graduated in 2008 with a CS degree from the University of Washington's branch campus in Tacoma. I'd gotten married the year before and was ready to start life as an adult. Heh, kids were another decade off but a global recession was just around the corner. The first gig to say "yes" to me was a tiny consulting shop in Kirkland that had one and a half developers and was looking to expand.

Mahendar was the senior developer assigning out work and reviewing it when I was done. Everything was in C# which was close enough to Java that I could pick up the syntax in a day and the tasks were really not too hard. I learned the agile method over the first few months then got the news that Mahendar was moving on and we were back to one and a half developers. For the 20 hours a week that Vishnu wasn't there I was the whole department.

I was only with that company for 3 or 4 years. In that time I learned that when most of your paycheck comes from billable hours it pays to 1) build good relationships, and 2) always be looking for new work. I quickly found that I wanted at least two medium-sized projects on my plate in case one slowed down, plus a backlog of eager potential customers. If I ran out of work I had to fall back on #1 and work the relationships I had to drum up new business.

The opposite of this is complacency.

password, passphrase, numberphrase

We're getting up to story problems and simple algebra for 3rd grade math. If I want to magic up some math problems on short notice it helps to have something random to come up with some numbers. Then I can shorten/lengthen/combine them into a solvable equation.
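
A one-liner like this PowerShell sketch (roughly what I reach for, nothing fancy) spits out four random four-digit groups:

(1..4 | ForEach-Object { '{0:d4}' -f (Get-Random -Maximum 10000) }) -join ' '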

3604 9375 2031 0600

password, passphrase

Ever since XKCD came out with the correct horse battery staple I've moved to using passphrases everywhere.

Now that I've got kids I use passphrases for homework assignments that I generate mere moments before they get off the bus. They need to practice reading, writing, and arithmetic.

Here's a passphrase generator using the "ten hundred" most popular words that Randall Munroe used in his Thing Explainer book.
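
The generator itself isn't shown here, but a minimal PowerShell sketch does the job (assuming a local tenhundred.txt with those words, one per line; the file name is mine, not Munroe's):

$words = Get-Content .\tenhundred.txt
(1..4 | ForEach-Object { Get-Random -InputObject $words }) -join ' '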

huh ever able clothes

Git, tfs

So we've got a big ol' monorepo that's stored in TFS. Most of it is getting converted to Git as we transition away from Team Foundation Version Control (TFVC). The monorepo is about 3GB and has about a decade of history. Some of the projects that were bolted onto the sides of the repo are good candidates to break out into their own small repositories in Git. But how exactly can I do that and keep all the history?

I've got 3 solutions so far:

  • Use Git-TFS to transform a TFS sub-folder into a Git repository
  • Use the BFG tool to remove folders/files that you don't like from a Git repo
  • Use built-in Git tools to filter-branch

Using Git-TFS to transform a TFS sub-folder into a Git repository

Since I've got the option of doing a custom migration from TFS to Git I can add a complicated regex with negative lookaheads to pick out all the subfolders I want to keep. Then Git-TFS has to walk through all the commits since the dawn of time and only keep the files that get through the ignore filter. That might take a day but I think I'll get exactly what I want.

#choco install git -y
#choco install gittfs -y
Start-Transcript git.txt -Force
$destination = "pad"
Remove-Item -Recurse -Force $destination -Verbose
$ignoreRegex = @(
    "[Pp]ackages/", # nuget package folder
    "[Dd]ebug/", # build output folders
    "bin/[Rr]elease/",
    "\.vssscc", # TFS source control stuff
    "\.vspscc",
    "\.vs/", # Visual Studio folder for IIS settings and stuff
    "\.vscode/",
    "node_modules/", # npm packages folder
    "AppLibs/(?!PAD).*", # only convert the AppLibs/PAD folder and ignore everthing else
    "Database/(?!PAD|Deploy).*",
    "Websites/(?!PAD).*",
    "Common/",
    "PoC/",
    "Reports/",
    "Tools/"
) -join '|'
git tfs clone http://tfs.myorg.net/tfs/myorg '$/Path/To/Main/mymonorepo' $destination --debug --branches=none --from=348573 --ignore-regex=$ignoreRegex
Set-Location $destination
git lfs migrate import --above=50MB
git tfs cleanup --debug
git gc --aggressive
git branch -m master main
Set-Location ..
Stop-Transcript

Use the BFG tool to remove folders/files that you don't like from a Git repo

You can use the BFG tool to purge unwanted folders and files from history. Not my favorite approach because you have to list all the folders you don't want and sometimes the code you do want has a folder of the same name. It doesn't allow you to specify paths and will grump at you if you try.

I tried this but lost history that I didn't want to part with (e.g. Activity and Tools). On the plus side, it was done in about a minute.

# https://github.com/rtyley/bfg-repo-cleaner
Start-Transcript git.txt -Force
# Error: *** Can only match on filename, NOT path *** - remove '/' path segments
$foldersToDelete = @(
    "Activity",
    "Policy",
    "Rating",
    "CorpMisc",
    "Admin",
    "PoC",
    "Reports",
    "Tools"
) -join ','
# git clone https://github.com/myorg/mymonorepo
# git remote remove origin
"java.exe -jar bfg-1.14.0.jar --delete-folders {$($foldersToDelete)} --no-blob-protection mymonorepo" | Out-File -FilePath "slimdown-pad.cmd" -Force
& ".\slimdown-pad.cmd"
cd mymonorepo
# git wipe
git reflog expire --expire=now --all
git gc --prune=now --aggressive
Stop-Transcript

Use built-in Git tools to filter-branch

Git's built-in tooling comes with a lot of warnings. When you try to run git filter-branch it says that it will be very, very, very slow and potentially mangle your history.

This is currently taking me hours to do one path. I'm probably not going to get to the end of this exercise.

rem https://www.atlassian.com/blog/git/tear-apart-repository-git-way
rem create a repo with just this folder
git filter-branch --subdirectory-filter Websites/PAD -- main

rem remove one path and its history from this repository
git filter-branch --index-filter "git rm -r --cached --ignore-unmatch AppLibs/Activity" --prune-empty

tfs

From time to time it's nice to clean up old TFS builds and branches. These things accumulate over time and slowly increase the cognitive load required to find your branch when patching or whatever.

Use automation to build a list of branches

First we need an inventory of what we've got to delete. I'm a fan of git-tfs so I use the command git tfs list-remote-branches https://tfs.mycompany.net/tfs/mycode | clip to get a list of feature branches, release branches, etc. I pipe it to clip to get it on the clipboard from my PowerShell session then paste it into VS Code. Then I do a small regex workout to remove characters; it's good to use that muscle from time to time.

TFS branches that could be cloned:

$/mycode/Main [*]
|
+- $/mycode/Releases/MyApp/MyApp.54
|  |
|  +- $/mycode/Releases/MyApp/MyApp.54.7

Cloning root branches (marked by [*]) is recommended!


PS:if your branch is not listed here, perhaps you should convert its containing folder into a branch in TFS:
-> Open 'Source Control Explorer' and for each folder corresponding to a branch, right click on the folder and select 'Branching and Merging' > 'Convert to branch'.

When you're done cutting you should have a list of branches each on their own line that you can paste into Excel to create a CSV file. You'll also want to create a column called ShouldKeep which is defaulted to TRUE.
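
If Excel feels like overkill, a rough PowerShell sketch like this one (my shortcut, not a requirement) can pull the branch paths straight out of the git-tfs output and write the CSV with ShouldKeep defaulted:

git tfs list-remote-branches https://tfs.mycompany.net/tfs/mycode |
    Select-String -Pattern '\$/\S+' -AllMatches |
    ForEach-Object { $_.Matches.Value } |
    ForEach-Object { [PSCustomObject]@{ Branch = $_; ShouldKeep = 'TRUE' } } |
    Export-Csv -Path .\TFSBranches.csv -NoTypeInformation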

Use automation to build a list of recently deployed branches

We have a database with build information in it. I queried the db for the list of builds to keep and merged that with my hand-crafted list of branches.

$tfsBranches = Get-Content -Path '.\TFSBranches.csv' |
    ConvertFrom-Csv

$SqlConnection = New-Object System.Data.SqlClient.SqlConnection

$SqlConnection.ConnectionString = "Server=dev.mycompany.net;Database=BuildDb;Integrated Security=True"
$SqlConnection.Open()
$SqlCmd = New-Object System.Data.SqlClient.SqlCommand
$SqlCmd.CommandText = @"
SELECT [BuildLabel]
FROM [BuildDB].[dbo].[vw_BuildsToNotDelete]
WHERE BuildLabel != ''
"@
$SqlCmd.Connection = $SqlConnection
$reader = $SqlCmd.ExecuteReader()
$table = New-Object System.Data.DataTable
$table.Load($reader)
$SqlConnection.Close()

$buildsToNotDelete = @($table.BuildLabel | Select-Object -Unique)

$tfsBranches = $tfsBranches |
    ForEach-Object {
        $branch = $_.Branch
        $branchLabel = ($branch -split '/') | Select-Object -Last 1
        $builds = $buildsToNotDelete | Where-Object { $_ -like "*$branchLabel*" }
        [PSCustomObject]@{
            Branch = $branch;
            ShouldKeep = [bool]$builds;
        }
    }

$tfsBranches |
    ConvertTo-Csv -NoTypeInformation |
    Out-File -FilePath '.\TFSBranches.csv' -Encoding utf8

Ask your co-workers to tell you if you're about to shoot yourself in the foot

Communication is an important skill at work. Never more so than when you're about to destroy something that took weeks to put together. Or months. Or years. If you're releasing to production more frequently than that then you probably don't use TFS and don't have these problems.

In order to be effective, your communication has to generate engagement. You need to have your co-workers actually look at the list and correct you if you're about to delete something of value. This is harder than one might expect even in the center of a COVID-19 pandemic where people have a lot of time on their hands. Follow up with your peeps until you're confident that things aren't going to go too sideways.

Commence with the foot-shooting

I've cooked up a script that works for me. Given a workspace with almost everything mapped in TFS but almost no files on disk it will get latest onto the disk, delete the files in that branch, then commit the delete operation to source control.

# place this file in a folder that has a workspace that lines up with the $localpath variables listed
# you should open a Visual Studio 2019 command prompt then navigate to this directory
# that should make tf.exe accessible
# execute this script with `powershell.exe .\DeleteBranch.ps1` from the commandline
# You'll also need a TFSBranches.csv in the same folder

Start-Transcript DeleteBranch.txt -Append

$tfsBranches = Get-Content .\TFSBranches.csv |
ConvertFrom-Csv

$tfsBranchesToDelete = $tfsBranches |
Where-Object {
    $_.ShouldKeep -eq "FALSE"
}

"[$(Get-Date)] Found $($tfsBranchesToDelete.Count) branches to delete"

$tfsBranchesToDelete |
ForEach-Object {
    $b = $_
    try {
        $localPath = $b.Branch -replace '\$/mycode/', ''
        "[$(Get-Date)] Deleting $localPath"
        &tf.exe vc get /recursive $localPath
        if (Test-Path $localPath) {
            &tf.exe vc delete /recursive $localPath
            &tf.exe vc checkin /comment:"Delete $localPath" /noprompt
        }
    }
    catch {
        "[$(Get-Date)] Error: $_"
    }
}

Stop-Transcript

Extra Credit: delete some old builds

I've used TfsTeamProjectManager to delete XAML build definitions with some success. Just check the box to delete the build via the UI.

configuration

Shipping quality software means decoupling the data that flows through that software from the actual deployed bits. I work in C# every day and our compiled binaries depend on a whole lot of moving parts. The fact that these parts can move makes them more valuable.

User data

A river of user data flows in through our web pages and out to a database. Not a shocker here. Everyone does it. Our tool of choice is a SQL database like MSSQL, or DB2 if it's on the mainframe back end. A tiny portion of the data is configuration data where the user can tweak some settings that affect their user experience. There might be other feature flags deeper in the system that change how that user config data is treated.

Web.config Files

Most of the apps I work on are old-school ASP.NET web projects. That means we do a lot of our configuration in a config file that ships with the software. The Web.config file has the obvious <appSettings> section where you can write key/value pairs. Each of these is weakly typed. If you want a Boolean or a date you'll need to interpret it in your code.
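
To make that weak typing concrete, here's a small PowerShell sketch (mine, with a made-up FeatureEnabled key) that reads a value out of a Web.config and interprets the string by hand:

[xml]$config = Get-Content .\Web.config
# every appSettings value comes back as a string; the caller decides what it "really" is
$raw = ($config.configuration.appSettings.add | Where-Object { $_.key -eq 'FeatureEnabled' }).value
$enabled = [bool]::Parse($raw)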

The strength of this is that it's simple to understand and operate. The biggest downside is that a change in your software means needing to ship a new config file and restart the process to read it. This kind of change can only be shipped by developers through the build pipeline.

Database Support Data

We have a reasonable amount of data that is read in from the database. Things like magic strings, lists of zip codes, etc. These categories of support data are really useful to share amongst all our apps and since they're almost always read-only getting them from a database makes sense.

The only people who write to the database are our operations team and they can do it at any time in any environment.

CMS Content

We have a Content Management System (CMS) that allows non-developers to change the words on our website. Pretty typical of a CMS. We also use it to serve up other content like JavaScript and CSS. This keeps the distribution and presentation of that content wholly outside of our usual .NET developer teams. We can have front-end developers with those skills collaborate with stakeholders on those changes. Because content isn't compiled we can have a more ad hoc distribution pipeline.

Feature Flags

Being able to enable/disable features and update configuration in the environment at runtime has been hugely beneficial. We have a simple key/value pair store hooked up to each of our environments. There's a nice approval mechanism for each change so we have to have two sets of eyes on each change. In lower environments our QA team owns the configs, in production it's owned by the BA team. And the config provider inside .NET caches the configuration so we don't make too many calls over the network.

Conway's Configuration

If you haven't heard of Conway's Law, it's the idea that the (software) systems of an organization will tend to mirror their communication structure. Application configuration is an interesting lens to use to examine an organization.

Coordination between development teams is often required to make the same configuration changes in the Web.config files. It's not a shared configuration, so if a communication point is missed there might be an app that didn't switch on the feature. Because these configs are part of a packaged deployment we'll have developers and QA confirm that they're working correctly. Making a system-wide change might take developers on four different teams talking to each other and to a QA person to make the change simultaneously.

Conversely, support data from the database can be changed by operations without all of that communicating. It's a faster path to changes but a bit riskier because we don't take as much time to double-check things. The same system-wide change from above might take one ops person and a QA person. But there's also no opting out of the change; with support data changes everyone is along for the ride so it'd better be a safe change.

Reverse Conway

To maximize productivity and optimize for happiness we can attempt the reverse Conway maneuver. We can build a software system that works well with specific communication flows and hopefully the teams using it will adapt. This might feel a bit dirty and manipulative but if you believe in Conway's law that software will tend to mirror team communication then you'd have to acknowledge that the tools you choose for your organization will have an impact on team dynamics. The big question is whether you approach tool choice with intentionality.

So what kind of outcomes do we want from a configuration tool? A shared-nothing approach a la Microservices builds for speed of release and assumes that each system configuration point is located in exactly one spot. In that model, I guess all configuration stuff would be accessible via an API? I like shared-nothing configuration to be an option because at least half of the time the config settings for my app are very specific to my domain and not interesting to other teams. I need a system that doesn't have too much noise.

The other half of the time a feature flag, URL, or other setting will be shared. Or it'll start off in one team who's getting started early and eventually get promoted to a system-wide configuration point. So I want the ability to see configuration points that other teams have so I can pillage their configs and avoid writing/maintaining that stuff myself. Of course, I'd always like to be able to fork my config off and go my own way so having an override mechanism would be excellent. And if I go that route I'd like my choices to be easily discoverable by other teams.

Git, tfs

It's been 5 years give or take since my last blog post. Pretty strong evidence that I'm not the type that needs to get things off my chest or down on paper. Still, I've volunteered to run a quick workshop at work and I need a format that'll help me think through what to present and how to present it.

I'm reminded of my last post back in August of 2015. Troy Hunt, and a number of other highly successful people, create content with the intent to re-purpose it. A tweet that becomes a blog post that becomes a workshop. Each subsequent refinement requires effort but it's never starting from scratch. And they've been road-tested in a less time intensive format before converting into something that'll take serious commitment.

So, let's say that we've got an organization with a number of developers interested in Git but haven't really worked extensively with it. They're all quick learners but most are working with Team Foundation (TF) for version control so many of the concepts will be foreign. What concepts does Git bring to the table?

The high points that come to mind are:

  • Local branching
  • Rewriting history like with rebase
  • Pull based merging
  • Graph database used to store commits
  • Garbage collection of orphaned commits
  • Remote repositories
  • Stash for handy changes that I want applied locally but don't want to share with others

Stories

Telling stories is important in a presentation. And my goal is to use stories to lead my audience to certain conclusions, to educate, and to inspire. So what can I share in ~45 minutes that really hooks them and gives them the confidence to learn more? What can I do to help them become better by using what I think is a better tool?

I need a story arc. The classic hero's journey. We need to start at a place that ain't bad. Things take a turn for the worse. They hit rock bottom then start getting better. The hero triumphs and ends up better than where they started. How do I do that with source control?

I took that journey. I was using TF for version control and I could do my job. It sometimes sucked to have 20 or more files with local edits. Merge conflicts were sometimes a train wreck if the branch had drifted far away from the trunk. I could not switch context from story to story; instead I'd have to take the time to create a shelveset (too much work) or just have multiple work streams all joined together and do my best not to cross the streams.

Somewhere around 2014 I was curious about Git and started googling around. I wanted a clean path to migrate a TF repository to a Git repo. I found Git-TFS and, better yet, it had a Chocolatey package (link). I migrated our repo through trial and error then set about finding how to use my new tool-chain productively. Heck, in the worst case I'd learn a lot about Git and how it doesn't work for me.

Phil Haack had some excellent blog posts about how to create aliases in Git. I took about half of his scripts and re-wrote them to work with TFS. A lot of Phil's other blog posts helped me think about Git flow, Github flow, and what I'd eventually think of as Git-TFS flow.

Git-TFS flow

Every day I pick up new work. There's a story on our storyboard that I want to do so I read the requirements then get to business. While I read I make sure I've got the latest source code from TF merged into my local Git trunk (e.g. git tfup).

[alias]
tfup = !git tfs pull --rebase --debug $@

I start by creating a new branch and switching to it (e.g. git cob ABC-1234). I name my branches after the story ID just to keep them straight.

cob = checkout -b

I write some code. Every time I get something working I try to commit it with a good commit message (e.g. git cm "Fixed another typo"). Commits are cheap in Git so adding more is almost always the right answer. If I decide that I'm down a bad path I can undo the commit and get back to my last known good (e.g. git undo; git wipe). The wipe command is especially nice because it creates a commit from all the current work and orphans it. Eventually Git will garbage collect and permanently erase it but for the next little bit I could find it on the reflog. Think of it as the recycle bin for commits.

cm = !git add -A && git commit -m
undo = reset HEAD~1 --mixed
wipe = !git add -A && git commit -qm 'WIPE SAVEPOINT' && git reset HEAD~1 --hard

Let's say I get stuck. I need to talk to the Business Analyst but they're in a meeting for the next 35 minutes. Do I just take a long coffee break? Nope. I just commit what I've got and switch branches to whatever else I've got up in the air. Or pick up a new story and try to get half way through it before switching back to higher priority work. With Git I'm able to switch context in a really clean way. And if I can't remember where I left off I can check history on the branch to jog my memory (e.g. git lga).

lga = log --graph --oneline --all --decorate --abbrev-commit

Now it's time to share some code. We like doing reviews of our work with the rest of the team. I want to be able to demonstrate work from several of my branches without having to force my colleagues to watch me type Git commands in and restart my IDE. Branching is cheap in Git so I'll create a branch just for my demo and merge in all of today's work (e.g. git cob demo-2020-02-06; git merge ABC-1234; git merge ABC-1237).

The demo went well and it's finally time to commit my work back to TF version control (e.g. git rct). This will get the latest commits, rebase so that my commits are sitting on top of the latest TFS code, then iterate through each of my commits turning them into TF commits and using tf.exe to ship them up. If someone on my team creates a new commit between when I start the process and when the TF commits are created then the tooling stops with an error message and I get to retry.

rct = !git tfup && git tfs rcheckin --debug

Want a summary of everything you did yesterday for your morning standup? I found this one on Twitter: git standup.

Here's the complete list of aliases from my .gitconfig:

[alias]
co = checkout
ec = config --global -e
up = !git pull --rebase --prune $@ && git submodule update --init --recursive
tfup = !git tfs pull --rebase --debug $@
ct = tfs ct
rct = !git tfup && git tfs rcheckin --debug
cob = checkout -b
cm = !git add -A && git commit -m
save = !git add -A && git commit -m 'SAVEPOINT'
wip = !git add -u && git commit -m "WIP" 
undo = reset HEAD~1 --mixed
amend = commit -a --amend
wipe = !git add -A && git commit -qm 'WIPE SAVEPOINT' && git reset HEAD~1 --hard
bclean = "!f() { git branch --merged ${1-master} | grep -v " ${1-master}$" | xargs -r git branch -d; }; f"
bdone = "!f() { git checkout ${1-master} && git up && git bclean ${1-master}; }; f"
migrate = "!f(){ CURRENT=$(git symbolic-ref --short HEAD); git checkout -b $1 && git branch --force $CURRENT ${3-'$CURRENT@{u}'} && git rebase --onto ${2-master} $CURRENT; }; f"
userstats = shortlog -sne
lga = log --graph --oneline --all --decorate --abbrev-commit
standup = "!f() { USERNAME=$(git config user.name); if [ $(date +%u) -eq 1 ]; then git --no-pager lga --since=\"last friday\" --author=\"$USERNAME\"; else git --no-pager lga --since=\"1 day ago\" --author=\"$USERNAME\"; fi; }; f"

tldr

TL;DR:

  • Task switch frequently. Do 2 or 3 things at once if it's productive. Walk and chew gum at the same time.
  • Diversify tasks and mediums. Treat them like the stocks in your retirement fund. Or the clothes in your closet, you want to be able to mix and match.
  • Invest in work environments. Chairs. Monitors.
  • Pay someone to do your chores.
  • Retrospect. Optimize. Repeat.
  • Plan for reusability. What you produce will get leveraged in different mediums by different stakeholders. Work once, use many.
  • Have a vision of how products will get re-leveraged. Do the small, easy piece first. Minimum Viable Product.
  • Work evenings, weekends, holidays.
  • Stay healthy.
  • Talk with your significant other. Develop a shared vision. Accept shared sacrifice. Enjoy shared rewards.
  • If you're good at code, differentiate by working on communication and influence.

Link to How I optimised my life to make my job redundant

automation, docker, linux, pluralsight

The first explanation that crystallized into a mental model was Carl Franklin's from the .NET Rocks Docker podcast. It helps that he's repeated it on subsequent episodes whenever the subject has come up.

Docker is like a VM with the weight of a process.

This is true as far as it goes. They're both isolation mechanisms that allow you to host more than one sandboxed environment on a machine.

This weekend I watched Nigel Poulton's Docker Deep Dive on Pluralsight. It was nice to see someone step through all the Linux commands for creating and maintaining Docker images and instances. But above and beyond that, it was helpful to flesh out my anemic mental model. Here's a few tidbits that came to light.

Build Process

I'd heard that Dockerfiles allow you to version your servers just like you version your source code. From Nigel's course, I learned that the contents of that Dockerfile are transformed into a final server image by executing the Docker build process. Given a base image as a starting point, each line in the Dockerfile modifies the filesystem then the build process takes a snapshot image of the changes and moves on to the next instruction in the Dockerfile.

Image Layers

I already knew that you can download a Docker image from the Docker Hub repository. It had the look and feel of downloading a package using a package manager but for an OS and the rest of the environment. A lot like my first experience of npm actually. What I hadn't grasped fully until I saw Nigel's course was that beneath the surface, each of those images is composed of many different layers during the Docker build process.

The similarities to git helped inform my mental model. In both tools, we're taking snapshots as we go. You can see a history of all snapshots and how they relate to each other using the built-in tools. If you do so, each looks like a directed graph where each node is a step from the past state into the future state.
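
If you want to poke at those layers yourself, the Docker CLI will show them (a couple of quick commands of my own, not from the course):

docker pull nginx:latest
docker history nginx:latest   # one line per layer, newest at the top
docker inspect nginx:latest   # full metadata for the image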

Union Mounts

Docker relies on a union mounted filesystem to merge together all these image layers. Something like aufs on Linux allows Docker to superimpose the files of one layer upon another so that when the OS opens a file, it gets the most recently modified version. If I've got 7 layers in my favorite Docker image, then it'll search the top-most layer for my file and if it doesn't find it, the OS will search the next layer down.

It's always safe, and relatively fast, to read such a filesystem but what about writing to it? All filesystem layers except the top-most one are set to read-only so when we want to write to a file the OS makes a copy of it at the top-most layer and edits that. We call this behavior copy-on-write. Ayende Rahien has a good description on his blog.

PID 1

Docker assumes that you'll only be using one process inside the container. That process is assigned a process ID of 1 (PID 1). In Unix-land it seems that PID 1 is usually an init process that bootstraps and orchestrates other processes, but in Docker-land it's a little less so. Still, Docker starts each container with something at PID 1 and will shut the container down when that process exits. Picking which process to use is a whole lot easier when there's only one to choose from; for anything more complicated you should either reconsider your direction or do your homework.

A Network Switch in the Kernel

This one made sense to me from my time with Hyper-V. To get network packets from the host to the client (and back) you need a virtualized switch. Docker has one built into the Linux kernel to support this level of transport. Only the kernel has the visibility necessary to mediate communication with the sandboxed containers.

Where Next?

Getting the lowdown on the Linux implementation of Docker provokes some interesting questions regarding the Microsoft implementation. The network virtualization elements seem really well baked into Hyper-V so I'm assuming that'll be a piece of cake to build into the next Windows OS. The union mount filesystem seems harder to pull off. I'm not sure if Microsoft can add that feature to NTFS or if they need to build something from scratch. Time will tell; they've announced that they'll support Docker containers so I'm sure they have a plan.

builds, automation, ruby, jekyll, travis-ci, linux

I can't help it. I know that I should be writing blog entries. I've got a ton of ideas stuck in my head. But every time I sit down to write, I notice that the build for this site is broken.

Build Status

Having a working build is almost stupidly important to me. It's like software isn't software until an automated build runs some unit tests and declares success. Even for a web page.

Jekyll pages on Github can be integrated pretty cleanly with Travis-CI. There's even helpful documentation to get you started. The HTML-Proofer gem crawls your generated static HTML site and points out things that you should fix like images without alt tags, bad CSS references, etc.

I'll admit that it took me weeks of trying off and on to even get to a build with failing tests. All hail build #14. It really just took time to learn how to install and setup Ruby and Jekyll in Linux. Piece of cake. No, really. Cake.

So now I've got 62 errors in the markup generated from my markdown (and Jekyll templates) to fix. Does that make me a happy camper? Well, kind of. It makes me less unhappy. That should count for something. And I suspect I'll feel another sense of accomplishment when I get a clean build. Red, Green, Refactor is supposed to work like that.

chromebook, linux

My birthday came and went a couple of days ago. To splurge, I bought myself a nice little 11" Acer Chromebook.

CB3-111-C670

Chromebooks are by definition (the Pixel proves this rule) cheap, low-spec ultrabooks. I got mine for about $150 with the following less than impressive stats:

  • Intel Celeron 2.16 GHz processor, 2 cores
  • 2 GB DDR3L SDRAM
  • 16 GB SSD storage; no optical drive
  • 11.6 inch, 1366 x 768 pixel, LED-lit screen
  • Chrome Operating System; Moonstone White
  • 2.4 pounds
  • 8 hours battery life

It's cramped in a lot of ways but it just works so well for what I really want to do. And what is that, you ask? Squeezing every minute out of my day. The Chromebook boots in a few seconds and resumes from sleep in a handful of milliseconds. That kind of responsiveness means I can pop it open to do some research, draft some prose, or hack some code in the five minutes between catching my morning bus and arriving at the transfer station. I tried that exercise with my work-issued Windows 7 machine and it couldn't finish logging me in before the time was up.

You'd think that being tied to the internet during the commute (it is Chrome) would limit the fun but I've found two ways to keep things flowing. First, I've downloaded crouton which allows me to host a Linux environment alongside ChromeOS. That gives me access to Node.js, Ruby, Python and a host of other fun toys. Second, I piggy-back on my phone's 4G data connection by turning it into a wifi hotspot. I'm glad I didn't spring for LTE built into my laptop because sharing works so well.

So, we'll see what comes of having an underpowered Linux box with me every day. My hope is that I'll be able to write more. Blogging and ssh were equally painful when it was just my thumbs on a 5-inch screen.

powershell, testing

I've started a small hobby project inspired by my favorite code koans. I first ran into the Ruby koans years back and have found them a fun and accessible way to describe a language. I've enjoyed my walk through the Javascript koans (mrdavidlaing and liammclennan versions) and started in on the Python koans by Greg Malcolm. So when I wanted to know more about Powershell I searched Github for a set of koans to walk me through the features and syntax of the language. To my disappointment I didn't find anything. So I decided to write my own.

The challenge of course is finishing them. I've found inspiration in the structure and content of Greg Malcolm's work. He wrote the original Python koans and is also a primary maintainer of the mrdavidlaing version of the JS koans. Stepping through the code from the producer side instead of the consumer side has given me an appreciation for how to structure lessons that teach themselves.

The framework that is not a framework

A koans project is easiest to stand up when leveraging a nice unit testing framework. In PowerShell's case that's clearly Pester. It provides BDD-style Should syntax and chains together very fluently.

Describe "AboutAsserts" {
	It "should expect true" {
		# We shall contemplate truth by testing reality, via asserts.
		$true | Should Be $false # This should be true
	}
}

In this example, we're piping $true to the Should function which then asserts equality using the Be function. The thing is, you don't need to know how the functions are implemented or much of anything about Powershell to get started. Even for a novice, the syntax has sufficient context to lead you to the correct solution.

powershell koans

Deeper down the rabbit hole

All code koans have a few things in common

  • An entry point to run the tests
  • Report all test successes
  • Stop running tests at the first failure
  • Report which test failed with a helpful stack trace
  • Add a nice zen saying somewhere

The Python koans split each of these out into a complex set of modules. I started down that path but reversed course and simplified down to a single file. Powershell makes it easy to use the objects exported by Pester and pretty up the output using the native Write-Host function. This is my first stable version:

$ScriptDir = Split-Path -parent $MyInvocation.MyCommand.Path
Import-Module $ScriptDir\..\lib\Pester


#helpful defaults
$__FILL_ME_IN__ = "FILL ME IN"


#run koans, results ordered by file name then by order within file
$allKoans = Invoke-Pester -PassThru -Quiet


#output results
$about = ""
$karma = $true
$i = 0
While ($karma) {
	$koan = $allKoans.TestResult[$i]
	
	if ($about -ne $koan.Describe) {
		$about = $koan.Describe
		Write-Host "Thinking $about" -ForegroundColor Magenta
	}
	
	$name = $koan.Name
	
	if ($koan.Passed) {
		Write-Host "    $name has expanded your awareness." -ForegroundColor Green
	} else {
		$failed = $koan.FailureMessage
		$stackTrace = $koan.StackTrace
		
		Write-Host "    $name has damaged your karma." -ForegroundColor Red
		Write-Host ""
		Write-Host "You have not yet reached enlightenment ..."
		Foreach ($str in $failed -split "\n") {
			Write-Host "    $str" -ForegroundColor Red 
		}  
		Write-Host ""
		Write-Host "Please meditate on the following code:"
		Foreach ($str in $stackTrace -split "\n") {
			Write-Host "    $str" -ForegroundColor Yellow
		}
		Write-Host ""
		Write-Host ""
	}
	
	$i += 1
	$karma = $koan.Passed -and $i -lt $allKoans.TestResult.Length
}
Write-Host "Flat is better than nested." -ForegroundColor Cyan

Pretty straightforward, eh? Next steps would be to add more cryptic zen sayings to the last Cyan bit. That and build out more koans. And maybe create an automated build on appveyor.com

meta

It Was Fun While It Lasted

One reason for setting up my blog was to try out Azure. I've got a MSDN subscription through work so I have enough free monthly credits to cover the cost. On the whole, the experience was fun and educational. It makes the barrage of Azure feature updates a little easier to digest when you've got some skin in the game.

A point that's been driven home repeatedly of late is that I don't really need cloud-scale infrastructure to handle my blog. In fact, even maintaining a PaaS solution with my meagre time budget was a challenge. Even with a month's notice I just barely managed to squeak in the re-up of my SSL cert before it expired. Updating the blogging code for Ghost took a similarly long time. None of these tasks were very challenging but I found myself questioning why I should be investing my rare idle time in useless infrastructure work.

I needed a lower maintenance solution.

Return of the Octocat

I once had a Github blog based on the Jekyll engine. It suffered from lack of content but I got a lot of enjoyment from learning a bit about Ruby and how Liquid templates work. Now that I'm back on the Github platform things are a bit lower key. I've borrowed the site design from the excellent Phil Haack with a few tweaks of my own.

With a template in place I can comfortably focus on writing. Not that I have a lot of time for that between work and family. Over the years, I've gotten sufficiently comfortable with markdown that I can draft this in Notepad++ and be reasonably confident that I got it all correct. Scratch that, I screwed up the link syntax. Heh. Ok, moving to StackEdit. It's a better editor than Ghost anyway.


Tough to Grok

Yeah, CSS is a language that many developers (myself included) struggle with. It isn't fun to play with and it often punishes you for so much as looking the wrong way at it. I've been trying to learn what I can about CSS, especially best practices but also the flashy new things that are coming with CSS3. To that end, I read CSS-Tricks religiously and the occasional Smashing Magazine article that piques my interest. Chris Coyier linked a short while back to an interesting article (the subject of this post) from Harry Roberts on graphing CSS specificity. But enough with the blabbing, let's look at some pictures.

Bring on the Pretty Pictures

Here's a graph of pemcostyle.css from the Pemco.com home page. The graph is generated from a GitHub project by Jonas Ohlsson.

css specificity

And here's what Harry thinks a good graph looks like (sloping up and to the right) from his blog post.

specificity graph

So, what are we supposed to get out of visualizing a CSS specificity graph? Harry Roberts suggests that a spiky graph indicates a less maintainable codebase. CSS style sheets are challenging to write in part because they always read top to bottom (left to right on the graph), so a CSS selector that is both at the top of the style sheet and high in specificity will limit our ability to style other elements later in the doc. Overriding such a selector may not be possible, or if it is, it may require undesirable hacks like !important that make further changes even more challenging. Misuse of CSS specificity creates a slippery slope.

So Now What

It's always nice to pair a problem with a potential solution. In this case, the suggestion from Harry is two-fold. Reorder your CSS selectors so that low-specificity ones come first and high-specificity ones come last. Then you should reduce the specificity of your selectors where you can. For example, the selector in the above graphic is from the following snippet:

.detailAccordion .panel-group .panel-heading + .panel-collapse .panel-body {
    border:0px;
}

That's awfully specific just to remove a border. Instead it might be possible to attach a new class to the markup and use a selector like this:

.no-border {
    border:0px;
}

Or perhaps use BEM syntax like so:

.panel-heading__panel-body--collapsed {
    border:0px;
}

The idea is the same: a single (perhaps wordy) class is preferable to any combination of classes because it has a lower specificity. And lower specificity leads to increased re-use and decreased frustration extending classes.

For an example of absurdly low specificity in CSS and an elegant solution to a problem check out the lobotomized owl selector.

node.js, javascript, testing, education

Because it's a new technology that solves the same old problems in new ways. JavaScript on the server has to work pretty hard to be maintainable, so you'll have to learn about npm for package management, require for includes/dependencies, and testing frameworks like Jasmine to make sure your code is kosher. These are the same powerful patterns that have influenced modern software development on other platforms.

Learning node.js also allows you to polish your JavaScript. When asked what languages a new or returning developer should learn to get up to speed on development today JavaScript has to be at the top of the heap. It is the language of the web (for better or for worse).

Like many of us, my JS has been decidedly client side for the bulk of my career. I grew up despising its spaghetti complexity, spurning it for stabler languages like C# with things like type safety and compile warnings. I first found that JavaScript could be beautiful when I discovered jQuery. The use of a fluent API to chain multiple behaviors together was elegant compared to the Vanilla.js that I could muster.

I read Crockford's JavaScript: The Good Parts around the same time I worked through the JavaScript koans of Liam McLennan and David Laing. All of that impressed me with the beauty and power of the language. I never had a chance to learn the language in a formal setting but it reminds me of how I wrote Scheme (a variant of Lisp) in college.

Which gets me back to my original thought on node.js. Maybe it's just wishful thinking but I like to imagine that the kids these days won't have it as bad as me when it comes to JavaScript. I see node as an opportunity to introduce JavaScript to those that don't know it well. And do it in the right way with the very best patterns, practices and community we have. For those that know their way around a === it's a way to provide feedback and guidance to a rapidly evolving domain.

So, because I'm closer to the former than the latter, I've started picking up node.js at Node School. The creators of Node School have structured educational modules that work the student through a series of increasingly complex programming challenges. Along the way you learn about the core concepts of node.js and pick up skills that can aid your JavaScript elsewhere as well. The whole thing is fully automated with a unit test suite hidden behind the colorful console based UI. When you drift off course, the system can point out which test cases you aren't passing and maybe drop a hint about what to do to make them work correctly.

meta

So I'm having a tough time managing my time. There are a boat load of legitimate needs that take priority over writing about technical subjects. It's tough to argue with a two month old that you just need to scribble down a few more thoughts before you can get her more milk.

So now I'm hunting for blogging methodologies that minimize time commitment. It limits my ability to create robust long-form content but I'm probably not up to much of that anyway at this stage in my blogging career. Let's examine a few of my favorite examples.

First up is Chris Coyier. He has a nice mix of content lengths at http://css-tricks.com/. Occasionally, he drops an ultra-short post about something interesting that he's read.

It's a fun little soundbite to talk about how the web is responsive right out of the box. With no authored CSS at all, a website will flow to whatever screen width is available. If your site isn't responsive, you broke it.

Well that's almost true, but as Adam Morse says in this new project:

HTML is almost 100% responsive out of the box. These 115 bytes of css fix the 'almost' part.

Things like images and tables can have set widths that would force a layout wider than a viewport. And of course, the meta tag.

And look at that TLD!

Direct Link

This is super-quick content curation at its best. The whole post is about 100 words and probably took less than 15 minutes to put together. It also has a lot of value because Chris is raising awareness of a useful trick as well as an interesting author.

Second on my list of interesting authors is Ayende Rahien. The size and polish of his posts varies; sometimes he's got really clean long form content but oftentimes he's willing to just throw out something in a more stream of consciousness style. His series on Go-Raft is a good example of that.

Ayende makes it clear that he's exploring a new project, he outlines his methodology (reading through everything from A to Z, top to bottom), then dives in with his analysis.

http_transporter.go is next, and is a blow to my hope that this will do a one way messaging system. I’m thinking about doing Raft over ZeroMQ or NanoMSG. Here is the actual process of sending data over the wire:

// Sends an AppendEntries RPC to a peer.
func (t *HTTPTransporter) SendAppendEntriesRequest(server Server, peer *Peer, req *AppendEntriesRequest) *AppendEntriesResponse {
    var b bytes.Buffer
    if _, err := req.Encode(&b); err != nil {
        traceln("transporter.ae.encoding.error:", err)
        return nil
    }

    url := joinPath(peer.ConnectionString, t.AppendEntriesPath())
    traceln(server.Name(), "POST", url)

    t.Transport.ResponseHeaderTimeout = server.ElectionTimeout()
    httpResp, err := t.httpClient.Post(url, "application/protobuf", &b)
    if httpResp == nil || err != nil {
        traceln("transporter.ae.response.error:", err)
        return nil
    }
    defer httpResp.Body.Close()

    resp := &AppendEntriesResponse{}
    if _, err = resp.Decode(httpResp.Body); err != nil && err != io.EOF {
        traceln("transporter.ae.decoding.error:", err)
        return nil
    }

    return resp
}

This is very familiar territory for me, I have to say :). Although, again, there is a lot of wasted memory here by encoding the data multiple times, instead of streaming it directly.

He writes as he thinks and with minimal editing. I can see him, in my mind's eye, typing up the blog post as he reads through the codebase. This keeps the effort to O(1) and allows him to produce ~1200 words of content. Not a fantastic fit for my lifestyle at home since I don't have the dedicated time to draft that kind of analysis as I go, but it might do for blogging work-related things. I'm assuming that this was done in a single evening sometime after dinner given his closing:

And I think that this is enough for now… it is close to 9 PM, and I need to do other things as well. I’ll get back to this in my next post.

And with that, I'm going to follow in Ayende's footsteps and get back to other things that need doing.

meta

... that is the question.

Some of us are writers and some of us just aren't. I've always put myself in the latter category. This blog has been an effort to explore the former.

So far it hasn't been working out.

I've had plenty of ideas but finding time to put them to paper has been the challenge. I have a little time to write on the bus while commuting to work but I've found that the small screen isn't conducive to a lot of prose.

I've tried to write a little at work while on my lunch break but as those of you who know me are aware, my lunch lasts 15 minutes at most. I tend to work while I'm at work; anything else feels uncomfortably unethical.

I've tried to write in the evenings but I find that it distracts from my relationship with my wife. We're very close and enjoy spending what little time we can together.

I've tried to write on the weekends but that kind of me time takes a back seat to the necessary chores that I have to do on the weekend. I'm writing this as my wife takes a shower and I watch my two month old daughter sleep. It ain't gonna last.

So, moral of the story seems to be that when I write I have to finish fast. I need a full keyboard and all my thoughts in order. Then I might be able to crank out an article in 15-30 minutes. Otherwise I'm stuck with a folder full of drafts that won't have the time to develop.

I'm interested in how others structure their time when doing a technical blog. Drop me a line if you have brilliant ideas.

security

Diceware passwords (the correct horse battery staple mentioned previously) should be much longer than 4-5 words. That's because hackers use hordes of compromised computers to throw billions of guesses per second at the problem. So the number of possibilities of your funkalicious password-choosing algorithm has to be substantially more than what an attacker can guess. That's the ratio you're managing.

Taking the 350 billion guesses per second in the above article, one of my 5-word passwords would be cracked in about 22 hours. Adding a sixth word extends the life to about 5 years. Expanding the dictionary to the 7,776-word Diceware list and adding a seventh word, the password would last until about 2030.

Security professionals measure the strength of a password differently than computing the ratio I've outlined above. That helps because the password-cracking power of attackers is constantly increasing. Professionals measure password strength in bits of entropy. You can compute the entropy bits by multiplying the number of words by log2(N), where N is the number of words in the dictionary. My dictionary works out to about 10.9 bits per word and the Diceware list to 12.9, whereas normal written English is estimated at 1.3 bits per letter.
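
The arithmetic is easy to sanity-check in PowerShell; this little sketch (mine) reproduces the five-word numbers from above:

$wordsInDictionary = 1949
$bitsPerWord = [Math]::Log($wordsInDictionary, 2)      # ~10.9 bits per word
$passphraseBits = 5 * $bitsPerWord                     # ~54.6 bits for five words
$guessesPerSecond = 350e9
$hoursToExhaust = [Math]::Pow(2, $passphraseBits) / $guessesPerSecond / 3600
'{0:n1} bits of entropy, ~{1:n0} hours to try every passphrase' -f $passphraseBits, $hoursToExhaust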

security

I like password managers. I've been using KeePass for a couple years now and it has never done me wrong. Storing passwords in a password manager allows you to pick passwords like sir35zf58u without having to remember all that gobbledygook. The catch is that you have to remember the password to your password manager which is where the Correct Horse Battery Staple comes in.

Password Strength - xkcd.com

The method goes something like this:

  1. Find a dictionary.
  2. Pick any four words at random
  3. ...
  4. Profit

I've used Preshing's xkcd passphrase generator which does exactly the above. The dictionary is 1949 common words which is important for a number of reasons. First, so you don't have to remember something like ternion or spell something like sesquipedalian every time you log in. And second because the size of the dictionary you use and the words in it tell you how long a hacker would have to work to get your password. If we got Correct Horse Battery Staple from passphra.se then it'd be one out of 1949^4=14,429,369,557,201 or about 14 trillion.

For a really important password like the master password on a password safe like KeePass you'd want to add in some additional complexity to make it harder to guess. Picking a fifth word (e.g. Correct Horse Battery Staple Supply) would put you at 1949^5=28,122,841,266,984,749 or about one in twenty-eight quadrillion. You could also try misspelling one word (e.g. Batery), substituting letters and numbers (e.g. B@tt3ry), or adding in some secret sauce of your own with random letters and numbers at the front, back, or middle of your password (e.g. Correct Horse Battery Staple Supply 4ti8R).

Once you've gotten a complex but memorable password on your password safe then you can start generating complex passwords that you could never remember. The thing that makes a password like sir35zf58u strong is its length and its character set. This one is 10 characters long and uses lower case letters and numbers for a total of 36 characters to choose from. For those that remember their combinatorics that's 36^10=3,656,158,440,062,976 or three and a half quadrillion possibilities. I wanted to show you something with upper case letters as well but the number was so much longer that I couldn't make sense of it.
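
For the curious, the big numbers above are easy to recompute; here's a quick sketch of mine using arbitrary-precision integers:

[bigint]::Pow(1949, 4)   # four random words           -> 14,429,369,557,201
[bigint]::Pow(1949, 5)   # five random words           -> 28,122,841,266,984,749
[bigint]::Pow(36, 10)    # ten lowercase/digit chars   -> 3,656,158,440,062,976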

That makes it tougher for hackers to get your password after they've gotten into Adobe/Gawker/Forbes/Snapchat/Sony/Yahoo's database. Even big, professional companies get compromised so it's important to make your passwords complex, secure, and to never re-use them. I've got over one hundred websites stored in my password safe and I probably add another handful each month.