Technology Blog

Storing growing files using Azure Blob Storage and Append Blobs

04/16/2022 15:55:00

There are several categories of problems that require data to be append only, sequentially stored, and able to expand to arbitrary sizes. You’d need this type of “append only” file for building out a logging platform or building the data storage backend for an Event Sourcing system.

In distributed systems with multiple writers, and especially when your files are stored with a cloud provider, the “traditional” approach to managing these kinds of data structures often doesn’t work well. You could acquire a lock, download the file, append to it, and re-upload it – but this takes an increasing amount of time as your files grow – or you could use a database system that implements distributed locking and queuing, which is often more expensive than just manipulating raw files.

Azure blob storage offers Append Blobs, which go some way to solving this problem, but we’ll need to write some code around the storage to help us read the data once it’s written.

What is an Append Blob?

An Append Blob is one of the blob types you can create in Azure Blob Storage – Azure’s general purpose file storage. Append Blobs, as the name indicates, can only be appended to – you append blocks to append blobs.

From Azure’s Blob Storage documentation:

An append blob is composed of blocks and is optimized for append operations. When you modify an append blob, blocks are added to the end of the blob only, via the Append Block operation. Updating or deleting of existing blocks is not supported. Unlike a block blob, an append blob does not expose its block IDs.

Each block in an append blob can be a different size, up to a maximum of 4 MiB, and an append blob can include up to 50,000 blocks. The maximum size of an append blob is therefore slightly more than 195 GiB (4 MiB X 50,000 blocks).
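As a quick sanity check on that arithmetic (a throwaway sketch, not part of any storage code):

```typescript
// Sanity-check the documented maximum append blob size.
const MiB = 1024 * 1024;
const maxBlockBytes = 4 * MiB; // 4 MiB per block
const maxBlocks = 50_000;      // hard cap on block count
const maxBlobGiB = (maxBlockBytes * maxBlocks) / (1024 * MiB);
// maxBlobGiB is 195.3125 – "slightly more than 195 GiB"
```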

There are clearly some constraints there that we must be mindful of, using this technique:

  • Our total file size must be less than 195GiB
  • Each block can be no bigger than 4MiB
  • There’s a hard cap of 50,000 blocks, so if our block size is less than 4MiB, the maximum size of our file shrinks proportionally.

Still, even with small blocks, 50,000 blocks should give us a lot of space for entire categories of application storage. The Blob Storage SDKs allow us to read our stored files as one contiguous file or read ranges of bytes from any given offset in that file.

Interestingly, we can’t read the file block by block – only by byte offset – and this poses an interesting problem: if we store data in any format that isn’t just plain text (e.g., JSON, XML, literally any structured format) and we want to seek through our file, there is no way to guarantee we read valid data from an arbitrary offset, even though every block was valid when it was first written.
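To make the problem concrete, here’s a small illustration (the records and offsets are made up): slice two serialized JSON records at an arbitrary byte offset and the result no longer parses.

```typescript
// Two JSON records stored back to back, as they would be in the blob.
const stored = JSON.stringify({ id: 1, text: "hello" })
             + JSON.stringify({ id: 2, text: "world" });

// A byte-range read that lands mid-record...
const arbitrarySlice = stored.slice(10, 40);

// ...produces text that is no longer valid JSON.
let parses = true;
try {
  JSON.parse(arbitrarySlice);
} catch {
  parses = false;
}
```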

Possible Solutions to the Read Problem

It’s no good having data if you can’t meaningfully read it – especially when we’re using a storage mechanism specifically optimised for storing large files. There are a few things we could try to make reading from our Append Only Blocks easier.

  • We could maintain an index of byte-offset to block numbers
  • We could pad our data to make sure block sizes were always consistent
  • We could devise a read strategy that understands it can read partial or seemingly malformed data

The first solution – maintaining a distinct index – may seem appealing at first, but it takes a non-trivial amount of effort to maintain that index and keep it in sync and up to date with our blob files. This introduces a category of errors where those files drift apart, and we may end up in a situation where data appears to be lost, even though it’s still in the original data file, because our index has lost track of it.

The second solution is the “easiest” – as it gives us a fixed block size that we can use to page back through our blocks – but storing our data becomes needlessly more expensive.

Which really leaves us with our final option – making sure the code that reads arbitrary data from our file understands how to interpret malformed data and interpret where the original write-blocks were.

Scenario: A Chat Application

One of the more obvious examples of infinite-append-only logs are chat applications – where messages arrive in a forwards-only sequence, contain metadata, and to add a little bit of spice, must be read tail-first to be useful to their consumers.

We’ll use this example to work through a solution, but a chat log could be an event log, or a list of business events and metadata, or really, anything at all that happens in a linear fashion over time.

We’ll design our fictional chat application like this:

  • A Blob will be created for every chat channel.
  • We’ll accept that a maximum of 50,000 messages can be present in a channel. In a real-world application, we’d create a subsequent blob once we hit this limit.
  • We’ll accept that a single message can’t be more than 4MiB in size (because that’d be silly).
  • In fact, we’re going to limit every single chat message to be a maximum of 512KiB – this means that we know that we’ll never exceed the maximum block size, and each block will only contain a single chat message.
  • Each chat message will be written as its own distinct block, including its metadata.
  • Our messages will be stored as JSON, so we can also embed the sender, timestamps, and other metadata in the individual messages.
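A guard like the following could enforce that 512KiB cap before anything is appended – this helper is our own sketch, not part of the Azure SDK:

```typescript
const MAX_MESSAGE_BYTES = 512 * 1024; // 512 KiB per-message cap

// Hypothetical guard we'd run before appending a message.
function assertMessageFits(message: unknown): string {
  const serialized = JSON.stringify(message);
  // Byte length matters here, not character count: multi-byte
  // UTF-8 characters make those two numbers differ.
  const bytes = Buffer.byteLength(serialized, "utf8");
  if (bytes > MAX_MESSAGE_BYTES) {
    throw new Error(`Message is ${bytes} bytes; the limit is ${MAX_MESSAGE_BYTES}`);
  }
  return serialized;
}
```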

Our message could look something like this:

  {
    "senderId": "foo",
    "messageId": "some-guid",
    "timestamp": "2020-01-01T00:00:00.000Z",
    "messageType": "chat-message",
    "data": {
      "text": "hello"
    }
  }
This is a mundane and predictable data structure for our message data. Each of our blocks will contain data that looks roughly like this.

There is a side effect of us using structured data for our messages like this – which is that if we read the entire file from the start, it would not be valid JSON at all. It would be a text file with some JSON items inside of it, but it wouldn’t be “a valid JSON array of messages” – there’s no surrounding square bracket array declaration [ ], and there are no separators between entries in the file.

Because we’re not ever going to load our whole file into memory at once, and because our file isn’t actually valid JSON, we’re going to need to do something to indicate our individual message boundaries so we can parse the file later. It’d be really nice if we could just use an open curly bracket { and that’d just be fine, but there’s no guarantee that we won’t embed complicated object structures in our messages at a later point that might break our parsing.

Making our saved messages work like a chat history

Chat applications are interesting as an example of this pattern, because while the data is always append-only, and written linearly, it’s always read in reverse, from the tail of the file first.

We’ll start with the easy problem – adding data to our file. The code and examples here will exist outside of an application as a whole – but we’ll be using TypeScript and the @azure/storage-blob Blob Storage client throughout. You can presume this code is running in a modern Node environment; the samples here were executed in Azure Functions.

Writing to our file, thankfully, is easy.

We’re going to generate a Blob filename from our channel name, suffixing it with “.json” (which is a lie – it’s “mostly JSON” – but it’ll do), and we’re going to prefix each block we append with a separator character.

Once we have our filename, we’ll prefix a serialized version of our message object with our separator character, create an Append Blob Client, and call appendBlock with our serialized data.

import { BlobServiceClient } from "@azure/storage-blob";

export default async function (channelName: string, message: any) {
    const fileName = channelName + ".json";
    const separator = String.fromCharCode(30);
    const data = separator + JSON.stringify(message);

    const blobServiceClient = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING);

    const containerName = process.env.ARCHIVE_CONTAINER || "archive";
    const containerClient = blobServiceClient.getContainerClient(containerName);
    await containerClient.createIfNotExists();

    const blockBlobClient = containerClient.getAppendBlobClient(fileName);
    await blockBlobClient.createIfNotExists();

    // Use the byte length, not the string length – they differ for multi-byte characters.
    await blockBlobClient.appendBlock(data, Buffer.byteLength(data));
}

This is exceptionally simple code, and it looks almost like any “Hello World!” Azure Blob Storage example you could think of. The interesting thing we’re doing here is using a separator character to indicate the start of each block.

What is our Separator Character?

It’s a nothing! So, the wonderful thing about ASCII, as a standard, is that it has a bunch of control characters that exist from a different era to do things like send control codes to printers in the 1980s, and they’ve been enshrined in the standard ever since.

What this means is there’s a whole raft of characters that exist as control codes – characters you never see, never use, and that are almost unfathomably unlikely to occur in your general data structures.

ASCII 30 is my one.

According to ASCII – the 30th character code, which you can see in the above sample being loaded using String.fromCharCode(30), is “RECORD SEPARATOR” (C0 and C1 control codes - Wikipedia).

“Can be used as delimiters to mark fields of data structures. If used for hierarchical levels, US is the lowest level (dividing plain-text data items), while Record Separator, Group Separator, and File Separator are of increasing level to divide groups made up of items of the level beneath it.”

That’ll do. Let’s use it.

By prefixing each of our stored blocks with this invisible separator character, we know that when it comes time to read our file, we can identify where we’ve appended blocks, and re-convert our “kind of JSON file” into a real array of JSON objects.
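A quick round-trip shows the idea – prefix each record on the way in, split on the separator on the way out (everything in memory here; in practice the concatenated text lives in the blob):

```typescript
const RS = String.fromCharCode(30); // ASCII "RECORD SEPARATOR"

// Writing: each message is prefixed with RS before being appended.
const messages = [{ text: "hello" }, { text: "world" }];
const fileContents = messages.map(m => RS + JSON.stringify(m)).join("");

// Reading: splitting on RS recovers the original array of objects.
const recovered = fileContents
  .split(RS)
  .filter(chunk => chunk.length > 0)
  .map(chunk => JSON.parse(chunk));
```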

Whilst these odd control codes from the 1980s aren’t exactly seen every day, this is a legitimate use for them, and we’re not doing anything unnatural or strange here with our data.

Reading the Chat History

We’re not going to go into detail of the web applications and APIs that’d be required above all of this to present a chat history to the user – but I want to explore how we can read our Append Only Blocks into memory in a way that our application can make sense of it.

The Azure Blob Client allows us to return the metadata for our stored file:

    public async sizeof(fileName: string): Promise<number> {
        const blockBlobClient = await this.blobClientFor(fileName);
        const metadata = await blockBlobClient.getProperties();
        return metadata.contentLength;
    }

    private async blobClientFor(fileName: string): Promise<AppendBlobClient> {
        const blockBlobClient = this._containerClient.getAppendBlobClient(fileName);
        await blockBlobClient.createIfNotExists();
        return blockBlobClient;
    }

    private async ensureStorageExists() {
        // TODO: Only run this once.
        await this._containerClient.createIfNotExists();
    }

It’s been exposed as a sizeof function in the sample code above – by calling getProperties() on a blobClient, you can get the total content length of a file.

Reading our whole file is easy enough – but we’re almost never going to want to do that, short of downloading the file for backups. We can read the whole file like this:

    public async get(fileName: string, offset: number, count: number): Promise<Buffer> {
        const blockBlobClient = await this.blobClientFor(fileName);
        return await blockBlobClient.downloadToBuffer(offset, count);
    }

If we pass 0 as our offset, and the content length as our count, we’ll download our entire file into memory. This is a terrible idea because that file might be 195GiB in size and nobody wants those cloud vendor bills.

Instead of loading the whole file, we’re going to use this same function to page backwards through our file, to find the last batch of messages to display to our users in the chat app.


  • We know our messages are a maximum of 512KiB in size
  • We know our blocks can store up to 4MiB of data
  • We know the records in our file are split up by Record Separator characters

What we’re going to do is read chunks of our file, from the very last byte backwards, in batches of 512KiB, to get our chat history.

Worst case scenario? We might just get one message before having to make another call to read more data – but it’s far more likely that by reading 512KiB chunks, we’ll get a whole collection of messages, because in text terms 512KiB is quite a lot of data.

This read amount really could be anything you like, but it makes sense to make it the size of a single data record to prevent errors and prevent your app servers from loading a lot of data into memory that they might not need.

    /**
     * Reads the archive in reverse, returning an array of messages and a
     * seek `position` to continue reading from.
     */
    public async getTail(channelName: string, offset: number = 0, maxReadChunk: number = _512kb) {
        const blobName = this.blobNameFor(channelName);
        const blobSize = await this._repository.sizeof(blobName);

        let position = blobSize - offset - maxReadChunk;
        let reduceReadBy = 0;
        if (position < 0) {
            reduceReadBy = position;
            position = 0;
        }

        const amountToRead = maxReadChunk + reduceReadBy;
        const buffer = await this._repository.get(blobName, position, amountToRead);


In this getTail function, we’re calculating the name of our blob file, and then calculating a couple of values before we fetch a range of bytes from Azure.

The code calculates the start position by taking the total blob size, subtracting the offset provided to the function, and then again subtracting the maximum length of the chunk of the file to read.

After the read position has been calculated, the data is loaded into a Buffer in memory.


        const firstRecordSeparator = buffer.indexOf(String.fromCharCode(30)) + 1;
        const wholeRecords = buffer.slice(firstRecordSeparator);
        const nextReadPosition = position + firstRecordSeparator;

        const messages = this.bufferToMessageArray(wholeRecords);
        return { messages: messages, position: nextReadPosition, done: position <= 0 };
    }

Once we have 512KiB of data in memory, we scan forwards to work out where the first record separator in that chunk is, discarding any data before it in our Buffer – because we are strictly reading backwards through the file, we know that everything from that point onwards is a complete record.

As the data before that point has been discarded, the updated “nextReadPosition” is returned as part of the response to the consuming client, which can use that value on subsequent requests to get the history block before the one returned. This is similar to how a cursor works in an RDBMS.

The bufferToMessageArray function splits our data chunk on our record separator, and parses each individual piece of text as if it were JSON:

    private bufferToMessageArray(buffer: Buffer) {
        const messages = buffer.toString("utf8");
        return messages.split(String.fromCharCode(30))
            .filter(data => data.length > 0)
            .map(m => JSON.parse(m));
    }

Using this approach, it’s possible to “page backwards” through our message history without having to deal with locking, file downloads, or concurrency in our application – and it’s a really great fit for storing archive data, messages, and events, where the entire stream is infrequently read due to its raw size, but users often want to “seek upwards”.
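To see the whole read path in one place, here’s an in-memory simulation of that backwards paging. The `readTail` function below mirrors the `getTail` logic above, but it reads from a local Buffer instead of Azure, and the page size is artificially small so the paging is visible:

```typescript
const RS = String.fromCharCode(30);

// In-memory stand-in for the blob: ten RS-prefixed JSON records.
const records = Array.from({ length: 10 }, (_, i) => ({ n: i }));
const blob = Buffer.from(records.map(r => RS + JSON.stringify(r)).join(""));

// One backwards "page": read up to maxRead bytes ending `offset` bytes
// before the end of the file, keep only whole records, and report where
// the next page should end.
function readTail(blob: Buffer, offset: number, maxRead: number) {
  let position = blob.length - offset - maxRead;
  let reduceReadBy = 0;
  if (position < 0) {
    reduceReadBy = position;
    position = 0;
  }
  const chunk = blob.slice(position, position + maxRead + reduceReadBy);
  const firstRS = chunk.indexOf(RS) + 1; // discard any partial leading record
  const whole = chunk.slice(firstRS).toString("utf8");
  const messages = whole.split(RS).filter(s => s.length > 0).map(s => JSON.parse(s));
  return { messages, position: position + firstRS, done: position <= 0 };
}

// Page backwards until done, newest page first.
const pages: number[][] = [];
let offset = 0;
let done = false;
while (!done) {
  const page = readTail(blob, offset, 40); // tiny page size for the demo
  pages.push(page.messages.map((m: any) => m.n));
  offset = blob.length - page.position;
  done = page.done;
}
```

Each iteration hands back the cursor (`position`) exactly as the real implementation does, and no record is ever returned twice or torn in half.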


This is a fun problem to solve and shows how you could go about building your own archive services using commodity cloud infrastructure in Azure for storing files that could otherwise be “eye wateringly huge” without relying on third party services to do this kind of thing for you.

It’s a great fit for chat apps, event stores, or otherwise massive stores of business events because blob storage is very, very, cheap. In production systems, you’d likely want to implement log rotation for when the blobs inevitably reach their 50,000 block limits, but that should be a simple problem to solve.

It’d be nice if Microsoft extended their Blob Storage SDKs to iterate block by block through stored data, as presumably that metadata exists under the hood in the platform.

Writing User Stories

01/10/2022 11:00:00

Software development transforms human requirements, repeatedly, until software is eventually produced.

We transform things we'd like into feature requests. Which are subsequently decomposed into designs. Which are eventually transformed into working software.

At each step in this process the information becomes denser, more concrete, more specific.

In agile software development, user stories are a brief statement of what a user wants a piece of software to do. User stories are meant to represent a small, atomic, valuable change to a software system.

Sounds simple right?

But they're more than that – user stories are artifacts in agile planning games, they're triggers that start conversations, tools used to track progress, and often the place that a lot of "product thinking" ends up distilled. User stories end up as the single source of truth of pending changes to software.

Because they're so critically important to getting work done, it's important to understand them – so we're going to walk through what exactly user stories are, where they came from, and why we use them.

The time before user stories

Before agile reached critical mass, the source of change for software systems was often a large specification that was often the result of a lengthy requirements engineering process.

In traditional waterfall processes, the requirements gathering portion of software development generally happened at the start of the process and resulted in a set of designs for software that would be written at a later point.

Over time, weaknesses in this very linear "think -> plan -> do" approach to change became obvious. The specifications that were created resulted in systems that often took a long time to build, were frequently never finished, and were full of defects that were only discovered way, way too late.

The truth was that the systems as they were specified were often not actually what people wanted. By disconnecting the design and development of complicated pieces of software, frequently design decisions were misinterpreted as requirements, and user feedback was hardly ever solicited until the very end of the process.

This is about as perfect a storm as can exist for requirements – long, laborious requirement capturing processes resulting in the wrong thing being built.

To make matters worse, because so much thought-work was put into crafting the specifications at the beginning of the process, they often brought out the worst in people; specs became unchangeable, locked down, binding things, where so much work was done to them that if that work was ever invalidated, the authors would often fall foul of the sunk cost fallacy and just continue down the path anyway because it was "part of the design".

The specifications never met their goals. They isolated software development from its users, both with layers of people and management. They bound developers to decisions made during times of speculation. And they charmed people with the security of "having done some work" when no software was being produced.

They provided a feedback-less illusion of progress.

"But not my specifications!" I hear you cry.

No, not all specifications, but most of them.

There had to be a better way to capture requirements that:

  • Was open to change to match the changing nature of software
  • Could operate at the pace of the internet
  • Didn't divorce the authors of work from the users of the systems they were designing
  • Were based in real, measurable, progress.

The humble user story emerged as the format to tackle this problem.

What is a user story

A user story is a short, structured statement of a change to a system. They should be outcome focused, precise, and non-exhaustive.

Stories originated as part of physical work-tracking systems in early agile methods – they were handwritten on the front of index cards, with acceptance criteria written on the reverse of the card. The physical format added constraints to user stories that are still useful today.

Their job is to describe an outcome, and not an implementation. They're used as artefacts in planning activities, and they're specifically designed to be non-exhaustive – containing only the information absolutely required as part of a change to a product.

It's the responsibility of the whole team to make sure our stories are high enough quality to work from, and to verify the outcomes of our work.

Furthermore, user stories are an exercise in restraint. They do not exist to replace documentation. They do not exist to replace conversation and collaboration. The job is to decompose large, tough, intractable problems into small, articulated, well considered changes.

User stories are meant to represent a small, atomic, valuable change to a software system and have mostly replaced traditional requirements engineering from the mid-2000s onwards.

The user story contents

The most common user story format, and generally the one that should be followed by default, was popularised by the XP team at Connextra in 2001. It looks like this:

As a <persona>

I want <business focused outcome>

So that <reason driving the change>


  • List of…
  • Acceptance criteria…

Notes: Any notes

This particular format is popular because it considers both the desired outcome from a user's perspective (the persona), and also includes the product thinking or justification for the change as part of the "So that" clause.

By adhering to the constraint of being concise, the story format forces us to decompose our work into small, deliverable chunks. It doesn't prevent us from writing "build the whole solution", but it illuminates poorly written stories very quickly.

Finally, the user story contains a concise, non-exhaustive list of acceptance criteria. Acceptance criteria list the essential qualities of the implemented work. Until all of them are met, the work isn't finished.

Acceptance criteria aren't an excuse to write a specification by stealth. They are not the output format of response documents when you're building APIs, or snippets of HTML for web interfaces. They're conversation points to verify and later accept the user story as completed.

Good acceptance criteria are precise and unambiguous – anything else isn't an acceptance criterion. As an example – "must work in IE6" is better than "must work in legacy browsers"; equally, "must be accessible" is worse than "must adhere to all WCAG 2.0 recommendations".

Who and what is a valid persona?

Personas represent the users of the software that you are building.

This is often mistaken to mean "the customers of the business" and this fundamental misunderstanding leads to lots of unnatural user stories being rendered into reality.

Your software has multiple different types of users – even users you don't expect. If you're writing a web application, you might have personas that represent "your end user", "business to business customers", or other customer archetypes. In addition to this, however, you'll often have personas like "the on call engineer supporting this application", "first line support" or "the back-office user who configures this application".

While they might not be your paying customers, they're all valid user personas and users of your software.

API teams often fall into the trap of trying to write user stories from the perspective of the customer of the software that is making use of their API. This is a mistake, and it's important that if you're building APIs, you write user stories from the perspective of your customers – the developers and clients that make use of your APIs to build consumer facing functionality.

What makes a good user story?

While the vast majority of teams use digital tracking systems today, we should pay mind to the constraints placed upon user stories by physical cards and not over-write our stories. It's important to remember that user stories are meant to contain distilled information for people to work from.

As the author of a user story, you need to be the world's most aggressive editor – removing words that introduce ambiguity, removing any and all repetition and making sure the content is precise. Every single word you write in your user story should be vital and convey new and distinct information to the reader.

It's easy to misinterpret this as "user stories must be exhaustive", but that isn't the case. Keep it tight, don't waffle, but don't try and reproduce every piece of auxiliary documentation about the feature or the context inside every story.

For example:

As a Back-Office Manager
I want business events to be created that describe changes to, or events happening to, customer accounts that are of relevance to back-office management
So that those events may be used to prompt automated decisions on changing the treatment of accounts based on back-office strategies that I have configured.

Could be re-written:

As a Back-Office Manager
I want an event published when a customer account is changed
So that downstream systems can subscribe to make decisions

- Event contains kind of change
- Event contains account identifiers
- External systems can subscribe

In this example, edited, precise language makes the content of the story easier to read, and moving some of the nuance into clearly articulated acceptance criteria prevents the reader from having to guess what is expected.

Bill Wake put together the mnemonic device INVEST – standing for Independent, Negotiable, Valuable, Estimable, Small and Testable – to describe characteristics of a good user story, but in most cases these qualities can be met by remembering the constraints of physical cards.

If in doubt, remember the words of Ernest Hemingway:

"If I started to write elaborately, or like someone introducing or presenting something, I found that I could cut that scrollwork or ornament out and throw it away and start with the first true simple declarative sentence I had written."

Write less.

The joy of physical limitations

Despite the inevitability of a digital, and remote-first world, it's easy to be wistful for the days of user stories in their physical form, with their associated physical constraints and limitations.

Stories written on physical index cards are constrained by the size of the cards – this provides the wonderful side effect of keeping stories succinct – they cannot possibly bloat or become secret specifications because the cards literally are not big enough.

The scrappy nature of index cards and handwritten stories also comes with the additional psychological benefit of making them feel like impermanent, transitory artefacts that can be torn up and rewritten at will, re-negotiated, and refined, without ceremony or loss. By contrast, teams can often become attached to tickets in digital systems, valuing the audit log of stories moved back and forth and back and forth from column to column as if it's more important than the work it's meant to inspire and represent.

Subtasks attached to the index-card stories on post-it notes become heavy and start falling apart, items get lost, and the cards sag, prompting and encouraging teams to divide bloated stories into smaller, more granular increments. Again, the physicality of the artefact bringing its own benefit.

Physical walls of stories are ever present, tactile, and real. Surrounding your teams with their progress helps build a kind of total immersion that digital tools struggle to replicate. Columns on a wall can be physically constrained, reconfigured in the space, and visual workspaces built around the way work and tasks flow, rather than how a developer at a work tracking firm models how they presume you work.

There's a joy in physical, real, artefacts of production that we have entirely struggled to replicate digitally. But the world has changed, and our digital workflows can be enough, but it takes work to not become so enamoured and obsessed with the instrumentation, the progress reports, and the roll-up statistics and lose sight of the fact that user stories and work tracking systems were meant to help you complete some work, to remember that they are the map and not the destination.

All the best digital workflows succeed by following the same kinds of disciplines and following the same constraints as physical boards have. Digital workflows where team members feel empowered to delete and reform stories and tickets at any point. Where team members can move, refine, and relabel the work as they learn. And where teams do what's right for their project and worry about how to report on it afterwards, find the most success with digital tools.

It's always worth acknowledging that those constraints helped give teams focus and are worth replicating.

What needs to be expressed as a user story?

Lots of teams get lost in the weeds when they try to understand "what's a user story" vs "what's a technical task" vs "what's a technical debt card". Looking backwards towards the original physical origin of these artefacts it's obvious – all these things are the same thing.

Expressing changes as user stories with personas and articulated outcomes is valuable whatever the kind of change. It's a way to communicate with your team that everyone understands, and it's a good way to keep your work honest.

However, don't fall into the trap of user story theatre for small pieces of work that need to happen anyway.

I'd not expect a programmer to see a missing unit test and write a user story to fix it - I'd expect them to fix it. I'd not expect a developer to write a "user story" to fix a build they just watched break. This is essential, non-negotiable work.

As a rule of thumb, technical things that take less time to solve than write up should just be fixed rather than fudging language to artificially legitimise the work – it's already legitimate work.

Every functional change should be expressed as a user story – just make sure you know who the change is for. If you can't articulate who you're doing some work for, it is often a symptom of not understanding the audience of your changes (at best) or at worst, trying to do work that needn't be done at all.

The relationship between user stories, commits, and pull requests

Pull request driven workflows can suffer from the unfortunate side-effect of encouraging deferred integration and driving folks towards "one user story, one pull request" working patterns. While this may work fine for some categories of change, it can be problematic for larger user stories.

It's worth remembering when you establish your own working patterns that there is absolutely nothing wrong with multiple sets of changes contributing to the completion of a single user story. Committing the smallest piece of work that doesn't break your system is safer by default.

The sooner you're integrating your code, the better, regardless of story writing technique.

What makes a bad user story?

There are plenty of ways to write poor quality user stories, but here are a few favourites:

Decomposed specifications / Design-by-stealth – prescriptive user stories that exhaustively list outputs or specifications as their acceptance criteria are low quality. They constrain your teams to one fixed solution and in most cases don't result in high quality work from teams.

Word Salad – user stories that grow longer than a paragraph or two almost always lead to repetition or interpretation of their intent. They create work, rather than remove it.

Repetition or boiler-plate copy/paste – Obvious repetition and copy/paste content in user stories invents work and burdens the readers with interpretation. It's the exact opposite of the intention of a user story, which is to enhance clarity. The moment you reach for CTRL+C/V while writing a story, you're making a mistake.

Given / When / Then or test script syntax in stories – user stories do not have to be all things to all people. Test scripts, specifications and context documents have no place in stories – they don't add clarity, they increase the time it takes to comprehend requirements. Those assets are valuable, but they belong in test tools and wikis.

Help! All my stories are too big! Sequencing and splitting stories.

Driving changes through user stories becomes trickier when the stories require design exercises, or when the solution in mind has some pre-requirements (standing up new infrastructure for the first time, etc.). It's useful to split and sequence stories to make larger pieces of technical work easier while still being deliverable in small chunks.

Imagine, for example, a user story that looked like this:

As a customer, I want to call a customer API, to retrieve the data stored about me, my order history, and my account expiry date.

On the surface the story might sound reasonable, but if this were a story for a brand new API, your development team would soon start to spiral out asking questions like "how does the customer authenticate", "what data should we return by default", "how do we handle pagination of the order history" and lots of other valid questions that soon represent quite a lot of hidden complexity in the work.

In the above example, you'd probably split that work down into several smaller stories – starting with the smallest possible story you can that forms a tracer bullet through the process that you can build on top of.

Perhaps it'd be this list of stories:

  • A story to retrieve the user's public data over an API. (Create the API)
  • A story to add their account expiry to that response if they authenticate. (Introduce auth)
  • A story to add the top-level order summary (totals, number of previous orders)
  • A story to add pagination and past orders to the response

This is just illustrative, and the exact way you slice your stories depends heavily on context – but the themes are clear – split your larger stories into smaller useful shippable parts that prove and add functionality piece by piece. Slicing like this removes risk from your delivery, allows you to introduce technical work carried by the story that needs it first, and keeps progress visible.

Occasionally you'll stumble up against a story that feels intractable and inestimable. First, don't panic, it happens to everyone, breathe. Then, write down the questions you have on a card. These questions form the basis of a spike – a small Q&A focused time-boxed story that doesn't deliver user-facing value. Spikes exist to help you remove ambiguity, to do some quick prototyping, to learn whatever you need to learn so that you can come back and work on the story that got blocked.

Spikes should always pose a question and have a defined outcome – be it example code, or documentation explaining what was learnt. They're the trick to help you when you don't seem to be able to split and sequence your work because there are too many unknowns.

Getting it right

You won't get your user stories right first time – but much in the spirit of other agile processes you'll get better at writing and refining user stories by doing it. Hopefully this primer will help you avoid trying to boil the ocean and lead to you building small things, safely.

If you're still feeling nervous about writing high quality user stories with your teams, Henrik Kniberg and Alistair Cockburn published a workshop they called "The Elephant Carpaccio Exercise" in 2013 which will help you practise in a safe environment. You can download the worksheet here - Elephant Carpaccio facilitation guide.

Open-Source Exploitation

12/13/2021 11:00:00

Combative title.

I don’t have a title for this that works.

It’s horrible, it’s difficult, and it’s because all of the titles sound so relentlessly negative that I honestly don’t want to use them. I promise this talk isn’t about being negative, it’s a talk about the work we have to do to be better as an industry. As someone that doesn’t believe in hills, or like walking up them, or death, or dying, this, I think, is probably the hill I’m going to die on.

I want to talk about how open source has, in most cases, been turned into exploitation by the biggest organisations in the world. How it's used to extract free labour from you, and why this is fundamentally a bad thing. I'm going to talk about how we can do better. I'm going to talk about what needs to change to make software, and especially open-source software – something I love dearly – survive.

Because right now, open-source is in the most precarious place it’s ever been in its entire existence, and I feel like hardly anyone is talking about it.

The Discourse Is Dead

Before I start with the gory details, I want to talk about loving things.

More importantly, let’s talk about how it’s important to understand that you can be critical of something that you love because you want it to be better, not because you want to harm it.

I’ve been building open-source software since the actual 1990s. When the internet was a village, and everything was tiny. When I was tiny. But it’s important to understand that on any journey of maturity, the techniques, opinions, and approaches that get you from A to B, are not necessarily the things that you’re going to need to get you from B to C.

People frequently struggle with this fact. Humans are beautiful and oft simple creatures that presume because something has worked before, it’ll work again, regardless of the changing context around us.

As technologists, and an industry, we’re going to have to embrace this if we want open source to survive.

Open Source Won the Fight

At this point we all know this to be true.

You probably use Visual Studio Code at the very least as a text editor.

As of 2018, 40% of VMs on Azure were running Linux.

Open source won the hearts and minds of people by telling them that software could and should be free.

What does free really mean?

While the GPL and its variants were probably not the first free software licenses, they were the licenses that rapidly gained mindshare with the rising popularity of Linux in the late 90s.

Linux, really, was the tip of the spear that pushed open-source software into the mainstream, and its GPL license was originally described by the Free Software Foundation as "free as in speech, not free as in beer" – a confounding statement that a lot of people struggled to understand.

So what does the GPL really mean? In simple terms, if you use source code available under its license, you need to make your changes public for other people to use. This is because the FSF promoted "software freedoms" – literally, the right of software to be liberated, so that its users could modify, inspect, and make their own changes to it.

A noble goal, which shows its lineage as the license used to build a Unix clone that was supposed to be freely available to all – a goal centred around people sharing source code at local computer clubs.

It’s important to stress that “free” never meant “free from cost”. It always meant “free as in freedom” – and in fact, much of the original literature focuses on this by describing software that is “free from cost” as “gratis”.

From the FSF FAQs:

Does free software mean using the GPL?

Not at all—there are many other free software licenses. We have an incomplete list. Any license that provides the user certain specific freedoms is a free software license.

Why should I use the GNU GPL rather than other free software licenses?

Using the GNU GPL will require that all the released improved versions be free software. This means you can avoid the risk of having to compete with a proprietary modified version of your own work. However, in some special situations it can be better to use a more permissive license.

But it wasn’t that version of free software that really won

Despite Linux, and despite early and limited forays into open source by organisations like RedHat, the strong copyleft licensing of the GPL was not the reason open-source software is thriving in the market today.

They’re certainly not the reasons ultra-mega-corps like Microsoft, or Amazon, or Google now champion open source.

The widespread adoption of open-source software in the enterprise is directly related to the MIT and Apache licenses – "permissive" licenses, which don't force people who build on top of software to share their modifications back with the wider community.

Permissive licensing allows individuals or organisations to take your work, build on top of it, and even operate these modified copies of a work for profit.

Much like the GPL, the aim of open source was not to restrict commercial exploitation, but to ensure the freedom of the software itself.

Who benefits from permissive licensing?

This is a trick question really – because in any situation where there is a power imbalance – let’s say, between the four or five largest organisations in the world, and, some person throwing some code on the internet, the organisation is always the entity that will benefit.

Without wanting to sound like a naysayer – because I assure you, I deeply love open-source, and software freedom, and the combinatorial value that adds to teaching, and our peers, and each other, I cannot say loud enough:

Multi-national organisations do not give a single solitary fuck about you.

Businesses do not care about you.

But you know what they do care about? They care about free “value” that they are able to commercially exploit. The wide proliferation of software in businesses is a direct result of licenses like the Apache license, and the MIT license being leveraged into closed source, proprietary and for-profit work.

Want to test the theory?

Go into your office tomorrow and try adding some GPL'd code to your company's applications and see how your line manager responds.

Permissive licenses explicitly and without recourse shift the balance of power towards large technical organisations and away from individual authors and creators. They have the might to leverage code, they have the capability to build upon it, and they have the incentive and organisational structures to profit from doing so.

Open-source software took hold in the enterprise because it allowed itself to be exploited.

Oh come on, exploited? That’s a bit much isn’t it?

Nope. It’s entirely accurate.

exploitation (noun) · exploitations (plural noun)

  • the action or fact of treating someone unfairly in order to benefit from their work.

"the exploitation of migrant workers"

  • the action of making use of and benefiting from resources.

"the Bronze Age saw exploitation of gold deposits"

  • the fact of making use of a situation to gain unfair advantage for oneself.

"they are shameless in their exploitation of the fear of death"

Oh wow, that’s got such a negative slant though, surely that’s not fair?

Surely people are smarter than to let their work just get leveraged like this?

The internet runs on exploited and unpaid labour

XKCD is always right


This is just the truth. It’s widely reported. The vast majority of open source projects aren’t funded. Even important ones.

There is no art without patronage. None. The only successful open source projects in the world are either a) backed by enormous companies that use them for strategic marketing and product positioning advantage OR b) rely on the exploitation of free labour for the gain of organisations operating these products as services.

I can see you frothing already – “but GitHub has a donations thing!”, “what about Patreon!”, “I donated once, look!”.

And I see you undermine your own arguments. We’ve all watched people burn out. We’ve watched people trying to do dual-licensing get verbally assaulted by their own peers for not being “free enough for them”. We’ve watched people go to the effort of the legal legwork to sell support contracts and have single-digit instances of those contracts sold.

We’ve seen packages with 20+ million downloads languish because nobody is willing to pay for the work. It’s a hellscape. It victimises creators.

I would not wish a successful open-source project on anyone.

Let’s ask reddit

(Never ask reddit)

I recently made the observation in a reddit thread that it’s utterly wild that I can stream myself reading an open-source codebase on YouTube and people will happily allow me to profit from it, but the open-source community has become so wrongheaded that the idea of charging for software is anathema to them.

Let’s get some direct quotes:

“Ahh, so you hate gcc and linux too, since they're developed by and for companies?”

“Arguing against free software? What year is it?!”

“If it’s free, why wouldn’t it be free to everyone? That includes organizations. I’m honestly not clear what you’re suggesting, specifically and logistically.”

Obviously I was downvoted to oblivion because people seemed to interpret “perhaps multinational organisations should pay you for your work” as “I don’t think software freedom is good”.

But I was more astonished by people suggesting that charging for software was somehow in contradiction with the “ethos” of open source, when all that position really shows is an astonishing lack of literacy of what open source really means.

Lars Ulrich Was Right

In 1999 Napster heralded the popularisation of peer-to-peer file sharing networks. And Metallica litigated and were absolutely vilified for doing so.

The music business had become a corporate fat cat, nickel and diming everyone with exorbitant prices for CDs (£20+ for new releases!), bands were filthy rich and record executives more-so. And we all cried – “what do Metallica know about this! They’re rich already! We just want new music!”.

I spent my mid-teens pirating music on Napster, and AudioGalaxy, and Limewire, and Kazaa, and Direct Connect, and and and and and and. And you know what? If anyone had spent time listening to what Lars Ulrich (Metallica’s drummer) was actually saying at the time, they’d realise he was absolutely, 100% correct, and in the two decades since has been thoroughly vindicated.

I read an interview with him recently, where he looks back on it – and he’s reflective. What he actually said at the time was “We’re devaluing the work of musicians. It doesn’t affect me, but it will affect every band that comes after me. I’m already a multi-millionaire. File sharing devalues the work, and once it’s done, it can never be undone.”

And he was right.

After ~1999, the music industry was never the same. Small touring bands that once made comfortable livings were scraping by in 2020. Niche and underground genres, while more vibrant than ever, absolutely cannot financially sustain themselves. It doesn't scale. We devalued the work by giving it all away.

And when you give it all away, the only people that profit are the large organisations that are in power.

Spotify, today, occupies the space that music labels once did: a vastly profitable large organisation, while artists figuratively starve.


I wish I had the fine wine and art collection of Ulrich, but forgive me for feeling a little bit like I’m standing here desperately hoping that people listen to this message. Because we are here, right now.

I love open-source, just like Lars loved tape trading and underground scenes, but the ways in which we allow it to be weaponised is a violence. It doesn’t put humans, maintainers, creators and authors at its centre – instead, it puts organisational exploitation as the core goal.

We all made a tragic mistake in thinking that the ownership model that was great for our local computing club could scale to a planet-sized industry.

How did we get here?

Here’s the scariest part, really. We got here because this is what we wanted to do. I did this. You did this. We all made mistakes.

I’ve spent the last decade advocating for the adoption of open-source in mid-to-large organisations.

I, myself, sat and wrote policies suggesting that while we should adopt open-source software, and contribute if we could (largely, organisations never do), for both strategic and marketing benefit, we should only look at permissively licensed software, because anything else would incur cost at best, and force us to give away our software at worst.

And god, was I dead wrong.

I should’ve spent that time advocating that we licensed dual-licensed software. That we bought support contracts. That we participated in copyleft software and gave back to the community.

I was wrong, and I’m sorry, I let you all down.

I’m not the only one

Every time a small organisation or creator tries to license their software in a way that protects them from the exploitation of big business – like Elastic, or recently Apollo, or numerous others over the years – the community savages them, without realising that it’s the community savaging itself.

We need to be better at supporting each other, at knowing whenever a creator cries burn-out, or that they can’t pay rent in kudos, or that they need to advertise for work in their NPM package, that they mean it. That it could easily be you in that position.

We need new licenses, and a new culture, which prioritises the freedom of people from exploitation, over the freedom of software.

I want you to get paid. I want you to have nice things. I want you to work sustainably. I want a world where it’s viable for smart people to build beautiful things and make a living because of it.

If we must operate inside a late-stage-capitalistic hellhole, I want it to be on our terms.

Can companies ever ethically interact with open-source?

Absolutely yes.

And here’s a little bit of light, before I talk about what we need to do the readdress this imbalance.

There are companies that do good work in open-source, and fund it well. They all have reasons for doing so, and even some of the biggest players have reasonably ethical open-source products, but they always do it for marketing position, and for mindshare, and ultimately, to sell products and services and that’s ok.

If we’re to interact with these organisations, there is nothing wrong with taking and using software they make available, for free, but remember that your patronage is always the product. Even the projects that you may perceive to be “independent”, like Linux, all have funding structures or staff provided from major organisations.

The open-source software that you produce is not the same kind of open-source software that they do, and it’s foolish to perceive it to be the same thing.

How can we change the status quo?

We need both better approaches, and better systems, along with the cooperation of all the major vendors to really make a dent in this problem.

It will not be easy. But there’s a role for all of us in this.

Support creators

This is the easiest, and the most free of all the ways we’ll solve this problem.

The next time each of you is about to send a shitty tweet because Docker Desktop made delaying updates a paid feature, perhaps, just for a second, wonder why they might be doing that.

The next time you see a library you like adopting an “open-core” licensing model, where the value-added features, or the integrations are paid for features – consider paying for the features.

Whenever a maintainer asks for support or contributions on something you use, contribute back.

Don’t be entitled, don’t shout down your peers, don’t troll them for trying to make a living. If we all behaved like this, the software world would be a lot kinder.

Rehabilitate package management

I think it’s table stakes for the next iteration of package managers and component sharing platforms to support billing. I’d move to a platform that put creator sustainability at its heart at a moment’s notice.

I have a theory that more organisations would pay for software if there were existing models that supported or allowed it. Most projects couldn’t issue an invoice, or pay taxes, or accept credit cards, if they tried.

Our next-generation platforms need to support this for creator sustainability. We’re seeing the first steps towards these goals with GitHub sponsorships, and nascent projects like SDKBin – the “NuGet, but paid” distribution platform.

Petition platform vendors

A step up from that? I want to pay for libraries that I use in my Azure bill. In my AWS bill. In my GCP bill.

While I’ve railed against large organisations leveraging open-source throughout, large organisations aren’t fundamentally bad, immoral, or evil, I just believe they operate in their best interest. The first platform that lets me sell software components can have their cut too. That’s fair. That’s help.

I think this would unlock a whole category of software sales that just doesn’t exist trivially in the market today. Imagine if instead of trying to work through some asinine procurement process, you could just add NuGet, or NPM, or Cargo packages and it’ll be accounted for and charged appropriately by your cloud platform vendor over a private package feed.

This is the best thing a vendor could do to support creators – they could create a real marketplace. One that’s sustainable for everyone inside of it.

Keep fighting for free software

For users! For teachers! For your friends!

I feel like I need to double down on what I said at the start. I love open-source software dearly. I want it to survive. But we must understand that what got it to a place of success is something that is currently threatening its sustainable existence.

Open-source doesn’t have to a proxy for the exploitation of the individual.

It can be ethical. It can survive this.

I do not want to take your source code away from you. I just desperately want enough people to think critically about this that, when it's your great new idea that you think you can do something meaningful with, it's you who gets to execute on and benefit from that idea.

By all means give it away to your peers but spare no pity for large organisations that want to profit from your work at your expense.

Support the scene

In music, there’s the idea of supporting our scene, our heritage, the shared place where “the art” comes from.

This is our culture.

Pay your friends for their software, and accept it gracefully if they want to give it to you for free.

Footnotes - see this live!

An expanded version of this piece is available as a conference talk - if you would like me to come and talk to your user-group or conference about ethics and open-source, please get in touch.

A Love Letter to Software

12/03/2021 09:00:00

Most software isn’t what people think it is.

It’s a common thread – it’s mistaken for lots of things that it isn’t.

“I bet you’re good at maths!”

Say the well-intentioned friends and strangers who think we write in ones and zeros.

“Software is engineering”

Cries the mid-career developer, desperately searching for legitimacy in a world that makes it hard for people to feel their own worth.

“Software is architecture!”

Says the lead developer, filling out job applications looking to scale their experience and paycheck.

But programming is programming. And we do it a disservice, we lower it, by claiming it to be something else.

Comparison and the Death of Identity

Software is the youngest of industries, and with youth comes the need for identity, and understanding – because those things bring legitimacy to an occupation.

Every new job must run the gauntlet of people’s understanding – and when a discipline is new, or complicated it is tempting to rely on comparison to borrow credibility, and to feel legitimate.

But comparison is reductive, and over a long enough time can harm the identity of the people that rely on it.

Software is none of the things that it may seem like.

Software is beautiful because of what it is

I’m so proud of software and the people that build it. And I want you to all be proud of what you are and what you do too.

Software is the most important innovation of the last one hundred years. Without software, the modern world wouldn’t exist.

Software is beautiful because we’ve found our own patterns that aren’t engineering, or design. They’re fundamentally of software, and they have legitimacy, and are important.

You wouldn’t TDD a building, even if you might model it first – the disciplines that are of software, are ours, and things to celebrate.

We do not need to borrow the authority or identity of other disciplines on the way to finding our own.

Just as much as we should celebrate our success, we should embrace and be accountable for our failures. It’s important that we embrace and solve our ethical problems, defend the rights of our workers, and are accountable for our mistakes.

Because software does not exist without the humans at the centre of it.

Love What You Are

There’s a trend of negativity that can infect software.

That everything sucks, that everything is buggy, that modern programmers just plug Lego bricks together – and it’s toxic. It is absolutely possible to be critical of things that you love to help them grow, but unbridled aggression and negativity is worthless.

Software, even buggy software, has changed the world.

Software, even software you don’t like, has inspired, and enabled, has been a life changing experience to someone.

As programmers, our software is our work, our literature, our singular creative output.

As romantic and pretentious as that sounds – respect each other, and the work, lest we drown out someone’s beautiful violent urge to create and make things.

Every character matters

I was recently on a podcast where I was asked what advice I’d give to myself twenty years ago, if I could, and after some deliberation I think I finally know the answer.

“Always take more time”

Time is the most finite of resources, and if you want to write beautiful software, you have to do it with intent. With thoughtfulness.

And the only way to do that is to take your time.

We’re often subjected to environments while producing works, where time is the scarcest resource – and simultaneously the thing you need to defend the most to produce work of value and of quality.

If I could have my time again, after every change, every completed story, after everything was done, I’d give myself the time and the mental space to sit with the work, and soak it in, and improve it. Taking time, in the middle of my career, has become the most important part of doing the work.

When you realise that how you do something matters as much as why you're doing it – that legibility and form directly affect function, and when you take the time to think about and articulate why something is – that's when you do your best work.

The only analogy I subscribe to

And now the contradiction – after writing about how software should be empowered to be its own thing, let me tell you what I think software is really closest to.

Software is fundamentally a work of literature.

You can look at software through the same lens you would as any body of writing. It has text and subtext. It has authorial intent that perhaps is contradictory to its form. It has phrasing, rhythm, and constrained grammar. Software is a method of communicating concepts and ideas between humans, using a subset of language. The fact that it happens to be executed by a computer is almost a beautiful side effect.

Your code tells a story, your commit messages transfer mood. The best code is written for the reader – using form to imitate function, and flow and form to transfer meaning.

I love software, and I love the people that write it – and who do so with intent, and thoughtfulness. You’re not plumbers, or electricians, or engineers, however wonderful those jobs are.

You’re artists. <3

Hiring is broken! Let's fix it with empathy.

09/30/2021 02:30:00

Hiring technical people is difficult, and doubly so if you want to get people who are a good fit for you and the teams you're working with, yet repeatedly we seem to get it awfully wrong as an industry.

The tropes are real – and we're now in our second iteration of "hiring terribly". Where the 80s and early 90s were characterised by mystery puzzle hiring ("how would you work out how many cars you can fit into three cruise ships?"), the 2010s are defined by the tired trope of the interview that is orders of magnitude more difficult to pass and bears increasingly little resemblance to the job you do once you get the role.

Over fifteen years of hiring people for coding jobs, a few things still seem to hold:

  1. The ability to talk fluently about what you like and don't like about code for an hour or so is the most reliable indicator of a good fit.
  2. It's a bad idea to hire someone if you have never seen code they have written.
  3. Interview processes are stressful, unnatural, and frequently don't get the best from people.

We're faced with a quandary – how do we find people, from a pool of unknowns, who will quickly be able to contribute, work in relative harmony, and enjoy being part of the team?

The kind of people who will fit best in your organisation is inevitably variable – it's driven by the qualities you desire in your teams – but personally, I value kind people who communicate clearly and are a pleasure to work with. Those are not everyone's values, but I want to speak to how I've tried to cultivate those kinds of teams.

You're going to need to know how to write an excellent job spec, construct a good interview process, evaluate technical performance, and give meaningful feedback. Let's cover each of those topics in turn.

How to construct a kind interview process

A good interview process respects everyone's time.

Set amongst the hellscape of FAANG multi-stage interview processes with one hundred asinine divisional directors, it's simple to put together an interview process that isn't hell on earth for everyone involved.

  1. Write a job spec that captures your cultural values.
  2. Have an hour-long conversation with them about themselves, their experiences, and their opinions.
  3. See some code they've written.
  4. Have the team they would join, or someone else representative, talk to them about code.

There's no reason for this process to take any longer than three hours end-to-end, and ideally shouldn't be a chore for anybody involved.

The first bit is all on you, the interviewer. It's important that a job spec contains concrete information about the work the role involves, that the only skills listed as mandatory are skills used in the actual role, and that you are clear about constraints and salary conditions.

The conversation is what most people are used to as an interview. Be kind. Understand people are humans and might be nervous, make sure they know that the best outcome is that you both "win" – don't be there to get a rise out of someone.

How to be a good interviewer

The first and most important thing about being a good interviewer is that you're not there to trip people up or catch people out. If that's what you feel an interview should be, I implore you to pass on interviewing.

Interviews are not meant to be hostile environments, and as a candidate, if you encounter one, do not under any circumstances take the job.

You're in an interview to verify someone's experience, understand their communication style, and discuss the expectations of the role you're hiring for.

You're there to sell the position, hopefully stimulating enthusiasm in the candidate, and to set expectations of what the job is like, day-to-day, so that neither you nor the candidate is surprised if you both choose to work together.

You need to be honest – both about the problem space, and the work. You need to be clear about where you need to grow as a team or organisation. There is nothing worse, as a candidate, than being sold a lie. Far better to articulate your challenges up front than to ruin your own reputation later.

You need to ask clear and relevant questions – learn from the mistakes of a thousand poor "balance a binary tree" style interview questions and leave that stuff at home.

Ask candidates questions about their relevant experience. Ask them how they would solve problems that you have already solved in the course of your work, or how they would approach them. Don't ask meaningless brain teasers.

You need to give them space to talk about broad topics – I love asking candidates what they think makes good code. I love to ask the question because everyone will say "readable" or "maintainable" and then we get to have a conversation on what they think satisfies those qualities in a codebase.

As an interviewer, I don't care that you learnt to say, "it follows the solid principles", I'd much rather a candidate has the floor to talk about how code makes them feel and why. Nice big broad questions are good at opening the floor to a discussion once you've talked about experience.

Take notes. Don't interrupt the candidate. Give them time to speak, and actively listen.

Seeing some code

You're going to want to see some code for technical roles – this is an absolute minefield, but the thing I've settled on after trying all sorts of techniques is to offer candidates a choice.

My standard process here is to offer candidates any of the following:

  • Bring me some code you have written that you're comfortable talking about
  • Do a well-known kata, in your own time, and send it across
  • Set up a one-hour session and I will pair program the kata with you

I ask the candidates to "please pick whichever is least stressful for you".

People perform differently under different types of assessment, and qualitatively, I get the same outcome from a candidate regardless of the path they pick. I like to hope that this opens the door for more neurodiversity in applicants and protects me from only hiring people that share my exact mental model. Choice is good, it doesn't hurt to be kind, it costs nothing.

Each approach has subtle pros and cons. A candidate's own code might not give me quite the same high-quality signal, but it's a great way for people who are unquestionably competent to avoid wasting their own time. The take-home kata is a happy medium, though a candidate can accidentally end up thrashing around trying to complete something that doesn't need to be completed. The pairing session requires a little more of the interviewer's time and is probably the highest-stress option – people sometimes don't perform well when they feel they're being actively evaluated – but you see precisely how someone works in those conditions.

Technical tests are intimidating to all but the most confident of candidates; this choice lets them wrest a little confidence and control back, so they at least don't feel ambushed by something they cannot reckon with.

It's the right thing to do.

How to set a good technical test

I've been involved in setting a lot of technical tests over the years – and I'm extremely sensitive to the ire that tech tests often cause in people. I've seen so many borderline abusive practices masquerading as technical tests that I'm not even remotely surprised.

The commandments of good tech tests:

  • A test should take no longer than one hour
  • It should be completable by everyone from a junior to the most senior of seniors
  • It should not be in your problem domain
  • It should not be unpaid work
  • The answer should be provided in the question

There are a couple of potentially controversial points here.

The tech tests should respect a candidate's time.

You are not the only place they are applying, and the candidate does not owe you their time. Anything more than thirty minutes to an hour can act as implicit discrimination against people who don't have unlimited time – people with families or other commitments outside of work.

Using the same test for your most junior developers to your most senior allows you to understand the comparative skill of candidates who are applying, on a level playing field. You might not expect the same level of assessment or scrutiny between submissions, but that baseline is a powerful way of removing the vast discrepancies between titles and pay and focusing on a candidate's capability.

The test should be synthetic, and not part of your domain. For years I believed the opposite of this and was a fan of making tests look like "real work", but this often fails because it requires the candidate to understand a whole set of new domain concepts, which doesn't help you assess their capability for the job.

And finally, providing the answer in the question deliberately reinforces that it's not a "puzzle", but an interview aid.

If a tech test contains the answer, and isn't domain specific, then what is it really for?

A tech test exists to verify, at the most basic level, that a candidate can code at all. I have interviewed a surprisingly non-zero number of people who couldn't so much as add a new class to an application – it's why FizzBuzz is a good traditional screening question: it does little more than "test" whether you can write an if-statement.
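
For the avoidance of doubt, FizzBuzz really is that small. A sketch in Python (any language works; the function name and form here are just one illustrative take):

```python
def fizzbuzz(n: int) -> str:
    # Multiples of both 3 and 5 -> "FizzBuzz", of 3 -> "Fizz",
    # of 5 -> "Buzz", anything else -> the number itself.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The traditional form runs from 1 to 100; 1 to 15 shows the full cycle.
for i in range(1, 16):
    print(fizzbuzz(i))
```

If a candidate can produce something like this, they can write an if-statement, and the screen has done its job.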

Once you've established a candidate can code, you're looking to see how they approach problem solving.

Do they write tests? Do they write code that is stylistically alike to your team's preferences? Can they clearly articulate why they made the choices they made, however small?

A technical test isn't there to see if a candidate can complete a problem under exam conditions, it's just an indicator as to the way they approach a problem.

A good technical test is the quickest shortcut to these signals. I've come to value well-known code katas as recruitment tests, as they tend to fulfil most of these criteria trivially, without having to be something of my own invention.

I tend to use the Diamond Kata –

Given a character from the alphabet, print a diamond of its output with that character being the midpoint of the diamond. Write appropriate tests.

(Example of the Diamond Kata – you can find it on GitHub under davidwhitney's Code Katas)
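
To make the shape of the exercise concrete, here's one minimal sketch of the Diamond Kata in Python – the function name and approach are just my own illustration, and any solution with appropriate tests is fine:

```python
def diamond(midpoint: str) -> str:
    # Build the rows from 'A' down to the midpoint letter, then mirror them.
    letters = [chr(c) for c in range(ord("A"), ord(midpoint.upper()) + 1)]
    width = 2 * len(letters) - 1  # the midpoint row is the widest
    rows = []
    for i, letter in enumerate(letters):
        if i == 0:
            row = letter  # 'A' appears only once, at the tip
        else:
            row = letter + " " * (2 * i - 1) + letter
        rows.append(row.center(width).rstrip())
    # Mirror everything except the midpoint row to close the diamond.
    return "\n".join(rows + rows[-2::-1])

print(diamond("C"))
#   A
#  B B
# C   C
#  B B
#   A
```

The interesting conversation is rarely the solution itself – it's the tests the candidate writes around it, and how they talk through edge cases like `diamond("A")`.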

Giving feedback

If a candidate has given you an hour of their time, the responsible thing is to give them meaningful feedback in return. It doesn't have to be much, and it doesn't need to be a full review – just a few hints as to what they could do in future to be more successful ("we didn't feel like you had enough experience in Some Framework" or "we didn't feel confident in the tests you were writing") is absolutely fine.

Be kind. Hope they take the feedback away and think about it.

There are hundreds of stories out there of a failed interview candidate later turning out to be the hiring manager – being nice to people, even if they don't get the job, sets a good precedent for when you inevitably meet them again in the future.

An unfortunate majority of job roles won't contact unsuccessful candidates at all – and there is a balance to be struck. You're certainly not obliged to respond to everyone who applies through a CV-screening funnel, but anyone you actually talk to deserves the courtesy of feedback for the time they've spent.

Adapt to fit

The best interview processes accurately reflect your own personal values and set the stage for the experience your new team members will have when they join your organisation. Because of this, no single approach will work for everyone – it's impossible.

I hope that the pointers in here will stimulate a little bit of thought as to how you can re-tool your own interview process to be simpler, kinder, and much quicker.

Below is an appendix about marking technical recruitment tests that may be useful in this process.

Appendix: How to mark a technical test

Because I tend to use the same technical tests for people across the entire skill spectrum, I've come to use a standard marking sheet to understand where a particular candidate fits in the process. I expect less from candidates earlier on in their careers than more experienced individuals – this grading sheet isn't the be all and end all, but as you scale out your process and end up with different people reviewing technical tests and seeing candidates, it's important that people are assessing work they see through the same lens.

Feel free to use this if it helps you understand what good looks like.

Problem domain and understanding of question

  1. Submitter suggested irrelevant implementation / entirely misunderstood domain
  2. Submitter modelled single concept correctly
  3. Submitter modelled a few concepts in domain
  4. Submitter modelled most concepts in domain
  5. Submitter modelled all concepts in domain

Accuracy of solution

  1. Code does not compile
  2. Code does not function as intended, no features work
  3. Code builds and functions, but only some of the acceptance criteria are met
  4. ~90% of the acceptance criteria are met. Bugs outside of the scope of the acceptance criteria allowed
  5. All acceptance criteria met. Any "hidden" bugs found and solved.

Simplicity of solution

  1. Is hopeless spaghetti code, illegible, confusing, baffling
  2. An overdesigned mess, or nasty hacky code – use of large frameworks for simple problems, misusing DI containers, exceptions as flow control, needless repetition, copy-pasting of methods, lack of encapsulation, overuse of design patterns to show off, excess of repetitive comments, long methods
  3. Code is concise, size of solution fits the size of the problem, no surprises. Maybe a few needless comments, the odd design smell, but nothing serious
  4. Code is elegant, minimalist, and concise without being code-golf, no side effects, a good read. Methods and functions are descriptive and singular in purpose
  5. Perfect, simple solution. Absolutely no needless comments, descriptive method names. Trivial to read, easy to understand

Presentation of solution

  1. Ugly code, regions, huge comment blocks, inconsistent approach to naming or brace style, weird amounts of whitespace
  2. Average looking code. No regions, fewer odd comment blocks, no bizarre whitespace
  3. Nice respectable code. Good code organisation, no odd comment blocks or lines (no stuff like //======= etc), internally consistent approach to naming and brace style
  4. Utterly consistent, no nasty comment blocks, entirely consistent naming and brace style, effective use of syntactic sugar (modern language features in the given language etc)
  5. Beautiful code. Great naming, internally consistent style. Follows conventions of language of test. Skillful use of whitespace / stanzas in code to logically group lines of code and operations. Code flows well and is optimised for the reader.

Quality of unit tests

  1. No test coverage, tests that are broken, illegible, junk
  2. Tests that don't test the class that's supposed to be under test, some tests test some functionality. Vaguely descriptive naming. AAA pattern in unit tests.
  3. Descriptive, accurate names. AAA in unit tests. Use of test setup to DRY out tests if appropriate. Reasonable coverage.
  4. Complete test coverage to address all acceptance criteria, setup if appropriate, good descriptive names. BDD style tests with contexts are appreciated.
  5. Full coverage, all acceptance criteria covered, great naming that represents the user stories accurately, little to no repetition, no bloated repetitive tests, effective use of data driven tests if appropriate, or other framework features.

How to write a tech talk that doesn’t suck.

06/20/2021 23:00:00

Conferences, user-groups, and internal speaking events are a great way for you to level up your career as a developer.

There are a lot of reasons for this – the network effect of meeting like-minded peers is infectious and is probably where your next job will come from. Reaching out into the community will help you discover things you never would alone or inside your organisation. Most importantly of all, though: learning to explain a technical thing will help you understand it and literally make you better at your job.

This is all well and good, but if you've never put together written technical content before, the idea can be terrifying.

I regularly have conversations with people who would like to get into speaking but just don't know where to start. They tell me they stare at an empty slide deck, feel blank-page anxiety, and often give up. I know seasoned veterans of the tech scene who still fight this problem with every new talk they put together.

Before we start, it's worth highlighting that there's not just one way to write a talk, any more than there's one plot structure for a movie or a book. I'm going to talk about a pattern that works for me.

Not all my talks follow this pattern, but most of them start this way before they perhaps become something else – and I'll dissect a few of my talks to show you how this process works along the way.

I can't promise that your talk won't suck by following these instructions, but hopefully it'll give you enough of a direction to remove some of that empty page anxiety.

What is a conference talk meant to do?

Conference talks are meant to be interesting; they are entertainment. It's very easy to lose track of that when you're in the middle of writing a talk.

We've all been in that session.

The room is too hot, and the speaker is dryly reading patch notes.

You want to close your eyes, but you can practically make eye contact with them you're so close and you just want it to be over. Time slows down. There's inexplicably still half an hour left in the session, and you just want to get out.

When you are finally free of the room, you turn to your friend, and in shared misery mutter "god I'd never write a conference talk this boring". A good conference talk entertains the audience, it tells a human story, and it leaves the audience with something to take away at the end.

You're not going to teach the entirety of a topic in an hour or less, accept this, and do not try to.

Your goal is to leave the audience with some points they can look up once they've left the room, and a story about your experiences with the thing.

That's the reason you were bored in that talk and just wanted it to end. It's the reason you didn't enjoy a talk by a famous speaker who came unprepared and just "winged it".

Conference talks are not the same as documentation, or blog posts, or even lectures.

Once you accept that the goal of a good conference talk is to be interesting and entertaining, it frees you from a lot of the anxiety of "having to be really technical" or "having to show loads of code". Your audience doesn't have the time to read the code as you're showing it, but they do have time to listen to you speak about your experiences with it.

The reason it's you giving the talk is often the most interesting bit, the human story, that's what people connect to – because if you were in a specific situation and solved that problem with technology, your audience probably is too.

Understanding your audience.

There are four different kinds of people you will find in your audience.

1. The person who you think is your target audience.

They probably know a little about the thing you're talking about – perhaps they've tried the technology in your talk description once or twice and haven't quite had the time to work out how to use it. They have the right amount of experience, and your content will connect with them.

2. The person who is in the wrong room

This person might have come to your talk without really understanding the technology. Perhaps they're very new and your talk is aimed at existing practitioners. Perhaps they're interested in seeing different types of talks or talks outside of their main area of expertise. They're very unlikely to "just understand" any of the code you show them.

3. The person that is here to see code

There's a special kind of audience member who will never be satisfied unless they're looking at hardcore computer programming source code on a projected slide. You'll find them everywhere – they're the folks that suggest talks "aren't technical enough" because they often expect to see the documentation projected before them.

4. The person who knows more than you about the tech you're speaking about

This person already knows the technology you're going to cover, and they're here either to verify their own positions, or possibly because there wasn't another talk on that interested them, so they just went for something they knew.

Your job is to entertain and inform all of them at once.

You'll see these people time and time again in the audience – remember – they don't know if this talk is going to be for them until they're in the room, so it's your responsibility as the author of the material to entertain them all.

The four-act play.

An entertaining talk lets each of those different audience members leave the room with something interesting to look up afterwards, and we can do this by splitting our talk into segments that satisfy each of them.

First and foremost, your talk is a performance, and it should tell a story people can connect to. This means we shouldn't just be pasting code into slide decks and narrating through it for the entire session – those deep dives work far better as an essay or blog post.

What works for me, is to divide talks up into a four-act structure.

  • Introduction
  • Act 1 – The Frame
  • Act 2 – The Technology
  • Act 3 – The Example
  • Act 4 – The Demo
  • Conclusion

Once you've got through the introduction – typically just a couple of slides to introduce the talk and help you connect with the audience – each of the "acts" has a purpose, and each is sneakily designed to satisfy one of the different things people hope for in your talk.

Act 1 – The Frame

This is the setup for the entire talk: the hook, the central conflict, or the context you're providing your audience so they understand why you're talking about the thing you're talking about.

It's often about you, or an experience you've had that led you to a particular piece of technology. It should contextualise your talk in a way that literally anyone can understand – it's a non-detailed scene-setter.

In a recent talk I wrote about GameBoy emulation, the frame was a mixture of two things – my childhood infatuation with the console, and wanting to explain to an audience, at a low level, how computers work. To explain emulation, I knew I needed to do a bit of a comp-sci 101 session on hardware, and remembering the things that seemed like magic to me as a child gave me a suitable narrative position to dive into technical details from. It made a talk about CPU opcodes and registers feel like a story.

Another example: in a talk I wrote about building a 3D ray-casting "engine", my frame was the history of perspective in classical art. Again, because I knew the technology part of the talk would cover how visual projections are implemented in code, I was able to frame that against the backdrop of 3D in traditional art before leading into the history of 3D in games.

In both examples, the frame helps someone who might not understand the more technical portions of the talk in detail find something of interest, and hopefully of value, to take away from the session.

Act 2 – The Technology

This is "the obvious part": once we've introduced the context the talk exists in, we can discuss the specific technology / approach / central topic of the talk as it exists in that context.

In the case of my 3D ray-casting talk, at this point we switch from a more general history of perspective in art to the history of 3D games, and what ray-casting is.

You can go into as much detail as you like about the technology at this point, because this part of the talk is for the person who is your intended audience – who knows just enough but is here to learn something new.

I often find this part of my talks is quite code-light but approach-heavy – explaining what the technology does, rather than how it does it.

Act 3 – The Example

Once we've introduced the technology, your example should work through how you apply it to solve a problem. This is often either a live coding segment (though I'd advise against it) or talking through screenshots of code.

Some people thrive doing live coding segments, but for most, it's a lot of stress, and introduces places where your talk can go badly wrong. Embed a recorded video, embed screenshots and narrate through them – pretty much anything is "safer" than doing live coding, especially when you've not done it before.

This act is for everyone, but especially people who feel cheated if they don't see code in a session.

Act 4 – The Demo

Applying much the same logic as the above – show the thing working. If there's any chance at all that it could fail at runtime, especially for your first few talks, pre-record the demo. Live demos are fun, and even the most veteran speakers have had live demos fail on them while on stage.

Don't panic if it happens, laugh it off, and be prepared with screenshots of the thing working correctly.

The Conclusion

The trick to making your talks seem like a cohesive story or journey is to use your conclusion to come all the way back to your framing device from the start – to answer the question you posed, or to explain how the technology helped you overcome whatever the central problem was at the beginning of your talk.

In my GameBoy talk, the "moral of the story" is that it's ok to fail at something and to keep going until you make progress; the difficulties I initially had in building a GameBoy emulator were part of the framing device, and coming back to them ends the talk with a resolution.

Similarly, in my 3D rendering talk, the frame is brought back in again as I explain that throughout the process of building a ray-caster, I understood and could answer some questions that confused me about 3D graphics as a child, again, providing resolution to the talk.

While your talks will inevitably not please everyone in attendance, taking the audience on a journey from which they can take different kinds of learnings increases the chance of your talk going down well. This, in turn, makes the topic more memorable and relatable.

The slide deck.

The empty slide deck is a real anxiety-inducing nightmare – so the most important thing to do when you need to start writing the talk is just to get something into the deck. I like to imagine my slide deck as an essay plan – with a slide per bullet point.

You're going to want to create your slide deck in multiple passes; to start with, just add slides with titles and empty bodies to get the main structure of your talk down on paper. Budget about 2–3 minutes per slide, so split your talking points down quite finely.

Getting this first outline done in slides is the single most motivating thing you'll do when writing your talk, because once it's on paper, you no longer have to hold the whole talk in your head, and you can focus on filling out specific slides.

I generally avoid text in my slide decks, instead using images that represent the concepts until I need to show concrete things like diagrams, code, or screenshots. This has the nice side effect that your audience looks at you, and pays attention to what you're saying, rather than trying to read the text on the deck.

Once the main flow of your talk is locked in, leave it, and get on with writing your script – you should come back at the end once the talk is written, and make the slides look good. Trust me – it's easy to get distracted polishing slides before you finish your narrative, and you might find that the slides that you'd previously made beautiful will have to change as a result.

The Script

If you ask people if they script their talks, you'll get a wide variety of responses – some people prefer to talk around their slides, others prefer to read from a script like an auto-cue, and I'm sure you'll find someone everywhere in between.

I script my talks to the word – every utterance, every seemingly throw-away joke, dramatic pause, chuckle, it's all in there. That doesn't mean that I always read my script in real-time – as you do your talk more frequently, you'll amend, and memorise it.

What a tight script gives you, though, is security. It protects you from illness, hangovers, your own poor life choices – literally anything that could get between you and a good session. Talks can feel robotic when they're read from the script without a speaker privacy screen, but if you write your script to take the flow and rhythm of spoken language into account – literally write it as you would like to say it – you can stop it looking like you're obviously reading a script.

While you don't have to use the script you write, the act of producing it is often as good as learning your talk, and it comes with the added protection if you need it.

The more technical your talk, the more you'll end up needing the script. It doesn't matter how well you know the topic, it's remarkably easy to fumble words when you're speaking and under the stress of being watched by an audience. It's ok, don't feel bad about it. Much rather you have a script than make a mistake and undermine a technical part of your talk.

Once you have the slide-deck outline of your talk, walk through each slide and write its script in the speaker notes. This way, when you're presenting, your speaker view will always have your script in plain sight – PowerPoint, Google Slides, and Keynote all support this by default.

Do multiple passes: make sure the script still works if you re-order slides, and make sure whatever you write for each slide fits into your 2–3-minute budget. Some concepts will take longer than that to explain, but it's worth chunking larger topics across multiple slides. It's a useful mental crutch to know that you're keeping good time.

Remember - you do not have to use your script, but your talk will almost certainly be better if you have one.

Knowing what to talk about.

Often the hardest thing when you start writing conference talks is knowing what you want to talk about at all.

Consider these three questions:

  1. Is there something you did that led you to reconsider your approach or opinion?
  2. Is there something you've worked on where the outcome was surprising or non-obvious?
  3. Is there something you absolutely love?

The things that surprised you, or made you change your views on something, are all good candidates for first conference talks. If you love something so much you simply must talk about it, then the decision makes itself.

If you're still lacking inspiration, you can check user groups / Reddit / Quora / LinkedIn for questions people commonly ask – if you spot a trend in something you know the answer for, it could be a good candidate to tell your story in that context.

There's no sure-fire hit for picking a topic that people will want to watch at a conference, but in my experience, enthusiasm is infectious , and if you care about something it's probably the thing you should speak about to start out.

In Summary.

You can never guarantee that you're going to write a talk that everyone will love, but this structure has worked for me across many of my talks over the last 10–12 years of public speaking.

One size doesn't fit all – and there are plenty of different talk formats that I utterly adore, but if you're struggling to get started hopefully some of this structural guidance will help you on your path to write better talks than I ever have.

Good luck!

Agile software estimation for everyone.

02/10/2021 19:15:00

Without fail, in every consulting engagement I start, there is a common theme – that failure to articulate and scope “the software development work” pushes wide and varying dysfunction through organisations.

It’s universal. It happens to everyone. And it’s nothing to be ashamed about.

Working out what to do, how hard it is, and communicating it effectively is a huge proportion of getting work done. Entire jobs (project management, delivery management) and methodologies exist around trying to do just that.

What is slightly astonishing is how prevalent poor estimation and failed projects are in technology – especially when there’s so much talk about agile, data-driven development, and measurement.

In this piece, we’re going to talk about why we estimate, what we estimate, and how you should actually estimate, hopefully to remove some of the stress and toil from what can often seem like theatrics and process.

Why we estimate.

Just as you wouldn’t hire a plumber to fix up your bathroom if they couldn’t indicate how long it would take or what it would cost, you shouldn’t start a software project without understanding its complexity. Estimation is a necessary burden that exists because people must pay for work to get done – and estimation means different things to different people.

For a team, estimation is a tool we use to understand how much work we can get done without burning out. For your C-level, estimation is a tool they use to understand what promises they feel comfortable making.

In both cases, estimation is about reducing or mitigating risk by making sure we don’t promise things we can’t deliver. At an even wider scale, estimation is simply about us understanding what we can and cannot do.

Estimation also does not exist in a vacuum - estimations are only valid for the specific set of people that make them. If you change the composition of a team or the skills it possesses, the amount and types of work that team can accomplish are going to change.

Estimation works by presuming that in a fixed period, the same people should be able to do the same amount of work, of roughly similar or known types, as they have previously. It’s based on a combination of a stable team, knowledge, and experience.

Estimates are owned by the team; they are the team’s measure.

What we estimate

Since the mid-2000s, User Stories have been the more-or-less-standard way of articulating work that needs to be done in software; they are meant to be artifacts that track a stand-alone, deployable change to a piece of software.

They should be articulated from the position of a user, following the mostly standard format of:

As a <type of user>
I want <some outcome>
So that <reason that this is important>

In traditional agile workflows, user stories tended to be written on index cards, with any associated acceptance criteria written on the back of the index card. When all the acceptance criteria are met, your story is complete.

Over time, the idea of tasking stories grew in popularity, and tasks were often stuck to the story cards using sticky notes. This was a useful physical constraint, as stories could only get so big before the sticky notes would literally fall off the cards.

To estimate, we estimate the complexity of each of these discrete pieces of work, in isolation. Our estimates are meant to be the measure of how much effort it would take the whole team to get the entire story into production.

It’s not uncommon to find different teams have different ideas about what “done” means for their stories – but over time almost everyone agrees that “in production” is what qualifies a story as done.

Some organisations may think they need to complete additional work in release process automation before this feels like an achievable goal, but I would always rather an estimate be honest and capture the existing complexity related to releasing software, if there is any.

Ultimately, for estimation to provide an accurate picture of your teams’ capability, any work the team performs should be captured as user stories and estimated.

Nothing ships to production without estimation.

Estimation Scales

Estimation in agile projects takes place using abstract measurements that represent the amount of effort it takes a given team to complete a user story.

Over the last twenty years, people have suggested different estimation scales for work.

These are:

  • Time (usually in half days)
  • T-Shirt Sizes (XS, S, M, L, XL, XXL)
  • XP’s “doubling numbers” (1, 2, 4, 8, 16, 32…)
  • The Fibonacci sequence (1, 2, 3, 5, 8, 13, 21…)
  • The Modified Fibonacci sequence (1, 2, 3, 5, 8, 13, 20, 40 and 100)
  • A few others not worth articulating.

The different methods offer different pros and cons.

Time is a generally poor estimation scale because it is both concrete and fixed. There is no room for manoeuvre when you estimate in time, and no negotiation. Estimating in time is the quickest way to disappoint your stakeholders the first time you are wrong.

Estimating in T-Shirt sizes is often useful for long-term projections and roadmap planning where the objective is to just get a feeling for the relative size of a piece of work.

XP (eXtreme Programming) originally suggested the doubling of numbers for its abstract estimation scale to capture the fact that work got more complex the bigger it became, and it was very easy for work to grow too big.

Later, the Fibonacci sequence was suggested as a more accurate way of capturing that same concept, but instead as an exponentially increasing curve, to capture the escalation of complexity. This sequence was later modified and capped off with 20, 40 and 100, for practical reasons (nobody wants to endlessly debate which enormous number is the right one to pick).

Most teams that do estimation end up using the Modified Fibonacci sequence because it captures the increasing complexity of unknown unknowns, along with being simple to remember and explicitly not being a measure of time.

How to estimate

In a planning meeting, the entire team that will be working on the work gets together and reads through the stories that have been written for them. These stories are usually prioritised by a product manager or product owner, but estimation doesn’t require this to be useful.

As a team, you hold a short two-to-three-minute discussion about each story, sharing any context or information with each other. During this discussion, people can note down the kinds of tasks that completing the work would require, discuss special knowledge they have about the story, or provide any additional context.

As a rule of thumb, it’s a good idea to list the kinds of activities involved - “some front-end work, maybe a new API, some configuration, some new deployment scripts, some UI design” as a way of highlighting hidden complexity.

This is because the more moving parts there are to a story, the more complex it will become.

At the end of this brief discussion, and crucially all at the same time, the team should estimate the complexity of the story, either using planning poker cards or just by holding up the requisite number of fingers, as if playing a game of rock paper scissors.

The team estimates simultaneously to prevent the cognitive bias of anchoring, where whoever gets in first with a number will “set the tone” for future estimates, consciously or not.

The estimate is then noted on the story card, and everyone moves on.

It’s exceptionally important that the entire team estimates every card, and the estimates are for the whole amount of effort required of the entire team.

One of the more common mistakes in estimation involves people estimating only the work for their specific function (Front-end and back-end estimates, dev and QA estimates, design and dev estimates, are all equal anti-patterns). This is problematic, because it hinders a shared understanding of complexity by all team members, along with occasionally driving people towards the anti-pattern of maintaining estimates for individual team functions.

The team estimates as a whole to increase its coherence, to share its understanding of the complexity of its work, and most importantly to replace the mental model of a single contributor with that of the whole team.

There are a few common points of conflict that happen during estimation sessions that are worth calling out.

The team can disagree on the score they are settling on.

This will happen frequently – imagine you’re estimating using modified Fibonacci and half the team believes the story to be worth 5 points, while the other half believes the story to be an 8-point story. You can resolve this conflict by always taking the higher estimate.

There is often resistance to this approach from product owners responsible for maximising team throughput, but the truth is very simple – it’s always easier to under-promise and over-deliver than to do the opposite and disappoint people.

If a team over-estimates some work, they’ll have plenty of capacity to schedule subsequent work anyway. If the larger estimate turns out to be true and the smaller number was chosen, work that may have been promised outside of the team is jeopardised.

The range of story points expressed by the team in planning poker is too wide.

Another common scenario is where vastly different numbers are selected by team members. Perhaps there’s an outlier where one team member thinks a piece of work is a 1-point story against an average of 5 points from everyone else.

You can resolve this conflict with a short discussion – perhaps that team member knows something that wasn’t shared in the prior discussion. Estimates can be discussed and revised at this stage by negotiation.
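These two resolution rules (take the higher of close estimates, and stop to talk when the spread is wide) are simple enough to sketch. The following is a toy illustration in Python; the function name and the two-scale-step spread threshold are illustrative assumptions, not a standard part of planning poker:

```python
# The modified Fibonacci scale used for votes.
FIBONACCI = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def resolve(votes, spread_threshold=2):
    """Resolve one round of planning poker: take the highest vote, unless the
    spread is wide enough that the team should discuss and vote again."""
    low, high = min(votes), max(votes)
    if FIBONACCI.index(high) - FIBONACCI.index(low) >= spread_threshold:
        return None  # too wide a spread: discuss and re-vote
    return high

print(resolve([5, 5, 8, 8]))  # → 8 (take the higher estimate)
print(resolve([1, 5, 5, 8]))  # → None (an outlier: talk it through first)
```

Taking the maximum, rather than the average, encodes the under-promise and over-deliver preference directly into the process.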

The team argues a story is inestimable, or just too big.

If the team feels the story is not refined enough to be estimated as actionable work, they may push back on the grounds that the story is too big for estimation.

When this occurs, the team should either decompose the story collectively, during planning, into smaller stories that can be estimated, or defer the story to do the same in another meeting.

If stories are frequently inestimable, unclear, or lacking sufficient detail, schedule a refinement session to occur every other day for 45 minutes. In this session, the team can collectively refine and write stories, until the backlog of work to be done becomes more approachable. The team should choose to what extent these sessions are required based on their comfort levels with the work present in the backlog.

For particularly sticky stories that appear to get repeatedly rebuffed for being inapproachable, it might be advisable to write up a spike. A spike is a time-boxed story that results in further information and research as its output and should inform the refinement and rework of stories deemed too complicated or unapproachable.

If you find yourself writing spikes, be sure to articulate and write acceptance criteria on the spike that state the exact expected outcome of the work.

Planning and estimating work often starts from a place mired in detail and complexity, but with the use of frequent refinement sessions and lightweight estimation conversations, it should mature to be more predictable and simpler as your team gains an increasing coherence.

Estimation and planning with newly formed teams will often take a few iterations to level out, as the team members slowly gain a greater understanding of the kinds of work that happens in the team.

Understanding velocity

Velocity is the term used to describe the rolling average of story points completed during iterations.

A team’s velocity is an average and represents the work of the team in its current configuration. If your team makeup changes (the number of people, the distribution of skills, holidays) your velocity will change accordingly.

Velocities are used during planning exercises to understand how much work a team can realistically expect to get done in an average iteration. It’s important to understand that velocity is not a target to be exceeded – it’s an abstract number that represents the sustainable pace of the team.
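The arithmetic behind a rolling-average velocity is trivial; here is a minimal sketch in Python, where the three-iteration window and the function name are illustrative choices rather than a rule:

```python
def velocity(completed_points, window=3):
    """Rolling average of story points completed over the most recent iterations."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

# Hypothetical points completed in the last five iterations.
history = [21, 18, 24, 20, 22]
print(velocity(history))  # → 22.0 (average of the last three iterations)
```

A wider window smooths out one-off disruptions; a narrower one reacts faster to changes in team makeup.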

Velocity is for the team and is a non-comparable measure.

Two teams, even two teams in the same organisation, reporting the same velocity number will always be talking about different things. While it is often tempting to try and baseline or compare teams based on velocity metrics, you are not comparing like for like, and any statistics gleaned from them will be false.

Velocity is designed to be used in the following scenarios:

  • Understanding how much work the team can do in each iteration.
  • Planning for holidays and sickness by deliberately reducing velocity
  • Understanding the predictability of a team – inconsistent velocity is a key indicator of poor story writing hygiene or an unpredictable environment.

The goal of a team is to have a stable velocity because a stable velocity means a team can make promises about when stories will be delivered.

Estimating roadmaps, and projection

Estimation is often desired during long term planning exercises such as company roadmaps, quarterly reviews, or other long term vision activities.

It’s useful to understand that estimation is like a shotgun – it is very accurate at short range and decreases in accuracy the further away your plan or your schedule is. Consequently, detailed estimates become useless at range.

This frequently leads to tension between business leaders, who want to communicate to their investors, the public, or their other departments, and technologists, who embrace the uncertainty of high-level planning exercises.

Road mapping and other high-level estimation processes are an entirely valid way for a business to state its intent and its desires, but what they are not, and cannot be, are schedules.

We can use high level estimating to help these processes with a few caveats. Firstly, we should always use a different estimation scale for high level estimates. This is important to ensure that wires do not get crossed and estimations conflated.

Secondly, high-level estimation processes should and will lack implementation detail, and as a result will end up with items often categorised broadly as either easy, average, large, or extra-large.

This idea of vague complexity is important for a business to understand the order of magnitude size of a piece of work, rather than day-and-date estimation of expected delivery. Once this rough size is established, further work will be required to produce more useful, team centric estimates.

An alternative approach to detailed roadmap projections, and I promise I am not joking, is counting.

Usually, roadmaps are vast and contain many themes and pieces of work, and it’s often enough to simply count the number of items and ask the honest question: “if your entire company was working on this roadmap, could you do one thing a day?” If the answer is no, repeat the question for two days, then three days, until you work out what you think your “multiplication factor” is.

Often, this number is accurate enough to understand how long a roadmap might take, in conjunction with some simple “small / medium / large” estimates over the roadmap items.
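The counting approach reduces to a single multiplication and division. This sketch uses hypothetical numbers and an assumed 21 working days per month; the function name is my own, not an established term:

```python
def roadmap_projection(item_count, days_per_item, working_days_per_month=21):
    """Turn a simple count of roadmap items and a 'multiplication factor'
    (days per item for the whole company) into a rough duration in months."""
    return (item_count * days_per_item) / working_days_per_month

# Hypothetical: 60 roadmap items, and the honest answer was "one item every two days".
print(round(roadmap_projection(60, 2), 1))  # → 5.7 months
```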

Estimation and dates

If estimates were dates, they’d be called dates, not estimates.

Estimates exist to help teams understand their sustainable pace and velocity. Using estimates to reverse engineer dates is a dangerous (if sometimes possible) non-science.

Rather than attempting to convert story points back to days-and-dates, teams should focus on providing stable velocity and release windows.

A team with a stable velocity should be able to predict the earliest point at which a feature should be delivered by combining their estimates, their capacity, and the expected end of the iteration that work is prioritised into.

The earliest point work could be expected should be that date.
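That combination of estimate, capacity, and iteration boundary can be sketched as a small calculation. This assumes two-week iterations and uses hypothetical numbers; it illustrates the idea rather than being a planning tool:

```python
from datetime import date, timedelta

def earliest_delivery(points_ahead, velocity, iteration_end, iteration_days=14):
    """Earliest date a story could ship, given the points queued ahead of it
    (including the story itself) and a stable velocity, counting from the end
    of the iteration the work is prioritised into."""
    iterations_needed = -(-points_ahead // velocity)  # ceiling division
    return iteration_end + timedelta(days=(iterations_needed - 1) * iteration_days)

# Hypothetical: 45 points queued, a velocity of 20, and an iteration ending 1st July.
print(earliest_delivery(45, 20, date(2022, 7, 1)))  # → 2022-07-29
```

Note that this produces the earliest plausible window, not a commitment; the stability of the velocity is what makes the answer worth anything.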


“But I read on twitter that estimates never work! All the cool kids are talking about #NoEstimates!”

Over time, healthy and stable teams refine their stories down to consistently shaped, manageable chunks. As these teams cut through their work, they often find that their estimates for stories are always the same, leading to a scenario where “everything is a 5!” or “everything is a 3!”.

This is amazing for predictability, because what those consistent numbers really represent are a shared mental model and a known sustainable pace.

Now, because estimates are deliberately abstract and deliberately the team’s own internal measure, if everything is a 5, and all stories are always the same size, then, well, everything could just as easily be a 1.

When everything is the same size, then the goal of estimation has been reached.

As a result, you may hear people from high-functioning, long-lived, consistent teams advocating for no estimates, because they are at the far end of this journey and have probably developed internal mechanisms for accounting for team capacity changes.

In contrast to this, teams that do not share a mental model, and a coherent internalised idea of average story size absolutely require estimation to help them walk to that place of predictability and safety.

It’s unrealistic to expect teams to magically develop a shared idea of story size and capability without giving them tools – like estimation – to measure their progress on the way.

When your team reaches a comfortable place of coherence, you’ll know when you can drop estimates.


I feel like I’ve been talking about estimation and planning for a decade and a half in various forms, and people have plenty of questions about the topic. Here are some of the most common ones.

Should we estimate tasks and subtasks?

You estimate anything at all that the team must work on in order to deliver.

There’s certainly an anti-pattern supported by digital tools with a lack of physical constraints (you can’t stick loads of subtasks onto an index card!) that leads to explosions of subtasks.

Often this is people using subtasks as a specification, rather than doing the work to split up and refine a story that is too large to reason about.

I would always prefer any tasks be present as notes on a story card, rather than things that can move, be reallocated, and block on their own. They’re either an integral part of the story, or they are not.

Either way, yes, you must point them if they are work.

How do we estimate technical tasks?

This comes up a lot – and there are two answers.


  • The technical task is part of the work of a user-facing story, and its “estimate” is just part of that work.
  • You articulate a technical user persona, who has desires of the system, and the “technical task” is a user story that meets a need for that user.

In both cases, you estimate the story like any other, based on the complexity for the whole team to complete that change in production.

Should we estimate bugs?

Bugs are only bugs when they’re on production – before that point, they’re just unfinished work.

The effort it takes to fix any bugs you find during development is part of the estimation of the story, and observations made by testers while you’re building are just part of the work.

Once a bug has made it to production? Well, it’s just another story to fix it, and it should be estimated like any other story. These stories will count towards your velocity, like any other work.

Should we estimate front-end and back-end tasks separately?

No. Estimates account for the work of the entire team, communication overhead, and delivery. Any work that people do should be accounted for in the team’s estimate.

Why are spikes time-boxed and not pointed?

Good catch! Spikes are the anomaly in this process because they are timeboxed.

They are timeboxed because they do not result in work shipped to production but result in more information. They should be treated as if whoever is doing the work is removed from the iteration for the duration of the spike, and the amount of work done in your iteration should be reduced accordingly.

Over-reliance on spikes is an anti-pattern – they’re here to help you break down meaty pieces of work, not to push analysis downwards into the team.

If we don’t finish a story at the end of a sprint, should we re-estimate it at the start of the next one?

It doesn’t really matter a huge amount: you either leave the story on your board and reduce incoming work until everything is finished, or you re-estimate the story for “work remaining”.

I don’t like re-estimating stories once they’re being worked on, and seeing as it’s less work to just do nothing, I’d recommend doing nothing and allowing the fact that velocity is a rolling average to deal with any discrepancy in your reporting.

You need different tools at different times.

Estimation and planning are probably some of the more frequently debated agile methods.

The good news is that you get better at estimation and planning by practicing it, and by refining your stories, and by functioning increasingly as a team rather than a group of individuals.

As you work towards a level of coherence where estimates become less useful, it’s important to remember that at different levels of maturity, teams need different things, and that’s totally, totally fine.

The purpose of estimation is to help give you a sense of predictability. Do not be afraid to change the process if it’s not helping you achieve that goal.

While we cannot predict the future, we can get better at estimating it with practice.

Writing good code by understanding cognitive load

02/02/2021 14:22:22

Programming is full of dogma.

Programmers are so often self-reflective and obsessed with the 'new way of doing things'. It can make programming feel like a cottage industry dedicated to best practice and leadership, patterns and practices and trying to write the 'right' code.

Amongst all the noise, it’s easy to lose sight of why people practice certain disciplines. There’s no shortage of people desperate to tell you exactly what you should be doing to make your code “good”, but it is less common for people to articulate why their approach is better.

The pursuit of good code framed as “application architecture” or “patterns and practices” often pushes codebases towards “tried and tested design patterns”.

These design patterns are model answers – examples of how you can solve a specific problem in your language and environment. They exist to help people understand how a good, well-factored solution to a problem should look.

At some point, the collective hive mind decided that remembering these patterns was a mark of experience and status, while at the same time often losing the knowledge of why those solutions – the patterns - were reached in the first place.

Unfortunately, because so much of software is context sensitive, there are many times where blindly applying design patterns makes software much more difficult to reason about and increases the cognitive load of understanding what a program does.

So how can we make sure we’re writing our programs in maintainable, sustainable, and most importantly, understandable, ways?

Without patterns, how do we know what good looks like?

Once you have seen enough codebases, in enough languages, you will realise that most of the arbitrary stylistic qualities of code that people often focus on (“tabs vs spaces!”, “brace style!”) are mostly irrelevant, and instead there are a couple of non-negotiable qualities you look for in a codebase that are far less regimented than your average JavaScript style guide.

Good code is:

  • Code that is easy for someone with minimal “domain context” to read. Code that your average developer can understand is good code.

  • Code that focuses on developer experience, debuggability, and usability. Code that you cannot debug meaningfully is not good code.

  • Code where the intent takes up more visual space than the language syntax. Code that buries what it does under the programming language is not good code.

Regardless of language, or style, or intent, all programmers interact with codebases in the same way – they glance at it to understand what it does, and they read intently to understand how it does it.

Good code, regardless of style, makes that easy.

You don’t like design patterns?

A design pattern is a commonly accepted approach to a specific type of problem.

The original works about design patterns (the famous “Gang of Four” book) managed to capture battle tested solutions that were required so often by the programming languages and runtimes at the time, that they became the “model answers”.

These design patterns were useful because they exposed shortcomings in the languages of the time. They identified common pieces of code that were written time and time again, to solve problems that existed in multiple systems. As languages have changed and evolved, the design patterns that remain relevant have mutated and changed with them.

The reason design patterns are often read as good code is because they act as a shorthand for a shared understanding that teams can reach.

This shared understanding comes with an amount of gatekeeping, and with gatekeeping, a trend for people to believe that they are the only legitimate solutions to some problems.

Because of this, people will reach for the patterns they know in inappropriate contexts, believing that is the only way. People will reach for them to prove they “know good code”. People will reach for them and end up introducing complexity instead of clarity.

I love design patterns and known good solutions when they fit, but only when the need for them can be precisely articulated.

Care about the cost of abstraction

At a really meta level, all code is constructed from abstractions and encapsulation. Regardless of programming language, and regardless of era, this has always been true.

The programming languages, runtimes and frameworks with which we write higher level code go through an enormous amount of compilation, translation, and interpretation to execute.

Programming languages themselves are common abstractions over operating system APIs, which are abstractions over assembly languages of microarchitectures, which are abstractions over physical hardware.

This trend continues upwards; the functions you write, the types you define, and the classes and files you create that make up your application are all examples of both abstraction and encapsulation in action.

For the most part, we are blissfully unaware of the huge amount of complexity that exists underneath our programming environment, because the value that abstraction brings vastly outweighs the complexity of the code running underneath it.

It is orders of magnitude easier to comprehend this (in pseudocode here):

    file = File.Open("testFile.txt")
    file.Write("Write this message to the test File.")
    file.Close()

than the following roughly equivalent x64 Linux sample (stolen with love from StackOverflow):

; Program to open and write to file
; Compile with:
;     nasm -f elf64 -o writeToFile64.o writeToFile64.asm
; Link with:
;     ld -m elf_x86_64 -o writeToFile64 writeToFile64.o
; Run with:
;     ./writeToFile64
; Author : Rommel Samanez
global _start

%include 'basicFunctions.asm'

section .data
fileName:  db "testFile.txt",0
fileFlags: dq 0102o         ; create file + read and write mode
fileMode:  dq 00600o        ; user has read write permission
fileDescriptor: dq 0
section .rodata    ; read only data section
msg1: db "Write this message to the test File.",0ah,0
msglen equ $ - msg1
msg2: db "File Descriptor=",0

section .text
    mov rax,2               ;   sys_open
    mov rdi,fileName        ;   const char *filename
    mov rsi,[fileFlags]     ;   int flags
    mov rdx,[fileMode]      ;   int mode
    syscall
    mov [fileDescriptor],rax
    mov rsi,msg2
    call print
    mov rax,[fileDescriptor]
    call printnumber
    call printnewline
    ; write a message to the created file
    mov rax,1                 ; sys_write
    mov rdi,[fileDescriptor]
    mov rsi,msg1
    mov rdx,msglen
    syscall
    ; close file Descriptor
    mov rax,3                 ; sys_close
    mov rdi,[fileDescriptor]
    syscall

    call exit

You never have to worry about all that stuff going on, because “File.Open” and its equivalents reduce the cognitive load on the developer by taking complex implementation details and removing them from your code.

As you implement your own software, sadly, not every abstraction reduces cognitive load, and it’s often a very subtle process to understand what does and doesn’t make code easier to work with.

The cognitive trade-off in extracting functions

One of the hardest challenges when working out exactly how to design your code, is understanding when the right time is to introduce abstraction or refactor out code into functions.

Consider this example:

public class Customer
{
    public List<string> Names { get; set; } = new List<string>();
}

public static class Program
{
    public static void Main()
    {
        var input1 = "My Customer Name";
        var customer = new Customer();
        customer.Names = input1.Split(' ').ToList();
    }
}

One of the more common refactorings that could be applied to this code is to extract a method from the logic that computes the customer names based on the input.

public class Customer
{
    public List<string> Names { get; set; } = new List<string>();
}

public static class Program
{
    public static void Main()
    {
        var input1 = "My Customer Name";
        var customer = new Customer();
        ExtractName(customer, input1);
    }

    private static void ExtractName(Customer customer, string input1)
    {
        customer.Names = input1.Split(' ').ToList();
    }
}

This refactoring, in a codebase only three lines long, adds nothing in return for the syntactic weight it introduces. Reading the code requires an additional hop on behalf of the reader, and introduces an entire extra concept (parameter passing) for its supposed benefit (giving the extracted operation a name).

In small examples of code, where you can rationalise and read the code in a single pass, extracting this function adds cognitive load rather than removes it.

If you read that example carefully, it relies on the fact that input1 is a well-formed string, with spaces between the names. A good counter-example, where refactoring would improve readability, is the following sample that deals with UpperCamelCaseNames.

public class Customer
{
    public List<string> Names { get; set; } = new List<string>();
}

public static class Program
{
    public static void Main()
    {
        var input = "MyCustomerName";
        var customer = new Customer();

        var parts = new List<string>();
        var buffer = "";
        foreach (var letter in input)
        {
            if (char.IsUpper(letter))
            {
                if (!string.IsNullOrWhiteSpace(buffer))
                {
                    parts.Add(buffer);
                }
                buffer = "";
            }
            buffer += letter;
        }

        if (!string.IsNullOrWhiteSpace(buffer))
        {
            parts.Add(buffer);
        }

        customer.Names = parts;
    }
}

By subtly changing the requirements, the amount of code required to solve this problem explodes in size – the cyclomatic complexity (number of conditionals in the code) increases, and with it, the ability to glance and sight read the code.

Refactoring that sample by extracting a well named Split function is vital to keeping the code legible.

public static class Program
{
    public static void Main()
    {
        var input = "MyCustomerName";
        var customer = new Customer();
        customer.Names = SplitOnCapLetters(input);
    }

    private static List<string> SplitOnCapLetters(string input)
    {
        var parts = new List<string>();
        var buffer = "";
        foreach (var letter in input)
        {
            if (char.IsUpper(letter))
            {
                if (!string.IsNullOrWhiteSpace(buffer))
                {
                    parts.Add(buffer);
                }
                buffer = "";
            }
            buffer += letter;
        }

        if (!string.IsNullOrWhiteSpace(buffer))
        {
            parts.Add(buffer);
        }

        return parts;
    }
}

Some of the most illegible and hard to follow codebases I’ve seen fall foul of this anti-pattern of prematurely extracting implementation details from the place they’re most important.

Always try and remember the Principle of locality (borrowed from physics) – “an object is directly influenced only by its immediate surroundings” and keep the implementation of logic close to the place it’s used.

Over-deconstruction of implementation can hamper readability because it forces people to context switch. If we consider the first example again –

public static class Program
{
    public static void Main()
    {
        var input1 = "My Customer Name";
        var customer = new Customer();
        ExtractName(customer, input1);
    }
}

With the removal of the implementation of ExtractName, you would be forgiven for thinking that it contained complex or lengthy logic. Since its implementation is elsewhere, it forces you to check that it does not in order to accurately understand the code. It is an abstraction that adds no value.

Extract functions when they enhance readability, rather than just incur the cost of a context switch.

The cognitive trade-off in moving code physically

If moving code between functions has a chance of both increasing or decreasing cognitive load, then moving code between files amplifies this difference.

Files and directories act as a larger context switch – you should expect things that exist in different files to contain different concepts inside of your domain, and things in different modules to describe entirely different and complete concepts.

The inverse of this is also true – splitting code out into different modules that are tightly coupled to concepts used elsewhere in your application dilutes the readability of the code, making your codebase harder to apprehend.

Understanding this is the path towards feature based code organisation, as the natural conclusion is that defining code closest to where it is used in the application or system is often the right answer.

The right time to introduce abstraction.

So far, we’ve covered the cost of context switching when dealing with abstractions, but do not think the message here is “don’t use abstractions in your software”. Abstractions are the thing you use to build up the internal language of your software, they’re what you use to craft your APIs, they exist to enhance clarity, not reduce it.

Let’s think about a few rules to follow:

Introduce abstractions to fully describe concepts in a codebase.

These abstractions can be files, directories, or types (this’ll vary by programming language). You should absolutely create files and types to capture core-domain concepts like “Users” and not have that logic leak throughout your code.

These encapsulations should be kept as near to the parts of the code that use them as possible.

The more distinct a concept is from the code that calls it, the further away it should be physically located in your codebase.

For example, the type or file related to “Users” should be located near the implementation of features that interact with “Users” – not arbitrarily split into another package or assembly, because the concept of “Users” is application specific and doesn’t need to be used elsewhere.

Beware prematurely creating shared Domain or Core packages, because they often become dependency magnets that get inappropriately reused and are hard to untangle. Resist the urge to extract the core concepts of your application from the application itself. The moment you extract this code into its own assembly, you introduce the chance that a breaking change impacts another application that uses it, which increases cognitive load.

Conversely, abstractions that describe concepts unrelated to what your application does should be kept further and further from where you use them. Don’t bury abstractions relating to infrastructure deep inside feature code and accidentally tie your application’s behaviours to its hosting environment.

When things get big – moving to repositories and published packages.

As your application grows, it will become increasingly less maintainable by virtue of its size. This isn’t your fault, it’s an inevitability. The larger something becomes, the harder it is to fit the thing inside your head, and with this will come talk of decomposition.

There are two main ways of decomposing an application:

  • You split out full vertical slices of its functionality into different “bounded contexts” (this is the original microservice vision)

  • You split out lower-level concepts into their own packages or modules and maintain them separately from the application.

Decomposing an application has value – it allows you to fit more people around a problem and scale your teams, but as you sub-divide something that was once a single thing, complexity and interdependencies add to cognitive load.

If you’re lucky, and the original codebase was feature-organised, then splitting the application up should be a relatively simple task – but it will inevitably result in one of two things:

  • Shared component libraries are extracted from the original application.
  • Code is duplicated between the two new applications.

There’s a natural tension between those two approaches – duplicating the code gives the owners of the respective codebases total control over their entire application, while introducing shared dependencies adds a layer of complexity where the applications may require coordination in order to grow.

This cost of code sharing is real and significant and should be considered carefully.

It is important that the shared code encapsulates significant complexity to be worthy of the additional logistical costs of sharing and maintenance. Smaller pieces of shared code should be trivially duplicated to avoid this cost.

The cost of reuse

But reuse is good!

I hear you cry, sobbing in the distance.

The SOLID principles!


I’m not listening.

In the eternal tension between DRY (don’t repeat yourself) and SRP (single responsibility), SRP should nail DRY into a coffin.

Extracting something out to be an independent package means that you need to make sure that re-use justifies the cost of:

  • Maintaining a repository
  • Ensuring the repository always has an owner
  • Running a pull request process for that component
  • Writing documentation
  • Creating build pipelines to build, test, and release that component
  • The extra leap it takes to change this component in any system that uses it

Reuse is great because it stops you solving the same problems twice, but as soon as something gets promoted to being its own first-class concept in your software, it needs to be maintained like one.

This is work. It’s not always complicated work, and you may have good patterns and automation to help you deal with this process, but reusing components isn’t as free as you think it is.

In fact, the cost of change with re-usable code is an order of magnitude higher than just putting some code in your application.

It’s ok! We’ll use a mono-repository!

In languages and ecosystems that were born in the age of package management (Node and Python especially) it has been the norm for years to split dependencies into lots of very small, often open source and reusable packages.

While at first this separation of concerns felt like a good thing, the over-eagerness to divide out applications caused increasing debugging and development friction, to the point where entire tools, like Lerna, were created just to… copy files back into the applications that they should never have been removed from in the first place.

Because those “packages” were never real, they were just the application code.

In a sense, you can look to the proliferation of the term “mono-repository” as something that tracks the presence of codebases so DRY that they might turn to dust.

The real answer is neither “use a mono-repo”, nor “split out all the things”, but in fact is this – whatever moves together, and is versioned together, and deploys together, is the same thing, and lives together.

Thoughtful Code

There’s clearly a cost and benefit to each of the refactorings you can make to your codebase. The cost of introducing abstractions and encapsulations has to be lower than leaving the code where it is or repeating it in another location.

As a guiding, thoughtful principle, it’s always worth thinking “what am I gaining from moving this piece of code or changing the way it describes a concept?” and frequently, that little bit of time and thoughtfulness is enough to combat dogma in your own work.

If you can’t articulate why you are moving something, then you probably shouldn’t do it. Here, in contrast, are some good reasons for introducing abstractions -

Creating an abstraction to describe concepts more clearly.

Extracting to provide some meaningful syntax over what the code does, rather than how it achieves it, is a good way to make your code easier to understand. Much like the name parsing example above, make sure that you’re encapsulating enough behaviour for this to be valuable.
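As a tiny, hypothetical illustration of “what over how” (the function and its rules are invented for this sketch) – the call-site reads as intent, while the mechanics hide behind the name:

```python
def full_name(first: str, last: str) -> str:
    # The *what*: "give me a display name".
    # The *how* - stripping, skipping blanks, title-casing -
    # is encapsulated here, out of the reader's way.
    parts = (p.strip() for p in (first, last))
    return " ".join(p for p in parts if p).title()

print(full_name("  ada ", "lovelace"))  # → Ada Lovelace
```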

Splitting out something that is expected to grow large.

In many codebases you start building something that is clearly a stub of an implementation, a placeholder for something that’ll become bigger. Appreciating that this is an entirely different concept than the rest of the code it is near is a good reason to either provide an extensibility point or move your nascent implementation into another file somewhere else.

Increasing testability or developer ergonomics.

It’s a common and good thing to extract out particularly significant pieces of code so that they can be independently isolated and tested. It’s vital to keep important code factored out and free from dependencies of the system around it.

For safety.

In some cases, it makes sense to physically move important components out of your application and into a module or package if you explicitly want to introduce ceremony around its modification – either for peer review or safety reasons.

To reduce visual noise at the call-site.

Sometimes it’s important to factor code out to reduce the cognitive load in reading the code that depends upon it. Anything that improves readability and clarity in your application is an excellent reason.

To encourage reuse.

While it’s useful to understand the reasons that you wouldn’t want to reuse code, it’s just as important to understand that reuse is a powerful tool. It can help keep alignment across systems, and to prevent waste in fixing the same problems time and time again.

Appropriate reuse should be encouraged, just so long as you’re aware of the associated costs.

Does this make my code good?

Whenever you introduce anything into code, being thoughtful about the reasons why, and the developer experience of the resulting code should always be your guide.

There isn’t really such a thing as “good code” – there’s just code that works, that is maintainable, and is simple. There’s a good chance you’ll end up in a better place if you spend time and effort making sure the cognitive load of your application is as low as possible while you code.

And on the way, you might realise that removing some of the abstractions in your codebase is a simpler path than over-design.

9 time-management tips for developers

11/16/2020 13:10:00

One of the things I see absolutely plague developers is time management. There is never enough time in the day to both do work, and communicate it. This counts extra in an increasingly distributed world where clear and accurate communication is more important than ever.

The other side of this desire for communication is people clinging to filling diaries up with meetings as a way to "keep people connected" - but if you're not careful, this can significantly backfire.

We've all been there, and we've all seen it:

  • "too many meetings!"
  • "I can't get my work done because of all these meetings!"
  • "I don't feel like I have control over my time!"

Here are a few things I've done over the years to make sure I can actually get things done.

1. Diarise your own time.

Your calendar doesn't exist purely for other people to fill your time up. It is yours. You can use it.

During consulting gigs where people bombard you with requests, the moment someone asks me to do something, or to look at something, I'll block out a slot of at least one hour in my calendar to actually do the work and think about the problem.

This:

  • Helps you remember to do the thing
  • Organises your day
  • Stops meetings preventing work

This is a great trick to help set expectations of when people will expect to see results from you and embraces the fact that doing any work takes time and space.

This is perhaps the easiest way to make sure meetings don't get in the way of doing.

2. Politely reject meetings that do not have agendas or stated outcomes.

Many meetings are formless & ineffective, becoming unstructured thought & conversation. All meeting requests should come with either an agenda or an explicit goal.

Help by making sure your own invites do.

3. Leave meetings where you are adding or gaining nothing.

I suspect the biggest cost-waste in most organisations is meetings with passive participants.

Respect your own time by observing the law of two feet and politely excusing yourself to get other things done.

4. Be present in the meetings you do attend.

No laptops unless you're taking notes or doing some meeting related activity. It's simple, it's respectful.

If you feel like you have the brain space to do other things, see point 3 - observe the law of two feet and get up and leave.

5. Get into the habit of circulating meeting outcomes

There is no need to minute or notate the vast majority of meetings - but any outcomes and decisions should be circulated to the exact invite group of the meeting after the fact.

This gives people the confidence that if they cannot attend a meeting in real time because of conflicts, they will still be able to understand the outcomes.

This is part of the respectful contract: not everyone you wish to attend a meeting will be able to.

6. Decline meeting invites if you cannot, or do not want to, attend.

To solidify the human contract that people will circulate conclusions and communicate effectively, you need to let people know if you're not coming.

You don't have to tell them why, but it'll help them plan.

7. Don't miss mandatory team communication meetings.

There's a pattern of fatigue that develops when meetings start to feel rote or useless. Don't respond by not attending, respond by "refactoring your meetings".

Discuss cancelling pointless recurring events to free up time.

8. Schedule formless catchup meetings

It's easy for meetings to devolve into chatter, especially in 2020, where we're all craving human contact. Set meetings up with this goal.

A social meeting is not a crime, it's optional, and people that are craving contact will thank you.

9. Consider doing work in meetings rather than talking about it.

Many meetings can be replaced with mob programming sessions, where you do the thing, instead of discussing it.

Doing the work is the best way to know if it's right & valuable.

Prefer this format where possible.

Changing a meeting culture

It's easy to think that you have no control over your time, or that the meeting culture of your organisation won't change - this is a real concern, but there are a few things you can do to counter it.

Regardless of the meeting culture of your organisation, you can make sure your own meeting requests follow these guidelines. Encourage your peers to follow suit. This goes doubly if you're in a position of power or responsibility, like a team lead or a head-of. You can normalise good meeting discipline.

You can also help nudge meeting culture out of the way by talking to meeting organisers about the above as meetings appear. Challenge meeting requests with friendly questions, and offer to help improve meeting invites so people can be more prepared.

Effective meetings are wonderful, people are wonderful, and getting work done requires brain space and time.

Don't let meetings crush your work and motivation 🖤

Does Functional Programming Make Your Code Hard To Read?

10/14/2020 15:40:00

Work in software long enough and you'll meet people that absolutely adore functional programming, and the families of languages that self-select as "functional" - even if you might not realise what it means.

"In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that each return a value, rather than a sequence of imperative statements which change the state of the program."

From Wikipedia

Originally coined as a term to describe an emerging family of languages - Haskell, LISP, Standard ML, and more recently applied to languages that follow in their spiritual footsteps like F# and Clojure - functional programming languages are very much the mathematical end of programming.

"Functional programming languages are a class of languages designed to reflect the way people think mathematically, rather than reflecting the underlying machine. Functional languages are based on the lambda calculus, a simple model of computation, and have a solid theoretical foundation that allows one to reason formally about the programs written in them."

From "Functional Programming Languages", Goldberg, Benjamin

This style of programming leads to applications that look very dissimilar to procedural or object-orientated code bases, and it can be arresting if you're not used to it as a style.

Let's take a look at an example of a program written in Clojure

user=> (if true
         (println "This is always printed")
         (println "This is never printed"))

and that same logic using C-like syntax

if (true) {
    println("This is always printed");
} else {
    println("This is never printed");
}

These two programs do exactly the same thing, but the way that control flows through them is different. On smaller programs, this might not seem like a significant difference in style, but as a program grows in size, the way that control flow and state is managed between functional and procedural programming languages differs greatly, with larger functional-styled programs looking more like large compound equations than programs that you "step through".

Consider this example - a port of some of the original "Practical Common Lisp" samples into Clojure:

[Image: a screenshot of the ported Clojure source]

This is a good, and well factored sample.

Any programming language that you don't know will feel alien at first glance, but consider the style of programming as you see it here - with functions composed as the unions of smaller, lower level functions.
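To make that composed-of-smaller-functions style concrete without the Clojure syntax, here's a rough Python equivalent (the functions are invented purely for illustration):

```python
# Three small, single-purpose functions...
def clean(words):
    return [w.strip().lower() for w in words]

def keep_long(words):
    return [w for w in words if len(w) > 3]

def shout(words):
    return [w.upper() for w in words]

# ...and the "program" is just their composition - one compound
# expression, rather than a sequence of steps mutating state.
result = shout(keep_long(clean([" Code ", "is", " Data "])))
print(result)  # → ['CODE', 'DATA']
```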

With C-like languages (C, C++, Java, JavaScript, C#, Ruby, et al) being the dominant style of programming, if you've not seen functional code before, it at least appears different on a surface level.

Why do people do this?

Ok, so just because it's different doesn't make it bad, right?

Functional programming has got pretty popular because state management and mutability are hard, and cause bugs.

What does that mean?

Well, if your program operates on some variables that it keeps in memory, it's often quite difficult to keep track of which parts of your program are modifying those variables. As programs grow, this becomes an increasingly difficult and bug-prone task.

People that love functional programming often cite "immutability" as one of its biggest benefits - the simplest way of thinking about it is that the functions that make up a functional program can only modify data inside of them, and return new pieces of data. Any variables defined never escape their functions, and data passed to them doesn't have its state changed.

There's a really nice benefit to this: each individual function is easy to test and reason about in isolation. It really does help people build code with fewer bugs, and the bugs that are present are often contained to single functions rather than sprawled across a codebase.
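A minimal sketch of that immutability in Python (with a tuple standing in for the persistent data structures a real functional language would provide):

```python
def add_item(cart: tuple, item: str) -> tuple:
    # The input tuple is never modified - a new one is returned,
    # so no other part of the program can be surprised by a change.
    return cart + (item,)

original = ("apple",)
updated = add_item(original, "pear")

print(original)  # → ('apple',)  - untouched
print(updated)   # → ('apple', 'pear')
```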

Functional programs often remove your need to think about the global state of your program as you're writing it, and that's a pleasant thing for a programmer.

Let's make everything functional!

So yes, some people reach that conclusion - and actually over the last decade or so, there's been a trend of functional syntactic constructs (that's a mouthful) being introduced into non-functional languages.

This probably started with C#'s Linq (Language Integrated Query - no, I don't understand quite how they reached that acronym either), and continued with functional constructs in Kotlin (probably strongly inspired by C#), Swift, the general introduction of Lambdas and "functions as data" into a lot of programming languages, and more recently, hugely popularised by React.js being an implementation of "functional reactive programming".
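The shape of these borrowed constructs is the same everywhere - a declarative pipeline instead of a loop. In Python it might look like this (the Linq equivalent would chain Where and Sum):

```python
orders = [120, 45, 300, 80]

# Declarative: say *what* you want (the total of large orders),
# not *how* to loop and accumulate it.
total_large = sum(o for o in orders if o > 100)
print(total_large)  # → 420
```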

This subtle creep of functional style programming into more mainstream programming environments is interesting, because more and more people have been exposed to this style of programming without necessarily studying its nuance, or having to buy into a whole ecosystem switch.

It's led to codebases with "bits of functional" in them, and in some of the more extreme cases, the idea of a "functional core" - where core programming logic is expressed in a functional style, with more traditional procedural code written to deal with data persistence and the bits at the edges of the application.

As more hybrid-functional code exists in the real world, there's an interesting antipattern that's emerged - functional code can be exceptionally difficult to comprehend and read.

But this isn't the functional utopia you promised!

Reading code is a distinctly different discipline to being able to write it.

In fact, reading code is more important than writing it, because you spend more of your time reading code than writing it - and functional programming suffers a lot at the hands of "the head computing problem".

Because functional programming optimises for the legibility of reading a single function and understanding that all of its state is contained, what it's far worse at is helping you comprehend the state of the entire application. That's because to understand the state of the entire application outside of running the code, you have to mentally maintain the combined state of the functions as you read.

This isn't particularly different to code written in a more linear procedural style where, as you read, you imagine the state as it's mutated - but with functional applications, you're also trying to keep track of the flow of control through the functions as they are combined, something that is presented linearly in a procedural language as you follow the flow of control line-by-line.

Holding these two things in your mind at once involves more cognitive load than just following the control flow through a file in a procedural codebase, where you're just keeping track of some piece of state.

Proponents of functional programming would probably argue that this comprehension complexity is counterbalanced by how easy it is to test individual functions in a functional application, as there's no complicated state to manage or build up, and you can use a REPL to trivially inspect the "true value".

Reading difficulty spikes however, are the lived experience of many people trying to read functional code, especially in apps-with-functional-parts.

What we really have is a context problem

This leads to an oft repeated trope that "functional is good for small things and not large applications" - you get the payoff of easy state free comprehension without the cognitive dissonance of trying to comprehend both the detail of all the functions, and their orchestration at once.

As functional codebases get bigger, the number of small functions required to compose a full application increases, as does the corresponding cognitive load. Combine this with the "only small parts" design philosophy of many functional languages, and if you're not careful about your application structure and file layout, you can end up with an application that is somehow at once not brittle, but difficult to reason about.

Compared to Object Oriented approaches - OO code is often utterly fixated on encapsulation, as that's kind of the point - and a strong focus on encapsulation pushes a codebase towards patterns of organisation and cohesion where related concepts are physically stored together, in the same directories and files, providing a kind of sight-readable abstraction that is often easy to follow.

OO codebases are forced into some fairly default patterns of organisation, where related concepts tend to be captured in classes named after their functionality.

But surely you can organise your code in functional languages?


I suppose I just feel like OO code and its focus on encapsulation pushes code towards being organised by default. By comparison, FP code revels in its "small and autonomous" nature, perhaps a little to its detriment.

It often is trying not to be everything OO code is, when actually a little bit of organisational hierarchy by theme and feature benefits all codebases in similar ways.

I absolutely believe that FP code could be organised in a way that gives it the trivial sight-readability of "class-ical" software, combined with the state free comprehension benefits of "small FP codebases", but it definitely isn't the default stylistic choice in much of the FP code I've seen in industry.

There's a maths-ish purism in functional code that I find both beautiful, and distressing, all at once. But I don't think it helps code comprehension.

I don't think people look at maths equations and succeed at "head computing" them without worked examples, and in the same way, I think programmers often struggle to "head compute" functional programs without a REPL or step debugger, because the combined effect of the functions is opaque - and computing it is exactly what a computer is good at.

I also absolutely appreciate that some people love the exact same thing that I, with my preference for more expressive and less constrained syntax, find an unexpressive chore.

We're really talking about what the best abstraction for reading is

There's a truism here - and it's that hierarchy and organisation of code is the weapon we have to combat cognitive load, and different programming styles need to take advantage of it in different ways to feel intuitive.

All abstractions have a cost and a benefit - be it file and directory hierarchy, function organisation and naming, or classical encapsulation. What fits and is excellent for one style of interaction with a codebase might not be the best way to approach it for another.

When chatting about this on twitter, I received this rather revealing comment -

"Whenever I've seen functional in the wild there is inevitably large chunks of logic embedded in 30-line statements that are impossible to grok. I think it is due to being such a natural way to write-as-you-think but without careful refactoring it is a cognitive nightmare"

Stephen Roughley (@SteBobRoughley)

This is many people's truth about encountering functional code in the wild, and I don't think it has to be this way - it really shares a lot in common with all the "write once, read never" jokes thrown at Regular Expressions - which are really just tiny compounded text-matching functions.

Does functional programming make your code hard to read then?

Like everything in programming, "it depends".

Functional code is beautiful, and side effect free, but for it to be consumed by people that aren't embedded in it, it often needs to meet its audience half way with good approaches to namespacing, file organisation, or other techniques to provide context to the reader so they feel like they can parse it on sight, rather than having to head compute its flow control.

It's easy to get swept away in the hubris of programming style without considering the consumers of the code, and I think well organised functional codebases can be just as literate as a procedural program.

Functional parts of hybrid apps have a steeper challenge - to provide the reader with enough context during the switch from another programming style that they do not feel like they're lost in myopic and finely grained functions with no structure.

Beware cognitive load generators :)
