Technology Blog

Lo-Fi Service Discovery in .NET 8

11/21/2023 22:00:00

The vast majority of systems that you build will inevitably call an HTTP API at some point - whether it's a microservice, a third-party API, or a legacy system. Because of this, it's not uncommon to see applications with reams of configuration variables defining where their downstream dependencies live.

This configuration is frequently a source of pain and duplication, especially in larger systems where tens or hundreds of components need to keep track of the locations of downstream dependencies, many of which are shared, and almost all of which change depending on deployment environment.

These configuration values get everywhere in your codebases, and coordinating changes to them when something changes in your deployed infrastructure is often very difficult.

Service discovery to the rescue

Service Discovery is a pattern that aims to solve this problem by providing a centralised location for services to register themselves, and for clients to query to find out where they are. This is a common pattern in distributed systems, and is used by many large-scale systems, including Netflix, Google, and Amazon.

Service registries are often implemented as an HTTP API, or via DNS records on platforms like Kubernetes.

Service Discovery Diagram

Service discovery is a very simple pattern consisting of:

  • A service registry, which is a database of services and their locations
  • A client, which queries the registry to find out where a service is
  • Optionally, a push mechanism, which allows services to notify clients of changes

In most distributed systems, teams tend to use infrastructure as code to manage their deployments. This gives us a useful hook, because we can use the same infrastructure as code to register services with the registry as we deploy the infrastructure to run them.

Service discovery in .NET 8 and .NET Aspire

.NET 8 introduces a new extensions package - Microsoft.Extensions.ServiceDiscovery - which is designed to interoperate with .NET Aspire, Kubernetes DNS, and App Config driven service discovery.

This package provides a hook to load service URIs from app configuration JSON files, and subsequently auto-configures HttpClient instances to use those service URIs. This allows you to use service names in the HTTP calls in your code, and have them automatically resolved to the correct URI at runtime.

This means that if you're trying to call your foo API, instead of calling

var response = await client.GetAsync("http://192.168.0.45/some-api");

You can call

var response = await client.GetAsync("http://foo/some-api");

And the runtime will automatically resolve the service name foo to the correct IP address and port.

This runtime resolution is designed to work with the new Aspire stack, which manages references between different running applications to make them easier to debug, but because it has fallback hooks into App Configuration, it can be used with anything that can load configuration settings.

Here's an example of a console application in .NET 8 that uses these new service discovery features:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Register your appsettings.json config file
var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .Build();

// Create a service provider registering the service discovery and HttpClient extensions
var provider = new ServiceCollection()
    .AddServiceDiscovery()
    .AddHttpClient()
    .AddSingleton<IConfiguration>(configuration)
    .ConfigureHttpClientDefaults(static http =>
    {
        // Configure the HttpClient to use service discovery
        http.UseServiceDiscovery();
    })
    .BuildServiceProvider();

// Grab a new client from the service provider
var client = provider.GetService<HttpClient>()!;

// Call an API called `foo` using service discovery
var response = await client.GetAsync("http://foo/some-api");
var body = await response.Content.ReadAsStringAsync();

Console.WriteLine(body);

If we pair this with a configuration file that looks like this:

{
  "Services": {
    "foo": [
      "127.0.0.1:8080"
    ]
  }
}

At runtime, when we make our API call to http://foo/some-api, the HttpClient will automatically resolve the service name foo to 127.0.0.1:8080. For the sake of this example, we've stood up a Node/Express API on port 8080. Its code looks like this:

const express = require('express');
const app = express();
const port = 8080;

app.get('/some-api', (req, res) => res.send('Hello API World!'));
app.listen(port, () => console.log(`Example app listening on port ${port}!`));

So now, when we run our application, we get the following output:

$ dotnet run
Hello API World!

That alone is pretty neat - it gives us a single well-known location to keep track of our services, and allows us to use service names in our code, rather than having to hard-code IP addresses and ports. But this gets even more powerful when we combine it with a mechanism to update the configuration settings the application reads from at runtime.

Using Azure App Configuration Services as a service registry

Azure App Configuration Services provides a centralised location for configuration data. It's a fully managed service, consisting of containers - key/value stores that can be used to hold configuration data.

App Configuration provides a REST API that can be used to read and write configuration data, along with SDKs and command line tools to update values in the store.

When you're using .NET to build services, you can use the Microsoft.Extensions.Configuration.AzureAppConfiguration package to read configuration data from App Configuration. This package provides a way to read configuration data from App Configuration Services, integrating neatly with the IConfiguration API and ConfigurationManager class.

If you're following the thread, this means that if we enable service discovery using the new Microsoft.Extensions.ServiceDiscovery package, we can use our app config files as a service registry. If we combine this extension with Azure App Configuration Services and its SDK, we can change one centralised configuration store and push updates to all of our services whenever changes are made.

This is really awesome, because it means that if you're running large distributed teams, so long as all the applications have access to the configuration container, they can address each other by service name, and service discovery will automatically resolve the correct IP address and port, regardless of environment.

Setting up Azure App Configuration Services

You'll need to create an App Configuration Service. You can do this by going to the Azure Portal, and clicking the "Create a resource" button. Search for "App Configuration" and click "Create".

Create App Configuration Service

For the sake of this example, we're going to grab a connection string from the portal, and use it to connect to the service. You can do this by clicking on the "Access Keys" button in the left hand menu, and copying the "Primary Connection String". You'd want to use RBAC in a real system.

We're going to add an override by clicking "Configuration Explorer" in the left hand menu, and adding a new key called Services:foo with a value of:

[
	"value-from-app-config:8080"
]

and a content type of application/json.

Setting up the Azure App Configuration SDK

We need to add a reference to the Microsoft.Extensions.Configuration.AzureAppConfiguration package to access this new override. You can do this by running the following command in your project directory:

dotnet add package Microsoft.Extensions.Configuration.AzureAppConfiguration

Next, we modify the configuration bootstrapping code in our command line app.

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var appConfigConnectionString = "YOUR APP CONFIG CONNECTION STRING HERE";

var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .AddAzureAppConfiguration(appConfigConnectionString, false) // THIS LINE HAS BEEN ADDED
    .Build();

This adds our Azure App Configuration as a configuration provider.

Nothing else in our calling code needs to change - so when we execute our application, you'll notice that the call now fails:

$ dotnet run
Unhandled exception. System.Net.Http.HttpRequestException: No such host is known. (value-from-app-config:8080)
 ---> System.Net.Sockets.SocketException (11001): No such host is known.
   at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
   at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token)
   at System.Net.Sockets.Socket.<ConnectAsync>g__WaitForConnectWithCancellation|285_0(AwaitableSocketAsyncEventArgs saea, ValueTask connectTask, CancellationToken cancellationToken)
   at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken)
   --- End of inner exception stack trace ---
   ...

If you look at the error message carefully, it's now trying to connect to value-from-app-config:8080 - which is the value we put in our App Configuration container.

In a real-world scenario, you would want to configure refreshing of the configuration values, following Microsoft's Azure App Configuration refresh guidance.
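
As a rough sketch of what that wiring looks like in our console application - assuming the Microsoft.Extensions.Configuration.AzureAppConfiguration options API, and a hypothetical Services:Checksum sentinel key - registering a refresh trigger looks something like this:

var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .AddAzureAppConfiguration(options =>
    {
        options.Connect(appConfigConnectionString)
            // Reload all configuration whenever this single sentinel key changes.
            // "Services:Checksum" is an illustrative key name, not a convention.
            .ConfigureRefresh(refresh =>
                refresh.Register("Services:Checksum", refreshAll: true)
                       .SetCacheExpiration(TimeSpan.FromSeconds(30)));
    })
    .Build();

Note that the provider only checks for changes when a refresh is triggered - in ASP.NET Core that's typically handled by middleware, while in a console application you'd keep hold of the IConfigurationRefresher returned by options.GetRefresher() and call TryRefreshAsync() periodically.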

Populating values in the real world

We've gone into detail about how you can configure service discovery using a combination of the new Microsoft.Extensions.ServiceDiscovery package, and the Microsoft.Extensions.Configuration.AzureAppConfiguration package, along with an Azure App Configuration Service Container - but this is all useless if you can't populate the values in the first place.

Unfortunately, this entirely depends on how you automate your deployments. But in principle, you can use the Azure App Configuration SDK, CLI, or API to populate the values in the container. You'll likely want to do this when your infrastructure as code runs (Pulumi/Bicep/Terraform), or as part of your CI/CD pipeline.
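
For example - assuming the Azure CLI, and a store named my-config-store as a stand-in for your own - a pipeline step to upsert a service entry could look like this:

az appconfig kv set --name my-config-store --key "Services:foo" --value '["10.0.0.12:8080"]' --content-type application/json --yes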

As part of these updates, I'd also recommend hashing all the values and adding a checksum key into the configuration store. This will allow you to monitor a single key from the client-side SDKs, and trigger a refresh when the checksum changes.
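
Sketching that idea out with the Azure.Data.AppConfiguration SDK - where the endpoints, keys, and the Services:Checksum sentinel are all illustrative assumptions rather than anything prescribed - a deployment step might look something like this:

using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
using Azure.Data.AppConfiguration;

var appConfigConnectionString = "YOUR APP CONFIG CONNECTION STRING HERE";
var client = new ConfigurationClient(appConfigConnectionString);

// The service locations this deployment owns - each value is a JSON array of endpoints
var services = new Dictionary<string, string>
{
    ["Services:foo"] = "[\"10.0.0.12:8080\"]",
    ["Services:bar"] = "[\"10.0.0.13:8080\"]"
};

foreach (var (key, value) in services)
{
    client.SetConfigurationSetting(new ConfigurationSetting(key, value)
    {
        ContentType = "application/json"
    });
}

// Hash all the values into a single checksum key that clients can monitor
var hash = SHA256.HashData(Encoding.UTF8.GetBytes(string.Join("|", services.Values)));
client.SetConfigurationSetting("Services:Checksum", Convert.ToHexString(hash));

Because the refresh sketch earlier registers Services:Checksum as its sentinel, writing the checksum last means a deployment only becomes visible to clients once all of its service entries are in place.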

If you're still working on automating your infrastructure, this technique can still be useful as you can use the portal itself to update the values in the container, and the SDK will automatically pick up the changes.

Conclusion

While a lot of the experiences being built for Aspire have limited value for larger distributed systems, I think this is an excellent example of how we can use some of the low-level features of the Aspire stack to build useful tools for other use-cases.

While we've focused purely on config file driven service discovery in this piece, you can implement custom resolvers to use other service registries like HashiCorp Consul, or a home-grown solution.

The Surface Area of Software

12/04/2022 18:00:00

The surface area of software is often very complicated to comprehend, and a direct result of that is that we're often subjected to amateur discourse about how teams or organisations should have worked on a thing, based purely on outside observation. It's almost always wrong, and I want to spend some time talking about how software is produced, and how it scales.

For people not close to code, these examples can kind of look like similar situations with somehow opposite suggested "outcomes", so let's talk about the impact size has on code. We'll start with the most difficult thing for non-technical folks to get their heads around:

  1. The observed surface area of a system often has very little in common with its actual size, either in staff count, or lines of code.

The obvious example of this is Google and its "one text box" UI, covering a massive indexing (+more) operation.

  2. More code is frequently not a better scenario to find yourself in. Code atrophies and rots over time naturally. Every line of code you write increases your total maintenance cost.

Doing things with less is most commonly a better outcome.

  3. There's only a certain number of people that can "fit around" codebases of a given size - once the size of your teams outgrows the code, work slows rather than speeds up, because the rate of conflicts, contention over hotspots, and coordination all increase.

  4. To fit more people around a codebase, systems are often decomposed into different libraries, subsystems or services. This expands the footprint of the teams you can surround your code with (to increase parallelism of work) but at least doubles the amount of coordination needed.

  5. Microservice architectures largely help because they allow organisations to define boundaries around pieces of their software ("bounded contexts") and parallelise work - at the cost of expensive coordination and runtime latency.

  6. Equally, software is sometimes decomposed to help it scale in technical terms, rather than to match its human needs (fitting more folks around a codebase) - this is what most people think of as "scalability" first, but the former is more common.

Few have technical scaling requirements or limitations.

  7. Every subdivision of software tends to increase the total complexity of comprehending a system - it makes it less likely anyone can keep it all in their heads, it increases possible failure points (both technical and human) and increases the total cost of ownership of code.

  8. Why? Simple - each time you divide up your software, you have to also create new supporting structures (teams, work tracking, CI+CD pipelines, infrastructure) in order to allow the now bigger teams to be self sufficient and have enough autonomy.

  9. Lots of software does this too readily (see the knee-jerk reactions in the form of monorepos and back-to-monolith trends) - dividing your software can be good, but it requires thoughtfulness and intention to accept the cost.

  10. This isn't news, @martinfowler was talking about monolith-first designs about a decade ago. I like to think of it as "fix things that hurt". A lot of the backlash against microservice architecture is really just folks getting lost in premature complexity too soon.

  11. In both kinds of scale - human and technical - you should build for a reasonable amount of growth. Some folks say an order of magnitude, others say 10x traffic, but it should only be one big leap.

Don't copy what big tech do before you are big tech.

  12. The important rule is this - in modern software, the best software is software that is malleable, software that can be added to without buckling, with foundations made for growth later, but not now.

This is why basically all layperson takes on software are wrong.

Folks look at software and presume they can predict its form from its external surface area (false), its complexity from its ease of use (actually often the inverse), and fall into the mythical man-month trap (RIP Fred!) of "if one woman can make a baby in 9 months, get 9 women!".

The size of code is a function of your team size, the subdivisions in your software, and your costs and plans.

It's all a compromise. And it's never "just go faster" or "just spend more" - even if those things can sometimes help, they can just as often hinder and bury a project.

Making software, design and architecture "just the right size" is really very difficult. Many, many systems have fallen into the nanoservices/distributed monoliths trap, even more into premature internal package management hell.

Remember every subdivision of a system has a cost.

Notes on the Monorepo Pattern

12/04/2022 15:00:00

Monorepo (meaning "a singular repository") is a term coined by Facebook to describe a single repository that contains all the code for a project.

It is a pattern that has been used by many large companies, including Google, Facebook, Twitter, and Microsoft. It is also used by many smaller companies, including GitHub, and by many open-source projects, including the Linux kernel.

It is frequently misinterpreted to mean "all of the software that we build", and I want to share some notes that clarify where monorepos succeed, and fail, in organisations of various sizes.

Where do monorepos work?

Monorepos work well on an inverse bell curve of productivity relative to the size of the software and teams that you have:

  • when your repo is just really one app and a "component library" (...just the bits of the app in some bad directory layout)
  • when you have a very low number of apps you are coupling together via source control
  • when you have apps that either change super infrequently, or are all sharing dependencies that churn all the time and must be kept in lockstep.
  • when you've really just got "your app and a few associated tools" - that's very "same as it ever was" because so few repos ever had "just one tiny piece of a system" in them to start with.

Unfortunately, the zone of productivity for these organisational patterns - in my opinion - is a trap that folks fall into.

Most software doesn't fit the categories mentioned above.

Software tends to move at medium speed, with SME-shaped teams - and in those situations monorepos are hellish, fraught with problems that only occur once you've opted in, wholesale, to that organisational structure.

Alternatives that match those problem spaces

In most of those cases:

  • when the software is really just one app - you should use directories instead of complicated build tools
  • when it's all for some shared libraries - you're going to reach a point where you want to version them distinctly, because the blast radius of change is going to start hard-coupling your teams together over time

It's trivially easy to end up in the bad place where teams have tightly coupled deployments that get extremely slow, and have to be rescued with tools like nx that frequently take over your entire development workflow (bad!)

But the biggest red flag with them is obvious - we've been here before and it sucked!

Just an old solution

The first decade of my career, before DVCS (distributed version control systems), was spent in what were effectively big monorepo source trees, and it was absolutely horrible and fraught with the same coupling risks. So we changed!

Git is designed for narrower slices, and doing the monorepo dance in medium to large orgs with all your software will leave you inevitably fighting your tools, both in build, deployment, and source control scenarios.

The sane approach is this:

Software that versions together, deploys together and changes together, should be collocated.

In the case of the thin end of the wedge with web apps, this is often just "the app, a few shared libraries, and a backend admin thing, perhaps a few tools".

Monorepos are fine here! At least until you need to split ownership of those things between team boundaries, where things creak.

TL;DR - It's Conway's Law and change frequency that chart the success of a software organisation - and hard team coupling is more dangerous than software coupling.

Monorepos in massive organisations

Let's briefly talk about the other end of the spectrum - the massive organisations that have a lot of software and a lot of teams, and all claim to use monorepos. There are notable examples - Google, Facebook, Twitter, Microsoft, GitHub.

Firstly, none of those organisations use a monorepo as it is frequently interpreted by smaller orgs and the community. It's easy to verify this, because they all operate open-source repositories that are public, and distinct from any internal monorepos they may have. What they do tend to have is application-centric repositories, where a single application and its associated tools and libraries are colocated.

This makes absolute sense, and is no different from your existing non-monorepo.

In fact, the majority of the "famous monorepos" - Windows, the Linux kernel (which of course, isn't the same as "Linux"), and Facebook - all have entire tooling teams dedicated to making collaborating on them work, at scale, with the communities they serve. It's very important that you don't apply logic from organisations of a scale that you aren't, with resources that you do not have, to your own problem space without strong consideration.

If you don't have the budget for entire teams working on source control and collaboration, nor tens of thousands of developers to fit around your codebase - perhaps don't mimic the patterns of those who do.

Should I use a monorepo?

Application centric repositories with associated tools and libraries?

Yeah! Knock yourself out, makes lots of sense.

Putting all your applications, spread across multiple teams and ownership boundaries, into a single repository?

Absolutely not, this way leads to madness and coupling hell.

Open-Source Software, Licensing and Enterprise Procurement

06/24/2022 16:00:00

Modern software development makes use of hundreds of thousands of open-source software libraries and in some cases full applications. This page is written primarily for a non-technical audience and will presume no prior knowledge of the landscape - so questions, however small, are welcome.

This piece was originally written to help non-technical internal audiences better understand how open-source interacts with traditional procurement processes in the Enterprise. If you're a technical person looking for some literature to help your organisation understand the impact open-source has on it or are a non-technical person looking to gain an understanding of the same - then this piece is hopefully for you.

What is Open-Source Software?

From Wikipedia:

"Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration, meaning any capable user can participate online in development, making the number of potential contributors indefinite. The ability to examine the code facilitates public trust in the software."

Open-source software is an exceptionally prominent type of software in the world today, with many languages, frameworks, and even programming tools being open source.

Open-source projects are mostly maintained by groups of volunteers, often with corporate sponsorship, or managed by companies who provide the software in order to sell either consulting or support contracts.

In its most common form, open-source software is shipped as software libraries that can be incorporated into your own applications, by your own programmers.

Open-Source Software and Your Applications

Even if you're not aware of it, most organisations use tens to hundreds of thousands of open-source libraries in the software that they build. This is the default position in technology - to labour the point - the world's most popular web framework, React - which many organisations build all their web apps in - is both open-source, and depends on 1391 additional open-source packages. So, to build any of those web applications, just to start, would be using 1392 open-source packages. For every website.

The proliferation of open-source in this manner can appear uncontrolled if you’re not familiar with the ecosystem, and in this document, we’ll cover some of the strategies organisations use to adopt open-source safely. Most of these processes should be pleasantly unremarkable and are standard across the technology industry.

Open-Source and Licensing

By default, it's common for teams to be permitted to use open-source software with permissive licenses. This means licenses where there is no expectation of the organisation sharing their own software or modifications - common examples include, but are not limited to, the MIT license and the Apache 2.0 license. It's also common to see organisations license open-source software that is available to businesses under commercial terms, in lieu of contributing code back.

These dual-licensed pieces of open-source effectively allow organisations to pay, instead of contributing, and this model is used to help open-source projects become sustainable.

Because authoring open-source software in a sustainable manner is often a challenge - the project's maintainers or core contributors are frequently working for free - a culture of commercial licensing, consulting, and support contracts has grown around open-source software.

Commercial Support for Open-Source Software

If this sounds confusing, it’s because it often is!

It’s possible, and common, for multiple commercial organisations to exist around supporting open-source software. For the most part, though, because of the nature of open source, people don’t opt for support. This is because the prevailing attitude is that if an issue occurs in an open-source library, then the consumers can fix it and contribute those changes back to the community.

However, for certain categories of open-source - commonly when it’s entire open-source applications instead of libraries, it’s prudent to make sure you have a support relationship in place - and even more important where the open-source application occupies a strategic place in your architectural landscape. Furthermore, it’s often the correct ethical position to take, and helps ensure the longevity of projects that organisations depend on.

Quality Assurance of Open-Source Software

As a result of the complex nature of the development of open-source software - it is code, developed by unknown persons, and provided as-is for consumption, with no legal assurance whatsoever - it’s important that you surround your use of open-source software with the same software development practices that you'd expect from software you author yourself.

What this means is, that where you consume open-source libraries:

  • Surround them with unit and integration level automated testing to verify their behaviour
  • Execute static and dynamic security scanning over the libraries (often referred to as SAST and DAST)
  • Use security scanning tools that track vulnerabilities and patch versions of the libraries to make sure they’re kept up to date.

Doing these activities makes your use of open-source libraries safe at the point of consumption - augmented by the fact that, of course, you'll have full control of what and when you update, and you can read all the code if you so choose.

Procurement and Open-Source Software

With traditional, paid for software, a procurement process is required to license software, and subsequently start using and developing with the application or library. With open-source, the software just… exists. It can be included from public package management sources (the places that you download the software to use), and development can start with it immediately.

However, this inadvertently circumvents an entire category of assurance that would exist in traditional procurement processes around software. With most of the open-source software that folks use, the burden of quality and assurance is on your own teams at the point of integration. This means that by wrapping the libraries you use in your own software development lifecycle activities, like automated testing and scanning, you treat those libraries exactly as if they were code that you authored.

There exist three kinds of open-source software, as it is most often consumed:

  • Open source for which you do not require a support contract and can test/integrate yourself
  • Open source that you desire support for from a maintainer or organisation providing those services
  • Open source that is available commercially under dual-licensing - requiring payment for commercial use

For the first category of open-source, you are on your own, and your own internal development processes must be enough. For the two subsequent categories of open-source, in Enterprise, you will inevitably require a procurement and assurance process.

Important Questions for Open-Source Support

Traditional assurance process often centres on the organisation providing the support - their policies, their processes, their workplace environment, and their security. In open-source projects these questions are often nonsensical; there are no offices, there are no staff, the work is done at the behest of contributors. But as you evaluate support contracts with vendors, it’s important to understand their relationship with the product and what kind of support and assurance that they can provide for the payments you make to them.

Here are some sample questions that might replace more traditional assurance questions in open-source procurement processes.

“What relationship do you have with the project?”

Ideally, you would purchase support from the core authors of the project, or the organisation formed around it.

Good answers: “we are the core maintenance team”, “we are core contributors”, “we have commit rights”

Bad answers: “None”, “we are a consultancy providing extra services with no relationship with the project”

“Do you have a process in place to vet contributors?”

Code changes should be reviewed by maintainers.

Good answers: “Changes go through our pull request and testing process, and are accepted by maintainers”

Bad answers: “None”, “we take any contributions without inspection”

Larger open-source projects, or corporate sponsored projects, often have paperwork to ensure that there can be no copyright claims made against the software by employers of contributors. For example, Microsoft use the Contributor License Agreement to ensure this.

While not exceptionally common, it’s a good question to ask.

“Do you have the ability to integrate any bug fixes required back to the main project source?”

You would ideally buy a support license where any bug fixes were integrated into the main project, and not a fork for your own benefit.

Whilst this might sound counter-productive, this ensures that you don’t end up using a “forked”, or modified version of the main project that can drop out of active development.

“Are there any special feature request processes or privileges that come with this support contract?”

Ideally, if you are buying a support contract, it would give you priority in feature requests.

“Do you, and if so, how do you, provide security scanning and exploit prevention in the software?”

Even open-source software is expected to be tested, have build pipelines, and be reliable - doubly so if you’re paying for support licenses.

“What is the SLA on response to bug reports?”

Equally, it’s important to know the service level you’re expecting from support contracts - to manage expectations between both your internal teams and the support organisation.

“Do you have a process in place to support us in the case of a 0-day vulnerability?”

0-day vulnerabilities are “live, fresh bugs, just discovered” and there should be an expectation that if a support body becomes aware of a bug in the software they are supporting, there would be a process of notification and guidance towards a fix.

Conclusion

The use of open-source software is a huge positive for commercial organisations (who effectively benefit from free labour), and to use it both sustainably and ethically, it's important to be aware of both your responsibilities around assurance, and those of any partner you choose to engage with.

Paying for software, and open-source, is one of the most ethical ways organisations can interact with the open-source community (short of contributing back!) and should always be considered where possible.

Storing growing files using Azure Blob Storage and Append Blobs

04/16/2022 15:55:00

There are several categories of problems that require data to be append only, sequentially stored, and able to expand to arbitrary sizes. You’d need this type of “append only” file for building out a logging platform or building the data storage backend for an Event Sourcing system.

In distributed systems where there can be multiple writers, and often when your files are stored in some cloud provider, the “traditional” approaches to managing these kinds of data structures often don’t work well. You could acquire a lock, download the file, append to it and re-upload – but this will take an increasing amount of time as your files grow – or you could use a database system that implements distributed locking and queuing, which is often more expensive than just manipulating raw files.

Azure blob storage offers Append Blobs, which go some way to solving this problem, but we’ll need to write some code around the storage to help us read the data once it’s written.

What is an Append Blob?

An Append Blob is one of the blob types you can create in Azure Blob Storage – Azure’s general purpose file storage. Append Blobs, as the name indicates, can only be appended to – you append blocks to append blobs.

From Azure’s Blob Storage documentation:

An append blob is composed of blocks and is optimized for append operations. When you modify an append blob, blocks are added to the end of the blob only, via the Append Block operation. Updating or deleting of existing blocks is not supported. Unlike a block blob, an append blob does not expose its block IDs.

Each block in an append blob can be a different size, up to a maximum of 4 MiB, and an append blob can include up to 50,000 blocks. The maximum size of an append blob is therefore slightly more than 195 GiB (4 MiB X 50,000 blocks).

There are clearly some constraints there that we must be mindful of, using this technique:

  • Our total file size must be less than 195GiB
  • Each block can be no bigger than 4MiB
  • There’s a hard cap of 50,000 blocks, so if our block size is less than 4MiB, the maximum size of our file will be smaller.

Still, even with small blocks, 50,000 blocks should give us a lot of space for entire categories of application storage. The Blob Storage SDKs allow us to read our stored files as one contiguous file or read ranges of bytes from any given offset in that file.

Interestingly, we can’t read the file block by block – only by byte offset – and this poses an interesting problem: if we store data in any kind of format that isn’t just plain text (e.g., JSON, XML, literally any data format) and we want to seek through our file, there is no way we can ensure we read valid data from our stored file, even if it was written as a valid block when first saved.

Possible Solutions to the Read Problem

It’s no good having data if you can’t meaningfully read it – especially when we’re using a storage mechanism specifically optimised for storing large files. There are a few things we could try to make reading from our Append Only Blocks easier.

  • We could maintain an index of byte-offset to block numbers
  • We could pad our data to make sure block sizes were always consistent
  • We could devise a read strategy that understands it can read partial or seemingly malformed data

The first solution – maintaining a distinct index – may seem appealing at first, but it takes a non-trivial amount of effort to maintain that index and make sure that it’s kept in sync and up to date with our blob files. This introduces the possibility of a category of errors where those files drift apart, and we may well be in a situation where data appears to get lost, even if it’s in the original data file, because our index loses track of it.

The second solution is the “easiest” – as it gives us a fixed block size that we can use to page back through our blocks – but storing our data becomes needlessly more expensive.

Which really leaves us with our final option – making sure the code that reads arbitrary data from our file understands how to interpret malformed data and interpret where the original write-blocks were.

Scenario: A Chat Application

One of the more obvious examples of an infinite-append-only log is a chat application – where messages arrive in a forwards-only sequence, contain metadata, and, to add a little bit of spice, must be read tail-first to be useful to their consumers.

We’ll use this example to work through a solution, but a chat log could be an event log, or a list of business events and metadata, or really, anything at all that happens in a linear fashion over time.

We’ll design our fictional chat application like this:

  • A Blob will be created for every chat channel.
  • We’ll accept that a maximum of 50,000 messages can be present in a channel. In a real-world application, we’d create a subsequent blob once we hit this limit.
  • We’ll accept that a single message can’t be more than 4MiB in size (because that’d be silly).
  • In fact, we’re going to limit every single chat message to be a maximum of 512KiB – this means that we know that we’ll never exceed the maximum block size, and each block will only contain a single chat message.
  • Each chat message will be written as its own distinct block, including its metadata.
  • Our messages will be stored as JSON, so we can also embed the sender, timestamps, and other metadata in the individual messages.

Our message could look something like this:

{
  "senderId": "foo",
  "messageId": "some-guid",
  "timestamp": "2020-01-01T00:00:00.000Z",
  "messageType": "chat-message",
  "data": {
    "text": "hello"
  }
}

This is a mundane and predictable data structure for our message data. Each of our blocks will contain data that looks roughly like this.

There is a side effect of us using structured data for our messages like this – which is that if we read the entire file from the start, it would not be valid JSON at all. It would be a text file with some JSON items inside of it, but it wouldn’t be “a valid JSON array of messages” – there’s no surrounding square bracket array declaration [ ], and there are no separators between entries in the file.

Because we’re not ever going to load our whole file into memory at once, and because our file isn’t actually valid JSON, we’re going to need to do something to indicate our individual message boundaries so we can parse the file later. It’d be really nice if we could just use an open curly bracket { and that’d just be fine, but there’s no guarantee that we won’t embed complicated object structures in our messages at a later point that might break our parsing.

Making our saved messages work like a chat history

Chat applications are interesting as an example of this pattern, because while the data is always append-only, and written linearly, it’s always read in reverse, from the tail of the file first.

We’ll start with the easy problem – adding data to our file. The code and examples here will exist outside of an application as a whole – but we’ll be using TypeScript and the @azure/storage-blob Blob Storage client throughout – and you can presume this code is running in a modern Node environment, the samples here have been executed in Azure Functions.

Writing to our file, thankfully, is easy.

We’re going to generate a Blob filename from our channel name, suffixing it with “.json” (which is a lie – it’s “mostly JSON”, but it’ll do), and we’re going to add a separator character to the start of each block we append.

Once we have our filename, we’ll prefix a serialized version of our message object with our separator character, create an Append Blob Client, and call appendBlock with our serialized data.

import { BlobServiceClient } from "@azure/storage-blob";

export default async function (channelName: string, message: any) {
    const fileName = channelName + ".json";
    const separator = String.fromCharCode(30); // ASCII 30 - "Record Separator"
    const data = separator + JSON.stringify(message);

    const blobServiceClient = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING!);

    const containerName = process.env.ARCHIVE_CONTAINER || "archive";
    const containerClient = blobServiceClient.getContainerClient(containerName);
    await containerClient.createIfNotExists();

    const appendBlobClient = containerClient.getAppendBlobClient(fileName);
    await appendBlobClient.createIfNotExists();

    // appendBlock expects the content length in bytes, not characters
    await appendBlobClient.appendBlock(data, Buffer.byteLength(data));
};

This is exceptionally simple code, and it looks almost like any “Hello World!” Azure Blob Storage example you could think of. The interesting thing we’re doing here is using a separator character to indicate the start of our block.

What is our Separator Character?

It’s a nothing! So, the wonderful thing about ASCII, as a standard, is that it has a bunch of control characters that exist from a different era – used to do things like send control codes to printers in the 1980s – and they’ve been enshrined in the standard ever since.

What this means is that there’s a whole raft of characters that exist as control codes – characters that you never see, never use, and that are almost unfathomably unlikely to occur in your general data structures.

ASCII 30 is my one.

According to ASCII – the 30th character code, which you can see in the above sample being loaded using String.fromCharCode(30), is “RECORD SEPARATOR” (C0 and C1 control codes - Wikipedia).

“Can be used as delimiters to mark fields of data structures. If used for hierarchical levels, US is the lowest level (dividing plain-text data items), while Record Separator, Group Separator, and File Separator are of increasing level to divide groups made up of items of the level beneath it.”

That’ll do. Let’s use it.

By prefixing each of our stored blocks with this invisible separator character, we know that when it comes time to read our file, we can identify where we’ve appended blocks, and re-convert our “kind of JSON file” into a real array of JSON objects.

Whilst these odd control codes from the 1980s aren’t exactly seen every day, this is a legitimate use for them, and we’re not doing anything unnatural or strange here with our data.

Reading the Chat History

We’re not going to go into detail of the web applications and APIs that’d be required above all of this to present a chat history to the user – but I want to explore how we can read our Append Only Blocks into memory in a way that our application can make sense of it.

The Azure Blob Client allows us to return the metadata for our stored file:

    public async sizeof(fileName: string): Promise<number> {
        const appendBlobClient = await this.blobClientFor(fileName);
        const metadata = await appendBlobClient.getProperties();
        return metadata.contentLength!;
    }

    private async blobClientFor(fileName: string): Promise<AppendBlobClient> {
        await this.ensureStorageExists();
        const appendBlobClient = this._containerClient.getAppendBlobClient(fileName);
        await appendBlobClient.createIfNotExists();
        return appendBlobClient;
    }

    private async ensureStorageExists() {
        // TODO: Only run this once.
        await this._containerClient.createIfNotExists();
    }

It’s been exposed as a sizeof function in the sample code above – by calling getProperties() on a blobClient, you can get the total content length of a file.

Reading our whole file is easy enough – but we’re almost never going to want to do that, short of downloading the file for backups. We can read the whole file like this:

    public async get(fileName: string, offset: number, count: number): Promise<Buffer> {
        const blockBlobClient = await this.blobClientFor(fileName);
        return await blockBlobClient.downloadToBuffer(offset, count);
    }

If we pass 0 as our offset, and the content length as our count, we’ll download our entire file into memory. This is a terrible idea because that file might be 195GiB in size and nobody wants those cloud vendor bills.

Instead of loading the whole file, we’re going to use this same function to parse, backwards through our file, to find the last batch of messages to display to our users in the chat app.

Remember

  • We know our messages are a maximum of 512KiB in size
  • We know our blocks can store up to 4MiB of data
  • We know the records in our file are split up by Record Separator characters

What we’re going to do is read chunks of our file, from the very last byte backwards, in batches of 512KiB, to get our chat history.

Worst case scenario? We might just get one message before having to make another call to read more data – but it’s far more likely that by reading 512KiB chunks, we’ll get a whole collection of messages, because in text terms 512KiB is quite a lot of data.

This read amount really could be anything you like, but it makes sense to make it the size of a single data record to prevent errors and prevent your app servers from loading a lot of data into memory that they might not need.

    /// <summary>
    /// Reads the archive in reverse, returning an array of messages and a seek `position` to continue reading from.
    /// </summary>
    public async getTail(channelName: string, offset: number = 0, maxReadChunk: number = _512kb) {
        const blobName = this.blobNameFor(channelName);
        const blobSize = await this._repository.sizeof(blobName);

        let position = blobSize - offset - maxReadChunk;
        let reduceReadBy = 0;
        if (position < 0) {
            reduceReadBy = position;
            position = 0;
        }

        const amountToRead = maxReadChunk + reduceReadBy;
        const buffer = await this._repository.get(blobName, position, amountToRead);

	  ...

In this getTail function, we’re calculating the name of our blob file, and then calculating a couple of values before we fetch a range of bytes from Azure.

The code calculates the start position by taking the total blob size, subtracting the offset provided to the function, and then again subtracting the maximum length of the chunk of the file to read.

After the read position has been calculated, data is loaded into a Buffer in memory.

 ...

 const buffer = await this._repository.get(blobName, position, amountToRead);

        const firstRecordSeparator = buffer.indexOf(String.fromCharCode(30)) + 1;
        const wholeRecords = buffer.slice(firstRecordSeparator);
        const nextReadPosition = position + firstRecordSeparator;

        const messages = this.bufferToMessageArray(wholeRecords);
        return { messages: messages, position: nextReadPosition, done: position <= 0 };
    }

Once we have 512KiB of data in memory, we’re going to scan forwards to work out where the first record separator in that chunk of data is, discarding any data before it in our Buffer – because we are strictly parsing backwards through the file, we know that from that point onwards we will only have complete records.

As the data before that point has been discarded, the updated “nextReadPosition” is returned as part of the response to the consuming client, which can use that value on subsequent requests to get the history block before the one returned. This is similar to how a cursor would work in an RDBMS.

The bufferToMessageArray function splits our data chunk on our record separator, and parses each individual piece of text as if it were JSON:

    private bufferToMessageArray(buffer: Buffer) {
        const messages = buffer.toString("utf8");
        return messages.split(String.fromCharCode(30))
            .filter(data => data.length > 0)
            .map(m => JSON.parse(m));
    }

Using this approach, it’s possible to “page backwards” through our message history, without having to deal with locking, file downloads, or concurrency in our application – and it’s a really great fit for storing archive data, messages and events, where the entire stream is infrequently read due to its raw size, but users often want to “seek upwards”.

Conclusion

This is a fun problem to solve and shows how you could go about building your own archive services using commodity cloud infrastructure in Azure for storing files that could otherwise be “eye wateringly huge” without relying on third party services to do this kind of thing for you.

It’s a great fit for chat apps, event stores, or otherwise massive stores of business events, because blob storage is very, very cheap. In production systems, you’d likely want to implement log rotation for when the blobs inevitably reach their 50,000-block limit, but that should be a simple problem to solve.

It’d be nice if Microsoft extended their Blob Storage SDKs to iterate block by block through stored data, as presumably that metadata exists under the hood in the platform.

Writing User Stories

01/10/2022 11:00:00

Software development transforms human requirements, repeatedly, until software is eventually produced.

We transform things we'd like into feature requests. Which are subsequently decomposed into designs. Which are eventually transformed into working software.

At each step in this process the information becomes denser, more concrete, more specific.

In agile software development, user stories are a brief statement of what a user wants a piece of software to do. User stories are meant to represent a small, atomic, valuable change to a software system.

Sounds simple right?

But they're more than that – user stories are artifacts in agile planning games, they're triggers that start conversations, tools used to track progress, and often the place that a lot of "product thinking" ends up distilled. User stories end up as the single source of truth of pending changes to software.

Because they're so critically important to getting work done, it's important to understand them – so we're going to walk through what exactly user stories are, where they came from, and why we use them.

The time before user stories

Before agile reached critical mass, the source of change for software systems was often a large specification, the result of a lengthy requirements engineering process.

In traditional waterfall processes, the requirements gathering portion of software development generally happened at the start of the process and resulted in a set of designs for software that would be written at a later point.

Over time, weaknesses in this very linear "think -> plan -> do" approach to change became obvious. The specifications that were created resulted in systems that often took a long time to build, frequently went unfinished, and were full of defects that were only discovered way, way too late.

The truth was that the systems as they were specified were often not actually what people wanted. By disconnecting the design and development of complicated pieces of software, design decisions were frequently misinterpreted as requirements, and user feedback was hardly ever solicited until the very end of the process.

This is about as perfect a storm as can exist for requirements – long, laborious requirement capturing processes resulting in the wrong thing being built.

To make matters worse, because so much thought-work was put into crafting the specifications at the beginning of the process, they often brought out the worst in people; specs became unchangeable, locked down, binding things, where so much work was done to them that if that work was ever invalidated, the authors would often fall foul of the sunk cost fallacy and just continue down the path anyway because it was "part of the design".

The specifications never met their goals. They isolated software development from its users, both with layers of people and management. They bound developers to decisions made during times of speculation. And they charmed people with the security of "having done some work" when no software was being produced.

They provided a feedback-less illusion of progress.

"But not my specifications!" I hear you cry.

No, not all specifications, but most of them.

There had to be a better way to capture requirements that:

  • Was open to change to match the changing nature of software
  • Could operate at the pace of the internet
  • Didn't divorce the authors of work from the users of the systems they were designing
  • Was based on real, measurable progress.

The humble user story emerged as the format to tackle this problem.

What is a user story

A user story is a short, structured statement of a change to a system. It should be outcome-focused, precise, and non-exhaustive.

Stories originated as part of physical work-tracking systems in early agile methods – they were handwritten on the front of index cards, with acceptance criteria written on the reverse of the card. The physical format added constraints to user stories that are still useful today.

Their job is to describe an outcome, and not an implementation. They're used as artefacts in planning activities, and they're specifically designed to be non-exhaustive - containing only the information absolutely required as part of a change to a product.

It's the responsibility of the whole team to make sure our stories are high enough quality to work from, and to verify the outcomes of our work.

Furthermore, user stories are an exercise in restraint. They do not exist to replace documentation. They do not exist to replace conversation and collaboration. The job is to decompose large, tough, intractable problems into small, articulated, well considered changes.

User stories are meant to represent a small, atomic, valuable change to a software system and have mostly replaced traditional requirements engineering from the mid-2000s onwards.

The user story contents

The most common user story format, and generally the one that should be followed by default, was popularised by the XP team at Connextra in 2001. It looks like this:

As a <persona>

I want <business focused outcome>

So that <reason driving the change>

Accept:

  • List of…
  • Acceptance criteria…

Notes: Any notes

This particular format is popular because it considers both the desired outcome from a user's perspective (the persona), and also includes the product thinking or justification for the change as part of the "So that" clause.

By adhering to the constraint of being concise, the story format forces us to decompose our work into small, deliverable chunks. It doesn't prevent us from writing "build the whole solution", but it illuminates poorly written stories very quickly.

Finally, the user story contains a concise, non-exhaustive list of acceptance criteria. Acceptance criteria list the essential qualities of the implemented work. Until all of them are met, the work isn't finished.

Acceptance criteria aren't an excuse to write a specification by stealth. They are not the output format of response documents when you're building APIs, or snippets of HTML for web interfaces. They're conversation points to verify and later accept the user story as completed.

Good acceptance criteria are precise and unambiguous - anything else isn't an acceptance criterion. As an example - "must work in IE6" is better than "must work in legacy browsers"; equally, "must be accessible" is worse than "must adhere to all WCAG 2.0 recommendations".

Who and what is a valid persona?

Personas represent the users of the software that you are building.

This is often mistaken to mean "the customers of the business" and this fundamental misunderstanding leads to lots of unnatural user stories being rendered into reality.

Your software has multiple different types of users - even users you don't expect. If you're writing a web application, you might have personas that represent "your end user", "business to business customers", or other customer archetypes. In addition to this, however, you'll often have personas like "the on-call engineer supporting this application", "first line support" or "the back-office user who configures this application".

While they might not be your paying customers, they're all valid user personas and users of your software.

API teams often fall into the trap of trying to write user stories from the perspective of the customer of the software that is making use of their API. This is a mistake, and it's important that if you're building APIs, you write user stories from the perspective of your customers – the developers and clients that make use of your APIs to build consumer facing functionality.

What makes a good user story?

While the vast majority of teams use digital tracking systems today, we should pay mind to the constraints placed upon user stories by physical cards and not over-write our stories. It's important to remember that user stories are meant to contain distilled information for people to work from.

As the author of a user story, you need to be the world's most aggressive editor – removing words that introduce ambiguity, removing any and all repetition and making sure the content is precise. Every single word you write in your user story should be vital and convey new and distinct information to the reader.

It's easy to misinterpret this as "user stories must be exhaustive", but that isn't the case. Keep it tight, don't waffle, but don't try to reproduce every piece of auxiliary documentation about the feature or the context inside every story.

For example:

As a Back-Office Manager
I want business events to be created that describe changes to, or events happening to, customer accounts that are of relevance to back-office management
So that those events may be used to prompt automated decisions on changing the treatment of accounts based on back-office strategies that I have configured.

Could be re-written:

As a Back-Office Manager
I want an event published when a customer account is changed
So that downstream systems can subscribe to make decisions

Accept:
- Event contains kind of change
- Event contains account identifiers
- External systems can subscribe

In this example, edited, precise language makes the content of the story easier to read, and moving some of the nuance into clearly articulated acceptance criteria prevents the reader from having to guess what is expected.

Bill Wake put together the mnemonic device INVEST – standing for Independent, Negotiable, Valuable, Estimable, Small and Testable – to describe the characteristics of a good user story, but in most cases these qualities can be met by remembering the constraints of physical cards.

If in doubt, remember the words of Ernest Hemingway:

"If I started to write elaborately, or like someone introducing or presenting something, I found that I could cut that scrollwork or ornament out and throw it away and start with the first true simple declarative sentence I had written."

Write less.

The joy of physical limitations

Despite the inevitability of a digital, and remote-first world, it's easy to be wistful for the days of user stories in their physical form, with their associated physical constraints and limitations.

Stories written on physical index cards are constrained by the size of the cards – this provides the wonderful side effect of keeping stories succinct – they cannot possibly bloat or become secret specifications because the cards literally are not big enough.

The scrappy nature of index cards and handwritten stories also comes with the additional psychological benefit of making them feel like impermanent, transitory artefacts that can be torn up and rewritten at will, re-negotiated, and refined, without ceremony or loss. By contrast, teams can often become attached to tickets in digital systems, valuing the audit log of stories moved back and forth and back and forth from column to column as if it's more important than the work it's meant to inspire and represent.

Subtasks attached to the index-card stories on post-it notes become heavy and start falling apart, items get lost, and the cards sag, prompting and encouraging teams to divide bloated stories into smaller, more granular increments. Again, the physicality of the artefact bringing its own benefit.

Physical walls of stories are ever present, tactile, and real. Surrounding your teams with their progress helps build a kind of total immersion that digital tools struggle to replicate. Columns on a wall can be physically constrained, reconfigured in the space, and visual workspaces built around the way work and tasks flow, rather than how a developer at a work tracking firm models how they presume you work.

There's a joy in physical, real artefacts of production that we have struggled to replicate digitally. But the world has changed, and our digital workflows can be enough – it just takes work not to become so enamoured with the instrumentation, the progress reports, and the roll-up statistics that we lose sight of the fact that user stories and work tracking systems were meant to help you complete some work. They are the map, not the destination.

All the best digital workflows succeed by following the same kinds of disciplines and constraints as physical boards. Teams find the most success with digital tools when members feel empowered to delete and reform stories and tickets at any point, can move, refine, and relabel the work as they learn, and do what's right for their project, worrying about how to report on it afterwards.

It's always worth acknowledging that those constraints helped give teams focus and are worth replicating.

What needs to be expressed as a user story?

Lots of teams get lost in the weeds when they try to understand "what's a user story" vs "what's a technical task" vs "what's a technical debt card". Looking back to the physical origin of these artefacts, it's obvious – all these things are the same thing.

Expressing changes as user stories with personas and articulated outcomes is valuable whatever the kind of change. It's a way to communicate with your team that everyone understands, and it's a good way to keep your work honest.

However, don't fall into the trap of user story theatre for small pieces of work that need to happen anyway.

I'd not expect a programmer to see a missing unit test and write a user story to fix it - I'd expect them to fix it. I'd not expect a developer to write a "user story" to fix a build they just watched break. This is essential, non-negotiable work.

As a rule of thumb, technical things that take less time to solve than to write up should just be fixed, rather than fudging language to artificially legitimise the work – it's already legitimate work.

Every functional change should be expressed as a user story – just make sure you know who the change is for. If you can't articulate who you're doing some work for, it's often a symptom of not understanding the audience of your changes at best, or, at worst, of trying to do work that needn't be done at all.

The relationship between user stories, commits, and pull requests

Pull request driven workflows can suffer from the unfortunate side-effect of encouraging deferred integration and driving folks towards "one user story, one pull request" working patterns. While this may work fine for some categories of change, it can be problematic for larger user stories.

It's worth remembering when you establish your own working patterns that there is absolutely nothing wrong with multiple sets of changes contributing to the completion of a single user story. Committing the smallest piece of work that doesn't break your system is safer by default.

The sooner you're integrating your code, the better, regardless of story writing technique.

What makes a bad user story?

There are plenty of ways to write poor quality user stories, but here are a few favourites:

Decomposed specifications / Design-by-stealth – prescriptive user stories that exhaustively list outputs or specifications as their acceptance criteria are low quality. They constrain your teams to one fixed solution and in most cases don't result in high quality work from teams.

Word Salad – user stories that grow longer than a paragraph or two almost always lead to repetition or interpretation of their intent. They create work, rather than remove it.

Repetition or boiler-plate copy/paste – obvious repetition and copy/paste content in user stories invents work and burdens the reader with interpretation. It's the exact opposite of the intention of a user story, which is to enhance clarity. The moment you reach for CTRL+C/V while writing a story, you're making a mistake.

Given / When / Then or test script syntax in stories – user stories do not have to be all things to all people. Test scripts, specifications, and context documents have no place in stories – they don't add clarity, they increase the time it takes to comprehend requirements. While valuable, those assets belong in test tools and wikis instead.

Help! All my stories are too big! Sequencing and splitting stories.

Driving changes through user stories becomes trickier when the stories require design exercises, or when the solution in mind has some pre-requirements (standing up new infrastructure for the first time, etc.). It's useful to split and sequence stories to make larger pieces of technical work easier while still being deliverable in small chunks.

Imagine, for example, a user story that looked like this:

As a customer
I want to call a customer API
To retrieve the data stored about me, my order history, and my account expiry date

On the surface the story might sound reasonable, but if this were a story for a brand new API, your development team would soon start to spiral out asking questions like "how does the customer authenticate", "what data should we return by default", "how do we handle pagination of the order history" and lots of other valid questions that soon represent quite a lot of hidden complexity in the work.

In the above example, you'd probably split that work down into several smaller stories – starting with the smallest possible story that forms a tracer bullet through the process, which you can then build on top of.

Perhaps it'd be this list of stories:

  • A story to retrieve the user's public data over an API. (Create the API)
  • A story to add their account expiry to that response if they authenticate. (Introduce auth)
  • A story to add the top-level order summary (totals, number of previous orders)
  • A story to add pagination and past orders to the response

This is just illustrative, and the exact way you slice your stories depends heavily on context – but the themes are clear – split your larger stories into smaller, useful, shippable parts that prove and add functionality piece by piece. Slicing like this removes risk from your delivery, allows you to introduce technical work carried by the story that needs it first, and keeps progress visible.

Occasionally you'll run up against a story that feels intractable and inestimable. First, don't panic – it happens to everyone, breathe. Then, write down the questions you have on a card. These questions form the basis of a spike – a small, time-boxed, Q&A-focused story that doesn't deliver user-facing value. Spikes exist to help you remove ambiguity, to do some quick prototyping, to learn whatever you need to learn so that you can come back and work on the story that got blocked.

Spikes should always pose a question and have a defined outcome – be it example code, or documentation explaining what was learnt. They're the trick that helps when you can't seem to split and sequence your work because there are too many unknowns.

Getting it right

You won't get your user stories right first time – but much in the spirit of other agile processes, you'll get better at writing and refining user stories by doing it. Hopefully this primer will help you avoid trying to boil the ocean, and instead build small things, safely.

If you're still feeling nervous about writing high quality user stories with your teams, Henrik Kniberg and Alistair Cockburn published a workshop called "The Elephant Carpaccio Exercise" in 2013 which will help you practice in a safe environment. You can download the worksheet here - Elephant Carpaccio facilitation guide (google.com)

Open-Source Exploitation

12/13/2021 11:00:00

Combative title.

I don’t have a title for this that works.

It’s horrible, it’s difficult, and it’s because all of the titles sound so relentlessly negative that I honestly don’t want to use them. I promise this talk isn’t about being negative, it’s a talk about the work we have to do to be better as an industry. As someone that doesn’t believe in hills, or like walking up them, or death, or dying, this, I think, is probably the hill I’m going to die on.

I want to talk about how open source has, in most cases, been turned into exploitation by the biggest organisations in the world. How it's used to extract free labour from you, and why this is fundamentally a bad thing. I'm going to talk about how we can do better. I'm going to talk about what needs to change to make software, and especially open-source software – something I love dearly – survive.

Because right now, open-source is in the most precarious place it’s ever been in its entire existence, and I feel like hardly anyone is talking about it.

The Discourse Is Dead

Before I start with the gory details, I want to talk about loving things.

More importantly, let’s talk about how it’s important to understand that you can be critical of something that you love because you want it to be better, not because you want to harm it.

I’ve been building open-source software since the actual 1990s. When the internet was a village, and everything was tiny. When I was tiny. But it’s important to understand that on any journey of maturity, the techniques, opinions, and approaches that get you from A to B, are not necessarily the things that you’re going to need to get you from B to C.

People frequently struggle with this fact. Humans are beautiful and oft simple creatures that presume that because something has worked before, it'll work again, regardless of the changing context around us.

As technologists, and an industry, we’re going to have to embrace this if we want open source to survive.

Open Source Won the Fight

At this point we all know this to be true.

You probably use Visual Studio Code at the very least as a text editor.

As of 2018, 40% of VMs on Azure were running Linux.

Open source won the hearts and minds of people by telling them that software could and should be free.

What does free really mean?

While the GPL and its variants were probably not the first free software licenses, they were the licenses that rapidly gained mindshare with the rising popularity of Linux in the late 90s.

Linux, really, was the tip of the spear that pushed open-source software into the mainstream, and its GPL license was originally described by the Free Software Foundation as "free as in speech, not free as in beer". A confounding statement that a lot of people struggled to understand.

So what does the GPL really mean? In simple terms, if you use source code available under its license, you need to make your changes public for other people to use. This is because the FSF promoted "software freedoms" – literally, the right of software to be liberated, so that its users could modify, inspect, and make their own changes to it.

A noble goal, which shows its lineage as the license used to build a Unix clone that was supposed to be freely available to all – a goal centred around people sharing source code at local computer clubs.

It’s important to stress that “free” never meant “free from cost”. It always meant “free as in freedom” – and in fact, much of the original literature focuses on this by describing software that is “free from cost” as “gratis”.

From the FSF FAQs:

Does free software mean using the GPL?

Not at all—there are many other free software licenses. We have an incomplete list. Any license that provides the user certain specific freedoms is a free software license.

Why should I use the GNU GPL rather than other free software licenses?

Using the GNU GPL will require that all the released improved versions be free software. This means you can avoid the risk of having to compete with a proprietary modified version of your own work. However, in some special situations it can be better to use a more permissive license.

But it wasn’t that version of free software that really won

Despite Linux, and despite early and limited forays into open source by organisations like Red Hat, the strong copyleft licenses of the GPL were not the reason open-source software is thriving in the market today.

They’re certainly not the reasons ultra-mega-corps like Microsoft, or Amazon, or Google now champion open source.

The widespread adoption of open-source software in the enterprise is directly related to the MIT and Apache licenses – "permissive" licenses which don't force people who build on top of the software to share their modifications back with the wider community.

Permissive licensing allows individuals or organisations to take your work, build on top of it, and even operate these modified copies of a work for profit.

Much like the GPL, the aim of open source was not to restrict commercial exploitation, but to ensure the freedom of the software itself.

Who benefits from permissive licensing?

This is a trick question really – because in any situation where there is a power imbalance – let's say, between the four or five largest organisations in the world and some person throwing some code on the internet – the organisation is always the entity that will benefit.

Without wanting to sound like a naysayer – because I assure you, I deeply love open-source, and software freedom, and the combinatorial value that adds to teaching, and our peers, and each other, I cannot say loud enough:

Multi-national organisations do not give a single solitary fuck about you.

Businesses do not care about you.

But you know what they do care about? They care about free "value" that they are able to commercially exploit. The wide proliferation of open-source software in businesses is a direct result of licenses like the Apache and MIT licenses being leveraged into closed-source, proprietary, for-profit work.

Want to test the theory?

Go into your office tomorrow and try adding some GPL'd code to your company's applications and see how your line manager responds.

Permissive licenses explicitly and without recourse shift the balance of power towards large technical organisations and away from individual authors and creators. They have the might to leverage code, they have the capability to build upon it, and they have the incentive and organisational structures to profit from doing so.

Open-source software took hold in the enterprise because it allowed itself to be exploited.

Oh come on, exploited? That’s a bit much isn’t it?

Nope. It’s entirely accurate.

exploitation (noun) · exploitations (plural noun)

  • the action or fact of treating someone unfairly in order to benefit from their work.

"the exploitation of migrant workers"

  • the action of making use of and benefiting from resources.

"the Bronze Age saw exploitation of gold deposits"

  • the fact of making use of a situation to gain unfair advantage for oneself.

"they are shameless in their exploitation of the fear of death"

Oh wow, that’s got such a negative slant though, surely that’s not fair?

Surely people are smarter than to let their work just get leveraged like this?

The internet runs on exploited and unpaid labour

XKCD is always right

XKCD: Dependency

This is just the truth. It’s widely reported. The vast majority of open source projects aren’t funded. Even important ones.

There is no art without patronage. None. The only successful open source projects in the world are either a) backed by enormous companies that use them for strategic marketing and product positioning advantage OR b) rely on the exploitation of free labour for the gain of organisations operating these products as services.

I can see you frothing already – “but GitHub has a donations thing!”, “what about Patreon!”, “I donated once, look!”.

And I see you undermine your own arguments. We’ve all watched people burn out. We’ve watched people trying to do dual-licensing get verbally assaulted by their own peers for not being “free enough for them”. We’ve watched people go to the effort of the legal legwork to sell support contracts and have single-digit instances of those contracts sold.

We’ve seen packages with 20+ million downloads languish because nobody is willing to pay for the work. It’s a hellscape. It victimises creators.

I would not wish a successful open-source project on anyone.

Let’s ask reddit

(Never ask reddit)

I recently made the observation in a reddit thread that it’s utterly wild that I can stream myself reading an open-source codebase on YouTube and people will happily allow me to profit from it, but the open-source community has become so wrongheaded that the idea of charging for software is anathema to them.

Let’s get some direct quotes:

“Ahh, so you hate gcc and linux too, since they're developed by and for companies?”

“Arguing against free software? What year is it?!”

“If it’s free, why wouldn’t it be free to everyone? That includes organizations. I’m honestly not clear what you’re suggesting, specifically and logistically.”

Obviously I was downvoted to oblivion because people seemed to interpret “perhaps multinational organisations should pay you for your work” as “I don’t think software freedom is good”.

But I was more astonished by people suggesting that charging for software was somehow in contradiction with the "ethos" of open source, when all that position really shows is an astonishing lack of literacy about what open source really means.

Lars Ulrich Was Right

In 1999 Napster heralded the popularisation of peer-to-peer file sharing networks. And Metallica litigated and were absolutely vilified for doing so.

The music business had become a corporate fat cat, nickel-and-diming everyone with exorbitant prices for CDs (£20+ for new releases!), bands were filthy rich and record executives more so. And we all cried – "what do Metallica know about this! They're rich already! We just want new music!".

I spent my mid-teens pirating music on Napster, and AudioGalaxy, and Limewire, and Kazaa, and Direct Connect, and and and and and and. And you know what? If anyone had spent time listening to what Lars Ulrich (Metallica’s drummer) was actually saying at the time, they’d realise he was absolutely, 100% correct, and in the two decades since has been thoroughly vindicated.

I read an interview with him recently, where he looks back on it – and he’s reflective. What he actually said at the time was “We’re devaluing the work of musicians. It doesn’t affect me, but it will affect every band that comes after me. I’m already a multi-millionaire. File sharing devalues the work, and once it’s done, it can never be undone.”

And he was right.

After ~1999, the music industry was never the same. Small touring bands that once made comfortable livings scrape by in 2020. Niche and underground genres, while more vibrant than ever, absolutely cannot financially sustain themselves. It doesn't scale. We devalued the work by giving it all away.

And when you give it all away, the only people that profit are the large organisations that are in power.

Spotify, today, occupies the space that music labels once did – a vastly profitable large organisation – while artists figuratively starve.

YOU ARE HERE

I wish I had the fine wine and art collection of Ulrich, but forgive me for feeling a little bit like I’m standing here desperately hoping that people listen to this message. Because we are here, right now.

I love open-source, just like Lars loved tape trading and underground scenes, but the ways in which we allow it to be weaponised are a violence. It doesn't put humans – maintainers, creators, and authors – at its centre; instead, it puts organisational exploitation as the core goal.

We all made a tragic mistake in thinking that the ownership model that was great for our local computing club could scale to planet-sized industry.

How did we get here?

Here’s the scariest part, really. We got here because this is what we wanted to do. I did this. You did this. We all made mistakes.

I’ve spent the last decade advocating for the adoption of open-source in mid-to-large organisations.

I, myself, sat and wrote policies suggesting that while we should adopt open-source software – and contribute if we could (largely, organisations never do) – for both strategic and marketing benefit, we should only look at permissively licensed software, because anything else would incur cost at best, and force us to give away our software at worst.

And god, was I dead wrong.

I should’ve spent that time advocating that we paid for dual-licensed software. That we bought support contracts. That we participated in copyleft software and gave back to the community.

I was wrong, and I’m sorry, I let you all down.

I’m not the only one

Every time a small organisation or creator tries to license their software in a way that protects them from the exploitation of big business – like Elastic, or recently Apollo, or numerous others over the years – the community savages them, without realising that it’s the community savaging itself.

We need to be better at supporting each other – at knowing that when a creator cries burn-out, or says they can’t pay rent in kudos, or needs to advertise for work in their NPM package, they mean it. It could easily be you in that position.

We need new licenses, and a new culture, which prioritises the freedom of people from exploitation, over the freedom of software.

I want you to get paid. I want you to have nice things. I want you to work sustainably. I want a world where it’s viable for smart people to build beautiful things and make a living because of it.

If we must operate inside a late-stage-capitalistic hellhole, I want it to be on our terms.

Can companies ever ethically interact with open-source?

Absolutely yes.

And here’s a little bit of light, before I talk about what we need to do to redress this imbalance.

There are companies that do good work in open-source, and fund it well. They all have reasons for doing so, and even some of the biggest players have reasonably ethical open-source products – but they always do it for marketing position, for mindshare, and ultimately to sell products and services, and that’s ok.

If we’re to interact with these organisations, there is nothing wrong with taking and using software they make available, for free, but remember that your patronage is always the product. Even the projects that you may perceive to be “independent”, like Linux, all have funding structures or staff provided from major organisations.

The open-source software that you produce is not the same kind of open-source software that they do, and it’s foolish to perceive it to be the same thing.

How can we change the status quo?

We need both better approaches and better systems, along with the cooperation of all the major vendors, to really make a dent in this problem.

It will not be easy. But there’s a role for all of us in this.

Support creators

This is the easiest, and the most free of all the ways we’ll solve this problem.

The next time each of you is about to send a shitty tweet because Docker desktop made delaying updates a paid feature, perhaps, just for a second, wonder why they might be doing that.

The next time you see a library you like adopting an “open-core” licensing model, where the value-added features, or the integrations are paid for features – consider paying for the features.

Whenever a maintainer asks for support or contributions on something you use, contribute back.

Don’t be entitled, don’t shout down your peers, don’t troll them for trying to make a living. If we all behaved like this, the software world would be a lot kinder.

Rehabilitate package management

I think it’s table stakes for the next iteration of package managers and component sharing platforms to support billing. I’d move to a platform that put creator sustainability at its heart at a moment’s notice.

I have a theory that more organisations would pay for software if there were existing models that supported or allowed it. Most projects couldn’t issue an invoice, or pay taxes, or accept credit cards, if they tried.

Our next-generation platforms need to support this for creator sustainability. We’re seeing the first steps towards these goals with GitHub sponsorships, and nascent projects like SDKBin – the “NuGet, but paid” distribution platform.

Petition platform vendors

A step up from that? I want to pay for libraries that I use in my Azure bill. In my AWS bill. In my GCP bill.

While I’ve railed against large organisations leveraging open-source throughout, large organisations aren’t fundamentally bad, immoral, or evil – I just believe they operate in their own best interest. The first platform that lets me sell software components can have their cut too. That’s fair. That’s help.

I think this would unlock a whole category of software sales that just doesn’t exist trivially in the market today. Imagine if instead of trying to work through some asinine procurement process, you could just add NuGet, or NPM, or Cargo packages and it’ll be accounted for and charged appropriately by your cloud platform vendor over a private package feed.

This is the best thing a vendor could do to support creators – they could create a real marketplace. One that’s sustainable for everyone inside of it.

Keep fighting for free software

For users! For teachers! For your friends!

I feel like I need to double down on what I said at the start. I love open-source software dearly. I want it to survive. But we must understand that what got it to a place of success is something that is currently threatening its sustainable existence.

Open-source doesn’t have to be a proxy for the exploitation of the individual.

It can be ethical. It can survive this.

I do not want to take your source code away from you. I just desperately want enough people to think critically about this that, when you have a great new idea you think you can do something meaningful with, it’s you who gets to execute on and benefit from it.

By all means give it away to your peers, but spare no pity for large organisations that want to profit from your work at your expense.

Support the scene

In music, there’s the idea of supporting our scene, our heritage, the shared place where “the art” comes from.

This is our culture.

Pay your friends for their software, and accept it gracefully if they want to give it to you for free.

Footnotes - see this live!

An expanded version of this piece is available as a conference talk - if you would like me to come and talk to your user-group or conference about ethics and open-source, please get in touch.

A Love Letter to Software

12/03/2021 09:00:00

Most software isn’t what people think it is.

It’s a common thread – it’s mistaken for lots of things that it isn’t.

“I bet you’re good at maths!”

Say the well-intentioned friends and strangers who think we write in ones and zeros.

“Software is engineering”

Cries the mid-career developer, desperately searching for legitimacy in a world that makes it hard for people to feel their own worth.

“Software is architecture!”

Says the lead developer, filling out job applications looking to scale their experience and paycheck.

But programming is programming. And we do it a disservice, we lower it, by claiming it to be something else.

Comparison and the Death of Identity

Software is the youngest of industries, and with youth comes the need for identity, and understanding – because those things bring legitimacy to an occupation.

Every new job must run the gauntlet of people’s understanding – and when a discipline is new, or complicated, it is tempting to rely on comparison to borrow credibility, and to feel legitimate.

But comparison is reductive, and over a long enough time can harm the identity of the people that rely on it.

Software is none of the things that it may seem like.

Software is beautiful because of what it is

I’m so proud of software and the people that build it. And I want you to all be proud of what you are and what you do too.

Software is the most important innovation of the last one hundred years. Without software, the modern world wouldn’t exist.

Software is beautiful because we’ve found our own patterns that aren’t engineering, or design. They’re fundamentally of software, and they have legitimacy, and are important.

You wouldn’t TDD a building, even if you might model it first – the disciplines that are of software, are ours, and things to celebrate.

We do not need to borrow the authority or identity of other disciplines on the way to finding our own.

Just as much as we should celebrate our success, we should embrace and be accountable for our failures. It’s important that we embrace and solve our ethical problems, defend the rights of our workers, and are accountable for our mistakes.

Because software does not exist without the humans at the centre of it.

Love What You Are

There’s a trend of negativity that can infect software.

That everything sucks, that everything is buggy, that modern programmers just plug Lego bricks together – and it’s toxic. It is absolutely possible to be critical of things that you love to help them grow, but unbridled aggression and negativity is worthless.

Software, even buggy software, has changed the world.

Software, even software you don’t like, has inspired, and enabled, has been a life changing experience to someone.

As programmers, our software is our work, our literature, our singular creative output.

As romantic and pretentious as that sounds – respect each other, and the work, lest we drown out someone’s beautiful violent urge to create and make things.

Every character matters

I was recently on a podcast where I was asked what advice I’d give to myself twenty years ago, if I could, and after some deliberation I think I finally know the answer.

“Always take more time”

Time is the most finite of resources, and if you want to write beautiful software, you have to do it with intent. With thoughtfulness.

And the only way to do that is to take your time.

We often produce our work in environments where time is the scarcest resource – and simultaneously the thing you most need to defend in order to produce work of value and quality.

If I could have my time again, after every change, every completed story, after everything was done, I’d give myself the time and the mental space to sit with the work, and soak it in, and improve it. Taking time, in the middle of my career, has become the most important part of doing the work.

When you realise that how you do something matters as much as why you’re doing it – that legibility and form directly affect function – and when you take the time to think about and articulate why something is the way it is, that’s when you do your best work.

The only analogy I subscribe to

And now the contradiction – after writing about how software should be empowered to be its own thing, let me tell you what I think software is really closest to.

Software is fundamentally a work of literature.

You can look at software through the same lens you would as any body of writing. It has text and subtext. It has authorial intent that perhaps is contradictory to its form. It has phrasing, rhythm, and constrained grammar. Software is a method of communicating concepts and ideas between humans, using a subset of language. The fact that it happens to be executed by a computer is almost a beautiful side effect.

Your code tells a story, your commit messages transfer mood. The best code is written for the reader – using form to imitate function, and flow and form to transfer meaning.

I love software, and I love the people that write it – and who do so with intent, and thoughtfulness. You’re not plumbers, or electricians, or engineers, however wonderful those jobs are.

You’re artists. <3

Hiring is broken! Let's fix it with empathy.

09/30/2021 02:30:00

Hiring technical people is difficult, and doubly so if you want to get people who are a good fit for you and the teams you're working with, yet repeatedly we seem to get it awfully wrong as an industry.

The tropes are real – and we're now in our second iteration of "hiring terribly". Where the 80s and early 90s were characterised by mystery puzzle hiring ("how would you work out how many cars you can fit into three cruise ships?"), the 2010s are defined by the tired trope of the interview that is orders of magnitude more difficult to pass and bears increasingly less resemblance to the job you do once you get the role.

Over fifteen years of hiring people for coding jobs, a few things still seem to hold:

  1. The ability to talk fluently about what you like and don't like about code for an hour or so is the most reliable indicator of a good fit.
  2. It's a bad idea to hire someone if you have never seen code they have written.
  3. Interview processes are stressful, unnatural, and frequently don't get the best from people.

We're faced with a quandary – how do we find people, from a pool of unknowns, who will quickly be able to contribute, work in relative harmony, and enjoy being part of the team?

The kind of people who will fit best in your organisation is inevitably variable – it's driven by the qualities you desire in your teams – but personally, I value kind people who are clear communicators and a pleasure to work with. Those are not everyone's values, but I want to speak to how I've tried to cultivate those kinds of teams.

You're going to need to know how to write an excellent job spec, construct a good interview process, evaluate technical performance, and give meaningful feedback. Let's cover each of those topics in turn.

How to construct a kind interview process

A good interview process respects everyone's time.

Set amongst the hellscape of FAANG multi-stage interview processes with one hundred asinine divisional directors, it's simple to put together an interview process that isn't hell on earth for everyone involved.

  1. Write a job spec that captures your cultural values.
  2. Have an hour-long conversation with them about themselves, their experiences, and their opinions.
  3. See some code they've written.
  4. Have the team they would join, or someone else representative, talk to them about code.

There's no reason for this process to take any longer than three hours end-to-end, and ideally it shouldn't be a chore for anybody involved.

The first bit is all on you, the interviewer. It's important that a job spec contains concrete information on the work that the role involves, that the only skills listed as mandatory are skills used in the actual role, and that you are clear about constraints and salary conditions.

The conversation is what most people are used to as an interview. Be kind. Understand people are humans and might be nervous; make sure they know that the best outcome is that you both "win" – don't be there to get a rise out of someone.

How to be a good interviewer

The first and most important thing about being a good interviewer is that you're not there to trip people up or catch people out. If that's what you feel an interview should be, I implore you to pass on interviewing.

Interviews are not meant to be hostile environments, and as a candidate, if you encounter one, do not under any circumstances take the job.

You're in an interview to verify someone's experience, understand their communication style, and discuss the expectations of the role you're hiring for.

You're there to sell the position, hopefully stimulating enthusiasm in the candidate, and to set expectations of what the job is like, day-to-day, so that neither you nor the candidate is surprised if you both choose to work together.

You need to be honest – both about the problem space, and the work. You need to be clear about where you need to grow as a team or organisation. There is nothing worse, as a candidate, than being sold a lie. Better to articulate your challenges up front, lest you ruin your own reputation.

You need to ask clear and relevant questions – learn from the mistakes of a thousand poor "balance a binary tree" style interview questions and leave that stuff at home.

Ask candidates questions about their relevant experience. Ask them how they would solve problems that you have already solved in the course of your work, or how they would approach them. Don't ask meaningless brain teasers.

You need to give them space to talk about broad topics – I love asking candidates what they think makes good code. I love to ask the question because everyone will say "readable" or "maintainable" and then we get to have a conversation on what they think satisfies those qualities in a codebase.

As an interviewer, I don't care that you learnt to say "it follows the SOLID principles" – I'd much rather a candidate had the floor to talk about how code makes them feel and why. Nice big broad questions are good at opening the floor to a discussion once you've talked about experience.

Take notes. Don't interrupt the candidate. Give them time to speak, and actively listen.

Seeing some code

You're going to want to see some code for technical roles – this is an absolute minefield, but the thing that I've settled on after trying all sorts of techniques here is to offer the candidates choice.

My standard process here is to offer candidates any of the following:

  • Bring me some code you have written that you're comfortable talking about
  • Do a well-known kata, in your own time, and send it across
  • Set up a one-hour session and I will pair program the kata with you

I ask the candidates to "please pick whichever is least stressful for you".

People perform differently under different types of assessment, and qualitatively, I get the same outcome from a candidate regardless of the path they pick. I like to hope that this opens the door for more neurodiversity in applicants and protects me from only hiring people that share my exact mental model. Choice is good, it doesn't hurt to be kind, it costs nothing.

Each approach has subtle pros and cons – their own arbitrary code might not quite give me the same high-quality signal, but it's a great way for people who are unquestionably competent to avoid wasting their own time. The take-home kata is a nice happy medium, though it could accidentally leave a candidate thrashing around trying to complete something that doesn't need to be complete. The pairing session requires a little more of the interviewer's time and is probably the more high-stress option – people sometimes don't perform well when they feel they're being actively evaluated – but you know precisely how someone works in those conditions.

Technical tests are intimidating to all but the most confident of candidates; this choice lets them wrestle a little confidence and control back, and at least feel like they're not being ambushed by something with which they cannot reckon.

It's the right thing to do.

How to set a good technical test

I've been involved in setting a lot of technical tests over the years – and I'm extremely sensitive to the ire that tech tests often cause in people. I've seen so many borderline abusive practices masquerading as technical tests that I'm not even remotely surprised.

The commandments of good tech tests:

  • A test should take no longer than one hour
  • It should be completable by a junior to the most senior, senior
  • It should not be in your problem domain
  • It should not be unpaid work
  • The answer should be provided in the question

There are a couple of potentially controversial points here.

A tech test should respect a candidate's time.

You are not the only place they are applying, and the candidate does not owe you their time. Anything more than thirty minutes to an hour can act as implicit discrimination against people who don't have unlimited time – people with families, or other commitments.

Using the same test for your most junior developers to your most senior allows you to understand the comparative skill of candidates who are applying, on a level playing field. You might not expect the same level of assessment or scrutiny between submissions, but that baseline is a powerful way of removing the vast discrepancies between titles and pay and focusing on a candidate's capability.

The test should be synthetic, and not part of your domain. For years I believed the opposite of this and was a fan of making tests look like "real work", but this often fails because it requires the candidate to understand a whole set of new concepts that don't help you assess their capability for the job.

And finally, providing the answer in the question deliberately reinforces that it's not a "puzzle", but an interview aid.

If a tech test contains the answer, and isn't domain specific, then what is it really for?

A tech test exists to verify, at the most basic level, that a candidate can code at all. The extremely non-zero number of people I have interviewed that couldn't so much as add new classes to an application is real, and it's why FizzBuzz is a good traditional screening question – it does little more than "test" if you can write an if-statement.
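To illustrate quite how low that bar is, here's a minimal FizzBuzz sketch, written as a modern C# top-level program – the entire exercise is one loop and a handful of conditionals:

// FizzBuzz: print 1-100, substituting "Fizz" for multiples of 3,
// "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
for (var i = 1; i <= 100; i++)
{
    if (i % 15 == 0) Console.WriteLine("FizzBuzz");
    else if (i % 3 == 0) Console.WriteLine("Fizz");
    else if (i % 5 == 0) Console.WriteLine("Buzz");
    else Console.WriteLine(i);
}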

Once you've established a candidate can code, you're looking to see how they approach problem solving.

Do they write tests? Do they write code that is stylistically alike to your team's preferences? Can they clearly articulate why they made the choices they made, however small?

A technical test isn't there to see if a candidate can complete a problem under exam conditions, it's just an indicator as to the way they approach a problem.

A good technical test is the quickest shortcut to providing you these signals. I've come to value well known code katas as recruitment tests as they tend to fulfil most of these criteria trivially, without having to be something of my own invention.

I tend to use the Diamond Kata –

Given a character from the alphabet, print a diamond with that character as the midpoint of the diamond. Write appropriate tests.

Example of the Diamond Kata – find it on GitHub: davidwhitney, Code Katas
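For instance, the expected output for the letter 'C' looks like this:

  A
 B B
C   C
 B B
  A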

Giving feedback

If a candidate has given you an hour of their time, the responsible thing to do is give them meaningful feedback as notes. It doesn't have to be much, and you don't need to write a full review – just a few hints as to what they could have done to be more successful ("we didn't feel like you had enough experience in Some Framework" or "we didn't feel confident in the tests you were writing") is absolutely fine.

Be kind. Hope they take the feedback away and think about it.

There are hundreds of stories out there of the failed interview candidate who later turns up as the hiring manager – being nice to people, even if they don't get the job, is a good precedent for when you inevitably meet them in the future.

An unfortunate majority of job roles won't contact unsuccessful candidates at all – and there is a balance to be struck. You're certainly not obligated to respond to everyone who applies through a CV screening funnel, but anyone you talk to deserves the courtesy of feedback for the time they've spent.

Adapt to fit

The best interview processes accurately reflect your own personal values and set the stage for the experience your new team members are going to have when they join your organisation. Because of this, it's an absolute truth that no one way will work for everyone – it's impossible.

I hope that the pointers in here will stimulate a little bit of thought as to how you can re-tool your own interview process to be simpler, kinder, and much quicker.

Below is an appendix about marking technical recruitment tests that may be useful in this process.

Appendix: How to mark a technical test

Because I tend to use the same technical tests for people across the entire skill spectrum, I've come to use a standard marking sheet to understand where a particular candidate fits in the process. I expect less from candidates earlier in their careers than from more experienced individuals – this grading sheet isn't the be-all and end-all, but as you scale out your process, and different people end up reviewing technical tests and seeing candidates, it's important that everyone is assessing the work they see through the same lens.

Feel free to use this if it helps you understand what good looks like.

Problem domain and understanding of question

  1. Submitter suggested irrelevant implementation / entirely misunderstood domain
  2. Submitter modelled single concept correctly
  3. Submitter modelled a few concepts in domain
  4. Submitter modelled most concepts in domain
  5. Submitter modelled all concepts in domain

Accuracy of solution

  1. Code does not compile
  2. Code does not function as intended, no features work
  3. Code builds and functions, but only some of the acceptance criteria are met
  4. ~90% of the acceptance criteria are met. Bugs outside of the scope of the acceptance criteria allowed
  5. All acceptance criteria met. Any "hidden" bugs found and solved.

Simplicity of solution

  1. Is hopeless spaghetti code, illegible, confusing, baffling
  2. An overdesigned mess, or nasty hacky code - use of large frameworks for simple problems, misusing DI containers, exceptions as flow control, needless repetition, copy-pasting of methods, lack of encapsulation, overuse of design patterns to show off, excess of repetitive comments, long methods
  3. Code is concise, size of solution fits the size of the problem, no surprises. Maybe a few needless comments, the odd design smell, but nothing serious
  4. Code is elegant, minimalist, and concise without being code-golf, no side effects, a good read. Methods and functions are descriptive and singular in purpose
  5. Perfect, simple solution. Absolutely no needless comments, descriptive method names. Trivial to read, easy to understand

Presentation of solution

  1. Ugly code, regions, huge comment blocks, inconsistent approach to naming or brace style, weird amounts of whitespace
  2. Average looking code. No regions, fewer odd comment blocks, no bizarre whitespace
  3. Nice respectable code. Good code organisation, no odd comment blocks or lines (no stuff like //======= etc), internally consistent approach to naming and brace style
  4. Utterly consistent, no nasty comment blocks, entirely consistent naming and brace style, effective use of syntactic sugar (modern language features in the given language etc)
  5. Beautiful code. Great naming, internally consistent style. Follows conventions of language of test. Skillful use of whitespace / stanzas in code to logically group lines of code and operations. Code flows well and is optimised for the reader.

Quality of unit tests

  1. No test coverage, tests that are broken, illegible, junk
  2. Tests that don't test the class that's supposed to be under test; some tests cover some functionality. Vaguely descriptive naming. AAA pattern in unit tests (see the sketch after this list).
  3. Descriptive, accurate names. AAA in unit tests. Use of test setup to DRY out tests if appropriate. Reasonable coverage.
  4. Complete test coverage to address all acceptance criteria, setup if appropriate, good descriptive names. BDD style tests with contexts are appreciated.
  5. Full coverage, all acceptance criteria covered, great naming that represents the user stories accurately, little to no repetition, no bloated repetitive tests, effective use of data driven tests if appropriate, or other framework features.
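Several of the levels above reference the AAA (Arrange-Act-Assert) pattern. As a quick illustration, here's a minimal sketch in C# using NUnit – the DiamondGenerator class and its Build method are hypothetical stand-ins for whatever the candidate has submitted:

using NUnit.Framework;

[TestFixture]
public class DiamondGeneratorTests
{
    [Test]
    public void Build_GivenLetterA_ReturnsSingleCharacter()
    {
        // Arrange: construct the (hypothetical) class under test
        var generator = new DiamondGenerator();

        // Act: perform the single operation being verified
        var result = generator.Build('A');

        // Assert: check the observable outcome
        Assert.That(result, Is.EqualTo("A"));
    }
}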

How to write a tech talk that doesn’t suck.

06/20/2021 23:00:00

Conferences, user-groups, and internal speaking events are a great way for you to level up your career as a developer.

There are a lot of reasons behind this – the network effect of meeting like-minded peers is infectious, and probably where your next job will come from. Reaching out into the community will help you discover things you never would alone or inside your organisation. Most importantly of all, though – learning to explain a technical thing will help you understand it, and literally make you better at your job.

This is all well and good, but if you've never put together written technical content before, the idea can be terrifying.

I regularly have conversations with people who would like to get into speaking, but just don't know where to start. They tell me they stare at an empty slide deck and just feel blank-page anxiety, and often give up. I know seasoned veterans of the tech scene who still fight this problem with every new talk they put together.

Before we start, it's worth highlighting that there's not just one way to write a talk, any more than there's one plot structure for a movie or a book. I'm going to talk about a pattern that works for me.

Not all of my talks follow this pattern, but most of them start this way before perhaps becoming something else – and I'll dissect a few of my talks along the way to show you how this process works.

I can't promise that your talk won't suck by following these instructions, but hopefully it'll give you enough of a direction to remove some of that empty page anxiety.

What is a conference talk meant to do?

Conference talks are meant to be interesting; they are entertainment. It's very easy to lose track of that when you're in the middle of writing a talk.

We've all been in that session.

The room is too hot, and the speaker is dryly reading patch notes.

You want to close your eyes, but you're so close you can practically make eye contact with them, and you just want it to be over. Time slows down. There's inexplicably still half an hour left in the session, and you just want to get out.

When you are finally free of the room, you turn to your friend, and in shared misery mutter "god I'd never write a conference talk this boring".

A good conference talk entertains the audience, it tells a human story, and it leaves the audience with something to take away at the end.

You're not going to teach the entirety of a topic in an hour or less, accept this, and do not try to.

Your goal is to leave the audience with some points that they can look up once they've left the room, and a story about your experiences with the thing.

That's the reason you were bored in that talk and just wanted it to end. It's the reason you didn't enjoy a talk by a famous speaker who came unprepared and just "winged it".

Conference talks are not the same as documentation, or blog posts, or even lectures.

Once you accept that the goal of a good conference talk is to be interesting and entertaining, it frees you from a lot of the anxiety of "having to be really technical" or "having to show loads of code". Your audience doesn't have time to read the code as you're showing it, but they do have time to listen to you speak about your experiences with it.

The reason it's you giving the talk is often the most interesting bit – the human story is what people connect to – because if you were in a specific situation and solved that problem with technology, your audience probably is in that situation too.

Understanding your audience.

There are four different kinds of people you will find in your audience.

1. The person who you think is your target audience.

They probably know a little about the thing you're talking about – perhaps they've tried the technology in your talk description once or twice and haven't quite had the time to work out how to use it. They have the right amount of experience, and your content will connect with them.

2. The person who is in the wrong room

This person might have come to your talk without really understanding the technology. Perhaps they're very new and your talk is aimed at existing practitioners. Perhaps they're interested in seeing different types of talks or talks outside of their main area of expertise. They're very unlikely to "just understand" any of the code you show them.

3. The person that is here to see code

There's a special kind of audience member who will never be satisfied unless they're looking at hardcore computer programming source code on a projected slide. You'll find them everywhere – they're the folks that suggest talks "aren't technical enough" because they often expect to see the documentation projected before them.

4. The person who knows more than you about the tech you're speaking about

This person already knows the technology you're going to cover, and they're here either to verify their own positions, or possibly because there wasn't another talk on that interested them, so they just went for something they knew.

Your job is to entertain and inform all of them at once.

You'll see these people time and time again in the audience – remember – they don't know if this talk is going to be for them until they're in the room, so it's your responsibility as the author of the material to entertain them all.

The four-act play.

An entertaining talk lets each of those different audience members leave the room with something interesting they learnt to look up afterwards, and we can do this by splitting our talk into segments that satisfy each of them in turn.

First and foremost, your talk is a performance, and it should tell a story that people can connect to. This means we shouldn't just be pasting code into slide decks and narrating through it for the entire session – those deep dives work far better as an essay or blog post.

What works for me is to divide talks into a four-act structure.

  • Introduction
  • Act 1 – The Frame
  • Act 2 – The Technology
  • Act 3 – The Example
  • Act 4 – The Demo
  • Conclusion

Once you've got through the introduction – which is typically just a couple of slides to introduce the talk and help you connect to the audience – each of the "acts" has a purpose, and is sneakily designed to satisfy the different things people are hoping for in your talk.

Act 1 – The Frame

This is the setup for the entire talk: the hook, the central conflict, or the context that you're providing your audience so they understand why you're talking about the thing you're talking about.

It's often about you, or an experience you've had that led you to a particular piece of technology. It should contextualise your talk in a way that literally anyone can understand – it's a non-detailed scene setter.

In a recent talk I wrote about GameBoy emulation, the frame was a mixture of two things – my childhood infatuation with the console, and wanting to explain to an audience, at a low level, how computers work. To explain emulation, I knew I needed to do a bit of a comp-sci 101 session on hardware, and remembering the things that seemed like magic to me as a child gave me a suitable narrative position to dive into technical details from. It made a talk about CPU opcodes and registers feel like a story.

In another example, from a talk I wrote about building a 3D ray-casting "engine", my frame was the history of perspective in classic art. Because I knew that the technology part of the talk was going to cover how visual projections are implemented in code, I was able to frame that against the backdrop of 3D in traditional art before leading into the history of 3D in games.

In both examples, the frame helps someone who might not understand the more technical portions of the talk in detail find something of interest, and hopefully of value, to take away from the session.

Act 2 – The Technology

This is "the obvious part", once we've introduced the context that the talk exists in, we can discuss the specific technology / approach / central topic of the talk as it exists in this context.

In the case of my 3D ray-casting talk, at this point we switch from a more general history of perspective in art to the history of 3D games, and what ray-casting is.

You can go into as much detail as you like about the technology at this point, because this part of the talk is for the person who is your intended audience – who knows just enough but is here to learn something new.

I often find this part of my talks is quite code-light but approach-heavy – explaining what the technology does, rather than how it does it.

Act 3 – The Example

Once you've introduced the technology, your example should work through how you apply it to solve a problem. This is often either a live coding segment (though I'd advise against it) or a walkthrough of screenshots of code.

Some people thrive doing live coding segments, but for most it's a lot of stress, and it introduces places where your talk can go badly wrong. Embed a recorded video, or embed screenshots and narrate over them – pretty much anything is "safer" than live coding, especially when you've not done it before.

This act is for everyone, but especially people who feel cheated if they don't see code in a session.

Act 4 – The Demo

Applying much the same logic as above – show the thing working. If there's any chance at all that it could fail at runtime, especially for your first few talks, pre-record the demo. Live demos are fun, but even the most veteran speakers have had live demos fail on them while on stage.

Don't panic if it happens, laugh it off, and be prepared with screenshots of the thing working correctly.

The Conclusion

The trick to making your talks seem like a cohesive story or journey is to use your conclusion to come all the way back to the framing device from the start – to answer the question you posed, or to explain how the technology helped you overcome whatever the central problem was at the beginning of your talk.

In my GameBoy talk, the "moral of the story" is that it's ok to fail at something and keep going until you make progress. The difficulties I initially struggled with in building a GameBoy emulator were part of the framing device, and coming back to them ends the talk with a resolution.

Similarly, in my 3D rendering talk, the frame is brought back in as I explain that, through the process of building a ray-caster, I came to understand and could answer some of the questions about 3D graphics that confused me as a child – again providing resolution to the talk.

Your talks will inevitably not please everyone in attendance, but taking the audience on a journey they can draw different kinds of learning from along the way increases the chance of your talk going down well – and, as a result, makes the topic more memorable and relatable.

The slide deck.

The empty slide deck is a real anxiety-inducing nightmare, so the most important thing to do when you need to start writing the talk is just to get something into the deck. I like to imagine my slide deck as an essay plan – with a slide per bullet point.

You're going to want to create your slide deck in multiple passes. To start with, just add slides with titles and empty bodies to get the main structure of your talk down on paper. You'll want to budget about 2–3 minutes per slide, so split your talking points down quite finely.

Getting this first outline done in slides is the single most motivating thing you'll do when writing your talk, because once it's on paper, you no longer have to hold the whole talk in your head, and you can focus on filling out specific slides.

I generally avoid text in my slide decks, instead using images that represent the concepts until I need to show concrete things like diagrams, code, or screenshots. This has the nice side effect of having your audience look at you and pay attention to what you're saying, rather than trying to read the text on the deck.

Once the main flow of your talk is locked in, leave it and get on with writing your script – come back at the end, once the talk is written, to make the slides look good. Trust me – it's easy to get distracted polishing slides before you finish your narrative, and you might find that slides you'd previously made beautiful have to change as a result.

The script.

If you ask people if they script their talks, you'll get a wide variety of responses – some people prefer to talk around their slides, others prefer to read from a script like an auto-cue, and I'm sure you'll find someone everywhere in between.

I script my talks to the word – every utterance, every seemingly throwaway joke, dramatic pause, and chuckle is in there. That doesn't mean I always read my script in real time – as you give your talk more frequently, you'll amend and memorise it.

What a tight script gives you, though, is security. It protects you from illness, hangovers, your own poor life choices – literally anything that could get between you and a good session. Talks can feel robotic when you're visibly reading straight from the script, but if you write your script to take the flow and rhythm of language into account – literally write it as you would like to say it – you can stop it looking like you're obviously reading.

While you don't have to use the script that you write, the act of producing it is often as good as learning your talk, complete with the added protection if you need it.

The more technical your talk, the more you'll end up needing the script. It doesn't matter how well you know the topic – it's remarkably easy to fumble words when you're speaking under the stress of being watched by an audience. That's ok; don't feel bad about it. It's much better to have a script than to make a mistake that undermines a technical part of your talk.

Once you have your slide-deck outline, walk through each slide and write its script in the speaker notes. This way, when you're presenting, your speaker view will always have your script in plain sight – PowerPoint, Google Slides, and Keynote all support this by default.

Do multiple passes, make sure the script still works if you re-order slides, and make sure that whatever you write for each slide fits into your 2–3-minute budget. Some concepts will take longer than that to explain, so it's worth chunking larger topics across multiple slides. It's a useful mental crutch to know that you're keeping good time.

Remember – you do not have to use your script, but your talk will almost certainly be better if you have one.

Knowing what to talk about.

Often the hardest thing when you start writing conference talks is knowing what you want to talk about.

Consider these three questions:

  1. Is there something you did that led you to reconsider your approach or opinion?
  2. Is there something you've worked on where the outcome was surprising or non-obvious?
  3. Is there something you absolutely love?

The things that surprised you, or made you change your views on something, are all good candidates for first conference talks. If you love something so much that you must talk about it, the decision makes itself.

If you're still lacking inspiration, check user groups, Reddit, Quora, or LinkedIn for questions people commonly ask – if you spot a trend in something you know the answer to, it could be a good candidate for telling your story in that context.

There's no sure-fire way to pick a topic that people will want to watch at a conference, but in my experience enthusiasm is infectious, and if you care about something, it's probably the thing you should speak about to start out.

In summary.

You can never guarantee that you're going to write a talk that everyone will love, but this structure has worked for me across many of my talks over the last 10–12 years of public speaking.

One size doesn't fit all – there are plenty of different talk formats that I utterly adore – but if you're struggling to get started, hopefully some of this structural guidance will help you on your path to writing better talks than I ever have.

Good luck!
