Writing good code by understanding cognitive load

02/02/2021 14:22:22

Programming is full of dogma.

Programmers are often self-reflective and obsessed with the ‘new way of doing things’. It can make programming feel like a cottage industry dedicated to best practice and thought leadership – patterns and practices, and trying to write the ‘right’ code.

Amongst all the noise, it’s easy to lose sight of why people practice certain disciplines. There’s no shortage of people desperate to tell you exactly what you should be doing to make your code “good”, but it is less common for people to articulate why their approach is better.

The pursuit of good code framed as “application architecture” or “patterns and practices” often pushes codebases towards “tried and tested design patterns”.

These design patterns are model answers – examples of how you can solve a specific problem in your language and environment. They exist to help people understand how a good, well-factored solution to a problem should look.

At some point, the collective hive mind decided that remembering these patterns was a mark of experience and status, while at the same time often losing the knowledge of why those solutions – the patterns – were reached in the first place.

Unfortunately, because so much of software is context sensitive, there are many times where blindly applying design patterns makes software much more difficult to reason about and increases the cognitive load of understanding what a program does.

So how can we make sure we’re writing our programs in maintainable, sustainable, and most importantly, understandable, ways?

Without patterns, how do we know what good looks like?

Once you have seen enough codebases, in enough languages, you realise that most of the arbitrary stylistic qualities of code people focus on (“tabs vs spaces!”, “brace style!”) are mostly irrelevant. Instead, there are a couple of non-negotiable qualities to look for in a codebase – qualities far less regimented than your average JavaScript style guide.

Good code is:

  • Code that is easy for someone with minimal “domain context” to read. Code that your average developer can understand is good code.

  • Code that focuses on developer experience, debuggability, and usability. Code that you cannot debug meaningfully is not good code.

  • Code where the intent takes up more visual space than the language syntax. Code that buries what it does under the programming language is not good code (as sketched below).
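
To illustrate that last point, here’s a hypothetical C# sketch (the Customer type and its IsActive flag are invented for the example) showing the same filter written twice – once so the language syntax dominates, and once so the intent does:

// Syntax-heavy: the mechanics of iteration bury the intent.
var active = new List<Customer>();
foreach (var customer in customers)
{
    if (customer.IsActive)
    {
        active.Add(customer);
    }
}

// Intent-forward: the operation reads as what it does.
var activeCustomers = customers.Where(c => c.IsActive).ToList();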

Regardless of language, or style, or intent, all programmers interact with a codebase in the same way – they glance at it to understand what it does, and they read intently to understand how it does it.

Good code, regardless of style, makes that easy.

You don’t like design patterns?

A design pattern is a commonly accepted approach to a specific type of problem.

The original works on design patterns (the famous “Gang of Four” book) captured battle-tested solutions that were required so often by the programming languages and runtimes of the time that they became the “model answers”.

These design patterns were useful because they exposed shortcomings in the languages of the time. They identified common pieces of code that were written time and time again, to solve problems that existed in multiple systems. As languages have changed and evolved, the design patterns that remain relevant have mutated and changed with them.
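
A concrete example of that evolution, sketched in C#: the Iterator pattern from the original book required hand-written iterator and aggregate classes, but modern C# absorbs the whole pattern into IEnumerable<T>, foreach, and yield return.

using System.Collections.Generic;

public static class Numbers
{
    // What once needed an Aggregate and a ConcreteIterator class is now
    // a single method – the compiler generates the iterator object.
    public static IEnumerable<int> Evens(int max)
    {
        for (var i = 0; i <= max; i += 2)
        {
            yield return i;
        }
    }
}

// Consumed with foreach – the iterator machinery is invisible:
// foreach (var n in Numbers.Evens(10)) { /* ... */ }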

The reason design patterns are often read as good code is that they act as shorthand for a shared understanding that teams can reach.

This shared understanding comes with a degree of gatekeeping, and with gatekeeping comes a tendency for people to believe that the patterns are the only legitimate solutions to some problems.

Because of this, people will reach for the patterns they know in inappropriate contexts, believing that is the only way. People will reach for them to prove they “know good code”. People will reach for them and end up introducing complexity instead of clarity.
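
A hypothetical C# sketch of where that leads – a single-implementation interface and factory wrapped around logic that needed neither:

// Pattern-first: an interface, an implementation, and a factory...
public interface IGreeter { string Greet(string name); }
public class Greeter : IGreeter { public string Greet(string name) => $"Hello, {name}!"; }
public static class GreeterFactory { public static IGreeter Create() => new Greeter(); }

// ...all to avoid writing the one line the caller actually wanted:
// var greeting = $"Hello, {name}!";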

I love design patterns and known good solutions when they fit, but only when the need for them can be precisely articulated.

Care about the cost of abstraction

At a really meta level, all code is constructed from abstractions and encapsulation. Regardless of programming language, and regardless of era, this has always been true.

The programming languages, runtimes and frameworks with which we write higher level code go through an enormous amount of compilation, translation, and interpretation to execute.

Programming languages themselves are common abstractions over operating system APIs, which are abstractions over assembly languages of microarchitectures, which are abstractions over physical hardware.

This trend continues upwards; the functions you write, the types you define, and the classes and files you create that make up your application are all examples of both abstraction and encapsulation in action.

For the most part, we are blissfully unaware of the huge amount of complexity that exists beneath our programming environment, because the value that abstraction brings vastly outweighs the complexity of the code running underneath it.

It is orders of magnitude easier to comprehend this (in pseudocode here):

File.Open("testFile.txt");

than the following, roughly equivalent, x64 Linux sample (stolen with love from StackOverflow):

; Program to open and write to file
; Compile with:
;     nasm -f elf64 -o writeToFile64.o writeToFile64.asm
; Link with:
;     ld -m elf_x86_64 -o writeToFile64 writeToFile64.o
; Run with:
;     ./writeToFile64
;========================================================
; Author : Rommel Samanez
;========================================================
global _start

%include 'basicFunctions.asm'

section .data
fileName:  db "testFile.txt",0
fileFlags: dq 0102o         ; create file + read and write mode
fileMode:  dq 00600o        ; user has read write permission
fileDescriptor: dq 0
section .rodata    ; read only data section
msg1: db "Write this message to the test File.",0ah,0
msglen equ $ - msg1
msg2: db "File Descriptor=",0

section .text
_start:
    mov rax,2               ;   sys_open
    mov rdi,fileName        ;   const char *filename
    mov rsi,[fileFlags]       ;   int flags
    mov rdx,[fileMode]        ;   int mode
    syscall
    mov [fileDescriptor],rax
    mov rsi,msg2
    call print
    mov rax,[fileDescriptor]
    call printnumber
    call printnewline
    ; write a message to the created file
    mov rax,1                 ; sys_write
    mov rdi,[fileDescriptor]
    mov rsi,msg1
    mov rdx,msglen
    syscall
    ; close file Descriptor
    mov rax,3                 ; sys_close
    mov rdi,[fileDescriptor]
    syscall

    call exit

You never have to worry about all that stuff going on, because “File.Open” and its equivalents reduce the cognitive load on the developer by taking complex implementation details and removing them from your code.

As you implement your own software, sadly it is not true that all abstractions reduce cognitive load, and it’s often a very subtle process to understand what does and doesn’t make code easier to work with.

The cognitive trade-off in extracting functions

One of the hardest challenges when designing your code is understanding when the right time is to introduce an abstraction or refactor code out into functions.

Consider this example:

using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public List<string> Names { get; set; } = new List<string>();
}

public static class Program
{
    public static void Main()
    {
        var input1 = "My Customer Name";
        var customer = new Customer();
        customer.Names = input1.Split(' ').ToList();
    }
}

One of the more common refactorings that could be applied to this code is to extract a method from the logic that computes the customer names based on the input.

public class Customer
{
    public List<string> Names { get; set; } = new List<string>();
}
public static class Program
{
    public static void Main()
    {
        var input1 = "My Customer Name";
        var customer = new Customer();
        ExtractName(customer, input1);
    }

    private static void ExtractName(Customer customer, string input1)
    {
        customer.Names = input1.Split(' ').ToList();
    }
}

This refactoring, in a codebase only 3 lines long, adds nothing for the syntactic weight it introduces. Reading the code requires an additional hop on behalf of the reader – and introduces an entire extra concept (parameter passing) for its supposed benefit (providing a name to the extracted operation).

In small examples of code, where you can rationalise and read the code in a single pass, extracting this function adds cognitive load rather than removes it.

If you read that example carefully, you’ll see it relies on the fact that input1 is a well-formed string, with spaces between the names. A good counterexample, where factoring would improve readability, is the following sample that deals with UpperCamelCaseNames.

public class Customer
{
    public List<string> Names { get; set; } = new List<string>();
}

public static class Program
{
    public static void Main()
    {
        var input = "MyCustomerName";
        var customer = new Customer();

        var parts = new List<string>();
        var buffer = "";
        foreach (var letter in input)
        {
            if (char.IsUpper(letter))
            {
                if (!string.IsNullOrWhiteSpace(buffer))
                {
                    parts.Add(buffer);
                }

                buffer = "";
            }

            buffer += letter;
        }

        if (!string.IsNullOrWhiteSpace(buffer))
        {
            parts.Add(buffer);
        }

        customer.Names = parts;
    }
}

By subtly changing the requirements, the amount of code required to solve this problem explodes in size – the cyclomatic complexity (roughly, the number of decision points in the code) increases, and with it goes the ability to glance at and sight-read the code.

Refactoring that sample by extracting a well-named Split function is vital to keeping the code legible.

public static class Program
{
    public static void Main()
    {
        var input = "MyCustomerName";
        var customer = new Customer();
        customer.Names = SplitOnCapLetters(input);
    }

    private static List<string> SplitOnCapLetters(string input)
    {
        var parts = new List<string>();
        var buffer = "";
        foreach (var letter in input)
        {
            if (char.IsUpper(letter))
            {
                if (!string.IsNullOrWhiteSpace(buffer))
                {
                    parts.Add(buffer);
                }

                buffer = "";
            }

            buffer += letter;
        }

        if (!string.IsNullOrWhiteSpace(buffer))
        {
            parts.Add(buffer);
        }

        return parts;
    }
}

Some of the most illegible and hard-to-follow codebases I’ve seen fall foul of this anti-pattern of prematurely extracting implementation details from the place they matter most.

Always try to remember the principle of locality (borrowed from physics) – “an object is directly influenced only by its immediate surroundings” – and keep the implementation of logic close to the place it’s used.

Over-deconstruction of implementation can hamper readability because it forces people to context switch. If we consider the first example again –

public static class Program
{
    public static void Main()
    {
        var input1 = "My Customer Name";
        var customer = new Customer();
        ExtractName(customer, input1);
    }
}

With the removal of the implementation of ExtractName, you would be forgiven for thinking that it contained complex or lengthy logic. Since its implementation is elsewhere, it forces you to check that it does not in order to accurately understand the code. It is an abstraction that adds no value.

Extract functions when they enhance readability, rather than just incur the cost of a context switch.

The cognitive trade-off in moving code physically

If moving code between functions can either increase or decrease cognitive load, then moving code between files amplifies the difference.

Files and directories act as a larger context switch – you should expect things that exist in different files to contain different concepts within your domain, and things in different modules to describe entirely different and complete concepts.

The inverse of this is also true – splitting code out into different modules, when it is tightly coupled to concepts used elsewhere in your application, dilutes the readability of the code, making your codebase more complicated to apprehend.

Understanding this is the path towards feature-based code organisation, as the natural conclusion is that defining code closest to where it is used in the application or system is often the right answer.
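
As a hypothetical sketch of the difference (the file names are invented):

// Layer-organised – the "Users" concept is smeared across the codebase:
//   Controllers/UserController.cs
//   Services/UserService.cs
//   Repositories/UserRepository.cs

// Feature-organised – everything about "Users" lives together:
//   Features/Users/UserController.cs
//   Features/Users/UserService.cs
//   Features/Users/UserRepository.cs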

The right time to introduce abstraction

So far, we’ve covered the cost of context switching when dealing with abstractions, but do not think the message here is “don’t use abstractions in your software”. Abstractions are the thing you use to build up the internal language of your software; they’re what you use to craft your APIs; they exist to enhance clarity, not reduce it.

Let’s think about a few rules to follow:

Introduce abstractions to fully describe concepts in a codebase.

These abstractions can be files, directories, or types (this’ll vary by programming language). You should absolutely create files and types to capture core-domain concepts like “Users” and not have that logic leak throughout your code.

These encapsulations should be kept as near to the parts of the code that use them as possible.

The more distinct a concept is from the code that calls it, the further away it should be physically located in your codebase.

For example, the type or file related to “Users” should be located near the implementation of features that interact with “Users” – not arbitrarily split into another package or assembly, because the concept of “Users” is application specific and doesn’t need to be used elsewhere.

Beware prematurely creating shared Domain or Core packages; they often become dependency magnets that get inappropriately reused and are hard to untangle. Resist the urge to extract the core concepts of your application from the application itself. The moment you extract this code into its own assembly, you introduce the chance that a breaking change impacts another application that uses it, which increases cognitive load.

Conversely, abstractions that describe concepts unrelated to what your application does should be kept further and further from where you use them. Don’t bury abstractions relating to infrastructure deep inside feature code and accidentally tie your application’s behaviour to its hosting environment.
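
A hypothetical C# sketch of the difference (the REGION variable and the discount rule are invented for illustration):

using System;

public class PricingFeature
{
    // Infrastructure buried in the feature: this business rule can only
    // be exercised inside a correctly configured hosting environment.
    public decimal DiscountFromEnvironment() =>
        Environment.GetEnvironmentVariable("REGION") == "EU" ? 0.2m : 0.1m;

    // Infrastructure kept at the edge: the rule is a pure function of its
    // inputs and runs anywhere – including in a test.
    public decimal Discount(string region) =>
        region == "EU" ? 0.2m : 0.1m;
}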

When things get big – moving to repositories and published packages

As your application grows, it will become increasingly less maintainable by virtue of its size. This isn’t your fault, it’s an inevitability. The larger something becomes, the harder it is to fit the thing inside your head, and with this will come talk of decomposition.

There are two main ways of decomposing an application:

  • You split out full vertical slices of its functionality into different “bounded contexts” (this is the original microservice vision).

  • You split out lower-level concepts into their own packages or modules and maintain them separately from the application.

Decomposing an application has value – it allows you to fit more people around a problem and scale your teams, but as you sub-divide something that was once a single thing, complexity and interdependencies add to cognitive load.

If you’re lucky, and the original codebase was feature-organised, then splitting the application up should be a relatively simple task – but it will inevitably result in one of two things:

  • Shared component libraries are extracted from the original application.
  • Code is duplicated between the two new applications.

There’s a natural tension between those two approaches – duplicating the code gives the owners of the respective codebases total control over their entire application, while introducing shared dependencies adds a layer of complexity, where the applications may require coordination in order to grow.

This cost of code sharing is real and significant and should be considered carefully.

It is important that the shared code encapsulates significant complexity to be worthy of the additional logistical costs of sharing and maintenance. Smaller pieces of shared code should be trivially duplicated to avoid this cost.

The cost of reuse

But reuse is good!

I hear you cry, sobbing in the distance.

The SOLID principles!

DRY!

I’m not listening.

In the eternal tension between DRY (don’t repeat yourself) and SRP (single responsibility), SRP should nail DRY into a coffin.

Extracting something out to be an independent package means that you need to make sure that re-use justifies the cost of:

  • Maintaining a repository.
  • Ensuring the repository always has an owner.
  • Running a pull request process for that component.
  • Writing documentation.
  • Creating build pipelines to build, test, and release that component.
  • The extra leap it takes to change the component in any system that uses it.

Reuse is great because it stops you solving the same problems twice, but as soon as something gets promoted to being its own first-class concept in your software, it needs to be maintained like one.

This is work. It’s not always complicated work, and you may have good patterns and automation to help you deal with this process, but reusing components isn’t as free as you think it is.

In fact, the cost of change with reusable code is an order of magnitude higher than just putting some code in your application.

It’s ok! We’ll use a mono-repository!

In languages and ecosystems that were born in the age of package management (Node and Python especially) it has been the norm for years to split dependencies into lots of very small, often open source and reusable packages.

While at first this separation of concerns felt like a good thing, the over-eagerness to divide up applications caused increasing debugging and development friction, to the point where entire tools, like Lerna, were created just to… copy files back into the applications that they should never have been removed from in the first place.

Because those “packages” were never real, they were just the application code.

In a sense, you can look to the proliferation of the term “mono-repository” as something that tracks the presence of codebases so DRY that they might turn to dust.

The real answer is neither “use a mono-repo”, nor “split out all the things”, but in fact is this – whatever moves together, and is versioned together, and deploys together, is the same thing, and lives together.

Thoughtful Code

There’s clearly a cost and a benefit to each of the refactorings you can make to your codebase. The cost of introducing an abstraction or encapsulation has to be lower than the cost of leaving the code where it is, or of repeating it in another location.

As a guiding, thoughtful principle, it’s always worth thinking “what am I gaining from moving this piece of code or changing the way it describes a concept?” and frequently, that little bit of time and thoughtfulness is enough to combat dogma in your own work.

If you can’t articulate why you are moving something, then you probably shouldn’t do it. Here, in contrast, are some good reasons for introducing abstractions:

Creating an abstraction to describe concepts more clearly.

Extracting to provide some meaningful syntax over what the code does, rather than how it achieves it, is a good way to make your code easier to understand. Much like the name parsing example above, make sure that you’re encapsulating enough behaviour for this to be valuable.

Splitting out something that is expected to grow large.

In many codebases you start building something that is clearly a stub of an implementation – a placeholder for something that’ll become bigger. Appreciating that this is an entirely different concept from the rest of the code around it is a good reason to either provide an extensibility point or move your nascent implementation into another file somewhere else.
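
As a hypothetical C# sketch of such an extensibility point – one implementation today, with room for the concept to grow without touching its callers:

public interface INotifier
{
    void Notify(string userId, string message);
}

// The only implementation for now; SmsNotifier or PushNotifier can be added
// later without changing any feature code that depends on INotifier.
public class EmailNotifier : INotifier
{
    public void Notify(string userId, string message)
    {
        // Stub: the real email-sending implementation will grow here.
    }
}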

Increasing testability or developer ergonomics.

It’s a common and good thing to extract out particularly significant pieces of code so that they can be independently isolated and tested. It’s vital to keep important code factored out and free from dependencies of the system around it.
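
For example, a minimal sketch (the late-fee rule is invented): keeping the significant logic as a pure function, free of the clock, database, or file system around it, makes it trivially testable in isolation.

public static class LateFeeCalculator
{
    // No clock, no database – the caller passes in everything needed,
    // so a test can exercise every branch directly.
    public static decimal Calculate(int daysOverdue, decimal dailyRate) =>
        daysOverdue <= 0 ? 0m : daysOverdue * dailyRate;
}

// e.g. in a unit test: LateFeeCalculator.Calculate(5, 1m) == 5m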

For safety.

In some cases, it makes sense to physically move important components out of your application and into a module or package if you explicitly want to introduce ceremony around its modification – either for peer review or safety reasons.

To reduce visual noise at the call-site.

Sometimes it’s important to factor code out to reduce the cognitive load in reading the code that depends upon it. Anything that improves readability and clarity in your application is an excellent reason.

To encourage reuse.

While it’s useful to understand the reasons you wouldn’t want to reuse code, it’s just as important to understand that reuse is a powerful tool. It can help keep alignment across systems and prevent the waste of fixing the same problems time and time again.

Appropriate reuse should be encouraged, just so long as you’re aware of the associated costs.

Does this make my code good?

Whenever you introduce anything into code, being thoughtful about the reasons why – and about the developer experience of the resulting code – should always be your guide.

There isn’t really such a thing as “good code” – there’s just code that works, that is maintainable, and is simple. There’s a good chance you’ll end up in a better place if you spend time and effort making sure the cognitive load of your application is as low as possible while you code.

And on the way, you might realise that removing some of the abstractions in your codebase is a simpler path than over-design.