Archive for September, 2013

Using SimpleServices to build a runnable and installable windows service

Friday, September 27th, 2013

This is an introduction to SimpleServices, showing how you can trivially create runnable, installable, self-contained Windows services in C#, without resorting to the piles of boilerplate normally associated with adding and configuring service installer classes and their associated designer views and generated code.

Using #NancyFx to make "invisible" applications visible

Thursday, September 26th, 2013



Watch in 720p to read the code.

A very brief introduction to the #NancyFx web framework, and how it can be embedded in existing applications to help build monitoring tools that radiate real-time statistics.

Part 2 will be a deeper dive into building small web apps with Nancy for monitoring software, expanding on this concept.

Building software that’s easy to monitor and administer

Monday, September 23rd, 2013

When you’re building software that’s central to what your company does, it’s very important that you build software that’s easy to support.

This might sound like obvious advice, but if something goes wrong, you're likely to be the person called on to support the live system, so make sure the system you build exposes the information you'll need to troubleshoot it. When times are good, it's also important to be able to see the status of a system that's currently in flight. People will ask, so you may as well arm yourself with the information you need.

Here’s some simple, practical advice for making systems monitoring-friendly.

Make sure you have logging

Again, seemingly obvious advice, but make sure you’re logging important information. Logging is a solved problem across many languages and frameworks, so don’t reinvent it. Use Log4X (.NET/J/whatever), and make sure the logs roll and are easily available to everyone. There are some great hosted services out there that support syslog searching and indexing – it’s well worth finding one you like.

Track and aggregate errors and exceptions

Understanding what constitutes a “normal” amount of errors in your application is very important. There are plenty of reasons for websites to generate errors under traffic (web-crawlers requesting invalid URIs, poor or malicious user input), whereas a single error in a payment processing system is often critical. Spend time understanding the error profile of your application, and fix the bugs that cause the “expected” errors so that “real” errors don’t get mistaken for noise. There are plenty of services out there to help you track and fold errors, particularly around .NET and JavaScript projects (though support is often much wider). Watch the general trend of errors over time, along with newly introduced ones, to understand how to respond to errors in your software after launch.

Windows software? Use the event log!

Log files are great, but some solid event log messaging and a few custom performance counters in your application will make that special sysadmin in your life very happy. There are plenty of tools that can monitor these logs for messages, status codes and spikes in performance counters (including Microsoft’s own SCOM, along with lots of popular third-party tools).

Building system services? Don’t hide them!

System services are common, and just as easily forgotten. If you’re writing “invisible” software, it’s important to force it into the limelight so people don’t forget it’s there, and especially so they notice if it’s not running. As good practice, I always recommend serving a monitoring dashboard from inside the system service itself to ensure people know it’s there. I’m a big fan of embedding a web server in any system service that would otherwise be invisible, providing a monitoring dashboard with the kind of statistics you’d need during troubleshooting. Your application knows what’s important to it, so measure stats in real time and surface them over HTTP – everyone knows how to use a browser, and the presence of the status page is a great way to monitor availability. If you want to do a great job, you can use graphing libraries and expose the data as JSON for other systems to query. Consider surfacing things like “average time to process a request”, “number of failures since launch” and “throughput” – metrics that’ll help you when you’re investigating live issues. If you’re working in C# / .NET, I highly recommend using NancyFx as an embedded web server in your system services.

Building APIs? Measure performance and make use of response headers to message information

The performance of your APIs will help the apps that depend on them flourish or fail – and there’s nothing more frustrating for an API developer than a poor feedback cycle. You should measure, in memory, in real time, per node: the number of requests you’re serving, average requests per second, per-method timings, the rate of errors, and the overall percentage of calls that fail. Return the time taken on the server as a response header (something like “X-ServerTime”) to help callers debug any weird latency issues they encounter, and offer this information over the API itself, either via a reporting or status call, or through a web dashboard. When I was working at JustGiving, we put a lot of effort into the developer experience, serving both the API docs and single-node statistics to the public per node, and it saved us weeks of debugging and back-and-forth. You can check out an example of what we did here: JustGiving single node stats page – not only did it help us diagnose problems, it helped people coding against our APIs verify error behaviour when they experienced it.

Whenever you’re building anything, remember that you, or someone you work with, is going to be the person that has to fix it if it fails. So be nice to that person.

A source control strategy for Git that “just works”

Wednesday, September 11th, 2013

Most of the time when people start to talk about “source control strategies” I flippantly respond with “if you need a strategy for your source control, you’re probably doing it wrong” – and to a point, I want to echo that here. The absolute best thing source control could ever be is a magic box in which you put your bytes – one that’s durable, supports quality version tracking and, most importantly, gives your bytes back to you exactly the way you put them in. Any strategy required on top of that should be absolutely minimal, to reduce mental overhead. Thankfully, git is fantastic at providing those core qualities of a good revision control system, but true to form, its flexibility can leave people wondering what the “right way” to branch, merge and tag is.

First though, some assertions:

  1. A branch and a tag in git are functionally equivalent pointers to a hash
  2. When building and releasing software, it’s important to know where the last production build came from
  3. Continuous integration is not optional – good CI is vital to building quality software
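The first assertion is easy to see for yourself: a branch and a tag are both just refs that resolve to a commit hash. Here's a quick sketch in a throwaway repository (the names `release-branch` and `v1.0.0` are purely illustrative):

```shell
# Demonstrate that a branch and a tag both point at the same commit hash.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "hello" > readme.txt
git add readme.txt
git commit -q -m "initial commit"

git branch release-branch   # a branch pointing at HEAD
git tag v1.0.0              # a tag pointing at HEAD

# Both refs resolve to the identical commit hash:
git rev-parse release-branch
git rev-parse v1.0.0
```

The only practical difference is that a branch moves as you commit to it, while a tag stays put – which is exactly why tags make good release markers.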

Given these assertions, in my opinion there is a fairly obvious and friction free way to use git.

A two branch strategy

I prefer a two branch strategy, and have implemented this in a number of organisations and projects.

The two branches I suggest are:

  • Master
  • Integration

Master is git’s default “main” / “trunk” / “where the code is” branch that you get when you create a git repository. If you’ve used git, you’ll have seen a master branch. Integration is the “working” pre-release branch that your CI server should be continuously building.

The core idea is that master should always represent the last released version of your codebase, while integration provides a branch for your developers to merge their code in many times a day. Your integration branch is then the source of builds for a staging or testing environment in a continuous delivery system.
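Setting this up is a one-off job. A minimal sketch in a throwaway repository (the `-b master` flag assumes git 2.28+; the file contents are illustrative):

```shell
# Create a repository with the two branches: master and integration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email "demo@example.com"
git config user.name "Demo"

echo "v1" > app.txt
git add app.txt
git commit -q -m "first commit"

# integration starts life as a copy of master
git checkout -q -b integration
git branch
```

From here, your CI server watches integration, and your release pipeline watches master.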

Here are some common scenarios, and how they work in a two-branch master/integration setup:

I want to deploy the current version of my code

  1. Sign off your integration (either manually or automatically)
  2. Tag the current head of integration with a release or version number
  3. Merge integration into master
  4. Release master

I want to implement a new large feature or user story

  1. Branch off master (last production release) into a short lived feature branch
  2. Write your code
  3. Merge your code into integration for your CI server to test and deploy

I want to implement a fairly trivial change (maybe just a single commit)

Just go ahead and commit that change to integration.

We have two versions of our software, and we need to bug-fix version 1

  1. Find the appropriate version 1 release tag and create a new branch from there
  2. Make your changes on this branch
  3. Compile and build a release
  4. Tag the head of your bug-fix branch with the new release version
  5. Merge your bug-fix branch into integration (resolve any conflicts)
  6. If that merge was conflict-free, go ahead and merge your bug-fix branch into master as well; otherwise, merge integration into master to carry across the conflict resolution

Two developers are working on similar features on feature branches but keep getting merge conflicts!

Don’t panic. If you’re continually conflicting while merging your feature branches into integration, it’s an indication you should probably just be sharing a branch and solving this problem upstream.

I’ve been working on my feature for weeks, and when I merged into integration, I had to resolve a huge merge, help!

You need to merge your feature branch into integration more often than you currently are. Long-lived feature branches are a huge anti-pattern because they entirely remove the benefits of continuous integration. Instead of relying on a branch to make your changes “safe”, consider ways to make your code “inert” while you work on a longer-lived feature (configuration toggles / feature “walling”). Bring your changes into integration as soon as humanly possible to avoid these kinds of conflicts.


Master is production. Integration is staging. When writing new features, always branch off master, and merge into integration. When you want to promote staging to production, merge integration to master.

This branching strategy provides:

  • A known version of the released code
  • A method to bug-fix previous versions
  • Full support for continuous integration
  • A workflow so simple that you don’t pay a mental cost using it
  • Great support for continuous delivery (auto-deploy master to production, and integration to staging, every commit. To release, just merge to master)