
Continuously deploying Azure services using AppVeyor and GitHub

May 13th, 2014

I put a talk together for the UK Azure user group’s first lightning talks event (hosted by the kind folks at JUST EAT) this evening.

Here’s the slide deck, with an almost-live-coded video where I’ll walk you through putting together an automated deployment solution for Cloud Services with AppVeyor – a cloud-hosted CI platform for .NET.

If you’re interested in hearing more about this or similar topics, either as a user-group slot or professionally, get in touch.

Exploring QueueBackgroundWorkItem in ASP.NET and Framework 4.5.2

May 9th, 2014

One of the more frequently asked questions in ASP.NET web development is “can I spin up my own thread and do some work from my request?” – and for a long time, the default answer was “that’s a terrible idea, you shouldn’t do it”. The framework designers said don’t do it, and people who did it ran into fairly terrible problems – mostly because you don’t *really* control application lifecycle in ASP.NET; IIS is within its rights to recycle the process at any point.

For the last couple of years, WebBackgrounder has been the sanctioned-but-with-caveats way to run background tasks inside your web app – Phil Haack, the author of WebBackgrounder, has a blog post outlining the perils of background tasks inside ASP.NET – and explains his motivations for publishing the package in the first place.

But… it’s a reasonable request, isn’t it?

Truthfully, the motivation behind adding this in a point release of the framework is likely that it’s a scenario that comes up for a lot of people.

Want to do any fire and forget work? Before this, your users are waiting for you to finish before they get a response to their request.

Async and await make it easier for the server to manage context switching, but they don’t get the response back to the user any quicker. Some people started firing off Task<T>s to do the hard work, but those tasks are vulnerable to app domain recycles – while they’re probably good enough, the behaviour isn’t guaranteed.

Enter HostingEnvironment.QueueBackgroundWorkItem

As part of the release notes for .NET Framework 4.5.2 there was a single bullet point:

New HostingEnvironment.QueueBackgroundWorkItem method that lets you schedule small background work items. ASP.NET tracks these items and prevents IIS from abruptly terminating the worker process until all background work items have completed. These will enable ASP.NET applications to reliably schedule Async work items.

The key word here is reliably. If you use this new HostingEnvironment queue in an ASP.NET app, any background task that can complete within the 30-second graceful shutdown period is guaranteed to execute safely.

You can use this functionality trivially in your MVC apps using the following snippets:

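As a minimal sketch of what that looks like in a controller – the action, order parameter and email method here are illustrative, not from the original snippet:

```csharp
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;
using System.Web.Mvc;

public class OrdersController : Controller
{
    [HttpPost]
    public ActionResult Confirm(int orderId)
    {
        // Queue fire-and-forget work. ASP.NET tracks the work item and delays
        // worker process shutdown (within the ~30s graceful window) until it completes.
        HostingEnvironment.QueueBackgroundWorkItem(async (CancellationToken token) =>
        {
            await SendConfirmationEmailAsync(orderId, token);
        });

        // The user gets their response immediately; the email sends in the background.
        return RedirectToAction("ThankYou");
    }

    private Task SendConfirmationEmailAsync(int orderId, CancellationToken token)
    {
        // Illustrative placeholder for real email-sending code.
        return Task.Delay(100, token);
    }
}
```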

Just remember – if your tasks take longer than 30 seconds, and the app pool recycles, then all bets are off again and your tasks will be terminated for failing to complete within the graceful shutdown window.

In order to use these new features, you’re going to need to target framework 4.5.2 in your projects. You’ll need to do an in-place upgrade of the framework on all your web servers and build servers, and you’ll need to be mindful of some other changes (the change in behaviour for ViewState MAC will probably be of concern if you also host WebForms pages).

You can get the bytes to install the pre-reqs here:

The HostingEnvironment class is a static class – so if you want to write any tests that assert that your background tasks are being queued, you’re going to have to wrap and inject it.
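One way to do that wrapping – a sketch, where the interface and class names are mine, not part of the framework:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;

// A thin seam around the static API so tests can assert work was queued.
public interface IBackgroundWorkQueue
{
    void Queue(Func<CancellationToken, Task> workItem);
}

// Production implementation delegating straight to HostingEnvironment.
public class HostingEnvironmentWorkQueue : IBackgroundWorkQueue
{
    public void Queue(Func<CancellationToken, Task> workItem)
    {
        HostingEnvironment.QueueBackgroundWorkItem(workItem);
    }
}
```

Your controllers then take an IBackgroundWorkQueue dependency, and a test double can simply record the delegates it receives.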

This is a nice little feature in the latest update to the framework that should let developers remove some of the flakier code that crops up in web apps for common tasks like sending emails, updating search indexes and authoring asynchronous jobs.

Slidedeck + Video – Profiling applications with dotTrace

May 2nd, 2014

Performance profiling appears to have a high barrier to entry if you’ve never done it before. As part of some work for a client, I’ve been helping teach people how to profile code using JetBrains dotTrace. This slide deck is part of a larger session to help developers understand the numbers they’re seeing so they can find performance bottlenecks in their code.

The embedded YouTube video is available here: https://www.youtube.com/watch?v=VrFeayu9LBk

JetBrains publish a wide variety of videos covering the use of their profiler, so I urge you to check them out if you’re interested in deep diving.

Writing technical stuff that people want to read

April 23rd, 2014

As a development community we’re better together. We learn more, we build more awesome things and we’re happier. Blogs and Q&A sites like Stack Overflow highlight the growing trend of self-taught programmers learning from the internet rather than from schools or universities. This isn’t new – programmers have been self-learning for decades – but it used to be from books rather than from things written by their peers in blogs and forums.

There are plenty of people who are interested in writing about their experiences – but putting yourself out there and actually doing it can be intimidating. It’s easy to think that you don’t have the skills to articulate your opinion or that you can’t write.

The barriers are real – so let’s talk about them…

 

I’m scared of writing!

Writing is like any skill – it takes practise and discipline, and you’re probably not going to be amazing at it straight away. For the vast majority of people, being scared of writing is actually just the fear of sounding stupid – perhaps you don’t think your opinion on something counts, or that there are other smarter people talking about the same sort of things that you’re working on.

The truth is that you will only ever get better at anything by doing it. If a single person benefits from reading about your experiences, you’ve changed the community for the better. There will always be somebody smarter or more knowledgeable than you. If they disagree with what you say and respond to what you write, you learn how to be better – if they don’t, you help others learn.

You will get better – and the best way to get better is to write regularly. If you’re committed to improving, writing every day will help a lot, but practically, once a week is probably enough. When writing about tech, there are lots of small things worth sharing – so write about those, keep it short, and practise.

 

But I can’t write! How do I get started?

There are some easy tricks to get started – remember high school essay-writing? It’s a lot like that. People aren’t going to be interested in what you have to say by default, so you need to hook them and sell them with your title and first paragraph. This is more important in technical writing than other pieces because your reader is often looking for something specific, and you have to give them the impression that you’re dealing with the topics they’re looking for.

When I started high school I had a teacher who described writing essays as

“Tell them what you’re going to tell them.
 Tell them.
 Tell them what you told them.”

…this stuck with me, and it’s a useful structure to follow if you feel like you don’t know how to get started. You open with the hook, follow with the detail, and then close with the key learnings. It’s helpful to outline these sections of your piece as bullet points and then expand them into prose – rather than “just writing”, just like an essay plan in school.

Once you’ve nailed the core of your topic, read it twice and edit it down, cutting out complex or inaccessible language, explaining any jargon and removing filler words. This’ll make sure that what you’ve written is tight and to the point.

 

Telling stories

The purpose of most technical writing is to educate, but good writing informs and entertains. You need to learn to strike this balance to make what you’re saying easy to follow and engaging. If you look at some of the most popular technical writers of the last decade, many of their most popular pieces are entertainment, with education adding substance. Jeff Atwood and Joel Spolsky both tell stories while talking about technical things. Stories help these writers establish their voice – making it more likely that readers will return and read subsequent pieces based on the “way they tell them”, regardless of topic.

With those authors, the stories that surround their technical posts are “framing devices” that help readers relate to the topic, sometimes taking the form of allegories to hook the reader. They’re used as introductions and foreshadow the technical parts of their articles, giving them context and motivation.

Contextualising with stories makes the concepts in their writing relatable to people who haven’t been exposed to the exact scenario they’re describing, but may have experienced something similar – helping the reader understand how they can apply or better understand the topic.

Framing your writing with stories helps people connect with what you’re writing, and its prevalence in the best technical writing helps offset some of the dryness that plagues most technical documentation.

 

But this is technology, it’s inherently dry and complex!

Just because technology is detailed and tightly articulated doesn’t mean that writing about it has to be dry. You’re not writing xmldoc/javadoc documentation – you’re writing content that people need to understand – and there’s talent in distilling complex topics and dealing with them plainly. It’s easy for developers to fall back on “just pasting the code into a blog post and expecting someone to read it”, but people are unlikely to read through it.

Technology is complex, though, and if you’re writing deep dives into complex topics, you’ll likely be writing for a specific and skilled audience. Don’t make the mistake of thinking that because your audience is proficient, they won’t appreciate well-articulated guidance.

Jon Skeet’s blog is a great example of writing that edges towards “posting pages of code” while still engaging the reader with narrative. Jon posts snippets of code, building up to larger complete examples – it’s literally “textbook style”, where small pieces of information, narrated and explained, are eventually combined together to form a whole example. This is the correct way to help people work through a detailed code sample, only hitting readers with “the big wall of text” at the end, when they’re equipped to understand it.

Anyone who already understands all the text above it will happily scroll past the explanation looking for the GitHub link – letting you serve both audiences.

 

Layout and formatting

Layout and formatting is a discipline to learn, and can be easier to pick up than the softer narrative skills. Formatting and layout help keep your writing scannable, which is especially important when you’re writing for the web, where people tend to scan content before committing to reading it.

There are a few tricks you can use to help people read your work:

  • Splitting your piece into headed sections with subheadings helps readers “get the gist”
  • Lists can be useful to guide a reader through specific advice (meta!)
  • Images or illustrations help break up longer pieces and can help grab attention
  • Highlighting important phrases with bold or italics helps people skimread
  • When writing for the web, use links to good further resources where applicable

If you’re trying to build a relationship with an audience, chunking longer posts into series can encourage repeat readers – it’s a good way to keep the momentum going when you’re just starting up, but beware starting something that you don’t intend to finish, as it’ll only antagonise your audience. If you’re going to “do a series”, cut a longer piece, rather than writing them piecemeal.

In general, highlight important content, and break up large amounts of text with paragraphs, images, and sub-headings. These splits should be informed by the outline of the piece that you started with, and should be natural, and you’ll get better at them over time.

 

Things to avoid

There are a handful of things that’ll put people off reading very quickly:

“The big wall of text” – characterised by a complete lack of formatting or flow. Pieces written without attention to layout put people off because they can’t be scanned easily. The longer they get without attention to form, the less likely people are to read them.

“The business domain guy” – people are looking for lessons and flavour from your experiences; they don’t want to learn your entire business domain to understand the concepts you’re trying to explain. As a guideline, if you wouldn’t care to know it about somebody else’s business, your readers don’t want to know it about yours. If you’re using examples, it’s better to genericise them than to place the burden of understanding on the reader from the start – just explain briefly how the example relates to your specific problem domain.

“The nerd-rager” – as a simple guide, don’t loudly complain about anything that you don’t intend on suggesting a solution to – no good comes of angry people on the internet, so make sure you’re offering productive and constructive criticism rather than raging.

“The rambler” – normally the result of not re-reading and editing the piece when you’re done. Resist the urge to publish what you’ve written without cutting it down, because there will always be something you can cut.

“The giant code sample” – GitHub or BitBucket (oh okay, or SourceForge) are the places for enormous, stand-alone code samples – link to them, and resist the urge to post huge code samples without narration.

Steer clear of those archetypes, and you’ll be safe.

 

Wow, it’s *just* that easy?

There’s a lot of detail here, but the most important thing to remember is to avoid analysis paralysis while trying to find the next big thing to write about – just share what you know or what you’re learning and get started. You won’t be the best writer overnight and you’re not just going to stumble into your voice and find a great audience.

If you’re blogging, you’ll want to make sure you share what you write – at least on Twitter and Facebook – and perhaps with the more contentious audiences on Hacker News and Reddit. You’ll get feedback, sometimes negative, but don’t let it discourage you.

As a technical community, we need people to share what they know, and the absolute worst thing that will happen is that nobody will read what you publish – so don’t worry, everyone starts somewhere.

 

Slides:

Doing Open Source Right

March 31st, 2014

A brief history of free* and open source software…

The rise of free and open source software is reasonably well documented, with significant projects built and released from the 1970s onwards, starting with Emacs and followed later by the first version of the GPL. The momentum gathered, pushed on by the popularity of Linux, Perl, Apache and the LAMP stack in the mid-to-late 1990s.

Free or open source software now drives a huge portion of the web, and during the late 2000s and early 2010s the popularisation of source code sharing sites like SourceForge, GitHub and BitBucket – along with the realisation that billion-dollar businesses could be built and operated on open source software – pushed more software towards being “open by default”. What was once perceived as a risk (“giving my property away for free”) started to gain traction in private business, and even traditionally open-source-hostile organisations such as Microsoft started taking pull requests and publishing their source code.

This context is important – even if you’re not the kind of person who previously would’ve ended up writing and publishing your own software, there’s an increasing chance that you now will because the organisation you work for decides open source is worth investing in.

But I’m scared, I don’t understand why we’re doing this?

As a developer, there are lots of passive benefits to “coding in the open” – it’s a great way to learn, it’s a great way to contribute back, and as an individual, it’s the only real opportunity you have to legitimately “take your work with you” from job to job. Open source software can become your professional portfolio, and as it does, getting it right is important.

As an organisation, the motivations behind adopting open source software are obvious – “hey free software!” – but the benefit in publishing your own open source is a little more obfuscated. There’s a moral aspect to it – if you’re building your business on open source software, it’s perhaps the “right” thing to do to give back to the community you’re benefiting from. Giving back isn’t going to make you money – it’ll probably cost you some – but there are good reasons why publishing or contributing back to open source projects is a rational thing for your business to do.

Open source is a great way to attract talent – hiring excellent people is hard and developers who enjoy contributing to open source software will be drawn to businesses willing to pay them to do just that. Their enthusiasm is infectious and will make your teams better. It’s a solid publicity tool to raise the profile of your organisation in the tech community. It’s a good way to enhance confidence in your business amongst technical people – if they can see your code, and it’s good, you’ll win supporters. In the end if you get external contributions back to your open source projects, that’s a nice thing to have.

If that all sounds intimidating, it’s ok – the fear of continual evaluation and scrutiny is human, especially when you consider we’re an industry of professional amateurs learning much of what we do as we go to keep up with the pace of change in the tech industry. As you increase your contributions, you get more familiar with the kind of feedback cycles open source gifts you with, and hopefully it all becomes a lot less intimidating – everyone is in it together. Reading lots of code makes you a better developer, and contributing back makes you better still. You’ll learn from experts, and maybe teach somebody else along the way.

So let’s run an open source project!

Like everything else about building great software, open sourcing software requires discipline and effort, but there are some real-world, practical tips for making your open source software successful. Remember that open source is a commitment – you can’t trivially “un-open” your software – once it’s done, it’s done.

Don’t surprise potential users or contributors

Follow a predictable repository layout. There are some strong language neutral conventions for open source project topology that people have grown to expect. People know to look for familiar signposts in plain text or markdown; README, LICENSE and CONTRIBUTING files are essential and the guidance in them should always be accurate.

The README serves as the top level overview and getting started guide for your project. Compilation instructions, quick-start examples and links to any deeper documentation are essential. The CONTRIBUTING guide should give potential contributors useful information and your LICENSE file will likely be standard and people will expect to be able to see it.
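As a sketch, the conventional top level of a repository (file names illustrative) looks something like this:

```
README.md        – overview, build instructions, quick-start examples
LICENSE          – the full text of your chosen license
CONTRIBUTING.md  – how to submit changes, test expectations, conventions
src/             – the code itself
tests/           – the test suite
```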

Make sure building and testing is easy

Regardless of language or platform, you should stick to the established language conventions in that ecosystem.

People will give up on your project if building and testing is difficult or requires a lot of manual configuration. Practically all mainstream programming languages have mature package management solutions, so use them. If you’re doing Ruby, make sure you’ve got a working Rakefile; if you’re in .NET, I’d expect “F5” to build and run your project. Keeping that barrier to entry low is essential.

Where possible, leverage cloud continuous integration services to provide confidence in the current build of your software – it’ll help potential contributors know if they’re dealing with “works on my machine” problems.

Guiding contributors

The contributing file is your contract with potential contributors. It should give them useful information. You should make sure that you guide them towards running your test suite, explain how you’d prefer any pull requests or code submissions to be delivered, outline the coding conventions, and highlight key contributors.

Obviously this is a two way street and in order to encourage high quality contributions, you have to keep your end of the deal. It’s common to require a failing test and a fix for any contribution – this’ll make your life easy, but if you don’t publish a decent test suite or set of unit tests, you can’t realistically expect it. A lack of tests will dissuade contributions – would you change some code without knowing what the impact could be?
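A minimal CONTRIBUTING file expressing that contract (the wording and script name are illustrative) might read:

```
# Contributing

1. Open an issue describing the change before you start work.
2. Fork, branch, and run the test suite (see build-and-run-tests).
3. Include a failing test alongside your fix in the pull request.
4. Follow the existing code conventions; keep diffs focused.
```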

Be responsive and communicative

You might not want all the contributions that come your way, and it’s perfectly fine as a project owner to say no to a change that isn’t relevant to the software – so make it clear what kinds of changes you’re interested in. The simplest way to do that is to guide users to create an issue in an issue tracker before they start working on a code submission. This stops people spending time and effort on code that you later reject, and prevents animosity between potential contributors.

It’s also useful to create a roadmap of issues in your issue tracker, flagging simple changes that may be suitable for first time contributors if you’re looking to encourage submissions. This is a great way to gain confidence in submissions and a clear way to communicate the direction of development.

Finally, it’s important to actually respond. Respond to issues and pull requests in a timely manner, keep a few canned Twitter searches or alerts for people struggling with your software, and if need be, use free tools like Google Groups to encourage searchable discussions that might help others later, rather than private email conversations.

Don’t be afraid of criticism

By publishing your code, you’re welcoming comments and feedback – it’s not always going to be positive, but you should do your best to steer it towards being constructive. It’s worth remembering that if your code made somebody’s job easier, or life better, it was worth publishing, even if it’s not the best code you’ve ever written.

Selecting a reasonable license

If you’re releasing source code, you must license it, even if you just want people to be able to “do whatever they want” with it. Choose A License offers excellent overviews of the most popular open source licenses, but the really short version is this:

  • If you care about users of your code contributing back and enforcing “software freedom”, choose a “copyleft” license, probably the latest version of the GPL. If you’re publishing a software library, you probably want the LGPL.
  • If you just want to put the code out in the open, and not have anyone try and use it against you in a court when they destroy their business with it, go with the MIT license.
  • If you’re worried about contributors submitting patented code and were considering the MIT license, you should probably go with the Apache license.

These are the most popular licenses, and they’ll probably cover what you’re trying to do. It’s worth noting that the GPL is a viral license, requiring software that includes GPL’d code to also be released under the GPL – it’s central to the philosophy of the FSF and the free software movement, but can be a barrier to adoption in for-profit organisations who don’t want to open source their own software.

Things to avoid

There are a few anti-patterns when it comes to sharing source code.

Using an open source repository as a “squashed, single commit mirror” defeats much of the purpose. Compressing your commits into single “Version 1.2”, “Version 1.3” commits hides the evolution of the software from people who might have a genuine interest in the changelog. This leads people to believe that the software is “open source in name only”, and it’s hostile towards contributions.

Avoid pushing broken builds to the HEAD of your repository – if need be, maintain a development and a master branch, with only good, clean, releasable code going into master. This is just good practice, but when people you don’t know could well be building on your codebase, a broken build becomes worse than just ruining your colleague’s day.

A quick recipe for success

We’ve talked about a broad range of topics that will help you run an open source project responsibly, and why you’d want to do that – but let’s nail down a specific pattern for running your first open source project using GitHub.

  • Sign up for a GitHub personal or company account (free for open source)
  • Select a license (Apache or GPL are sane defaults)
  • Publish your code in a Git repository on GitHub
  • Publish tests with your code
  • Use GitHub issues to construct a roadmap of future features
  • Tag some future features as “trivial” and suitable for new contributors
  • Include a contributing.md file that asks for a test and fix in a pull request
  • Discourage people sending pull requests of refactors or rewrites without prior discussion
  • Include obvious scripts in the root of your repository called things like “build-and-run-tests” to give people the confidence to contribute
 
*Footnote: Free Software vs. Open Source Software

There has long been contention between the concepts of “free software” and “open source software”, and while “all free software qualifies as open source, but not all open source software is free as in freedom”, I’m going to avoid the distinction here. If you’re interested in the discussion around this, the summary on Wikipedia is a good place to start, along with GNU’s article “Why open source misses the point”.

If you’re not familiar with the distinction, the free in “free software” is free as in “liberated, independent, without restrictions”, while many mistake it to mean “costs no money”. This is often explained as “free as in speech, not free as in beer”, which I’ve never thought of as an especially informative one-liner.

ASP.NET MVC 101 – Extensibility Points

March 20th, 2014

Here are the slides from a workshop I ran recently that highlight the pluggable parts of ASP.NET MVC, as a primer to “doing things the ASP.NET MVC way” – targeted at people who had mostly been exposed to “ASP.NET Classic”.

HTML Image tags and onerror javascript handlers

March 5th, 2014

I’ve been doing web stuff for a long, long time (since about 1997) and it never ceases to amaze me that sometimes the most trivial things can pass you by.

I was troubleshooting some weird behaviour on a client site this week – some WebDriver automation tests would sporadically hang forever, seemingly waiting for Amazon CloudFront to serve a file. Calling bullshit on the theory that “oh, CloudFront is just being funny”, we decided to dig a little deeper and see why exactly there were images that upon failing to load, would hang forever without timeouts.

Like a good little soldier, I skipped all the diagnostics and just went and looked at the code, where I discovered an image tag that looked like this:

<img src="some-broken-image-link.jpg" onerror="LoadDefaultImage();" alt="caption" />

I’m not going to lie, my first response was “that cannot possibly work, I’ve never seen anything that looks like that before” – but lo and behold, after visiting a W3Schools link older than time itself, it appears that in HTML (3+? 4+?) all image tags have an onerror javascript handler baked in that gets invoked if the image fails to load.

When you work in technology, everyday really is a school-day, and things that apparently are obvious, are frequently completely unknown to you. So I did a couple of cursory google searches…

“img src”: About 12,110,000,000 results (0.21 seconds) 
“img onerror”: About 7,270,000 results (0.21 seconds)

So, about 0.06% of the people out there who have heard of img tags have heard of the onerror handler – which probably qualifies it for a blog post.

Why is this useful?

You like making websites? You have a load of user generated content? You know what sucks? Broken images. They break your design, they make everything look ugly, and they take time to be requested and time-out. Like your websites to be fast? You’re going to want to get rid of these dead images, and the first step in getting rid of them, is knowing about them.

Firstly, you can use the onerror attribute of the img tag to change the img.src and swap that nasty red cross for a nicer default image that doesn’t break your layout.

Secondly, you could go a step further and fire an analytics event to let you know you’ve got dead images rendering in your pages so you can fix them.

Thirdly, with a bit of javascript magic, you can use HTML5 data attributes to “safely load” images that may or may not exist, making sure you switch out bad images for nice defaults without anyone ever noticing.
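The first of those techniques needs nothing more than an inline handler – the fallback path here is illustrative:

```
<img src="photo.jpg"
     onerror="this.onerror=null; this.src='/images/default.png';"
     alt="A photo, with a safe fallback" />
```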

I put together a trivial example for my client of a page that does just that, has a bunch of images that may or may not exist, and swaps them out silently for a nice “not found” image when the DOM is ready.

<html>
<head>
	<title>Image errors</title>
	<style>
		.some-style {
			border: 10px solid black;
			width: 100px;
			height: 100px;
		}
		.safelyLoadImage {
			display: none;
		}
	</style>
</head>
<body>

<img src="" data-imgsrc="something.jpg" class="some-style safelyLoadImage" />
	
<script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
<script src="http://code.jquery.com/jquery-migrate-1.2.1.min.js"></script>
<script>
	$(function(){
	
		var notFoundImage = "http://upload.wikimedia.org/wikipedia/en/thumb/d/da/Ziltoidtheomniscientcover.jpg/220px-Ziltoidtheomniscientcover.jpg";
		var realImageSrc = $(".safelyLoadImage").data("imgsrc");
				
		$(".safelyLoadImage").attr("onerror", "this.onerror=null; this.src='" + notFoundImage + "';");			
		$(".safelyLoadImage").attr("src", realImageSrc);
		$(".safelyLoadImage").removeClass("safelyLoadImage");		
	});
</script>
	
</body>
</html>

 

What was the weird timeout thing in the end?

Turns out, if you don’t remove the img tag’s onerror handler, and then change the img.src to another image that fails to load, most browsers get into a nasty loop – that’s why in the code sample above we’re setting this.onerror=null; in the onerror handler. Suffice to say, WebDriver wasn’t a huge fan of infinitely loading broken images.

Broken images be damned.

Introducing: ReallySimpleFeatureToggle

March 3rd, 2014

I’ve just open sourced a new NuGet package that’ll help you prefer feature toggles over feature branches.

It’s derived from some battle-tested code used across several systems over the last couple of years and should help you trivially introduce feature toggles into your codebase.

The super happy path looks a little like this:

PM> Install-Package ReallySimpleFeatureToggle

Consider this configuration:

   
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="features" type="ReallySimpleFeatureToggle.Configuration.AppConfigProvider.FeatureConfigurationSection, ReallySimpleFeatureToggle" />
  </configSections>

  <features>
    <add name="EnabledFeature" state="Enabled" />
    <add name="DisabledFeature" state="Disabled" />
    <add name="EnabledFor50Percent" state="EnabledForPercentage" randompercentageenabled="50" />
  </features>
</configuration>

With this usage example:

    var config = ReallySimpleFeature.Toggles.GetFeatureConfiguration();

    if (config.IsAvailable(FeaturesEnum.EnabledFeature))
    {
        Console.WriteLine("This feature is clearly enabled");
    }

    if (config.IsAvailable(FeaturesEnum.DisabledFeature))
    {
        Console.WriteLine("You'll never see this.");
    }

    const int maxTries = 50000;
    var wasTrue = 0;
    for (var i = 0; i != maxTries; i++)
    {
        var recalculatedConfiguration = ReallySimpleFeature.Toggles.GetFeatureConfiguration();
        if (recalculatedConfiguration.IsAvailable(FeaturesEnum.EnabledFor50Percent))
        {
            wasTrue++;
        }
    }

    Console.WriteLine("Enabled for 50% was enabled: " + wasTrue + " times out of " + maxTries + " - Approx Percent: " + (100 * wasTrue / maxTries));

The barrier to entry is really low, and there’s a bunch of extensibility points so you can store your feature configuration in a central location, or add overrides into the configuration pipeline. Hopefully this’ll help you ship your code more often, with a little less fear.

Sold! Give it to me!

Get the source on GitHub: https://github.com/davidwhitney/ReallySimpleFeatureToggle
The package from NuGet: https://www.nuget.org/packages/ReallySimpleFeatureToggle 
Via the package management console: Install-Package ReallySimpleFeatureToggle

Read the documentation here: https://github.com/davidwhitney/ReallySimpleFeatureToggle/blob/master/README.md

SSL Termination and ASP.NET

February 12th, 2014

This seems to hit everyone as they start out using SSL termination with ASP.NET.

When you have a website running on a server with an SSL certificate installed, HttpRequestBase has a public property you can access as Request.IsSecureConnection that happily tells you if you’re running over SSL or not. This is useful when you need to generate full canonical URIs (for outbound links or return URIs) that include the scheme.

As your business matures and you scale out, maintaining SSL certificates across your IIS cluster becomes a headache, and many people opt to use SSL Termination – installing the SSL certificate only on a load balancer or reverse proxy. The reverse proxy then deals with the SSL encryption and decryption of traffic, forwarding the traffic, unencrypted, to the origin webserver.

So you take the code that was generating URIs, move it to an SSL terminated environment, and it breaks. The URIs are all generated with the wrong scheme, and you’re not sure what’s going on. You crack open the codebase on your local dev machine with your self-signed debug certificate, run a few tests, and everything works. This is because your local dev environment isn’t SSL terminated, while your staging and production environments are – you’re not going crazy; by the time the request reaches your webserver, there isn’t any SSL there.

And you still need to generate valid URIs.

Supporting this scenario

There’s a non-standardised convention based header that a lot of terminating load balancers add to the HTTP headers of your request when they encrypt and decrypt your traffic – “X-Forwarded-Proto” which will be set to “https” if your load balancer has modified the scheme of your traffic. Hopefully your load balancer will add it, or support you adding it as a rule or configuration setting.

Request.IsSecureConnection is false in this scenario, and should always be false – after all, the connection between your load balancer and your server is insecure, and Microsoft should resist the urge to change the meaning of this flag – so you’re going to want to support the terminated case another way.

So, doing the simplest thing that could possibly work, here are some extension methods:

using System;
using System.Linq;
using System.Web;

public static class SslTerminationExtensions
{
    public static bool IsSecureOrTerminatedSecureConnection(this HttpRequestBase request)
    {
        if (!request.IsSslTerminated())
        {
            return request.IsSecureConnection;
        }

        var header = request.Headers["X-Forwarded-Proto"];
        return string.Equals(header, "https", StringComparison.OrdinalIgnoreCase);
    }

    public static bool IsSslTerminated(this HttpRequestBase request)
    {
        return request.Headers.AllKeys.Contains("X-Forwarded-Proto");
    }
}

This allows you to swap any calls to Request.IsSecureConnection for calls to Request.IsSecureOrTerminatedSecureConnection(), and your code will behave correctly both in your self-signed dev environment and in your SSL terminated staging and production environments.
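As a sketch of the payoff, here’s how you might build a full canonical URI from inside a controller action using the extension method above (this assumes default ports, and that Request is the usual HttpRequestBase):

```csharp
// Pick the scheme the *client* is actually using, even behind a terminating LB
var scheme = Request.IsSecureOrTerminatedSecureConnection() ? "https" : "http";
var canonicalUri = scheme + "://" + Request.Url.Host + Request.Url.PathAndQuery;
```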

The Assassination of OR/Ms …by that coward, relational storage

February 11th, 2014

Here’s a talk I’ve recently given on OR/M’s, what they’re good at, the backlash some people feel towards them, and where they’re awesome.

If you’d like me to talk about this, or any of the topics posted here, get in touch.