Archive for the ‘geek’ Category

Building .NET Apps in VSCode (Not .NetCore)

Thursday, November 17th, 2016

With all the fanfare around .NET Core and VS Code, you might have been led to believe that you can’t build your boring old .NET apps inside of VS Code, but that’s not the case.

You can build your plain old .NET solutions (PONS? Boring old .NET projects? BONPS? God knows) by shelling out using the external tasks feature of the editor (http://code.visualstudio.com/docs/editor/tasks).

First, make sure you have a couple of plugins available:

  • C# for Visual Studio Code (powered by OmniSharp)
  • MSBuild Tools (for syntax highlighting)

Now, “Open Folder” on the root of your repository and press CTRL+SHIFT+B.

VS Code will complain that it can’t build your program, and will open a generated file, .vscode\tasks.json, in the editor. It’ll be configured to use MSBuild, but it won’t work unless MSBuild is on your path; with a trivial edit to correct the path, you’ll be building straight away:

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "0.1.0",
    "command": "C:\\Program Files (x86)\\MSBuild\\14.0\\Bin\\msbuild.exe",
    "args": [
        "/property:GenerateFullPaths=true"
    ],
    "taskSelector": "/t:",
    "showOutput": "silent",
    "tasks": [
        {
            "taskName": "build",
            "showOutput": "silent",
            "problemMatcher": "$msCompile"
        }
    ]
}

CTRL+SHIFT+B will now build your code by invoking MSBuild.

Cutting CODE!–A livestream show for programmers

Sunday, February 1st, 2015

I’ve spent some time recently thinking and discussing the idea of live-streaming coding sessions. It started with conversations with my brother about how there’s not really a Twitch TV for programming, but if there was I’d be really into that.

In a classic case of “The Simpsons Already Did It”, a week after floating the original idea of a pair-programming streamed show with Rob Cooper, Scott Hanselman posted “Reality TV for Developers – Where is Twitch.tv for Programmers?”. At about the same time a new sub-reddit called /r/WatchPeopleCode grew out of /r/programming – the timing seemed a little too good to be true, so last Sunday I did a stealth trial run and mutely live-coded two hours of hacking on a random library I’ve been spiking. It was fairly dull stuff, but about 80 people came and went over those two hours.

That’s enough of an audience for now. I love writing software, and I love pairing, so earlier today I got together with Chris Bird and we streamed our first “live-on-air code kata”. It clocks in at about two hours, and was fun to put together. Time allowing, I’m going to aim to put one or two of these together a week, ideally sticking to a ping-pong-pairing with conversation format.

Here’s the YouTube recording of the pilot “Cutting CODE!” stream, where we build an image to ASCII art converter in two hours, entirely driven by tests.

Announcing: System.Configuration.Abstractions

Wednesday, November 6th, 2013

What is it?

In most projects, you’re going to have some configuration. In .NET projects, it’ll probably start in your app.config or web.config file.

However, if you love TDD, you’ll likely have noticed that all of the built-in configuration classes are horribly un-testable. They all revolve around static references to System.Configuration.ConfigurationManager, and don’t really have any interfaces, so in every project you end up wrapping them in something like an “IAppSettingsWrapper” in order to write tests.
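
For illustration, this is the kind of hand-rolled wrapper that ends up in every test-driven .NET codebase (the shape is typical; the names are just the ones I tend to use):

public interface IAppSettingsWrapper
{
    string Get(string key);
}

// The trivial production implementation, delegating to the static ConfigurationManager
public class AppSettingsWrapper : IAppSettingsWrapper
{
    public string Get(string key)
    {
        return System.Configuration.ConfigurationManager.AppSettings[key];
    }
}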

After writing these wrappers what seems like hundreds of times, and being inspired by the excellent System.IO.Abstractions package, I’ve put together a standardised set of wrappers around these core framework classes.

Why Do I Need It?

* Want to mock/stub/whatever out your App/Web.config files?
* Want to assert that the values from configuration are *really* configuring your application?
* Want to add custom hooks around loading configuration values?
* Want stronger typing?

This is for you.

Where do I get it?

From source: https://github.com/davidwhitney/System.Configuration.Abstractions
By hand: https://www.nuget.org/packages/System.Configuration.Abstractions

NuGet:

        PM> Install-Package System.Configuration.Abstractions

Anything else?

I’ve also baked in a few strongly typed helpers to make your code a little less crufty (no more Int32.TryParse’ing your way to victory) along with a strong convention for “ConfigurationInterceptors” in case you’d like to hook in and add code around configuration reading.
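
To give you a flavour of the difference – treat the exact method names here as illustrative (the readme is canonical), but the shape is a before and after:

// Before: stringly typed reads and manual parsing against the static class
int timeout;
Int32.TryParse(System.Configuration.ConfigurationManager.AppSettings["timeoutSeconds"], out timeout);

// After: an injectable, mockable wrapper with strongly typed reads
// (AppSetting<T> is the sort of helper described above; check the readme for the exact signature)
var configuration = new System.Configuration.Abstractions.ConfigurationManager();
var timeoutSeconds = configuration.AppSettings.AppSetting<int>("timeoutSeconds");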

Hopefully it’ll make your life just a little bit easier. More details in the readme on GitHub.

Writing great user stories

Wednesday, November 6th, 2013

When you’re building software, defining “what” you’re going to build is as vital as how you’re going to do it. A couple of months ago I put together this deck on writing user stories – hopefully it’ll help you through the more human side of building great software.

JustGiving’s love affair with Nancy – A case study

Monday, October 7th, 2013

During late 2011 to early 2012, JustGiving re-evaluated our approach to internationalisation. We shifted focus from having several systems sharing the same brand, to consolidating around our first and biggest system, based out of London. As we started to consolidate our technology, we began maturing our software to support both internationalisation and multi-tenancy, making the decision that “one platform to rule them all” was a more appropriate design choice than endlessly porting features between different regional installations.

As part of this process, we had to evaluate the software and frameworks that we used while building new international functionality. One of the cornerstones involved taking our existing, fairly simplistic payment processing facilities and enhancing them to support multiple currencies and multiple operating accounts, in different regions, through different payment service providers. We knew that we couldn’t rely on our sole existing payment provider (well, and PayPal) if we were to accept and settle transactions in more than twenty currencies, and we realised that sticking with one provider was both risky from a business continuity perspective, and would end up costing us over the odds as we scaled out.

At the time our payment processing facility consisted of a Windows service, and an MSMQ queue installed on each of our web nodes. Users would make donations in their browser, data would be encrypted and stored in the database, and a message to start processing would be popped onto a queue. And… well, then we’d run some SQL and verify that everything had worked. When we were processing UK transactions, straight through, with no complexity, this was just about sufficient. We had debug logs, which were OK, and we could run a query or two to verify the number of transactions processed over a time window. The implementation was similarly straightforward: a bunch of threads running .NET 2.0-style MSMQ listeners that would block until a message was received, then call our payment service provider and persist the result. Our payment service was simple, but it was also simplistic.
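
For flavour, that style of listener amounts to little more than a blocking receive loop over System.Messaging – a minimal sketch (the queue path is invented):

using System.Messaging;

public class DonationListener
{
    // One of several worker threads, each blocking on the queue
    public void Listen()
    {
        var queue = new MessageQueue(@".\private$\donations");
        while (true)
        {
            var message = queue.Receive(); // blocks until a message arrives
            // ...decrypt the payload, call the payment service provider, persist the result
        }
    }
}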

Then we sat down to think about what we’d want from an international payment service. We wanted rules to route payments between more than one payment provider, we wanted semi-automatic retrying and resilience, and we wanted to support new types of payments – pledges, things that required different kinds of processing. But most importantly, with this increased scope of complexity, we needed the kind of visibility we’d never really had with our invisible services before.

Just as we were starting to wrangle with the fact that we needed to completely re-work lots of our payment infrastructure, a framework called Nancy (or #NancyFx) started making ripples in the open source .NET community. There were a couple of frameworks at the time claiming to be .NET implementations of Sinatra (the popular Ruby web framework), and we evaluated both Nancy and a competing framework called Nina. At the time, Nina was “feature complete” (and minimalist in its feature set) while Nancy was still under very active development, but there appeared to be considerable hustle behind Nancy, with an obvious roadmap and support, or planned support, for popular IoC containers, view engines and other useful web stuff. This middle line between being an ultra-lightweight framework while supporting things that our development teams used and understood was compelling, especially when coupled with Nancy’s permissive hosting model – you could use it in IIS, you could host it in WCF, it could host itself. We spiked up a quick sample app in an afternoon and immediately saw how we could iteratively introduce Nancy into our payment services as part of their re-working, to give us some of the visibility we needed.

We started our “gentle” introduction of Nancy cautiously, using its self-hosting assembly, and put it side by side with our existing Windows service implementation. We hooked up Ninject, and started using Nancy to produce a simple status page hosted from inside our service. As the iterations progressed and we re-worked the internals of our payment services, we started making more extensive use of Nancy, maturing it from a read-only status page into a fully featured dashboard and configuration portal.
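
The pattern, in miniature, looks something like this – a minimal sketch rather than our production code (port and wording are illustrative):

using System;
using Nancy;
using Nancy.Hosting.Self;

// A Nancy module serving a simple status page from inside the service
public class StatusModule : NancyModule
{
    public StatusModule()
    {
        Get["/status"] = _ => "Payment processing agent is running";
    }
}

// In the Windows service's OnStart:
// var host = new NancyHost(new Uri("http://localhost:8080"));
// host.Start();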

As we extended our new payment processing agent to embed a rules engine, we used the dashboard to surface the rules that were in play. As we added multiple payment service providers, we provided UI for our devops guys to enable and disable each payment service provider, and as we made our error handling more robust, we provided an interactive retry queue, right from the payment processing agent itself.

We started to use singleton objects, shared by both our payment transacting code and the Nancy modules, to record real-time statistics and surface them in graphs from within the application – giving our devops guys the visibility and confidence they needed when introducing new rules, making changes, and monitoring the performance of payment providers. Payment service providers are notoriously flaky, and the kind of statistics we were able to gather (average request times, specific errors, and interactive graphs on the dashboard home page) on several occasions allowed us to be the first client to respond to an outage – notably, before the payment service providers themselves even knew.

Nancy was perfect at being an enabler, while keeping out of the way of our regular development process. It provided us with low-friction infrastructure, support for technology we knew and used from ASP.NET MVC, and played along with our other open source components (Ninject, NHibernate, AutoMapper). The introduction was painless due to its myriad of hosting options, and it supported a test-first TDD workflow from the start. It just worked, and it worked well enough that we trusted it with millions of pounds worth of transactions each day.


We went on to use Nancy in several other high-profile internal projects on the back of our experience with it in the most sensitive area of our system – it made its way into payment settlement and reconciliation, PCI Level 1 compliance code, and deployment tools. If you’re building test-driven modern web apps, APIs or dashboards, I wouldn’t hesitate in recommending it as a solid technology choice.

Building software that’s easy to monitor and administer

Monday, September 23rd, 2013

When you’re building software that’s central to what your company does, it’s very important that you build software that’s easy to support.

This might sound like obvious advice, but at the end of the day, if something goes wrong, you’re likely to be the person called to support the live system, so you should make sure that the system you build exposes the information you’re going to need to troubleshoot it. When times are good, it’s also important for you to be able to see the status of a system that’s currently in-flight. People will ask, so you may as well arm yourself with the information you need.

Here’s some simple, practical advice for making systems monitoring-friendly.

Make sure you have logging

Again, seemingly obvious advice, but make sure you’re logging important information. Logging is a solved problem across many languages and frameworks, so don’t reinvent it. Use Log4X (.NET/J/whatever), and make sure the logs roll and are available to everyone easily. There are some great services out there that support syslog searching and indexing – check out papertrailapp.com for my personal favourite.
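
In .NET that usually looks like log4net or NLog – a minimal log4net-flavoured sketch (configuration lives in your app.config and is omitted here; the class name is invented):

using System;
using log4net;

public class PaymentProcessor
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(PaymentProcessor));

    public void ProcessBatch()
    {
        Log.Info("Starting payment run");
        try
        {
            // ...do the work
        }
        catch (Exception ex)
        {
            Log.Error("Payment run failed", ex); // log the exception with its stack trace
            throw; // don't swallow it silently
        }
    }
}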

Track and aggregate errors and exceptions

Understanding what constitutes a “normal” amount of errors in your application is very important. There are plenty of reasons for websites to generate errors under traffic (web crawlers generating invalid URIs, poor or malicious user input), but a single error in a payment processing system is often critical. You should spend time understanding the error profile of your application – fix the bugs that cause the “expected” errors to ensure that “real” errors don’t get mistaken for noise. There are plenty of services out there to help you track and fold errors; I particularly like raygun.io for .NET and JavaScript projects (though their support is much wider). You want to watch general trends of errors over time, along with newly introduced errors, to understand how to respond to errors in your software after launch.

Windows software? Use the event log!

Log files are great, but some solid event log messaging and some custom performance counters in your application will make that special sysadmin in your life very happy. There are plenty of tools that can monitor these logs for messages, status codes and spikes in performance counters (including Microsoft’s own SCOM, along with lots of popular third-party tools).
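
Writing to the event log from .NET is trivial once an event source is registered – a minimal sketch (the source name is made up):

using System.Diagnostics;

// Register the event source once, e.g. at install time (needs admin rights)
if (!EventLog.SourceExists("MyPaymentService"))
{
    EventLog.CreateEventSource("MyPaymentService", "Application");
}

// Then write significant events so monitoring tools can pick them up
EventLog.WriteEntry("MyPaymentService", "Payment provider unreachable", EventLogEntryType.Error);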

Building system services? Don’t hide them!

System services are common, and just as easily forgotten. If you’re writing “invisible” software, it’s important to force it into the limelight so people don’t forget it’s there, and especially so they notice if it’s not running. I’m a big fan of embedding a web server in any system service that would otherwise be invisible, serving a monitoring dashboard with the kind of statistics that you’d need during troubleshooting. Your applications will know what’s important to them, so measure stats in real time and surface them over HTTP – everyone knows how to use a browser, and the presence of the status page is a great way to monitor availability. If you want to do a great job, you can use graphing libraries and expose the data as JSON for other systems to query. Consider surfacing things like “average time to process a request”, “number of failures since launch”, “throughput” and other metrics that’ll help you if you’re investigating live issues. If you’re working in C# / .NET, I highly recommend using NancyFx as an embedded webserver in your system services.
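
The statistics themselves can be as simple as a thread-safe singleton shared between your worker code and the embedded web server – a sketch, with invented names:

using System.Threading;

// Shared between the processing threads and the dashboard module
public class ServiceStatistics
{
    private long _processed;
    private long _failures;

    public void RecordSuccess() { Interlocked.Increment(ref _processed); }
    public void RecordFailure() { Interlocked.Increment(ref _failures); }

    // Snapshot for the status page – serialise this to JSON for other systems to query
    public object Snapshot()
    {
        return new
        {
            processedSinceLaunch = Interlocked.Read(ref _processed),
            failuresSinceLaunch = Interlocked.Read(ref _failures)
        };
    }
}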

Building APIs? Measure performance and make use of response headers to message information

The performance of your APIs will help the apps that depend on them flourish or fail – and there’s nothing more frustrating than a poor feedback cycle as an API developer. You should measure, in memory, in real time, per node: the number of requests you’re serving, average requests per second per method, the rate of errors, and the overall percentage of errors in calls. You should return the time taken on the server as a response header (something like “X-ServerTime”) to help callers debug any weird latency issues they’re encountering, and you should offer this information over the API itself, either via a reporting or status API call, or through a web dashboard. When I was working at JustGiving, we put a lot of effort into the end developer experience, serving both the API docs and single-node statistics publicly from each node, and it saved us weeks of debugging and support messaging. You can check out an example of what we did here: JustGiving single node stats page – not only did it help us diagnose problems, but it helped people coding against our APIs verify error behaviour if they experienced it.
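
If you’re using NancyFx, a pair of pipeline hooks can stamp that timing header on every response – a minimal sketch using the header name suggested above:

using System.Diagnostics;
using Nancy;
using Nancy.Bootstrapper;
using Nancy.TinyIoc;

public class TimingBootstrapper : DefaultNancyBootstrapper
{
    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        // Start a stopwatch as each request enters the pipeline
        pipelines.BeforeRequest += ctx =>
        {
            ctx.Items["requestTimer"] = Stopwatch.StartNew();
            return null; // null means "carry on and route the request as normal"
        };

        // Stamp the elapsed time on the response on the way out
        pipelines.AfterRequest += ctx =>
        {
            var timer = (Stopwatch)ctx.Items["requestTimer"];
            ctx.Response.Headers["X-ServerTime"] = timer.ElapsedMilliseconds.ToString();
        };
    }
}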

Whenever you’re building anything, remember that you, or someone you work with, is going to be the person that has to fix it if it fails. So be nice to that person.

Composition over inheritance at a macro level – composing applications from packages

Thursday, March 28th, 2013

Common advice for software developers is that composing your app from loosely coupled things is good

“Prefer composition over inheritance” was one of the things I understood least at the start of my career. People were happy to repeat it to me, and were happy to tell me that inheritance produced messy and hard to maintain software, but as a fledgling developer, I found it a very difficult concept to grasp.

Wikipedia’s definition is fairly succinct:

“Composition over inheritance (or Composite Reuse Principle) in object-oriented programming is a technique by which classes may achieve polymorphic behavior and code reuse by containing other classes that implement the desired functionality instead of through inheritance.”

But this is hard to understand for new programmers – people only understand after feeling the pain

Which is a pretty high-science way of saying “make your software by gathering together classes that do little things, into a bigger thing that does what you want”. Which seems like reasonable advice, but I still never really *got* what was wrong with inheritance. People would say things like “HAS-A is better than IS-A” and I’d nod blindly. I honestly think it’s quite a difficult concept to understand until you’ve felt the pain of maintaining a large system with lots of inheritance that’s starting to atrophy and becoming difficult to change. Until you’ve had to change a class that’s halfway down an inheritance chain, and then validate that all the things that depend on it haven’t been broken; until you’ve updated a common component only to discover the behaviour of your code has inexplicably changed – you just don’t really feel the negative impacts of inheritance in your software.

In practice…

You build your software against interfaces that describe very tight responsibilities, often just single things, and then your application code orchestrates calls to whatever implementation of each interface you have to hand. It’s great for test-driven development, it helps drive out behaviour, and it keeps your code focused. You tend to avoid a tangled set of dependencies between components, and you can compose new functionality simply by making use of these defined behaviours. This avoids having to add a method somewhere in a tree of inherited classes and hoping you’ve implemented the right things at the right times. It focuses your software around behaviour.
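
A tiny illustration of the idea, with invented names:

// Small, tightly focused behaviours described as interfaces
public interface IReceiptRenderer { string Render(Order order); }
public interface IReceiptSender { void Send(string renderedReceipt, string emailAddress); }

public class Order
{
    public string CustomerEmail;
}

// The application is composed of whatever implementations it has to hand,
// rather than inheriting from some ReceiptProcessorBase
public class OrderCompletionHandler
{
    private readonly IReceiptRenderer _renderer;
    private readonly IReceiptSender _sender;

    public OrderCompletionHandler(IReceiptRenderer renderer, IReceiptSender sender)
    {
        _renderer = renderer;
        _sender = sender;
    }

    public void Handle(Order order)
    {
        _sender.Send(_renderer.Render(order), order.CustomerEmail);
    }
}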

This doesn’t just apply to your code – you can apply this approach to how you structure your application and its dependent libraries

Modern software tends to be made up of code dealing with many different concerns and types of functionality. Just as you construct features in your classes by depending on a series of interfaces that expose specific functionality, if you raise the level of abstraction you construct software from your own code making use of libraries, both internal and external, chained together to produce features.

When dealing with external dependencies, it’s preferable to describe them by their behaviour and compose your application of them accordingly, giving you the flexibility to test, explore and replace these dependencies at will. When you’re dealing with writing code for self-contained non-core aspects of your system (a small SDK for some API, a discrete set of code for dealing with a common scenario), it’s best to split out these dependencies into versioned packages of their own.

Look to open source for guidance

A huge amount of open source software is built in this way. Over the last two decades, as open source has risen to be a dominant software philosophy, OSS applications have frequently been composed of code that the authors neither wrote nor understand the internals of. This is a good thing, as it lets everyone focus on getting things done and shipping software, rather than deep-diving into the detail of minor functionality. It’s the free-outsourcing-that-works of the software world, and it’s good for everybody involved.

As a result, people working with large amounts of open source software started developing package management tools to rationalise the growing chain of dependencies in their applications. These tools also served the purpose of isolating applications from change, with new versions of dependencies only ever adopted when the application developer chooses.

Conversely, in enterprise and internal software there’s a trend to push dependencies down towards core libraries in order to share code between projects and teams. This is a bad thing. This code, “owned” by the organisation, is much more likely to be both core in business function, and depended on directly by application code.

Shared source code and base classes seem like the easy way to have a technology asset shared by a team of people, but it actually ends up as “glue”, keeping unrelated applications coupled together due to some subtle correlation in feature requirements.

These dependencies are frequently incoherent “Core” projects that end up as dependency magnets. This code rots, and people are fearful of changing or cleaning it because they simply don’t know where it’s used. As your organisation or codebase grows, this rotten code, tying your applications together, becomes a liability. Conversely, if you version these common dependencies as their own smaller libraries, consumed like any other package rather than sitting at the foundations of the application, this coupling can be avoided.

I’ll deal with the topic in a later chapter, but there’s little reason to ever share code between applications that isn’t provided in explicitly versioned packages. Shared code file references will couple your applications and restrict their growth. Shared source “core” or “common” projects are an antipattern.

The best software is the software that isn’t set in stone – the software that can most appropriately react to change, and can most easily be modified. The perfect architecture is the one that makes change safe, painless and fast. Low-level common dependencies impede rapid change.

Why is this similar to inheritance?

An application is “built with” these core business components, whereas OSS applications tend to be “built using” external components. This is similar to a class being “inherited from” another class, versus a class being “composed of” some functionality.

The application “built with” common classes is inextricably coupled to them, and to any other application that is coupled to them. An application “built using” a component relies only on the described functionality of that component, and this described functionality is owned by the application, not the component.

In all practicality, the software is still built using external libraries, packages or DLLs, but the way people treat them is different. External libraries…

  1. tend to be distributed as binaries rather than as source
  2. are linked to, rather than built as part of a deployment
  3. are updated on their own schedule
  4. don’t change due to a change in another consuming system
  5. are wrapped and pushed to the edges of the system using adapters when they appear volatile (see the sketch below)
  6. are most frequently used, rather than inherited from, in code
  7. tend to be smaller and singular in purpose, and as a result, easier to manage and understand

This relationship parallels composition: OSS projects are composed of their own application code plus a group of packaged, known modules.
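
As an illustration of that adapter idea – an invented example wrapping a hypothetical third-party geocoding package:

// The behaviour the application owns and depends on
public interface IGeocoder
{
    Coordinates Locate(string postcode);
}

public class Coordinates
{
    public double Latitude;
    public double Longitude;
}

// The adapter at the edge of the system, wrapping the volatile external library;
// if the package changes or gets replaced, only this class needs to change
public class ThirdPartyGeocoderAdapter : IGeocoder
{
    private readonly ThirdParty.GeocodingClient _client = new ThirdParty.GeocodingClient();

    public Coordinates Locate(string postcode)
    {
        var result = _client.Lookup(postcode);
        return new Coordinates { Latitude = result.Lat, Longitude = result.Lng };
    }
}

// Stand-in for the hypothetical external package, so the sketch is self-contained
namespace ThirdParty
{
    public class GeocodingClient
    {
        public GeocodeResult Lookup(string postcode) { return new GeocodeResult { Lat = 51.5, Lng = -0.1 }; }
    }

    public class GeocodeResult { public double Lat; public double Lng; }
}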

These qualities are beneficial, as they reduce the friction in making changes to your software.

When you’re developing, prefer composing your application “top down” from packages rather than “bottom up” from shared base classes, and you’ll keep your code clean and the intent of your shared libraries distinct.