Cutting CODE! – A livestream show for programmers

February 1st, 2015

I’ve spent some time recently thinking about, and discussing, the idea of live-streaming coding sessions. It started with conversations with my brother about how there isn’t really a Twitch.tv for programming – and if there was, I’d be really into it.

In a classic case of “The Simpsons Already Did It”, a week after floating the original idea of starting a pair-programming streamed show with Rob Cooper, Scott Hanselman posted “Reality TV for Developers – Where is Twitch.tv for Programmers?”. At about the same time a new sub-reddit, /r/WatchPeopleCode/, grew out of /r/programming. The timing seemed a little too good to be true, so last Sunday I did a stealth trial run and mutely live-coded two hours of hacking on a random library I’ve been spiking. It was fairly dull stuff, but about 80 people came and went over those two hours.

That’s enough of an audience for now. I love writing software, and I love pairing, so earlier today I got together with Chris Bird and we streamed our first “live-on-air code kata”. It clocks in at about two hours, and was fun to put together. Time allowing, I’m going to aim to put one or two of these together each week, ideally sticking to the ping-pong pairing format, with plenty of conversation.

Here’s the YouTube recording of the pilot “Cutting CODE!” stream, where we build an image to ASCII art converter in two hours, entirely driven by tests.

Deployments in .NET 2014/2015 Report

January 16th, 2015

At the end of last year, I asked for feedback from the community on the types of deployment tooling and environments they were building software for.

I’ve collated the results, and drawn up a report with observations and recommendations.

Download Deployments in .NET 2014/2015 (pdf)
Download Deployments in .NET 2014/2015 Raw Data (xlsx)

The raw data is provided for those of you who wish to dig a little deeper.

Testing an ASP.NET WebAPI app in memory

January 7th, 2015

One of the nice things about the rising tide of OWIN in the .NET ecosystem is that it supports hosting anywhere – as long as there’s a viable host. For fans of TDD, the great thing about this is that “anywhere” includes “inside your unit tests”.

This lets you build a really powerful development workflow where you can lean entirely on your test runner while building your APIs, rather than suffering the painful cycle of breaking out into external tools. You get to execute the entire WebAPI stack from inside your acceptance or unit tests, and get rapid feedback.

This only works if your WebAPI app is hosted using OWIN, and it allows you to write tests that look like this:

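Something along these lines – a sketch, using a small test base class that the rest of this post builds up (names like InMemoryApiTest, Client and the route are illustrative):

    [TestFixture]
    public class WhenTheApiIsCalled : InMemoryApiTest
    {
        [Test]
        public async Task Get_ReturnsASuccessfulResponse()
        {
            // The whole WebAPI stack runs in memory; this is a real HTTP call to it
            var response = await Client.GetAsync("/api/values");

            Assert.That(response.IsSuccessStatusCode, Is.True);
        }
    }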

Combined with manipulation of your container registrations, you can execute full stack tests with a single component – like your data store – mocked out, like this:

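Again as a sketch – this one assumes a MockOut helper on the same base class (covered later) and the IGetValues service introduced below:

    [TestFixture]
    public class WhenTheDataStoreIsEmpty : InMemoryApiTest
    {
        [Test]
        public async Task Get_ReturnsAnEmptyList()
        {
            // Swap a single registration for a mock; everything else runs for real
            MockOut<IGetValues>().Setup(x => x.GetValues()).Returns(new string[0]);

            var response = await Client.GetAsync("/api/values");
            var body = await response.Content.ReadAsStringAsync();

            Assert.That(body, Is.EqualTo("[]"));
        }
    }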

Powerful, right? Let’s take a look at how to put this together, starting with a vanilla WebAPI controller with a single dependency injected into it.

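Something like the following – a sketch; the controller name and route are illustrative:

    using System.Collections.Generic;
    using System.Web.Http;

    public class ValuesController : ApiController
    {
        private readonly IGetValues _valueService;

        public ValuesController(IGetValues valueService)
        {
            _valueService = valueService;
        }

        // GET /api/values
        public IEnumerable<string> Get()
        {
            return _valueService.GetValues();
        }
    }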

We’re going to use a few NuGet packages – the ones provided by Microsoft to host OWIN components in IIS, some testing helpers that support in-process hosting of OWIN components outside of IIS, and Ninject, an IoC container. Our packages.config looks something like this:

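For the web app itself, that’s roughly the following – version numbers are illustrative, and the OWIN hosting and testing packages used by the test project appear a little later:

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <!-- versions shown here are indicative; use whatever is current -->
      <package id="Microsoft.AspNet.WebApi" version="5.2.2" targetFramework="net45" />
      <package id="Microsoft.AspNet.WebApi.Owin" version="5.2.2" targetFramework="net45" />
      <package id="Microsoft.Owin.Host.SystemWeb" version="3.0.0" targetFramework="net45" />
      <package id="Ninject" version="3.2.2" targetFramework="net45" />
      <package id="Ninject.Extensions.Conventions" version="3.2.0" targetFramework="net45" />
    </packages>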

The solution has a few important pieces – the WebAPI app with its OWIN Startup file, a Ninject dependency resolver to hook IoC into WebAPI, and the controller described earlier. In addition to this we have the interface IGetValues, implemented by ValueService. For this example, ValueService is very simple:

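A sketch of the pair – the actual values returned are illustrative:

    using System.Collections.Generic;

    public interface IGetValues
    {
        IEnumerable<string> GetValues();
    }

    public class ValueService : IGetValues
    {
        public IEnumerable<string> GetValues()
        {
            return new[] { "value1", "value2" };
        }
    }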

Your OWIN startup class is effectively your “Global.asax” – it bootstraps your app and configures components.

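A sketch of that Startup class – the static Kernel property is an assumption made here so the tests later in the post can rebind components:

    using System.Web.Http;
    using Ninject;
    using Ninject.Extensions.Conventions;
    using Owin;

    public class Startup
    {
        // Exposed so tests can rebind individual components later on
        public static IKernel Kernel { get; private set; }

        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute("DefaultApi", "api/{controller}/{id}",
                new { id = RouteParameter.Optional });

            // Create the container and bind every class to its interfaces by convention
            Kernel = new StandardKernel();
            Kernel.Bind(x => x.FromThisAssembly().SelectAllClasses().BindAllInterfaces());

            // NinjectDependencyResolver just delegates component creation to the kernel
            config.DependencyResolver = new NinjectDependencyResolver(Kernel);

            app.UseWebApi(config);
        }
    }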

You’ll see that we’re configuring routes, creating a new Ninject “StandardKernel” IoC container, and using the Ninject.Extensions.Conventions library to bind up all our components – in this case just our IGetValues service. I’ll skip over the implementation of the Ninject dependency resolver, but all it does is delegate calls to create components to the Ninject standard kernel.

Pressing F5 in Visual Studio will launch the WebAPI app in IIS Express, and you can visit it in a browser to check everything is wired up correctly.


Now that we have a working WebAPI app, let’s see how we can test it in memory. We want to build a test that runs the whole WebAPI stack in memory and asserts on its response.

To do this, we need a few more packages in our test project – the Microsoft OWIN HttpListener package, the OWIN hosting package and the OWIN testing package, alongside our test framework (NUnit).


With those packages installed, we can write a test that wires all these components together.

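Here’s a sketch of that first, deliberately noisy, version:

    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Owin.Hosting;
    using NUnit.Framework;

    [TestFixture]
    public class ValuesApiTests
    {
        [Test]
        public async Task Get_ReturnsTheValuesFromTheService()
        {
            // Boot the real app via its OWIN Startup class, hosted on HttpListener
            using (WebApp.Start<Startup>("http://localhost:8086"))
            using (var client = new HttpClient())
            {
                var response = await client.GetAsync("http://localhost:8086/api/values");
                var body = await response.Content.ReadAsStringAsync();

                Assert.That(response.IsSuccessStatusCode, Is.True);
                StringAssert.Contains("value1", body);
            }
        }
    }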

What we’re doing here is using the OWIN hosting components to invoke our app’s Startup class, hosted over HttpListener on localhost on port 8086. We’re then using a regular HTTP client to connect to this server, execute a request, and assert on the response. It’s a small marvel that we can do this at all, but TDDers will be cringing at the amount of noise in the test distracting you from its meaningful parts – the preconditions and the assertions. We can do better than this!

A Less Noisy Approach

Consider this class:

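A sketch of such a base class – the name InMemoryApiTest and the NUnit 2.x fixture attributes are illustrative:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Net.Sockets;
    using Microsoft.Owin.Hosting;
    using NUnit.Framework;

    public abstract class InMemoryApiTest
    {
        protected HttpClient Client;
        private IDisposable _server;

        [TestFixtureSetUp]
        public void StartServer()
        {
            // Find a free port so the tests run on any machine, then boot the whole app
            var baseAddress = "http://localhost:" + GetFreePort();
            _server = WebApp.Start<Startup>(baseAddress);
            Client = new HttpClient { BaseAddress = new Uri(baseAddress) };
        }

        [TestFixtureTearDown]
        public void StopServer()
        {
            Client.Dispose();
            _server.Dispose();
        }

        private static int GetFreePort()
        {
            // Ask the OS for an ephemeral port, then release it for the OWIN host to use
            var listener = new TcpListener(IPAddress.Loopback, 0);
            listener.Start();
            var port = ((IPEndPoint)listener.LocalEndpoint).Port;
            listener.Stop();
            return port;
        }
    }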

Firstly, it’s a base class that tidies all the noise out of the way – all the boilerplate code is moved into a test fixture setup and teardown, and we’re detecting the first available free port to ensure the tests run on any machine they’re executed on. Using this class, our previous test becomes:

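(again a sketch, reusing the illustrative names from above)

    [TestFixture]
    public class WhenTheApiIsCalled : InMemoryApiTest
    {
        [Test]
        public async Task Get_ReturnsASuccessfulResponse()
        {
            var response = await Client.GetAsync("/api/values");

            Assert.That(response.IsSuccessStatusCode, Is.True);
        }
    }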

instantly becoming readable and exposing only the things we really care about in the test – what we’re asking our code to do, and what the response is. We can pretty much use this base class everywhere, because it bootstraps our entire application for each test fixture.

Given how hard it’s previously been to test the full ASP.NET web stack, this is a revelation – you can write full stack acceptance tests without an installed web server, and with no additional infrastructure or support.

In real world examples your application likely has many components that connect to and use external resources (databases, web services) that you’ll want to isolate and replace with test doubles. With some creative use of our IoC container, we can get the best of both worlds and execute full stack acceptance tests (because at this point, we’re definitely not “unit testing”) while swapping out “just” your data access component or “just” an API client library to provide fake responses.

To do this, we want to rebind parts of our application stack for each test, and luckily Ninject (and pretty much any other good container) makes this quite easy. Consider the following additions to our base class:

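A sketch of those additions – it assumes the static Kernel property exposed from the Startup class earlier, and needs Moq, Ninject and System.Collections.Generic usings:

    // Additions to the InMemoryApiTest base class shown above

    // One "frozen" mock per type, per test fixture
    private readonly Dictionary<Type, Mock> _mocks = new Dictionary<Type, Mock>();

    protected Mock<T> MockOut<T>() where T : class
    {
        if (!_mocks.ContainsKey(typeof(T)))
        {
            var mock = new Mock<T>();
            _mocks[typeof(T)] = mock;

            // Replace whatever the conventions bound with this mock instance
            Startup.Kernel.Rebind<T>().ToConstant(mock.Object);
        }

        return (Mock<T>)_mocks[typeof(T)];
    }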

It’s a little bit long-winded, but what we’re doing is maintaining a dictionary of Mock objects (using Moq) and providing some helper methods for use in our tests. When a test calls the “MockOut” method, a mock is generated and returned for the test to use. This mock is “frozen” for the duration of the test fixture and rebound into our IoC container.

What this means is that we can bootstrap our entire stack, and then re-register a single component as a Mock, letting us manipulate a single external resource. This becomes exceptionally useful in our broad acceptance tests:

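For example – the known value here is illustrative:

    [TestFixture]
    public class WhenValuesComeFromTheService : InMemoryApiTest
    {
        [Test]
        public async Task Get_ReturnsTheKnownValueFromTheMockedService()
        {
            // Only IGetValues is replaced; the rest of the stack runs for real
            MockOut<IGetValues>()
                .Setup(s => s.GetValues())
                .Returns(new[] { "a known value" });

            var response = await Client.GetAsync("/api/values");
            var body = await response.Content.ReadAsStringAsync();

            StringAssert.Contains("a known value", body);
        }
    }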

Here you can see we’re replacing the implementation of IGetValues with a mock that returns a known value, and asserting that the full stack call returns a body with that value in it. The test is synthetic, but the application is very real – you could mock out API calls to third parties, mock out your entire data store or a single component and verify the full execution of your entire system.

These broad acceptance tests, combined with unit tests of individual components, give you the perfect mix: high-level, feature-focused tests (you can add a BDD framework of your choice if that’s your thing) that assert on behaviour rather than implementation, and regular TDD unit tests around the components your system is composed from.

There are lots of parts of ASP.NET vNext that are “coming soon” or “wait and see”, but the community effort around OWIN and the maturation of some of the middleware frameworks make this a technique you can and should adopt now.

Full code sample on GitHub

Thoughts about delivering platforms from Nick Denton’s leaked memo

December 11th, 2014

There’s an interesting piece here about Nick Denton stepping down yesterday as company president of Gawker Media, the first “big web media company”.

The interesting, tech-relevant passage from his pretty readable memo is below. For reference, Kinja is the editorial platform underneath Gawker’s sites. Denton is big and brash in the memo (which is basically his Christmas note / resignation), and there are a few interesting observations about the common trials and tribulations of software development in there.

“The problems I’m going to identify are common. Excellence in software development is elusive; no online publisher has yet succeeded in transforming itself into a platform…

The current principles of software product development hold that candid conversations – with developers, designers and users – lead to a better web experience. We lacked that necessary candor. We left too many opportunities on the table, too many known problems unresolved. And in our external communications, in our stories, we sometimes shied away from controversy, fearful of online critics. We weren’t ourselves.

We all understand how this works. Editorial traffic was lifted but often by viral stories that we would rather mock. We — the freest journalists on the planet — were slaves to the Facebook algorithm. The story of the year — the one story where we were truly at the epicenter — was one that caused dangerous internal dissension. We were nowhere on the Edward Snowden affair. We wrote nothing particularly memorable about NSA surveillance. Gadgets felt unexciting. Celebrity gossip was emptier than usual.

We pushed for conversations in Kinja, but forgot that every good conversation begins with a story. Getting the stories should have come first, because without them we have nothing to talk about…

…And the development of Kinja itself was a challenge. Our Tech department proclaimed a new era of multi-disciplinary cross-functional teamwork and collaboration. The reality: the best tech teams in online media in both New York and Budapest, with too many developers grinding away at re-factoring (thankful though we’ll be next year for that prep work). And a product manager on the 2015 design refresh had barely talked to the consultant who had driven the other major new project of the forthcoming year. Open collaboration in theory; the opposite in practice.
Who raised the alarm? Would I have even heard it? For a good 12 months from the summer of 2013 I was variously betrothed, distracted, obsessed by Kinja, off on honeymoon, obsessed by Kinja, off on sabbatical. I’m not sorry for that. For ten years, I’ve danced with this octopus. That’s what one person on Twitter calls Gawker: an octopus armed with chainsaws. I deserved a break.

When I was disengaged, I didn’t leave any real authority in place. In my absence, the company ticks along nicely; with the challenge of Buzzfeed and Vox, ticking along nicely is no longer enough. Even when I’m here, if I’m obsessed by something, other parts of our common project can spin off in unpredictable directions, causing me to overlook developing risks and opportunities. As Joel said, I am the company’s greatest asset — and it’s greatest liability. To be saved from myself, like many of us, I need partners in the fullest sense of the word, to take up the slack or keep me on focus. And I didn’t have them.

During this period I made a mistake in Editorial, hiring a talented guy whose voice and vibe I loved, who represented nerd values, and whom I thrust into a job which changed under his feet: he was competing with Lockhart Steele of Vox and Ben Smith of Buzzfeed, two of the most effective editorial managers in the business, each with the funding to go after the very best talent.

I was so obsessed with the design of Kinja discussions, I didn’t even think to warn that Gawker is always first about the story. I took that for granted. I was in so much in a hurry that I didn’t even look at other candidates, a cardinal sin. I made a mistake, and I’m sorry to Joel, and I’m sorry to those to whom he is a friend.

And during this time too, we embarked piecemeal on a software project whose eventual scope we barely imagined. Tom told me years ago he did not want to run the department beyond 30 people, that he wanted to get back to coding. Tech is now at 55 people. Tom didn’t push me. I didn’t want to mess with what was comfortable, the best relationship with a CTO by far that I’ve ever had in my career. And no other views were solicited.

So we attracted impressive technical talent — with our culture, audience testbed, and idea — and then we let those people down. We embarked on the Kinja expansion before we’d recruited the management; each major hire was reactive, each to fix a problem created by the last. Hire engineers. Now manage engineers. Oh no, we need product people. Lean, what’s that? I had to learn fast. It wasn’t quite that bad; but not that far off.”

Food for thought.

NuGet 101 – A Bootcamp

December 9th, 2014

I’ve been running and recording a lot of workshops over the last couple of months – here is one on NuGet packaging for beginners – starting out as a slide deck then moving into a practical demo.

Slide deck

  • History
  • What’s a package
  • So it’s a zip file right?
  • Why should I use them?
  • An open source mentality
  • Disadvantages
  • Realities

Demo

  • Directory topologies of a library package
  • Adding packages to your solutions
  • Pairing nuspec files with csproj files
  • Replacement tokens for metadata
  • The NuGet docs
  • Package dependencies
  • Dependency discovery and bundling
  • Versioning and SemVer
  • Package sources
  • NuGet.config

Mentions

Slide Deck

Deployments on Windows / .NET in 2014

November 13th, 2014

Deploying software in the Microsoft ecosystem has long been one of the more unloved and challenging aspects of .NET development. Over the last three years there have been several improvements to deployment practices around Windows, but no single obvious approach has emerged. With the recent announcements of new deployment partnerships for Windows Azure, I want to take a step back and look at what our environments look like, and what tools we’re using to deploy to them.

The results of this survey will be collated and circulated, and I’ll try to draw out some trends from them if I can get enough responses.

Take the survey here

Garbage Collection in .NET – Workstation vs. Server GC.

November 6th, 2014

So, I’m reading the largely excellent “Writing High-Performance .NET Code” by @benmwatson at the moment, and I wanted to share something that’s expressed especially clearly there, but that I find ambiguous in many of the official docs.

The garbage collector in .NET is treated by the vast majority of devs as a thing of mystery – and one of those mysterious options is whether to run the GC in “Workstation” or “Server” mode.

Workstation GC

  • All GCs happen on the same thread that triggers the collection
  • They’re all run at the same priority
  • A full suspension occurs before collection

Server GC

  • Creates a dedicated thread for each logical processor or core
  • Each of these threads run at the highest priority
  • These threads sleep between collections
  • A memory heap is created for each GC thread
  • Garbage collections happen in parallel due to multiple heaps

For both, from .NET 4.5

  • A background thread exists for generation 2 garbage collection
  • This dedicated thread can be disabled

As a result

  • Server GC has the lowest latency
  • Choose Server GC if you have a dedicated box for your application
  • If you have many managed processes, be more wary – the abundance of high-priority background threads could cause a performance problem.
  • You can mitigate this contention by setting process affinity to specific logical processors; the CLR will then only create GC threads and managed heaps for the logical processors your process is affinitized to.
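For reference, the mode is a simple configuration switch – a minimal app.config sketch:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <runtime>
        <!-- Opt in to the server GC (the default is workstation) -->
        <gcServer enabled="true" />
        <!-- Background GC can be turned off if needed -->
        <gcConcurrent enabled="true" />
      </runtime>
    </configuration>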

TL;DR: Choose server, unless you have *lots* of managed applications on the box. Always choose server in a dedicated one-machine-per-app environment.

Also, read the book!

Code dojos as a learning tool

November 5th, 2014

As part of the “software consultant” gig, I often get involved with mentoring developers and teams. Mentoring developers one on one works pretty well, but after a point, it’s quite hard to scale the face time you can give to teams or departments of 50-60 people.

The truth is that people learn by doing, and people learn from each other – so one of the most important things to do in your technology department is to bake a culture of learning and development into the day-to-day. I’ve had good success using code dojos as a crutch to introduce pair programming and learning into departments at a couple of different companies.

So what’s a code dojo?

A code dojo is a session where you perform “coding katas” to practice, improve and learn. The concept comes from Japanese martial arts, and I’m going to liberally borrow from the Wikipedia page on the subject.

Kata, a Japanese word, are the detailed choreographed patterns of movements practised either solo or in pairs. The term form is used for the corresponding concept in non-Japanese martial arts in general.

Kata are used in many traditional Japanese arts such as theatre forms like kabuki and schools of tea ceremony (chado), but are most commonly known for their presence in the martial arts. Kata are used by most Japanese and Okinawan martial arts, such as aikido, judo, kendo and karate.

The basic goal of kata is to preserve and transmit proven techniques and to practice self-defence. By practicing in a repetitive manner the learner develops the ability to execute those techniques and movements in a natural, reflex-like manner. Systematic practice does not mean permanently rigid. The goal is to internalize the movements and techniques of a kata so they can be executed and adapted under different circumstances, without thought or hesitation. A novice’s actions will look uneven and difficult, while a master’s appear simple and smooth.

The way we interpret katas in software development is to practice coding by solving synthetic or interesting problems that are similar to a variety of real-world problems. Generally, katas are performed in pairs, with two people sharing a single computer and alternating participation in the task. This is called pair programming, and it’s a great way for people to learn from one another – it’s a common extreme programming / agile practice.

Pair programming is an agile software development technique in which two programmers work together at one workstation.

One, the driver, writes code while the other, the observer, pointer or navigator, reviews each line of code as it is typed in.

The two programmers switch roles frequently.

Why do we do katas?

We learn archetypal solutions to common types of problems – commonly, programming tasks of similar shapes share similar solutions – so practising building quick solutions to these problems helps us in our practical work.

We learn alternative solutions to things we thought we knew well – one of the joys of a kata is the inverse of the first point – because the kata is practice, and explicitly not real code, the risk of failure is removed and we can learn and experiment with alternate approaches to categories of problems.

We learn how our peers approach problems – the way individuals think about solving problems is drastically different, and the best way to improve your skills is to learn from other talented people. A code dojo gives you the opportunity to pair program with someone who you might not normally work with, or who has a distinctly different skill-set than you do.

We get to work on some fun, interesting problems – over multiple sessions you’ll get exposed to different categories of problems that should be approached in varying ways – it’s one of the easiest ways to expose yourself to new kinds of development.

It’s safe to try out new languages – katas are one of the best ways of learning new languages – they’re a safe, failure-free environment to play and learn in.

The first kata

I’m working with a new client at the moment, and this week we ran the first code dojo.

The first kata involved building a “zero player videogame” in an hour by implementing Conway’s Game of Life. It’s a really fun first dojo, because it’s out of the comfort zone of the kinds of things most “developers who work on business software” normally get a chance to program.

Some of this is paraphrased from the excellent Wikipedia article, and I was inspired to pick this particular kata by a blog post I recently read by Jeremy Bytes about implementing the game with TDD, highlighting its natural fit for a kata.

The Game of Life is a cellular automaton devised by the mathematician John Horton Conway in 1970.

The “game” is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves or, for advanced players, by creating patterns with particular properties. The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, alive or dead.

Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, cells evaluate their state and transition between being alive and dead depending on their neighbours.

The initial pattern constitutes the seed of the system. The first generation is created by applying a set of rules simultaneously to every cell in the seed. Births and deaths occur simultaneously, and the discrete moment at which this happens is called a tick.
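The transition rules themselves fit in a few lines – a minimal C# sketch of the standard rules:

    // A live cell survives with 2 or 3 live neighbours; a dead cell becomes
    // alive with exactly 3 live neighbours; every other cell is dead next tick.
    public static bool NextState(bool isAlive, int liveNeighbours)
    {
        if (isAlive)
        {
            return liveNeighbours == 2 || liveNeighbours == 3;
        }

        return liveNeighbours == 3;
    }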

We ping-pong-paired (one person writes a test, the other makes it pass and writes the next) six different solutions to the Game of Life, in the room, in just over an hour.

The full outline of the first code kata is available on my GitHub account if you want to have a go yourself, along with my quick (and sub-optimal!) implementation in C# here. If you’re going to have a go, I’d suggest you don’t look at any solutions first.

Consider running some code dojos

The great thing about code dojos is they’re easy to get off the ground.

You just need a few people, some snacks, and a problem. If you want to get started, there are lots of ideas on http://rubyquiz.com/ that you can base katas on – though it might take a little bit of prep work on the part of the organiser to reformat some of their questions.

Over the next couple of weeks I’m going to try to dig up some of the katas I ran with my last client and make them available – mostly reformats of old RubyQuiz quizzes, but with structured user stories and examples.

Code dojos and katas help you prevent premature specialisation, and they’re good fun.

Performance Tuning and the Importance of Metrics

October 26th, 2014

Last week I was helping a client with some performance problems in one of their subsystems. Performance profiling is often a tricky subject where there’s no one clear preventative step, but I want to highlight a few positive qualities that it encourages in your codebase.

A wild performance problem appears!

The system in question started exhibiting performance problems. What was more interesting though, was the nature of the performance problem, with calls to save data bottlenecking for minutes at a time – all with a relatively small number of users.

If you’ve ever done any performance tuning in the past, this sounds like a classic resource contention issue – where a scarce resource locks and users are rate limited in their access to it. Conspicuously, there hadn’t been any significant code changes to the portion of the system that saved the data in question.

Reproducing Performance Problems

Like any kind of issue in software development, you can’t do anything to solve a problem unless you can see it, and until you can see it, you can’t start to identify what kind of fixes you could use. Discerning the “what” from a chorus of people frustrated with a system is pretty difficult, and we both benefited and suffered from the fact that the system in question is part of a small ecosystem of distributed systems.

We were lucky in this case – the system was a JavaScript app that posted data over HTTP to web services hosted inside itself, which meant we had access to IIS logs of the requests and could aggregate them to identify the slow calls users were experiencing. This was the “canary in the coal mine”, highlighting some API methods that were taking a long time to execute.

To make matters worse, the methods that were taking a huge amount of time to execute were methods that interacted both with a third-party component and with other systems in the distributed architecture.

Perceptions of performance in distributed systems

Performance is frequently hidden behind smoke and mirrors – there’s a big difference between actual performance and perceived performance. Actual performance is concrete and measurable, whereas perceived performance is an ephemeral feeling that a user has towards your software.

There’s a really great example of this that everyone recognises. Facebook uses an eventual consistency driven data store for much of their user data.

“Eventual consistency is a consistency model used in distributed computing to achieve high availability that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value.” – Wikipedia

When a Facebook user posts some new content, that content is immediately added into their browser’s DOM. Basically, what they’ve posted is put on their screen right in front of them. Other users of the site will see the content seconds later, when it eventually replicates across all the data stores. This can take seconds to minutes if the system is running slowly, but the data is sent and queued, rendered to the user, and the perception is that Facebook is performing very quickly.

The key takeaway is that the way users feel about performance correlates directly with what they experience and can see, and rarely with the performance of the system as a whole.

The perception of poor performance in a distributed system will always fall on the user-facing components of that system. To compound the problem, reproducing “production like” performance problems is often much more difficult, with the strain felt by various components in the system becoming very difficult to isolate and identify.

Performance is a feature of your system

Performance problems are notoriously hard to solve because “performance” is often a blanket term used to describe a wide variety of problems. To combat this, mature systems often have expected performance requirements as a feature – benchmarkable, verifiable performance characteristics that they can be measured and tested against.

I’m not a proponent of performance first design. Performance first design frequently leads to micro-optimisations that ruin the clarity and intent of a codebase, where a higher level macro-optimisation would yield far greater performance improvements. I am, however, a big fan of known, executable, performance verification tests.

Performance verification tests provide a baseline that you can test your system against – they’re often high level (perhaps a subset of your BDD or feature tests), and they’re run frequently, alongside development. These tests are important for establishing a baseline for conversations about “performance increasing” or “performance decreasing”, because they give you tangible, real-world numbers to talk about. The value of these tests varies throughout development – but they’re easiest to add at the start of a project and evolve with it.
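A minimal sketch of what one of these tests can look like – the endpoint and the threshold are illustrative, so pick numbers that reflect your own requirements:

    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading.Tasks;
    using NUnit.Framework;

    [TestFixture]
    public class SaveOrderPerformanceTests
    {
        [Test]
        public async Task SavingAnOrder_CompletesWithinTwoSeconds()
        {
            using (var client = new HttpClient())
            {
                var stopwatch = Stopwatch.StartNew();

                var response = await client.PostAsync(
                    "http://localhost:8080/api/orders",
                    new StringContent("{ \"item\": \"example\" }"));

                stopwatch.Stop();

                Assert.That(response.IsSuccessStatusCode, Is.True);
                Assert.That(stopwatch.ElapsedMilliseconds, Is.LessThan(2000));
            }
        }
    }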

Performance is a feature, even if it isn’t necessarily the highest priority one.

Measurement vs. Testing

While performance tests are a great way to understand the performance of your system as you build it, real systems in a real production environment will always have a more diverse execution profile. Users will use your system in ways you didn’t design for. That’s OK. We all accept it.

Measurement on the other hand, is the instrumentation and observation of actual running code. Performance tests will help you understand what you expect of your system, but quality measurement and instrumentation will help you understand what’s going on right now.

Measuring the actual performance of your system is vital if you’re investigating performance problems, and luckily there are great tools out there to do it. We used the excellent New Relic to verify some of our performance-related suspicions.

New Relic is not the only tool that does hardware and software measurement and instrumentation, but it’s certainly one of the best, and it’s part of a maturing industry of software-as-a-service offerings that support application, logging, and statistical reporting over servers and apps.

Code reading and profiling

Given that we had a suspicious looking hot-spot that we’d identified from IIS logs, we were also able to lean on old-fashioned code review and profiling tools.

Profiling tools are a bit of a headache. People struggle with them because they often deal with concepts that you’re not exposed to at any time other than when you’re thrashing against performance issues. We’re lucky in the .NET ecosystem to have a couple of sophisticated profiling options to turn to, with both JetBrains’ dotTrace and RedGate’s ANTS Performance Profiler being excellent, mature products.

We profiled and read our way through the code, and doing so highlighted a few issues.

Firstly, we found some long-running calls to another system. These were multi-second HTTP requests that were difficult to identify and isolate without deep code reading, because there were no performance metrics or instrumented measurement around them.

Secondly, and much more significantly, there was a fundamental design problem in a third-party library. Due to some poor design in the internals of this library, it wasn’t able to cope with the volume of data we were storing in it. After some investigation, we established a work-around for the problem and prepared a fix.

How do we prevent this happening?

There are some useful takeaways from this performance journey. The first is a set of principles that should be considered whenever you’re building software.

Monitoring, instrumentation and alerting need to be first-class principles in our systems.

This means that we should be recording timings for every single HTTP call we make. We should have alerting set on acceptable performance thresholds, and this should all be built into our software from day one.
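As an illustration, a small DelegatingHandler can record timings for every outbound HTTP call made through an HttpClient – a sketch, with the logging destination left up to you:

    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    public class TimingHandler : DelegatingHandler
    {
        public TimingHandler() : base(new HttpClientHandler()) { }

        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            var stopwatch = Stopwatch.StartNew();
            var response = await base.SendAsync(request, cancellationToken);
            stopwatch.Stop();

            // Ship this to your metrics/logging system of choice (StatsD, New Relic, log files...)
            Trace.WriteLine(string.Format("{0} {1} took {2}ms",
                request.Method, request.RequestUri, stopwatch.ElapsedMilliseconds));

            return response;
        }
    }

    // Usage: var client = new HttpClient(new TimingHandler());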

In order to get the visibility into the software that we need, we need great tooling.

New Relic was instrumental in helping us record the changes in performance while testing our solution. Further monitoring, instrumentation and aggregation of exceptions and stats would have made our lives much simpler – letting us identify potentially long-running API calls much more quickly.

There are tools on the market (StatsD, LogStash, Papertrail, Kibana, Raygun) that you can buy and implement trivially that will vastly increase the visibility of these kinds of problems – they’re essential to reliably operating world-class software in production, and they’re much cheaper to buy and outsource than to build and operate. If they save a few developer days a month, they pay for themselves.

Poor design ruins systems. In this case the poor design was in a third-party library, but it’s worth reiterating regardless: a design that can’t cope with an order-of-magnitude increase in load needs to be evaluated and replaced.

Fitness for purpose is very load-dependent – we should consider whether we can catch these potential problems while evaluating third-party libraries that can’t easily be replaced, going to the effort of scripting and importing load rather than discovering these issues when scaling to the point of failure.

Luckily, much of this will be things we already know – instrumentation is vital, and monitoring and performance metrics help us build great software – but these are some nice, practical and easy wins that can be implemented in your software today.

Lessons learnt running a public API from #dddnorth

October 19th, 2014

Yesterday I gave a talk at #DDDNorth (a free, community-led conference in the “Developer! Developer! Developer!” series) about running public-facing RESTful APIs. Many thanks for all the kind feedback I’ve had about the talk on social media – I’m thrilled that so many people enjoyed it. It was a variant on previous talks I’ve given at a couple of user groups on the topic – so here are the updated slides.

Google presentation link