Technology Blog

Projections in Agile Software Development

11/28/2015 16:44:24

Preparing a timeline for the development of software is a difficult problem – teams have proven time and time again that they’re tremendously bad at estimating how long it takes to develop software at any kind of scale.

Unfortunately, there’s a conflict between the need for financial planning and budgeting, and the unpredictable nature of software delivery – to the extent that agile software advocates frequently dismiss the accuracy of, and need for, timelines and schedules.

This presents a difficult problem – organisations need to plan and budget, but their best technical people tell them that accurate planning is impossible.

We can make projections to try and close this gap.

Projections are common in financial planning and they serve the same purpose in software – they’re a forecast.

From http://www.entrepreneur.com/encyclopedia/financial-projections

Planning out and working on your company's financial projections each year could be one of the most important things you do for your business. The results--the formal projections--are often less important than the process itself. If nothing else, strategic planning allows you to "come up for air" from the daily problems of running the company, take stock of where your company is, and establish a clear course to follow.

Importantly

Variances from projections provide early warning of problems. And when variances occur, the plan can provide a framework for determining the financial impact and the effects of various corrective actions.

Projections are formal educated guesses – they don’t claim accuracy, but they intend to give a reasonable idea of progress.

With a well thought out projection, your team can continuously evaluate how their current progress lines up with the projection. As and when the team’s real-world progress and the projection start to diverge, they can realistically measure the difference between the two and adjust the projection accordingly.

Used in this way, projections are a useful tool for communicating with business stakeholders – suggesting a “best guess at when we might be doing this work” and a means of communicating changes in timelines divorced and abstracted away from the “coal face” of user stories, velocity, estimation and planning. Teams use their projections and actual progress to explain and adjust for changes in complexity, scope and delivery time.

If estimation and planning in software is hard, then projection is much harder – it’s “The Lord of the Rings” – a sprawling epic of fiction.

On a small scale, planning is predicated on the understanding that “similar things, with a similar team, take a similar amount of time” – and this generally works quite well.

People understand this quite intuitively – it’s the reason your plumber can tell you roughly how long it’ll take and how much it’ll cost to fit a bathroom. At a larger scale however, planning starts to fall apart. The more complex a job, the more nuance is involved and the more room there is to lose details and accuracy.

The challenge in putting together a projection is to bridge the gap between the small scale accuracy of planning features, and the long term strategic objectives of a product.

The Simplest Possible Process

In order to put together a projection and make sure that it’s realistic, you need a baseline – some proven and known information to use as the basis of the projection.

You’ll need a few things

  • A backlog of “epics” – or large chunks of work
  • Some user stories of work you’ve already completed
  • A record of the actual time taken to finish that work
  • A big wall
  • Some sticky notes

With these things we’re going to

  • Establish a baseline from the work we’ve already completed
  • Place our completed work on the wall using sticky notes
  • Estimate our new work in relation to our completed work
  • Assign our new work a rough time-box

A note on scales of estimation

Agile teams generally use abstract estimation scales to plan work – frequently the modified Fibonacci sequence or T-shirt sizes. It’s recommended to use a different scale for your projections than you normally use for story estimation, to prevent people accidentally equating the estimates of stories in a sprint with the estimates of epics on a roadmap.

I find it most useful to use Fibonacci for scoring user stories, and t-shirt sizes for projections.

The trick to a successful projection is to find a completed epic or big chunk of work to “hang your hat on” – you can then make a reasonable estimate on new work by asking the question

“Is this new piece of work larger or smaller than the thing we already did?”

This relative estimation technique is called stack ranking – it’s simplistic, but very easy for an entire team to understand.

Take this first item, write it on a sticky note and position it somewhere in the middle of the wall, making a note of the time it took to complete the work as you stick it up.

You should repeat this exercise with a handful of pieces of work that you’ve already completed – giving you a baseline of the kinds of tasks your team performs, and their relative complexity to each other based on their ordering.

As you stick the notes on the wall, if something is significantly more difficult than something else, leave a bit of vertical space between the two notes as you place them on the wall.


Now you’ve established a baseline around work that you’ve already completed, you can start talking – in broad terms – about the large stories or epics you want to project and schedule.

Work through your backlog of epics, splitting them down if you can into smaller chunks, and arranging them around the stack ranked work on your wall. If something is about as hard as something else, place it at the same level as the thing it’s as hard as.


Once you’ve placed all of your backlog on the wall, arrange a time scale down the right hand side of the wall, based on the work you’ve already completed, and a scale from “XXXS” to “XXXL” on the left hand side.


The whole team can then adjust their estimates based on the time windows, and the relative sizes assembled on the wall. Once the team is happy with their relative estimates, they should assign a T-shirt size to each sticky note of work – which in turn, relates to a rough estimate of time.

You can now remove the sticky notes from the wall, and arrange them horizontally into a timeline using the “medium size” sticky note as a guide.


These timelines are based around a known set of completed work, but nonetheless they are just projections. They’re not delivery dates, they’re not release dates – they’re best-guess estimates of the time window in which work will be completed.

Adjusting your projections

Projections are only successful when continually re-adjusted – they’re the early warning, a canary in the coal-mine, a way to know when you’re diverging from your expectations. Treating a projection like a schedule will cause nothing but unmet expectations and disappointment.

As you complete actual pieces of work from your projection, you should compare each piece’s original projection against its actual time to completion – making a note of the percentage of over- or under-estimation on that particular piece.

Once you complete a piece, you should adjust your projection based on actual results – checking that your assertion that the “medium sized thing takes three months” holds true.
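As a rough illustration (the sizes and durations here are entirely made up), the adjustment is just simple arithmetic:

// Illustrative only: our baseline says a "medium" epic takes three months.
var projectedMonths = 3.0;
var actualMonths = 4.0;                      // what the completed epic really took
var drift = actualMonths / projectedMonths;  // ~1.33 - we're running a third over

// Scale the remaining "large" epic's time-box by the observed drift.
var adjustedLargeEpicMonths = 6.0 * drift;   // 8 months, not the 6 we originally projected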

Once armed with real information, you can have honest conversations about dates based in reality, not fiction, helping your product owners and stakeholders understand if they’re going to be delivering earlier or later than they expected, before it takes them by surprise.

Using C# 6 Language Features In Your Software Tomorrow

07/20/2015 20:42:41

Visual Studio 2015 dropped today, along with C# 6, and a whole host of new language features. While many new features of .NET are tied to framework and library upgrades, language features are specifically tied to compiler upgrades, rather than runtime upgrades.

What this means is that as long as the build environment for your application supports C# 6, it can produce binaries that make use of the new language features but still run on earlier versions of the .NET Framework. The good news for most developers is that you can use these language features in your existing apps today – you don’t have to wait to roll out framework upgrades across your production servers.
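For example, all of the following are C# 6 features that compile down to ordinary IL and run happily on earlier framework versions (a quick illustrative sample):

public class Person
{
   // Auto-property initializers
   public string Name { get; set; } = "Unknown";

   // Expression-bodied members and string interpolation
   public override string ToString() => $"Person: {Name}";

   public static string DescribeSafely(Person person)
   {
      // Null-conditional operator and nameof
      return person?.ToString() ?? "No " + nameof(Person) + " supplied";
   }
}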

You’ll need to install the latest version of Visual Studio 2015 on your dev machines, and you’ll need to make sure your build server has the latest Build Tools 2015 pack installed, to be able to compile code that uses C# 6.

You can grab the Build Tools 2015 pack from Microsoft here: https://go.microsoft.com/fwlink/?LinkId=615458

I’ve verified that C# 6 apps compiled under VS2015 run, at the very least, on machines running .NET 4.5.
If you’re doing automated deployment with Kudu to Azure, you may need to wait until they roll out the tools pack to the Web Apps infrastructure.

Have fun!

Passenger - A C# Selenium Page Object Library (Video)

04/27/2015 13:37:20

A lap around Passenger - a page object library I’ve been working on for Selenium.

In this video, we’ll discuss the general concept of page objects for browser automation testing - then we'll work into a live demo comparing and contrasting traditional C# selenium tests, and tests using Passenger.

 

Resources:
Passenger on GitHub - https://github.com/davidwhitney/Passenger
Passenger on NuGet - https://www.nuget.org/packages/Passenger/

Page objects for browser automation - 101

04/01/2015 16:56:11

The page object model is a pattern used by UI automation testers to help keep their test code clean. The idea behind it is very simple - rather than directly using your test driver (Selenium / WatiN etc.) you should encapsulate the calls to its API in objects that describe the pages you're testing.

Page objects gained popularity with people that felt the pain of maintaining a large suite of browser automation tests, and people that practice acceptance test driven development (ATDD) who write their browser automation tests before any of the code to satisfy them - markup included - exists at all.

The common problem in both of these scenarios is that the markup of the pages changes, and the developer has to perform mass find/replaces to deal with the resulting changes. This problem only gets worse over time, with subsequent tests adding to the burden by re-using element selectors throughout the test codebase. In addition to the practical concern of modification, test automation code is naturally verbose and often hides the meaning of the interactions it's driving.

Page objects offer some relief - by capturing selectors and browser automation calls behind methods on a page object, the behaviour is encapsulated and need only be changed in one place.

Some page objects are anaemic, providing only the selector glue that the orchestrating driver code needs, while others are smarter, encapsulating selection, operations on the page, and navigation.

The simplest page object is easy to home roll, being little more than a POCO / POJO that contains a bunch of strings representing selectors, and methods that the current instance of the automation driver gets passed to. Increasingly sophisticated page objects capture behaviour and will end up looking like small domain specific languages (DSLs) for the page being tested.

The simplest page object could be no bigger than

public class Homepage
{
   public string Uri { get { return "/"; } }
   public string Home { get { return "#home-link"; } }
}

This kind of page object can be used to help clean up automation code where the navigation and selection code is external to the page object itself. It's a trivial sample to understand, and effectively serves as a kind of hardcoded configuration.

A more sophisticated page object might look like this:

public class Homepage
{
   public string Uri { get { return "/"; } }
   public string Home { get { return "#home-link"; } }

   public void VisitHomeLink(RemoteWebDriver driver)
   {
      driver.FindElementByCssSelector(Home).Click();
   }
}

This richer model captures the behaviour of the tests by internalising the navigation glue code that would be repeated across multiple tests.

The final piece of the puzzle is the concept of Page Components - reusable chunks of page object that represent a site-wide construct - like a global navigation bar. Page components are functionally identical to page objects, with the single exception being that they represent a portion of the page so won't have a Uri of their own.
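A sketch of what a page component for a global navigation bar might look like (the selectors and method names are illustrative):

public class NavigationBar
{
   public string SearchBox { get { return "#nav-search"; } }
   public string AccountLink { get { return "#nav-account"; } }

   public void SearchFor(RemoteWebDriver driver, string term)
   {
      var input = driver.FindElementByCssSelector(SearchBox);
      input.SendKeys(term);
      input.Submit();
   }

   public void VisitAccount(RemoteWebDriver driver)
   {
      driver.FindElementByCssSelector(AccountLink).Click();
   }
}

Note that, unlike a page object, there's no Uri property - the component only ever appears as part of some other page.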

By using the idea of page objects, you can separate the application-centric portions of your automation suite (selectors, interactions) from the repetitive test orchestration. You'll end up with cleaner, less repetitive tests that are more resilient to frequent modification.

Page objects alone aren't perfect - you still end up writing all the boilerplate driver code somewhere, but they're a step in the right direction for your automation suite.

We need to talk about configuration.

03/05/2015 00:41:31

The vast majority of software that you build and use needs to be configured – feature toggles, file and directory paths, startup options. We build configurable software every single day – especially if you build line of business apps.

When we do this we somehow forget that all of the worst software experiences you have suffer from horrible installation and configuration processes.

Ever had to set up a SharePoint cluster?
Muddle through SQL Server replication configuration?
Nobody ever had a good time configuring an application.

Conversely, all the software that you love is defined by its slick user experience. Great software doesn’t require configuration, it just works.

Every time I watch a developer add “just another configuration value” or “just add something extra into the installation script”, I feel a pang of sadness, because I know all that’s really happening is they’re setting themselves a trap for later.

Configuration code takes time to build – you have to build parsers, or pick meaningful names for configuration settings.

You have to plumb those settings into your code somewhere.

For all this work? You’re rewarded with code that people can misconfigure.

A new point of failure in your application.

“But that’s ok!”

I hear you cry.

“I’ll write some documentation!”

I’ve got some bad news for you. That documentation gets out of date, or the person using your code doesn’t even know it exists.

But let’s pretend for a second that this is all ok.

Let’s pretend we correctly configure our software for the environment it’s deployed into, with correct database connection strings, paths to dependent services, and weird internal settings for physical paths on the local machine. You start the application and it crashes.

So you trawl through your error logs, and find an exception log that says something like

“Application failed to start due to DirectoryNotFoundException – couldn’t open c:\program files (x86)\YourApp\UserData”.

You take a look, and sure enough, the directory doesn’t exist – so you create it and try again and the app starts.

 

Friends don’t let friends configure software

If you bought some software and endured that kind of user experience during installation and configuration, you’d probably give up, yet we don’t think twice about exposing our teams in dev and ops to this kind of complexity every single day.

To make things worse, in my experience the vast majority of the configuration and setup that applications go through isn’t even required – it’s just in place to cover the cracks where we could’ve done better as developers.

Every time you’re tempted to add a new configuration setting, ask yourself carefully

“Is there any way my application can infer this configuration setting from its environment?”

And every time you’re tempted to add a custom installation step or an extra line to your setup guide, think

“Can my application verify its environment and do this task at startup?”

Convention and inference are the antidotes to configuration and complexity in setup, and I want to spend a bit of time explaining how you can infer and discover the vast majority of behaviours and settings you think that you need to configure.
We should aim for our software to be safe and configured by default.

 

Why do we configure?


To understand why we implement configuration, we need to look at the kind of things that get configured.

We configure software…

  • To adjust the behaviour of the application at install time
  • To adapt the application to multiple environments
  • To configure aspects of our application’s functionality

Of the things we configure, you can split the types of configuration into a couple of categories

  • Configuration pointing to environment-specific instances of dependent services
  • Configuration that toggles application behaviour
  • Configuration that deals with the internal behaviour of the software being installed

 

Environment specific configuration is frequently unavoidable, often repetitive and badly factored.

Feature toggles and internal configuration are much more contentious, frequently defining functionality that shouldn’t be externally configured.

If you’re building software and you need to deploy it to multiple target environments, you’re probably going to end up with some kind of application specific configuration, defining environmental settings, database connection strings, and URIs that you depend on. There’s a certain amount of necessity to environmental connection strings, but we can make it as simple as possible.

Unfortunately, software that requires a lot of configuration and setup is by definition more difficult to deploy. One of our key goals in building reliable, deployable software should be supporting simple automation. Simplicity in usage and installation is mandatory, not optional.

Let’s talk about how we can make our software configuration simpler.

 

Always verify your environment

Your application should attempt to verify the existence of absolutely everything it depends on, creating things as it needs to.

This means that if your application requires certain data directories, or has expectations of other networked resources, it should detect, verify and create them at application startup.

The absence of a dependency the application cannot create should be a fatal error, while the lack of a dependency it can create should result in that dependency being correctly created.

This includes

  • Creating queues
  • Registering topics on event buses
  • Creating directories
  • Setting permissions
  • Making exploratory calls to web-services

Your application should refuse to start up in an invalid state, and ideally, should be able to create its entire execution environment from sensible defaults.

Keeping these checks in the codebase of the application, and executing them at startup, rapidly reduces the time to resolution of any issues. It’s simple to implement – there’s no excuse not to do it.
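A minimal sketch of what those startup checks might look like (the directory and service address are illustrative):

public static class EnvironmentVerifier
{
   public static void VerifyOrCreate()
   {
      // Dependencies we can create ourselves - just create them.
      var userData = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "UserData");
      Directory.CreateDirectory(userData); // no-op if it already exists

      // Dependencies we can't create - make an exploratory call and fail fast.
      try
      {
         using (var client = new WebClient())
         {
            client.DownloadString("http://some-dependent-service/health");
         }
      }
      catch (WebException ex)
      {
         throw new InvalidOperationException("Dependent service is unreachable - refusing to start.", ex);
      }
   }
}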

 

Regularity helps

Unpredictable environments add to the complexity of application configuration.

Do your best to ensure that all the target environments you deploy to follow regular, predictable naming patterns, and verify these strong conventions.

Use names like

  • service1.production-domain.com
  • service2.production-domain.com
  • service1.uat-domain.com
  • service2.uat-domain.com

This regularity in your environment makes mistakes trivial to spot, and reduces the need for verbose and error prone configuration.

Irregularly formed environments require a huge amount of configuration to automate, and that configuration will be brittle and get broken.

 

Discovery heuristics beat configuration

Anything that a human has to configure, a computer can probably do better. Consider having your software detect the environment it exists in, and configure itself appropriately.

You can lean on environment variables, or detect which services exist in the environment at startup, either by connecting to them or by using a service registry of some kind.

This is an obvious technique to use if you have configuration for failover services – at runtime, your application can attempt to connect to both the primary and secondary service, and select which one to configure itself against. I’ve had good experiences using this technique to load balance across payment gateways that had a tendency to fail.
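A rough sketch of that kind of detection at startup (the gateway addresses are illustrative):

public static string SelectPaymentGateway()
{
   var candidates = new[]
   {
      "https://payments-primary.production-domain.com",
      "https://payments-secondary.production-domain.com"
   };

   foreach (var candidate in candidates)
   {
      try
      {
         var request = (HttpWebRequest)WebRequest.Create(candidate + "/ping");
         request.Timeout = 2000;
         using (request.GetResponse())
         {
            return candidate; // the first gateway that responds wins
         }
      }
      catch (WebException)
      {
         // unreachable - try the next candidate
      }
   }

   throw new InvalidOperationException("No payment gateway reachable - refusing to start.");
}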

 

If you must configure, keep it dry and remove repetition

One of the absolute worst things I see regularly is huge configuration files, and templated transformations that repeatedly change a single portion of a templated string over and over.

We’ll end up with something like this

<appSettings>
   <add key="Server1" value="http://someservice1.environment1.com" />
   <add key="Server2" value="http://someservice2.environment1.com" />
   <add key="Server50" value="http://someservice50.environment1.com" />
</appSettings>

and a transform file that looks like this…

<appSettings>
   <add key="Server1" value="http://someservice1.environment2.com" />
   <add key="Server2" value="http://someservice2.environment2.com" />
   <add key="Server50" value="http://someservice50.environment2.com" />
</appSettings>

When a much simpler approach would’ve been to configure the application with a string template

<appSettings>
   <add key="env" value="environment1" />
   <add key="Server1" value="http://someservice1.{env}.com" />
   <add key="Server2" value="http://someservice2.{env}.com" />
   <add key="Server50" value="http://someservice50.{env}.com" />
</appSettings>

and allow the application to splice the two strings together. This means that each environment’s configuration file only has to replace a single line of configuration – not 50. We can take this further and support overriding settings from environment variables, all because the application now controls how it’s configured and doesn’t blindly read a text file and just go with it.
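The splice itself is a one-liner (a sketch using ConfigurationManager; the key names follow the example above):

var environment = ConfigurationManager.AppSettings["env"];
var server1 = ConfigurationManager.AppSettings["Server1"].Replace("{env}", environment);
// server1 == "http://someservice1.environment1.com"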

I’ve seen this remove literally hundreds of lines of configuration peppered through lengthy transformation files.

Never repeat yourself in a configuration file.

 

Allow important configuration to be overridden

Devise a general way to override any of your inferred or conventional configuration settings.

There will always be the odd environment or special circumstance that requires configuration, so you should make a point of it being the exception rather than the rule.
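One simple approach is to let an environment variable win over the inferred value (a sketch):

public static string Setting(string key, string inferredValue)
{
   // An explicit environment variable always wins; otherwise use the convention-based value.
   return Environment.GetEnvironmentVariable(key) ?? inferredValue;
}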

 

Configuration Is Hell

We’ve talked about how regulating deployment environments allows you to configure your software by convention, how predictability makes this easy and divergence makes it hard.

We’ve discussed how allowing your application to verify and create its local dependencies relieves the administrator of these mundane tasks.

We’ve also considered how detection heuristics can help you remove any notion of environmental configuration from your code, instead focusing on the services the app can discover when it starts up.

And finally, we’ve looked at how trivial configuration templating can remove much of the friction and duplication from the small amounts of required configuration.

It’s really easy to “just add more configuration” in situations where discovery heuristics and self-configuring applications are the correct answer, but configuring software is a user experience problem. 

You’d not accept it from boxed software you bought, so you should do your best to protect your teams from configuration, and the accidental bugs that occur when they get it wrong.

Cutting CODE! - A livestream show for programmers

02/01/2015 22:37:15

I’ve spent some time recently thinking and discussing the idea of live-streaming coding sessions. It started with conversations with my brother about how there’s not really a Twitch TV for programming, but if there was I’d be really into that.

In a classic case of “The Simpsons Already Did It”, a week after floating the original idea to start a pair-programming streamed show with Rob Cooper, Scott Hanselman posted “Reality TV for Developers – Where is Twitch.tv for Programmers?”. At about the same time a new sub-reddit grew out of /r/programming called /r/WatchPeopleCode/ – the timing seemed a little too good to be true, so last Sunday I did a stealth trial run and mutely live-coded two hours of hacking on a random library I’ve been spiking. It was fairly dull stuff, but about 80 people came and went over the duration of two hours.

That’s enough of an audience for now. I love writing software, and I love pairing, so earlier today I got together with Chris Bird and we streamed our first “live-on-air code kata”. It clocks in at about two hours, and was fun to put together. Time allowing, I’m going to aim to put one or two of these together a week, ideally sticking to a ping-pong-pairing with conversation format.

Here’s the YouTube recording of the pilot “Cutting CODE!” stream, where we build an image to ASCII art converter in two hours, entirely driven by tests.

Deployments in .NET 2014/2015 Report

01/16/2015 01:50:55

At the end of last year, I asked for feedback from the community on the types of deployment tooling and environments they were building software for.

I’ve collated the results, and drawn up a report with observations and recommendations.

Download Deployments in .NET 2014/2015 (pdf)
Download Deployments in .NET 2014/2015 Raw Data (xlsx)

The raw data is provided for those of you who wish to dig a little deeper.

Testing an ASP.NET WebAPI app in memory

01/07/2015 17:09:32

One of the nice things around the rising tide of OWIN in the .NET ecosystem is that it literally supports “hosting anywhere – as long as there’s a viable host!”. For fans of TDD, the great thing about this is “anywhere” includes “inside your unit tests”.

This lets you build a really powerful development workflow where you can lean entirely on your test runner while building your APIs, and not have to bother with the painful cycle of breaking out into external tools. You get to execute the entire WebAPI stack, from inside your acceptance or unit tests and get rapid feedback.

This only works if you’re using WebAPI hosted “using OWIN”, and it lets you write full stack tests in just a handful of lines – we’ll build one up below.


Combined with manipulation of your container registrations, you can execute full stack tests with a single component – like your data store – mocked out; we’ll get to that later.


Powerful, right? Let’s take a look at how we put this together over a vanilla WebAPI controller with a single DI’d dependency.

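The controller is nothing special – something along these lines (a sketch; the interface’s method name and return type are assumptions for this walkthrough):

public class ValuesController : ApiController
{
   private readonly IGetValues _valueService;

   public ValuesController(IGetValues valueService)
   {
      _valueService = valueService;
   }

   public IEnumerable<string> Get()
   {
      return _valueService.Get();
   }
}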

We’re going to use a few NuGet packages – the ones provided by Microsoft to host OWIN components in IIS, some testing helpers that support in-process hosting of OWIN components outside of IIS, and Ninject, an IoC container – all installed into the project in the usual way.


There are a couple of important pieces in the solution – the WebAPI app with its OWIN Startup file, a Ninject dependency resolver to hook IoC up to WebAPI, and the controller described earlier. In addition to these we have the interface IGetValues, implemented by ValueService. For this example, ValueService is very simple.

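Something like this will do (again a sketch – the real return values don’t matter):

public interface IGetValues
{
   IEnumerable<string> Get();
}

public class ValueService : IGetValues
{
   public IEnumerable<string> Get()
   {
      return new[] { "value1", "value2" };
   }
}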

Your OWIN startup class is effectively your “Global.asax” – it bootstraps your app and configures components.

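Roughly like this (a sketch – the route template is illustrative, and exposing the kernel statically is an assumption of this sketch that the tests will lean on later):

public class Startup
{
   public static StandardKernel Kernel { get; private set; }

   public void Configuration(IAppBuilder app)
   {
      var config = new HttpConfiguration();
      config.Routes.MapHttpRoute("DefaultApi", "api/{controller}/{id}", new { id = RouteParameter.Optional });

      Kernel = new StandardKernel();
      Kernel.Bind(x => x.FromThisAssembly().SelectAllClasses().BindDefaultInterface());
      config.DependencyResolver = new NinjectDependencyResolver(Kernel);

      app.UseWebApi(config);
   }
}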

You’ll see that we’re configuring routes, creating a new Ninject “StandardKernel” IoC container, and using the Ninject.Extensions.Conventions library to bind up all our components – in this case just our IGetValues service. I’ll skip over the implementation of the Ninject dependency resolver, but all it does is delegate calls to create components to the Ninject standard kernel.

Pressing F5 in Visual Studio will launch the WebAPI app in IIS Express, and you can visit it in a browser


Now we have a working WebAPI app, lets see how we can test it in memory. We want to build a test that runs the whole of the WebAPI stack in memory, and asserts on its response.

To do this, we need to use a couple of packages – the Microsoft Owin HttpListener package, the Owin Hosting package and Owin testing package, alongside our test framework (NUnit).


With those packages installed, we can write a test that wires all these components together.

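In full, a first cut of that test might look something like this (a sketch – the route and assertions are illustrative):

[TestFixture]
public class ValuesApiTests
{
   [Test]
   public void GetValues_ReturnsASuccessfulResponse()
   {
      using (WebApp.Start<Startup>("http://localhost:8086"))
      using (var client = new HttpClient { BaseAddress = new Uri("http://localhost:8086") })
      {
         var response = client.GetAsync("/api/values").Result;
         var body = response.Content.ReadAsStringAsync().Result;

         Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
         Assert.That(body, Is.Not.Empty);
      }
   }
}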

What we’re doing here is using the Owin hosting components to invoke our app’s Startup class, hosted over HttpListener, on localhost on port 8086. We’re then using a regular HTTP client to connect to this server, execute a request, and assert on a response. It’s a small marvel that we can do this at all, but TDDers will be cringing at the amount of noise in the test distracting you from the meaningful parts – the preconditions and assertions. We can do better than this!

A Less Noisy Approach

Consider this class

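A sketch of such a base class (the names are illustrative; it reuses the Startup class from the app under test):

public abstract class InMemoryApiTest
{
   private IDisposable _server;

   protected string BaseAddress { get; private set; }
   protected HttpClient Client { get; private set; }

   [TestFixtureSetUp]
   public void StartServer()
   {
      // Find the first free port so the tests run on any machine.
      BaseAddress = "http://localhost:" + GetFreePort();
      _server = WebApp.Start<Startup>(BaseAddress);
      Client = new HttpClient { BaseAddress = new Uri(BaseAddress) };
   }

   [TestFixtureTearDown]
   public void StopServer()
   {
      Client.Dispose();
      _server.Dispose();
   }

   private static int GetFreePort()
   {
      var listener = new TcpListener(IPAddress.Loopback, 0);
      listener.Start();
      var port = ((IPEndPoint)listener.LocalEndpoint).Port;
      listener.Stop();
      return port;
   }
}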

Firstly, it’s a base class that tidies all the noise out of the way – all the boilerplate code is moved into test fixture setup and teardown, and we detect the first available free port so that the tests run on any machine they’re executed on. Using this class, our previous test shrinks dramatically.

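Something along these lines (a sketch, reusing the base class above):

public class ValuesApiTests : InMemoryApiTest
{
   [Test]
   public void GetValues_ReturnsASuccessfulResponse()
   {
      var response = Client.GetAsync("/api/values").Result;

      Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
   }
}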

The test instantly becomes readable, exposing only the things we really care about – what we’re asking our code to do, and what the response is. We can pretty much use this base class everywhere, because it bootstraps our entire application for each test fixture.

Given how hard it’s previously been to test the full ASP.NET web stack, this is a revelation – you can write full stack acceptance tests without an installed web server, and with no additional infrastructure or support.

In real world examples your application likely has many components that connect to and use external resources (databases, web services) that you’ll want to isolate and replace with test doubles. With some creative use of our IoC container, we can get the best of both worlds and execute full stack acceptance tests (because at this point, we’re definitely not “unit testing”) while swapping out “just” your data access component or “just” an API client library to provide fake responses.

To do this, we want to rebind parts of our application stack for each test, and luckily, Ninject (and pretty much any other good container) makes this quite easy. Consider the following additions to our base class

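Roughly (a sketch – it assumes the Startup class exposes its Ninject kernel, as in the earlier sketch, so tests can rebind components):

public abstract class InMemoryApiTest
{
   // ...server start/stop as before...
   private readonly Dictionary<Type, Mock> _mocks = new Dictionary<Type, Mock>();

   protected Mock<TComponent> MockOut<TComponent>() where TComponent : class
   {
      if (!_mocks.ContainsKey(typeof(TComponent)))
      {
         var mock = new Mock<TComponent>();
         // Rebind the real registration to the mock, "freezing" it for this fixture.
         Startup.Kernel.Rebind<TComponent>().ToConstant(mock.Object);
         _mocks[typeof(TComponent)] = mock;
      }

      return (Mock<TComponent>)_mocks[typeof(TComponent)];
   }
}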

It’s a little bit long winded, but what we’re doing is maintaining a dictionary of Mock objects (using Moq) and providing some helper methods for use in our unit tests. When a unit test calls the method “MockOut” a mock is generated and returned for the test to use. This Mock is “frozen” for the duration of the test fixture and rebound into our IoC container.

What this means is that we can bootstrap our entire stack, and then re-register a single component to a Mock, letting us manipulate a single external resource. This becomes exceptionally useful in our broad acceptance tests

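For example (a sketch – the method on IGetValues is the one assumed earlier):

public class ValuesApiMockingTests : InMemoryApiTest
{
   [Test]
   public void GetValues_WithMockedService_ReturnsTheKnownValue()
   {
      MockOut<IGetValues>().Setup(x => x.Get()).Returns(new[] { "a-known-value" });

      var body = Client.GetStringAsync("/api/values").Result;

      StringAssert.Contains("a-known-value", body);
   }
}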

Here you can see we’re replacing the implementation of IGetValues with a mock that returns a known value, and asserting that the full stack call returns a body with that value in it. The test is synthetic, but the application is very real – you could mock out API calls to third parties, mock out your entire data store or a single component and verify the full execution of your entire system.

These broad acceptance tests, combined with unit tests of individual components give you the perfect mix of high level “feature focused” tests (you can add a BDD framework of choice if that’s your thing) that assert on behaviour not implementation and regular “TDD Unit Tests” around the components that your system is composed from.

There are lots of parts of ASP.NET vNEXT that are “coming soon” or “wait and see”, but the community effort around OWIN and the maturation of some of the middleware frameworks makes this a technique you can and should adopt now.

Full code sample on GitHub

Thoughts about delivering platforms from Nick Denton's leaked memo

12/11/2014 10:14:38

There's an interesting piece here about Nick Denton stepping down yesterday as company president of Gawker Media, the first "big web media company".

The interesting, and tech-relevant, passage from his pretty readable memo is below. For reference, Kinja is Gawker's underlying editorial platform. Denton is big and brash in the memo (which is basically his Christmas note / resignation) and there are a few interesting observations about the common trials and tribulations of software development in there.

"The problems I’m going to identify are common. Excellence in software development is elusive; no online publisher has yet succeeded in transforming itself into a platform...

The current principles of software product development hold that candid conversations--with developers, designers and users--lead to a better web experience. We lacked that necessary candor. We left too many opportunities on the table, too many known problems unresolved. And in our external communications, in our stories, we sometimes shied away from controversy, fearful of online critics. We weren’t ourselves.

We all understand how this works. Editorial traffic was lifted but often by viral stories that we would rather mock. We -- the freest journalists on the planet -- were slaves to the Facebook algorithm. The story of the year -- the one story where we were truly at the epicenter -- was one that caused dangerous internal dissension. We were nowhere on the Edward Snowden affair. We wrote nothing particularly memorable about NSA surveillance. Gadgets felt unexciting. Celebrity gossip was emptier than usual.

We pushed for conversations in Kinja, but forgot that every good conversation begins with a story. Getting the stories should have come first, because without them we have nothing to talk about...

...And the development of Kinja itself was a challenge. Our Tech department proclaimed a new era of multi-disciplinary cross-functional teamwork and collaboration. The reality: the best tech teams in online media in both New York and Budapest, with too many developers grinding away at re-factoring (thankful though we’ll be next year for that prep work). And a product manager on the 2015 design refresh had barely talked to the consultant who had driven the other major new project of the forthcoming year. Open collaboration in theory; the opposite in practice. Who raised the alarm? Would I have even heard it? For a good 12 months from the summer of 2013 I was variously betrothed, distracted, obsessed by Kinja, off on honeymoon, obsessed by Kinja, off on sabbatical. I’m not sorry for that. For ten years, I’ve danced with this octopus. That’s what one person on Twitter calls Gawker: an octopus armed with chainsaws. I deserved a break.

When I was disengaged, I didn’t leave any real authority in place. In my absence, the company ticks along nicely; with the challenge of Buzzfeed and Vox, ticking along nicely is no longer enough. Even when I’m here, if I'm obsessed by something, other parts of our common project can spin off in unpredictable directions, causing me to overlook developing risks and opportunities. As Joel said, I am the company’s greatest asset -- and it’s greatest liability. To be saved from myself, like many of us, I need partners in the fullest sense of the word, to take up the slack or keep me on focus. And I didn’t have them.

During this period I made a mistake in Editorial, hiring a talented guy whose voice and vibe I loved, who represented nerd values, and whom I thrust into a job which changed under his feet: he was competing with Lockhart Steele of Vox and Ben Smith of Buzzfeed, two of the most effective editorial managers in the business, each with the funding to go after the very best talent.

I was so obsessed with the design of Kinja discussions, I didn't even think to warn that Gawker is always first about the story. I took that for granted. I was in so much in a hurry that I didn’t even look at other candidates, a cardinal sin. I made a mistake, and I'm sorry to Joel, and I'm sorry to those to whom he is a friend.

And during this time too, we embarked piecemeal on a software project whose eventual scope we barely imagined. Tom told me years ago he did not want to run the department beyond 30 people, that he wanted to get back to coding. Tech is now at 55 people. Tom didn’t push me. I didn't want to mess with what was comfortable, the best relationship with a CTO by far that I’ve ever had in my career. And no other views were solicited.

So we attracted impressive technical talent -- with our culture, audience testbed, and idea -- and then we let those people down. We embarked on the Kinja expansion before we'd recruited the management; each major hire was reactive, each to fix a problem created by the last. Hire engineers. Now manage engineers. Oh no, we need product people. Lean, what’s that? I had to learn fast. It wasn’t quite that bad; but not that far off."

Food for thought.

NuGet 101 - A Bootcamp

12/09/2014 16:20:47

I’ve been running and recording a lot of workshops over the last couple of months – here is one on NuGet packaging for beginners – starting out as a slide deck then moving into a practical demo.

Slide deck

  • History
  • What's a package
  • So it's a zip file right?
  • Why should I use them?
  • An open source mentality
  • Disadvantages
  • Realities

Demo

  • Directory topologies of a library package
  • Adding packages to your solutions
  • Pairing nuspec files with csproj's
  • Replacement tokens for metadata
  • The NuGet docs
  • Package dependencies
  • Dependency discovery and bundling
  • Versioning and SemVer
  • Package sources
  • NuGet.config
