Wednesday, 27 January 2016

Let’s Talk About Feedback

[Photo: the enormous guitar amplifier from the start of "Back to the Future"]

There have been some really interesting blog posts about speaking at conferences recently – from Todd Motto’s “So you want to talk at conferences?” piece, to Basia Fusinska’s “Conference talks, where did we go wrong?” and Pieter Hintjens’ “Ten Steps to Better Public Speaking”, to Seb’s recent “What’s good feedback from a talk?” post. There’s a lot of very useful general advice in those posts, but I absolutely believe the best way to improve as a speaker is to ask your audience what you could be doing better – and that’s hard.

After most talks you’ll have a couple of people come up to you at the coffee machine or in the bar and say “hey, great talk!” – and that sort of positive feedback is good for your confidence, but it doesn’t actually give you much scope to improve unless you take the initiative. Don’t turn it into an interrogation, but it’s easy to spin this into a conversation – “hey, really glad you enjoyed it. Anything you think I could have done differently? Could you read the slides?” If there’s something in your talk you’re not sure about, this is a great chance to get an anecdotal second opinion from somebody who actually saw it.

Twitter is another great way to get anecdotal feedback. Include your Twitter handle and hashtags in your slides – at the beginning AND at the end – and invite people to tweet at you if they have any comments or questions. In my experience, Twitter feedback tends to be positive (or neutral – I’ve been mentioned in lots of “@dylanbeattie talking about yada yada yada at #conference”-type tweets) – and again, whilst it’s good for giving your confidence a boost, it can also be a great opportunity to engage with your audience and try to get some more detailed feedback.

And then there are the official feedback loops – the coloured cards, the voting systems and the feedback apps. I really like how BuildStuff does this. They gather feedback on each talk through a combination of coloured-card voting and an online feedback app. Attendees who give feedback online go into a prize draw, which is a nice incentive to do so – and it makes it an easy sell for the speakers: “Thank you – and please remember to leave me feedback; you might win an iPad!” The other great thing BuildStuff does is to send you your feedback as a JPEG, which looks good and makes it really easy for speakers to share feedback and swap notes afterwards. Here’s mine from Vilnius and Kyiv last year:

[Images: feedback ratings from Build Stuff Lithuania and Build Stuff Ukraine]

Now, I’m pretty sure there were more than five people in my talk in Kyiv – so I think something might have got lost in the transcription – but the real value here is in the anecdotal comments.

Some conferences also run a live “leaderboard”, showing which speakers are getting the highest ratings. I’m generally not a big fan of this – I think it perpetuates a culture of celebrity that runs contrary to the approachability and openness of most conference speakers – but if you are going to do it, then make sure it works. Don’t run a live leaderboard that excludes all the speakers from Room 7 because the voting machine in Room 7 was broken.

Finally, two pieces of feedback I had from my talk about ReST at NDC London this year. The official talk rating, which I’m quite happy with but doesn’t really offer any scope for improvement:

[Image: official talk rating from NDC London]

And then there’s this, thanks to Seb, who not only came along to my talk but sat in the front row scribbling away on his iPad Pro and then sent me his notes afterwards. There’s some real substance here, some good points to follow up on and things I know I could improve:

[Image: Seb’s notes from my NDC London talk]

This also highlights one of the pitfalls of doing in-depth technical talks – your audience probably aren’t in a position to judge whether there are flaws in your content, so your feedback is more likely to reflect the quality of your slides and your presentation style than the substance of your material. In other words – just because you got 45 green tickets doesn’t mean you can’t improve. Find a subject matter expert and ask if they can watch your talk and give you notes on it. Share your slides and talks online and ask the wider community for their feedback. And don’t get lazy. Even if you’ve given a talk a dozen times before, we’re in a constantly-changing industry and every audience is different.

Sunday, 24 January 2016

“The Face of Things to Come” from PubConf

A version of my talk from PubConf London, “The Face of Things to Come”, is now online. This isn’t a recording of the actual talk – the audio has been recorded specially, one slide has been replaced for copyright reasons, and a couple of things have been fixed in the edit – but it’s close enough.

The Face of Things to Come from Dylan Beattie on Vimeo.

As always, the only way to improve as a speaker is to listen to your audience, so I would love to hear your comments or feedback – leave a comment, email me or ping me on Twitter.

Tuesday, 19 January 2016

Would you like to speak at London .NET User Group in 2016?

The London .NET User Group, aka LDNUG – founded and run by Ian Cooper, with help from Liam Westley, Toby Henderson and me – is now accepting speaker submissions for 2016.

We aim to run at least one meetup a month during 2016, with at least two speakers at each meetup. Meetups are always on weekday evenings in central London, and they’re free. We’re particularly keen to welcome some new faces and new ideas to the London .NET community, so if you’ve ever been at a talk or a conference and thought “hey – maybe I could do that!” – this is your chance.

We’re going to try and introduce some variation on the format this year, so we’re inviting submissions for 45-minute talks, 15-minute short talks and 5-minute lightning talks, on any topic that’s associated with .NET, software development and the developer community. Come along and tell us about your cool new open source library, or that really big project your team’s just shipped. Tell us something we didn’t know about asynchronous programming, or distributed systems architecture. We welcome submissions from subject matter experts but we’re also keen to hear your first-hand stories and experience. Never mind what the documentation said – what worked for you and your team? Why did it work? What would you do differently?

If you’re a new speaker and you’d like some help and support, let us know. We’d be happy to discuss your ideas for potential talks, help you write your summary, rehearse your talk, improve your slide deck. Drop me an email, ping me on Twitter or come and find me after the next meetup (I’m the one in the hat!) and I’ll be happy to help.

So what are you waiting for? Send us your ideas, come along to our meetups, and let’s make 2016 a great year for London.NET.


Yes! I want to speak at London.NET in 2016!

Conway’s Law and the Mythical 17:00 Split

I was at PubConf on Saturday. It was an absolutely terrific event – fast-paced, irreverent, thought-provoking and hugely enjoyable. Huge thanks to Todd Gardner for making it happen, to Liam Westley for advanced detail-wrangling, and to NDC London, Track:js, Red Gate and Zopa for their generous sponsorship. 

Seb delivered a great talk about the Mythical 17:00 Split, which he has now written up on his blog. His talk resonated a lot with me, because I also find a lot of things about workplace culture very strange. I’m lucky enough to work somewhere where I seldom encounter these issues directly, but I know people whose managers genuinely believe that the best way to write software is to do it wearing a suit and tie, at eight o’clock in the morning, whilst sitting in a crowded office full of hedge fund traders.

But Seb’s write-up post said something that really struck a chord.

“Take working in teams. The best teams are made of people that like working together, and the worst teams I’ve had was when a developer had specific issues with me, to the point of causing a lot of tension”

Now, I’m a big fan of Conway’s Law – over the course of my career, I’ve seen (and built) dozens of software systems that turned out to reflect the communication structures of the organizations that created them. I’ve even given a conference talk at BuildStuff 2015 about Conway’s Law with Mel Conway in the audience – which was great fun, if a little nerve-wracking.

In a nutshell, Conway’s Law applied to Seb’s observation about teams says that if you take a bunch of people who are fundamentally incompatible and force them to work together, you’ll end up with a system made of incompatible components that are being forced to work together. If you want to know whether – and how – your systems are going to fail in production, look at the team dynamic of the people who are building them. If watching those people leaves you feeling calm, reassured and relaxed, then you’re gonna end up with something good. If one person is absolutely dominating the conversations, one component is going to end up dominating the architecture. If there are two people on the team who won’t speak to each other and end up mediating all their communication through somebody else – guess what? You’re going to end up with a broker in your system that has to manage communication between two components that won’t communicate directly.

If your team hate each other, your product will suck – and in the entire history of humankind, there are only two documented exceptions to this rule. One is Guns N’ Roses’ “Appetite for Destruction” and the other is “Rumours” by Fleetwood Mac.

\m/

Thursday, 14 January 2016

The Rest of ReST at NDC London

A huge thank you to everyone who came along to my ReST talk here at NDC London. Links to a couple of resources you might find useful:

Thanks again for coming – and any comments, questions or feedback, you’ll find me on Twitter as @dylanbeattie.

Thursday, 7 January 2016

Confession Time. I Implemented the EU Cookie Banner

Troy Hunt kicked off 2016 with a great post about poor user experiences online – a catalogue of common UX antipatterns that “make online life much more painful than it needs to be”.

One of the things he picks up on is EU Cookie Warnings – “this is just plain stupid.” And yeah, it is. Absolutely everybody I know who added an EU cookie warning to their website agrees – this is just plain stupid. But for folks outside the European Union, it might be insightful to learn just why these things started appearing all over the place.

First, a VERY brief primer on how the European Union works. There are currently 28 countries in the EU. The United Kingdom, where I live and work, is one of them. One of the aims of the EU is to create a consistent legal framework covering all citizens of all its member states. Overseeing all this is the European Parliament. They make laws. It’s then up to the governments of the individual member states to interpret and enforce those laws within their own countries.

So, in 2009, the European Parliament issued a directive called 2009/136/EC – OpenRightsGroup has some good coverage of this. The kicker here is Article 5(3), which says

“The storing of information or the gaining of access to information already stored in the user’s equipment is only allowed on the condition that the subscriber or user concerned has given their consent, having been provided with clear and comprehensive information in accordance with Directive 95/46/EC, inter alia, about the purposes of the processing. This shall not prevent any technical storage or access for the sole purpose of carrying out the transmission of a communication over an electronic communications network, or as strictly necessary in order for the provider of an information society service explicitly requested by the subscriber or user to provide the service.”

In a nutshell, this means you can’t store anything (such as a cookie) on a user’s device, unless

  1. You’ve told them what you’re doing and they’ve given their explicit consent, OR
  2. It’s absolutely necessary to provide the service they’ve asked for.

Directive 2009/136 goes on to state (my emphasis):

“Under the added Article 15a, Member States are obliged to lay down rules on penalties, including criminal sanctions where applicable to infringements of the national provisions, which have been adopted to implement this Directive. The Member States shall also take “all measures necessary” to ensure that these are implemented. The new article further states that “the penalties provided for must be effective, proportionate and dissuasive and may be applied to cover the period of any breach, even where the breach has subsequently been rectified”.”

Golly! Criminal sanctions? Retrospectively applied, even for something that we already fixed? That sounds pretty ominous.

Anyway. Here’s what happens next. Directive 2009/136 means that it is now THE LAW that you don’t store cookies without consent, and the various member states swing into action and try to work out what this means and how to enforce it. In the UK, Parliament interpreted this via something called the Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011, which would come into effect in 2012.

My team and I found out in late 2011 that, when the new regulations came into force on 26 May 2012, we would be breaking the law if we put cookies on our users’ machines without their explicit consent. And nobody had the faintest idea what that actually meant, because nobody had ever broken this law yet, so nobody knew what the penalties for non-compliance would be. The arm of the UK government that deals with this kind of thing is the Information Commissioner’s Office (ICO), who have a reputation for taking data protection very seriously, and the power to exact fines of up to £500,000 for non-compliance. The ICO also usually publish quite clear and reasonable guidelines on how to comply with various elements of the law – but that takes time, so in late 2011 we found ourselves with a tangle of bureaucracy, a hard deadline, the possibility of severe penalties, and absolutely no guidance to work from.

So… we implemented it. Despite it being a pointless, stupid, ridiculous endeavour that would waste our time and piss off our users, we did it - because we didn’t want to end up in court and nobody could assure us that we wouldn’t.

We built a nice self-contained JavaScript library to handle displaying the banner across our various sites and pages.

[Screenshot: the cookie consent banner]

Instead of just plastering something on every page saying “We use cookies. Deal with it” – the approach taken by most sites – we actually split our cookies into the essential ones required to make our site work, and the non-essential ones used by Boomerang, Google Analytics and other stats and analytics tools. And we allowed users to opt out of the non-essential ones. We went live with this on 10th May 2012. Around 30% of our users chose to opt out of non-essential cookies – meaning they became invisible to Google Analytics and our other tracking software. Here’s our web traffic graph for April – June 2012 – see how the peaks after May 10th are suddenly a lot lower?

[Image: web traffic graph, April – June 2012]

On 25th May 2012, ONE DAY before the new regulations became law, the ICO issued some new guidance, which significantly relaxed the requirements around ‘consent’. “Implied consent” was suddenly OK – i.e. if your users hadn’t disabled cookies in their browser, you could interpret that as meaning they had consented to receive cookies from your site.

They also announced that any enforcement would be in response to user complaints about a specific site:

“The end of the safe period "doesn't mean the ICO is going to launch a torrent of enforcement action" said the deputy commissioner and it would take serious breaches of data protection that caused "significant distress" to attract the maximum £0.5m non-compliance fine.” (via The Register)

So there you have it. Go to http://www.spotlight.com/ and, just once, you’ll see a nice friendly banner asking if you mind us tracking your session using cookies. And if you opt out, that’s absolutely fine – our site still works and you won’t show up in any of our analytics. Couple of weeks of effort, a nice, clean, technically sound implementation… did it make the slightest bit of difference? Nah. Except now we multiply all our Analytics numbers by 1.5. And yes, we periodically review the latest guidance to see whether the EU has finally admitted the whole thing was a bit silly and maybe isn’t actually helping, but so far nada – and in the absence of any hard evidence to the contrary, it’s hard to make a business case for doing work that would make us technically non-compliant, even if the odds of any enforcement action are minimal.

Now, if the European Parliament really wanted to make the internet a better place, how about they read Troy’s post and ban popover adverts, unnecessary pagination, linkbait headlines and restrictions on passwords? Now that’s the kind of legislation I could really get behind.

Restival Part 6: Who Am I, Revisited

Note: code for this instalment is in https://github.com/dylanbeattie/Restival/tree/v0.0.6

In the last instalment, we looked at adding HTTP Basic authentication to a simple HTTP endpoint - GET /whoami - that returns information about the authenticated user.

Well... I didn't like it. Both the OpenRasta and the WebAPI implementations felt really over-engineered, so I kept digging and discovered a few things that made everything much, much cleaner.

Basic auth in OpenRasta - Take 2

There's an HTTP Basic authentication feature baked into OpenRasta 2.5.x, but all of the classes are marked as deprecated so in my first implementation I avoided using it. After talking with Seb, the creator of OpenRasta, I understand a lot more about the rationale behind deprecating these classes - they're earmarked for migration into a standalone module, not for outright deletion, and they'll definitely remain part of the OpenRasta 2.x codebase for the foreseeable future.

Armed with that knowledge, and the magical compiler directive #pragma warning disable 618 that stops Visual Studio complaining about you using deprecated code, I switched Restival back to running on the OpenRasta NuGet package instead of my forked build, and reimplemented the authentication feature - and yes, it's much, much nicer.

There's a RestivalAuthenticator which implements OpenRasta's IBasicAuthenticator interface - as with the other frameworks, this ends up being a really simple wrapper around the IDataStore:

#pragma warning disable 618 // see above - the 'deprecated' basic auth classes aren't going anywhere
using OpenRasta.Security;

public class RestivalAuthenticator : IBasicAuthenticator {
  private readonly IDataStore db;

  public RestivalAuthenticator(IDataStore db) {
    this.db = db;
  }

  // Look the username up in the data store and check the supplied password.
  public AuthenticationResult Authenticate(BasicAuthRequestHeader header) {
    var user = db.FindUserByUsername(header.Username);
    if (user != null && user.Password == header.Password) return (new AuthenticationResult.Success(user.Username, new string[] { }));
    return (new AuthenticationResult.Failed());
  }

  // The realm reported in the WWW-Authenticate challenge.
  public string Realm { get { return ("Restival.OpenRasta"); } }
}

and then there's the configuration code to initialise the authentication provider.

// Wire the generic HTTP auth machinery into the pipeline - without these two
// contributors, the custom authenticator below will never be called.
ResourceSpace.Uses.PipelineContributor<AuthenticationContributor>();
ResourceSpace.Uses.PipelineContributor<AuthenticationChallengerContributor>();
ResourceSpace.Uses.CustomDependency<IAuthenticationScheme, BasicAuthenticationScheme>(DependencyLifetime.Singleton);
ResourceSpace.Uses.CustomDependency<IBasicAuthenticator, RestivalAuthenticator>(DependencyLifetime.Transient);

This one stumped me for a while, until I realised that - unlike, say, Nancy, which just does everything by magic - OpenRasta needs you to explicitly register both the AuthenticationContributor and the AuthenticationChallengerContributor. These are the OpenRasta components that handle the HTTP header parsing and decoding and the WWW-Authenticate challenge response, but if you don't explicitly wire them into your pipeline, your custom auth classes will never get called.

Basic auth in WebAPI - Take 2

As part of the last instalment, I wired up the LightInject IoC container to my WebAPI implementation. I love LightInject, but something I had never previously realised is that LightInject can do property injection on your custom attributes. This is a game-changer, because previously I'd been following a pattern of using purely decorative attributes - i.e. with no behaviour - and then implementing a separate ActionFilter that would check for the presence of the corresponding attribute before running some custom code - and all this just so I could inject dependencies into my filter instances.

Well, with LightInject up and running, you don't need to do any of that - you can just put a public IService Service { get; set; } property onto your MyCustomAttribute class, and LightInject will resolve IService at runtime and inject an instance of MyAwesomeService : IService into your attribute code. Which means the half-a-dozen classes' worth of custom filters and responses from the last implementation can be ripped out in favour of a single RequireHttpBasicAuthorizationAttribute, which overrides WebAPI's built-in AuthorizeAttribute class to handle authorization header parsing and the WWW-Authenticate challenge response, and hooks the authentication up to our IDataStore interface.
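
Here's a rough sketch of the shape of that attribute. This isn't the verbatim Restival code - the parsing is simplified, and the DataStore property name is illustrative - but it shows the pattern: override AuthorizeAttribute, let LightInject populate the property, and do the credential check inline:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Web.Http;
using System.Web.Http.Controllers;

public class RequireHttpBasicAuthorizationAttribute : AuthorizeAttribute {
    // Populated by LightInject's property injection at runtime.
    public IDataStore DataStore { get; set; }

    protected override bool IsAuthorized(HttpActionContext actionContext) {
        var auth = actionContext.Request.Headers.Authorization;
        if (auth == null || auth.Scheme != "Basic" || String.IsNullOrEmpty(auth.Parameter)) return false;
        string[] credentials;
        try {
            credentials = Encoding.UTF8.GetString(Convert.FromBase64String(auth.Parameter)).Split(new[] { ':' }, 2);
        } catch (FormatException) {
            return false; // malformed base64 counts as invalid credentials
        }
        if (credentials.Length != 2) return false;
        var user = DataStore.FindUserByUsername(credentials[0]);
        return user != null && user.Password == credentials[1];
    }

    protected override void HandleUnauthorizedRequest(HttpActionContext actionContext) {
        // 401 plus a WWW-Authenticate challenge, so clients know Basic is supported.
        actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
        actionContext.Response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("Basic", "realm=\"Restival.WebApi\""));
    }
}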

I'm much happier now with all four implementations - and it raises an interesting question about what implementation time really measures. Based on the code provided here, I suspect a good developer could implement HTTP Basic auth on any of these frameworks in about fifteen minutes - but something that takes fifteen minutes to implement doesn't really count as a fifteen-minute job if it takes you two days to work out how to do that fifteen-minute implementation.

In forthcoming instalments, we're going to be adding hypermedia, HAL+JSON and resource expansion - as we move further away from basic HTTP capabilities and into more advanced REST/serialization/content negotiation, it'll be interesting to see how our four frameworks measure up.

Restival Part 5: Who Am I?

NOTE: Code for this article is at https://github.com/dylanbeattie/Restival/tree/v0.0.5

UPDATE: The WebAPI and OpenRasta implementations described here are... inelegant. After this release, I spent a little more time and came up with something much cleaner - which you can read all about in the follow-up post http://dylanbeattie.blogspot.co.uk/2015/12/restival-part-6-who-am-i-revisited.html

Welcome back. It's been a very busy summer that's turned into a very busy autumn - including a fantastic couple of weeks in Eastern Europe, speaking at BuildStuff in Lithuania and Ukraine, where I met a lot of very interesting people, discussed a lot of very interesting ideas, tried a lot of very interesting food, and generally got a nice hard kick out of my comfort zone. Which is good.

[Image: "If your name's not on the list, you're not coming in!"]

Anyway. As London starts getting frosty and festive, it's time to pick up where we left off earlier in the year with my ongoing Restival project - implementing the same API in four different .NET API frameworks (ServiceStack, NancyFX, OpenRasta and WebAPI).

In this instalment, we're going to add two very straightforward capabilities to our API:

  1. HTTP Basic authentication. Clients can include a username/password in an HTTP Authorization header, and the API will verify their credentials against a simple user store - there's a quick client-side sketch of this just after this list. (If you're doing this in production, you'd enforce HTTPS/TLS so that credentials can't be sniffed in transit, but since this is a demonstration project and I'm optimising for readability, TLS is out of scope for now.)
  2. A /whoami endpoint, where authenticated users can GET details of their own user record.
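
Just to make the mechanics concrete: HTTP Basic authentication is nothing more than a base64-encoded "username:password" pair in the Authorization header. Here's a minimal client-side sketch - the credentials and the localhost URL are made up for illustration:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class WhoAmIClient {
    static void Main() {
        // Basic auth is just "Basic " + base64("username:password")
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("alice:secret"));
        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // Sends "Authorization: Basic YWxpY2U6c2VjcmV0" with the request
        var response = client.GetAsync("http://localhost/whoami").Result;
        Console.WriteLine(response.StatusCode);
    }
}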

And that's it. Sounds simple, right? OK, first things first - let's thrash out our requirements into something a little more detailed:

  • GET /whoami with valid credentials should return the current user's details (id, name, username)

But remember - we're building an authentication system, so we should also be testing the negative responses:

  • GET /whoami with invalid credentials returns 401 Unauthorized
    • "invalid" means an unsupported scheme, an unsupported header format, invalid credential encoding, or a correctly formatted header containing an unrecognised username and/or password.
  • GET /whoami without any credentials returns a 401 Unauthorized
  • GET /whoami without any credentials returns a WWW-Authenticate header indicating that we support HTTP Basic authentication.

The WWW-Authenticate thing is partly a nod to HATEOAS, and partly there to make it easier to test things from a normal web browser. I tend to use Postman for building and testing any serious HTTP services, but when it comes to quick'n'dirty API discovery and troubleshooting, I can't overstate the convenience of being able to paste something into a normal web browser and get a readable response.

The test code is in WhoAmITestBase.cs. Notice that in this implementation, our users are stored in a fake data store - FakeDataStore.cs - and we're actually using this class as a TestCaseSource in our tests so we maintain parity between the test data and the test coverage.
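
If you haven't used that pattern before, here's roughly what it looks like - a sketch, since the member names (Users, ExecuteWhoAmIRequest) are illustrative rather than lifted from the actual test code:

using NUnit.Framework;

[TestFixture]
public abstract class WhoAmITestBase {
    // The fake data store doubles as the test case source, so adding a user
    // to FakeDataStore automatically adds a test case.
    [Test, TestCaseSource(typeof(FakeDataStore), "Users")]
    public void WhoAmI_WithValidCredentials_ReturnsUserDetails(User user) {
        var response = ExecuteWhoAmIRequest(user.Username, user.Password);
        Assert.That(response.Username, Is.EqualTo(user.Username));
    }

    // Each framework-specific test fixture supplies its own implementation.
    protected abstract WhoAmIResponse ExecuteWhoAmIRequest(string username, string password);
}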

The WhoAmI endpoint was trivial - find the current user name, look them up in the user store, and convert their user data to a WhoAmIResponse. The fun part here was the authentication.

A Note on Authentication vs Authorization

NOTE: Authentication and authorization are, strictly speaking, different things. Authentication is "who are you?", authorization is "are you allowed to do this thing?" Someone trying to enter the United States with a Libyan passport is authenticated - the US border service know exactly who they are - but they're not authorized because Libyan citizens can't enter the US without a valid visa.

In one respect, HTTP gets this exactly right - a 401 Unauthorized means "we don't know who you are", a 403 Forbidden means "we know who you are, but you're not allowed to do this." In another respect, it gets this badly wrong, because the Authentication header in the HTTP specification is called Authorization.

Authentication - The Best Case Scenario

OK, to build our secure /whoami endpoint, we need a handful of extension points in our framework. We need to:

  1. Validate a supplied username and password against our own credential store
  2. Indicate that specific resources require authorization, so that unauthorized requests for those resources will be rejected
  3. Determine the identity of the authenticated user when responding to an authorized request

The rest - WWW-Authenticate negotiation, decoding base64-encoded Authorization headers, returning 401 Unauthorized responses - is completely generic, so in an ideal world our framework will do all this for us; all we need to do is implement the three extension points above.

Let's look at Nancy first, because alphabetical is as good an order as any.

NancyFX

Implementing HTTP Basic auth in NancyFX proved fairly straightforward. The first head-scratching moment I hit was that - unlike the other frameworks in this project - Nancy is so lightweight that I didn't actually have any kind of bootstrapper or initialization code anywhere, so it wasn't immediately obvious where to configure the additional pipeline steps. A bit of Googling and poking around the NancyFX source led me to the Nancy.Demo.Authentication.Basic project, which made things a lot clearer. From this point, implementation involved:

  1. Add an AuthenticationBootstrapper - which Just Worked™ without any explicit registration. I'm guessing it's invoked by magic - sorry, sufficiently advanced technology - because it overrides DefaultNancyBootstrapper.
  2. Implement IUserValidator to connect Nancy to my custom user store - my implementation is just a really simple wrapper around my user data store. My UserValidator depends on my IDataStore interface - and, thanks to Nancy's auto-configuration, I didn't have to explicitly register either of these as dependencies.
  3. In this bootstrapper, call pipelines.EnableBasicAuthentication() and pass in a basic authentication configuration.

protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines) {
    base.ApplicationStartup(container, pipelines);
    // Wire HTTP Basic authentication into the Nancy pipeline, using our
    // custom IUserValidator to check credentials against the data store.
    var authConfig = new BasicAuthenticationConfiguration(container.Resolve<IUserValidator>(), "Restival.Nancy");
    pipelines.EnableBasicAuthentication(authConfig);
}

Finally, the WhoAmIModule needs a call to this.RequiresAuthentication(), and that's it. Clean, straightforward, no real showstoppers.
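
Putting it together, here's roughly what that module looks like - a sketch, with the WhoAmIResponse mapping simplified:

using Nancy;
using Nancy.Security;

public class WhoAmIModule : NancyModule {
    public WhoAmIModule(IDataStore db) {
        // Unauthenticated requests to this module are rejected with a 401.
        this.RequiresAuthentication();
        Get["/whoami"] = _ => {
            var user = db.FindUserByUsername(this.Context.CurrentUser.UserName);
            return new WhoAmIResponse { Id = user.Id, Name = user.Name, Username = user.Username };
        };
    }
}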

UPDATE: The NancyFX documentation has now been updated with a detailed walkthrough on enabling HTTP Basic authentication.

OpenRasta

Adding HTTP Basic auth to OpenRasta was a lot more involved - and the reasons why provide some interesting insight into the underlying development practices of the frameworks we're comparing. In 2013, there were some major changes to the request processing pipeline used by OpenRasta. As part of these changes, the basic authentication features of OpenRasta (dating back to 2008) were marked as deprecated - and until now, nobody's contributed an up-to-date implementation, I'm guessing because none of the people using OpenRasta have needed one. Which left me with a choice - do I use the deprecated approach, contribute my own implementation, or disqualify OpenRasta? Well, disqualification would be no fun, and deprecated code means you get ReSharper yelling at you all over the place, so I ended up contributing a pipeline-based HTTP Basic authorization module to the OpenRasta codebase.

Whilst writing this article, I had a long and very enlightening chat with Sebastien Lambla, the creator of OpenRasta, about the current state of the project and specifically the authentication capabilities. It turns out the features marked as deprecated were intended to be migrated into a separate OpenRasta.Security module, thus decoupling authentication concerns from the core pipeline model - but this hasn't happened yet, and so the code is still in the OpenRasta core package. It's up to you, the implementer, whether to use the existing ('deprecated') HTTP authentication provider, or to roll your own based on the new pipeline approach.

The original pull request for this is #89, which is based on the Digest authentication example included in the OpenRasta core codebase - but shortly after it was accepted, I found a bug with the way both the Digest and my new Basic implementation handled access to non-secured resources. The fix for this is in #91, which at the time of writing is still being reviewed, so Restival's currently building against my fork of OpenRasta in order to get the authentication and WhoAmI tests passing properly.

Following those changes to the framework itself, the implementation was pretty easy.

  1. Provide an implementation of IAuthenticationProvider based on my IDataStore interface.
  2. Register the authentication provider and pipeline contributor in the Configuration class:

    // Register the fake data store, the credential-checking provider, and the
    // pipeline contributor that performs the HTTP Basic authorization step.
    ResourceSpace.Uses.CustomDependency<IDataStore, FakeDataStore>(DependencyLifetime.Singleton);
    ResourceSpace.Uses.CustomDependency<IAuthenticationProvider, AuthenticationProvider>(DependencyLifetime.Singleton);
    ResourceSpace.Uses.PipelineContributor<BasicAuthorizerContributor>();

  3. Decorate the WhoAmIHandler with [RequiresBasicAuthentication(realm)]
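
Here's a sketch of what that decorated handler looks like - the constructor signature and the WhoAmIResponse mapping are simplified for illustration:

using OpenRasta.Security;
using OpenRasta.Web;

[RequiresBasicAuthentication("Restival.OpenRasta")]
public class WhoAmIHandler {
    private readonly IDataStore db;
    private readonly ICommunicationContext context;

    public WhoAmIHandler(IDataStore db, ICommunicationContext context) {
        this.db = db;
        this.context = context;
    }

    public WhoAmIResponse Get() {
        // By the time we get here, the pipeline has already authenticated the
        // request, so context.User tells us who's asking.
        var user = db.FindUserByUsername(context.User.Identity.Name);
        return new WhoAmIResponse { Id = user.Id, Name = user.Name, Username = user.Username };
    }
}
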
Total research and Googling time was the best part of a day - including implementing the missing pieces of the framework. Implementation time once those were in place was around an hour, although I suspect that even if the necessary authorization components had already existed, I'd have needed a couple of hours of Googling to work out exactly how to plug into the pipeline.


ServiceStack

ServiceStack was really straightforward, mainly because the extension points for overriding built-in authentication behaviour are logical and clearly documented. Implementation involved creating my own AuthProvider that extends the built-in BasicAuthProvider, injecting an IDataStore into this auth provider, and then registering my provider with ServiceStack in AppHost.Configure().
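
The provider itself boils down to a few lines. This is a sketch rather than the verbatim project code - in particular, the using directives and the TryAuthenticate override point reflect my understanding of ServiceStack's BasicAuthProvider, so treat the details as an approximation:

using ServiceStack;
using ServiceStack.Auth;

public class RestivalAuthProvider : BasicAuthProvider {
    private readonly IDataStore db;

    public RestivalAuthProvider(IDataStore db) {
        this.db = db;
    }

    // BasicAuthProvider handles the header parsing and the 401 challenge;
    // all we supply is the credential check against our own data store.
    public override bool TryAuthenticate(IServiceBase authService, string userName, string password) {
        var user = db.FindUserByUsername(userName);
        return user != null && user.Password == password;
    }
}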

The only gotcha I encountered here was that the first implementation would respond with an HTTP redirect to a login page if the request wasn't authorized - but Googling "servicestack authentication redirecting to /login" brought up this StackOverflow post, which explained that (a) this only happens for HTML content-type requests (my fault for using a web browser to test my HTTP API, I guess!), and (b) you can disable it by specifying HtmlRedirect = null when you initialize the auth feature:

public class AppHost : AppHostBase {
  public AppHost() : base("Restival", typeof(HelloService).Assembly) { }
  public override void Configure(Container container) {
    var db = new FakeDataStore();
    container.Register<IDataStore>(c => db).ReusedWithin(ReuseScope.Container);
    var auth = new AuthFeature(() => new AuthUserSession(), new IAuthProvider[] { new RestivalAuthProvider(db) }) {
      // Return a 401 instead of redirecting HTML requests to a login page
      HtmlRedirect = null
    };
    Plugins.Add(auth);
  }
}

Total implementation time was about half an hour here - but bear in mind I've worked with ServiceStack a lot, so I'm familiar with the plugin architecture and the pipeline.

WebAPI

It took a surprisingly long time to come up with an HTTP Basic auth implementation on WebAPI. The first stumbling block here was the sheer number of posts, articles and examples demonstrating how to do it. There are a lot of excellent resources online about adding HTTP Basic authentication support to WebAPI, most of which are subtle variations on a common theme.

Now, the optimal number of references in a scenario like this is one: a single article, ideally written by the framework authors, saying "this is how you implement this very simple thing" and identifying the available integration points. Any framework where 3+ people have written in-depth articles on how to add something as simple as HTTP Basic authentication is insufficiently opinionated.

Then there's the realization that WebAPI follows the ASP.NET MVC convention of decorating controllers and actions with attributes to support features like authentication - and injecting dependencies into attributes is fiddly, because of the way attributes are instantiated by the runtime. So... fast-forward several hours of Googling and forking and compiling things and generally poking around, and I decided that Robert Muehsig's approach is the closest thing to what I'm after. And since it's on GitHub under an MIT license, I borrowed it. Most of the classes in Restival.Api.WebApi.Security are lifted directly from Robert's sample code.

This produced a working implementation that passed all the tests, but I must admit I'm still not happy with it. It's notable that this implementation explicitly decouples the attribute itself from the filter implementation - this is so I can inject dependencies into the filter at runtime whilst still using the attribute declaratively - but it's rather inelegant, and it's definitely one I plan to revisit.

Total implementation time for this was at least a day, including research, reading examples, prototyping and digging into the WebAPI source to work out which bits to override. That's significantly more than I thought it would be, and I'm rather expecting someone to comment on this post "you're doing it wrong!" and link me to ANOTHER blog post or web page which demonstrates a different approach that "only takes five minutes". We shall see. :)

Conclusions

Here's how the four authentication implementations stack up. The implementation time here isn't how long it actually took - it's how long I think it would take to do it again, now that I'm familiar with the various frameworks' authentication/plugin patterns.

Framework    | Lines of code                  | New classes                                                | Research time | Implementation time
WebAPI       | ~250                           | 6 (here)                                                   | 1 day         | 2-3 hours
ServiceStack | ~20                            | 1 (RestivalAuthProvider)                                   | 15 minutes    | 15 mins
NancyFX      | ~25                            | 2 (UserIdentity, UserValidator)                            | 30 minutes    | 15 mins
OpenRasta    | ~30, plus changes to framework | 1 (AuthenticationProvider), plus two new framework classes | 1 day         | 1-2 hours

HTTP Basic auth proved an interesting example, because whilst it's an obvious item on any API feature 'checklist', any reasonably mature HTTP API will almost certainly have moved beyond basic authentication to something more sophisticated, such as per-client API keys or OAuth2 tokens.

When it comes to ease of implementation, I'd say it's a dead heat between NancyFX and ServiceStack. I have more experience working with ServiceStack than I do with Nancy, and I suspect my familiarity with their plugin model made things a little easier, but I really don't think there's much in it - the integration points are sensible, the documentation is comprehensive (and accurate!), and I'm pretty confident the implementations presented here reflect the idioms of the corresponding platforms.

OpenRasta and WebAPI were much less straightforward, but for very different reasons. In a nutshell, I think WebAPI is insufficiently opinionated - rather than encouraging or enforcing a particular approach, it supports a wealth of different integration patterns and extension points, and it's really not clear why any one is better than another. OpenRasta, on the other hand, has a very strong authentication pattern that makes a lot of sense once you get your head around it - but a lack of examples and up-to-date documentation means it's quite hard to work out what that pattern is without digging into the framework source code and getting your hands dirty.

Tune in next time, when we're going to start adding hypermedia to our /whoami endpoint response.

Thanks to Jonathan Channon, Steven Robbins and Sebastien Lambla for their help and feedback on this instalment.

Tuesday, 5 January 2016

In London next Saturday? Come to PubConf!

Next Saturday, January 16th, some of the best speakers in the IT industry – and me – will be appearing at PubConf, a single-track, rapid-fire conference: a series of five-minute Ignite-style talks, curated around three themes that we hope are relevant and familiar to anybody who does “computering” – managing yourself, the future of computering, and ranting about broken things.

I’ll be talking about “The Face Of Things To Come” on the computering track – all about recent advances in computer vision, face detection and recognition, emotion analysis, and how I think this kind of technology is going to change the world. For real, this time.

It promises to be a really great event – and it’s completely, absolutely free. Yep, free-as-in-beer free. Read more about it at pubconf.io, and get your tickets at EventBrite.