Tuesday, 8 December 2015

Restival Part 6: Who Am I, Revisited

Note: code for this instalment is in https://github.com/dylanbeattie/Restival/tree/v0.0.6

In the last instalment, we looked at adding HTTP Basic authentication to a simple HTTP endpoint - GET /whoami - that returns information about the authenticated user.

Well... I didn't like it. Both the OpenRasta and the WebAPI implementations felt really over-engineered, so I kept digging and discovered a few things that made everything much, much cleaner.

Basic auth in OpenRasta - Take 2

There's an HTTP Basic authentication feature baked into OpenRasta 2.5.x, but all of the classes are marked as deprecated, so in my first implementation I avoided using it. After talking with Seb, the creator of OpenRasta, I understand a lot more about the rationale behind deprecating these classes - they're earmarked for migration into a standalone module, not for outright deletion, and they'll definitely remain part of the OpenRasta 2.x codebase for the foreseeable future.

Armed with that knowledge, and the magical compiler directive #pragma warning disable 618 that stops Visual Studio complaining about you using deprecated code, I switched Restival back to running on the OpenRasta NuGet package instead of my forked build, and reimplemented the authentication feature - and yes, it's much, much nicer.

There's a RestivalAuthenticator which implements OpenRasta's IBasicAuthenticator interface - as with the other frameworks, this ends up being a really simple wrapper around the IDataStore:

public class RestivalAuthenticator : IBasicAuthenticator {
  private readonly IDataStore db;

  public RestivalAuthenticator(IDataStore db) {
    this.db = db;
  }

  public AuthenticationResult Authenticate(BasicAuthRequestHeader header) {
    var user = db.FindUserByUsername(header.Username);
    if (user != null && user.Password == header.Password) return (new AuthenticationResult.Success(user.Username, new string[] { }));
    return (new AuthenticationResult.Failed());
  }

  public string Realm { get { return ("Restival.OpenRasta"); } }
}

and then there's the configuration code to initialise the authentication provider.

ResourceSpace.Uses.PipelineContributor<AuthenticationContributor>();
ResourceSpace.Uses.PipelineContributor<AuthenticationChallengerContributor>();
ResourceSpace.Uses.CustomDependency<IAuthenticationScheme, BasicAuthenticationScheme>(DependencyLifetime.Singleton);
ResourceSpace.Uses.CustomDependency<IBasicAuthenticator, RestivalAuthenticator>(DependencyLifetime.Transient);

This one stumped me for a while, until I realised that - unlike, say, Nancy, which just does everything by magic - with OpenRasta you need to explicitly register both the AuthenticationContributor and the AuthenticationChallengerContributor. These are the OpenRasta components that handle the HTTP header parsing and decoding and the WWW-Authenticate challenge response; if you don't explicitly wire them into your pipeline, your custom auth classes will never get called.

Basic auth in WebAPI - Take 2

As part of the last instalment, I wired up the LightInject IoC container to my WebAPI implementation. I love LightInject, but something I hadn't previously realised is that LightInject can do property injection on your custom attributes. This is a game-changer, because previously I'd been following a pattern of using purely decorative attributes - i.e. with no behaviour - and then implementing a separate ActionFilter that would check for the presence of the corresponding attribute before running some custom code, all just so I could inject dependencies into my filter instances.

Well, with LightInject up and running, you don't need to do any of that - you can just put a public IService Service { get; set; } property onto your MyCustomAttribute class, and LightInject will resolve IService at runtime and inject an instance of MyAwesomeService : IService into your attribute code. Which means the half-a-dozen classes' worth of custom filters and responses from the last implementation can be ripped out in favour of a single RequireHttpBasicAuthorizationAttribute, which subclasses WebAPI's built-in AuthorizeAttribute to handle the Authorization header parsing and the WWW-Authenticate challenge response, and hooks the authentication up to our IDataStore interface.
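Here's a minimal sketch of what that attribute might look like - IsAuthorized and HandleUnauthorizedRequest are WebAPI's standard AuthorizeAttribute extension points, but the body here is illustrative rather than lifted from the actual Restival source:

public class RequireHttpBasicAuthorizationAttribute : AuthorizeAttribute {
  // Populated by LightInject at runtime via property injection
  public IDataStore DataStore { get; set; }

  protected override bool IsAuthorized(HttpActionContext actionContext) {
    var auth = actionContext.Request.Headers.Authorization;
    if (auth == null || auth.Scheme != "Basic" || String.IsNullOrEmpty(auth.Parameter)) return false;
    try {
      // Basic credentials are "username:password", base64-encoded
      var pair = Encoding.UTF8.GetString(Convert.FromBase64String(auth.Parameter)).Split(new[] { ':' }, 2);
      if (pair.Length != 2) return false;
      var user = DataStore.FindUserByUsername(pair[0]);
      return (user != null && user.Password == pair[1]);
    } catch (FormatException) {
      return false; // invalid base64 counts as invalid credentials
    }
  }

  protected override void HandleUnauthorizedRequest(HttpActionContext actionContext) {
    base.HandleUnauthorizedRequest(actionContext);
    actionContext.Response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("Basic", "realm=\"Restival.WebApi\""));
  }
}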

I'm much happier now with all four implementations - which raises the interesting question of what these development-time comparisons are really worth. Based on the code provided here, I suspect a good developer could implement HTTP Basic auth on any of these frameworks in about fifteen minutes - but a fifteen-minute implementation doesn't really count as fifteen minutes' work if it takes you two days to figure out how to do it.

In forthcoming instalments, we're going to be adding hypermedia, HAL+JSON and resource expansion - as we move further away from basic HTTP capabilities and into more advanced REST/serialization/content negotiation, it'll be interesting to see how our four frameworks measure up.

Monday, 7 December 2015

Restival Part 5: Who Am I?

NOTE: Code for this article is at https://github.com/dylanbeattie/Restival/tree/v0.0.5

UPDATE: The WebAPI and OpenRasta implementations described here are... inelegant. After this release, I spent a little more time and came up with something much cleaner - which you can read all about in the follow-up post http://dylanbeattie.blogspot.co.uk/2015/12/restival-part-6-who-am-i-revisited.html

Welcome back. It's been a very busy summer that's turned into a very busy autumn - including a fantastic couple of weeks in Eastern Europe, speaking at BuildStuff in Lithuania and Ukraine, where I met a lot of very interesting people, discussed a lot of very interesting ideas, tried a lot of very interesting food, and generally got a nice hard kick out of my comfort zone. Which is good.

[Image: "If your name's not on the list, you're not coming in!"]

Anyway. As London starts getting frosty and festive, it's time to pick up where we left off earlier in the year with my ongoing Restival project - implementing the same API in four different .NET API frameworks (ServiceStack, NancyFX, OpenRasta and WebAPI).

In this instalment, we're going to add two very straightforward capabilities to our API:

  1. HTTP Basic authentication. Clients can include a username/password in an HTTP Authorization header, and the API will verify their credentials against a simple user store. (If you're doing this in production, you'd enforce HTTPS/TLS so that credentials can't be sniffed in transit, but since this is a demonstration project and I'm optimising for readability, TLS is out of scope for now.)
  2. A /whoami endpoint, where authenticated users can GET details of their own user record.

And that's it. Sounds simple, right? OK, first things first - let's thrash out our requirements into something a little more detailed:

  • GET /whoami with valid credentials should return the current user's details (id, name, username)

But remember - we're building an authentication system, so we should also be testing the negative responses:

  • GET /whoami with invalid credentials returns 401 Unauthorized
    • "invalid" means an unsupported scheme, an unsupported header format, invalid credential encoding, or a correctly formatted header containing an unrecognised username and/or password.
  • GET /whoami without any credentials returns a 401 Unauthorized
  • GET /whoami without any credentials returns a WWW-Authenticate header indicating that we support HTTP Basic authentication.

The WWW-Authenticate thing is partly a nod to HATEOAS, and partly there to make it easier to test things from a normal web browser. I tend to use Postman for building and testing any serious HTTP services, but when it comes to quick'n'dirty API discovery and troubleshooting, I can't overstate the convenience of being able to paste something into a normal web browser and get a readable response.
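In other words, an unauthenticated exchange should look something like this (the realm name here is illustrative):

GET /whoami HTTP/1.1
Host: restival.local

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Restival"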

The test code is in WhoAmITestBase.cs. Notice that in this implementation, our users are stored in a fake data store - FakeDataStore.cs - and we're actually using this class as a TestCaseSource in our tests so we maintain parity between the test data and the test coverage.
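For illustration, the data-driven pattern looks roughly like this - a sketch rather than the actual test code, assuming FakeDataStore exposes its users through a static Users member that NUnit can enumerate:

public abstract class WhoAmITestBase {
  // Each framework's test fixture supplies its own way of calling the API
  protected abstract WhoAmIResponse GetWhoAmI(string username, string password);

  [Test, TestCaseSource(typeof(FakeDataStore), "Users")]
  public void GetWhoAmI_WithValidCredentials_ReturnsUserDetails(User user) {
    var response = GetWhoAmI(user.Username, user.Password);
    Assert.That(response.Username, Is.EqualTo(user.Username));
  }
}

Because the test cases come from the data store itself, adding a user to FakeDataStore automatically adds it to the test run.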

The WhoAmI endpoint was trivial - find the current user name, look them up in the user store, and convert their user data to a WhoAmIResponse. The fun part here was the authentication.

A Note on Authentication vs Authorization

NOTE: Authentication and authorization are, strictly speaking, different things. Authentication is "who are you?", authorization is "are you allowed to do this thing?" Someone trying to enter the United States with a Libyan passport is authenticated - the US border service knows exactly who they are - but they're not authorized, because Libyan citizens can't enter the US without a valid visa.

In one respect, HTTP gets this exactly right - a 401 Unauthorized means "we don't know who you are", a 403 Forbidden means "we know who you are, but you're not allowed to do this." In another respect, it gets this badly wrong, because the Authentication header in the HTTP specification is called Authorization.

Authentication - The Best Case Scenario

OK, to build our secure /whoami endpoint, we need a handful of extension points in our framework. We need to:

  1. Validate a supplied username and password against our own credential store
  2. Indicate that specific resources require authorization, so that unauthorized requests for those resources will be rejected
  3. Determine the identity of the authenticated user when responding to an authorized request

The rest - WWW-Authenticate negotiation, decoding base64-encoded Authorization headers, returning 401 Unauthorized responses - is completely generic, so in an ideal world our framework will do all this for us; all we need to do is implement the three extension points above.

Let's look at Nancy first, because alphabetical is as good an order as any.

NancyFX

Implementing HTTP Basic auth in NancyFX proved fairly straightforward. The first head-scratching moment I hit was that - unlike the other frameworks in this project - Nancy is so lightweight that I didn't actually have any kind of bootstrapper or initialization code anywhere, so it wasn't immediately obvious where to configure the additional pipeline steps. A bit of Googling and poking around the NancyFX source led me to the Nancy.Demo.Authentication.Basic project, which made things a lot clearer. From this point, implementation involved:

  1. Add an AuthenticationBootstrapper - which Just Worked™ without any explicit registration. I'm guessing it's invoked by magic (well, sufficiently advanced technology), because it overrides DefaultNancyBootstrapper.
  2. Implement IUserValidator to connect Nancy to my custom user store - my implementation is just a really simple wrapper around my user data store. My UserValidator depends on my IDataStore interface - and, thanks to Nancy's auto-configuration, I didn't have to explicitly register either of these as dependencies.
  3. In that bootstrapper, call pipelines.EnableBasicAuthentication() and pass in a basic authentication configuration:

protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines) {
    base.ApplicationStartup(container, pipelines);
    var authConfig = new BasicAuthenticationConfiguration(container.Resolve<IUserValidator>(), "Restival.Nancy");
    pipelines.EnableBasicAuthentication(authConfig);
}

Finally, the WhoAmIModule needs a call to this.RequiresAuthentication(), and that's it. Clean, straightforward, no real showstoppers.
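For context, the finished module ends up looking something like this - a sketch, with the WhoAmIResponse property names assumed from the requirements above:

public class WhoAmIModule : NancyModule {
  public WhoAmIModule(IDataStore db) {
    this.RequiresAuthentication();
    Get["/whoami"] = _ => {
      // Context.CurrentUser is populated by Nancy's basic authentication hook
      var user = db.FindUserByUsername(Context.CurrentUser.UserName);
      return new WhoAmIResponse { Id = user.Id, Name = user.Name, Username = user.Username };
    };
  }
}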

UPDATE: The NancyFX documentation has now been updated with a detailed walkthrough on enabling HTTP Basic authentication.

OpenRasta

Adding HTTP Basic auth to OpenRasta was a lot more involved - and the reasons why provide some interesting insight into the underlying development practices of the frameworks we're comparing. In 2013, there were some major changes to the request processing pipeline used by OpenRasta. As part of these changes, the basic authentication features of OpenRasta (dating back to 2008) were marked as deprecated - and until now, nobody's contributed an up-to-date implementation, I'm guessing because none of the people using OpenRasta have needed one. Which left me with a choice - do I use the deprecated approach, contribute my own implementation, or disqualify OpenRasta? Well, disqualification would be no fun, and deprecated code means you get ReSharper yelling at you all over the place, so I ended up contributing a pipeline-based HTTP Basic authorization module to the OpenRasta codebase.

Whilst writing this article, I had a long and very enlightening chat with Sebastien Lambla, the creator of OpenRasta, about the current state of the project and specifically the authentication capabilities. It turns out the features marked as deprecated were intended to be migrated into a separate OpenRasta.Security module, thus decoupling authentication concerns from the core pipeline model - but this hasn't happened yet, and so the code is still in the OpenRasta core package. It's up to you, the implementer, whether to use the existing ('deprecated') HTTP authentication provider, or to roll your own based on the new pipeline approach.

The original pull request for this is #89, which is based on the Digest authentication example included in the OpenRasta core codebase - but shortly after it was accepted, I found a bug with the way both the Digest and my new Basic implementation handled access to non-secured resources. The fix for this is in #91, which at the time of writing is still being reviewed, so Restival's currently building against my fork of OpenRasta in order to get the authentication and WhoAmI tests passing properly.

Following those changes to the framework itself, the implementation was pretty easy.

  1. Provide an implementation of IAuthenticationProvider based on my IDataStore interface.
  2. Register the authentication provider and pipeline contributor in the Configuration class:

    ResourceSpace.Uses.CustomDependency<IDataStore, FakeDataStore>(DependencyLifetime.Singleton);
    ResourceSpace.Uses.CustomDependency<IAuthenticationProvider, AuthenticationProvider>(DependencyLifetime.Singleton);
    ResourceSpace.Uses.PipelineContributor<BasicAuthorizerContributor>();

  3. Decorate the WhoAmIHandler with [RequiresBasicAuthentication(realm)]
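
For step 1, the provider is another thin wrapper around the data store. A sketch - assuming the GetByUsername/ValidatePassword shape that the Digest example and the Credentials class in the OpenRasta source use:

public class AuthenticationProvider : IAuthenticationProvider {
  private readonly IDataStore db;

  public AuthenticationProvider(IDataStore db) {
    this.db = db;
  }

  public Credentials GetByUsername(string username) {
    var user = db.FindUserByUsername(username);
    if (user == null) return null;
    return new Credentials { Username = user.Username, Password = user.Password, Roles = new string[] { } };
  }

  public bool ValidatePassword(Credentials credentials, string suppliedPassword) {
    return (credentials != null && credentials.Password == suppliedPassword);
  }
}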
Total research and Google time took the best part of a day - including implementing the missing pieces of the framework. Implementation time once those pieces were in place was around an hour, although I suspect that even if the necessary authorization components had already existed, I'd have needed a couple of hours of Google time to work out exactly how to plug into the pipeline.

ServiceStack

ServiceStack was really straightforward, mainly because the extension points for overriding built-in authentication behaviour are logical and clearly documented. Implementation involved creating my own RestivalAuthProvider that extends the built-in BasicAuthProvider, injecting an IDataStore into that provider, and then registering it with ServiceStack in AppHost.Configure().
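
The provider itself is only a few lines. A sketch, assuming BasicAuthProvider's TryAuthenticate override point:

public class RestivalAuthProvider : BasicAuthProvider {
  private readonly IDataStore db;

  public RestivalAuthProvider(IDataStore db) {
    this.db = db;
  }

  public override bool TryAuthenticate(IServiceBase authService, string userName, string password) {
    // Same credential check as the other frameworks - just look the user up
    var user = db.FindUserByUsername(userName);
    return (user != null && user.Password == password);
  }
}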

The only gotcha I encountered here was that the first implementation would respond with an HTTP redirect to a login page if the request wasn't authorized - but Googling "servicestack authentication redirecting to /login" brought up this StackOverflow post, which explained (a) that this only happens for HTML content-type requests (my fault for using a web browser to test my HTTP API, I guess!), and (b) that you can disable it by specifying HtmlRedirect = null when you initialize the auth feature:

public class AppHost : AppHostBase {
  public AppHost() : base("Restival", typeof(HelloService).Assembly) { }
  public override void Configure(Container container) {
    var db = new FakeDataStore();
    container.Register<IDataStore>(c => db).ReusedWithin(ReuseScope.Container);
    var auth = new AuthFeature(() => new AuthUserSession(), new IAuthProvider[] { new RestivalAuthProvider(db) }) {
      HtmlRedirect = null
    };
    Plugins.Add(auth);
  }
}

Total implementation time was about half an hour here - but bear in mind I've worked with ServiceStack a lot, so I'm familiar with the plugin architecture and the pipeline.

WebAPI

It took a surprisingly long time to come up with an HTTP Basic auth implementation on WebAPI. The first stumbling block was the sheer number of posts, articles and examples demonstrating how to do it - there are lots of excellent resources online about adding HTTP Basic authentication support to WebAPI, most of which are subtle variations on a common theme.

Now, the optimal number of references in a scenario like this is one: a single article, ideally written by the framework authors, saying "this is how you implement this very simple thing" and identifying the available integration points. Any framework where 3+ people have written in-depth articles on how to add something as simple as HTTP Basic authentication is insufficiently opinionated.

Then there's the realization that WebAPI follows the ASP.NET MVC convention of decorating controllers and actions with attributes to support features like authentication - and injecting dependencies into attributes is fiddly, because of the way attributes are instantiated by the runtime. So... fast-forward several hours of Googling, forking, compiling things and generally poking around, and I decided that Robert Muehsig's approach is the closest thing to what I'm after. Since it's on GitHub under an MIT license, I borrowed it - most of the classes in Restival.Api.WebApi.Security are lifted directly from Robert's sample code.

This produced a working implementation that passed all the tests. Notably, it explicitly decouples the attribute itself from the filter implementation, so I can inject dependencies into the filter at runtime whilst still using the attribute declaratively - but it's rather inelegant, I can't say I'm terribly happy with it, and it's definitely one I plan to revisit.

Total implementation time for this was at least a day, including research, reading examples, prototyping and digging into the WebAPI source to work out which bits to override. That's significantly more than I thought it would be, and I'm rather expecting someone to comment on this post "you're doing it wrong!" and link me to ANOTHER blog post or web page which demonstrates a different approach that "only takes five minutes". We shall see. :)

Conclusions

Here's how the four authentication implementations stack up. The implementation time here isn't how long it actually took; it's how long I think it would take to do it again, now that I'm familiar with the various frameworks' authentication/plugin patterns.

Framework      Lines of code                  New Classes                                                  Research time   Implementation Time
WebAPI         ~250                           6 (here)                                                     1 day           2-3 hours
ServiceStack   ~20                            1 (RestivalAuthProvider)                                     15 minutes      15 mins
NancyFX        ~25                            2 (UserIdentity, UserValidator)                              30 minutes      15 mins
OpenRasta      ~30 (plus framework changes)   1 (AuthenticationProvider), plus two new framework classes   1 day           1-2 hours

HTTP Basic auth proved an interesting example, because whilst it's an obvious item on any API feature 'checklist', any reasonably mature HTTP API will almost certainly have moved beyond basic authentication to something more sophisticated, such as per-client API keys or OAuth2 tokens.

When it comes to ease of implementation, I'd say it's a dead heat between NancyFX and ServiceStack. I have more experience working with ServiceStack than I do with Nancy, and I suspect my familiarity with their plugin model made things a little easier, but I really don't think there's much in it - the integration points are sensible, the documentation is comprehensive (and accurate!), and I'm pretty confident the implementations presented here reflect the idioms of the corresponding platforms.

OpenRasta and WebAPI were much less straightforward, but for very different reasons. In a nutshell, I think WebAPI is insufficiently opinionated - rather than encouraging or enforcing a particular approach, it supports a wealth of different integration patterns and extension points, and it's really not clear why any one is better than another. OpenRasta, on the other hand, has a very strong authentication pattern that makes a lot of sense once you get your head around it - but a lack of examples and up-to-date documentation means it's quite hard to work out what that pattern is without digging into the framework source code and getting your hands dirty.

Tune in next time, when we're going to start adding hypermedia to our /whoami endpoint response.

Thanks to Jonathan Channon, Steven Robbins and Sebastien Lambla for their help and feedback on this instalment.

Tuesday, 29 September 2015

Membership and Dynamics CRM - How's It All Going, Then?

A few months back, I blogged about how Spotlight has chosen Microsoft Dynamics CRM 2015 as the platform for delivering our new membership system, and our high-level plan for incorporating Dynamics into our existing infrastructure. Six months on, I thought it'd be worth revisiting to talk about how the project is progressing and the lessons we've learned so far. Particularly since people on Twitter keep asking me if I'm regretting it yet. :)

Now, I wear many hats. I'm a systems architect. I sit on the strategic board at Spotlight. I've been here long enough that I understand most of our business processes in excruciating detail. I'm interested in UX and interaction design. And, sometimes, when nothing else is going on, I write code and build stuff.

With my business hat on, Dynamics CRM is clearly the right solution for us. It works. It supports a business-to-consumer sales model, out of the box, which works really well for us (it surprised me how many of the other CRM systems we looked at assume that every customer is a company and every sale starts with a quote.) Dynamics CRM offers features like case management, service calendars, Outlook integration, multiple price lists and currencies, mobile device support, Active Directory integration - things that would probably never make it to the top of the backlog if we were building our own system. And that's fine - we are not in the CRM business. We build software to help people cast actors and make movies; everything else is overhead.

Strategically, it makes sense. Our initial customization and integration phase will end up being less than six months, after which membership and CRM will be very much a "solved problem" as far as our development team is concerned, freeing us up to focus on other things. Using an off-the-shelf product as the backbone of our membership and business activity gives us lots more options for making improvements in future - certainly more than if we'd built our own system.

So I still believe Dynamics CRM was, and remains, the best solution to our business requirements.

But... (you knew that was coming, didn't you?) as a software developer, it has frustrations, which can be broadly categorized as good, bad and ugly.

The Good...

It's a hugely complex product, and it has a learning curve. There's frequently several ways to achieve something - multiple email integration patterns, several different ways to implement single sign-on for user authentication. Almost anything is possible once you know how. This is frustrating when you don't know how, but once you get the hang of it, it works pretty well.

You can also write custom code in C# that runs directly within Dynamics, which - again - has a hell of a learning curve but is actually a really powerful technique. With a little ingenuity, you can even build C# components that implement your business logic and rules, and then decide later where that logic should be deployed. I like that sort of flexibility.

...the Bad...

Dynamics CRM is the heart of a software ecosystem that's still a long, long way from the software-as-craft ethos which is becoming fairly common elsewhere. Partly, this is a question of tooling and process. Things like revision control, continuous deployment and unit testing become really quite hard when your "project" is a ZIP file full of proprietary XML - no branching, no merging, only the most rudimentary support for rollbacks. As a developer, this is frustrating - but it's important to remember that systems like Dynamics blur the boundary between "developers" and "users" that's prevalent in most development projects. It's a platform. The people who use it every day for the next decade WILL be making changes to it - because the whole reason we're doing it is to empower The BusinessTM to manage their own membership system - and those people aren't software developers.

More worryingly, Dynamics CRM appears to be one of those products where you can make a very good living as an "expert" without actually knowing anything. The amount of misinformation and bad advice surrounding this product is astonishing. I've read blog posts promoting the most dreadful solutions. I've spoken to consultants - and interviewed contractors on quite generous day-rates - who are completely unaware of whole swathes of the core product's capabilities. That makes it really difficult to know who you can trust - and how much time to invest in validating your assumptions before committing to any kind of delivery.

This isn't a problem with Microsoft Dynamics per se - I think it's probably characteristic of any market sector where we see the dreaded "point and click - no development required!" shibboleth - but it's still a lot less fun than working on, say, Nancy or ServiceStack, where the code is open source and most of the community seem to know what they're doing.

...and the Ugly

And then there's the stuff that's just plain stupid. For example, there's an entity in CRM called a Contract. Once a contract has been "invoiced" - which you can do by clicking a single button in the UI - it is immutable. There is literally NOTHING you can do - even with all the Administrator permissions in the world - to change any detail on that contract. Ever. There are various workarounds for this. Microsoft's recommendation is that you "clone" the Contract as a draft, cancel the original and replace it with the clone. Our own solution was to write a custom plugin in C# that throws an exception if you ever try to invoice a contract - this works, but it's not pleasant having to work around such an arbitrary restriction.
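For the curious, that plugin boils down to something like this. A sketch using the standard Microsoft.Xrm.Sdk plugin model - the registration details (which message and pipeline stage to attach it to) are assumed, and they're the part that takes the real work:

public class BlockContractInvoicingPlugin : IPlugin {
  public void Execute(IServiceProvider serviceProvider) {
    var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
    // Registered on the pre-operation stage of the message that invoices a
    // contract, so throwing here aborts the operation before the contract
    // becomes immutable.
    if (context.PrimaryEntityName == "contract")
      throw new InvalidPluginExecutionException("Invoicing contracts is disabled - invoiced contracts can never be modified.");
  }
}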

They say there's only two hard problems in software development - cache invalidation and naming things. Well, Dynamics CRM will remind you of that on a daily basis. Almost everything in CRM needs to have a name, otherwise you can't save the record. For people, companies and products, this makes sense... but, really, when you're creating an order to allow somebody to renew their membership, does it need a name? Really? So we're having to build our own code to automatically create these names all over the place, and it's a waste of time.
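"Our own code" here means more plugins - a sketch of the sort of thing, assuming it's registered on the pre-create stage of the relevant entity (the entity and naming scheme are illustrative):

public class AutoNamePlugin : IPlugin {
  public void Execute(IServiceProvider serviceProvider) {
    var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
    if (!context.InputParameters.Contains("Target")) return;
    var target = context.InputParameters["Target"] as Entity;
    if (target == null || target.Contains("name")) return;
    // Generate the mandatory name so the record can actually be saved
    target["name"] = string.Format("Membership renewal - {0:yyyy-MM-dd}", DateTime.UtcNow);
  }
}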

As for cache invalidation - there's a thing called the "portal toolkit", aka the Developer Extensions for Microsoft Dynamics CRM 2015. You can read all about it here. Pay particular attention to the sentence that says:

The Customer Portal and Partner Relationship Management (PRM) Portal solutions for Microsoft Dynamics CRM 2015 will be available from the Microsoft Dynamics Marketplace soon.

I don't know when "soon" is, but at the moment, if you start using the portal toolkit, you end up with a cache that you can't invalidate - the only supported invalidation mechanism involves deploying an (unreleased) solution to your CRM server, that calls an (undocumented) DLL that you're expected to deploy to your web server. Oh, and if you're running a server farm, you'll need to work out how to configure your load balancer to forward these cache invalidation calls to all your backend servers simultaneously.

So...

Like democracy, it's probably fair to say that Dynamics CRM is the worst option apart from all of the alternatives. It certainly isn't perfect, but it is pretty damn good, and I think the long-term advantages will significantly outweigh the short-term headaches we're having. And if they don't? Well, that's why we have loosely-coupled architecture, isn't it?

Tuesday, 11 August 2015

Dynamics CRM Online: Any SMTP Server You Like! (as long as it's Gmail or Yahoo)

UPDATE: I take it all back. Following a really useful call from Dynamics CRM support, it turns out there are three different mail integration scenarios, only there's no reference to the third one in the CRM Online configuration screens.

  1. Server-side sync using Exchange
  2. Server-side sync using POP3/SMTP - this is the one that's restricted to Yahoo/Gmail, presumably because it IS doing something very clever.
  3. Dynamics CRM Email Relay - this is a completely separate tool that you need to download and install, and use it to configure your email routing.

I'm going through the mail relay config as we speak - will let you know how it goes. Sorry, Microsoft. :)


We are doing a lot of work at the moment with Microsoft Dynamics CRM Online. It's generally very nice - as a business tool, it's excellent; the UI is great, it's a good fit for our requirements, and despite a couple of headaches caused by the restrictions of the CRM Online hosting environment, we've now got a pretty elegant two-way data sync system up and running using EasyNetQ and webhooks.

Now, one of our key requirements for CRM is being able to send email to our customers. Yeah, I know - audacious, right? CRM Online is our first foray into the cloudy world of Microsoft Office 365, and all the rest of our infrastructure (Exchange, Active Directory, etc.) is still hosted on-premise. For testing and staging systems, we use mailtrap.io - a really slick service that will accept incoming mail and store it so you can test things are relaying properly without actually sending any real emails. For production, we use Mandrill, which is a commercial mail relay service - high availability, reputation management, excellent delivery statistics. We send about a million emails a month through Mandrill, and it works beautifully.

So... this morning I log into CRM Online, go into Settings => Email Configuration => New POP3/SMTP Server Profile. Looks pretty straightforward, right? I enter some details, click "save" and get this:

[Screenshot: "unsupported email server" error message]

Weird. I don't want to set up server-side synchronization - I just want to send email. So I start poking around, Googling, that kind of thing, and find this article, which says:

You may have read in the documentation that GMail and Yahoo are listed as supported pop3/smtp providers for Microsoft Dynamics CRM Online and

"Although other POP3/SMTP systems may work with Microsoft Dynamics CRM, those systems were not been tested by Microsoft and are not supported."

Let’s be clear about “not supported“. In this context it means precisely “you will not be able to go past server profile screen as we will reject any pop3/smtp provider that is not GMail or Yahoo.”

And that is exactly what the email relay screen does. If you enter any value other than "smtp.gmail.com" or "smtp.mail.yahoo.com" in the Outgoing Server Location field, you get the "unsupported email server" message. I've even tried modifying the configuration using the SDK instead of the UI, but get the same response - "the email server is unsupported".

There's two possibilities here:

  1. Microsoft have worked closely with Yahoo and GMail to provide first-class business email support, with all sorts of clever features and proprietary extensions that aren't supported by any other SMTP mail servers. (UPDATE - yes, it appears this is more-or-less what they've done. See above.)
  2. Somebody has arbitrarily decided to support GMail and Yahoo and exclude all other SMTP servers. Including, by the way, Hotmail (owned by Microsoft), Outlook.com (owned by Microsoft), and our on-premise SMTP relay (powered by Microsoft Exchange).

If I were a betting man, I know which one I'd be putting money on. I'm really disappointed about this. CRM Online isn't a toy. It's a seriously powerful platform, with a hefty price tag attached... and just when everything is going really nicely, something like this comes along and our beautiful product vision is completely hamstrung by the fact that if we want to email our customers, we need to do it via GMail or Yahoo - and there's absolutely no rational justification for this.

Anyone else encountered this particular restriction? Anyone have any bright ideas as to how we can work around it?

Tuesday, 4 August 2015

As a Developer, I Want to Abolish User Stories, So That I Can Ship Great Products Faster.

Once upon a time, when you programmed by the seat of your pants and the handlebars of your moustache, lots of people wrote specifications. And there was this lovely idea that you could get a Business Analyst to write you a specification, which laid out in very precise detail exactly how The Software was going to function, and then you would give the specification to a development team and they would build it for you, and Everything Would Be Lovely. Except, of course, it wasn't. Only one team ever shipped a massively successful project using this sort of specification-driven waterfall approach, and trust me - they were smarter than you, and they had a much bigger budget. So some bright folks started wondering why software always went wrong, and suggested a whole load of things that would probably help, and came up with things like scrum, and unit testing, and continuous integration.

One of the great ideas that came out of this movement was the idea of user stories. See, specifications tended to be incredibly dry and formal, and often did a really bad job of communicating exactly why you were doing something. Joel Spolsky wrote a great article years ago on writing painless functional specifications, but user stories take this idea even further. And, like a lot of the good ideas that came out of agile, there's a descriptive element to it and a prescriptive element to it. The descriptive part says "write short, simple stories, not detailed specifications" - and the prescriptive part suggests some 'templates' to help you get the hang of this. There's two formats that became popular for working with user stories - given-when-then and as-a-I-want-so-that.

As-a-I-want-so-that is pretty good for describing, at a very high level, what you are trying to do:

As a marketing coordinator, I want to send email to everyone on our mailing list, so that I can tell them about our big summer sale.

And then you'll add a couple of acceptance criteria to the story:

Given I am not subscribed to the mailing list, when the marketing coordinator sends an email about the summer sale, then I should not receive the email.

Given I have received a newsletter email, when I click the unsubscribe link, then I should be removed from the mailing list.

This sort of clarity makes it easy to build the feature, easy to test the feature, and easy to understand exactly what you're doing and why. See? Simple.

Right. Imagine we have a spec from the Olden Days, and Volume 6, Section 8, Subsection 14, Paragraph 9 says:

The handset will feature a Maplin N27QQ component installed on the side of the unit. The N27QQ will be positioned exactly 45mm from the top of the unit. When activated, the N27QQ component will cause the internal speaker to emit a square wave signal at 16Hz, at a volume of not less than 90dBA as measured from a distance of 1 metre.

Now, let's take an old-school project manager and turn them into an agile product owner. Probably by sending them on a three-day course with a certificate and everything.  When they get back, they'll dig out the old spec, and they'll laugh at how dry and opaque it all is. And they'll sit down, all excited, and start writing stories that look like this:

As a handset user, I want a Maplin N27QQ component installed on the side of the unit, so that when the component is activated the device will emit a square wave signal at 16Hz at a volume of not less than 90dBA measured from a distance of 1m

And then they'll add acceptance criteria:

Given I am a handset user, when I activate the N27QQ component, then the frequency of the square wave signal will be 16Hz

Given I am a handset user, when I activate the N27QQ component and measure the signal volume at a distance of 1m, then the volume of the square wave will be not less than 90dBA

Given I am a handset user, when I examine the handset, then the distance of the N27QQ component shall be 45mm from the top of the unit

and everything is AWESOME because we're being AGILE! The story will sit there in the icebox for a couple of weeks, until it gets bumped up the list and included in a backlog refinement meeting. At which point this conversation will happen:

Dev: "Er... why are we doing this? I don't understand the justification for including this feature."
PM: "Sorry, what?"
Dev: "Well... do you know what a Maplin N27QQ component is?"
PM: "It's the component specified in the specification"
Dev: "Yes... it's also a big round red plastic button the size of a softball"
PM: "Oh. Well, it's in the specification now."
Dev: "Right. Explain to me what happens when you press it"
PM: "Oh, easy. The unit emits a square wave signal at 16Hz, at a volume of..."
Dev: "Yeah, yeah. Do you know what that sounds like?"
PM: "Er... it's probably some sort of ring tone"
Dev: "No, it's a fart noise as loud as a motorcycle"
PM: "Oh. Well, can we estimate the story, please?"

At which point the developer will start flicking through LinkedIn on their phone and wondering how long it is until they can go to the pub.

You know what the story should have said? It should have said something like:

As Bongo the Clown, I want my new phone handset to feature a massive red fart button, so that I can make children laugh at parties when I answer it.

First of all, unless you happen to be in the circus supply business, someone's going to say "hang on - why are we making phone handsets for clowns? Is that a thing now?"

Second, anybody who reads that story can immediately picture what they're building, who wants it, and why. It leads to creative discussion and suggestions. Someone might have a brilliant idea about replacing the fart noise with a trombone, or making the button light up when you press it. One of your team might remember how Dad used to get mad every time Tony Blair came on the radio, but the volume knob on Dad's stereo fell off when he tried to turn the radio off, and how hilarious it was watching him chase it across the kitchen while cursing and muttering under his breath, and maybe we should make the fart button actually fall off when you press it so Bongo has to chase it around the room? Clear stories let you cut through the waffle and get straight to the important bit - what is this? How do we build it? How might we make it better?

Now, compare these two sentences:

  1. As a handset user, I want a Maplin N27QQ component installed on the side of the unit, so that when the component is activated the device will emit a square wave signal at 16Hz at a volume of not less than 90dBA measured from a distance of 1m
  2. We'll put a giant red fart button on the side of the phone, so that Bongo the Clown can use it to make kids laugh when he's doing children's parties.

Which one makes more sense to you, as a reader? Which one is going to lead to better decisions, better estimation and less time wasted in meetings?

As-a-I-want-so-that is not some sort of witchcraft. It doesn't magically translate your dry, meaningless specification into readable English. Like writing unit tests, it can help to keep you honest when you're breaking bad habits, and it's one more tool to keep handy when you're doing this stuff for real, but it is not why we're here. It's the story that matters, not the syntax. And if you can't tell me a short, simple story about what you want and why, then maybe you don't actually understand the thing you're trying to build, and no amount of syntactic convention is going to fix that.

Wednesday, 29 July 2015

The Mysterious Case of the Missing Milliseconds

Strings are the lingua franca of many distributed systems, and Spotlight is no different. Earlier today, we hit a weird head-scratching bug in one of our services, and - surprise, surprise - it turns out it's all to do with strings. To work around limitations of an old line-of-business application, we have a database trigger (no, really) that captures changes made to a particular table, serializes them into an XML message, and pushes this into a SQL Service Broker queue; we then have a Windows service that pulls messages off the queue, parses the XML, breaks it into nicely manageable chunks and publishes them all to RabbitMQ using EasyNetQ. Simple. Except, once in a while, it blows up and starts complaining about FormatExceptions.

Now... within the database trigger, we're doing this:

SELECT @OccurredAtUtc = CONVERT(VARCHAR(128), GETUTCDATE(), 126)

which returns 2015-07-29T20:55:21.130 as you'd expect.

There's then a line of code in the Windows service that says:

var format = "yyyy-MM-ddTHH:mm:ss.fff";
// 'd' is the date string extracted from the XML message
DateTime.ParseExact(d, format, CultureInfo.InvariantCulture, DateTimeStyles.AdjustToUniversal | DateTimeStyles.AssumeUniversal);

Now, this is the code of somebody who knows that turning datetimes into strings and back again can get a bit tricky, and so has left absolutely nothing to chance - they've supplied an exact date format, they've specified a culture, they've even gone so far as to specify the DateTimeStyles. There's unit tests and integration tests, and everything looks absolutely lovely. And then it blows up. Very occasionally.

Except... SQL Server does something weird.

DECLARE @DateTime DATETIME
SELECT @DateTime = '2015-07-29 21:59:15:123'
SELECT CONVERT(VARCHAR(128), @DateTime, 126) -- returns 2015-07-29T21:59:15.123 (fine!)

SELECT @DateTime = '2015-07-29 21:59:15:000'
SELECT CONVERT(VARCHAR(128), @DateTime, 126) -- returns 2015-07-29T21:59:15

SELECT @DateTime = '2015-07-29 21:59:15:999'
SELECT CONVERT(VARCHAR(128), @DateTime, 126) -- returns 2015-07-29T21:59:16

SELECT @DateTime = '2015-07-29 21:59:15:001'
SELECT CONVERT(VARCHAR(128), @DateTime, 126) -- returns 2015-07-29T21:59:15

First, SQL Server's datetime type doesn't have true millisecond precision - values are rounded to increments of .000, .003 or .007 seconds, so the milliseconds part will often shift by a thousandth of a second or so. Second - if the milliseconds part is zero, it'll be omitted from the string representation. Which means our incredibly specific and detailed date parsing routine will choke, because suddenly it has a date that doesn't match the format we've specified, and DateTime.ParseExact will throw a FormatException. Unit tests don't pick it up, because why would you mock such completely bizarre (and undocumented) behaviour, when you don't even know it exists?

What this means is that any change whose timestamp rounds to a whole second will blow up - roughly 0.3% of all our transactions failing with a FormatException rather than getting synced to the rest of our systems. Which means fishing them out of the error queue and sorting them out manually - ah, the joy of distributed systems. This formatting weirdness happens on every version of SQL Server back as far as 2003, but there's no reference to it in the documentation until SQL Server 2012. It's been raised as a bug and closed as 'by design' because "the ISO 8601 spec leaves the conversion semantics for fractional seconds up to the implementation" - which I'm pretty sure didn't mean "go ahead and be internally inconsistent!" - but as with so many other issues like this, fixing the bug would change behaviour that's been in place for years and could break things. I've no idea how - or why - anyone would build a system that genuinely relies on this bizarre idiosyncrasy, but I'll bet good money somebody out there has done it.
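
For what it's worth, one defensive fix is to hand ParseExact both possible formats - the string-array overload is standard .NET, so a sketch of the workaround looks like this:

var formats = new[] { "yyyy-MM-ddTHH:mm:ss.fff", "yyyy-MM-ddTHH:mm:ss" };
// Accepts SQL Server's output whether or not the milliseconds are present
var parsed = DateTime.ParseExact(d, formats, CultureInfo.InvariantCulture,
  DateTimeStyles.AdjustToUniversal | DateTimeStyles.AssumeUniversal);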

The beautiful irony, of course, is that if we'd used DateTime.Parse instead of ParseExact, we'd never have had a problem. :)

Friday, 26 June 2015

REST Workshop at Progressive.NET 2015 next week

I'll be delivering a hands-on workshop at Progressive.NET 2015 at SkillsMatter here in London next week, where I'll be talking about advanced REST architectural patterns, and actually implementing some of those patterns in .NET using several of the frameworks available for building HTTP/REST APIs on ASP.NET.

I've tried quite hard to avoid any esoteric requirements, so attendees should only need:

  • A laptop running Visual Studio 2013, Powershell and IIS
  • A reasonable working knowledge of C# and HTTP
  • A test runner capable of running NUnit tests - personally I love NCrunch deeply, but ReSharper or plain old NUnit will do just fine.
  • Some familiarity with Git and GitHub - if you know how to fork a repo and clone it to your workstation, you should be fine.

The repo we'll be working from is https://github.com/dylanbeattie/Restival - there shouldn't be a great deal of setup required, but if you want to clone the repository, check it compiles, and set up your local IIS endpoints by running deploy.ps1 ahead of time, it'll save a little time on the day.

During the workshop, we'll be discussing advanced HTTP/REST API patterns - hypermedia, pagination, resource expansion, HTTP PATCH, OAuth2 - and showing off some tools that can help us design and monitor our HTTP APIs. Alongside the discussion, we'll be implementing some of the techniques covered using your preferred choice of WebAPI, ServiceStack, OpenRasta or NancyFX - or even all four, if you're feeling productive - and then discussing the relative pros and cons of these frameworks for each of the patterns we're implementing.

See you there!

Friday, 19 June 2015

Slides and code from NDC Oslo 2015

I’m here at the Oslo Spektrum in Norway at NDC 2015, where I’ve been talking about the machine code of the web, SASS, TypeScript, CoffeeScript, bundle transformations, web optimisation in ASP.NET, ReST, hypermedia, resource expansion, API versioning, OAuth2, Apiary, NGrok, RunScope – and that’s just the stuff I actually managed to cover in my two talks. It’s been a really great few days, and huge thanks to the organisers for going to such great lengths to make sure everything has gone so smoothly.

A couple of non-software highlights that I really liked:

  • The catering has been excellent, and having food available throughout the day is a great way to avoid the lunchtime rush. (And the free coffee at the Twilio stand is excellent!)
  • The overflow area – where you can tune into any of the 9 talks currently in progress via a wireless headset, or just sit and channel-surf – is a great idea. (But remember it’s there if you’re doing live demos with audience participation – I’m pretty sure the “winner” of my NGrok demo was one of the people in the overflow area!)
  • If you ever get the chance to see the Norwegian band LoveShack, do it. They played the conference after-party last night, and closed their set with a note-perfect 20-minute medley which went through (I think!) Jump, Celebrate, Girls! Girls! Girls!, Welcome to the Jungle, Paradise City, the theme from Baywatch, Livin’ on a Prayer, Radio Gaga and a half-dozen more before dropping back into Jump mid-guitar-solo without skipping a beat. They’re playing the John Dee bar in Oslo this evening, and I’m almost tempted to change my flight just to stick around and see them again…

Slides, Links and Code Samples

The slides and code samples for the talks I’ve given are up on GitHub: the repo is at https://github.com/dylanbeattie/ndc-oslo-2015 or if you want to download the slide decks directly, links are:

Front-End Fun with Sass and Coffee

The Rest of ReST

I also want to follow up on one specific question somebody asked after my ReST talk this morning, which can be paraphrased as “are you comfortable recommending that people use HAL, seeing as it’s basically a dead specification?” An excellent question, and one that probably deserves a slightly more detailed answer than the one I gave on the spot. To put this in context, the HAL specification was submitted to the IETF as draft-kelly-json-hal-06 in October 2013; that draft expired in 2014 and hasn’t been updated or ratified since, so I can see how you could argue that HAL is “dead”.

First – I’d disagree with that. Although the specification itself hasn’t changed in a while, the mailing list and community is still relatively active, and I’ve no doubt would still welcome engagement and contributions from anybody who wished to participate. Second – the spec still provides a perfectly valid approach. It’s a specification, not a tool or a framework, and in terms of delivering working software, if HAL helps you solve your problem then I say go for it. Third – and I should have made this more obvious in this morning’s talk – HAL is just one of several approaches for delivering hypermedia over JSON. I used HAL in my examples because I think it’s the most readable, but that doesn’t mean it’s the best choice for your application. (Remember, one of my requirements for a hypermedia language in this context was “looks good on Powerpoint slides”.) If you’re interested, I would recommend also looking at JSONAPI, JSON-LD, Collection+JSON and SIREN. There is a great post by Kevin Sookocheff which succinctly summarises the differences between four of them – it doesn’t cover JsonAPI – and concludes “there is no clear winner. It depends on the constraints in place on your API”.

Right. I’m going to watch Troy Hunt making hacking child’s play for an hour, and then head to the airport. Thank you, Oslo. It’s been a blast.


Friday, 12 June 2015

Restival Part 4: Deployment and Fun with Case Sensitivity

Before we dive into the next phase of API development, I wanted to make it a little easier to install and run the Restival app on your own machine, so I've added a Powershell Deploy.ps1 script to the project which will:

  • Create a new local IIS website called Restival, bound to http://restival.local/
  • Create applications for each of our API implementations
  • Configure the whole thing to run under the ASP.NET v4.0 application pool.

One interesting quirk I discovered whilst doing this is that OpenRasta appears to be case-sensitive when it comes to request URLs. I'd initially created the applications using mixed-case names - Api.Nancy, Api.OpenRasta, and so on.

The test project uses lowercase URLs - so http://restival.local/api.nancy/ - and for some strange reason, the OpenRasta implementation just doesn't work if the IIS application name differs in case from the URL in the unit test. I'll dig into this a little further but for now, I've just modified the deploy script to do a .ToLower() on the application name and everything's working. Code for this instalment is in https://github.com/dylanbeattie/Restival/tree/v0.0.4

Tuesday, 19 May 2015

Restival Part 2 Revisited: Attribute Routing in WebAPI

(Code for this instalment is version 0.0.3 on GitHub if you're following along.)

Mike Thomas commented on my last post, asking "any reason why you are not looking at attribute routing in WebAPI"? To which my answer is "yes - I didn't know it existed", which I'd argue is a pretty good reason why I hadn't looked at it! But Mike's absolutely right to bring it up - if we're comparing frameworks, it makes a lot of sense to really explore the full capabilities of those frameworks. So I've been reading up on attribute routing, and have to say it looks rather nice - and will, I suspect, help out with a lot of the more advanced stuff that's coming up in future instalments.

According to the documentation on www.asp.net:

Web API 2 supports a new type of routing, called attribute routing. As the name implies, attribute routing uses attributes to define routes. Attribute routing gives you more control over the URIs in your web API. For example, you can easily create URIs that describe hierarchies of resources.

The earlier style of routing, called convention-based routing, is still fully supported. In fact, you can combine both techniques in the same project.

Sounds good, right? So what do we need to do to make Restival's WebAPI implementation run on attribute routing instead of convention-based routing?

As it turns out, not much. Well, apart from a little light yak-shaving. Attribute routing is in the Microsoft.AspNet.WebApi.WebHost package - so let's install it:

PM> Install-Package Microsoft.AspNet.WebApi.WebHost
Attempting to resolve dependency 'Microsoft.AspNet.WebApi.Core (≥ 5.2.3 && < 5.3.0)'.
Attempting to resolve dependency 'Microsoft.AspNet.WebApi.Client (≥ 5.2.3)'.

...

Install failed. Rolling back...
Install-Package : Could not install package 'Microsoft.AspNet.WebApi.Client 5.2.3'. You are trying to install this package
into a project that targets '.NETFramework,Version=v4.0', but the package does not contain any assembly references or content
files that are compatible with that framework. For more information, contact the package author.At line:1 char:1
+ Install-Package Microsoft.AspNet.WebApi.WebHost
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Install-Package], InvalidOperationException
    + FullyQualifiedErrorId : NuGetCmdletUnhandledException,NuGet.PowerShell.Commands.InstallPackageCommand

OK, no problem - we're currently targeting .NET 4.0 and it looks like WebApi.WebHost wants .NET 4.5. Right-click, properties, Target Framework to .NET Framework 4.5.1, done. Shift-Ctrl-B... what's this?

[Screenshot: the build fails with errors about missing NuGet packages]

Oh. OK. Let's enable NuGet Package Restore so it'll reinstall packages when we compile the solution... oh dear:

[Screenshot: another error dialog]

Oh, joy. Right. This just stopped being fun, because now the solution has entered a sort of weird limbo-state where it's not restoring packages, but the option to enable package restore has disappeared. Time for a tried and tested troubleshooting routine:

  1. Close Visual Studio. Completely. SHUT IT DOWN. Yes. And the other instance you've got open. In fact, reboot the machine. DO IT.
  2. Whilst it reboots, get something to drink. Coffee if you're on the clock (did I mention Spotlight has a bean-to-cup espresso machine? We're hiring, you know...) - or something a little stronger if you're not.
  3. Put on "Turn Up the Radio" by Autograph.
  4. Take a deep breath.
  5. Re-open Visual Studio, re-open your solution, try building it again.

This time, it builds. It gives a warning about assembly version conflicts, and then settles down to 0 errors and 1 warning:

Warning: Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information.  Packages affected: EntityFramework, Microsoft.Net.Http

Well, we're not using Entity Framework so I can just remove it. Except I can't, because Microsoft.AspNet.Providers.Core uses EntityFramework, and Microsoft.AspNet.Providers.LocalDB uses Providers.Core... but since I'm not using ANY of those, we can remove LocalDB, which removes Core, which removes EntityFramework, and we're down to a single warning about Microsoft.Net.Http, which we can fix with a NuGet package reinstall. Simple.

PM> Update-Package -reinstall Microsoft.Net.Http
Removing 'Microsoft.Net.Http 2.0.20710.0' from Restival.Api.WebApi.
Successfully removed 'Microsoft.Net.Http 2.0.20710.0' from Restival.Api.WebApi.
Removing 'Microsoft.Net.Http 2.0.20710.0' from Restival.Api.ServiceStack.
Successfully removed 'Microsoft.Net.Http 2.0.20710.0' from Restival.Api.ServiceStack.
Removing 'Microsoft.Net.Http 2.0.20710.0' from Restival.Api.OpenRasta.
Successfully removed 'Microsoft.Net.Http 2.0.20710.0' from Restival.Api.OpenRasta.
Uninstalling 'Microsoft.Net.Http 2.0.20710.0'.
Successfully uninstalled 'Microsoft.Net.Http 2.0.20710.0'.
Installing 'Microsoft.Net.Http 2.0.20710.0'.
You are downloading Microsoft.Net.Http from Microsoft, the license agreement to which is available at
http://www.microsoft.com/web/webpi/eula/MVC_4_eula_ENU.htm. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device.
Successfully installed 'Microsoft.Net.Http 2.0.20710.0'.
Adding 'Microsoft.Net.Http 2.0.20710.0' to Restival.Api.WebApi.
Install failed. Rolling back...
Update-Package : Unable to uninstall 'Microsoft.Net.Http 2.0.20710.0' because 'Microsoft.AspNet.WebApi.OData 4.0.30506'
depends on it.At line:1 char:1
+ Update-Package -reinstall Microsoft.Net.Http
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Update-Package], Exception
    + FullyQualifiedErrorId : NuGetCmdletUnhandledException,NuGet.PowerShell.Commands.UpdatePackageCommand
 
'Microsoft.Net.Http 2.0.20710.0' already installed.
Adding 'Microsoft.Net.Http 2.0.20710.0' to Restival.Api.ServiceStack.
Successfully added 'Microsoft.Net.Http 2.0.20710.0' to Restival.Api.ServiceStack.
'Microsoft.Net.Http 2.0.20710.0' already installed.
Adding 'Microsoft.Net.Http 2.0.20710.0' to Restival.Api.OpenRasta.
Successfully added 'Microsoft.Net.Http 2.0.20710.0' to Restival.Api.OpenRasta.

PM>

Hmm. I don't even know what WebApi.OData is, and I'm pretty sure I'm not using it... Let's remove it. Which, of course, means removing a bunch of other things, including Microsoft.Net.Http... which requires another Visual Studio restart. And this time, Update-Package fails because Microsoft.Net.Http is actually gone... so let's install it:
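
The same -RemoveDependencies trick should take out OData and its dependency chain in one step - again, assuming nothing else still references those packages:

PM> Uninstall-Package Microsoft.AspNet.WebApi.OData -RemoveDependencies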

PM> Install-Package Microsoft.Net.Http

Zing! Done. Clean build, zero errors, zero warnings, it works. Now we can actually implement attribute routing.

First, we're going to remove our existing Hello route and enable attribute routing:

public static class WebApiConfig {
    public static void Register(HttpConfiguration config) {
        config.MapHttpAttributeRoutes();
        //config.Routes.MapHttpRoute(
        //    "Hello", // route name
        //    "hello/{name}", // route template
        //    new { Controller = "Hello", Name = "World" } // defaults
        //    );

    }
}

Next, we need to decorate our HelloController with the route attribute:

public class HelloController : ApiController {
    [Route("hello/{name}")]
    public Greeting Get(string name) {
        return (new Greeting(name));
    }
}

Finally, we need to change the way we register our configuration, because the old WebAPI 1.x convention isn't compatible with attribute routing:

protected void Application_Start() {
    // Old WebAPI 1.x syntax - not compatible with attribute routing:
    // WebApiConfig.Register(GlobalConfiguration.Configuration);

    // New WebAPI 2.x configuration via delegate instead of direct method call

    GlobalConfiguration.Configure(WebApiConfig.Register);
}

That works - well, everything works except our default "Hello, World" scenario - so let's add a default value to the {name} parameter in our route attribute:

public class HelloController : ApiController {
    [Route("hello/{name=World}")]
    public Greeting Get(string name) {
        return (new Greeting(name));
    }
}

And there you go. Attribute routing works, all tests are passing - and a yak so impeccably shaved you could use it to sell cologne. Not bad.

Thursday, 7 May 2015

Restival Part 3: Hello, World!

So far, we've got a /hello service up and running in four different frameworks (five if you include Andrew 'kolektiv' Cherry's F#/Freya implementation, which looks really interesting). Last time, we looked at how our frameworks handle routing; in this instalment, we're going to look at adding default values to our route parameters. Specifically, if you just call /hello, the service should return "Hello, World!"

GET /hello

should return

200 OK

{ "Message" : "Hello, World!" }

OK, so how do we do this? There are three places we can potentially do it - the routing, the handlers, or the underlying Greeting object itself - but not all our frameworks support all three options.

Only WebAPI supports explicit defaults in the routing configuration, using this syntax:

config.Routes.MapHttpRoute(
    "Hello", // route name
    "hello/{name}", // route template
    new { Controller = "Hello", Name = "World" } // defaults
);

ServiceStack doesn't support routing defaults per se, but because the request is decoupled from the rest of our implementation, we can specify the default inside our request DTO - and thanks to the single responsibility principle, we know this isn't going to affect any other details of our implementation:

[Route("/hello")]
[Route("/hello/{name}")]
public class Hello {
    private string name = "World";
    public string Name {
        get { return name; }
        set { name = value; }
    }
}

OpenRasta's routing maps our requests directly onto the Get method of our HelloHandler, which makes it really easy to use .NET's optional parameter syntax to supply a default value to the handler method itself:

public class HelloHandler {
    public object Get(string name = "World") {
        return (new Greeting(name));
    }
}

Finally, NancyFX has such sparse routing that there's only really one line of code we could modify - after all, our entire API's only two lines of code, and the other one's doing our /hello/{name} implementation.

public class HelloModule : NancyModule {
    public HelloModule() {
        Get["/hello"] = _ => new Greeting("World");
        Get["/hello/{name}"] = parameters => new Greeting(parameters.name);
    }
}

Astute readers will be going "but hang on - you've just defined the same default in four different places! That's not very SOLID! What if it changes?" - and yes, you're exactly right. Because this is a demo application, I'm favouring readability over abstraction; if you had to define this kind of default behaviour in four different places in a real app, you'd do it using constants, or implement it on the Greeting object itself so the code is reused between all four endpoints.
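
For illustration - a sketch only, since the real Greeting class isn't shown here - pushing the default into Greeting itself might look something like this:

public class Greeting {
    public const string DefaultName = "World";

    public string Message { get; private set; }

    public Greeting(string name) {
        // Fall back to the default whenever routing hands us nothing
        var who = String.IsNullOrWhiteSpace(name) ? DefaultName : name;
        Message = String.Format("Hello, {0}!", who);
    }
}

Each endpoint could then pass whatever it got - including null - straight through, and the default would live in exactly one place.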

Code for this post is on GitHub as v0.0.2.

Now we've got a basic skeleton API up and running, it's probably time to share my to-do list - things we'll be implementing, in no particular order:

  • List resources, and retrieving multiple objects
  • Support for PUT and DELETE (easy), POST (interesting) and PATCH (really interesting)
  • Hypermedia links using HAL
  • Pagination and resource expansion, as described in this excellent post on the Stormpath blog and this blog post from Etsy's Venkata Mahalingam
  • API versioning - because, sooner or later, we're going to break something and need to maintain backward compatibility for older clients. Check out Troy Hunt's great post on API versioning for a preview of how we'll be getting this wrong in three different ways.
  • OAuth2 and bearer authentication - this one's going to take some work, because before you can build an OAuth2 API, you need an OAuth2 authentication server, but I've wanted to build a really lightweight .NET OAuth2 server for a while so I'm quite looking forward to it.
  • API documentation using tools like Swagger, which generates documentation based on metadata exposed by your API itself.

See you next time.

Monday, 4 May 2015

Restival Part 2: All Aboard The Routemaster

Hello, and welcome to the second instalment of Restival: The Great .NET ReST Showdown (part 1 is here if you missed it). So far, our API defines a single very simple method – “hello”. Making a GET request to /hello will return { "Message" : "Hello, World!" }, and making a GET request to /hello/chris will return { "Message" : "Hello, Chris!" }.

The code we're discussing here is on GitHub as release v0.0.1. This release supports /hello/{name}, which demonstrates routing and parameter binding. I've deliberately not implemented "Hello, World" at /hello yet, because I want to do that using the various frameworks' conventions for specifying default parameter values, and that logically can't happen until you've defined your routes. Even at this incredibly early stage, there are some interesting design decisions going on.

Routing and Route Binding

Routing controls how your app will map incoming HTTP requests to methods - it's the set of rules that say "when you get a request that looks like X, run method Y on class Z".

Nancy has a really lightweight routing syntax inspired by Sinatra - by inheriting from NancyModule, you get access to a RouteBuilder, a sort of dictionary that maps routes to anonymous methods, for each supported HTTP verb (DELETE, GET, HEAD, OPTIONS, POST, PUT and PATCH) - to add a route, you supply the pattern to match and the method implementation:

public class HelloModule : NancyModule {
    public HelloModule() {
        Get["/hello/{name}"] = parameters => new Greeting(parameters.name);
    }
}

Note the Nancy convention whereby we use an underscore to indicate "we're not using this variable for anything" in handlers that don't actually use their parameter dictionary. It's also worth noting that Nancy's lightweight syntax won't stop you defining multiple handlers for the same route - but this can lead to non-deterministic behaviour, so don't do it :)
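
To make that concrete - this is a hypothetical module, not anything in Restival - both of these registrations will compile and load quite happily:

public class AmbiguousModule : NancyModule {
    public AmbiguousModule() {
        // Two handlers for the same pattern: Nancy accepts both,
        // and which one answers any given request is not guaranteed.
        Get["/hello/{name}"] = parameters => new Greeting(parameters.name);
        Get["/hello/{name}"] = _ => new Greeting("Whoever you are");
    }
}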

WebAPI uses an explicit routing table that's configured during application startup - in WebAPI, there's a call to WebApiConfig.Register(GlobalConfiguration.Configuration) in Application_Start, and routes are mapped by specifying the name, the URL pattern and the defaults to use for that route. (If you're familiar with routing in ASP.NET MVC, WebAPI uses a very similar routing configuration, but with the 'action' mapped to the HTTP verb instead of to a path segment.)

config.Routes.MapHttpRoute(
    "Hello",   // route name
    "hello/{name}", // route template
    new { Controller = "Hello" } // route defaults
);

OpenRasta and ServiceStack are both far more explicit about the relationship between resources, routes and handlers. OpenRasta uses a fluent configuration interface to declare your resources (i.e. the bits of data we're interested in), your URIs (routes), handlers (the bits of code that actually do the work), and contracts (which handle things like serialization and content types):

public class Configuration : IConfigurationSource {
    public void Configure() {
        using (OpenRastaConfiguration.Manual) {
            ResourceSpace.Has.ResourcesOfType<Greeting>()
                .AtUri("/hello/{name}")
                .HandledBy<HelloHandler>()
                .AsJsonDataContract();
        }
    }
}

Finally, ServiceStack requires you to explicitly define requests (DTOs representing the incoming request data), services (analogous to handlers in our other frameworks) and responses. This is far more verbose than the other frameworks, but providing these abstraction layers between every aspect of your ReST API and your underlying codebase gives you more flexibility to evolve your API independently of the underlying business logic. You map your routes to your request DTOs using the Route attribute, and inherit from ServiceStack.Service when implementing your handlers. ServiceStack maps HTTP verbs onto service method names - HelloService.Get(Hello dto), HelloService.Post(Hello dto), etc. - but also supports a catch-all Any() method which will map incoming requests regardless of the request verb.

[Route("/hello")]
[Route("/hello/{name}")]
public class Hello {
    public string Name { get; set; }
}

public class HelloResponse {
    public string Message { get; set; }
}

public class HelloService : Service {
  public HelloResponse Any(Hello dto) {
    var greeting = new Greeting(dto.Name);
    var response = new HelloResponse() { Message = greeting.Message };
    return (response);
  }
}

So there you go. /hello/{name} takes one line in NancyFX, a couple of lines in OpenRasta and WebAPI, and three entire classes in ServiceStack. Before you draw any conclusions, though, try pointing a browser at the root URL of each API implementation.

Nancy gives you this rather splendid 404 page - complete with Tumbeast:

[screenshot: NancyFX 404 page, complete with Tumbeast]

Running under IIS, WebAPI and OpenRasta both interpret GET / as a directory browse request, and give you the all-too-familiar IIS 7.5 HTTP error screen:

[screenshot: IIS 7.5 HTTP error page]

But the pay-off for the extra boilerplate required by ServiceStack is this rather nice API documentation page, describing the services and encoding formats supported by the API and providing WSDL files for adding our API as a service endpoint. Now, we're not actually using any of that yet... but as our API grows, it's going to be interesting to see how much extra work the other frameworks require to do things that ServiceStack provides for free. (Or for $800 per developer, depending on what you're doing with it.)

[screenshot: ServiceStack's auto-generated API documentation page]

Now, it's important to remember that we're trying to reflect the conventions and idioms of our chosen frameworks here. You could, without too much difficulty, implement the request/service/response pattern favoured by ServiceStack on any of the other frameworks, or get your ServiceStack services to return raw entities instead of mapping them into Response objects - but if you're trying to make framework A behave like framework B, you might as well just switch to framework B and be done with it.

In the next episode, we're going to make GET /hello return "Hello, World!", and in the process look at how to define default values for our route parameters in each of our frameworks. Until then, happy hacking!

Tuesday, 28 April 2015

One API, Four Frameworks: The Great .NET ReST Showdown

There’s only two hard problems in software: cache invalidation, naming things, off-by-one errors, null-terminated lists, and choice overload. Back when I started building data-driven websites, classic ASP gave you Request, Response, Application, Session and Server, and you built the rest yourself - and it was uphill both ways! These days, we’re spoiled for choice. Whatever you’re trying to achieve, you can bet good money that somebody’s been there before you. If you’re lucky, they’ve shared some of their code – and if you’re really lucky, they’ve built a framework or a library you can use.

Since the heady days of classic ASP, I’ve built quite a few systems that have to expose data or services to other systems – from standalone HTTP endpoints that just return true or false, to full-featured ReST APIs. I’ve come across several frameworks that are designed to help developers create ReSTful APIs using Microsoft .NET, each of which doubtless has its own strengths and weaknesses – and now I find myself facing the aforementioned choice overload, because if I was starting a new API project right now, I honestly don’t know which framework I’d use. Whilst I’ve got hands-on experience with most of them, I’ve never had the opportunity to try implementing the same API spec in multiple frameworks to compare and contrast their capabilities on a like-for-like basis.

So here goes. Over the next few months, I’m going to be developing the same API in four different frameworks, side-by-side. The APIs have to use the same back-end code – same data access, same business logic – and have to pass the same integration test suite. I’m going to start out really simple – “hello, world!” simple – and gradually introduce features like resource expansion, pagination, OAuth2 and content negotiation. The idea is that some of these features will actually break the interface, so I’ll also be looking at how to handle API versioning in each of my chosen frameworks. I’m also going to try and respect the idioms and conventions of each of the frameworks I’m working with – good frameworks tend to be opinionated, and if you don’t agree with their opinions you’re probably not going to find them particularly helpful.

The Frameworks

Microsoft ASP.NET WebAPI (http://www.asp.net/web-api)

Microsoft’s out-of-the-box solution for building HTTP-driven APIs. Superficially similar to ASP.NET MVC but I suspect there’s much more to it than that. I’ve built a couple of small standalone APIs in WebAPI but not used it for anything too substantial.

ServiceStack (https://servicestack.net/)

For a long while I was completely smitten with ServiceStack. Then it hit v4.0 and went from free-as-in-beer to expensive-as-in-$999-per-developer for any reasonable-sized project – there is a free usage tier, but it’s pretty restrictive. That said, it’s still a really powerful, flexible framework. Version 3 is still on NuGet, is still available under a BSD license, and there’s at least one active project based on a fork of the ServiceStack v3 codebase.  I like ServiceStack’s conventions and idioms; working with it taught me a lot about ReST; it has great support for things like SwaggerUI, and I suspect as I start implementing various API features ServiceStack is going to deliver additional value and capabilities which the other frameworks can’t match. Will it add enough value to justify $999 per developer? We’ll see :)

OpenRasta (http://openrasta.org/)

I’ve played with OpenRasta on-and-off over the years, though I’ve never used it on anything substantial, but I know the folks over at Huddle are using it in a big way and having great results with it, so I’m really curious to see how it stacks up against the others. (I should probably disclose a slight bias here in that Sebastien Lambla, the creator of OpenRasta, is a friend of mine; it was Seb who first got me thinking about ReST via London .NET User Group and prolonged conversations in the pub afterwards.)

NancyFX (http://nancyfx.org/)

This one is completely new to me – until last week, I’d never even looked at it. But so far, it looks really nice – minimalist, elegant and expressive, and I’m looking forward to seeing what it can do.

Other Candidates

It would be interesting to get some F# into the mix – mainly because I’ve never used F# and I’m curious. I’ve heard interesting things about WebSharper and Freya – and, of course, if anyone wants to add an F# implementation and send me a pull request, go for it!

The API

I’m using Apiary.IO to design the API itself – you can check it out at http://docs.restival.apiary.io/

The Code

The code is on GitHub - https://github.com/dylanbeattie/Restival.

If you want to run it, you’ll need to set up IIS applications for each of the four framework implementations – I use a hosts file hack to point restival.local at 127.0.0.1, and I’ve set up http://restival.local/api.webapi, http://restival.local/api.nancy, http://restival.local/api.servicestack and http://restival.local/api.openrasta
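
The hosts file hack is a single extra line in %SystemRoot%\System32\drivers\etc\hosts:

127.0.0.1    restival.local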

The test suite is NUnit, uses RestSharp to make real live HTTP requests, and all the actual test logic is in the abstract base class. There are four concrete implementations, and the only difference is the URL of the API endpoint under test.
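
If you're wondering what that looks like, here's a rough sketch - the class and test names here are illustrative, not the actual Restival test code:

using System.Net;
using NUnit.Framework;
using RestSharp;

public abstract class ApiTestBase {
    // Each concrete fixture supplies the endpoint of one framework's implementation.
    protected abstract string BaseUrl { get; }

    [Test]
    public void Get_Hello_Chris_Returns_Personalised_Greeting() {
        var client = new RestClient(BaseUrl);
        var response = client.Execute(new RestRequest("hello/chris", Method.GET));
        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
        Assert.That(response.Content, Is.StringContaining("Hello, Chris!"));
    }
}

[TestFixture]
public class WebApiTests : ApiTestBase {
    protected override string BaseUrl {
        get { return ("http://restival.local/api.webapi"); }
    }
}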

The Backlog

Features I want to implement include, but are probably not restricted to…

  • Pagination. What’s a good way to handle huge result sets without causing timeouts and performance problems?
  • Resource expansion. How do you request a customer, and all their orders, and the invoices linked to those orders, in a single API call?
  • API versioning – using all three different ways of doing it wrong
    • Version numbers in the URL (api.foo.com/v1/)
    • Version numbers in a custom HTTP header (X-Restival-Version: 1.0)
    • Content negotiation based on MIME types (Accept: application/vnd.restival.v1.0+json)
  • OAuth2 and bearer token authentication. (You’ll like this one because I’m not using DotNetOpenAuth)
  • API documentation – probably by seeing how easily I can add Swagger support to each implementation

In every case, this is stuff I’ve already implemented in at least one project over the last couple of years, so it’s going to be interesting seeing how readily those implementations translate across to the other frameworks I’m using.

Sound like fun? You bet it does. Tune in and see how it unfolds. Or come to NDC Oslo in June or Progressive.NET in London in July, where you not only get to listen to me talk about ReST, you get several days of talks and workshops from some of the best speakers in the industry.