Tuesday, 5 December 2017

Thank you for flying Analysis Airways

If you work in software – or even if you don’t – it’s likely that, at some point, you’ll find yourself working with a team who are completely unfamiliar with your systems. A management consultancy, a development partner, a new supplier. A team of smart, capable people who are hoping to work with you to deliver something, whether that’s process improvements or a reduction in costs or some shiny new product.

A common pattern here is that, at the start of the engagement, they’ll appoint some business analysts to spend a whole lot of time talking with people from your organisation to get a better idea of what it is you do. They sometimes call this ‘gathering requirements’. You’ll know it’s happening when you get half-a-dozen invitations to four-hour ‘workshops’ from somebody you’ve never met, normally with a note saying something like ‘Hey everyone! Let’s get these in the diary!’

Now, there’s a problem here. Asking people what’s happening is almost never the best way to find out what’s actually happening. You don’t get the truth, you get a version of the truth that’s been twisted through a series of individual perspectives, and when you’re using these interviews to develop your understanding of an unfamiliar system, this can lead to an incredibly distorted view of the organisation. Components and assemblies aren’t ranked according to cost, or risk, or complexity. They’re ranked according to how many hours a day somebody spends dealing with them. And when you consider that in these days of scripted infrastructure and continuous deployment, a decent engineer can provision an entire virtual hosting environment in the time it takes to deal with one customer service phone call, what you end up with is a view of your organisation that ranks ‘phone calls’ equal to ‘hosting environment’ in terms of their strategic value and significance.

When you factor in the Dunning-Kruger effect, the errors and omissions, the inevitable confusion about naming things, and the understandable desire to manage complexity by introducing abstractions, you can end up with a very pretty and incredibly misleading diagram that claims to be a ‘high-level view’ of an organisation’s systems.

There’s a wonderful example of this in neurology – a thing called the ‘cortical homunculus’; a distorted representation of the human body where the various parts of the body are magnified based on the density of nerve endings found therein. Looks like this:


It’s recognisably human, sure. But it’s a grotesque distortion of what a human being actually looks like – brilliant for demonstrating neurology, but if you used it as a model when designing clothes or furniture your customers would be in for one hell of a shock. And we know it’s grotesque, because we know what human beings are supposed to look like – in fact, it’s the difference between the ordinary and the grotesque that makes these cortical homunculi interesting.

The problem with software is that it’s made out of invisible electric magic, and the only way to see it at all is to rely on some incredibly coarse abstractions and some very rudimentary visualisation tools.


Imagine, for one second, that we’ve hired some consultants to help us design an aircraft. They send over some business analysts, and book some time with the ‘domain experts’ to talk over the capabilities of the existing system and gather requirements. The experts, of course, being the pilots and cabin crew – which, for a Boeing 747 like Ed Force One, is three flight crew and somewhere around a dozen cabin attendants. They spend a couple of very long days interviewing all these experts; maybe they even have the opportunity to watch them at work in the course of a typical flight.

And then they come up with this: the high-level architectural diagram of a long-range passenger airliner:


Now, to an average passenger, that probably looks like a pretty reasonable representation of the major systems of a Boeing 747. Right? Take a look. Can you, off the top of your head, highlight the things that are factually incorrect?

That’s why this diagram is dangerous. It’s nicely laid out and easy to understand. It looks good. It inspires trust… and it’s a grotesque misrepresentation of what’s actually happening. Like the cortical homunculus, it’s not actually wrong, but it’s horribly distorted. In this case, the systems associated with the cabin attendants are massively overrepresented – because there are twelve of them, as opposed to three flight crew, so four times the workshop time and four times the anecdotal insight. The top-level domains – flight deck, first class, economy class – are based on a valid but profoundly misleading perspective on the systems architecture of an airliner. The avionics and flight control systems are reduced to a footnote based on a couple of days of interviews with the pilots, somebody with a bit of technical knowledge has connected the engines to the pedals (like a car, right?) and the rudder to the steering wheel (yes, a 747 does have a steering wheel), the wings are connected to the engines as a sort of afterthought…

Now, when the project is something tangible – like an office building or a bridge or an airliner – it won’t take long at all before somebody goes ‘um… I hate to say it, but this is wrong. This is so utterly totally wrong I can’t even begin to explain how wrong it is.’ Even the most inexperienced project manager will probably smell a rat when they notice that 20% of the budget for a new transatlantic airliner has been allocated to drinks trolleys and laminated safety cards.

But when the project is a software application – you know, a couple of million moving parts made out of invisible electronic thought-stuff that bounce around the place at the speed of light, merrily flipping bits and painting pixels and only sitting still when you catch one of them in a debugger and poke it line-by-line to see what it does – that moment of clarity might never happen. We can’t see software. We don’t know what it’s supposed to look like. We don’t have any instinct for distinguishing the ordinary from the grotesque. We rely on lines and rectangles, and we sort of assume that the person drawing the diagram knew what they were doing and that somebody else is looking after all the intricate detail that didn’t make it into the diagram.

And remember, nobody here has screwed up. The worst thing about these kinds of diagrams is that they’re produced by competent, honest, capable people. The organisation allocates time for everybody to be involved. The stakeholders answer all the questions as honestly as they can. The consultants capture all of that detail and synthesise it into diagrams and documents, and everybody comes away with the satisfying sense of a job well done.

That’s not to say there’s no value in this process. But these kinds of diagrams are just one perspective on a system, and that’s dangerous unless you have a bunch of other perspectives to provide a basis for comparison. A conceptual model of a Boeing 747 based on running cost – suddenly the engines are a hell of a lot more important than the drinks trolley. A conceptual model based on electrical systems. Another based on manufacturing cost. Another based on air traffic control systems and airport infrastructure considerations. And yes, producing all these models takes a lot more than arranging a week of interviews with people who are already on the payroll, which is why so many projects get as far as that high-level system diagram and then start delivering things.

And why, somewhere in your system you almost certainly have the software equivalent of a hundred-million-dollar drinks trolley.

Thank you for flying Analysis Airways.

Friday, 24 November 2017

Goodbye Spotlight… Hello Skills Matter!

I have some exciting news. I'll be leaving Spotlight at the end of January, to take up a new role as CTO at Skills Matter. After fourteen years at Spotlight, this is a massive change for me, it's a massive change for Spotlight, and (I hope!) it's going to be a massive change for Skills Matter as well - but it's also a very natural next step for me, and a move that I think is going to unlock all sorts of exciting possibilities over the coming years.

I first started working with Spotlight way back in 2000 - in Internet time, that's about a hundred million years ago. My first job after I graduated was working for a company who built data-driven websites in ASP; spotlight.com was the first big commercial site I ever built, and I've been working with them ever since - initially as a supplier, then as webmaster (remember those?), then head of IT, then systems architect. I've come with them on a journey from Netscape Navigator and dial-up modems to smartphones and REST APIs and progressive web apps, and it's been a blast - we've shipped some really excellent projects, we've learned a lot together, and I've had the pleasure of working with awesome people and a lot of very cool technology.

Around ten years ago, up to my eyeballs in ASP.NET WebForms and wondering if everybody found them as unpleasant as I did, I started going along to some of the tech industry events that were happening here in London to get a bit of external perspective. I was at the first Future Of Web Apps conference back in 2007, the first Alt .NET UK 'unconference', the DDD events held at Microsoft a few times a year... and it wasn't long after that that I met Wendy Devolder and Nick Macris, who at the time were still running Skills Matter out of a basement in Sekforde Street, and hiring the crypt underneath St James Church on Clerkenwell Green for bigger events. That's where I did my first-ever technical talk - a fifteen-minute introduction to jQuery which I presented at the Open Source .NET Exchange. And, as the saying goes, I've never really looked back.

This year, I've spoken at 20+ tech events in nine countries, from huge international conferences to local user groups around the UK. I'm on the programme committee for NDC London, FullStack and Progressive.NET, I'm helping to run the London .NET User Group, I've started giving training courses and workshops on building hypermedia APIs and scalable systems. And, because I'm one of those kinds of nerds, every time I take part in a tech event I come away from it buzzing with ideas about how to make it better - for attendees, organisers, volunteers, speakers... everybody. The only problem was finding the time to implement those ideas, and so when Wendy got in touch a few months ago to ask if I'd be interested in joining the team at Skills Matter and putting some of those ideas into practice, the timing was just right.

Now, this isn't intended to be a puff-piece. I've known the team here at Skills Matter for many, many years, we’ve done some really excellent things together, and I'm really excited to be joining them. I’ve also spoken to literally hundreds of people whose experience at their conferences and events has been nothing but positive, and that’s played a big part in my decision to come on board here. However, I also know that a few of you have had some frustrating experiences with them in the past - about how they organise their community events, about logistics and speaker invitations for their bigger conferences... even things like having the ‘wrong kind of cider’ in the Space Bar. :) Well, I want to hear all the details. I know we can't please all of the people all of the time, but the fact the team here has hired me means they're up for a bit of spirited discussion and an influx of new ideas, so if you wanna drop me an email or bend my ear over a beer sometime, I'd love to hear from you and see what we can do about it.

For the team at Spotlight, this marks the dawn of a new era. I’ve come dangerously close to becoming a dungeon master, and to me that’s a clear sign that it’s time to hand over the lines and rectangles to somebody new. To quote from Alberto Brandolini’s “The Rise and Fall of the Dungeon Master”:

“I am not suggesting to hire a hitman, but to acknowledge that the project would be better off without the Dungeon Master around. If you’re looking for a paradigm shift, you’ll need a team with a different attitude and emotional bonding with the legacy.”

I'm sad not to be coming with them on the next phase of their journey, but it's clear that Spotlight's future lies along a different path from mine. They've got an absolutely first-class team there, I'm sure they're going to go on to even bigger and better things, and I'm looking forward to catching up with them all over a beer every once in a while and seeing how it's all going.

First and foremost, though, this is a chance for me to get more involved with the global tech community. I absolutely love the part of my life that involves bouncing all over the world talking about tech, sharing ideas and meeting interesting people; and I'll be taking advantage of all those opportunities to chat about what Skills Matter can do to help support the tech community - whether it's open source projects or corporate development teams, tiny meetups or big international conferences. I'll also be working with the software team here to create some really exciting things, and of course I'll be getting even more closely involved in the conferences and meetups that we host here at CodeNode and at various venues around London. For Skills Matter, my experience as a developer, user group organiser and programme committee volunteer - not to mention my connections in the tech industry all over the world - will play a big part in deciding our plans and priorities for the next few years.

And the best part? I even get to work the odd shift behind the bar here at CodeNode once in a while, thus becoming the tech industry’s answer to Neville the Part-Time Barman. So next time you’re round Moorgate of an evening, drop in and say hi.

Exciting times, indeed.

Thursday, 9 November 2017

London .NET User Group Open Mic Night

Last night’s London .NET meetup was an ‘open mic’ session. Rather than inviting speakers to talk for an hour or so as we normally do, we invited members of the group to come along and talk for 10-15 minutes about their own projects, things they’re working on, or just stuff they think is cool. It’s the first open mic session we’ve had in a long while, but based on the success of last night’s event I think we’ll try to make these more of a regular thing in 2018. Quite a few of the speakers who came along were talking about their own .NET open source projects, and showing off some very, very cool things; here’s a quick rundown.

First up was Phil Pursglove, giving us a whistle-stop tour of Cosmos DB, a new database platform that Microsoft are now offering on Azure. I’ve seen a couple of talks this year about Cosmos, and it looks really rather nice. It’s got protocol-level compatibility with MongoDB (what Microsoft call ‘bug compatible’), plus support for SQL and a couple of other language bindings. One of the coolest features of Cosmos is native support for multiple consistency models, allowing you to optimise your own application for your particular requirements - with the ability to override the global consistency model on a per-request basis. There’s a time-limited free trial available here - check it out.

Next up, Robin Minto gave us a run-through of OWASP ZAP, a proxy-based web security tool created by the Open Web Application Security Project (OWASP). ZAP is beautifully simple: you install it (or fire up the Docker image) and it acts as a web proxy whilst you navigate through some of the primary user journeys on your web application; in the background, it’s probing your server and scanning your HTML for a whole range of common security vulnerabilities - and when you’re done, it’ll generate a security report you can share with the rest of your team.
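If you want to give it a try without a full install, ZAP’s baseline scan runs nicely from Docker. Here’s a sketch based on the standard OWASP ZAP Docker packaging - the target URL and report filename are placeholders, so swap in your own:

```shell
# Run ZAP's baseline scan: it spiders the target, passively scans the
# responses for common vulnerabilities, and writes an HTML report into
# the mounted working directory. (It doesn't actively attack the site.)
docker run -v "$(pwd):/zap/wrk/:rw" -t owasp/zap2docker-stable \
  zap-baseline.py -t https://www.example.com -r zap-report.html
```

The baseline scan is deliberately safe to point at production; ZAP’s full active scan is a different beast and should only be run against systems you’re allowed to attack.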

James Singleton - author of ASP.NET Core 2 High Performance - gave us a very cool live demo of some of the cross-platform capabilities of .NET Core 2.0, including using his Windows laptop to cross-compile a web app for ARM Linux and running it live on a Raspberry Pi. What I found really impressive about this is that it didn’t require any .NET framework or runtime install on the Pi - it’s just vanilla (well, raspberry!) Raspbian Linux with a couple of things like libssl, and everything else is included in the deployment package created by the .NET tooling.
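For anyone curious what that deployment step looks like, here’s a sketch - it assumes a standard netcoreapp2.0 project, and `MyApp` is a placeholder for your own assembly name. In .NET Core 2.0, publishing with a runtime identifier like linux-arm produces a self-contained output folder:

```shell
# Publish a self-contained build for ARM Linux from any dev machine.
# The publish folder bundles the app together with the .NET Core
# runtime, so nothing needs installing on the Pi beyond a few native
# libraries such as libssl.
dotnet publish -c Release -r linux-arm

# Copy bin/Release/netcoreapp2.0/linux-arm/publish/ to the Pi, then:
#   chmod +x ./MyApp && ./MyApp
```
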

Ed Thomson, the Git program manager for Visual Studio Team Services, did a great walkthrough of his open source project libgit2sharp - a set of C# language bindings for working with local and remote git repositories. If you’ve ever had to parse the output from the Windows git command line tools, you’ll know how painful it can be - but with libgit2sharp, you can use C# or PowerShell to create automated build tools, manipulate your git repositories and do all sorts of cool stuff.

Jason Dryhurst-Smith gave us a demo of his CorrelatorSharp project - “your one-stop shop for context-aware logging and diagnostics” - a library that allows you to track the context of operations across multiple services and operations, including support for frameworks like NLog and RestSharp and a client-side JavaScript library.

Ben Abelshausen came along to show us Itinero, an open-source route planning library for .NET which you can use within your own applications to calculate routes and analyse map data.

Finally, we had Matt Ellis from JetBrains giving us ‘Ten JetBrains Rider Debugging Tips in Ten Minutes’ - including some really neat stuff like cross-platform support for DebuggerDisplay attributes and the ability to define interdependent breakpoints in your code.

Huge thanks to all our speakers and to everyone who came along - it’s great to see so much enthusiasm and activity going on in .NET open source, and with .NET Core 2.0 going fully cross-platform and tools like Rider and Visual Studio Code, it’s only going to get more interesting.

See you all at the next one! 

Tuesday, 17 October 2017

Why You Weren’t Picked For NDC London

Over the past two weeks, a small team of us has been putting together the agenda for NDC London 2018, which is happening at the Queen Elizabeth II Centre in Westminster this coming January. As somebody who’s submitted a lot of conference talks over the years, I know how exciting it is getting the email confirming that you’ve been accepted — and how demoralising it can be getting the email saying ‘We’re sorry to say that…’

Well, now that I’ve been on the other side of that process a few times, I see things very differently, so I wanted to take this chance to tell you all how the selection process actually works, why you didn’t get picked, and why it shouldn’t put you off.

First, though, I want to say a huge thank you to everybody who submitted, and congratulations to the people who have been selected. It’s been a lot of work to pull everything together, but I’m really happy about the programme of amazing people and top-class talks that we’ve got this year — and I’m particularly excited that we’ve been able to include so many great speakers from the developer community who will be speaking at NDC for the first time. Welcome, all of you.

For all the speakers who didn’t get selected this time around, I really hope this article might help you understand why. (TL;DR: we just had too many good talks submitted!)

First, a basic rule of conferences — they have to generate a certain amount of money in order to operate. Yes, there are community events like DDD that are free to attend and are run with minimal sponsorship — and I think those events are every bit as valuable to our community as the big commercial conferences — but once you start dealing with international speakers and multi-track events running across several days, you need to start thinking about commercial considerations. Conferences generate revenue from two sources — ticket sales and sponsorship. Both of those boil down to creating an event that people want to attend (and, in many cases, can persuade their boss is worth the cost and the time), so that’s what we, as the programme committee, are trying to do.

I should mention that on every conference I’ve worked with, the programme committee has consisted entirely of volunteers — four or five people from within the industry who are happy to donate their free time to help put the programme together. They’ll normally get a complimentary ticket to the conference in return, but it’s not a job, and nobody’s getting paid to do it.

Most software conferences run a public ‘call for submissions’ (aka ‘call for papers’ or CfP) where anybody interested in speaking can submit talks for consideration. Once the CfP has closed, the programme committee has the unenviable task of going through all the talks that have been submitted and picking the ones that they think should be included. NDC London has five tracks of one-hour talks, over three days. Once you’ve allowed for keynotes and lunch breaks, that gives you fewer than 100 talk slots. We had 732 sessions submitted. If we’d said ‘yes’ to all of them, we’d have ended up with a conference lasting three weeks with a ticket price of over £7,500… good luck getting your manager to sign off on attending that one.

So, how do you pick the top 100 talks from 732 submissions? Well, here’s what we look for.

First — quality. This one doesn’t really help much, because the vast majority of the talks submitted to a high-profile event like NDC are excellent, but there might be a handful that you can reject outright. These tend to be the talks that look like sales pitches — single-vendor solutions submitted by somebody who works for the vendor in question. You want to use a conference to sell your product? No problem — get a booth, or buy a sponsorship package. But we’re not going to include your sales pitch at the expense of someone else’s submission.

It’s also worth pointing out that when we get to the final round of submissions, when it’s getting really hard to make a call, we’ll look very critically at the quality of the submission itself. We’ll look for a good title, a clear, concise summary, a succinct speaker biography and a good-quality headshot — basically, a submission that makes it really clear who you are, what you’re talking about, why it’ll be interesting, and that you’re going to put in the effort required to deliver a great presentation. There’s no hard-and-fast rule to this; there are some excellent articles out there about how to write good proposals. My own rule of thumb when I’m submitting is 100 characters for the title, 2,000 for the abstract and 1,000 for the speaker bio, and I often use Ted Neward’s approach of ‘pain and promise’ — ‘here’s a problem you’ve had, here’s an idea that might fix it’ — when I’m writing proposals. Every speaker is different, and we all have our own style, but if your talk summary is only one or two lines or you’ve pasted in your entire professional CV instead of writing a short speaker bio, you may well get rejected in favour of somebody who’s clearly put more time into drafting their submission.

Second — relevance. Conferences have a target audience; NDC has evolved from being primarily a .NET conference into an event with a much broader scope, but we know the kind of developers who normally come to NDC London and what sort of things they’re interested in. For example, this year we decided to decline any C++ talks, purely because there are very few C++ developers in our target audience. That said, we do try to include things that might be interesting to our audience, even if they’re not immediately relevant. Topics like Kotlin and Elm are still relatively esoteric in terms of numbers of users, but the industry buzz around them means we try to include things like this on the programme because we’re confident that people will want to see them.

Third — diversity. We could very easily have filled the entire programme with well-known white male speakers talking about JavaScript — but we think a good conference should feature diverse speakers, presenting diverse topics, with a range of presentation styles. We’re not just talking about gender and ethnicity, either — we want to see new faces alongside regular speakers, and bleeding-edge technology topics alongside established patterns and practices. That said, we would never accept a substandard talk purely for the sake of diversity; the way to get more balance in the programme without compromising quality is to start with a wider range of submissions. Many conferences suffer from a lack of diversity in the talks that get submitted — it’s just the same people submitting the same topics year after year — so throughout the year people like me go to tech events and conferences, look out for great speakers and talks that haven’t appeared at NDC before, and invite them to submit.

Finally, there are the invited speakers — the people who we definitely want to see on the programme. They fall into two broad camps. There are the big names — the people with 100K+ Twitter followers, the published authors, the high-profile open source project leaders. Now, being famous-on-the-internet doesn’t automatically mean you’re a brilliant speaker — speaking for an hour in front of a few hundred people takes a very different set of skills from running an open-source project or writing a book — but we’re lucky in our industry to have a lot of people who are well known, well respected, and really, really good on a conference stage. And these people are important, because their involvement gives us a fantastic signal boost. More interest translates into more ticket sales — which means more budget for covering speaker travel, catering, facilities, the entertainment at the afterparty, and all the other things that make a good conference such a positive experience for the attendees.

The programme committee also invites a lot of new speakers because it’s a great way of getting some new faces and new ideas on to the programme. As I mentioned above, many of us spend a lot of time going to user groups, meetups and conferences, and when I see somebody deliver a really good talk, I’ll invariably get in touch afterwards and ask if they’d be interested in submitting it to an event like NDC.

So, those are our constraints. We want to deliver a balanced mix of big names and new faces. We want to promote diversity in an industry that’s still overwhelmingly white and male (never mind the ongoing fascination with JS frameworks and microservices). We want to offer a compelling combination of established technology and interesting esoterica; of stuff that’s interesting, stuff that’s relevant, and stuff that’s fun…

And we get to pick fewer than 100 talks out of 732 submissions, which means whatever we do, we’re going to be sending a whole lot of emails at the end of it saying ‘Sorry, your talk wasn’t accepted…’ — and that’s not much fun, because we’re turning down good content from good speakers. In many cases those people are good friends as well — the tech industry is a very friendly place and I count the people I’ve met through it among my closest friends. But that’s how it works.

It’s humbling to be part of an industry where so many talented people are willing to invest their time in sharing their own expertise. Speaking at conferences is a really rewarding experience, but it’s also a huge amount of work, preparation, rehearsal, logistics, and time away from home. Without speakers, there’d be no conferences — but, as I hope I’ve explained here, there are more excellent people and talks out there than any one event can ever hope to accommodate.

The thing is, there’s absolutely no shortage of great conferences and events. Don’t be discouraged. Keep submitting. If you didn’t make the cut for NDC London, submit to Oslo and Sydney. And BuildStuff, and DevSum, and Øredev, and FullStack, and Progressive.NET, and DotNext, and Sela Developer Practice, and SDD, and QCon, and WebSummit. And if none of those grab you, sign up for services like The Weekly CFP and Technically Speaking that will email you about conferences that are looking for speakers and submissions.

Finally, you know the one thing that every really good speaker I’ve ever seen has in common? It’s that they work hard on interesting things, and they love what they do. Maybe it’s their job. Maybe they lead an open-source project, or run a user group, or they’re writing a book. But if you love what you do and you want to share that enthusiasm, it’ll happen. Just give it time.

Saturday, 23 September 2017

London, London, Uber Alles

I read with some interest yesterday that Transport for London (TfL) are not renewing Uber’s licence to operate in London. TfL have cited concerns over Uber’s driver screening and background checks, and Uber’s use of ‘Greyball’, a software component designed and built by Uber to bypass all sorts of regulatory mechanisms, including using a phone’s GPS to recognise when the phone is being used at Apple HQ so that the Apple engineers who review iOS applications won’t see the hidden features that Apple aren’t supposed to know about.

I use Uber a lot. Their service used to be absolutely excellent, and is still pretty good. It’s not as good as it used to be. It takes longer to get a car than it used to, particularly in central London. But, as a passenger (and yes, I know I’m a white male passenger, although some of my Uber experience dates from a period when I did have quite serious mobility issues whilst recovering from a skiing injury), I have found Uber to be a really good service. I’ve used it all over the world — London, Bristol, Brussels, Kyiv, Saint Petersburg, and as of last night, Minsk. I missed it in Tel Aviv, where it’s been outlawed and everyone uses Gett instead, although Gett in Israel appears to operate in exactly the same way as Uber does in London, so I’m not quite sure what the distinction is.

I’ve had some seriously impressive experiences as an Uber customer. In London, I once left my guitar in the back of an Uber that dropped me home at 4am after a horribly delayed flight. I realised within minutes. I used the app to phone the driver, who immediately turned around and brought it back, and was very taken aback when I insisted on paying him for the extra journey. In Kyiv I’ve used Uber to travel safely from a place I couldn’t pronounce to a place I couldn’t find, in a city where I couldn’t speak the language or even read the alphabet, with a driver who spoke no English, and I’ve done it with absolutely no fear of getting ripped off or robbed.

It’s not all been smooth. In Bristol I once had an Uber driver — sorry, “partner-driver” — who missed the turning four times in a row and then explained, giggling, that he’d never driven a cab before and didn’t know how to use satellite navigation. In London I’ve actually been in an Uber car that was pulled over by the police for speeding... the driver panicked, drove away, realised what he’d done, thought better of it, and reversed down Moorgate, in rush-hour traffic, back to where a rather surprised-looking police officer immediately placed him under arrest. In both of those cases I complained, Uber investigated (something they can do really easily, thanks to GPS tracking of exactly what all their vehicles are doing at any moment) and notified me within 24 hours that the drivers in question had been suspended and would not be driving for the company again. I’ve used their app to claim refunds where I was overcharged — but also to reimburse drivers who undercharged me because of technical problems.

Now, here are the two points I think are really important. One — Uber didn’t actually solve any hard problems. They didn’t invent GPS, or cellular phone networks, or draw their own maps of the world’s major cities. They just waited until exactly the right moment — the moment everyone had GPS, and online payments were easy, and smartphones were cheap enough that it was cost-effective to use them as the basis of a ride-sharing application — and then they pounced. Now don’t get me wrong, they did it incredibly well, and the user experience — particularly in the early days — was absolutely first-class. Usability features like being able to photograph your credit card using your phone camera instead of having to type the number in were a game-changer — the kind of feature that people would show off to their friends in the pub just because no-one had ever seen it before. But Uber didn’t solve any hard engineering problems. There are no PageRank algorithms or Falcon 9 reusable rockets being invented here. Uber’s success is down to absolutely first-class customer experience, aggressive expansion, and their willingness to enter target markets without the slightest consideration for how their service might affect the status quo. And their protestations about the ruling threatening 40,000 jobs are a bit rich, given how adamantly they insist that they don’t actually employ any drivers, despite a court ruling to the contrary.

If it hadn’t been them, it would have been someone else, which brings me to my second point. The London taxi cab industry was just crying out for somebody to come in and kick seven bells out of it. I’ve lived in London since 2003, and I’ve taken many, many taxis over the years. For all the talk about “The Knowledge”, I have lost count of the number of times I’ve had to give a black cab driver turn-by-turn directions because they don’t know where I’m going and they’re not allowed to rely on sat-nav. I’ve lost count of the number of times I’ve had to ask a cab driver to stop at a cash machine because they don’t take credit cards. I’ve lost track of the number of hours of my life I’ve spent stood on street corners, in the rain, with more luggage than I can possibly carry home on the bus, waiting for the glow of an orange light that might take me home. Or might just ask where I’m going and drive away with a shake of the head and a ‘no, mate’ because my journey isn’t convenient for them.

The alternative, of course, was minicabs. Booked in advance from a reputable operator, they were generally pretty good. Not always, but most of the time they’d show up on time and take you where you wanted to go. Or, at the end of a long night out, you’d wander up to one of Soho’s many illustrious minicab offices, and end up in the back seat of an interesting-smelling Toyota Corolla wondering if the driver was really the person who’d passed all the necessary tests and checks, or one of their cousins who’d borrowed the cab for the night to earn an extra few quid. (On two separate occasions I’ve been offered this as an explanation for a minicab driver not knowing where they’re going…)

A lot of people are very upset about yesterday’s news — a petition to ‘save Uber’ has attracted nearly half a million signatures since the announcement — but in the long term, I don’t think it really matters whether Uber’s license is renewed or not. There’s no way people in London are going to go back to standing on rainy street corners waving their arms at every orange light that goes past. Uber has changed the game. It’s now a legal requirement that black cabs in London accept payment by card — something which even a few years ago was still hugely controversial. And that’s actually really sad, because once upon a time, London taxis had an international reputation for innovation and excellence. The London ‘black cab’ is a global icon, but it’s also an incredibly well-designed vehicle — high-visibility handholds, wheelchair accessible, capable of carrying five passengers and negotiating the narrow tangled streets of one of the world’s oldest cities. London taxis have been regulated since the 17th century. “The Knowledge” — the exam all London black cab drivers are required to pass, generally regarded as one of the hardest examinations of any profession anywhere in the world — was introduced in the 1850s, after visitors travelling to London for the Great Exhibition complained that their hackney-carriage drivers didn’t know where they were going. One of the great joys of visiting London in the days before sat-nav was hailing a cab, giving the driver the name of some obscure pub halfway across the city, and watching as they’d think for one second, nod, and then whisk you there without further hesitation. But in the age of ubiquitous satellite navigation, who cares whether your driver can remember the way from Narrow Street to Penton Place after you’ve had to stand in the rain for fifteen minutes trying to hail a cab? What Uber did — and did incredibly well — is they thought about all the elements of the passenger experience that happen outside the car. Finding a cab. 
Knowing how much it’s likely to cost. Paying the bill. Recovering lost property. And they made it cheap. They created a user experience that, for the majority of passengers, was easier, cheaper and more convenient than traditional black cabs. It’s no wonder the establishment freaked out.

So what happens now? Maybe Uber win their appeal. Maybe someone else moves into that space. Maybe things get a bit more expensive for passengers — and, frankly, I think they should. I think Uber is too cheap. If you want the luxury of somebody else driving you to your door in a private car — and for most of us, that IS a luxury — then I believe the person providing that service deserves to make a decent living out of providing it, and one of the most welcome features Uber’s introduced recently is the ability to tip drivers via the app.

Uber has taken a stagnant industry that was in dire need of a kick up the arse, and it’s done it. They’re not the only cab-hailing app on the market. Gett, mytaxi (formerly Hailo), Kabbee, private hire firms like Addison Lee — not to mention just phoning a minicab office and asking them to send a car round. In a typical month in London, I’ll use buses, the Tube, national rail services, the Overground, Uber AND black cabs — not to mention a fair bit of walking and cycling. I even have an actual car, which I use about once a month to go to B&Q or IKEA or somewhere. None of them’s perfect, but on the whole, transport in London works pretty well, and I’ve found Uber a really welcome addition to the range of transport services that’s on offer.

But Uber vs TfL is just a tiny taste of what’s coming. I honestly believe that within the next ten years, we’ll be using our smartphones to summon driverless autonomous electric vehicles to take us home after a night out. In all sorts of shapes and sizes, too — after all, why should it take an entire Toyota Prius to transport a 5’2”, 65kg human with a small shoulder bag from Wardour Street to Battersea? We’ll be going to IKEA in a tiny little electric bubble-bike, buying a new kitchen, and taking it home in a driverless van that waits outside whilst we unload it and then burbles off happily into the sunset to pick up the next waiting fare. Never mind the cab wars between the black cab drivers and the minicab drivers — what happens when a robot cab will pick you up anywhere in town and take you home for a quid? A robot cab that doesn’t have a mortgage to pay or kids to feed? That runs on cheap, green energy, that doesn’t get bored or tired or distracted?

There are going to be problems. There are going to be collisions, fatalities, lawsuits, prosecutions, appeals and counter-appeals. And the disruptions will keep coming, faster and faster. Just as Transport for London think they’ve got workable legislation for driverless cars, someone’s going to invent a drone that can fly from Covent Garden to Dulwich carrying a passenger, and whilst they’re busy arguing in court over whether it’s a helicopter or not someone’s gonna shoot one down with a flare pistol and all merry hell’s going to break loose.

What Uber shows us is that technology isn’t going to self-regulate. The digital economy moves too fast for pre-emptive legislation and licensing frameworks. There are fortunes to be made in the months or years between your product hitting the market and the authorities deciding to shut you down. The future of Uber doesn’t depend on getting their TfL license renewed. They’ve been kicked out of entire countries before and it doesn’t seem to have slowed them down. To them this is just a bump in the road. Their future is about being the first company to roll out self-driving cabs, and you can guarantee they’re working, right now, on finding the legislative loopholes in their various target markets that will allow them to launch first and ask questions later.

Black cabs aren’t going away. Buses aren’t going away. Much as I’d love it, they’re not going to decommission all the rolling stock and turn the London Underground network into a giant dodgems track. Technology is going to disrupt, government is going to react — and whilst that model doesn’t always work terribly well, I can’t see any feasible alternative. Engineers are going to solve hard problems. Touch screens, cellular networks, GPS, space travel, battery capacity, biological interfaces, machine learning. Companies like Apple and Google and SpaceX and Tesla are going to put those solutions in our pockets, and in our buildings and on our streets, and companies like Uber and Tinder and AirBnB are going to find ways to turn those solutions into products that you just can’t WAIT to show your friends in the pub.

And, short of the nightmare scenarios my friend Chris has so lovingly documented over on H+Pedia, society will continue. Uber will run surveillance software on your phone that’s a fraction of what the Home Office are doing all day every day, but Greyball isn’t going to cause the downfall of society. Self-driving cars will kill people. Yes, they will. But 3,000 people already die in road traffic accidents every day, and we think that’s normal, and as long as it doesn’t impact our lives directly we just sort of shrug and ignore it and get on with our lives.

I like Uber, and I feel guilty for liking them because I know they do some truly horrible things. I also travel by air, and I eat meat, and I wear leather and I use an iPhone and don’t always recycle. And I used to download MP3s all the time, and feel guilty about it, and then Spotify came along and now I don’t download MP3s any more. And I used to download movies and TV shows, and now I have Amazon Prime and Netflix and I haven’t downloaded a movie in YEARS.

Very few people are extremists. For every militant vegan, there’s someone out hunting their own meat, and a couple of hundred of us who just go with whatever’s convenient. If you’re trying to change the world, ignore the extremists. Don’t outlaw things you think people shouldn’t be doing. Change the world by using technology to deliver compelling alternatives. I want vat-grown burgers and steaks that taste as good as the real thing, and yes, I’ll pay. I want boots and jackets made from synthetic textiles that actually look properly road-worn after a couple of years instead of just falling apart. I want to travel by hypersonic maglev underground train instead of flying. OK, maybe not that last part… I’m writing this from 30,000 feet above the Lithuanian border, the setting sun behind us is catching the wingtips as we soar above an endless sea of cloud, and I’m being reminded how much I love flying. But you could definitely make air travel a LOT more expensive before I stopped doing it completely.

If TfL really want to get rid of Uber, revoking their license isn’t the way to do it. They’ll appeal, you’ll fight, a lot of lawyers will get rich and TfL will either lose and look like an idiot, or win and look like an asshole. How about TfL find the company that’s trying to beat Uber anyway — they’re out there, somewhere — and offer to work with them instead? As Rowland Manthorpe wrote in WIRED yesterday, why don’t we create a socially responsible, employee-owned, ride sharing platform that gives passengers everything Uber does without the institutionalised nastiness and the guilty aftertaste? What ethical transport startup would not jump at the chance to sign up the greatest city in the world as their development partner?

You think the future is about using an iPhone to summon a guy in a Toyota Prius, and the important question is who wrote the app they’re using? Come on, people. This is London calling. We can think a hell of a lot bigger than that.

Monday, 7 August 2017

Generating self-signed HTTPS certificates with subjectAltNames

We provide online services via a bunch of different websites, using federated authentication so that if you sign in to our authentication server, you get a *.mydomain.com cookie that’s sent to any other server on our domain.

We use local wildcard DNS, so there’s a *.mydomain.com.local record that resolves everything to your own machine, and for each developer machine we create a *.mydomain.com.hostname record that’s an alias for hostname, so you can browse to www.mydomain.com.hostname to see code running on another developer’s workstation, or www.mydomain.com.local to view your own local development code.
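To make that concrete — and this is purely illustrative, since the actual records and addresses aren’t spelled out here — the scheme looks something like this in BIND-style zone notation, assuming the local wildcard points at loopback:

```text
; Illustrative only - the address and hostnames are assumptions, not the real records.
*.mydomain.com.local.     IN A      127.0.0.1   ; everything resolves to your own machine
*.mydomain.com.dylan-pc.  IN CNAME  dylan-pc.   ; alias for that developer's workstation
```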

This works pretty well, but getting a local development system set up involves running local versions of several different apps – and since Google Chrome now throws a security error for any HTTPS site whose certificate doesn’t include a “subject alternative name” field, getting a bunch of local sites all happily sharing the same cookies over HTTPS proved a bit fiddly.

So… here’s a batch file that will spit out a bunch of very useful certificates, adapted from this post on serverfault.com.
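For the curious, here’s roughly what that batch file does, expressed as plain shell commands — this is a sketch rather than the exact script, the domain names are just examples, and it uses the config-file route to setting subjectAltName, which works on openssl builds old and new:

```shell
# Sketch of the certificate-generation steps (not the exact batch file).
# Write a minimal OpenSSL config whose v3_req section adds the
# subjectAltName entries that Chrome now insists on:
cat > san.cnf <<'EOF'
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
CN = *.mydomain.com.local
[v3_req]
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.mydomain.com.local
DNS.2 = *.mydomain.com.myhostname
EOF

# Self-signed certificate plus private key, valid for a year:
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout local_and_hostname.key -out local_and_hostname.crt \
  -config san.cnf

# Bundle them into a PKCS12 (.pfx) file for the IIS import step
# (empty export password here; the real script prompts for one):
openssl pkcs12 -export -out local_and_hostname.pfx \
  -inkey local_and_hostname.key -in local_and_hostname.crt -passout pass:

# Sanity check - both DNS entries should show up under this heading:
openssl x509 -in local_and_hostname.crt -noout -text | grep -A1 'Subject Alternative Name'
```

Once the certificate is installed and bound in IIS as described below, browsing to any of the wildcard hostnames over HTTPS should keep Chrome happy.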

How it works

  1. Get openssl.exe working - I use the version that's shipped with Cygwin, installed into C:\Windows\Cygwin64\bin\ and added to my system path.
  2. Run makecert.bat. If you don't want to specify a password, just provide a blank one (press Enter). This will spit out three files:
    • local_and_hostname.crt
    • local_and_hostname.key
    • local_and_hostname.pfx
  3. Double-click the local_and_hostname.crt file, click "Install Certificate", and use the Certificate Import Wizard to import it. Choose "Local Machine" as the Store Location, and "Trusted Root Certification Authorities" as the Certificate Store.
  4. Open IIS, select your machine, open "Server Certificates" from the IIS snapin, click "Import..." in the Actions panel
  5. Select the local_and_hostname.pfx certificate created by the batch file. If you used a password when exporting your PKCS12 (.pfx) file, you'll need to provide it here
  6. Finally, set up your IIS HTTPS bindings to use your new certificate.

Yay! Security! 

Monday, 31 July 2017

Deployment Through the Ages

Vanessa Love just posted this intriguing little snippet on Twitter:

And I got halfway through sticking some notes into the Google doc, and then thought actually this might make a fun blog post. So here’s how deployment has evolved over the 14 years since I first took over the hallowed mantle of [email protected].

2003: Beyond Compare (maybe?)

The whole site was classic ASP – no compilation, no build process, all connection credentials and other settings were managed as application variables in the global.asa file. On a good day, I’d get code running on my workstation, test it in our main target browsers, and deploy it using a visual folder comparison tool. It might have been Beyond Compare; it might have been something else. I honestly can’t remember and the whole thing is lost in the mists of time. But that was basically the process – you’d have the production codebase on one half of your screen and your localhost codebase on the other half, and you’d cherry-pick the bits that needed to be copied across.

Of course, when something went wrong in production, I’d end up reversing the process – edit code directly on live (via UNC share), normally with the phone wedged against my shoulder and a user on the other end; fix the bug, verify the user was happy, and then do a file sync in the other direction to get everything from production back onto localhost. Talk about a tight feedback loop – sometimes I’d do half-a-dozen “deployments” in one phone call. It was a simpler time, dear reader. The rollback plan was to hammer Ctrl-Z until it was working again; disaster recovery was tape backups of the complete source tree and database every night, and the occasional copy’n’paste backup of wwwroot before doing something ambitious.

Incidentally, I still use Beyond Compare almost daily – I have it configured as my merge tool for fixing Git merge conflicts. It’s fantastic.

2005: Subversion

Once we hired a second developer (hey Dan!) the Beyond Compare approach didn’t really work so well any more, so we set up a Subversion server. You’d get stuff running on localhost, test it, maybe share an http://www.spotlight.com.dylan-pc/ link (hooray for local wildcard DNS) so other people could see it, and when they were happy, you’d do an svn commit, log into the production web server (yep, the production web server – just the one!) and do an svn update. That would pull down the latest code, update everything in-place. There was still the occasional urgent production bugfix. One of my worst habits was that I’d fix something on production and then forget to svn commit the changes, so the next time someone did a deployment (hey Dan!) they’d inadvertently reintroduce whatever bug had just been fixed and we’d get upset people phoning up asking why it was broken AGAIN.

2006: FinalBuilder

This is where we start doing things with ASP.NET in a big way. I still dream about OnItemDataBound sometimes… and wake up screaming, covered in sweat. Fun times. The code has all long since been deleted but I fear the memories will haunt me to my grave.

Anyway. By this point we already had the Subversion server, so we had a look around for something that would check out and compile .NET code, and went with FinalBuilder. It had a GUI for authoring build pipelines and processes, some very neat features, and could deploy .NET applications to IIS servers. This was pretty sophisticated for 2006. 

2008: test server and msdeploy

After one too many botched FinalBuilder deployments, we decided that a dedicated test environment and a better deployment process might be a good idea. Microsoft had just released a preview of a new deployment tool called MSDeploy, and it was awesome. We set up a ‘staging environment’ – it was a spare Dell PowerEdge server that lived under my desk, and I’d know when somebody accidentally wrote an infinite loop because I’d hear the fans spin up. We’d commit changes to Subversion, FinalBuilder would build and deploy them onto the test server, we’d give everything a bit of a kicking in IE8 and Firefox (no Google Chrome until September 2008, remember!) and then – and this was magic back in 2008 – you’d use msdeploy.exe to replicate the entire test server into production! Compared to the tedious and error-prone checking of IIS settings, application pools and so on, this was brilliant. Plus we’d use msdeploy to replicate the live server onto new developers’ workstations, which was a really fast, easy way to get them a local snapshot of a working live system. For the bits that still ran interpreted code, anyway.

2011: TeamCity All The Things!

By now we had separate dev, staging and production environments, and msdeploy just wasn’t cutting it any more. We needed something that could actually build different deployments for each environment – connection strings, credentials, and so on. Visual Studio had by then added support for XML configuration transforms, so you could create a different config file for every environment, check those into revision control, and get different builds for each environment. I can’t remember exactly why we abandoned FinalBuilder for TeamCity, but it was definitely a good idea – TeamCity has been the backbone of our build process ever since, and it’s a fantastically powerful piece of kit.

2012: Subversion to GitHub

At this point, we’d grown from me, on my own doing webmaster stuff, to a team of about six developers. Even Subversion is starting to creak a bit, especially when you’re trying to merge long-lived branches and getting dozens of merge conflicts, so we start moving stuff across to GitHub. It takes a while – I’m talking months – for the whole team to stop thinking of Git as ‘unnecessarily complicated Subversion’ and really grok the workflow, but we got there in the end.

Our deployment process at this point was to commit to the Git master branch and wait for TeamCity to build and deploy the development version of the package. Once it was tested, you’d use TeamCity to build and deploy the staging version – and if that went OK, you’d build and deploy production. Like every step on this journey, it was better than anything we’d had before, but it had some obvious drawbacks – like the fact that we had several hundred separate TeamCity jobs and no consistent way of managing them all.

2013: Octopus Deploy and Klondike

When we started migrating from TeamCity 6 to TeamCity 7, it became rapidly apparent that our “build everything several times” process… well, it sucked. It was high-maintenance, used loads of storage space and unnecessary CPU cycles, and we needed a better system.

Enter Octopus Deploy, whose killer feature for us was the ability to compile a .NET web application or project into a deployment NuGet package (an “octopack”), and then apply configuration settings during deployment. We could build a single package, and then use Octopus to deploy and configure it to dev, staging and live. This was an absolute game-changer for us. We set up TeamCity to do continuous integration, so that every commit to a master branch would trigger a package build… and before long, our biggest problem was that we had so many packages in TeamCity that the built-in NuGet server started creaking.

Our replacement NuGet server started life as an experimental build of themotleyfool/NuGet.Lucene – which we actually deployed onto a server we called “Klondike” (because klondike > gold rush > get nuggets fast!) – and it worked rather nicely. Incidentally, that NuGet.Lucene library is now the engine behind themotleyfool/Klondike, a full-spec NuGet hosting application – and I believe our internal hostname was actually the inspiration for their project name. That made for some entertaining confusion during the 18 months or so when Klondike existed as a project but we were still running the old NuGet.Lucene codebase on a server called ‘klondike’. It’s OK, we’ve now upgraded it and everything’s lovely.

It was also in 2013 that we started exploring the idea of automatic semantic versioning – I wrote a post in October 2013 explaining how we hacked together an early version of this. Here’s another post from January 2017 explaining how it’s evolved. We’re still working on it. Versioning is hard.

And now?

So right now, our build process works something like this.

  1. Grab the ticket you’re working on – we use Pivotal Tracker to manage our backlogs
  2. Create a new GitHub branch, with a name like 12345678_fix_the_microfleems – where 12345678 is the ticket ID number
  3. Fix the microfleems.
  4. Push your changes to your branch, and open a pull request. TeamCity will have picked up the pull request, checked out the merge head and built a deployable pre-release package (on some projects, versioning for this is completely automated)
  5. Use Octopus Deploy to deploy the prerelease package onto the dev environment. This is where you get to tweak and troubleshoot your deployment steps.
  6. Once you’re happy, mark the ticket as ‘finished’. This means it’s ready for code review. One of the other developers will pick it up, read the code, make sure it runs locally and deploys to the dev environment, and then mark it as ‘delivered’.
  7. Once it’s delivered, one of our testers will pick it up, test it on the dev environment, run it past any business stakeholders or users, and make sure we’ve done the right thing and done it right.
  8. Finally, the ticket is accepted. The pull request is merged, the branch is deleted. TeamCity builds a release package. We use Octopus to deploy that to staging, check everything looks good, and then promote it to production.

And what’s on our wishlist?

  • Better production-grade smoke testing. Zero-footprint tests we can run that will validate common user journeys and scenarios as part of every deployment – and which potentially also run as part of routine monitoring, and can even be used as the basis for load testing.
  • Automated release notes. Close the loop, link the Octopus deployments back to the Pivotal tickets, so that when we do a production deployment, we can create release notes based on the ticket titles, we can email the person who requested the ticket saying that it’s now gone live, that kind of thing.
  • Deployments on the dashboards. We want to see every deployment as an event on the same dashboards that monitor network, CPU, memory, user sessions – so if you deploy a change that radically affects system resources, it’s immediately obvious there might be a correlation.
  • Full-on continuous deployment. Merge the PR and let the machines do the rest.

So there you go – fourteen years’ worth of continuous deployments. Of course, alongside all this, we’ve moved from unpacking Dell PowerEdge servers and installing Windows 2003 on them to running Chef scripts that spin up virtual machines in AWS and shut them down again when nobody’s using them – but hey, that’s another story.

Thursday, 27 July 2017

Securing Blogger with CloudFlare and HTTPS

As you may have read, life is about to get a whole lot harder for websites without HTTPS. Now this site is hosted on Blogger – I used to run my own MovableType server, but I realised I was spending way more time messing around with the software than I was actually writing blog posts, so I shifted the whole thing across to Blogger about a decade ago and never really looked back.

One of the limitations of Blogger is that it doesn’t support HTTPS if you’re using custom domains – there’s no way to install your own certificate or anything. So, since Chrome’s about to crank up the warnings for any websites that don’t use HTTPS, I figured I ought to set something up. Enter CloudFlare, who are really rather splendid.

First, you sign up (bonus points to them for NOT forcing you to choose a password that contains a lowercase letter, an uppercase letter, a number, a special character, the poo emoji and the Mongolian vowel separator).

Second, you tell them which domain you want to protect:


They scan all your DNS records, which takes about a minute – and not only is there a nice real-time progress bar keeping you in the loop, they use this opportunity to play a really short video explaining what's going on. I think this is absolute genius.


Finally, after checking it's picked up all your DNS records properly (it had), you tell your domain registrar to update the nameservers for your domain to CloudFlare's DNS servers, give it up to 24 hours, and you're done. Zero downtime, zero service interruption – the whole thing was smooth, simple, and completely free-as-in-beer.

Yes, I realise this does not encrypt content end-to-end. For what we're doing here, this is absolutely fine. It'll secure your traffic against dodgy hotel wi-fi and unscrupulous internet service providers - and if anyone's genuinely intercepting HTTP traffic between CloudFlare and Google, I'm sure they can think of more exciting things to do with it than mess around with my blog posts.

Having done that, I then had to use the Google Chrome console to track down the resources – photos and the odd bit of script – that were being hosted via HTTP, and update them to be HTTPS. The only thing I couldn't work out how to fix was the search bar that's embedded in Blogger's default page layout – it's injected by JavaScript, it's hosted by Google's CDN (so I can't use any of CloudFlare's clever rewriting tricks to fix it), it's stuck inside an IFRAME, and it points to http://www.dylanbeattie.net/search – see the plain HTTP with no S?


After an hour or so of messing around with CSS, I gave up, posted a question on the ProWebmasters Stack Exchange, and – of course – immediately found the solution: go into Blogger, Layout, find the Navbar gadget, click Edit, and there's an option to switch the nav off entirely.

So there you go. Thanks to CloudFlare, https://www.dylanbeattie.net/ now has a green padlock on it. I don't know about you, but I take comfort in that.


Friday, 21 July 2017

Summer 2017 .NET Community Update

Summer here in the UK is normally pretty quiet, but this year there's so much going on around .NET and the .NET community that I thought this would be a great opportunity to do a bit of a round-up and let you all know about some of the great stuff that's going on.

First, there's the news of two new .NET user groups starting up in southern England. Earlier this week, I was down in Bournemouth speaking at the first-ever meetup of the new .NET Bournemouth group, and thoroughly enjoyed it. Three speakers – Stuart Blackler, Tommy Long and me – with talks on leadership, agile approaches to information security, and an updated version of my "happy code" talk I've done at a few conferences already this year. The venue and A/V setup worked flawlessly, there was a strong turnout, and some really good questions and discussion after each of the talks – I think it's going to turn out to be a really engaging group, so if you're in that part of the world, stop by and check them out. Their next few meetup dates are on meetup.com/Net-Bournemouth already.

Next month, Steve Gordon is starting a new .NET South East group based in Brighton, who will be kicking off with their inaugural meetup on August 22nd with Steve talking about Docker for .NET developers.

Brighton based .NET South East user group logo

There's a great post on Steve's blog explaining what he's doing and what he's hoping to get out of the group, and they're also on meetup.com/dotnetsoutheast (and I have to say, they've done an excellent job of branding the Meetup site – nice work!)

It's an exciting time for .NET – between the cross-platform stuff that's happening around Xamarin and .NET Core, new tooling like JetBrains Rider and Visual Studio Code, and the growing number of cloud providers who are supporting C# and .NET Core for building serverless cloud applications, we've come a long, long way from the days of building Windows Forms and databinding in Visual Studio .NET.

If you're interested in really getting to grips with the future of .NET, join us at the Progressive.NET Tutorials here in London in September. With a great line-up of speakers including Julie Lerman, Jon Skeet, Jon Galloway, Clemens Vasters and Rachel Appel – plus Carl Franklin and Rich Campbell from DotNetRocks, and a few familiar faces you might recognise from the London.NET gang – it promises to be a really excellent event. It goes a lot deeper than most conferences – with one day of talks and two days of hands-on workshops, the idea is that attendees don't just go away with good ideas, they actually leave with running code, on their laptops, that they can refer back to when they take those ideas back to the office or to their own projects. Check out the programme, follow #ProgNET on Twitter, and hopefully see some of you there. 

Then on Saturday 16th September – the day after Progressive.NET – is the fourth DDD East Anglia community conference in Cambridge. Their call for speakers is now closed, but voting is open until July 29th – so sign up, vote on the sessions you want to see – or just vote for mine if you can't make your mind up ;) - and hopefully I'll see some of you in Cambridge.

Finally, just in case any readers of this blog DON'T know about the London .NET User Group… yep, we have a .NET User Group! In London! I know, right? We're on meetup.com/London-NET-User-Group, and on Twitter as @LondonDotNet, and we meet every month at SkillsMatter's CodeNode building near Moorgate.

Our next meetup is on August 8th, with Ana Balica talking about the history and future of HTTP and HTTP/2, and Steve Gordon – and on September 12th we've got Rich Campbell joining us for a Progressive.NET special meetup and presenting the History of .NET as you've never heard it before.

New people, new meetups, new platforms and new ideas. Like I said, it's a really exciting time to be part of the .NET community – join us, come to a meetup, follow us online, and let's make good things happen.

Tuesday, 4 July 2017

Use Flatscreens

This started life as a lightning talk for PubConf after NDC in Sydney, back in August 2016… and after quite a lot of tweaking, editing and learning to do all sorts of fun things with Adobe AfterEffects and Premiere, it's finally on YouTube. The inspiration is, of course, "Wear Sunscreen", Baz Luhrmann's 1999 hit song based on an essay written by Mary Schmich. Video footage and stock photography is all credited at the end of the clip, and the music, vocals, video, audio and, well, basically everything else is by me. Happy listening - and don't forget to use flatscreens :)

Ladies and gentlemen of the class of 2017… use flat screens. If I could offer you only one tip for the future, flat screens would be it. The benefits of flat screens have been proved by Hollywood, whereas the rest of my advice has no basis more reliable than my own meandering experience.
I will dispense this advice... now.

Enjoy the confidence and optimism of greenfield projects. Oh, never mind. You will not appreciate the confidence and optimism of greenfield until everything starts going to hell. But trust me, when you finally ship, you'll look back at the code you wrote and recall in a way you can't grasp now how simple everything seemed, and how productive you really were. Your code is not as bad as you imagine.

Don't worry about changing database providers. Or worry, but know that no company that ever adopted an OR/M "in case they needed to switch databases" actually did it. The real problems in your projects are the dependencies you don't control; the leaking air conditioner that floods your data centre at 5pm on the Thursday before Christmas.

Learn one thing every day that scares you.


Don't reformat other people's codebases; don't put up with people who reformat yours.


Don't get obsessed with frameworks. Sometimes they help, sometimes they hurt. It's the user experience that matters, and the user doesn't care how you created it.

Remember the retweets you receive; forget the flame bait. If you succeed in doing this, tell me how.

Keep your old hard drives. Throw away your old network cards.


Don't feel guilty if you don't understand F#. Some of the most productive junior developers I've worked with didn't know F#. Some of the best systems architects I know still don't.

Write plenty of tests.

Be kind to your keys; you'll miss them when they're gone.

Maybe you have a degree; maybe you don't. Maybe you have an open source project; maybe you won't. Maybe you wrote code that flew on the Space Shuttle; maybe you worked on Microsoft SharePoint. Whatever you do, keep improving, and don't worry where your next gig is coming from. There's a big old world out there, and they're always going to need good developers.

Look after your brain. Don't burn out, don't be afraid to take a break. It is the most powerful computer you will ever own.

Launch, even if you have no users but your own QA team.

Have a plan, even if you choose not to follow it.

Do NOT read the comments on YouTube: they will only make you feel angry.

Cache your package dependencies; you never know when they'll be gone for good.

Read your log files. They're your best source of information, and the first place you'll notice if something's starting to go wrong.

Understand that languages come and go, and that it's the underlying patterns that really matter. Work hard to fill the gaps in your knowledge, because the wiser you get, the more you'll regret the things you didn't know when you were young.

Develop in x86 assembler once, but stop before it makes you smug; develop in Visual Basic once, but stop before it makes you stupid.


Accept certain inalienable truths. Your code has bugs, you will miss your deadlines, and you, too, might end up in management. And when you do, you'll fantasize that back when you were a developer, code was bug-free, deadlines were met, and developers tuned their database indexes.

Tune your database indexes.

Don't deploy your code without testing it. Maybe you have a QA team. Maybe you have integration tests. You never know when either one might miss something.

Don't mess too much with your user interface, or by the time you ship, it will look like a Japanese karaoke booth.

Be grateful for open source code, but be careful whose code you run. Writing good code is hard, and open source is a way of taking bits from your projects folder, slapping a readme on them, and hoping if you put them on GitHub somebody else will come along and fix your problems.

But trust me on the flat screens.

Wednesday, 28 June 2017

Interview with Channel 9 at NDC Oslo

I was in Oslo earlier this month, where – as well as doing the opening keynote, a couple of talks, a workshop on hypermedia systems and PubConf – I had the chance to chat with Seth Juarez from Channel 9 about code, culture, speaking at conferences, and… all kinds of things, really.

The interview's here, or you can watch it over on Channel9.msdn.com - thanks Seth and co for taking the time to put this together!

Saturday, 13 May 2017

Interview with habrahabr.ru about HTTP APIs in .NET

Next week I'll be in Russia, where I'm speaking about HTTP APIs and REST at the DotNext conference in Saint Petersburg. As part of this event, I've done an interview with the Russian tech site Хабрахабр about the history and future of API development on the web and in Microsoft.NET. The interview's available on their site habrahabr.ru (in Russian), but for readers who are interested but can't read Russian, here's the original English version.

Cathedral in Saint Petersburg, Russia. goodfreephotos.com / Photo by DEZALB.

Q: What kind of APIs are you designing? Where does API design fit into software development?

That’s kind of an interesting question, because I think one of the biggest misconceptions in software is that designing APIs is an activity that happens separately to everything else. Sure, there are certain kinds of API projects – particularly things like HTTP APIs which are going to be open to the public – where it might make sense to consider API design as a specific piece of work. But the truth is that most developers are actually creating APIs all the time – they just don’t realise they’re doing it. Every time you write a public method on one of your classes or choose a name for a database table, you’re creating an interface – in the everyday English sense of the word – that will end up being used by other developers at some point in the future. Other people on your team will use your classes and methods. Other teams will use your data schema or your message format.

What’s interesting, though, is that once developers realize that the code they’re working on will probably form part of an API, they tend to go too far in the other direction. They’ll implement edge cases and things that they don’t actually need, just in case somebody else might need it later. I think there’s a very fine balance, and I think the key to that balance is to be very strict about only building the features that you need right now, but to make those things as reusable and self-explanatory as you can. There’s a great essay by Pieter Hintjens, Ten Rules for Good API Design, that goes into more detail about these kinds of ideas.

The biggest API project I’m working on at the moment is a thing I’m building at Spotlight in the UK, where I work. It’s a hypermedia API exposing information about professional actors, acting jobs in film and television, and various other kinds of data used in the casting industry. We’re building it in the architectural style known as REST – and if you’re not sure what REST is, you should come to my talk at DotNext in Saint-Petersburg and learn all about it. There’s lots of different patterns for creating HTTP APIs – there’s REST, there’s GraphQL, there’s things like SOAP and RPC – but for me, the biggest appeal of REST is that I think the constraints of the RESTful style lead to a natural decoupling of the concepts and operations that your API needs to support, which makes it easier to change things and evolve the API design over time.

Q: One of the most famous applications "killed" by backward compatibility is IE. The browser's problem was that too many applications required backward compatibility with it. The problem was solved by introducing a new, updatable application – Edge – which supports all the new standards. Can you give a piece of advice on how not to get caught in that backward-compatibility trap? For example, could modularity without layers help? Maybe there's a way to replace an API with a RESTful API, service-oriented architecture or something else?

I’ve been building web applications for a long, long time – I wrote my first HTML page a couple of years before Internet Explorer even existed, back when the only browsers were NCSA Mosaic and Erwise. It’s fascinating to look back at the history of the web, and how the web that exists today has been shaped and influenced by things like the evolution of Internet Explorer – and you’re absolutely right; one of the reasons why Microsoft has introduced a completely new browser, Edge, in the latest versions of Windows is that Internet Explorer’s commitment to backwards-compatibility has made it really difficult to implement support for modern web standards alongside the existing IE codebase.

Part of the reason why that backwards compatibility exists is that, around the year 2000, there was a massive shift in the way that corporate IT systems were developed. There are countless corporations who have bespoke applications for doing all sorts of business operations – stock control, inventory, HR, project management, all kinds of things. Way back in the 1980s and early 1990s, most of them used a central mainframe system and employees would have to use something like a terminal emulator to connect to that central server, but after the first wave of the dotcom boom hit in the late 1990s, companies realised that most of their PCs now had a web browser and a network connection, and so they could replace their old mainframe terminal applications with web applications. Windows had enormous market share at the time, and Internet Explorer was the default browser on most Windows PCs, so lots of organizations built intranet web applications that only had to work on a specific version of Internet Explorer. Sometimes they did this to take advantage of specific features, like ActiveX support; more often I think they just did it to save money, because it meant they didn’t have to do cross-browser testing. This happened with some pretty big commercial applications as well; as late as 2011, Microsoft Dynamics CRM still offered no support for any browser other than Internet Explorer.

So you’ve got all these companies who have invested lots of time and money in building applications that only work with Internet Explorer. Those applications aren’t built using web standards or progressive enhancement or with any notion of ‘forward compatibility’ – they’re explicitly targeting one version of one browser running on one operating system. And so when Microsoft releases a new version of Internet Explorer, those applications fail – and the companies don’t want to invest in upgrading their legacy intranet applications, so they blame the browser. So we end up with this weird situation where here in 2017, Microsoft are still shipping Internet Explorer 11, which has a compatibility mode where it switches back to the IE9 rendering engine but sends a user agent string claiming that it's IE7. Meanwhile, everyone I know uses Google Chrome or Safari for all their web browsing – but still has an IE shortcut on their desktop for when they have to log in to one of those legacy systems.

So… to go back to the original question: is there anything Microsoft could have done to avoid this trap? I think there’s a lot of things they could have done. Building IE from the ground up with a modular rendering engine, so that later versions could selectively load the appropriate engine for rendering a particular website or application. They could have made more effort to embrace the web standards that existed at the time, instead of implementing ad-hoc support for things like the MARQUEE tag and ActiveX plugins, which would have avoided the headache of having to support these esoteric features in later versions. The point is, though, none of this mattered at the time. Their focus – the driving force behind the early versions of Internet Explorer – was not to create a great application with first-class support for web standards. They were trying to kill Netscape Navigator and win market share – and it worked.

Q: Let’s imagine someone is going to introduce an API. They collect some requirements, propose a version and get feedback. That’s a rather simple and straightforward process. But are there any hidden obstacles down the road?

Always! Requirements are going to change – in fact, one of the biggest mistakes you can make is to try and anticipate those changes and make your design ‘future-proof’. Sometimes that pays off, but what mostly happens is that you end up with a much more complicated design purely because you’re trying to anticipate those future changes. Those obstacles are often things outside your control. There’s a change to the law that means you need to expose certain data in a different way. There’s a change to one of the other systems in your organization, or one of your cloud hosting providers announces that they’re deprecating a particular feature that you were relying on.

The best thing to do is to identify something simple and usable, ship it, and get as quickly as you can to a point where your API is stable, there’s no outstanding technical debt, and your team is free to move on to the next thing. That way when you do encounter one of those ‘hidden obstacles’, you have a stable codebase to use as a basis for your solution, and you have a team who have the time and the bandwidth to deal with it. And if by some stroke of luck you don’t hit any hidden obstacles, then you just move on to the next thing on your backlog.

Q: To continue with API design: we’ve released v1.0 of our API and now v1.1 is approaching. I believe many of us have noticed URLs like http://example.com/v1/test and http://example.com/v1.1/test. What are the best practices (a couple of points) you can think of that can help a developer design a good v1.1 API with respect to v1.0?

It’s worth reading up on the concept of semantic versioning (SemVer), and taking the time to really understand the distinction between major, minor and patch versions. SemVer says that you shouldn’t introduce any breaking changes between version x.0 and version x.1, so the most important thing is to understand what would constitute a breaking change for your particular API.

If you’re working with HTTP APIs that return JSON, for example, a typical non-breaking change would be adding a new data field to one of your resources. Clients that are using version 1.1 and are expecting to see this additional field can take advantage of it, whereas clients that are still using version 1.0 can just discard the unrecognised property.
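To make that concrete, here's a minimal sketch (in Python, with invented field names – this isn't any real API) of why an additive change doesn't break a tolerant v1.0 client:

```python
# Hypothetical v1.0 and v1.1 responses for the same resource; the
# extra 'artist' field is the kind of additive, minor-version change
# SemVer allows.
v1_0 = {"id": 42, "name": "Mona Lisa"}
v1_1 = {"id": 42, "name": "Mona Lisa", "artist": "Leonardo da Vinci"}

def read_client_view(resource):
    """A tolerant v1.0 client: read only the fields it knows about
    and silently discard anything unrecognised."""
    known_fields = {"id", "name"}
    return {k: v for k, v in resource.items() if k in known_fields}

# The old client sees exactly the same thing either way, so adding
# the field is a non-breaking change.
assert read_client_view(v1_0) == read_client_view(v1_1)
```

The same idea applies in any serialisation format: as long as old clients ignore what they don't recognise, you can add fields without a major version bump.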

There’s a related question about how you should manage versioning in your APIs. One very common solution is to put the version number in the URL – api.example.com/v1/ as opposed to api.example.com/v1.1/ – but if you’re adhering to the constraints of a RESTful system, you really need to understand whether the change in version represents a change in the underlying resource or just its representation. Remember that a URI is a Uniform Resource Identifier, and so we really shouldn’t be changing the URI that we use to refer to the same resource.

For example, say we have a resource api.example.com/images/monalisa. We could request that resource as a JPEG (Accept: image/jpeg) or as a PNG (Accept: image/png), or ask the server whether it has a plain-text representation (Accept: text/plain) – but these are just different representations of the same underlying resource, and so they should all use the same URI.
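Here's a rough sketch of what that negotiation might look like on the server side – the renderers and URI are invented, and real media-type handling (q-values, wildcards) is omitted:

```python
# One URI, several representations, chosen by the Accept header.
# The 'renderers' here are placeholders, not real image encoders.
RENDERERS = {
    "image/jpeg": lambda resource: f"<JPEG bytes of {resource}>",
    "image/png":  lambda resource: f"<PNG bytes of {resource}>",
    "text/plain": lambda resource: f"A painting called {resource}",
}

def get_representation(uri, accept):
    resource = uri.rsplit("/", 1)[-1]   # e.g. "monalisa"
    renderer = RENDERERS.get(accept)
    if renderer is None:
        return 406, None                # 406 Not Acceptable
    return 200, renderer(resource)

status, body = get_representation(
    "https://api.example.com/images/monalisa", "text/plain")
```

The URI never changes; only the Accept header (and therefore the representation) does.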

If – say – you’ve completely replaced the CRM system used by your organization, and so “version 1” of a customer represents a record used in the old CRM system and “version 2” represents that same customer after they’ve been migrated onto a completely new platform, then it probably makes sense to treat them as separate resources and give them different URIs.

Versioning is hard, though. The easiest thing to do is never change anything.

Q: .NET Core - what do you think about its API?

When .NET Core was first announced in 2015, back when it was going to be called .NET Core 5.0, it was going to be a really stripped-down, lightweight alternative to the .NET Framework and the Common Language Runtime. That was an excellent idea in terms of making it easier to port .NET Core to different platforms, but it also left a sizable gap between the API exposed by .NET Core and the ‘standard’ .NET/CLR API that most applications are built against.

I believe – and this is just my interpretation based on what I’ve read and people I’ve talked to – that the idea was that .NET Core would provide the fundamental building blocks. It would provide things like threading, filesystem access, network access, and then a combination of platform vendors and the open source community would develop the modules and packages that would eventually match the level of functionality offered by something like the Java Class Library or the .NET Framework. That’s a great idea in principle, but it also creates a chicken-and-egg situation: people won’t build libraries for a platform with no users, but nobody wants to use a platform that doesn’t have any libraries.

So, the decision was made that cross-platform .NET needed a standard API specification that would provide the libraries that users and application developers expected to be available on the various supported platforms. This is .NET Standard 2.0, which is already fully supported by the .NET Framework 4.6.1 and will be supported in the next versions of .NET Core and Xamarin. Of course, .NET Core 1.1 is out, and works just fine, and you can use it right now to build web apps in C# regardless of whether you’re running Windows or Linux or macOS, which is pretty awesome – but I think the next release of .NET Core is going to be the trigger for a lot of framework and package developers to migrate their projects across to .NET Core, which in turn should make it easier for developers and organizations to migrate their own applications.

Tram on Moscow Gate Square in Saint Petersburg. goodfreephotos.com / Photo by Dinamik.

Q: API flexibility vs. API precision. One can design a method API so that it accepts many different types of values – that’s flexibility. Or we can design a method API with lots of rules on the input parameters. Both approaches are valid. Where is the boundary between these approaches? When should I make a ‘strict’ API, and when should I make a more ‘flexible’ design? Don’t forget that you should take backward compatibility into account.

By implementing an API where the method signatures are flexible, all you’re doing is pushing the complexity to somewhere else in your stack. Say we’re building an API for finding skiing holidays, and we have a choice between

DoSearch(SearchCriteria criteria)

and

DoSearch(string resortName, string countryCode, int minAltitude, int maxDistanceToSkiLift)

One of those methods is pretty easily extensible, because we can extend the definition of the SearchCriteria object without changing the method signature – but we’re still changing the behaviour of the system, we’re just not changing that particular method. By contrast, we could add new arguments to our DoSearch method signature, but if we’re working in a language like C# where you can provide default argument values, you won’t break anything by doing that as long as you provide sensible defaults for the new arguments.
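A sketch of the two styles – in Python rather than C#, with illustrative names and a stubbed-out search – might look like this:

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Style 1: a criteria object. The method signature never changes;
# new criteria are added as new optional fields on the object.
@dataclass
class SearchCriteria:
    resort_name: Optional[str] = None
    country_code: Optional[str] = None
    min_altitude: Optional[int] = None
    max_distance_to_ski_lift: Optional[int] = None  # added later; old callers unaffected

def do_search(criteria: SearchCriteria) -> dict:
    # Stand-in for a real search: just echo the criteria that were set.
    return {k: v for k, v in asdict(criteria).items() if v is not None}

# Style 2: explicit parameters. New arguments must arrive with
# sensible defaults, or every existing call site breaks.
def do_search_strict(resort_name: str,
                     country_code: str,
                     min_altitude: int = 0,
                     max_distance_to_ski_lift: int = 1000) -> dict:
    return {"resort_name": resort_name, "country_code": country_code,
            "min_altitude": min_altitude,
            "max_distance_to_ski_lift": max_distance_to_ski_lift}

# A caller written before max_distance_to_ski_lift existed keeps
# working in both styles:
old_style_1 = do_search(SearchCriteria(resort_name="Chamonix", country_code="FR"))
old_style_2 = do_search_strict("Chamonix", "FR")
```

Either way the system's behaviour changes when criteria are added; the criteria object just keeps that change out of the method signature.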

At some point, though, you need to communicate to the API consumers what search parameters are accepted by your API, and there’s lots of ways to accomplish that. If you’re building a .NET API that’s installed as a NuGet package and used from within code, then using XML comments on your methods and properties is a great way to explain to your users what they need to specify when making calls to your API. If your API is an HTTP service, look at using hypermedia and formats like SIREN to define what parameter values and ranges are acceptable.
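As a rough illustration of the hypermedia approach, a SIREN action might advertise the acceptable search parameters like this (the field names are invented, but the name/method/href/fields shape follows the SIREN media type):

```python
# A SIREN-style 'action' as a plain dict: it tells a client what
# request to make and which fields it can supply. The href and
# field names here are hypothetical.
search_action = {
    "name": "search-holidays",
    "method": "GET",
    "href": "https://api.example.com/holidays/search",
    "type": "application/x-www-form-urlencoded",
    "fields": [
        {"name": "resortName", "type": "text"},
        {"name": "countryCode", "type": "text"},
        {"name": "minAltitude", "type": "number"},
    ],
}

# A generic client can read the 'fields' list and build a valid
# request without any hard-coded knowledge of this particular API.
field_names = {f["name"] for f in search_action["fields"]}
```

That's the appeal of hypermedia here: the server, not out-of-band documentation, tells clients which parameters exist.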

I should add that I think within the next decade, we’re going to start seeing a whole different category of APIs powered by machine learning systems, where a lot of the conventional rules of API design won’t apply. It wouldn’t surprise me if we got an API for finding skiing holidays where you just specify what you want in natural language, and so there’s not even a method signature – you just call something like DoSearch(“ski chalet, in France or Italy, 1400m or higher, that sleeps 12 people in 8 bedrooms, available from 18-25 January 2018”) – and the underlying system will work it all out for us. Those sorts of development in machine learning are hugely exciting, but they’re also going to create a lot of interesting challenges for the developers and designers trying to incorporate them into our products and applications. 

Thanks to Alexej Sommer for taking the time to set this up (and for translating my answers into Russian – Спасибо!), and if you're at DotNext next week and want to chat about APIs, hypermedia or any of the stuff in the interview, please come and say hi!