Tuesday, 5 December 2017

Thank you for flying Analysis Airways

If you work in software – or even if you don’t – it’s likely that, at some point, you’ll find yourself working with a team who are completely unfamiliar with your systems. A management consultancy, a development partner, a new supplier. A team of smart, capable people who are hoping to work with you to deliver something, whether that’s process improvements or a reduction in costs or some shiny new product.

A common pattern here is that, at the start of the engagement, they’ll appoint some business analysts to spend a whole lot of time talking with people from your organisation to get a better idea of what it is you do. They sometimes call this ‘gathering requirements’. You’ll know it’s happening when you get half-a-dozen invitations to four-hour ‘workshops’ from somebody you’ve never met, normally with a note saying something like ‘Hey everyone! Let’s get these in the diary!’

Now, there’s a problem here. Asking people what’s happening is almost never the best way to find out what’s actually happening. You don’t get the truth, you get a version of the truth that’s been twisted through a series of individual perspectives, and when you’re using these interviews to develop your understanding of an unfamiliar system, this can lead to an incredibly distorted view of the organisation. Components and assemblies aren’t ranked according to cost, or risk, or complexity. They’re ranked according to how many hours a day somebody spends dealing with them. And when you consider that in these days of scripted infrastructure and continuous deployment, a decent engineer can provision an entire virtual hosting environment in the time it takes to deal with one customer service phone call, what you end up with is a view of your organisation that ranks ‘phone calls’ equal to ‘hosting environment’ in terms of their strategic value and significance.

When you factor in the Dunning-Kruger effect, the errors and omissions, the inevitable confusion about naming things, and the understandable desire to manage complexity by introducing abstractions, you can end up with a very pretty and incredibly misleading diagram that claims to be a ‘high-level view’ of an organisation’s systems.

There’s a wonderful example of this in neurology – a thing called the ‘cortical homunculus’: a distorted representation of the human body where the various parts are magnified based on the density of nerve endings found therein. Looks like this:

(image: the cortical homunculus – https://www.maxplanckflorida.org/fitzpatricklab/homunculus/)

It’s recognisably human, sure. But it’s a grotesque distortion of what a human being actually looks like – brilliant for demonstrating neurology, but if you used it as a model when designing clothes or furniture your customers would be in for one hell of a shock. And we know it’s grotesque, because we know what human beings are supposed to look like – in fact, it’s the difference between the ordinary and the grotesque that makes these cortical homunculi interesting.

The problem with software is that it’s made out of invisible electric magic, and the only way to see it at all is to rely on some incredibly coarse abstractions and some very rudimentary visualisation tools.


(image: Ed Force One – https://ironmaiden.com/ed-force-one)

Imagine, for one second, that we’ve hired some consultants to help us design an aircraft. They send over some business analysts, and book some time with the ‘domain experts’ to talk over the capabilities of the existing system and gather requirements. The experts, of course, being the pilots and cabin crew – which, for a Boeing 747 like Ed Force One, is three flight crew and somewhere around a dozen cabin attendants. They spend a couple of very long days interviewing all these experts; maybe they even have the opportunity to watch them at work in the course of a typical flight.

And then they come up with this: the high-level architectural diagram of a long-range passenger airliner:

(image: the consultants’ high-level architectural diagram of a long-range passenger airliner)

Now, to an average passenger, that probably looks like a pretty reasonable representation of the major systems of a Boeing 747. Right? Take a look. Can you, off the top of your head, highlight the things that are factually incorrect?

That’s why this diagram is dangerous. It’s nicely laid out and easy to understand. It looks good. It inspires trust… and it’s a grotesque misrepresentation of what’s actually happening. Like the cortical homunculus, it’s not actually wrong, but it’s horribly distorted. In this case, the systems associated with the cabin attendants are massively overrepresented – because there are twelve of them, as opposed to three flight crew, so four times as much workshop time and four times as much anecdotal insight. The top-level domains – flight deck, first class, economy class – are based on a valid but profoundly misleading perspective on the systems architecture of an airliner. The avionics and flight control systems are reduced to a footnote based on three days of interviews with the pilots, somebody with a bit of technical knowledge has connected the engines to the pedals (like a car, right?) and the rudder to the steering wheel (yes, a 747 does have a steering wheel), the wings are connected to the engines as a sort of afterthought…

Now, when the project is something tangible – like an office building or a bridge or an airliner – it won’t take long at all before somebody goes ‘um… I hate to say it, but this is wrong. This is so utterly totally wrong I can’t even begin to explain how wrong it is.’ Even the most inexperienced project manager will probably smell a rat when they notice that 20% of the budget for a new transatlantic airliner has been allocated to drinks trolleys and laminated safety cards.

But when the project is a software application – you know, a couple of million moving parts made out of invisible electronic thought-stuff that bounce around the place at the speed of light, merrily flipping bits and painting pixels and only sitting still when you catch one of them in a debugger and poke it line-by-line to see what it does – that moment of clarity might never happen. We can’t see software. We don’t know what it’s supposed to look like. We don’t have any instinct for distinguishing the ordinary from the grotesque. We rely on lines and rectangles, and we sort of assume that the person drawing the diagram knew what they were doing and that somebody else is looking after all the intricate detail that didn’t make it into the diagram.

And remember, nobody here has screwed up. The worst thing about these kinds of diagrams is that they’re produced by competent, honest, capable people. The organisation allocates time for everybody to be involved. The stakeholders answer all the questions as honestly as they can. The consultants capture all of that detail and synthesise it into diagrams and documents, and everybody comes away with the satisfying sense of a job well done.

That’s not to say there’s no value in this process. But these kinds of diagrams are just one perspective on a system, and that’s dangerous unless you have a bunch of other perspectives to provide a basis for comparison. A conceptual model of a Boeing 747 based on running cost – suddenly the engines are a hell of a lot more important than the drinks trolley. A conceptual model based on electrical systems. Another based on manufacturing cost. Another based on air traffic control systems and airport infrastructure considerations. And yes, producing all these models takes a lot more than arranging a week of interviews with people who are already on the payroll, which is why so many projects get as far as that high-level system diagram and then start delivering things.

And why, somewhere in your system you almost certainly have the software equivalent of a hundred-million-dollar drinks trolley.

Thank you for flying Analysis Airways.

Friday, 24 November 2017

Goodbye Spotlight… Hello Skills Matter!

I have some exciting news. I'll be leaving Spotlight at the end of January, to take up a new role as CTO at Skills Matter. After fourteen years at Spotlight, this is a massive change for me, it's a massive change for Spotlight, and (I hope!) it's going to be a massive change for Skills Matter as well - but it's also a very natural next step for me, and a move that I think is going to unlock all sorts of exciting possibilities over the coming years.

I first started working with Spotlight way back in 2000 - in Internet time, that's about a hundred million years ago. My first job after I graduated was working for a company who built data-driven websites in ASP; spotlight.com was the first big commercial site I ever built, and I've been working with them ever since - initially as a supplier, then as webmaster (remember those?), then head of IT, then systems architect. I've come with them on a journey from Netscape Navigator and dial-up modems to smartphones and REST APIs and progressive web apps, and it's been a blast - we've shipped some really excellent projects, we've learned a lot together, and I've had the pleasure of working with awesome people and a lot of very cool technology.

Around ten years ago, up to my eyeballs in ASP.NET WebForms and wondering if everybody found them as unpleasant as I did, I started going along to some of the tech industry events that were happening here in London to get a bit of external perspective. I was at the first Future Of Web Apps conference back in 2007, the first Alt .NET UK 'unconference', the DDD events held at Microsoft a few times a year... and it wasn't long after that that I met Wendy Devolder and Nick Macris, who at the time were still running Skills Matter out of a basement in Sekforde Street, and hiring the crypt underneath St James Church on Clerkenwell Green for bigger events. That's where I did my first-ever technical talk - a fifteen-minute introduction to jQuery which I presented at the Open Source .NET Exchange. And, as the saying goes, I've never really looked back. This year, I've spoken at 20+ tech events in nine countries, from huge international conferences to local user groups around the UK. I'm on the programme committee for NDC London, FullStack and Progressive.NET, I'm helping to run the London .NET User Group, I've started giving training courses and workshops on building hypermedia APIs and scalable systems. And, because I'm one of those kinds of nerds, every time I take part in a tech event I come away from it buzzing with ideas about how to make it better - for attendees, organisers, volunteers, speakers... everybody. The only problem was finding the time to implement those ideas, and so when Wendy got in touch a few months ago to ask if I'd be interested in joining the team at Skills Matter and putting some of those ideas into practice, the timing was just right.

Now, this isn't intended to be a puff-piece. I've known the team here at Skills Matter for many, many years, we’ve done some really excellent things together, and I'm really excited to be joining them. I’ve also spoken to literally hundreds of people whose experience at their conferences and events has been nothing but positive, and that’s played a big part in my decision to come on board here. However, I also know that a few of you have had some frustrating experiences with them in the past - about how they organise their community events, about logistics and speaker invitations for their bigger conferences... even things like having the ‘wrong kind of cider’ in the Space Bar. :) Well, I want to hear all the details. I know we can't please all of the people all of the time, but the fact the team here has hired me means they're up for a bit of spirited discussion and an influx of new ideas, so if you wanna drop me an email or bend my ear over a beer sometime, I'd love to hear from you and see what we can do about it.

For the team at Spotlight, this marks the dawn of a new era. I’ve come dangerously close to becoming a dungeon master, and to me that’s a clear sign that it’s time to hand over the lines and rectangles to somebody new. To quote from Alberto Brandolini’s “The Rise and Fall of the Dungeon Master”:

“I am not suggesting to hire a hitman, but to acknowledge that the project would be better off without the Dungeon Master around. If you’re looking for a paradigm shift, you’ll need a team with a different attitude and emotional bonding with the legacy.”

I'm sad not to be coming with them on the next phase of their journey, but it's clear that Spotlight's future lies along a different path from mine. They've got an absolutely first-class team there, I'm sure they're going to go on to even bigger and better things, and I'm looking forward to catching up with them all over a beer every once in a while and seeing how it's all going.

First and foremost, though, this is a chance for me to get more involved with the global tech community. I absolutely love the part of my life that involves bouncing all over the world talking about tech, sharing ideas and meeting interesting people; and I'll be taking advantage of all those opportunities to chat about what Skills Matter can do to help support the tech community - whether it's open source projects or corporate development teams, tiny meetups or big international conferences. I'll also be working with the software team here to create some really exciting things, and of course I'll be getting even more closely involved in the conferences and meetups that we host here at CodeNode and at various venues around London. For Skills Matter, my experience as a developer, user group organiser and programme committee volunteer - not to mention my connections in the tech industry all over the world - will play a big part in deciding our plans and priorities for the next few years.

And the best part? I even get to work the odd shift behind the bar here at CodeNode once in a while, thus becoming the tech industry’s answer to Neville the Part-Time Barman. So next time you’re round Moorgate of an evening, drop in and say hi.

Exciting times, indeed.

Thursday, 9 November 2017

London .NET User Group Open Mic Night

Last night’s London .NET meetup was an ‘open mic’ session. Rather than inviting speakers to talk for an hour or so as we normally do, we invited members of the group to come along and talk for 10-15 minutes about their own projects, things they’re working on, or just stuff they think is cool. It’s the first open mic session we’ve had in a long while, but based on the success of last night’s event I think we’ll try to make these more of a regular thing in 2018. Quite a few of the speakers who came along were talking about their own .NET open source projects, and showing off some very, very cool things; here’s a quick rundown.

First up was Phil Pursglove, giving us a whistle-stop tour of Cosmos DB, a new database platform that Microsoft are now offering on Azure. I’ve seen a couple of talks this year about Cosmos, and it looks really rather nice. It’s got protocol-level compatibility with MongoDB (what Microsoft call ‘bug compatible’), plus support for SQL and a couple of other language bindings. One of the coolest features of Cosmos is native support for multiple consistency models, allowing you to optimise your own application for your particular requirements - with the ability to override the global consistency model on a per-request basis. There’s a time-limited free trial available here - check it out.
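
To give a rough flavour of that last feature, here’s a minimal sketch using the Azure DocumentDB .NET SDK – the endpoint, key, database and collection names are placeholders rather than anything from Phil’s demo, and it assumes a simple single-partition collection:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class ConsistencyDemo
{
    static async Task Main()
    {
        // Placeholder endpoint and key - substitute your own Cosmos DB account.
        var client = new DocumentClient(
            new Uri("https://my-account.documents.azure.com:443/"), "<primary-key>");

        var docUri = UriFactory.CreateDocumentUri("mydb", "mycollection", "doc-1");

        // By default, reads use the consistency level configured on the account...
        var defaultRead = await client.ReadDocumentAsync(docUri);

        // ...but it can be weakened on a per-request basis via RequestOptions.
        var eventualRead = await client.ReadDocumentAsync(docUri,
            new RequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual });

        Console.WriteLine($"Request charge: {eventualRead.RequestCharge} RUs");
    }
}
```

(As I understand it, an individual request can only weaken consistency relative to the account default, not strengthen it.)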

Next up, Robin Minto gave us a run-through of OWASP ZAP, a proxy-based web security tool created by the Open Web Application Security Project (OWASP). ZAP is beautifully simple; you install it (or fire up the Docker image), it acts as a web proxy whilst you navigate through some of the primary user journeys on your web application, but in the background it’s probing your server and scanning your HTML for a whole range of common security vulnerabilities - and when you’re done, it’ll generate a security report you can share with the rest of your team.

James Singleton - author of ASP.NET Core 2 High Performance - gave us a very cool live demo of some of the cross-platform capabilities of .NET Core 2.0, including using his Windows laptop to cross-compile a web app for ARM Linux and running it live on a Raspberry Pi. What I found really impressive about this is that it didn’t require any .NET framework or runtime install on the Pi - it’s just vanilla (well, raspberry!) Raspbian Linux with a couple of things like libssl, and everything else is included in the self-contained deployment package created by the dotnet tooling.

Ed Thomson, the Git program manager for Visual Studio Team Services, did a great walkthrough of his open source project libgit2sharp - a set of C# language bindings for working with local and remote git repositories. If you’ve ever had to parse the output from the Windows git command line tools, you’ll know how painful it can be - but with libgit2sharp, you can use C# or PowerShell to create automated build tools, manipulate your git repositories and do all sorts of cool stuff.
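
For anyone who hasn’t tried it, here’s a minimal sketch of the sort of thing libgit2sharp gives you – the repository path is a placeholder, and the point is that you’re working with real objects rather than parsed console output:

```csharp
using System;
using System.Linq;
using LibGit2Sharp;

class GitLogDemo
{
    static void Main()
    {
        // Placeholder path - point this at any local clone.
        using (var repo = new Repository(@"C:\src\my-project"))
        {
            Console.WriteLine($"On branch {repo.Head.FriendlyName}");

            // The commit graph is exposed as plain C# objects - no text parsing required.
            foreach (var commit in repo.Commits.Take(5))
            {
                Console.WriteLine(
                    $"{commit.Sha.Substring(0, 7)} {commit.Author.Name}: {commit.MessageShort}");
            }
        }
    }
}
```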

Jason Dryhurst-Smith gave us a demo of his CorrelatorSharp project - “your one-stop shop for context-aware logging and diagnostics” - a library that allows you to track the context of operations across multiple services and operations, including support for frameworks like NLog and RestSharp and a client-side JavaScript library.

Ben Abelshausen came along to show us Itinero, an open-source route planning library for .NET which you can use within your own applications to calculate routes and analyse map data.
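
Based on Itinero’s published samples, a simple route calculation looks something like this – the london.routerdb file is an assumption (you’d build one from an OpenStreetMap extract first), and the coordinates are just illustrative:

```csharp
using System.IO;
using Itinero;
using Itinero.LocalGeo;
using Itinero.Osm.Vehicles;

class RoutingDemo
{
    static void Main()
    {
        // Assumes a router database already built from an OpenStreetMap extract.
        RouterDb routerDb;
        using (var stream = File.OpenRead("london.routerdb"))
        {
            routerDb = RouterDb.Deserialize(stream);
        }

        var router = new Router(routerDb);

        // Calculate the fastest car route between two points.
        var route = router.Calculate(Vehicle.Car.Fastest(),
            new Coordinate(51.5074f, -0.1278f),
            new Coordinate(51.5033f, -0.1195f));

        // Export the result as GeoJSON for display on a map.
        File.WriteAllText("route.geojson", route.ToGeoJson());
    }
}
```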

Finally, we had Matt Ellis from JetBrains giving us ‘Ten JetBrains Rider Debugging Tips in Ten Minutes’ - including some really neat stuff like cross-platform support for DebuggerDisplay attributes and the ability to define interdependent breakpoints in your code.

Huge thanks to all our speakers and to everyone who came along - it’s great to see so much enthusiasm and activity going on in .NET open source, and with .NET Core 2.0 going fully cross-platform and tools like Rider and Visual Studio Code, it’s only going to get more interesting.


See you all at the next one! 

Tuesday, 17 October 2017

Why You Weren’t Picked For NDC London

Over the past two weeks, a small team of us has been putting together the agenda for NDC London 2018, which is happening at the Queen Elizabeth II Centre in Westminster this coming January. As somebody who’s submitted a lot of conference talks over the years, I know how exciting it is getting the email confirming that you’ve been accepted — and how demoralising it can be getting the email saying ‘We’re sorry to say that…’

Well, now that I’ve been on the other side of that process a few times, I see things very differently, so I wanted to take this chance to tell you all how the selection process actually works, why you didn’t get picked, and why it shouldn’t put you off.

First, though, I want to say a huge thank you to everybody who submitted, and congratulations to the people who have been selected. It’s been a lot of work to pull everything together, but I’m really happy about the programme of amazing people and top-class talks that we’ve got this year — and I’m particularly excited that we’ve been able to include so many great speakers from the developer community who will be speaking at NDC for the first time. Welcome, all of you.

For all the speakers who didn’t get selected this time around, I really hope this article might help you understand why. (TL;DR: we just had too many good talks submitted!)

First, a basic rule of conferences — they have to generate a certain amount of money in order to operate. Yes, there’s community events like DDD that are free to attend and are run with minimal sponsorship — and I think those events are every bit as valuable to our community as the big commercial conferences — but once you start dealing with international speakers and multi-track events running across several days, you need to start thinking about commercial considerations. Conferences generate revenue from two sources — ticket sales and sponsorship. Both of those boil down to creating an event that people want to attend (and, in many cases, can persuade their boss is worth the cost and the time), so that’s what we, as the programme committee, are trying to do.

I should mention that on every conference I’ve worked with, the programme committee has consisted entirely of volunteers — four or five people from within the industry who are happy to donate their free time to help put the programme together. They’ll normally get a complimentary ticket to the conference in return, but it’s not a job, and nobody’s getting paid to do it.

Most software conferences run a public ‘call for submissions’ (aka ‘call for papers’ or CfP) where anybody interested in speaking can submit talks for consideration. Once the CfP has closed, the programme committee has the unenviable task of going through all the talks that have been submitted and picking the ones that they think should be included. NDC London has five tracks of one-hour talks, over three days. Once you’ve allowed for keynotes and lunch breaks, that gives you fewer than 100 talk slots. We had 732 sessions submitted. If we’d said ‘yes’ to all of them, we’d have ended up with a conference lasting three weeks with a ticket price of over £7,500… good luck getting your manager to sign off on attending that one.

So, how do you pick the top 100 talks from 732 submissions? Well, here’s what we look for.

First — quality. This one doesn’t really help much, because the vast majority of the talks submitted to a high-profile event like NDC are excellent, but there might be a handful that you can reject outright. These tend to be the talks that look like sales pitches — single-vendor solutions submitted by somebody who works for the vendor in question. You want to use a conference to sell your product? No problem — get a booth, or buy a sponsorship package. But we’re not going to include your sales pitch at the expense of someone else’s submission.

It’s also worth pointing out that when we get to the final round of submissions, when it’s getting really hard to make a call, we’ll look very critically at the quality of the submission itself. We’ll look for a good title, a clear, concise summary, a succinct speaker biography and a good-quality headshot — basically, a submission that makes it really clear who you are, what you’re talking about, why it’ll be interesting, and that you’re going to put in the effort required to deliver a great presentation. There’s no hard-and-fast rule to this; there are some excellent articles out there about how to write good proposals. My own rule of thumb when I’m submitting is 100 characters for the title, 2,000 for the abstract and 1,000 for the speaker bio, and I often use Ted Neward’s approach of ‘pain and promise’ — ‘here’s a problem you’ve had, here’s an idea that might fix it’ — when I’m writing proposals. Every speaker is different, and we all have our own style, but if your talk summary is only one or two lines or you’ve pasted in your entire professional CV instead of writing a short speaker bio, you may well get rejected in favour of somebody who’s clearly put more time into drafting their submission.

Second — relevance. Conferences have a target audience; NDC has evolved from being primarily a .NET conference into an event with a much broader scope, but we know the kind of developers who normally come to NDC London and what sort of things they’re interested in. For example, this year we decided to decline any C++ talks, purely because there’s very few C++ developers in our target audience. That said, we do try to include things that might be interesting to our audience, even if they’re not immediately relevant. Topics like Kotlin and Elm are still relatively esoteric in terms of numbers of users, but the industry buzz around them means we try to include things like this on the programme because we’re confident that people will want to see them.

Third — diversity. We could very easily have filled the entire programme with well-known white male speakers talking about JavaScript — but we think a good conference should feature diverse speakers, presenting diverse topics, with a range of presentation styles. We’re not just talking about gender and ethnicity, either — we want to see new faces alongside regular speakers, and bleeding-edge technology topics alongside established patterns and practices. That said, we would never accept a substandard talk purely for the sake of diversity; the way to get more balance in the programme without compromising quality is to start with a wider range of submissions. Many conferences suffer from a lack of diversity in the talks that get submitted — it’s just the same people submitting the same topics year after year — so throughout the year people like me go to tech events and conferences, look out for great speakers and talks that haven’t appeared at NDC before, and invite them to submit.

Finally, there’s the invited speakers — the people who we definitely want to see on the programme. They fall into two broad camps. There’s the big names — the people with 100K+ Twitter followers, the published authors, the high-profile open source project leaders. Now, being famous-on-the-internet doesn’t automatically mean you’re a brilliant speaker — speaking for an hour in front of a few hundred people takes a very different set of skills from running an open-source project or writing a book — but we’re lucky in our industry to have a lot of people who are well known, well respected, and really, really good on a conference stage. And these people are important, because their involvement gives us a fantastic signal boost. More interest translates into more ticket sales — which means more budget for covering speaker travel, catering, facilities, the entertainment at the afterparty, and all the other things that make a good conference such a positive experience for the attendees.

The programme committee also invites a lot of new speakers because it’s a great way of getting some new faces and new ideas on to the programme. As I mentioned above, many of us spend a lot of time going to user groups, meetups and conferences, and when I see somebody deliver a really good talk, I’ll invariably get in touch afterwards and ask if they’d be interested in submitting it to an event like NDC.

So, those are our constraints. We want to deliver a balanced mix of big names and new faces. We want to promote diversity in an industry that’s still overwhelmingly white and male (never mind the ongoing fascination with JS frameworks and microservices). We want to offer a compelling combination of established technology and interesting esoterica; of stuff that’s interesting, stuff that’s relevant, and stuff that’s fun…

And we get to pick fewer than 100 talks out of 732 submissions, which means whatever we do, we’re going to be sending a whole lot of emails at the end of it saying ‘Sorry, your talk wasn’t accepted…’ — and that’s not much fun, because we’re turning down good content from good speakers. In many cases those people are good friends as well — the tech industry is a very friendly place and I count the people I’ve met through it among my closest friends. But that’s how it works.

It’s humbling to be part of an industry where so many talented people are willing to invest their time in sharing their own expertise. Speaking at conferences is a really rewarding experience, but it’s also a huge amount of work, preparation, rehearsal, logistics, and time away from home. Without speakers, there’d be no conferences — but, as I hope I’ve explained here, there are more excellent people and talks out there than any one event can ever hope to accommodate.

The thing is, there’s absolutely no shortage of great conferences and events. Don’t be discouraged. Keep submitting. If you didn’t make the cut for NDC London, submit to Oslo and Sydney. And BuildStuff, and DevSum, and Øredev, and FullStack, and Progressive.NET, and DotNext, and Sela Developer Practice, and SDD, and QCon, and WebSummit. And if none of those grab you, sign up for services like The Weekly CFP and Technically Speaking that will email you about conferences that are looking for speakers and submissions.

Finally, you know the one thing that every really good speaker I’ve ever seen has in common? It’s that they work hard on interesting things, and they love what they do. Maybe it’s their job. Maybe they lead an open-source project, or run a user group, or they’re writing a book. But if you love what you do and you want to share that enthusiasm, it’ll happen. Just give it time.

Saturday, 23 September 2017

London, London, Uber Alles

I read with some interest yesterday that Transport for London (TfL) are not renewing Uber’s license to operate in London. TfL have cited concerns over Uber’s driver screening and background checks, and Uber’s use of ‘Greyball’, a software component designed and built by Uber to bypass all sorts of regulatory mechanisms, including using a phone’s GPS to recognise when the phone is being used at Apple HQ so that the Apple engineers who review iOS applications won’t see the hidden features that Apple aren’t supposed to know about.

I use Uber a lot. Their service used to be absolutely excellent, and is still pretty good. It’s not as good as it used to be. It takes longer to get a car than it used to, particularly in central London. But, as a passenger (and yes, I know I’m a white male passenger, although some of my Uber experience dates from a period when I did have quite serious mobility issues whilst recovering from a skiing injury), I have found Uber to be a really good service. I’ve used it all over the world — London, Bristol, Brussels, Kyiv, Saint Petersburg, and as of last night, Minsk. I missed it in Tel Aviv, where it’s been outlawed and everyone uses Gett instead, although Gett in Israel appears to operate exactly the same as Uber does in London so I’m not quite sure what the distinction is.

I’ve had some seriously impressive experiences as an Uber customer. In London, I once left my guitar in the back of an Uber that dropped me home at 4am after a horribly delayed flight. I realised within minutes. I used the app to phone the driver, who immediately turned around and brought it back, and was very taken aback when I insisted on paying him for the extra journey. In Kyiv I’ve used Uber to travel safely from a place I couldn’t pronounce to a place I couldn’t find, in a city where I couldn’t speak the language or even read the alphabet, with a driver who spoke no English, and I’ve done it with absolutely no fear of getting ripped off or robbed.

It’s not all been smooth. In Bristol I once had an Uber driver — sorry, “partner-driver” — who missed the turning four times in a row and then explained, giggling, that he’d never driven a cab before and didn’t know how to use satellite navigation. In London I’ve actually been in an Uber car that was pulled over by the police for speeding… the driver panicked, drove away, realised what he’d done, thought better of it and reversed down Moorgate, in rush hour traffic, back to where a rather surprised-looking police officer immediately placed him under arrest. In both of those cases I complained, Uber investigated (something they can do really easily, thanks to GPS tracking of exactly what all their vehicles are doing at any moment) and notified me within 24 hours that the drivers in question had been suspended and would not be driving for the company again. I’ve used their app to claim refunds where I was overcharged — but also to reimburse drivers who undercharged me because of technical problems.

Now, here’s the two points I think are really important. One — Uber didn’t actually solve any hard problems. They didn’t invent GPS, or cellular phone networks, or draw their own maps of the world’s major cities. They just waited until exactly the right moment — the moment everyone had GPS, and online payments were easy, and smartphones were cheap enough that it was cost-effective to use them as the basis of a ride-sharing application — and then they pounced. Now don’t get me wrong, they did it incredibly well, and the user experience — particularly in the early days — was absolutely first-class. Usability features like being able to photograph your credit card using your phone camera instead of having to type the number in were a game-changer — the kind of features that people would show off to their friends in the pub just because no-one had ever seen it before. But Uber didn’t solve any hard engineering problems. There’s no PageRank algorithms or Falcon 9 reusable rockets being invented here. Uber’s success is down to absolutely first-class customer experience, aggressive expansion, and their willingness to enter target markets without the slightest consideration for how their service might affect the status quo. And their protestations about the ruling threatening 40,000 jobs are a bit rich given how adamantly they insist that they don’t actually employ any drivers, despite a court ruling to the contrary.

If it hadn’t been them, it would have been someone else, which brings me to my second point. The London taxi cab industry was just crying out for somebody to come in and kick seven bells out of it. I’ve lived in London since 2003, and I’ve taken many, many taxis over the years. For all the talk about “The Knowledge”, I have lost count of the number of times I’ve had to give a black cab driver turn-by-turn directions because they don’t know where I’m going and they’re not allowed to rely on sat-nav. I’ve lost count of the number of times I’ve had to ask a cab driver to stop at a cash machine because they don’t take credit cards. I’ve lost track of the number of hours of my life I’ve spent stood on street corners, in the rain, with more luggage than I can possibly carry home on the bus, waiting for the glow of an orange light that might take me home. Or might just ask where I’m going and drive away with a shake of the head and a ‘no, mate’ because my journey isn’t convenient for them.

The alternative, of course, was minicabs. Booked in advance from a reputable operator, they were generally pretty good. Not always, but most of the time they’d show up on time and take you where you wanted to go. Or, at the end of a long night out, you’d wander up to one of Soho’s many illustrious minicab offices, and end up in the back seat of an interesting-smelling Toyota Corolla wondering if the driver was really the person who’d passed all the necessary tests and checks, or one of their cousins who’d borrowed the cab for the night to earn an extra few quid. (On two separate occasions I’ve been offered this as an explanation for a minicab driver not knowing where they’re going…)

A lot of people are very upset about yesterday’s news — a petition to ‘save Uber’ has attracted nearly half a million signatures since the announcement — but in the long term, I don’t think it really matters whether Uber’s license is renewed or not. There’s no way people in London are going to go back to standing on rainy street corners waving their arms at every orange light that goes past. Uber has changed the game. It’s now a legal requirement that black cabs in London accept payment by card — something which even a few years ago was still hugely controversial. And that’s actually really sad, because once upon a time, London taxis had an international reputation for innovation and excellence. The London ‘black cab’ is a global icon, but it’s also an incredibly well-designed vehicle — high-visibility handholds, wheelchair accessible, capable of carrying five passengers and negotiating the narrow tangled streets of one of the world’s oldest cities. London taxis have been regulated since the 17th century. “The Knowledge” — the exam all London black cab drivers are required to pass, generally regarded as one of the hardest examinations of any profession anywhere in the world — was introduced in the 1850s, after visitors travelling to London for the Great Exhibition complained that their hackney-carriage drivers didn’t know where they were going. One of the great joys of visiting London in the days before sat-nav was hailing a cab, giving the driver the name of some obscure pub halfway across the city, and watching as they’d think for one second, nod, and then whisk you there without further hesitation. But in the age of ubiquitous satellite navigation, who cares whether your driver can remember the way from Narrow Street to Penton Place after you’ve had to stand in the rain for fifteen minutes trying to hail a cab? What Uber did — and did incredibly well — is they thought about all the elements of the passenger experience that happen outside the car. Finding a cab. Knowing how much it’s likely to cost. Paying the bill. Recovering lost property. And they made it cheap. They created a user experience that, for the majority of passengers, was easier, cheaper and more convenient than traditional black cabs. It’s no wonder the establishment freaked out.

So what happens now? Maybe Uber win their appeal. Maybe someone else moves into that space. Maybe things get a bit more expensive for passengers — and, frankly, I think they should. I think Uber is too cheap. If you want the luxury of somebody else driving you to your door in a private car — and for most of us, that IS a luxury — then I believe the person providing that service deserves to make a decent living out of providing it, and one of the most welcome features Uber’s introduced recently is the ability to tip drivers via the app.

Uber has taken a stagnant industry that was in dire need of a kick up the arse, and it’s done it. They’re not the only cab hailing app in the market. Gett, mytaxi (formerly Hailo), Kabbee, private hire firms like Addison Lee — not to mention just phoning a minicab office and asking them to send a car round. In a typical month in London, I’ll use buses, the Tube, national rail services, the Overground, Uber AND black cabs — not to mention a fair bit of walking and cycling. I even have an actual car, which I use about once a month to go to B&Q or IKEA or somewhere. None of them’s perfect, but on the whole, transport in London works pretty well, and I’ve found Uber a really welcome addition to the range of transport services that’s on offer.

But Uber vs TfL is just a tiny taste of what’s coming. I honestly believe that within the next ten years, we’ll be using our smartphones to summon driverless autonomous electric vehicles to take us home after a night out. In all sorts of shapes and sizes, too — after all, why should it take an entire Toyota Prius to transport a 5’2”, 65kg human with a small shoulder bag from Wardour Street to Battersea? We’ll be going to IKEA in a tiny little electric bubble-bike, buying a new kitchen, and taking it home in a driverless van that waits outside whilst we unload it and then burbles off happily into the sunset to pick up the next waiting fare. Never mind the cab wars between the black cab drivers and the minicab drivers — what happens when a robot cab will pick you up anywhere in town and take you home for a quid? A robot cab that doesn’t have a mortgage to pay or kids to feed? That runs on cheap, green energy, that doesn’t get bored or tired or distracted?

There are going to be problems. There are going to be collisions, fatalities, lawsuits, prosecutions, appeals and counter-appeals. And the disruptions will keep coming, faster and faster. Just as Transport for London think they’ve got workable legislation for driverless cars, someone’s going to invent a drone that can fly from Covent Garden to Dulwich carrying a passenger, and whilst they’re busy arguing in court over whether it’s a helicopter or not someone’s gonna shoot one down with a flare pistol and all merry hell’s going to break loose.

What Uber shows us is that technology isn’t going to self-regulate. The digital economy moves too fast for pre-emptive legislation and licensing frameworks. There’s fortunes to be made in the months or years between your product hitting the market and the authorities deciding to shut you down. The future of Uber doesn’t depend on getting their TfL license renewed. They’ve been kicked out of entire countries before and it doesn’t seem to have slowed them down. To them this is just a bump in the road. Their future is about being the first company to roll out self-driving cabs, and you can guarantee they’re working, right now, on finding the legislative loopholes in their various target markets that will allow them to launch first and ask questions later.

Black cabs aren’t going away. Buses aren’t going away. Much as I’d love it, they’re not going to decommission all the rolling stock and turn the London Underground network into a giant dodgems track. Technology is going to disrupt, government is going to react — and whilst that model doesn’t always work terribly well, I can’t see any feasible alternative. Engineers are going to solve hard problems. Touch screens, cellular networks, GPS, space travel, battery capacity, biological interfaces, machine learning. Companies like Apple and Google and SpaceX and Tesla are going to put those solutions in our pockets, and in our buildings and on our streets, and companies like Uber and Tinder and AirBnB are going to find ways to turn those solutions into products that you just can’t WAIT to show your friends in the pub.

And, short of the nightmare scenarios my friend Chris has so lovingly documented over on H+Pedia, society will continue. Uber will run surveillance software on your phone that’s a fraction of what the Home Office are doing all day every day, but Greyball isn’t going to cause the downfall of society. Self-driving cars will kill people. Yes, they will. But 3,000 people already die in road traffic accidents every day, and we think that’s normal, and as long as it doesn’t impact our lives directly we just sort of shrug and ignore it and get on with our lives.

I like Uber, and I feel guilty for liking them because I know they do some truly horrible things. I also travel by air, and I eat meat, and I wear leather and I use an iPhone and don’t always recycle. And I used to download MP3s all the time, and feel guilty about it, and then Spotify came along and now I don’t download MP3s any more. And I used to download movies and TV shows, and now I have Amazon Prime and Netflix and I haven’t downloaded a movie in YEARS.

Very few people are extremists. For every militant vegan, there’s someone out hunting their own meat, and a couple of hundred of us who just go with whatever’s convenient. If you’re trying to change the world, ignore the extremists. Don’t outlaw things you think people shouldn’t be doing. Change the world by using technology to deliver compelling alternatives. I want vat-grown burgers and steaks that taste as good as the real thing, and yes, I’ll pay. I want boots and jackets made from synthetic textiles that actually look properly road-worn after a couple of years instead of just falling apart. I want to travel by hypersonic maglev underground train instead of flying. OK, maybe not that last part… I’m writing this from 30,000 feet above the Lithuanian border, the setting sun behind us is catching the wingtips as we soar above an endless sea of cloud, and I’m being reminded how much I love flying. But you could definitely make air travel a LOT more expensive before I stopped doing it completely.

If TfL really want to get rid of Uber, revoking their license isn’t the way to do it. They’ll appeal, you’ll fight, a lot of lawyers will get rich and TfL will either lose and look like an idiot, or win and look like an asshole. How about TfL find the company that’s trying to beat Uber anyway — they’re out there, somewhere — and offer to work with them instead? As Rowland Manthorpe wrote in WIRED yesterday, why don’t we create a socially responsible, employee-owned, ride sharing platform that gives passengers everything Uber does without the institutionalised nastiness and the guilty aftertaste? What ethical transport startup would not jump at the chance to sign up the greatest city in the world as their development partner?

You think the future is about using an iPhone to summon a guy in a Toyota Prius, and the important question is who wrote the app they’re using? Come on, people. This is London calling. We can think a hell of a lot bigger than that.

Monday, 7 August 2017

Generating self-signed HTTPS certificates with subjectAltNames

We provide online services via a bunch of different websites, using federated authentication so that if you sign in to our authentication server, you get a *.mydomain.com cookie that’s sent to any other server on our domain.

We use local wildcard DNS, so there’s a *.mydomain.com.local record that resolves everything to 127.0.0.1, and for each developer machine we create a *.mydomain.com.<hostname> record that’s an alias for that machine, so you can browse to www.mydomain.com.<machine> to see code running on another developer’s workstation, or www.mydomain.com.local to view your own local development code.

This works pretty well, but getting a local development system set up involves running local versions of several different apps – and since Google Chrome now throws a security error for any HTTPS site whose certificate doesn’t include a “subject alternative name” field, getting a bunch of local sites all happily sharing the same cookies over HTTPS proved a bit fiddly.

So… here’s a batch file that will spit out a bunch of very useful certificates, adapted from this post on serverfault.com.

How it works

  1. Get openssl.exe working - I use the version that's shipped with Cygwin, installed into C:\Windows\Cygwin64\bin\ and added to my system path.
  2. Run makecert.bat. If you don't want to specify a password, just provide a blank one (press Enter). This will spit out three files:
    • local_and_hostname.crt
    • local_and_hostname.key
    • local_and_hostname.pfx
  3. Double-click the local_and_hostname.crt file, click "Install Certificate", and use the Certificate Import Wizard to import it. Choose "Local Machine" as the Store Location, and "Trusted Root Certification Authorities" as the Certificate Store.
  4. Open IIS, select your machine, open "Server Certificates" from the IIS snap-in, and click "Import..." in the Actions panel.
  5. Select the local_and_hostname.pfx certificate created by the batch file. If you used a password when exporting your PKCS12 (.pfx) file, you'll need to provide it here.
  6. Finally, set up your IIS HTTPS bindings to use your new certificate.
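
As an aside – and to be clear, this is an alternative, not what the batch file above does – newer versions of .NET (Core 2.0, or Framework 4.7.2 onwards) can generate a self-signed certificate with subject alternative names without touching openssl at all. A minimal sketch, reusing the hostnames from this post:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class MakeCert
{
    static void Main()
    {
        // The SAN entries are the bit Chrome insists on; the CN alone no longer counts.
        var sanBuilder = new SubjectAlternativeNameBuilder();
        sanBuilder.AddDnsName("mydomain.com.local");
        sanBuilder.AddDnsName("*.mydomain.com.local");

        using (var rsa = RSA.Create(2048))
        {
            var request = new CertificateRequest(
                "CN=*.mydomain.com.local", rsa,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
            request.CertificateExtensions.Add(sanBuilder.Build());

            using (var cert = request.CreateSelfSigned(
                DateTimeOffset.UtcNow.AddDays(-1), DateTimeOffset.UtcNow.AddYears(5)))
            {
                // Export as PKCS12 with a password, ready for the IIS import step above.
                File.WriteAllBytes("local_and_hostname.pfx",
                    cert.Export(X509ContentType.Pfx, "changeit"));
            }
        }
    }
}
```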

Yay! Security! 

Monday, 31 July 2017

Deployment Through the Ages

Vanessa Love just posted this intriguing little snippet on Twitter:

And I got halfway through sticking some notes into the Google doc, and then thought actually this might make a fun blog post. So here’s how deployment has evolved over the 14 years since I first took over the hallowed mantle of webmaster@spotlight.com.

2003: Beyond Compare (maybe?)

The whole site was classic ASP – no compilation, no build process, all connection credentials and other settings were managed as application variables in the global.asa file. On a good day, I’d get code running on my workstation, test it in our main target browsers, and deploy it using a visual folder comparison tool. It might have been Beyond Compare; it might have been something else. I honestly can’t remember and the whole thing is lost in the mists of time. But that was basically the process – you’d have the production codebase on one half of your screen and your localhost codebase on the other half, and you’d cherry-pick the bits that needed to be copied across.

Of course, when something went wrong in production, I’d end up reversing the process – edit code directly on live (via UNC share), normally with the phone wedged against my shoulder and a user on the other end; fix the bug, verify the user was happy, and then do a file sync in the other direction to get everything from production back onto localhost. Talk about a tight feedback loop – sometimes I’d do half-a-dozen “deployments” in one phone call. It was a simpler time, dear reader. Rollback plan was to hammer Ctrl-Z until it’s working again; disaster recovery was tape backups of the complete source tree and database every night, and the occasional copy’n’paste backup of wwwroot before doing something ambitious.

Incidentally, I still use Beyond Compare almost daily – I have it configured as my merge tool for fixing Git merge conflicts. It’s fantastic.

2005: Subversion

Once we hired a second developer (hey Dan!) the Beyond Compare approach didn’t really work so well any more, so we set up a Subversion server. You’d get stuff running on localhost, test it, maybe share an http://www.spotlight.com.dylan-pc/ link (hooray for local wildcard DNS) so other people could see it, and when they were happy, you’d do an svn commit, log into the production web server (yep, the production web server – just the one!) and do an svn update. That would pull down the latest code, update everything in-place. There was still the occasional urgent production bugfix. One of my worst habits was that I’d fix something on production and then forget to svn commit the changes, so the next time someone did a deployment (hey Dan!) they’d inadvertently reintroduce whatever bug had just been fixed and we’d get upset people phoning up asking why it was broken AGAIN.

2006: FinalBuilder

This is where we start doing things with ASP.NET in a big way. I still dream about OnItemDataBound sometimes… and wake up screaming, covered in sweat. Fun times. The code has all long since been deleted but I fear the memories will haunt me to my grave.

Anyway. By this point we already had the Subversion server, so we had a look around for something that would check out and compile .NET code, and went with FinalBuilder. It had a GUI for authoring build pipelines and processes, some very neat features, and could deploy .NET applications to IIS servers. This was pretty sophisticated for 2006. 

2008: test server and msdeploy

After one too many botched FinalBuilder deployments, we decided that a dedicated test environment and a better deployment process might be a good idea. Microsoft had just released a preview of a new deployment tool called MSDeploy, and it was awesome. We set up a ‘staging environment’ – it was a spare Dell PowerEdge server that lived under my desk, and I’d know when somebody accidentally wrote an infinite loop because I’d hear the fans spin up. We’d commit changes to Subversion, FinalBuilder would build and deploy them onto the test server, we’d give everything a bit of a kicking in IE8 and Firefox (no Google Chrome until September 2008, remember!) and then – and this was magic back in 2008 – you’d use msdeploy.exe to replicate the entire test server into production! Compared to the tedious and error-prone checking of IIS settings, application pools and so on, this was brilliant. Plus we’d use msdeploy to replicate the live server onto new developers’ workstations, which was a really fast, easy way to get them a local snapshot of a working live system. For the bits that still ran interpreted code, anyway.

2011: TeamCity All The Things!

By now we had separate dev, staging and production environments, and msdeploy just wasn’t cutting it any more. We needed something that could actually build different deployments for each environment – connection strings, credentials, and so on. And there was now support in Visual Studio for doing XML configuration transforms, so you’d create a different config file for every environment, check those into revision control, and get different builds for each environment. I can’t remember exactly why we abandoned FinalBuilder for TeamCity, but it was definitely a good idea – TeamCity has been the backbone of our build process ever since, and it’s a fantastically powerful piece of kit.

2012: Subversion to GitHub

At this point, we’d grown from me, on my own doing webmaster stuff, to a team of about six developers. Even Subversion is starting to creak a bit, especially when you’re trying to merge long-lived branches and getting dozens of merge conflicts, so we start moving stuff across to GitHub. It takes a while – I’m talking months – for the whole team to stop thinking of Git as ‘unnecessarily complicated Subversion’ and really grok the workflow, but we got there in the end.

Our deployment process at this point was to commit to the Git master branch and wait for TeamCity to build and deploy the development version of the package. Once it was tested, you’d use TeamCity to build and deploy the staging version – and if that went OK, you’d build and deploy production. Like every step on this journey, it was better than anything we’d had before, but it had some obvious drawbacks – like the fact that we had several hundred separate TeamCity jobs and no consistent way of managing them all.

2013: Octopus Deploy and Klondike

When we started migrating from TeamCity 6 to TeamCity 7, it became rapidly apparent that our “build everything several times” process… well, it sucked. It was high-maintenance, used loads of storage space and unnecessary CPU cycles, and we needed a better system.

Enter Octopus Deploy, whose killer feature for us was the ability to compile a .NET web application or project into a deployment NuGet package (an “octopack”), and then apply configuration settings during deployment. We could build a single package, and then use Octopus to deploy and configure it to dev, staging and live. This was an absolute game-changer for us. We set up TeamCity to do continuous integration, so that every commit to a master branch would trigger a package build… and before long, our biggest problem was that we had so many packages in TeamCity that the built-in NuGet server started creaking.

Our replacement NuGet server started life as an experimental build of themotleyfool/NuGet.Lucene – which we actually deployed onto a server we called "Klondike" (because klondike > gold rush > get nuggets fast!) – and it worked rather nicely. Incidentally, that NuGet.Lucene library is now the engine behind themotleyfool/Klondike, a full-spec NuGet hosting application – and I believe our internal hostname was actually the inspiration for their project name. That got a little confusing during the 18 months or so when Klondike existed as a public project but we were still running the old NuGet.Lucene codebase on a server called ‘klondike’. It’s OK, we’ve now upgraded it and everything’s lovely.

It was also in 2013 that we started exploring the idea of automatic semantic versioning – I wrote a post in October 2013 explaining how we hacked together an early version of this. Here’s another post from January 2017 explaining how it’s evolved. We’re still working on it. Versioning is hard.

And now?

So right now, our build process works something like this.

  1. Grab the ticket you’re working on – we use Pivotal Tracker to manage our backlogs
  2. Create a new GitHub branch, with a name like 12345678_fix_the_microfleems – where 12345678 is the ticket ID number
  3. Fix the microfleems.
  4. Push your changes to your branch, and open a pull request. TeamCity will have picked up the pull request, checked out the merge head and built a deployable pre-release package (on some projects, versioning for this is completely automated)
  5. Use Octopus Deploy to deploy the prerelease package onto the dev environment. This is where you get to tweak and troubleshoot your deployment steps.
  6. Once you’re happy, mark the ticket as ‘finished’. This means it’s ready for code review. One of the other developers will pick it up, read the code, make sure it runs locally and deploys to the dev environment, and then mark it as ‘delivered’.
  7. Once it’s delivered, one of our testers will pick it up, test it on the dev environment, run it past any business stakeholders or users, and make sure we’ve done the right thing and done it right.
  8. Finally, the ticket is accepted. The pull request is merged, the branch is deleted. TeamCity builds a release package. We use Octopus to deploy that to staging, check everything looks good, and then promote it to production.

And what’s on our wishlist?

  • Better production-grade smoke testing. Zero-footprint tests we can run that will validate common user journeys and scenarios as part of every deployment – and which potentially also run as part of routine monitoring, and can even be used as the basis for load testing.
  • Automated release notes. Close the loop, link the Octopus deployments back to the Pivotal tickets, so that when we do a production deployment, we can create release notes based on the ticket titles, we can email the person who requested the ticket saying that it’s now gone live, that kind of thing.
  • Deployments on the dashboards. We want to see every deployment as an event on the same dashboards that monitor network, CPU, memory, user sessions – so if you deploy a change that radically affects system resources, it’s immediately obvious there might be a correlation.
  • Full-on continuous deployment. Merge the PR and let the machines do the rest.

So there you go – fourteen years’ worth of continuous deployments. Of course, alongside all this, we’ve moved from unpacking Dell PowerEdge servers and installing Windows 2003 on them to running Chef scripts that spin up virtual machines in AWS and shut them down again when nobody’s using them – but hey, that’s another story.