Tuesday, 17 October 2017

Why You Weren’t Picked For NDC London

Over the past two weeks, a small team of us has been putting together the agenda for NDC London 2018, which is happening at the Queen Elizabeth II Centre in Westminster this coming January. As somebody who’s submitted a lot of conference talks over the years, I know how exciting it is getting the email confirming that you’ve been accepted — and how demoralizing it can be getting the email saying ‘We’re sorry to say that…’

Well, now that I’ve been on the other side of that process a few times, I see things very differently, so I wanted to take this chance to tell you all how the selection process actually works, why you didn’t get picked, and why it shouldn’t put you off.

First, though, I want to say a huge thank you to everybody who submitted, and congratulations to the people who have been selected. It’s been a lot of work to pull everything together, but I’m really happy about the programme of amazing people and top-class talks that we’ve got this year — and I’m particularly excited that we’ve been able to include so many great speakers from the developer community who will be speaking at NDC for the first time. Welcome, all of you.

For all the speakers who didn’t get selected this time around, I really hope this article might help you understand why. (TL;DR: we just had too many good talks submitted!)

First, a basic rule of conferences — they have to generate a certain amount of money in order to operate. Yes, there are community events like DDD that are free to attend and are run with minimal sponsorship — and I think those events are every bit as valuable to our community as the big commercial conferences — but once you start dealing with international speakers and multi-track events running across several days, you need to start thinking about commercial considerations. Conferences generate revenue from two sources — ticket sales and sponsorship. Both of those boil down to creating an event that people want to attend (and, in many cases, can persuade their boss is worth the cost and the time), so that’s what we, as the programme committee, are trying to do.

I should mention that on every conference I’ve worked with, the programme committee has consisted entirely of volunteers — four or five people from within the industry who are happy to donate their free time to help put the programme together. They’ll normally get a complimentary ticket to the conference in return, but it’s not a job, and nobody’s getting paid to do it.

Most software conferences run a public ‘call for submissions’ (aka ‘call for papers’ or CfP) where anybody interested in speaking can submit talks for consideration. Once the CfP has closed, the programme committee has the unenviable task of going through all the talks that have been submitted and picking the ones that they think should be included. NDC London has five tracks of one-hour talks, over three days. Once you’ve allowed for keynotes and lunch breaks, that gives you fewer than 100 talk slots. We had 732 sessions submitted. If we’d said ‘yes’ to all of them, we’d have ended up with a conference lasting three weeks with a ticket price of over £7,500… good luck getting your manager to sign off on attending that one.

So, how do you pick the top 100 talks from 732 submissions? Well, here’s what we look for.

First — quality. This one doesn’t really help much, because the vast majority of the talks submitted to a high-profile event like NDC are excellent, but there might be a handful that you can reject outright. These tend to be the talks that look like sales pitches — single-vendor solutions submitted by somebody who works for the vendor in question. You want to use a conference to sell your product? No problem — get a booth, or buy a sponsorship package. But we’re not going to include your sales pitch at the expense of someone else’s submission.

It’s also worth pointing out that when we get to the final round of submissions, when it’s getting really hard to make a call, we’ll look very critically at the quality of the submission itself. We’ll look for a good title, a clear, concise summary, a succinct speaker biography and a good-quality headshot — basically, a submission that makes it really clear who you are, what you’re talking about, why it’ll be interesting, and that you’re going to put in the effort required to deliver a great presentation. There’s no hard-and-fast rule to this; there are some excellent articles out there about how to write good proposals. My own rule of thumb when I’m submitting is 100 characters for the title, 2,000 for the abstract and 1,000 for the speaker bio, and I often use Ted Neward’s approach of ‘pain and promise’ — ‘here’s a problem you’ve had, here’s an idea that might fix it’ — when I’m writing proposals. Every speaker is different, and we all have our own style, but if your talk summary is only one or two lines or you’ve pasted in your entire professional CV instead of writing a short speaker bio, you may well get rejected in favour of somebody who’s clearly put more time into drafting their submission.

Second — relevance. Conferences have a target audience; NDC has evolved from being primarily a .NET conference into an event with a much broader scope, but we know the kind of developers who normally come to NDC London and what sort of things they’re interested in. For example, this year we decided to decline any C++ talks, purely because there are very few C++ developers in our target audience. That said, we do try to include things that might be interesting to our audience, even if they’re not immediately relevant. Topics like Kotlin and Elm are still relatively esoteric in terms of numbers of users, but the industry buzz around them means we try to include things like this on the programme because we’re confident that people will want to see them.

Third — diversity. We could very easily have filled the entire programme with well-known white male speakers talking about JavaScript — but we think a good conference should feature diverse speakers, presenting diverse topics, with a range of presentation styles. We’re not just talking about gender and ethnicity, either — we want to see new faces alongside regular speakers, and bleeding-edge technology topics alongside established patterns and practices. That said, we would never accept a substandard talk purely for the sake of diversity; the way to get more balance in the programme without compromising quality is to start with a wider range of submissions. Many conferences suffer from a lack of diversity in the talks that get submitted — it’s just the same people submitting the same topics year after year — so throughout the year people like me go to tech events and conferences, look out for great speakers and talks that haven’t appeared at NDC before, and invite them to submit.

Finally, there are the invited speakers — the people we definitely want to see on the programme. They fall into two broad camps. First, there are the big names — the people with 100K+ Twitter followers, the published authors, the high-profile open source project leaders. Now, being famous-on-the-internet doesn’t automatically mean you’re a brilliant speaker — speaking for an hour in front of a few hundred people takes a very different set of skills from running an open-source project or writing a book — but we’re lucky in our industry to have a lot of people who are well known, well respected, and really, really good on a conference stage. And these people are important, because their involvement gives us a fantastic signal boost. More interest translates into more ticket sales — which means more budget for covering speaker travel, catering, facilities, the entertainment at the afterparty, and all the other things that make a good conference such a positive experience for the attendees.

The programme committee also invites a lot of new speakers because it’s a great way of getting some new faces and new ideas on to the programme. As I mentioned above, many of us spend a lot of time going to user groups, meetups and conferences, and when I see somebody deliver a really good talk, I’ll invariably get in touch afterwards and ask if they’d be interested in submitting it to an event like NDC.

So, those are our constraints. We want to deliver a balanced mix of big names and new faces. We want to promote diversity in an industry that’s still overwhelmingly white and male (never mind the ongoing fascination with JS frameworks and microservices). We want to offer a compelling combination of established technology and interesting esoterica; of stuff that’s interesting, stuff that’s relevant, and stuff that’s fun…

And we get to pick fewer than 100 talks out of 732 submissions, which means whatever we do, we’re going to be sending a whole lot of emails at the end of it saying ‘Sorry, your talk wasn’t accepted…’ — and that’s not much fun, because we’re turning down good content from good speakers. In many cases those people are good friends as well — the tech industry is a very friendly place and I count the people I’ve met through it among my closest friends. But that’s how it works.

It’s humbling to be part of an industry where so many talented people are willing to invest their time in sharing their own expertise. Speaking at conferences is a really rewarding experience, but it’s also a huge amount of work, preparation, rehearsal, logistics, and time away from home. Without speakers, there’d be no conferences — but, as I hope I’ve explained here, there are more excellent people and talks out there than any one event can ever hope to accommodate.

The thing is, there’s absolutely no shortage of great conferences and events. Don’t be discouraged. Keep submitting. If you didn’t make the cut for NDC London, submit to Oslo and Sydney. And BuildStuff, and DevSum, and Øredev, and FullStack, and Progressive.NET, and DotNext, and Sela Developer Practice, and SDD, and QCon, and WebSummit. And if none of those grab you, sign up for services like The Weekly CFP and Technically Speaking that will email you about conferences that are looking for speakers and submissions.

Finally, you know the one thing that every really good speaker I’ve ever seen has in common? It’s that they work hard on interesting things, and they love what they do. Maybe it’s their job. Maybe they lead an open-source project, or run a user group, or they’re writing a book. But if you love what you do and you want to share that enthusiasm, it’ll happen. Just give it time.

Saturday, 23 September 2017

London, London, Uber Alles

I read with some interest yesterday that Transport for London (TfL) are not renewing Uber’s license to operate in London. TfL have cited concerns over Uber’s driver screening and background checks, and Uber’s use of ‘Greyball’, a software component designed and built by Uber to bypass all sorts of regulatory mechanisms, including using a phone’s GPS to recognise when the phone is being used at Apple HQ so that the Apple engineers who review iOS applications won’t see the hidden features that Apple aren’t supposed to know about.

I use Uber a lot. Their service used to be absolutely excellent, and it’s still pretty good, though not as good as it used to be; it takes longer to get a car than it did, particularly in central London. But, as a passenger (and yes, I know I’m a white male passenger, although some of my Uber experience dates from a period when I did have quite serious mobility issues whilst recovering from a skiing injury), I have found Uber to be a really good service. I’ve used it all over the world — London, Bristol, Brussels, Kyiv, Saint Petersburg, and as of last night, Minsk. I missed it in Tel Aviv, where it’s been outlawed and everyone uses Gett instead, although Gett in Israel appears to operate in exactly the same way as Uber does in London, so I’m not quite sure what the distinction is.

I’ve had some seriously impressive experiences as an Uber customer. In London, I once left my guitar in the back of an Uber that dropped me home at 4am after a horribly delayed flight. I realised within minutes. I used the app to phone the driver, who immediately turned around and brought it back, and was very taken aback when I insisted on paying him for the extra journey. In Kyiv I’ve used Uber to travel safely from a place I couldn’t pronounce to a place I couldn’t find, in a city where I couldn’t speak the language or even read the alphabet, with a driver who spoke no English, and I’ve done it with absolutely no fear of getting ripped off or robbed.

It’s not all been smooth. In Bristol I once had an Uber driver — sorry, “partner-driver” — who missed the turning four times in a row and then explained, giggling, that he’d never driven a cab before and didn’t know how to use satellite navigation. In London I’ve actually been in an Uber car that was pulled over by the police for speeding… the driver panicked, drove away, realised what he’d done, thought better of it and reversed down Moorgate, in rush hour traffic, back to where a rather surprised-looking police officer immediately placed him under arrest. In both of those cases I complained, Uber investigated (something they can do really easily, thanks to GPS tracking of exactly what all their vehicles are doing at any moment) and notified me within 24 hours that the drivers in question had been suspended and would not be driving for the company again. I’ve used their app to claim refunds where I was overcharged — but also to reimburse drivers who undercharged me because of technical problems.

Now, here are the two points I think are really important. One — Uber didn’t actually solve any hard problems. They didn’t invent GPS, or cellular phone networks, or draw their own maps of the world’s major cities. They just waited until exactly the right moment — the moment everyone had GPS, and online payments were easy, and smartphones were cheap enough that it was cost-effective to use them as the basis of a ride-sharing application — and then they pounced. Now don’t get me wrong, they did it incredibly well, and the user experience — particularly in the early days — was absolutely first-class. Usability features like being able to photograph your credit card using your phone camera instead of having to type the number in were a game-changer — the kind of feature that people would show off to their friends in the pub just because no-one had ever seen anything like it before. But Uber didn’t solve any hard engineering problems. There are no PageRank algorithms or Falcon 9 reusable rockets being invented here. Uber’s success is down to absolutely first-class customer experience, aggressive expansion, and their willingness to enter target markets without the slightest consideration for how their service might affect the status quo. And their protestations about the ruling threatening 40,000 jobs are a bit rich given how adamantly they insist that they don’t actually employ any drivers, despite a court ruling to the contrary.

If it hadn’t been them, it would have been someone else, which brings me to my second point. The London taxi cab industry was just crying out for somebody to come in and kick seven bells out of it. I’ve lived in London since 2003, and I’ve taken many, many taxis over the years. For all the talk about “The Knowledge”, I have lost count of the number of times I’ve had to give a black cab driver turn-by-turn directions because they don’t know where I’m going and they’re not allowed to rely on sat-nav. I’ve lost count of the number of times I’ve had to ask a cab driver to stop at a cash machine because they don’t take credit cards. I’ve lost track of the number of hours of my life I’ve spent stood on street corners, in the rain, with more luggage than I can possibly carry home on the bus, waiting for the glow of an orange light that might take me home. Or might just ask where I’m going and drive away with a shake of the head and a ‘no, mate’ because my journey isn’t convenient for them.

The alternative, of course, was minicabs. Booked in advance from a reputable operator, they were generally pretty good. Not always, but most of the time they’d show up on time and take you where you wanted to go. Or, at the end of a long night out, you’d wander up to one of Soho’s many illustrious minicab offices, and end up in the back seat of an interesting-smelling Toyota Corolla wondering if the driver was really the person who’d passed all the necessary tests and checks, or one of their cousins who’d borrowed the cab for the night to earn an extra few quid. (On two separate occasions I’ve been offered this as an explanation for a minicab driver not knowing where they’re going…)

A lot of people are very upset about yesterday’s news — a petition to ‘save Uber’ has attracted nearly half a million signatures since the announcement — but in the long term, I don’t think it really matters whether Uber’s license is renewed or not. There’s no way people in London are going to go back to standing on rainy street corners waving their arms at every orange light that goes past. Uber has changed the game. It’s now a legal requirement that black cabs in London accept payment by card — something which even a few years ago was still hugely controversial.

And that’s actually really sad, because once upon a time, London taxis had an international reputation for innovation and excellence. The London ‘black cab’ is a global icon, but it’s also an incredibly well-designed vehicle — high-visibility handholds, wheelchair accessible, capable of carrying five passengers and negotiating the narrow tangled streets of one of the world’s oldest cities. London taxis have been regulated since the 17th century. “The Knowledge” — the exam all London black cab drivers are required to pass, generally regarded as one of the hardest examinations of any profession anywhere in the world — was introduced in the 1850s, after visitors travelling to London for the Great Exhibition complained that their hackney-carriage drivers didn’t know where they were going. One of the great joys of visiting London in the days before sat-nav was hailing a cab, giving the driver the name of some obscure pub halfway across the city, and watching as they’d think for one second, nod, and then whisk you there without further hesitation.

But in the age of ubiquitous satellite navigation, who cares whether your driver can remember the way from Narrow Street to Penton Place after you’ve had to stand in the rain for fifteen minutes trying to hail a cab? What Uber did — and did incredibly well — is they thought about all the elements of the passenger experience that happen outside the car. Finding a cab. Knowing how much it’s likely to cost. Paying the bill. Recovering lost property. And they made it cheap. They created a user experience that, for the majority of passengers, was easier, cheaper and more convenient than traditional black cabs. It’s no wonder the establishment freaked out.

So what happens now? Maybe Uber win their appeal. Maybe someone else moves into that space. Maybe things get a bit more expensive for passengers — and, frankly, I think they should. I think Uber is too cheap. If you want the luxury of somebody else driving you to your door in a private car — and for most of us, that IS a luxury — then I believe the person providing that service deserves to make a decent living out of providing it, and one of the most welcome features Uber’s introduced recently is the ability to tip drivers via the app.

Uber has taken a stagnant industry that was in dire need of a kick up the arse, and given it exactly that. They’re not the only cab-hailing app in the market, either: there’s Gett, mytaxi (formerly Hailo), Kabbee, and private hire firms like Addison Lee — not to mention just phoning a minicab office and asking them to send a car round. In a typical month in London, I’ll use buses, the Tube, national rail services, the Overground, Uber AND black cabs — not to mention a fair bit of walking and cycling. I even have an actual car, which I use about once a month to go to B&Q or IKEA or somewhere. None of them is perfect, but on the whole, transport in London works pretty well, and I’ve found Uber a really welcome addition to the range of transport services that’s on offer.

But Uber vs TfL is just a tiny taste of what’s coming. I honestly believe that within the next ten years, we’ll be using our smartphones to summon driverless autonomous electric vehicles to take us home after a night out. In all sorts of shapes and sizes, too — after all, why should it take an entire Toyota Prius to transport a 5’2”, 65kg human with a small shoulder bag from Wardour Street to Battersea? We’ll be going to IKEA in a tiny little electric bubble-bike, buying a new kitchen, and taking it home in a driverless van that waits outside whilst we unload it and then burbles off happily into the sunset to pick up the next waiting fare. Never mind the cab wars between the black cab drivers and the minicab drivers — what happens when a robot cab will pick you up anywhere in town and take you home for a quid? A robot cab that doesn’t have a mortgage to pay or kids to feed? That runs on cheap, green energy, that doesn’t get bored or tired or distracted?

There are going to be problems. There are going to be collisions, fatalities, lawsuits, prosecutions, appeals and counter-appeals. And the disruptions will keep coming, faster and faster. Just as Transport for London think they’ve got workable legislation for driverless cars, someone’s going to invent a drone that can fly from Covent Garden to Dulwich carrying a passenger, and whilst they’re busy arguing in court over whether it’s a helicopter or not someone’s gonna shoot one down with a flare pistol and all merry hell’s going to break loose.

What Uber shows us is that technology isn’t going to self-regulate. The digital economy moves too fast for pre-emptive legislation and licensing frameworks. There are fortunes to be made in the months or years between your product hitting the market and the authorities deciding to shut you down. The future of Uber doesn’t depend on getting their TfL license renewed. They’ve been kicked out of entire countries before and it doesn’t seem to have slowed them down. To them this is just a bump in the road. Their future is about being the first company to roll out self-driving cabs, and you can guarantee they’re working, right now, on finding the legislative loopholes in their various target markets that will allow them to launch first and ask questions later.

Black cabs aren’t going away. Buses aren’t going away. Much as I’d love it, they’re not going to decommission all the rolling stock and turn the London Underground network into a giant dodgems track. Technology is going to disrupt, government is going to react — and whilst that model doesn’t always work terribly well, I can’t see any feasible alternative. Engineers are going to solve hard problems. Touch screens, cellular networks, GPS, space travel, battery capacity, biological interfaces, machine learning. Companies like Apple and Google and SpaceX and Tesla are going to put those solutions in our pockets, and in our buildings and on our streets, and companies like Uber and Tinder and AirBnB are going to find ways to turn those solutions into products that you just can’t WAIT to show your friends in the pub.

And, short of the nightmare scenarios my friend Chris has so lovingly documented over on H+Pedia, society will continue. Uber will run surveillance software on your phone that’s a fraction of what the Home Office are doing all day every day, but Greyball isn’t going to cause the downfall of society. Self-driving cars will kill people. Yes, they will. But 3,000 people already die in road traffic accidents every day, and we think that’s normal, and as long as it doesn’t impact our lives directly we just sort of shrug and ignore it and get on with our lives.

I like Uber, and I feel guilty for liking them because I know they do some truly horrible things. I also travel by air, and I eat meat, and I wear leather and I use an iPhone and don’t always recycle. And I used to download MP3s all the time, and feel guilty about it, and then Spotify came along and now I don’t download MP3s any more. And I used to download movies and TV shows, and now I have Amazon Prime and Netflix and I haven’t downloaded a movie in YEARS.

Very few people are extremists. For every militant vegan, there’s someone out hunting their own meat, and a couple of hundred of us who just go with whatever’s convenient. If you’re trying to change the world, ignore the extremists. Don’t outlaw things you think people shouldn’t be doing. Change the world by using technology to deliver compelling alternatives. I want vat-grown burgers and steaks that taste as good as the real thing, and yes, I’ll pay. I want boots and jackets made from synthetic textiles that actually look properly road-worn after a couple of years instead of just falling apart. I want to travel by hypersonic maglev underground train instead of flying. OK, maybe not that last part… I’m writing this from 30,000 feet above the Lithuanian border, the setting sun behind us is catching the wingtips as we soar above an endless sea of cloud, and I’m being reminded how much I love flying. But you could definitely make air travel a LOT more expensive before I stopped doing it completely.

If TfL really want to get rid of Uber, revoking their license isn’t the way to do it. Uber will appeal, TfL will fight, a lot of lawyers will get rich, and TfL will either lose and look like an idiot, or win and look like an asshole. How about TfL find the company that’s trying to beat Uber anyway — they’re out there, somewhere — and offer to work with them instead? As Rowland Manthorpe wrote in WIRED yesterday, why don’t we create a socially responsible, employee-owned, ride-sharing platform that gives passengers everything Uber does without the institutionalised nastiness and the guilty aftertaste? What ethical transport startup wouldn’t jump at the chance to sign up the greatest city in the world as their development partner?

You think the future is about using an iPhone to summon a guy in a Toyota Prius, and the important question is who wrote the app they’re using? Come on, people. This is London calling. We can think a hell of a lot bigger than that.

Monday, 7 August 2017

Generating self-signed HTTPS certificates with subjectAltNames

We provide online services via a bunch of different websites, using federated authentication so that if you sign in to our authentication server, you get a *.mydomain.com cookie that’s sent to any other server on our domain.

We use local wildcard DNS, so there’s a *.mydomain.com.local record that resolves everything to 127.0.0.1, and for each developer machine we create a *.mydomain.com.<hostname> record that’s an alias for that machine, so you can browse to www.mydomain.com.<hostname> to see code running on another developer’s workstation, or www.mydomain.com.local to view your own local development code.

This works pretty well, but getting a local development system set up involves running local versions of several different apps – and since Google Chrome now throws a security error for any HTTPS site whose certificate doesn’t include a “subject alternative name” field, getting a bunch of local sites all happily sharing the same cookies over HTTPS proved a bit fiddly.

So… here’s a batch file that will spit out a bunch of very useful certificates, adapted from this post on serverfault.com.
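The script itself boils down to something like the sketch below (a rough reconstruction rather than the exact batch file): write a temporary OpenSSL config file containing the subjectAltName entries, generate a private key and a self-signed certificate from it, then bundle the pair into a PKCS12 (.pfx) file that IIS can import. The domain names and the use of %COMPUTERNAME% are placeholders, so adjust them to match your own wildcard DNS setup, and note that OpenSSL option names can vary slightly between versions.

    @echo off
    REM Rough sketch of a makecert.bat-style script (not the exact original).
    REM Requires openssl.exe on the PATH. Domain names below are placeholders.

    REM Write a temporary OpenSSL config containing the subjectAltName entries.
    (
    echo [req]
    echo distinguished_name = dn
    echo x509_extensions = v3_req
    echo prompt = no
    echo [dn]
    echo CN = *.mydomain.com.local
    echo [v3_req]
    echo keyUsage = keyEncipherment, dataEncipherment
    echo extendedKeyUsage = serverAuth
    echo subjectAltName = DNS:*.mydomain.com.local, DNS:*.mydomain.com.%COMPUTERNAME%
    ) > local_and_hostname.cnf

    REM Generate a private key and a self-signed certificate (valid for ten years).
    openssl req -x509 -nodes -days 3650 -newkey rsa:2048 ^
      -keyout local_and_hostname.key -out local_and_hostname.crt ^
      -config local_and_hostname.cnf

    REM Bundle the key and certificate into a PKCS12 file for IIS. You'll be
    REM prompted for an export password; a blank one is fine for local dev.
    openssl pkcs12 -export -out local_and_hostname.pfx ^
      -inkey local_and_hostname.key -in local_and_hostname.crt

    del local_and_hostname.cnf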

How it works

  1. Get openssl.exe working - I use the version that's shipped with Cygwin, installed into C:\Windows\Cygwin64\bin\ and added to my system path.
  2. Run makecert.bat. If you don't want to specify a password, just provide a blank one (press Enter). This will spit out three files:
    • local_and_hostname.crt
    • local_and_hostname.key
    • local_and_hostname.pfx
  3. Double-click the local_and_hostname.crt file, click "Install Certificate", and use the Certificate Import Wizard to import it. Choose "Local Machine" as the Store Location, and "Trusted Root Certification Authorities" as the Certificate Store.
  4. Open IIS, select your machine, open "Server Certificates" from the IIS snap-in, and click "Import..." in the Actions panel.
  5. Select the local_and_hostname.pfx certificate created by the batch file. If you used a password when exporting your PKCS12 (.pfx) file, you'll need to provide it here.
  6. Finally, set up your IIS HTTPS bindings to use your new certificate.
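If you'd rather script steps 3 to 5 than click through the wizards, certutil (which ships with Windows) can do the certificate imports from an elevated command prompt. This is a hedged sketch rather than a tested recipe, so double-check the store names against your setup; the IIS binding in step 6 still needs doing afterwards.

    REM Rough command-line equivalent of the import wizard steps (run as administrator).
    REM Add the certificate to the machine's Trusted Root Certification Authorities store...
    certutil -addstore -f Root local_and_hostname.crt

    REM ...and import the key/certificate pair into the machine's Personal ("MY") store,
    REM which is where IIS looks for server certificates. If you set an export password
    REM on the .pfx file, pass it after -p instead of the empty string.
    certutil -f -p "" -importpfx local_and_hostname.pfx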

Yay! Security! 

Monday, 31 July 2017

Deployment Through the Ages

Vanessa Love just posted this intriguing little snippet on Twitter:

And I got halfway through sticking some notes into the Google doc, and then thought that actually this might make a fun blog post. So here’s how deployment has evolved over the 14 years since I first took over the hallowed mantle of webmaster@spotlight.com.

2003: Beyond Compare (maybe?)

The whole site was classic ASP – no compilation, no build process, all connection credentials and other settings were managed as application variables in the global.asa file. On a good day, I’d get code running on my workstation, test it in our main target browsers, and deploy it using a visual folder comparison tool. It might have been Beyond Compare; it might have been something else. I honestly can’t remember and the whole thing is lost in the mists of time. But that was basically the process – you’d have the production codebase on one half of your screen and your localhost codebase on the other half, and you’d cherry-pick the bits that needed to be copied across.

Of course, when something went wrong in production, I’d end up reversing the process – edit code directly on live (via UNC share), normally with the phone wedged against my shoulder and a user on the other end; fix the bug, verify the user was happy, and then do a file sync in the other direction to get everything from production back onto localhost. Talk about a tight feedback loop – sometimes I’d do half-a-dozen “deployments” in one phone call. It was a simpler time, dear reader. Rollback plan was to hammer Ctrl-Z until it’s working again; disaster recovery was tape backups of the complete source tree and database every night, and the occasional copy’n’paste backup of wwwroot before doing something ambitious.

Incidentally, I still use Beyond Compare almost daily – I have it configured as my merge tool for fixing Git merge conflicts. It’s fantastic.

2005: Subversion

Once we hired a second developer (hey Dan!) the Beyond Compare approach didn’t really work so well any more, so we set up a Subversion server. You’d get stuff running on localhost, test it, maybe share an http://www.spotlight.com.dylan-pc/ link (hooray for local wildcard DNS) so other people could see it, and when they were happy, you’d do an svn commit, log into the production web server (yep, the production web server – just the one!) and do an svn update. That would pull down the latest code, update everything in-place. There was still the occasional urgent production bugfix. One of my worst habits was that I’d fix something on production and then forget to svn commit the changes, so the next time someone did a deployment (hey Dan!) they’d inadvertently reintroduce whatever bug had just been fixed and we’d get upset people phoning up asking why it was broken AGAIN.

2006: FinalBuilder

This is where we start doing things with ASP.NET in a big way. I still dream about OnItemDataBound sometimes… and wake up screaming, covered in sweat. Fun times. The code has all long since been deleted but I fear the memories will haunt me to my grave.

Anyway. By this point we already had the Subversion server, so we had a look around for something that would check out and compile .NET code, and went with FinalBuilder. It had a GUI for authoring build pipelines and processes, some very neat features, and could deploy .NET applications to IIS servers. This was pretty sophisticated for 2006. 

2008: Test server and MSDeploy

After one too many botched FinalBuilder deployments, we decided that a dedicated test environment and a better deployment process might be a good idea. Microsoft had just released a preview of a new deployment tool called MSDeploy, and it was awesome. We set up a ‘staging environment’ – it was a spare Dell PowerEdge server that lived under my desk, and I’d know when somebody accidentally wrote an infinite loop because I’d hear the fans spin up. We’d commit changes to Subversion, FinalBuilder would build and deploy them onto the test server, we’d give everything a bit of a kicking in IE8 and Firefox (no Google Chrome until September 2008, remember!) and then – and this was magic back in 2008 – you’d use msdeploy.exe to replicate the entire test server into production! Compared to the tedious and error-prone checking of IIS settings, application pools and so on, this was brilliant. Plus we’d use msdeploy to replicate the live server onto new developers’ workstations, which was a really fast, easy way to get them a local snapshot of a working live system. For the bits that still ran interpreted code, anyway.
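From memory, the replication itself was a single msdeploy invocation using the webServer provider, something along these lines (the server names are placeholders, and the exact provider settings vary between Web Deploy versions, so treat this as a sketch rather than gospel):

    REM Sketch of the msdeploy test-to-live replication (placeholder server names).
    REM Preview what would change first...
    msdeploy -verb:sync -source:webServer,computerName=TESTSERVER ^
      -dest:webServer,computerName=LIVESERVER -whatif

    REM ...then run the same command without -whatif to replicate test onto live.
    msdeploy -verb:sync -source:webServer,computerName=TESTSERVER ^
      -dest:webServer,computerName=LIVESERVER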

2011: TeamCity All The Things!

By now we had separate dev, staging and production environments, and msdeploy just wasn’t cutting it any more. We needed something that could actually build different deployments for each environment – connection strings, credentials, and so on. By then Visual Studio had support for XML configuration transforms, so you could create a different config file for every environment, check those into revision control, and get a different build for each environment. I can’t remember exactly why we abandoned FinalBuilder for TeamCity, but it was definitely a good idea – TeamCity has been the backbone of our build process ever since, and it’s a fantastically powerful piece of kit.

2012: Subversion to GitHub

At this point, we’d grown from just me doing webmaster stuff on my own to a team of about six developers. Even Subversion was starting to creak a bit, especially when you’re trying to merge long-lived branches and getting dozens of merge conflicts, so we started moving stuff across to GitHub. It took a while – I’m talking months – for the whole team to stop thinking of Git as ‘unnecessarily complicated Subversion’ and really grok the workflow, but we got there in the end.

Our deployment process at this point was to commit to the Git master branch and wait for TeamCity to build and deploy the development version of the package. Once it was tested, you’d use TeamCity to build and deploy the staging version – and if that went OK, you’d build and deploy production. Like every step on this journey, it was better than anything we’d had before, but it had some obvious drawbacks – like the fact that we had several hundred separate TeamCity jobs and no consistent way of managing them all.

2013: Octopus Deploy and Klondike

When we started migrating from TeamCity 6 to TeamCity 7, it became rapidly apparent that our “build everything several times” process… well, it sucked. It was high-maintenance, used loads of storage space and unnecessary CPU cycles, and we needed a better system.

Enter Octopus Deploy, whose killer feature for us was the ability to compile a .NET web application or project into a deployment NuGet package (an “octopack”), and then apply configuration settings during deployment. We could build a single package, and then use Octopus to deploy and configure it to dev, staging and live. This was an absolute game-changer for us. We set up TeamCity to do continuous integration, so that every commit to a master branch would trigger a package build… and before long, our biggest problem was that we had so many packages in TeamCity that the built-in NuGet server started creaking.
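For anyone who hasn't seen OctoPack in action: once the OctoPack NuGet package is installed in each web project, the TeamCity build step is more or less a one-liner. The sketch below uses placeholder values for the solution name, version number, feed URL and API key rather than our real settings:

    REM Sketch of an OctoPack build step (solution, version, feed URL and API key are placeholders).
    REM OctoPack hooks into MSBuild, builds a NuGet package per web project, and pushes it
    REM to the internal feed, where Octopus Deploy picks it up and applies per-environment config.
    msbuild OurSolution.sln /t:Build /p:Configuration=Release ^
      /p:RunOctoPack=true /p:OctoPackPackageVersion=1.2.3 ^
      /p:OctoPackPublishPackageToHttp=http://your-nuget-server/api/v2/package ^
      /p:OctoPackPublishApiKey=API-XXXXXXXX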

Our replacement NuGet feed started life as an experimental build of themotleyfool/NuGet.Lucene – which we actually deployed onto a server we called "Klondike" (because klondike > gold rush > get nuggets fast!) – and it worked rather nicely. Incidentally, that NuGet.Lucene library is now the engine behind themotleyfool/Klondike, a full-spec NuGet hosting application – and I believe our internal hostname was actually the inspiration for their project name. That made for a slightly confusing 18 months or so, during which the open-source Klondike project existed but we were still running the old NuGet.Lucene codebase on a server called 'klondike'. It's OK, we've now upgraded it and everything's lovely.

It was also in 2013 that we started exploring the idea of automatic semantic versioning – I wrote a post in October 2013 explaining how we hacked together an early version of this. Here’s another post from January 2017 explaining how it’s evolved. We’re still working on it. Versioning is hard.

And now?

So right now, our build process works something like this.

  1. Grab the ticket you’re working on – we use Pivotal Tracker to manage our backlogs
  2. Create a new GitHub branch, with a name like 12345678_fix_the_microfleems – where 12345678 is the ticket ID number
  3. Fix the microfleems.
  4. Push your changes to your branch, and open a pull request. TeamCity will have picked up the pull request, checked out the merge head and built a deployable pre-release package (on some projects, versioning for this is completely automated)
  5. Use Octopus Deploy to deploy the prerelease package onto the dev environment. This is where you get to tweak and troubleshoot your deployment steps.
  6. Once you’re happy, mark the ticket as ‘finished’. This means it’s ready for code review. One of the other developers will pick it up, read the code, make sure it runs locally and deploys to the dev environment, and then mark it as ‘delivered’.
  7. Once it’s delivered, one of our testers will pick it up, test it on the dev environment, run it past any business stakeholders or users, and make sure we’ve done the right thing and done it right.
  8. Finally, the ticket is accepted. The pull request is merged, the branch is deleted. TeamCity builds a release package. We use Octopus to deploy that to staging, check everything looks good, and then promote it to production.

And what’s on our wishlist?

  • Better production-grade smoke testing. Zero-footprint tests we can run that will validate common user journeys and scenarios as part of every deployment – and which potentially also run as part of routine monitoring, and can even be used as the basis for load testing.
  • Automated release notes. Close the loop, link the Octopus deployments back to the Pivotal tickets, so that when we do a production deployment, we can create release notes based on the ticket titles, we can email the person who requested the ticket saying that it’s now gone live, that kind of thing.
  • Deployments on the dashboards. We want to see every deployment as an event on the same dashboards that monitor network, CPU, memory, user sessions – so if you deploy a change that radically affects system resources, it’s immediately obvious there might be a correlation.
  • Full-on continuous deployment. Merge the PR and let the machines do the rest.

So there you go – fourteen years’ worth of continuous deployments. Of course, alongside all this, we’ve moved from unpacking Dell PowerEdge servers and installing Windows 2003 on them to running Chef scripts that spin up virtual machines in AWS and shut them down again when nobody’s using them – but hey, that’s another story.

Thursday, 27 July 2017

Securing Blogger with CloudFlare and HTTPS

As you may have read, life is about to get a whole lot harder for websites without HTTPS. Now this site is hosted on Blogger – I used to run my own MovableType server, but I realised I was spending way more time messing around with the software than I was actually writing blog posts, so I shifted the whole thing across to Blogger about a decade ago and never really looked back.

One of the limitations of Blogger is that it doesn’t support HTTPS if you’re using custom domains – there’s no way to install your own certificate or anything. So, since Chrome’s about to crank up the warnings for any websites that don’t use HTTPS, I figured I ought to set something up. Enter CloudFlare, who are really rather splendid.

First, you sign up (bonus points to them for NOT forcing you to choose a password that contains a lowercase letter, an uppercase letter, a number, a special character, the poo emoji and the Mongolian vowel separator).

Second, you tell them which domain you want to protect:


They scan all your DNS records, which takes about a minute – and not only is there a nice real-time progress bar keeping you in the loop, they use this opportunity to play a really short video explaining what's going on. I think this is absolute genius.


Finally, after checking it's picked up all your DNS records properly (it had), you tell your domain registrar to update the nameservers for your domain to CloudFlare's DNS servers, give it up to 24 hours, and you're done. Zero downtime, zero service interruption – the whole thing was smooth, simple, and completely free-as-in-beer.

Yes, I realise this does not encrypt content end-to-end. For what we're doing here, this is absolutely fine. It'll secure your traffic against dodgy hotel wi-fi and unscrupulous internet service providers - and if anyone's genuinely intercepting HTTP traffic between CloudFlare and Google, I'm sure they can think of more exciting things to do with it than mess around with my blog posts.

Having done that, I then had to use the Google Chrome console to track down the resources – photos and the odd bit of script – that were being hosted via HTTP, and update them to be HTTPS. The only thing I couldn't work out how to fix was the search bar that's embedded in Blogger's default page layout – it's injected by JavaScript, it's hosted by Google's CDN (so I can't use any of CloudFlare's clever rewriting tricks to fix it), it's stuck inside an IFRAME, and it points to http://www.dylanbeattie.net/search – see the plain HTTP with no S?


After an hour or so of messing around with CSS, I gave up, posted a question on the ProWebmasters Stack Exchange, and – of course – immediately found the solution: go into Blogger, open Layout, find the Navbar gadget, click Edit, and there's an option to switch the nav off entirely.

So there you go. Thanks to CloudFlare, https://www.dylanbeattie.net/ now has a green padlock on it. I don't know about you, but I take comfort in that.


Friday, 21 July 2017

Summer 2017 .NET Community Update

Summer here in the UK is normally pretty quiet, but this year there's so much going on around .NET and the .NET community that I thought this would be a great opportunity to do a bit of a round-up and let you all know about some of the great stuff that's going on.

First, there's the news of two new .NET user groups starting up in southern England. Earlier this week, I was down in Bournemouth speaking at the first-ever meetup of the new .NET Bournemouth group, and thoroughly enjoyed it. Three speakers – Stuart Blackler, Tommy Long and me – with talks on leadership, agile approaches to information security, and an updated version of my "happy code" talk I've done at a few conferences already this year. The venue and A/V setup worked flawlessly, there was a strong turnout, and some really good questions and discussion after each of the talks – I think it's going to turn out to be a really engaging group, so if you're in that part of the world, stop by and check them out. Their next few meetup dates are on meetup.com/Net-Bournemouth already.

Next month, Steve Gordon is starting a new .NET South East group based in Brighton, who will be kicking off with their inaugural meetup on August 22nd with Steve talking about Docker for .NET developers.


There's a great post on Steve's blog explaining what he's doing and what he's hoping to get out of the group, and they're also on meetup.com/dotnetsoutheast (and I have to say, they've done an excellent job of branding the Meetup site – nice work!)

It's an exciting time for .NET – between the cross-platform stuff that's happening around Xamarin and .NET Core, new tooling like JetBrains Rider and Visual Studio Code, and the growing number of cloud providers who are supporting C# and .NET Core for building serverless cloud applications, we've come a long, long way from the days of building Windows Forms and databinding in Visual Studio .NET.

If you're interested in really getting to grips with the future of .NET, join us at the Progressive.NET Tutorials here in London in September. With a great line-up of speakers including Julie Lerman, Jon Skeet, Jon Galloway, Clemens Vasters and Rachel Appel – plus Carl Franklin and Rich Campbell from DotNetRocks, and a few familiar faces you might recognise from the London.NET gang – it promises to be a really excellent event. It goes a lot deeper than most conferences – with one day of talks and two days of hands-on workshops, the idea is that attendees don't just go away with good ideas, they actually leave with running code, on their laptops, that they can refer back to when they take those ideas back to the office or to their own projects. Check out the programme, follow #ProgNET on Twitter, and hopefully see some of you there. 

Then on Saturday 16th September – the day after Progressive.NET – is the fourth DDD East Anglia community conference in Cambridge. Their call for speakers is now closed, but voting is open until July 29th – so sign up, vote on the sessions you want to see – or just vote for mine if you can't make your mind up ;) - and hopefully I'll see some of you in Cambridge.

Finally, just in case any readers of this blog DON'T know about the London .NET User Group… yep, we have a .NET User Group! In London! I know, right? We're on meetup.com/London-NET-User-Group, and on Twitter as @LondonDotNet, and we meet every month at SkillsMatter's CodeNode building near Moorgate.

Our next meetup is on August 8th, with Ana Balica talking about the history and future of HTTP and HTTP/2, plus a session from Steve Gordon – and on September 12th we've got Rich Campbell joining us for a Progressive.NET special meetup, presenting the History of .NET as you've never heard it before.

New people, new meetups, new platforms and new ideas. Like I said, it's a really exciting time to be part of the .NET community – join us, come to a meetup, follow us online, and let's make good things happen.

Tuesday, 4 July 2017

Use Flatscreens

This started life as a lightning talk for PubConf after NDC in Sydney, back in August 2016… and after quite a lot of tweaking, editing and learning to do all sorts of fun things with Adobe AfterEffects and Premiere, it's finally on YouTube. The inspiration is, of course, "Wear Sunscreen", Baz Luhrmann's 1999 hit song based on an essay written by Mary Schmich. Video footage and stock photography is all credited at the end of the clip, and the music, vocals, video, audio and, well, basically everything else is by me. Happy listening - and don't forget to use flatscreens :)

Ladies and gentlemen of the class of 2017… use flat screens. If I could offer you only one tip for the future, flat screens would be it. The benefits of flat screens have been proved by Hollywood, whereas the rest of my advice has no basis more reliable than my own meandering experience.
 
I will dispense this advice... now.

Enjoy the confidence and optimism of greenfield projects. Oh, never mind. You will not appreciate the confidence and optimism of greenfield until everything starts going to hell. But trust me, when you finally ship, you'll look back at the code you wrote and recall in a way you can't grasp now how simple everything seemed, and how productive you really were. Your code is not as bad as you imagine.

Don't worry about changing database providers. Or worry, but know that every company who ever used an OR/M in case they needed to switch databases never actually did it. The real problems in your projects are the dependencies you don't control; the leaking air conditioner that floods your data centre at 5pm on the Thursday before Christmas.

Learn one thing every day that scares you.

Optimise.

Don't reformat other people's codebases; don't put up with people who reformat yours.

Rebase.

Don't get obsessed with frameworks. Sometimes they help, sometimes they hurt. It's the user experience that matters, and the user doesn't care how you created it.

Remember the retweets you receive; forget the flame bait. If you succeed in doing this, tell me how.

Keep your old hard drives. Throw away your old network cards.

Refactor.

Don't feel guilty if you don't understand F#. Some of the most productive junior developers I've worked with didn't know F#. Some of the best systems architects I know still don't.

Write plenty of tests.

Be kind to your keys; you'll miss them when they're gone.

Maybe you have a degree; maybe you don't. Maybe you have an open source project; maybe you won't. Maybe you wrote code that flew on the Space Shuttle; maybe you worked on Microsoft SharePoint. Whatever you do, keep improving, and don't worry where your next gig is coming from. There's a big old world out there, and they're always going to need good developers.

Look after your brain. Don't burn out, don't be afraid to take a break. It is the most powerful computer you will ever own.

Launch, even if you have no users but your own QA team.

Have a plan, even if you choose not to follow it.

Do NOT read the comments on YouTube: they will only make you feel angry.

Cache your package dependencies; you never know when they'll be gone for good.

Read your log files. They're your best source of information, and the first place you'll notice if something's starting to go wrong.

Understand that languages come and go, and that it's the underlying patterns that really matter. Work hard to fill the gaps in your knowledge, because the wiser you get, the more you'll regret the things you didn't know when you were young.

Develop in x86 assembler once, but stop before it makes you smug; develop in Visual Basic once, but stop before it makes you stupid.

Read.

Accept certain inalienable truths. Your code has bugs, you will miss your deadlines, and you, too, might end up in management. And when you do, you'll fantasize that back when you were a developer, code was bug-free, deadlines were met, and developers tuned their database indexes.

Tune your database indexes.

Don't deploy your code without testing it. Maybe you have a QA team. Maybe you have integration tests. You never know when either one might miss something.

Don't mess too much with your user interface, or by the time you ship, it will look like a Japanese karaoke booth.

Be grateful for open source code, but be careful whose code you run. Writing good code is hard, and open source is a way of taking bits from your projects folder, slapping a readme on them, and hoping if you put them on GitHub somebody else will come along and fix your problems.

But trust me on the flat screens.

Wednesday, 28 June 2017

Interview with Channel 9 at NDC Oslo

I was in Oslo earlier this month, where – as well as doing the opening keynote, a couple of talks, a workshop on hypermedia systems and PubConf – I had the chance to chat with Seth Juarez from Channel 9 about code, culture, speaking at conferences, and… all kinds of things, really.

The interview's here, or you can watch it over on Channel9.msdn.com - thanks Seth and co for taking the time to put this together!

Saturday, 13 May 2017

Interview with habrahabr.ru about HTTP APIs in .NET

Next week I'll be in Russia, where I'm speaking about HTTP APIs and REST at the DotNext conference in Saint Petersburg. As part of this event, I've done an interview with the Russian tech site Хабрахабр about the history and future of API development on the web and in Microsoft.NET. The interview's available on their site habrahabr.ru (in Russian), but for readers who are interested but can't read Russian, here's the original English version.


Cathedral in Saint Petersburg, Russia. goodfreephotos.com / Photo by DEZALB.

Q: What kind of APIs are you designing? Where does API design fit into software development?

That’s kind of an interesting question, because I think one of the biggest misconceptions in software is that designing APIs is an activity that happens separately to everything else. Sure, there are certain kinds of API projects – particularly things like HTTP APIs which are going to be open to the public – where it might make sense to consider API design as a specific piece of work. But the truth is that most developers are actually creating APIs all the time – they just don’t realise they’re doing it. Every time you write a public method on one of your classes or choose a name for a database table, you’re creating an interface – in the everyday English sense of the word – that will end up being used by other developers at some point in the future. Other people on your team will use your classes and methods. Other teams will use your data schema or your message format.

What’s interesting, though, is that once developers realize that the code they’re working on will probably form part of an API, they tend to go too far in the other direction. They’ll implement edge cases and things that they don’t actually need, just in case somebody else might need it later. I think there’s a very fine balance, and I think the key to that balance is to be very strict about only building the features that you need right now, but to make those things as reusable and self-explanatory as you can. There’s a great essay by Pieter Hintjens, Ten Rules for Good API Design, that goes into more detail about these kinds of ideas.

The biggest API project I’m working on at the moment is a thing I’m building at Spotlight in the UK, where I work. It’s a hypermedia API exposing information about professional actors, acting jobs in film and television, and various other kinds of data used in the casting industry. We’re building it in the architectural style known as REST – and if you’re not sure what REST is, you should come to my talk at DotNext in Saint-Petersburg and learn all about it. There’s lots of different patterns for creating HTTP APIs – there’s REST, there’s GraphQL, there’s things like SOAP and RPC – but for me, the biggest appeal of REST is that I think the constraints of the RESTful style lead to a natural decoupling of the concepts and operations that your API needs to support, which makes it easier to change things and evolve the API design over time.

Q: One of the most famous applications that was "killed" by backwards compatibility is IE. The problem with that browser was that there were too many applications that depended on it for backwards compatibility. The problem was eventually solved by introducing a new application, Edge, which is updatable and supports all the new standards. Can you give any advice on how not to get caught in that backwards-compatibility trap? For example, could it be modularity without layers? Or maybe there's a way to replace your API with a RESTful API, a service-oriented architecture, or something else?

I’ve been building web applications for a long, long time – I wrote my first HTML page a couple of years before Internet Explorer even existed, back when the only browsers were NCSA Mosaic and Erwise. It’s fascinating to look back at the history of the web, and how the web that exists today has been shaped and influenced by things like the evolution of Internet Explorer – and you’re absolutely right; one of the reasons why Microsoft has introduced a completely new browser, Edge, in the latest versions of Windows is that Internet Explorer’s commitment to backwards-compatibility has made it really difficult to implement support for modern web standards alongside the existing IE codebase.

Part of the reason why that backwards compatibility exists is that, around the year 2000, there was a massive shift in the way that corporate IT systems were developed. There are countless corporations that have bespoke applications for doing all sorts of business operations – stock control, inventory, HR, project management, all kinds of things. Way back in the 1980s and early 1990s, most of them used a central mainframe system and employees would have to use something like a terminal emulator to connect to that central server, but after the first wave of the dotcom boom hit in the late 1990s, companies realised that most of their PCs now had a web browser and a network connection, and so they could replace their old mainframe terminal applications with web applications. Windows had enormous market share at the time, and Internet Explorer was the default browser on most Windows PCs, so lots of organizations built intranet web applications that only had to work on a specific version of Internet Explorer. Sometimes they did this to take advantage of specific features, like ActiveX support; more often I think they just did it to save money because it meant they didn’t have to do cross-browser testing. This happened with some pretty big commercial applications as well; as late as 2011, Microsoft Dynamics CRM still offered no support for any browser other than Internet Explorer.

So you’ve got all these companies that have invested lots of time and money in building applications that only work with Internet Explorer. Those applications aren’t built using web standards or progressive enhancement or with any notion of ‘forward compatibility’ – they’re explicitly targeting one version of one browser running on one operating system. And so when Microsoft releases a new version of Internet Explorer, those applications fail – and the companies don’t want to invest in upgrading their legacy intranet applications, so they blame the browser. So we end up with this weird situation where here in 2017, Microsoft are still shipping Internet Explorer 11, which has a compatibility mode where it switches back to the IE9 rendering engine but sends a user agent string claiming that it's IE7. Meanwhile, everyone I know uses Google Chrome or Safari for all their web browsing – but still has an IE shortcut on their desktop for when they have to log in to one of those legacy systems.

So… to go back to the original question: is there anything Microsoft could have done to avoid this trap? I think there’s a lot of things they could have done. They could have built IE from the ground up with a modular rendering engine, so that later versions could selectively load the appropriate engine for rendering a particular website or application. They could have made more effort to embrace the web standards that existed at the time, instead of implementing ad-hoc support for things like the MARQUEE tag and ActiveX plugins, which would have avoided the headache of having to support these esoteric features in later versions. The point is, though, none of this mattered at the time. Their focus – the driving force behind the early versions of Internet Explorer – was not to create a great application with first-class support for web standards. They were trying to kill Netscape Navigator and win market share – and it worked.

Q: Let’s imagine someone is going to introduce an API. They collect some requirements, propose a version, and get feedback. That’s a rather simple and straightforward process. But are there any hidden obstacles down the road?

Always! Requirements are going to change – in fact, one of the biggest mistakes you can make is to try and anticipate those changes and make your design ‘future-proof’. Sometimes that pays off, but what mostly happens is that you end up with a much more complicated design purely because you’re trying to anticipate those future changes. Those obstacles are often things outside your control. There’s a change to the law that means you need to expose certain data in a different way. There’s a change to one of the other systems in your organization, or one of your cloud hosting providers announces that they’re deprecating a particular feature that you were relying on.

The best thing to do is to identify something simple and usable, ship it, and get as quickly as you can to a point where your API is stable, there’s no outstanding technical debt, and your team is free to move on to the next thing. That way when you do encounter one of those ‘hidden obstacles’, you have a stable codebase to use as a basis for your solution, and you have a team who have the time and the bandwidth to deal with it. And if by some stroke of luck you don’t hit any hidden obstacles, then you just move on to the next thing on your backlog.

Q: To continue with API design: we’ve released v1.0 of our API and now v1.1 is approaching. I believe many of us have noticed URLs like http://example.com/v1/test and http://example.com/v1.1/test. What are the best practices (a couple of points) you can think of that can help a developer design a good v1.1 API with respect to v1.0?

It’s worth reading up on the concept of semantic versioning (SemVer), and taking the time to really understand the distinction between major, minor and patch versions. SemVer says that you shouldn’t introduce any breaking changes between version x.0 and version x.1, so the most important thing is to understand what would constitute a breaking change for your particular API.

If you’re working with HTTP APIs that return JSON, for example, a typical non-breaking change would be adding a new data field to one of your resources. Clients that are using version 1.1 and are expecting to see this additional field can take advantage of it, whereas clients that are still using version 1.0 can just discard the unrecognised property.
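
For instance, here’s a minimal sketch (the class and property names are hypothetical) of why that’s safe for a tolerant client. Json.NET, like most JSON serializers, simply skips any properties it can’t map onto the target type:

using System;
using Newtonsoft.Json;

// Hypothetical v1.0 client-side model: it only knows about the original fields.
public class HolidaySummary
{
    public string Resort { get; set; }
    public string CountryCode { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // A v1.1 response that now includes an extra 'minAltitude' property.
        var json = @"{ ""resort"": ""Val Thorens"", ""countryCode"": ""FR"", ""minAltitude"": 2300 }";

        // Json.NET discards the unrecognised property by default, so the
        // v1.0 client keeps working without any changes.
        var holiday = JsonConvert.DeserializeObject<HolidaySummary>(json);
        Console.WriteLine($"{holiday.Resort} ({holiday.CountryCode})");
    }
}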

There’s a related question about how you should manage versioning in your APIs. One very common solution is to put the version number in the URL via routing – api.example.com/v1/ as opposed to api.example.com/v1.1/ – but if you’re adhering to the constraints of a RESTful system, you really need to understand whether the change in version represents a change in the underlying resource or just its representation. Remember that a URI is a Uniform Resource Identifier, and so we really shouldn’t be changing the URI that we use to refer to the same resource.

For example, say we have a resource api.example.com/images/monalisa. We could request that resource as a JPEG (Accept: image/jpeg), or as a PNG (Accept: image/png), or ask the server if it has a plain-text representation of the resource (Accept: text/plain) – but they’re all just different representations of the same underlying resource, and so they should all use the same URI.
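
To make that concrete, here’s a minimal sketch of a client negotiating different representations of that hypothetical resource. The URI never changes; only the Accept header does:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class Program
{
    public static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // One resource, one URI...
            var uri = "https://api.example.com/images/monalisa";

            // ...but several different representations requested via the Accept header.
            foreach (var mediaType in new[] { "image/jpeg", "image/png", "text/plain" })
            {
                var request = new HttpRequestMessage(HttpMethod.Get, uri);
                request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue(mediaType));

                var response = await client.SendAsync(request);

                // 200 OK means the server can produce that representation;
                // 406 Not Acceptable means it can't.
                Console.WriteLine($"{mediaType}: {(int)response.StatusCode} {response.Content.Headers.ContentType}");
            }
        }
    }
}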

If – say – you’ve completely replaced the CRM system used by your organization, and so “version 1” of a customer represents a record used in the old CRM system and “version 2” represents that same customer after they’ve been migrated onto a completely new platform, then it probably makes sense to treat them as separate resources and give them different URIs.

Versioning is hard, though. The easiest thing to do is never change anything.

Q: .NET Core - what do you think about its API?

When .NET Core was first announced in 2015, back when it was going to be called .NET Core 5.0, it was going to be a really stripped-down, lightweight alternative to the .NET Framework and the Common Language Runtime. That was an excellent idea in terms of making it easier to port .NET Core to different platforms, but it also left a sizable gap between the API exposed by .NET Core and the ‘standard’ .NET/CLR API that most applications are built against.

I believe – and this is just my interpretation based on what I’ve read and people I’ve talked to – that the idea was that .NET Core would provide the fundamental building blocks. It would provide things like threading, filesystem access, network access, and then a combination of platform vendors and the open source community would develop the modules and packages that would eventually match the level of functionality offered by something like the Java Class Library or the .NET Framework. That’s a great idea in principle, but it also creates a chicken-and-egg situation: people won’t build libraries for a platform with no users, but nobody wants to use a platform that doesn’t have any libraries.

So, the decision was made that cross-platform .NET needed a standard API specification that would provide the libraries that users and application developers expected to be available on the various supported platforms. This is .NET Standard 2.0, which is already fully supported by the .NET Framework 4.6.1 and will be supported in the next versions of .NET Core and Xamarin. Of course, .NET Core 1.1 is out, and works just fine, and you can use it right now to build web apps in C# regardless of whether you’re running Windows or Linux or macOS, which is pretty awesome – but I think the next release of .NET Core is going to be the trigger for a lot of framework and package developers to migrate their projects across to .NET Core, which in turn should make it easier for developers and organizations to migrate their own applications.

Tram on Moscow Gate Square in Saint Petersburg. freegoodphotos.com /  Photo by Dinamik.

Q: API flexibility vs. API precision. One can design a method API so that it accepts many different types of values – that’s flexibility. We can also design a method API with lots of rules on its input parameters. Both approaches are valid. Where is the boundary between these approaches? When should I make a “strict” API and when should I make a more “flexible” design? Don’t forget that you should take backwards compatibility into account.

By implementing an API where the method signatures are flexible, all you’re doing is pushing the complexity to somewhere else in your stack. Say we’re building an API for finding skiing holidays, and we have a choice between

DoSearch(SearchCriteria criteria)

and

DoSearch(string resortName, string countryCode, int minAltitude, int maxDistanceToSkiLift)

One of those methods is pretty easily extensible, because we can extend the definition of the SearchCriteria object without changing the method signature – but we’re still changing the behaviour of the system, we’re just not changing that particular method. By contrast, we could add new arguments to our DoSearch method signature, but if we’re working in a language like C# where you can provide default argument values, you won’t break anything by doing that as long as you provide sensible defaults for the new arguments.
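
To make that second approach concrete, here’s a minimal sketch (the parameter names are just illustrative) of extending the method with C# optional parameters. One caveat worth knowing: default values are baked into the call site at compile time, so this kind of change is source-compatible rather than binary-compatible, and callers compiled against the old signature will need to be rebuilt.

public class SearchResults { /* ... */ }

public class HolidaySearchApi
{
    // The original version of this method took only the first two arguments.
    // The altitude and distance parameters were added later with sensible
    // defaults, so existing calls like DoSearch("Val Thorens", "FR") still compile.
    public SearchResults DoSearch(
        string resortName,
        string countryCode,
        int minAltitude = 0,
        int maxDistanceToSkiLift = int.MaxValue)
    {
        // ... run the actual search ...
        return new SearchResults();
    }
}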

At some point, though, you need to communicate to the API consumers what search parameters are accepted by your API, and there’s lots of ways to accomplish that. If you’re building a .NET API that’s installed as a NuGet package and used from within code, then using XML comments on your methods and properties is a great way to explain to your users what they need to specify when making calls to your API. If your API is an HTTP service, look at using hypermedia and formats like SIREN to define what parameter values and ranges are acceptable.
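
In the NuGet case, that documentation can live right on the method itself. Here’s a quick sketch of XML doc comments on the same hypothetical search method; as long as you ship the generated XML documentation file in your package, these show up as IntelliSense for anyone calling your API:

public class HolidaySearchApi
{
    /// <summary>
    /// Searches for skiing holidays matching the supplied criteria.
    /// </summary>
    /// <param name="resortName">Full or partial resort name, e.g. "Val Thorens".</param>
    /// <param name="countryCode">ISO 3166-1 alpha-2 country code, e.g. "FR".</param>
    /// <param name="minAltitude">Minimum resort altitude in metres; zero means any altitude.</param>
    /// <returns>The holidays matching the supplied criteria.</returns>
    public SearchResults DoSearch(string resortName, string countryCode, int minAltitude = 0)
    {
        // ...
        return new SearchResults();
    }
}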

I should add that I think within the next decade, we’re going to start seeing a whole different category of APIs powered by machine learning systems, where a lot of the conventional rules of API design won’t apply. It wouldn’t surprise me if we got an API for finding skiing holidays where you just specify what you want in natural language, so there’s not even a method signature – you just call something like DoSearch(“ski chalet, in France or Italy, 1400m or higher, that sleeps 12 people in 8 bedrooms, available from 18-25 January 2018”) – and the underlying system will work it all out for us. Those sorts of developments in machine learning are hugely exciting, but they’re also going to create a lot of interesting challenges for the developers and designers trying to incorporate them into our products and applications.


Thanks to Alexej Sommer for taking the time to set this up (and for translating my answers into Russian – Спасибо!), and if you're at DotNext next week and want to chat about APIs, hypermedia or any of the stuff in the interview, please come and say hi!

Thursday, 27 April 2017

It Works On My Machine!

I saw on Twitter this morning that Derick Bailey is looking for people to share their own “works on my machine” stories… and halfway through filling out his survey, I decided this would probably be much more fun if I nicked his survey questions and turned them into headings in a blog post. Mainly ‘cos writing for an audience appeals to me far more than filling out a survey – but Derick (and anyone else who cares?) is very welcome to use anything in this post as part of their own research.

What's typically going through your head when you say "works on my machine" to a QA person or another developer?

I think the interesting question here is actually – what did somebody say to you that caused you to respond with “it works on my machine”?

Here’s three fairly common scenarios:

Q: This code throws an exception when we run it on the staging environment…
A: It works on my machine.

Q: How are you getting on with that improvement to the search algorithm?
A: It works on my machine.

Q: Did you get anywhere with that really weird solution to the mapping problem that Chris found on StackOverflow?
A: It works on my machine…

See how in each case, there’s a sort of implicit subtext? See, I think we all understand that there’s often quite a big difference between solving a problem and delivering a solution. In almost all development scenarios, the first step is to get the code you’re working on running locally and doing the right thing on your development system – and often to do that, we have to hack things around. Running web servers as a local admin user. Granting “Everyone Full Control” of all the files in the media folder. Manually tweaking registry keys, installing DLLs, reusing credentials for APIs and external services – there’s a whole lot of stuff that has to happen as well as just writing some code, but most of the time, the code is the focus and the rest just feels like friction.

So… to answer the original question, when I tell somebody something works on my machine, I’m thinking “ok, what else, other than my own code, do we need to do to deploy the solution, close the ticket and move on?” When you’re working on spikes and prototypes, that’s a natural part of the conversation. If you’ve submitted a ticket to QA for final pre-release testing and it doesn’t work, there’s naturally a bit of tension because implicit in the conversation is the fact that somebody thinks you haven’t done your job properly – and “well it works on my machine” can come across as defensive.

Can you share a story about a time when you have said, or thought, this?

Ah, dozens. The most common example for me is when I’m making a change that spans code in 4-5 different projects, so I’ve checked them all out… in one project I’ve added a database column, in another there’s some new HTTP request routing, in the third there’s a new message queue subscriber, and then there’s the new feature code that relies on all of those changes to work properly. And it works on MY machine because locally I’ve already made all of those changes, but to get it working anywhere else, all five projects need to be reviewed, built, packaged, configured and deployed onto the same environment. Or, for another developer to work on it, they need to check out five specific branches from five specific projects and then probably run a couple of configuration scripts as well – and there’s invariably one or two little things that didn’t require any explicit configuration on my own workstation, but then it turns out a teammate has enabled WebDAV publishing under Windows Programs and Features and so their IIS configuration isn’t the same as mine.

How do you typically feel when someone says, "works on my machine," to you?

First off, I’m happy. See, I’ve worked with a very small number of developers who didn’t bother doing even this most basic validation of the work they were doing. They’d commit something, open a pull request and ask for a code review, and you’d look at what they did and think “that’s a bit weird”, so you’d wander over and say “hey, can you show me how this feature works?” – and they’d actually say “I don’t know, I can’t run that project”.

“Works on my machine” at least indicates that they’ve got all the code checked out, they’ve compiled it, and they’ve actually got it running. That’s a good start. That’s something you can work with. And at that point, it’s a great opportunity to explain things like configuration management or deployment scripts.

Can you share a story about a time when someone said this to you?

We had a problem a few weeks ago where we ported an old project from VS2010 to VS2015, and TeamCity wouldn’t build it properly. An absolute textbook example – another developer comes to me and says “well, it works on my machine; it builds fine and I can run all the tests, but TeamCity won’t run any of the unit tests and so the build keeps failing.” Turned out to be a rogue wildcard somewhere in the TeamCity build config settings that was causing it to pick up unit test DLLs from the \obj\ folders instead of \bin\. Which, of course, doesn’t happen when you’re running tests using Resharper or NCrunch, because those tools are smart enough to understand path conventions.

What are the 3 largest causes of someone saying "works on my machine"?

In my experience? The biggest one is dependencies between multiple projects. The “feature”, the unit of business value we’re trying to deliver, requires changes across several different projects and so those changes need to be coordinated and deployed together in order to run and test the new feature, and it’s very easy to miss a step when you’re trying to capture and package all of those code and configuration dependencies.

Second biggest would have to be mismatches between developer environments. We have some people running Windows 8.1, some people running Windows 10, some people working via remote desktop onto virtual machines in the cloud, and then all the various quirks of people’s individual OS configurations like the aforementioned WebDAV publishing support.

Third? Probably data. Lookup tables, test records, and code that’s brittle because it depends on specific records existing in a particular state, and when you check out the code it doesn’t include the migration steps or SQL scripts that are necessary to set up those records.

Actually, I’m going to go for four, because the one that bites me all the time – probably once a week – is that when you add a new file to a Visual Studio solution, it doesn’t save the .csproj file by default. The new file gets added to the repo and pushed up to GitHub, but the project reference to that new file still only exists on your local machine. Sometimes it’ll crash the build; sometimes – if it’s an image or a script file or something – it’ll build, pass tests, deploy, and then fail on the test server because the new file isn’t included in the .csproj and so wasn’t included when the deployment package was built. If you think your organisation doesn’t have this problem, search your GitHub repo for commit messages including the phrase “csproj file”…

How do you combat the "works on my machine" problem?

There’s a couple of things that have definitely made a big difference to our team at Spotlight. One is setting up an internal NuGet server (we’re running Klondike), and making sure that if your project code references DLLs or any other static components, those dependencies are managed as NuGet packages. That way the first time you build the project, it’ll download all of those obscure DLLs for you instead of waiting for you to get an error message, look it up on the wiki, etc.

The other is giving everybody the ability to build and deploy pre-release packages. We have TeamCity and GitHub configured so that as soon as you open a pull request, TeamCity will try and build a deployable package based on the merge head of your feature branch. This means you, the developer, can get packaged builds of your work in progress, deploy them onto one of our testing environments and see for yourself whether your changes are going to work or not. Which means you get the chance to fix the bugs and configuration problems before passing anything on to anyone else to review or test.


Oh, and we have something called escrow beers. If you want to introduce a new tool, dependency, language or something into one of our projects, you have to put a six-pack of beer (or a box of cookies or something similarly delicious) in escrow, in the kitchen. Put a post-it note on it saying what it’s for – and then when some poor developer is working late to get a feature out and they discover that they need to install grunt or gulp or bower or yeoman or FAKE or PSake or whatever, there’s goodies in the fridge that will help. That doesn’t necessarily inhibit the adoption of new tech, but having to go out and buy beer or cookies makes people stop to think about how their changes might affect their teammates, and so they’ll add some checkout scripts to get the new thing working, or document it on the wiki, or organise a demo to show everyone what they need to know. It's also funny how often somebody thinks a new tool or language is ABSOLUTELY TOTALLY AMAZING and there's no way we can possibly live without it… except it's not actually quite amazing enough to justify walking to a shop at lunchtime and buying a box of cookies.

So there you go… more than you ever wanted to know about code that works on my machine. Thanks again to Derick Bailey for the idea – and just to be clear, you’re welcome to use it, and please credit me by name if you use the information provided here in any follow-up posts or other material.

Monday, 24 April 2017

Robert M. Pirsig on "Stuckness"

Robert M. Pirsig, the author of "Zen and the Art of Motorcycle Maintenance", died today aged 88. I've read and re-read that book many times over the years. As somebody who has always found tranquillity in tinkering, I found that "Zen" evokes, better than anything else I've read, that meditative, transcendental state one can achieve whilst doing mechanical maintenance… and in other passages it captures perfectly the awful frustration that can only be experienced when a perfectly simple job turns into a protracted bout of yak-shaving.


Of all the passages in the book, the one that has stayed with me the most is the one I've included below, on the subject of 'stuckness'. After countless evenings spent tweaking and tuning mountain bikes in my dad's garage, experiencing first-hand the frustration of an £800 mountain bike rendered completely useless by stripping the head off a 50p bolt, this passage resonated with me more than anything I think I've ever read. I still think of it frequently, normally when I find myself stuck on some hitherto inconsequential detail of a software project that's somehow managed to derail the entire team for days at a time. The book is excellent, and if you haven't read it I highly recommend it, but the passage in question is here. I hope Mr Pirsig's lawyers don't mind. :)

Stuckness. That's what I want to talk about today.

A screw sticks, for example, on a side cover assembly. You check the manual to see if there might be any special cause for this screw to come off so hard, but all it says is "Remove side cover plate" in that wonderful terse technical style that never tells you what you want to know. There's no earlier procedure left undone that might cause the cover screws to stick.

If you're experienced you'd probably apply a penetrating liquid and an impact driver at this point. But suppose you're inexperienced and you attach a self-locking plier wrench to the shank of your screwdriver and really twist it hard, a procedure you've had success with in the past, but which this time succeeds only in tearing the slot of the screw.

Your mind was already thinking ahead to what you would do when the cover plate was off, and so it takes a little time to realize that this irritating minor annoyance of a torn screw slot isn't just irritating and minor. You're stuck. Stopped. Terminated. It's absolutely stopped you from fixing the motorcycle.

This isn't a rare scene in science or technology. This is the commonest scene of all. Just plain stuck. In traditional maintenance this is the worst of all moments, so bad that you have avoided even thinking about it before you come to it.

The book's no good to you now. Neither is scientific reason. You don't need any scientific experiments to find out what's wrong. It's obvious what's wrong. What you need is an hypothesis for how you're going to get that slotless screw out of there and scientific method doesn't provide any of these hypotheses. It operates only after they're around.

This is the zero moment of consciousness. Stuck. No answer. Honked. Kaput. It's a miserable experience emotionally. You're losing time. You're incompetent. You don't know what you're doing. You should be ashamed of yourself. You should take the machine to a real mechanic who knows how to figure these things out.

It's normal at this point for the fear-anger syndrome to take over and make you want to hammer on that side plate with a chisel, to pound it off with a sledge if necessary. You think about it, and the more you think about it the more you're inclined to take the whole machine to a high bridge and drop it off. It's just outrageous that a tiny little slot of a screw can defeat you so totally.

What you're up against is the great unknown, the void of all Western thought. You need some ideas, some hypotheses. Traditional scientific method, unfortunately, has never quite gotten around to say exactly where to pick up more of these hypotheses. Traditional scientific method has always been at the very best, 20-20 hindsight. It's good for seeing where you've been. It's good for testing the truth of what you think you know, but it can't tell you where you ought to go, unless where you ought to go is a continuation of where you were going in the past. Creativity, originality, inventiveness, intuition, imagination..."unstuckness," in other words...are completely outside its domain.

We're still stuck on that screw and the only way it's going to get unstuck is by abandoning further examination of the screw according to traditional scientific method. That won't work. What we have to do is examine traditional scientific method in the light of that stuck screw.

We have been looking at that screw "objectively." According to the doctrine of "objectivity," which is integral with traditional scientific method, what we like or don't like about that screw has nothing to do with our correct thinking. We should not evaluate what we see. We should keep our mind a blank tablet which nature fills for us, and then reason disinterestedly from the facts we observe.

But when we stop and think about it disinterestedly, in terms of this stuck screw, we begin to see that this whole idea of disinterested observation is silly. Where are those facts? What are we going to observe disinterestedly? The torn slot? The immovable side cover plate? The color of the paint job? The speedometer? The sissy bar? As Poincaré would have said, there are an infinite number of facts about the motorcycle, and the right ones don't just dance up and introduce themselves. The right facts, the ones we really need, are not only passive, they are damned elusive, and we're not going to just sit back and "observe" them. We're going to have to be in there looking for them or we're going to be here a long time. Forever. As Poincaré pointed out, there must be a subliminal choice of what facts we observe.

The difference between a good mechanic and a bad one, like the difference between a good mathematician and a bad one, is precisely this ability to select the good facts from the bad ones on the basis of quality. He has to care! This is an ability about which formal traditional scientific method has nothing to say. It's long past time to take a closer look at this qualitative preselection of facts which has seemed so scrupulously ignored by those who make so much of these facts after they are "observed." I think that it will be found that a formal acknowledgment of the role of Quality in the scientific process doesn't destroy the empirical vision at all. It expands it, strengthens it and brings it far closer to actual scientific practice.

I think the basic fault that underlies the problem of stuckness is traditional rationality's insistence upon "objectivity," a doctrine that there is a divided reality of subject and object. For true science to take place these must be rigidly separate from each other. "You are the mechanic. There is the motorcycle. You are forever apart from one another. You do this to it. You do that to it. These will be the results."

This eternally dualistic subject-object way of approaching the motorcycle sounds right to us because we're used to it. But it's not right. It's always been an artificial interpretation superimposed on reality. It's never been reality itself. When this duality is completely accepted a certain nondivided relationship between the mechanic and motorcycle, a craftsmanlike feeling for the work, is destroyed. When traditional rationality divides the world into subjects and objects it shuts out Quality, and when you're really stuck it's Quality, not any subjects or objects, that tells you where you ought to go.

By returning our attention to Quality it is hoped that we can get technological work out of the noncaring subject-object dualism and back into craftsmanlike self-involved reality again, which will reveal to us the facts we need when we are stuck.

Let's consider a reevaluation of the situation in which we assume that the stuckness now occurring, the zero of consciousness, isn't the worst of all possible situations, but the best possible situation you could be in. After all, it's exactly this stuckness that Zen Buddhists go to so much trouble to induce; through koans, deep breathing, sitting still and the like. Your mind is empty, you have a "hollow-flexible" attitude of "beginner's mind." You're right at the front end of the train of knowledge, at the track of reality itself. Consider, for a change, that this is a moment to be not feared but cultivated. If your mind is truly, profoundly stuck, then you may be much better off than when it was loaded with ideas.

The solution to the problem often at first seems unimportant or undesirable, but the state of stuckness allows it, in time, to assume its true importance. It seemed small because your previous rigid evaluation which led to the stuckness made it small.

But now consider the fact that no matter how hard you try to hang on to it, this stuckness is bound to disappear. Your mind will naturally and freely move toward a solution. Unless you are a real master at staying stuck you can't prevent this. The fear of stuckness is needless because the longer you stay stuck the more you see the Quality...reality that gets you unstuck every time. What's really been getting you stuck is the running from the stuckness through the cars of your train of knowledge looking for a solution that is out in front of the train.

Stuckness shouldn't be avoided. It's the psychic predecessor of all real understanding. An egoless acceptance of stuckness is a key to an understanding of all Quality, in mechanical work as in other endeavors. It's this understanding of Quality as revealed by stuckness which so often makes self-taught mechanics so superior to institute-trained men who have learned how to handle everything except a new situation.

Normally screws are so cheap and small and simple you think of them as unimportant. But now, as your Quality awareness becomes stronger, you realize that this one, individual, particular screw is neither cheap nor small nor unimportant. Right now this screw is worth exactly the selling price of the whole motorcycle, because the motorcycle is actually valueless until you get the screw out. With this reevaluation of the screw comes a willingness to expand your knowledge of it.

- from "Zen and the Art of Motorcycle Maintenance" by Robert M Pirsig (September 6, 1928 – April 24, 2017)