Monday, 24 April 2017

Robert M. Pirsig on "Stuckness"

Robert M. Pirsig, the author of "Zen and the Art of Motorcycle Maintenance", died today aged 88. I've read and re-read that book many times over the years. As somebody who has always found tranquillity in tinkering, I found that in some passages "Zen" evokes, better than anything else I've read, that meditative, transcendental state one can achieve whilst doing mechanical maintenance… and in others, it captures perfectly the awful frustration that can only be experienced when a perfectly simple job turns into a protracted bout of yak-shaving.


Of all the passages in the book, the one that has stayed with me the most is the one I've included below, on the subject of 'stuckness'. After countless evenings spent tweaking and tuning mountain bikes in my dad's garage, experiencing first-hand the frustration of an £800 mountain bike rendered completely useless by stripping the head off a 50p bolt, this passage resonated with me more than anything I think I've ever read. I still think of it frequently, normally when I find myself stuck on some hitherto inconsequential detail of a software project that's somehow managed to derail the entire team for days at a time. The book is excellent, and if you haven't read it I highly recommend it, but the passage in question is here. I hope Mr Pirsig's lawyers don't mind. :)

Stuckness. That's what I want to talk about today.

A screw sticks, for example, on a side cover assembly. You check the manual to see if there might be any special cause for this screw to come off so hard, but all it says is "Remove side cover plate" in that wonderful terse technical style that never tells you what you want to know. There's no earlier procedure left undone that might cause the cover screws to stick.

If you're experienced you'd probably apply a penetrating liquid and an impact driver at this point. But suppose you're inexperienced and you attach a self-locking plier wrench to the shank of your screwdriver and really twist it hard, a procedure you've had success with in the past, but which this time succeeds only in tearing the slot of the screw.

Your mind was already thinking ahead to what you would do when the cover plate was off, and so it takes a little time to realize that this irritating minor annoyance of a torn screw slot isn't just irritating and minor. You're stuck. Stopped. Terminated. It's absolutely stopped you from fixing the motorcycle.

This isn't a rare scene in science or technology. This is the commonest scene of all. Just plain stuck. In traditional maintenance this is the worst of all moments, so bad that you have avoided even thinking about it before you come to it.

The book's no good to you now. Neither is scientific reason. You don't need any scientific experiments to find out what's wrong. It's obvious what's wrong. What you need is an hypothesis for how you're going to get that slotless screw out of there and scientific method doesn't provide any of these hypotheses. It operates only after they're around.

This is the zero moment of consciousness. Stuck. No answer. Honked. Kaput. It's a miserable experience emotionally. You're losing time. You're incompetent. You don't know what you're doing. You should be ashamed of yourself. You should take the machine to a real mechanic who knows how to figure these things out.

It's normal at this point for the fear-anger syndrome to take over and make you want to hammer on that side plate with a chisel, to pound it off with a sledge if necessary. You think about it, and the more you think about it the more you're inclined to take the whole machine to a high bridge and drop it off. It's just outrageous that a tiny little slot of a screw can defeat you so totally.

What you're up against is the great unknown, the void of all Western thought. You need some ideas, some hypotheses. Traditional scientific method, unfortunately, has never quite gotten around to saying exactly where to pick up more of these hypotheses. Traditional scientific method has always been at the very best, 20-20 hindsight. It's good for seeing where you've been. It's good for testing the truth of what you think you know, but it can't tell you where you ought to go, unless where you ought to go is a continuation of where you were going in the past. Creativity, originality, inventiveness, intuition, imagination..."unstuckness," in other words...are completely outside its domain.

We're still stuck on that screw and the only way it's going to get unstuck is by abandoning further examination of the screw according to traditional scientific method. That won't work. What we have to do is examine traditional scientific method in the light of that stuck screw.

We have been looking at that screw "objectively." According to the doctrine of "objectivity," which is integral with traditional scientific method, what we like or don't like about that screw has nothing to do with our correct thinking. We should not evaluate what we see. We should keep our mind a blank tablet which nature fills for us, and then reason disinterestedly from the facts we observe.

But when we stop and think about it disinterestedly, in terms of this stuck screw, we begin to see that this whole idea of disinterested observation is silly. Where are those facts? What are we going to observe disinterestedly? The torn slot? The immovable side cover plate? The color of the paint job? The speedometer? The sissy bar? As Poincaré would have said, there are an infinite number of facts about the motorcycle, and the right ones don't just dance up and introduce themselves. The right facts, the ones we really need, are not only passive, they are damned elusive, and we're not going to just sit back and "observe" them. We're going to have to be in there looking for them or we're going to be here a long time. Forever. As Poincaré pointed out, there must be a subliminal choice of what facts we observe.

The difference between a good mechanic and a bad one, like the difference between a good mathematician and a bad one, is precisely this ability to select the good facts from the bad ones on the basis of quality. He has to care! This is an ability about which formal traditional scientific method has nothing to say. It's long past time to take a closer look at this qualitative preselection of facts which has seemed so scrupulously ignored by those who make so much of these facts after they are "observed." I think that it will be found that a formal acknowledgment of the role of Quality in the scientific process doesn't destroy the empirical vision at all. It expands it, strengthens it and brings it far closer to actual scientific practice.

I think the basic fault that underlies the problem of stuckness is traditional rationality's insistence upon "objectivity," a doctrine that there is a divided reality of subject and object. For true science to take place these must be rigidly separate from each other. "You are the mechanic. There is the motorcycle. You are forever apart from one another. You do this to it. You do that to it. These will be the results."

This eternally dualistic subject-object way of approaching the motorcycle sounds right to us because we're used to it. But it's not right. It's always been an artificial interpretation superimposed on reality. It's never been reality itself. When this duality is completely accepted a certain nondivided relationship between the mechanic and motorcycle, a craftsmanlike feeling for the work, is destroyed. When traditional rationality divides the world into subjects and objects it shuts out Quality, and when you're really stuck it's Quality, not any subjects or objects, that tells you where you ought to go.

By returning our attention to Quality it is hoped that we can get technological work out of the noncaring subject-object dualism and back into craftsmanlike self-involved reality again, which will reveal to us the facts we need when we are stuck.

Let's consider a reevaluation of the situation in which we assume that the stuckness now occurring, the zero of consciousness, isn't the worst of all possible situations, but the best possible situation you could be in. After all, it's exactly this stuckness that Zen Buddhists go to so much trouble to induce; through koans, deep breathing, sitting still and the like. Your mind is empty, you have a "hollow-flexible" attitude of "beginner's mind." You're right at the front end of the train of knowledge, at the track of reality itself. Consider, for a change, that this is a moment to be not feared but cultivated. If your mind is truly, profoundly stuck, then you may be much better off than when it was loaded with ideas.

The solution to the problem often at first seems unimportant or undesirable, but the state of stuckness allows it, in time, to assume its true importance. It seemed small because your previous rigid evaluation which led to the stuckness made it small.

But now consider the fact that no matter how hard you try to hang on to it, this stuckness is bound to disappear. Your mind will naturally and freely move toward a solution. Unless you are a real master at staying stuck you can't prevent this. The fear of stuckness is needless because the longer you stay stuck the more you see the Quality...reality that gets you unstuck every time. What's really been getting you stuck is the running from the stuckness through the cars of your train of knowledge looking for a solution that is out in front of the train.

Stuckness shouldn't be avoided. It's the psychic predecessor of all real understanding. An egoless acceptance of stuckness is a key to an understanding of all Quality, in mechanical work as in other endeavors. It's this understanding of Quality as revealed by stuckness which so often makes self-taught mechanics so superior to institute-trained men who have learned how to handle everything except a new situation.

Normally screws are so cheap and small and simple you think of them as unimportant. But now, as your Quality awareness becomes stronger, you realize that this one, individual, particular screw is neither cheap nor small nor unimportant. Right now this screw is worth exactly the selling price of the whole motorcycle, because the motorcycle is actually valueless until you get the screw out. With this reevaluation of the screw comes a willingness to expand your knowledge of it.

- from "Zen and the Art of Motorcycle Maintenance" by Robert M. Pirsig (September 6, 1928 – April 24, 2017)

Friday, 21 April 2017

There’s a problem with the phalange!

Yesterday I was throwing together a quick Entity Framework prototype to inspect and wrangle some data held in one of our legacy databases. I’m using the Entity Framework “Code first from Database” approach, where you use the tooling to generate your initial model for you but thereafter you modify it by hand. One of the tables I’m working with here is called LookupRanges, so when I generated a bunch of entities and DbSet<> mappings, I was a bit surprised when I ended up with a class called LookupRanx in my new model.

It took a minute or two to ascertain that yes, Entity Framework had mapped my LookupRanges (plural) table name onto a class called LookupRanx. But where on earth did that ‘Ranx’ come from? My hunch here is that somebody who worked on this pluralization code remembered that the English word phalanx has the plural form phalanges – and so implemented a rule that says ‘any word ending in –anges should be singularized to –anx’. Out of curiosity, I dug out my huge ASCII file of English words, found all the words ending in *anges, and hacked up a quick SQL script to create tables named for all these words so I could run them through EF and see what class names were generated.
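
If you fancy reproducing the experiment, generating the SQL only takes a few lines of C# – something along these lines (the word-list path here is made up, and you'd want to sanitise the names before running the output against anything you care about):

using System;
using System.IO;
using System.Linq;

class MakeAngesTables {
  static void Main() {
    // find every word ending in "anges" and emit a CREATE TABLE statement for it,
    // so the whole batch can be run against a scratch database and reverse-engineered with EF
    var words = File.ReadLines(@"C:\temp\english-words.txt")
      .Where(word => word.EndsWith("anges", StringComparison.OrdinalIgnoreCase));

    foreach (var word in words) {
      Console.WriteLine($"CREATE TABLE [{word}] ([Id] int NOT NULL PRIMARY KEY);");
    }
  }
}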

Well, it gets ‘changes’ and ‘phalanges’ right – and gets literally every other case wrong.

[Screenshot: the class names Entity Framework generated for each of the *anges tables]

Now this, to me, is an outstanding example of one of the biggest problems in software development… smart people like working on things that are interesting, and will frequently spend time doing something that’s interesting instead of something that’s important.

Writing a library that can singularize and pluralize English words is fascinating. It's a never-ending problem with dozens of rules and hundreds of edge cases, and you learn a lot of weird and cool esoteric facts about language and etymology whilst you're doing it. But in this instance, something that started out as a good idea ("hey – wouldn't it be cool if the model generator would convert plural table names to singular class names?") got bogged down in edge cases ("is the plural of 'tableau' really 'tableaux'?") and ended up with a bizarre bug, because one of those so-called edge cases actually broke the default – and entirely correct – behaviour.

First, phalanges is extremely unlikely to ever show up as the name of a table in an Entity Framework database model. I can think of dozens of real-world scenarios where you'd end up with a table name ending with –Ranges, –Exchanges or –Interchanges, but I'm honestly struggling to think of any remotely likely scenario where you have a Phalanges table in a SQL Server database. This is the kind of thing where the ticket or the user story probably just says 'implement pluralization', and then there's no sub-prioritization or further analysis about just how much pluralization needs to be implemented.

Second – it's kind of a stupid edge case. We're not trying to win points on University Challenge here, we're building software. In contemporary English, phalange is an acceptable singular form, and phalanxes is an acceptable plural form. There's no reason at all why they needed to implement support for this particular edge case.

And third: if you really found yourself in a scenario where you had to map the Phalanges table to the Phalanx class, you can just rename it. It's easy. Visual Studio has first-class support for this kind of refactoring.

And, as if that wasn’t confusing enough, there’s actually source code for an EnglishPluralizationService.cs on Microsoft’s GitHub repository. Somebody obviously had a lot of fun building this, tracking down all those bizarre little edge cases like seraph/seraphim, hippopotamus/hippopotami – but according to this implementation, the plural of phalanx is…  go on. Go and take a look.

Now, though, I’m going to eat an oranx and check my email. Using Microsoft Exchanx, of course.

Monday, 3 April 2017

The Pursuit of APIness: The Secret to Happy Code

I'll be giving a new talk at the London.NET User Group meetup here in London next Tuesday, based on an idea I've had rattling around for a decade or more now. See, it seems to me that over the course of my career, there's been a strong correlation between happy developers and successful projects. I can't think of any examples where a miserable death-march project has resulted in high-quality working software, and I can't think of too many instances where a group of happy, motivated developers has failed to deliver a working product. I've been thinking around this idea for a while, and started looking at it in terms of user experience – both the user experience that we as developers are creating for our end users, but also the 'user experience' that's being provided by the libraries, frameworks and tools that we're using to do our jobs. Here's the talk synopsis:

We spend our lives working with systems created by other people. From the UI on our phones to the cloud infrastructure that runs so much of the modern internet, these interactions are fundamental to our experience of technology - as engineers, as developers, as users - and user experiences are viral. Great user experiences lead to happy, productive people; bad experiences lead to frustration, inefficiency and misery.

Whether we realise it or not, when we create software, we are creating user experiences. People are going to interact with our code. Maybe those people are end users; maybe they're the other developers on your team. Maybe they're the mobile app team who are working with your API, or the engineers who are on call the night something goes wrong. These may be radically different use cases, but there's one powerful principle that works across all these scenarios and more. In this talk, we'll draw on ideas and insight from user experience, API design, psychology and education to show how you can incorporate this principle, known as discoverability, into every layer of your application. We'll look at some real-world systems, and we'll discuss how discoverability works with different interaction paradigms. Because, whether you're building databases, class libraries, hypermedia APIs or mobile apps, sooner or later somebody else is going to work with your code - and when they do, wouldn't it be great if they went away afterwards with a smile on their face?

If that sounds interesting (or if you think I'm completely wrong and you want to come along and heckle!), sign up at the SkillsMatter website and come along on Tuesday 11th. Hope to see you there.

Wednesday, 22 March 2017

Goodbye ECMAScript; hello UKMAScript!

The UK government has announced it will trigger Article 50 on March 29th, beginning the two-year process of the United Kingdom leaving the European Union.


That’s right - it will no longer be legal for British web developers to run ECMAScript, since the ECMAScript specification is controlled by the European Computer Manufacturers’ Association. We’re happy to announce that as of today, top engineers are starting work on a superior British programming language called UKMAScript.

UKMAScript will extend the core language specification with the following enhancements, which we believe will provide a massive boost to the UK tech industry and offset the immeasurable damage caused when all our EU colleagues and collaborators decide to exercise their freedom of movement.

  • Along with NaN and Infinity, UKMAScript will support a new primitive numeric value called MoneyForTheNhs, whose value is defined to be exactly 3.5×10⁸ until it’s used as an argument to any function, at which point its value will be silently changed to zero after the function has returned.
  • The Math.round() method will behave as before, except Math.round(0.52) will now return Number.MAX_VALUE. Math.round(0.48) will return a new constant Number.TRAITORS, and any attempt to use this value in calculations will throw a TreasonError.
  • A new “illogical implication” operator !#> will be introduced. This is syntactically similar to the notion of logical implication in Boolean algebra, but designed to allow the scope of arguments to be massively exaggerated. For example, the statement (leave_eu !#> leave_customs_union && leave_eea) will implicitly bind the values of leave_customs_union and leave_eea to the value of leave_eu, despite this dependency not being expressed anywhere else in the codebase.
  • Along with null and undefined, a new language primitive brexit will be introduced. This has the special equality semantics (brexit == brexit) == undefined. typeof(brexit) will return the value “hard”, and attempting to evaluate brexit.valueOf() will throw a TreasonError.
  • UKMAScript features a new parallel programming paradigm implemented via the Referendum.Invoke() method. This causes a thread to break away from the main sequence of program control and attempt to continue execution despite no longer having access to any processing capabilities or shared resources of the host system. Note that if a thread A has called Referendum.Invoke(), any child process B attempting to call Referendum.Invoke() will be summarily ignored by process A on the grounds that it’s clearly developed a fault.

UKMAScript ships with no standard library or runtime, but the UKMAScript language committee assures us that platform vendors are lining up to deliver first-class support for the new language.

To further promote the popularity of UKMAScript, the only alternative permitted once Article 50 has been invoked is a new language called LABOUR, which takes many of the core language principles of COBOL-64 but is only accessible using the GNU/Corbyn compiler. This compiler has a tremendously exciting installation routine but then doesn’t actually do anything other than occasionally create internal process deadlocks for no reason.

Tuesday, 7 March 2017

It's a bug! It's a feature! It's… a limitation of the fundamental design of your test framework?

As some of you probably know, I'm a big fan of NCrunch. When I'm coding in C#, NCrunch gets a CPU core and a whole screen to itself (yes, I don't really write code on a system that looks like this) and sits there quietly running all my tests, all the time, and telling me the second I break anything.

I'm also a big fan of testing things that are as close to production behaviour as you can get. Unit tests are great for informing the design of your components, but without integration testing you can't be sure they're actually going to work when you stick them together.

So on my current project, there's a suite of unit tests using FakeItEasy and assertions, and then a suite of integration tests that connect to the live API, follow the various hypermedia links, throw assorted JSON objects at the PUT and POST endpoints to see how they respond, and then call DELETE to clean up when they're done. And, just to keep us honest, we've got a post-deploy step in our Octopus Deploy script that will actually run the integration test suite as part of the deployment process, and roll the whole thing back if any of the tests fail. Another small step on the road to truly continuous deployment.

Anyway. Last week, I push a release to our dev environment, and a whole load of tests fail. Which is weird, because it worked on MY machine. And it worked on my machine when I pointed my local codebase at the database in the dev environment. And – here's the fun part – it worked on my machine when I pointed the entire test suite at the dev environment. So I start eliminating variables. One of the first things I pick up on is that my local test runner is NCrunch, whereas the post-deploy step is using nunit-console. So I run the local integration tests using nunit-console and – bang. Failures. Which is good, because I know what's causing the weirdness, but weird, because tests are supposed to either pass or fail regardless of what test runner you're using.

So I dig a little deeper, and I end up with what looks to me like a bug in NCrunch. See, we're using the TestCaseSource attribute to generate test cases for the API tests, and – because all we need is a bunch of different JSON objects – we're just spinning up new anonymous objects and passing them in as test cases.

Here are two anonymous objects:

// note the explicit (string) cast – you can't assign a bare null to an anonymous type property
var testCase1 = new { forenames = (string)null, surname = "Batman" };
var testCase2 = new { forenames = String.Empty, surname = "Batman" };

What I noticed is that if you generate these two test cases, NCrunch will only see them as a single test – which I assumed was because their ToString() representations are equal: the compiler-generated ToString() on an anonymous type formats both null and String.Empty as an empty string, so the two objects produce identical text. So I opened a post about it on the NCrunch forums, even going so far as to suggest using GetHashCode() when enumerating test names, and got this really interesting response from Remco Mulder, the NCrunch lead developer:

Tests must be uniquely identifiable between execution and discovery runs. This isn't important for a tool like the nunit console runner where a test can be discovered and executed within the same process call (and thus identified by its memory address), but for a tool like NCrunch, there's no way to run the test or collect data from it without this. As you've identified, generated tests with a null parameter and an empty string will return the same result under .ToString(), so NCrunch can't tell them apart.

The only way to solve this is to change the design of your code. Try using the NUnit .SetName() method to give each of your generated tests a distinctive name.

Unfortunately .GetHashCode() is not a reliable solution to this problem as this method is not designed to generate the same identifier across different processes. This method returns different results under x86 vs x64, and under .NET Core it will actually return an entirely different result for each process. Because your code is responsible for generating the tests, the problem can only be solved within your own code.

I thought this was a really interesting insight into how a tool like NCrunch has to deal with situations that an in-process test runner like nunit-console will probably never encounter. It also turns out I’d dismissed that very warning a few weeks earlier – when it cropped up in response to an unrelated issue which produced the same symptoms – and sure enough, after clicking the “Show all hidden warnings” button on the NCrunch toolbar, the warning popped back up – along with a very detailed explanation of what was causing it:

[Screenshot: the NCrunch warning and its detailed explanation]

Plus, I had no idea that NUnit has a TestCaseData class with a SetName() method on it, which gives a much nicer way of presenting these test cases in both NCrunch and NUnit. I've ended up with something akin to:

public static IEnumerable TestData() {
  foreach (var data in new[] { null, String.Empty }) {
    var testCase = new { forenames = data, surname = "Batman" };
    var json = JsonConvert.SerializeObject(testCase);
    yield return new TestCaseData(testCase).SetName(json);
  }
}
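
And for context, the consuming test method looks something like the sketch below – the HttpClient setup, the route and the assertion are placeholders rather than our actual integration tests (and it assumes the usual System.Net.Http, System.Text, NUnit.Framework and Newtonsoft.Json namespaces), but it shows how those named test cases get picked up:

// hypothetical test method sitting in the same fixture as the TestData() method above
private static readonly HttpClient Client = new HttpClient { BaseAddress = new Uri("https://api.example.com/") };

[TestCaseSource(nameof(TestData))]
public async Task Post_accepts_person(object person) {
  var body = new StringContent(JsonConvert.SerializeObject(person), Encoding.UTF8, "application/json");
  var response = await Client.PostAsync("people", body);
  Assert.That(response.IsSuccessStatusCode, Is.True);
}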

Oh, and if you're interested, the deployment failures were because of a weird validation rule that treats null as missing, which is fine, but String.Empty as an empty string which violates a string length constraint. Which is wrong, and now the API doesn't do it any more. This is just another reason why integration testing is a good idea. So there you have it – a bug that wasn’t a bug, a crash-course in how NUnit and NCrunch actually work behind the scenes, and a TIL for naming your NUnit tests explicitly. Happy Friday.

Wednesday, 22 February 2017

Progressive.NET 2017 : Call for Themes

In September, SkillsMatter will be hosting the eighth annual Progressive.NET Tutorials. Over the next few months, the programme committee – including me – will be working to create a line-up of themes, workshops, talks and speakers that reflects the state of the art in .NET here in 2017. We’ll be opening our call for papers next month, but before we do, we’d like your help. Yes, you!


What sets the Progressive.NET Tutorials apart from most conferences is our emphasis on deep-dive half-day workshops. Something between a normal conference talk and a full training course, the idea is that you go away afterwards with running code, on your own laptop, that you’ve written during the workshop and can refer back to when you’re trying to implement the things you’ve learned. Over the years we’ve introduced dozens of new ideas and technologies to the wider .NET community – from technologies like F#, NHibernate and OpenRasta, to patterns like machine learning, event sourcing and continuous deployment.

We’ve got loads of ideas for themes, tracks and workshops this year, but we’d like your input. What do you want to see? What’s “progressive” in your corner of the .NET ecosystem? Some of the themes we’re already talking about are:

.NET on Linux in Production

OK, so your .NET Core application runs on Linux – awesome. What else do you need to know? Security? Configuration management? Monitoring? Infrastructure? What about tools like Nginx, HAProxy and Varnish? How can you combine the power of the .NET Core runtime with the maturity and flexibility of the Linux platform?

Contributing to .NET Core and Open Source

.NET Core is now part of a rich ecosystem of open source projects, but even for experienced developers, the journey from using open source to actually contributing can be daunting. Want to learn more about contributor licenses, workflows, issues and how to find your way around an unfamiliar codebase?

Cloud Native and Serverless

Ten years ago we were talking about dumping physical servers for virtual servers… now we're talking about getting rid of servers completely. Cloud native is a whole new world for app developers. 12-factor apps, microservices, API-first development and containerisation are changing the way we approach application development – and on the "big three" cloud platforms, AWS Lambda, Google Cloud Functions and Azure Functions now all support running serverless code built with .NET Core 1.1. So what can you do with it? What's involved in designing, implementing and deploying serverless and cloud native applications?

Mobile, Desktop and Beyond

At one extreme, we're deploying microservice apps onto serverless infrastructure. At the other extreme, people are running .NET on a wider range of devices than ever before. Xamarin gives us a true cross-platform development toolchain for building native apps for Android and iOS devices. Libraries like Unity are helping C# developers build virtual worlds, from interactive data visualisation tools to launching Kerbals into space. HoloLens, Kinect and the latest generation of VR headsets are letting us interact with applications in all sorts of unprecedented ways, and with .NET Core and Windows Nano Server, we're even seeing .NET running on the Internet of Things.


Agree? Disagree? Did we miss anything?

What do you think? What do YOU want to see? Akka.NET? Hexagonal architecture? ES.Next? What would you love to spend half-a-day learning about – discussing principles and patterns, asking questions, and going away with running code on your laptop that you can refer back to?


Comment here, find me on Twitter (@dylanbeattie), drop me an email, or come and say hi at the next London.NET User Group meetup, and let me know what you think. And let’s make this the best Progressive.NET Tutorials yet.

Monday, 20 February 2017

Bring back alt.NET? But… why?

Pull up a chair, dear reader. I’m going to tell you why I think we should revive alt.NET, but first, we need to establish a little context, which means it’s time for a good old-fashioned origin story.

I’ve been building web apps professionally since before they were called web apps, since the days when IIS was part of the Windows NT 4 Option Pack. I cut my teeth writing Active Server Pages, although unlike most of the ASP crowd, I wrote mine in JScript. That’s right, kids — I was running JavaScript on web servers back when The Matrix didn’t have any sequels. And I loved it. The web was a simpler place in those days, and JScript ASP provided a lightweight, expressive language and runtime that did 99% of everything I ever needed to do. But it was obvious that Microsoft had a vision for the future of web development, and it wasn’t about JScript and what we now call classic ASP.

I stuck it out with JScript and ASP for a long while, but eventually a project came along that clearly required something with a bit more oomph. Multi-tenancy, internationalisation, that kind of thing. So I brushed up on my C# chops and started building WebForms. C# was — and is — a truly wonderful language to work in, but WebForms? Not even close. I was banging my head against a wall, battling with the .NET framework daily to deliver even the most basic features. The elegant, expressive programming models provided by HTTP and HTML were gone, hidden beneath endless layers of leaky abstractions and fragile redirection. I was unhappy and I was unproductive, but it was incredibly easy to rationalise the misery as a necessary part of the learning curve. It didn’t help that the few developers I discussed it with seemed to be using ASP.NET quite happily and didn’t see any problem at all with the idea of <form runat="server" /> and OnItemDataBound.

So here’s the scene. It’s 2007, I’m fed up, I haven’t shipped any working code for literally months, and — probably via Scott Hanselman’s blog — I start hearing noises about something called “alt dot net”. Now, this sounded exciting. Unfortunately, many of the original posts and articles have been lost to the mists of time and website redesigns, but the original movement defined itself as:

What is ALT.NET?
1. You’re the type of developer who uses what works while keeping an eye out for a better way.
2. You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc.
3. You’re not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc.
4. You know tools are great, but they only take you so far. It’s the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles

To somebody drowning in the prescribed chaos of ASP.NET WebForms, that sounded like a pretty attractive set of principles.

Now, before we go any further, I’d like to share two things by way of qualifying the rest of this post. Firstly, I’m talking very specifically here about the alt.NET movement as it happened here in the UK, and why it mattered to me, personally, as a developer. Alt.NET as an international movement generated a lot of interest, a lot of opinions and a lot of controversy, and yes, to some extent we’re reviving the alt.NET “brand” because this very controversy has given it a degree of recognition among the tech community that transcends geography and specialisation. But here in the UK, I think alt.NET caught just the right people, at just the right time, and I believe many of those people found it to be a really positive thing to be part of.

Second, I’m well aware that it’s not all smooth sailing — to highlight one recent example, last week’s post from JetBrains about the licensing dispute with a Microsoft debugging component they were using in Rider generated a lot of controversy. I know people on both sides of that particular exchange, and until we have a bit more clarification about exactly what’s happened, all I’ll say is that having multiple vendors working on commercial IDEs, targeting an open source .NET Core platform, is somewhat unprecedented, and there’s bound to be the odd bump along the way.

OK, back to alt.NET. In February 2008, the first alt.NET UK ‘unconference’ took place here in London. For me, it was a revelation. It was about openly challenging an orthodoxy that had become so established it was easy to think it was completely non-negotiable. Instead of “you do Microsoft.NET or you do open source”, it was “find what works for you, and don’t be afraid to mix it up”. Here was a loose-knit community of developers who were cherry-picking the bits of .NET that they liked and happily ignoring the rest. Don’t like WebForms? Cool, let me show you FubuMVC and Monorail. Don’t like Windows? Check out the Mono project. Having nightmares about SOAP, WSDL and DISCO? Here, have a look at this thing called REST. Oh, and by the way, here’s a bunch of neat ideas from Ruby and Java and Haskell that you might be interested in.

It suddenly became apparent that over time “.NET” had become an umbrella term for a whole gamut of languages, frameworks and technologies; some of them were really quite good, some of them were pretty poor, but you didn’t have to use all of them.

It was that first alt.NET UK conference that inspired me to start blogging — in fact, my very first blog post ever was a write-up of the event. I stopped relying so much on MSDN documentation, and I started going to user group meetings. I stopped writing stored procedures (I know, right?) and started using ORMs — Linq-to-SQL, NHibernate, Castle ActiveRecord, and always with a very definite mindset of “use what works, ship working code, but keep your eyes open for something better”. I never took the plunge with Monorail or Fubu, but when Microsoft came out with ASP.NET MVC, I found it a perfect antidote to the bloated misery of WebForms. I remember vividly a quote from a Scott Hanselman podcast: “you don’t need a Repeater control; you’ve got a for() loop.” — that was the ASP.NET MVC ethos, and I loved it. My team launched our first ASP.NET MVC project in August 2008, whilst it was still on RC3. By the time it went beta, we’d put nearly a million pounds in revenue through it. The WebForms project was put ‘on hold’, and we never went back to it.

There were three alt.NET UK conferences. I don’t remember exactly when the third one was — probably 2009? — but I remember Ian Cooper saying at the time that there was unlikely to be another one any time soon. It felt like the right people had found each other, the conversations had started, the changes were happening. It wasn’t just in the UK, either. Dozens of blogs and meetup groups were spawned in the wake of alt.NET — a handful of them are still going, in Melbourne and Sydney, Paris, New York, and Brighton. Twitter was taking off in a big way; StackOverflow (built on ASP.NET MVC) was changing the way developers asked for help and shared solutions. We did a thing, and it worked.

So here in 2017, why are we talking about doing it again? For me, the answer’s simple. Partly, it’s just the passage of time. For every developer who has loudly and publicly abandoned the .NET platform, there’s a company somewhere whose investment in .NET isn’t going away anytime soon — and in the years since alt.NET, thousands of Comp Sci graduates have left university, landed their first job and ended up working on .NET. Sure, a lot of them are probably happy just to show up, write code, get paid and go home — but I suspect there’s also a lot of them who will go on to do really great things, and who haven’t yet grasped the power and the flexibility of the platform they’re using, and the community that’s built up around it.

But more than that, I don’t want to see the post-alt .NET community become an echo chamber. The ideas that were radical a decade ago have ossified into “best practice”, and it’s time to kick things up again. My personal and professional investment in .NET runs deep, because I sincerely believe that it’s a platform and a community worth investing in. I really enjoy working in .NET. C# and JavaScript are my languages of choice, but I like the fact that I can explore F# and TypeScript without having to learn a whole new platform to go with them. I’m a big fan of — and very occasional contributor to — open-source libraries like Dapper, NancyFX, Newtonsoft.Json, Shouldly and Moq. I’m also involved in running the London.NET User Group, and lately it feels like we’re seeing a lot of familiar faces and retreading a lot of familiar ground. Which is comfortable, and reassuring — and, yes, it’s fun; I love being a part of this community, and count the people I’ve met through it among my dearest friends. But I miss the cross-pollination, the fresh faces and the new ideas that were such a vital part of alt.NET, and I want to see what we can do to reinvigorate that.

I want us to reach the junior developers — and aspiring future developers — who are looking around for a fast and free way to build their first web apps, and help them get started with .NET Core. I want to reach the people who have always dismissed .NET because they don’t want to run Windows and let them know about things like Visual Studio for Mac and JetBrains Rider. I want to learn more about projects like Unity, and new platforms like HoloLens — not just the “ooh, shiny!”, but to understand the patterns and principles that developers are using to create great products using these tools. And I’d like to reconnect with the people who have abandoned .NET in the years since that first wave, and say “hey! What are you working on? How’s it going? And whilst we’re chatting, have you seen what’s happened to .NET since you last looked at it?”

It’s 2017. .NET Core is open source, Entity Framework is open source — even Windows Live Writer is open source. Bash runs on Windows, SQL Server runs on Linux, and Microsoft is doing some genuinely innovative things. Visual Studio 2017 is right around the corner, and .NET is a free, fast, cross-platform development system that you can use to build just about anything. We’ve got cloud-native applications, running on hosting so cheap it’s basically free. We’ve got Universal Windows Apps running on phones, tablets, laptops, desktops and consoles. We’ve got Xamarin and .NET Core bringing .NET to Linux, Mac, iOS and Android. We’ve got VR headsets and 3D printers and autonomous drones and all sorts of fascinating and unprecedented ways to make our software do cool things… and I think it’s high time we broke down some barriers, shared our ideas and tried to restart some of those conversations.

This essay is also published at medium.com/altdotnet.

Tuesday, 14 February 2017

Naming Things is Hard: Spotlight Edition

Like most specialist industries, software is rife with mainstream English words that we’ve taken and misappropriated to mean something completely different. Show business is no different. The software team here at Spotlight sits smack-bang in the intersection between these two specialist fields, and so when we’re talking to our customers and product owners about the systems we build, it’s very important to understand the difference between typecasting and type casting, and exactly what sort of actor model we’re talking about. We therefore present this delightful “double glossary” of everyday terms that you’ll hear here at Spotlight Towers. Because as we all know, there’s only two hard problems in software: cache invalidation, naming things, and off-by-one errors.

Actor

Software: A mathematical model of concurrent computation that treats "actors" as the universal primitives of concurrent computation.
Showbiz: A person whose profession is acting on the stage, in films, or on television.

Agent

Software: A software agent is a computer program that acts for a user or other program in a relationship of agency
Showbiz: A person who finds jobs for actors, authors, film directors, musicians, models, professional athletes, writers, screenwriters, broadcast journalists, and other people in various entertainment or broadcast businesses.

Callback

Software: Any executable code that is passed as an argument to other code, which is expected to call back (execute) the argument at a given time.
Showbiz: A follow-up interview or audition

Casting

Software: Explicitly converting a variable from one type to another
Showbiz: Employing actors to play parts in a film, play or other production. Also the act of doing same.

Client

Software: The opposite of a server
Showbiz: An actor, specifically in the context of the actor’s relationship with their agent or manager. Internally at Spotlight, we have both internal and external clients/customers.

Client Profile

Software: A trimmed-down subset of the .NET Framework intended for client desktop applications
Showbiz: An actor’s professional CV, as it appears on their agent’s website or in various kinds of casting software and directories

Double

Software: Primitive data type representing a floating-point number
Showbiz: A performer who appears in place of another performer, for example in a stunt.

Mirror

Software: A copy of a system that updates from the original in near real time, often a database or file storage system
Showbiz: An optical device that helps a performer check they’ve applied their makeup correctly

Principal

Software: Used in database mirroring to refer to the primary instance of the database
Showbiz: A performer with lines.

Production

Software: The live infrastructure and code environment
Showbiz: A film, TV or stage show, such as a professional actor might list on their acting CV.

REST

Software: Representational State Transfer - an architectural style used when building hypermedia APIs
Showbiz: What actors do between jobs.

Script

Software: A computer program written in a scripting language
Showbiz: The written dialogue and directions for a play, film or show

“Sequel”

Software: Standard pronunciation of SQL, referring to the database query language. Also commonly refers to Microsoft’s SQL Server database product.
Showbiz: A published, broadcast, or recorded work that continues the story or develops the theme of an earlier one.

Server

Software: The opposite of a client
Showbiz: Someone working as waiting staff in a restaurant. Who is quite possibly an actor moonlighting as a server to pay the bills between acting jobs.

Spotlight

Software: The native macOS search application
Showbiz: Our company – www.spotlight.com,  “The Home of Casting” – and the directories and services we have created since 1927.

Staging

Software: A replica of a production hosting environment used to test new features and deployments.
Showbiz: The method of presenting a play or dramatic performance; also used to refer to the stage structure itself in theatre and live performance.

Thursday, 26 January 2017

Semantic Versioning with Powershell, TeamCity and GitHub

Here at Spotlight Towers, we’ve been using TeamCity as our main build server since version 6; it’s a fantastic tool and we love it dearly. It got even better a few years back when we paired it with the marvellous Octopus Deploy; TeamCity builds the code and creates a set of deployable packages known as Octopacks; Octopus deploys the packages, and everything works quite nicely. Well, almost everything. One of the few problems that TeamCity + Octopus doesn’t magically solve for us is versioning. In this post, we’re going to look at how we use Git and TeamCity to manage versioning for our individual packages.

If this sounds like your sort of thing, why not come and work for us? That’s right – Spotlight is hiring! We're looking for developers, testers and a new UX/Web designer - check out jobs.spotlight.com and get in touch if you’re interested.

First, let’s establish some principles

  • We are going to respect the semantic versioning convention of MAJOR.MINOR.PATCH, as described at semver.org.
  • Major and minor versions will be incremented manually. We trust developers to know whether their latest commit should be a new major or minor release according to semantic versioning principles.
  • Building the same codebase from the same branch twice should produce the same semantic version number.
  • Packages created from the master branch are release packages.
  • Packages created from a merge head of an open pull request are pre-release packages.
  • Pre-release packages will use the version number that would be assigned if that branch was accepted for release at build time.

Now, here’s the part where we’re going to deviate from the semantic versioning specification, because our packages actually use a four-part version number. We want to include a build number in our package versions, but the official semver extension for doing this – MAJOR.MINOR.PATCH+BUILD - won’t work with NuGet, so we’re going to use a four-part version number MAJOR.MINOR.PATCH.BUILD. Pre-release packages will be appended with a suffix describing which branch they were built from – MAJOR.MINOR.PATCH.BUILD-BRANCH.

OK, here’s an illustrated example that demonstrates what we’re trying to achieve. Master branch is green. Two developers are working on feature branches – blue and red in this example. To create our pre-release builds, we’re using a little-documented but incredibly useful feature of GitHub known as ‘merge heads’. The idea is that if you have an open pull request, the merge head will give you a snapshot of the codebase that would be created by merging the open pull request into master – so you’re not just testing your new feature in isolation, you’re actually building and testing your new feature plus the current state of the master branch. There is one caveat to this, which I’ll explain below.

So, we’ve got TeamCity set up to build and publish packages every time there’s a commit to master or to the merge head of an open pull request, and we’re also occasionally triggering manual builds just to make sure everything’s hanging together properly. Here’s what happens: 

[Diagram: master and two feature branches, showing the builds triggered from each commit and merge head]

That line there that’s highlighted in yellow is a gotcha. At this point in our workflow, we’ve merged PR1 into our master branch, but because we haven’t pushed anything to the blue branch since this happened, the blue merge head is out of date. PR2 does NOT reflect the latest changes to master, and if we trigger a build manually, we’ll end up with a package that doesn’t actually reflect the latest state of the codebase. The workaround is pretty simple; if you’re creating pre-release builds from merge heads, never run these builds manually; make sure you always trigger the build by pushing a change to the branch.
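
Incidentally, if you're wondering how to get TeamCity watching those merge heads in the first place, a VCS root branch specification along these lines will pick up both master and the GitHub merge refs – this is the standard pattern rather than our exact configuration, so adjust it to taste:

+:refs/heads/master
+:refs/pull/(*/merge)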

Now let's look at how we can get TeamCity to automatically calculate those semantic version numbers whenever a build is triggered. We'll start with the major and minor version. We're going to track these by creating a version.txt file in the root of the project codebase, which just contains the major and minor version numbers. If a developer decides that their feature branch represents a new major or minor version, it's their responsibility to edit version.txt as part of implementing the feature. This also means that prerelease packages built from that branch will reflect the new version number whilst builds from the master branch will continue to use the old version until the branch gets merged, which I think is rather elegant.

For the patch version, we're going to assume that every commit or merge to the master branch represents a new patch version, according to the following algorithm:

  • If the current version.txt represents a NEW major/minor version, the patch number is zero
  • Otherwise, the patch number is the patch number of the latest release, incremented by the number of commits to the master branch since that release.

So – how do we know how many commits there have been since the last release? First, each time we build a release branch, we’re going to use Git tags to tag the repository with the version number we’ve just built. TeamCity will do this for you automatically using a build feature called “VCS labeling”:

[Screenshot: the TeamCity "VCS labeling" build feature configuration]

Assuming every release has a corresponding tag, now we need to find the most recent release number, which we can do from the Git command line.

git fetch --tags
git tag --sort=v:refname

Git tags aren't retrieved by default, so we need to explicitly fetch them before listing them. Then we list all the tags, specifying --sort=v:refname, which causes tag names to be treated as semantic versions when sorting. (Remember that semver sorting isn't alphanumeric – in alphanumerics, v9 is higher than v12). Once we've got the latest tag, we need to count the number of revisions since that tag was created, which we can do using this syntax:

git rev-list v1.2.3..HEAD --count
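
To make the whole calculation concrete, here's a rough C# sketch of the algorithm – purely an illustration (the real implementation is the Powershell script below), assuming git is on the PATH, the working directory is the repository root, and version.txt contains just the MAJOR.MINOR pair:

using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

class CalculateVersion {
  static string Git(string args) {
    var psi = new ProcessStartInfo("git", args) { RedirectStandardOutput = true, UseShellExecute = false };
    using (var process = Process.Start(psi)) {
      var output = process.StandardOutput.ReadToEnd();
      process.WaitForExit();
      return output.Trim();
    }
  }

  static void Main() {
    var majorMinor = File.ReadAllText("version.txt").Trim();   // e.g. "1.2"

    Git("fetch --tags");
    var latestTag = Git("tag --sort=v:refname")
      .Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries)
      .Select(tag => tag.Trim())
      .LastOrDefault() ?? "v0.0.0";                            // e.g. "v1.2.3"

    var lastRelease = latestTag.TrimStart('v');
    int patch;
    if (lastRelease.StartsWith(majorMinor + ".")) {
      // same major.minor as the last release: patch = last patch + commits since that tag
      patch = int.Parse(lastRelease.Split('.')[2]) + int.Parse(Git($"rev-list {latestTag}..HEAD --count"));
    } else {
      // version.txt declares a new major/minor version, so the patch number resets to zero
      patch = 0;
    }

    // three-part semantic version; the build number and any pre-release branch suffix are appended later in the build
    Console.WriteLine($"{majorMinor}.{patch}");
  }
}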

To use this in our TeamCity build, we'll need to output the various different formats of that version so that TeamCity can use them. We want to do three things here:

  • Label the VCS root with the three-part semantic version number v1.2.3
  • Update the AssemblyInfo.cs files with the four-part version number 1.2.3.456 – note that we can’t put any prerelease suffix in the AssemblyInfo version.
  • Pass the full version – 1.2.3.456-pr789 – to Octopack when creating our deployable packages with Octopus.
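
I won't paste the whole script into the post, but for reference, the usual way to hand values like these back to TeamCity from a build script is via service messages written to standard output – something like this, where the parameter name is just an example:

##teamcity[buildNumber '1.2.3.456']
##teamcity[setParameter name='system.PackageVersion' value='1.2.3.456-pr789']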

I've wrapped the whole thing up in a Powershell script which runs as part of the TeamCity build process, which is on GitHub:

To use it in your project, add versions.ps1 to the root of your project repo; create a text file called version.txt which contains your major.minor version, and then add a TeamCity build step at the beginning of your build process that looks like this:

[Screenshot: the TeamCity build step running the versioning Powershell script]

Finally, it’s worth mentioning that to use command-line git from Powershell, I had to set up TeamCity to use an SSH VCS root rather than HTTP, and install the appropriate SSH keys on the TeamCity build agent. I don't know whether this is a genuine requirement or a quirk of our configuration; your mileage may vary. And I still find Powershell infuriatingly idiosyncratic, but hey - you probably knew that already. :)

Happy versioning! And like I said, if this sort of thing sounds like something you’d like to work on, awesome - we’re hiring! Check out jobs.spotlight.com for more details and get in touch if you’re interested.

Monday, 9 January 2017

Sharing a Dropbox folder using Bootcamp and exFAT

My main laptop for the last few years has been a 15” Retina MacBook Pro. As I’m primarily a .NET developer, I use Bootcamp to run Windows 10, Visual Studio and SQL Server, but I also boot into macOS almost daily for things like audio recording and video editing using Logic Pro and Adobe Premiere. I’m also a huge fan of Dropbox – I use Evernote for text documents (notes, talk ideas and lyrics), and Dropbox for just about everything else.

My MacBook Pro has a 500Gb SSD, which sounds like a lot, but once you’ve got two operating systems it’s surprising how quickly you start running out of space – and one of my biggest frustrations was having two separate Dropbox folders. I have about 40Gb of stuff in Dropbox, which means a 40Gb Dropbox folder on my NTFS Windows partition, and another 40Gb Dropbox folder on the ExtFS macOS partition. And they store exactly the same files – so much so that Dropbox actually became my default sync mechanism for getting files from Windows into macOS and vice versa.

Being a huge nerd, I thought it would be fun to kick off 2017 by completely repaving my laptop, and I wondered if there was some way to store my Dropbox folder, and other big things like my music library, on a shared partition that both operating systems could use. It took a bit of tweaking but I think I’ve managed to get it working – here’s how. The secret sauce that made it all work is a relatively new file system called exFAT. The killer feature of exFAT here is that both Windows 10 and macOS can read and write it natively, but it doesn’t have any of the file size or volume size restrictions of older interoperable formats like FAT32.

Caveat: this is all completely unsupported. If you try any of this and you lose all your files, I’m not going to help you – and neither are Apple, Microsoft or Dropbox. If you’re happy using batch scripts to toggle file attributes and manually partitioning your OS drive, then this lot might help – but messing around with disk partitions and unsupported filesystems is a good way to end up with an unbootable laptop and a lot of bad mojo. So before we go any further,  back up everything. After all, Dropbox is a file sync tool – if some bizarre combination of macOS and exFAT results in it removing a load of files, the last thing I want is for Dropbox to conveniently remove the same files from all my other devices. I know it’s easy enough to revert to an older version of anything that’s in Dropbox, but I figured having a copy of it all on an external HD probably couldn’t hurt.

Next, work out how you want to partition your disk. In my case, I wanted to go for a ~150Gb partition for each operating system, and a 200Gb shared partition that both of them could see. Your mileage may vary, and – as I did – you might find you can’t get exactly what you wanted, but I managed to get pretty close. Now do a completely clean reinstall of macOS. Give it the entire drive and let it do its thing. Once you’re done, use the Boot Camp Assistant to set up and install Windows. When Boot Camp Assistant asks how you’d like to partition your drive, make the Windows partition equivalent to the total size of your desired Windows partition plus your shared partition – so in this case, I temporarily had a 150Gb macOS partition and a 350Gb Windows partition.

[Screenshot: Boot Camp Assistant partitioning the drive]

Once you’ve partitioned the disk, install Windows as usual. You should end up with a regular Boot Camp system that you can boot into macOS or Windows by holding down the alt key during system startup or by using the Boot Camp options in Windows. Now here’s the fun part. Once you’ve installed Windows, but before you install any other apps or files, you’re going to use the Windows disk utility to shrink that Windows partition down to 150Gb and then create a shared exFAT partition in the remaining space. Right-click the BOOTCAMP (C:) partition, click Shrink Volume...

[Screenshot: Windows Disk Management showing the Shrink Volume option]

There’s some limit of the NTFS filesystem that means “you cannot shrink a volume beyond the point where any unmovable files are located”. Now I don’t know how much variance there is between Windows installations, but I couldn’t shrink my C: drive any smaller than 163Gb, despite the fact that the fresh Windows install only took up about 20Gb. For what I needed, this was close enough – if not, I’d have been looking for something that would let me defrag and reorganize the NTFS partition. Or reading up on how an SSD gets fragmented in the first place – I thought they were just big boxes of non-volatile solid-state memory?

Anyway. The next step was to format the new partition as exFAT, call it Shared, and then reboot into macOS and make sure it was visible. I poked around a bit, created some text documents and other files, rebooted back into Windows… After an hour or so this all seemed to be working quite nicely. The next step was to see how Dropbox coped with it. Rather than throw it at my 40Gb Dropbox account, I set up a new Dropbox account, installed Dropbox on both macOS and Windows and signed in to both of them using the new account.

This is where things got a bit weird. Here’s my best hypothesis as to what’s happening.

  1. macOS is storing some additional metadata for every file. On a regular Mac-formatted (HFS+) volume, that metadata is stored somewhere in the filesystem itself. On an exFAT partition, it’s stored in a tiny hidden file (apparently known as an AppleDouble) next to the original. Alongside foo.txt there’s now a 4k file called ._foo.txt. I don’t think this is a Dropbox thing; it just appears to be caused by running macOS on a drive that isn’t using its native filesystem.
  2. The Dropbox agent on macOS sees these extra files, thinks that they’re important, and uploads them to Dropbox.
  3. The dot-underscore ._ files are then synced across all your other Dropbox devices – I can see them in the web interface as well as on my Windows partition.

If I wasn’t using Dropbox, the ._ files would still be there, but macOS marks them as hidden and Windows appears to respect this – doing a dir /a:h in one of my exFAT folders will show them but by default they don’t appear in Windows Explorer. I figured this wasn’t a big deal – I could live with a bunch of hidden 4K files – so I removed the temporary Dropbox account and installed the real one. This may have been a bit overly optimistic. Dropbox happily synced 40Gb of files from Windows, but when I rebooted into macOS, it created the ._ files for all of them… and then when I rebooted into Windows again, it started trying to sync all the hidden ._ files. After twelve hours of this, I concluded that something wasn’t working quite as I’d hoped, so I modified the plan slightly.
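If you want to see them for yourself, this is roughly what I was doing from a command prompt – ._foo.txt here is just the example filename from the list above:

rem List the hidden AppleDouble files in the current exFAT folder
dir /a:h ._*
rem Show the attributes on a single file - an H in the output means it's hidden
attrib ._foo.txt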

I realized that I don’t actually need the Dropbox agent running on both OSes – just having the shared Dropbox folder is enough – so I uninstalled the Dropbox agent from macOS, and things suddenly got a lot more straightforward. The dot-underscore files are still there. On the exFAT shared partition they’re hidden by default, but Dropbox helpfully syncs all the ._ files to all my other Windows systems, so I’ve created a batch script in my Dropbox folder that finds any of them and sets them to hidden so they don’t show up in Windows Explorer. It’s a one-liner:

for /r %%f in (._*) do attrib +h "%%f"
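If you wanted something slightly more reusable, the same idea works as a small batch file that takes an optional folder argument. This is just a sketch (and hide-appledoubles.bat is a made-up name), not the exact script sitting in my Dropbox folder:

@echo off
rem hide-appledoubles.bat - a sketch: recursively hide macOS AppleDouble (._*) files
rem Usage: hide-appledoubles.bat [folder]   (defaults to the current folder)
set "ROOT=%~1"
if "%ROOT%"=="" set "ROOT=%CD%"
for /r "%ROOT%" %%f in (._*) do attrib +h "%%f"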

The last thing I wanted to try before doing it for real was to see how VMWare Fusion coped with the shared partition. As well as dual-booting between macOS and Windows, I’ll also occasionally use VMWare to run the Windows partition as a VM inside macOS, which works pretty well. Here’s what I discovered:

  • As soon as you boot the Windows VM, both the BOOTCAMP partition and the Shared partition disappear from macOS.
  • The BOOTCAMP partition is mounted in VMWare as drive C: (obviously), and the Shared partition appears as drive D: in the VM.
  • The Dropbox agent in the VM will immediately pick up and sync any files created from macOS.
  • When I shut down the VM, the BOOTCAMP and Shared partitions both show up in macOS again and everything’s back to normal.

So, here’s the final setup I ended up with. Windows 10 is running the Dropbox agent, and syncing files with my Dropbox cloud storage as intended. The Dropbox folder is on the shared exFAT partition, so macOS can read and write my local Dropbox folder, but macOS doesn’t sync anything. If I add files to Dropbox while running macOS, the sync agent will pick them up and sync them to the cloud the next time I boot into Windows. Finally, if I’m in macOS and need a file that’s been added to Dropbox from another device and hasn’t synced yet, I can reboot into Windows and sync it, fire up the VMWare machine for a few moments, or just download it via the web interface.

I've been running it for a couple of weeks, including a week of wrangling PowerPoint slides and shared files at NDC London, and it's working really quite nicely. And it's really rather nice to have nearly 200Gb of free disk space after installing both operating systems, all my apps and utilities, and 40Gb worth of Dropbox files.

Tuesday, 3 January 2017

My Life with the Microsoft Natural Keyboard

A moment ago I was catching up on Twitter and I saw this:

Now Scott has written some great posts about keyboards – and mice, and workstation ergonomics in general – over the years, so I immediately clicked the link to see what was so exciting… and wow. This is the new Microsoft Surface Ergonomic Keyboard, and I, too, am now sitting here trying not to buy it. Actually, I’m waiting for a UK layout, and then I suspect that trying not to buy it will rapidly become unmanageable. I may even fail to not buy two of them so I have one at work and one at home.

Top view of keyboard

Now, there’s a couple of things about this keyboard that are interesting only because Microsoft have consistently got them right, and then messed them up, and then got them right again, and then messed them up again - over many, many years. Like an even-numbered Star Trek movie or an odd-numbered version of CorelDraw, this latest one is a good one – and as somebody who’s used every incarnation of the Microsoft Natural keyboard, this seems like a nice opportunity to take a little wander down memory lane.

OK, cue the Wayne’s World dream-sequence time-travel effects… and here’s a keyboard. A 102-key IBM PS/2 keyboard, which was the standard layout for PC keyboards for about a decade, back in the days when keyboards were beige and the cool kids knew all sorts of tricks that would let you load HIMEM.SYS and your mouse driver and still have enough main memory left to run Wing Commander 2.

IBM Model M.png (IBM PS/2 keyboard by Raymangold22 via Wikipedia | CC0 | Link)

Around the time of Windows 95, Microsoft proposed the subtle addition of two new keys – a Windows key (actually two of them) in those handy gaps next to the Ctrl keys, and a key that I’ve just this second learned is called the menu key, which is basically a right-click key. Other than shortening the space bar slightly, these new keys didn’t really move things around much, which was great, because that hard-earned muscle memory that let you hit triple-key combinations without taking your eyes off the screen worked just fine.

Oh, and they also started making them in colours that weren’t beige, which was a big improvement if you actually had to share your living space with them.

image

It’s around this time that I left school and got my first IT job, which meant I was typing for a good chunk of every day, and within a year I started getting unpleasant pain in my wrists and forearms. During a chance conversation at work, someone mentioned that a former employee there had had the same problem and switched to using an ergonomic keyboard – which was still knocking around in a cupboard somewhere. I tried it out and was instantly smitten.

This was the original Microsoft Natural keyboard, released in 1994.

image
Photo by DeanW77 via Wikipedia – own work, CC BY-SA 4.0, Link

I absolutely loved it. I literally wore it out – I used it until some of the keys no longer worked, and then went shopping for a replacement… and discovered you couldn’t get the original Natural keyboard anymore; it had been replaced by the Natural Keyboard Elite, released in 1998.

https://www.engadget.com/products/microsoft/natural-keyboard/elite/ 

Looks close enough, right? Except if you look closely, you’ll see that instead of the familiar inverted-T cursor shape and two rows of navigation keys, the cursor keys are in a sort of weird diamond formation and the navigation keys are in two columns.

This was horrible. All those years of muscle memory suddenly gone – every time you’d try to hit Ctrl-PgUp or Shift-End you’d get it wrong. And when you’re in the zone, that’s a horrible, jarring experience – every time it happens it interrupts your flow, wrenches you back to reality and makes you want to throw the damn thing across the room. Imagine driving a car where the brake pedal is above the accelerator instead of alongside it – it was a truly unpleasant experience.

Fortunately, it didn’t take long for them to realise this one was a bit of a mistake. A year later in 1999, they came out with the Natural Keyboard Pro, and it was fantastic.

image
By PCStuff via Wikipedia, CC BY-SA 2.5, Link

All the keys were in the right places, and it included a two-port USB hub, which was great for plugging in your mouse. It added a bunch of “multimedia” buttons (which I never really used) and a dedicated button for launching the Windows Calculator (which actually proved to be surprisingly useful). I loved this keyboard dearly. You can guess what happened next… yep, I used it until it wore out, went shopping for a new one, and… yeah. No more Natural Keyboard Pro. Instead, we had this delightful triumph of form over function:


photo via Engadget

The Microsoft Natural Multimedia Keyboard, released in 2004. OK, it’s got the classic inverted-T cursor keys (yay!), but – look at that! Not only are the navigation keys all wrong, but the Insert key isn’t even there; we’ve got a massive double-height Delete key instead. If memory serves, Insert was relegated to some sort of funky combo involving the PrtSc key. Oh, and this was also the keyboard where you had to keep F-Lock switched on all the time because the function keys were actually some sort of weird keyboard shortcuts that nobody ever used. You know. Like a dedicated key for “Send”, because Ctrl-Enter is just too complicated.

Again, it didn’t take long for them to come out with something better… and boy, did they ever get it right with the next one. The Microsoft Natural Ergonomic Keyboard 4000.


photo via microsoft.com

This was my main keyboard for most of the last decade. I loved this keyboard so much I actually stockpiled it – one at work, one at home, and a few spares standing by in case they wore out. And wear out they did, one by one, until late last year I decided it was time to replenish the stockpile… which is when my long-suffering colleagues half-jokingly asked if, maybe, I’d be prepared to try something quieter. See, the 4000 is lovely, but it’s noisy. The keys have plenty of travel, with a nice satisfying thump at the bottom of each keystroke, and the keyboard’s casing – which is big – makes a rather effective sounding-box. Until the bottom falls out of the London real estate market and we can justify private offices for our developers, FogCreek style, the open plan office is an unfortunate fact of life for most of us… and so, in the spirit of workplace harmony, I ordered a Microsoft Sculpt keyboard.

image
photo via Microsoft

And you know what? It’s pretty close to perfect. It’s comfortable. It’s quiet. It takes up about 60% of the desk space that the old Ergonomic 4000 did. The numeric keypad is separate, which I’m completely undecided about… when you want to type on it, it’s annoying not having it fixed in place, but being able to pick it up and use it like an old-school calculator is actually surprisingly useful. Except – yep – the navigation key layout is all screwed up. Again. I hit Insert by mistake all the time on this thing, and frequently land on the left cursor key when I’m reaching for Ctrl.

But with the release of the Surface keyboard at the top of the post, it looks like, yet again, there’s a truly great keyboard hot on the heels of the not-so-great one. I’m just really curious as to why they keep bringing out models that use non-standard keyboard layouts.

image
photo via Microsoft

When it hits the market here in the UK, I’ll pick one up and let you know how I get on. Now, if they’d only release one that was completely black – with no key markings – then we’d really be onto something. I wonder if you can spray-paint it...