Wednesday, 9 December 2009

Fun with msdeploy, Extended Protection and Windows Authentication

Trying to push a change to our live servers today, we got this wonderful message from msdeploy:

Error: (09/12/2009 16:10:21) An error occurred when the request was processed on the remote computer.

Error: Child object 'extendedProtection' cannot be added to object 'windowsAuthentication'. The 'windowsAuthentication' provider may not support this deployment.

Error count: 1.

We do this several times a day… it worked yesterday, nobody changed anything, and suddenly today it doesn’t work. (Don’t you just love it when that happens?)

Now, msdeploy is wonderful, but has practically no documentation – which means when an error like this happens, you’re on your own.

A bit of Googling and a couple of lucky guesses later, and we worked out what was causing it. “Extended Protection” is apparently some newfangled security framework that’s included in recent Windows updates. Our internal servers install Windows updates automatically; our live servers don’t.

In other words – our staging server had quietly upgraded itself in the night to support extended protection, and was now trying to push that configuration to the live server, which had absolutely no idea what was going on.

Logging on to the live server and installing the outstanding Windows security updates seems to have fixed it.

Saturday, 7 November 2009

Axure RP: Lego For Software Designers

Someone asked me on Twitter a little while back:

@dylanbeattie Do u currently use Axure? If so, could u pls tell what size team it's effective with? And, is it really better than pen-paper?
11:38 AM Nov 7th from web in reply to dylanbeattie

The short Twitter answers are “yes”, “we’ve used it effectively with up to 4 people”, and “yes, because I can’t draw” – in that order.

But a slightly more detailed response would probably help.

Once upon a time, prototyping web apps was easy. You’d draw every page, and then use a site map to demonstrate which links went where. Every page was static; nothing moved, there was no Ajax, no infinite scrolling, no drag’n’drop, and most websites were actually about as interactive as a Choose Your Own Adventure novel. Well, those days are gone. People expect more – richer UIs, better responsiveness, fewer postbacks and less waiting around for pages to load – and with libraries like jQuery, there’s really no excuse for not delivering code that satisfies those expectations.

Question is – how do you prototype a rich user interface? How do you draw a picture of something that won’t sit still? For me, that’s where Axure RP comes in. Axure is a “tool for rapidly creating wireframes, prototypes and specifications for applications and web sites”. It’s a commercial product, so it is, alas, not free (although to put it into perspective, it costs less than hiring a .NET developer for one day) – but it is a uniquely powerful and expressive piece of software that I find myself firing up on an almost daily basis.

In everyday use, it’s like a weird cross between Balsamiq, Visual Basic, and Lego.

  • Balsamiq, because it’s easy to mock up static user interfaces by dragging buttons, inputs and form elements on to your page.
  • Visual Basic, because it’s easy to add behaviour to those elements using click handlers, events and dynamic controls.
  • Lego, because it’s intuitive, and it’s fun, and there is no way anyone is going to look at what you’ve done and think the project is finished.

The game Populous was designed using Lego. I grew up with Lego*. From a very early age, I learned to use Lego bricks to express ideas. I knew every single brick I owned. I could demonstrate an idea I’d had for a car, or a spaceship, or a robot, by assembling these reusable components into a prototype with spinning wheels and moving parts and a sense of scale and colour. Working entirely in plastic bricks actually becomes very liberating, because it stops you worrying about materials and finishes, and allows you to focus entirely on expressing ideas.

"Infinity" © Nathan Sawaya / brickartist.comHave you ever showed someone a Lego house and had them say “Hey, that looks great! When can we move in?” No. People know a Lego house is not a real house. They appreciate that the point of a Lego - or cardboard, or clay - model is to demonstrate what you’re planning to do, not show off what you’ve already done.

Have you ever shown anyone an HTML mockup of a web app and had them say “Hey, that looks great! When do we launch?” – and then they look horrified when you explain that you haven’t actually started the build yet?

People don’t grok the difference between HTML mockups and completed web apps the way they grok the difference between Lego houses and real ones. I can’t say I blame them. HTML is HTML – whether it was hacked together late last night in Notepad or generated in the cloud by your domain-driven MVC application framework. The difference doesn’t become apparent until they actually start clicking things – by which point it’s too late; you’ve made your first impression (“wow, the new app is done!”) and it’s all downhill from there.

I think the hardest questions in software are “what are we doing?” and “are we done yet?”. I think good prototypes are absolutely instrumental in answering those questions, and any tool that can help us refine those prototypes without falling into the trap of “well, it looks finished” has to be a Good Thing.

* Some people look at how much Lego I still have and conclude that I never grew up at all…

Thursday, 5 November 2009

A (Slightly Faster) URL Resolver Module for ASP.NET MVC

Yesterday, I posted some code I’d hacked together as part of an MVC2 demo that would resolve ASP.NET virtual path URLs on the fly as pages were written to the ASP.NET response stream.

Having run some tests on this code in isolation – it’s actually quite nasty. For running ad-hoc demos on your workstation, it’s fine, but the performance hit of decoding the byte array, doing the regex transform and re-encoding it is something like two hundred times slower than a direct stream copy. Not good. There is now a modified version online at Google Code which is quite a bit faster, but there’s still huge scope for improvement. In particular, although it’s using byte comparisons now to work out where the ~/ combination occurs, it’s still falling back to string comparisons every time it finds a tilde to decide whether that tilde needs replacing or not.

These stats were created using a loop that spins up HTML pages of various sizes – one version full of ASP.NET-style tilde paths, one containing no tildes -  and then writes them 100 times to both a normal MemoryStream and a UrlResolverStream in order to calculate the average rendering time. If a page doesn’t contain any tildes at all, performance is 5-6 times slower than the equivalent direct memory copy – i.e. 21ms instead of 4ms. For pages with lots of tildes, the additional string processing hits quite hard and you’re looking at a slowdown factor of around 30-35x.

Tildes?   Page Size   MemoryStream copy (ms)   UrlResolverStream copy (ms)   Ratio
Yes           50Kb            0.05                      0.38                    7
Yes          101Kb            0.17                      2.51                   14
Yes          202Kb            0.19                      6.05                   31
Yes          405Kb            0.37                     13.88                   37
Yes          809Kb            0.88                     29.47                   33
Yes        1,619Kb            1.62                     60.55                   37
Yes        3,238Kb            3.45                    123.40                   35
No            50Kb            0.02                      0.34                   17
No           101Kb            0.07                      0.66                    9
No           202Kb            0.15                      1.28                    8
No           405Kb            0.48                      2.62                    5
No           809Kb            0.83                      5.26                    6
No         1,619Kb            1.66                     10.57                    6
No         3,237Kb            3.53                     21.17                    5

It should be possible to make this considerably faster still; since the code basically scans byte arrays, this is one of those areas where using pointer arithmetic could make a huge difference. I’ll dig out my unsafe hat and my pointer-gloves this weekend and see what I can do to it. In the meantime – play with it, experiment with it, but it’s probably a good idea not to let it within a hundred miles of your live servers :)
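
If anyone wants to beat me to it, the pointer-based scan would look something like this – just a sketch of the approach (compiled with /unsafe), not the code that will end up in the project:

// Scan a byte array for the '~' '/' pair without bounds-checked indexing.
// Illustrative only - the real module also has to check the preceding
// attribute (src/href/action) before deciding to rewrite anything.
public static unsafe int IndexOfTildeSlash(byte[] buffer, int offset, int count) {
  fixed (byte* pinned = buffer) {
    byte* p = pinned + offset;
    byte* end = pinned + offset + count - 1; // stop one short: we read pairs
    while (p < end) {
      if (*p == (byte)'~' && *(p + 1) == (byte)'/') return (int)(p - pinned);
      p++;
    }
  }
  return -1;
}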

Wednesday, 4 November 2009

A URL Resolver Module for ASP.NET MVC

Update: An improved version of this module, along with some performance stats, is available here. The original version posted here was very, very slow. Probably not a good idea to use it for anything. Ever.

One of the few things I actually liked about ASP.NET WebForms was that you could do things like

<a href="~/my/account.aspx" runat="server">My Account</a>

and ASP.NET would magically turn the tilde character (~) into the current relative application root – so you could debug your apps on http://localhost:4567/ and then deploy them to http://www.myserver.com/some/app/, and your links wouldn’t break.

ASP.NET MVC doesn’t like things that are runat="server" – and with good reason, I think – but this does mean you can end up with rather a lot of calls to ResolveUrl() sprinkled throughout your code.

To get around this, I’ve hacked together an HTTP module that basically rewrites the output stream on the fly. It wraps the HTTP output stream (the thing you're writing to when you Response.Write stuff) in a 'smart' stream wrapper, and the magic (read: naively optimistic) part looks like this:

public override void Write(byte[] buffer, int offset, int count) {
  if (HttpContext.Current.Handler is System.Web.Mvc.MvcHandler) {
    HttpContext.Current.Trace.Warn("Resolving URLs in output stream...");
    byte[] data = new byte[count];
    Buffer.BlockCopy(buffer, offset, data, 0, count);
    string html = Encoding.ASCII.GetString(data);

    // Don't try and use Regex transformations on your 
    // entire output stream. It is slow. Like, really, really slow.
    // Take a look at this updated version instead.

    var re = new Regex("(?<attr>src|href|action)=\"~/", RegexOptions.IgnoreCase | RegexOptions.Compiled | RegexOptions.Multiline | RegexOptions.ExplicitCapture);
    html = re.Replace(html, "${attr}=\"" + VirtualPathUtility.ToAbsolute("~/"));
    data = Encoding.ASCII.GetBytes(html);
    sink.Write(data, 0, data.Length);
    HttpContext.Current.Trace.Warn("Resolved URLs in output stream.");
  } else {
    sink.Write(buffer, offset, count);
  }
}

Basically, it looks for HTML SRC, ACTION and HREF attributes whose value begins with ~/, and replaces the ~ with the application’s virtual path on the fly. I hadn’t originally tested this code for performance; it turns out to be something like 200 times slower than a straight stream copy, but it’s running in a couple of demo apps I’m working on and it seems to work pretty nicely.
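
For context, the module itself does little more than swap this wrapper into the response pipeline. It looks roughly like this – a sketch only; the actual class names and the choice of pipeline event in the Google Code project may differ:

using System.Web;

// Wires the 'smart' stream wrapper shown above into the response pipeline.
// UrlResolverStream is assumed to take the stream it wraps as a constructor argument.
public class UrlResolverModule : IHttpModule {

  public void Init(HttpApplication application) {
    application.ReleaseRequestState += (sender, e) => {
      // From this point on, everything written to the response passes
      // through our Write() override before reaching the client.
      var response = ((HttpApplication)sender).Response;
      response.Filter = new UrlResolverStream(response.Filter);
    };
  }

  public void Dispose() { }
}

You then register the module in web.config in the usual way, and every MVC response gets its tilde paths rewritten on the way out.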

The full implementation is over on Google Code if you’re interested.

Code from my HTML5 / MVC2 Talk at SkillsMatter

A big thank-you to everyone who came along to my HTML5 and MVC2 talk at SkillsMatter on Monday – and thanks also to SkillsMatter for hosting us! Whilst I’ve done plenty of talking at unconferences and events like BarCamp, this was the first proper full-length technical talk I’ve given, so I’d really appreciate any feedback – especially since we might be doing a re-run in a couple of weeks.

During the talk, I demo’ed a tiny web app – TagAlong - that I’ve built to showcase some of the new features in HTML 5 and ASP.NET MVC preview 2. This is by no means production code – if nothing else, I’m using static List<T>’s instead of having an actual database, so your changes will disappear every time you restart the app – but it should be pretty easy to get it up and running and poke around.

If you’re interested, the code is online at http://code.google.com/p/tagalong – you’ll need MVC 2 Preview 2 installed to run it, but everything else is included.

A couple of other interesting links that I mentioned during the talk:

 

Tuesday, 20 October 2009

You Forgot to Say the Magic Word…

In Microsoft SQL Server, this query won’t work:

SELECT * FROM ( SELECT * FROM Customer UNION SELECT * FROM Supplier) ORDER BY CompanyName

But – if you ask nicely, it does exactly what you’d expect:

SELECT * FROM ( SELECT * FROM Customer UNION SELECT * FROM Supplier) PLEASE ORDER BY CompanyName

You won’t believe the look on your colleagues’ faces when you solve their problem using simple good manners.

(Of course, it actually works because PLEASE in that context just acts as a table-name alias for the result of the UNION sub-select, and sub-selects in SQL Server need to have a name. But don't let that stop you using it for fun and profit.)

Is doctype.com a License Too Far for Stack Overflow?

Short answer:

No, because doctype.com doesn’t use technology licensed from Stack Overflow. Sorry. I got this one completely, completely wrong. D’oh.

Long answer:

This post was originally inspired by doctype.com. I now understand, thanks to an extremely informative comment from one of the doctype.com developers, that doctype.com doesn’t actually run on Stack Exchange. It looks and feels very similar, but is in fact a completely separate codebase built by the guys at doctype.com using Ruby on Rails.

This post is therefore based on completely incorrect assumptions. I’ve struck out the bits that are actually factually incorrect, although my concerns about fragmenting the user base remain valid – even more so since I discovered that ask.sqlteam.com and ask.sqlservercentral.com are both Stack Exchange sites - but clearly doctype.com has nothing to do with it, and in fact, their platform offers a lot of design-centric tools that Stack Overflow doesn’t.

There’s also this discussion at meta.stackoverflow.com that addresses a lot of the same concerns.

 

[Photo: the derelict Parachute Drop ride at Coney Island]

(Note: In this post, where I say Stack Overflow I’m referring to the website, and where I say StackOverflow LLC, I’m talking about the company behind it.)

I’ve been using stackoverflow.com since it was in beta, and I love it. I ask questions. I answer questions. I hang out and read and comment and vote and generally find the whole thing a hugely rewarding experience. I think it works for two reasons.

First, the technology platform (now available as Stack Exchange – more on this in a moment) is innovative, usable and packed with great ideas.

Second, by actively engaging with people who followed Jeff Atwood and Joel Spolsky’s blogs, they gathered exactly the right audience to breathe life into their product. Stack Overflow launched with a committed, dedicated community of experts already in place. They created a forum where people like Jon Skeet will donate endless hours of their time for nothing more than kudos and badges. (I bet Jon’s employers are wishing they’d thought of that…)

Here are a few choice quotes from Joel Spolsky’s Inc.com column I’m referring to (my emphasis):

“Between our two blogs, we felt we could generate the critical mass it would take to make the site work.”

“I started a business with the objective of building a big audience, which we would figure out how to monetize later.”

“we promised the audience that the site would always be free and open to the public, and that we would never add flashing punch-the-monkey ads or pop-up windows.”

Now this is the web, where “monetize” usually means “slap advertising all over everything.” – but when Stack Overflow introduced advertising, they were sympathetic and responsive to users’ feedback, and quickly evolved an advertising model that’s elegant, unobtrusive and complements the ethos of the site. The Woot! badge was clever. The tiny Adobe logo on tags like flex and actionscript was really clever – possibly the best use of targeted advertising I’ve seen.

Before long, non-programmers were asking how they could get a slice of the Stack Overflow goodness, and so serverfault.com – for systems admin questions – and superuser.com – for general IT enthusiasts – were born. That clearly worked, so they set up Stack Exchange, to license the platform to third parties, and soon there was moms4mom.com (questions about parenthood), Epic Advice (questions about World of Warcraft), Ask Recipe Labs (cooking and food), Math Overflow (for mathematicians), and various other Stack Exchange sites covering video, photography, car maintenance – all sorts.

A few days ago, I stumbled across doctype.com – a Stack Exchange site for HTML/CSS questions, web design and e-mail design – and some unsettling questions popped into my head.

1. Where am I supposed to ask my jQuery questions now?

I work on everything from T-SQL to a very occasional bit of Photoshop. There is a huge amount of crossover between HTML, CSS, Javascript, AJAX, and web server platforms and their various view/markup engines. Here’s the all-time most popular 20 tags on Stack Overflow, as of 20th October 2009:

Rank  Tag            Questions
   1  c#                43,860
   2  .net              24,590
   3  java              22,924
   4  asp.net           20,678
   5  php               16,797
   6  javascript        16,363
   7  c++               15,462
   8  python            11,639
   9  jquery            11,287
  10  sql               10,910
  11  iphone             9,686
  12  sql-server         9,165
  13  html               7,932
  14  mysql              7,794
  15  asp.net-mvc        6,532
  16  windows            6,425
  17  wpf                6,370
  18  ruby-on-rails      6,095
  19  c                  6,071
  20  css                5,849
 

The highlighted rows are all Web client technologies – and that ignores all the questions that get tagged as PHP, ASP.NET or Ruby on Rails but actually turn out to involve HTML, CSS or jQuery once the experts have had a look at them. There’s clearly a thriving community of web designers and developers already signed up to Stack Overflow. Should we now all be asking CSS questions on doctype.com instead of using Stack Overflow? I have no idea! 

I realize there are HTML / CSS gurus out there who aren’t currently using Stack Overflow because they think it’s just for programmers – but wouldn’t it be better if Stack Overflow was looking at ways to attract that expertise, rather than renting them a walled garden of their own?  Getting designers and coders to communicate is hard enough at the best of times, and giving them their own “definitive” knowledge-exchange resources isn’t going to help.

2. What Does This Mean For The Stack Overflow Community?

Shortly after discovering doctype.com, I tweeted something daft about “stackoverflow failed as a business”, which elicited this response from one of the guys at Fog Creek… he’s absolutely right, of course. StackOverflow LLC is clearly doing just fine – their product is being enthusiastically received, and I’m thoroughly looking forward to their DevDays event later this month.

However, I think the success of StackOverflow LLC is potentially coming at a cost to stackoverflow.com – the site and the community that surrounds it – and in that respect, I believe that the single, definitive, free site that they originally launched last year has failed to fulfil its potential as a revenue stream.

[Photo: the bridge in Central Park]

The decision to license Stack Exchange to sites that are directly competing for mindshare with Stack Overflow’s “critical mass” worries me, because it suggests that StackOverflow LLC is now calling the shots instead of stackoverflow.com, and making decisions that are financially astute but potentially deleterious to the existing user base.

They are entitled to do this, of course. It’s their site, and I’m extremely grateful that I get to use it for free.

What’s ironic is that the worst case scenario here - for me, for stackoverflow.com, and for the developer community at large - is that doctype.com is wildly successful, becomes the de facto resource for HTML/CSS questions on the internet, generates a healthy revenue stream of its own, and StackOverflow LLC does quite nicely out of the deal. The format is copied by other technology sites, and soon there’s a site for SQL, a site for Java, a site for WinForms, a site for PHP… stackoverflow.com is no longer the definitive resource for programming questions, and we, the users, are back to using Google to trawl a dozen different forum sites looking for answers, and cross-posting our questions to half-a-dozen different sites in the hope that one of them might elicit a response. It’ll be just like 2006 all over again.

OK, So What Would I Have Done Instead?

[Photo: Fortitude, the stone lion outside the New York Public Library]

doctype.com is trying to compete with an established market leader, by licensing that leader’s technology, in a market where the leader has a year’s head start and controls the technology platform. That’s like trying to open a BMW dealership in a town where there’s already a BMW factory outlet, run by two guys everyone knows and loves, whose reputation for service and maintenance is second to none. It has to fail… right? [1]

But – I can appreciate what they’re trying to do. I appreciate that StackOverflow LLC is not a charity, and I appreciate why the folks behind doctype.com think there’s a niche for an SO-style site focusing on designers.

The key to Stack Overflow’s success isn’t the catchy domain name, or that fetching orange branding. The key is the information and the people - I see no technical reason why something like doctype.com couldn’t be licensed as a front-end product that’s integrated with the same database and the same user community as Stack Overflow. Modify the back-end code so that users who sign up at doctype.com get certain filters applied. Use a different web address, a different design, maybe just include questions tagged with html, css, jquery and javascript to start with, so new users see content that’s more immediately relevant to their interests - but when they search or ask a question, they’re getting the full benefit of Stack Overflow’s huge community of loyal experts – not to mention the tens of thousands of accepted answers already in the Stack Overflow database.

How about it? doctype.stackoverflow.com, javaguru.stackoverflow.com, aspnetmvc.stackoverflow.com… each a finely-tuned filtered view onto a single, authoritative information resource for programming questions, from assembler up to CSS sprites. That has to be better than the gradual ghettoization and eventual fragmentation of a thriving community, yes?

Stop Press: Someone just pointed me at ask.sqlservercentral.com. That’s – yep, you guessed it – a Stack Exchange site for SQL questions. As if having to choose between stackoverflow.com and serverfault.com wasn’t bad enough. Does anyone else think this is getting a bit silly?

[1] Of course, it’s entirely possible that Joel & the gang know this, and are quite happy to take $129 a month off the folks at doctype.com whilst they work this out for themselves...

Friday, 16 October 2009

Is There Such A Thing As Test-Driven Maintenance?

One of the great strengths of test-driven development is that systems that are built one tiny test at a time tend to be… well, better. Fewer bugs. Cleaner architecture. Better separation of concerns. The characteristics that make code hard to modify are the same characteristics that make it hard to test, so by incorporating testing into your development cycle, you find – and fix – these pain points early, whilst development is cheap, instead of discovering them three months after you’ve shipped and spending the next two years death-marching your way to version 1.1.

However, there’s a flip-side to this. Not a disadvantage of TDD per se, but something that I think is an unavoidable side-effect of placing so much emphasis on TDD as applied to green-field projects. The “test-driven” part and the “development” part are so tightly coupled that it's easy to assume that automated testing is only applicable to new systems.

I can’t be the only one using Moq and NUnit on new projects, whilst the rest of the time grimly hacking away on legacy code, dancing the Alt-Tab-F5 fandango and spending hours manually testing new features before they go live. And I can’t be the only one who thinks this is just not right – after all, the big legacy apps are the ones with the thousands of paying customers; surely if we’re running automated tests on anything, it should be those?

[Image caption: I love TeamCity so much, I want to go to the park and carve "DB 4 TC 4 EVER" into a tree.]

Last week, two things happened. One was a happy afternoon spent setting up TeamCity to build and manage most of our web projects. The other was a botched deployment of an update to a legacy site – the new code worked fine, but a config change deployed as part of the update actually broke half-a-dozen other web apps running on the same host. Broke, as in they disappeared and were replaced by a Yellow Screen of Death, because the root web.config was loading an HttpModule from a new assembly, and the other web apps were picking up the root’s config settings but didn’t have the necessary DLL. Easily fixed, but rather embarrassing.

If It Runs, You Can Test It

This was a stupid mistake on my part, easily avoided and – it suddenly occurred to me – screamingly easy to detect. We may not have any controller methods or IoC containers to enable granular unit tests, but we can certainly make sure that the site is actually up and responding to HTTP requests.

One of the team put together a very quick NUnit project, which just sent an HTTP GET to the default page of each web app, and asserted that it returned a 200 OK and some valid-looking HTML. Suddenly, after five years of tedious and error-prone manual testing, we had green lights that meant our websites were OK. It took another ten minutes or so to add the new tests to TeamCity, and voila – suddenly we’ve got legacy code being automatically pushed to the test server, and then a way of firing HTTP requests at the server and making sure something comes back.
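
The tests themselves really are that simple – something along these lines (a sketch; the URL is obviously specific to your own setup):

using System.IO;
using System.Net;
using NUnit.Framework;

[TestFixture]
public class WebsiteSmokeTests {

  // Hypothetical URL - point this at whichever app you're checking.
  private const string HomePage = "http://www.website.com.test/";

  [Test]
  public void Homepage_returns_200_OK_and_some_html() {
    var request = (HttpWebRequest)WebRequest.Create(HomePage);
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream())) {
      Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
      var html = reader.ReadToEnd();
      StringAssert.Contains("<html", html.ToLowerInvariant());
    }
  }
}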

You can do this. You can do this right now. TeamCity is free, Subversion is free, NUnit is free, and it doesn’t matter what your web apps are running. Because the ‘API’ we’re testing against is plain simple HTTP request/response, you can test ASP, ASP.NET, PHP, ColdFusion, Java – even static HTML.

What’s beautiful is that, once the test project’s part of your continuous-integration setup, it becomes really easy to add new tests… and that’s where things start getting interesting. Retro-fitting unit tests to a legacy app is hard, but when you need to modify a piece of the legacy app anyway, to fix a bug or add a feature, it’s not that hard to put together a couple of tests for your new code at the same time. Test-first, or code-first – doesn’t matter; just make sure they make it into the test suite. If you’re coupled to legacy data models and payment services and ASP session variables, you’re probably going to struggle to set up the required preconditions. But, most of the time, you’ll find something you can test automatically, which means it’s one less feature you need to worry about every time you make a change or deploy a build.

We now have 19 tests covering over 50,000 lines of code. Yeah, I know - that’s not a lot. But it’s a start, and the lines that are getting hit are absolutely critical. They’re the lines that initialize the application, verify the database is accessible, make sure the server’s configuration is sane, and make sure our homepage is returning something that starts with <html> and has the word “Welcome” in it – because I figure if we’re going to start somewhere, it might as well be at the beginning.

Thursday, 15 October 2009

There Can Be Only One. Or Two. Or Three, but Never Four.

A quick and very simple technique to limit the number of instances of a .NET app that will execute at once:

using System;
using System.Linq;
using System.Diagnostics;

namespace ConsoleApplication1 {
  public class Program {
    static void Main(string[] args) {

      var MAX_PERMITTED_INSTANCES = 3;

      var myProcessName = Process.GetCurrentProcess().ProcessName;
      Process[] processes = Process.GetProcesses();
      var howManyOfMe = processes.Where(p => p.ProcessName == myProcessName).Count();
      if (howManyOfMe > MAX_PERMITTED_INSTANCES) {
        Console.WriteLine("Too many instances - exiting!");
      } else {
        Console.WriteLine("I'm process #{0} and I'm good to go!", howManyOfMe);
        /* do heavy lifting here! */
      }
      Console.ReadKey(false);
    }
  }
}

Very handy if – like we do – you’re firing off potentially expensive encoding jobs every few minutes via a scheduled task, and you’re happy for 3-4 of them to be running at any one time – hey, that’s what multicore CPUs are for, right? - but you’d rather not end up with 37 instances of encoder.exe all fighting for your CPU cycles like cats fighting for the last bit of bacon.

I’m still sometimes really, really impressed at how easy stuff like this is in .NET… I thought this would end up being hours of horrible extern/Win32 API calls, but no. It’s that easy. Very nice.

Tuesday, 13 October 2009

Hey… My World Wide Web Went Weird!

About a week ago, my world wide web went weird. There’s no other way to describe it. Well, OK, there’s probably lots of ways to describe it, but I like the alliteration. Anyway – what happened was, lots of websites just suddenly started looking horrible, for no readily apparent reason. Like the “spot the difference” screenshot below.
[Screenshot: the same page rendered in the badly-hinted Helvetica (left) and in a properly hinted font (right)]
See how the snapshot on the left looks really rather unpleasant, while the one on the right is nice and crisp and readable?

First time I saw it, I assumed it was some ill-inspired redesign of a single site. Second time, I thought it must be some new design trend. Then I noticed it happening on some of our own sites - including our wiki and our FogBugz install – and since I definitely hadn’t messed around with them, that ruled out the possibility of it being something happening server-side. Some sites were still working and looking just fine, so it probably wasn’t a browser issue… but, thanks to a bit of lucky exploration and the awesome power of Firebug, I just worked out what’s going on.

All the affected sites use the same CSS font specification:

body { font-family: Helvetica, Arial, sans-serif; }

Of course, Helvetica isn’t a standard Windows typeface, so on most Windows PCs, the browser will skip over Helvetica and render the document using Arial instead. Last week, whilst working on something for our editorial team, I installed some additional fonts on my workstation, which includes – you guessed it – the Helvetica typeface shown above.

Arial, and most other Windows fonts like Calibri, Verdana and Trebuchet, use a neat trick called font hinting, which ensures that when they’re rendered at small sizes, the shape of the individual glyphs lines up nicely with the pixels of your display – so you get nice, crispy fonts. The particular Helvetica flavour I’d installed obviously doesn’t do this – hence the spidery nastiness in the left-hand screenshot.

I’m guessing either the designers who built most of these sites had a hinted version of Helvetica (possibly ‘cos they’re Mac-based?), or that they just never tested that particular CSS rule on a Windows system with a print-optimised Helvetica typeface installed.

I guess the moral of the story is that if you want to annoy somebody in a really subtle way, install the nastiest Helvetica font you can find on their Windows PC. I’m pretty sure that if I hadn’t stumbled across the solution, sooner or later I’d actually have reinstalled in despair just to get things looking crispy again.

RESTful Routing in ASP.NET MVC 2 Preview 2

Microsoft recently released Preview 2 of the next version of their ASP.NET MVC framework. There’s a couple of things in this release that are designed to allow your controllers to expose RESTful APIs, and – more interestingly, I think – to let you build your own Web pages and applications on top of the same controllers and routing set-up that provides this RESTful API. In other words, you can build one RESTful API exposing your business logic and domain methods, and then your own UI layer – your views and pages – can be implemented on top of this same API that you’re exposing for developers and third parties.

Thing is… I think the way they’ve implemented it in Preview 2 doesn’t really work. Don’t get me wrong; there are some good ideas in there – among them an HTML helper method, Html.HttpMethodOverride, that works with the MVC routing and controller back-end to “simulate” unsupported HTTP verbs on browsers that don’t support them (which was all of them, last time I looked). You write your form code like this:

<form action="/products/1234" method="post">
    <%= Html.HttpMethodOverride(HttpVerbs.Delete) %>
    <input type="submit" name="whatever" value="Delete This Product" />
</form>

and then in your controller code, you implement a method something like:

[AcceptVerbs(HttpVerbs.Delete)]
public ActionResult Delete(int id) {
  /* delete product here */
   return(Index());
}  

[Photo: the London Eye and Houses of Parliament by night. Very restful.]

The HTML helper injects a hidden form element called X-HTTP-Method-Override into your POST submission, and then the framework examines that hidden field when deciding whether your request should pass the AcceptVerbs attribute filter on a particular method.

Now, most MVC routing examples – and the default behaviour you get from the Visual Studio MVC file helpers – will give you a bunch of URLs mapped to different controller methods using a {controller}/{action}/{id} convention – so your application will expose URLs that look like this:

  • /products/view/1234
  • /products/edit/1234
  • /products/delete/1234

Since web browsers only support GET and POST, we end up having to express our intentions through the URI like this, and so the URI doesn’t really identify a resource, it identifies the act of doing something to a resource. That’s all very well if you subscribe to the Nathan Lee Chasing His Horse school of nomenclature, but one of the key tenets of REST is that you can apply a different verb to the same resource identifier – i.e. the same URI – in order to perform different operations. Assuming we’re using the product ID as part of our resource identification system, then:

  • PUT /products/1234 – will create a new product with ID 1234
  • POST /products/1234 – will update product #1234
  • GET /products/1234 – will retrieve a representation of product #1234
  • DELETE /products/1234 – will remove product #1234

One approach would be to map all these URIs to the same controller method – say ProductController.DoTheRightThing(int id) – and then inspect the Request.HttpMethod inside this method to see whether we’re PUTing, POSTing, or what.

This won’t work, though, because Request.HttpMethod hasn’t been through the ‘unsupported verb translator’ that’s included with MVC 2; the Request.HttpMethod will still be “POST” even if the request is a pseudo-DELETE created via the HttpMethodOverride technique shown above.

Now, MVC v1 supports something called route constraints. Stephen Walther has a great post about these; basically they’ll let you say that a certain route only applies to GET requests or POST requests.

routes.MapRoute(
    "Product", 
    "Product/Insert",
    new { controller = "Product", action = "Insert"},
    new { httpMethod = new HttpMethodConstraint("POST") }
);

That last line there? That’s the key – you can map a request for /Product/1234 to your controller’s Details() method if the request is a GET request, and map the same URL - /Product/1234 – to your controller’s Update() method if the request is a POST request. Very nice, and very RESTful.

But – yes, you guessed it; it doesn’t work with PUT and DELETE, because it’s still inspecting the untranslated Request.HttpMethod, which will always be GET or POST with today’s browsers.

However, thanks to the ASP.NET MVC’s rich extensibility, it’s actually very simple to add the support we need alongside the features built in to preview 2. (So simple that this started out as a post complaining that MVC2 couldn’t do it, until I realized I could probably implement what was missing in less time than it would take to describe the problem)

You’ll need to brew yourself up one of these:

/// Allows you to define which HTTP verbs are permitted when determining 
/// whether an HTTP request matches a route. This implementation supports both 
/// native HTTP verbs and the X-HTTP-Method-Override hidden element
/// submitted as part of an HTTP POST
public class HttpVerbConstraint : IRouteConstraint {

  private HttpVerbs verbs;

  public HttpVerbConstraint(HttpVerbs routeVerbs) {
    this.verbs = routeVerbs;
  }

  public bool Match(
      HttpContextBase httpContext,
      Route route, string parameterName, RouteValueDictionary values,
      RouteDirection routeDirection
  ) {
    switch (httpContext.Request.HttpMethod) {
      case "DELETE":
        return ((verbs & HttpVerbs.Delete) == HttpVerbs.Delete);
      case "PUT":
        return ((verbs & HttpVerbs.Put) == HttpVerbs.Put);
      case "GET":
        return ((verbs & HttpVerbs.Get) == HttpVerbs.Get);
      case "HEAD":
        return ((verbs & HttpVerbs.Head) == HttpVerbs.Head);
      case "POST":
        // First, check whether it's a real post.
        if ((verbs & HttpVerbs.Post) == HttpVerbs.Post) return (true);
        // If not, check for special magic HttpMethodOverride hidden fields.
        switch (httpContext.Request.Form["X-HTTP-Method-Override"]) {
          case "DELETE":
            return ((verbs & HttpVerbs.Delete) == HttpVerbs.Delete);
          case "PUT":
            return ((verbs & HttpVerbs.Put) == HttpVerbs.Put);
        }
        break;
    }
    return (false);
  }
}

This just implements the IRouteConstraint interface (part of MVC) with a Match() method that will check for the hidden form field when deciding whether to treat a POST request as a pseudo-DELETE or pseudo-PUT. Once you’ve added this to your project, you can set up your MVC routes like so:

routes.MapRoute(
  // Route name - anything you like but must be unique.
  "DeleteProduct",				 
  
  // The URL pattern to match
  "Products/{guid}", 
  
  // The controller and method that should handle requests matching this route 
  new { controller = "Products", action = "Delete", id = "" },   
  
  // The HTTP verbs required for a request to match this route.
  new { httpVerbs = new HttpVerbConstraint(HttpVerbs.Delete) }
);

routes.MapRoute(
  "CreateProduct",
  "Products/{id}",
  new { controller = "Products", action = "Create", id = "" },
  new { httpVerbs = new HttpVerbConstraint(HttpVerbs.Put) }
);

routes.MapRoute(
  "DisplayProduct",
  "Products/{id}",
  new { controller = "Products", action = "Details", id = "" },
  new { httpVerbs = new HttpVerbConstraint(HttpVerbs.Get) }
);

and finally, just implement your controller methods something along these lines:

public class ProductsController : Controller {
  public ViewResult Details(int id) { /* implementation */ }
  public ViewResult Create(int id) { /* implementation */ }
  public ViewResult Delete(int id) { /* implementation */ }
}

You don’t need the AcceptVerbs attribute at all. I think you’re better off mapping each resource/verb combination to a sensibly-named method on your controller, and leaving it at that. Let proper REST clients send requests using whichever verb they like; let normal browsers submit POSTs with hidden X-HTTP-Method-Override fields, trust the routing engine and route constraints to sort that lot out before it hits your controller code, and you’ll find that you can completely decouple your resource identification strategy from your controller/action naming conventions.
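
If you want to exercise the “proper REST client” side of this without leaving .NET, a couple of lines of HttpWebRequest will hit the DELETE route directly – the URL and port here are just placeholders:

using System;
using System.Net;

class RestClientSmokeTest {
  static void Main() {
    // A genuine DELETE - no form fields, no method override - which the
    // HttpVerbConstraint should route to ProductsController.Delete().
    var request = (HttpWebRequest)WebRequest.Create("http://localhost:4567/Products/1234");
    request.Method = "DELETE";
    using (var response = (HttpWebResponse)request.GetResponse()) {
      Console.WriteLine("{0} {1}", (int)response.StatusCode, response.StatusCode);
    }
  }
}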

BLATANT PLUG: If you’re into this kind of thing, you should come along to Skills Matter in London on November 2nd, where I’ll be talking about the future of web development - HTML 5, MVC 2, REST, jQuery, semantic markup, web standards, and… well, you’ll have to come along and find out. If you’re interested, register here and see you on the 2nd.)

Sunday, 11 October 2009

Coordinating Web Development with IIS7, DNS and the Web Deployment Tool

DISCLAIMER: There’s some stuff in here which could cause all sorts of chaos if it doesn’t sit happily alongside your individual setup – in particular, hacking your internal DNS records is a really bad idea unless you know what’s already in there, and you understand how DNS resolution works within your organisation. Be careful, and if you’re not responsible for your DNS setup, probably best to discuss this with whoever is responsible for it first.

I’ve been setting up a continuous integration system for our main software products. We host 20+ web sites and applications across four different domain names, ranging from ancient legacy ASP applications based on VBScript and recordsets, to ASP.NET MVC apps built with TDD, Windsor, NHibernate and an “alt-net” stack.

[Photo: the City of London skyline, from the South Bank at low tide]

Here’s a couple of things we’ve come up with that make the whole process run a little more smoothly. Let’s imagine our developers are Alice, Bob and myself, and we’re running a three-stage deployment process. Here’s how it works.

  1. Alice implements a new feature, working on code on her local workstation. She has a full local copy of every site, under Subversion control, which she can view and test at www.website.com.local
  2. Once the feature’s done, Alice commits her code. TeamCity – the continuous integration server – will pull the change from Subversion, build it, and deploy the results to www.website.com.build
  3. We run tests – both automated and manual – against this build site. If everything looks OK, we send this .build URL to the stakeholders and testers to get their feedback on the new feature.
  4. Once the tests are green and the stakeholders are happy, the feature is ready for launch. We’ll now use msdeploy to push the entire modified website onto the test server - www.website.com.test
  5. We run integration tests that hit  www.website.com.test – and also www.website.com.test/some_app, www.website2.co.uk.test, www.another-site.com.test – basically, they verify that not only do the individual apps and sites all work, but that they’re co-existing happily on the same server.
  6. Finally, we have a couple of msdeploy tasks set up in TeamCity, that will deploy the entire server configuration from the test server to the public-facing servers.

Setting up Developer Workstations

Most of our developer machines are running Windows 7, which includes IIS7, which supports multiple websites (this used to be a huge limitation of XP Professional, which would only run a single local website). We have a standard setup across workstations, build, test and live servers – they all have a 100Gb+ D: drive dedicated for web hosting, which means we can use msdeploy.exe to clone the test server onto new workstations (or just to reset your own configuration if things get messed up), and to handle the deployment from build to test to live.

Note that this doesn’t mean we’re hard-coding paths to D:\ drives – the apps and sites will happily run from any location on the server, since they use virtual directories and Server.MapPath() to handle any filesystem access. However, it does make life much easier to set up configuration once, and then clone this config across the various systems.

Finally, note that our workstations are 64-bit and the servers are 32-bit, which works fine with one caveat – you can sync and deploy configuration from the servers to the workstations, but not vice versa. In practice, this is actually quite useful – anything being pushed onto the servers should be getting there via Subversion and TeamCity anyway.

Using DNS to manage .local, .build and .test zones

Unless you want to maintain a lot of /etc/hosts files, you’ll need your own local DNS servers for this part – but if your organisation is using Active Directory, you’re sorted because your domain controllers are all local DNS servers anyway. The trick here is to create “fake” locally-accessible DNS zones containing wildcard records. We have a zone called local, which contains a single DNS CNAME record that points  * at 127.0.0.1. This means that anything.you.like.local will resolve to 127.0.0.1 – so developers can access their local copies of every site by using something like www.sitename.com.local.

There’s a DNS zone called build, which contains an ALIAS record pointing * at build-server.mydomain.com, and another one called test, which has an ALIAS record pointing * at test-server.mydomain.com. We’ve also set up *.dylan as an alias for my workstation, and *.alice as an alias for Alice’s PC, and *.bob as an alias for Bob’s PC, and so on.

This seems simple but it’ll actually give you some very neat capabilities:

Of course, this doesn’t work unless there’s a web server on the other end that’s listening for those requests, so our common IIS configuration has the following bindings set up for every site:

[Screenshot: the IIS site bindings]

This looks like a lot of work to maintain, but because developer workstations are set up by using msdeploy to clone the test server’s configuration, these mappings only need to be created once, manually, on the test server, and they’ll be transferred across along with everything else.

I’d be interested to hear from anyone who’s using a similar setup – or who’s found an alternative approach to the same problem. Leave a comment here or drop me a line – or better still, blog your own set-up and send me a link, and I’ll add it here.

A Neat Trick using Switch in JavaScript

You ever see code blocks that look like this?

if (someCondition) {
    doSomeThing();
} else if (someOtherCondition) {
    doSomeOtherThing();
} else if (someThirdCondition) {
    doSomeThirdThing();
} else {
    doUsualThing();
}

Turns out that in JavaScript - and, I suspect, in other dynamically-typed languages that support switch statements - you can do this:

switch(true) {
    case someCondition:
        doSomeThing();
        break;
    case someOtherCondition:
        doSomeOtherThing();
        break;
    case someThirdCondition:
        doSomeThirdThing();
        break;
    default:
        doUsualThing();
        break;
}

Of course, by omitting the break keyword you could wreak all sorts of havoc – but I can think of a couple of scenarios where you want to apply one or more enhancements (e.g. adding one or more CSS classes to a piece of mark-up) and this would fit very nicely.

Saturday, 26 September 2009

I love this photograph.

We have an olive tree growing in a huge pot in our garden, which ended up horribly waterlogged and looking rather unhappy, so this afternoon I set about the back-breaking task of digging it out, emptying the (disgusting) waterlogged soil from the bottom of the pot, and generally sorting the whole thing out.

Turns out you get quite a lot of earthworms in 500 litres of waterlogged soil - and whether he was attracted by the worms, or just curious, this little robin showed up in the garden. I ran inside to grab a camera, hoping he'd still be there when I got back - and he was. He stayed around for most of the afternoon, perching on the tree, the washing line, the shovel, chasing spiders around - at one point he was sitting literally two feet away from me, and actually started singing. It was absolutely mesmerising.

The photo is complete luck; he was hopping around so much I didn't have time to frame or set up a shot, so I just snapped him whenever he stopped for a moment. I was thrilled that this one came out so well.

Thursday, 17 September 2009

A Better Example of Command-Query Separation

Daneel3001 just posted this little quartet on Twitter:

@udidahan decisions. And I guess the command will either succeed, or fail by raising a compensating action (booking couldn't succeed).

@udidahan Yet, there are some cases where the command will not enable the domain, cinema room here, won't have the freedom to make much.

@udidahan You were rightly stating that grid like views of data as having negative effect on understanding intent of user actions.

@udidahan Maybe instead of using the hotel booking analogy for explaining CQS you could have used the cinema seat booking.

- and that reminded me of one of my all-time frustrations with the internet – booking theatre tickets – which, coincidentally, dovetails very neatly with something Udi Dahan was describing last night.

I work in Leicester Square. I walk past a dozen theatres every day, so if I want to see a show, it really doesn’t matter when I go and see it. I can go any time – and yet, every single theatre and ticket website starts the search process by saying “which day do you want to go?” – and then this happens:

Me: Friday.
Computer: Sorry, sold out.
Me: OK, Wednesday.
Computer: Sorry, sold out.
Me: OK, Tuesday.
Computer: Great! Tuesday! We can do that! Now, choose a seating section:
Me: Dress Circle, please.
Computer: Sorry – sold out.

What I really want is a to be able to say “I want to see Mamma Mia. I want to sit anywhere in the front ten rows, anytime between now and Christmas, and I can’t make Sep 23rd or any Saturday” – and, as Udi put it, let the computer do the busy-work. I know that the data is in there. I know that there are algorithms capable of fulfilling that request. Why the hell am I sitting here brute-forcing a solution and whacking the back button like I’m playing Track & Field 2 all over again?

Most current ticket-booking websites will reduce your request to “seats H24, H25, H26 and H27 for Chicago at 19:00 on Saturday 25th” – but that request is just too detailed, and far too prone to failure, and doesn’t reflect AT ALL what I’m actually trying to achieve. Maybe I’m only in town for Saturday night and don’t care what show I see. Maybe I desperately want to see Chicago but I’m prepared to go on any night. Maybe I’d be happy with two pairs of tickets instead of four together. I’d love a website that actually asked me what I wanted on my terms instead of presenting me with a bunch of data and expecting me to do the grunt work.

Command-query separation would appear to be the first step towards this – but it seems that what’s really important here is that your command model reflects the intention of your users, rather than the shape of your data.
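
To make that concrete, a command that captured my actual intention might look something like this – every name here is made up, but notice how much of what I actually want it manages to say:

using System;
using System.Collections.Generic;

// A hypothetical command object for the theatre scenario above.
public class RequestTheatreTickets {
  public string Show { get; set; }                   // "Mamma Mia", or null for "anything good"
  public int NumberOfSeats { get; set; }
  public int MaxRowsFromStage { get; set; }          // "anywhere in the front ten rows"
  public DateTime EarliestDate { get; set; }
  public DateTime LatestDate { get; set; }           // "anytime between now and Christmas"
  public List<DayOfWeek> ExcludedDays { get; set; }  // no Saturdays
  public List<DateTime> ExcludedDates { get; set; }  // can't make Sep 23rd
  public bool AcceptSplitSeating { get; set; }       // two pairs instead of four together?
}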

AltNet Beers – Command/Query Separation with Udi Dahan

Tonight was the twelfth of SerialSeb’s alternative network beers events, and this evening we were lucky enough to have Udi Dahan joining us. Clearly a lot of people in the room were very interested in hearing what Udi had to say because for the first time in history (I think?) the usual altnet musical chairs didn’t happen – the speakers hardly changed for the entire hour. That said, I feel like tonight raised a whole lot of fascinating questions for me, and I’m thoroughly grateful to Udi for taking the time to share his insight, to Seb for organizing the whole thing, and to Tequila\ and Thoughtworks for beer and pizza – thanks!

Command/query separation is a relatively new concept to me, and I’m sure I’ve got the wrong end of the stick here, but I’m going to share my reflections on this evening anyway so I can look back on this in a year or so and laugh at myself. Feel free to laugh now if you’ve already got your head around all this.

Anyway. The underlying principle of CQS seems to be that reading data and changing data are actually fundamentally different operations in any business system, and that trying to use the same architectural style for both of these operations can lead to all sorts of chaos.

It also seems pretty obvious that CQS is a topic with a lot of potential “false peaks”. Maybe you’ve refactored your Customer object to use a ChangeName() method instead of exposing property setters for Forenames and Surname. Maybe you’ve exposed a bunch of denormalized data based on optimised stored procedures for your common query scenarios, and you’re still using a rich domain model to do your inserts and updates. In each case, you probably think you’re doing command/query separation – but there’s more to it than that. Until tonight, I thought CQS just meant having some special methods on your data access code for returning big lists of stuff in a hurry. Now, I’m pretty sure I don’t really know what it is at all.

A couple of great highlights from Udi’s contribution to the discussion tonight:

  1. Users are always working with stale data. The information on their screen could potentially be stale by the time they actually see it. In any multi-user system, people are always making decisions and requests based on stale data. (“This is obvious, it’s physics – you can’t fight it. Well, you can, but you’ll lose”)
  2. Separating queries from commands allows the commands to model the user’s intention more clearly – which in turn allows the software to deal gracefully with conflicts and failures (“Sorry, room 133 is taken – would you like room 155?”), where a more granular system might just throw an exception because the data is no longer valid. (“Booking failed – not in the database!”)
  3. The reason we create domain models is really just that we need somewhere to store all our complex business rules, but it’s easy for elements of business rules to leak into the controllers or presentation layers when we’re manipulating domain objects directly.
  4. The ideal CQS approach is that every business operation involves exactly three things (there’s a code sketch of this shape just after the list):
    1. Find a domain entity
    2. Execute one method on that entity
    3. Commit the transaction.
  5. With this approach, it’s impossible for any business logic to ‘leak’ into the presentation or controller layers – because they’re not making any decisions. Every business operation, complete with all the validation and processing and rules associated with that operation, has to be exposed as a single entry point to the domain model.
  6. The domain entity that exposes the method will probably behave as an aggregate root for the purpose of that operation – but different entities will act as aggregate roots for different operations. Again, this was a bit of an eye-opener for me; talking about DDD gave me the impression that an aggregate root was a fixture of your business model, not something you could chop and change based on what makes sense for a particular operation.
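
Here’s the three-step shape from point 4 in code – a minimal sketch using NHibernate-flavoured names, none of which come from Udi’s talk:

using System;
using NHibernate;

// Hypothetical domain types - the names are mine, not Udi's.
public class BookingRequest {
  public DateTime Night { get; set; }
  public int Guests { get; set; }
}

public class Hotel {
  public virtual Guid Id { get; set; }
  public virtual void BookRoom(BookingRequest request) {
    // all the rules, validation and processing for this operation live here
  }
}

public class BookingService {
  private readonly ISession session;
  public BookingService(ISession session) { this.session = session; }

  public void BookRoom(Guid hotelId, BookingRequest request) {
    using (var tx = session.BeginTransaction()) {
      var hotel = session.Get<Hotel>(hotelId); // 1. find a domain entity
      hotel.BookRoom(request);                 // 2. execute one method on that entity
      tx.Commit();                             // 3. commit the transaction
    }
  }
}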

Finally, an analogy of my own that came to me on the way home, that might help, or might be horribly naive and misguided, but which I rather like and which I’ll share here in the hope of provoking some conversation. ‘Traditional’ domain modelling is like home baking; your data store is a supermarket, where the various products on offer are your objects. They’re all there, laid out for you to search through and count and process. To do anything complicated – like making a soufflé – you need to acquire all the various objects required for that operation, then manipulate and combine them in all sorts of complicated ways to achieve the result you’re after. If anything goes wrong – you forget the butter, or you over-cook the eggs – boom! No soufflé for you. Transaction aborted.

CQS seems far more like eating at a fine restaurant. You don’t choose your meal from an array of component products; instead, you get given a menu – a read-only representation of the domain that’s optimised for rapid retrieval. Based on the information on the menu, you then execute a command – you tell the waiter what you’d like to eat – but the structure of that command expresses your intention far more explicitly than the complex series of interactions involved in doing it yourself. If the data that informed your decision is stale - say they’ve just run out of haddock - the command carries enough context that the waiter can offer you the sea bream instead, or perhaps the mackerel, and the entire dining transaction isn’t abandoned.

I guess the question is, do you want your users to feel like they’re making a soufflé, or dining in a Michelin-starred restaurant?

Monday, 14 September 2009

Determining FluentNH Schema Mappings based on Entity Namespaces

[Photo: Sardinia Sunrise]

I’m setting up some Fluent NHibernate mappings for a rewrite of some of our legacy code, and one of the issues I’ve hit is that we make extensive use of cross-database views and joins – the data supporting our app is split across three separate SQL Server databases (which, thankfully, are all hosted by the same SQL Server instance).

Turns out this is pretty easy to do – Mike Hadlow has a great post here which covers the fundamentals.

I’ve extended this principle a bit, using the Conventions facility provided by Fluent NHibernate, so that the SQL database for each entity is determined by the entity’s namespace. I have a model that looks (very) roughly like this. Let's imagine that my core web data is in a database called WebStuff, my accounts system is in CashDesk and my CRM database is in DylanCrm. Each mapped entity is declared in a sub-namespace of my common Dylan.Entities namespace, with these sub-namespaces named to reflect the database they’re mapping:

using System.Collections.Generic;
using Dylan.Entities.CashDesk;
using Dylan.Entities.DylanCrm;

namespace Dylan.Entities.WebStuff {
	public class WebUser {
		public int Id { get; set; }
		public Customer AssociatedCustomer { get; set; }
	}
}

namespace Dylan.Entities.CashDesk {
	public class Invoice {
		public int Id { get; set; }
		public Customer Customer { get; set; }
	}
}

namespace Dylan.Entities.DylanCrm {
	public class Customer {
		public int Id { get; set; }
		public IList<Invoice> Invoices { get; set; }
	}
}

NHibernate will quite happily retrieve and update data across multiple databases, by prepending the schema name to the table names - so you end up running SQL statements like SELECT ... FROM CashDesk.dbo.Invoice WHERE .... If you're only mapping a handful of tables, it's easy to specify the schema for each table/object as in Mike's example - but you can also use FluentNHibernate.Conventions to achieve the same thing.

First off, you'll need to add a new class which implements IClassConvention and modifies the Schema property of each class mapping:

using FluentNHibernate.Conventions;
using FluentNHibernate.Conventions.Instances;

public class SchemaPrefixConvention : IClassConvention {

	// "Dylan.Entities.CashDesk" becomes "CashDesk"
	private string ExtractDatabaseName(string entityNamespace) {
		return entityNamespace.Substring(entityNamespace.LastIndexOf('.') + 1);
	}

	// Fluent NHibernate calls this once per mapped class; we qualify its table with "<Database>.dbo"
	public void Apply(IClassInstance instance) {
		instance.Schema(ExtractDatabaseName(instance.EntityType.Namespace) + ".dbo");
	}
}

Once you've done that, you just need to reference this convention when you set up your mappings; if you're using the auto-mapping facility, it looks like this:

mappings.AutoMappings.Add(
	AutoMap
		.AssemblyOf<Invoice>()
		.Where(t => t.Namespace == "Dylan.Entities.CashDesk")
		.Conventions.Add<SchemaPrefixConvention>()
	);

mappings.AutoMappings.Add(
	AutoMap
		.AssemblyOf<Customer>()
		.Where(t => t.Namespace == "Dylan.Entities.DylanCrm")
		.Conventions.Add<SchemaPrefixConvention>()
	);

mappings.AutoMappings.Add(
	AutoMap
		.AssemblyOf<WebUser>()
		.Where(t => t.Namespace == "Dylan.Entities.WebStuff")
		.Conventions.Add<SchemaPrefixConvention>()
	);

Fluent NH will run your Apply() method against each mapped class in each of these three mappings, which means the resulting configuration will qualify each table name with a schema name derived from the mapped class’s namespace – and once that’s in place, you can query, retrieve, update, join and generally hack your objects about at will, and completely ignore the fact that under the hood they’re actually being persisted across multiple data stores.
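
For completeness, here’s a rough sketch of how those auto-mappings might plug into the rest of the configuration. The MsSql2008 dialect and the connection-string key are my assumptions rather than anything from the project above, and I’m assuming the usual FluentNHibernate.Cfg namespaces are imported:

var sessionFactory = Fluently.Configure()
	// Assumed connection string pointing at the SQL Server instance hosting all three databases
	.Database(MsSqlConfiguration.MsSql2008
		.ConnectionString(c => c.FromConnectionStringWithKey("WebStuff")))
	.Mappings(mappings => {
		mappings.AutoMappings.Add(
			AutoMap.AssemblyOf<Invoice>()
				.Where(t => t.Namespace == "Dylan.Entities.CashDesk")
				.Conventions.Add<SchemaPrefixConvention>());
		// ...and the Customer and WebUser auto-mappings, exactly as above...
	})
	.BuildSessionFactory();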

I think that's quite neat.

Tuesday, 18 August 2009

The Problem with Stack Overflow

I love Stack Overflow. I think it’s a fantastic resource, not to mention a beautifully-engineered social community. But sometimes mixing knowledge resources with social networking can be… distracting. Let’s say you’re, I don’t know, trying to choose an open source blog engine to add to a .NET site you’re working on.

It starts like this:

[screenshot]

Then this:

[screenshot] Ah-ha… *click* –

[screenshot] Oooh – how exciting! A comment! *click*

.

.

.

Some time later:

What do you want me to do? LEAVE? Then they'll keep being wrong!

Thanks to xkcd for the cartoon, and for many, many more I have loved over the years.

Inspired by the last comment on my accepted answer to this. Where do I even begin?

Also – do you have any idea how hard it is to get a screenshot of Stack Overflow’s squawk bar? Once you’ve seen it, it goes away and never comes back. And you can’t spoof it by setting up another user account to post a temporary comment on your own thread, because your second account can’t comment until it has reputation... I’m actually really quite impressed at how hard it would be to game the system.

Sunday, 2 August 2009

Alt.Net UK Conf – Unit Testing WinForms UIs

A couple of links, and the code we hacked together, from the session on unit-testing desktop UIs at the Alt.Net UK Conference on Sunday:

  • Ben Hall’s Blog Post on Project White and automated UI testing
  • Project White on Codeplex
  • The “Hello World” sample app we put together during the session is here – you’ll need the Project White download as well, and you’ll need to edit the hard-coded EXE paths. (There’s a rough sketch of the kind of test we wrote just below.)
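
For anyone who wasn’t there, here’s a minimal sketch of the kind of test Project White lets you write against a WinForms app. The EXE path, window title and control IDs below are made up, and the exact namespaces and overloads vary a little between White releases, so treat it as an illustration rather than a drop-in sample:

using NUnit.Framework;
using White.Core;                       // namespaces may differ slightly depending on your White build
using White.Core.Factory;
using White.Core.UIItems;
using White.Core.UIItems.WindowItems;

[TestFixture]
public class HelloWorldUiTests {

	[Test]
	public void Clicking_the_button_updates_the_label() {
		// Hypothetical EXE path, window title and automation IDs
		Application application = Application.Launch(@"C:\Path\To\HelloWorld.exe");
		Window window = application.GetWindow("Hello World", InitializeOption.NoCache);

		Button button = window.Get<Button>("sayHelloButton");
		button.Click();

		Label label = window.Get<Label>("greetingLabel");
		Assert.AreEqual("Hello, world!", label.Text);

		application.Kill();
	}
}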

Friday, 24 July 2009

Tool Time

So… having installed Windows 7 beta twice and the release candidate three times, it feels like I’m turning bare Windows boxes into working developer workstations about once a week at the moment, so here’s the low-down on what I put onto a bare Windows install to get things real nice‘n’kentucky. Partly because it’s nice to share, but more because I’ll need a list to work from when Windows 7 goes gold in October and I end up doing this all over again.

Microsoft Office 2007

Office 2007, because everyone else in the world uses it, and because Exchange calendaring is actually pretty good. I use completely separate accounts for work and personal mail – my work e-mail is all in our Exchange server at work, which means if need be I can share my work mailbox with my co-workers without sharing any personal stuff.

Visual Studio 2008 and SQL Server 2008

VS2008 and SQL2008 are kinda obvious – I can’t really imagine building .NET business apps without them. Unless you’re some sort of C# ninja who codes against MySQL and PostgreSQL libraries using vim.exe, in which case send me a picture – I’ll put you on a T-shirt.

Although I’m not a fan of Resharper, Coderush or any of the other ‘heavyweight’ refactoring add-ins, I do use a couple of little VS2008 utilities.

Show Me The Money

As well as the full Microsoft / MSDN licensing bundle, there are a couple of high-end commercial apps that I absolutely swear by. They’re not open source – you can’t share, modify, hack or fork them – and when there are so many great free apps around, paying hundreds of pounds for an application can be a bit of a shock, but they’re powerful, flexible, beautifully-crafted tools, and they are worth every single penny.

Red Gate SQL Toolbelt

Red Gate’s database tools are fantastic. Awesomely powerful, intuitive, rock-solid, and polished – if you do anything at all with SQL Server databases, you need these tools. The SQL Toolbelt includes the whole lot for just under a grand (i.e. roughly the same as hiring a decent contract DBA for three days) and once you’ve used them, you’ll never want to build a project without them again.

Axure RP Pro

There are great tools out there for writing code, editing photos, writing documents, creating databases and debugging CSS, but for designing software, Axure RP blows everything else out of the water. It’s expressive, it’s intuitive, and the resulting interactive prototypes show people exactly what you’re planning to deliver – which is great, because you find out what you’ve got wrong after three hours instead of three weeks.

SnagIt

Capture your screen, annotate it, scribble on it, move things around, snip and cut and paste and shuffle and reorganize – SnagIt is intuitive, powerful, and works extremely well. The latest version even supports basic video capture – and if you need more advanced video capture, Camtasia Studio from the same people, TechSmith, is well worth a look.

Beyond Compare

For when Subversion’s built-in diff doesn’t really cut it. Beyond Compare is the best file-compare utility out there, bar none. One little touch I really like – their 30-day trial license only counts days that you actually run the software, so if you only use it once a week, it’ll be a good six months before the trial expires. More software should do this.

The Best Things In Life Are Free

Chrome, Firefox, Safari and Opera

Latest versions of these (plus Internet Explorer, of course) are pretty much essential for testing final release web apps. I have a slightly odd set-up – I use IE for “work browsing” (MSDN, FogBugz, our wiki), I use Firefox for GMail, and I use Chrome for pretty much everything else. On Windows 7, you might want to try the Chrome Channel Changer which will pull updates from Google’s weekly alpha builds instead – might be a bit wobbly but generally works a lot better on Windows 7 than the mainstream build.

Firebug

The only Firefox add-in I ever use; Firebug gives you full, detailed debug views into your CSS, HTML, HTTP, and Javascript, all at your fingertips. Building web pages without Firebug is like playing the piano in boxing gloves.

7-Zip

The best archiver and archive manager out there, bar none. 32-bit and 64-bit versions; supports every archiving format you can think of, with a decent GUI on the top. Open source. Free. Fast. Bye-bye Winzip. It’s been… emotional.

Digsby

Twitter, Facebook, LinkedIn, MSN Messenger, AIM, ICQ and Jabber (Google Talk) in one client. Just watch out for all the heinous bloatware in their installer – don’t say yes to or accept anything except the first screen, and you should be fine.

Notepad2, TextPad and A. N. Other Editor…

Notepad2 is fast, free, lightweight, and lovely. You’ll want to download Kai Liu’s installer that replaces Windows notepad.exe.

For a long time I swore by TextPad, which back in the day was a truly impressive editor, but recent releases have felt to me like they’re treading water a bit; some subtle UI changes between 4.x and 5.x meant it didn’t really feel like the upgrade I’d been waiting for, and since I now do most of my actual coding in Visual Studio, switching text editors isn’t the life-changing transition it would have been a couple of years ago. I’ve been playing around with Notepad++, EditPad Pro and UltraEdit, and will probably end up installing all of them at some point, but I don’t really have a favourite editor right now.

Cygwin

Cygwin provides a huge collection of Unix command line utilities – sed, grep, bash, tar, that kind of thing – that are just useful to have around. I install Cygwin at C:\Windows\Cygwin\ - which keeps it neatly out of the way - and then add C:\Windows\Cygwin\bin\ to the system path, and then forget about it, because grepping for stuff just works and that’s the whole point.

If you want to use Cygwin’s git client, you’ll need to add the optional git and openssh packages, because you’ll need ssh-keygen.exe to set things up, and then git.exe to wrangle your repositories.

SlikSvn and TortoiseSvn

TortoiseSvn is the wonderfully smooth and polished Windows shell extension that gives you right-click version control menus in Windows. It’s not just a great revision-control client; it’s a wonderful example of how you can seamlessly integrate your software into the OS instead of needing lots of great clunky windows and forms all over the place. SlikSvn provides the command-line Windows binaries; although Tortoise does 99% of the day-to-day stuff, once in a while it’s useful being able to call svn from batch files and scripts, and that’s where SlikSvn comes in.

NUnit, Moq and TestDriven.NET

I like the simplicity of NUnit; I like Moq’s Linq-driven syntax, and I like the way TestDriven.net gives you all this from a right-click anywhere in your project. I particularly like that once these are all in place, they become the easiest way to run a chunk of experimental code, so your successful experiments often end up as unit tests without even trying. I like that.
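
To give a flavour of that workflow, here’s a minimal sketch of the sort of test that tends to accumulate – the IExchangeRateService and PriceCalculator are hypothetical stand-ins I’ve made up for illustration, but the NUnit and Moq plumbing is the standard stuff:

using Moq;
using NUnit.Framework;

// Hypothetical domain bits, purely for illustration
public interface IExchangeRateService {
	decimal GetRate(string fromCurrency, string toCurrency);
}

public class PriceCalculator {
	private readonly IExchangeRateService rates;
	public PriceCalculator(IExchangeRateService rates) { this.rates = rates; }

	public decimal ConvertPrice(decimal amount, string from, string to) {
		return amount * rates.GetRate(from, to);
	}
}

[TestFixture]
public class PriceCalculatorTests {

	[Test]
	public void Converts_prices_using_the_exchange_rate() {
		// Moq's lambda-driven setup: no hand-rolled fakes required
		var rates = new Mock<IExchangeRateService>();
		rates.Setup(r => r.GetRate("GBP", "USD")).Returns(1.6m);

		var calculator = new PriceCalculator(rates.Object);

		// Right-click, run the test via TestDriven.NET, and you've got instant feedback
		Assert.AreEqual(16m, calculator.ConvertPrice(10m, "GBP", "USD"));
	}
}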

Snippet Compiler

Snippet Compiler compiles snippets. It’s like the notepad.exe of .NET IDEs, and it’s wonderful for just hacking together tiny programs to automate ad-hoc tasks or try out an idea.

image Paint.NET

It’s free, it’s open-source, it works, and it’s powerful. If you’re used to Photoshop it can take a bit of getting used to, but otherwise it’s a great application to have around.