Wednesday, 17 December 2008

Mocking the QueryString collection in ASP.NET

One of the hardest parts of building testable web applications using ASP.NET is the HttpContext object, which encapsulates access to the HTTP request and response, server state like the Session and Application objects, and ASP.NET's implementation of various other bits of the HTTP specification.

HttpContext has a God complex. It's all-seeing, all-knowing, ever-present, and most WebForms apps just call HttpContext.Current and work with whatever comes back. This approach really doesn't lend itself to test-driven designs, though, so the ASP.NET MVC team have implemented a collection of virtual base classes - HttpContextBase, HttpRequestBase, etc. - which gives us the ability to isolate elements of the HttpContext for testing purposes, either using a mocking framework or by writing our own test classes that inherit from those base classes. On the whole, this approach works fairly well - especially once you start explicitly passing an HttpContextBase into your controllers instead of letting them run amok with HttpContext.Current - but there's still some legacy implementation details inherited from ASP.NET that can cause a bit of confusion with your isolation tests.

In ASP.NET - both MVC and WebForms - the QueryString property of the HttpContext.Request claims to be a NameValueCollection. It isn't - which becomes immediately apparent if you're trying to test a controller method that handles IIS 404 errors. In classic mode, IIS will invoke a custom error handler as follows. Let's say you've mapped 404 errors to /MyMvcApp/Error/NotFound - where MyMvcApp is a virtual directory containing an ASP.NET MVC application, which contains an ErrorController with a NotFound() method.


When your browser requests http://myserver/page/is/not/here.aspx, IIS doesn't find anything, so it invokes your configured handler by effectively requesting the following URL:

http://myserver/MyMvcApp/Error/NotFound?404;http://myserver:80/page/is/not/here.aspx

Notice that there are no key/value pairs in that query string. The code in my controller that parses it uses HttpContext.Request.QueryString.ToString() to extract the raw query string - but here's where it gets a bit weird. The framework claims that Request.QueryString is a NameValueCollection, but at runtime, it's actually a System.Web.HttpValueCollection. The difference is significant because HttpValueCollection.ToString() returns the URL-encoded raw query string, but NameValueCollection.ToString() returns the default Object.ToString() result - in this case "System.Collections.Specialized.NameValueCollection" - which really isn't much use to our URL parsing code.
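You can see the difference with a couple of lines of code - a quick sketch:

var rawQuery = "404;http://myserver:80/page/is/not/here.aspx";

// A plain NameValueCollection just gives you Object.ToString():
var plain = new System.Collections.Specialized.NameValueCollection();
Console.WriteLine(plain.ToString());
// prints "System.Collections.Specialized.NameValueCollection"

// HttpUtility.ParseQueryString() actually returns an HttpValueCollection:
var parsed = System.Web.HttpUtility.ParseQueryString(rawQuery);
Console.WriteLine(parsed.ToString());
// prints the query string back out (URL-encoded)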

So - to test our parsing code, we need our mock to return an HttpValueCollection. Problem is - this class is internal, so we can't see it or create new instances of it. The trick is to use System.Web.HttpUtility.ParseQueryString(), which will take the raw query string and return something that claims to be a NameValueCollection but is actually an HttpValueCollection. Pass in the URL you need to test, and it'll give you back a querystring object you can pass into your tests.

Putting it all together gives us something along these lines - this is using NUnit and Moq, but the query string technique should work with any test framework.

[Test]
public void Verify_Page_Is_Parsed_Correctly_From_IIS_Error_String() {

	// Here, we inject a test query string similar to that created
	// by the IIS custom error handling system.
	var iisQueryString = "404;http://myserver:80/i/like/chutney.html";
	var testQueryString = HttpUtility.ParseQueryString(iisQueryString);

	Mock<HttpRequestBase> request = new Mock<HttpRequestBase>();
	request.ExpectGet(req => req.QueryString).Returns(testQueryString);

	Mock<HttpContextBase> context = new Mock<HttpContextBase>();
	context.Expect(ctx => ctx.Request).Returns(request.Object);

	// Note that we're injecting an HttpContextBase into ErrorController
	// In the real app, this dependency is resolved using Castle Windsor.
	ErrorController controller = new ErrorController(context.Object);

	ActionResult result = controller.NotFound();

	// TODO: inspect ActionResult to check it's looked up the requested page
	// or whatever other behaviour we're expecting.
}

Monday, 15 December 2008

Open Source .NET Exchange

I’ll be presenting a short session on jQuery at the SkillsMatter Open Source .NET Exchange here in London on January 22nd.

If you’re a .NET developer of any kind, you’ve probably seen or heard people talking about stuff like web UI frameworks, object-relational mapping, fluent APIs, asynchronous messaging, aspect-oriented programming – and you might well be wondering what they are, and why they’re relevant. These events are designed as a sort of “tasting menu” of open source frameworks and techniques – six fifteen-minute sessions that’ll give you some idea of what these technologies can do, why you might want to consider using them, and where you can find more information if you’re interested.

In the jQuery session, I’ll be showing you how jQuery’s CSS-based selector syntax and flexible “chaining” API let you add rich, cross-browser behaviour and effects to your web pages. I’ll demonstrate how to add animation, dynamic content and AJAX callbacks to your web pages,  and hopefully include a few examples from the multitude of freely-available plug-ins and libraries built on top of the jQuery framework. Yes, all that in fifteen minutes. Like I said, jQuery makes things easy.

You can see the full programme at SkillsMatter’s site. I’m really pleased that I’m speaking first, because it means I get to relax and listen to the rest of the speakers afterwards – in particular, Mike Hadlow’s session on the repository pattern. Mike’s code (and help!) were invaluable on one of my projects earlier this year, and in particular his Linq-to-SQL repository – part of the Suteki Shop project - was a great example of how this pattern can make your code cleaner and your life easier.

If any or all of this sounds interesting (or if you just fancy an evening of beer, pizza and geek chat) then please sign up and come along - especially if you’ve not come along to an event like this before.

Friday, 12 December 2008

Stuff You'll Wish You'd Known When You Switched To 64-bit Windows

A pagoda at sunset in Kyoto, Japan. This has nothing to do with 64-bit Windows, but it is quite pretty.

64-bit Windows is great. I've been running on XP 64 and Vista 64 for about a year now. My extremely old Canon USB scanner isn't supported, and I had to wait a long time for 64-bit drivers for my Line6 Pod XT, but otherwise everything works very nicely - and for running virtual PCs, the extra memory is really worth it.

That said, there's a couple of underlying differences that result in some very odd behaviour in day-to-day usage. It's important to realize that the 32-bit and 64-bit Windows subsystems exist as parallel but separate environments. Confusingly, the 64-bit tools and utilities live in C:\Windows\System32, and their 32-bit counterparts live in C:\Windows\SysWOW64. 32-bit processes on x64 Windows actually run inside the WOW64 emulation layer, which transparently redirects their requests for underlying system resources.

64-bit Windows ships with 32-bit and 64-bit versions of lots of common applications - including Internet Explorer and the Windows Script Host. Check out the links below for some more detailed discussion of the architecture and reasoning behind this.

Internet Explorer

You'll find Internet Explorer and Internet Explorer (64 bit) in your Start menu. The 64-bit version can't see any 32-bit components - so no Flash player, no Java, no plugins, no ActiveX controls, nothing. This is useful for testing, but not much else.

Windows Script Host

If you run a Windows script (myscript.vbs) from cmd.exe or directly from Explorer, it'll run as a 64-bit process, which means it can't see any 32-bit COM objects like ADO connections and datasets. If you explicitly invoke it using C:\Windows\SysWOW64\cscript.exe, it'll run as a 32-bit process.
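For example, to run a script that needs 32-bit COM components under the 32-bit script host (the script path here is just an example):

C:\>C:\Windows\SysWOW64\cscript.exe C:\scripts\legacy-report.vbs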

The Windows\System32 folder

64-bit apps - like the notepad.exe that ships with Windows - can see C:\Windows\System32\ and its various subfolders. 32-bit apps - like TextPad - can't see this folder, because Windows is "hiding" the system folders from the 32-bit process. This is completely baffling when you try to edit your /etc/hosts file using your normal editor and it appears to be completely missing - even though you had it open in Notepad a second ago. There's a special alias - sysnative - that lets 32-bit apps see the real System32 folder, though on some versions of 64-bit Windows you may need an update before it works.

The Registry

The same caveat applies to the registry.  When a 32-bit app asks for a value stored under, say,

HKEY_LOCAL_MACHINE\Software\MyCompany\MyProject,

64-bit Windows will actually return the value from

 HKEY_LOCAL_MACHINE\Software\Wow6432Node\MyCompany\MyProject.

This is normally fine, because most 32-bit apps are installed by a 32-bit installer, so the redirection is in place both during install (when the keys are created) and at runtime (when they're used). If you're manually importing registry keys - e.g. by double-clicking a .reg file - they'll import into the locations specified in the file, and then your 32-bit apps won't be able to find them. You'll need to manually copy the keys and values into the Wow6432Node subtree (or edit the original .reg file and re-import it).
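For example, if the .reg file you're importing contains something like this (the key and value here are purely illustrative):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\MyCompany\MyProject]
"ConnectionString"="Data Source=myserver;Initial Catalog=MyProject"

then a 32-bit app asking for that key won't find it - you'll need the same values under:

[HKEY_LOCAL_MACHINE\Software\Wow6432Node\MyCompany\MyProject]
"ConnectionString"="Data Source=myserver;Initial Catalog=MyProject"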

References

http://blogs.msdn.com/helloworld/archive/2007/12/12/activex-component-can-t-create-object-when-creating-a-32-com-object-in-a-64-bit-machine.aspx

http://blogs.sepago.de/helge/2008/03/11/windows-x64-all-the-same-yet-very-different-part-5/

Thursday, 4 December 2008

Fun with Server.GetLastError() in classic ASP on Windows Server 2008

One of our sites, written many moons ago in classic ASP using JScript, uses a bunch of custom error pages to handle 404 errors, scripting errors, and so on.

Our error handling code looks like this:

var error = Server.GetLastError();
var errorMessage = "";
errorMessage += Server.HTMLEncode(error.Category);
if (error.ASPCode) errorMessage += Server.HTMLEncode(", " + error.ASPCode);
var errorNumber = error.number;
errorNumber = ((errorNumber<0?errorNumber+0x100000000:errorNumber).toString(16))
errorMessage += " error 0x" + errorNumber + " (from " + Request.ServerVariables("SCRIPT_NAME").Item + ")\r\n\r\n";
if (error.ASPDescription) errorMessage += "ASPDescription: " + error.ASPDescription + "\r\n";
if (error.Description) errorMessage += "Description: " + error.Description + "\r\n";

// and then we log and do something useful with errorMessage

On our old server, this worked because the HTTP 500 error page was mapped to a custom URL, /errors/500.asp, which included the code above.

When we migrated our site onto IIS7 recently, this stopped working - the custom page was still executing, but Server.GetLastError() wasn't returning any information about what had gone wrong.

There was a very similar known bug in Vista which was supposedly fixed in SP1, but it looks like the same fix isn't part of Windows Server 2008 yet. There is a workaround, though: if you set the site's default error page (under IIS settings -> Error Pages -> Edit Feature Settings...) to the custom page (see below), IIS will invoke it whenever an error isn't handled by an explicitly configured status-code handler (so your 404 handlers and so on still work) - and for some reason, handling the error this way means Server.GetLastError() works properly again.

image
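For reference, the UI setting corresponds to something like this in the site's web.config (a sketch - the error page paths are the ones from our site, so yours will differ):

<system.webServer>
  <httpErrors errorMode="Custom" defaultResponseMode="ExecuteURL" defaultPath="/errors/500.asp">
    <!-- explicitly configured status-code handlers still work alongside the default page -->
    <remove statusCode="404" subStatusCode="-1" />
    <error statusCode="404" responseMode="ExecuteURL" path="/errors/404.asp" />
  </httpErrors>
</system.webServer>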

Thursday, 27 November 2008

Lies, Damned Lies, and Statistics

One of our big customer contracts is up for renegotiation next month. This involves pulling a list of all the search & site activity that originated from that customer over the last year, and then negotiating based on whether usage is up or down. Over the last few years we've seen 10%-15% increases from this particular account, year-on-year, which is good. Yesterday morning I ran the stats report and got this:

Not good. In fact, very very worrying indeed. Whilst the marketing team went into crisis mode to work out what the hell we were going to do if this was real, I started double-checking to make sure this was genuine. It certainly looked genuine. The graph is horribly organic, the way the decline is gradual, occasional peaks and troughs, but with a very, very definite downward trend. In my experience, when software fails, it tends to fail in big straight lines - everything just stops working completely and stays there.

Turns out the stats were wrong - huge sigh of relief all round - but the reason why they were wrong is, I think, quite interesting. These statistics are calculated using some custom logging routines in our (legacy ASP) web code. When a user first hits the site, we create a record in the UserSession table in our database that stores their IP address, user agent string, user ID, and so on. There's some counter fields in that table that are incremented over the course of the session as the user accesses particular resources, so we can build up a fairly accurate picture of which resources get accessed heavily, by whom, and at what times throughout the day.

Well, it turns out our CreateUserSession() routine was failing if the browser's UserAgent string was longer than 127 characters. Historically, this was never a problem, but at some point last year Microsoft started putting all sorts of information about .NET framework versions and plugins into the HTTP_USER_AGENT header sent by Internet Explorer (Scott Hanselman has a great post about this if you're interested). As various updates were pushed out to our users via Windows Update and corporate rollouts, the user agent strings were getting longer and longer, until one day they'd exceed 127 characters - and that particular PC would stop showing up in our logs. Whenever they'd roll out new hardware, we'd see the stats increase temporarily, until those new boxes were upgraded and the same thing happened. Hence the gradual decline and the fact that non-IE users were unaffected.

We would have noticed this a long time ago, of course - but the CreateUserSession() call was wrapped in a try/catch block that called a notification function when it caught an exception, and somewhere along the line, the notification mechanism for this particular system had been commented out. I'd love to blame someone else for this, but Subversion has a commit with my name on it sometime last year with the relevant line mysteriously commented out.

I believe the kids are calling that an "epic fail". I believe they have a point.

Wednesday, 19 November 2008

HQL-lo World

I've been playing with Castle ActiveRecord for a project I'm working on, and hit a brick wall earlier tonight that left me completely stuck for a couple of hours... and turned out to be incredibly simple and obvious. Turns out I'd refactored one of my business objects - from Page to CmsPage - and hadn't noticed that in one particular place in the code, I was doing this:

var rootPages = new SimpleQuery<CmsPage>(@"from Page p where p.Parent is null");
return (rootPages.Execute());

The Execute() call there was throwing an ActiveRecordException that just said {"Could not perform ExecuteQuery for CmsPage"} - no InnerException, nothing showing up in SQL Profiler, nothing except a bunch of query strings that all looked fine to me:

image

Even enabling ActiveRecord logging (which was wonderfully easy, by the way) didn't help - I couldn't see anything obviously amiss in the NHibernate logs.

Turns out I'd not yet got my head around a fundamental concept of object-relational mapping, namely that you are querying your objects, not your database. The string literal in the SimpleQuery definition that looks a bit like LINQ is HQL - Hibernate Query Language. I'd used the [ActiveRecord(Table="Page")] attribute to map the renamed class to the underlying DB table, which is still called Page, and it just completely didn't occur to me that the HQL query needs to be changed to reflect the new class name. Change that query to

var rootPages = new SimpleQuery<CmsPage>(@"from CmsPage p where p.Parent is null");

and it works as intended. I fear this ORM stuff is going to take some getting used to...
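For reference, the mapping in question looks roughly like this - a simplified sketch, with illustrative property names:

[ActiveRecord(Table = "Page")]
public class CmsPage : ActiveRecordBase<CmsPage> {

    // The class is called CmsPage; the underlying table is still called Page.
    // HQL queries against this entity use the class name, never the table name.

    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public string Title { get; set; }

    [BelongsTo("ParentId")]
    public CmsPage Parent { get; set; }
}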

Saturday, 8 November 2008

Usability Tip of the Day: Label your Form Elements, Dammit.

I see high-profile, expensive, shiny, corporate websites all the time that don’t label their form inputs. It’s easy. It’s accessible. And – in the case of checkboxes and radio buttons, where the form inputs themselves are about this big :, it’s massively helpful, because in almost every modern browser, you can click the label instead of having to click the actual form element. It’s staggering that so-called professional web developers don’t label their form elements properly. Here’s how you do it:

<input type="radio" id="beerRadioButton" name="beverage" value="beer" />
<label for="beerRadioButton">Beer</label>
<input type="radio" id="wineRadioButton" name="beverage" value="wine" />
<label for="wineRadioButton">Wine</label>

(In ASP.NET WebForms, if you set the AssociatedControlID of an <asp:Label /> control, it’ll render an HTML label element with the correct for="" attribute; if you omit the AssociatedControlID attribute, it won’t even render as a label...)
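For example, a minimal WebForms sketch (the control IDs here are just illustrative):

<%-- Renders <label for="...">Email address</label> pointing at the textbox --%>
<asp:Label runat="server" AssociatedControlID="emailTextBox" Text="Email address" />
<asp:TextBox runat="server" ID="emailTextBox" />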

Here's a form example without the labels wired up properly:

And here are the same radio buttons, with the labels wired up properly. See how in this example, clicking the labels selects their associated radio buttons, but in the previous form you have to actually click the radio button itself?
Just do it, OK? It helps people using screen readers. It helps mobile browsers. And it helps people downloading Lego instructions after a couple of beers. Trust me.

Working on ASP.NET MVC Beta and Preview code side-by-side

I have an app built against ASP.NET MVC Preview 3 that needs some tweaking, and I'm also working on a couple of projects using ASP.NET MVC Beta, so I'm in the slightly odd situation of trying to build and run preview 3 and beta projects side-by-side. (Yes, I will be updating this code to run against the beta version. I don't have time to do that this weekend, though, and I need some changes live before Monday afternoon.)

I've just checked out the preview 3 project to make some changes, and although it builds absolutely fine, I'm seeing the lovely Yellow Screen of Death when I try and run it:

Server Error in '/' Application.

Method not found: 'Void System.Web.Mvc.RouteCollectionExtensions.IgnoreRoute   (System.Web.Routing.RouteCollection, System.String)'.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.  Exception Details: System.MissingMethodException: Method not found: 'Void System.Web.Mvc.RouteCollectionExtensions.IgnoreRoute(System.Web.Routing.RouteCollection, System.String)'. 

This is weird, because this code is deployed and running live on a box that doesn't have any versions of MVC installed; in theory, the project is entirely self-contained and XCOPY-deployable. First thing I tried was to shut down Visual Studio, uninstall ASP.NET MVC Beta, reinstall Preview 3, reload VS2008. That worked, so it's definitely the beta doing something strange. This project has hard-wired references to copies of the MVC assemblies in the \Dependencies folder of the solution, which are copied to the \bin folder during the build. It looks like the beta is installing something that's interfering with this process. Frustratingly, the installers also set up the MVC Web Application project type in Visual Studio, so although I can run the site without any versions of MVC installed, I can't open it in VS2008 because of the "project type is not supported" error.

Ok, first thing to realize is that, according to ScottGu's beta release blog post, the beta installs System.Web.Mvc, System.Web.Routing and System.Web.Abstractions to the GAC to allow them to be automatically updated. The preview versions of MVC would only install them to C:\Program Files\Microsoft ASP.NET\.

Given this particular chunk of web.config code:

<system.web>
  <compilation debug="true">
    <assemblies>
      <add assembly="System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    </assemblies>
  </compilation>
</system.web>

the runtime is going to use the first version of System.Web.Mvc matching the specified culture, version number and public key token. This is significant because the CLR checks the GAC first when resolving assembly references - and if it finds a matching assembly in the GAC, it won't look anywhere else. The ASP.NET MVC previews and beta release all use the same assembly version, culture and public keys, so the CLR has no way of distinguishing between the preview 3 version of System.Web.Mvc and the beta version of the same assembly. They're different DLLs with different file versions, but because the assembly version is the same, the CLR regards them as the same assembly.

There are techniques you can use to override this behaviour, but, according to this thread on StackOverflow, these techniques only work if the assembly in the GAC has a different version to the assembly that's deployed with your application.

Ok - no problem, we'll just remove System.Web.Mvc from the GAC, by running gacutil.exe /u to uninstall it.

C:\Documents and Settings\dylan.beattie>gacutil /u system.web.mvc
Microsoft (R) .NET Global Assembly Cache Utility. Version 3.5.30729.1
Copyright (c) Microsoft Corporation. All rights reserved.

Assembly: system.web.mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL
Unable to uninstall: assembly is required by one or more applications
Pending references:
SCHEME: <windows_installer> ID: <msi> DESCRIPTION : <windows installer>
Number of assemblies uninstalled = 0
Number of failures = 0

C:\Documents and Settings\dylan.beattie>

Works on MY Machine! OK, that didn't work. Because we installed the ASP.NET MVC beta using Windows Installer, it's registered a dependency on System.Web.Mvc that means we can't uninstall it. So... registry hack time. This is the bit that might kill your PC, wife, cat, whatever.  Editing the registry is dangerous and can cause all kinds of problems, so read this stuff first, and if it sounds like a good idea, proceed at your own risk.

Fire up regedit and navigate to HKEY_CLASSES_ROOT\Installer\Assemblies\Global, and you should find a key in there called

System.Web.Mvc,version="1.0.0.0",culture="neutral",publicKeyToken="31BF3856AD364E35",processorArchitecture="MSIL"

I deleted this key. I also got a bit carried away and deleted the key

System.Web.Mvc,1.0.0.0,,31bf3856ad364e35,MSIL

from

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\GACChangeNotification\Default

as well... but I forgot to try gacutil /u first, so I don't know whether this second step is necessary or not. It seemed like a good idea, though, and doesn't appear to have broken anything, so you may or may not need to delete this second key as well.

Having removed those keys, I could run gacutil /u and remove System.Web.Mvc quite happily:

C:\>gacutil /u System.Web.Mvc
Microsoft (R) .NET Global Assembly Cache Utility. Version 3.5.30729.1
Copyright (c) Microsoft Corporation. All rights reserved.

Assembly: System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL
Uninstalled: System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL
Number of assemblies uninstalled = 1
Number of failures = 0

C:\> 

My preview 3 project now builds and runs quite happily against the System.Web.Mvc DLLs installed as part of the website, and the VS2008 MVC Project template still works just like it did before.

Monday, 3 November 2008

A Rant about RAID, with a Bad Metaphor about Eggs, and No Happy Ending.

I went in to work this morning and my main workstation had died over the weekend. Bluescreen on boot, no safe mode, nothing. Windows Update gone bad? We'll probably never know, given I don't think it's coming back any time soon... but, as with previous overnight machine suicides, it looks like a problem with SATA RAID - specifically, two WD Velociraptors in a RAID-1 (mirror) array controlled by an Intel ICH10R chipset on an Asus P5Q motherboard.

You know your whole eggs & baskets thing, right? SATA RAID is like carefully dividing your eggs into two really good baskets, then tying them together with six feet of wet spaghetti and hanging them off a ceiling fan.

Long story short, I lost a day, and counting. I had to split the mirror into individual drives, switch the BIOS back to IDE, which gave me a bootable OS but - seriously - no text. No captions, no icon labels, no button text, nothing. Just these weird, ghostly empty buttons. Running a repair off the WinXP x64 CD got my labels back, but somehow left Windows on drive D. Another half-hour of registry hacks to get it back to drive C: where it belongs, and I had a creaking but functional system - VS2008 and Outlook are working, but most of my beloved little apps are complaining that someone's moved their cheese. Reinstalling is probably inevitable, along with the deep, deep joy that is reinstalling Adobe Creative Suite when your last remaining "activation" is bound to a PC that now refuses to deactivate it. Even Adobe's support team don't understand activation. Best they could come up with was "yes, that means there's no activations on that system." Err, no, Mr. Adobe, there are. It was very clear on that point. Wouldn't let me run Photoshop without it, you see. "Oh... then you'd better just reformat, and when you reinstall, you'll need to phone us for an activation override". Thanks, guys. I feel the love.

Sorry, I digress. This whole experience is all the more frustrating because RAID mirrors are supposed to be a Good Thing. If you believe the theory, RAID-1 will let you keep on working in the event of a single drive failure. Well... In the last 5 years or so, I haven't had a single workstation die because of a failed hard drive, but I've lost count of the number of times an Intel SATA RAID controller has suddenly thrown a hissy-fit under Windows XP and taken the system down with it. Every time it starts with a bit of instability, ends up a week or two later with bluescreens on boot and general wailing and gnashing of teeth, and every time, running drive diagnostics on the physical disks shows them to be absolutely fine.

This is across four different Intel motherboards - two Abit, one Asus, and a Dell Precision workstation - running both the ICH9R (P35) and ICH10R (P45) chipsets, and various matched pairs of WD Caviar, WD Raptor, WD Velociraptor and Seagate drives. One system was a normal Dell Precision workstation, the others are various home-built combinations, all thoroughly memtest86'ed and burned-in before being put into production doing anything important.

Am I doing something wrong here? I feel like I've invested enough of both my and my employer's time and money in "disaster-proofing" my working environment, and just ended up shooting myself in the foot. I'm beginning to think that having two identical workstations, with a completely non-RAID-related disk-mirroring strategy, is the only way to actually guarantee any sort of continuity - if something goes wrong, you just stick the spare disk in the spare PC and keep on coding. Or hey, just keep stuff backed up and whenever you lose a day or two to HD failure, tell yourself it's nothing compared to the 5-10 days you'd have lost if you'd done something sensible like using desktop RAID in the first place.

[Photo from bartmaguire via Flickr, used under Creative Commons license. Thanks Bart.]

Sunday, 2 November 2008

The Roadcraft of Programming

I was chatting with Jason "Argos" Hughes after the Skillsmatter event last week, and he said something I think is really quite brilliant, so I hope he doesn't mind if I quote him here and expand on his ideas a little.

We were discussing the merits of various different platforms and programing languages, and he said "knowing a language inside-out doesn't make you a better programmer, any more than knowing a lot about a particular car makes you a better driver".

wacky_races

That comment has been going round and round my head ever since, and I think that's one of the most insightful metaphors about programming languages that I've heard. Anyone who's owned a car will know that every make and model - and every individual example of a particular model - has its idiosyncrasies and quirks. I drive a slightly knackered Vauxhall Tigra. On this particular car, I know that I need to replace the cam-belt every 40,000 miles or Really Bad Things might happen. I know that I need to clean the gunk out of the frame around the back window otherwise it fills up with rainwater; I know where the little lever to adjust the seats is, and where all the various controls and switches are, and how to check the oil and change the headlamp bulbs. 

None of this makes me a good driver. In fact, it has absolutely nothing to do with my driving ability. Beyond a basic familiarity with a vehicle's controls and signals, the Highway Code has very little to say about the quirks and idiosyncrasies of particular cars. On the other hand, it has rather a lot to say about stopping distances, speed limits, lane discipline, the importance of maintaining awareness of your surroundings and communicating your intentions clearly to other road users. In other words, being a good driver boils down to discipline, restraint, awareness and communication - your choice of vehicle is largely irrelevant. Good drivers are good whatever they're driving, and the choice of car alone can't turn a poor driver into a good one.

I think there are strong parallels here with software development. Good coders are like good drivers; they'll work within the safe parameters of whatever technology they're using, exercise restraint and discipline in the application of that technology, and rely on awareness and communication to make sure that what they're doing doesn't create problems for other people.

Programming interviews can easily degenerate into a pop-quiz about the characteristics of a particular language or platform, but maybe we should be approaching them more like a driving test - even to the extent of letting the candidate demonstrate their problem-solving capabilities using whatever languages and tools they're comfortable with, and then discussing the results in terms of clarity, effective communication, restraint and awareness. Even though we're a .NET shop, I can see how a developer who can create elegant solutions in Ruby or Java and explain clearly what they've done might be a better .NET programmer than somebody who knows every quirk of C# and ASP.NET but can't demonstrate those core qualities of discipline, restraint, awareness and communication.

Huagati DBML Tools for Linq-to-SQL

The bridge in Central Park, NY. Nothing to do with DBML tools. Just looks pretty.

I haven't had a chance to use them yet, but the Huagati DBML/EDMX tools look interesting - a set of extensions to the DBML designer in Visual Studio 2008 that provide some additional functionality, including the much-needed ability to update your DBML to reflect changes in your database schema. It's a commercial package costing $119.95 per user, but a free trial license is available.

With Microsoft effectively abandoning Linq-to-SQL, it's good to see tools like this in the wild. Of course, it'd be really good to see Microsoft open-source Linq-to-SQL and let the community develop it as they see fit... but failing that, these tools can make things easier if you're maintaining an existing Linq-to-SQL system.

Monday, 20 October 2008

Help! I have Multiple Internet Personality Disorder!

Update - looks like you can already do this. chrome.exe will accept a --user-data-dir="" switch, so you can set up shortcuts with different profiles - and it works, really quite well. I now have three Chrome shortcuts that bring up different homepages with different sets of persisted cookies. No colour-coding or cool icons, though...
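The shortcut targets end up looking something like this (the profile folder names are just examples):

chrome.exe --user-data-dir="C:\ChromeProfiles\Home"
chrome.exe --user-data-dir="C:\ChromeProfiles\Work"
chrome.exe --user-data-dir="C:\ChromeProfiles\Testing"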

I have too many Google accounts. Or rather, I have the right number of Google accounts for me, but that's too many for Google, who would seemingly be much happier if I only had the one.

I'll explain. I have a Gmail mailbox, which I forward copies of stuff to, so I can get hold of it from anywhere. My main mailbox runs on a Linux box so old I think it's actually running Redhat instead of Fedora, so Gmail acts as a second-level backup strategy as well. There's a couple of calendars and things in this Google account as well. I also have a Google account linked to my 'real' e-mail address, which I use to sign in to Blogger and various other online services that have ended up under the Google umbrella. Then I have another set of credentials, which are the accounts we've set up at work for access to stuff like Google Webmaster Tools and Google Analytics. I'm not entirely happy with Google linking my e-mail or my blog to my employer's website statistics, so I keep this separate as well.

Basically - I want multiple, independently-persisted identities, with their own history, their own cookies and their own shortcuts, so when I say "remember me", I'm actually saying "remember who I'm pretending to be right now". Google Chrome already has 'incognito mode' (and we all know what that means, right?). Can we have work mode, home mode, geek mode, pretending-to-be-a-client-so-I-can-test-my-own-website mode, and as many other modes as we want? With their own colours? And icons? And desktop / start menu shortcuts?


Actually, it doesn't have to be Google Chrome at all, it's just that their little "secret agent" icon guy worked really well for the screen mockup. Firefox could do this. Or even Internet Explorer. I know there are cookie-switcher add-ons for Firefox et al, but what none of these solutions offer, as far as I can tell, is the ability to use multiple identities simultaneously - and since Google's made such a big thing of Chrome's separate-processes-for-each-tab technology, it seems like it couldn't be too hard to give those processes their own profiles and history.

Saturday, 18 October 2008

Adding a work-in-progress to Subversion

I love Subversion, but from time to time I'll stumble across a bit of SVN behaviour that just doesn't feel quite right. Case in point - you've created 10-15 files, set up a folder structure for a new project, made rather more progress than you were expecting to, and now you want to check the whole thing into revision control.

The 'proper' way of adding existing code to a repository is via the svn import command, but that doesn't turn your local folder into a Subversion working copy. Having completed the import, you'll then need to move/rename/delete your work in progress, and then do an svn checkout to download the version of your project that's now under revision control. This can take a while if you're working on big files and your repository is on the far end of a slow connection... and even when that's not applicable, it's still frustrating.

So, here's how you can add a new project to Subversion without having to do the import-checkout shuffle.

  1. Use the repo-browser to create a new empty folder in the repository - this will form the root folder of your new project, so call this folder /myproject/trunk or whatever you'd normally use.
  2. Check out the empty folder into the folder containing your work-in-progress project.  You'll get this warning - which is fine, because what you're doing is 'wrapping' an empty SVN folder around your existing work.

    image

  3. You'll check out a single folder, and you'll see that your project now consists of a root folder with the happy green SVN icon, containing a bunch of folders with the question-mark overlay that means "Subversion doesn't know about this folder yet..."

    image

  4. Now you can do an svn commit in the usual way, and it's trivial to add the 'new' files (i.e. all of them) that should be added to the repository. On the first commit, you'll need to uncheck the bin/obj folders for .NET projects, and then on the subsequent commit, you'll be able to add them to the SVN ignore list (you can only ignore a folder whose parent is already under version control).
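If you prefer the command line to TortoiseSVN, the same dance looks something like this (the repository URL and paths are placeholders):

C:\work\myproject>svn checkout svn://my.subversion.server/myproject/trunk .
C:\work\myproject>svn add --force .
C:\work\myproject>svn commit -m "Initial check-in of work in progress"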

Friday, 17 October 2008

Googling the Zeitgeist

Just for fun, I googled "website", and got  a little glimpse into the internet zeitgeist as Googlebot sees it. With such a generic query, it's basically comparing websites based on a lowest common denominator and, presumably, the sites with the greatest number of incoming links and highest page-rank bubble to the top.

www.google.com tells us the most important website-related websites in the world, right now, are

  1. Website - Wikipedia, the free encyclopedia
  2. Welcome to Obama for America - Barack Obama's presidential campaign website
  3. Microsoft
  4. Website.com (who are presumably here because they talk about websites a lot)
  5. Adobe
  6. Apple
  7. The IRS (US tax and revenue agency)
  8. Starbucks
  9. McDonalds
  10. Subway (restaurants)

That's interesting... Wikipedia, Barack Obama, tech companies, coffee, taxes and fast food. It's like a little summary of the daily lives of hi-tech America. (Worth noting that of the sites in that list,  Microsoft, Apple and Adobe have a Pagerank of 9/10, while website.com has a fairly unremarkable 6/10)

Then Ben Taylor tried the same thing on Google UK, which gives us

  1. The BBC
  2. Banksy (the "street artist")
  3. en.wikipedia.org/wiki/Website
  4. Oasis (the band)
  5. The Secret Intelligence Service
  6. The British royal family
  7. Bloc Party (the band)
  8. The National Trust (an organisation that works to protect historic buildings and sites of natural beauty in the United Kingdom)
  9. Number 10 Downing Street (official residence of the Prime Minister)
  10. Shakespeare's "Romeo & Juliet" on Google Books.

That's us... Graffiti, Shakespeare, history, and rock'n'roll... how very British.

It'll be interesting to see how those lists change over time... graphing the progress of specific topics up and down the Google "website" results over time would make for interesting viewing. Watch this space. Or rather, come back in about six months when I've got the data, and then watch this space.

Thursday, 16 October 2008

Updated VS2008 / Castle / NHibernate solution bundle

I've just uploaded a new version of my quick'n'dirty VS2008 solution, which now includes the log4net and Iesi.Collections projects (these were formerly being referenced from Program Files\Castle Project\ and so wouldn't build on a box without Castle installed). The whole thing is now completely self-contained, so it should build & run without any dependencies other than the .NET framework itself, and everything's done using relative project references so you should be able to get step-debugging right down to the SQL command calls. But it's late and I haven't tried that yet.

As I've said before, this is aimed at getting something up and running with the Castle ActiveRecord stack as easily as possible, so I can play with it and see what it does.

Having seen Ayende's post about building Rhino-Tools from the various libraries' SVN trunks, I'm now convinced there might be a way of using SVN externals and NAnt to create a single project that automatically builds against latest trunk revisions of the various libraries - I guess this is one of those areas where only experience will tell you whether you're better off running against nightly trunk commits or just picking a stable revision and building against that, but I'm sure it'll be educational finding out.

You can get the ZIP here if you're interested. Again, I must restate that I didn't write any of this; log4net is distributed by Apache, NHibernate is from www.nhibernate.org,  Castle is from www.castleproject.org, and all I've done is package them together for convenience.

Thursday, 9 October 2008

Cycling, Eyedroppers and the Benefits of Laziness Applied with the Proper Tools

I  cycle to work. I work in Leicester Square in London, and I live about three miles away. On a really good day, the trip door-to-door by public transport takes about 25 minutes. On a really bad day, you can end up stuck on a crowded bus full of angry people, in standstill traffic, for two hours, because there's a problem on the Underground on the same day they're digging up the water mains along the bus route.

Dahon Matrix 2007 - yep, it's a proper bike that folds in half. That's how I get it up to the fourth floor in the lift every morning.

By bike, it takes about 20 minutes, plus time to shower & change when you get there. But that's a constant. It doesn't vary depending on traffic or roadworks or industrial action by Underground staff.

I believe that cycling is the 'optimum' way to travel to and from work. If software development was travel, cycling would be agile, test-driven and all that jazz. It's healthy, it's cheap, it's green, it's often fun. What I really love about it, though, is that even after a rotten day when I'm tired and fed up and just want to go home, I still get 20 minutes of exercise on the way home, because I have to get home somehow, and cycling is the fastest and easiest way to do it.

Good software tools should be like bikes; they should encourage better habits by making the right thing to do  the same as the easy thing to do.

Which, by a rather circuitous route, brings us to Instant Eyedropper, which I stumbled upon earlier this week and find myself rather smitten with. It's tiny. It's fast. It's free. It works ridiculously well. It loads on startup, sits in your system tray, and when you need a colour, you just drag it out of the system tray, drop it onto the colour you need, and it copies the appropriate hex code to your clipboard. It takes about a second - literally - and then gets the hell out of your way so you can get on with whatever you were doing.

I have occasionally, in the past, "guessed" HTML colour codes on the fly because I can't face digging through CSS looking for the right value, or opening Photoshop just to use the eyedropper tool to pick a colour off a screen capture. I've used eyedropper tools before, but somehow they've never quite got the formula right. With Instant Eyedropper, though, when you're in a hurry, it's quicker to do it properly than it is to guess. I like that.

Check it out. It's free and you might like it.

Thursday, 2 October 2008

ASP.NET MVC Preview 3 and Linq-to-SQL - One Month On

At the start of September, we launched a web app based on ASP.NET MVC preview 3 and Linq-to-SQL, and I'm happy to say that it's generally gone really, really well.

Our primary codebase is legacy ASP in JScript, but for this latest project - an online proof and payment system for actors renewing their Spotlight membership - we needed something faster, more robust and generally better. I'd been playing with the ASP.NET MVC previews since version 1, and while the framework's obviously still very much in development, I figured my ASP.NET background would mean a much easier learning curve than trying to pick up MVC at the same time as learning a new view engine like Brail or NVelocity. I used a Linq-to-SQL implementation of IRepository<T> based on code from Mike Hadlow's Suteki Shop project - the original intent was just to try it out for a couple of hours to see how it worked, but it performed so well for what we needed that I just went ahead and built the rest of the app on top of it.

Linq-to-SQL clearly has some interesting potential, but right now its biggest problems are the slightly clunky tooling (having to hack the XML to create cascade-delete relationships, and no way to refresh a table in the LINQ designer if the schema's changed, for example) and the big question mark hanging over its future. Between the Entity Framework and the various open-source OR mappers competing for mindshare, not to mention talk of a Linq-to-NHibernate implementation, it's really not clear whether Linq-to-SQL will ever see another release which fixes the problems with the current one, or whether it's just going to be quietly retired as a historical curiosity.

ASP.NET MVC, on the other hand, looks like it's really going to go places - especially with the news that Microsoft are going to be shipping - and supporting - jQuery with ASP.NET MVC and Visual Studio. The day I get my hands on Intellisense for jQuery will be a good day indeed. I can't wait.

Tuesday, 30 September 2008

A Ready-To-Hack Visual Studio 2008 Solution including NHibernate and Castle Active Record

I've been taking my first steps into the wonderful world of NHibernate and Castle ActiveRecord, and to make things easier, I've put together a Visual Studio 2008 solution that should build NHibernate, Castle Core, Castle Windsor, Castle MicroKernel and Castle ActiveRecord from source, just by firing it up in VS2008 and hitting Ctrl-Shift-B, so you can load it, build it, run it, and start hacking around and getting your hands dirty.

I didn't actually create any  of this - NHibernate is from www.nhibernate.org, Castle is from www.castleproject.org. Thing is, these projects are all interdependent to some extent, the official binaries aren't always up-to-date or in sync with each other, and building them from source means setting up nant, working through various machine-specific configuration glitches... so hopefully this will save you a bit of head-scratching. I've just done an svn trunk checkout from the various projects over the last few days, built each project using nant as per the included instructions, then extracted the Visual Studio projects for the actual runtime libraries (so I've left out unit tests, etc.) and combined it into a single solution, along with a very rudimentary ActiveRecord model project, a simple console app showing how to get things up and running using app.config, and a SQL Express .mdf file containing some test data to make sure it's working.

This is undocumented and full of holes - but if, like me, you've decided it's time to learn this stuff and you just want a working build to play around with, it'll probably save you a couple of hours.

Oh, and it's called Metafink. No reason - things just need a name.

Download it from http://www.dylanbeattie.net/misc/metafink.zip

EDIT: Turns out this isn't quite complete - I tried building it on a machine that had never had any of this stuff on it before, and it appears that log4net and a couple of other DLLs required by NHibernate are missing from the package. For the sake of completeness (and being able to step-debug through the whole stack), I'll try and incorporate these as projects rather than just linking to the binaries - should get this sorted out later today / tonight. Sorry!

Making Your SVN mod_dav_svn Repository Firefox-Friendly

There's a great add-on module for Subversion - mod_dav_svn, which I've blogged about before - that exposes the contents of your repository through a Web server interface. This is great for bringing up designs, ideas and HTML prototypes in meetings - we've got one of those interactive whiteboard things, and we've  saved lots of time, and probably a couple of acres of forest, by showing designs on the screen instead of printing handouts.

This doesn't quite work out-of-the-box, though. It'll sort-of work if you're using Internet Explorer to browse your WebDAV repository, but Firefox and Opera will probably display everything as plain text. Or gibberish. This is because Apache is sending a Content-Type header telling the browser that the content is text/plain, and Apache in turn is getting this information directly from Subversion. To get everything displaying properly, you'll need to make sure that every file in your repository has the proper MIME type associated with it in Subversion.

Using auto-props to set MIME types automatically when adding files to a repository

This bit depends on your client. Check out the official documentation on the auto-props feature; it's also worth knowing that you can open the svn configuration file in Notepad via the handy Edit button in the Tortoise settings dialog - right-click any folder window in Windows Explorer, hit TortoiseSVN -> Settings...

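Once you've got the config file open, the relevant bits look something like this (a sketch - add whichever extensions you care about):

[miscellany]
enable-auto-props = yes

[auto-props]
*.gif = svn:mime-type=image/gif
*.jpg = svn:mime-type=image/jpeg
*.png = svn:mime-type=image/png
*.htm = svn:mime-type=text/html
*.html = svn:mime-type=text/html
*.css = svn:mime-type=text/css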

To update files already in the repository

The auto-props feature is all very well, but if (as I did) you don't find out about it until you've already got a repository full of stuff, you have a second problem - how do you set the MIME-type properties on everything that's already in your repository?

This works on Windows, via the command shell, and needs the command-line version of svn installed - try the SlikSVN installer if you don't have svn already installed. Remember that although your repository is probably organised into projects, with their own trunks, tags and branches, it's still just a great big hierarchy of files and folders - if you do an svn checkout from svn://my.subversion.server/ without specifying /myproject/trunk, you will check out the HEAD version of your entire repository. (These techniques work just as well on individual trunks, branches and sub-folders, of course.)

First, check out the folder, branch or even the entire repository into a working folder - say c:\repository\.

Then run this:

C:\repository>for /r %1 in (*.gif) do svn propset svn:mime-type image/gif "%~f1"

for is the Windows shell command that basically says "repeat the following for every file matching this specification" - and we're saying for /r %1 in (*.gif), meaning "recursively find every file matching *.gif in or below the current folder, temporarily reference that file as %1, and run the following command" - where the command itself is svn propset svn:mime-type image/gif "%~f1"

Note that the %1 reference there is quoted, and we're using the ~f modifier to expand it to the full path - you may find

C:\repository>for /r %1 in (*.gif) do echo "%~f1"

enlightening if this doesn't make sense - remember, everything after the do is invoked for each matching file.

So, when for matches something.gif under myproject\trunk in your repository, it'll call svn.exe with the command line

svn propset svn:mime-type image/gif "C:\repository\myproject\trunk\something.gif"

- which will set the MIME-type on something.gif to image/gif.

Repeat this incantation using the various file extensions and MIME types you need to configure, e.g.

C:\repository>for /r %1 in (*.jpg) do svn propset svn:mime-type image/jpeg "%~f1"
C:\repository>for /r %1 in (*.htm*) do svn propset svn:mime-type text/html "%~f1"

and once you're done, commit your changes back to the repository. You'll see a whole lot of SVN "Property changed" messages, and next time you browse your repository via mod_dav_svn, you should find things are working as expected.

Saturday, 9 August 2008

Colour transformation in .NET and GDI+... aka "What Is The Matrix?"

I've been working on some code that converts colour photographs uploaded by users to black-and-white. To do this, I'm using the following code to render an image onto a canvas, using GDI+, and applying a colour transformation in the process:

private Bitmap ApplyMatrix(Bitmap source) {
    // Example: a standard luminance-weighted grayscale matrix.
    // (Use the ColorMatrixLab tool described below to experiment with other values.)
    ColorMatrix matrix = new ColorMatrix(new float[][] {
        new float[] { 0.299f, 0.299f, 0.299f, 0, 0 },
        new float[] { 0.587f, 0.587f, 0.587f, 0, 0 },
        new float[] { 0.114f, 0.114f, 0.114f, 0, 0 },
        new float[] { 0,      0,      0,      1, 0 },
        new float[] { 0,      0,      0,      0, 1 }
    });

    Bitmap result = new Bitmap(source.Width, source.Height);
    Rectangle sourceRectangle = new Rectangle(0, 0, source.Width, source.Height);
    using (Graphics g = Graphics.FromImage(result)) {
        g.SmoothingMode = SmoothingMode.HighQuality;
        g.CompositingQuality = CompositingQuality.HighQuality;
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;

        ImageAttributes ia = new ImageAttributes();
        ia.SetColorMatrix(matrix);

        Point upperLeft = new Point(0, 0);
        Point upperRight = new Point(result.Width, 0);
        Point lowerLeft = new Point(0, result.Height);
        Point[] destinationPoints = new Point[] { upperLeft, upperRight, lowerLeft };
        g.DrawImage(source, destinationPoints, sourceRectangle, GraphicsUnit.Pixel, ia);
    }
    return (result);
}

The key to these colour transformations is a ColorMatrix - the GDI+ colour model lets you treat the <R,G,B> elements of a colour as a point in three-dimensional colour space, and apply geometric transformations to that point using matrix multiplication. There's some in-depth discussion of this at http://www.codeproject.com/KB/GDI-plus/colormatrix.aspx.
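For instance, the fifth row of the matrix acts as a translation, so a matrix that leaves the colours alone apart from adding a fixed amount of brightness to each channel looks something like this (the 0.1 is arbitrary):

ColorMatrix brighten = new ColorMatrix(new float[][] {
    new float[] { 1, 0, 0, 0, 0 },
    new float[] { 0, 1, 0, 0, 0 },
    new float[] { 0, 0, 1, 0, 0 },
    new float[] { 0, 0, 0, 1, 0 },
    new float[] { 0.1f, 0.1f, 0.1f, 0, 1 }   // added to R, G and B respectively
});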

"Nobody can be told what the matrix is... you have to see it for yourself."

Even once you've got your head around the underlying mathematics, it's still not easy to work out what matrix values you need to achieve a particular result, so I've hacked together a WinForms app that'll let you tweak the values in real time and see what effect they have. Nothing fancy - just a bunch of numeric up/down boxes, with tooltips explaining what effect they'll have on the resulting image.

You can download ColorMatrixLab (source and binary) here:

(requires the Microsoft .NET 2.0 framework.)

Tuesday, 29 July 2008

Crazy from the Heat...

Computers don't like heat. Apparently. Years ago, I was putting together a system for my brother based on one of the old AMD Athlon CPUs.  Built it, tested it, installed Windows, everything running beautifully. Fire it up an hour or two before he arrives to pick it up... it bluescreens and won't boot. Open up the case, check everything's seated properly... you know the drill. It's all fine, of course. Three hours later, I still can't work out what's wrong. Every component is fine. Every diagnostic passes. The disks are fine. The memory is fine. Eventually, and completely by chance, I actually move the case off the desk onto the floor whilst it's running... and it crashes. Turns out the heatsink clamp was ever-so-slightly bent out of shape. Unlike the LGA775 heatsinks of today with their wonderfully-engineered motherboard mountings, the old Athlon heatsinks just clipped onto the plastic CPU socket, and what was happening was that when the box - a generic mini-tower case - was up on the desk, on its side, running tests and diagnostics, everything was fine. When I put it back together and flipped it the right way up - i.e. standing vertically - the weight of the heatsink combined with the bent clip was just enough to pull the heatsink out of contact with the CPU, which would then shoot up to 96°C and crash spectacularly. A new heatsink clip and some arctic silver and it worked perfectly.

Anyway. Moral of the story is, in my experience, PCs go funny in the summer. Whether it's the heat or just plain coincidence I don't know, but they do. And when they do, the first thing to check - always - is the memory. Get the Ultimate Boot CD, load up MemTest86, and let it run overnight. (If anything's wrong, it generally shows up in about two minutes... but if it'll run overnight without any problems, your RAM is almost certainly OK.)

Faulty memory creates the most bewildering array of crashes, faults, errors and bluescreens I have ever seen. Having inadvertently run a system with a stick of bad RAM for a couple of weeks, I would at various points have sworn it was the RAID controller, the hard drive, the video card, Windows, the printer driver - in fact, pretty much every component of the system seemed to have caused it to crash at one point or another. I'd ignored the possibility of the memory, because the system in question isn't that old and it was tested when I put it together... I was wrong, and just running Memtest86 in the first place would have saved literally hours of troubleshooting and head-scratching.

Gadgets and Gizmos Time...

We're on a green drive. London is having a full-fledged heatwave, the temperature is regularly hitting 30°, and sitting in a room surrounded by electrical equipment that basically sits there spitting out heat all day suddenly doesn't seem like such a good idea. When you realise that all that heat is basically wasted electricity - that I'm paying for, and then trying desperately to push out of my windows to cool the room down - a little geek spring-cleaning suddenly seems like a pretty good idea.

First off, last month I replaced the motley collection of switches, DSL routers and wireless access points - along with their accompanying individual power supplies - with a Belkin N1 Vision. This truly wonderful gizmo combines a DSL router, four Gigabit wired ethernet ports and draft 802.11n, in a really innovative package. The LCD readout on the front is frankly genius. Download monitor; wi-fi status readout; desk clock; DSL speed meter - it's intuitive, versatile and just very, very cool. And it works. I don't have to keep unplugging it to get wi-fi working again, like I did with the old one. The DSL automatically comes back on when power is restored (a major gripe I had with my old Speedtouch router). I love it. My computers love it. Maria's iPhone really loves it. And it's fast enough to watch movies over wi-fi from pretty much anywhere in the house.

The really neat part, though, is a Western Digital My Book World Edition II. It's basically two cheap SATA hard drives in a box, with a fan and a low-powered CPU running a cut-down version of Linux, and some dreadful but completely optional MioNet software. You might remember these as the subject of some truly awful press coverage last year. The truth is, the supplied (but optional) MioNet software won't share certain file types with anonymous users over a public-facing connection. The press picked this up as "network hard drive won't share files" - which is almost total bollocks - and ran with it.

But I digress. Despite the apparently slow network interface, it works - easy to set up, easy to get files onto it, easy to get them off again. The fun really starts, though, when you work out how to get console access to the onboard Linux OS. (Neat tip - once you've got SSH access, it's much quicker to transfer big files - music, video, etc. - by copying them onto an external USB drive, plugging it into the back of the MyBook, SSHing into the console and copying them onto the internal HD using the Linux 'cp' command, than to copy them over the network.) With a couple of evenings' tinkering and copious assistance from the wiki at mybookworld.wikidot.com, it's now acting as an iTunes media server (via MediaTomb) and print server. The kind of stuff I used to leave a "proper" PC running 24/7 to do. The print server, in particular, is very neat - any computer in the house, including wireless, printing to a completely normal USB Canon Pixma iP5300, and it's even smart enough to work with the printer's power-save features... send a print job, printer wakes up, prints it, and goes back to sleep with nary so much as a standby light.

Basically, the "infrastructure" - file server, wi-fi, DSL, printing support - that all the other computers rely on is now running on two small appliances and a printer that only switches itself on when it's needed. I'm going to get one of those electricity meter things to get some hard stats on how much power this lot actually draws, but in the meantime it's definitely cooler and quieter than a full-blown server and pile of networking gear - and I figure that has to be good for the electricity bill and the planet. Not to mention how much fun it is doing things that aren't supposed to be possible in the first place.

Wednesday, 28 May 2008

Strongly-Typed View References with ASP.NET MVC Preview 3

Two short methods that'll give you compile-time type checking for your ASP.NET MVC views:

/// <summary>Render the view whose implementation is defined by the supplied type.</summary>
protected ActionResult View(Type viewType, object viewData) {
    return(View(viewType.Name, viewData));
}

/// <summary>Render the view whose implementation is defined by the supplied type.</summary>
protected ActionResult View(Type viewType) {
    return(View(viewType.Name));
}

I've added these methods to a BaseController : Controller class, and my MVC controllers then inherit from my custom base class, but you could always add them via the extension-method syntax to the ordinary Controller supplied with ASP.NET MVC.
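For a bit of context, here's roughly how the pieces hang together - the MovieController and the Views.Movie.Info code-behind class below are illustrative names of my own, assuming the usual Preview 3 convention where each .aspx view gets a code-behind class under the Views namespace:

namespace MyMvcApp.Views.Movie {
    /// <summary>Code-behind for Views/Movie/Info.aspx - the type the controller refers to instead of the magic string "Info".</summary>
    public partial class Info : System.Web.Mvc.ViewPage { }
}

namespace MyMvcApp.Controllers {
    /// <summary>Inherits from the custom base class so it picks up the View(Type, ...) overloads above.</summary>
    public class MovieController : BaseController {
        // ...actions go here...
    }
}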

This means you can call your View() methods via a type reference that's checked by the compiler when you build, so instead of:

public ActionResult Info(int id) {
    Movie movie = dataContext.Movies.Single<Movie>(m => m.Id == id);
    return (View("Info", movie));
}

you can say

public ActionResult Info(int id) {
    Movie movie = dataContext.Movies.Single<Movie>(m => m.Id == id);
    return (View(typeof(Views.Movie.Info), movie));
}

and the reference to typeof(Views.Movie.Info) will be type-checked when you compile, so renaming or moving your view classes will cause a broken build until you fix the controllers that refer to them.

Friday, 16 May 2008

Let's Usurp "Web 3.0" and Do Something Useful With It...

Everyone's been talking about Web 2.0 for a while now, and I still get the feeling no-one really knows what it is. I think Stephen Fry's description of Web 2.0 as "genuine interactivity, if you like, simply because people can upload as well as download" comes close to my understanding of the phenomenon... but that's not really the point. The point is, "web two point oh" sounds cool. Tim O'Reilly probably knew this when he coined the phrase. People and companies want Web 2.0, despite the fact that they're not really sure what it is, because it sounds cool.

On the one hand, we have Web 2.0 mash-ups and tag clouds and Ajax and all that lovely interactive multimedia goodness. On the other hand, we have web standards. Standards are not as cool as Web 2.0. They sound a bit... boring, frankly (and the W3C spec documents really don't help with this. Informative, yes - but readable?) Many companies would rather spend their time and money on potential revenue sources than on the endless hours of testing and tweaking involved in getting semantically clean, standards-compliant pages that look good and work across all modern browsers... and as soon as they want something clever and interactive, they reach for Flash.

IE8 is coming, and will supposedly offer the standards support that we've all been waiting for. Joel Spolsky has written this post about the fact that there really isn't an acceptable compromise between standards compliance and backward compatibility. Either you follow the standards and break old sites, or you maintain bugwards compatibility at the expense of standards compliance.

When you say "IE8's default rendering view conforms to the W3C XHTML+CSS standards", people yawn. I mean, c'mon. Double-you-three-ex-aitch-cee-ess-what? 

So how about if we just take a reasonable baseline set of W3C guidelines - XHTML 1.1, CSS 2.1, XMLHttpRequest - and say that "Web 3.0" means full, complete support for those standards? It could be that simple. IE8 can be a Web 3.0 browser. Firefox 3 can be a Web 3.0 browser; Opera 10 can be a Web 3.0 browser (if Opera 9 isn't already, that is). Google SpacePirate or whatever they think of next will be a Web 3.0 application, which works as intended on any Web 3.0 browser. Technically, it's exactly the same as what's going on right now - but I wonder what'll happen if we slap a cool name on it and make standards sound like the Next Big Thing?

Sunday, 11 May 2008

The Future of Subversion?

Following a blog post by Ben Collins-Sussman, one of the original developers of Subversion, about the future of the open-source revision control system, Slashdot asks "is there still a need for centralized version control in some environments?"

Frankly - yes, there is. Just like we're always going to need one-bedroom apartments, even though most people end up getting a bigger place sooner or later. For some people, it's all they'll ever need - and for other people, it will always be an important stepping-stone on the path to bigger and better things.

I wrote my first program on an Amstrad 6128, many years ago. Committing a revision meant saving it onto a 3" floppy disk and putting it in a desk drawer. Tagging meant I'd write "dylan - hello.bas" on the label in pen instead of pencil, and then ask Dad for another disk. That approach - saving frequently and using offline copies for important milestones - got me through school, college, university and my first four years of gainful employment. These were all self-contained, short-term projects of one sort or another, and generally I'd be the only developer working on them. I didn't really get what revision control was, because I wasn't aware of having any problems to which it offered a better solution.

I should mention that, at university, they did teach us CVS briefly, as part of a ten-week course on "Software Tools and Techniques". Problem is, after that initial exposure to it, it played no part in any of the course material or assignments we did over the following two years - and we weren't really expected to use it except on the "learning CVS" assignment - so I think a lot of us, myself included, left with CVS filed alongside awk and sed as tools that were useful in certain circumstances but for which better alternatives existed for day-to-day development.

It wasn't until 2005, about twenty years after "hello.bas", that revision control actually became a day-to-day problem. One of my previous projects had turned into a full-time job, and we'd hired someone else - who also had no team development experience - to help out. At first, it was chaos. Our initial solution was using Beyond Compare (the nicest file comparison program I've used) to sync our local working copies against the live code. This lasted a couple of months, I think - until we hit a problem that meant rolling back the code to a specific point, and neither of us had an appropriate snapshot. Whilst Beyond Compare was great, and simple file-comparison syncing was easy to understand, we needed something better. I installed Subversion, imported our codebase, and we've never looked back.

This is what I think makes Subversion interesting - and what guarantees a growing user base for the foreseeable future. It's a significant milestone on the road to becoming a better developer. I'm sure there are people out there who learn this stuff as a junior developer on a 50-strong team, with a sysadmin managing their BitKeeper repository and a mentor patiently explaining how it all works. I don't think they're the people we need to worry about. It's the people who have already moved from simple copies to syncing tools, and are looking for the next step. When you start building a team around people who are used to working individually, revision control can get very complex, very fast, and I've found installing and running my own Subversion repository to be a great way to slowly get to grips with lots of the underlying concepts.

Sunday, 4 May 2008

I Want Tony Stark's Build Server (with possible "Iron Man" Spoilers...)

Personally, I think the ideal software development process boils down to three things:

1. Every decision is reflected in exactly one place in your project.

By this, I don't necessarily mean documented. Documentation is there to direct the people who are implementing the decisions, and to explain the business context and background information that's necessary to understand the actual code/schema/designs/whatever. I mean that, if a customer's name is limited to 32 characters, you type that number 32 exactly once in your entire project, and everything that needs to know how long a customer name can be refers to that one place where you originally recorded it.

2. Your tools allow each decision to be expressed clearly, succinctly and quickly.

Most of the worthwhile progress I've seen with software development is about letting the developer express their intent quickly and without ambiguity. String code in C is basically arithmetic manipulation of arrays of numbers. String code in .NET or Ruby is a whole lot friendlier; there's a compile and runtime overhead, but Moore's Law is on our side and, with a few exceptions, I think speed of development and ease of maintenance are becoming more important than speed of execution for most software these days. 

3. Everything else is generated automatically.

If I change my mind about something, I don't want to have to tell a whole bunch of classes that I've changed my mind. I don't believe it's possible to build software without making some assumptions - even if I have a signed-off set of requirements, I'm still assuming that the person who signed the requirements understood what the client actually wants - and when these assumptions turn out to be wrong, I want to be able to correct them quickly and painlessly.

I have a homebrewed code-generation system based on CodeSmith that will generate a pretty comprehensive set of domain model objects and supporting DAL and stored procedures based on a SQL Server database. If we decide that a customer's name is compulsory and is limited to 32 characters, I change the Name column in the Customer table in our DB to a varchar(32) NOT NULL, and re-generate the code. 30 seconds later, my Customer class includes validation rules that check that Customer.Name is not null, not empty, and no greater than 32 characters - and throw a descriptive exception "Sorry, Customer.Name is limited to 32 characters" if you exceed the limit. The generated objects implement the IDataErrorInfo interface for automatic validation of data-bound WinForms apps, and use a variation on the MVP pattern that means that for each business object, we also generate an interface defining that object's editable fields - so you make your CustomerEditForm.ascx.cs code-behind class implement ICustomerView, and you can validate by calling Customer.ValidateView(this, "Name") from your user controls and get a nice (and completely auto-generated) error message if there's anything wrong with the name you've just entered.
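To make that a bit more concrete, here's a rough sketch of the kind of thing the generator ends up emitting for that Customer.Name rule - the real generated classes do a lot more than this, and the member layout here is illustrative rather than a faithful copy of the generated code:

using System.ComponentModel;

/// <summary>Illustrative sketch of a generated business object for the Customer table.</summary>
public partial class Customer : IDataErrorInfo {

    public string Name { get; set; }

    /// <summary>IDataErrorInfo indexer - data-bound WinForms controls use this to surface validation messages.</summary>
    public string this[string columnName] {
        get {
            if (columnName == "Name") {
                if (string.IsNullOrEmpty(Name))
                    return ("Sorry, Customer.Name is required");
                if (Name.Length > 32)
                    return ("Sorry, Customer.Name is limited to 32 characters");
            }
            return (null);
        }
    }

    /// <summary>Summary error for the whole object - here it just delegates to the Name rule.</summary>
    public string Error {
        get { return (this["Name"]); }
    }
}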

That example is from Real Life. That's why it's a bit... technical.

[Potential spoilers after the photo... careful now.]


Tony Stark working on his Iron Man suit (Copyright © Marvel / Paramount)


Driving home from watching Iron Man tonight, it occurred to me... in the movie, they do basically the same thing, but they do it with style. There's a scene about halfway through Iron Man where Tony Stark, our intrepid millionaire-playboy-genius-weapons-designer-turned-superhero, is putting the finishing touches on his Iron Man suit in his Malibu beachfront workshop. (I told you fiction made it look cool.) His computer system - known as 'Jarvis', apparently - brings up a 3D visualisation of his latest design; Tony casually asks Jarvis to "throw a little hot-rod red in there", and then goes off to drink scotch and dance with Gwyneth Paltrow while the system does the actual suit fabrication.

Ok, so I'm assuming there's some A.I. involved and that a certain visual style is implied by the phrase "hot-rod red" - but that's just about configuring your tools to suit your preference. Otherwise, it's just a really powerful configuration and build server... you make a decision, you record it once, and the system does the rest while you go dancing. Oh, and there's also the fact that it makes experimental rocket-powered bulletproof flying superhero suits instead of database-driven websites... but we can work on the details later, right?

Anyway. Point is - next time I have to explain code generation and continuous integration to a non-developer, I'll start by asking if they saw Iron Man, and we'll take it from there.