Wednesday, 28 May 2008

Strongly-Typed View References with ASP.NET MVC Preview 3

Two short methods that'll give you compile-time type checking for your ASP.NET MVC views:

/// <summary>Render the view whose implementation is defined by the supplied type.</summary>
protected ActionResult View(Type viewType, object viewData) {
    return (View(viewType.Name, viewData));
}

/// <summary>Render the view whose implementation is defined by the supplied type.</summary>
protected ActionResult View(Type viewType) {
    return (View(viewType.Name));
}

I've added these methods to a BaseController : Controller class, and my MVC controllers then inherit from my custom base class. (Extension methods won't do the job here, by the way - the View() overloads these wrappers call are protected on the ordinary Controller supplied with ASP.NET MVC, so they're not visible from extension methods.)
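Here's a minimal sketch of how the pieces fit together - MovieController is just an illustrative name:

// A minimal sketch: the two View(Type, ...) overloads above live in a
// shared base class, and controllers inherit from it instead of
// inheriting from Controller directly.
public class BaseController : Controller {
    // ...the two View(Type, ...) overloads from above go here...
}

public class MovieController : BaseController {
    // actions here can call View(typeof(Views.Movie.Info), ...) - see below
}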

This means you can call your View() methods via a type reference that's checked by the compiler when you build, so instead of:

public ActionResult Info(int id) {
    Movie movie = dataContext.Movies.Single<Movie>(m => m.Id == id);
    return (View("Info", movie));
}

you can say

public ActionResult Info(int id) {
    Movie movie = dataContext.Movies.Single<Movie>(m => m.Id == id);
    return (View(typeof(Views.Movie.Info), movie));
}

and the reference to typeof(Views.Movie.Info) will be type-checked when you compile, so renaming or moving your view classes will cause a broken build until you fix the controllers that refer to them.

Friday, 16 May 2008

Let's Usurp "Web 3.0" and Do Something Useful With It...

Everyone's been talking about Web 2.0 for a while now, and I still get the feeling no-one really knows what it is. I think Stephen Fry's description of Web 2.0 as "genuine interactivity, if you like, simply because people can upload as well as download" comes close to my understanding of the phenomenon... but that's not really the point. The point is, "web two point oh" sounds cool. Tim O'Reilly probably knew this when he coined the phrase. People and companies want Web 2.0, despite the fact that they're not really sure what it is, because it sounds cool.

On the one hand, we have web 2.0 mash-ups and tag clouds and Ajax and all that lovely interactive multimedia goodness. On the other hand, we have web standards. Standards are not as cool as Web 2.0. They sound a bit... boring, frankly (and the W3C spec documents really don't help with this. Informative, yes - but readable?) Many companies would rather spend their time and money on potential revenue sources than on the endless hours of testing and tweaking it takes to produce semantically clean, standards-compliant pages that look good and work across all modern browsers... and as soon as they want something clever and interactive, they reach for Flash.

IE8 is coming, and will supposedly offer the standards support that we've all been waiting for. Joel Spolsky has written this post about the fact that there really isn't an acceptable compromise between standards compliance and backward compatibility. Either you follow the standards and break old sites, or you maintain bugwards compatibility at the expense of standards compliance.

When you say "IE8's default rendering view conforms to the W3C XHTML+CSS standards", people yawn. I mean, c'mon. Double-you-three-ex-aitch-cee-ess-what? 

So how about if we just take a reasonable baseline set of W3C guidelines - XHTML 1.1, CSS 2.1, XMLHttpRequest - and say that "Web 3.0" means full, complete support for those standards? It could be that simple. IE8 can be a Web 3.0 browser. Firefox 3 can be a Web 3.0 browser; Opera 10 can be a Web 3.0 browser (if Opera 9 isn't already, that is). Google SpacePirate or whatever they think of next will be a Web 3.0 application, which works as intended on any Web 3.0 browser. Technically, it's exactly the same as what's going on right now - but I wonder what'll happen if we slap a cool name on it and make standards sound like the Next Big Thing?

Sunday, 11 May 2008

The Future of Subversion?

Following a blog post by Ben Collins-Sussman, one of the original developers of Subversion, about the future of the open-source revision control system, Slashdot asks "is there still a need for centralized version control in some environments?"

Frankly - yes, there is. Just like we're always going to need one-bedroom apartments, even though most people end up getting a bigger place sooner or later. For some people, it's all they'll ever need - and for other people, it will always be an important stepping-stone on the path to bigger and better things.

I wrote my first program on an Amstrad 6128, many years ago. Committing a revision meant saving it onto a 3" floppy disk and putting it in a desk drawer. Tagging meant I'd write "dylan - hello.bas" on the label in pen instead of pencil, and then ask Dad for another disk. That approach - saving frequently and using offline copies for important milestones - got me through school, college, university and my first four years of gainful employment. These were all self-contained, short-term projects of one sort or another, and generally I'd be the only developer working on them. I didn't really get what revision control was, because I wasn't aware of having any problems to which it offered a better solution.

I should mention that, at university, they did teach us CVS briefly, as part of a ten-week course on "Software Tools and Techniques". Problem is, after that initial exposure, CVS played no part in any of the course material or assignments we did over the following two years - and we weren't really expected to use it except on the "learning CVS" assignment - so I think a lot of us, myself included, left with CVS filed alongside awk and sed: tools that were useful in certain circumstances, but for which better alternatives existed for day-to-day development.

It wasn't until 2005, about twenty years after "hello.bas", that revision control actually became a day-to-day problem. One of my previous projects had turned into a full-time job, and we'd hired someone else - who also had no team development experience - to help out. At first, it was chaos. Our initial solution was to use Beyond Compare (the nicest file comparison program I've used) to sync our local working copies against the live code. This lasted a couple of months, I think - until we hit a problem that meant rolling back the code to a specific point, and neither of us had an appropriate snapshot. Whilst Beyond Compare was great, and simple file-comparison syncing was easy to understand, we needed something better. I installed Subversion, imported our codebase, and we've never looked back.

This is what I think makes Subversion interesting - and what guarantees a growing user base for the foreseeable future. It's a significant milestone on the road to becoming a better developer. I'm sure there are people out there who learn this stuff as a junior developer on a 50-strong team, with a sysadmin managing their BitKeeper repository and a mentor patiently explaining how it all works. I don't think they're the people we need to worry about. It's the people who have already moved from simple copies to syncing tools, and are looking for the next step. When you start building a team around people who are used to working individually, revision control can get very complex, very fast, and I've found installing and running my own Subversion repository to be a great way to slowly get to grips with lots of the underlying concepts.

Sunday, 4 May 2008

I Want Tony Stark's Build Server (with possible "Iron Man" Spoilers...)

Personally, I think the ideal software development process boils down to three things:

1. Every decision is reflected in exactly one place in your project.

By this, I don't necessarily mean documented. Documentation is there to direct the people who are implementing the decisions, and to explain the business context and background information that's necessary to understand the actual code/schema/designs/whatever. I mean that, if a customer's name is limited to 32 characters, you type that number 32 exactly once in your entire project, and everything that needs to know how long a customer name can be refers to that one place where you originally recorded it.

2. Your tools allow each decision to be expressed clearly, succinctly and quickly.

Most of the worthwhile progress I've seen with software development is about letting the developer express their intent quickly and without ambiguity. String code in C is basically arithmetic manipulation of arrays of numbers. String code in .NET or Ruby is a whole lot friendlier; there's a compile and runtime overhead, but Moore's Law is on our side and, with a few exceptions, I think speed of development and ease of maintenance are becoming more important than speed of execution for most software these days. 

3. Everything else is generated automatically.

If I change my mind about something, I don't want to have to tell a whole bunch of classes that I've changed my mind. I don't believe it's possible to build software without making some assumptions - even if I have a signed-off set of requirements, I'm still assuming that the person who signed the requirements understood what the client actually wants - and when these assumptions turn out to be wrong, I want to be able to correct them quickly and painlessly.

I have a homebrewed code-generation system based on CodeSmith that will generate a pretty comprehensive set of domain model objects and supporting DAL and stored procedures based on a SQL Server database. If we decide that a customer's name is compulsory and is limited to 32 characters, I change the Name column in the Customer table in our DB to a varchar(32) NOT NULL, and re-generate the code. 30 seconds later, my Customer class includes validation rules that check that Customer.Name is not null, not empty, and no greater than 32 characters - and throw a descriptive exception "Sorry, Customer.Name is limited to 32 characters" if you exceed the limit.

The generated objects implement the IDataErrorInfo interface for automatic validation of data-bound WinForms apps, and use a variation on the MVP pattern that means that for each business object, we also generate an interface defining that object's editable fields - so you make your CustomerEditForm.ascx.cs code-behind class implement ICustomerView, and you can validate by calling Customer.ValidateView(this, "Name") from your user controls and get a nice (and completely auto-generated) error message if there's anything wrong with the name you've just entered.
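To make that a bit more concrete, here's a rough, hand-written sketch of the kind of validation the generator produces for that Name column - it's not the actual generated output, and the IDataErrorInfo / ICustomerView plumbing is left out:

// Rough sketch of generator output for the Customer.Name example above.
// The real generated class also implements IDataErrorInfo and the
// ICustomerView interface described in the text.
public partial class Customer {

    // Derived from the varchar(32) NOT NULL column definition - the one
    // place in the generated code where the limit appears.
    public const int NameMaxLength = 32;

    private string name;
    public string Name {
        get { return name; }
        set {
            if (String.IsNullOrEmpty(value)) {
                throw new ArgumentException("Sorry, Customer.Name is required.");
            }
            if (value.Length > NameMaxLength) {
                throw new ArgumentException(String.Format(
                    "Sorry, Customer.Name is limited to {0} characters", NameMaxLength));
            }
            name = value;
        }
    }
}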

That example is from Real Life. That's why it's a bit... technical.

[Potential spoilers after the photo... careful now.]

Tony Stark working on his Iron Man suit (Copyright © Marvel / Paramount)

Driving home from watching Iron Man tonight, it occurred to me... in the movie, they do basically the same thing, but they do it with style. There's a scene about halfway through Iron Man where Tony Stark, our intrepid millionaire-playboy-genius-weapons-designer-turned-superhero, is putting the finishing touches on his Iron Man suit in his Malibu beachfront workshop. (I told you fiction made it look cool.) His computer system - known as 'Jarvis', apparently - brings up a 3D visualisation of his latest design; Tony casually asks Jarvis to "throw a little hot-rod red in there", and then goes off to drink scotch and dance with Gwyneth Paltrow while the system does the actual suit fabrication.

Ok, so I'm assuming there's some A.I. involved and that a certain visual style is implied by the phrase "hot-rod red" - but that's just about configuring your tools to suit your preference. Otherwise, it's just a really powerful configuration and build server... you make a decision, you record it once, and the system does the rest while you go dancing. Oh, and there's also the fact that it makes experimental rocket-powered bulletproof flying superhero suits instead of database-driven websites... but we can work on the details later, right?

Anyway. Point is - next time I have to explain code generation and continuous integration to a non-developer, I'll start by asking if they saw Iron Man, and we'll take it from there.

Friday, 2 May 2008

Subversion + WebDAV + Wiki = Cool. Fact.

Following on from yesterday's post about Confluence, I've been thinking about how to get the same sort of workflow going for images, screenshots and designs - basically, stuff that isn't text.

A lot of our projects start life as Visio diagrams. A couple of us use Visio to actually put them together; everyone else has the free Visio viewer plug-in for Internet Explorer which means they can browse and view our work but can't edit it. Which is probably a good thing. Likewise, most of our designs start life as PSDs, and we always seem to end up with a couple of random spreadsheets full of data that isn't clean enough to import into the actual project DB yet but which is too important to ignore.

Erzs├ębet Bridge, Budapest. There's this principle in software engineering - Don't Repeat Yourself (DRY) - which basically means if you do something, do it exactly once, so that if it changes you only need to modify one piece of code. I think it's one of the absolute core principles of writing maintainable software, but I also believe the same principle is equally applicable to documentation, configuration - in fact, anything that involves manual intervention when changes are required. If I create a class diagram, I want to maintain a single master copy of that class diagram and, in a perfect world, any documents or pages that refer to it should be smart enough to always refer to that master copy.

We use Subversion as our version control system, but the principle of update-merge-edit-commit applies pretty well to almost any resource in a collaborative development process - there are even folks out there who keep their entire life in Subversion. So... assuming we're keeping all our diagrams, designs and documents in Subversion, how can I link a page in our wiki directly to the latest revision of a document inside the repository?

Turns out it's pretty straightforward, mainly because other people have already built all the hard bits.

The key to all this is an Apache module, mod_dav_svn, that's part of Subversion and exposes your Subversion repository via WebDAV.  Since WebDAV is an extension of HTTP/1.1, that means you can use pretty much any web browser to navigate and view documents in the repository.

We already had Subversion up and running, so to add Apache and the WebDAV module I just followed the Apache install instructions in the Subversion book.
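For reference, the relevant chunk of httpd.conf ends up looking something like this minimal sketch - it assumes your repositories live under C:\svn, and a real setup would add authentication on top:

# Load the DAV modules (mod_dav_svn.so ships with the Subversion Windows
# binaries and gets copied into Apache's modules folder).
LoadModule dav_module     modules/mod_dav.so
LoadModule dav_svn_module modules/mod_dav_svn.so

# Expose every repository under C:\svn at http://my.subversion.server/svn/<name>
<Location /svn>
    DAV svn
    SVNParentPath C:/svn
</Location>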

One thing to note - Subversion on Windows will not work with Apache 2.2.x; the mod_dav_svn module isn't compatible with this version and will give you warning messages about "mod_dav_svn.so is garbled". You'll need to install the latest version of the 2.0 series (2.0.63 in my case), and you'll also ideally want to disable IIS or any other web server on the box that's hosting your repository so that Apache can use port 80 on the server's primary IP.

Once it's all up and running, next time you're editing a Wiki page and think "hey, I really want to link to the Visio user experience workflow diagram here" - easy. Just point the link at the full URL of that diagram inside your WebDAV repository - something like

http://my.subversion.server/svn/myproject/docs/workflow/User+Experience.vsd

I think the real advantage of this approach is that you're not constantly exporting Visio docs as JPEGs and uploading them to the wiki - that's high-maintenance, and there's never any guarantee that the JPEGs in the wiki actually reflect the latest state of the original diagrams.

With this approach, you just run svn update locally to get the latest set of documents, make your changes, and run svn commit - just like checking in regular code. Any links pointing to your document via the WebDAV repository will always point to the latest version - and you never have to worry about maintaining multiple copies.
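In other words, the day-to-day cycle for a diagram is identical to the cycle for code - something like this (the commit message is just illustrative):

svn update
(edit User Experience.vsd in Visio and save it)
svn commit -m "Updated the user experience workflow diagram"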

Confluence, the wiki engine we're using right now, should theoretically let you embed images that are hosted under WebDAV - so your tags end up something like

<img src="http://my.subversion.server/svn/myproject/docs/homepage01.png" alt="Homepage design (01)" />

as well as just creating hyperlinks to documents - but Confluence doesn't seem to like that right now.

UPDATE: It's possible to work around this bug by inserting the image URL directly into the Wiki markup surrounded by exclamation marks - !http://www.myserver.com/image.jpg! will display image.jpg from www.myserver.com as a normal IMG tag. Not quite as streamlined as drag'n'drop but it does work.

Spring blossom in Budapest

As an aside - right now we've got a build server running on IIS, a wiki engine running J2EE on Tomcat, and Subversion hosted from Apache, but the resulting environment actually feels more cohesive and integrated than most of the IIS-only setups I've played with over the last year or so. Personally, I think this is because the quality of the apps themselves - Apache, Subversion, Confluence, FinalBuilder, CruiseControl.NET - is so high that the underlying platform is basically irrelevant in day-to-day use. What's been really apparent getting this configuration up and running, though, is that the package authors (or vendors - never quite sure what you call the people you get free stuff from...) have put enough effort into the installers that the platform doesn't really matter at installation time either. Sure, it means we're running three different web servers right now - but they seem to be working, and maybe we should be thinking in terms of eggs & baskets instead.

Thursday, 1 May 2008

Getting Started with Confluence

In principle, I think wikis are great. I've worked on too many projects that have ended up with fifteen different copies of the "definitive" functional spec - all called something like Copy (2) of Rhubarb - Functional Spec - FINAL (3).doc - and no-one's really sure what's going on any more. The idea of a centralised documentation repository that everyone can read and anyone can edit, with a full history of who changed what, is appealing to say the least.

In practice - software specs are weird, awkward documents. Some concepts and ideas are best described as English prose. Some things are best described using screenshots, sketches, flow charts, sequence diagrams, class diagrams. Some things are best included by just pasting the code - 'cos you've already written it, and at the end of the day C# is just a really, really accurate description of an algorithmic solution to a particular problem.

I've played with MediaWiki, but found the installation process a bit daunting - we're a Microsoft shop; we have Windows, SQL Server and the like already up and running, and no real LAMP expertise to speak of...

I've set up Sharpforge, I've used the wiki engine built in to FogBugz. I've recently been playing with PerspectiveWiki (and I have to say, the v3 alpha is very, very nice indeed - but covered in warnings about how it's early alpha and you shouldn't use it for real).

One notable platform I haven't played with is Sharepoint... call me cynical, but when it's a Microsoft business platform that you can attend a five-day training course on, it's probably not quick & easy. We're not that big a team, we're all in one building, there's relatively little collaborative authoring going on, and Sharepoint just feels like total overkill for what we need.

So... this morning, I googled "project documentation wiki" or some such phrase, and stumbled across a commercial wiki engine called Confluence. Mike Coon wrote a post a while back called "It's the Installation, Stupid" - basically saying that nobody wants to install and configure a whole application stack just to try out a new package. He's absolutely right. Confluence uses J2EE, Tomcat, Apache and a whole raft of stuff I really don't want to install - but I didn't have to. I just ran setup.exe. Despite the fact that it's based on J2EE - a platform I've never even touched - I had an evaluation up and running on one of our dev servers within half an hour. So far, I'm very impressed with it. The WYSIWYG editing is slick and intuitive, the admin and configuration is excellent, and if all goes well I'll be putting my money where my mouth is before the month is up :)

A couple of hours later, and it was up and running against our existing MS SQL Server database, authenticating via LDAP against our Active Directory server, and generally making the world a better place. One of these days I really have to get my head around LDAP a little better... when it works, it's absolutely wonderful ("no, no new password, just log in with your normal one") but my LDAP filters are too close to cargo-cult programming for comfort right now.

One major gotcha - which was almost a showstopper until I found a way around it - was getting Confluence to run on the default port 80 on the same server as our existing IIS intranet site. Here's what I did, and so far it's working.

Confluence coexisting with IIS on a single server

Microsoft IIS, by default, grabs port 80 of every IP address on your server - and there's no way to switch this off using built-in Windows admin capabilities. This means if you want another web server running on the same box, you need to do a bit of tweaking to both IIS and the other server to make them coexist happily.

Confluence installs by default on port 8080, but for various reasons (mainly that I just don't like it), I wanted to get it running on port 80 - alongside the existing IIS sites.

There are extensive docs at the Confluence site about installing an ISAPI redirect so you can have www.mysite.com hosted by IIS and www.mysite.com/wiki invisibly proxy all requests to a JIRA or Confluence server - but that's not really what I was after.

First off, I bound a second IP address to the server's network adapter via Windows' TCP/IP controls, so the server was now answering on both 192.168.0.78 and 192.168.0.79.

Next thing to do was to set up a DNS record on our local server (you could also spoof this using your /etc/hosts file), so that http://wiki/ resolved to 192.168.0.79; all our existing intranet addresses still resolve to the original 192.168.0.78 address.
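(The hosts-file version of that is a single line - on Windows the file lives at C:\WINDOWS\system32\drivers\etc\hosts:)

192.168.0.79    wiki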

Now for the fun part, which is lovingly documented in Microsoft Knowledge Base Article 813368. Basically, you download the Windows Server 2003 Support Tools and use the included httpcfg.exe to tell HTTP.SYS - and therefore IIS - to listen only on specified IP addresses. This done, IIS was still running fine on 192.168.0.78, while port 80 on 192.168.0.79 was left free for Confluence.
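The commands themselves are short - something like this, where 192.168.0.78 is the address IIS should keep (check the KB article for the details of your own setup):

rem Tell HTTP.SYS to listen only on the first address:
httpcfg set iplisten -i 192.168.0.78

rem Check the listen list, then restart the HTTP service so it takes effect:
httpcfg query iplisten
net stop http /y
net start w3svc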

Next step was to modify the server.xml file that's shipped with Confluence. I have to admit this part was pretty much guesswork, but it made sense and it seems to have worked, so use it at your own risk.

By default, this file is installed at C:\Program Files\Atlassian Confluence\Application\conf\server.xml, and the clue is the <Connector /> section which refers to port 8080. By default, Confluence installs at http://localhost:8080/, so on a hunch I tried switching port="8080" to port="80" in this node, restarting Confluence, and as if by magic, everything started working.

<Server port="8005" shutdown="SHUTDOWN" debug="0">
    <Service name="Tomcat-Standalone">
        <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
                   port="80"
                   minProcessors="5"
                   maxProcessors="75"
                   enableLookups="false"
                   redirectPort="8444"
                   acceptCount="10"
                   debug="0"
                   connectionTimeout="20000"
                   useURIValidationHack="false"
                   URIEncoding="UTF-8" />
        <!-- ...rest of the file unchanged... -->
    </Service>
</Server>

Firing Static Events from Instance Methods in C#

In a current project, I have a UserDocument class, which inherits from EditableBase<T>, and stores metadata about the contents of a file that's stored somewhere on my app server. What I needed to do was to add an event to the business object that would fire just after deleting the UserDocument's record, so that I could delete the corresponding file from the server's filesystem. The solution is almost what I expected, with one slightly odd workaround.

OK, the original code looked (very roughly) like this. We're using generics throughout so T denotes "whatever kind of business object this is" - Customer, Invoice, whatever.

// Our event handler methods have to match the following delegate
public delegate void DataEventHandler<T>(T t);

// The base class for our editable business objects.
public class EditableBase<T> {

    // The event we want to fire immediately before deleting a database record.
    public static event DataEventHandler<T> DeletingRecord;

    // The event we want to fire immediately after deleting a database record.
    public static event DataEventHandler<T> DeletedRecord;
}

// A class representing a document or file that's been uploaded to our
// application by a user.
public class UserDocument : EditableBase<UserDocument> {

    private string filename;
    public string Filename {
        get { return filename; }
        set { filename = value; }
    }

    public void Delete() {
        // this is the call to the DAL to actually remove the record
        // from the UserDocument table.
        DataContext.Current.DeleteUserDocument(filename);
    }
}

First approach - let's just fire the event in the usual way:

public void Delete() {
    // this is the call to the DAL to actually remove the record 
    // from the UserDocument table.
    DataContext.Current.DeleteUserDocument(filename);
    if (DeletedRecord != null) DeletedRecord(this);
}

Erk. Compiler doesn't like that...

The event 'EditableBase<UserDocument>.DeletedRecord' can only appear on the left hand side of += or -= (except when used from within the type 'EditableBase<T>')

It would appear that, outside the type that declares it, an event can only appear on the left-hand side of += or -= - which is exactly what the error message says. A derived class like UserDocument doesn't count as being "within" EditableBase<T>, even though it inherits the event. The workaround is pretty straightforward. First, we provide a static wrapper method for each of our EditableBase<T> events:

// A wrapper method that can 'see' the static event, but can be called
// from instance methods of derived classes.
public static void NotifyDeletedRecord(T t) {
    if (DeletedRecord != null) DeletedRecord(t);
}

What's cool about this method is that it acts as an adapter: because it's declared inside EditableBase<T>, it's allowed to raise the event, and because it's public and static, any derived class's instance methods can call it. Our instance method can now quite happily do this:

public void Delete() {
    // this is the call to the DAL to actually remove the record
    // from the UserDocument table.
    DataContext.Current.DeleteUserDocument(filename);
    NotifyDeletedRecord(this);
}

and then the NotifyDeletedRecord method will raise the static event, and any handlers attached to it will fire as expected.

What makes this particular example useful is what it enables - this code is from global.asax, and shows how we can bind a very simple event handler to the static event on the UserDocument class to clean up the underlying files whenever a record is removed:

using System;
using System.IO;
using System.Web;

namespace MyProject.Website {
    public class Global : System.Web.HttpApplication {

        protected void Application_Start(object sender, EventArgs e) {
            // Attach an event handler that cleans up files on disk
            // after DB records are deleted.
            UserDocument.DeletedRecord +=
                new DataEventHandler<UserDocument>(DeleteUserDocumentFile);
        }

        // This method removes the file associated with the specified UserDocument
        // from the application server's filesystem.
        void DeleteUserDocumentFile(UserDocument t) {
            string userPath = HttpContext.Current.Server.MapPath("~/UserDocuments/");
            string filePath = Path.Combine(userPath, t.Filename);
            if (File.Exists(filePath)) {
                try {
                    File.Delete(filePath);
                } catch (Exception) {
                    // Something went wrong - maybe log the error, or add
                    // to a queue of files to be manually cleaned up later?
                    // In the meantime, rethrow without losing the stack trace.
                    throw;
                }
            }
        }
    }
}

But why not just put the File.Delete() inside the UserDocument class?

Because, depending on the context in which our business objects are running, we could be deleting from the local filesystem (via Server.MapPath(), because we're a website), via a WebDAV call to a remote file server, or by calling some web service that deletes the remote file for us. Separating the requirement from the implementation in this way means our business objects announce deletions consistently, while the application itself decides how the clean-up actually happens.

I think it's quite a nice approach - we're now using it with NotifyCreating, NotifyInserting / NotifyInserted, and all sorts of other hooks around CRUD data access methods, and it seems to be working really rather nicely.
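As an example of those hooks in action, Delete() ends up bracketing the DAL call with a before/after pair - NotifyDeletingRecord here is hypothetical, but it's identical in shape to the NotifyDeletedRecord wrapper above, raising the DeletingRecord event declared earlier:

public void Delete() {
    // Let handlers react before the record disappears...
    NotifyDeletingRecord(this);
    // ...remove the record itself...
    DataContext.Current.DeleteUserDocument(filename);
    // ...and let handlers (like the file clean-up above) run afterwards.
    NotifyDeletedRecord(this);
}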