Wednesday, 29 December 2010

Christmas, Crisis and the Cloud

This Christmas, I volunteered at Crisis, the UK charity that provides food, shelter and help for homeless people and rough sleepers over Christmas. It’s the first time I’d done anything like this, and – amongst stints of washing-up, manning doors, looking after showers, mopping and playing Articulate – I spent quite a lot of time helping with the suite of PCs that were provided for our guests to use.

It was a profound and enlightening experience. I use Gmail and Google Docs because they’re convenient – I regularly use 3-4 different PCs and online services are just a convenient way of keeping things accessible and in sync. It never really occurred to me that for someone who doesn’t have a computer of their own, or any regular access to one via a university or workplace, something as simple as a webmail account can be the difference between staying in touch and disappearing completely. People with no regular income, no phone and no fixed address were using Yahoo Mail to keep in touch with family and friends, apply for jobs and look for accommodation. It was quite amazing. I don’t know whether the folks at Yahoo, Gmail, Hotmail et al realize just what a profound difference this makes – but if any of them are reading, well done. You rock, and I never realized quite how much until this week.

Another interesting observation – Microsoft’s market saturation is so complete that even people who store their worldly possessions in a plastic shopping-bag insist on Microsoft Word. Their CVs are in Word format, they know the Word menus and commands – many of them are remarkably accomplished Word users. This is sad, because Word means Windows, and – even with the awesome job the people at the Aimar Foundation had done setting up a locked-down Citrix system -  some machines became infected with a malware product called “Security Shield”. Security Shield masquerades as an anti-virus package,  makes a lot of bad noise about (non-existent) viruses and then asks the user for a credit card number to “activate the virus protection”. Well, Security Shield, this Christmas you actually asked the homeless for money. Well done. You scared the hell out of frightened, vulnerable people who thought they’d broken the computer. You scared and upset the volunteers who were giving up their Christmas holiday to help those same people. I really hope you take all the money you’ve made from your little scam and spend it on something that’ll bring you joy in this life, because karma’s a bitch and your next life is going to suck.

Anyway. Karmic retribution aside, it really got me thinking. For every iPad-toting hipster raving about how they keep all their stuff “in the cloud” because it’s the Next Big Thing, there’s a rough sleeper out there who genuinely doesn’t have anywhere else to keep it. There’s web mail, web docs, online tools for retouching photos, online games (spidersolitaire.com proved very popular with some of our guests), online video – you have no idea how cool YouTube is to someone who doesn’t own a PC or a television - and I started wondering what else technology could offer.

What about a complete online tool set and sandbox for people looking to learn about software development? Tutorials, exercises, a lab, workshop, CV and portfolio, accessible anytime, anywhere, for the cost of a virtual machine and a few gigs of disk space (i.e. practically nothing). A full development environment where they can learn to code, store projects, compile, test and deploy applications and websites – but accessible entirely within the sort of stripped-down web browser you'd find in most internet cafés. There's no material reason why someone couldn't learn to build software, establish an online presence, contribute to open-source projects, develop a reputation on Q&A sites like StackOverflow, and generally become a demonstrably employable developer, entirely without any investment in physical resources. The days of needing expensive computer time or equipment to become a coder are gone. The financial barrier to entry in our industry is effectively zero. We should be shouting this from the rooftops, doing everything possible to put information and resources in the hands of anyone who cares to take advantage of it, and celebrating this amazing consequence of the openness that's sustained our industry for so long.

Anyway. Crisis was awesome, you should all do it next year, and here’s to a 2011 filled with compassion, kindness, enthusiasm and excitement.

Happy New Year.

Sunday, 28 November 2010

Git Logos and Icons

I’ve recently started working on a couple of projects that are hosted on GitHub – most notably the Kurejito payment gateway project, where for the first time I’m a member of the core team on an OS project (yeah, scary…)

Git is pretty cool. There’s a bit of a learning curve, but I suspect that once I get the hang of frequent commits and local branching, and work out how to stop shooting myself in the foot, it’s going to be quite hard going back to Subversion.

Thing is, I really don’t like the Git icon and logo that are shipping with Mingw32 Git for Windows. I mean, I really don’t like them. I don’t like them so much that every time I fire up git bash, I get completely distracted by how much I don’t like them and end up wanting to redesign them instead of doing whatever it was I was supposed to be doing… so I did.

This started out as me just wanting a decent Windows icon for the link to Git Bash in my Start menu, but I got a bit carried away and ended up with this. And since Git appears to use a collaboratively-edited Wiki page as the closest thing it has to brand and logo guidelines, if you want to use any of these logos or icons for your Git-related shortcuts, projects or pages, go ahead. I’m not going to stop you, and I suspect they won’t, either.

If you’re interested, they’re released under a Creative Commons Attribution-ShareAlike license – and there are downloads (including Windows and Mac icon formats) and original artwork files at http://www.dylanbeattie.net/git_logo/

Direct download links for the individual files are on that page – or grab the whole package as git_icon_and_logo.zip

Friday, 26 November 2010

Sending Templated E-mail using the Spark View Engine

We have a couple of systems that send personalized e-mail notifications to our users, and for a while I’ve been looking for a nice way to use a proper MVC-style view engine to populate the templates for personalizing these e-mails. The problem is, most of the ASP.NET view engines are so tightly bound to System.Web and things like HttpContext.Current and the VirtualPathProvider that running them in a standalone console application is really quite unpleasant.

Well, with a bit of hacking around and some help from @RobertTheGrey, I’ve finally got the awesome Spark view engine running within a console app. No fake VirtualPathProviders, no mocking or spoofing HttpContext.Current – it just works.

The secret is this little snippet of code here:

var templateFolder = @"D:\Templates\";
var viewFolderParameters = new Dictionary<string, string> {
    {"basePath", templateFolder}
};
var settings = new SparkSettings();
settings.SetPageBaseType(typeof(TemplateBase));
settings.AddViewFolder(ViewFolderType.FileSystem, viewFolderParameters);
engine = new SparkViewEngine(settings);

which spins up a fresh SparkSettings configuration object, tells it to use your own TemplateBase class and the templates folder you’ve specified. The method that actually does the population looks like this:

public string Populate(string templateFileName, object data) { 
  var writer = new StringWriter(); 
  var descriptor = new SparkViewDescriptor(); 
  descriptor.AddTemplate(templateFileName); 
  var view = (TemplateBase)engine.CreateInstance(descriptor); 
  try { 
    view.ViewData = new ViewDataDictionary(data); 
    view.RenderView(writer); 
  } finally { 
    engine.ReleaseInstance(view); 
  } 
  return (writer.ToString()); 
}

so you end up with little snippets like this:

foreach(var user in userRepository.RetrieveUsersWhoShouldGetWelcomeEmails()) {
    var htmlBody = templater.Populate("welcome_html.spark", user);
    var textBody = templater.Populate("welcome_text.spark", user);
    mailServer.SendMail(
        "[email protected]", user.Email, "Welcome!", textBody, htmlBody);
}

There’s a full working example in the Spark repository on GitHub if you’re interested.

Spark photo © SCholewiak via Flickr – used under Creative Commons attribution license.

Wednesday, 3 November 2010

Should “Cancel” cancel the cancellation, or just cancel the cancellation cancellation?

This is from SagePay’s new MySagePay admin system, which has brought us all so much joy these past weeks and will go down in history as my second favourite piece of web design ever, after Kibo’s Optimised for WebTV! page.

Today’s happy little discovery. You click to view a transaction. You think “Oops, wrong transaction.” Years of Windows experience means you instinctively hit Cancel – the button marked Cancel, that’s at the bottom-right of the window, just like it is on every other dialog box you’ve seen in the last 25 years. But guess what? In this case, the Cancel button actually cancels the transaction, rather than closing the window and backing out of the operation.

Of course, it wouldn’t do anything that drastic without a confirmation dialog… guess what that dialog looks like?

image

“Cancel” means exactly the same thing in practically every application, window manager and operating system that’s been released for the last 25 years. The idea that anyone might think it’s OK to use it for a different purpose in a different context is… just unbelievable.

Moral of the story: if your ubiquitous language contains a word or phrase that everyone knows means something else – change the ubiquitous language. Think of a better word. Here are some to get you started… Unauthenticate. Withdraw. Retract. Deauthenticate. Discard. Abandon. Reject. Decline. Even “Cancel Transaction” would have been better than this.

So – to cancel a transaction, you hit Cancel, then Cancel again. To cancel the cancellation, you hit Cancel, but DON’T press Cancel, press X and then X.

(and yeah, to cancel the dialog as opposed to the transaction, you click the groovy little X in the top corner, because this is Web 2.0, dontcha know?)

Obvious, really.

Tuesday, 26 October 2010

“If It Ain’t Broke…” – Why Good Code Doesn’t Guarantee Happy Users

For years, my sites have processed online payments using a firm called Protx. Protx was acquired by Sage a few years back, rebranded as SagePay, and since then they’ve been gradually making changes to their system. Nothing too drastic, and always with plenty of warning that changes were forthcoming. Their payment API is generally rock-solid, powerful, flexible, and has allowed us to cope with all sorts of complicated payment scenarios without too much fuss. So far, so good.

About two months ago, SagePay announced that a preview of their new administration system was running on their test servers, and they invited everyone to go and have a look. Their old system was kinda clunky, but it worked, and our accounts team had been using it quite happily for years. So I went. I looked. The new system had lots of shiny Ajax – OK, fair enough, this is 2010 after all – and horizontal scrolling. No, really. Check this out:

image

I have a 1600x1200 screen because information is nice and I like being able to see lots of it at once. SagePay have decided, for some reason, that our payment transactions are best presented in a fixed-size 800x300px window. Fine, though – that’s what technical previews are for; lots of people reported this and were assured it would be sorted out.

Well, the new system launched over the weekend. The old one’s gone, the new one’s live, things like the horizontal scrolling have not been sorted out, and customers’ reactions to the new system are, well, a bit negative. A thread on their support forums is full of gems like:

“A complete crock. Should be rolled back immediately.” (here)

“this is a complete and utter disaster. This system needs rolling back as soon as possible.” (here)

“The system is rejecting the order because of the delivery address line 2 which contained 58-62 xxxxx Road. Your system is not allowing the - in the address (This address is the formatted address that is provided by Royal Mail). This needs to be fixed quickly.” (here)

OK, fine. Big company rolls out shiny upgrade, upsets customers, breaks APIs, whatever. Big companies do this a lot.

But - I was expecting something different here, because last week, Mat – SagePay’s “Chief Nerd” – published this blog post, in which he talks quite rationally and eloquently about the forthcoming improvements. And it sounds like quite good stuff, too. Stuff like:

“To accommodate this we’ve changed the way we develop software, not only at a language level, but also at the process level. We’ve embraced Test Driven Development, Agile methodologies, Continuous Integration and parallel automated testing.”

“By writing tests first and only then writing code that passes those tests, we know that our software does what it is supposed to.”

“This demonstrable code quality gives our developers much more confidence in their code, frees them to refactor software that behaves sub-optimally, and ensures the test team’s time isn’t wasted on trivial bugs.”

(quotes from http://sagepay.wordpress.com/2010/10/18/if-it-aint-broke/)

OK. Mat, I believe you. If your comments about improved capacity and security are correct, this actually represents a huge achievement for the SagePay technical team. But I have to wonder… if the chief nerd is doing everything right, why are the customers so upset?

…if the CHIEF NERD is doing EVERYTHING RIGHT, why are the CUSTOMERS so UPSET?

Well, this is what I think probably happened. First, they’ve been working on this since June 2009; the first real customer preview was in September 2010, and it’s now live, just over a month later. That’s not a series of incremental improvements, delivering discrete chunks of business value every few weeks or months. That’s the big rewrite wearing a not-very-convincing Agile disguise.

Second, one of the core tenets of the Agile Manifesto is “customer collaboration over contract negotiation” – and to get that right, you have to know who your customers really are, and collaborate with them. This is tricky, because to the developers, the ‘customer’ is probably the product owner – but in reality, the customer should be the person who’s going to use the product. Now a firm like SagePay probably can’t call their customers in off the street to collaborate on a big project like this – but they can talk to them.

I think it’s really easy – particularly in big companies – to get this wrong. I have made this mistake many times. You launch version 1.0, you spend a couple of years babysitting it and fielding the support calls, and you end up thinking you know exactly what all your customers want… except you don’t, because happy users never call support. They just use the product and get on with their lives. You can find out what your users don’t like from support calls and complaints, but to find out what they do like, you need to get out there and talk to them, and it sounds like many of the features in the new SagePay system were based on feature requests and complaints from a vocal minority who weren’t really representative of the user base as a whole. It feels like you’re doing exactly the right thing, but you’re not actually collaborating with your customer.

REAL USERS don't care about TEST DATA

Third – technical previews are all very well, but real users don’t care about test data. When I tried the new MySagePay tech preview last month, it had about a dozen transactions in it, from 2006, when we were testing an upgrade to our own payment system. That’s not a realistic test of the system, because there’s no incentive to actually do anything with it. The only way to get real feedback is to get real people to use the software to do real work – and you can’t do real work with test data.

In a nutshell, I think they missed the distinction between have we built the right thing? and have we built it right?

TDD, Continuous Integration, refactoring – these will all help you build it right, but agile’s also about making sure you’re building the right thing, and I think SagePay dropped the ball on this one.

Mat even says (my emphasis)

“…any change introduces risk. … We might produce new software that performs identically to the old for a given payment protocol only to find that two thousand customers are using a non-documented “feature” of the old system that we’ve now written out."

Well, that’s exactly what they did. On Friday, https://live.sagepay.com/txstatus/txstatus.asp was returning real-time status for previous payment transactions – and now it’s gone. Vanished. What used to take 500 milliseconds in an automated script now takes a real person 90 seconds or so – including all that lovely horizontal scrolling they have to do.

Still, all is not lost. If SagePay really do have a clean, new architecture, and full test coverage, and a decent agile process in place, then it should be straightforward to respond to this customer feedback. Respond to change, instead of following a plan. Some tweaks at the presentation layer, maybe a couple of new properties on various view models and controllers, a handful of new methods on the supporting services, and it shouldn’t take long to deliver a product that combines the scalability and security of the new system with a UI that does exactly what the customers need.

Mat, if you’re reading this, I’d be really interested to hear what you guys did today whilst your support team were running around putting out fires. I’d love to see what your product backlog looks like right now, and how you’re condensing the torrent of feedback into user stories and work items. There’s not enough sharing in this industry, and if you and your team can be as open about things today as you were this time last week, you’ll probably help a whole bunch of people – including me – next time we find ourselves babysitting a tricky launch.

Tuesday, 12 October 2010

Making iPhone Ringtones using Free Software

iPhone ringtones are audio files encoded as MPEG-4 audio and then renamed to use the .m4r file extension. If you don’t fancy paying £0.99 for twelve seconds of Van Halen, it’s pretty easy to roll your own iPhone ringtones, and you don’t even need to buy any software to do it. You’ll need:

  1. An MP3 or WAV file of the song or noise that you want to use as a ringtone. Legally speaking, this could constitute copyright infringement, so please stick to original recordings or your own compositions.
  2. Audacity, an open-source audio editor.
  3. ffmpeg, the most awesomely powerful audio/video encoder ever. You’ll want to grab one of the Windows binary builds from http://ffmpeg.arrozcru.org/autobuilds/

Aside: I normally have Cygwin, a set of command-line Unix-a-like tools for Windows, installed into c:\windows\cygwin\, and then I add C:\Windows\cygwin\bin\ to my system path. Having done this, it makes sense to install ffmpeg into c:\windows\cygwin\ – it uses the same bin/doc/lib structure as most *nix ports and means I don’t have to add another path to the system PATH variable.

Import your source track into Audacity. Highlight the section you want, copy it, Ctrl-N to get a new file, and paste it.

image

At this point, playing around with effects might be worthwhile – especially selecting the first 1-2 seconds of the clip and using Effect –> Fade In, and applying a corresponding Fade out to the last second or two of your clip. When you’re happy with it, File –> Export as WAV…, and put it somewhere useful – D:\Ringtones\ works for me.

Then you’ll want to turn your WAV into an m4a file:

D:\ringtones>ffmpeg -i ringtone.wav ringtone.m4a

Input #0, wav, from 'ringtone.wav':
  Duration: 00:00:40.30, bitrate: 1411 kb/s
    Stream #0.0: Audio: pcm_s16le, 44100 Hz, 2 channels, s16, 1411 kb/s
Output #0, ipod, to 'ringtone.m4a':
  Metadata:
    encoder         : Lavf52.80.0
    Stream #0.0: Audio: aac, 44100 Hz, 2 channels, s16, 64 kb/s
Stream mapping:
  Stream #0.0 -> #0.0
Press [q] to stop encoding
size=     427kB time=40.31 bitrate=  86.8kbits/s
video:0kB audio:413kB global headers:0kB muxing overhead 3.447648%

Once you’re done, either via Windows Explorer or the rename command, rename ringtone.m4a to ringtone.m4r
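If you’ve got a whole folder of clips to convert, you can script the convert-and-rename steps too. Here’s a rough C# sketch of the idea – the class name and folder are just placeholders, and it assumes ffmpeg is on your PATH:

using System;
using System.Diagnostics;
using System.IO;

class RingtoneBatch {
    static void Main() {
        foreach (var wav in Directory.GetFiles(@"D:\Ringtones", "*.wav")) {
            var m4a = Path.ChangeExtension(wav, ".m4a");
            var m4r = Path.ChangeExtension(wav, ".m4r");
            // run "ffmpeg -i clip.wav clip.m4a" and wait for it to finish
            var ffmpeg = Process.Start("ffmpeg", String.Format("-i \"{0}\" \"{1}\"", wav, m4a));
            ffmpeg.WaitForExit();
            // iTunes only treats the file as a ringtone once it has the .m4r extension
            if (File.Exists(m4r)) File.Delete(m4r);
            File.Move(m4a, m4r);
            Console.WriteLine("Created {0}", m4r);
        }
    }
}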

Finally, fire up iTunes, go to File –> Add File to Library… and browse to your new ringtone.m4r file. After a moment’s processing, it should show up in the Ringtones library. Right-click, Get Info… and you can replace the name, title, artist and so on, and then you just need to make sure it’s selected under the Ringtones heading on your iPhone sync menu.

image

Happy ringtoning.

Thursday, 7 October 2010

/usr/bin/tawgo: Instant Geek Cred for Movies

I had this idea a while back. You know how computers in movies are always some sort of interactive brightly-coloured touch-screen kind of deal? (“It’s a UNIX system… I know this!”) And you remember how excited all us techno-geeks got when Trinity used sshnuke to crack the security mainframe system in The Matrix Reloaded?

There should be a utility /usr/bin/tawgo, that’s standard on all Linux distros, that combines the audience appeal of brightly coloured cheese with the geek cred of technical accuracy.

[[email protected]]$ cd /home/dr.evil/
[[email protected]]$ tawgo "PREPARING TO COPY SECRET FILES..."
[[email protected]]$ cp -Rf * /mnt/floppy
[[email protected]]$ tawgo "SECRET FILES COPIED"

[[email protected]]$ tawgo --help

tawgo: Tell Audience What's Going On

Usage: tawgo [option] MESSAGE

Displays MESSAGE in big bright coloured letters, probably in some sort of futuristic animated dialog box.

-a --animation Show cheesy animation
-w --warning Use yellow & black warning stripes
-s --self-destruct Initiate fake countdown sequence
-v --voice Reads MESSAGE in a Female Computer Voice

Use -v -s if you need Female Computer Voice counting down the seconds to our hero's impending destruction.

[[email protected]]$ tawgo "INITIATING SATELLITE ALIGNMENT"
[[email protected]]$ /usr/sbin/comsatctl -a --lat=53.47.6 --lon=1.29.2
[[email protected]]$ tawgo "SATELLITE ALIGNED."
[[email protected]]$ tawgo "BEGINNING FIRING SEQUENCE"
[[email protected]]$ /usr/sbin/comsatctl --target 01 --fire
[[email protected]]$ tawgo -v -s "FIRING SEQUENCE INITIATED"

They’d save a fortune on expensive UI mockups, too – all you’d need is a laptop running Ubuntu and you’ve got your snazzy whizz-bang computer hacker Unix system all ready to go.

Monday, 4 October 2010

Stap Me Vitals! Scanning Domain Servers using System.Management

In preparation for an overhaul of our hosting & network infrastructure, I wanted to gather stats on exactly what was running on all of our remote servers – stuff like how much physical memory is installed in the servers, what’s their Dell service tag, what sort of disk interface they’re running, what version of Windows, that kind of thing.

There’s a command-line tool wmic.exe that does some neat stuff by hooking into the Windows Management Instrumentation interface – little snippets like:

C:\>wmic /node:myserver cpu get description
Description

Intel(R) Xeon(TM) CPU 3.00GHz
Intel(R) Xeon(TM) CPU 3.00GHz
Intel(R) Xeon(TM) CPU 3.00GHz
Intel(R) Xeon(TM) CPU 3.00GHz


C:\>


- hey, looks like myserver has a quad-core Xeon CPU. Nice. Anyway, doing this across a whole lot of properties on a whole lot of servers would clearly be a bit time-consuming, so here’s how you do it using C# and the System.Management namespaces. The username/password specified in ConnectionOptions will need admin rights on all the servers you’re connecting to, and I’m sure you can reformat the output to look a little nicer, but in a nutshell, this’ll scan all the servers you specify and dump their vitals into D:\servers.csv.

Simple - especially once you work out that the documentation for the cryptic magic strings inside the WMI classes is at http://msdn.microsoft.com/en-us/library/aa394084.aspx

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Management;

namespace ServerScan {
    class Program {

        static string[] servers = new string[] { "hugh", "pugh", "barney", "mcgrew", "cuthbert", "dibble", "grubb" };

        // see notes on Win32 instrumentation classes at http://msdn.microsoft.com/en-us/library/aa394084.aspx
        static Dictionary<string, string[]> wmiClasses = new Dictionary<string, string[]> {
            { "Win32_DiskDrive", new string[] { "Caption", "InterfaceType", "MediaType", "Name", "Size" }},
            { "Win32_PhysicalMedia", new string[] { "Tag", "SerialNumber" }},
            { "Win32_Processor", new string[] { "Name", "ExtClock", "CurrentClockSpeed", "DeviceID" }},
            { "Win32_OperatingSystem", new string[] { "Name", "CSDVersion", "Description", "TotalVisibleMemorySize" }},
            { "Win32_SCSIController", new string[] { "Caption" }},
            { "Win32_SystemEnclosure", new string[] { "SerialNumber" }}
        };

        static void Main(string[] args) {
            var output = File.CreateText(@"D:\servers.csv");
            ConnectionOptions options = new ConnectionOptions();
            options.Username = @"MYDOMAIN\big.boss";
            options.Password = "password";
            foreach (var server in servers) {
                ManagementScope scope = new ManagementScope(String.Format(@"\\{0}\root\cimv2", server), options);
                try {
                    scope.Connect();
                    foreach (var wmiClass in wmiClasses.Keys) {
                        ObjectQuery query = new ObjectQuery("SELECT * FROM " + wmiClass);
                        ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);
                        Console.WriteLine("Searching {0} / {1}", server, wmiClass);
                        var results = searcher.Get();
                        foreach (var result in results) {
                            foreach (var thing in result.Properties) {
                                if (!wmiClasses[wmiClass].Contains(thing.Name)) continue;
                                output.Write(String.Format("{0}\t{1}\t{2}\t{3}\r\n", server, wmiClass, thing.Name, thing.Value));
                            }
                        }
                    }
                } catch (Exception ex) {
                    output.Write(String.Format("{0}\tFAIL\tFAIL\t{1}\r\n", server, ex.Message));
                }
            }
            output.Close();
            Console.Write("All Done!");
            Console.ReadKey(false);
        }
    }
}
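If you want to capture other vitals – free disk space, say – the dictionary makes it easy to extend. As a hedged example (Win32_LogicalDisk and these property names come from the same MSDN WMI reference linked above, but they’re not part of the scan as written), you’d just add another entry:

{ "Win32_LogicalDisk", new string[] { "DeviceID", "Size", "FreeSpace" }},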


Thursday, 30 September 2010

Five Things I Wish My iPhone Did

Wake Me Up When I Get To Bristol

For falling asleep on the train – or on the last bus home – this would be a killer feature; an alarm that goes off when you get within 1km of your destination, waking you up so you don’t miss your stop. For bonus points, it could calculate your average speed and last known distance from your destination, so if you've lost GPS signal, it’ll wake you up early just to be on the safe side.

Geographical Task Reminders

Every night I get home and realize that I’ve left my headphones in my desk drawer at work – again. It’d be cool to set a reminder so that next time I’m at work, it’ll remind me to put my headphones in my bag so I don’t forget them.

Charging Reminders Based On Wi-Fi Signals

I sometimes forget to charge my phone at work, and since it won’t last a full day & night without a top-up, this means it dies sometime during the evening, leaving me out & about with no music, no movies, no Twitter, no e-mail – oh, yeah, and no phone.

If my phone’s connected to my office wi-fi network, then I’m at work – so if I haven’t plugged it in to charge after 10-15 minutes, it should beep at me “hey, I know you’re at work – plug me in!”

Don’t Ask for an Unlock Code When On A Known Wi-Fi Network

Just like above – if my phone’s on my office or my home wi-fi network, then it’s hopefully not been stolen, and it’s probably safe to use it without entering the unlock code first.

Make Phone Calls and Send Text Messages Reliably

I know… wishful thinking. The iPhone is a lovely internet gadget, music player, movie player, sat-nav and all-round geek toy. But for actually communicating with other people, I still carry a battered old Nokia. It makes calls, it sends texts, and the battery lasts well over a week.

Friday, 24 September 2010

Fun with Powershell, SQL Backup and Automated Restores

Our database backup system is based on taking a full backup of every database every night, and transaction log backups every 20 minutes. Backups are shipped offsite to a backup server sitting in a data-centre across town, and what I was trying to do was to automate the process of restoring the latest full backup and all subsequent transaction logs on the remote server.

So: ingredients. I have:

  • Red Gate SQL Backup (and the SqlBackupC.exe command-line client)
  • Windows Powershell
  • A list of databases in a text file (“databases.txt”)
  • A  folder full of database backups, and a folder full of transaction log backups.

The backup filenames look something like:

D:\backup\
  data\
    FULL_myserver_Animals_20100921_183021.sqb
    FULL_myserver_Animals_20100922_183021.sqb
    FULL_myserver_Animals_20100923_183021.sqb
    FULL_myserver_Northwind_20100921_183021.sqb
    FULL_myserver_Northwind_20100922_183021.sqb
    FULL_myserver_Northwind_20100923_183021.sqb
  logs\
    LOG_myserver_Animals_20100921_190002.sqb
    LOG_myserver_Animals_20100921_200004.sqb
    LOG_myserver_Animals_20100921_210003.sqb
    LOG_myserver_Animals_20100921_220002.sqb
    LOG_myserver_Animals_20100921_230005.sqb
    ...
    ...

What I’m trying to do is, for each database listed in databases.txt, find the most up-to-date full backup, restore it, and then restore, in chronological order, every transaction log that’s been created since that full backup was taken.

So. Powershell… time to see what all the fuss is about.

Let’s start with a couple of things that threw me until I got my head around them. Powershell is a shell. Don’t think of it like C# or Java – think of it like a batch file on steroids. With objects. If you want to run a program from a Powershell script, you just put its name in the script. For example – this is a valid Powershell script that’ll open itself in Notepad:

C:\Windows\System32\Notepad.exe myscript.ps1

Copy that line into notepad, save it as as myscript.ps1, fire up Powershell.exe, navigate to the folder where you saved it, and type .\myscript.ps1 – bingo.

Secondly, Powershell is designed so that if something has a certain meaning in DOS, it’ll mean the same thing in Powershell. <, | and > will redirect and pipe output – just like in DOS – which means Powershell can’t use > as the greater-than operator – so there’s -gt instead, along with -ge, -le and the rest. (Mmm. Perly.) Backslashes are path separators – just like in DOS – so Powershell uses the backtick (`) as an escape character.

Thirdly, Powershell supports Perl-style string interpolation.

$foo = 10
echo "I have $foo fingers"

will output “I have 10 fingers”. To avoid this behaviour, use single-quotes instead:

$foo = 10
echo 'I have $foo fingers'

will print "I have $foo fingers". Be careful, though – underscores are valid in variable names, so

echo "LOG_$server_$catalog_$timestamp"

is actually going to try and find variables called $server_ , $catalog_ and $timestamp. To work around this behaviour, enclose the variable name in braces:

echo "LOG_${server}_${catalog}_${timestamp}"

Finally – Powershell supports aliases, which means most commands have more than one name. For example, there’s a built-in command Get-ChildItem – and dir, gci, and ls are all shortcuts for this command, so when you bring up a Powershell command and want to list the contents of the current directory, it doesn’t care whether you type dir, ls, gci, or Get-ChildItem – they all do exactly the same thing.

OK. Time for scripty fun. Here’s the script – you’ll need to supply your own databases.txt and credentials, and probably tweak the backup folder paths and filename patterns to match your own setup.

# Read the contents of databases.txt into a collection called $databases
$databases= Get-Content "databases.txt"

# Path to the SQL Backup command-line client.
$exe = "C:\Program Files (x86)\Red Gate\SQL Backup 6\SqlBackupC.exe"

# Credentials for connection to SQL Server. 
$username = 'sa'
$password = '[email protected]'
# Database backup encryption password (you *do* encrypt your backups, don’t you… ?)
$dbpw = 'T0P$3CR3T'
foreach($database in $databases) {

  # Get a collection of ALL backup files in the backup folder.
  $allSqbFiles = Get-ChildItem D:\backup\data\*.sqb 
  
  # Create a regular expression that'll match filenames
  # against the current database name
  $regex = "FULL_\(local\)_${database}_\d+_\d+\.sqb"
  
  # Filter the list of backup files to find those for the current DB
  $backups = $allSqbFiles | Where-Object { $_.Name -match $regex }
  
  # Sort the backups by LastWriteTime...
  $backups = $backups | Sort-Object LastWriteTime

  # and extract the most recent one
  $latestBackup = $backups | Select-Object -last 1

  # Capture the LastWriteTime of the most recent backup  
  $t = $latestBackup.LastWriteTime

  # Extract the full name of the database backup file  
  $backup = $latestBackup.FullName
  
  # Construct the SQL statement to pass to SqlBackupC.exe - note the 
  # doubled double-quotes used to include a double-quote in the result.
  
  $sql = """RESTORE DATABASE [$database] FROM DISK = '$backup' WITH PASSWORD = '$dbpw', NORECOVERY, REPLACE"""
  
  # This next bit is what actually does the database restore, and 
  # it's a bit fiddly.
  # & (the ampersand) is known as the "call" operator, and 
  # basically says "hey, run this command" Note that you CAN'T just
  # give it a big fat string containing the whole command, arguments
  # and everything. That's not how it works.
  # Second, the arguments beginning with a hyphen need to be
  # backtick-escaped so Powershell knows they're not operators.
  # Finally, note how we've included the $exe in quotes,
  # because the path to the SqlBackupC.exe has spaces in it. 

  & "$exe" `-U $username `-P $password `-SQL $sql
  
  # Use another regex to grab all the LOG files for the current database.
  $regex = "LOG_\(local\)_${database}_\d+_\d+\.sqb";
  $logFiles = Get-ChildItem D:\backup\logs\*.sqb
  $logFiles = $logFiles | Where-Object { $_.Name -match $regex }
  
  # Then we filter this list to return only those that are more
  # recent than the full backup (which we captured earlier)
  $logFilesToRestore = $logFiles | Where-Object {$_.CreationTime -ge $t } | Sort-Object CreationTime
  
  $logFileCount = $logFilesToRestore.Length
  
  # If there are no log files, we skip straight on to the
  # next database.
  if ($logFileCount -le 0) { continue }
  
  # Now, the LAST transaction log needs a slightly different SQL 
  # command, because we want the last restore to leave the database
  # in a usable state, so we need to split the logs into the final
  # one, and all the others. 
  $splitAtIndex = $logFileCount - 2
  
  # Powershell lets you slice arrays by saying $MyArray[x..y]
  $logFilesExceptFinalOne = $logFilesToRestore[0..$splitAtIndex]
  
  $finalLogFile = $logFilesToRestore | Select-Object -last 1
  
  foreach($log in $logFilesExceptFinalOne) {
    $logFileName = $log.FullName
  
    # Construct the SQL statement to restore the transaction log
    # leaving the database ready to accept further log restores:
    $sql = """RESTORE LOG [${database}] FROM DISK = '${logFileName}' WITH PASSWORD = '${dbpw}', NORECOVERY"""
   
    # and run SqlBackupC with that SQL statement:
    & "$exe" `-U $username `-P $password `-SQL $sql
  }
  
  $logFileName = $finalLogFile.FullName
  
  # Construct the SQL statement to restore the last transaction
  # log and leave everything ready for use
  $sql = """RESTORE LOG [${database}] FROM DISK = '${logFileName}' WITH PASSWORD = '${dbpw}', RECOVERY, ORPHAN_CHECK"""
  
  # and run it!
  & "$exe" `-U $username `-P $password `-SQL $sql
}

Having never really used Powershell before, this whole thing took about three hours to put together – and I spent the first two hours cursing Powershell for being so damned awkward, and then it just clicked and suddenly everything made a lot more sense. Hopefully the tips above will save you a bit of frustration if you’re taking your first steps with it… and I’ve a feeling I’m going to be doing a lot more Powershelling before too long. For stuff like this, it’s so much more powerful than batch files, and so much more maintainable than writing little command-line utilities in C#.

Monday, 20 September 2010

Want a Free Red Gate Tool to Support Your Open-Source .NET Project?

Earlier in the year, I helped Red Gate Software out with some customer focus group stuff, and in return they’ve offered me a free license to any single Red Gate software tool.

Since I’m already a happily-licensed user of Red Gate’s SQL Toolbelt and most of their .NET tools, I thought I’d pass this generous gesture on to the .NET community, so here’s the deal.

If you maintain an open-source .NET project, and you’d like a free license for any single Red Gate tool, drop me a short e-mail, tell me about your project, tell me which tool you’d like, and why it would be useful.

There is a single license up for grabs, for any of the tools listed on Red Gate’s products page at http://www.red-gate.com/products/index.htm

I’m not looking for ten-page essays – if your project is well-known, just tell me which tool you’d like and how it will help. E-mail your entry to [email protected] – please put “Red Gate Giveaway” in the subject line so they don’t end up in the spam folder, and remember to include your full name & e-mail so I can let you know if you’ve won. I’ll accept entries until this Friday, October 1st, at 10:00am (GMT), after which I’ll pick a winner and send their details across to Red Gate.

Wheaton’s Law applies; my decision is final, no correspondence will be entered into, etc, etc. Good luck!

Wednesday, 15 September 2010

.NET String Formatting Cheat Sheet

This morning I saw this tweet from @jhollingworth, which linked to this excellent guide to .NET string formats by Steve Tibbett.

Sometimes, something is just so incredibly useful that even bookmarking it isn’t convenient enough – so I’ve hacked Steve’s guide and examples into a one-page A4 cheat sheet and stuck it on the wall next to my desk.

It’s up on my site as a PDF or as a Word document – download it, print it out, stick it on the wall, and you’ll never find yourself Googling {0:n} vs {0:c} again.

All credit to Steve Tibbett for this – all I did was make it fit on one printed page. Happy formatting.
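For a flavour of what’s on the sheet, here are a few of the format strings I find myself reaching for most often – just a quick sketch, and note that the exact output of the number, currency and date formats depends on the current culture:

var price = 1234.5678m;
Console.WriteLine("{0:n}", price);     // group separators, two decimal places: 1,234.57
Console.WriteLine("{0:c}", price);     // currency, culture-specific: £1,234.57 on a UK machine
Console.WriteLine("{0:0.000}", price); // custom numeric format, three decimal places: 1234.568
Console.WriteLine("{0:d}", DateTime.Now);           // short date pattern: 15/09/2010
Console.WriteLine("{0:dd MMM yyyy}", DateTime.Now); // custom date format: 15 Sep 2010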

Monday, 13 September 2010

Automating Secure FTP Backups to a Windows 2008 Server

I’ve been looking into solutions for shipping database backups and log-files to a remote server as part of our backup strategy. Requirements are pretty simple:

  • It’s got to be completely automated
  • A daily run has to take less than 6 hours so it’ll finish overnight whilst our connection is quiet
  • Transfers need to be encrypted to stop people sniffing our packets
  • I don’t want to spend a lot of money
  • If I can get away without lots of complicated (read: fragile) command-line switches, then that would be a bonus

Now, there’s four common protocols for doing secure file transfers - SFTP, FTPS, FTP+SSH, and SCP. The specifics aren’t important – what matters is that they’re incompatible, so your client and server have to be using the same protocol, otherwise nothing will work. There’s also various dedicated tools such as rsync, robocopy, xcopy, SyncBack, DeltaCopy… the list goes on. Setting up rsync on Windows is a bit fiddly – not to mention the dozens of subtle and nefarious command-line switches available to rsync.exe – and whilst I love a bit of cygwin hacking as much as anyone else, I’d really rather use something a little more intuitive here. I did try DeltaCopy, a GUI wrapper around rsync, but I quickly ran into security problems with SSH keys and username/password combinations. Since I wanted to avoid open folder shares on the remote system, tools like xcopy and robocopy were out.

In the past, I’ve had great results with Vandyke Software’s VShell, a commercial Windows SSH server, but whilst putting this lot together, I discovered that in Windows Server 2008, you can set up an FTP server to encrypt connections via SSL (it’ll even use the same certificate as your website). That’s good, by the way – I think things are always cheaper & easier when Windows does them out of the box.

So – assuming you’ve already got a Windows 2008 Server running IIS, and you’ve already got a certificate installed, then log onto the server, fire up IIS, create a new FTP site, switch on SSL, bingo.

image 

Just to sanity-check that everything’s working, fire up WinSCP, create a new session using FTP with SSL Explicit Encryption, and you should be able to connect to your new secure server.

OK – so far, so good. We can connect, we can see a directory listing. Now what I want to do is to automate the client side of things, and for this bit, I’m going to use a superb bit of software called Super Flexible File Synchronizer. This is something I’ve only discovered recently, but so far, I’m delighted with it. It’s got native support for FTP, SFTP, WebDAV and HTTP, as well as Google Docs, Amazon S3 and Azure (!), plus a built-in scheduler, mail notification facility… as I said, it’s looking very promising.

Right, time for some actual computer science – complete with experimental data and graphs and everything. I want to know how fast this thing is going to go, and I want to know how much the encryption’s going to slow things down, so I’ve put together 72Mb of assorted database backups, TIFF files and documents, and transferred them up to the remote server using various settings. Results here are the average of five runs for each combination.

There’s a slightly odd setting in Super Flexible File Synchroniser – when you’re setting up an Internet target, if you pick FTP, it’ll give you a choice of libraries – cryptically called 1, 2 and 3.

I’m guessing that under the hood, there are three different off-the-shelf FTP libraries in there, and the option lets you pick another one if the default doesn’t play nicely with your particular server. (Yeah, like anyone’s ever going to need that…)

So, numbers. I ran five different settings – network share (i.e. copying straight to \\myserver\backups using Windows file sharing), “open” FTP (no SSL), and then each of the three FTP libraries with the SSL option switched on.

 

                      Total Time    Time per File (avg / fastest / slowest)    Transfer Speed (avg / slowest / fastest)
FTP                   00:00:46      2 / 0 / 16                                  1,944 / 146 / 2,274
SMB                   00:00:45      2 / 0 / 16                                  1,976 / 86 / 2,451
FTP+SSL (Library 1)   00:08:22      22 / 0 / 41                                 148 / 1 / 870
FTP+SSL (Library 2)   FAIL          -                                           -
FTP+SSL (Library 3)   00:00:40      2 / 0 / 14                                  2,161 / 129 / 2,521

image

A couple of things to note:

  1. FTP Library 2 just didn’t work – it wouldn’t even connect to the remote server to retrieve a directory listing.
  2. FTP Library 1 clearly had some issues – a typical run took well over eight minutes, and the subsequent logs were littered with timeouts and stuff like this:
     COPY L->R d:\FileDemo\F00090820-0103.tif  (889.7kB)
     FTP Exception (5): EclSocketError Timeout error occured @ 009283D0
     COPY L->R d:\FileDemo\F00090820-0104.tif  (851.6kB)
     FTP Exception (5): EclSocketError Timeout error occured @ 009283D0

Fortunately – FTP Library #3 worked perfectly, and gave transfer speeds that were actually faster than the raw FTP connection. I’m thinking that’s probably down to variations in test conditions (time of day, traffic, and so on) but even allowing for a little experimental error, it’s definitely fast enough to work with.

We’re looking at transferring about 20Gb worth of database backups and photographs a day. Based on the stats above, we can do 75Mb in 45 seconds, which equates rather neatly to 100Mb / min, which means our 20Gb backups are going to take just over three hours. $60 for a copy of SuperSynchroFlexofile O’Maticalyzer (and I’ll throw in a case of beer if you can come up with a snappier name for the next edition), SSL all the way, and a nice GUI to set it all up.

Isn’t it nice when everything just… works?

Monday, 23 August 2010

Heisenbug of the Day: IIS 7.0 Discarding POST Data From Firefox 3 when using Custom 404 Handlers

Our site uses IIS custom error handlers, so that when you request /not/a/real/page.html, it’ll actually run /errors/404.asp – there’s a nice article on 4guysfromrolla about how you do this in classic ASP.

In theory, this works for both GET and POST requests, but last week we hit a snag – some of our jQuery Ajax code wasn’t working properly in Firefox 3. More specifically – it worked fine locally, it worked fine in all other browsers, but when we deployed the code to any of our test or live servers, it wouldn’t work in Firefox. IE, Opera, Safari, Chrome – all fine; it seems like only Firefox was affected.

Method   Server                   Request URL        Result
GET      IIS 7.5 (Windows 7)      /errors/404.asp    OK
POST     IIS 7.5 (Windows 7)      /errors/404.asp    OK
GET      IIS 7.5 (Windows 7)      (404 handler)      OK
POST     IIS 7.5 (Windows 7)      (404 handler)      OK
GET      IIS 7.0 (Windows 2008)   /errors/404.asp    OK
POST     IIS 7.0 (Windows 2008)   /errors/404.asp    OK
GET      IIS 7.0 (Windows 2008)   (404 handler)      OK
POST     IIS 7.0 (Windows 2008)   (404 handler)      FAIL

Firebug didn’t show up anything unusual, so we fired up Fiddler, a web debugging proxy that’ll show you what’s actually being passed between the client and the server. At least, that’s the idea… what actually happened is that when we started running Fiddler, the bug went away. Yep… we had ourselves a real live Heisenbug:

Heisenbug: “…a computer bug that disappears … when an attempt is made to study it.” [via Wikipedia]

Fiddler runs as an HTTP-level proxy – in other words, it understands the HTTP protocol, and sits between your web browser and your web server, and – in theory – transparently forwards information between them, whilst recording all the bits that fly backwards and forwards so that you can dissect them and see what’s going on. I’d guess that, somehow, Firefox was sending dodgy requests, and Fiddler was cleaning up these requests as part of the proxying process – hence why the problem disappeared when Fiddler was running.

Time to dig a little deeper. Wireshark is a deep-level network protocol analyser that’ll sniff your network traffic right down to the frame level. What I did next was to load up Wireshark, set up a filter [1] to show only HTTP traffic to/from our build server, and then submit the same request from a couple of different browsers – including Firefox.

image image

The first grab there is what’s travelling over the wire when you POST that form using Google Chrome; the second is the same POST submitted using Firefox. Remember – at this point, we’re totally lost and so looking for absolutely anything that’s different. If you look closely, you’ll see the Chrome trace includes an extra line - [Reassembled TCP Segments (680 bytes)] – that isn’t in the Firefox trace. They’re otherwise identical other than known differences like the User-Agent string and so on. Curious. A bit of experimentation verifies that Safari and IE are doing the same thing as Chrome – submitting two frames of data for each POST – where Firefox is only submitting one.

It turns out this triggers a bug in IIS 7.0 when you’re using custom 404 handlers.

Bad Analogy Time…

Imagine it’s your birthday. You’ve got a load of packages to open - you open the first one, and there’s a card inside saying “Happy Birthday! Enjoy the Lego! Love Granny xxx”

Now – at this point, you’re expecting some Lego, right? Well, if Granny is Chrome, IE or Safari, she’s been sensible – she’s sent the card in its own envelope, and put the Lego in the next parcel. But, if Granny is Firefox, then Granny has done something foolish, and has crammed as many of the Lego bricks as she can into the same envelope as the birthday card. If the Lego set is only tiny, then she can fit all the bricks into the envelope – and so won’t bother sending the now-empty Lego box.

So… imagine the envelopes/parcels are TCP frames, the birthday card is your HTTP request, and the Lego is the associated POST data. The card (headers) say “hey, there’s more stuff coming” – and then somewhere close behind, there’s another package with that “stuff” in it.

Now, onto the IIS 7.0 bug. Under normal circumstances, IIS 7 copes just fine with POST data being in the same frame as the actual request. (That’s why this bug doesn’t affect every Firefox user who visits an IIS7 site.)

Thing is - when a request is processed by a custom 404 handler, IIS 7.0 is opening the envelope, finding the birthday card, going “whooopeee! Lego!” – and then throwing the envelope away without checking to see if there’s any Lego in it, before looking around excitedly to see where the next parcel is.

For very small POSTs, this results in the Request.Form being empty (because all the Lego has been thrown away with the envelope). If you deliberately pad your POST with a couple of really long fields - <input name=”padding” value=”xxxxxxxx … xxx” /> for 2000 characters or so – then you’ll see that even Firefox now has to split the request over more than one frame, and that any POST values that end up in the second frame are now accessible to IIS via Request.Form in the usual way. Kinda like Granny sending you a really big Lego set, and putting the first 20 or so bricks in the envelope with the birthday card, and the rest in a separate parcel or two – throw away the envelope, and you’ve still got *most* of the bricks, but many of them have gone missing.

So… workarounds. Firefox patch – no good. Too many installed users. Upgrade all our web servers to IIS 7.5? Er, not right now, thanks. IIS hotfix? Lovely – if you’ve got one, send it over.

In the meantime, the best option we could find was to stick two hidden fields at the top of the affected form, something like:

<input type="hidden" name="ff_frame" value="xxxxx . . . 1460 Xs here  . . . xxxx" />
<input type="hidden" name="ff_split" value="1" />

<!-- everything after this point will show up intact in Request.Form -->

<input type="hidden" name="real" value="some_data" />

The big string of X’s pads the first frame to make sure all your real data ends up in the second one – 1460 bytes being roughly the TCP payload that fits in a single Ethernet frame – and the ff_split value ensures that this padding doesn’t mess up IIS’ parsing of subsequent POST values. Yes, this is disgusting - and it adds 1Kb+ to every POST - but it’s only required in a handful of places, and we’re looking to isolate it inside the jQuery code we’re using so it’ll be dynamically inserted into POSTs where necessary.

[1] ((http.request || http.response) && (http.host contains "build")) && !(http.request.uri == "/favicon.ico")

Saturday, 24 April 2010

Uncle Bob’s Bowling Kata in Functional C#

During the session on functional programming at OpenVolcano10 this week, Uncle Bob Martin spoke of a particularly elegant functional solution to his bowling game kata. Proposed by Stuart Halloway, the solution regards the bowling game itself as a (potentially infinite) sequence of ‘rolls’ (balls), and then uses some nicely elegant list manipulation to extract ten valid scoring frames from the sequence of balls and calculate the score of the game.

C# and IEnumerable<T> allow you to do some very similar things in .NET – basically, you can define operations in terms of mapping one infinite list to another infinite list, and you don’t have to worry about stack overflows or out of memory errors because, thanks to the wonder of lazy evaluation, nothing is actually allocated or returned until you start asking for the resulting values.

Take a look at this code snippet:

static IEnumerable<Int64> Integers {
    get {
        for (var i = 1; true; i++) yield return (i);
    }
}

If you’re thinking “but that’s just an infinite loop!” – you’re partly right. Yes, it’s infinite, but no, it’s not a loop. The magical yield return operator there actually breaks the loop, and allows you to go to infinity (and beyond!) one step at a time. Using the LINQ Take() method, you can easily slice’n’dice this infinite list into useful pieces:

// Calculate the sum of the first 20 positive integers:
var sum = Integers.Take(20).Sum();

// Calculate the product of the first 10 positive integers:
var product = Integers.Take(10).Aggregate((a, b) => (a * b));

By throwing some extension methods onto Int64, you can use the same approach to filter the list:

public static class ExtensionMethods {
    public static bool IsPrime(this Int64 candidate) {
        if (candidate < 2) return false;
        if ((candidate & 1) == 0) return (candidate == 2);
        var num = (int)Math.Sqrt((double)candidate);
        for (var i = 3; i <= num; i += 2) {
            if ((candidate % i) == 0) return false;
        }
        return true;
    }
}

// now we’ve defined myInt64.IsPrime(), we can do this:
var primes = Integers.Where(i => i.IsPrime());

We now have a C# variable – primes – that, for all intents and purposes, contains all the prime numbers. If we try to print the entire list, our program will never terminate – but that’s what we’d expect, because the list of prime numbers is infinite. We can, however, do things like printing a list of the first 100 primes:

foreach(var prime in primes.Take(100)) Console.WriteLine(prime);

or calculating the product of the first 15 primes (any more than that and the product overflows an Int64):

var product = primes.Take(15).Aggregate((a,b) => (a*b));
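And because the whole pipeline is lazily evaluated, you can reach into the sequence at an arbitrary point without generating anything beyond it – a quick sketch:

// the 100th prime – Skip and First only pull as many values as they need
var hundredthPrime = primes.Skip(99).First();
Console.WriteLine(hundredthPrime); // 541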

So, what’s all this got to do with bowling? Well – using this approach, we can store the progress of a bowling game as a (potentially infinite?) list of rolls, and by using a couple of useful extension methods on IEnumerable<int>, we can calculate the resulting ten-pin bowling score in a single LINQ statement:

var score = rolls.ToFrames().Take(10).Sum(frame => frame.Sum());

Here are the supporting extension methods – which basically capture the scoring quirks of spares, strikes, frames and the other idiosyncrasies of ten-pin bowling. The ToFrames() method is particularly interesting – it’s translated from the cons operator in Lisp/Clojure, and effectively returns a list (the rolls that count towards the current frame), followed by a list of lists (where each list represents a subsequent frame in the game):

public static class BowlingRules {
    public static IEnumerable<IEnumerable<int>> ToFrames(this IEnumerable<int> rolls) {
        yield return (rolls.Take(rolls.BallsToScore()));
        foreach(var frame in (rolls.Skip(rolls.FrameAdvance()).ToFrames())) {
            yield return(frame);
        }
    }

    private static int FrameAdvance(this IEnumerable<int> rolls) {
        return (rolls.IsStrike() ? 1 : 2);
    }

    private static int BallsToScore(this IEnumerable<int> rolls) {
        return (rolls.IsBonus() ? 3 : 2);
    }

    private static bool IsStrike(this IEnumerable<int> rolls) {
        return (rolls.Take(1).Sum().Equals(10));
    }

    private static bool IsSpare(this IEnumerable<int> rolls) {
        return (rolls.Take(2).Sum().Equals(10));
    }

    private static bool IsBonus(this IEnumerable<int> rolls) {
        return (rolls.IsSpare() || rolls.IsStrike());
    }
}
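As a quick sanity check – a minimal sketch, assuming the extension methods above are in scope and you’ve got a using System.Linq at the top – a perfect game of twelve consecutive strikes should score ten frames of thirty points each:

// twelve strikes in a row...
var perfectGame = Enumerable.Repeat(10, 12);

// ...scores 10 frames x 30 points = 300
var score = perfectGame.ToFrames().Take(10).Sum(frame => frame.Sum());
Console.WriteLine(score); // 300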

The project – including the unit test from Uncle Bob’s kata translated to NUnit / C# – is up on Google Code if you want to take a look.

Wednesday, 14 April 2010

Dynamic CSS with .less at the TechDays Open Source Evening

Thanks to all of you who came along to last night’s Open Source .NET evening (and extra thanks to @serialseb for putting the whole thing together!) I think we managed to cover a really great programme of topics – OpenRasta, Ben Hall using Ruby to test ASP.NET projects, Mike Hadlow talking about the Windsor IoC container, Jeremy Skinner presenting his FluentValidation library (and a late apology from Neil Robbins, whose CouchDB talk had to be cancelled after his laptop went into ‘permanent hibernation’ at the last minute… I guess the long cold winter affects laptops as well as people)

I should make one very important clarification: dotLess is NOT my project! The original Ruby project was created by Alexis Sellier and Dmitry Fadeyev, and it was ported to .NET by Christopher Owen, Erik van Brakel and Daniel Hoelbling. Apologies if this wasn’t clear from the presentation – from the questions afterwards, I appear to have inadvertently given the impression I was one of the project contributors. I’m not; I just think it’s an awesome library and wanted to give it a bit of exposure.

You can download the demo code from last night’s talk here:

    http://www.dylanbeattie.net/dot_less_demo.zip 

You should just be able to unzip the whole lot into a folder, bring up LessDemo.sln in Visual Studio and hit “Run” – but let me know if you’re having any problems with it.

The other bits I spoke about in the talk are:

Finally, I’m very excited to see that for the next version of LESS they’re migrating away from server-side Ruby and porting the entire library to JavaScript – so you can run it client-side or server-side depending on your setup. Nothing’s officially released yet, but you can download the latest source from the less.js project on GitHub – definitely a project worth keeping an eye on.

Tuesday, 2 March 2010

A Manager’s Perspective - Why Attend Progressive.NET Tutorials?

SkillsMatter are organizing another series of their Progressive.NET tutorials this May. I’ve heard from several people who would love to attend but are struggling to convince their managers that it’s worth the money (and to justify the time out of the office) – so I’d like to take this opportunity to go on the record, as a software manager, to endorse these tutorials.

If you’re in a hurry, here’s the quote:

I’m Dylan Beattie. I’ve been programming professionally for over 10 years, and I’ve run a software team for 5 years. I believe that the SkillsMatter Progressive.NET tutorials offer more knowledgeable speakers, more relevant content, and better value for money than any other paid training course I have ever attended – and I believe that the skills learned at these tutorials will almost immediately result in improved productivity and better-quality software.

Progressive.NET is not a dry textbook lecture delivered by some professional trainer who doesn’t care what happens after you leave the room. It’s enthusiastic, passionate experts, who hate wasting time and money as much as you (and your boss) do, sharing lessons they’ve learned the hard way, and demonstrating tools that will help you build better software faster.

I manage a team of web developers, who build software on Microsoft platforms – Windows, IIS, SQL Server, ASP, ASP.NET, MVC, and every kind of data access framework from good old ADO through ADO.NET, LINQ to SQL, Castle ActiveRecord and Fluent NHibernate. Between us we’ve built and shipped software using almost all these platforms. We’ve spent painful hours and days tracking down obscure bugs in seven-year-old VBScript code, and we’ve seen how smoothly everything happens once you have unit tests, continuous integration and a decent development environment in place.

Over the years, my team & I have been on various “proper” developer training courses, including a five-day Microsoft-certified ASP.NET course that cost thousands of pounds and was basically a complete waste of time.

Three of us attended the SkillsMatter Progressive.NET tutorials last year, and found the event to be very, very good. The hardest thing about adopting a new technology like NHibernate or Castle is working out where it can help, and what it can do – and the best way I’ve found of getting this “guided tour” of a project’s capabilities is to spend a few hours with an expert like Ayende or Hammett showing you how it works and how it all fits together. One of my team also said that it was the first time he’d really spent time out of the office thinking about how to write better software – and he’s right; it’s hard to find the time when you’re at your desk day in, day out, to really take a step back and think “could I be doing this better?”

From a team lead / management perspective, the lessons learned and insights gleaned from the sessions at last year’s tutorials have made it much easier to base new projects on tools like NHibernate – and this has resulted in genuine time savings. We’re delivering better code, we’re delivering it faster, we’re not wasting time writing stored procedures and SELECT statements. We have fewer bugs. We have better release processes. We are more productive because we are inspired by learning new techniques, we’re using the right tools for the job, and we feel like part of a larger community.

More importantly, having sat in a room full of people sharing the same interest in software and software development, we don’t feel like we’re going off on some crazy hippy open-source tangent. We’ve talked IoC and ORM with people who write e-commerce systems, insurance software, finance software and major enterprise systems for healthcare organisations. It’s a radically different perspective from trying NHibernate because you read about it on some guy’s blog. It’s given me, my team, and our business stakeholders confidence in these tools, and access to a rich network of fellow users and experts who are happy to advise and help out when we get a bit lost.

Wednesday, 17 February 2010

Fun and Games with NHibernate and String Keys

One of the team spent much of yesterday banging his head against the wall. We had a query that took 18ms to run in Query Analyzer, and 1800ms to run from NHibernate. NHibernate Profiler would report 1800-2200 ms – and copying’n’pasting the exact same query statement into Query Analyzer would run it in < 20 ms. Running the same SQL statement via ADO.NET, from the same block of code, proved it wasn’t a network latency issue or anything - ADO.NET came back in 20ms, NHibernate still took over 2 seconds.

Finally, thanks to SQL Profiler, we managed to work out what was going on. The table in question had a primary key which was a varchar(9); in this instance, we were filtering on a particular primary key value – and NHibernate was specifying the value as an nvarchar(6), because NHibernate treats strings as Unicode (nvarchar) by default. This C# code

currentSession
    .CreateCriteria<UserFeedItem>()
    .Add(Expression.Eq("User.UserCode", "abc123")) // UserCode is a varchar(9) in the database for historical reasons
    .AddOrder(Order.Asc("Received"))
    .SetFirstResult(firstResult)
    .SetMaxResults(resultCount)
    .List<UserFeedItem>();

was resulting in the following SQL statement

exec sp_executesql
    N'SELECT TOP 50 FeedItemId16_0_, FeedUser16_0_, MailSent16_0_, ItemSeen16_0_, Deleted16_0_, Received16_0_
    FROM (
        SELECT this_.FeedItemId as FeedItemId16_0_, this_.FeedUser as FeedUser16_0_, this_.MailSent as MailSent16_0_, this_.ItemSeen as ItemSeen16_0_, this_.Deleted as Deleted16_0_, this_.Received as Received16_0_, ROW_NUMBER() OVER(ORDER BY this_.Received) as __hibernate_sort_row FROM Spotweb.dbo.[FeedUserFeedItem] this_ WHERE this_.FeedUser = @p0) as query WHERE query.__hibernate_sort_row > 50 ORDER BY query.__hibernate_sort_row',
    N'@p0 nvarchar(6)',
    @p0=N'abc123'

This means SQL Server has to do an implicit conversion when comparing your parameter value to the table’s primary key – and because nvarchar has higher data type precedence than varchar, it’s the column that gets converted, not the parameter. Instead of seeking the index, SQL Server has to scan every row, convert the varchar key to nvarchar, and then check whether the result equals your parameter. As you can imagine, this slows things down a little – especially on a table with several million records.
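For comparison – and this is a sketch, not code from our actual project – the plain ADO.NET version is fast precisely because you can declare the parameter as a varchar of the right length, so no conversion happens on the key column. Assuming connection is an open SqlConnection (names here are illustrative only):

// Hypothetical ADO.NET equivalent - table and column names are illustrative.
// Declaring the parameter as SqlDbType.VarChar (rather than the default NVarChar)
// means the key column isn't implicitly converted, so SQL Server can seek the index.
using (var cmd = connection.CreateCommand()) {
    cmd.CommandText = "SELECT * FROM FeedUserFeedItem WHERE FeedUser = @code";
    cmd.Parameters.Add("@code", SqlDbType.VarChar, 9).Value = "abc123";
    using (var reader = cmd.ExecuteReader()) {
        while (reader.Read()) {
            // ... map rows as required
        }
    }
}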

The immediate solution for this was to rewrite the code as follows:

currentSession
    .CreateQuery("from UserFeedItem where User.UserCode = :code")
    .SetParameter("code", "abc123", TypeFactory.GetAnsiStringType(9))
    .SetFirstResult(firstResult)
    .SetMaxResults(resultCount)
    .List<UserFeedItem>();

– notice how on the third line we’re explicitly passing TypeFactory.GetAnsiStringType(9) to force NHibernate to send the parameter as a varchar(9). A better solution is to explicitly specify the mapping type for the column – we’re using Fluent NHibernate, so the mapping override code looks like this:

public class SiteUserOverrides : IAutoMappingOverride<SiteUser> {
    public void Override(AutoMapping<SiteUser> map) {
        map.Id(u => u.UserCode).CustomType("AnsiString");
    }
}

and now any reference in any query to a SiteUser.UserCode will be properly mapped as varchar instead of nvarchar.
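The same trick works for ordinary (non-key) string columns that happen to be varchar in the database. This is a hypothetical example rather than one of our real mappings – the entity and property names are made up – but the shape is the same:

// Hypothetical override for a non-key varchar column (names invented for illustration).
public class FeedItemOverrides : IAutoMappingOverride<FeedItem> {
    public void Override(AutoMapping<FeedItem> map) {
        map.Map(f => f.Category).CustomType("AnsiString").Length(20);
    }
}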

Moral of the story: be careful with varchar and nvarchar keys when using NHibernate; make sure there aren’t implicit type conversions happening all over the place, because they can seriously mess up your performance.

It’s also worth noting that even the excellent NHibernate Profiler couldn’t shed any light on what was going on here – SQL Profiler (included with SQL Server) is still an incredibly powerful and under-used tool, and it’s well worth spending a couple of hours getting to grips with it.