Wednesday, 26 October 2011

GiveCamp UK 2011 – A Retrospective

I spent this last weekend at UCL’s Bloomsbury campus in London, with a hundred or so charitable geeks, at the first UK GiveCamp. I came away from it amazed – at the generosity of the volunteers and sponsors alike; at the sophistication of some of the solutions that were delivered within a single weekend; and at the reactions from the charities involved when we presented our projects on Sunday afternoon. It was a wonderful experience, and one I’d happily do again, but in the spirit of agile and continuous improvement, here’s my own personal retrospective on GiveCamp UK 2011.

What went well?

  • The project. I was lucky enough to be working with a team who were helping Scene and Heard, a mentoring project based in north London that arranges for children to work with volunteer theatre professionals to write and perform plays. Simma, their Head of Development, showed up on Friday with a great pitch, obvious enthusiasm, a wealth of knowledge about the domain, and a “wish list” of projects and ideas for us to investigate. Top of the list was a ticket booking system – and I was immensely heartened when our entire team of coders – you know, people who like to build stuff – unanimously agreed that the pragmatic solution would be to hook them up with EventBrite. By 8pm on Friday, we’d drawn up a backlog of requirements, made sure EventBrite would actually deliver what they needed, and a couple of the team were already working on the integration. This left the rest of us free to start looking at some of the more blue-sky ideas on Simma’s list… and somehow by Saturday afternoon we were building a full relational database, MVC web front-end, and a set of data migration and normalisation routines to import their data from a collection of Access databases and Excel spreadsheets into SQL Server. It’s not quite live yet – we’re having some problems getting the Entity Framework config code to work with AppHarbor’s SQL Server databases – but by the end of the week we hope to have it online, hosted, and handed over to them, for immediate use, or as a solid platform for future development.
  • The venue. The power worked. There was ample desk space and seating. The wi-fi was pretty solid – for most of the weekend we were actually using their wi-fi network for all our database access and development, as well as simple web browsing & e-mail, and other than an occasional IP address change, it worked.
  • The catering. The food was excellent, there were ample supplies of drinks, snacks, awesome tea thanks to Teapigs, fresh fruit and enough Haribo to keep you wired and coding well into the small hours. Result.
  • The wrap-up. Without Paul & Phil imposing a strict code cut-off, we’d have happily coded until they threw us out – but in retrospect, having a hard deadline with plenty of notice really focused our efforts towards the end of the project.
  • The platforms. We used Github for hosting, Trello for organising our backlogs and a whole lot more besides; EventBrite solved the ticketing problem, and we’re in the process of deploying to AppHarbor. All fantastic, powerful tools – and all completely free.
  • The swag. 80GB SSDs for all the attendees? Best. Swag. EVER. Plus a hugely generous range of licenses, software, books and ebooks. Check out the list of sponsors at http://www.givecamp.org.uk/sponsors – I’m hugely grateful to every single one of them for making this amazing event a reality.
  • The people. Everyone was helpful, cooperative, collaborative and enthusiastic, and I hope everyone learned as much working with each other as I did working with them.
  • The demos. I was really, really impressed at what the teams delivered. From discovery to implementation to – in some cases – deployment onto a live website, in under 48 hours… as Ben Hall put it, it “makes you wonder what we normally do all day…” I am also not at all biased by how genuinely delighted Simma and Jasmine looked when we showed them our work. Honest.

What could have gone better?

  • I thought the Friday start was too early. Lots of people – including several of the charities and one of the organisers – were held up in transit and missed the opening pitches and kick-off. The only downside of a venue as wonderfully accessible as UCL Bloomsbury is that at 5pm on a Friday it’s smack in the middle of the worst rush hour in the country. Just an idea – but I’d suggest a preliminary session next time, maybe earlier in the week, with the team leaders and the charity reps: a solid couple of hours capturing requirements and understanding the domain. We’d have been up to speed much quicker if one or two of us had had the chance to think things over ahead of time.
  • That’s actually the only criticism I have of the event itself. The rest are memos to myself and the team for next time:
    • Bring an ergonomic keyboard! By Sunday afternoon my arms were starting to seize up… I’m not a laptop coder by choice, and having one of my trusty MS Natural 4000s would have made a *huge* difference. Plus a proper mouse. And a second monitor.
    • Stick to what you know. This is not an event for trying out new technology – unless someone on your team knows it well, don’t go near it. We spent a good half-day investigating Visual Studio LightSwitch before concluding that we just didn’t have the expertise to tailor it to our requirements… it was gone 3pm on Saturday when we finally settled on ASP.NET MVC 3 with Entity Framework, and an hour later we were absolutely flying.
    • Get a Skype chat or IRC channel set up ASAP. You’re going to be sharing lots of addresses, API keys, URLs – stuff that’s time-consuming to read out loud or write on paper.

In conclusion – wonderful event, I’m really pleased to have been part of it, and I hope the rest of our team will be at the next one because I’d love to work with them again some time. And huge thanks to Paul Stack, Rachel Hawley, Phil Winstanley, Kendal Miller and Dave Sussman, who all looked absolutely worn out by Sunday evening – it wouldn’t have happened without you!

Saturday, 17 September 2011

Moleskine + Kindle = ... Moleskindle?

Faced with the harrowing prospect of trying to fit the entire Song of Ice and Fire into hand luggage on my next holiday, I bought a Kindle. It's quite magic. It won't switch off, it doesn't light up, it looks utterly fake - like some sort of plastic prop tablet device where they've used a printed cardboard screen... and it's absolutely lovely. My paperback books tend to end up rather battered from being slung around in bags all the time, and I wanted a case to keep the Kindle safe from knocks and scratches. Rather than spend money on one of the ridiculously overpriced cases you can get for it, I wanted to try something a bit different. You remember reading spy books as a kid where people would hide stuff inside hollowed-out books?

I found an old Moleskine notebook that was just the right size for it, and started hacking away - a couple of happy hours playing with craft-knives and glue, and here it is: the Moleskindle.

[Photos of the finished Moleskindle]

It's a bit fiddly - and messy - getting the cutouts just the right shape; I found wood glue worked just fine - and once it's dried, the compacted glued paper is quite easy to carve & trim using a sharp craft knife. I cut a notch in the right-hand side so I can reach the page-turn buttons whilst it's in the case, but you need to pop it out to reach the power button or recharge it. Still, I think it looks pretty cool, it'll stop the Kindle getting knocked and scratched, and you can fool people on the Tube into thinking you're reading something incredibly intellectual that's been hand-written in a Moleskine notebook when you're secretly reading rock star autobiographies.

Sunday, 11 September 2011

Software Development - HORSE-style

There are as many ways to lose at poker as there are ways to fail at delivering software, but one variation I have yet to experience is a game called HORSE. In HORSE, each hand follows a different set of rules - you'll play a hand of Hold'Em, a hand of Omaha, a hand of Razz, a hand of Stud, and a hand of Stud Hi-Lo; then you go back to the beginning and do it all over again, until my brother has all my chips.

Inspired by this, I've devised the following brilliant software methodology for all those teams who can't quite settle on a system that works for them. It's called WALKS, and you work in two-week sprints, using a different methodology for each sprint to ensure you get the maximum efficiency from all these wonderful processes and systems:

Weeks 1-2: Waterfall
You spend the first two weeks making bold, ambitious, big-design-up-front plans, and not actually writing any code or shipping any features.

Weeks 3-4: Agile
You spend the next two weeks trying desperately to get *something* built and releasable.

Weeks 5-6: Lean
Realizing that your "big design" is probably killing your attempts to be agile, you start hacking out unnecessary features and trying to pare the design back to something you might actually be able to build.

Weeks 7-8: Kanban
You still don't know what you're doing, so you decide to write everything on Post-It notes and stick them to a board, figuring that if you start pulling jobs off the queue, you might at least get *something* done.

Weeks 9-10: Scrum
You have two weeks of daily stand-up meetings, in a desperate attempt to get a handle on things. Finally, you sit down on Friday afternoon, have a two-hour timeboxed retrospective, and decide that what you really need is a full set of requirements and a definitive spec.

Then you take a weekend off, come in on Monday, and start at the top again.

If that sounds familiar, it's OK - you're not hopelessly lost, confused or unproductive; you're just taking a structured approach to being multi-disciplinary...

(This is a joke post. Please don't use WALKS to build software. Ever. The world has enough problems as it is...)

Monday, 1 August 2011

SkillsMatter Progressive.NET Tutorials 2011

Some of you may have heard about - or attended - SkillsMatter's Progressive.NET Tutorials over the last couple of years. This is a three-day programme of in-depth workshops covering the latest languages, frameworks and techniques in .NET development. The great thing about the workshop format is that it provides enough time to actually get some hands-on experience; instead of the rapid-fire 45-minute lectures you'll find at most conferences, you can actually try things out, ask questions, work through examples, and really get to grips with the techniques or frameworks you're exploring.

This year, Ian Cooper - whom you may know from the London .NET User Group - has put together a great programme of speakers and topics, and I'm really excited to say that, for the first time, I'll be there as a speaker instead of an attendee.

My workshop is called "Front-End Tips for Back-End Devs". I'll be looking at how many of the techniques we take for granted in back-end development - including DRY, abstractions, packaging and dependency management - can be applied to your page layouts, stylesheets and scripts. We'll cover semantic markup, we'll take a whirlwind tour of all the wonderful new tags introduced by HTML5, we'll look at CSS sprites and media queries, and we'll see how you can keep your web UI as clean, elegant and maintainable as the rest of your codebase.

There's also Damjan Vujnovic talking about TDD in JavaScript; I went along to one of Damjan's talks earlier this year and was really impressed, so I'm looking forward to the opportunity to go over his ideas in a little more depth. Jon Skeet will be talking about async/await programming in C# 5 (no word yet on whether Tony the Pony will be there). There are workshops on continuous deployment, on web development in F# using the WebSharper framework, on packaging and dependency management, on REST, on Nancy, on SimpleData - in fact, if you've heard the .NET community buzzing about anything in the last year or so, chances are there's a workshop here that will show you what they're all so excited about.

It's at the SkillsMatter eXchange in London, from the 5th to the 7th of September 2011. The cost is £425 (yes, it's not free, but you do get three days of top-notch content without giving up your evenings or weekends) - and you can use the promo code PROGNET50 to get £50 off.

Sign up online here. To keep up with event news, follow the hashtag #prognet on Twitter, and it'd be great to see you there.

Wednesday, 29 June 2011

Just Do It: Command-Query Segregation, Nike-Style

OK, CQRS is a hugely misapplied and misunderstood architectural style. The insight I'm sharing here is based on my attending Udi Dahan's Advanced Distributed Systems Architecture course, then applying what I'd learned to a project we were building for 3-4 months, then having one of my colleagues go on the same course, and then having him come back and point out everything we'd done wrong. Let's assume you are already using CQRS. Maybe you're using it appropriately; maybe it's over-engineering; maybe it's completely misapplied. Doesn't matter, for what I have to say here; there are smarter folks than I who can tell you whether you should be doing it in the first place. No, I'm here to share a particular insight about CQRS with you.

Your commands should be like a psychotic drill sergeant screaming orders.

When you issue a command, your work is done. End of story. Maybe it happens immediately. Maybe there's a delay, and someone has to wait a few seconds. Maybe it doesn't go according to plan, and somebody else notices afterwards, and they call someone else, and it gets fixed up. Maybe it fails spectacularly. You don't care. (Hell, you're probably dead by now. Nobody would ever have won any wars if everyone threw an exception when they found the sarge face-down in a fox-hole with a bullet-hole in his head.)

You're the sarge. You are IN COMMAND. You order someone to destroy the ammo dump in North Camp, then you get on with your life. Tomorrow morning, you'll get a fresh intel report. Maybe it'll say that the ammo dump in North Camp has been destroyed - maybe it won't. You'll review the fresh intelligence, decide what to do next, issue a fresh batch of orders - and get on with your life. That's CQRS. Your data is stale, your word is LAW, and you have better things to do than hang around wondering if you maybe did the wrong thing. If you give a command, and it doesn't get obeyed, there are exactly two potential outcomes:

  1. Nobody notices
  2. Somebody notices

Nobody notices? Cool. No problem. Somebody notices? Well - that's where you hope it's one of your guys (i.e. alerts, logging, infrastructure, monitoring) instead of one of THEIR guys (i.e. customers/clients). That's how you do CQRS. Get your intelligence - your queries. The freshest data you can get, but don't bust a gut if it's a little out of date. Give your orders. Trust your intelligence. Get on with your life. Rinse. Repeat.
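
To make that concrete, here's a minimal sketch in C#. The names - DestroyAmmoDump, ICommandBus, IIntelReports - are made up for illustration, not taken from any particular framework; the point is the shape of the thing: commands are one-way messages with no return value, and queries read data that you accept might be a little stale.

// Hypothetical types, for illustration only - not a real framework.
public class DestroyAmmoDump            // a command: an order, not a question
{
    public string CampName { get; set; }
}

public interface ICommandBus
{
    // Returns void: you issue the order and get on with your life.
    // No result, no waiting, no exception if the order never gets carried out.
    void Send(object command);
}

public interface IIntelReports
{
    // The query side: read-only, and quite possibly a little stale. That's fine.
    bool IsAmmoDumpDestroyed(string campName);
}

public class Sergeant
{
    private readonly ICommandBus bus;
    private readonly IIntelReports intel;

    public Sergeant(ICommandBus bus, IIntelReports intel)
    {
        this.bus = bus;
        this.intel = intel;
    }

    public void MorningBriefing()
    {
        // Query: review yesterday's intelligence.
        if (!intel.IsAmmoDumpDestroyed("North Camp"))
        {
            // Command: issue a fresh order, then get on with your life. Whether it
            // succeeds is a matter for tomorrow's intel report, not a return value.
            bus.Send(new DestroyAmmoDump { CampName = "North Camp" });
        }
    }
}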

Saturday, 11 June 2011

How to install "Active Directory Users and Computers" on your Windows 7 Workstation

This one stumped me until I hit upon the magical combination that makes it work.

  1. Install Windows 7 Service Pack 1.
  2. Download the Remote Server Administration Tools for Windows® 7 with SP1
  3. Install them.
  4. Reboot
  5. Go into Start -> Control Panel -> Programs and Features, and go to "Turn Windows features on or off" - because the installer will download and install the admin tools, but won't actually switch them on. Helpful. Tick the Active Directory tools (under Remote Server Administration Tools -> Role Administration Tools) and hit OK.

What makes it fun is that it just fails silently until you get it right. No error messages, no warnings... installer didn't even tell me I was missing SP1 - which I thought I already had.

Thursday, 9 June 2011

Why Cloning Classic Games in Javascript Makes for a Great Hack Day

[Far Side cartoon by Gary Larson]

Hack days should be about code. Anything that stops you writing code (or talking about / editing / refactoring code) is friction.

There are two things in any software project that tend to cause huge amounts of friction - certainly during the early stages. One is tooling. Installing compilers takes time. Installing libraries takes time. Configuration takes time. I remember a day at Snowcode last year when we spent literally five hours installing Ruby, various build tools, make files, modules, browser automation components, plug-ins... I don't think I wrote a single line of code that day. It was interesting, and educational, but a hack day should be about building stuff.

The other huge source of friction in software development? Debates. There are X ways of doing something, and you can't make any progress until you've chosen one. Should we allow HTML in the comments? How big do we make the gallery thumbnails? What colour should we paint the bike-shed? On a "real" project, these decisions are made by the product owner - but for something like a hack-day, if you put one person in charge of all the design, decision-making and prioritisation, they'll rapidly become a rather frustrated bottleneck.

So - pick a problem that's clearly-defined and well-understood, and solve it using a language that everyone's already got, that doesn't need a compiler, linker or build environment, and that everyone can run just by opening a web browser.

In other words - clone a classic game in JavaScript.

Choose something everyone's played. Bomberman. Tetris. Lemmings. Asteroids. Pac-man. Have a copy of the game on hand - on a laptop, or an emulator, or bring a console, or whatever - so if anyone asks questions, you can just refer to your definitive reference implementation and get back to work. That'll eliminate debate without handing anyone the poisoned chalice of product ownership on a volunteer-based project.

And embrace the awesome lightweight expressiveness of JavaScript - the only language that you can write and run, out of the box, on every single computer since Windows 98. There's no compiler. There's no IDE, no build chain, no runtime or virtual machine or standard libraries to install. People can use vi, Visual Studio, TextMate, Notepad - whatever they like. (Personally, WebStorm is rocking my world right now - JavaScript intellisense and refactoring with built-in Git support... it's fantastic. Just remember that Ctrl-Y doesn't redo by default and you'll love it.)

OK, this weekend we were using NodeJS, so the build/run/test cycle involved restarting the node server (which is Ctrl-C, up, enter) - but the guys working on the renderer didn't even need a server. They built a client-side test harness (index.html), and their build and deployment cycle was Ctrl-S, Alt-Tab, F5.

That's low friction. That's walking in off the street, opening up your laptop, pulling the code, and starting to build stuff straight away. And I like that.

Sunday, 5 June 2011

Notes from the KaboomJS! LonDev Hack-Day

A couple of weeks back, I had this crazy idea to build Bomberman, in Javascript, in a day. I floated the idea on Twitter and got a pretty enthusiastic response, and so I set up the first LonDev hack day. 12 people, in a room, for one day, working together to build and ship a working game.

Did we do it? You'll have to read all about it on the new LonDev wiki, but personally, I'm really pleased at how it went, and really excited at the idea of organising the next one.

What's really encouraging is the mix of people and expertise who contributed. We had a couple of .NET coders, some Ruby/Rails guys, some JS web hackers who'd not done node/socket stuff before, and @palfrey who, as far as I can tell, spends his days switching between Erlang and PHP to stop himself getting bored.

A fun day. Some really solid code. Some really interesting lessons learned. And there's a couple of us hanging out in #kaboomjs on Freenode over the next few days to get it finished off and up and running.

Tuesday, 26 April 2011

Want to work for Spotlight?

Spotlight are hiring! We're looking for someone to join our software team full-time, in a senior development position. An experienced scrum master, who knows how to work with business and technical people to make things happen. Somebody who understands how to create great software. From database optimization to SOLID principles to TDD to user experience and accessibility, you understand what makes software great - great to use, great to maintain, great to extend.

We're at a turning point. Five years ago, we were an award-winning publishing company who maintained our own website. Five years from now, we're going to be a software company who publish award-winning directories. It's a great place to work, and it's a really exciting time to be here. It's a full-time, permanent job, working at our office just off Leicester Square. We're upstairs from the Prince Charles Cinema – London home of Sing-along-a-Sound-of-Music and The Room – and surrounded by excellent bars, restaurants and theatres.

If you understand 90% of the postings on my blog, you’re probably in the right ball-park in terms of technology – but if you want buzzwords, it’s C#, .NET, agile, scrum, MVC, Castle Windsor, NHibernate, NServiceBus, jQuery, IIS, SQL Server, NUnit, SOA, TeamCity, FinalBuilder, msdeploy, and various other bits that are occasionally referred to as the “alt net stack”.

Interested? Read the full job spec, and details of how to apply, at www.spotlight.com/jobs/developer.html

NO AGENCIES. Seriously. If we want to deal with agencies, we’ll call you. If you call me, I will put my phone handset in a drawer, close the drawer, and let you talk to my stationery while I wander off and make some coffee. If you’re lucky, it’ll only waste 90 seconds of your time. If you’re unlucky, your phone system still uses analogue-switched PSTN and you’ll find you can’t hang up. It’s hard to earn commission when you can’t use your phone, and you’d be surprised how long it takes to make a really good cup of coffee.

Sunday, 10 April 2011

Slides and Notes from “So You Think You Know JavaScript”

The slides, notes and references from my JavaScript talk are now online, at

    http://www.dylanbeattie.net/javascript/

A huge thanks to everyone who came along, to Ian Cooper and the London .NET User Group for organising, and to SkillsMatter for the venue, the projector, the publicity, the video and the ginger tea. A full video of the talk is also available on the SkillsMatter website – and you'll be pleased to hear that their awesome new video processing rig means you can now see my grinning face AND read the code samples on the slides.

The NodeJS demo code is open source and is online at https://github.com/dylanbeattie/BomberJS – fork it, pull it, do whatever you like with it. No warranties as to whether it’s any good or not… but it’s there and it works.

A couple of people asked afterwards about running Node on Windows, as I was doing in the demos. I was using a compiled binary from http://node-js.prcn.co.cc/, which worked absolutely fine for little demo apps with 5-6 concurrent client connections. I've no idea how it scales, but the general consensus seems to be that you should stick to Linux / MacOS for hosting any significant Node applications.

Churn-down Charts

Our team have just finished a sprint on a project that's using loads of new technology – MSMQ, NServiceBus, WCF – that we've not worked with before, and it's played havoc with our estimates of how long everything was going to take. We hit our deadline, but only thanks to the product owner shifting a group of features into the next sprint, and at the retrospective everyone agreed that the process worked just fine but we didn't have any really good way of visualising it. We have a burn-down chart – actually, we have two, 'cos there's one drawn on the whiteboard and there's one in FogBugz as well – and what we've been doing is marking the number of hours left at the end of every day. On days when we discover more problems than we ship features, this looks like we're moving backwards… which is true in terms of monitoring progress and planning, but isn't great for morale, and it doesn't really explain what's going on.

So, I’ve come up with this, as a way of tracking estimation accuracy and churn as well as straightforward progress. I’ve no idea if it’s original or not, I don’t know whether it has a name, but I’ve called mine a churn-down chart. I’ve annotated this example to show you what happens over the course of the project – click for a bigger version.

Churn-down Chart

Basically – when the green line hits the red line, you’re done. The product owner controls the red line, by adding and removing features from the sprint. Yes, I know you’re not allowed to add stories to a sprint that’s in progress - I think the chart actually demonstrates why. The little blue tails were inspired by this fantastic visualisation of budget forecasts compared with reality (which I found via Chart Porn), and they track how many hours of features we actually delivered that day, as opposed to how many hours we have left at the end of the day. This clearly shows the difference between days when we were productive but discovered lots of unplanned work, as opposed to days when we were just stuck.

I like the way it empowers the product owner to actually work with the team to hit the deadline – you’re not working towards a fixed target, you’re both dealing with shifting requirements as you find bugs and have ideas, and you can see at a glance whether you’re on target or not, and if not, why not.
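
If it helps to pin down what the three lines are actually tracking, here's a rough sketch of one day's data point – the class and property names are entirely hypothetical, it's just the model I have in my head when I'm drawing the chart:

using System;

// Hypothetical sketch of a single day's data point on a churn-down chart.
public class ChurnDownDay
{
    public DateTime Date { get; set; }

    public double HoursRemaining { get; set; } // the green line: estimated hours left at the end of the day
    public double TargetHours { get; set; }    // the red line: adjusted by the product owner as features are added to or removed from the sprint
    public double HoursDelivered { get; set; } // the blue tail: hours of features actually delivered during the day

    // "When the green line hits the red line, you're done."
    public bool SprintIsDone
    {
        get { return HoursRemaining <= TargetHours; }
    }
}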

Monday, 4 April 2011

Development Methodologies

By now, everyone's seen test-driven development (TDD), behaviour-driven development (BDD) and domain-driven design (DDD) – but there are some other development paradigms that haven't got nearly the attention that they deserve.

Attention Deficit Disorder Driven Design (ADDDD)

Most commonly seen in open-source projects. You begin by implementing a core feature. After a couple of days, when either it gets boring or you’ve coded yourself into a corner and can’t work out how to get out, you pick a new feature and start implementing that one instead. Advantages of this approach are that you can tick “in development” on the feature comparison charts when evaluating your solution against the alternatives. Disadvantages are that it leads to crappy software that doesn’t work.

Attention Deficit Hyperactive Disorder Driven Development (ADHDDD)

Just like ADDDD, but features are only ever added in brief caffeine-fuelled bursts of manic coding, usually around 4am, accompanied by dozens of tweets, blog posts and Facebook status updates.

Developer Developer Developer Driven Development (DDDDD)

Projects are started twice a year, normally the week immediately after the popular DDD community event at Microsoft headquarters, and generally involve building something really ground-breaking like a wiki or a blog engine, just to "get your head around" all the amazing new stuff you've seen at DDD. You'll generally lose interest about two days after you put the code up on Github as a "pre-alpha technology demo", and then six months later you'll do the whole thing all over again.

Advanced Dungeons & Dragons Driven Development (AD&DDD)

Everyone sits around drinking Red Bull, eating Doritos, boasting about their accomplishments and pretending to be some sort of tenth-level software architect when deep down they’re still not quite sure what a pointer is. A “dungeon master” (also known as a “project manager”) occasionally rolls some dice or reads a Gartner report, and then tells them that their project has died. Then they do it all over again, once every couple of months, sometimes continuing well into middle age.

Acronym Driven Development (ADD)

The HORSE of development methodologies; you consistently blame the failure of your last project on the fact that you picked the wrong methodology, and resolve to try something different on your next project. The conventional approach is to go test-driven, then behaviour-driven, then domain-driven, then extreme, then back to domain-driven. It’s a very educational way of wasting your employer’s time and money, and there’s normally someone in a back room happily coding away who doesn’t have the faintest idea what the rest of you are doing, but is probably shipping enough features to keep your company afloat.

Tuesday, 22 March 2011

I’ll be talking about JavaScript at Skills Matter on April 5th

On April 5th I'll be giving a talk on JavaScript at SkillsMatter here in London. It's being organised by the London .NET User Group, but it's not a .NET talk. Instead, it'll cover a range of topics related to JavaScript's history, the current state of the language, and where this widely-used and widely-misunderstood language is heading.

JavaScript is fifteen years old, and the principles that influenced JavaScript's architecture go back to the very dawn of computer science. It's a powerful, expressive, dynamic language that's now being used to deliver some of the biggest and most popular software applications in the world – and yet a whole generation of developers still thinks of JavaScript as being a scripting language that's barely good enough to make pop-up windows appear on a web page.

There's a lot of very cool stuff going on in the JavaScript world right now. With HTML5's offline storage, you can use JavaScript to write client applications that you can install on your phone or your laptop and run even when you're offline. With CommonJS, there's finally a unified effort to create a standard runtime library for JavaScript so we can write JS programs that support file systems, networking, loadable modules and unit tests. With NodeJS, there's a fast, scalable framework for writing HTTP servers as collections of discrete JavaScript components. With frameworks like KnockoutJS, there's declarative support for building rich web user interfaces in JavaScript - and it's still pretty good at doing pop-up windows as well.

So you think you know Javascript? Sign up, come along and find out. I’ll bet you a pint there’s something in there you’ve never seen before.

Saturday, 26 February 2011

True Names, and Other Dangers: What Dr. Seuss Can Teach Us About SoA

Did I ever tell you that Mrs. McCave
Had twenty-three sons, and she named them all Dave?

Well, she did. And that wasn't a smart thing to do;
You see, when she wants one, and calls out "Yoo-Hoo!
Come into the house, Dave!" she doesn't get one;
All twenty-three Daves of hers come on the run!

- from “Too Many Daves” by Dr. Seuss

They say there’s only two hard problems in software – cache invalidation, naming things, and off-by-one errors. Cache invalidation’s hard because it’s difficult to clarify requirements. Off-by-one errors are hard because the joke wouldn’t work without them. But naming things? How hard can that be?

If you've ever worked in IT support, you'll have had calls saying "the system is down". Sometimes, a more enlightened caller will helpfully tell you that it's 'the network' or 'the database' that's broken. I once started at a job where everybody referred to everything as "sequel". There had been a big database migration a few years earlier, resulting in a new website and new desktop software, and the whole process had been referred to as "upgrading to SQL Server". Everyone kept hearing the techies talk about "upgrading to sequel", and so when they got something new on their desktops, they concluded "Ah – this must be that sequel thing that everyone's been talking about!". Two days later, they'd call up and say there was a problem with 'sequel' – and in that context, 'sequel' could refer to just about anything. The name was overloaded to the point of uselessness.

"Ah yes - but it's *services* all the way down!"

What's scary is that this happens all over the industry. People talk about "Software as a Service", when what they're actually dealing with is an XML web service that's connecting to a WCF service hosted in a Windows service to provide a business service. Like dear old Mrs. McCave, we're finding out that names are great if they're unique, but when different things start laying claim to the same names, you're going to end up cross-eyed.

So, as my team start teasing apart our proverbial big ball of mud, I'm trying out a new naming policy for the new components and modules we're building: pick a word that sounds nice and doesn't mean anything within our business.

The last three projects I worked on were called Rosemary, Tarragon and Kamogelo. No namespace conflicts, no semantic overloading and no clashing with reserved keywords. Rosemary’s almost like an employee – it has an event log, and a mailbox, and a sufficiently clear sense of identity that people seem to get it. When they say there’s a problem with Rosemary, they’re right; it doesn’t take 15 minutes to work out what they mean, and that’s a good thing. It also encourages clear separation of concerns, and facilitates good discussion thereof – lots of “does this feature belong in Rosemary or Tarragon?” instead of just adding another class to the legacy codebase.

So it’s goodbye, “customer e-mail service” and “accounts system” and “web shop”, and hello to Sundance, Monolith and Aquarius. And before too long, Moonface and PuttPutt and Shadrack, in recognition of dear old Mrs. McCave.

Thursday, 24 February 2011

Making HttpContext.Current Available Within a WCF Service

I needed to add a quick’n’dirty WCF service to an ASP.NET MVC web application, so I could call a handful of methods from a different application.

The MVC app in question is using Windsor, NHibernate and the repository pattern, so we’ve got a fairly standard pattern where we spin up a ManagedWebSessionContext in the Application_BeginRequest handler (in global.asax.cs) and then flush and close the session in Application_EndRequest(). I used the Windsor WCF facility to inject a bunch of dependencies into a little WCF service, but I was finding that SessionFactory.GetCurrentSession() was always returning null – because when you’re using the ManagedWebSessionContext, your NHibernate session is bound to your HttpContext.Current, and by default you don’t have one of these inside a WCF service.
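
(For context, the session-per-request pattern I'm describing looks roughly like this – a sketch rather than our actual global.asax.cs, and assuming an ISessionFactory that's been resolved from Windsor somewhere:)

using System;
using System.Web;
using NHibernate;
using NHibernate.Context;

public class MvcApplication : HttpApplication
{
    // In the real app this comes out of the Windsor container; it's a placeholder here.
    private static ISessionFactory sessionFactory;

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Open a session and bind it to HttpContext.Current, so anything calling
        // sessionFactory.GetCurrentSession() during this request gets the same session.
        ManagedWebSessionContext.Bind(HttpContext.Current, sessionFactory.OpenSession());
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // Unbind the session from the context, then flush and close it.
        var session = ManagedWebSessionContext.Unbind(HttpContext.Current, sessionFactory);
        if (session == null) return;
        session.Flush();
        session.Close();
    }
}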

However - if you can live with tight coupling between your WCF service and IIS hosting, there’s a couple of little config things you’ll need to do to get this working. What doesn’t help is that until you’ve got all this just right, you’ll get a really helpful “Failed to Execute URL” error from IIS that’ll tell you absolutely nothing about what’s wrong.

First, make sure WCF HTTP activation is installed on your server – in Windows 2008, it’s under Server Manager –> Features:

[screenshot of the Server Manager features list]

Next, make sure you’ve registered the WCF service model with IIS, by running:

C:\WINDOWS\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\> ServiceModelReg.exe -i

Next, make sure your web service is running in ASP.NET compatibility mode. First, check you’ve got this:

<system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>

in your web.config file, and then decorate your service implementation with the AspNetCompatibilityRequirements attribute:

[AspNetCompatibilityRequirements(RequirementsMode= AspNetCompatibilityRequirementsMode.Required)]
public class WcfMagicService : IMagicService {
   . . .
}

The last thing I had to do was necessitated by WCF not supporting multiple host headers; I had to hard-wire the WCF endpoint to listen on a specific hostname. In this case, this involved tweaking the serviceHostingEnvironment section of web.config, which now looks like this:

<serviceHostingEnvironment aspNetCompatibilityEnabled="true">
    <baseAddressPrefixFilters>
        <add prefix="http://services.mydomain.com" />
    </baseAddressPrefixFilters>
</serviceHostingEnvironment>

And then adding another attribute to the service implementation class:

[ServiceBehavior(AddressFilterMode=AddressFilterMode.Any)]
[AspNetCompatibilityRequirements(RequirementsMode= AspNetCompatibilityRequirementsMode.Required)]
public class WcfMagicService : IMagicService {
}

Once that’s done, you’ll have an instantiated HttpContext.Current inside your service methods, so your code – and useful things like NHibernate’s ManagedWebSessionContext – will behave just as they do in normal MVC controllers or WebForms code-behind.
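
For what it's worth, here's a minimal end-to-end sketch of the service itself – the interface and the WhoAmI method are made up for illustration – showing that once both attributes are in place, HttpContext.Current is populated inside the operation:

using System.ServiceModel;
using System.ServiceModel.Activation;
using System.Web;

[ServiceContract]
public interface IMagicService
{
    [OperationContract]
    string WhoAmI();
}

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class WcfMagicService : IMagicService
{
    public string WhoAmI()
    {
        // HttpContext.Current is no longer null in here, so anything keyed off it -
        // like NHibernate's ManagedWebSessionContext - behaves just as it does in MVC.
        return HttpContext.Current.Request.Url.Host;
    }
}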

Monday, 21 February 2011

Check Out smtp4dev if You Build Mail-Enabled Software on Windows

One of my cow-orkers pointed me at a great little utility a while back called smtp4dev. It's an SMTP server that listens on your local machine, and instead of relaying e-mail, it captures the messages and stores them in a queue so you can review and open them.

It’s brilliant – simple and elegant and incredibly easy to use. Just configure your application (website, debugger, logging framework – whatever it is you’re building) to send mail on localhost:25, fire up smtp4dev, and watch the messages pile up. I’ve been building SMTP appenders for log4net this evening, and it’s been really, really useful.
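
For example – this is just a bog-standard System.Net.Mail snippet, nothing smtp4dev-specific, with made-up addresses – pointing your mail code at localhost:25 is all it takes:

using System.Net.Mail;

class MailDemo
{
    static void Main()
    {
        var client = new SmtpClient("localhost", 25); // smtp4dev is listening here
        using (var message = new MailMessage(
            "noreply@example.com",
            "somebody@example.com",
            "Test message",
            "If smtp4dev is running, this ends up in its queue rather than in anyone's inbox."))
        {
            client.Send(message);
        }
    }
}

You can set the same host and port in web.config under <system.net><mailSettings> instead, which saves hard-coding it.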

Binaries and source are at smtp4dev.codeplex.com – well worth a look if you ever write software that sends e-mail.

Monday, 7 February 2011

How to host Git in the same Apache server that comes with CollabNet Subversion

[Photo: the moon rising over the Costa Smeralda, Sardinia. It has nothing to do with revision control.]

CollabNet Subversion Edge is a great Subversion distro that includes Apache 2.2 and the ViewVC web-based repo browser, and makes it really, really easy to get up and running with Subversion and WebDAV. I'm setting up a project server to host something we're working on, and it's been generally decided that whilst Subversion is all very well for keeping Word documents in, we'd quite like something a touch more… distributed for the actual source code repo. And when James Gregory mentioned on Twitter that git would mean "no more tree conflicts", I may have actually started salivating… ahem.

Anyway, yes. Apparently Git is quite good.

Jeremy Skinner has some fantastic notes on how to get git up and running with Apache 2.2 on a Windows server – I followed these pretty much to the letter to get my first incarnation up and running, but had to comment out a bunch of the CollabNet/Subversion settings in the Apache config files to get the Git server running properly. A bit of tinkering, though, and I'm pretty much there. What makes this interesting is that CollabNet includes a web-based admin console, which makes configuring the built-in modules very straightforward, but it does mean several of the config files have this rather ominous warning at the top:

#
# DO NOT EDIT THIS FILE IT WILL BE REGENERATED AUTOMATICALLY BY COLLABNET SUBVERSION 
#

so – any changes we make in there will be just peachy until someone touches the web interface, at which point BOOM! they’ll spontaneously stop working. So whatever we’re going to do, we need to do it without touching any of those files. I wasn’t sure at first whether this would be possible, but it seems to be up and running now and hanging together quite nicely.

Fire up the Apache httpd.conf file in your favourite editor – by default it’ll be in C:\Program Files\Subversion\data\conf\ – and add the following lines at the end:

# Configure Apache to listen for named virtual hosts on port 80
NameVirtualHost *:80

# Include the configuration file for our git http hosting
Include "C:\Program Files\Subversion\data\conf\git_httpd.conf"

Now create a new document called – yep - C:\Program Files\Subversion\data\conf\git_httpd.conf – and make it look something like this:

# HTTP settings for using Apache with MSysGit on Windows
# Based on Jeremy Skinner's notes at
# http://www.jeremyskinner.co.uk/2010/07/31/hosting-a-git-server-under-apache-on-windows/

<VirtualHost *:80>

    # Set this to the root folder containing your Git repositories.
    SetEnv GIT_PROJECT_ROOT D:/Git/
   
    # Set this to export all projects by default (by default,
    # git will only publish those repositories that contain a
    # file named “git-daemon-export-ok”

    SetEnv GIT_HTTP_EXPORT_ALL
   
    # Route specific URLS matching this regular expression to the git http server.
    ScriptAliasMatch "(?x)^/git/(.*/(HEAD | info/refs | \
        objects/(info/[^/]+ | [0-9a-f]{2}/[0-9a-f]{38} | pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
        git-(upload|receive)-pack))$" \
        "C:/Program Files (x86)/git/libexec/git-core/git-http-backend.exe/$1"
   
    # The canonical DNS hostname that you want to use for your git server
    ServerName my_git_server
    
    # Any other DNS aliases that point to your git server
    ServerAlias my_git_server my_git_server.mydomain.com my_git_server.my_intranet.local
    
    # The root folder for non-GIT-hosted documents (e.g. phpgit or some other Web front end)    
    DocumentRoot "D:\gitserver\htdocs\"
    <Location />
        # This section is duplicated from the Collabnet SVN LDAP authentication
        AuthType Basic
        AuthName "Spotlight GIT Repository"
        AuthBasicProvider csvn-file-users ldap-users
        Require valid-user
    </Location>
</VirtualHost>

Check your configuration by running httpd.exe from the command line, like so:

C:\Program Files\Subversion\bin>httpd.exe -f "c:\program files\Subversion\data\conf\httpd.conf" -t
Syntax OK

and if all looks good, go into services.msc and restart the CollabNetSubversionServer service (which is actually Apache).

Finally, I followed Jeremy’s instructions to get GitPhp running, but then replaced it with a different project – also called GitPhp – from http://www.xiphux.com/programming/gitphp/, which provides a full repository browser, revision history, etc.

All I had to do to get GitPHP running was to copy the gitphp.conf.php.example file to gitphp.conf.php, and then tweak the following settings:

/* The root folder of my Git repositories */
$gitphp_conf['projectroot'] = 'D:\git';

/* On 64-bit Windows, C:\Program Files (x86) ends up as C:\Progra~2\ so these need to be configured manually */
$gitphp_conf['gitbin']  = "C:\Progra~2\Git\bin\git.exe";
$gitphp_conf['diffbin']  = "C:\Progra~2\Git\bin\diff.exe";

Job done. I now have HTTP-hosted Subversion (browsable with ViewVC) and Git (browsable with GitPHP) –

and that’s all on one box, with two IP addresses, with the svn and git servers sharing an instance of Apache on one address, the IIS server running on the other, and DNS records pointing svn and git at the first address and IIS at the second.

Running IIS and Apache on the same Windows 2008 R2 Server

I’m trying to get a Composite C1 site and the Apache WebDAV front-end to Subversion running on the same Windows 2008 R2 server, and doing so requires a bit of trickery with IP address bindings and such, and I thought I’d share it – partly ‘cos it’s useful, and partly because I’m bound to have to do this again in three months time and there’s no way I’ll remember how I did it. First off, make sure your box has (at least) two IP addresses – I’ve bound mine to 192.168.0.13 and 192.168.0.14
To get IIS to listen on ONLY 192.168.0.13, you'll need to run the netsh.exe utility:

C:\Users\dylan.beattie>netsh
netsh>http add iplisten ipaddress=192.168.0.13
IP address successfully added
netsh>http show iplisten
IP addresses present in the IP listen list:
-------------------------------------------
    192.168.0.13
netsh>exit

(Note that netsh.exe is a Windows 2008 utility – if you're running Windows 2003 or earlier, look up the docs on using httpcfg.exe to achieve the same thing.)

If you now fire up a web browser and go to http://192.168.0.13/, you should get the default IIS7 "Welcome" screen, and http://192.168.0.14/ shouldn't return anything at all. Now to get Apache listening on 192.168.0.14. Find your httpd.conf file – if you've just installed CollabNet Subversion (like I have) it'll be in the \data\conf folder of wherever you put your SVN install. You'll need to find the Listen directive in httpd.conf, and modify it to say:

Listen 192.168.0.14:80

That's all. Next time – getting Git running on the same Apache installation… until then, happy hacking.

UPDATE: After running this for several years, I've found that occasionally, following an unscheduled shutdown or power outage, IIS won't come back up properly after the box is restarted. Sites will respond on http://localhost/ but trying to access them via hostname gives an ERR_CONNECTION_RESET message.
This can be fixed by removing and re-adding the HTTP binding:
C:\>netsh
netsh> http
netsh http> delete iplisten 192.168.0.13
netsh http> add iplisten 192.168.0.13
netsh http> exit

C:\>iisreset

Monday, 17 January 2011

“Choose Life” For DBAs

I’m really really sorry. Someone tagged #sqlmoviequotes on Twitter and I got carried away…

SELECT TOP(1) * FROM LIFE.

SELECT TOP(1) * FROM JOB.

SELECT TOP(1) * FROM CAREER.

SELECT TOP(1) * FROM FAMILY.

SELECT TOP(1) * FROM television ORDER BY SIZE DESC

SELECT * FROM washing_machine CROSS JOIN car CROSS JOIN compact_disc_player CROSS JOIN electrical_tin_opener

SELECT * FROM health, cholesterol, dental_insurance
    WHERE health.status = 'good' and cholesterol.level < 5
   
SELECT * FROM mortgage WHERE interest_rate = 'fixed'

SELECT TOP(1) * FROM home WHERE TYPE = 'starter'

SELECT * FROM person
    INNER JOIN friendship ON person.id = friendship.person_id and friendship.friend_id = 'ME'

SELECT * FROM leisurewear
    INNER JOIN luggage ON leisurewear.color_scheme = luggage.color_scheme
   
SELECT TOP(3) * FROM lounge_furniture WHERE payment_plan = 'hire purchase'
    AND range_id IN (SELECT range_id FROM fabric_option GROUP BY range_id HAVING COUNT(*) > @RANGE_SIZE)
   
SELECT 'diy', CURRENT_USER FROM activity WHERE DATEPART(dw, activity_date) = 1 AND DATEPART(hh, activity_date) < 12

DECLARE @your_mouth INT
DECLARE junk_food CURSOR FOR SELECT * FROM FOOD WHERE TYPE = 'junk'
WHILE @@CURRENT_SHOW IN (SELECT * FROM SHOW WHERE keyword IN ('mind_numbing', 'spirit-crushing')) BEGIN
    FETCH NEXT FROM junk_food INTO @your_mouth
END

BEGIN

    sp_start_job 'brat'
    sp_start_job 'brat'
    sp_start_job 'brat'

    KILL @@SPID
END

SELECT * FROM events WHERE EventDate > GETDATE()

SELECT TOP(1) * FROM LIFE

Monday, 10 January 2011

Mapping a Drive Letter to a Subversion Repository with CollabNet, WebDrive and WebDAV

This is quite neat. As part of a business-wide agile initiative, I'm looking into solutions for storing and collaborating on documents – something that gives *me* the history and auditing capabilities of something like Subversion, but gives the rest of the team something clean and easy that fits well with current working practices.

So… mapped drive letters. Everyone knows about drive letters – “just stick it on the R: drive” is nice and easy, and as long as everyone’s R: drive points to the same place and the fileserver behind it’s getting backed up, you’re sorted. Unless you want to revert a document that’s been corrupted, or accidentally deleted, or you just want to get back an earlier revision because you realize you’ve done something dumb. Then you need to mess around with tapes and stuff, and that’s just no fun at all.

Plus, of course, we have a wiki, which isn’t much fun to edit because the constant round-tripping from WYSIWYG->markup->HTML->WYSIWYG tends to clobber newlines and formatting, but it *is* a great place to keep stuff because it doesn’t get lost.

So, requirements for document storage:

  1. As easy to use as a drive letter.
  2. Security. Windows / LDAP authentication to control who can read and who can write.
  3. Revision history – just a record of who made changes, and when.
  4. Ability to revert changes to an earlier revision
  5. HTTP accessible so you can read stuff with just a web browser.

There are two things you can do with a list of requirements like that… speak to vendors, or hack something together yourself. You wanna speak to vendors, you go ahead; I shan't stop you. Still here? Good. Let's hack.

1. Install Subversion on the server.

For this, I’m using the CollabNet Subversion Edge stack – a single installer combining Subversion, Apache and the ViewVC web front-end. It’s very, very neat, and (having done this the hard way) much easier than setting up mod_dav_svn yourself. Once it’s up, use the web interface (linked from the Start menu on your server) to set up a new repo – call it doc_repo – and then verify that if you browse to http://myserver/viewvc/doc_repo/ you get the ViewVC web front-end view of your new, empty repository.

If you’re after Windows/LDAP authentication, that’s also configurable from the Subversion Edge web interface – and CollabNet has detailed notes on how this works.

2. Install WebDrive on your workstation.

This is a commercial package that’ll map a Windows drive letter to a WebDAV share. This is supposedly something that Windows is capable of doing natively, but I have never, ever got this to work, not even once. I would be glad to hear recommendations for free / open-source alternatives for this, since it’s currently the only bit of this set-up that costs money. There’s apparently also a netdrive.exe floating around but licensing for NetDrive seems to be a little confused.

3. Hack the Subversion config file.

On the server, open up C:\Program Files\Subversion\data\conf\svn_viewvc_httpd.conf. We’re not going to edit this file – you’ll need to find the bit that looks like:

<Location /svn/>
   DAV svn
   SVNParentPath "C:\Program Files\Subversion\data\repositories"
   SVNReposName "CollabNet Subversion Repository"
  AuthzSVNAccessFile "C:\Program Files\Subversion\data/conf/svn_access_file"
  SVNListParentPath On
  Allow from all
  AuthType Basic
  AuthName "CollabNet Subversion Repository"
  AuthBasicProvider csvn-file-users ldap-users
  Require valid-user
</Location>

and copy it. Then open up C:\Program Files\Subversion\data\conf\httpd.conf – which is the regular Apache configuration file – and paste the copied section right at the end, and make the following changes:

# Change Location to be the URL path of your WebDAV repo – I’ve used webdrive here
<Location /webdrive/>
    DAV svn
    SVNParentPath "C:\Program Files\Subversion\data\repositories"
    SVNReposName "Subversion WebDAV"
    AuthzSVNAccessFile "C:\Program Files\Subversion\data/conf/svn_access_file"
    SVNListParentPath On
    Allow from all
    AuthType Basic
    AuthName "Document Repository"
    AuthBasicProvider csvn-file-users ldap-users
    Require valid-user
# Add the two lines below
    ModMimeUsePathInfo on
    SVNAutoversioning on
</Location>

Now restart the web server (using ApacheMonitor.exe from C:\Program Files\Subversion\bin\ on your server) and check that you can see http://myserver/webdrive/ in a normal Web browser – the screen should say “Collection of Repositories” with your doc_repo repository listed underneath.

4. Connect WebDrive to your new WebDAV-Enabled Repository

Nearly there. Finally, fire up WebDrive on your workstation and create a new connection. Enter the Site Address/URL as http://myserver/webdrive/doc_repo/ – note that you must put the repo name in the URL otherwise WebDrive will complain with an error something like:

Unable to connect to server, error information below

Error: Socket receive failure (4507)
Operation: Connecting to server
Winsock Error: WSAECONNRESET (10054)

Correct settings will look something like this:

[screenshot of the WebDrive connection settings]

Hit Connect, and Windows Explorer will fire up a new N: drive window pointing at your repo.

5. Witness the Awesome Power of Autoversioning!

Create an empty folder, create a text file in it, then browse to http://myserver/viewvc/doc_repo/ and you should see your new folder, and file, along with the Subversion history recording who created the file and when:

[screenshot of ViewVC showing the new folder, file and revision history]

I like that a lot.