Wednesday, 10 February 2016

“How Can Software Be So Hard?”

Last night, I went to a Gresham College lecture at the Museum of London, “How Can Software Be So Hard?” presented by Professor Martyn Thomas CBE. The lecture itself was great – good content supported by solid examples. I must say there wasn’t a great deal there that I haven’t heard before – software engineering is an immature discipline; we rely too heavily on “testing” to validate and verify the systems we create; validation normally happens far too late to do anything about the problems it uncovers; we’re overly reliant on modules and components that we can’t actually trust to work properly… all interesting and valid observations, but nothing particularly revolutionary.

Personally, I think the question posed in the lecture title is, to some extent, built on a false premise. Towards the end, he makes the observation – whilst talking about teenagers getting rich by writing iPhone games – that we only focus on the success stories, and the countless failures don’t get any attention. Which is true – but I think with software, we routinely take so much success for granted that it’s the failures which stand out. Sure, they happen – but if you think software is hard, try building a mechanical or an analogue system that can send high-fidelity music from Boston to Singapore, or show the same colour photograph to a million people five minutes after it was taken. So many software projects are instigated to pursue efficiencies – to take some business process or system that used to require tens of thousands of people, and hire a few dozen programmers to replace it with a system that basically runs itself; is it any wonder it doesn’t always go smoothly? There are a lot of things which are easy in software and close to impossible in any other domain, and I think to lose sight of that would be disadvantageous in an age where we’re trying to inspire more kids to study STEM subjects and pursue careers in science and engineering.

The thing that really stuck in my head, though, was a comment Professor Thomas made in response to a question after the lecture. Somebody asked him what he thought about self-driving cars, and amongst the points he raised in response, he said something like:

What about pedestrians? Why would you find a crossing, press the button and wait for the green man if you know all the cars on the road are programmed not to run you over?

Over the last few years, I’ve spent a lot of time studying the way smart devices are affecting the way we interact with the world around us – something I’ve covered at length in my talk “Are Smart Systems Making Us Stupid?”, which I presented at BuildStuff last year. I’ve looked into all sorts of models of human/machine interaction, but I’d never considered that particular angle before – and it’s fascinating.

Photograph of David Prowse as the Green Cross Code man.

Our basic instinct for self-preservation is reinforced from an early age – is there anybody here who DOESN’T remember being taught how to cross the street? So what happens to those behaviour patterns in a world where kids work out pretty early on that they can jump in front of Google Cars and get away with it? Do we program the cars to hit a kid once in a while – not to kill them, just give ‘em a nasty bump to teach them a lesson? How much use is a self-driving car when your journey takes longer than walking because your car slams the brakes on every time a pedestrian walks in front of it? Maybe you’re better off dumping your luggage, or your shopping, in the car, and taking the train instead?

It’s also an interesting perspective on a discussion that, until now, has been framed very much from the perspective of the driver – “will my self-driving car kill me to save a dozen schoolkids?” – and raises even more questions around the social implications of technology like driverless cars.

The next talk in the series is “Computers, People and the Real World” on April 5th, and if you’re interested in how our increasing dependency on machines and software is affecting the world we live in and the lives we live, I’d heartily recommend it.

(PS: if anyone from Gresham College is reading this – get somebody to introduce your speakers. They’ve worked hard to prepare the material they’re presenting; help them make the best possible first impression by taking the stage to a round of applause from a crowd who already know who they are. It’s not that hard, and it makes a huge difference.)

Wednesday, 27 January 2016

Let’s Talk About Feedback

Photo of the enormous guitar amplifier from the start of "Back to the Future"

There’s been some really interesting blog posts about speaking at conferences recently – from Todd Motto’s “So you want to talk at conferences?” piece, to Basia Fusinska’s “Conference talks, where did we go wrong?” and Pieter Hintjens’ “Ten Steps to Better Public Speaking”, to Seb’s recent “What’s good feedback from a talk?” post. There’s a lot of very useful general advice in those posts, but I absolutely believe the best way to improve as a speaker is to ask your audience what you could be doing better, and that’s hard.

After most talks you’ll have a couple of people come up to you at the coffee machine or in the bar and say “hey, great talk!” – and that sort of positive feedback is great for your confidence but it doesn’t actually give you much scope to improve unless you take the initiative. Don’t turn it into an interrogation, but it’s easy to spin this into a conversation – “hey, really glad you enjoyed it. Anything you think I could have done differently? Could you read the slides?” If there’s something in your talk which you’re not sure about, this is a great chance to get an anecdotal second opinion from somebody who actually saw it.

Twitter is another great way to get anecdotal feedback. Include your Twitter handle and hashtags in your slides – at the beginning AND at the end – and invite people to tweet at you if they have any comments or questions. In my experience, Twitter feedback tends to be positive (or neutral – I’ve been mentioned in lots of “@dylanbeattie talking about yada yada yada at #conference”-type tweets) – and again, whilst it’s good for giving your confidence a boost, it can also be a great opportunity to engage with your audience and try to get some more detailed feedback.

And then there’s the official feedback loops – the coloured cards, the voting systems and the feedback app. I really like how BuildStuff does this. They gather feedback on each talk through a combination of coloured-card voting and an online feedback app. Attendees who give feedback online go into a prize draw, which is a nice incentive to do so – and it makes it an easy sell for the speakers: “Thank you – and please remember to leave me feedback; you might win an iPad!” The other great thing BuildStuff does is to send you your feedback as a JPEG, which looks good and makes it really easy for speakers to share feedback and swap notes afterwards. Here’s mine from Vilnius and Kyiv last year:

Image: feedback rating summaries from Build Stuff Lithuania and Build Stuff Ukraine.

Now, I’m pretty sure there were more than five people in my talk in Kyiv – so I think something might have got lost in the transcription here – but the real value here is in the anecdotal comments.

Some conferences also run a live “leaderboard”, showing which speakers are getting the highest ratings. I’m generally not a big fan of this – I think it perpetuates a culture of celebrity that runs contrary to the approachability and openness of most conference speakers – but if you are going to do it, then make sure it works. Don’t run a live leaderboard that excludes all the speakers from Room 7 because the voting machine in Room 7 was broken.

Finally, two pieces of feedback I had from my talk about ReST at NDC London this year. The official talk rating, which I’m quite happy with but doesn’t really offer any scope for improvement:

Image: my official talk rating from NDC London.

And then there’s this, thanks to Seb, who not only came along to my talk but sat in the front row scribbling away on his iPad Pro and then sent me his notes afterwards. There’s some real substance here, some good points to follow up on and things I know I could improve:

Image: Seb’s notes on my NDC London talk.

This also goes to highlight one of the pitfalls of doing in-depth technical talks – your audience probably aren’t in a position to judge whether there are flaws in your content or not, so your feedback is more likely to reflect the quality of your slides and your presentation style than the substance of your content. In other words – just because you got 45 green tickets doesn’t mean you can’t improve. Find a subject matter expert and ask if they can watch your talk and give you notes on it. Share your slides and talks online and ask the wider community for their feedback. And don’t get lazy. Even if you’ve given a talk a dozen times before, we’re in a constantly-changing industry and every audience is different.

Sunday, 24 January 2016

“The Face of Things to Come” from PubConf

A version of my talk from PubConf London, “The Face of Things to Come”, is now online. This isn’t a recording of the actual talk – the audio has been recorded specially, one slide has been replaced for copyright reasons, and a couple of things have been fixed in the edit – but it’s close enough.

The Face of Things to Come from Dylan Beattie on Vimeo.

As always, the only way to improve as a speaker is to listen to your audience, so I would love to hear your comments or feedback – leave a comment, email me or ping me on Twitter.

Tuesday, 19 January 2016

Would you like to speak at London .NET User Group in 2016?

The London .NET User Group, aka LDNUG – founded and run by Ian Cooper, with help from Liam Westley, Toby Henderson and me – is now accepting speaker submissions for 2016.

We aim to run at least one meetup a month during 2016, with at least two speakers at each one. Meetups are always on weekday evenings in central London and are free to attend. We’re particularly keen to welcome some new faces and new ideas to the London .NET community, so if you’ve ever been at a talk or a conference and thought “hey – maybe I could do that!” – this is your chance.

We’re going to try and introduce some variation on the format this year, so we’re inviting submissions for 45-minute talks, 15-minute short talks and 5-minute lightning talks, on any topic that’s associated with .NET, software development and the developer community. Come along and tell us about your cool new open source library, or that really big project your team’s just shipped. Tell us something we didn’t know about asynchronous programming, or distributed systems architecture. We welcome submissions from subject matter experts but we’re also keen to hear your first-hand stories and experience. Never mind what the documentation said – what worked for you and your team? Why did it work? What would you do differently?

If you’re a new speaker and you’d like some help and support, let us know. We’d be happy to discuss your ideas for potential talks, help you write your summary, rehearse your talk, improve your slide deck. Drop me an email, ping me on Twitter or come and find me after the next meetup (I’m the one in the hat!) and I’ll be happy to help.

So what are you waiting for? Send us your ideas, come along to our meetups, and let’s make 2016 a great year for London.NET.


Yes! I want to speak at London.NET in 2016!

Conway’s Law and the Mythical 17:00 Split

I was at PubConf on Saturday. It was an absolutely terrific event – fast-paced, irreverent, thought-provoking and hugely enjoyable. Huge thanks to Todd Gardner for making it happen, to Liam Westley for advanced detail-wrangling, and to NDC London, Track:js, Red Gate and Zopa for their generous sponsorship. 

Seb delivered a great talk about the Mythical 17:00 Split, which he has now written up on his blog. His talk resonated a lot with me, because I also find a lot of things about workplace culture very strange. I’m lucky enough to work somewhere where I seldom encounter these issues directly, but I know people whose managers genuinely believe that the best way to write software is to do it wearing a suit and tie, at eight o’clock in the morning, whilst sitting in a crowded office full of hedge fund traders.

But Seb’s write-up post said something that really struck a chord.

“Take working in teams. The best teams are made of people that like working together, and the worst teams I’ve had was when a developer had specific issues with me, to the point of causing a lot of tension”

Now, I’m a big fan of Conway’s Law – over the course of my career, I’ve seen (and built) dozens of software systems that turned out to reflect the communication structures of the organizations that created them. I’ve even given a conference talk at BuildStuff 2015 about Conway’s Law with Mel Conway in the audience – which was great fun, if a little nerve-wracking.

In a nutshell, Conway’s Law says of Seb’s observation regarding teams that if you take a bunch of people who are fundamentally incompatible, and force them to work together, you’ll end up with a system which is a bunch of incompatible components that are being forced to work together. If you want to know whether – and how – your systems are going to fail in production, look at the team dynamic of the people who are building them. If watching those people leaves you feeling calm, reassured and relaxed, then you’re gonna end up with something good. If one person is absolutely dominating the conversations, one component is going to end up dominating the architecture. If there are two people on the team who won’t speak to each other and end up mediating all their communication through somebody else – guess what? You’re going to end up with a broker in your system that has to manage communication between two components that won’t communicate directly.

If your team hate each other, your product will suck – and in the entire history of humankind, there are only two documented exceptions to this rule. One is Guns’n’Roses’ “Appetite for Destruction” and the other is “Rumours” by Fleetwood Mac.

\m/

Thursday, 14 January 2016

The Rest of ReST at NDC London

A huge thank you to everyone who came along to my ReST talk here at NDC London. Links to a couple of resources you might find useful:

Thanks again for coming – and any comments, questions or feedback, you’ll find me on Twitter as @dylanbeattie.

Thursday, 7 January 2016

Confession Time. I Implemented the EU Cookie Banner

Troy Hunt kicked off 2016 with a great post about poor user experiences online – a catalogue of common UX antipatterns that “make online life much more painful than it needs to be”.

One of the things he picks up on is EU Cookie Warnings – “this is just plain stupid.” And yeah, it is. Absolutely everybody I know who added an EU cookie warning to their website agrees – this is just plain stupid. But for folks outside the European Union, it might be insightful to learn just why these things started appearing all over the place.

First, a VERY brief primer on how the European Union works. There are currently 28 countries in the EU. The United Kingdom, where I live and work, is one of them. One of the aims of the EU is to create a consistent legal framework that covers all citizens of all its member states. Overseeing all this is the European Parliament. They make laws. It’s then up to the governments of the individual member states to interpret and enforce those laws within their own countries.

So, in 2009, the European Parliament issued a directive called 2009/136/EC – OpenRightsGroup has some good coverage of this. The kicker here is Article 5(3), which says

“The storing of information or the gaining of access to information already stored in the user’s equipment is only allowed on the condition that the subscriber or user concerned has given their consent, having been provided with clear and comprehensive information in accordance with Directive 95/46/EC, inter alia, about the purposes of the processing. This shall not prevent any technical storage or access for the sole purpose of carrying out the transmission of a communication over an electronic communications network, or as strictly necessary in order for the provider of an information society service explicitly requested by the subscriber or user to provide the service.”

In a nutshell, this means you can’t store anything (such as a cookie) on a user’s device, unless

  1. You’ve told them what you’re doing and they’ve given their explicit consent, OR
  2. It’s absolutely necessary to provide the service they’ve asked for.
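That rule boils down to a simple predicate: is this cookie strictly necessary, or has the user explicitly said yes? Here’s a minimal sketch of the kind of consent gate involved – the cookie names and categories below are illustrative only, not any site’s actual implementation:

```javascript
// Each cookie is classified as "essential" (required for the site to
// work) or "non-essential" (analytics, tracking and so on).
// These names and categories are hypothetical examples.
var COOKIE_POLICY = {
  "session_id": "essential",      // the site breaks without it
  "_ga":        "non-essential",  // e.g. Google Analytics
  "boomerang":  "non-essential"   // e.g. performance measurement
};

// Returns true if we're allowed to set this cookie, given whether the
// user has explicitly consented to non-essential cookies.
function allowCookie(name, hasConsented) {
  if (COOKIE_POLICY[name] === "essential") {
    return true; // the "strictly necessary" exemption – always allowed
  }
  return hasConsented === true;
}

// Before the user responds to the banner, only essential cookies go out:
console.log(allowCookie("session_id", false)); // true
console.log(allowCookie("_ga", false));        // false

// Once they've clicked "accept", everything is allowed:
console.log(allowCookie("_ga", true));         // true
```

The interesting part isn’t the predicate itself – it’s that every piece of code that sets a cookie, including third-party analytics snippets, has to be routed through a gate like this one.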

Directive 2009/136 goes on to state (my emphasis):

“Under the added Article 15a, Member States are obliged to lay down rules on penalties, including criminal sanctions where applicable to infringements of the national provisions, which have been adopted to implement this Directive. The Member States shall also take “all measures necessary” to ensure that these are implemented. The new article further states that “the penalties provided for must be effective, proportionate and dissuasive and may be applied to cover the period of any breach, even where the breach has subsequently been rectified”.”

Golly! Criminal sanctions? Retrospectively applied, even for something that we already fixed? That sounds pretty ominous.

Anyway. Here’s what happens next. Directive 2009/136 means that it is now THE LAW that you don’t store cookies without consent, and the various member states swing into action and try to work out what this means and how to enforce it. In the UK, Parliament interpreted this via something called the Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011, which would come into effect in 2012.

My team and I found out in late 2011 that, when the new regulations came into force on 26 May 2012, we would be breaking the law if we put cookies on our users’ machines without their explicit consent. And nobody had the faintest idea what that actually meant, because nobody had ever broken this law yet, so nobody knew what the penalties for non-compliance would be. The arm of the UK government that deals with this kind of thing is the Information Commissioner’s Office (ICO), who have a reputation for taking data protection very seriously, and the power to exact fines up to £500,000 for non-compliance. The ICO also usually publish quite clear and reasonable guidelines on how to comply with various elements of the law – but that takes time, so in late 2011 we found ourselves with a tangle of bureaucracy, a hard deadline, the possibility of severe penalties, and absolutely no guidance to work from.

So… we implemented it. Despite it being a pointless, stupid, ridiculous endeavour that would waste our time and piss off our users, we did it – because we didn’t want to end up in court and nobody could assure us that we wouldn’t.

We built a nice self-contained JavaScript library to handle displaying the banner across our various sites and pages.

image

Instead of just plastering something on every page saying “We use cookies. Deal with it” – the approach taken by most sites – we actually split our cookies into the essential ones required to make our site work, and the non-essential ones used by Boomerang, Google Analytics and other stats and analytics tools. And we allowed users to opt out of the non-essential ones. We went live with this on 10th May 2012. Around 30% of our users chose to opt out of non-essential cookies – meaning they became invisible to Google Analytics and our other tracking software. Here’s our web traffic graph for April – June 2012 – see how the peaks after May 10th are suddenly a lot lower?

image

On 25th May 2012, ONE DAY before the new regulations became law, the ICO issued some new guidance, which significantly relaxed the requirements around ‘consent’. “Implied consent” was suddenly OK – i.e. if your users hadn’t disabled cookies in their browser, you could interpret that as meaning they had consented to receive cookies from your site.

They also announced that any enforcement would be in response to user complaints about a specific site:

“The end of the safe period "doesn't mean the ICO is going to launch a torrent of enforcement action" said the deputy commissioner and it would take serious breaches of data protection that caused "significant distress" to attract the maximum £0.5m non-compliance fine.” (via The Register)

So there you have it. Go to http://www.spotlight.com/ and, just once, you’ll see a nice friendly banner asking if you mind us tracking your session using cookies. And if you opt out, that’s absolutely fine – our site still works and you won’t show up in any of our analytics. Couple of weeks of effort, a nice, clean, technically sound implementation… did it make the slightest bit of difference? Nah. Except now we multiply all our Analytics numbers by 1.5. And yes, we periodically review the latest guidance to see whether the EU has finally admitted the whole thing was a bit silly and maybe isn’t actually helping, but so far nada – and in the absence of any hard evidence to the contrary, it’s hard to make a business case for doing work that would make us technically non-compliant, even if the odds of any enforcement action are minimal.

Now, if the European Parliament really wanted to make the internet a better place, how about they read Troy’s post and ban popover adverts, unnecessary pagination, linkbait headlines and restrictions on passwords? Now that’s the kind of legislation I could really get behind.