The Flip Side of “Move Fast and Break Things”

Last year, Mark Zuckerberg notably described Facebook’s policy of encouraging developers to “move fast and break things”. When you are an early-stage, lean company, this certainly helps you to build new features rapidly, allowing you to move on in leaps and bounds. It is also perhaps necessary if you have sensibly only built what seemed necessary at each stage, because it allows you to change those earlier architectural decisions and assumptions, no matter how major their impact.

I believe there is a flip side to this though: After a period of “moving fast”, during which you should be relying on test-driven development (TDD) to help you find bugs and functionality that you have broken or regressed, you will have a new-and-improved product that appears to work. If you are sensible, you will also have added to that test suite a whole host of new tests.

But what you no longer have is a stable product.

Sure, your automated tests and manual verification tell you that the product works. But not all problems will become apparent on-demand or in formal testing: performance-sensitive issues, threading, concurrency, race conditions, capacity problems, and so on. No matter how stable the product was before, you essentially now have a new product and a new piece of complex software to learn about.
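
As a small, hypothetical illustration of what I mean, here is the kind of defect a perfectly green test suite can hide: a hit counter that behaves flawlessly in single-threaded tests, yet quietly loses updates once realistic concurrent load arrives. (The class and numbers are made up purely for illustration.)

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // A deliberately simplified, hypothetical example of a race condition.
    public class HitCounter {
        private long hits = 0;

        // Not atomic: the increment is a read-modify-write, so two threads
        // can interleave and silently lose a hit.
        public void record() { hits++; }

        public long total() { return hits; }

        public static void main(String[] args) throws InterruptedException {
            HitCounter counter = new HitCounter();
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (int i = 0; i < 100_000; i++) {
                pool.submit(counter::record);
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            // Frequently prints less than 100000: a defect that no on-demand,
            // single-threaded test would ever have surfaced.
            System.out.println(counter.total());
        }
    }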

So you either need to soak test the product again to see how it behaves over time and with realistic load, or release it — to production or a more production-like environment — and be ready to detect and fix issues that arise.

Whilst the “move fast” approach certainly has its benefits, I feel this flip side is why Facebook often got burned when using it on a product that was already in production and in use by millions of people. They frequently seemed to cause issues that got into live usage. In an ideal world, the things that have been broken by moving fast should be fixed well before an end user sees them. But, in practice, I’m asserting that you simply can’t rapidly repeat the stabilisation and discovery phase that comes after building or radically changing complex software.

So yes, move fast and break things… But then be ready to respond and fix the unexpected for quite a period after you next release it. (Perhaps not quite as pithy a one-liner!)

8 Things I Learned The Hard Way About Communication

We’ve all met — and in some cases have been — the stereotypical tech person: The one who explains, at great length and in unnecessary detail, our favourite tech topic… in response to a question about a different topic. The one who struggles to introduce a concept to a non-tech audience. The one who, when faced with an audience of glazed eyeballs, ignores the signals and keeps talking. The one who the CEO is afraid to put in front of clients.

Thankfully, most of us don’t entirely fit this stereotype. But I’m sure all of us occasionally show some of its traits. And so, I began thinking how I would describe, to new tech folks — and perhaps also to myself — the ways to avoid these pitfalls and to become the one who people seek out as able to explain concepts, and with whom an audience doesn’t zone out within the first 10 seconds. The slightly more approachable techie, if you will.

For me, these seem to be the key points:

Know your audience – The way you describe a concept to the CEO, a salesperson, another tech colleague, or your great aunt will necessarily be quite different in each case (or it should be!). Think about who your audience is. They may be a mix of people, rather than all one kind. Think about what they might each already know, or what you’ll need to introduce to them as a preamble.

Pick your language – Depending on who you’re talking to, your language may need to change. Chatting to another tech colleague is one thing, but the CEO or salesperson may need to hear your topic described at least partly in business terms before you introduce a few technical words. And those technical words should either be ones that they’ll already know, or ones you’ll be happy to define for them. But don’t take them on an unnecessary tour of your “impressive vocabulary”. Use plain language where it works, and introduce terms that will be valuable.

But also don’t be condescendingly simplistic. In a tech company, you can assume the CEO will know words relevant to your line of business… or will be happy for you to introduce and explain them.

Context is key – Aside from introducing unfamiliar terms, you may need to think about where your audience comes from and what they are already familiar with. Describe the context of your topic relative to a context they know. A salesperson may like to hear a business use case in which a particular tech concept comes into play. Or a colleague from the finance department may be able to relate to your topic if you describe it relative to parts of the system they already know about.

They may not know (or care) what the load balancer is doing, but if it’s relevant and you draw it on a diagram of “what happens when a user visits our site”, they will understand.

Get to the point, and then stick to it – If the whole point is to describe the security of your software… stick to that. If it’s to showcase a new feature, make that the focus. Make sure it’s clear pretty soon what you intend your audience to get from this, then cover that concisely. Don’t reach the topic via a tortuous route unless it’s all relevant, and let them know why before embarking on it.

Don’t deep dive too soon – Don’t go into graphic detail at every point just because that topic interests you. Is it relevant? Did you introduce it at a higher level first? Did you describe why you’re going to look in detail and why it’s relevant to the overall topic? If not, don’t expect your audience to humour you whilst you go on what seems like an irrelevant jaunt through detail.

Notice when you’re losing them – Learn to read body language. Not everyone gives visual feedback, so find at least a few people in your audience who are nodders, eyebrow raisers, smilers, etc. Use them to gauge whether the audience is keeping up with what you’re saying. And if they aren’t, it’s your job to ask whether you need to go back and introduce the topic more clearly, re-define some terms, explain in a little more depth, and so on.

Know when to stop – If you want to be asked to talk or explain again… know when to stop. In response to a quick question, make your answer brief. If your talk is supposed to be 15 minutes, time yourself and don’t go over. If you can give them what they need in a shorter time, don’t hog their attention. Knowing how to do this means you’ll be appreciated and asked again. Rambling on means you’ll be the one they avoid asking in future.

Know that technology is just an enabler – Yes, you’re in love with technology, as many people are. I know I am. But know that, to some, it’s a necessary evil and, to all, it should really just be considered an enabler to businesses rather than a world in itself. Explain the benefits it brings, rather than expecting automatic appreciation, and you may find others learn to love it as much as you. Ramble on in irrelevant detail, and you’ll only push the corporate Luddites further away.

Build Just Enough, For Now

Like many developers, I work hard and I hate wasted effort. It costs us, it costs clients, and it benefits no-one.

When designing and building software, we are so often tempted to second-guess what a new feature should be, or how it might end up being used. Subconsciously, we usually know where the line between necessary and extra lies but, in our enthusiasm, we cross it anyway. Time and time again.

So often, that extra functionality needs to be reworked when the software gets into the hands of end users and the real requirements come to light, or sadly goes unused when our guesses miss the mark, yet again.

Even in “lean” startups, we are tempted to build features out as much as possible, to what we think (or hope?!) will be their logical conclusion. We feel certain, this time around, that our gut instinct is correct, and that this is a necessity rather than extra work.

This distinctly non-lean enthusiasm doesn’t just come from us developers either: Clients often define in minute detail what they would like a feature to be. But we can so often see clearly the line between their certainty and their second-guessing. Even with clients and precise requirements, it is perhaps necessary to build what seems logical from the requirement first, and then demonstrate it as an indication of progress… as a way of forcing a review. What I suspect often happens when that core functionality is demoed is that it changes everyone’s perspective. Things look different when part of the system is real.

Non-technical clients perhaps aren’t used to this process and get carried away specifying their needs. I think we should reveal the results to them incrementally and regularly, knowing that the end result may (necessarily) look a little different from their original idea.

So nowadays I’m trying to build “just enough”. That is, just enough to find out more.

If I’m building an API to collect data and other functionality to make use of it or visualise it, how about just building the data collection API and then seeing what the real data looks like? Or if I’m building a UI to explore a use case, how about putting an early version into the hands of real users to see what they actually do with it, before padding it out or even worrying too much about visual design or details?

So isn’t this just what the lean and agile manifestos say? No, I think it goes further. I think it’s ok to write down or discuss a detailed description of what a feature might look like, but the crucial step is then slicing that into “necessary” and “next” before getting started.

I think it’s ok to visualise the end goal. Just don’t build all of it. Not just yet, anyway.

Caffeine and Me

I first started drinking coffee in significant quantities during my industrial training year at IBM, in 1992. Being tanked up on caffeine was part of the whole buzz of doing my first paid job in the software industry and everyone else consumed it in large quantities. In this, and most other ways, I fit right in.

By the time I started my first job in London after completing my degree, in 1994, I was used to consuming about 5-6 cups a day of the strong black stuff.

It was at this point that I first encountered what would become, for me, caffeine’s main down-side: the crash.

By 3pm each day, with my caffeine consumption tailing off after a midday peak, I would struggle to stay awake, let alone productive. I looked into all sorts of solutions, including what I was eating for lunch (it must be the bread, right?!), whether I needed to supplement with energy drinks / tablets (I tried them all), or whether I was physically run-down in some way.

Actually, the latter was probably closest to the truth. I was being run down, literally and on a daily basis, by my own caffeine consumption. As with most things in life, the feel-good high of the coffee hit earlier in the day had a corresponding down-side; a come-down, a drop in energy, call it what you will.

Given all of this and my apparent understanding of it back then, you’d think that I would have moderated my caffeine consumption at this point… right? Far from it! Despite three periods in the following 20 years where I weaned myself off caffeine for a mere week or so, I always returned to my weekday love with great happiness and, seemingly, the memory and learning capacity of a goldfish. A wide-eyed, caffeine-loaded goldfish.

In 2012-13, I worked for myself for a while. It took me some time to battle what seemed to be some form of attention deficit that prevented me from really focussing when I was away from a traditional workplace. I later found that this problem was linked with my coffee consumption: on the days when my consumption was lower, I tended to find focussing easier. This discovery, coupled with hitting the 3pm wall, made me have a long hard think.

So in the past month, rather than going for my usual all-or-nothing approach to coffee consumption, and rather than insisting that I give it up completely, I’ve done something a little more mature: I’ve moderated my consumption. I’ve gradually weaned myself off 5-6 cups per day, down to one single cup.

That one cup is something I drink in the morning, at home, when I can really enjoy it. It isn’t interrupted by work, or other people. It isn’t affected by the day-to-day swings in coffee strength that should shame some of the big chains on the high street. It’s just me and a cup of good coffee, and the knowledge that the next one will be tomorrow.

What I’ve noticed since getting myself down to one cup is startling:

Firstly, I no longer hit the 3pm energy/attention wall. Whilst I still get tired as the day goes on, I can usually stretch, have a drink of water, stand up for a minute and then carry on. My alertness never dips to a point that I can’t power through with a little willpower.

Secondly, I’m calmer. Too much caffeine, particularly early in the morning, seemed to fuel my anxiety. One cup doesn’t seem to do that. It perks me up a bit, and that’s all.

Thirdly, I sleep. I began to suspect that the caffeine levels in my body weren’t resetting 100% at the end of each day. Some days, I would wake up still feeling wired. Now the one cup, consumed around 7:30am, is definitely out of my system by bedtime and I sleep soundly. As everyone knows, it’s not just a question of how long you sleep, but rather the quality of it and whether you can access deep sleep. I wake up feeling rested, rather than ragged.

Lastly, that one cup in the morning is now really good. I think I had become desensitised to the effects of caffeine. My daily cup now tastes better, wakes me up more effectively, and is something I look forward to rather than being a necessity just to get my head off the pillow.

The question that confronts me now is whether to stick with one cup, longer-term, or whether to wean myself off it altogether. There’s a part of me that knows the positive effects I’ve experienced could be amplified even further if I kicked the habit completely. But then, another part of me knows that the enjoyment I get from my morning cup isn’t something I should deny myself. (I won’t even begin to debate the supposed health benefits of moderate caffeine intake, as I think those articles tend to be written by folks who have yet to learn the relationship between correlation and causation.)

One thing to note: The word “me” in the title of this post. I know other folks have wildly different experiences of the effect of caffeine. Some drink pots of the stuff with relatively few downsides, others can’t stomach a single cup and get the jitters straight away.

As with many of these nutritional / psychological / health issues, we all have to be our own sample set of one and learn to notice the effect that substances have on us. It’s taken me 20 years to begin to learn that for myself.

Frameworks: Inherit by Default, or Compose by Choice?

Let’s face it, we’re all trying to build and deliver software as fast as possible, and reliance on frameworks is key. Building a complete solution from the ground up, when so many pieces of that puzzle have already been implemented well by others, just isn’t sensible.

But how best to make use of frameworks? There seem to be two approaches offered at the moment:

  • Inherit by default – The “easy start” frameworks, such as Grails, turn on a great deal of “free” functionality, pre-packaged libraries and layers for you by default. You can go from zero to a working app in minimal time.
  • Compose by choice – Other frameworks, such as Spring, or just general third-party libraries, expect you to decide which functionality you need, and you configure your app to use those pieces only. This is quite a bit more cumbersome up-front, usually requiring a hefty configuration (albeit perhaps pasted from a handy example), but it means you’re only using the components and layers that you decide you really need. Some frameworks and libraries are now making this up-front configuration easier, by allowing you to plug-and-play rather than hand-configure, which is a great step forward. (There’s a small sketch of this style below.)
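
To make the distinction a little more concrete, here is a minimal sketch of the “compose by choice” style using Spring’s Java-based configuration. The class and bean names are purely illustrative, not taken from any real project, but the idea is that nothing exists in the application unless you explicitly declare it:

    import javax.sql.DataSource;
    import org.apache.commons.dbcp.BasicDataSource;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jdbc.core.JdbcTemplate;

    // Every layer is chosen explicitly; nothing is registered behind your back.
    @Configuration
    public class AppConfig {

        // We decide to define the DataSource ourselves, rather than inheriting
        // whatever a batteries-included framework would wire up by default.
        @Bean
        public DataSource dataSource() {
            BasicDataSource ds = new BasicDataSource();
            ds.setUrl("jdbc:postgresql://localhost/myapp"); // placeholder URL
            return ds;
        }

        // Only the pieces we ask for exist: no extra filters, caching layers
        // or view-rendering machinery unless we add a @Bean for them here.
        @Bean
        public JdbcTemplate jdbcTemplate(DataSource dataSource) {
            return new JdbcTemplate(dataSource);
        }
    }

The equivalent “inherit by default” approach would give you a working data layer with next to no configuration at all, but also with layers you never asked for and may struggle to remove later.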

For run-of-the-mill apps, or for novice developers, inheritance of free functionality and pre-packaged libraries is fine and probably desirable. You basically want something working quickly, and you want to worry about details as little as possible. This is abstraction working for you. You probably don’t care how many layers there are between an incoming request (for a web app) and that request actually hitting your own controller code. Likewise, additional steps in rendering the view and returning the resulting HTML probably won’t cause you too many headaches. You want to write and configure minimally and get it working.

But, if you need to scale your app aggressively at some later date, all of those additional layers and unchosen components will affect that performance, to a greater or lesser degree. And here’s the crucial thing: Even if you can identify the culprit, you may not be able to easily turn some of them off. That “free” functionality might be cumbersome to configure away, or it might be impossible to turn off at all.

I think both styles of framework have their place. But right now I’m favouring frameworks and libraries that let me choose and compose the functionality and layers that I’d like to use, so that I don’t waste yet more time later working out where all the time is being spent during performance/scalability tests, discovering that those “sensible defaults” aren’t sensible at all, and Googling for answers on how to turn off unwanted behaviour.

But as they say, your mileage may vary!

Masters of Abstraction

We stereotypical “nerds”, “geeks” or “techies” — call us what you will — spend our time wrapped up in another world and are therefore, so the stereotype goes, perhaps slightly neglectful of this one.

But is it any wonder? We spend our working lives building something you can’t actually see. Software has no tangible form. It has a manifestation in the real world, which is what we sell as a product, but only in the ways we choose to allow it to expose a User Interface, an API, or perhaps to transform data from one form into another. And yes, your photos, music, calendar entries… without software, they are just data.

Sure, you can see the files that form software and data when they sit on disk, but even those are just an abstraction; a serialised form from which they can be recreated in memory when next loaded. Ones and zeros… but that’s an abstraction too! In their stored form alone, software and data are useless.

And so, because we are in effect building something you can’t see (so that we can perhaps sell the manner in which it manifests itself in the world as a product), software engineers are forced to deal mainly with abstractions. I guess this is probably where the notion of us being in a world of our own comes into play.

Abstractions allow us to talk about, draw, reason about and build software by focussing at a certain level of detail, showcasing certain concepts and features whilst temporarily hiding others. We spend our time switching between different levels of abstraction; some familiar to other engineers (a data structure, a deployment platform, a communication protocol, an Operating System, etc), others invented for a specific problem domain (representations of the objects in the problem space in which the software will function).

Crucially, none of these abstractions actually exists, and the skill of a software engineer is being able to invent, combine and deal with all of them as if they did. Mastery involves being able to move between them and to pick the relevant one to adopt when solving a particular problem, or describing software to other people. It is important to abstract away details that are irrelevant for the current task, in order to focus on what matters at that time.

More than just a linear scale between macro- and micro-focus, abstraction works in many dimensions. Is it any wonder that many of us find this such a compelling career? And is it any wonder that others see us as occasionally caught up in worlds of our own?

No-one has actually seen any of our abstractions: a stack, queue, linked list, thread or map, to name a few, though we may draw representations of them on a whiteboard and use familiar words to describe them. Our algorithms are abstractions too, and no-one has ever directly witnessed the execution of a binary sort, the calculation of a hash value, or the calling of a function, though we may express them in code in a programming language (itself just another abstraction), or write entries to a log file as a manifestation of their execution.

So the stereotype of engineers being wrapped up in other worlds is probably valid, and thankfully so. For it seems that’s the only way to describe the abstractions that let us build software that can in-turn manifest itself in the real world again, hopefully to benefit the lives of real people in some way.

Just so long as those abstractions are a means to a useful end 🙂