For Better Software, Consider the Layers

(Adapted from a discussion I had earlier this year about why we don’t just “code the feature” and what software developers actually do)

Software is undeniably a many-layered thing… but I’ll spare you a bunch of weak comparisons with certain desserts.

Though most of the layers in software are invisible, their selection and our level of investment in them dictate a range of not-so-subtle factors that become apparent in the quality and experience of the end result, either immediately or over time.

On the surface, software interacts with the real world via a User Interface (UI), or perhaps by sending messages and emails to a user. These upper layers are primarily visual and need to mesh well with the user’s own ideas about what they are trying to achieve with the system. This is the world of User Experience (UX), and it is usually possible to describe these layers in terms of stories and to draw them as wireframes or more detailed designs.

Meanwhile, way, way below at some of the very lowest layers, we have code in various programming languages, formally compiled or interpreted, running as components on one or more machines, and communicating via network protocols. These lowest layers — though we may be forgiven for not realising it — are the actual software, as it executes. All of the layers above are simply abstractions; ways of hiding and dealing with the underlying complexity, to help us focus on portions of it and to relate it back to the real world.

What do those abstractions and layers look like? They are the architecture, modules, components, packages, subsystems and other sub-divisions we create to explain software. Though some may be drawn for us, we choose carefully where to draw these lines, and we try to name them so their impact on the real world result, or processing of it, is understandable and uses similar language. We often draw diagrams about them, talk about them and then build them in code.
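
To make the idea a little more concrete, here is a minimal sketch in Python; every name and number in it is invented purely for illustration, not taken from any real system. Each layer hides the detail of the one beneath it and speaks in language closer to the real world the further up you go.

```python
# A hypothetical three-layer slice: storage at the bottom, domain rules in
# the middle, and the user-facing layer on top. Each layer only talks to
# the one directly beneath it.

class OrderRepository:
    """Lowest layer shown here: how orders are actually stored and fetched."""

    def __init__(self):
        self._orders = {42: {"items": ["keyboard", "mouse"], "total_pence": 4599}}

    def load(self, order_id: int) -> dict:
        return self._orders[order_id]


class OrderService:
    """Domain layer: business language, no knowledge of screens or storage details."""

    def __init__(self, repository: OrderRepository):
        self._repository = repository

    def summarise(self, order_id: int) -> str:
        order = self._repository.load(order_id)
        total_pounds = order["total_pence"] / 100
        return f"{len(order['items'])} items, £{total_pounds:.2f}"


def show_order_page(service: OrderService, order_id: int) -> None:
    """Top layer: the part the user actually sees."""
    print(f"Your order: {service.summarise(order_id)}")


if __name__ == "__main__":
    show_order_page(OrderService(OrderRepository()), 42)
```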

Between the extremes of the top and the bottom layers, there is a journey… From the visual and less formal, towards achieving a result via the inevitable rigour and formality of the lower layers… and back again to the real world.

There simply is no way to avoid the need for that eventual lower-level formality, as the software must run on a machine that requires it and understands no subtlety. It is the layers and abstractions we choose to create en route, and how much we decide to invest in them, that dictate the experienced functional and non-functional quality of the overall end result: whether it works as intended, whether it always works as intended, how quickly it responds, whether it scales, how much further investment is required to change it later, etc.

It is almost as if our acknowledgement of and attitude to the non-obvious layers, formalities and rigour has an impact on the visible quality of the end result we achieve. Again, though we may be forgiven for not realising it.

Choice and investment in the layers isn’t about black-and-white decisions, but rather a series of compromises and considered choices, usually made entirely within the engineering team. These choices can sometimes be hard to explain or justify outside of that team, which is why I’ve found this explanation of layers can be useful.

It is possible not to invest much at all and to pick a more direct path so that end results are delivered quickly. This is often sensible to validate ideas before further investment. It is also possible to over-invest, so things are delivered slowly but where much of the effort never impacts the end result and is wasted.

Factors to consider investing in, that have a knock-on effect on the end result, are whether the chosen layers and abstractions are understandable (using language and concepts reflecting the real world), testable to minimise mistakes, reusable to minimise wasted code, loosely coupled with others (so changes in one don’t unnecessarily impact another), documented so further work is easier, and structured so that they can be adapted when the need for change arises, as it inevitably will.

Under-investment in various software layers, or their restructuring, can result in what we call “technical debt”. This is where the work required to make further changes at those levels is hampered either by illogical structure, redundant code, quick workarounds or lack of prior thought. Some degree of technical debt is understandable but, beyond a certain point, it will impact the overall result, either in terms of quality or the cost and timescales for making changes.

Luckily, with software, we can retrospectively invest in these layers and even change our choices about which layers to focus on. This allows us to begin lightly, with less investment, to get a result or to validate whether an idea has benefit, then pick layers to invest in for stability, quality, longer-term results, etc. This retrospective investment is what engineers sometimes call “hardening”, and the changes it implies are often referred to as “refactoring”. They may result in no apparent change, but they are retrospective investment that may drastically affect the quality of the end result, whether now or in the future.
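
As a purely hypothetical illustration of that kind of retrospective investment, here are two versions of the same behaviour. Nothing visible changes for the user (the assertion at the end passes), but the refactored version separates the pricing rule into its own, independently testable piece.

```python
# Before: a pricing rule tangled into the request handler.
def handle_checkout_before(items):
    total = 0
    for item in items:
        price = item["price"]
        if item.get("on_sale"):
            price = price * 0.9  # 10% sale discount
        total += price
    return {"status": "ok", "total": round(total, 2)}


# After: the same rule, extracted so it can be tested and changed in isolation.
def discounted_price(item):
    """Apply the sale discount (10% off) to a single item."""
    return item["price"] * 0.9 if item.get("on_sale") else item["price"]


def handle_checkout_after(items):
    total = sum(discounted_price(item) for item in items)
    return {"status": "ok", "total": round(total, 2)}


if __name__ == "__main__":
    basket = [{"price": 10.0, "on_sale": True}, {"price": 5.0}]
    # Identical observable behaviour; only the internal structure changed.
    assert handle_checkout_before(basket) == handle_checkout_after(basket)
```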

Choice and investment in software layers, whether visible or non-visible, is an ongoing and never-ending process. It is also a discussion about resourcing, quality, deadlines, compromises and desired outcomes. All that is required is that we remember that the layers are (or should be) there in some form, and that time spent considering them affects the end result in very tangible ways.

Integration-Driven Development

It’s a familiar stereotype: The lone software nerd. Working in isolation. Rarely interacting with anyone.

In reality, this particular stereotype couldn’t be further from the truth. Everything we work on as software developers integrates with something or someone else.

It might be behind the scenes by providing or using an API, or by communicating with another module, an external system or a database. Or if we’re building a User Interface, that’s a visual integration with another person’s expectations and mental model of a task they want to perform. Granted, it’s much more creative and fluid than the ones mentioned earlier, but it’s an integration with another party nevertheless.

Nothing we develop exists in isolation: Integrations and interactions are crucial, and this usually means working with other people, either as end users or as the creators of the systems we need to integrate with. So much for that loner stereotype.

But we’re also human…

When we begin a development task, there’s a natural tendency to go depth-first and to work on a piece as we understand it at first. We like to work on the bit we feel in control of, and to feel “ready” before we expose it to others.

As it turns out, this seemingly productive focus doesn’t lead to our “best” work once revealed; it can often be detrimental, both for us and for the project. Why…?

  • We’re making assumptions about the integration and interactions that are inevitably needed and, by not validating them, risking the need for re-work.
  • We’re delaying exposing our work to others, at which point their own assumptions could be tested, risking re-work on their part.

The answer, contrary to our natural inclinations, seems to be to begin integrating and exposing our work as early as possible.

How to integrate early?

For non-visual work, we can actually build the integration, the API, the interface with the other module or system, as early as possible. This may involve a certain amount of mocking up and simulation to get it working, perhaps using fake data, but we can do this quickly and cheaply as those parts will be thrown away towards the end.

One crucial thing to remember… is to actually throw those mocks away and not to accidentally leave any behind. It’s a common mistake to go live with mock code remaining, so it’s best to mark it in some way, or to use obviously fake data, so that any leftovers are embarrassingly obvious.
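
As a rough sketch of what that might look like in practice (the PaymentGateway interface and every name here are invented for illustration, not taken from any real system), the fake is clearly labelled and its data is deliberately conspicuous:

```python
# Early integration against a mock. The mock is loudly labelled as fake so
# that any leftover use of it is easy to spot before go-live.

class PaymentGateway:
    """The agreed integration point; the real implementation arrives later."""

    def charge(self, account_id: str, amount_pence: int) -> str:
        raise NotImplementedError


class MockPaymentGateway(PaymentGateway):
    """FAKE: for early integration only. Delete before go-live."""

    def charge(self, account_id: str, amount_pence: int) -> str:
        # An obviously fake reference makes any stray mock visible in logs.
        return f"FAKE-TXN-{account_id}-{amount_pence}"


def checkout(gateway: PaymentGateway, account_id: str, amount_pence: int) -> str:
    # Callers can integrate against the interface today; the real gateway
    # slots in behind it later without further changes here.
    reference = gateway.charge(account_id, amount_pence)
    return f"Order confirmed, payment ref {reference}"


if __name__ == "__main__":
    print(checkout(MockPaymentGateway(), "ACME-TEST-USER", 4999))
```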

For visual work, a lower-quality version with just the interactions working is a good starting point; a form of living wireframes, minimally demonstrable. Something people can play with before any visual polish is added.

There are undeniable challenges I’m glossing over here: the incremental journey from living wireframes to pixel-perfect renderings of designs can be a difficult transition, and there are fundamental differences between the two. Design work often leads to an internal restructuring of visual implementations. That said, these are problems worth tackling, as exposing something people can interact with early potentially prevents much rework anyway.

Validating assumptions about the complex and nuanced interaction with human beings is best done as early as possible.

Visual work also needs to integrate with and test assumptions about the data and other non-visual interactions upon which it will rely, if only informally, as mistakes here can also lead to rework.

What improves?

If we can convince ourselves to integrate early, and on an ongoing basis…

  • We expose our work earlier to others. It’s not only helpful but healthy. Perhaps time to get over our natural tendencies to perfect then reveal.
  • We often learn something unexpected that shapes how we build the bit we’re working on, and so avoids rework.
  • Expectations and assumptions can be tested; both ours and those of others.
  • A basic working version of the overall system is achievable earlier, whereby end-to-end assumptions can be tested, perhaps with stakeholders, and problems spotted.
  • We avoid the last-minute integration and “Oh, s***!” moments that so often lead to project delays.

What are the challenges?

Just like integration, change is inevitable.

Doesn’t this mean we should integrate later, once we “know everything”? Don’t early integrations just set us up for rework when change occurs?

I’d argue that early integration means we know who else is affected, and how, so we can swiftly make the change and fix the integration, rather than storing up the change for disclosure during late integration. It’s better to absorb the impact of the change early, including any knock-on effect. Learning can take place sooner and in all the places it is needed.

Due to earlier coupling, we do however need to be mindful of timings and upfront about the impact on other people, rather than just “breaking” integrations and waiting for others to discover them. Checking that others are ready to absorb the change minimises disruption and provides a valuable way to communicate it. We need to move sympathetically with the needs of others.

Rather than being inevitable victims of change, early integrations are also a valuable source of it. Without early integration, we may be delaying discovery of the need for change and storing up the disruption for later.


Done early and on an ongoing basis, this breadth-first approach to integration not only avoids late discovery of problems and minimises the need for re-work… it also makes software development quite an interactive activity.

So much for the loner stereotype!

“Don’t worry about the details”,… said no software developer ever

As developers, we spend our days deep in details, usually a thousand and one of them, combining and orchestrating them up to the point where they collectively deliver something tangible to other — often non-technical — people.

Up and down the stack we go, from the bigger picture down to concrete implementation details, and back up again.

Good software developers can traverse that stack all day long; speaking to people or explaining concepts at all levels… in language appropriate to that level, using abstractions and terminology that hide the detail below, but always being aware that the detail is still there,… waiting to be dealt with.

We know the bottom line is that the details have to work, or there is no bigger picture.

The trick is learning the right times to get into those details, how deep to go, and when to acknowledge that problems at one level have an impact on a wider scope. What we try not to do is get lost in the details in ways that aren’t relevant to someone who just wants to know about the overall solution or a particular layer of it. We try not to baffle you or drag you down the rabbit hole with us,… and sometimes we succeed.

In many other professions, you operate at one level and remain there. Not so with software, where we work with the details but orchestrate and explain them at a level appropriate to many audiences, whatever they might need. We deal with customers, other developers and everyone in between.

So if you ask a software developer about a solution or talk to us about a problem… we’re usually traversing the stack as we talk, picking out what’s relevant… choosing language… hiding details, unless they impact you… storing up things irrelevant to you that need investigating later… trying to be open without being complex,… honest but not alarmist. And if we’re good, you can’t tell we’re doing it.

The details matter to all of us eventually. But it’s our job, as developers, to figure them out.

The Introvert and the Startup

Perhaps nowhere is the phrase “one size fits all” more misplaced than when talking about work environments and company culture. This is particularly true if, like me, you just happen to be something of an introvert.

At startups with aggressive growth and targets to hit, we try to foster a culture in which people have the opportunity to contribute their strengths and abilities, whilst creating an environment in which they want to spend a significant amount of their time. We also want to have some fun to offset all the hard work.

The creative options and flexibility for doing this at small startups simply can’t be matched in the wider corporate world, so it is no wonder that startups tend to have some of the most enviable work environments and cultures.

But getting it right involves a complex interplay of factors when catering for a diverse workforce, and talented people bring their whole personality to work. Tweak one environmental or cultural aspect to suit certain people or to get results, and you often find you’ve ruined another aspect for someone else. Worst of all, you may even hamper their ability to contribute.

There is perhaps no group that this applies to more than introverts, as many startup environments can be somewhat charged up and geared towards the high-energy needs of extroverts.

introvert, noun /ˈɪntrəvəːt/

A person who is more energised and stimulated by spending time alone or in low-key situations, than with others, and who finds certain group interactions exhausting.

To my amusement, I find it easier to tell people that I’m gay than to admit being an introvert. The former rarely causes a ripple. But the latter is often the start of a grand inquisition: “What’s an introvert?” “Why do you say that?” “You don’t seem like one” “We don’t need introverts here” “You just need to come out of your shell and let your hair down”… All precisely the kinds of confrontation an introvert wants to avoid at any cost. It’s no wonder we usually try to fly under the radar.

It is said that people are hard-wired somewhere on the introversion / extroversion spectrum, or that we learn it very early on in life and it can be incredibly hard to unlearn. This has certainly been my experience. But what we are rarely told are the upsides to being introverted, particularly in environments where you might think it would be a weakness, such as at startups. Indeed, a mix of introverts and extroverts may be the ideal team configuration… but only if handled thoughtfully.

So what are some of the unexpected strengths of introverts?

  • Leading – Many famously great leaders are reported to be introverts, Bill Gates and Abraham Lincoln among them. Perhaps introverts make good leaders because it didn’t come naturally to us: we had to learn it, and we tend to have a more considered leadership style that we adapt to the needs of those we are working with. We also have less of a tendency to micromanage, choosing instead to let people exercise their abilities and prove themselves to us.
  • Presenting, Explaining – Strangely, I have no difficulty standing up in front of 200 people to give a presentation or to explain a technology. But after presenting, being part of the audience as a crowd can suddenly seem daunting to me. This is perhaps the opposite of what you’d expect.
  • Listening, Insights – Introverts tend to develop good listening skills. We have a tendency to step back from crowds, or to remain at the edges. We also spend more time processing what has been said to us, less time interrupting and, because we naturally need breaks from engaging, we are more likely to spot connections and insights.

But what do introverts find harder, and what might need adjusting in work environments and culture so we can thrive?

  • Discussion, not Confrontation – Discussions that turn into confrontations, or environments that verge on being boisterous, can be tough for introverts. We tend to retire rather than engage, feeling shut down rather than fired up. The precise opposite of how extroverts react. When discussions stick to facts, rationales and explanations rather than the raising of voices, introverts can actively and even energetically engage and, as mentioned above, may have an entirely different set of valuable insights to offer. Conversations can still be very animated and productive.
  • Space to Recharge – We tend to need time in which we are likely not to need to engage with others, so we can digest, consider and recharge. We also need the physical space to do that in. If an environment is socially charged non-stop, introverts can rapidly be worn down, drained and become detached.
    Good environments allow a mix of engagement and focus, with meetings and work spaces kept separate where possible. Some would say this limit on engagement is a downside, preferring constant communication instead, but I’d say it naturally timeboxes interaction and forces more considered work to be scheduled. No-one, not even extroverts, can stay engaged, communicative and effective the whole time. Open-plan spaces can be a drain on everyone, not just introverts.

So perhaps the strengths we’re recruiting for in fast-paced startup environments might occasionally come aligned with a personality trait we’ve usually considered to be a weakness: introversion.

Introverts comprise up to one-third of the total workforce, so getting things right for a mix of personality types could be a significant win for startups looking to build great teams and great places to work. And if startups can get this right, the wider corporate world might follow.

Headspace: Perhaps the Most Urgent Thing?

Nothing of any significance or impact was ever created by a scattered mind.

Great things are not built in a rushed series of truncated thoughts, punctuated by distractions. They are created by getting the headspace to go deep, to stay there a while, and to surface with new insights, structure and conclusions.

Even with a eureka moment, a spark of brilliance or a chance connection, those initial insights take focus and refinement to develop, distill and simplify; to make them usable.

In fast-paced modern work environments, it’s clear that most of us aren’t managing to achieve headspace or to go deep into our work with any regularity. Our days seem like a series of distractions: email, phone calls, chat, meetings and being ever-available. Busy-ness, and particularly visible busy-ness, is a medal we seem to wear with pride.

But it’s likely that we’re barely scratching, then re-scratching, the surface of the work in front of us. We’re probably repeating large portions of it, as we absorb the cognitive switch away and back again. If we’re lucky, we’re gradually making progress in the right direction, though it will be hard-won, inefficient and lacking any real insight or depth. We may find ourselves achieving most of our meaningful work outside of office hours, which implies our work environment isn’t… well, isn’t working.

Most worryingly, we may be killing our ability to innovate, to compete and to deliver value… which is surely why we’re at work in the first place?

After 20 years working in teams, I spent a year working alone on two startup ideas… yet initially I still found it hard to focus. Surely, I had assumed, headspace would be simple to achieve alone?! It seemed that the learned expectation of interruption and the numbing array of possibilities to do other lesser things to get a hit of achievement kept me from focussing deeply for any duration. I had become a busy-ness and achievement junkie. At first, I seriously wondered whether I had some kind of attention deficit problem and so I turned to tools like the Pomodoro Technique to regain my headspace.

I gradually learned to commit to 25-minute periods of focused work, during which I would let nothing else (including my own thoughts and other interests) distract me, and then to take a short break or work on something less involved. Before long, I could crank out 6-10 of these “pomodoros” per day without too much struggle, and my ability to focus on my own returned.
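
The rhythm itself is simple enough to sketch as a toy script; the 25-minute focus period is the one I used, and the 5-minute break is the technique’s usual default rather than anything more scientific.

```python
# A toy Pomodoro timer: focus, short break, repeat. Running it simply
# blocks while you work, then nudges you when it is time to stop.

import time

FOCUS_MINUTES = 25
BREAK_MINUTES = 5


def pomodoro(cycles: int = 4) -> None:
    for n in range(1, cycles + 1):
        print(f"Pomodoro {n}: focus for {FOCUS_MINUTES} minutes…")
        time.sleep(FOCUS_MINUTES * 60)
        print(f"Pomodoro {n} done. Break for {BREAK_MINUTES} minutes.")
        time.sleep(BREAK_MINUTES * 60)


if __name__ == "__main__":
    pomodoro()
```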

During that year of regular focus, the work I produced felt significant: I made leaps forward, created two products and wrote my own UI framework. These achievements would never have been possible in my usually-scattered frame of mind. All of them were the result of focus and depth… of finding regular headspace.

Even more so in teams, the ability to carve out periods of headspace is key to getting work done together. But balancing it with being available on-demand is also crucial. We simply can’t schedule the needs of other people and the wider team, nor block them out to pre-determined times. But perhaps, after a landslide of unscheduled requests, we can push back a little, pop on a pair of headphones (a visible sign of focus), and get 30 minutes of focused work done before accepting more outside input. Oscillating in this way between being available and being focussed allows a portion of our time to be allocated to deeper work, even if we have to be a little creative and flexible about precisely which hours of the day that ends up being.

Respecting Headspace: It may seem that getting immediate answers from colleagues, and being available to provide them ourselves, keeps a work environment moving at maximum speed. But there is a cost to the team, and therefore to the project, for that immediate answer, paid in terms of the headspace of the person providing it. It may feel as if asking in-person is more productive and that chat or email are inefficient ways of getting an answer, but those tools allow the person providing the answer to crank out a series of such answers during a quick break from other work, and perhaps still within a relatively short timeframe. More urgent interruptions can still take place, but they must be balanced with the permission to block out further interruptions for a period of time that each person chooses themselves. Our headspace and their headspace will rarely occur in-sync.

Deserving Headspace: There is a flipside to this. In order to be allowed to focus, we must also prove ourselves capable of handling those queued requests in a responsive and reliable manner. Ignoring emails, and never scanning chat channels for updates, are certain ways to force people to interrupt us with more urgency, to grab our attention and to destroy our focus. If people learn that we will always respond within a reasonable timeframe, they will happily learn to queue non-urgent work for us.

Respecting the headspace of others may improve the output and depth of work produced by the whole team, by reducing cognitive switching. It probably also doesn’t greatly delay the answers our interruptions require, and it can be mixed with much more interactive sessions, standups, meetings and brainstorming to get the best of both worlds.

Headspace in teams, but also alone, isn’t easy to achieve. But what’s clear is that without it, the work we produce and the things we create will be mere shadows of what we’re capable of.

Maybe headspace and our ability to focus really are, somewhat counterintuitively, the most urgent things of all?




“Someone” Doesn’t Work Here Any More


“Someone should fix the…”

“Someone will need to…”

I hear it all the time. I even say it. There is always so much for “someone” to do.

But “someone” is a problem: A cause of forgotten issues, broken links, missing pieces, out-of-date documents, poor integrations. “Someone” clearly isn’t working.

“Someone”, when uttered, tends to mean “not me”. A way of shrugging off the item, of distancing oneself from it.

We need to ban “someone”, and instead…

Find “someone”… or a team. Or nominate one. We should make sure they will take on the issue, or put it on their list, without dumping it on them. We should check they will own it. We should look on it like a baton handover in a relay race, rather than the throwing of a grenade from afar.

Make “someone”… if there is no-one specific. We should make a placeholder list for issues and items in the same category. We should make sure a person or team picks them up at the right time, or keep a prominent record of those lists for review. We should make sure they are easily found and not lost. This is often necessary before hiring the first person in a department, such as marketing. A placeholder list might be needed for that team, until they exist.

Become “someone”… The ultimate is to become the “someone” for issues we could tackle or own, or just be willing to hold until a more appropriate “someone” can be found. Sometimes this means becoming “someone” in conjunction with another person, because these are often the difficult items that require collaboration and pushing forward to completion, particularly if they fall between the remit of several teams.

In rapidly-growing startups, particularly, we all collectively need to step into the gaps between teams, the places that aren’t owned by anyone or any process yet, and fill them. There can be a lot of “someones” in startups, if we aren’t careful.

The truth is, unless we are all “someone” from time to time, then the task will default to our other favourite friend, who is always more than willing to take it: “No-one”!

The Three Hardest Words: “I Made This”

“I made this” is what we say by implication when we publish, send, launch, perform or unveil work that is entirely our own. Three wonderful, difficult and daunting words.

We are the only ones who can choose when — or whether — to say them, whether to hit “publish”… whether to take the risk.

Solo entrepreneurs, writers, bloggers, artists, performers, artisans all know the feeling. But even in a team at a small startup, we may feel this to some degree because we may singlehandedly represent some entire aspect of a company, a product or a service.

Our work represents us. Once we unveil it, it is out there and usually can’t be retracted. There is no-one else to stand behind it, to share the blame or the praise. No large corporation to take the flak. No anonymity. It’s like publishing a condensed version of ourselves; a snapshot of what we are currently capable of, warts and all, for the world to see.

It takes nerve to accept feedback – good, bad and indifferent – and still stand by our work, to discuss potential improvements to it, or the inevitable choices we could have made. There is no perfect launch and nothing is ever truly “ready”.

The crucial thing is realising we did our best at the time, that no-one is 100% correct, and still being glad that we put it out there. It takes guts to realise we will inevitably want to retract, delete, smash it to pieces, or to endlessly correct it based on feedback — or the perceived feedback of indifference — but to keep it out there anyway, in its original form.

So far, I’ve done this with two software products I developed alone, this blog, photographs and other articles. I have friends who have done it with startups, artwork, experimental new food products, books and performance art. I’m proud to count myself among them because I think people who have done this are amazing.

I’m convinced that we learn more by putting our work in front of an audience than we possibly ever could by thinking and agonising about it in private, by tweaking it and waiting until we feel like it is somehow worthy or “ready”. Those lessons feed back into our next attempts, but only if we make our first attempts.

If you’re in doubt, it’s probably ready. So take the leap, and say “I made this”.

Why Software Engineering Isn’t Engineering, and the Implication for Deadlines

Software engineering estimates and plans often fail to live up to the reality that follows. It seems to be the only engineering discipline in which this is the rule rather than the exception.

Before you leap to the defence of software projects you’ve worked on — perhaps even your own — think about how many times those projects delivered precisely what they said they would, and in the timeframe they originally promised. If yours did, you’re probably in a happy minority, or you’ve already embraced some of the realities I’m highlighting below.

“Engineering”… Really??!

Other than for extremely formal cleanroom and safety-critical systems, the flavour of software development undertaken in most companies isn’t really an engineering discipline at all, despite our ambitions to call it that. But the real problem is that we still predominantly estimate work and plan our activities as if it was, either within the software team or in our interactions with the wider company and its goals.

We architect and break down systems into subsystems, components and tasks, then estimate the resources required for each task. At the end of it all, we are usually surprised when our estimates don’t reflect the reality of what comes next. It is clear that the wider business is often similarly surprised and, therefore, no wonder that software estimates are often viewed with skepticism and disbelief.

Roads, Bridges, Bricks & Mortar

In physical engineering (such as civil or mechanical), estimates are based upon abstractions that are known to be quite accurate and dependable; component parts integrate and combine in ways that are by now well-understood. Each level is based on a foundation that is, at the lowest levels, ultimately based on well-researched principles: physics, chemistry, metals, concrete, glass, water, soil, gravity, etc.

Even the introduction of new modern materials is relatively risk-free as they can be tested, studied and understood in isolation or smaller builds, then integrated into larger builds with fairly predictable results.

That’s not to say that physical engineering doesn’t have its delays, complexities and flaws… but at least there’s the possibility of attempting to plan, reason and engineer them out.

Building on a Foundation of Complexity

Teams building software seem to ascribe to it the same physical characteristics, as if it were a more formal kind of engineering. Whilst it is certainly true that physical laws still govern the electronics and other hardware the software ultimately runs on, most software is built at a level of abstraction several hundred layers above that, and therefore on a foundation of potential complexity we may not understand quite so well.

Even where component parts of our system are well-understood (operating systems, languages, libraries), they can interact with one another, and with the foundations they in turn depend upon, in ways that components rarely would in a physical build.

Add to this the fact that most of our abstractions are about systems the precise likes of which we may have never personally built before, and it becomes clear that we are, at best, working with assumptions.

It seems we often mistake our ability to envisage a final system for the reality of the way that system will actually need to function, and the work involved in building it.

Complexity Appears Late

Problems and complexity appear notoriously late in software projects. Attempts to get an early understanding — such as proofs of concept, or use of spikes in the Agile world — can help, but they often don’t uncover the reality of the actual system we’re building… just one similar to (and therefore potentially totally unlike) it.

The result is that reality rarely adheres to our estimates and plans.

So What?

It is probably high time that we admitted most software engineering isn’t actually engineering, in the strictest sense of the term. The implication is that our estimates and plans — and our ability to hit deadlines — shouldn’t be relied upon in the same way. Embracing this truth, company-wide, might help us.

Secondly, I’d say that Agile, yet again, has something to contribute: We already know that it helps to nail requirements by working in an iterative and evolutionary manner alongside a customer, but it also assists in handling complexity and planning, by removing strict estimates and by working iteratively.

Most Agile “stories” are planned in points, t-shirt sizes, and other schemes, in an attempt to get away from time-based estimates and instead plan based on the perceived complexity and past ability of the team to deliver similar stories. Agile’s iterative approach means we can adjust course as complexity and other problems appear, just as we adjust course by regularly showing early versions of a system to a customer.
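
To illustrate, with entirely invented numbers, planning on past delivery rather than on time estimates looks something like this:

```python
# A rough forecast based on past delivery (velocity), not time estimates.
# All figures are made up for the sake of the example.

recent_sprint_velocities = [21, 18, 24]   # story points actually delivered per sprint
remaining_backlog_points = 130            # perceived complexity, not hours

average_velocity = sum(recent_sprint_velocities) / len(recent_sprint_velocities)
sprints_remaining = remaining_backlog_points / average_velocity

print(f"Average velocity: {average_velocity:.1f} points per sprint")
print(f"Forecast: roughly {sprints_remaining:.1f} sprints of work remaining")
```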

One implication of accepting this is that software projects will be notoriously hard to plan against calendar deadlines. This means the wider company will need to work with software teams in a way that depends less upon deadlines, estimates and plans. Many organisations embrace Agile in their software teams, but not in the wider company. This creates the untenable combination of Agile work practices within the software team but rigid calendar deadlines and deliveries to the wider business… and the largely unacknowledged fact that these two worlds seldom align.

Maybe it’s time we embraced the reality of this less-than-formal “engineering” activity of ours, owned up similarly to the reality of our less-than-dependable “estimates”, and relied more on ways to work with these truths, such as Agile? This may finally mean embracing Agile in a truly company-wide manner, rather than just within software teams.

Hitting Software Deadlines: Essentials Only… then Upgrade

“Deadline” can seem like a dirty word in the software world. But without deadlines, our work wouldn’t see the light of day, get an audience or have an impact. So despite their perceived unpleasantness, I’m all for them.

We tend to plan work towards a deadline in ways that ignore two inherent truths about software development, and therefore often miss them:

  • Complexity appears late: Despite our best efforts to surface it during planning and design activities. This is the reason plans go awry at the eleventh hour and deadlines that seemed achievable are inexplicably missed;
  • Completeness is hard to nail: Specifically, deadline-completeness. This means including only the things that are strictly necessary for a viable release, launch or demo at the deadline. This is likely a smaller list than the one you feel you want to include, but it adds crucial items that are usually not visible to a client, such as productionising, releasing, legal formalities, etc. Many of these are usually slotted in just-in-time before a deadline, or missed. Denial of the need to nail completeness, or discovery of further complexity here, can undermine a deadline.

Maybe we could try something along the following lines, to hit deadlines more often:

  • Find the Essentials for Deadline-Completeness: Begin by ruthlessly separating the work that must be completed for the deadline, but perhaps in a heavily-downgraded form.
    This might mean something releasable to a production environment (in a walled-off manner, minus a final trivial “reveal” on the deadline date), something that discharges legalities, and something that exercises the agreed use cases in a minimal but viable manner.
    Encourage people to write separate stories for bare bones deadline completeness, and then the rest.
    Also write meta stories for items required for completeness that would otherwise be forgotten due to not being visible to a client, such as deployments and productionising;
  • Hit deadline-completeness early: Complete the bare bones deadline-sensitive work as early as possible, and nothing else. In an Agile approach, this should be at least a sprint or two before the deadline itself. You will end up with a very minimal but released (though perhaps walled-off) system that would hit your deadline, but would perhaps disappoint you in a number of ways. Learn to accept this, because you won’t be revealing this version of your work and something much more crucial just happened that is worth celebrating: You already hit the deadline. You are already capable of delivering. So now, you can…
  • Upgrade, iteratively: Add in the work to upgrade that initial complete solution in tight iterations. Each of these should result in a system that is “better” in some way but still deadline-complete; remains releasable, still discharges legal obligations, still exercises the same use cases and perhaps some new and less crucial ones.
    Iterate on these upgrades, in order of decreasing priority, until the deadline (see the sketch just below).
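
As a purely illustrative sketch (every story and priority here is invented), the ordering might look something like this: essentials first, including the non-visible meta stories, then upgrades in decreasing priority.

```python
# Essentials (deadline-completeness) are scheduled before any upgrade;
# upgrades are then taken in decreasing priority until the deadline.

backlog = [
    {"story": "Minimal checkout flow (happy path only)", "tier": "essential", "priority": 1},
    {"story": "Deploy walled-off build to production",   "tier": "essential", "priority": 1},
    {"story": "Cookie / privacy notice (legal)",         "tier": "essential", "priority": 1},
    {"story": "Polished visual design",                  "tier": "upgrade",   "priority": 2},
    {"story": "Saved baskets",                           "tier": "upgrade",   "priority": 3},
    {"story": "Gift wrapping options",                   "tier": "upgrade",   "priority": 4},
]

# Sort key: essentials first (False sorts before True), then by priority.
ordered = sorted(backlog, key=lambda s: (s["tier"] != "essential", s["priority"]))

for item in ordered:
    print(f'{item["tier"]:>9}: {item["story"]}')
```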

When the deadline comes, as it inevitably will, you are guaranteed to hit it in some viable and complete manner. There will always be upgrades you wish you could have included, but they will be the least important ones and, much more crucially, you aren’t left with an incomplete solution that can’t be released, demonstrated or used.

Approaching deadlines in this way means that complexity can be handled as it arises, without derailing the deadline. Hitting deadline-completeness in a minimal way, then upgrading, allows complexity to push out less-crucial upgrades if necessary. Yes, you don’t want it to… but the reality is it might need to. Post-completeness, there is room for the battle between complexity and upgrades to take place, however that plays out and without risking the deadline.

How do you persuade people to adopt this approach? By showing them that everything is still planned to be included, but that deadline completeness is prized first. Assure them that other upgrades, in decreasing priority order, will only be omitted if unforeseen complexities or problems arise. The maximum value possible is still going to be delivered. Nothing has changed but the approach to delivery and risk.

Isn’t this just Agile? If your approach to Agile (and there are many) includes a notion of deadline completeness, perhaps. But who really does Agile that way? After hitting the deadline early, the iterations are Agile-like. But this approach forces early inclusion of aspects that might be unseen to a customer in an Agile world — hence no stories — but which are crucial for the deadline, such as productionising.

Find the essentials and complete them early, then upgrade. By planning in a way that handles complexity like this, maybe we can let it do its worst — as it inevitably will — and still deliver to a deadline?

For a Coherent Software Product… Draw More Diagrams


Unless you’re building it entirely on your own — and even sometimes then — for a unified and coherent software product, you need prominently-shared diagrams of what you intend to build.

Most software products are built by more than one person; more than one mind. Most are built by more than one team and usually across more than one skill or discipline. That’s many backgrounds, many viewpoints… but still one intended product in which they all need to be unified.

No-one would imagine coding an API without a spec, yet we often build systems without a diagram of the architecture, components, boundaries or intended flows. Retrospective diagrams often never happen, or happen way beyond the point at which it became painfully obvious they were needed. We over-rely on verbal signs that we are aligned, without backing them up with a diagram.

Without a shared vision of the intended outcome, we’re likely to translate any purely verbal descriptions into different mental models, and onwards into different technical solutions… which then need to be aligned. “Oh I didn’t realise”, becomes all too common. This is particularly true in places where multiple disciplines overlap: Product owner, UX, front end, back end, operations, marketing, etc.

Everyone sees their piece of the puzzle, but diagrams confirm how they will all fit together. Without them, we introduce opportunities for hidden agendas and unnecessary extras to creep in. Human nature dictates that, where ambiguity exists, we tend to work silently to our own preferences rather than seeking a unified team answer. A diagram can provide that answer, or at least highlight the blank space in which it needs to go.

What to draw?

Whatever you find yourself discussing… but especially user interfaces, architecture, components, flows, and absolutely all inter-team or component boundaries.

How many drawings?

Just enough… but at least the highest levels of the product, then break down where more clarity is required and particularly where team boundaries exist.

Not too many though… or no-one will look at them. Find a happy balance.

What to draw with?

Informality, and a shared medium… Keep diagrams informal, because speed matters if information is to be captured at all. Use a tool for more formal diagrams, but one that everyone can access; the person drawing the diagram will not necessarily be the only one changing it or adding to it later.

Where to draw them?

Visibly! Stuck away in a repository or a wiki where people rarely (or begrudgingly) go, diagrams are next to useless. Get copies of them up on the walls too.

Update them!

An obsolete diagram is worse than no diagram. Think living diagrams!

When a change arises, update the diagram now, even if only with a scribbled annotation on a printout to be added to the original later. This is why informality often wins; there is no pristine copy that needs painstakingly updating.

Assign someone the action of updating the diagram, and make sure that they do.

Encourage diagrams

Make sure whiteboards are available, with pens (if you have to hunt, the moment will pass). Also make sure wall space is available, on which to stick diagrams that will live for longer.

Whiteboards aren’t just for visual disciplines; encourage engineers to draw components, flows and architecture too. Even short-lived diagrams, such as those drawn in one meeting, can unify people at a point in time. Get everyone comfortable with the idea of standing up with a whiteboard pen… and drawing. It should be a daily occurrence.

There should be a blank whiteboard usable by anyone who feels the need, at any time, and preferably within reach. This is a cheap resource, so over-provide. The improvement in communication, and in a coherent product, will be worth it.