For Better Software, Consider the Layers

(Adapted from a discussion I had earlier this year about why we don’t just “code the feature” and what software developers actually do)

Software is undeniably a many-layered thing… but I’ll spare you a bunch of weak comparisons with certain desserts.

Though most of the layers in software are invisible, their selection and our level of investment in them dictate a range of not-so-subtle factors that become apparent in the quality and experience of the end result, either immediately or over time.

On the surface, software interacts with the real world via a User Interface (UI), or perhaps by sending messages and emails to a user. These upper layers are primarily visual and need to mesh well with the user’s own ideas about what they are trying to achieve with the system. This is the world of User Experience (UX), and it is usually possible to describe these layers in terms of stories and to draw them as wireframes or more detailed designs.

Meanwhile, way, way below at some of the very lowest layers, we have code in various programming languages, formally compiled or interpreted, running as components on one or more machines, and communicating via network protocols. These lowest layers — though we may be forgiven for not realising it — are the actual software, as it executes. All of the layers above are simply abstractions; ways of hiding and dealing with the underlying complexity, to help us focus on portions of it and to relate it back to the real world.

What do those abstractions and layers look like? They are the architecture, modules, components, packages, subsystems and other sub-divisions we create to explain software. Though some may be drawn for us, we choose carefully where to draw these lines, and we try to name them so their impact on the real world result, or processing of it, is understandable and uses similar language. We often draw diagrams about them, talk about them and then build them in code.

Between the extremes of the top and the bottom layers, there is a journey… From the visual and less formal, towards achieving a result via the inevitable rigour and formality of the lower layers… and back again to the real world.

There is simply no way to avoid the need for that eventual lower-level formality: the software must run on a machine that requires it and understands no subtlety. What dictates the functional and non-functional quality of the overall end result is the set of layers and abstractions we choose to create en route, and how much we decide to invest in them: whether it works as intended, whether it always works as intended, how quickly it responds, whether it scales, how much further investment is required to change it later, and so on.

It is almost as if our acknowledgement of and attitude to the non-obvious layers, formalities and rigour has an impact on the visible quality of the end result we achieve. Again, though we may be forgiven for not realising it.

Choice and investment in the layers isn’t about black-and-white decisions, but rather a series of compromises and considered choices, usually made entirely within the engineering team. These choices can sometimes be hard to explain or justify outside of that team, which is why I’ve found this explanation of layers can be useful.

It is possible not to invest much at all and to pick a more direct path so that end results are delivered quickly. This is often sensible to validate ideas before further investment. It is also possible to over-invest, so things are delivered slowly but where much of the effort never impacts the end result and is wasted.

Factors to consider investing in, that have a knock-on effect on the end result, are whether the chosen layers and abstractions are understandable (using language and concepts reflecting the real world), testable to minimise mistakes, reusable to minimise wasted code, loosely coupled with others (so changes in one don’t unnecessarily impact another), documented so further work is easier, and structured so that they can be adapted when the need for change arises, as it inevitably will.
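As a minimal sketch of what investing in a loosely coupled, understandable layer boundary might look like (the names here are invented for illustration, not taken from any real system), a small interface can hide one layer's details from the layers above it:

```python
# Hypothetical sketch: a storage layer hidden behind a small, clearly
# named interface, so callers are loosely coupled to its implementation.
from typing import Protocol


class NoteStore(Protocol):
    """The layer boundary: the only contract callers depend on."""

    def save(self, title: str, body: str) -> None: ...
    def load(self, title: str) -> str: ...


class InMemoryNoteStore:
    """One concrete layer; a database-backed store could replace it
    later without changing any code above this boundary."""

    def __init__(self) -> None:
        self._notes: dict[str, str] = {}

    def save(self, title: str, body: str) -> None:
        self._notes[title] = body

    def load(self, title: str) -> str:
        return self._notes[title]


def publish_note(store: NoteStore, title: str, body: str) -> None:
    # Depends only on the interface, not on any concrete store,
    # which also makes this function easy to test in isolation.
    store.save(title, body)
```

Because `publish_note` knows nothing about how notes are stored, the storage layer can be restructured, hardened or swapped later without rework rippling upwards.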

Under-investment in various software layers, or their restructuring, can result in what we call “technical debt”. This is where the work required to make further changes at those levels is hampered either by illogical structure, redundant code, quick workarounds or lack of prior thought. Some degree of technical debt is understandable but, beyond a certain point, it will impact the overall result, either in terms of quality or the cost and timescales for making changes.

Luckily, with software, we can retrospectively invest in these layers and even change our choices about which layers to focus on. This allows us to begin lightly, with less investment, to get a result or to validate whether an idea has benefit, then pick layers to invest in for stability, quality, longer-term results, etc. This retrospective investment is what engineers sometimes call “hardening”, and the changes it implies are often referred to as “refactoring”. They may result in no apparent change, but they are retrospective investment that may drastically affect the quality of the end result, whether now or in the future.

Choice and investment in software layers, whether visible or non-visible, is an ongoing and never-ending process. It is also a discussion about resourcing, quality, deadlines, compromises and desired outcomes. All that is required is that we remember that the layers are (or should be) there in some form, and that time spent considering them affects the end result in very tangible ways.

Integration-Driven Development

It’s a familiar stereotype: The lone software nerd. Working in isolation. Rarely interacting with anyone.

In reality, this particular stereotype couldn’t be further from the truth. Everything we work on as software developers integrates with something or someone else.

It might be behind the scenes by providing or using an API, or by communicating with another module, an external system or a database. Or if we’re building a User Interface, that’s a visual integration with another person’s expectations and mental model of a task they want to perform. Granted, it’s much more creative and fluid than the ones mentioned earlier, but it’s an integration with another party nevertheless.

Nothing we develop exists in isolation: Integrations and interactions are crucial, and this usually means working with other people, either as end users or as the creators of the systems we need to integrate with. So much for that loner stereotype.

But we’re also human…

When we begin a development task, there’s a natural tendency to go depth-first and to work on a piece as we understand it at first. We like to work on the bit we feel in control of, and to feel “ready” before we expose it to others.

As it turns out, this seemingly productive focus rarely leads to our “best” work; once revealed, it can often prove detrimental, both for us and for the project. Why…?

  • We’re making assumptions about the integration and interactions that are inevitably needed and, by not validating them, risking the need for re-work.
  • We’re delaying exposing our work to others, postponing the point at which their own assumptions can be tested and risking re-work on their part.

The answer, contrary to our natural inclinations, seems to be to begin integrating and exposing our work as early as possible.

How to integrate early?

For non-visual work, we can actually build the integration, the API, the interface with the other module or system, as early as possible. This may involve a certain amount of mocking up and simulation to get it working, perhaps using fake data, but we can do this quickly and cheaply as those parts will be thrown away towards the end.

One crucial thing to remember… is to actually throw those mocks away and not accidentally leave any behind. It’s a common mistake to go live with mock code still in place, so it’s best to mark it clearly, or to use such obviously fake data that any leftovers are embarrassingly obvious.
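A hypothetical sketch of this idea (the service, method names and prices are all invented for illustration): a throwaway mock that satisfies the agreed interface early, is loudly labelled, and returns data too absurd to ship by accident.

```python
# Hypothetical sketch: an early stand-in for a real pricing service,
# assuming the agreed integration is a get_price(sku) call.

class MockPricingService:
    """TEMPORARY MOCK for early integration -- DELETE BEFORE RELEASE."""

    def get_price(self, sku: str) -> float:
        # Obviously fake price, so a leftover mock is embarrassingly
        # visible the moment anyone sees a real total.
        return 999_999.0


def checkout_total(pricing, skus: list[str]) -> float:
    """Caller code, written against the interface; it works unchanged
    when the real pricing service replaces the mock later."""
    return sum(pricing.get_price(sku) for sku in skus)


if __name__ == "__main__":
    print(checkout_total(MockPricingService(), ["ABC-1", "ABC-2"]))
```

The caller integrates against the real interface from day one; only the mock is thrown away when the genuine service arrives.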

For visual work, a lower-quality version with just the interactions working is a good starting point; a form of living wireframes, minimally demonstrable. Something people can play with before any visual polish is added.

There are undeniable challenges I’m ignoring here, as the incremental development from living wireframes to pixel-perfect renderings of designs can be a difficult transition, glossing over fundamental differences between the two. Design work often leads to an internal restructuring of visual implementations. That said, these are problems worth tackling, as exposing something people can interact with early potentially prevents much rework anyway.

Validating assumptions about the complex and nuanced interaction with human beings is best done as early as possible.

Visual work also needs to integrate with and test assumptions about the data and other non-visual interactions upon which it will rely, if only informally, as mistakes here can also lead to rework.

What improves?

If we can convince ourselves to integrate early, and on an ongoing basis…

  • We expose our work earlier to others. It’s not only helpful but healthy. Perhaps time to get over our natural tendencies to perfect then reveal.
  • We often learn something unexpected that shapes how we build the bit we’re working on, and so avoids rework.
  • Expectations and assumptions can be tested; both ours and those of others.
  • A basic working version of the overall system is achievable earlier, whereby end-to-end assumptions can be tested, perhaps with stakeholders, and problems spotted.
  • We avoid the last-minute integration and “Oh, s***!” moments that so often lead to project delays.

What are the challenges?

Just like integration, change is inevitable.

Doesn’t this mean we should integrate later, once we “know everything”? Don’t early integrations just set us up for rework when change occurs?

I’d argue that early integration means we know who else is affected, and how, so we can swiftly make the change and fix the integration, rather than storing up the change for disclosure during late integration. It’s better to absorb the impact of the change early, including any knock-on effect. Learning can take place sooner and in all the places it is needed.

Due to earlier coupling, we do however need to be mindful of timings and upfront about the impact on other people, rather than just “breaking” integrations and waiting for others to discover them. Checking others are ready to absorb the change minimises disruption and provides a valuable way to communicate the change. We need to move sympathetically with the needs of others.

Rather than seeing early integrations as inevitable victims of change, we can treat them as a valuable source of it. Without early integration, we may be delaying discovery of the need for a change and storing up the disruption for later.

 

Practised early and on an ongoing basis, this breadth-first approach to integration not only avoids late discovery of problems and minimises the need for re-work… it also makes software development quite an interactive activity.

So much for the loner stereotype!