For Better Software, Consider the Layers

(Adapted from a discussion I had earlier this year about why we don’t just “code the feature” and what software developers actually do)

Software is undeniably a many-layered thing… but I’ll spare you a bunch of weak comparisons with certain desserts.

Though most of the layers in software are invisible, their selection and our level of investment in them dictate a range of not-so-subtle factors that become apparent in the quality and experience of the end result, either immediately or over time.

On the surface, software interacts with the real world via a User Interface (UI), or perhaps by sending messages and emails to a user. These upper layers are primarily visual and need to mesh well with the user’s own ideas about what they are trying to achieve with the system. This is the world of User Experience (UX), and it is usually possible to describe these layers in terms of stories and to draw them as wireframes or more detailed designs.

Meanwhile, way, way below at some of the very lowest layers, we have code in various programming languages, formally compiled or interpreted, running as components on one or more machines, and communicating via network protocols. These lowest layers — though we may be forgiven for not realising it — are the actual software, as it executes. All of the layers above are simply abstractions; ways of hiding and dealing with the underlying complexity, to help us focus on portions of it and to relate it back to the real world.

What do those abstractions and layers look like? They are the architecture, modules, components, packages, subsystems and other sub-divisions we create to explain software. Though some may be drawn for us, we choose carefully where to draw these lines, and we try to name them so that their impact on the real-world result, or the processing of it, is understandable and described in similar language. We often draw diagrams of them, talk about them and then build them in code.
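To make that concrete, here is a minimal, purely hypothetical sketch (in Python, with invented names) of how such dividing lines can show up in code: each layer is named after the real-world idea it represents and deals only with the layer directly beneath it.

    # Hypothetical layers for a small ordering system; names are illustrative only.

    # Data layer: hides how and where orders are stored.
    class OrderStore:
        def __init__(self):
            self._orders = {}

        def save(self, order_id, items):
            self._orders[order_id] = items

        def load(self, order_id):
            return self._orders.get(order_id, [])

    # Service layer: expresses the real-world rules, unaware of storage details.
    class OrderService:
        def __init__(self, store):
            self._store = store

        def place_order(self, order_id, items):
            if not items:
                raise ValueError("An order needs at least one item")
            self._store.save(order_id, items)

    # UI layer: only translates between the user's world and the layers below.
    def handle_order_form(service, form):
        service.place_order(form["id"], form["items"])
        return "Thanks, your order is in!"

Nothing in that sketch is clever; its value is simply that a reader can tell where storage ends, where the rules live, and where the user’s world begins.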

Between the extremes of the top and the bottom layers, there is a journey… from the visual and less formal, towards achieving a result via the inevitable rigour and formality of the lower layers… and back again to the real world.

There simply is no way to avoid the need for that eventual lower-level formality, as the software must run on a machine that requires it and understands no subtlety. The layers and abstractions we choose to create en route, and how much we decide to invest in them, dictate the functional and non-functional quality of the overall end result: whether it works as intended, whether it always works as intended, how quickly it responds, whether it scales, how much further investment is required to change it later, and so on.

It is almost as if our acknowledgement of and attitude to the non-obvious layers, formalities and rigour has an impact on the visible quality of the end result we achieve. Again, though we may be forgiven for not realising it.

Choosing and investing in the layers isn’t a matter of black-and-white decisions, but rather a series of compromises and considered choices, usually made entirely within the engineering team. These choices can sometimes be hard to explain or justify outside that team, which is why I’ve found this explanation of layers useful.

It is possible not to invest much at all and to pick a more direct path, so that end results are delivered quickly. This is often sensible for validating ideas before further investment. It is also possible to over-invest, so that things are delivered slowly and much of the effort never reaches the end result and is wasted.

Factors worth investing in, each with a knock-on effect on the end result, include whether the chosen layers and abstractions are understandable (using language and concepts that reflect the real world), testable (to minimise mistakes), reusable (to minimise duplicated code), loosely coupled with others (so changes in one don’t unnecessarily impact another), documented (so further work is easier), and structured so they can be adapted when the need for change arises, as it inevitably will.
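As a small, hypothetical illustration of the “loosely coupled” and “testable” points (assuming a Python codebase; all names are invented), a layer that depends on a small named abstraction rather than a specific provider can be swapped or tested without disturbing the layers above it:

    # Illustrative only: the layer above depends on a small abstraction,
    # not on any particular email provider.
    class Notifier:
        def send(self, recipient, message):
            raise NotImplementedError

    class EmailNotifier(Notifier):
        def send(self, recipient, message):
            # a real provider call would go here
            print(f"Emailing {recipient}: {message}")

    class FakeNotifier(Notifier):
        """Used in tests, so the layer above can be verified without sending anything."""
        def __init__(self):
            self.sent = []

        def send(self, recipient, message):
            self.sent.append((recipient, message))

    # The ordering layer never needs to change when the provider does.
    def confirm_order(notifier, recipient):
        notifier.send(recipient, "Your order has been placed.")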

Under-investment in various software layers, or in their restructuring, can result in what we call “technical debt”. This is where the work required to make further changes at those levels is hampered by illogical structure, redundant code, quick workarounds or a lack of prior thought. Some degree of technical debt is understandable but, beyond a certain point, it will impact the overall result, either in terms of quality or the cost and timescales of making changes.

Luckily, with software, we can retrospectively invest in these layers and even change our choices about which layers to focus on. This allows us to begin lightly, with less investment, to get a result or to validate whether an idea has benefit, and then pick layers to invest in for stability, quality, longer-term results, and so on. This retrospective investment is what engineers sometimes call “hardening”, and the changes it implies are often referred to as “refactoring”. They may result in no apparent change, but they are a retrospective investment that can drastically affect the quality of the end result, whether now or in the future.
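As a toy illustration, not taken from any real project, a refactoring might produce no change a user could ever see, yet leave the layer far easier to extend:

    # Before: a quick workaround that "just works", but every new product
    # kind means copying and tweaking another line of arithmetic.
    def price_before(kind, qty):
        if kind == "book":
            return qty * 5 - (qty * 5 * 0.1 if qty > 10 else 0)
        if kind == "pen":
            return qty * 1 - (qty * 1 * 0.1 if qty > 10 else 0)

    # After refactoring: identical results, but the pricing rules now live in
    # one understandable place and a new product kind is a one-line change.
    UNIT_PRICES = {"book": 5, "pen": 1}
    BULK_THRESHOLD = 10
    BULK_DISCOUNT = 0.1

    def price_after(kind, qty):
        subtotal = qty * UNIT_PRICES[kind]
        discount = subtotal * BULK_DISCOUNT if qty > BULK_THRESHOLD else 0
        return subtotal - discount

Both versions return the same numbers today; the difference only shows up in how cheaply and safely tomorrow’s change can be made.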

Choice and investment in software layers, whether visible or non-visible, is an ongoing and never-ending process. It is also a discussion about resourcing, quality, deadlines, compromises and desired outcomes. All that is required is that we remember that the layers are (or should be) there in some form, and that time spent considering them affects the end result in very tangible ways.
