For Better Software, Consider the Layers

(Adapted from a discussion I had earlier this year about why we don’t just “code the feature” and what software developers actually do)

Software is undeniably a many-layered thing… but I’ll spare you a bunch of weak comparisons with certain desserts.

Though most of the layers in software are invisible, their selection and our level of investment in them dictate a range of not-so-subtle factors that are apparent in the quality and experience of the end result, either immediately or over time.

On the surface, software interacts with the real world via a User Interface (UI), or perhaps by sending messages and emails to a user. These upper layers are primarily visual and need to mesh well with the user’s own ideas about what they are trying to achieve with the system. This is the world of User Experience (UX), and it is usually possible to describe these layers in terms of stories and to draw them as wireframes or more detailed designs.

Meanwhile, way, way below at some of the very lowest layers, we have code in various programming languages, formally compiled or interpreted, running as components on one or more machines, and communicating via network protocols. These lowest layers — though we may be forgiven for not realising it — are the actual software, as it executes. All of the layers above are simply abstractions; ways of hiding and dealing with the underlying complexity, to help us focus on portions of it and to relate it back to the real world.

What do those abstractions and layers look like? They are the architecture, modules, components, packages, subsystems and other sub-divisions we create to explain software. Though some may be drawn for us, we choose carefully where to draw these lines, and we try to name them so that their relationship to the real-world result, or the processing of it, is understandable and expressed in similar language. We often draw diagrams about them, talk about them and then build them in code.
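To make that a little more concrete, here’s a minimal and entirely hypothetical sketch (in Python, with invented names) of three such layers for an imaginary ordering system, each named after the real-world concept it deals with rather than the technology underneath:

    # Hypothetical ordering system, split into layers named after real-world
    # concepts. Each layer only talks to the one directly beneath it.

    _orders: list[dict] = []  # stand-in for a real database

    def save_order(order: dict) -> int:
        """Storage layer: hides how and where orders are actually kept."""
        _orders.append(order)
        return len(_orders)  # the order's position doubles as its id

    def create_order(customer: str, item: str) -> int:
        """Business layer: applies the rules of what an 'order' means."""
        if not item:
            raise ValueError("An order must contain an item.")
        return save_order({"customer": customer, "item": item})

    def place_order(customer: str, item: str) -> str:
        """User-facing layer: turns a request into a friendly, visible result."""
        order_id = create_order(customer, item)
        return f"Thanks {customer}, order {order_id} for '{item}' is confirmed."

    print(place_order("Alice", "coffee beans"))

The point isn’t the code itself, but that each layer’s name and language mirror the real world above it while hiding the machinery below it.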

Between the extremes of the top and the bottom layers, there is a journey… From the visual and less formal, towards achieving a result via the inevitable rigour and formality of the lower layers… and back again to the real world.

There simply is no way to avoid the need for that eventual lower-level formality, as the software must run on a machine that requires it and understands no subtlety. It’s simply a question of which layers and abstractions we choose to create en route, and how much we decide to invest in them; that choice dictates the experienced functional and non-functional quality of the overall end result: whether it works as intended, whether it always works as intended, how quickly it responds, whether it scales, how much further investment is required to change it later, and so on.

It is almost as if our acknowledgement of and attitude to the non-obvious layers, formalities and rigour has an impact on the visible quality of the end result we achieve. Again, though we may be forgiven for not realising it.

Choice and investment in the layers isn’t about black-and-white decisions, but rather a series of compromises and considered choices, usually made entirely within the engineering team. These choices can sometimes be hard to explain or justify outside of that team, which is why I’ve found this explanation of layers can be useful.

It is possible not to invest much at all and to pick a more direct path, so that end results are delivered quickly. This is often sensible to validate ideas before further investment. It is also possible to over-invest, so that things are delivered slowly and much of the effort never impacts the end result and is wasted.

Factors worth investing in, each with a knock-on effect on the end result, include whether the chosen layers and abstractions are understandable (using language and concepts reflecting the real world), testable to minimise mistakes, reusable to minimise wasted code, loosely coupled with others (so changes in one don’t unnecessarily impact another), documented so further work is easier, and structured so that they can be adapted when the need for change arises, as it inevitably will.
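As a small, hypothetical illustration of two of those qualities, loose coupling and testability, the reporting code below depends only on an abstract “sales source”, so the real database-backed implementation can later be swapped in (or replaced by a cheap fake in tests) without the reporting layer changing at all:

    from abc import ABC, abstractmethod

    class SalesSource(ABC):
        """The boundary between reporting and whatever actually stores the sales."""

        @abstractmethod
        def totals_by_region(self) -> dict[str, float]:
            ...

    def format_sales_report(source: SalesSource) -> str:
        """Reporting layer: knows nothing about databases, only the abstraction."""
        totals = source.totals_by_region()
        return "\n".join(f"{region}: {total:.2f}" for region, total in sorted(totals.items()))

    class FakeSalesSource(SalesSource):
        """Test double with obviously invented numbers, for exercising the report."""

        def totals_by_region(self) -> dict[str, float]:
            return {"North": 1200.0, "South": 850.5}

    print(format_sales_report(FakeSalesSource()))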

Under-investment in various software layers, or in their restructuring, can result in what we call “technical debt”. This is where the work required to make further changes at those levels is hampered by illogical structure, redundant code, quick workarounds or a lack of prior thought. Some degree of technical debt is understandable but, beyond a certain point, it will impact the overall result, either in terms of quality or in the cost and timescales of making changes.

Luckily, with software, we can retrospectively invest in these layers and even change our choices about which layers to focus on. This allows us to begin lightly, with less investment, to get a result or to validate whether an idea has benefit, then pick layers to invest in for stability, quality, longer-term results, etc. This retrospective investment is what engineers sometimes call “hardening”, and the changes it implies are often referred to as “refactoring”. They may result in no apparent change, but they are retrospective investment that may drastically affect the quality of the end result, whether now or in the future.
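As a tiny, invented example of what such a “no apparent change” can look like, the two functions below behave identically from the outside; the refactored version simply gives the pricing rule a name and a single home, making later change cheaper:

    def invoice_total_before(quantity: int, unit_price: float) -> float:
        # Original: the discount rule is buried inline, and duplicated wherever needed.
        total = quantity * unit_price
        if quantity >= 10:
            total = total * 0.9
        return total

    def bulk_discount(total: float, quantity: int) -> float:
        """Extracted rule: 10% off for orders of ten or more."""
        return total * 0.9 if quantity >= 10 else total

    def invoice_total_after(quantity: int, unit_price: float) -> float:
        # Refactored: same behaviour, but the rule is now named and reusable.
        return bulk_discount(quantity * unit_price, quantity)

    assert invoice_total_before(12, 5.0) == invoice_total_after(12, 5.0)
    assert invoice_total_before(3, 5.0) == invoice_total_after(3, 5.0)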

Choice and investment in software layers, whether visible or non-visible, is an ongoing and never-ending process. It is also a discussion about resourcing, quality, deadlines, compromises and desired outcomes. All that is required is that we remember that the layers are (or should be) there in some form, and that time spent considering them affects the end result in very tangible ways.

Integration-Driven Development

It’s a familiar stereotype: The lone software nerd. Working in isolation. Rarely interacting with anyone.

In reality, this particular stereotype couldn’t be further from the truth. Everything we work on as software developers integrates with something or someone else.

It might be behind the scenes, by providing or using an API, or by communicating with another module, an external system or a database. Or, if we’re building a User Interface, that’s a visual integration with another person’s expectations and mental model of a task they want to perform. Granted, it’s much more creative and fluid than the integrations mentioned earlier, but it’s an integration with another party nevertheless.

Nothing we develop exists in isolation: Integrations and interactions are crucial, and this usually means working with other people, either as end users or as the creators of the systems we need to integrate with. So much for that loner stereotype.

But we’re also human…

When we begin a development task, there’s a natural tendency to go depth-first and to work on a piece as we understand it at first. We like to work on the bit we feel in control of, and to feel “ready” before we expose it to others.

As it turns out, this seemingly productive focus doesn’t necessarily lead to our “best” work once revealed; it can often be detrimental, both for us and for the project. Why…?

  • We’re making assumptions about the integration and interactions that are inevitably needed and, by not validating them, risking the need for re-work.
  • We’re delaying the moment at which our work is exposed to others and their own assumptions can be tested, risking re-work on their part.

The answer, contrary to our natural inclinations, seems to be to begin integrating and exposing our work as early as possible.

How to integrate early?

For non-visual work, we can actually build the integration, the API, the interface with the other module or system, as early as possible. This may involve a certain amount of mocking up and simulation to get it working, perhaps using fake data, but we can do this quickly and cheaply as those parts will be thrown away towards the end.

One crucial thing to remember… is to actually throw those mocks away and not accidentally leave any behind. It’s a common mistake to go live with mock code still in place, so it’s best to mark it clearly or to use obviously fake data, so that any leftovers are embarrassingly obvious.
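As a rough sketch of what that might look like (the names and data below are entirely made up), a mock can satisfy the agreed interface early while making it painfully obvious that it must never ship:

    class CustomerApi:
        """The agreed integration point; the real implementation arrives later."""

        def get_customer(self, customer_id: int) -> dict:
            raise NotImplementedError

    class MockCustomerApi(CustomerApi):
        """MOCK -- DO NOT SHIP. Obviously fake data makes leftovers easy to spot."""

        def get_customer(self, customer_id: int) -> dict:
            return {
                "id": customer_id,
                "name": "FAKE FAKE FAKE",
                "email": "not-real@example.invalid",
            }

    # Other teams can integrate against this today and swap in the real API later.
    api: CustomerApi = MockCustomerApi()
    print(api.get_customer(42))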

For visual work, a lower-quality version with just the interactions working is a good starting point; a form of living wireframes, minimally demonstrable. Something people can play with before any visual polish is added.

There are undeniable challenges I’m ignoring here, as the incremental development from living wireframes to pixel-perfect renderings of designs can be a difficult transition that glosses over fundamental differences between the two; design work often leads to an internal restructuring of visual implementations. That said, these are problems worth tackling, as exposing something people can interact with early potentially prevents much rework anyway.

Validating assumptions about the complex and nuanced interaction with human beings is best done as early as possible.

Visual work also needs to integrate with and test assumptions about the data and other non-visual interactions upon which it will rely, if only informally, as mistakes here can also lead to rework.

What improves?

If we can convince ourselves to integrate early, and on an ongoing basis…

  • We expose our work earlier to others. It’s not only helpful but healthy. Perhaps it’s time to get over our natural tendency to perfect, then reveal.
  • We often learn something unexpected that shapes how we build the bit we’re working on, and so avoids rework.
  • Expectations and assumptions can be tested; both ours and those of others.
  • A basic working version of the overall system is achievable earlier, whereby end-to-end assumptions can be tested, perhaps with stakeholders, and problems spotted.
  • We avoid the last-minute integration and “Oh, s***!” moments that so often lead to project delays.

What are the challenges?

Just like integration, change is inevitable.

Doesn’t this mean we should integrate later, once we “know everything”? Don’t early integrations just set us up for rework when change occurs?

I’d argue that early integration means we know who else is affected, and how, so we can swiftly make the change and fix the integration, rather than storing up the change for disclosure during late integration. It’s better to absorb the impact of the change early, including any knock-on effect. Learning can take place sooner and in all the places it is needed.

Due to this earlier coupling, we do, however, need to be mindful of timings and upfront about the impact on other people, rather than just “breaking” integrations and waiting for others to discover them. Checking that others are ready to absorb the change minimises disruption and provides a valuable way to communicate it. We need to move sympathetically with the needs of others.

Rather than seeing early integrations as inevitable victims of change, we should recognise that they are also a valuable source of it. Without early integration, we may be delaying the discovery that a change is needed, and storing up the disruption for later.

 

Done early and on an ongoing basis, this breadth-first approach to integration not only avoids late discovery of problems and minimises the need for re-work… it also makes software development quite an interactive activity.

So much for the loner stereotype!

“Don’t worry about the details”… said no software developer ever

As developers, we spend our days deep in details, usually a thousand and one of them, combining and orchestrating them up to the point where they collectively deliver something tangible to other — often non-technical — people.

Up and down the stack we go, from the bigger picture down to concrete implementation details, and back up again.

Good software developers can traverse that stack all day long; speaking to people or explaining concepts at all levels… in language appropriate to that level, using abstractions and terminology that hide the detail below, but always being aware that the detail is still there,… waiting to be dealt with.

We know the bottom line is that the details have to work, or there is no bigger picture.

The trick is learning the right times to get into those details, how deep to go, and when to acknowledge that problems at one level have an impact on a wider scope. What we try not to do is get lost in the details in ways that aren’t relevant to someone who just wants to know about the overall solution or a particular layer of it. We try not to baffle you or drag you down the rabbit hole with us,… and sometimes we succeed.

In many other professions, you operate at one level and largely remain there. Not so with software, where we work with the details but orchestrate and explain them at a level appropriate to many audiences, whatever they might need. We deal with customers and other developers, and everyone in between.

So if you ask a software developer about a solution or talk to us about a problem… we’re usually traversing the stack as we talk, picking out what’s relevant… choosing language… hiding details, unless they impact you… storing up things irrelevant to you that need investigating later… trying to be open without being complex,… honest but not alarmist. And if we’re good, you can’t tell we’re doing it.

The details matter to all of us eventually. But it’s our job, as developers, to figure them out.

Why Software Engineering Isn’t Engineering, and the Implication for Deadlines

Software engineering estimates and plans often fail to live up to the reality that follows. It seems to be the only engineering discipline in which this is the rule rather than the exception.

Before you leap to the defence of software projects you’ve worked on — perhaps even your own — think about how many times those projects delivered precisely what they said they would, and in the timeframe they originally promised. If yours did, you’re probably in a happy minority, or you’ve already embraced some of the realities I’m highlighting below.

“Engineering”… Really??!

Other than for extremely formal cleanroom and safety-critical systems, the flavour of software development undertaken in most companies isn’t really an engineering discipline at all, despite our ambitions to call it that. But the real problem is that we still predominantly estimate work and plan our activities as if it were, either within the software team or in our interactions with the wider company and its goals.

We architect and break down systems into subsystems, components and tasks, then estimate the resources required for each task. At the end of it all, we are usually surprised when our estimates don’t reflect the reality of what comes next. The wider business is often similarly surprised, so it is no wonder that software estimates are frequently viewed with scepticism and disbelief.

Roads, Bridges, Bricks & Mortar

In physical engineering (such as civil or mechanical), estimates are based upon abstractions that are known to be quite accurate and dependable; component parts integrate and combine in ways that are by now well understood. Each level rests on a foundation that is, at the lowest levels, ultimately grounded in well-researched principles: physics, chemistry, metals, concrete, glass, water, soil, gravity, etc.

Even the introduction of new modern materials is relatively risk-free as they can be tested, studied and understood in isolation or smaller builds, then integrated into larger builds with fairly predictable results.

That’s not to say that physical engineering doesn’t have its delays, complexities and flaws… but at least there’s the possibility of attempting to plan, reason and engineer them out.

Building on a Foundation of Complexity

Teams building software often seem to ascribe to it the same dependable characteristics as more formal engineering. Whilst it is certainly true that there may still be physical laws governing the electronics and other hardware the software ultimately runs on, most software is built at a level of abstraction many layers above that, and therefore on a foundation of potential complexity we may not understand quite so well.

Even where component parts of our system are well-understood (operating systems, languages, libraries), they can interact with one another, and with the foundations they in turn depend upon, in ways that components rarely would in a physical build.

Add to this the fact that most of our abstractions are about systems the precise likes of which we may have never personally built before, and it becomes clear that we are, at best, working with assumptions.

It seems we often mistake our ability to envisage a final system for the reality of the way that system will actually need to function, and the work involved in building it.

Complexity Appears Late

Problems and complexity appear notoriously late in software projects. Attempts to get an early understanding — such as proofs of concept, or use of spikes in the Agile world — can help, but they often don’t uncover the reality of the actual system we’re building… just one similar to (and therefore potentially totally unlike) it.

The result is that reality rarely adheres to our estimates and plans.

So What?

It is probably high time that we admitted most software engineering isn’t actually engineering, in the strictest sense of the term. The implication is that our estimates and plans — and our ability to hit deadlines — shouldn’t be relied upon in the same way. Embracing this truth, company-wide, might help us.

Secondly, I’d say that Agile, yet again, has something to contribute: We already know that it helps to nail requirements by working in an iterative and evolutionary manner alongside a customer, but it also assists in handling complexity and planning, by removing strict estimates and by working iteratively.

Most Agile “stories” are planned in points, t-shirt sizes, and other schemes, in an attempt to get away from time-based estimates and instead plan based on the perceived complexity and past ability of the team to deliver similar stories. Agile’s iterative approach means we can adjust course as complexity and other problems appear, just as we adjust course by regularly showing early versions of a system to a customer.
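As a rough, hypothetical illustration (with invented numbers), past delivery can still be turned into a range the wider business can plan around, without anyone pretending to know exact dates:

    # Points delivered in recent iterations, and the points believed to remain.
    recent_velocity = [21, 18, 24, 19]
    remaining_points = 130

    average = sum(recent_velocity) / len(recent_velocity)
    cautious = min(recent_velocity)

    # A range, not a promise: complexity tends to appear late.
    print(f"Optimistic: ~{remaining_points / average:.1f} more iterations")
    print(f"Cautious:   ~{remaining_points / cautious:.1f} more iterations")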

One implication of accepting this is that software projects will remain notoriously hard to plan against calendar deadlines. This means the wider company will need to work with software teams in a way that depends less upon deadlines, estimates and plans. Many organisations embrace Agile in their software teams, but not in the wider company. This creates the untenable combination of Agile working practices within the software team and rigid calendar deadlines and deliveries to the wider business… along with the largely unacknowledged fact that these two worlds seldom align.

Maybe it’s time we embraced the reality of this less-than-formal “engineering” activity of ours, owned up similarly to the reality of our less-than-dependable “estimates”, and relied more on ways to work with these truths, such as Agile? This may finally mean embracing Agile in a truly company-wide manner, rather than just within software teams.

Why Early UX Should Scratch Deeper than the UI

The early stages of software engineering projects aiming to expose anything more than a trivial User Interface (UI) quite sensibly begin with a heavy focus on the User Experience (UX), as this dictates a great deal of what the software needs to do. This is usually a stage of the project that produces many wireframes, mockups, and a sense of what the user — or different user “personas” — will be attempting to achieve with the software.

This UX-centric approach undeniably works really well, and tends to result in software that delivers a strong, usable user experience.

However, I’ve seen a UX-/UI-only focus in the early stages of some projects lead to blind spots and incorrect assumptions which can force costly rework later in the project. For this reason, I’d assert that early UX work should scratch deeper than just the UI, if only to validate assumptions and, in particular, to confirm what’s actually possible in terms of the underlying data and services.

Server-Side Constraints

Too often, in UX-centric early project stages, the assertion is that the server-side data, APIs and services are much less relevant than the UI. This is often fair, as server-side detail can distract from the goal of designing usable software, but there’s an underlying assumption that needs validating: that the server can deliver anything the UX/UI work dictates it needs.

This is often not the case… If the server side is built on serious data modelling or clever algorithms, those services and that model are often constrained in what they can provide. There may be heavy computational costs involved in exposing certain data in a timely manner, if it’s possible at all.

As always, using specific examples of server-side data in UX work can really help to clarify what’s possible. Even just involving a few token Data / Server-Side engineers can be enough to verbally “sign off” that the server will be able to support what’s required.
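As a hypothetical example of the kind of concrete artefact that helps, even a few lines of representative data, annotated with what’s cheap, what’s expensive and what isn’t currently possible, can ground a UX discussion (everything below is invented):

    # A representative, entirely invented server-side record for UX discussions.
    sample_product = {
        "id": "prod-001",
        "name": "Espresso Machine",
        "price": 249.99,                   # cheap: stored directly
        "stock_level": 14,                 # cheap: updated on every sale
        "similar_products": ["prod-017"],  # expensive: recomputed nightly, may be stale
        "predicted_delivery_date": None,   # not currently possible in real time
    }

    for field, value in sample_product.items():
        print(f"{field}: {value}")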

This form of early full-stack validation can prevent costly rework later, and can even help to direct server-side work appropriately by providing added context about what the UI needs.

Server-Side Data Model and User Mental Model

An added benefit of considering the server-side data model during UX work is that the user’s own mental model of what they are trying to achieve when they use the software is so often a version of that server-side model.

Considering, albeit leanly, how we might model data on the server can help to clarify the ways in which a user might expect (via their own mental model) to visualise, interact with and modify that data. After all, this is effectively what they’re going to be doing by using the software; accessing and manipulating server-side data and services indirectly via a UI.

Even just tying together the correct terminology, so UX and server-side work are talking the same language, can prevent costly disconnects between different parts of the team.
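As a minimal, invented sketch of what “talking the same language” can mean in practice, the server-side model below is named in the user’s own vocabulary, so wireframes, conversations and code all refer to the same things:

    from dataclasses import dataclass, field

    # Hypothetical travel-booking domain, named in the user's own terms:
    # a traveller takes a trip, and a trip is made up of legs.

    @dataclass
    class Leg:
        origin: str
        destination: str

    @dataclass
    class Trip:
        traveller: str
        legs: list[Leg] = field(default_factory=list)

    trip = Trip(traveller="Sam", legs=[Leg("London", "Paris"), Leg("Paris", "Rome")])
    print(f"{trip.traveller}'s trip has {len(trip.legs)} legs.")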

Again, precise examples of server-side data, or a little involvement of the relevant engineers, can really help here.

Lean Approach to Scratching the Full Stack

But doesn’t involving server-side engineers and data scientists in UX discussions slow down those discussions? Doesn’t it negate the value of considering the software purely from the point of view of the user at first?

I’d say not, provided it’s clear that the focus of this stage is still primarily UX, and that any deeper layers in the stack are there to be validated only. Crucially, any details of how the eventual UI will interact with the server should be avoided, unless they too dictate what is possible in the UI. No-one needs to build the server at this stage (though that work might be proceeding in parallel), so long as they can talk in as much detail as required about what it, and its services, will look like.

 

I think this is another example of how, in software engineering, regularly considering the bigger picture, even when performing early focussed work, can avoid costly mistakes. In complex systems, the bigger and the smaller views are often related or constrained in unexpected ways.

In 2015, Tech Will Continue To Be the Enabler, Not the Whole Story


“The Year of Big Data”

“The Year of the Cloud”

“The Year of the Internet of Things”

These are snippets of New Year headlines predicting the year ahead… but from previous years, some as far back as 2011, not just from this January.

These technologies have been around, in some form, for a number of years and are constantly evolving. As with all reasonably new technologies, we have a habit, each January, of hailing this as the year in which they will make their mark. I’d argue that all of them have already made their mark in some way, and will continue to do so as the relevant tech improves. But the real story — the real mark — comes when organisations commit to and make genuine use of these technologies in business and human contexts.

The real Big Data story isn’t Hadoop, or Apache Spark (though they are the tech enablers)… it’s when a business consumes and makes use of petabytes of incoming and historical customer data to make timely decisions that optimise and improve (and, of course, monetise) the experience of each customer. Just warehousing large amounts of data and, theoretically, being able to perform distributed computations against it isn’t really a Big Data story; we need to make use of it, in ways that impact and have the buy-in of the wider business. Many businesses have begun doing this in recent years, and that is the real story.

The real Cloud story isn’t Heroku or Amazon Web Services (AWS), or SaaS, PaaS or anything else ending in “aas” (though they will continue to be the tech enablers)… it was the point at which businesses could make real deployments of their apps & services, beyond physically renting rack space or buying their own hardware, and could scale those deployments up and down at will to suit their ever-changing needs. It was the point at which budding entrepreneurs could use it to do the same, and bootstrap a startup from almost nothing.

The real story of the Internet of Things (IoT) probably isn’t any of the tech-based articles we’ve read so far (though those early devices are examples of tech enablers), but will be written when we actually monitor and respond to our environmental impact, manage infrastructure, optimise energy usage and diagnose or treat medical patients… on a large scale, via the use of connected devices. Some of the devices we’ve seen so far are heading in this direction, but they seem to be more about proofs of concept and novelty, and less about genuine benefit at this stage. So perhaps these are still early days for IoT and we’ve yet to see it make its real mark or write its real story, which will involve far more than the devices themselves.

So whilst it’s great to hail this year (and previous years) as the one in which certain technologies will make their mark, it’s worth remembering that they are merely the tech enablers in a wider business & human story, which is where their true mark will be made. Or else, what is technology really for?