For Better Software, Consider the Layers

(Adapted from a discussion I had earlier this year about why we don’t just “code the feature” and what software developers actually do)

Software is undeniably a many-layered thing… but I’ll spare you a bunch of weak comparisons with certain desserts.

Though most of the layers in software are invisible, their selection and our level of investment in them dictate a range of not-so-subtle factors that are apparent in the quality and experience of the end result, either immediately or over time.

On the surface, software interacts with the real world via a User Interface (UI), or perhaps by sending messages and emails to a user. These upper layers are primarily visual and need to mesh well with the user’s own ideas about what they are trying to achieve with the system. This is the world of User Experience (UX), and it is usually possible to describe these layers in terms of stories and to draw them as wireframes or more detailed designs.

Meanwhile, way, way below at some of the very lowest layers, we have code in various programming languages, formally compiled or interpreted, running as components on one or more machines, and communicating via network protocols. These lowest layers — though we may be forgiven for not realising it — are the actual software, as it executes. All of the layers above are simply abstractions; ways of hiding and dealing with the underlying complexity, to help us focus on portions of it and to relate it back to the real world.

What do those abstractions and layers look like? They are the architecture, modules, components, packages, subsystems and other sub-divisions we create to explain software. Though some may be drawn for us, we choose carefully where to draw these lines, and we try to name them so their impact on the real world result, or processing of it, is understandable and uses similar language. We often draw diagrams about them, talk about them and then build them in code.

Between the extremes of the top and the bottom layers, there is a journey… From the visual and less formal, towards achieving a result via the inevitable rigour and formality of the lower layers… and back again to the real world.

There simply is no way to avoid the need for that eventual lower-level formality, as the software must run on a machine that requires it and which understands no subtlety. It is the layers and abstractions we choose to create en route, and how much we decide to invest in them, that dictate the experienced functional and non-functional quality of the overall end result: Whether it works as intended, whether it always works as intended, how quickly it responds, whether it scales, how much further investment is required to change it later, etc.

It is almost as if our acknowledgement of and attitude to the non-obvious layers, formalities and rigour has an impact on the visible quality of the end result we achieve. Again, though we may be forgiven for not realising it.

Choice and investment in the layers isn’t about black-and-white decisions, but rather a series of compromises and considered choices, usually made entirely within the engineering team. These choices can sometimes be hard to explain or justify outside of that team, which is why I’ve found this explanation of layers can be useful.

It is possible not to invest much at all and to pick a more direct path so that end results are delivered quickly. This is often sensible to validate ideas before further investment. It is also possible to over-invest, so things are delivered slowly but where much of the effort never impacts the end result and is wasted.

Factors worth investing in, each with a knock-on effect on the end result, include whether the chosen layers and abstractions are understandable (using language and concepts reflecting the real world), testable to minimise mistakes, reusable to minimise wasted code, loosely coupled with others (so changes in one don’t unnecessarily impact another), documented so further work is easier, and structured so that they can be adapted when the need for change arises, as it inevitably will.
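To make a couple of those factors concrete, here is a minimal TypeScript sketch (the names and scenario are hypothetical, not taken from any particular project) of what “loosely coupled” and “testable” can look like: an upper layer depends on a small abstraction rather than on a concrete storage implementation, so it can be tested in isolation and the layer beneath it can change freely.

```typescript
// Hypothetical example: the reporting layer depends on an abstraction,
// not on a concrete storage implementation.
interface OrderStore {
  totalForCustomer(customerId: string): Promise<number>;
}

// Upper layer: named in real-world terms, unaware of databases or protocols.
async function buildSpendSummary(store: OrderStore, customerId: string): Promise<string> {
  const total = await store.totalForCustomer(customerId);
  return `Customer ${customerId} has spent ${total.toFixed(2)} to date.`;
}

// Lower layer: one possible implementation; it could be swapped for a database,
// a cache or a remote service without touching the layer above.
class InMemoryOrderStore implements OrderStore {
  constructor(private totals: Record<string, number>) {}
  async totalForCustomer(customerId: string): Promise<number> {
    return this.totals[customerId] ?? 0;
  }
}

// Testable: the upper layer can be exercised with a stub, no real database needed.
buildSpendSummary(new InMemoryOrderStore({ "c-42": 99.5 }), "c-42").then(console.log);
```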

Under-investment in various software layers, or their restructuring, can result in what we call “technical debt”. This is where the work required to make further changes at those levels is hampered either by illogical structure, redundant code, quick workarounds or lack of prior thought. Some degree of technical debt is understandable but, beyond a certain point, it will impact the overall result, either in terms of quality or the cost and timescales for making changes.

Luckily, with software, we can retrospectively invest in these layers and even change our choices about which layers to focus on. This allows us to begin lightly, with less investment, to get a result or to validate whether an idea has benefit, then pick layers to invest in for stability, quality, longer-term results, etc. This retrospective investment is what engineers sometimes call “hardening”, and the changes it implies are often referred to as “refactoring”. They may result in no apparent change, but they are retrospective investment that may drastically affect the quality of the end result, whether now or in the future.
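As a trivial, hedged illustration of refactoring (the code and names below are hypothetical TypeScript): the observable behaviour stays exactly the same, but a duplicated rule gains a name and a single home, invisible from the outside yet making the next change cheaper.

```typescript
// Before: the discount rule is written inline wherever a total is calculated.
function invoiceTotalBefore(prices: number[], isVip: boolean): number {
  const sum = prices.reduce((a, b) => a + b, 0);
  return isVip ? sum - sum * 0.1 : sum;
}

// After: the same rule now lives in one named place, ready to be reused and tested.
function applyVipDiscount(amount: number, isVip: boolean): number {
  const VIP_DISCOUNT_RATE = 0.1;
  return isVip ? amount - amount * VIP_DISCOUNT_RATE : amount;
}

function invoiceTotalAfter(prices: number[], isVip: boolean): number {
  return applyVipDiscount(prices.reduce((a, b) => a + b, 0), isVip);
}

// The end result is unchanged; only the internal structure has improved.
console.log(invoiceTotalBefore([10, 20], true) === invoiceTotalAfter([10, 20], true)); // true
```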

Choice and investment in software layers, whether visible or non-visible, is an ongoing and never-ending process. It is also a discussion about resourcing, quality, deadlines, compromises and desired outcomes. All that is required is that we remember that the layers are (or should be) there in some form, and that time spent considering them affects the end result in very tangible ways.

Integration-Driven Development

It’s a familiar stereotype: The lone software nerd. Working in isolation. Rarely interacting with anyone.

In reality, this particular stereotype couldn’t be further from the truth. Everything we work on as software developers integrates with something or someone else.

It might be behind the scenes by providing or using an API, or by communicating with another module, an external system or a database. Or if we’re building a User Interface, that’s a visual integration with another person’s expectations and mental model of a task they want to perform. Granted, it’s much more creative and fluid than the ones mentioned earlier, but it’s an integration with another party nevertheless.

Nothing we develop exists in isolation: Integrations and interactions are crucial, and this usually means working with other people, either as end users or as the creators of the systems we need to integrate with. So much for that loner stereotype.

But we’re also human…

When we begin a development task, there’s a natural tendency to go depth-first and to work on a piece as we understand it at first. We like to work on the bit we feel in control of, and to feel “ready” before we expose it to others.

As it turns out, this seemingly productive focus doesn’t always lead to our “best” work; once revealed, it can often prove detrimental, both for us and for the project. Why…?

  • We’re making assumptions about the integration and interactions that are inevitably needed and, by not validating them, risking the need for re-work.
  • We’re delaying the moment our work is exposed to others, when their own assumptions can be tested, risking re-work on their part.

The answer, contrary to our natural inclinations, seems to be to begin integrating and exposing our work as early as possible.

How to integrate early?

For non-visual work, we can actually build the integration, the API, the interface with the other module or system, as early as possible. This may involve a certain amount of mocking up and simulation to get it working, perhaps using fake data, but we can do this quickly and cheaply as those parts will be thrown away towards the end.

One crucial thing to remember… is to actually throw those mocks away and not to accidentally leave any behind. It’s a common mistake to go live with mock code still in place, so it’s best to mark it clearly or to use obviously fake data, so that any leftovers are embarrassingly obvious.
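As a sketch of what that might look like (hypothetical TypeScript, not a prescription): the real and throwaway implementations share one agreed contract, and the mock advertises itself loudly through its name and its deliberately silly data, so any leftover usage is hard to miss.

```typescript
// The agreed integration point: both sides code against this contract.
interface CustomerApi {
  getCustomer(id: string): Promise<{ id: string; name: string; email: string }>;
}

// THROWAWAY MOCK: delete before release. The fake data is deliberately
// embarrassing so that any leftover usage is spotted immediately.
class FakeCustomerApi implements CustomerApi {
  async getCustomer(id: string) {
    return {
      id,
      name: "FAKE FAKE FAKE McFakeface",
      email: "definitely-not-real@example.invalid",
    };
  }
}

// The consuming code integrates against the contract today; when the real
// implementation arrives, only the wiring changes, not this code.
async function greetCustomer(api: CustomerApi, id: string): Promise<string> {
  const customer = await api.getCustomer(id);
  return `Hello, ${customer.name}!`;
}

greetCustomer(new FakeCustomerApi(), "c-1").then(console.log);
```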

For visual work, a lower-quality version with just the interactions working is a good starting point; a form of living wireframes, minimally demonstrable. Something people can play with before any visual polish is added.

There are undeniable challenges I’m ignoring here, as the incremental development from living wireframes to pixel-perfect renderings of designs can be a difficult transition that glosses over fundamental differences between the two. Design work often leads to an internal restructuring of visual implementations. That said, these are problems worth tackling, as exposing something people can interact with early potentially prevents much rework anyway.

Validating assumptions about the complex and nuanced interaction with human beings is best done as early as possible.

Visual work also needs to integrate with and test assumptions about the data and other non-visual interactions upon which it will rely, if only informally, as mistakes here can also lead to rework.

What improves?

If we can convince ourselves to integrate early, and on an ongoing basis…

  • We expose our work earlier to others. It’s not only helpful but healthy. Perhaps it’s time to get over our natural tendency to perfect, then reveal.
  • We often learn something unexpected that shapes how we build the bit we’re working on, and so avoids rework.
  • Expectations and assumptions can be tested; both ours and those of others.
  • A basic working version of the overall system is achievable earlier, whereby end-to-end assumptions can be tested, perhaps with stakeholders, and problems spotted.
  • We avoid the last-minute integration and “Oh, s***!” moments that so often lead to project delays.

What are the challenges?

Just like integration, change is inevitable.

Doesn’t this mean we should integrate later, once we “know everything”? Don’t early integrations just set us up for rework when change occurs?

I’d argue that early integration means we know who else is affected, and how, so we can swiftly make the change and fix the integration, rather than storing up the change for disclosure during late integration. It’s better to absorb the impact of the change early, including any knock-on effect. Learning can take place sooner and in all the places it is needed.

Due to earlier coupling, we do however need to be mindful of timings and upfront about the impact on other people, rather than just “breaking” integrations and waiting for others to discover them. Checking that others are ready to absorb the change minimises disruption and provides a valuable way to communicate the change. We need to move sympathetically with the needs of others.

Rather than seeing early integrations merely as inevitable victims of change, we should recognise that they are also a valuable source of it. Without early integration, we may be delaying discovering the need for the change and storing up the disruption for later.

 

If we do it early and on an ongoing basis, this breadth-first early approach to integration not only avoids late discovery of problems and minimises the need for re-work… it also makes software development quite an interactive activity.

So much for the loner stereotype!

“Don’t worry about the details”,… said no software developer ever

We developers spend our days deep in details, usually a thousand and one of them, combining and orchestrating them up to the point where they collectively deliver something tangible to other — often non-technical — people.

Up and down the stack we go, from the bigger picture down to concrete implementation details, and back up again.

Good software developers can traverse that stack all day long; speaking to people or explaining concepts at all levels… in language appropriate to that level, using abstractions and terminology that hide the detail below, but always being aware that the detail is still there,… waiting to be dealt with.

We know the bottom line is that the details have to work, or there is no bigger picture.

The trick is learning the right times to get into those details, how deep to go, and when to acknowledge that problems at one level have an impact on a wider scope. What we try not to do is get lost in the details in ways that aren’t relevant to someone who just wants to know about the overall solution or a particular layer of it. We try not to baffle you or drag you down the rabbit hole with us,… and sometimes we succeed.

In most other professions, you operate at one level and you remain there. Not so with software, where we work with the details but orchestrate and explain them at a level appropriate to many audiences, whatever they might need. We deal with customers and other developers, and everyone in between.

So if you ask a software developer about a solution or talk to us about a problem… we’re usually traversing the stack as we talk, picking out what’s relevant… choosing language… hiding details, unless they impact you… storing up things irrelevant to you that need investigating later… trying to be open without being complex,… honest but not alarmist. And if we’re good, you can’t tell we’re doing it.

The details matter to all of us eventually. But it’s our job, as developers, to figure them out.

Why Software Engineering Isn’t Engineering, and the Implication for Deadlines

Software engineering estimates and plans often fail to live up to the reality that follows. It seems to be the only engineering discipline in which this is the rule rather than the exception.

Before you leap to the defence of software projects you’ve worked on — perhaps even your own — think about how many times those projects delivered precisely what they said they would, and in the timeframe they originally promised. If yours did, you’re probably in a happy minority, or you’ve already embraced some of the realities I’m highlighting below.

“Engineering”… Really??!

Other than for extremely formal cleanroom and safety-critical systems, the flavour of software development undertaken in most companies isn’t really an engineering discipline at all, despite our ambitions to call it that. But the real problem is that we still predominantly estimate work and plan our activities as if it were, either within the software team or in our interactions with the wider company and its goals.

We architect and break down systems into subsystems, components and tasks, then estimate the resources required for each task. At the end of it all, we are usually surprised when our estimates don’t reflect the reality of what comes next. It is clear that the wider business is often similarly surprised and, therefore, no wonder that software estimates are often viewed with skepticism and disbelief.

Roads, Bridges, Bricks & Mortar

In physical engineering (such as civil or mechanical), estimates are based upon abstractions that are known to be quite accurate and dependable; component parts integrate and combine in ways that are by now well-understood. Each level is based on a foundation that is, at the lowest levels, ultimately based on well-researched principles: physics, chemistry, metals, concrete, glass, water, soil, gravity, etc.

Even the introduction of new materials is relatively risk-free, as they can be tested, studied and understood in isolation or in smaller builds, then integrated into larger builds with fairly predictable results.

That’s not to say that physical engineering doesn’t have its delays, complexities and flaws… but at least there’s the possibility of attempting to plan, reason and engineer them out.

Building on a Foundation of Complexity

Teams building software often treat it as though it had the same physical characteristics as the output of more formal engineering. Whilst it is certainly true that there may still be physical laws governing the electronics and other hardware the software ultimately runs on, most software is built at a level of abstraction several hundred layers above that, and therefore on a foundation of potential complexity we may not understand quite so well.

Even where component parts of our system are well-understood (operating systems, languages, libraries), they can interact with one another, and with the foundations they in turn depend upon, in ways that components rarely would in a physical build.

Add to this the fact that most of our abstractions are about systems the precise likes of which we may have never personally built before, and it becomes clear that we are, at best, working with assumptions.

It seems we often mistake our ability to envisage a final system for the reality of the way that system will actually need to function, and the work involved in building it.

Complexity Appears Late

Problems and complexity appear notoriously late in software projects. Attempts to get an early understanding — such as proofs of concept, or use of spikes in the Agile world — can help, but they often don’t uncover the reality of the actual system we’re building… just one similar to (and therefore potentially totally unlike) it.

The result is that reality rarely adheres to our estimates and plans.

So What?

It is probably high time that we admitted most software engineering isn’t actually engineering, in the strictest sense of the term. The implication is that our estimates and plans — and our ability to hit deadlines — shouldn’t be relied upon in the same way. Embracing this truth, company-wide, might help us.

Secondly, I’d say that Agile, yet again, has something to contribute: We already know that it helps to nail requirements by working in an iterative and evolutionary manner alongside a customer, but it also assists in handling complexity and planning, by removing strict estimates and by working iteratively.

Most Agile “stories” are planned in points, t-shirt sizes, and other schemes, in an attempt to get away from time-based estimates and instead plan based on the perceived complexity and past ability of the team to deliver similar stories. Agile’s iterative approach means we can adjust course as complexity and other problems appear, just as we adjust course by regularly showing early versions of a system to a customer.

One implication of accepting this is that software projects will be notoriously hard to plan against calendar deadlines. This means the wider company will need to work with software teams in a way that depends less upon deadlines, estimates and plans. Many organisations embrace Agile in their software teams, but not in the wider company. This creates the untenable combination of Agile software work practices and rigid calendar deadlines and deliveries to the wider business… and the largely unacknowledged fact that these two worlds seldom align.

Maybe it’s time we embraced the reality of this less-than-formal “engineering” activity of ours, owned up similarly to the reality of our less-than-dependable “estimates”, and relied more on ways to work with these truths, such as Agile? This may finally mean embracing Agile in a truly company-wide manner, rather than just within software teams.

Hitting Software Deadlines: Essentials Only… then Upgrade

“Deadline” can seem like a dirty word in the software world. But without deadlines, our work wouldn’t see the light of day, get an audience or have an impact. So despite their perceived unpleasantness, I’m all for them.

We tend to plan work towards a deadline in ways that ignore two inherent truths about software development, and therefore often miss them:

  • Complexity appears late: Despite our best efforts to surface it during planning and design activities. This is the reason plans go awry at the eleventh hour and deadlines that seemed achievable are inexplicably missed;
  • Completeness is hard to nail: Specifically, deadline-completeness. This means only the things that are strictly necessary for a viable release, launch or demo at the deadline. This is likely smaller than the list you feel you want to include, but adds crucial items that are usually not visible to a client, such as productionising, releasing, legal formalities, etc. Many of these are usually slotted in just-in-time before a deadline, or missed. Denial of the need to nail completeness, or discovery of further complexity here, can undermine a deadline.

Maybe we could try something along the following lines, to hit deadlines more often:

  • Find the Essentials for Deadline-Completeness: Begin by ruthlessly separating the work that must be completed for the deadline, but perhaps in a heavily-downgraded form.
    This might mean something releasable to a production environment (in a walled-off manner, minus a final trivial “reveal” on the deadline date), something that discharges legalities, and something that exercises the agreed use cases in a minimal but viable manner.
    Encourage people to write separate stories for bare bones deadline completeness, and then the rest.
    Also write meta stories for items required for completeness that would otherwise be forgotten due to not being visible to a client, such as deployments and productionising;
  • Hit deadline-completeness early: Complete the bare bones deadline-sensitive work as early as possible, and nothing else. In an Agile approach, this should be at least a sprint or two before the deadline itself. You will end up with a very minimal but released (though perhaps walled-off) system that would hit your deadline, but would perhaps disappoint you in a number of ways. Learn to accept this, because you won’t be revealing this version of your work and something much more crucial just happened that is worth celebrating: You already hit the deadline. You are already capable of delivering. So now, you can…
  • Upgrade, iteratively: Add in the work to upgrade that initial complete solution in tight iterations. Each of these should result in a system that is “better” in some way but still deadline-complete; remains releasable, still discharges legal obligations, still exercises the same use cases and perhaps some new and less crucial ones.
    Iterate on these upgrades, in order of decreasing priority, until the deadline.

When the deadline comes, as it inevitably will, you are guaranteed to hit it in some viable and complete manner. There will always be upgrades you wish you could have included, but they will be the least important ones and, much more crucially, you aren’t left with an incomplete solution that can’t be released, demonstrated or used.

Approaching deadlines in this way means that complexity can be handled as it arises, without derailing the deadline. Hitting deadline-completeness in a minimal way, then upgrading, allows complexity to push out less-crucial upgrades if necessary. Yes, you don’t want it to… but the reality is it might need to. Post-completeness, there is room for the battle between complexity and upgrades to take place, however that plays out and without risking the deadline.

How do you persuade people to adopt this approach? By showing them that everything is still planned to be included, but that deadline completeness is prized first. Assure them that other upgrades, in decreasing priority order, will only be omitted if unforeseen complexities or problems arise. The maximum value possible is still going to be delivered. Nothing has changed but the approach to delivery and risk.

Isn’t this just Agile? If your approach to Agile (and there are many) includes a notion of deadline completeness, perhaps. But who really does Agile that way? After hitting the deadline early, the iterations are Agile-like. But this approach forces early inclusion of aspects that might be unseen to a customer in an Agile world — hence no stories — but which are crucial for the deadline, such as productionising.

Find the essentials and complete them early, then upgrade. By planning in a way that handles complexity like this, maybe we can let it do its worst — as it inevitably will — and still deliver to a deadline?

For a Coherent Software Product… Draw More Diagrams


Unless you’re building it entirely on your own — and even sometimes then — for a unified and coherent software product, you need prominently-shared diagrams of what you intend to build.

Most software products are built by more than one person; more than one mind. Most are built by more than one team and usually across more than one skill or discipline. That’s many backgrounds, many viewpoints… but still one intended product in which they all need to be unified.

No-one would imagine coding an API without a spec, yet we often build systems without a diagram of the architecture, components, boundaries or intended flows. Retrospective diagrams often never happen, or happen way beyond the point at which it became painfully obvious they were needed. We over-rely on verbal signs that we are aligned, without backing them up with a diagram.

Without a shared vision of the intended outcome, we’re likely to translate any purely verbal descriptions into different mental models, and onwards into different technical solutions… which then need to be aligned. “Oh I didn’t realise”, becomes all too common. This is particularly true in places where multiple disciplines overlap: Product owner, UX, front end, back end, operations, marketing, etc.

Everyone sees their piece of the puzzle, but diagrams confirm how they will all fit together. Without them, we introduce opportunities for hidden agendas and unnecessary extras to creep in. Human nature dictates that, where ambiguity exists, we tend to work silently to our own preferences rather than seeking a unified team answer. A diagram can provide that answer, or at least highlight the blank space in which it needs to go.

What to draw?

Whatever you find yourself discussing… but especially user interfaces, architecture, components, flows, and absolutely all inter-team or component boundaries.

How many drawings?

Just enough… but at least the highest levels of the product, then break down where more clarity is required and particularly where team boundaries exist.

Not too many though… or no-one will look at them. Find a happy balance.

What to draw with?

Informality, and a shared medium… Keep diagrams informal, because speed encourages people to actually capture information. Use a tool for more formal diagrams, but a tool that everyone can access; the person drawing the diagram will not necessarily be the only one changing it or adding to it later.

Where to draw them?

Visibly! Stuck away in a repository or a wiki where people rarely (or begrudgingly) go, diagrams are next to useless. Get copies of them up on the walls too.

Update them!

An obsolete diagram is worse than no diagram. Think living diagrams!

When a change arises, update the diagram now, even if only with a scribbled annotation on a printout to be added to the original later. This is why informality often wins; there is no pristine copy that needs painstakingly updating.

Assign someone the action of updating the diagram, and make sure that they do.

Encourage diagrams

Make sure whiteboards are available, with pens (if you have to hunt for a pen, the moment will pass). Also make sure wall space is available, on which to stick diagrams that will live for longer.

Whiteboards aren’t just for visual disciplines; encourage engineers to draw components, flows and architecture too. Even short-lived diagrams, such as those drawn in one meeting, can unify people at a point in time. Get everyone comfortable with the idea of standing up with a whiteboard pen… and drawing. It should be a daily occurrence.

There should be a blank whiteboard usable by anyone who feels the need, at any time, and preferably within reach. This is a cheap resource, so over-provide. The improvement in communication, and in a coherent product, will be worth it.

Why Time as a Solopreneur Might Make You a Better Team Player

Teamwork: Good. Lone work: Bad.

Or at least that’s how the stereotype drummed into us through education and our careers seems to go. We learn to focus our CVs and résumés on our team contributions, and to downplay the things we achieved entirely alone.

But what if the things we achieved on our own somehow made us more effective the next time we joined a team? What if a balance of both experiences made for better team players?

It’s my experience that teamwork between people who have never known an occasional lone accomplishment can suffer from a few blind spots. Such team members may have an inaccurate view of their contribution and how to tune it, having only ever seen their work as part of a larger deliverable. There’s a danger of feeling like a full four-wheel contributor, whilst actually running on three wheels and being unaware how much your contribution leans on the work of others. And people who have never known personal wins can see a team project as their only chance to demonstrate their abilities, becoming over-invested in personal outcomes rather than team outcomes, without the reassurance of knowing they could shine elsewhere or next time.

So what do I mean by lone accomplishments? Lone accomplishments could be a product (software, hardware, anything), a piece of art, training materials, blog posts, or a book. Anything you actually launch, publish or deliver to an audience, for which the entire deliverable depends on you, and for which you have no backup or other input to assist. Anything for which there is no scapegoat once you deliver it; it is yours, good and bad.

I’ve done this twice, with software products of my own. They were far from a commercial success, but the lessons learned were worth more than the time I spent and the salary I forfeited in those months. Since then, I’ve gone back to teamwork in startups, and really appreciated the perspective that my lone achievements have given me.

So what do lone accomplishers bring to a team? How can lone work strengthen and define you?

  • People with lone projects under their belt tend to have a more accurate view of their own strengths and weaknesses, and an appreciation for the roles others perform in a company… because they’ve probably filled them at some point, albeit rather clumsily. Most former solopreneurs will have noticed how they couldn’t possibly fill all those roles alone… and will appreciate that it is only possible to cover them well in a team where a variety of skillsets are present.
    As a software engineer, I now have a great appreciation of the other roles involved in startup life: Marketing, PR, design, QA, Ops, etc. Prior to undertaking my own projects, I probably had a dismissive or inaccurate view of those roles.
  • People with lone projects under their belt already have personal “wins” in the bag, so they don’t feel the need to act as if the next team project is their own pet project or chance to shine. This makes conversations more about a team win, rather than a personal win, less religious, more exploratory, and more practical.
  • People with lone projects under their belt tend to appreciate the difficulties of building something entirely alone, and therefore the need for others. The mature lesson from this is a willingness to delegate… and truly delegate, giving people a long leash and adjusting it appropriately. Knowing I can’t do it alone, but might be able to achieve it with others, makes me value the contribution of others and the need to delegate effectively.

Why is lone work a rarity? Aside from the widespread celebration of teamwork, we probably shy away from lone work because we know it will truly define or break us; there is no middle ground, and so we fear it. Through it, we will either discover, once and for all, that we can lean into a challenge on our own, or that we don’t (yet) have what it takes. Unlike teamwork, there is no scapegoat, no fallback, no-one else to blame. This is a hard reality to face.

But I’m convinced that every day we put off an attempt at lone work is a day we never really understand what we are capable of; both alone, and in terms of our contribution to a team.

I’m convinced that, particularly in industries that involve building something (software, hardware, arts), we should exercise our lone capabilities on a regular basis, and at least once a year. It’s an ongoing eye opener on who we are, what we’re capable of, and therefore what we each bring to a team.

Far from being a red flag when hiring, lone work — when properly learned from — could be the thing that truly makes a strong team player.

Why Early UX Should Scratch Deeper than the UI

The early stages of software engineering projects aiming to expose anything more than a trivial User Interface (UI) quite sensibly begin with a heavy focus on the User Experience (UX), as this dictates a great deal of what the software needs to do. This is usually a stage of the project that produces many wireframes, mockups, and a sense of what the user — or different user “personas” — will be attempting to achieve with the software.

This UX-centric approach undeniably works really well, and tends to result in software that delivers a strong, usable user experience.

However, I’ve seen a UX-/UI-only focus in the early stages of some projects lead to blind spots and incorrect assumptions which can force costly rework later in the project. For this reason, I’d assert that early UX work should scratch deeper than just the UI, if only to validate assumptions and, in particular, to confirm what’s actually possible in terms of the underlying data and services.

Server-Side Constraints

Too often, in UX-centric early project stages, the assertion is that the server-side data, APIs and services are much less relevant than the UI. This is often correct as it can distract from the goal of designing usable software, but there’s an underlying assumption that needs validating: That the server can deliver anything the UX/UI work dictates it needs.

This is often not the case… If the server side is built on serious data modelling or clever algorithms, those services and that model are often constrained in terms of what they can provide. There may be heavy computational costs involved in exposing certain data in a timely manner, if it’s even possible.

As always, using specific examples of server-side data in UX work can really help to clarify what’s possible. Even just involving a few token Data / Server-Side engineers can be enough to verbally “sign off” that the server will be able to support what’s required.
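One cheap way to do this is simply to write the example down, however informally. The sketch below is hypothetical (TypeScript-flavoured, with invented field names); the value is in the conversation it provokes: can the server realistically provide each field, for every user, within an acceptable response time?

```typescript
// Hypothetical payload agreed between UX and server-side engineers. The point
// is not the syntax but the questions it forces: which fields are cheap, which
// are expensive, and which may not be possible at all?
interface AccountSummary {
  customerName: string;
  openOrders: number;
  lastOrderDate: string | null;  // cheap: already stored against each order
  lifetimeSpend: number;         // feasible, but needs a pre-computed total
  predictedChurnRisk?: number;   // costly: only if the model can run offline
}

// A concrete example, usable directly in wireframes and design reviews.
const exampleForWireframes: AccountSummary = {
  customerName: "Alex Example",
  openOrders: 2,
  lastOrderDate: "2015-11-03",
  lifetimeSpend: 1249.5,
  predictedChurnRisk: 0.12,
};

console.log(exampleForWireframes);
```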

This form of early full-stack validation can prevent costly rework later, and can even help to direct server-side work appropriately by providing added context about what the UI needs.

Server-Side Data Model and User Mental Model

An added benefit in considering the server-side data model during UX work is that the user’s own mental model of what they are trying to achieve when they use the software is so often a version of that server-side model.

Considering, albeit leanly, how we might model data on the server can help to clarify the ways in which a user might expect (via their own mental model) to visualise, interact with and modify that data. After all, this is effectively what they’re going to be doing by using the software; accessing and manipulating server-side data and services indirectly via a UI.

Even just tying together the correct terminology, so UX and server-side work are talking the same language, can prevent costly disconnects between different parts of the team.
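As a tiny, hypothetical illustration: naming server-side types after the concepts users already talk about keeps the UX vocabulary, the API and the data model aligned.

```typescript
// If users talk about "projects" containing "tasks" that they "assign",
// modelling and naming the data the same way keeps wireframe annotations,
// API responses and database tables speaking one language.
interface ProjectTask {
  title: string;
  assignedTo: string | null; // mirrors the user's "assign to…" action
  done: boolean;
}

interface Project {
  name: string;
  tasks: ProjectTask[];
}

const example: Project = {
  name: "Website relaunch",
  tasks: [{ title: "Draft homepage copy", assignedTo: "Sam", done: false }],
};
```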

Again, precise examples of server-side data, or a little involvement of the relevant engineers, can really help here.

Lean Approach to Scratching the Full Stack

But doesn’t involving server-side engineers and data scientists in UX discussions slow down those discussions? Doesn’t it negate the value of considering the software purely from the point of view of the user at first?

I’d say not, but only if it’s clear that the focus of this stage is still primarily UX, and that any deeper layers in the stack are to be validated only. Crucially, any details of how the eventual UI will interact with the server should be avoided, unless they too dictate what is possible in the UI. No-one needs to build the server at this stage (though that work might be proceeding in parallel), so long as they can talk in as much detail as required about what it, and its services, will look like.

 

I think this is another example of how, in software engineering, regularly considering the bigger picture, even when performing early focussed work, can avoid costly mistakes. In complex systems, the bigger and the smaller views are often related or constrained in unexpected ways.

Distil Complexity, Nail Theory, Build Software… Just Not All at Once


Trying to pat your head whilst rubbing your stomach is the standard way to demonstrate that we, as humans, find it difficult to perform certain activities concurrently.

The software project equivalent is attempting to overlap activities that teams find it difficult to handle in combination. It’s just a reality that some things are hard enough on their own, and so we hamper our efforts if we attempt to combine them.

It’s been my experience that projects where there is at least an attempt to separate, as much as possible, the following activities are those that experience fewer hiccups. It may still be necessary to iterate through these activities regularly, and they are not waterfall-style one-offs, but being aware of the complexity of doing more than one of them at once can help to simplify things for teams and to ensure one flows into the other:

  • Distilling Domain Complexity – Every software product is built to deliver a solution in a unique problem domain. Often, there is much to clarify in this domain, and much complexity to unravel and distil. This can be something of a voyage of discovery, particularly if it takes a while to extract that knowledge from a domain expert and to figure out which aspects of it the project will require. As building products for a market can be like chasing a moving target, it is highly likely that this domain-related activity will be revisited many times during a project;
  • Nailing Academic Theory – Beyond well-worn Software Architecture and Computer Science principles, a project may rely upon deeper academic theory, such as Artificial Intelligence, Machine Learning, Big Data… or non-tech project-specific areas such as psychology, natural language, etc. Each of these alone is a huge field, full of theory. The problem is, you can’t build software with theory, and so we need to pin down practical forms of that theory that will be implemented in software. This may involve prototyping, comparative analysis of different approaches, and finally a choice between palatable options.
    There is a tendency to linger over these decisions, and an attractiveness in remaining in that land of possibilities. However, software engineering requires knowns, not possibilities, and the sooner theory is nailed down — for this project or iteration anyway, as we can always make different decisions next time — the better.
  • Building Software – Software engineering is concerned with building stable, scalable, understandable software to meet a need. There is sufficient difficulty in doing that in isolation, without overlapping with the previous activities. For large chunks of time, teams need to focus on building software, unencumbered by academic theory or domain ambiguities. It is almost an unwritten assumption on some projects that most of the difficulty lies in the previous two activities (problem domain and academic theory), and that building the resulting software solution will be comparatively easy or, at least, will be more of a known activity. However, the complexities and practicalities of building software require that teams focus on the activity where possible.

Although we may revisit each of these activities many times during a project, identifying and managing the boundaries between them helps to isolate them from one another. One great way to do that is by creating and maintaining great team documentation: Domain knowledge and concrete decisions based upon academic theory are perfect examples where documentation can form a neat output from one activity (albeit perhaps occasionally updated), to be consumed by another. Documentation needn’t be heavy and cumbersome; writing just enough, just before it is needed, should be encouraged.

I wonder, for how many of the projects we’ve seen go awry could we identify examples of these activities bleeding into one another too much? And, where projects have gone comparatively smoothly, I’d be interested in hearing whether it was because these activities were understood to be complex in combination.

I guess it’s all part of improving the task of building complex software, with a human team.

Software Frameworks: Occasionally, Reject or Reinvent the Wheel

Software frameworks are everywhere, and rightly so: Spring, Grails, Angular, Knockout, Backbone… the list is endless.

In essence, a framework is an abstraction upon which we write our app-/product-specific code. We use them to enable reuse of well-worn functionality and, like all abstractions, to hide the implementation and underlying complexity.

Use of frameworks allows us to simplify and speed up development efforts. It’s rather a no-brainer, hence their proliferation.

But under what conditions would we choose not to use a framework or to write our own? Surely we’d just be reinventing the wheel? Perhaps, but just occasionally, maybe the wheel doesn’t fit or we need a better wheel.

Framework Fitness

To be used on a project, a framework must at least be:

  • Functional – It goes without saying that a framework must do what we need it to do; delivering the functionality we require. There must be a clear benefit in using it;
  • Configurable / Customisable – It must be possible to configure or extend that functionality (via customisation or plugins), tailoring it to our specific needs, and perhaps even to turn off portions of it if we don’t need them or if they are inappropriate;
  • Documented – There must be up-to-date documentation, to make the above possible. We mustn’t need to dig to discover how to use the framework;
  • Efficient – The framework must not introduce performance problems of its own. If there is a convenience in writing our app using a framework, that must not be paid for via a performance degradation we aren’t willing to accept.
    (Anyone who has seen the numerous levels in a Spring stack trace will know how intriguing it is to see just how much is going on behind the scenes, and will occasionally have wondered about the potential performance impact of it all);
  • Proven & Stable – Once we commit to a framework, we are essentially relying on it. The stability of our app / product is directly affected by the stability of the framework. If we want to deliver cast-iron stability to our end users, then we’d better use a framework that itself offers cast-iron stability! At the very least, a framework must have been proven to be robust in production, preferably by someone else, before we choose to launch a product with it;
  • Owned & Supported – If there are unforeseen problems, or bugs — and there are always bugs — it helps if the framework is supported either by a company or an active development community. It is no good waiting six months for a fix to a framework problem that can’t otherwise be worked around.

If a candidate framework is missing or weak in one or more of the above, we might need to look elsewhere or choose to develop the functionality we’re looking for ourselves, perhaps itself in the form of a framework for reuse elsewhere. The cost of ditching or switching frameworks if we don’t assess the suitability up-front can be significant.

Healthy Curiosity

Whether or not a particular framework suits our needs, I think there’s educational value in knowing at least a little about the technology that lies beneath it. Such knowledge of the foundations upon which it is based may help us to debug problems or, if appropriate, to invent a replacement for the framework ourselves.

Without knowledge of what the framework is doing, we won’t know whether it is doing any good. It is always worth having an idea of whether our own efforts would be better, if only to confirm that they wouldn’t be!

Healthy Skepticism

Blind use of frameworks because other similar projects are using them, or use of them on projects where we use a mere fragment or sliver of their functionality, should be discouraged. The risks probably outweigh the benefits. I think we always need to look at the upside of using a particular framework, specifically for this project.

 

In conclusion, frameworks are largely a huge benefit in software development, but only if we know why we’re using them, when we’re sure that they fit our purpose, and when we understand (at least at a high level) what they are doing beneath the hood.

Just occasionally, omission of the framework or use of our own implementation may be a more sensible option. It’s worth remembering that.