Improving Difficult Conversations

Of all the skills I’ve learned so far during my almost 20 years as a software engineer, perhaps the most valuable isn’t a technical skill at all: namely, how to get the most out of difficult conversations and meetings.

Some conversations are difficult because the technical content is deep, or tough, or unclear, or each person involved has a different level of understanding of the content. Others are hard because at least one ego is present and is intent on taking the conversation in a particular direction. Some conversations involve warring parties, people who have “always” disagreed, or teams with seemingly different remits.

None of these conversations go well if the individuals involved tackle them as individuals. Someone (preferably more than one person) needs to lead the conversation from the viewpoint of the whole group, to orchestrate the best outcome, to see things from both sides, to question, to summarise and to put their own viewpoints alongside those of the others rather than ahead of them, no matter what affiliations they may have. Ideally, everyone in the group would do this, removing the need for someone to “chair” the meeting, which I think usually disempowers other people present.

To this day, I’m still amazed by how few people in the working world, engineers and non-engineers alike, have learned these skills. Meetings and conversations cry out for them, though perhaps you only notice this once you’ve seen how well those conversations can go with a few small changes.

This week, I was in a meeting with 8 highly intelligent engineers. None of them brought prior resentment or disagreements to the meeting, and the aim was to decide on an approach to a certain software-related activity. But each spoke primarily from their own viewpoint and seemed collectively confused about why the meeting was in chaos. In software engineering terms, each seemed to prefer a depth-first approach to the conversation, choosing to disappear down a rabbit warren into details that they felt familiar with, but which were possibly irrelevant, out of context, or not yet introduced to the others present. Months of prior inaction brought a slight sense of powerlessness to the meeting, and the historical context was painted negatively, risking a sense that making changes in future was futile. The reality was that these 8 engineers had a blank canvas and total freedom, and could choose whatever approach suited the group as a whole… a far more positive reality than the atmosphere in the meeting suggested.

I can’t sit through such meetings without trying (even when it’s inappropriate) to lead them now, or at least to throw in suggestions for how the meeting might flow better. Sometimes this involves asking someone to back up and describe a topic from a different viewpoint: with a wider context, with more background, or with discussion of the details postponed until later. Other times, people need to be asked to put aside the assumption that prior differences between teams can’t be tackled because, if you go on that assumption, there really is no point in having the meeting and we should all just give up and go home 😉 People often feel powerless, and they sometimes need reminding that they can ask for what they need, rather than remaining silent and assuming they won’t get it. They need reminding that they can change things, but that they need to choose to.

All of the above suggestions, if dropped subtly into such meetings by someone, can change the dynamic significantly. People start to think about what they’re saying and try to describe topics in a way that others will more readily understand, using common language or explaining terms. They start to search for common ground. They start to see that long-standing situations can be changed if they entertain that possibility. The meeting starts to have a purpose, and therefore an outcome.

Of all the skills I’ve learned so far as a software engineer, this non-technical one is perhaps the most valuable, and the most applicable to other contexts.

Bravely Embracing the Need for a Ground-Up Re-Think, or an Architectural Pivot

When it comes to building commercial software, we are sensibly encouraged to create a Minimum Viable Product (MVP) and to grow it organically from there as requirements and market fit dictate. This allows for continual learning and the incorporation of user feedback, and usually results in a product that more closely represents actual requirements (as opposed to our original perception of those requirements).

“Big Design Up Front” (BDUF) is the other extreme, where it can take months or years for the first version of the product to hit the market… or rather, to miss the market, as is usually the case.

The problem is, some architectural decisions are so fundamental that, when they change, there simply isn’t a path from one to the other. If you discover that your MVP incorporates an incorrect architectural choice, it may not be possible to incrementally migrate it to the desired architecture whilst releasing usable software at each stage along the way. By “incorrect” here, I mean a choice that hampers the product significantly, prevents it from building a user base, or simply leads it towards likely obsolescence. I am not referring to the ever-present preference of software engineers to re-engineer with the latest technology, which is usually a distraction.

The only choice in situations where the architectural change seems warranted is to temporarily break from the orderly, organic and incremental development of the product, and to bravely re-engineer it as rapidly as possible. Minimising the extent of the re-engineering is crucial as, once it becomes clear that “everything’s changing”, wide-eyed engineers (and I speak as one) have a tendency to throw everything in the air and start again. That said, this is also a chance to get rid of some technical debt, but only where it lies in the affected parts of the system. At the end of such a re-engineering effort, development must strictly return to an orderly, organic and incremental approach with usable software at each stage. The two approaches require a different team mindset, and so the switch must be formal and team-wide.

Releasing a major re-engineering effort requires planning, and must be seamless (or at least effortless) for users. User data must be migrated, often after extensive transformation to suit the new architecture. An easy way to lose users at this stage is to skimp on that migration effort, and this is worryingly common. In my opinion, there is always a way to write a migration tool. Ditching users, or requiring them to do work to migrate themselves, is unforgivable. We’ve all seen the “Please re-register for our new site” emails… and most likely walked away. It isn’t “just” user data.
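To make that concrete, a migration tool often needn’t be anything grander than a one-off script that reads every user’s data in the old shape, transforms it, and writes it out in the new shape. Here’s a minimal sketch in JavaScript (Node); the file layout, field names and helper functions are purely illustrative assumptions, not the real CTS storage:

```javascript
// One-off migration sketch: read each user's data in the legacy format,
// transform it to the shape the new architecture expects, and write it back.
// The storage layout here (one JSON file per user) is a hypothetical stand-in
// for whatever persistence the real system uses.

const fs = require('fs');

// Load every legacy user record from a directory of JSON files.
function loadLegacyUsers(dir) {
  return fs.readdirSync(dir).map(file =>
    JSON.parse(fs.readFileSync(`${dir}/${file}`, 'utf8'))
  );
}

// Transform a legacy record into the new user-model shape.
// In a real migration this is where most of the care (and testing) goes.
function transformUser(legacy) {
  return {
    id: legacy.id,
    entities: legacy.items || [],     // renamed collection (assumption)
    activities: legacy.tasks || [],   // renamed collection (assumption)
    goals: legacy.goals || [],
    migratedAt: new Date().toISOString()
  };
}

// Write the migrated record to the new store.
function saveMigratedUser(dir, user) {
  fs.writeFileSync(`${dir}/${user.id}.json`, JSON.stringify(user, null, 2));
}

// Run the whole migration in one pass; re-running simply overwrites,
// so the script can be repeated safely while testing.
function migrate(oldDir, newDir) {
  for (const legacy of loadLegacyUsers(oldDir)) {
    saveMigratedUser(newDir, transformUser(legacy));
  }
}

migrate('./data/legacy', './data/migrated');
```

The point isn’t the code itself, but that the users never see any of it: they log in after the release and find their data already where it should be.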

Thin Client to Thick Client

So why this blog post? Well, I originally architected ChangesThatStick.com (CTS) as a thin-client system. Most of the heavy lifting is done on the server (written in Java), including generation of the HTML views (and changes to those views) that are served to the browser. The browser simply displays the view content and uses some minimal JavaScript to obey the server’s ongoing instructions on how to modify that content as the user interacts with the system. This worked well for the first (Beta) version. In hindsight, however, CTS should have been a thick-client solution, with the bulk of the logic written in browser-side JavaScript and the server acting as a user-model repository, exposing an API for certain core services (email, payments, etc.).
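To illustrate the difference in code: in the thin-client version the server decides what the browser shows, whereas in the thick-client version the browser owns the view and only asks the server for data. This is a rough sketch, not the actual CTS code; the endpoints, element id and field names are hypothetical:

```javascript
// Thin client (current Beta): the server renders the HTML and the browser just inserts it.
async function showGoalsThinClient() {
  const html = await fetch('/views/goals').then(r => r.text()); // server-rendered fragment (illustrative URL)
  document.getElementById('content').innerHTML = html;          // browser obeys the server
}

// Thick client (target architecture): the server exposes data via a JSON API
// and the browser builds and updates the view itself.
async function showGoalsThickClient() {
  const goals = await fetch('/api/goals').then(r => r.json());  // hypothetical JSON endpoint
  const list = document.createElement('ul');
  for (const goal of goals) {
    const item = document.createElement('li');
    item.textContent = goal.title;                              // illustrative field name
    list.appendChild(item);
  }
  document.getElementById('content').replaceChildren(list);     // browser owns the view
}
```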

This is exactly the kind of situation in which there seems to be no incremental path to the desired architecture that still releases usable software along the way. And staying within the limitations of the existing CTS would seriously limit its future appeal.

I am under no illusion that this is anything but a lengthy task and, particularly in my current (employed) situation, could take several months. But I think the end goal warrants the effort.

So why the big change of heart now? At the time I originally developed CTS, my core focus was on Java, and so a thin-client solution seemed the easiest way to implement most of the functionality in that language and, most importantly, mainly on the server. I wanted to minimise the amount of JavaScript that I needed to write. In hindsight, a thin-client model slows the user’s interaction with the system, as the system’s response to each click/gesture is limited by bandwidth. And you really do notice that, particularly on a 3G connection! A richer user experience could be achieved by handling all user interaction in the browser, and simply synchronising the underlying user model (entities, activities, goals) with the server in the background.
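In practice, “synchronising in the background” can be as simple as applying each change to the in-browser model immediately and queueing it for the server, so the UI never waits on the network. A rough sketch of that idea, again with a hypothetical sync endpoint and simplified model names:

```javascript
// In-browser user model: the UI reads and writes this directly, so every
// click/gesture gets an instant response regardless of connection speed.
const userModel = { entities: [], activities: [], goals: [] };

// Changes are queued here and pushed to the server in the background.
const pendingChanges = [];

function applyChange(change) {
  // Update the local model immediately (only 'addGoal' shown for brevity).
  if (change.type === 'addGoal') {
    userModel.goals.push(change.goal);
  }
  pendingChanges.push(change);
}

// Flush the queue every few seconds; if the request fails (e.g. on flaky 3G),
// the changes simply stay queued and are retried on the next tick.
async function flushChanges() {
  if (pendingChanges.length === 0) return;
  const batch = pendingChanges.splice(0, pendingChanges.length);
  try {
    await fetch('/api/sync', {                      // hypothetical sync endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(batch)
    });
  } catch (err) {
    pendingChanges.unshift(...batch);               // put the batch back and retry later
  }
}

setInterval(flushChanges, 5000);

// Example: the UI responds instantly; the server catches up in the background.
applyChange({ type: 'addGoal', goal: { title: 'Walk 10,000 steps a day' } });
```

A structure like this is also what makes the later possibilities (off-line usage, richer visualisations) even thinkable, since the browser already holds the model it needs.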

So I am taking on just such a re-engineering project myself, realising that there is no gradual path from A to B, and planning how to make that transition as swiftly as possible. I will definitely be migrating user data automatically, and the initial UI will look very similar. Both of these decisions should minimise the impact on users. Beyond that, the new architecture will make things possible that I simply can’t consider right now: off-line usage, new Canvas visualisations, etc. It just seems a necessary change in approach.

Which all goes to show that… in software development, sometimes you have to break the rules, take brave steps, be nimble and responsive, but always return to principles and solid practices afterwards. I think the occasional need for a ground-up re-think or a major refactoring demonstrates this perfectly, and shows that questioning our approach regularly is a healthy and mature thing.

Resisting the need to bravely re-architect a system, or to remove technical debt that is slowing down development, purely out of fear of departing from “sound practices” for a period, is a certain path to becoming yesterday’s product 🙂