The Only Guarantee Is Change

One of my favourite characteristics of this industry is the sheer volume of publicly available discussion around techniques, methodologies and paradigms for how to design the software we write. There’s a plethora of publications and articles available to us, through websites such as Hacker News and StackOverflow, as well as personal blogs, tweets and wikis, covering everything from low-level topics, such as sorting algorithms, to high-level concepts, such as high-availability data store design or how to architect your ever-increasing list of services. More than that, there is plenty of coverage on how to run your software team or organisation, how to hire, and how to maintain velocity without accruing crippling technical debt and inertia.

The emergent characteristic of this world seems to be that people form preferences and fall on one side or another, with every publication of a new or rehashed ‘one true way’ or ‘wrong but useful’ idea met by fervent supporters on one side and naysayers advocating an alternative on the other. It is perhaps inevitable that people form tribal-like attachments to their side of the fence - the world is complex, and categorisation and classification are ways of simplifying and making sense of it; without them we’d need to keep an insurmountable number of ideas in our heads, which is beyond most humans.

As inevitable and interesting as it is to see people picking sides, this led me to thinking - are there any universal rules that always ring true in software engineering? Given the reality of all these competing ideas, are there any meta-level ideas that cut through the tribalism and are common to ‘effective’ software engineering across ecosystems and industries? If so, could they be followed to create ‘effective’ software and products in any situation?

That ‘effective’ label is incredibly subjective, as it is multi-faceted and depends on your perspective, but for the purpose of this article I interpret ‘effective’ software engineering as the ability to continuously extract value from the product or application over time. This means looking at it from an internal perspective, such as making changes, fixing bugs, firefighting issues and rebuilding, but also from an external perspective, in terms of the end users and the ability to turn the cost of code into an asset via revenue or pure utility.

So how do you determine a universal truth in software engineering? I want to start by casting the net wider, and highlight what I consider to be one of the few universal truths in this world - all you can guarantee is change. No matter what field you are in, your life experience, your biases or your background, this is true. It’s not a particularly new idea either - it is prevalent in Buddhism, for example.

From a software engineering point of view, it doesn’t matter which programming language you choose, which framework you choose, what database you use or how you structure your teams. At some point down the line, one or all of your previous choices will seem sub-optimal at best, ridiculously short-sighted at worst. Keeping this universal truth in mind means your decisions are never made with a world view of ‘nothing about this can ever change; the world will not change around this decision, as that would be catastrophic’ - instead you face the fact that reality will change, and you can build some robustness into your software and processes to prepare for it. One way I like to think about this is keeping optionality as high as possible - wherever you can, keep your options open to further change, and avoid backing yourself into a corner with limited options.

This is a tricky and perhaps paradoxical line to walk - every decision you make collapses branches of options and thus reduces optionality, but you need to make decisions in order to further the development of the software or product! It is certainly paradoxical, but at the same time, because the only known quantity in the future is change, keeping this rule in mind and maintaining a suitable level of optionality where you can puts you in a position to do the best possible job when the world inevitably shifts under your feet.

Therefore, in my opinion, ‘effective’ software engineering consists of practices that derive from or address this universal rule - all you can guarantee is change - in order to keep extracting value from a product or application over time, no matter what happens.


Enough philosophising - how do we actually do this? What considerations can we adopt that don’t significantly jeopardise the output of the product, but guarantee some level of optionality going forward? Here are some, but not all, of the points I’ve landed on, many of which are well-known idioms already:

1) Knowledge transfer: Do your best to break down silos of knowledge, in code and in processes, so you are insured against carnage when you lose individuals - and you will. You may lose some initial velocity by not throwing every problem at the quickest, smartest engineer, but sharing that knowledge is worth it, not just to mitigate risk but also to raise the capability of the whole team. Similarly, it’s worth keeping a record of discussions as close to the code that reflects them as possible, be that in PR descriptions or comments where required. Systems will always seem arcane to new eyes, but you don’t want to force those new eyes into dangerous, incorrect assumptions when a small amount of effort now would help them make the right ones. This is one important property of automated testing and documentation close to the code working in tandem - you define the how of the feature in the tests, and the why of the decisions made in the comments or PR description, especially where anything veers from normal expectations.
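As a minimal sketch of what that tandem can look like (the pricing function and its business rule here are entirely hypothetical), the test documents how the feature behaves, while the comment documents why it behaves that way:

```python
# A hypothetical example: the test pins down the "how",
# the comment records the "why".

def apply_discount(total: float, code: str) -> float:
    """Apply a discount code to an order total."""
    # Why: "SAVE10" was sold to early customers as a flat 10.00 off, not
    # a percentage - see the discussion on the original PR. Silently
    # 'fixing' this would break old invoices.
    if code == "SAVE10":
        return max(total - 10.0, 0.0)
    return total


def test_save10_is_a_flat_amount_not_a_percentage():
    # How: a newcomer who assumes SAVE10 means 10% gets an immediate,
    # explained test failure instead of a production incident.
    assert apply_discount(100.0, "SAVE10") == 90.0
    assert apply_discount(5.0, "SAVE10") == 0.0  # never goes negative
```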

2) Code robustness in the face of change: You will inevitably need to retire code, add new features or change something in the product at some point, and that process should be as easy as possible. There are many ways to achieve this, such as a high level of automated testing and well-designed interfaces between components. Better authors than me have covered these in more detail, but robustness against future change is not always at the forefront of the reasoning for why these techniques are useful.
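As a small, hypothetical sketch of the interface idea: callers depend on the abstract SessionStore, never on a concrete class, so replacing today’s implementation later is a local change rather than a rewrite:

```python
from abc import ABC, abstractmethod


class SessionStore(ABC):
    """The interface the rest of the application codes against."""

    @abstractmethod
    def get(self, session_id: str) -> dict | None: ...

    @abstractmethod
    def put(self, session_id: str, data: dict) -> None: ...


class InMemorySessionStore(SessionStore):
    """Today's implementation; a Redis- or SQL-backed store can replace
    it later without touching any calling code."""

    def __init__(self) -> None:
        self._sessions: dict[str, dict] = {}

    def get(self, session_id: str) -> dict | None:
        return self._sessions.get(session_id)

    def put(self, session_id: str, data: dict) -> None:
        self._sessions[session_id] = data
```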

3) Remove unused features as soon as you can: This is similar to the point above, but I want to call it out separately. One way to ensure your product retains high optionality and remains robust in the face of future changes is to always be cognisant of removing dead features or code, regardless of how hard that is. Adding feature 101 while taking into account the other 100 existing features will always be harder than adding feature 4 to the existing 3 - every new feature can potentially interact with every existing one, so complexity in a product grows linearly at best, exponentially at worst. It’s therefore important to champion, within a product structure, the removal of outdated features, to keep the constraints around future changes as favourable as possible. All code is legacy; if it’s not needed, prioritise removing it.
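One cheap way to act on this (the feature name and logger below are made up for illustration): instrument the suspected-dead path first, confirm it really is silent in production, then delete it with confidence:

```python
import logging

logger = logging.getLogger("deprecation")


def export_to_legacy_format(report):
    # Suspected dead: log every call so production usage (or the lack
    # of it) is visible before this path, and its tests, are deleted.
    logger.warning("export_to_legacy_format called - still in use?")
    ...  # original export logic elided
```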

4) Telemetry and metrics: Executing on these principles is aided by how much information you have on hand to make decisions. When change arrives and a new situation needs to be addressed, you will suddenly have new questions of your existing data and product usage - has our traffic changed? Do we need to maintain this feature, based on usage? Do we need to adjust paid traffic budgets, or move traffic to a new landing page to test a hypothesis? One key principle that helps here is thinking about telemetry along the lines of observability - you want to be able to answer new questions retroactively, without having to deploy new code. This is possible if you design your data to include low-specificity events, like pageviews and generic clicks, for prospecting for new ideas in product analytics, rather than only highly specific events, and if you push as many context properties into events as possible. Thinking this way puts you in a better position to answer new questions with data rather than gut feeling, and that builds a better data-driven decision culture in the long term.
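As a rough sketch of the difference (the track helper below is a stand-in, not any particular analytics API): emit low-specificity events with rich context, rather than baking today’s one question into the event name:

```python
def track(event: str, context: dict | None = None) -> None:
    """Stand-in for whatever analytics client you actually use."""
    print(event, context or {})


# Highly specific: answers exactly one pre-decided question.
track("clicked_blue_signup_button_on_summer_landing_page")

# Low specificity plus rich context: the same stream can later answer
# questions nobody had thought to ask when it was instrumented.
track("click", {
    "element": "signup_button",
    "variant": "blue",
    "page": "/landing/summer",
    "device": "mobile",
    "logged_in": False,
})
```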

5) Defer decisions to keep optionality high: This is a key skill that everyone in product, design and engineering can make use of, though it seems counterintuitive in an industry that prides itself on problem solving. From an optionality point of view, the longer you can wait to make certain decisions, the more information you will have, and therefore the higher the chance of making a decision that holds up in the face of future changes.

In conclusion, a disclaimer - this is clearly just my opinion, it holds nothing new, and it is honestly an almost paradoxical approach to some definitions of software design. However, I believe there are benefits to considering the universal truth mentioned here and factoring it into how we design software, helping us balance decision making with maintaining useful levels of optionality. At the very least, it gives us one more tool in our arsenal to combat the ever-growing complexity of software as it continues to eat the world.
