Modern Software Engineering Book Reflections

Book cover of Modern Software Engineering

Recently, I read Dave Farley’s Modern Software Engineering, a book that outlines a modern, sensible approach to engineering. It articulately describes what it means to be an engineer and how to tackle engineering challenges.

Dave’s thought process and views on many topics felt familiar and close to my own, not because we’ve worked together, but because I carry the scars of trying many things that didn’t work: failed projects, missed deadlines, building the wrong things. Each chapter was like catching up with an old colleague who has been there and seen what works and what doesn’t. It was a refreshing read that I highly recommend to any engineer.

Inspired by Dave’s great work, and by other thought leaders in our industry, I decided to write up my own views.

⚙️ Building Incrementally

Performing heavy up-front analysis and design slows progress. For simple software this may be acceptable, but it rarely holds in the current era of building scalable, reliable, and fault-tolerant systems. Complex systems aren’t produced by a single mind on the first iteration; they are built incrementally, learning what works along the way.

Some ideas that looked great on paper didn’t satisfy requirements because of unexpected unknowns. If you’ve ever worked in a waterfall methodology, you’ll know this was its biggest flaw: many notable systems turned into a big ball of mud despite, in some cases, hundreds of millions in investment.

As software engineers, we have the power to change our design during implementation, and adhering to design principles such as loose coupling, modularity, and separation of concerns makes that change easier.
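
To make that concrete, here’s a small TypeScript sketch (the `OrderStore` and `OrderService` names are mine, invented purely for illustration): the domain logic depends on an abstraction rather than a specific database, so swapping the storage choice later doesn’t ripple through the code.

```typescript
// Loose coupling sketch: the service depends on an abstraction
// (OrderStore), not on any particular database or SDK.
interface Order {
  id: string;
  totalPence: number;
}

interface OrderStore {
  save(order: Order): Promise<void>;
  findById(id: string): Promise<Order | undefined>;
}

// The domain logic only knows about the interface, so the storage
// choice can change without touching this class.
class OrderService {
  constructor(private readonly store: OrderStore) {}

  async placeOrder(order: Order): Promise<void> {
    if (order.totalPence <= 0) {
      throw new Error('Order total must be positive');
    }
    await this.store.save(order);
  }
}

// One concrete implementation; a database-backed store could replace
// it with a one-line change at the composition root.
class InMemoryOrderStore implements OrderStore {
  private readonly orders = new Map<string, Order>();

  async save(order: Order): Promise<void> {
    this.orders.set(order.id, order);
  }

  async findById(id: string): Promise<Order | undefined> {
    return this.orders.get(id);
  }
}

const service = new OrderService(new InMemoryOrderStore());
```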

We’re in a unique position as engineers, because many other disciplines can’t pivot so easily. Take building a bridge that spans several miles: a change of plan may be extremely difficult, or outright impossible due to physical constraints. As software engineers, we can generally just delete code or remove resources from our chosen Cloud Service Provider (CSP).

🔬 Performing Engineering

Being a software engineer is far more than just writing code. Getting the architecture and design right for the component you’re working on is important, but iteratively reflecting on approaches, learning what works, forming hypotheses, experimenting, and drawing conclusions is where a good engineer really adds value.

As engineers, we need to select approaches and be aware of what our choices really mean and which tradeoffs we’re making; a simple example would be considering the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem) in our approach. We are organising complexity all the time, and as a system grows that complexity should ideally scale linearly rather than exponentially; it is up to engineers to tame it.
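
As a hedged example of how that tradeoff surfaces in everyday code: with DynamoDB, the AWS SDK lets you choose the consistency of each read, paying extra latency and cost (and accepting reduced availability during a partition) in exchange for up-to-date data. The table and key names below are invented for illustration.

```typescript
import { DynamoDBClient, GetItemCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({});

// Hypothetical table and key, purely for illustration.
// ConsistentRead: true requests a strongly consistent read, trading
// latency, cost, and availability for freshness; the default (false)
// is an eventually consistent read.
async function getBasket(basketId: string) {
  const result = await client.send(
    new GetItemCommand({
      TableName: 'Baskets',
      Key: { basketId: { S: basketId } },
      ConsistentRead: true,
    })
  );
  return result.Item;
}
```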

"Code is the output of good engineering, not the driver of it"

There’s little value in clever code; we want code to be simple, expressive, and easy to comprehend. Yes, there’s a time and place for clever code, such as code golf, but in systems you’re building alongside many teams you want to use code as a communication tool: things that aren’t clear lead to misunderstandings, and therefore bugs.
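
As a toy illustration of my own (not from the book), both functions below total the in-stock items in pence, but only one reads as plain intent:

```typescript
interface Item {
  pricePence: number;
  inStock: boolean;
}

// "Clever": dense, and the reader has to unpick the reduce to trust it.
const total = (xs: Item[]) =>
  xs.reduce((a, x) => a + (x.inStock ? x.pricePence : 0), 0);

// Clear: the name and structure carry the meaning.
function totalInStockPence(items: Item[]): number {
  let totalPence = 0;
  for (const item of items) {
    if (item.inStock) {
      totalPence += item.pricePence;
    }
  }
  return totalPence;
}
```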

Compilers are very good at finding optimizations, such as inlining, so whether something is written in one line or three shouldn’t be a performance concern, even in systems such as high-frequency trading. If in doubt, test.

🌻 Change

We’ve already covered that for anything but trivial systems, change is a constant, both during implementation and for the future of the system, so enabling change should be a first-class consideration in how we build. Following an agile approach of small, incremental, iterative change forces us to build in a way that adheres to the principles we’ve already mentioned (modularity, separation of concerns, and so on). The payoff is that it greatly helps to flatten the cost of change in our system, not only in its infancy but across its entire lifespan.

As you’ve likely experienced, tightly coupled systems become more difficult, more expensive, and more dangerous to change. Releasing software like this becomes a scheduled event; things take longer and releases are delayed. This is far from ideal: we’re slowing down the rate at which we can build, measure, and learn, while a competitor that can release more often has a large advantage. Compounded over time, this could leave your company far behind and eventually lead to financial ruin.

So, we agree that change is important, and changing often requires confidence in our releases. If we are confident that each release is bug-free to the best of our knowledge, then the time it takes to release working software (compile, test) should be the only blocker. We can develop this confidence by following a test-driven approach (TDD), which also forces good practices into our design and code.
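
As a small sketch of what that looks like day to day (using Node’s built-in test runner; the discount function is a hypothetical example of mine), the tests describe the behaviour we want, and the implementation is grown just far enough to satisfy them:

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical function under test, written only after the tests below
// existed and failed.
function applyDiscount(totalPence: number, discountPercent: number): number {
  if (discountPercent < 0 || discountPercent > 100) {
    throw new RangeError('discountPercent must be between 0 and 100');
  }
  return Math.round(totalPence * (1 - discountPercent / 100));
}

test('applies a percentage discount', () => {
  assert.equal(applyDiscount(1000, 10), 900);
});

test('rejects discounts outside 0-100', () => {
  assert.throws(() => applyDiscount(1000, 150), RangeError);
});
```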

Over time, the tests will likely grow into the thousands, even tens of thousands, giving us confidence that the side effects of features implemented years later aren’t causing undesired issues.

Furthermore, having a deterministic method of releasing software, with automated testing that increases confidence and exercises our code at different levels, is only a good thing. A continuous integration/deployment (CI/CD) pipeline is the tool for this; it can include steps such as building the binaries for our software, running automated tests from unit to end-to-end (E2E), scanning for library vulnerabilities, and so on.
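
To make those stages concrete, here’s a deliberately simplified sketch expressed as a script; in practice these steps live in your CI system’s own configuration, and the commands below are assumptions about a typical Node/TypeScript project rather than a prescription.

```typescript
import { execSync } from 'node:child_process';

// A simplified sketch of deployment-pipeline stages. The commands are
// assumptions about a typical Node project; real pipelines belong in
// your CI system's configuration, not a hand-rolled script.
const stages: [string, string][] = [
  ['Build artifact', 'npm run build'],
  ['Unit tests', 'npm test'],
  ['End-to-end tests', 'npm run test:e2e'],
  ['Dependency vulnerability scan', 'npm audit --audit-level=high'],
];

for (const [name, command] of stages) {
  console.log(`--- ${name}: ${command}`);
  // Any non-zero exit code throws, failing the pipeline deterministically.
  execSync(command, { stdio: 'inherit' });
}
```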

An effective continuous integration pipeline that runs in a timely manner is what enables some of the largest companies in the world, with hundreds of engineering teams, to release several thousand times a day.

🧚 Dogma

It can be easy for a team to fall into the trap of doing what they know. Why? Quite simply, it’s worked in the past, or it’s been seen in other projects. A trivial example could be installing half a dozen different linting rule configurations (an approach popular in JS projects). These rules dictate how we write specific idioms of code such as loops, types, ordering of functions, documentation, and so on.

Whilst the intent is good, many of these off-the-shelf configs are pulled from large companies with their own goals, reasons, and styling choices. Do those choices help our team in the specific context of what we’re trying to achieve? Are we as engineers in agreement on the benefits and tradeoffs of following these conventions? These rules can demand extra effort and churn in our codebase, and for us, the thinking engineers, the tradeoff isn’t always worth it. It’s for us to decide, not to blindly follow what has been done before.
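
For instance, rather than layering several borrowed presets, a team might keep a small rule set it has actually debated and agreed on. A hedged sketch of what that could look like in an ESLint flat config (the specific rules and comments are illustrative, not a recommendation):

```typescript
// eslint.config.js -- a small, deliberately chosen rule set rather than
// a stack of presets inherited from someone else's context.
export default [
  {
    files: ['src/**/*.ts'],
    rules: {
      eqeqeq: 'error',           // we agreed strict equality avoids real bugs
      'no-unused-vars': 'error', // dead code hides intent
      'no-console': 'warn',      // we log through a shared wrapper instead
    },
  },
];
```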

"Have no respect whatsoever for authority; forget who said it and instead look at what he/she starts with, where he ends up, and ask yourself, Is it reasonable?" - Richard Feynman

It’s our responsibility as engineers to make decisions based on data. The evidence helps us navigate the sea of possibilities, acts as a compass guiding us towards our desired outcome, and lets us evaluate whether we are getting closer to, or further from, our end goal. Following a direction based on who was loudest or had the most conviction, whilst convenient, isn’t how we should make technical decisions. Naturally, you don’t need to do this exhaustively, but for design decisions it’s worth your time and effort.

🏁 Conclusion

Software engineering is difficult, period. Whilst tooling and technology are ever advancing, so is the complexity of our software. We rely on it for mission-critical systems, financial markets, and piloting all manner of vehicles, and its capabilities are ever increasing. The expectation, even for general systems such as e-commerce, is that they should be always available, fast, and fault-free regardless of where in the world they are accessed.

Things do go wrong. Meeting customer expectations, or legal and regulatory requirements, presents many engineering challenges, and it is up to us to recognise this and build resilience and fault tolerance into what we do.

How we design systems is critical to facilitating change. Good engineering teams build great software in all frameworks and languages; however, it’s difficult to overcome the wrong design choices, especially once they’re cemented in. It is not enough for just our software to be modular; we need modular organisations too.

If your ability to release software quickly in order to learn is blocked by dependencies on other teams, then despite your engineering being agile, the business isn’t. Being decoupled is a key characteristic of high-performing teams and organisations; you can only go as fast as your slowest constraint.

Finally, we need to internalise the fact that systems impact real people like you and me. An error or mistake could have a devastating impact on someone’s life: it could lead to the refusal of a loan, visa, mortgage, healthcare, and much more.

Using some of the techniques we’ve spoken about helps us see the impact of change and iterate quickly. This can be driven by observability, analytics, and events, and sometimes just by being a little more mindful of how our effort impacts our current and future customers of all genders, races, ages, and abilities.

Ash Grennan
Snr Software Engineer

Deliver value first, empower teams to make technical decisions. Snr Engineer @ Moonpig, holding a BSc & MSc in Software Engineering and certified as an AWS Solutions Architect (LinkedIn). A fan of serverless computing, distributed systems, and anything published by serverless.com 🧡