
I'm not certain about all of the introduction, but the last part is basically what I'm fighting for: presume that some things will fail or behave unexpectedly, and that new things will come up and disrupt the current work. Right now, the main delusion I see in most software projects is the idea that you can predict that, plan for it, anticipate the risk. It leads to abuse on both sides: a lot of effort is put into being resilient to things that are unlikely to happen, and a lot of denial about things that can happen but are out of our control. So you get this alpha period before release where people put up a sign saying "here is where we kill all the dragons we bred over the last months", as if we knew how many dragons there would be and how hard they would be to kill. You also get a gigantic and costly infrastructure able to scale to millions and reach six-nines availability, for a product that hasn't found its market yet, and that still depends entirely on a single external SaaS with no minimum SLA, which no one will know how to replace if the provider goes the way of the Dodo.

I believe iterative development, software development agility, continuous delivery, and many other improvements of the last 50 years have been about exactly that, but they mostly focus on a single, highly integrated, highly vertical team. Where we still need to do some work is how to scale this: in revenue, in complexity, in people. Most of the tools I've seen trying to address that fall into two camps: either they imitate the small-scale solution at a larger scale, or they try to adapt last century's large-scale solutions to the newer trends. The exception, IMO, is Wardley Mapping and to some extent Team Topologies. Both try to create a flexible yet manageable and actionable view of the bigger picture in their own domain, but I don't think we have totally mastered this yet.
