The Mythical Man-Month
05 December 2012
Having left my job, I had started to consider where things had gone right and wrong in the companies I had worked for. The Mythical Man-Month is considered by some the bible of software development, and since I had the time I decided to read it for comparison with my own views. I knew the book would show its age, but it has stood the test of time because it focuses on non-technological aspects such as psychology. As I found out, its reputation is well deserved.
Program vs. Product
Compared to a program made for private use, integrating a system intended as a product takes nine times the effort. Personally I think nine-fold is an exaggeration, but the book is spot on when it talks about the issues of tying modules together into a complete system. Although the resource limits used as an example are archaic, the practical process of combining everything together does reveal gaps.
Design for change
Although things like modular architecture are much more common these days, I am in two minds about the merits of too much generalisation in software. There is an approximate trade-off between generalisation and specialisation, which Brooks quantifies by comparing modular and monolithic programming approaches. True, but it misses the point.
In my experience, where things go wrong is not so much too much or too little provision for change as a complete absence of it. Usually as the result of time pressures, program code is written so tightly to specific usages that it makes technically incorrect assumptions about what constitutes valid operation. A nice example was an RTSP client that used a single network read call, on the assumption that the RTSP request/reply would never be fragmented.
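The bug above can be sketched in a few lines. This is a hypothetical reconstruction, not the actual client's code: TCP is a byte stream with no message boundaries, so a single read may return only part of a reply, and the fix is to accumulate until the RTSP header terminator (a blank CRLF line) appears.

```python
def read_reply_buggy(sock):
    # The faulty assumption: the whole reply arrives in one read.
    return sock.recv(4096).decode()

def read_reply(sock):
    # Correct: keep reading until the header terminator (CRLF CRLF) shows up.
    buf = b""
    while b"\r\n\r\n" not in buf:
        chunk = sock.recv(4096)
        if not chunk:  # peer closed before a full reply arrived
            raise ConnectionError("truncated RTSP reply")
        buf += chunk
    return buf.decode()

class FragmentedSocket:
    """Illustrative test double that delivers a reply split across reads."""
    def __init__(self, fragments):
        self.fragments = list(fragments)
    def recv(self, _size):
        return self.fragments.pop(0) if self.fragments else b""

reply = b"RTSP/1.0 200 OK\r\nCSeq: 1\r\n\r\n"
frags = [reply[:10], reply[10:20], reply[20:]]
assert read_reply(FragmentedSocket(frags)) == reply.decode()   # reassembled
assert read_reply_buggy(FragmentedSocket(frags)) == "RTSP/1.0 2"  # truncated
```

The buggy version works fine on a LAN, where small replies usually do arrive whole, which is exactly why this class of assumption survives testing and fails in the field.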
Time to test
Brooks uses a rule of thumb that actual program code writing takes up only a sixth of total project time, whereas testing is allocated a full half: a quarter for component testing and a quarter for whole-system testing. I disagree with the ratios, but everything else tallies with what I have seen. All too often schedules are compressed, and no meaningful system-wide testing ever gets done.
Knowing when to stop
More general is the issue of scheduling, and Brooks covers all the hazards: interruptions to core programming tasks, minor feature additions having a disproportionate implementation cost, and above all the point where a feature freeze becomes essential. There is a very telling one-liner about a technical lead who is very non-reactive to the ever-changing requests from marketing. From personal experience, anyone who is not actively hands-on falls into this managerial trap.
The feature freeze is critical, as there comes a point where a code base has to be consolidated. No matter how small, a new feature means new bugs, and that means testing for unintended interactions. Brooks goes as far as to state that later bugs are all side-effects of fixes, so only crippling fixes should go into previous versions. Oddly enough, Brooks presents some ideas which look like the ancestor of Unit Testing.
From-the-start specs
Brooks calls it external specification, but in more modern parlance it is basically use-case analysis. The implication is that the user manual should be written first, which although overkill does drive the point home. Without a concrete idea of what a piece of software should do, developers are flying blind as far as how it should do things. Although the resulting speculative development has some benefit in allowing for experimentation, gambling and deadlines do not mix well, and that gamble is mentally very wearing.
Although Brooks asserts that specification and implementation can be done in parallel, there is only so far that vague assumptions can go, and use-case analysis cannot be deferred at all. I don't think throwing away a pilot system is inevitable, but in my view the square-peg-in-a-round-hole fit of "new" use cases is why it happens, and it happens more often than it should.
Documentation is king
Ultimately everyone needs to be singing from the same hymn sheet, and this means documenting even supposedly "obvious" things. Without this, conceptual integrity goes out the window, and people end up working against each other. Heck, people end up working against themselves: the easiest way to slot something new in often causes the most pain further down the road.
Developers also need consistent goal posts. The worst example I personally suffered was coming in on a Monday to find that the boss had done what can only be called a major refactoring of the entire sub-project I was the primary developer of. I wasted several days just working out where things had been moved to, let alone grasping the overarching design, and any implementation plans I had were automatically out the window. I think I gave up any real forward thinking around that point, with predictable results.