Duke Nukem Forever & Reworking code
Cosmin Stejerean linked to a really interesting article on wired.com which tells the story of how the team behind Duke Nukem Forever failed over 12 years to ship the game, eventually giving up.
Phil has written a post about the article from the angle of his experience working with these types of companies and working out how to get something into production, but as I read it the story seemed to relate to reworking code and why/how we approach it.
Reworking code can mean either rewriting or refactoring it, but the common thread is that the work doesn't directly contribute to getting something released.
One bit of the article stood out to me as particularly interesting - it describes how the team decided to switch from the Quake II game engine to the Unreal engine:
One evening just after E3, while the team sat together, a programmer threw out a bombshell: Maybe they should switch to Unreal? “The room got quiet for a moment,” Broussard recalled. Switching engines again seemed insane — it would cost another massive wad of money and require them to scrap much of the work they’d done. But Broussard decided to make the change. Only weeks after he showed off Duke Nukem Forever, he stunned the gaming industry by announcing the shift to the Unreal engine. “It was effectively a reboot of the project in many respects,”
What they effectively did here was rip out a core piece of the architecture and replace it entirely, which is quite a difficult decision to make if you know you have to deliver by a certain date.
We've made that kind of decision on projects I've worked on, but you need a very compelling argument to do so - typically that productivity will be much improved by making the change.
Whenever we talked about refactoring during our technical book club, Dave always pointed out that refactoring for its own sake is pointless, which to an extent explains why it makes most sense to refactor either around the code we're currently working on or in the areas that cause us the most pain.
For me the goal of refactoring is to make code easier to work with and easier to change, but it's useful to remember that we refactor to help us reach another goal.
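To make that concrete, here's a minimal, hypothetical sketch of refactoring in service of a goal rather than for its own sake. None of this comes from the article - the names (Item, Order, subtotal, total) and the discount requirement are invented for the example. The idea is that we only extract the duplicated total calculation because the change we actually need to ship (a discount rule) would otherwise have to be made in two places:

```python
from dataclasses import dataclass

# Before the refactoring, the subtotal-plus-tax calculation was
# duplicated in an invoice function and a receipt function, so the
# new discount requirement would have meant changing it twice.

@dataclass
class Item:
    price: float
    quantity: int

@dataclass
class Order:
    items: list

TAX_RATE = 1.2  # assumed flat tax multiplier, purely illustrative

def subtotal(order):
    """Extracted once, so the change we actually need lives in one place."""
    return sum(item.price * item.quantity for item in order.items)

def total(order, discount=0.0):
    # The new requirement (a discount) now slots in without touching
    # the summing logic that used to be duplicated.
    return subtotal(order) * (1 - discount) * TAX_RATE

if __name__ == "__main__":
    order = Order(items=[Item(price=10.0, quantity=2), Item(price=5.0, quantity=1)])
    print(total(order))                # 30.0
    print(total(order, discount=0.1))  # 27.0
```

The extraction here is only worth doing because the release needs the discount change; it isn't polishing the code in the hope of some future payoff.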
An idea I quite like (suggested by Danilo), but haven't tried yet, is running technical retrospectives more frequently so that we can work out which areas of the code we need to work on for our current release, and then using Stuart Caborn's bowling card idea to keep track of how much effort we've spent on those problems.
It seems like any refactorings we decide to do need to be linked to a common vision that we're trying to achieve, and that seems to be where Duke Nukem Forever went wrong: the vision of what was actually required for the game to be successful was lost, and as many features as possible were added instead.
The Duke Nukem approach seems quite similar to going through a code base and refactoring just to make the code 'better', even though we might not see any return from doing so.
In Debug It, Paul Butcher suggests that we approach bugs in software with 'pragmatic zero tolerance': our aim is to have no bugs, but we need to keep sight of our ultimate goal while pursuing that aim.
I think we can apply the same rule when reworking code. We should look to write good code which is well factored and easy to change, but realise that we'll never write perfect code, and we shouldn't beat ourselves up about it.