Our Economy’s Productivity Plateau

I’m of an age where I watched us bridge the digital divide. I saw computers make seemingly slow inroads into more and more of our lives. As a child, if I wanted information, I had to go to a book, maybe an encyclopedia, or perhaps a knowledgeable adult. When we traveled, there was always a navigator looking at a map and keeping the driver informed of the next turn. Complex calculations required a piece of paper and a calculator; I even remember using an abacus. Phone calls were only made from fixed positions, and text messages were notes written on a piece of paper and hand delivered.

While there is a sharp contrast in the convenience of the technology we have today, what is interesting is that with all that convenience, we don’t really get things done any faster. Having an easy-to-access GPS navigator may eliminate the need to occasionally stop and ask for directions, but we still have to drive at roughly the same speeds. We can find information faster, but we don’t seem to do anything better with that knowledge.

As an engineer who worked with both hand drawings and computer-aided design, I anticipated that designs would come dramatically faster with computers. I was genuinely surprised to find that designing with CAD did not decrease the design time. We are able to build more detailed models, solve issues before they reach manufacturing, and usually reduce the hard cost of development, but it still takes about the same time. Some would even argue that it takes longer now. Why is that?

This, of course, isn’t limited to engineering; it cuts across the entire economy. We do have more data, and we are learning a great deal from that data, but it isn’t making us more productive. Productivity is the value created in a given timeframe, and we still generate roughly the same value in the same amount of time as we did 50 years ago.

I would propose that the industrial revolution, and our ability to innovate within that toolset, hit its optimal point a half-century ago. The machines that humans use to do things faster and better have stabilized, and the improvements have not significantly increased outputs. Cars, tractors, factory automation, shipping, distribution: each of these areas is only marginally better than it was a half-century ago. Add to that, the cost and effort of making those improvements may have outweighed the productivity gains, or simply evened them out.

How do we improve productivity, if we still think that is the goal? The answer seems to be rooted in how productivity is measured: output per person over time. The limitation is the person. A person seems to have a finite capacity for augmenting the machines that they control. We need to eat, sleep, and engage in a range of activities to keep our minds sharp. Study after study has shown that increasing a human’s time on the job doesn’t improve productivity, especially when thinking is the primary requirement. How do we get past the human? We need the machine to start thinking!

The idea that the machine can think, or should think, has been around for a long time, and it has been both anticipated and feared. To have our machines prepare our food, keep our environment tidy, and take care of the menial parts of our lives is very compelling. The fear is that we could end up becoming slaves to our machines. For us to be more productive, though, we need the machine to discriminate at a level that we would call intelligence. For the car to drive, it needs to be able to react to unforeseen changes in the road. For a good haircut from a machine, it needs a sense of style. For a machine to design a building, it needs to make countless decisions that are subjective in nature.

With the many ideas about how this future looks, I see the same potential for good and ill that technology improvements have shown throughout my life. I don’t see the machines primarily looking like humans, as has permeated our vision of the future. I think most of our smart machines will continue to look like appliances. I think that we will see new machines that tend to look and act more like insects or small animals: machines that can move, monitor, and react to the world around us; machines that are cheap, efficient, and specialize in specific tasks. Productivity will then happen with the machines directing themselves, without the need for humans to direct every step of the process. The wheels are already in motion for this change to take place; how long it takes to move to this economy is now the question.

Legacy Firmware Done Well – A Guide to Get You There


Typically, when a firmware developer (or any software developer for that matter) has things his or her way, they write all code from scratch.  It’s the ideal way to go, since it allows a good process, good tests, static analysis and coding standards to be put in place from the start.  This is by far the cheapest way to build quality code.  But the reality is that rebuilding everything from the ground up isn’t always feasible.  There are times when legacy code1 has to be changed and maintained.  Depending on the company and the engineer’s role, this may even be the majority of the firmware developer’s work.  Is there a way to do it well, or is it destined to be patches on top of patches in an endless bowl of spaghetti?

Every developer has been there before: You are staring at thousands of lines of code that you didn’t write and being asked to add a new feature or fix some bug that pops up every few months.  This graphic from Three Panel Soul sums up this scenario well:

If this scenario doesn’t terrify you, then one of two things is going on:

  1. You have already developed a very good and methodical approach to maintaining legacy code and you are like the man in the middle panel.  You probably don’t need this article but might learn something anyway.
  2. You do not have a good understanding of software brittleness and the potential for even the smallest code changes to introduce big problems.  You need this article.  Read on!

Earlier in my career, I worked in the defense sector doing legacy sustaining work on embedded systems.  As I started to dig through reams of embedded C, I quickly learned that I was dealing with code that had never been rigorously tested and had little to no unit testing done against it.  I also learned that certain sections of code had always ‘just worked’ and if I was going to make changes in those sections, I had better not break anything.  It wasn’t long before I was gripped with the paralysis of needing to make changes but being terrified to do so.

If you are like me when I started, you feel like the man in the third panel.  You understand that even small changes to the code can introduce huge issues and so you have a vague sense of terror but you don’t really know where to begin when it comes to adopting a safe approach to maintaining legacy code.  I’m so glad you’re here.  In this post, I’m going to try to summarize a good and safe approach for you.  Please read on!

Mastering the Art

How do you master the art of legacy code maintenance?  Test Driven Development.

So what is TDD (Test Driven Development)2?  Fundamentally, it preaches that all software work starts with developing a test and then finishes with making the actual code change.  In other words, the code changes are driven by the tests.

If you’ve been in the field of software development for more than three days, you’ve probably heard of TDD before.  When it comes to legacy code, you might be thinking “look I know that test-driven development is great for new code development, but when it comes to legacy code it just doesn’t apply”.  I want to humbly insist that you couldn’t be more wrong.  Test-driven development is the key to efficient, high quality and safe code maintenance.  This is particularly true for code that has never had any tests run against it before.  In the following sections, I’ll break down why this is true and my hope is that by the end you’ll agree.

Build the Vise

The first step in changing code (in fact, before any code has been changed) is to build a vise around the code.  No, I’m not talking about your coffee addiction or your inexplicable propensity to eat from the McDonalds dollar menu on a weekly basis (see vice).  What is a vise?  According to Webster’s:

any of various devices, usually having two jaws that may be brought together or separated by means of a screw, lever, or the like, used to hold an object firmly while work is being done on it.

I want to focus in on that last part, that bit about holding “an object firmly while work is being done on it”.  That is our key when it comes to stepping into unknown software or firmware.  We want to hold it in place, to prevent it from moving, to prevent the functionality of the code that is being worked on from changing.

When I was a kid, my dad had a vise on his tool bench.  I used to love going into the garage, clamping an old piece of wood into the vise and driving nails into it until it looked like a porcupine.  I always put the wood into the vise first because I knew that no matter how hard I hit it, it would stay firmly in place.  It wouldn’t fly off and put a hole in the wall or break a window.

In software sustainment, the developer is typically entering a section of code that needs some new feature or needs a bug fixed.  In the case of a bug fix, the goal is to fix the problem without changing the functionality of the code.  If a feature is being added, 99.9% of the functionality must remain identical.  The only functionality that is changing is the new feature.  In either case, a vise must be placed around the code such that the developer can safely and confidently make the changes and rest assured that only the intended functionality is changing and that bugs are not being introduced.

Here’s a simple step-by-step as well as a flowchart view for building your software vise:

Building a vise Step-by-step


  1. Identify the Necessary Change – Figure out what code needs to be changed to add the new feature or fix the bug.
  2. Analyze the Complexity – If the code is not part of a complex function (see cyclomatic complexity), the function is already isolated and ready for test.  Skip to step 5.
  3. Encapsulate Functionality – If the code is part of a complex function and writing unit tests against the entire function is not possible, just start with the code that needs to be changed and back out until you have captured a complete subset of functionality.  This may involve the 10 to 20 lines surrounding the code that will be changed.
  4. Isolate – Once the above suite of complete functionality has been identified, isolate it into its own function.  Be careful not to change the code other than the absolute bare minimum needed to isolate it into a separate function.
  5. Test – Build up a set of unit tests against the isolated function, using code coverage tools if necessary to ensure full coverage.
  6. Once all tests pass, the vise is in place.


Once these steps have been followed, a good software vise is in place and the code can be safely modified.

TDD – Avoiding the Path to Certain DOOM

Let’s dive into a simple example.  The iconic first-person shooter DOOM, first released in 1993, has the huge advantage that its source code was released by creator John Carmack in 1997.  What more appropriate way to show the power of TDD in legacy code maintenance than on the DOOM source code?  Follow me on a hypothetical journey of chainsaws, gore and… code maintenance?

Jane Developer works as a software developer at Fictitious Software LLC, and she has been tasked with modifying the DOOM source for a new embedded platform.  Because this new platform is a small embedded system, it has a processor that is not quite as powerful as the typical PC.  One of the tasks associated with porting to the new platform is to increase the initial speed at which the player is allowed to turn, so that the lagginess of the slower platform is not quite as apparent.  Let’s follow Jane as she runs through the steps above to build a simple vise around her code:

Identify the Necessary Change

After some digging, Jane identifies that this change should be made in the G_BuildTiccmd() routine in the g_game.c module.  Studying this code more, it is apparent that the gameplay works like this: when the user holds down the right/left key or holds the joystick in the right or left position, the player begins to turn that way at the ‘slow’ speed, and once the key or joystick has been held past the SLOWTURNTICS timeout, the turn speed increases to whatever is currently contained in the speed variable.  The purpose of this explanation is not to give a tutorial on how turning works in the game, but rather to build enough understanding to identify where the code will need to be changed to add the new feature.

Here’s a look at the function with the candidate for change shown on line 268:

[Listing of G_BuildTiccmd() from g_game.c, with the turn-speed selection around line 268 highlighted.]