Never-ending feud

Is premature optimization really evil or simply sloppy?

Gabriela Motroc

Although more than 40 years have passed since Donald E. Knuth wrote that “premature optimization is the root of all evil,” we continue to use this idea-turned-adage in various forms, especially when something goes wrong.

Donald E. Knuth wrote in his 1974 paper Structured Programming With Go To Statements that “we should forget about small efficiencies, say about 97 percent of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3 percent.” However, many programmers continue to ignore the sentence that opens the passage: “Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered.”

Premature optimization occurs when one decides to optimize without having enough information to conclude how and where the optimization should be done. The problem is that one cannot know precisely where the bottleneck will be, and trying to optimize before obtaining empirical data tends to increase code complexity instead. Because one cannot know for sure when optimization is premature, it is essential to plan in advance and rely on measurements.
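
To make the “rely on measurements” point concrete, here is a minimal Java sketch (a hypothetical example, not from the article): time the suspect code path first and let the numbers, not intuition, decide whether it is worth optimizing at all. A single wall-clock measurement is only a first indicator; a proper profiler or a harness such as JMH would be the next step.

import java.util.ArrayList;
import java.util.List;

public class MeasureFirst {
    public static void main(String[] args) {
        // Hypothetical workload standing in for real application code.
        List<Integer> numbers = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            numbers.add(i);
        }

        long start = System.nanoTime();
        long sum = 0;
        for (int n : numbers) {
            sum += n;
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Optimize only if measurements like this, confirmed by profiling,
        // show that the path is actually hot.
        System.out.println("sum=" + sum + " took " + elapsedMs + " ms");
    }
}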

Evil vs. sloppy: who wins?

Many people responded to a Quora question about the use of premature optimization in software development; they used different words and real-life examples to characterize the concept, but most agreed that premature optimization is bad. One programmer opined that “optimizing early is a waste of time,” while a software developer explained that even though “premature optimization is a warning that should be heeded, it can always be trumped by experience.”

However, this is not what Joe Duffy, Microsoft’s engineering director for languages, believes; he wrote in a blog post that even senior programmers can make poor data structure choices, “often because they are more apt to choose a sophisticated and yet woefully inappropriate one.” Duffy explained that some people use Knuth’s adage to defend all sorts of choices that boil down to one thing: sloppy decision-making or laziness. He argued that even the best performance architects go wrong when they rely on intuition alone, because failing to understand which orders of magnitude matter, and why and where they matter, inevitably leads to failure.

Premature optimization traps

Premature optimization traps occur when one ends up writing complicated code instead of taking a moment to understand how performance affects the main function of the program. Failing to consider worst-case scenarios is another trap that leads to failure, because a programmer can end up paying orders of magnitude more in cost than expected under normal circumstances, as the sketch below illustrates.
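
A classic illustration of such a cost gap (a hypothetical example, not one cited in the article) is building a string with += in a loop: each concatenation copies everything written so far, so the loop is quadratic, while a StringBuilder keeps it linear.

public class WorstCaseTrap {
    public static void main(String[] args) {
        int n = 50_000;

        long t0 = System.nanoTime();
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x"; // copies the entire string on every iteration: O(n^2)
        }
        long quadraticMs = (System.nanoTime() - t0) / 1_000_000;

        t0 = System.nanoTime();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append('x'); // amortized constant time per append: O(n)
        }
        String s2 = sb.toString();
        long linearMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.println("same result: " + s.equals(s2));
        System.out.println("String +=     " + quadraticMs + " ms");
        System.out.println("StringBuilder " + linearMs + " ms");
    }
}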

Using the right data structure improves both performance and the program’s cleanliness, which means one should give some thought to the data structure he or she is about to choose; otherwise, more time and money will be spent fixing a problem that could have been avoided. Although, once a particular solution has been chosen, one can argue that it is too late to fix it and that premature optimization is indeed evil, why not give it a second thought and do it right in the first place? The sketch below shows how large the difference can be.
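
As a hypothetical illustration of the point: membership tests on a java.util.ArrayList scan the whole list, while a java.util.HashSet answers them in roughly constant time. Measured side by side, the gap grows with the size of the data, with no change to the surrounding code.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RightDataStructure {
    public static void main(String[] args) {
        int n = 20_000;
        List<Integer> list = new ArrayList<>();
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i < n; i++) {
            list.add(i);
            set.add(i);
        }

        long t0 = System.nanoTime();
        int hits = 0;
        for (int i = 0; i < n; i++) {
            if (list.contains(i)) hits++; // linear scan per lookup: O(n^2) total
        }
        long listMs = (System.nanoTime() - t0) / 1_000_000;

        t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            if (set.contains(i)) hits++; // hash lookup per query: O(n) total
        }
        long setMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.println(hits + " hits; ArrayList: " + listMs
                + " ms, HashSet: " + setMs + " ms");
    }
}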

Author
Gabriela Motroc
Gabriela Motroc is editor of JAXenter.com and JAX Magazine. Before working at Software & Support Media Group, she studied International Communication Management at The Hague University of Applied Sciences.