10 more bad programming habits we secretly love

By Peter Wayner

We all know the thrill of bending the rules, or even breaking them. Maybe it’s going 56 in a 55-MPH zone, or letting the parking meter expire. Maybe it’s dividing two numbers without testing to see if the denominator is zero.

Programmers have a weird relationship with rules. On one hand, code is just a huge pile of rules—rules that are endlessly applied by dutiful silicon gates without fear or favor, almost always without alpha particle-induced error. We want the transistors to follow these rules perfectly.

But there’s another layer of rules that aren’t so sacrosanct. Unlike the instructions we feed to machines, the rules we make for ourselves are highly bendable. Some are simply stylistic, others are designed to bring consistency to our unruly piles of code. This set of rules applies to what we do, not how the machines respond.

The real debate is whether it’s a good idea for humans to break our own rules. Aren’t we entitled to reinterpret them on the fly? Perhaps we are, because some of the rules hail from a different era. Maybe some were half-baked notions from the start. Maybe some seemed like a smart idea at the time. Maybe some might be better called “habits.”

A few years ago, I compiled a list of bad programming habits we secretly love. In the interest of advancing the art of programming, here are 10 more programming habits so bad, they might be good.

10 bad programming habits developers love

Coding without comments

It's a well-known fact that undocumented code is a nightmare to understand and debug. Our programming classes teach us that writing good comments is essential. Literate programming, the style of programming that combines natural language and code, was invented by Don Knuth—perhaps the greatest programmer who ever lived. Who are we to argue?

But the sad truth is, there are times when comments make things worse. Sometimes, the documentation seems to have little to do with the code. Maybe the documentation team lives far away from the coding team, in another state—or really, another state of mind. Maybe the coders rolled in a critical patch without telling the documentation team about it, or the documentation team knows but hasn’t gotten around to updating the comments yet. Sometimes, the coders don’t even update the comment at the top of a method they’ve changed. We’re just left to figure it out on our own.

There are other problems. Maybe the comment was written in a natural language you don’t know. Maybe the concept couldn’t be easily summarized in anything less than seven paragraphs and the coder was on an agile sprint. Maybe the person doing the commenting was just wrong.

For all these reasons and a few more, some developers believe the best solution to useless comments is to include fewer of them—or none. Instead, they prefer to write simple, shorter functions that use longer, descriptive camelCase variable names as guidance. Absent an error in the compiler, the code ought to be the most accurate reflection of what the computer is doing.
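Here’s the idea in miniature, as a TypeScript sketch (the bulk-discount rule is invented purely for illustration). The first version leans on a comment that can quietly drift out of date; the second needs no comment because the names do the explaining:

```typescript
// Comment-dependent version: the comment can quietly drift out of date.
// calc: applies a discount when the quantity is over 10
function calc(p: number, q: number): number {
  return q > 10 ? p * q * 0.9 : p * q;
}

// Self-documenting version: every name is exercised by readers and refactoring tools.
const BULK_DISCOUNT_THRESHOLD = 10;
const BULK_DISCOUNT_MULTIPLIER = 0.9;

function totalPriceWithBulkDiscount(unitPrice: number, quantity: number): number {
  const isBulkOrder = quantity > BULK_DISCOUNT_THRESHOLD;
  const subtotal = unitPrice * quantity;
  return isBulkOrder ? subtotal * BULK_DISCOUNT_MULTIPLIER : subtotal;
}
```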

Slow code

If you want your code to be fast, make it simple. If you want it to be really fast, make it complex. Finding the sweet spot for this particular assignment is not so easy.

It’s a trade-off. Generally, we want our programs to be fast. But complexity can be a drag if no one understands it later. So if speed isn’t essential, it might make sense to write code that’s a bit slower but also easier to understand. Sometimes simpler and slower is a better choice than super clever and super fast.
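Consider counting the set bits in a 32-bit integer, a case where both the plain loop and the bit-twiddling shortcut are textbook standards. A TypeScript sketch of the two extremes:

```typescript
// Simple and slower: anyone can verify this at a glance.
function popcountSimple(x: number): number {
  let count = 0;
  for (let i = 0; i < 32; i++) {
    if ((x >>> i) & 1) count++;
  }
  return count;
}

// Clever and fast: the classic SWAR bit trick. A handful of operations
// instead of a 32-step loop, but good luck debugging it at 2 a.m.
function popcountClever(x: number): number {
  x = x - ((x >>> 1) & 0x55555555);
  x = (x & 0x33333333) + ((x >>> 2) & 0x33333333);
  return (((x + (x >>> 4)) & 0x0f0f0f0f) * 0x01010101) >>> 24;
}

console.log(popcountSimple(0b1011), popcountClever(0b1011)); // 3 3
```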

Rambly code

One of my coworkers loves to use all the clever new operators in JavaScript, like the three-dot spread syntax. The resulting code is more concise, which in their mind means simpler and better. All their code reviews come back with suggestions for where we can rewrite the code to use the new syntax.

Some of my other coworkers aren’t so sure that simpler is easier to understand. Reading the code requires unpacking the new operators, some of which may be used in a variety of different ways. Understanding how the operator was used requires pausing and thinking deeply, rather than the fast skimming they are used to. Reading the code becomes a chore.
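A small, invented TypeScript example shows both camps’ point. The spread version is shorter; the step-by-step version reads without any unpacking:

```typescript
const defaults = { retries: 3, timeoutMs: 500 };
const overrides = { timeoutMs: 2000 };

// Concise: spread syntax merges the objects in a single expression.
const configTerse = { ...defaults, ...overrides };

// Longer but skimmable: each step is spelled out; later assignments still win.
const configVerbose: { retries?: number; timeoutMs?: number } = {};
Object.assign(configVerbose, defaults);
Object.assign(configVerbose, overrides);

console.log(configTerse, configVerbose); // both: { retries: 3, timeoutMs: 2000 }
```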

There are historical arguments for why people don’t like supertight code. Languages like APL, which were designed to be incredibly tight and efficient thanks to their custom symbols, have essentially disappeared. Other languages like Python, which eschew curly brackets, continue to rise in popularity.

Lovers of the latest and greatest abstractions will continue to push concise new features and crow about brevity. They stake their claim on being modern and hip. Some others, though, will continue to sneak longer and more readable code into the stack; they know that in the end, it’s just easier to read.

Ye olde code

People who design programming languages love to invent clever abstractions and syntactic structures that make solving certain types of problems a snap. Their languages are full of these abstractions, which is why sometimes the manuals for them are over a thousand pages long.

Some people believe that using these features is for the best. After all, they say, the first rule of power is “use it or lose it.” Shouldn’t we use every single drop of syntactic sugar described in that one-thousand-page manual?

That’s not always a good rule, though. Too many features can breed confusion. There are now so many clever syntactic gimmicks that no programmer could be conversant in them all. And why should we? How many ways do we need to test for nullity, say, or make inheritance work in multiple dimensions? Is one of them right, or better than the others? Surely, some programmers on the team will find a way to create drama by arguing about them and ruining lunch or the standup meeting.
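Nullity alone makes the point. In TypeScript there are at least four defensible ways to write the same guard, each with its partisans (the User type here is invented for illustration):

```typescript
type User = { name?: string | null };

const displayName1 = (u: User) => (u.name == null ? "anon" : u.name); // loose equality catches null and undefined
const displayName2 = (u: User) =>
  u.name === null || u.name === undefined ? "anon" : u.name;          // strict checks, spelled out
const displayName3 = (u: User) => u.name ?? "anon";                   // nullish coalescing
const displayName4 = (u: User) => (u.name ? u.name : "anon");         // truthiness, which also rejects ""

console.log(displayName1({}), displayName2({}), displayName3({}), displayName4({ name: "" }));
// "anon" "anon" "anon" "anon" — and the team is still arguing at lunch
```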

At least one set of language designers decided to limit the feature set. The creators of the Go language said they wanted to build something that could be learned very quickly, maybe even in a day. That meant that all the coders on the team could read all the code. Fewer features lead to less confusion.

Roll-your-own code

Efficiency experts like to say, “Don’t reinvent the wheel.” Use the stock libraries that are well-tested and ready to run. Use the legacy code that’s already been proven.

But sometimes a new approach makes sense. Libraries often are written for generalists and everyday use cases. They’re loaded up with belts-and-suspenders tests to ensure that the data is consistent and the user won't gum up the works by sending the wrong parameters. But if you’ve got a special case, a few lines of specialized code could be dramatically faster. It won’t do everything the library can do, but it does what you need in half the time.
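Here’s a minimal sketch of the trade in TypeScript, with invented data: if you know your values are, say, ages between 0 and 120, a hand-rolled counting sort runs in linear time, at the cost of handling exactly one case:

```typescript
// Specialized: counting sort, assuming every value is an integer in [0, 120].
// No validation, no comparator, no generality — just speed for this one job.
function sortAges(ages: number[]): number[] {
  const counts = new Array(121).fill(0);
  for (const age of ages) counts[age]++;
  const sorted: number[] = [];
  for (let age = 0; age <= 120; age++) {
    for (let i = 0; i < counts[age]; i++) sorted.push(age);
  }
  return sorted;
}

console.log(sortAges([42, 7, 99, 7])); // [7, 7, 42, 99]
// The library sort handles anything comparable; this handles one case, faster.
```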

Of course, there are cases where this can be dangerous. Some code is so esoteric and complex—cryptographic systems, for example—that it isn’t a good idea to cobble together your own, even if you know all the math. But in the right situations, when the library is the big bottleneck for your workload, a few clever replacement functions might be miraculous.

Optimizing too early

It’s common for programmers to toss together some code and justify their quick work with the old maxim that premature optimization is a waste of time. The thinking is that no one knows which part of the code will be the real bottleneck until we fire up the whole system. Wasting hours crafting a great function is foolish if it’s only going to be called once a year.

This is generally a good rule of thumb. Some projects fail to leave the starting line because of too much overplanning and over-optimization. But there are plenty of cases where just a bit of forethought could make a big difference. Sometimes choosing the wrong data structures and schemas produces an architecture that isn’t easy to optimize after the fact. Sometimes that structure has been baked into so many parts of the code that a bit of clever refactoring just won’t cut it. In these cases, a bit of premature optimization ends up being the right answer.
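A toy TypeScript example of the kind of day-one decision that resists later refactoring (the Order type is invented): pick the lookup structure before thousands of call sites assume the wrong one.

```typescript
type Order = { id: string; total: number };

// Easy to write on day one: a flat array, scanned linearly on every lookup.
const ordersArray: Order[] = [];
const findOrderSlow = (id: string) => ordersArray.find((o) => o.id === id); // O(n)

// A moment of forethought: index by id from the start.
// Once the array shape is baked into every call site, switching is a major refactor.
const ordersById = new Map<string, Order>();
const findOrderFast = (id: string) => ordersById.get(id); // O(1)
```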

Carelessness

Everyone knows that good programmers look both ways before crossing a one-way street. They insert plenty of extra lines of code that are always double- and triple-checking the data before anything is done to it. A null pointer could have slipped in there, after all!

Alas, all that extra care can slow our code to a crawl. Sometimes, for reasons of performance, we need to ignore our instincts and just write code that doesn’t care so much. If we want code that runs fast, we should just do the bare minimum and no more.
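Here’s the contrast with a deliberately simple averaging function, invented for illustration (TypeScript). The defensive version re-checks everything on every call; the lean version trusts that validation happened once, at the boundary:

```typescript
// Defensive: double- and triple-checks its input every single time.
function meanDefensive(values: number[] | null | undefined): number {
  if (!values) throw new Error("values is null or undefined");
  if (values.length === 0) throw new Error("values is empty");
  let sum = 0;
  for (const v of values) {
    if (Number.isNaN(v)) throw new Error(`bad value: ${v}`);
    sum += v;
  }
  return sum / values.length;
}

// Lean: the bare minimum, for the hot loop. Validate once upstream, then trust the data.
function meanLean(values: number[]): number {
  let sum = 0;
  for (const v of values) sum += v;
  return sum / values.length;
}
```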

Inconsistency

People generally like order, and programmers often insist that a pile of code use the same technique, algorithm, or syntax in every part. Such diligence makes life easier for anyone coming along later who must understand the code.

On the other hand, consistency has a cost in time and sometimes in complexity. Fixing the differences means going back and rewriting all the code that followed the wrong path. That alone can strain the budget.

A deeper problem comes with the relationship between different sections. Some projects rely on legacy code. Others depend on libraries. Many can’t function without APIs written by entirely different people in separate companies.

Smoothing the differences between these groups is often impossible, and there are only so many times you can rewrite the entire stack to fit the latest vision. A strange corner of our brain craves perfect order, but perhaps it's better to make peace with inconsistency.

Chasing bells and whistles

Another issue with too much consistency is that it prevents innovation. It also encourages a kind of rigid adherence to the old way of doing things.

Sometimes adding new features, folding in new libraries, or integrating the stack with new APIs means breaking the old patterns. Yes, it will make life a bit more difficult for someone who has to shift gears while reading the code, but that’s the price of progress. It's also part of what makes coding fun.

Breaking the rules

For grins, I asked Google’s Gemini if the programmers broke any rules in the process of creating it. Gemini responded, “Rather than the programmers breaking specific rules, it's more accurate to say they might have pushed the boundaries on some best practices when creating large language models like me.”

“Large language models like me train on massive amounts of data, and there’s an element of ‘unknowns’ in how the model learns from that data,” said Gemini. “Some techniques used to create large language models can be very efficient, but it can be difficult to understand exactly how the model arrives at its answers.”

There you go. The LLMs know better than we do that the old rules are changing. When you can feed massive training sets into the box, you may not need to spend as much time understanding the algorithm. So go forth and be human! Let the LLMs mind the rules.

© InfoWorld