My old post “Why Microsoft makes bad programmers” made waves last week on Twitter. And so, I decided to write about how I see things five years later, now that I’m not fully in the MS world.
Well, basically the same.
Visual Studio (and other IDEs – it’s not a Microsoft-specific problem) is getting better and better at letting us create more complex code, and provides us with better tooling to debug it. I don’t see that stopping, and as long as we keep chasing the next tech, we’ll keep being handed the right tools for correcting our ways.
Is it so bad? If you’re geeking out on technology, no problem. Be my guest.
If you’re getting paid to develop solutions for customers, that’s a whole different story.
Developers, testers, DBAs, ops people, and everyone else involved in developing and running systems are paid to do just that. We are professionally obligated to get a product into a satisfied customer’s hands.
That means producing working software as quickly as possible.
Now, technology is there to help us. I wouldn’t go back to building sites with classic ASP. The right technology can save us lots of time or money (if we make the right choice, that is).
Still, technology is only part of the solution.
While we’re having this debate, technology keeps moving forward, and keeps finding more wonderful ways for us to shoot ourselves in the foot. We’re encouraged to write more code: bigger solutions and projects are now supported, search capabilities are stronger, and we can generate code snippets in the blink of an eye. All of it ready to be debugged by a bigger, better debugger.
Much like Dr. Malcolm told us in the original Jurassic Park, we are so preoccupied with whether we can master the technology that we don’t stop to think whether we should.
The majority of developers still work this way, because developers (and their managers) still see themselves as code manufacturers rather than solution providers in a real economic world. They ignore the fact that more code means more trouble. And the feedback from the tool makers, in the form of ever-better generation and debugging tools, is hard to interpret any other way.
If you agree that less code is better, then the tool vendors are clearly encouraging us to move in the opposite direction, and compensating by giving us more tools to cover our mistakes. It’s one way to work, but definitely not the optimal one.
In the original post I pitted TDD against debuggers. I still think test-first gets better, quicker results than debug-after: TDD encourages writing just enough code, reduces complexity, and shortens maintenance time. Good programmers use a set of practices, like TDD, that minimize time to market and raise quality.
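For readers who haven’t tried it, here’s a minimal sketch of what “test-first” means in practice. It’s in Python with pytest, and the function name and discount example are mine, purely illustrative – the point is the order of work, not the code itself. The tests are written first and define what “done” means; the implementation is only as much code as it takes to make them pass.

```python
# Test-first sketch (pytest style): the tests below were written before
# the implementation existed, and they pin down exactly what it must do.
# apply_discount is a hypothetical example, not from the original post.

def test_discount_is_applied_to_order_total():
    # This test drives the design: it fails until apply_discount exists.
    assert apply_discount(total=100.0, percent=10) == 90.0

def test_zero_percent_leaves_total_unchanged():
    assert apply_discount(total=100.0, percent=0) == 100.0

# Just enough code to make the tests pass -- no speculative features,
# no extra branches waiting to be debugged later.
def apply_discount(total: float, percent: float) -> float:
    return total * (1 - percent / 100)
```

Nothing here ever needs a debugging session: by the time the code exists, the tests already say whether it works.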
The rest keep stubbing their toes on the table they put there themselves, and cursing it.
Then they do it again.