Abstractions

I remember back when Delphi came out. It was a revolutionary product, no doubt, but it added a significant layer of abstraction to the development stack.

In fact, I remember seeing it demoed at CeBIT, some half a year before it shipped. I was an avid Borland Pascal developer at the time, deep into one of my first commercial Windows projects. I was loving Borland Pascal 7 for Windows, and thought it provided just the right level of abstraction, while not keeping me too far away from the internals: WndProcs, handles and window messages.

So naturally, as I was seeing Delphi demoed, I was thinking to myself: well, this certainly looks slick. But it sure as hell is gonna add a lot of bloat to my application stack, and keep me from having full control over how my window messages flow. That can't possibly be worth it, can it?

Turned out it was. I ended up going out to buy Delphi 1 the day it hit the streets (remember when you had to go out and buy software in cardboard boxes?). The next project I started was with Delphi, and I never once looked back.

So yes, Delphi did end up adding a huge amount of perceived "overhead" onto the application stack. But it was so worth it, in the development time saved, and in how easy Delphi made it to accomplish things. And as computers got faster and the underlying operating systems matured further, that extra overhead became all but negligible, two years down the road.

Fast-forward some twenty years into the future.

A good many new development technologies have emerged since. For just one uncontroversial example, .NET and Java brought garbage collection to the mainstream over ten years ago.

And in the Delphi community, even though 20 years have passed, we keep seeing the same fearful arguments against "new" technologies such as GC that many of us leveled against Delphi itself, back when it was introduced. For example, just two days ago, someone was complaining about a new version of Delphi adding 4 bytes to an object's memory footprint (in exchange for eased memory management with Automatic Reference Counting), worried about the additional overhead.
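As an aside, those four bytes are essentially a reference count. Here's a minimal sketch of the idea; note that on Delphi's actual ARC compilers the counter lives on TObject itself and the compiler emits all the bookkeeping, so the names below (TArcObject, Retain, Release) are illustrative only:

    // Sketch only: a per-instance reference count, along the lines of
    // what Delphi's ARC compilers keep on TObject. The extra Integer
    // field is the 4 bytes in question.
    type
      TArcObject = class
      private
        FRefCount: Integer; // one reference count per instance
      public
        procedure Retain;
        procedure Release;
      end;

    procedure TArcObject.Retain;
    begin
      // The compiler would insert this whenever a new strong reference
      // to the instance is taken.
      AtomicIncrement(FRefCount);
    end;

    procedure TArcObject.Release;
    begin
      // ...and this whenever a reference goes away. The last reference
      // out frees the object; no manual Free required.
      if AtomicDecrement(FRefCount) = 0 then
        Destroy;
    end;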

I think that's very ironic.

As developers, we have a tendency to think that whatever level of abstraction we are currently working with is the perfect be-all and end-all of abstractions. We'd never want to go back to writing assembly code (what, are you nuts?), but we most certainly don't ever want to give up another level of control, either (let the compiler manage memory for me? Why, I can do that just fine myself, thank you very much. Are you implying I'm too dumb to get that right by hand?).

The abstraction level that Delphi introduced, twenty years ago, was awesome and game-changing. But that was twenty years ago. The computing industry has moved on, and Delphi has not really changed much in this regard at all. And when it tries, people worry about the extra overhead – just as they dismiss .NET or other modern technologies over their perceived "overhead".

Of course, this problem isn't helped by some of Embarcadero's unfortunate and misleading marketing. It's understandable that Embarcadero wants to emphasize "native code" as a strength, as that is what Delphi is built upon, and it's the messaging its user base wants to hear: "Native code good. Everything else, Teh Suck". "You've made the right choice by sticking with 20-year-old technology".

But by going so over the top with its "native" messaging, and spreading so much FUD and provably false information – from claiming that .NET code runs "interpreted", to publishing diagrams that show how .NET and JavaScript ostensibly have too much latency to react to mouse clicks in a timely fashion – Embarcadero is in fact reinforcing its customer base's resistance to its own attempts at improvement.

And meanwhile, while Delphi developers worry that four bytes of extra memory will make their buttons too slow to react to clicks, people elsewhere are implementing full 3D rendering engines in JavaScript.

Time waits for no developer.

Technology evolves, and if you look back over the past 20 years, it is truly amazing what has been accomplished. The CPU in our smartphones outruns what most people had sitting on their desks twenty years ago. As technology improves, we as developers need to keep up with the changing landscape – and among other things, this means being open to leveraging higher levels of abstraction, deferring more work to the computer, under the hood (be it the compiler, the runtime or the operating system), and focusing our code on higher-level tasks.

It's 2013. Thinking you still need to manually keep track of and call Free() on your objects is what's nuts. Worrying about 4 bytes of extra memory in your object in exchange for increased developer productivity is what's nuts.
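To put that in concrete terms, compare the classic manual pattern with what the same code looks like once the compiler tracks lifetimes for you (TMyObject and DoSomething are placeholders):

    // Manual memory management: the idiom every Delphi developer knows.
    procedure ManualStyle;
    var
      Obj: TMyObject;
    begin
      Obj := TMyObject.Create;
      try
        Obj.DoSomething;
      finally
        Obj.Free; // forget this and you leak; call it twice and you crash
      end;
    end;

    // Under ARC, the compiler retains and releases behind the scenes;
    // the instance is freed when the last reference goes out of scope.
    procedure ArcStyle;
    var
      Obj: TMyObject;
    begin
      Obj := TMyObject.Create;
      Obj.DoSomething; // no try/finally, no Free
    end;

Those four bytes buy you the second version.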