Backwards compatibility: not backward at all
Ian Murdock had a few words to say on the subject. I want to complement his words in this article.
Let me quote from his article:
I’m often asked why I’m so obsessed with backward compatibility and, as a result, why I’ve made the issue such a central part of the LSB over the past year. Yes, it’s hard, particularly in the Linux world, because there are thousands of developers building the components that make up the platform, and it just takes one to break compatibility and make our lives difficult. Even worse, the idea of keeping extraneous stuff around for the long term “just” for the sake of compatibility is anathema to most engineers. Elegance of design is a much higher calling than the pedestrian task of making sure things don’t break.
Ian's concern is simple: he wants backwards compatibility. In plain terms, this means that old applications must not malfunction on newer systems. He recognizes, at the same time, that backwards compatibility is perceived as a laborious and unrewarding task, made all the harder by the distributed nature of our development efforts.
He's right about that, and I'm thankful that he's pushing for more backwards compatibility. Let's understand why.
A worthwhile goal it is
There are people out there who entertain this notion (or some variant thereof): Why shouldn't an old application malfunction in a newer system? After all, "it's just a matter of recompiling the old app on the new system", right?
They couldn't possibly be more mistaken.
Three fronts of compatibility
There are three main compatibility concerns:
- ABI compatibility: Programs that dynamically link against external libraries should be guaranteed to keep working when those libraries are upgraded to newer minor versions. Otherwise, a single library upgrade can render an entire system unusable (see the sketch after this list).
- API compatibility: Applications written against a particular library (and version) should always compile against the latest (minor) version of that library.
- Formats and protocols: Newer releases of foundational software shouldn't change file formats or protocols in ways that break the older applications that depend on them.
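To make the ABI front concrete, here's a minimal sketch of how shared library versioning encodes that promise on a typical Linux system. The library name libfoo and the binary oldapp are hypothetical, and the listings are abbreviated:

    # The soname (libfoo.so.1) changes only on an ABI break; a compatible
    # minor release just replaces the real file behind the symlinks.
    $ ls /usr/lib/libfoo*                      # shown with symlink targets
    /usr/lib/libfoo.so -> libfoo.so.1.3.0      # link-time name, for developers
    /usr/lib/libfoo.so.1 -> libfoo.so.1.3.0    # soname, recorded in binaries
    /usr/lib/libfoo.so.1.3.0                   # the actual library

    # An old binary records only the soname, so upgrading 1.2.x to 1.3.0
    # leaves it untouched; a real ABI break ships libfoo.so.2 instead.
    $ ldd ./oldapp | grep libfoo
            libfoo.so.1 => /usr/lib/libfoo.so.1

This is why "minor versions" matter above: as long as the soname stays the same, the dynamic linker keeps old binaries running.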
Sometimes, it's impossible to "just recompile"
There are tens of thousands of applications for Linux out there. Do you honestly think their users have access to the source code, or the required skills, to update them and make them work with newer Linux systems?
I didn't think so, either. Gentooers comprise a minuscule share of Linux users. The rest of us want to install a package in seconds, and have it not break when another package is upgraded.
The huge hidden costs of not maintaining compatibility
Face it: eschewing compatibility in favor of the "new fashionable thing" saves a little time for the developer. But it makes everyone else incur hidden costs, which only compound with time.
Software isn't like a car: it doesn't wear out. Stable applications tend to break only when the underlying set of assumptions (in general, "the operating system and system-level apps") changes. Therefore, it's vital that the underlying system stay as stable as possible.
Stability isn't just "no blue screens", you know?
The ISVs: an important part of the ecosystem
What to do if the ISV says "I don't build for Linux because it's a hodgepodge of technologies in rapid flux"?
First: acknowledge that they're (partially) right.
Second: educate them, because the fact that Linux evolves quite rapidly doesn't necessarily mean that old builds of their products won't work on newer distribution releases. There is a fairly mature set of tools to ensure that. If anything, building an application that works across all Windows flavors is as hard as (or harder than) building an application for Linux that relies on published, open standards.
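One hedged example of what those tools let you verify: a binary records which versioned glibc symbols it needs, and the highest version listed is the minimum glibc it will run against. The binary name myapp is hypothetical and the output is illustrative:

    # List the glibc symbol versions a binary depends on.
    $ objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -uV
    GLIBC_2.0
    GLIBC_2.1.3

    # Building on an older distribution keeps this requirement low, and
    # since glibc itself is backwards-compatible, the same binary then
    # runs on newer distributions too.

That last point is the whole trick behind "build old, run everywhere newer", and it's the kind of discipline the LSB codifies.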
Third: continue expanding the suite of mechanisms and tools that ease the ISV's job. If you ask me, projects like Autopackage provide a very valuable (if relatively unknown) service to ISVs. There should be more projects like that.
Fourth: there's a real need for a niche site or information (consulting?) service that explains, in detail, all the steps to go from an ISV's CVS drop to a finished, installable package. And there's a pressing need to evangelize the joys of building ISV apps for Linux. There, I just dropped a business model on your floor -- any takers?
The challenges of a distributed software development system
We in the free software community, as Ian aptly (no pun intended) points out, have to battle an even greater challenge: we're distributed.
The LSB and Portland efforts are invaluable and, undoubtedly, they've lent credibility to the idea that sometimes you don't have the source. Let's face it: sometimes even a free software application hasn't been compiled for a newer distribution yet, and the old packages need to continue working.
I still have a binary RPM of KDiskCat (and RPMs of its dependencies which, to be fair, I have to install with --force) that I compiled on Mandrake 5.1 and that runs on my Fedora 6 machine. KDiskCat was never ported to the KDE 2 libraries, but I still get to use it, so, as you can see, at least binary compatibility isn't a concern in this case.
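For the record, that kind of installation looks roughly like this. The package file names are from memory and illustrative, and on a system that no longer carries the old dependencies you may need --nodeps as well (an assumption on my part, since --force alone doesn't skip dependency checks):

    # Install ancient binary RPMs on a modern system. --force overwrites
    # conflicting files and allows replacing or downgrading packages;
    # --nodeps (if needed) skips the dependency check entirely.
    $ rpm -ivh --force --nodeps kdelibs1-*.rpm kdiskcat-*.rpm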
So it's clear that I think we're doing fairly well. Newer technologies keep appearing at the system level on Linux (to be accurate, the GNU system), and distributors are doing a fairly good job of incorporating them while keeping the environment stable and backwards-compatible enough for old apps to run.
Distributors are also catching incompatibility bugs, holding back newer, incompatible packages, and providing a valuable buffer between the "bleeding edge" and the end user. All in all, I'd probably ask for faster binary builds of the latest stable releases of my favorite applications. But I certainly value my distribution's efforts to keep the base environment livable and stable.
If I have any criticism to make, it's that the packaging isn't as good as it could be. Even using Smart, there are packages out there that force an entire tree of upgrades or downgrades to occur. Why on Earth this needs to happen, when the binaries themselves (unpacked with cpio) work flawlessly, is something I've never understood.
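For anyone who wants to try the cpio trick: an RPM's payload is just a compressed cpio archive, so you can extract the binaries without ever involving the dependency solver. The package file name here is hypothetical:

    # Extract an RPM's payload into the current directory without
    # installing it: rpm2cpio emits the payload as a cpio stream, and
    # cpio unpacks it (-i extract, -d create dirs, -m keep mtimes, -v verbose).
    $ rpm2cpio kdiskcat-1.0-1.i386.rpm | cpio -idmv
    ./usr/bin/kdiskcat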
Oh, yeah, by the way, that KDiskCat program starts much, much faster than Konqueror.
The corollary
At some point, all compatibility concerns create friction which impedes progress. That friction needs to be weighed against the progress it's impeding, and a compromise is usually in order: duplicate major versions of libraries, or entirely different approaches involving compatibility layers. In almost all cases, these measures are worth much more than their upfront cost.
There are countless hard decisions to be made. Let's hope those in charge of making those decisions don't forget about the rest of the users. Remember that programs (even obsolete ones) are meant to enhance our lives, for as long as humanly possible.
It is the responsibility of those skilled in free software to build compatible, durable systems, if only because responsible craftsmanship requires it. It is the responsibility of all stakeholders in the free software movement to provide ISVs with a hospitable environment for their endeavors. Failing at this will turn our dreams of world domination into a footnote in the annals of history.
Now, if you'll excuse me, I'll go chase the "other" stability concern: a nasty kernel panic that occurs when I try to encode an MP3 with LAME while using the binary NVIDIA kernel module. Yes, panic. Understanding how this can be happening to my machine is out of my league.