Because of this, software companies that don't primarily deal in open source have shunned Linux. It's something of a chicken-and-egg argument to say who shunned whom first. And perhaps it's academic: does it matter who took the first step away from the table?
Still, here's the key problem: there will always be a big gap between Linux advocacy and Linux reality as long as the platform remains biased toward the near-complete exclusion of binary-only, closed-source, proprietary software.
The simplest and most obvious sign of how Linux’s open-only stance has played out in the real world is the sheer lack of binary-only software available for the platform. Not just applications, but device drivers, support tools—the whole gamut of things people take for granted in other places.
This isn’t to say that binary-only items don’t exist—just that they’re few and far between. And that Linux as a whole, both as an ecosystem and as a community, is biased against such things.
The ecosystem part is mostly logistics. People who try to supply binary-only apps or drivers for Linux quickly discover that it involves a great deal of work. This is because there’s no “Linux” generally—there’s a slew of distributions, a slew of kernel revisions, all subtly incompatible with each other on a binary level.
Last year I talked to a company (I won't name them here) whose main business is on Windows. They had been putting out a binary-only Linux version of one of their main products for some time. They admitted that the Linux version lagged behind its Windows brethren because of the sheer effort involved in getting a binary-only app to behave properly across the major distributions they targeted.
Worse, they had to think about each kernel revision within those distributions, going back about three or four iterations. This effectively makes them custodians of a dozen or more different editions of the same app for one platform. (The Windows version runs generically on all versions of Windows from 2000/XP forward.)
So additional effort is required to just make the apps do something that comes naturally on other platforms. For people who haven’t grown up with this as a way of life, it’s exhausting. It forces the manufacturer to support that many more separate editions of a single product. (I’m not sure this is what Linux advocates had in mind when they said the future of commercial software was in selling support and services.)
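The fragmentation described above can be made concrete with a small runtime probe. The sketch below is purely illustrative (the function name and returned fields are my own, not from any real vendor's installer): it gathers the two facts a binary-only vendor has to track per target, the kernel release and the C library version, both of which vary from one distribution and release to the next.

```python
import platform

def linux_target_info():
    """Collect the facts a binary-only vendor must track per target:
    the kernel release and the C library flavor/version, both of
    which differ across distributions and their release cycles."""
    kernel = platform.release()           # e.g. "5.15.0-91-generic"
    libc, libc_ver = platform.libc_ver()  # e.g. ("glibc", "2.35")
    return {"kernel": kernel, "libc": libc, "libc_version": libc_ver}

print(linux_target_info())
```

Run the same probe on a handful of mainstream distributions and you get a matrix of kernel and libc combinations, each of which is a potential compatibility target; a Windows vendor, by contrast, typically ships one binary against one stable ABI.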
The few programs out there that have binary Linux editions—binary-only Oracle, for instance, or the Opera browser—are typically backed by major corporate muscle. And without big corporate cash, deploying something like Oracle—or even a program a fraction of its size—across multiple Linux distributions is so difficult that most companies either significantly narrow their focus or simply don’t try.
This is especially true when the program in question doesn't even line up with Linux's current target market (which is mainly servers).
Live free or die?
A constant sentiment among some Linux advocates is that it’s best for Linux as a whole to reject closed-source drivers and software. To compromise on this issue means Linux runs the risk of falling into the hands of entities that can exert control over it.
Some of this fear is justified, especially in a software world that’s mulling the future of Sun and MySQL with increasing gloom (even if those things are largely the product of miscalculation on the parts of Sun and MySQL). But it’s not clear that all of Linux’s source-is-best stance is wholly a protective gesture to guard against the profit-only crowd.
More often than not, this stance is invoked to embody the concept that in five years, or maybe a decade (the exact time depends on the speaker), all software will be open source. And that the money to be made from this stuff will be from services and support, not software itself. Some even say that hardware will have its costs subsidized through support and be essentially free to own upfront.
It's tempting to believe this is possible, especially when high-quality open source replacements for many commodity apps exist. But that requires ignoring all the high-quality proprietary apps that show no sign of being replaced by open source equivalents.
A commonly cited example is Photoshop. It can't be replaced with open source in professional environments due to lack of support for patented technologies like Pantone color matching. Graphics professionals aren't going to wait with arms folded for those patents to expire; they've got work to do. They're more than willing to pay money for a quality product, proprietary or not, that lets them do it. Pretending these things don't exist, or trying to upend them by attacking the patent system, accomplishes nothing.
As for hardware costs being subsidized by support, I can only assume that argument is based on observations of the phone market. To assume what goes for phones will be what goes for computing hardware generally is not to think at all. The economy of phones is tied to the economy of the phone network, which couldn't be less like the way, say, servers and the Internet at large work.
It holds even less water when you realize most people would be happy to use commodity devices like tablets or netbooks on cell networks … if only the network gatekeepers would let them.
The costs of freedom
So what are the end results of an insistence on open-only for Linux?
Mainly, it’s meant that Linux has had great trouble thriving in areas where the conveniences of proprietary software are taken for granted. The commodity desktop is the biggest example, where Linux remains a statistical blip or an occasional curiosity, instead of a strategically powerful alternative on the order of the Macintosh (or even the iPhone).
A major PC maker like Dell may sell Ubuntu-equipped PCs and netbooks, but few users seem willing to embrace a wholly new, largely backwards-incompatible ecosystem for the sake of either saving a few dollars or being more secure. The Mac offers them all of that and more, albeit at a premium cost. But still, the perceived value associated with buying into Apple’s walled garden has created many more satisfied customers than disgruntled refugees.
Apart from servers, the other biggest area where Linux has made inroads has been in mobile technology. And even there, it’s only been after major third-party work was involved. This isn’t itself a bad thing; Linux’s big appeal has always been its malleability. But it hasn’t created an end-user market for Linux as an ecosystem unto itself—just as a development toolkit, or raw material to be further processed.
Consider Google's Android and Chrome OS: Android doesn't replace anything except other phones, and even Chrome OS's own makers are clear that it is an adjunct to the desktop, not an attempt to usurp it.
Another major fallout from Linux's open-only stance: the most robust hardware support for Linux comes only from those who are using it as raw material for their own devices (as in the examples above). The rest will simply ignore Linux, or will throw only the most minimal effort behind a "community-supported driver."
The reason: Linux's marketshare, apart from places where it's institutionally legion (mainly servers), isn't big enough, or growing fast enough, to risk a viable business model on what amounts to good intentions.
Blaming hardware manufacturers for not getting with the open source program is myopic. There are far more people making hardware that uses closed-source binaries—and making money from that hardware—than there are people whose livelihoods depend on open source. They should have the right to choose whether to open their hardware (isn’t “choice” a big open source buzzword, anyway?).
By and large they have opted not to do so because there’s little or no tangible advantage for them. There are hardware makers who support Linux. But they do so because it matters to their business in some way—not because they earnestly believe they can reach a state where hardware can be subsidized wholly through support contracts.
Keeping an open mind
None of what I have written here should be construed as an argument that Linux has no value, or that it has no future. The sheer weight of the evidence to the contrary would break the pans of any scale. Its malleable nature—or rather, the fact that it has been mandated to be malleable—offers immense value.
What I am arguing is that Linux’s current position creates as many strictures as it does possibilities. The insistence that everything begins and ends with source code is of great value in the development sphere. But in the messier, far less structured real world, results count more than potential.
For proof of this, look no further than one project on Linux that has generated both great enthusiasm (among programmers) and controversy (among die-hard open-source supporters): the Mono project, an implementation of Microsoft’s .NET stack in Linux.
Mono allows an easy, consistent way to deploy proprietary apps on Linux in high-level languages, without having to deal directly with the shifting tectonic plates of different distributions and kernel versions. The project’s been attacked for its Microsoft roots. But that distinction means little to most of the computing world, which uses Microsoft along with a great many other closed- and open-source vendors. The world wants to use a mix of open-source and proprietary solutions whenever possible, and Linux—both the platform and the culture around it—makes that more difficult than it needs to be.
Linux’s maintainers need to be honest with themselves about how realistic it is to continue employing a strategy that essentially guarantees that Linux will forever be a development platform, rather than a deployment platform.