When discussing Free software, the term "installed base" comes up rather often. It is installation, not embedding or preinstallation, that tailors a product to the owner's personal needs. Unfortunately, installed base, as opposed to market share, proves to be a tricky thing to gauge.
At the center of this debate, one typically finds the GNU/Linux operating system. Many perceive Linux as the strongest contender for bringing Free software to the mainstream. Linux is commonly obtained through the exchange of CDs, which can be modified, passed from user to user, and used to deploy the same software on multiple computers. The content of these CDs is usually (albeit not always) downloaded from the Internet. Lesser-known Linux distributions are sometimes obtained from peers or via BitTorrent, which cannot be properly tracked. These channels of distribution are decentralized by nature.
Endless attempts have been made to count Linux users. A large user base breeds confidence and attracts better support from the industry. Attempts to quantify growth have included Web sites whose sole purpose is to have Linux users register and provide details about their computers. Even the most prominent of these Web sites met with very limited success. They could not keep up with change, let alone attract and hold the attention of all Linux users. Most Linux users were simply apathetic toward the cause.
In more recent years, the ubiquity of interconnected devices and computers has played an important role in statistics. Computing devices that offer Web access have generated large piles of data, and statistical analysis of that data was thought to be another opportunity to study the presence and geography of Linux users around the planet. It has, however, been a very deficient analysis. Too many assumptions were made, for a variety of reasons, and they led to flawed conclusions. To date, no proper and valid analysis has been carried out.
Looking more closely at the difficulties of interpreting Web statistics, there are numerous factors to consider. Some problems are obvious. The sample of selectively chosen Web sites often attracts particular audiences which, on average, do not represent the entire population. Additionally, because Linux is not one identity but a large number of distributions, the identification strings browsers send are hard to interpret. As a result, many Linux users are simply treated as though they use an "unknown" operating system. This "unknown" component is statistically significant, yet it tends to be ignored and discarded.
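To illustrate the point, here is a minimal sketch, in Python, of how a naive log analyzer might bucket visitors by User-Agent string. The sample strings and the rule set are invented for illustration, not taken from any real analytics tool; the point is that any distribution or browser the rules do not anticipate falls through to "unknown".

import re

# Invented sample strings, roughly in the style of real User-Agent headers.
SAMPLE_USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 6.0; WOW64) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; U; Linux i686; en-US) Gecko/20070515 Firefox/2.0.0.4",
    "Mozilla/5.0 (X11; Ubuntu; Linux x86_64) Gecko/20100101 Firefox/54.0",
    "Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.7 (like Gecko)",
    "ELinks/0.11.1 (textmode; Linux; 80x24-2)",
]

def classify(user_agent: str) -> str:
    """Bucket a User-Agent string by operating system, very naively."""
    if "Windows" in user_agent:
        return "Windows"
    if "Mac OS X" in user_agent:
        return "Mac"
    # A real classifier would need a rule for every distribution and browser;
    # anything these rules do not anticipate falls through to "unknown".
    if re.search(r"Ubuntu|Fedora|Debian", user_agent):
        return "Linux"
    return "unknown"

for ua in SAMPLE_USER_AGENTS:
    print(f"{classify(ua):8s} <- {ua}")

Run against the samples above, only one of the four Linux visitors is counted as Linux; the rest land in the "unknown" bucket that the statistics then discard.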
There are more problems to take into consideration. For example, data gathered by Web sites fails to identify computers that operate behind proxies, such as Squid. It also assumes that every visitor identifies himself or herself honestly. In fact, certain Web sites were designed to reject access from every Web browser other than Internet Explorer, so many Linux users are forced to pretend (by altering HTTP headers) that they use a typical Windows setup. This is known as spoofing or forging, and it is done purely for convenience.
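As a rough illustration of that header spoofing, the snippet below sends a request from a Linux machine while announcing itself as Internet Explorer on Windows. The URL is a placeholder and only the Python standard library is used; this is a sketch of the technique, not any particular browser extension.

import urllib.request

# User-Agent string of Internet Explorer 6 on Windows XP; any IE-only site
# that inspects this header will treat the Linux client as a Windows one,
# and its logs will record a Windows visitor.
SPOOFED_UA = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

request = urllib.request.Request(
    "http://www.example.com/",              # placeholder URL
    headers={"User-Agent": SPOOFED_UA},
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.headers.get("Content-Type"))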
The last factor to consider here is botnets (zombie PCs) that roam the World Wide Web. Their Web journey is relentless, and it happens without the awareness of the computer's rightful owner. This troublesome phenomenon means that a large volume of Web traffic is consumed wastefully and does not accurately reflect human consumption of information. Botnets 'pollute' log data and therefore skew statistics, and rarely (if ever) in favor of secure operating systems and Web browsers.
Web statistics, and the research that revolves around them, suffer from yet another false assumption. One must not simply accept the contention that all computers are connected to the Internet nowadays. Even where they are, their users do not necessarily visit an identical number of Web sites or consume an equal number of pages. Different operating systems are used in different settings. They serve a particular purpose and facilitate working tasks that might not require the Internet at all.
To give an example, Hollywood is considered a place where production studios have adopted Linux, even on the desktop. In a recent interview with the press, CinePaint's Project Manager said that "Linux is the default operating [system] on desktops and servers at major animation and visual effects studios, with maybe 98 percent [or more] penetration." These computers, which include user-facing workstations, are used heavily for design and rendering work, but probably not for Web surfing.
There have been other projects intended to keep track of the number of Linux users by setting up a communication channel between a computer and the Linux distributor's servers. These projects are neither mature nor widely adopted.
On the other hand, the increasing adoption of online software repositories has made this kind of tracking more feasible without it being considered "spying." And yet, in the absence of a registration process, dynamic addressing means a single unique user is still hard to identify; the user remains a moving target on the network as long as system registration is missing. Free software is averse to such privacy-compromising steps, so they are unlikely to ever become mandatory.
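For a sense of why address-based counts are slippery, here is a rough sketch that tallies distinct client addresses per day from repository access logs. The log format and sample lines are assumptions (a common Apache-style layout), not the format any particular distributor uses; with dynamic addressing, one person can surface under several addresses over time, so the tally is an upper bound at best.

from collections import defaultdict

def distinct_addresses_per_day(log_lines):
    """Map each day to the set of client addresses seen that day."""
    seen = defaultdict(set)
    for line in log_lines:
        # assumed layout: "<ip> - - [10/Jul/2007:13:55:36 +0000] ..."
        ip, rest = line.split(" ", 1)
        day = rest.split("[", 1)[1].split(":", 1)[0]
        seen[day].add(ip)
    return seen

# Two invented log lines standing in for millions of real ones.
sample = [
    '10.0.0.7 - - [10/Jul/2007:13:55:36 +0000] "GET /dists/feisty/Release HTTP/1.1" 200 1234',
    '10.0.0.9 - - [11/Jul/2007:08:01:12 +0000] "GET /dists/feisty/Release HTTP/1.1" 200 1234',
]
for day, addresses in distinct_addresses_per_day(sample).items():
    print(day, len(addresses))

If the same person fetches updates on both days from two different dynamically assigned addresses, this count sees two "users" where there is one.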
Last year, in an interview with Red Herring, Canonical's CEO Mark Shuttleworth commented on the activity on his company's repositories. At the time, at least 8 million distinct users or addresses running a particular version of his Linux distribution could be identified, only months after the release of that distribution, which most of us know as Ubuntu.
Regardless of the adoption rate of Linux on the desktop, Linux enjoys double-digit quarter-over-quarter growth on the server side, a trend that has sustained itself for several years. There are, however, great difficulties in tracking how widespread, and not just how profitable, Linux has become in the datacenter. Market figures regularly come from analysts, but these figures are based purely on sales; they gauge only revenue. They fail to account for the fact that Linux is free and becomes easier to set up each year. Many companies take the do-it-yourself route and build their own server farms. They do not require much assistance, so deployment can be completed without a Linux purchase, per se, ever being made. The true growth of Linux will therefore remain an enigma for quite some time to come.
At the end of the day, let's remember that Free software was not created to thrive on profits, and there is no marketing department to boast of growth. Yet whether we use a search engine, connect to a mail server, or acquire some snazzy gadget, Linux is likely to be there. The desktop, however, is perceived as the ultimate destination because it has the most visibility. Laptops and desktops are where Linux can demonstrate that it has arrived and is here to stay and thrive. The back room usually escapes people's attention, despite a gradual shift in paradigm toward remote services and thinner clients.
Counting the number of Linux users might always remain an impossibility. Should you mind?