Is the GNU/Linux desktop headed in the right direction? Recently, I have started to wonder.
Despite the emphasis that major distributions place upon usability, nobody seems to ask what definition of usability is being assumed, what kind of users that definition produces, or whether those users will be capable of reaching the free software goal of being able to control their own computing.
The conventional wisdom is that free software began by mostly ignoring usability issues. It was software designed by geeks and for geeks, and functionality was more important than ease of use.
Then, gradually, influenced by documents such as the GNOME Human Interface Guidelines and the freedesktop.org standards, the community became aware of the need to consider usability, and came to rival the standards of proprietary software.
Now, with KDE and GNOME taking the desktop in new directions, Ubuntu overhauling usability, and OpenOffice.org revamping its look and feel via Project Renaissance, free software is in the middle of yet another great leap forward in usability.
In this story, the conventional wisdom has a lot of truth. Most of what passed for graphical interfaces a decade ago is hopelessly cluttered and directionless by modern standards. Nor can anyone deny that usability in free software still needs improvement. However, what the story leaves out is that the definition of usability that everyone takes for granted threatens to leave the traditions of free software behind -- and, in doing so, to rob free software of its main strengths.
You rarely find the assumptions behind modern usability discussed except in passing. However, the most basic principle is that a graphical interface should provide the functionality that most users need most often. In other words, a graphical interface is usually assumed to leave out functionality, and to be designed primarily for beginners.
The trouble with this assumption -- like many others -- is that it easily becomes a self-fulfilling prophecy. Every interface designer I have met has told me that it is impossible to design a desktop application that includes all possible functionality or meets the needs of every possible user. Besides, they add, if you tried, users would suffer the anxiety of having too many choices. And so you get file managers, for instance, that offer less than a third of the functionality of basic commands like cp or mv.
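As a minimal illustration, here are a few cp and mv options from GNU coreutils that typical graphical file managers do not expose; the directory and file names are invented for the example:

```shell
# Set up a small directory tree to copy (names are made up for this sketch).
mkdir -p src/sub
echo data > src/sub/file.txt

# Archive copy: recurses and preserves timestamps, permissions, and links --
# most file managers offer only a plain recursive copy.
cp -a src copy1

# Copy only if the source is newer than the destination (here it is not,
# so nothing happens) -- rarely available in a graphical file manager.
cp -u src/sub/file.txt copy1/sub/

# Move, but keep a numbered backup of the overwritten file instead of
# silently clobbering it (GNU mv only).
mv --backup=numbered src/sub/file.txt copy1/sub/

ls copy1/sub   # file.txt plus the backup file.txt.~1~
```

None of this is exotic; it is simply functionality that the usual drag-and-drop interface leaves out.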
But is this limitation inevitable? Granted, the assumption can create highly efficient (if limited) interfaces like the OS X desktop. Admittedly, too, I have seen botched efforts to make fully functional desktop apps, such as one I saw some years ago whose designers imagined that dropping all the options onto tabs was all that was needed to provide a front end for Apache.
Yet some efforts to provide full functionality do exist. For example, I have seen a number of applications such as Firefox that tidy up functionality that is of interest only to advanced users into a separate window or tab. So clearly, adding at least some additional complexity is sometimes possible. Nor is complexity stinted in such areas as customization, where free software users insist upon every possible option.
However, exactly how much complexity is possible in an interface or how often it can be added remains largely unknown, because those who design interfaces rarely experiment to find out. Because of their basic assumption, they already expect such efforts to fail, so they never attempt them.
In much the same way, I find myself wondering whether new users should be the target for interfaces. In 2009, I suspect, the new user is as mythical as a unicorn. Is there anyone left in industrial countries who has a use for a computer who doesn't already have one? Does anyone approach a new application without some familiarity with the conventions of layout? While the effort to simplify as much as possible shouldn't be abandoned, I do wonder if usability assumptions are now consistently underestimating the adaptability of their audience.
Besides, even if intermediate and advanced users are not a majority, they are part of the audience as well, so at least some interface design should have them in mind. Too often, though, usability seems to ignore them entirely.
Desktop vs. command line
One reason why the basic design assumptions go largely unquestioned is probably the fact that usability is still a relatively new concern in free software. However, an even greater reason may be that operating systems like GNU/Linux already have a full-function interface designed for experts. It's called the command line.
Unfortunately, still another assumption is that the command line is outdated and unsuitable for the average user. In Windows, that is definitely the case. However, in GNU/Linux, the relationship between the desktop and the command line remains a close one. Often, desktop applications are built directly on top of commands. Beneath digiKam, for instance, you'll find gphoto2. Should you find -- as I did recently -- that digiKam is no longer working after an upgrade, you can open a terminal and use gphoto2 instead. Even in heavily graphical programs such as OpenOffice.org or Firefox, if you have a problem, the solution is often to add an option to the basic command behind the menu item or icon.
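A hedged sketch of the fallback described above: the gphoto2 flags mentioned are real, but since they only do useful work with a camera attached, this script merely checks whether the tool is available and points at the commands you would run.

```shell
# If a desktop front end such as digiKam stops working, the command it
# wraps is often still usable directly from a terminal.
if command -v gphoto2 >/dev/null 2>&1; then
    msg="gphoto2 is installed; try: gphoto2 --auto-detect (list cameras) then gphoto2 --get-all-files (download images)"
else
    msg="gphoto2 is not installed; install it to work with your camera from the command line"
fi
echo "$msg"
```

The point is not this particular tool but the pattern: the graphical layer can break or lag behind, while the command underneath keeps working.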
With this close connection, learning about the command line makes more sense in GNU/Linux than in many operating systems. You do not need to be full of command line macho, believing that the only real computing goes on at a command prompt, to appreciate the connection. While the command line is in many ways the opposite of the desktop, in that it is thorough and encourages the gaining of expertise, the two interfaces are complementary. For simple, routine tasks, the desktop is often preferable, especially if you are viewing graphics. If you want to administer your system or fine-tune performance, then the command line is the interface you need.
What bothers me about free software interface design is that, like similar work on proprietary operating systems, it encourages the use of the desktop at the expense of the command line. Few desktop applications bother to indicate the commands they front for, or to provide the same range of functionality as those commands. They keep users ignorant and unable to advance beyond a certain level of knowledge, even if they want to.
By contrast, a command may take longer to learn, and be more intimidating at first, but at least it provides no greater barriers to learning. The command line is all about hands-on exploration -- after all, the main reason that most GNU/Linux configuration files are in plain text is so that they can be edited by command line editors like vim or emacs.
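To see why plain-text configuration invites this kind of hands-on exploration, here is a minimal sketch using an invented demo.conf; real files such as those under /etc work the same way precisely because they, too, are just text:

```shell
# Create a toy configuration file (the name and settings are invented
# for this example).
cat > demo.conf <<'EOF'
# demo settings
color=blue
verbose=no
EOF

# Inspect a single setting without opening an editor.
grep '^verbose' demo.conf

# Flip it in place (GNU sed; on BSD/macOS use: sed -i '' ...).
sed -i 's/^verbose=no/verbose=yes/' demo.conf

grep '^verbose' demo.conf
```

The same grep-then-edit habit works on any text configuration, whether in vim, emacs, or a one-line sed command -- no special tool stands between you and your system.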
In borrowing the assumptions of other operating systems, interface design on GNU/Linux may be replacing the self-sufficiency traditional to Unix and GNU/Linux with the learned helplessness of those other systems. Although those who are just migrating to GNU/Linux may not know what is happening, they are missing a tradition that could empower them.
Missing the main chance
From a philosophical perspective, this seems a missed opportunity. If you profess any ethical standards at all, informed consumers are always preferable to ignorant ones.
Not only that, but if all that GNU/Linux can offer is more of the same, then users have less incentive to switch to it. Even though desktops like GNOME and KDE are now the equal of Windows, the point is that Windows got to market first and is more familiar.
By contrast, by making the connections between the desktop and the command line clearer, or by attempting to reproduce the thoroughness and the accessibility of the command line in graphical form, GNU/Linux applications could offer a clear alternative -- nothing less, in fact, than an entirely different relation between user and computer than the one assumed by Windows or OS X.
This alternative is especially important if you believe in the goals of free software. If your goal is to allow users complete control over their computing, then you need to encourage them to explore and understand their systems. Hiding complexity may help less experienced users get up and running, but it also tends to stall their knowledge at a basic level. Certainly, the habits they learn will leave them badly equipped to exercise any control.
Don't get me wrong: The interface improvements of the last decade were necessary for GNU/Linux, and I am not suggesting they were irrelevant or useless. But I also worry that they risk moving too far away from the root assumptions of the operating system, and may be borrowing opposing and incompatible ones. In other words, as welcome as some of the improvements have been, I suspect that we could do even better.