Why the Linux Desktop Needs More Usability Testing

The open source community often neglects usability testing—and projects can suffer as a result.
(Page 1 of 2)

Linux desktop projects often overlook formal usability testing. Attempts to introduce it are generally short-lived. After a few experiments, most developers fall back on a series of informal alternatives.

Yet as Linux becomes increasingly popular, the need for usability testing becomes more pressing. Implemented properly, such testing might have prevented or mitigated some of the upheavals of the last few years.

Why Is Usability Testing Neglected?

The relative neglect of usability testing is hardly surprising. Of all the parts of the development process, it is probably the one most often skipped or short-changed throughout IT, especially in smaller companies.

Compared to code development, usability testing is considered unglamorous everywhere in the software industry. (One indication is the preponderance of women among testers compared to coders; even today, women enter low-prestige IT jobs more easily than elite ones.)

Moreover, until the last few years, the distinction between developers and users was blurred in open source. For many years, developers were users, so that Eric Raymond could legitimately describe the start of a project as "a developer scratching his own itch."

Add in what Raymond describes as a "release early, release often" philosophy, and the lack of usability barely mattered. If a problem emerged, it could easily be fixed in the next release. Under these circumstances, Raymond's other admonitions to listen to users or customers have always been given less priority than his more colorful statements.

Another problem is that project members are often scattered around the world, meeting at most only a few times a year. As a result, arranging any testing that involves directly observing users is next to impossible on a large scale. Individual developers, no matter how much they support usability testing, are simply unlikely to have the resources to conduct testing with more than a few people.

For all these reasons, usability testing has seldom been a priority in free software. Whenever one of the increasing number of non-developers using an application does register a complaint, the replies still frequently include a sarcastic invitation to contribute the necessary code themselves.

In other words, the free desktop's user base has grown far beyond the original practices, but communication between developers and users has not grown with it. The result is a gap that systematic usability testing could bridge directly.

Attempts at Testing

Now and then, projects do recognize the need for usability testing. For example, in 2006, Fedora announced plans for a usability sub-project. As I recall, there was even some ambitious talk about remote testing in which volunteers would be given testing scripts and log on to a testbed server.

But the project was never enthusiastically endorsed. Today, its home page shows a last modification date of 2010. Discussion on its desktop mailing list is sparse and focuses on specific features more than major usability issues.

Similarly, the KDE Usability Project posted usability testing resources complete with templates and samples. A few KDE applications even posted profiles to help with testing. Yet, although the material looks professional enough, as I write, the page was last modified eight months ago. If anyone has actually used the resources on it, they have yet to publish their findings.

While some exceptions may exist, large-scale usability testing is so rare that the last study connected to a major project was conducted by Calum Benson of Sun Microsystems on GNOME in 2001. This detailed study made headlines in its day, and its results were given permanent if abstracted form in the GNOME Human Interface Guidelines. But, so far as I'm aware, nothing comparable has been done since in any project.

Instead, so far as usability testing has been attempted at all, it has generally been on a small scale, with a variety of stopgap measures.

Alternatives to Formal Testing

Much of the time, changes to interfaces are due to an individual's interest—in other words, a continuation of the "scratch your own itch" approach. For example, although recent work on Plasma Active, KDE's tablet interface, appears to have been a collaboration between developers and interface designers, many changes in KDE are due to individual initiative, such as Aaron Seigo's modifications of KRunner.

Such "eating of your own dog food" can sometimes be useful, especially when working on individual applications. The difficulty is that senior developers are not typical users today because they have a familiarity with the software and a sophistication that casual users lack.

That may have been a major source of GNOME 3's poor reception. According to Vincent Untz's blog, the basic design elements of GNOME 3 were determined by leading developers at the User Experience Hackfest in 2008.

Untz's description does suggest that those involved made an honest effort to imagine new users' experience, criticizing several aspects of the design as "too hard." All the same, I suspect that GNOME 3's reception would have been radically different had its designers studied other users' reactions instead of merely voicing their own. As things are, the best parts of GNOME 3 tend to be specific features, such as the notifications, rather than the overall design.
