FAQ: Why user levels are bad usability
This page exists to help people on the KDE and GNOME usability lists, which continually attempt to revive old and archived debates about user levels, to understand why this topic is unwelcome.
Since, it seems, netiquette is unfashionable nowadays, and people have forgotten the simple practice of reading the mailing list archives before posting, this page is here to clarify this particular "user levels" issue.
So, for:
- mailing list moderators: keep a template in your e-mail application, pointing to this FAQ.
- users: before posting anything more about "user levels", read this FAQ in its entirety, then read your mailing list's archives.
And, without further ado, here's why.
What are "user levels"?
At the core of the "user levels" idea is the notion that the computer should adjust the visibility, position or accessibility of GUI options and controls (especially configuration settings) based on a predefined set of "user skill levels", which the computer must ask the user to pick from.
In other words: if the user is a beginner, show him options a beginner would understand. If he's an advanced user, show him those options and add advanced options.
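To make the mechanism concrete, here is a minimal sketch of what such filtering typically boils down to (the level names, option tags and function are hypothetical, not taken from any real toolkit):

    from enum import IntEnum

    class UserLevel(IntEnum):
        BEGINNER = 1
        ADVANCED = 2
        EXPERT = 3

    # Every option is tagged with the minimum skill level allowed to see it.
    OPTIONS = [
        ("Font size", UserLevel.BEGINNER),
        ("Proxy settings", UserLevel.ADVANCED),
        ("Cache eviction policy", UserLevel.EXPERT),
    ]

    def visible_options(level: UserLevel) -> list[str]:
        """Return only the options the chosen skill level is allowed to see."""
        return [name for name, minimum in OPTIONS if level >= minimum]

    print(visible_options(UserLevel.BEGINNER))  # ['Font size']
    print(visible_options(UserLevel.EXPERT))    # all three options

Every problem described below follows from the one question this scheme forces the program to ask first: "what is your skill level?"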
Why is it wrong?
On the surface, the idea is extremely tempting. It seems to solve one of the most pressing conflicts for open source application developers: whether to add or remove configuration options. If it worked, it would be fairly simple to hide most configuration options from beginners while still catering to the "advanced user" population. The best of both worlds.
But user levels don't work because:
- Users "don't know" (can't objectively assess) their own skill level. Since they can't tell which level they should choose, they end up picking one based on a preconceived notion of themselves, one that has more to do with their self-esteem ("I suck at computers...") and beliefs ("I'm an überhax0r") than with their true skill level.
- One person's medicine is another person's poison. Your "advanced" option is your neighbor's "piece of cake" setting.
- Users don't like to be discriminated against, or classified in groups "a priori". Especially by computer programs.
- No developer has, to date, devised a set of categories or skill levels to which the majority of people would agree. It's like asking the user if he's black or white or yellow. 90% of the time, users will be a bit of black, a bit of white and a bit of yellow, all at the same time. "Common sense" doesn't apply.
- No developer has come up with an effective way of deciding that "X option or action belongs in the Y skill level". Because of this, some users "lose" access to options that are extremely relevant to their particular use cases, while others become overwhelmed with options that are unnecessary for them.
- When it's bug report time, this kind of setting complicates the debugging workflow considerably. What we need, as developers, is to further streamline the debugging process (including the stages that involve user communication and feedback).
But what if we hid some options from the user interface and showed them later on?
Won't work. Whether you wait one hour or one week before revealing the hidden options, the interruption gets in the way of productivity. Adding controls or options, or shuffling their positions, only adds to the cognitive load of learning to use your user interface.
Think of your application as a lawyer. Your users' business is to use your app and "dispatch it" as quickly as possible, because their time is their money. Prompting a person to learn new things may be fair game when he/she is experimenting with sex, but not with a computer application. Users don't want to spend time relearning and rediscovering things in your application, especially things you didn't show them in the first place.
Then, what do you propose?
In (more or less) order of diminishing returns:
- Follow common guidelines. Read and memorize your favorite platform's user interface guidelines, and follow them as closely as possible in your application. Worst case scenario, they'll prevent "spurious" bug reports from "annoying" users. Best case scenario, your app will actually benefit from tried-and-true ideas, embodied in your platform's UI and embedded in your users' brains. And you'll learn a thing or two about usability.
- Autoconfig. Make your application deduce its configuration instead of relying on the user to provide that information. Go the extra mile and write the "detection" code to enable this (see the sketch after this list).
- Simplify. Dump unnecessary configuration options.
- Copy ideas. Steal ideas used in other popular (and similar) applications. Pay attention to the details. Often, the small details embedded in a user interface feature (very consistent behavior, pixel-perfect accuracy) win unsung praise from users and keep them loyal.
- Refactor. Run a quick survey of your user interface among your closest friends: ask them whether they understand it and, if they don't, what they don't understand. Then propose new UI layouts.
- Divide and conquer. If your application does too much in a single user interface, split your UI into more manageable parts. Whether you build two apps that communicate via named pipes or on-disk databases, or a single app with two windows, is up to you. Just be consistent and stick to your strategy.
- Make your app work using "contexts" or "activity centers". That way, you can show and keep tightly related options, functionality and UI controls together.
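As a rough illustration of the "Autoconfig" item above, here is a minimal sketch, in Python, of deducing settings from the environment instead of prompting for them. The particular settings probed (UI language, download directory) and the fallback values are illustrative assumptions, not a prescription:

    import locale
    import os

    def detect_language() -> str:
        """Deduce the UI language from the environment instead of asking the user."""
        for var in ("LC_ALL", "LC_MESSAGES", "LANG"):
            value = os.environ.get(var)
            if value and value not in ("C", "POSIX"):
                return value.split(".")[0]      # e.g. "de_DE.UTF-8" -> "de_DE"
        guessed = locale.getdefaultlocale()[0]  # may be None on some systems
        return guessed or "en_US"               # last-resort default, never a dialog

    def detect_download_dir() -> str:
        """Prefer an existing ~/Downloads directory, fall back to the home directory."""
        candidate = os.path.join(os.path.expanduser("~"), "Downloads")
        return candidate if os.path.isdir(candidate) else os.path.expanduser("~")

    config = {"language": detect_language(), "download_dir": detect_download_dir()}
    print(config)

The shape is what matters: probe, pick a sensible default, and never turn a missing answer into a question for the user.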
Conclusion
Having said this, feel free to start a flamefest on this post and expose your particular view using the comment box below. I'll try to answer all questions and ideas, and incorporate the answers into this FAQ. But I'll make no promises!