Public notes for CS6750 - HCI Spring 2022 at Georgia Tech

Public webpage for sharing information about Dr. Joyner's CS6750 - Human Computer Interaction course in Spring 2022.


2.5 Design Principles and Heuristics

2.5.1 - Introduction to Design Principles

Over the many years of HCI development, experts have come up with a wide variety of principles and heuristics for designing good interfaces. None of these are hard-and-fast rules like the law of gravity, but they're useful guidelines to keep in mind when designing our interfaces. Likely the most popular and influential of these is Don Norman's six principles of design. Larry Constantine and Lucy Lockwood have a similar set of principles of user interface design, with some overlaps but also some distinctions. Jakob Nielsen has a set of ten heuristics for user interface design that can be used for both design and evaluation. And while those are all concerned with general usability, there also exists a set of seven principles called the Principles of Universal Design. These are similarly concerned with usability, but more specifically with usability for the greatest number of people. Putting these four sets together, we'll talk about 15 unique principles for interaction design.

2.5.2 - The Sets

In this lesson, we’re going to talk about four sets of design principles. These aren’t the only four sets, but they are the ones that I see referenced most often, and we’ll talk about what some of the others might be at the end of the lesson. In his book “The Design of Everyday Things”, Don Norman outlined his famous six design principles. This is probably the most famous set of design principles out there. The more recent editions of the book actually have a seventh principle, but that seventh principle is the subject of an entire lesson of this course. Jakob Nielsen outlines ten design heuristics in his book “Usability Inspection Methods”. Many of Norman’s principles are similar to Nielsen’s, but there are some unique ones as well. What’s interesting is that Norman and Nielsen went into business together and formed the Nielsen Norman Group, which provides user experience training, consulting, and HCI research. In their book “Software for Use”, Larry Constantine and Lucy Lockwood outline an additional six principles. Again, many of them overlap with the other two sets, but some of them are unique. Finally, Ronald Mace of North Carolina State University proposed seven principles of universal design. The Centre for Excellence in Universal Design, whose mobile site is presented here, has continued research into this area. These are a little different from the heuristics and principles presented in the other three sets. While those three are most concerned with usability in general, universal design is specifically concerned with designing interfaces and devices that can be used by everyone, regardless of age, disability, and so on.

To make this lesson a little easier to follow, I’ve tried to merge these four sets of principles into one larger set, capturing the overlap between many of them. In this lesson, we’ll go through these 15 principles. These principles are intended to distill out some of the overlap between those different sets. This table shows those 15 principles, my names for each of them, and which sets they come from. Note that my 15 principles are just an abstraction or summary of these sets of principles, and you should make sure to understand the sets themselves as well. There are some subtle differences between the principles I’ve grouped together from different sets, and we’ll talk about those as we go forward. Again, note that these aren’t the only four sets of design principles out there. At the end of the lesson, we’ll chat about a few more, and we’ll also mention when others apply within this lesson as well.

2.5.3 - Discoverability

2.5.4 - Design Challenge: Discovering Gestures Question

Discoverability is one of the challenges of designing gesture-based interfaces. To understand this, let’s watch Morgan do some ordinary actions with her phone. We just saw Morgan do four things with the phone: reject a call, take a screenshot, take a selfie, and make a phone call. For each of those, this phone actually has a corresponding gesture or voice command that would have made it easier. She could have just turned the phone over to reject the call, or said “Shoot” to take the selfie. The problem is that these are not discoverable. Having a menu of voice commands kind of defeats the purpose of saving screen real estate and gaining simplicity through gestures and voice commands. So, brainstorm a bit: how would you make these gesture commands more discoverable?

2.5.4 - Design Challenge: Discovering Gestures Solution

There are a lot of ways we might do this, from giving her a tutorial in advance to giving her some tutoring in context. For example, we might use the title bar of the phone to briefly flash a message letting the user know when something they’ve just done could have been triggered by a gesture or a voice command. That way, we’re delivering instruction in the context of the activity. We could also keep a log of those hints so that the user can check back at their convenience and see the tasks they could have performed in other ways.
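The idea of in-context hints can be sketched in a few lines. This is a minimal illustration, not any real phone API: the action names, messages, and `GestureHinter` class are all invented. Whenever the user finishes an action the long way, we check whether a shortcut exists, return a hint to flash briefly, and log it for later review.

```python
# Hypothetical mapping from manual actions to the gesture/voice shortcut
# that could have performed them (illustrative strings, not a real API).
GESTURE_SHORTCUTS = {
    "reject_call": "Turn the phone face down to reject a call.",
    "take_selfie": 'Say "Shoot" to take a selfie.',
}

class GestureHinter:
    def __init__(self):
        self.hint_log = []  # lets the user review missed shortcuts later

    def on_manual_action(self, action):
        """Called whenever the user finishes an action the long way."""
        hint = GESTURE_SHORTCUTS.get(action)
        if hint is None:
            return None  # no shortcut exists, so stay quiet
        self.hint_log.append(hint)  # keep for review at the user's convenience
        return hint  # caller flashes this briefly in the title bar

hinter = GestureHinter()
hinter.on_manual_action("reject_call")   # hint flashed and logged
hinter.on_manual_action("compose_email") # no gesture exists: nothing shown
print(hinter.hint_log)
```

The key design choice is that instruction is delivered only when it is relevant to what the user just did, which keeps the hints tied to the context of the activity.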

2.5.5 - Simplicity

2.5.6 - Affordances

2.5.7 - Affordance Vocabulary

It’s important to note that the language with which we talk about affordances is famously somewhat imprecise. Norman’s technical definitions of affordances are a little different than what we’ve used here. Affordances, to Norman, are actually inherent properties of the device. For example, a door bar like this one has the inherent property that the handle moves into the crossbar and opens a latch. That’s the affordance; it’s something the device inherently does. A perceived affordance is a property attributed to the object by a human observer. This can be a subtle difference. Here, the perceived affordance would be pushability. Pushing, though, is a human behavior, so pushability must be a perceived affordance because it relies on someone to do the pushing. What’s important is that a perceived affordance can actually be inaccurate. Norman famously complains when doors that are actually meant to be pushed have a handle like this, which looks like it’s supposed to be pulled. This is a place where a perceived affordance and an actual affordance are in conflict. The user perceives that the door is meant to be pulled, but the actual affordance is for it to be pushed, or to be more precise, the actual affordance is that the door opens inward based on these hinges. A signifier, then, is anything that helps the perceived affordance match the actual affordance. For example, some doors like this will have a flat bar on the part that’s supposed to be pushed. That signifies to a user that this is a place where they should test out the interaction that they perceive to be possible. On the door, we can put a sign that just says “Push”. That would be a signifier that tries to alleviate the conflict between the actual affordance and the perceived affordance. Now, based on these definitions, we can’t add affordances. Affordances are inherent in our system. Instead, we can add signifiers that help the perception of affordances match the actual affordances that are there.

With these technical definitions of these terms, saying “I added an affordance to the interface” is like saying “I added tastiness to that dish” or “I added beauty to that painting”. Affordances, tastiness, beauty: these are all things that arise as a result of adding signifiers, or oregano, or some pretty shade of blue. But in practice, these distinctions around this vocabulary are often ignored. It’s not uncommon to hear people say, “I added an affordance here so the user knows what they’re supposed to do.” To me, there’s really no harm in that. The distinctions between these terms are very important when developing a theory of HCI, but when you’re doing day-to-day design, we usually know what we’re talking about when we misuse these terms. So, I don’t think there’s any harm in being cavalier about how we use these terms, but it is important to know this distinction in case anyone ever brings up the difference.

2.5.8 - Mapping

2.5.9 - Design Challenge: Mapping and Switches Question

2.5.9 - Design Challenge: Mapping and Switches Solution

2.5.10 - Perceptibility

2.5.11 - Consistency

2.5.12 - Consistency: The Curious Case of Ctrl+Y

One of my favorite examples of how consistency matters comes from Microsoft’s Visual Studio development environment. To be clear, I adore Visual Studio, so I’m not just piling onto it. As you can see here, in most interfaces, Ctrl+Y is the redo hotkey. If you hit undo one too many times, you can press Ctrl+Y to redo the last undone action. But in Visual Studio, by default it’s Shift+Alt+Backspace. What? And what’s worse is that Ctrl+Y is the delete-line function, which is a function I had never even heard of before Visual Studio. So, if you’re pressing Ctrl+Z a bunch of times to rewind the changes you’ve made lately, and then you press Ctrl+Y out of habit because it’s what every other interface uses for redo, the effect is that you delete the current line instead of redoing anything. That actually makes a new change, which means you lose the entire tree of redoable actions: anything you’ve undone can now not be recovered. It’s infuriating, and yet it isn’t without its reasons, and the reason is consistency. Ctrl+Y was the hotkey for the delete-line function in WordStar, one of the very first word processors, before Ctrl+Y was the hotkey for the more general redo function; there wasn’t even a redo function back then. I’ve heard that Y in this context stood for “yank”, but I don’t know how true that is. But Ctrl+Y had been used to delete a line from WordStar all the way through Visual Basic 6, which was the predecessor to Visual Studio. So, in designing Visual Studio, Microsoft had a choice: be consistent with the convention from WordStar and Visual Basic 6, or be consistent with the convention they were using in their other interfaces. They chose to be consistent with the predecessors to Visual Studio, and they’ve stayed consistent with that ever since. So, in trying to maintain the consistency principle in one way, they actually violated it in another way.

So, if you try to leverage the consistency principle, you’re going to encounter some challenges. There may be multiple conflicting things with which you want to be consistent, and there may be questions about whether a certain change is worth dropping consistency for. These are things to test with users, which we talk about in the other unit of this course.
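One common way interfaces cope with conflicting conventions is user-selectable keybinding profiles. The sketch below is purely illustrative (the profile names, commands, and `resolve` function are invented, not Visual Studio's actual configuration system); it just shows how the same keystroke can legitimately mean different things depending on which convention the user expects.

```python
# Two hypothetical keybinding profiles, each consistent with a different
# convention. "legacy" follows WordStar / Visual Basic 6; "common" follows
# the convention most other interfaces use for redo.
KEYBINDING_PROFILES = {
    "legacy": {"Ctrl+Y": "delete_line", "Shift+Alt+Backspace": "redo"},
    "common": {"Ctrl+Y": "redo", "Ctrl+Shift+Z": "redo"},
}

def resolve(profile, keystroke):
    """Map a keystroke to a command under the chosen profile."""
    return KEYBINDING_PROFILES[profile].get(keystroke, "unbound")

# The same keystroke triggers a different command under each convention:
print(resolve("legacy", "Ctrl+Y"))  # the WordStar-lineage behavior
print(resolve("common", "Ctrl+Y"))  # the behavior most users expect
```

Offering profiles doesn't resolve the tension, but it lets each user be consistent with whichever convention they already carry from other tools.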

2.5.13 - Flexibility

2.5.14 - Equity

2.5.15 - Ease and Comfort

2.5.16 - Structure

2.5.17 - Constraints

2.5.18 - Norman's Four Types of Constraints

Norman takes this a step further, though, when he breaks constraints down into four sub-categories. These aren’t just about preventing wrong input; they’re also about ensuring correct input. They’re about making sure the user knows what to do next.

Physical constraints are those that literally, physically prevent you from performing the wrong action. A three-prong plug, for example, can only physically be inserted one way, which prevents mistakes. USB sticks, similarly, can only be physically inserted one way, but the constraint doesn’t arise until you’ve already tried to do it incorrectly: you can look at a wall outlet and see whether you’re inserting the plug correctly, but it’s harder to look at a USB port and know whether you’re trying to insert the stick the right way.

A second kind is a cultural constraint. These are the rules that are generally followed by different societies, like facing forward on escalators or forming a line while waiting. In designing, we might rely on these, but we should be careful of intercultural differences.

A third kind of constraint is a semantic constraint. These are constraints that are inherent to the meaning of a situation; they’re similar to affordances in that regard. For example, the purpose of a rearview mirror is to see behind you, so the mirror must reflect from behind; it’s inherent to the idea of a rearview mirror that it should reflect in a certain way. In the future, that meaning might change: autonomous vehicles might not need mirrors for passengers, so the semantic constraints of today might be gone tomorrow.

And finally, the fourth kind of constraint is a logical constraint. Logical constraints are things that are self-evident based on the situation at hand, not just based on the design of something, as with a semantic constraint. For example, imagine building some furniture. When you reach the end, there’s only one hole left and only one screw. Logically, the one screw left is constrained to go in the one remaining hole. That’s a logical constraint.

2.5.19 - Reflections: Constraints Question

A lot of the principles we talk about are cases where you might never even notice when they’ve been done well. They’re principles of invisible design, where succeeding allows the user to focus on the underlying task. But constraints are different. Constraints actively stand in the user’s way, and that means they’re more visible. That’s often a bad thing, but in the case of constraints it serves the greater good: constraints might prevent users from entering invalid input or force users to adopt certain safeguards. So of all the principles we’ve discussed, this might be the one you’ve noticed most. Take a second and think: can you think of any times you’ve encountered interfaces that had constraints in them?

2.5.19 - Reflections: Constraints Solution

I have kind of an interesting example of this. I can’t demonstrate it well because the car has to be in motion, but on my Leaf there’s an option screen, and it lets you change the time and the date, and some other options on the car. And you can use that option screen until the car starts moving. But at that point, the menu blocks you from using it, saying you can only use it when the car is at rest. That’s for safety reasons. They don’t want people fiddling with the option screen while driving. What makes it interesting, though, is it’s a constraint that isn’t in the service of usability, it’s in the service of safety. The car is made less usable to make it more safe.
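A constraint like the Leaf's can be expressed very simply in code. This is a minimal sketch under invented assumptions: the `SettingsMenu` class, its fields, and the messages are hypothetical, not the car's actual software. The point is just that the interface checks a condition outside the user's control and refuses the interaction entirely, trading usability for safety.

```python
class SettingsMenu:
    """Hypothetical in-car options screen with a safety constraint."""

    def __init__(self):
        self.speed_mph = 0.0  # assumed to come from the car's sensors
        self.clock = "12:00"

    def set_clock(self, new_time):
        # The constraint: block the interaction while the car is moving.
        # The car is made less usable in order to be more safe.
        if self.speed_mph > 0:
            return "Settings are available only while the car is at rest."
        self.clock = new_time
        return "Clock updated."

menu = SettingsMenu()
menu.speed_mph = 30.0
print(menu.set_clock("14:30"))  # blocked: the car is in motion
menu.speed_mph = 0.0
print(menu.set_clock("14:30"))  # allowed: the car is at rest
```

Note that the blocked path changes nothing and explains why, which is what makes this a constraint rather than a silent failure.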

2.5.20 - Tolerance

2.5.21 - Feedback

2.5.22 - Documentation

Finally, Nielsen has one last heuristic regarding user error: documentation. I put this last for a reason: one goal of usable design is to avoid the need for documentation altogether. We want users to just interact naturally with our interfaces. In modern design, we probably can’t rely on users reading our documentation at all unless they’re required to use our interface. And Nielsen generally agrees. He writes that even though it’s better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large. I feel modern design as a whole has made great strides in this direction over the past several years. Nowadays, when you use documentation online or wherever you might find it, it’s most often framed in terms of tasks: you input what you want to do, and it gives you a concrete list of steps to carry it out. That’s a refreshing change compared to older documentation, which was more dedicated to just listing out everything a given interface could do without any consideration of what you were actually trying to do.
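Task-oriented documentation can be thought of as help entries indexed by what the user wants to do, each returning concrete steps. The sketch below is purely illustrative: the tasks, steps, and `help_for` function are invented, and the "search" is just a case-insensitive exact lookup rather than real full-text search.

```python
# Help entries keyed by the user's goal, not by feature name.
# Each entry is a short, concrete list of steps (invented examples).
HELP_BY_TASK = {
    "take a screenshot": [
        "Hold the power button.",
        "Tap 'Screenshot' in the menu that appears.",
    ],
    "reject a call": [
        "Turn the phone face down while it is ringing.",
    ],
}

def help_for(task_query):
    """Return concrete steps for a task, matched case-insensitively."""
    steps = HELP_BY_TASK.get(task_query.strip().lower())
    if steps is None:
        return ["No help entry found for that task."]
    return steps

for step in help_for("Take a screenshot"):
    print(step)
```

Contrast this with feature-oriented documentation, which would instead list every button and menu item and leave the user to work out which ones accomplish their goal.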

2.5.23 - Exploring HCI: Design Principles and Heuristics

We’ve talked about a bunch of different design principles in this lesson. How these design principles apply to your design tasks will differ significantly based on what area you’re working in. In gestural interfaces, for example, constraints present a big challenge because we can’t physically constrain our users’ movement; we have to give them feedback or feedforward in different ways. If we’re working in particularly complex domains, we have to think hard about what simplicity means: if the underlying task is complex, how simple can and should the interface actually be? We might find ourselves in domains with enormous concerns regarding universal design: if you create something that a person with a disability can’t use, you risk big problems, both ethically and legally. So, take a few moments and reflect on how these design principles apply to the area of HCI that you’ve chosen to investigate.

2.5.24 - Other Sets of Principles

So, I’ve attempted to distill the 29 combined principles from Norman, Nielsen, Constantine, Lockwood, and the Center for Universal Design into just these 15. Here you can see where each of these principles comes from. I do recommend reading the original four lists to pick up on some of the more subtle differences between the principles that I’ve grouped together, especially perceptibility, tolerance, and feedback. Note also that in more recent editions, Norman has one more principle, conceptual models, and that’s actually the subject of an entire lesson in this course. These also certainly aren’t the only four sets of design principles; there are several more. For example, Dix, Finlay, Abowd, and Beale proposed three categories of principles: learnability, for how easily a new user can grasp an interface; flexibility, for how many ways an interface can be used; and robustness, for how well an interface gives feedback and recovers from errors. We talked about their learnability principles when we discussed mental models. Jill Gerhardt-Powals has a list of principles for cognitive engineering, aimed especially at reducing cognitive load. Her list is particularly useful in applications for data processing and visualization. In “The Humane Interface”, Jef Raskin outlines some additional, more revolutionary design rules. I wouldn’t necessarily advocate following them, but they’re interesting for seeing a very different approach to things. In “Computer Graphics: Principles and Practice”, Jim Foley and others give some principles that apply specifically to 2D and 3D computer graphics. Finally, Susan Weinschenk and Dean Barker have a set of guidelines that provide an even more holistic view of interface design, including things like linguistic and cultural sensitivity, tempo and pace, and domain clarity. Even these are only some of the additional lists. There are many more that I encourage you to look into. We’ll provide some in the notes.

2.5.25 - Conclusion to Design Principles

In this lesson, I’ve tried to take the various lists of usability guidelines from different sources and distill them down to a list you can really work with. We combined the lists from Don Norman, Jakob Nielsen, Larry Constantine, Lucy Lockwood, and the Center for Universal Design into 15 principles. Now remember, these are just guidelines, principles, and heuristics; none of them are unbreakable rules. You’ll often find yourself wrestling with the tensions between multiple principles. There will be something cool you’ll want to implement, but only the most expert users will be able to understand it, or there might be some new interaction method you want to test, but you aren’t sure how to make it visible or learnable to the user. These principles are things you should think about when designing, but they only get you so far. You still need needfinding, prototyping, and evaluation to find out what actually works in reality.