3.3 Needfinding and Requirements Gathering

3.3.1 - Introduction to Needfinding

The first stage of the design lifecycle is needfinding, or requirements gathering. This is the stage where we go and try to find out what the user really needs. The biggest mistake a designer can make is jumping into the design process before understanding the user or the task. We want to develop a deep understanding of the task they're trying to accomplish and why. As we do this, it's important to come in with as few preconceived notions as possible. There's an old adage that says, "When all you have is a hammer, everything looks like a nail." This is similar: if you come in having already decided what approach you want to take, it's tempting to see the problem only in terms of the approach you've chosen. So, we're going to go through a process that attempts to avoid as many preconceived notions as possible. We're going to start by defining some general questions we want to answer throughout the data-gathering process about who the user is, what they're doing, and what they need. Then, we'll go through several methods of generating answers to those questions to gain a better understanding of the user. Then, we'll talk about how to formalize the data we gather into a shareable model of the task and a list of requirements for our ultimate interface. Note that each of these tools could merit a lesson of its own, so we'll try to provide some additional resources for reading further on the tools you choose to use.

3.3.2 - Data Inventory

Before we start our needfinding exercises, we want to enter with some understanding of the data we want to gather. These are the questions we ultimately want to answer. That's not to say we should be answering them every step of the way, but rather, we want to gather the data necessary to come to a conclusion at the end. Now, there are lots of inventories of the types of data you could gather, but here's one useful list:

  1. Who are the users? What are their ages, genders, and levels of expertise?
  2. Where are the users? What is their environment?
  3. What is the context of the task? What else is competing for users' attention?
  4. What are their goals? What are they trying to accomplish?
  5. What do they need right now? What physical objects, what information, and what collaborators do they need?
  6. What are their tasks? What are they doing physically, cognitively, and socially?
  7. What are the subtasks? How do they accomplish those subtasks?

When you're designing your needfinding methods, each thing you do should match up with one or more of these questions, as shown in the sketch below.
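
Since every needfinding activity you plan should map onto one or more of these inventory questions, it can help to track that mapping explicitly. Here's a minimal sketch of one way to do that in Python; this isn't from the lesson, and the activities and mappings are just illustrative placeholders.

```python
# Track which planned needfinding activities cover which inventory
# questions, so gaps are visible before we start gathering data.

INVENTORY = [
    "Who are the users?",
    "Where are the users?",
    "What is the context of the task?",
    "What are their goals?",
    "What do they need?",
    "What are their tasks?",
    "What are the subtasks?",
]

# Hypothetical plan: each activity maps to 0-based indices into INVENTORY.
plan = {
    "naturalistic observation at the park": {1, 2, 5},
    "interviews with recruited exercisers": {0, 3, 4},
    "survey of local running groups": {0, 3},
}

covered = set().union(*plan.values())
for i, question in enumerate(INVENTORY):
    status = "covered" if i in covered else "NOT covered"
    print(f"{i + 1}. {question} -- {status}")
# Here, "What are the subtasks?" comes back NOT covered, telling us the
# plan needs another method before we move on.
```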

3.3.3 - The Problem Space

In order to do some real needfinding, the first thing we need to do is identify the problem space. Where is the task occurring? What else is going on? What are the user's explicit and implicit needs? We'll talk about some of the methods for doing that in this lesson, but before we get into those methods, we want to understand the scope of the space we're looking at. So consider the difference between these two actions. Notice that in each of these, I'm doing the same task, turning off the alarm. But in the first scene, we're focusing very narrowly on the interaction between the user and the interface. In the latter, we're taking into consideration a broader view of the problem space. We could zoom out even further if we wanted to and ask questions about where and why people need alarm systems in the first place. That might lead us to designing things like security systems for dorm rooms or check-in systems for office buildings. As we're going about needfinding, we want to make sure we're taking the broad approach, understanding the entire problem space in which we're interested, not just focusing narrowly on the user's interaction with a particular interface. So in our exploration of methods for needfinding, we're going to start with the most authentic types of general observation, then move through progressively more targeted types of needfinding.

3.3.4 - User Types

Just as we want to get an idea of the physical space of the problem, we also want to get an idea of the space of the user. In other words, we want to understand who we're designing for. That comes up a lot when doing design alternatives and prototyping, but we also want to make sure to gather information about the full range of users for whom we're designing. So, let's take the example of designing an audiobook app for people who exercise. Am I interested in audiobooks just for kids, or for adults too? Am I interested in experts at exercising, or novices at it? Am I interested in experts at listening to audiobooks, or am I interested in novices at that as well? Those are pretty key questions. They differentiate whether I'm designing for business people who want to be able to exercise while reading, or exercisers who want to be able to do something else while exercising. The task is similar for both, but the audience, their motivations, and their needs are different. So, I need to identify these different types of users and perform needfinding exercises on all of them. One of the most successful products of all time succeeded because of its attention to user types. The Sony Walkman became such a dramatic success because Sony identified different needs for different types of people, designed the product in a way that met all those needs, and then marketed specifically to those different types of individuals. You can read more about that in a book called Doing Cultural Studies by Hugh Mackay and Linda Janes.

3.3.5 - 5 Tips: Avoiding Bias in Needfinding

  1. Number one, confirmation bias. Confirmation bias is the phenomenon where we see what we want to see. We enter with some preconceived ideas of what we'll see and we only notice the things that confirm our prior beliefs. Try to avoid this by specifically looking for signs that you're wrong, by testing your beliefs empirically, and by involving multiple individuals in the need-finding process.
  2. Number two, observer bias. When we're interacting directly with users, we may subconsciously bias them. We might be more helpful, for example, with users using the interface that we designed compared to the ones that other people designed. On surveys, we might accidentally phrase questions in a way that elicits the answers that we want to hear. Try to avoid this by separating experimenters with motives from the participants, by heavily scripting interactions with users, and by having someone else review your interview scripts and your surveys for leading questions.
  3. Number three, social desirability bias. People tend to be nice; people want to help. If you're testing an interface and the participants know that you're the designer of the interface, they'll want to say something nice about it to make you happy, but that gets in the way of getting good data. Try to avoid this by hiding what the socially desirable response is, by conducting more naturalistic observations, and by recording objective data.
  4. Number four, voluntary response bias. Studies have shown that people with stronger opinions are more likely to respond to optional surveys. You can see this often in online store reviews: the most common responses are often fives and ones. For us, that means if we perform quantitative analysis on surveys, we risk oversampling the more extreme views. Avoid this by limiting how much of the survey content is shown to users before they begin the survey, and by confirming any conclusions with other methods.
  5. Number five, recall bias. Studies have also shown that people aren't always very good at recalling what they did, what they thought, or how they felt during an activity they completed in the past. That can lead to misleading and incorrect data. Try to avoid this by studying tasks in context, by having users think out loud during activities, or by conducting interviews during the activity itself. Now, these biases can also be largely controlled by making sure to engage in multiple forms of needfinding.

3.3.6 - Naturalistic Observation

For certain tasks, a great way for us to understand the user's needs is to simply watch. A great way for me to start understanding what it's like to need an audiobook app for exercising is to go somewhere where people are exercising and just watch them exercise. This is called naturalistic observation: observing people in their natural context. I'm fortunate that I actually live across the street from a park, so I can sit here in my rocking chair on my porch and just watch people exercising. Now, I want to start with very specific observations and then generalize out to more abstract tasks. That way I'll avoid something called confirmation bias, which is basically when you see what you want to see. So what do I notice? Well, I notice that there are a lot of different types of exercisers. There are walkers, joggers, runners; I see some rollerbladers; I see some people doing yoga. I see a lot of people riding bikes, but the bikers seem to be broken into two different kinds of groups: I see a lot of people biking kind of leisurely, but I also see some bikers who are a little bit more strenuous about it. I'm also noticing that while joggers might be able to stop and start pretty quickly, that's harder for someone riding a bike. So I might want to avoid designs that force the user to pull out their phone a lot, because that's going to be dangerous and awkward for people riding bikes. Now, I also see people exercising in groups and people exercising individually. For those people exercising in groups, I don't actually know if they'd be interested in this; listening to something might kind of defeat the purpose of exercising together. So I'm going to have to note that down as a question I want to ask people later. I also see that many people tend to stretch before and after exercising, and I'm wondering if we can use that. Then we can have some kind of starting and ending sequence for this, so that a single session is kind of bookended by both stretching and interacting with our app. Note that by just watching people engage in the task of exercising, I'm gathering an enormous amount of information that might affect my design. But note also that while naturalistic observation is great, I'm limited ethically in what I can do. I can't interact with users directly, and I can't capture identifying information like videos and photographs; that's why I can't show you what I'm seeing out here. I'm also limited in that I don't know anything about what those users are thinking. I don't know if the people working out in groups would want to be able to listen to audiobooks while they're doing yoga. I don't know if Bluetooth headsets would be problematic for people riding bikes. I need to do a lot more before I get to the design phase, but this has been very informative in building my understanding of the problem space and giving me things I can ask people later on.

3.3.7 - 5 Tips: Naturalistic Observation

Here are five quick tips for doing naturalistic observation.

  1. Number one, take notes. Don't just sit around watching for a while. Be prepared to gather targeted information and observations about what you see.
  2. Number two, start specific, and then abstract. Write down the individual little actions you see people doing before trying to interpret or summarize them. If you jump to summarizing too soon, you risk tunnel vision.
  3. Number three, spread out your sessions. Rather than sitting somewhere for two hours one day and then moving on, try to observe in shorter 10-to-15-minute sessions several times. You may find different, interesting information, and your growing understanding and reflection on past exercises will inform your future sessions.
  4. Number four, find a partner. Observe together with someone else. Take your own notes, and then compare them later, so you can see if you both interpreted the same scenarios or actions in the same way.
  5. Number five, look for questions. Naturalistic observation should inform the questions you decide to ask participants in more targeted needfinding exercises. You don't need to have all the answers based on observation alone; what you need is questions to investigate further.

3.3.8 - Participant Observation

Sometimes it's not enough just to watch people engaging in a task. Sometimes we want to experience the task for ourselves, so that's what I'm going to do. I listen to audiobooks a lot. I don't really exercise. I should, but I don't. But I'm going to try this out. So I've got my audiobook queued up, and I've got my mic on so I can take notes as I run. I'm going to go on a jog and see what I discover. So, what did I learn? I learned that I'm out of shape, for one thing. I already knew that, but I learned it again. I also learned that this app would be very useful for anyone doing participant observation on exercisers, because I kept having to stop to record notes for myself, which I could have done with this app that I'm trying to design. But aside from that, I noticed that unexpected things happened pretty often that made me wish I could easily go back in my book. Or sometimes there were just things I wanted to hear again, but there was no easy way to do that. I also noticed that there's definitely a need there for me: I already plan to listen to everything again now that I'm home, because there were notes I wanted to take that I couldn't take easily. I also noticed that while sometimes I wanted to take notes, sometimes I just wanted to leave a bookmark. Now, we do have to be careful here, though. Remember, you are not your user. When you're working as a participant observer, you can gain useful insights, but you shouldn't overrepresent your own experiences. You should use this experience as a participant observer to inform what you ask users going forward.

3.3.9 - Hacks and Workarounds

Let's zoom in a little bit more on what the user actually does. We can do naturalistic and participant observation without having to interact much directly with our users, but now we need to get inside users' heads a little more to understand what they're thinking and doing. If you're trying to design interfaces to make existing tasks easier, one way to research that is to look at the hacks that users presently employ. How do they use interfaces in unintended ways to accomplish tasks, or how do they break out of the interface to accomplish a task that could have been accomplished within an interface? If you're designing for a task meant to be performed at a desk like this, looking at the person's workspace can be a great way of accomplishing this. So for example, I have six monitors around, and yet you still see Post-It notes on my computer. How could I possibly need more screen real estate? Well, Post-It notes can't be covered up. They don't take away from the existing screen real estate. They're visible even when the computer is off. So these Post-It notes here are a way to hack around the limitations of the computer interface. Now, when you're looking at hacks, it's important to not just look at what the user does and assume you understand why. Look at their workarounds and ask them why they're using them. Find out why they don't just use the interface features that are currently in place; you might find they just don't know about them, which presents a different kind of design challenge. Now, hacks are related to another method we can use to uncover user needs as well, which is looking at errors. Whereas hacks are ways users get around the interface to accomplish their tasks, errors are slips or mistakes that users frequently make while performing the task within the interface.

3.3.10 - Errors

When we're trying to make iterative improvements, one of the best places we can look is at the errors users make with the tools they currently have available. We can fix those errors, but we can also use them to understand a bit more about the user's mental model. So, here's a common example of an error for me, which is a slip. I keep my email open in the window on the left. I frequently forget that it's my active window while I'm trying to type into one of the other windows, and as a result, I'll hit a bunch of hotkeys in my email interface. I'll tag random emails, delete random emails. It's just kind of a mess. Now, this is a slip because there's nothing wrong with my mental model of how this works. I understand that there's an active window and that my keystrokes go to it. The problem is that I can easily forget which window is active. Mistakes, on the other hand, are places where my mental model is weak, and for me, a place where that happens is when I'm using my Mac. I'm used to a PC, where the maximize button always makes a window take up the entire screen. I've honestly never fully understood the maximize button on a Mac. Sometimes it seems to work like a PC maximize button. Sometimes it just expands the window a bit, but not to the entire screen. Sometimes it even enters a full-screen mode, hiding the menu bar at the top. I make mistakes there because I don't have a strong mental model of how it works. So, if you were watching me, you could see me making these errors, and you could ask me why I'm making them: why did I choose to do that, if that was my goal? That works for both discovering hacks and discovering errors. Watch people performing their tasks and ask them why certain things happen the way they do. Discovering hacks and errors involves a little bit more user interaction than just watching people out in the wild. So, how do we do that if we're doing something like creating an app that people are going to use in public? Well, maybe we actually go up to people we see exercising out in public. We could actually get approval to do that, but that's going to be a little bit awkward, and the data we get might not always be great. So, at this point, we might be better off recruiting people to come in and describe their experiences. People experience hacks and errors pretty consciously, so our best bet would likely be to target local exercise groups or local areas that exercisers frequent, and recruit people to come in for a short study. Or maybe we could recruit people to participate in a study during their normal exercise routine, taking notes on their experience or talking us through their thought process. We can take that to an extreme and adopt something like an apprenticeship approach, where we actually train to become users.

3.3.11 - Apprenticeship and Ethnography

If we're designing interfaces for particularly complex tasks, we might quickly find that just talking to our participants or observing them isn't enough to get the understanding we need to design those interfaces. For particularly complex tasks, we might need to become experts ourselves in order to design those programs. This is informed by the domain of ethnography, which recommends researching a community or a job or anything like that by becoming a participant in it. It goes beyond just participant observation, though; it's really about integrating oneself into that area, becoming an expert in it, and learning about it as you go. We bring in our expertise in design and HCI and use that, combined with the expertise that we develop, to create new interfaces for those people. So for example, our video editors here at Udacity have an incredibly complex workflow involving multiple programs, multiple workflows, lots of different people, and lots of moving parts. There's no possible way I could ever sit down with someone for just an hour and get a good enough picture of what they do to design a new interface that will help them out. I really need to train under them. I really need to become an expert at video editing and recording myself in order to help them out. It's kind of like an apprenticeship: they would apprentice me in their field, and I would use the knowledge I gain to design new interfaces to help them out. Ethnography and apprenticeship are huge fields of research, both on their own and as they apply to HCI, so if you're interested in using this approach, take a look at some of the resources we're providing.

3.3.12 - Interviews and Focus Groups

The most targeted way of gathering information from users, though, is just to talk to them. One way of doing that might be to bring them in for an interview. So I'm sitting here with Morgan, who's one of the potential users for our audiobook app targeted at exercisers. And we're especially interested in the kinds of tasks you perform while exercising and listening to audiobooks at the same time. So to start, what kind of challenges do you run into doing these two things at once? >> I think the biggest challenge is that it's hard to control it. I have headphones that have a button on them that can pause it and play. But if I want to do anything else, I have to stop, pull up my phone, and unlock it just to rewind. >> Yeah, that makes sense. Thank you. Interviews are useful ways to get at what the user is thinking when they're engaging in a task. You can do interviews one-on-one like this, or you can do interviews in a group with multiple users at the same time. Those tend to take on the form of focus groups, where a number of people are all talking together about some topic, and you can use them to tease out different kinds of information. Focus groups can elicit some information we don't get from this kind of interview, but they also present the risk of overly convergent thinking: people tend to agree with each other instead of bringing in new ideas. So they should really be used in conjunction with interviews, as well as other needfinding techniques.

3.3.13 - 5 Tips: Interviews

Here are five quick tips for conducting effective interviews. Now, we recommend reading more about this before you actually start interviewing people, but these should get you started.

  1. Number one, focus on the six W's when you're writing your questions: who, what, where, when, why, and how. Try to avoid questions that lend themselves to one-word or yes-or-no answers; those are better gathered via surveys. Use your interview time to ask open-ended, semi-structured questions.
  2. Number two, be aware of bias. Look at how you're phrasing your questions and interactions, and make sure you're not predisposing the participant to certain views. If you only smile when they say what you want them to say, for example, you risk biasing them to agree with you.
  3. Number three, listen. Many novice interviewers get caught up in having a conversation with the participant rather than gathering data from the participant. Make sure the participant is doing the vast majority of the talking, and don't reveal anything that might predispose them to agree with you.
  4. Number four, organize the interview. Make sure to have an introduction phase, some lighter questions to build trust, and a summary at the end, so the user understands the purpose of the questions. Be ready to push the interview forward or pull it back on track.
  5. Number five, practice. Practice your questions on friends, family, or research partners in advance. Rehearse the entire interview. Gathering subjects is tough, so when you actually have them, you want to make sure to get the most out of them.

3.3.14 - Exercise: Interviews Question

Interviews are likely to be one of the most common ways you gather data, so let's run through some good and bad interview questions real quick. Here are six questions. Which of these would make good interview questions? Mark the ones that would be good. For the ones that would be bad, briefly brainstorm a way to rewrite the question to make it better. You can go ahead and skip forward to the exercise if you don't want to listen to me read them out. Number one, do you exercise? Number two, how often do you exercise? Number three, do you exercise for health or for pleasure? Number four, what, if anything, do you listen to while exercising? Number five, what device do you use to listen to something while exercising? Number six, we're developing an app for listening to audiobooks while exercising. Would that be interesting to you?

3.3.14 - Exercise: Interviews Solution

Personally, I think three of these are good questions. "Do you exercise?" is not a great question, because it's a yes-or-no question. "How often do you exercise?" is actually a better way of asking the same thing. It subsumes all the answers to "Do you exercise?" but leaves more room for elaboration or detail. "Do you exercise for health or for pleasure?" is not a great question, because it presents the user with a dichotomy that might not be the way they actually think about the problem. Maybe there's some other reason they exercise. Maybe they do it to be social, for example. We want to leave open all the possibilities a user might have. So instead of asking, "Do you exercise for health or for pleasure?" we probably want to ask, "Why do you exercise?" The next two questions work pretty well, because they leave plenty of room for the participant to have a wide range of answers, and they're not leading them toward any particular answer. We're not asking, for example, what smartphone they use to listen to something, because maybe they don't use a smartphone. The sixth one is interesting. "We're developing an app for listening to audiobooks while exercising. Would that be interesting to you?" What's wrong with that question? When we say we're developing an app, we introduce something called social desirability bias. Because we're the ones developing the app, the user is going to feel some pressure to agree with us, to support our ideas. People like to support one another, and so even if they wouldn't be interested, they'll likely say that they would, because that's the supportive thing to say. No one wants to say, "Hey, great idea, David, but I would never use it." So we want to make sure to create no incentive for a user to withhold a complete, honest answer, and worrying about hurting our feelings is one reason why they wouldn't be totally honest. So we might reword this question to say, "Would you be interested in an app for listening to audiobooks while exercising?" Now granted, the fact that we're the ones asking will still probably tip off the user that we're thinking about moving in that direction, but at least it's going to be a little less leading. We're not tipping them off that we're already planning to do this; we're telling them that we might be thinking about doing it. And so if they don't think it's a good idea, they'll feel like they should tell us right now, to save us time down the road. By rephrasing the question that way, we hopefully avoid biasing the participant to just agree with us to be nice.

3.3.15 - Think-Aloud

Think-aloud protocols are similar to interviews in that we're asking users to talk about their perceptions of the task, but with think-aloud, we're asking them to do so in the context of the task. So instead of bringing Morgan in to answer some questions about listening to audiobooks while exercising, I'll ask her to actually think out loud while listening to audiobooks and exercising. If this were a different task, like something on a computer, I could have her just come into my lab and work on it. But since this is out in the world, what I might do instead is give her a voice recorder to record her thoughts while she's out running and listening. Now, think-aloud is very useful because it can help us get at users' thoughts that they forget once they're no longer engaged in the task. But it's also a bit dangerous: by asking people to think aloud about their task, we encourage them to think about it more deliberately, and that can change the way they actually act. So while it's useful for understanding what they're thinking, we should check to see if there are places where what they do differs when they're thinking out loud about it. We can do that with what's called a post-event protocol, which is largely the same, except we wait to get the user's thoughts until immediately after the activity. That way, the activity is still fresh in their minds, but the act of thinking about it shouldn't affect their performance quite as much.

3.3.16 - Surveys

3.3.17 - 5 Tips: Surveys

  1. Number one, less is more. The biggest mistake that I see novice survey designers make is to ask way too much. That affects the response rate and the reliability of the data. Ask the minimum number of questions necessary to get the data that you need, and only ask questions that you know that you'll use.
  2. Number two, be aware of bias. Look at how you're phrasing the questions. Are there positive or negative connotations? Are participants implicitly pressured to answer one way or the other?
  3. Number three, tie them to the inventory. Make sure every question on your survey connects to some of the data that you want to gather. Start with the goals for the survey and write the questions from there.
  4. Number four, test it out. Before sending it to real participants, have your co-workers or colleagues test out your survey. Pretend they're real users and see if you would get the data you need from their responses.
  5. Number five, iterate. Survey design is like interface design. Test out your survey, see what works and what doesn't, and revise it accordingly. Give participants a chance to give feedback on the survey itself so that you can improve it for future iterations.

3.3.18 - Writing Good Survey Questions

Surveys are used often in HCI because of their convenience, but they're only useful if the questions are actually well-written. Tips like "be aware of bias" and "test it out" are good pieces of general advice, but there are also lots of specific things we can do to make our survey questions better. In fact, there are six things I personally recommend in survey design: be clear, be concise, be specific, be expressive, be unbiased, and be usable. Let's go through what these actually mean in practice.

Be clear means we want to make sure the user actually understands what we're asking. So, if we're using a numeric scale, for example, we don't want to just give them numbers. We want to actually label those numbers with what they mean. It's not uncommon to see larger scales label only the first, last, and middle numbers, but it's always better to assign some kind of label to every single number, and to make sure the labels are parallel. We wouldn't want something like highly dissatisfied, dissatisfied, neutral, a little satisfied, and satisfied. We also want to avoid overlapping ranges if we're asking about some range of numbers. Say we're asking, "How many times per week do you watch Hulu?" If a user says they generally watch twice per week, it's not clear whether they would choose 0-2 or 2-5. Instead, we want to make sure the ranges don't overlap. If we're in doubt about whether the user will actually understand our question, we should provide some extra detail. For example, if we were asking, "Do you own a tablet computer?" we might infer that not all our users really understand what a tablet computer is. So, we'd go on to define it: a computer with a touchscreen and a detachable keyboard. That improves the likelihood that the user actually understands what we're asking. If we're asking about a frequency, it's useful to timebox it. For example, if we ask, "How often do you exercise?" users might not fully understand the difference between rarely and occasionally. Is rarely once a week, once a month, once a year? Is frequently every day, five times a week? So, instead, we probably want to ask a question like, "In the past seven days, how many times have you done this behavior?" That's a much more objective question and a lot easier to answer.

Second, we want to be concise with our questions. We always want to make sure to ask our questions in plain language that the user can understand. For example, instead of asking something like, "What was the overall level of cleanliness that you observed within the car that you rented?" we'd ask, "How clean was the car?" Now, it is worth noting that sometimes being concise and being clear are at odds. Adding more detail inherently means being less concise. So, it's a trade-off; use your best judgment to decide when adding more detail will be worth it.

Third, when asking questions, we want to be specific. We want to avoid questions about super-big ideas. For example, "How satisfied were you with the interface?" Well, there are a lot of elements of satisfaction with using an interface. Asking about satisfaction with the interface as a whole is such a big question that it's hard to answer. Instead, we might ask a series of smaller questions, like how satisfied were you with how quickly the interface responded to your commands, or how satisfied were you with how easily you could find the command you were looking for. Part of this is avoiding what are called double-barreled questions.
A double-barreled question is a question that asks about two things at the same time. For example, suppose we asked, "How satisfied are you with the speed and availability of your mobile connection?" What if a user were satisfied with the availability, but not satisfied with the speed? How do they answer that question? So, instead, we break this up into two questions: one asking about speed, one asking about availability. We also want to avoid questions that allow some internal conflict. This is similar to avoiding questions about big ideas. For example, how satisfied were you with your food? Well, I might have been satisfied with the taste of it, but not with the temperature or the appearance of it. So, instead, we break that down into smaller questions that each address an individual component of satisfaction.

Fourth, we want to be expressive, or really, what this should say is allow the user to be expressive, but that would break my nice little symmetry over here. We want to make sure to emphasize the user's opinions. Sometimes users taking our surveys are hesitant to be very emphatic or very critical. So, we want to make sure to emphasize in the questions that we're looking for their opinions. Instead of asking, "Is our subscription price too high?" we might ask, "Do you feel our subscription price is too high, too low, or about right?" In the second version, a user can say too high without feeling like they're being very combative. Whenever possible, we want to use ranges instead of yes-or-no questions. That allows the user to express more of the detail of their individual answer. So, instead of asking, "Do you use social media? Yes or no?" we might ask, "In the past seven days, how much time have you spent on social media?" This allows the user to express something more closely resembling the complexity of their answer. If we're asking about something with levels of frequency or levels of agreement, we want to give lots of levels. Simply offering dissatisfied or satisfied isn't enough to capture the full range of user opinions. I generally recommend always using at least five levels, so you can differentiate people who are highly satisfied, meaning they have no complaints, from people who are satisfied, meaning they might have some complaints, but overall it's a positive experience. That's actually a pretty useful distinction to arrive at. When possible, it's also useful to allow users to make multiple selections. For example, imagine we were asking, "What social media platform do you use the most?" We're losing something with those users who use multiple platforms with equal frequency. So, instead, why not let them choose more than one? There might be some good reason we want them to choose only one, maybe because some follow-up questions are based on it, but a lot of the time it's beneficial to allow them to select multiple answers. For questions that are nominal or categorical, it's often good to let them add new categories. So, instead of just giving them six options to choose from, we could give them six to choose from, plus a box to put in another one. That allows them to express ideas that we didn't anticipate.

My fifth piece of advice is to be unbiased, or to avoid bias wherever possible, and that last question is actually a good example of that as well. If we don't give them that other box, then we're biasing them with only our pre-established selections. Now, sometimes that's okay.
If we've done a lot of surveys in the past and found that these are the only answers anyone ever puts in, then it's okay to limit the space to only those. Just remember, if you provide users categories and don't give them an "other" box, you might be biasing them toward only those opinions that you anticipated. But even if you provide an "other" box, you still risk some bias. For example, if you ask, "Why did you choose our service over our competitors?" a user might look at the options and say, "Well, now that you mention it, I guess it was because of your good reputation." But if you asked them this question without giving them options, they might have given a different answer. It was the act of reading the options that made them think, "Maybe that's why I did that." So, often it's good to leave these potentially open-ended questions open: let them just say in free text why they chose your service. Again, if you've run the survey for a while and gathered a lot of these open-ended responses, and you've found that there are really only four or five answers that users ever put in, then it's okay to distill those down to options. In that case, you've done enough data analysis to understand that these are really the only selections. But if you aren't yet sure of the full space of answers you might receive, it can be better to leave it open-ended. We also need to avoid leading questions. This one is a little more obvious. If we're asking for opinions on our new interface, we don't want to say something like, "Did our brand new AI-based interface generate better recommendations? Yes or no?" Obviously, here we want the user to choose yes. Instead, we should ask it in a more neutral fashion: "How satisfied were you with the recommendations the interface generated?" Similarly, we want to avoid loaded questions. For example, "In the past seven days, how much time have you wasted on social media?" Asking the question like that is guaranteed to lower our estimates, compared to, "In the past seven days, how much time have you spent on social media?"

Finally, my last piece of advice is to make your survey usable. Now, a lot of this is going to come down to the details of the survey platform you choose, but some of these are decisions you can make as well. For example, it's always good to provide a progress bar that lets users know how far along in the survey they actually are, so they can adjust their expectations accordingly. It's not uncommon for users to quit surveys because they don't know how close they are to the end, even though in reality, they're only a few seconds away from the last question. Along the same lines, it's good to make your page lengths consistent. If you have a five-page survey, you don't want one question on the first page, 50 on the second page, and two on the third page. If a user opens the second page and sees 50 questions, they're going to naturally assume that the remaining pages also have 50 questions. So, try to make the pages consistent to set accurate expectations about how long the survey is going to take. Third, order your questions logically. There should be some natural flow to the order in which you ask different questions. You don't want to go from a demographic question to a satisfaction question and back to a demographic question. You want to gather your questions into topics. Ideally, they should take the user along the thought process you want them to engage in while answering your questions.
Fourth, at the end of the survey, it's good to alert users about unanswered questions. On the one hand, maybe the user didn't know they skipped the question; this lets them know, so they can go back and answer it. On the other hand, maybe they skipped that question intentionally. Maybe they weren't comfortable answering, maybe they just don't have an answer, maybe your space of answer options didn't capture what they thought. So, you don't want to force them to go back and answer it, but you do want to account for times when they may have accidentally skipped it. Let them know, but don't force them to go back. Finally, preview the survey yourself. This takes some discipline. I have sent out surveys that I never previewed and later found out I had used checkboxes instead of radio buttons for a particular question. So, force yourself to actually preview the survey and fill it out as if you were a real user. Don't just scroll through it; actually go through and answer each question. So, that was quite a lot of information, but I'm hoping the fact that most of the tips were pretty practical will make them easy to apply. When in doubt, remember, you can always ask for feedback on your survey questions before sending them out to actual participants.
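
To make a couple of these tips concrete, here's a minimal sketch, not from the lesson, of how you might encode a fully labeled five-point scale and check that numeric answer ranges don't overlap. The question text and buckets are illustrative placeholders.

```python
# Every point on the scale gets a parallel label (tip: be clear).
satisfaction_scale = {
    1: "Highly dissatisfied",
    2: "Dissatisfied",
    3: "Neutral",
    4: "Satisfied",
    5: "Highly satisfied",
}

# Non-overlapping buckets for "In the past seven days, how many times
# have you exercised?" -- note 0-2, 3-5, 6+ rather than 0-2, 2-5.
frequency_buckets = [(0, 2), (3, 5), (6, None)]  # None = no upper bound

def check_buckets(buckets):
    """Fail loudly if adjacent buckets overlap or leave a gap."""
    for (_, hi), (lo, _) in zip(buckets, buckets[1:]):
        assert hi is not None and lo == hi + 1, f"bad boundary: {hi} -> {lo}"

check_buckets(frequency_buckets)       # passes
# check_buckets([(0, 2), (2, 5)])      # would fail: 2 appears in both buckets
```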

3.3.19 - Exercise: Surveys Question

Writing survey questions is an art as well as a science. So let's take a look at an intentionally poorly designed survey, and see everything we can find that's wrong with it. On the left is a survey. It's kind of short, mostly because of screen real estate. Write down in the box on the right everything that is wrong with this survey. Feel free to skip forward if you don't want to listen to me read out the questions. The questions are: on a scale of 1 to 4, with 1 meaning a lot and [...]; why do you like to exercise?; on a scale of 1 to 6, with 1 meaning not at all and [...]; and, have you listened to an audiobook this year?

3.3.19 - Exercise: Surveys Solution

Here are a few of the problems that I intentionally put into this survey. Some of them are kind of obvious, but hopefully a couple of others were a little more subtle and interesting. First, when I say on a scale of one to four, with one meaning a lot and four meaning not at all, what do two and three mean exactly? Just giving the endpoints doesn't make for a very clear scale. We usually also want to provide an odd number of options, so that users have a neutral central option. Sometimes we'll want to force our participants to take one side or the other, but generally we want to give them that middle, neutral option. Either way, though, we definitely don't want to change the number of options between those two questions. Having one be 1 to 4 and the other be 1 to 6 is just confusing. And even worse, notice that we're reversing the scale between the two: in the first question, the low number means a lot; in the second question, the high number means a lot. That's just terrible design. We want to be consistent across our entire survey, both with the direction of our scale and with the number of options, unless there's a compelling reason not to be. The second question is also guilty of being quite a leading question. "Why do you like to exercise?" assumes the participant likes to exercise. What are they supposed to say if they don't? And finally, the last question is a yes-or-no question: have you listened to an audiobook this year, yes or no? No is kind of an interesting answer, but from yes, I don't know if you listened to one audiobook this year or 100 audiobooks this year. I don't know if you listened every single day or if you just listened once because you had a gift certificate. So we want to reword this question to be a little more open-ended and support a wider range of participant answers.

3.3.20 - Other Data Gathering Methods

So far we've discussed some of the more common approaches to needfinding. Depending on your domain, though, there might be some other things you can do. First, if you're designing for a task for which interfaces already exist, you might start by critiquing the interfaces that already exist, using some of the evaluation methods that we'll cover later in the evaluation lesson. For example, if you want to design a new system for ordering takeout food, you might evaluate the interfaces for calling in an order, ordering via a mobile app, or ordering via a website. Second, and similarly, if you're trying to develop a tool to address a problem that people are already addressing, you might go look at user reviews and see what people already like and dislike about existing products. For example, there are dozens of alarm clock apps out there, and thousands of reviews. If you want to design a new one, you could start there to find out what people need or what their common complaints are. Third, if you're working on a task that already involves a lot of automatic logging, like web surfing, you could try to get some logs of user interaction that have already been generated. For example, say you wanted to build a browser that's better at anticipating what the user will want to open next. You could grab data logs and look for trends both within and across users; a rough sketch of that idea follows below. You can get creative with your data gathering methods. The goal is to use a variety of methods to paint a complete picture of the user's task.
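
As a rough illustration of that last idea, here's a minimal sketch, not from the lesson, of mining interaction logs for "what page tends to follow what page" trends. The log format and sites are made up; real logs would need their own parsing and consent handling.

```python
from collections import Counter

# Hypothetical (user, site) visit log, in chronological order per user.
log = [
    ("user1", "news.example.com"), ("user1", "mail.example.com"),
    ("user1", "news.example.com"), ("user1", "mail.example.com"),
    ("user2", "news.example.com"), ("user2", "weather.example.com"),
]

transitions = Counter()
last_site = {}
for user, site in log:
    if user in last_site:
        transitions[(last_site[user], site)] += 1  # count A -> B hops
    last_site[user] = site

# The most common transitions hint at what the browser might pre-load.
for (src, dst), count in transitions.most_common(3):
    print(f"{src} -> {dst}: {count}")
```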

3.3.21 - Exercise: Needfinding Pros and Cons Question

In this lesson we've covered a wide variety of different methods for needfinding. Each method has its own advantages and disadvantages. So let's start to wrap up the lesson by exploring this with an exercise. Here are the methods we've covered, and here are the potential advantages. For each row, that is, for each advantage, mark which needfinding methods actually have that advantage. Note that these might be somewhat relative, so your answers may differ from ours. Go ahead and skip to the exercise if you don't want to listen to me read these out. The columns from left to right are Naturalistic Observation, Participant Observation, Errors and Hacks, Interviews, Surveys, Focus Groups, Apprenticeship, and Think-Aloud. The potential advantages are: analyzes data that already exists, requires no recruitment, requires no synchronous participation, investigates the participant's thoughts, occurs within the task context, and cheaply gathers lots of users' data.

3.3.21 - Exercise: Needfinding Pros and Cons Solution

Here's my answer to this very complicated exercise. Two methods that analyze data that already exists are Naturalistic Observation and Errors and Hacks. Naturalistic Observation doesn't necessarily analyze data that already exists, but it analyzes data that's being produced on its own regardless of whether we observe it; we don't have to create an opportunity for data to happen, we just have to observe it and capture it where it's already taking place. Errors and Hacks looks at the way users already use interfaces to see what errors they regularly make or when they have to work around the interface. The two methods that require no recruitment are Naturalistic Observation and Participant Observation. In both cases, we don't need other human participants to come do anything differently based on the fact that we're doing some research. With Interviews, Surveys, Focus Groups, Apprenticeship, and Think-Aloud, we're always asking users to do something to accommodate us or to give us some data. And with Errors and Hacks, even if we can view that data on our own, we still need the user to give us permission to view their workspace or watch them do whatever they're doing. There might be some times when you can look for Errors and Hacks with Naturalistic Observation, but generally you need to get enough into the user's head to understand why something's an error or why they need to use a certain hack. For the most part, all of these are going to need some synchronous participation. There might be some exceptions. For example, we could do a retrospective analysis of Errors and Hacks, or we could have someone do a Think-Aloud protocol where they write down their thoughts after doing a task. But generally speaking, the way most of these are usually done, they require synchronous participation. Surveys are the exception. Surveys we usually send out to someone, wait some period of time, and get back the results, so we never have to be interacting live with any of our participants. That's one of the reasons why surveys can gather a lot more data than other methods: adding more participants doesn't necessarily require more of our time, at least not to gather the data in the first place. Analyzing it might require more time at the end, but that's not synchronous either. As far as investigating participants' thoughts is concerned, almost all of these methods can investigate them when used correctly. A survey does not literally investigate participants' thoughts, but a well-designed survey is going to try to get at the heart of what the user thinks about things. The only exception is Naturalistic Observation, where by definition we're just watching people; we're not interacting with them, and we're not asking them what they're thinking. It's always extremely valuable for us to be able to do some needfinding that occurs within the task context itself. And unfortunately, interviews and surveys, which are some of our most common data gathering methods, very often don't occur within the task context. Naturalistic Observation and Participant Observation obviously do, but since they don't involve getting inside the real user's head, their contributions are a little more limited. Apprenticeship and Think-Aloud really capture the benefits of occurring within the task context, because either way we get the user's thoughts while they're engaging with the task, or immediately thereafter. It is possible to do interviews and Focus Groups within the task context as well; it just isn't quite as common.
Errors and Hacks are certainly debatable as well, because the Errors and Hacks themselves definitely occur within the task context, but our analysis of them usually doesn't. And finally, as we'll talk about when we discuss cognitive task analysis, one of the challenges with needfinding is that most of our approaches are extremely expensive. If we want to gather a lot of data cheaply, then we probably need to rely on surveys. Everything else either incurs a pretty significant cost or just isn't capable of gathering a lot of data. For example, we could cheaply do participant observation for weeks on end, but we're only ever going to gather data from one person, and that's never ideal.

3.3.22 - Design Challenge: Needfinding for Book Reading Question

The needfinding exercises that we've gone through so far focus on the needs of the exercisers: what they can do with their hands, what the environment around them is like while exercising, and so on. However, that's only half the picture for this particular design. Our ultimate goal is to bring the experience of consuming books to people who exercise, which means we also need to understand the task of book reading on its own. Now, our problem space is still around exercisers, so we wouldn't go through the entire design life cycle for book reading on its own. We don't need to design or prototype anything for readers alone. But if we're going to bring the full book reading experience to people while exercising, we need to understand what that is. So take a moment and design an approach to needfinding for people who are reading on their own.

3.3.22 - Design Challenge: Needfinding for Book Reading Solution

We could apply pretty much every single needfinding method that we've discussed to this task. We could, for example, go to the library and just watch people reading and see how they're taking notes. We've all likely done it ourselves, so we can reflect on what we do while reading, although again, we need to be careful not to overvalue our own priorities and approaches. Reading is common enough that we can easily find participants for interviews, surveys, and think-alouds. The challenge here will be deciding who our users really are. Books are ubiquitous. Are we trying to cater to everyone who reads deliberately? If so, we need to sample a wide range of users. Or, initially, we could choose a subset. We might cater to students who are studying, or busy business people, or people who walk or bike to work. We might start with one of those groups and then abstract out over time. We might eventually abstract all the way to anyone who's unable to read and take notes the traditional way, like people driving cars or people with visual impairments, but that's further down the road. The more important thing is that we define who our user is, define the task in which we're interested, and deliberately design for that user and that task throughout the design life cycle.

3.3.23 - Iterative Needfinding

We've noted that design is a life cycle, from needfinding to brainstorming design alternatives to prototyping to evaluation, and then back to needfinding to continue the cycle again. Needfinding on its own, though, can be a cycle by itself. For example, we might use the results of our naturalistic observation to inform the questions we ask during our interviews. Imagine we notice that many joggers jog with only one earphone in. That's a naturalistic observation, and then in an interview, we might ask, why do some of you jog with only one earphone in? And we might get the answer that it's to listen for cars, or to listen for someone trying to get their attention, because they exercise in a busy area. Now that we understand why they have that behavior, maybe we develop a survey to see how widespread that behavior is, and ask, how many of you need to worry about what's around you when you're listening while exercising? If we notice in those surveys a significant split in the number of people who are concerned about that, that might inform our next round of naturalistic observation. We might go out and look to see in what environments people wear only one headphone and in what environments they wear both. So in that way, all the different kinds of needfinding that we do can inform our next round of other kinds of needfinding. We can go through entire cycles of needfinding without ever going on to our design alternatives or prototyping stages. However, the prototyping and evaluation that we do will then become another input into this. During our evaluation, we might discover things that inform what we need to do next as far as needfinding is concerned. Creating prototypes and evaluating them gives us data on what works and what doesn't, and that might inform what we want to observe to better understand the task going forward. That's the reason why the output of evaluation is more needfinding. It would be a mistake to do one initial needfinding stage, and then jump into a back-and-forth cycle of just prototyping and evaluation.

3.3.24 - Revisiting the Inventory

During these needfinding exercises, you'll have gathered an enormous amount of information about your users. Ideally, you've combined several of these approaches: you've observed people performing the tasks, you've asked them about their thought process, and you've tried it some yourself. Pay special attention to the places where the data seem to conflict. Are these cases where you, as the designer, understand some elements of the task that the users don't? Or are these cases where your expertise hasn't quite developed to the point of understanding the task? Once you've gone through the data gathering process, it's time to revisit that inventory of things we wanted to gather initially. One, who are the users? Two, where are the users? Three, what is the context of the task? Four, what are their goals? Five, what do they need right now? Six, what are their tasks? And seven, what are the subtasks? Revisit these with the results of your data gathering in mind.

3.3.25 - Representing the Need

Now that you have some understanding of the user's needs, it's time to formalize that into something we can use in design. There are a number of different ways we can do this. For example, maybe we create a step-by-step task outline of the user engaging in some task. We can break those tasks down into subtasks as well, all the way down to the operator level. We can further develop this kind of task outline into a hierarchical network, like we talked about before; this might involve more complexity than simply a linear series of actions. We might further augment this with a diagram of the structural relationships among the components in the system and how they interact. This might give us some information about how we get feedback to the user, or how they interact with our interface in the first place. From there, we might develop this even further into a flowchart equipped with decision-making points or points of interruption. Notice how these representations are very similar to the outcomes of the task analyses we talk about in the principles unit of this course. We can similarly use the data gathered here to assemble a more comprehensive task analysis that will be useful in designing and prototyping our interfaces. A small sketch of one such representation follows below.
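
For instance, here's a minimal sketch, not from the lesson, of a step-by-step task outline captured as a nested structure so it can be shared, versioned, and expanded down toward the operator level. The breakdown itself is just illustrative.

```python
# A hierarchical task outline for the audiobook-while-exercising task.
task_outline = {
    "task": "Listen to an audiobook while jogging",
    "subtasks": [
        {"task": "Start a session",
         "subtasks": [{"task": "Put in earphones"},
                      {"task": "Press play"}]},
        {"task": "Control playback mid-run",
         "subtasks": [{"task": "Rewind 30 seconds"},
                      {"task": "Leave a bookmark"}]},
        {"task": "End the session"},
    ],
}

def print_outline(node, depth=0):
    """Walk the outline, printing each task indented by its depth."""
    print("  " * depth + node["task"])
    for child in node.get("subtasks", []):
        print_outline(child, depth + 1)

print_outline(task_outline)
```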

3.3.26 - Defining the Requirements

Finally, the last step of needfinding is to define our requirements. These are the requirements that our final interface must meet. They should be specific and evaluatable, and they can include some components that are outside the user's tasks as well, as defined by the project requirements. In terms of user tasks, we might have requirements regarding functionality (what the interface can actually do), usability (how certain user interactions must work), learnability (how fast the user can start to use the interface), and accessibility (who can use the interface). We might also have some that are generated by external project requirements, like compatibility (what devices the interface can run on), compliance (how the interface protects user privacy), cost (how much the final tool can actually cost), and so on. We'll use these to evaluate the interfaces we develop going forward; a small sketch of what such a requirements list might look like follows below.
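
Here's a minimal sketch, not from the lesson, of what "specific and evaluatable" might look like in practice: each requirement names its category, a concrete threshold, and how it will be evaluated. The thresholds are illustrative placeholders, not recommendations.

```python
# Each requirement pairs a testable statement with an evaluation method.
requirements = [
    {"category": "functionality",
     "requirement": "User can rewind 30 seconds with one tap",
     "evaluation": "Observe task completion in a usability test"},
    {"category": "usability",
     "requirement": "Leaving a bookmark takes under 2 seconds",
     "evaluation": "Time the interaction across 20 participants"},
    {"category": "learnability",
     "requirement": "New users start playback within 1 minute, unaided",
     "evaluation": "First-use test with no instructions given"},
    {"category": "compatibility",
     "requirement": "Runs on current iOS and Android phones",
     "evaluation": "Install and smoke-test on both platforms"},
]

for r in requirements:
    print(f"[{r['category']}] {r['requirement']} -- {r['evaluation']}")
```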

3.3.27 - Exploring HCI: Needfinding

How might needfinding work in your chosen area of HCI? If you're looking at designing for some technological innovation, like augmented or virtual interactions, the initial phase might not actually be that different. Your goal is to understand how people perform tasks right now, without your interface, so initially you want to observe them in their naturalistic environment. Later, though, you'll need to start thinking about bringing participants to you to experience the devices firsthand. If you're interested in something like HCI for healthcare or education, you have a wealth of naturalistic observations available to you. You might even have existing interfaces doing what you want to do, and you can try to leverage those as part of your needfinding exercises. Remember, no matter your area of application, you want to start with real users. That might mean observing them in the wild, talking to them directly, or looking at data they've already generated.

3.3.28 - Conclusion

Today, we've talked about needfinding. Needfinding is how you develop your understanding of the needs of your user: what tasks are they completing, what is the context of those tasks, what else is going on, what are they thinking during the task, and what do they have to hold in working memory? All these things feed into your understanding of your users' needs. We've discussed a number of different techniques to approach this, ranging from low intervention to high intervention. On the low side, we can just observe our users in the wild, or we can become users ourselves and participate in the task. Working up, we can look more closely at users' workspaces to find errors or hacks, or peruse the data they're already generating. We might interact with users directly through surveys, interviews, or focus groups. Or we might choose to work alongside them, not just participating in the task independently, but learning from them and developing expertise ourselves. Once you've gained a sufficient understanding, it's time to move on to the next step: brainstorming design alternatives.