Public notes for CS6750 - HCI Spring 2022 at Georgia Tech

Public webpage for sharing information about Dr. Joyner's CS6750 - Human Computer Interaction course in Spring 2022.


Unit 1: Introduction

1.1 Introduction to HCI

1.1.1 - Humans

[MUSIC] This is Morgan. » Hello. » Morgan is a human. » Last time I checked. » As a human, Morgan has various ways of perceiving the world around her, like seeing, hearing, and feeling. » Is anyone else seeing these? » There are a few more as well, like smelling and tasting, but we won't deal with those as much. » Thank goodness. » But Morgan has more than senses. She also has memories, experiences, skills, knowledge. » Thanks. » In human-computer interaction, we have to take into consideration every element of the human, from the way they perceive and interact with the world, to their long history of using computers and technology.

1.1.2 - Computers

This is a computer. Or at least, this is probably what you think of when you think of the term computer. But this is also a computer. And so is this. And so is this. And so is this. And so is this. » Hey! » This is Amanda, my video producer. » Go on, I’m rolling. » Right, and so is this. And so is this, and this, and this, and this, and even this. And so is this. And so is this. And so is this. [SOUND] And so is this. And so is this. Hey David? » One second, trying to get to Squirtle. There we go. With mobile devices and augmented reality, HCI is quite literally everything. Pokemon Go was released a few days before I recorded this and augmented reality games like this turn effectively the entire world into an instance of HCI. Even out here in the middle of nowhere, I’m still doing something with computers.

1.1.3 - Interaction

We have humans and we have computers and we're interested in the interaction between them. That interaction can take different forms though. The most obvious seems to be that the human interacts with the computer and the computer interacts with the human in response. They go back and forth interacting, and that's a valid view. But it perhaps misses the more interesting part of HCI. We can also think of the human interacting with the task, through the computer. The interaction is really between the human and the task, and the computer in the middle just mediates that interaction. Or to put this differently, the human and the computer together interact with the task. Ideally in this case, we're interested in making the interface as invisible as possible, so the user can spend as little time as possible focusing on the interface and instead focus on the tasks that they're trying to accomplish. Realistically, our interfaces are likely to stay somewhat visible. But our goal is to let the user spend as much time as possible thinking about the task, instead of thinking about our interface. We can all probably remember times when we've interacted with a piece of software and we felt like we spent all our time thinking about how to work the software, as opposed to accomplishing what we were using the software to do in the first place, and that's frustrating. So our goal as designers is to help the human feel like they're interacting directly with that task, while our interface kind of vanishes in the middle of that interaction.

1.1.4 - Reflections: Interacting & Interfaces Question

We’ll talk extensively about the idea of disappearing interfaces and designing with tasks in mind. But in all likelihood, you’ve used computers enough to already have some experience in this area. So take a moment and reflect on some of the tasks you do each day involving computers. Try to think of an example where you spend most of your time thinking about the task and an example where you spend most of your time thinking about the tool.

1.1.4 - Reflections: Interacting & Interfaces Solution

Video games actually give us some great examples of interfaces becoming invisible. A good video game is really characterized by the player feeling like they're actually inside the game world, as opposed to controlling it by pressing some buttons on a controller. We can do that through some intuitive controller design, like pressing forward moves forward and pressing backward moves backward. But a lot of times we'll rely on the user to also learn how to control the game over time. But as they learn, the controller becomes invisible in their interaction. A classic example of a place where interaction is more visible is the idea of having more than one remote control that controls what feels like the same system. So I have these two controllers that control my TV and my cable box together. And for me it feels like this is just one task, watching TV. But technologically, it's actually different tasks. So I have to think about whether I'm using the number pad on this controller or this controller, depending on what I'm trying to do at a given time. So I spend a lot of time thinking about the interface and not as much thinking about the task.

1.1.5 - The HCI Space

One of the most exciting parts of HCI is its incredible ubiquity. Computers are all around us and we interact with them every day. It's exciting to think about designing the types of tools and interfaces we spend so much time dealing with, but there's a danger here too. Because we're all humans interacting with computers, we think we're experts at human-computer interaction. But that's not the case. We might be experts at interacting with computers, but that doesn't make us experts at designing interactions between other humans and computers. We're like professional athletes or world-class scientists. Just because we're experts doesn't mean we know how to help other people also become experts. In my experience, many people look at HCI like this. The red dot represents what they know and the black circle represents what they think there is to know. They know there are probably some things they don't know yet, but they're already pretty good at it, and it wouldn't be too hard to become an expert. After studying HCI for a bit though, they look more like this. You can see that they've increased what they know, but their perception of what there is to know has grown even more. That's the journey we'll be taking together. You'll learn to do work in HCI, but perhaps more importantly, you'll learn how complex and large the field of HCI is. Your knowledge will increase, yet you might exit the class less confident in your HCI ability than when you started. You're taking the first step into a larger world.

1.1.6 - HCI in the Big Picture

Now, what we've described so far is a huge field, far too big to cover in any one class. In fact, there are lots of places where you can get an entire master's degree or PhD in human-computer interaction. Here are some of the schools that offer programs like that. And these are just the schools that offer actual degree programs in HCI, not computer science degrees with specializations in HCI, which would be almost any computer science program. So let's look more closely at what we're interested in for the purpose of the next several weeks. To do that, let's look at where HCI sits in a broader hierarchy of fields. We can think of HCI as a subset of a broader field of human factors engineering. Human factors engineering is interested in a lot of the same ideas that we're interested in, but they aren't just interested in computers. Then there are also subdisciplines within HCI. This is just one way to represent this. Some people, for example, would put UI design under UX design, or put UX design on the same level as HCI, but this is the way I choose to present it. Generally, these use many of the same principles that we use in HCI, but they might apply them to a more narrow domain, or they might have their own principles and methods that they use in addition to what we talk about in HCI in general. So to get a feel for what we're talking about when we discuss HCI, let's compare it to these other fields.

1.1.7 - HCI vs. Human Factors

First, let's start by comparing HCI to the broader field of human factors. Human factors is interested in designing interactions between people and products, systems, or devices. That should sound familiar. We're interested in designing the interactions between people and computers, but computers are themselves products or systems. But human factors is interested in non-computing parts of this as well. Let's take an example. I have a pretty new electric car, which means there are tons of computers all over it. From an HCI perspective, I might be interested in visualizing the data on the dashboard or helping the driver control the radio. Human factors is interested not only in how I interact with the computerized parts of the car, but the non-computerized parts as well. It's interested in things like the height of the steering wheel, the size of the mirrors, or the position of the chair. It's interested in designing the entire environment, not just the electronic elements. But that means it's interested in a lot of the same human characteristics that we care about, like how people perceive the world and their own expectations about it. So, many of the principles we'll discuss in this class come from human factors engineering, applied more narrowly to computerized systems. But the exciting thing is that as computers become more and more ubiquitous, the number of application areas for HCI is growing. Twenty years ago, a car might not have been an application of HCI. Ten years ago, a wristwatch would have been more about industrial design than HCI. Within only the past couple of years, things like shower heads and refrigerators have started to become truly computerized devices. As computers integrate themselves into more and more of our devices, the gap between human-computer interaction and human factors engineering is shrinking. As computers become more and more ubiquitous, there's coming a time when pretty much every single thing in your car will actually be run through a computer. Don't believe me? Check out the inside of Tesla's Model S. When you look at the console of a Tesla, almost everything you see is a giant computer. Cars have become computers on wheels, watches have become computers on wristbands, car keys have become computers on key chains. Computers are everywhere, and so HCI is everywhere.

1.1.8 - HCI vs. User Interface Design

For many years, human-computer interaction was largely about user interface design. The earliest innovations in HCI were the creation of things like the light pen and the first computer mouse, which allowed for flexible interaction with things on screen. But the focus was squarely on the screen. And so, we developed many principles about how to design things nicely for a screen. We borrowed from the magazine and print industries and identified the value of grids in displaying content and guiding the user's eyes around our interfaces. We created laws that govern how difficult it is for users to select what they want on screen. We examined, for example, whether it's easier to select a menu on a Mac, where the menus are always at the top of the screen, or on a PC, where they're grouped with the individual window. We developed techniques for helping interfaces adapt to different screen sizes and we developed methods for rapidly prototyping user interfaces using pen and paper or wireframes. Through this rich history, UI design really became its own well-defined field. In fact, many of the concepts we'll cover in HCI were originally developed in the context of UI design. But in HCI, we're interested in things that go beyond the user's interaction with a single screen. Technically, you can cover that in UI design as well, but traditionally most of the UI design classes I see focus on on-screen interaction. In HCI, we'll talk about the more general methods that apply to any interface.

1.1.9 - HCI vs. User Experience Design

The relationship between HCI and user experience design is a little bit closer. In fact, if you ask a dozen people working in the field, you'll likely get a dozen different answers about the difference. For the purposes of our conversations, we'll think about the difference like this. HCI is largely about understanding the interactions between humans and computers. User experience design is about dictating the interactions between users and computers. In order to design user experiences very well, you need to understand the user. You need to understand their interactions with interfaces. That's why I personally consider user experience design to be a subfield of the broader area of HCI. In our conversations, we'll use the principles and methods from HCI to inform how we design user experiences. But it's important to note that this relationship is deeply symbiotic. We might use that understanding to inform how we design user experiences, but then we evaluate those designs, and based on their successes or failures, we'll use that to inform our increasing knowledge of human-computer interaction itself. If our understanding leads us to create good designs, that provides evidence that our understanding is correct. If we create a design with some understanding and that design doesn't work, then maybe our understanding was flawed, and now our understanding of human-computer interaction as a whole will increase. This is similar to something called design-based research, which we'll talk about later: using the results of our designs to conduct research. You might also notice this is very related to our concept of feedback cycles. Just as a user uses an interface to participate in some task and then evaluates the result of their interaction, so also we design interfaces and evaluate their success. You'll find that feedback cycles are really all over this course and all over the world in general.

1.1.10 - HCI vs. Psychology

The research side of HCI connects to the relationship between HCI and psychology. And if we zoom out even further on this hierarchy of disciplines, we might say that human factors engineering itself is in many ways the merger of engineering and psychology, as well as other fields of design and cognitive science. In HCI, the engineering side takes the form of software engineering, but this connection to psychology remains, and in fact, it's symbiotic. We use our understanding of psychology, of human perception, of cognition to inform the way we design interfaces. We then use our evaluations of those interfaces to reflect on our understanding of psychology itself. In fact, at Georgia Tech, the Human-Computer Interaction class is cross-listed as a Computer Science and Psychology class. So let's take an example of this. In 1992, psychologists working at Apple wanted to study how people organized the rapid flow of information in their workspaces. They observed that people tended to form piles of related material, kind of like a less formal filing system, and so they then designed a computer interface that would mimic that ability. Finally, they used the results of that development to reflect on how people were managing their workspaces in the first place. So in the end, they had a better understanding of the thought processes of their users as well as an interface that actually helped users. Their design of an interface within HCI informed their understanding of psychology more generally. We came away with a better understanding of the way humans think about their workspaces because of our experience designing something that was supposed to help them think about their workspaces.

1.1.11 - HCI: Research and Design

Now that we've talked at length about what HCI isn't, let's talk a little bit about what HCI actually is. On the one hand, HCI is about research. Many of the methods we'll discuss are about researching the user, understanding their needs, and evaluating their interactions with designs that we prototype for them. But on the other hand, HCI is about design. After all, design is that prototyping phase, even though we're prototyping with research in mind. HCI is about designing interactions to help humans interact with computers, oftentimes using some known principles for good interaction, things like designing with distributed cognition in mind, or making sure the user develops good mental models of the way the interface works, or making sure to design with universal design in mind. We'll talk about all these topics later in our conversations. You don't need to worry about understanding any of these right now. What is important to understand right now is that these aren't two isolated sides. The results of our user research inform the designs we construct, and the results of our designs inform our ongoing research. Again, you might notice this is very similar to the feedback cycles we're designing for our users. They use what they know to participate in the task, and then use the feedback from that participation to inform what they know. We use what we know to design good interfaces, and then use the results of those interfaces to inform our ongoing research. This is the heart of what HCI is for the purpose of our conversations: using research to inform our designs and using the results of those designs to inform our ongoing research. This cycle appears anywhere that humans are using computers to participate in tasks. That can be sitting at a desk using a screen. That could be using a smartphone or a smartwatch. That can be participating with some kind of touch or gesture-based system, or it could be some interesting emerging technologies like virtual and augmented reality. In HCI, we're interested in the general principles and methods for designing and researching all of these things.

1.1.12 - Welcome to HCI

So, now you know what we're going to cover in our exploration of HCI. We're going to talk about some of the fundamental design principles that HCI researchers have discovered over the years. We're going to talk about performing user research, whether it be for new interfaces or exploring human cognition. We're going to talk about the relationship between these two, how our research informs what we design and how what we design helps us conduct research, and we're going to talk about how these principles work in a lot of domains, from technologies like augmented reality to disciplines like healthcare. I hope you're excited. I know I am. I like to think of this not just as a course about human-computer interaction, but also an example of human-computer interaction. Humans are using computers in new and engaging ways to teach about human-computer interaction. We hope this course exemplifies the principles, as well as teaches them.

1.2 - Introduction to CS6750

1.2.1 - Introduction to CS6750

[MUSIC] Now that you understand a little bit about what human-computer interaction is, let’s talk about what this class is going to be like. In this lesson, I am going to take you through a high level overview of this class. What material we’ll cover, how it fits together, and what you should expect to know by the end of the course. I’ll also talk a little bit about the assessments we’ll use in the class, but be aware, these assessments are only applicable to students taking this class through Georgia Tech. If you’re watching this course on your own or taking it to complement other courses you’re taking, those assessments won’t apply to you, but you’ll get to hear a little bit about what students in the actual course do. If you are a student in the course, you should know the assessments do tend to change a bit semester to semester. I’m going to try and stay as general as possible to capture future changes, but make sure to pay attention to the specific materials you’re provided for your semester.

1.2.2 - Learning Goals

In education, a learning goal is something we want you to understand at the end of the course. It's the knowledge contained within your head that you might not have had when we got started. In this class we have three major learning goals. First, we want you to understand some of the common principles in human-computer interaction. These are the tried and true rules on how to design good interactions between humans and computers. Second, we want you to understand the design life cycle. That's how interfaces go from conception to prototypes to evaluation. And we especially want you to understand the role of iteration in this process. Third, we want you to understand the expanse of the human-computer interaction field and the current applications available for HCI. HCI is really everywhere, from domains like healthcare, to technologies like virtual reality, to emerging techniques like sonification. We want you to understand the broad range of application areas for HCI in the modern world.

1.2.3 - Learning Outcomes: To Design

While a learning goal is something we want you to know at the end of the course, a learning outcome is something we want you to be able to do. This course really has one learning outcome, but there are some nuances to it. The learning outcome for this course is to be able to design effective interactions between humans and computers. The first part of this learning outcome is to design. But what is design? Well, for us design is going to take two forms. First, design is an activity where you're applying known principles to a new problem. For example, we'll talk a lot about the importance of giving users the right kind of feedback at the right time. That's applying the principle of feedback to some new design problem we encounter. But design has a second form as well: design is also a process where you gather information, use it to develop design alternatives, evaluate them with users, and revise them accordingly. When designing an interface for some task, I would ask some potential users how they perform the task right now, develop multiple different ideas for how we can help them, give those to the users to evaluate, and use their experiences to try to improve the interface each time. So let's take an example of this. Imagine I was designing a new thermostat. On the one hand, designing a new thermostat means applying known HCI principles, like feedback and error tolerance, to some new design. On the other hand, designing a new thermostat means creating different ideas, giving them to users, getting their feedback, and then revising those designs. Both these sides of design are very important. You don't want to ignore decades of experience when designing new interfaces, but simply applying known principles to a new problem doesn't guarantee you have a good design. Designing is about both these things. And in fact, these two things are the vast majority of the material that we'll cover in this course. We'll cover the principles uncovered by human factors engineering and human-computer interaction research. And we'll cover the methods used in HCI: gathering user requirements, developing designs, and evaluating new interfaces.

1.2.4 - Learning Outcomes: Effectiveness

The first part of this learning outcome, to design, needed some definition, but the second part seems pretty straightforward, right? Not exactly. Effectiveness is defined in terms of our goal. The most obvious goal here might be usability, and a lot of the time that's exactly what we're interested in. If I'm designing a new thermostat, I want the user to be able to create the outcome they want as easily as possible. But maybe usability isn't my goal, maybe it's research. Maybe I'm interested in investigating what makes people think that the thermostat is working correctly. In that case, I might deliberately create some thermostats that are harder to read, just to see how that changes people's perceptions of the system. Or it could be that my goal isn't to make a certain activity easier but rather to change that activity. Maybe I'm interested in reducing a home's carbon footprint. In that case, my goal is to get people to use less electricity. I might design the interface of the thermostat specifically to encourage people to use less. Maybe I'd show them a comparison to their neighbor's usage, or allow them to set energy usage goals. Or I could make the thermostat physically harder to turn up and down. So effectiveness is very much determined by the goal that I have in mind. We'll generally assume that our goal is usability, unless we state otherwise. But we'll definitely talk about some of those other goals as well.

1.2.5 - Learning Outcomes: Between Humans and Computer

The final part of our desired learning outcome is between humans and computers. We want to design effective interactions between humans and computers. What's important to note here is where we're placing the emphasis. Note that we didn't say designing effective interfaces, because that puts the entire focus on the interface. We're deeply interested in the human's role in this interaction. So rather than designing interfaces, designing programs, designing tools, we're designing interactions. We're designing tasks. We're designing how people accomplish their goals, not just the interface that they use to accomplish their goals. Take our thermostat for example. When we started this process, our goal shouldn't be to design a thermostat. Our goal should be to design the way in which a person controls the temperature in their home. That subtle shift in emphasis is powerful. If you set out to design a better thermostat, you might design a wall-mounted device that's easier to read or easier to use. But if you set out to design a better way for people to control the temperature in their home, you might end up with Nest, a device that learns from the user and starts to control the temperature automatically.

1.2.6 - Learning Strategies: Video Material

Learning strategies are how we plan to actually impart that knowledge to you. This is how we attempt to help you achieve the learning goals and learning outcomes. Within these videos, we'll use a number of different strategies to try to help you understand the core principles and methodologies of HCI. We'll use learning by example. Every lesson, and in fact this course as a whole, is organized around a collection of running examples that will come up over and over again. We'll use learning by doing. Throughout the course we'll ask you to engage in designing interactions to solve different problems in different contexts. These aren't required, since there's really no way we can verify if you've done them, but we really hope you'll take a few minutes and think about these. We'll also use learning by reflection a lot. We'll ask you to reflect on times when you've encountered these things in your own everyday life. These strategies are useful because they connect to your own personal experiences, but once again, there's a danger here. One of the recurrent points in HCI is that when you are designing interactions, you are not your own user. Focusing too much on your own experiences can give you a false sense of expertise. So I'll use some strategies to help take you out of that comfort zone and confront how little you might understand these tasks with which you thought you were so familiar.

1.2.7 - Learning Strategies: Georgia Tech

Within the full course at Georgia Tech, there are a number of other strategies in which you’ll engage as well. First, we’re really passionate about leveraging the student community in this class to improve the experience for everyone. Taking this class with you are people with experience in a variety of industries, many of whom have significant experience in HCI. So some strategies we’ll use include peer learning, collaborative learning, learning by teaching, and communities of practice. You’ll learn both from each other and with each other. You’ll play the role of student, teacher, and partner, and you will learn from each perspective. In addition, the entire course is built around the idea of project-based learning. Early in the semester, you’ll form a team and start looking at a problem we’ve selected, or maybe one in which you’re already interested. This project will then become the domain through which you explore the principles and methods of human-computer interaction. Who knows? By the end of the semester, you might even generate something with the potential to go forward as a real-world product, or as a research project.

1.2.8 - Learning Assessments

Learning goals are what we want you to understand. Learning outcomes are what we want you to be able to do. Learning assessments, then, are how we evaluate whether you can do what we want you to be able to do and understand what we want you to understand. The learning outcome of this class is to be able to design effective interactions between humans and computers. So the primary assessments in this class are to, say it with me, design effective interactions between humans and computers. You'll start with some relatively small-scale tasks, recommending improvements to existing interfaces or undertaking some small design challenges. But as the semester goes on, you'll scope up towards a bigger challenge. You'll initially investigate that challenge individually and then you'll merge into teams to prototype and evaluate a full solution to the challenge you chose. At the end, you'll be evaluated not just on the final design you generate but on the process by which it was generated.

1.2.9 - Course Structure

We'll close by talking about the overall structure of the content you'll be consuming. The course's lessons are designed to be as independent as possible, so you should be able to skip around if you want, but there's a certain logic to our planned presentation order. We discussed earlier the model of HCI, how design informs research and research then informs design, so we'll start by discussing some of the core design principles of HCI. Then we'll discuss the research methodologies for uncovering new user information, the iterative design life cycle. We'll close by giving you the opportunity to peek at what's going on in the HCI community at large.

1.2.10 - 5 Tips For Doing Well in CS6750

Here are five quick tips for doing well in this course. Number one, look over the assignments early. Some of our assignments, you can sit down and do them in an hour. But others require some advance coordination to talk to users, develop prototypes, or test with real people. So, go ahead and at least read all the assignment descriptions. Number two, start the assignments early. That's not just typical teacher talk saying, "You can't get this done at the last minute." You probably can. But you're using interfaces like these in your everyday life. By starting early, you're likely to get inspiration just in your day-to-day routine, and that's going to make writing the assignments significantly easier than trying to sit down and come up with something on the spot. Number three, participate. Interact with your classmates. Post on the forums, read others' posts. The knowledge and experience you gain there is just as valuable as anything you'll get listening to me in these videos. Number four, select an application area to explore. Next lesson, you'll hear about several of the interesting areas of HCI research and development going on right now. Developing in many of these areas is outside the scope of this class. But I encourage you to pick an area in which you're interested, and mentally revisit it throughout the course. Number five, leave behind what you know, or at least try. HCI is a huge area. Yet, many people believe that because they're already good at using computers, they'd be good at designing user experiences. But HCI above all else is about gaining a grounded understanding of the user's needs, not assuming we already understand them. So, while it's great to apply the course's principles to your everyday life, be cautious about designing too narrowly based only on your own experiences.

1.2.11 - Conclusion

In this lesson, I've tried to give you some expectations of what this course will be like. We've gone over the course's goals, outcomes, learning strategies, and assessments. We've covered the course's learning outcome in great detail, to design effective interactions between humans and computers. Now, I focus mostly on the video material because the assignments, projects, and exams are separate from these videos, and are likely to change pretty significantly semester to semester. The video material here will cover three general areas: principles, methods, and applications. To really get into the applications, it's useful to understand the principles and methods. But at the same time, it's useful to keep the applications in mind while learning about the principles and methods. So, next, we're going to briefly preview some of the application areas, for you to keep in mind during the rest of our conversations. Then, after we cover principles and methods, we'll invite you to revisit these application areas and leave room to explore whatever you find most interesting.

1.3 Exploring HCI

1.3.1 - Introduction to Exploring HCI

[MUSIC] [SOUND] Computers are finding their way into more and more devices, and as a result HCI is becoming more and more ubiquitous. It used to be that you wouldn't really need to think too much about HCI when designing a car or designing a refrigerator, but more and more computing is pervading everything. At the same time, new technological developments are opening up new areas for exploration. We're seeing a lot of really fascinating progress in areas like virtual reality, augmented reality, and wearable devices. As we study HCI, we're going to talk a lot about things you've already used, like computers and phones. But we want you to keep in mind some of these more cutting-edge application areas as well. After all, if you're really interested in going into HCI professionally, you'll be designing for these new application areas. So we're going to quickly preview some of these. We'll divide them into three areas: technologies, domains, and ideas. Technologies are emerging technological capabilities that let us create new and interesting user interactions. Domains are pre-existing areas that could be significantly disrupted by computer interfaces, like healthcare and education. Ideas span both of these. They are the theories about the way people interact with interfaces and the world around them. Now, our delineation of this is kind of artificial. There's a lot of overlap. New technologies like augmented reality are what allow emerging ideas like context-sensitive computing to really have the power that they do. But for organization, we'll group our application areas into these three categories. When one of these areas catches your eye, take a little while and delve into it a little bit deeper. Then keep that topic area in mind as you go through the rest of the HCI material. We'll revisit your chosen area throughout the course, and ask you to reflect on the application of the course's principles and methods to your application area.

1.3.2 - Technology: Virtual Reality

The year that I'm recording this is what many have described as the year that virtual reality finally hits the mainstream. By the time you watch this, you'll probably be able to assess whether or not that was true, so come back in time and let me know. Virtual reality is an entirely new classification of interaction and visualization, and we're definitely still at the beginning of figuring out what we can do with these new tools. You could be one of the ones who figures out the best way to resolve motion sickness or how to give proper feedback on gestural interactions. A lot of the press around virtual reality has been around video games, but that's definitely not the only application. Tourism, commerce, art, education: virtual reality has applications to dozens of spaces. For example, there's a lab in Michigan that's using virtual reality to treat phobias. They're creating a safe space where people can very authentically and realistically confront their fears. The possible applications of virtual reality are really staggering, so I'd encourage you to check them out as you go through this class.

1.3.3 - Technology: Augmented Reality

Virtual reality generally works by replacing the real world's visual, auditory, and sometimes even olfactory or kinesthetic stimuli with its own input. Augmented reality, on the other hand, complements what you see and hear in the real world. So for example, imagine a headset like a Google Glass that automatically overlays directions right on your visual field. If you were driving, it would highlight the route to take, instead of just popping up some visual reminder. The input it provides complements stimuli coming from the real world, instead of just replacing them. And that creates some enormous challenges, but also some really incredible opportunities as well. Imagine the devices that can integrate directly into our everyday lives, enhancing our reality. Imagine systems that could, for example, automatically translate text or speech in a foreign language, or could show you reviews for restaurants as you walk down the street. Imagine a system that students could use while touring national parks or museums, that would automatically point out interesting information, custom tailored to that student's own interests. The applications of augmented reality could be truly stunning, but it relies on cameras to take input from the world, and that actually raises some interesting societal problems. There are questions about what putting cameras everywhere would mean. So keep those in mind when we get to interfaces and politics in unit two.

1.3.4 - Technology: UbiComp and Wearables

Ubiquitous computing refers to the trend towards embedding computing power in more and more everyday objects. You might also hear it referred to as pervasive computing. It's deeply related to the emerging idea of an Internet of Things. A few years ago, you wouldn't have found computers in refrigerators and wristwatches, but as microprocessors became cheaper and as the world became increasingly interconnected, computers are becoming more and more ubiquitous. Modern HCI means thinking about whether someone might use a computer while they're driving a car or going on a run. It means figuring out how to build smart devices that take on some of the cognitive load from the user, like refrigerators that track their own contents and deliver advice to the user right at the right time. This push for increasing pervasiveness has also led to the rise of wearable technologies. Exercise monitors are probably the most common examples of this, but smartwatches, Google Glass, augmented reality headsets, even things like advanced hearing aids and robotic prosthetic limbs are all examples of wearable technology. This push carries us into areas usually reserved for human factors engineering and industrial design, which exemplifies the increasing role of HCI in the design of new products.

1.3.5 - Technology: Robotics

A lot of the current focus on robotics is on their physical construction and abilities, or on the artificial intelligence that underlies their physical forms. But as robotics becomes more and more mainstream, we're going to see the emergence of a new subfield of human-computer interaction: human-robot interaction. The field actually already exists. The first conference on human-robot interaction took place in 2006 in Salt Lake City, and several similar conferences have been created since then. As robots move into the mainstream, we're going to have to answer some interesting questions about how we interact with them. For example, how do we ensure that robots don't harm humans through faulty reasoning? How do we integrate robots into our social lives, or do we even need to? As robots are capable of more and more, how do we deal with the loss of demand for human work? Now, these questions all lie at the intersection of HCI, artificial intelligence, and philosophy in general, but there are some more concrete questions we can answer as well. How do we pragmatically equip robots with the ability to naturally interact with humans based on things like voice and touch? How do we provide tacit, subtle feedback to humans interacting with robots to confirm their input is being received and properly understood? How do we support humans in teaching things to robots instead of just programming them, or alternatively, can we create robots that can teach things to humans? We already see robotics advances applied to things like health care and disability services, and I'm really excited to see where you take it next.

1.3.6 - Technology: Mobile

One of the biggest changes to computing over the past several years has been the incredible growth of mobile as a computing platform. We really live in a mobile-first world, and that introduces some significant design challenges. Screen real estate is now far more limited. The input methods are less precise, and the user is distracted. But mobile computing also presents some really big opportunities for HCI. Thanks in large part to mobile, we're no longer interested just in a person sitting in front of a computer. With mobile phones, most people have a computer with them at all times anyway. We can use that to support experiences from navigation to stargazing. Mobile computing is deeply related to fields like context-aware computing, ubiquitous computing, and augmented reality, as it possesses the hardware necessary to complement those efforts. But even on its own, mobile computing presents some fascinating challenges to address. For me, the big one is that we haven't yet reached a point where we can use mobile phones for all the tasks we do on computers. Smartphones are great for social networking, personal organization, games, and lots of other things. But we haven't yet reached a point where the majority of people would sit down to write an essay or do some programming on smartphones. Why haven't we? What do we need to do to make smartphones into true replacements for traditional desktop and laptop computers?

1.3.7 - Idea: Context-Sensitive Computing

What time is it? You can go ahead and go to lunch. Did that exchange make any sense? I asked Amanda for the time and she replied by saying I can go ahead and go get lunch. That exchange seems completely nonsensical, and yet hearing it, you may have filled in the context that makes this conversation logical. You might think that I asked a while ago what time we were breaking for lunch, or maybe I mentioned that I forgot to eat breakfast. Amanda would have that context and she'd use it to understand why I'm probably asking for the time. Context is a fundamental part of the way humans interact with other humans. Some lessons we'll talk about even suggest that we are completely incapable of interacting without context. If context is such a pervasive part of the way humans communicate, then to build good interfaces between humans and computers, we must equip computers with some understanding of context. That's where context-sensitive computing comes in. Context-sensitive computing attempts to give computer interfaces the contextual knowledge that humans have in their everyday lives. For example, I use my mobile phone differently depending on whether I'm sitting on the couch at home, or using it in my car, or walking around on the sidewalk. Imagine I didn't have to deliberately inform my phone of what mode I was in, though. Imagine if it just detected that I was in my car and automatically brought up Google Maps and Audible for me. Services have started to emerge to provide this, but there's an enormous amount of research to be done on context-sensitive computing, especially as it relates to things like wearables, augmented reality, and ubiquitous computing.
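To make that idea a little more concrete, here is a minimal sketch of context-sensitive behavior. The context signals (a car Bluetooth connection, a GPS speed reading) and the suggested app names are hypothetical stand-ins for illustration, not any real phone API.

```python
# A minimal sketch of context-sensitive computing: the interface adapts to an
# inferred context instead of waiting for the user to switch modes manually.
# The signals and app names below are hypothetical, for illustration only.

def infer_context(connected_to_car_bluetooth: bool, speed_mph: float) -> str:
    """Guess the user's current context from a couple of simple signals."""
    if connected_to_car_bluetooth or speed_mph > 20:
        return "driving"
    if speed_mph > 3:
        return "walking"
    return "stationary"

def suggest_home_screen(context: str) -> list:
    """Pick which apps to surface, given the inferred context."""
    suggestions = {
        "driving": ["Maps", "Audiobooks"],      # hands-free, glanceable tasks
        "walking": ["Music", "Step Tracker"],
        "stationary": ["Email", "News"],
    }
    return suggestions.get(context, ["Email", "News"])

if __name__ == "__main__":
    context = infer_context(connected_to_car_bluetooth=True, speed_mph=35.0)
    print(context, "->", suggest_home_screen(context))  # driving -> ['Maps', 'Audiobooks']
```

The point of the sketch is the shift in responsibility: the user never tells the system what mode they are in; the system infers it and adapts the interface accordingly.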

1.3.8 - Idea: Gesture-Based Interaction

As this course goes on, you'll find that I'm on camera more often than you might be accustomed to seeing in any Udacity course. Around half this course actually takes place with me on camera, and there are a couple of reasons for that. The big one is that this is human-computer interaction, so it makes sense to put a strong emphasis on the human. But another big one is that when I'm on camera I can express myself through gestures instead of just through words and voice intonations. I can, for example, make a fist and really drive home and emphasize a point. I can explain that a topic applies to a very narrow portion of the field or a very wide portion of the field. We communicate naturally with gestures every day. In fact, we even have an entire language built out of gestures. So, wouldn't it be great if our computers could interpret our gestures as well? That's the emerging field of gesture-based interaction. You've seen this with things like the Microsoft Kinect, which has far-ranging applications from health care to gaming. We've started to see some applications of gesture-based interaction on the go as well, with wristbands that react to certain hand motions. Gesture-based interaction has enormous potential. The fingers have some of the finest muscle movements, meaning that a system based on finger movements could support an incredible number of interactions. We might see a day when it's possible to type invisibly in the air in front of you based on the system's recognition of the movement of the muscles in your wrist. That might finally allow mobile devices to displace traditional computers altogether.

1.3.9 - Idea: Pen- and Touch-Based Interaction

I always find it interesting how certain technologies seem to come around full circle. For centuries we only interacted directly with the things that we built, and then computers came along. And suddenly we needed interfaces between us and our tasks. Now, computers are trying to actively capture natural ways we've always interacted. Almost every computer I encounter nowadays has a touch screen. That's a powerful technique for creating simple user interfaces because it shortens the distance between the user and the tasks they're trying to accomplish. Think about someone using a mouse for the first time. He might need to look back and forth from the screen to the mouse to see how interacting down here changes things he sees up here. With a touch-based interface, he interacts the same way he uses things in the real world around him. A challenge can sometimes be a lack of precision, but to make up for that we've also created pen-based interaction. Just like a person can use a pen on paper, they can also use a pen on a touch screen. And in fact, you might be quite familiar with that, because most Udacity courses use exactly that technology. They record someone writing on a screen. That gives us the precision necessary to interact very delicately and specifically with our task. And as a result, tablet-based interaction methods have been used in fields like art and music. Most comics you find on the internet are actually drawn exactly like this, combining the precision of human fingers with the power of computation.

1.3.10 - Idea: Information Visualization

One of the biggest trends of the information age is the incredible availability of data. Scientists and researchers use data science and machine learning to look at lots of data and draw conclusions. But oftentimes, these conclusions are only useful if we can turn around and communicate them to ordinary people. That's where information visualization comes in. Now, at first glance you might not think of data visualization as an example of HCI. After all, I could draw a data visualization on a napkin and print it in a newspaper, and there's no computer involved anywhere in that process. But computers give us a powerful way to re-represent data in complex, animated, and interactive ways. We'll put links to some excellent examples in the notes. Now, what's particularly notable about data visualization in HCI is the degree to which it fits perfectly with our methodologies for designing good interfaces. One goal of a good interface is to match the user's mental model to the reality of the task at hand, in the same way the goal of information visualization is to match the reader's mental model of the phenomenon to the reality of it. So the same principles we discussed for designing good representations apply directly to designing good visualizations. After all, a visualization is just a representation of data.

1.3.11 - Idea: CSCW

CSCW stands for computer-supported cooperative work. The field is just what the name says: how do we use computers to support people working together? You're watching this course online, so this experience is probably a familiar one. Maybe you've worked on a group project with a geographically distributed group. Maybe you've had a job working remotely. Distributed teams are one example of CSCW in action, but there are others. The community often breaks things down into two dimensions, time and place. We can think of designs in terms of whether we're designing for users in the same time and place, or for users at different times and in different places. This course is an example of designing for different time and different place. You're watching this long after I've recorded it, likely from far away from our studio. Workplace chat utilities, like Slack and HipChat, would be examples of same time, different place. They allow people to communicate instantly across space, mimicking the real-time office experience. Now, imagine a kiosk at a museum that asks visitors to enter their location to create a map of where everyone comes from. That would be different time, same place. Everyone uses the interface at the same place, but across time. Even when we're in the same time and place, computers can still support cooperation. In fact, right now, Amanda's running our camera, Ben's running the teleprompter, and I'm standing up here talking at you. These different computers are supporting us in cooperating to create this course. So, we can often think of CSCW as mediating cooperation across traditional geographic or temporal borders, but it can also help us with co-located, simultaneous cooperation.
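The time/place matrix described above can be summarized in a tiny sketch. The entries are simply the examples named in the transcript placed into the two-by-two grid; nothing here is a real CSCW library, just a lookup table for illustration.

```python
# The classic CSCW time/place matrix as a small lookup table.
# Keys are (same_time, same_place); values are example forms of cooperation.
CSCW_MATRIX = {
    (True,  True):  "Co-located, synchronous: a studio crew running camera and teleprompter together",
    (True,  False): "Synchronous, distributed: workplace chat like Slack or HipChat",
    (False, True):  "Asynchronous, co-located: a museum kiosk used by visitors across the day",
    (False, False): "Asynchronous, distributed: a recorded online course watched later, far away",
}

def classify(same_time: bool, same_place: bool) -> str:
    """Look up which quadrant of the matrix a cooperative activity falls into."""
    return CSCW_MATRIX[(same_time, same_place)]

print(classify(same_time=True, same_place=False))
```

The value of the matrix is mostly as a design prompt: deciding up front which quadrant you are designing for changes what the interface needs to support.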

1.3.12 - Idea: Social Computing

Social computing is the portion of HCI that's interested in how computers affect the way we interact and socialize. One thing that falls under this umbrella is the idea of re-creating social norms within computational systems. So, for example, when you chat online, you might often use emojis or emoticons. Those are virtual re-creations of some of the tacit interaction we have with each other on a day-to-day basis. So, for example, these all take on different meanings depending on the emoticon provided. Social computing is interested in a lot more than just emojis, of course, from online gaming and Wikipedia to social media to dating websites. Social computing is really interested in all areas where computing intersects with our social lives.

1.3.13 - Domain: Special Needs

One of the most exciting application areas for HCI is in helping people with special needs. Computing can help us compensate for disabilities, injuries, and aging. Think of a robotic prosthetic, for example. Of course, part of that is engineering, part of it is neuroscience. But it's also important to understand how the person intends to use such a limb and the tasks they need to perform. That's HCI intersecting with robotics. Or take another example from some work done here at Georgia Tech by Bruce Walker. How do you communicate data to a blind person? We've talked about information visualization, but if it's a visualization, it's leaving out a significant portion of the population. So, Dr. Walker's Sonification Lab works on communicating data using sound. A lot of the emerging areas of HCI technology could have extraordinary significance to people with special needs. Imagine virtual reality for people suffering from some form of paralysis, or imagine using artificial intelligence with context-aware computing to create an autonomous wheelchair. These projects would only target a small portion of the population. But the impact on that portion would be absolutely indescribable.

1.3.14 - Domain: Education

Hi, and welcome to Educational Technology. My name is David Joyner, and I'm thrilled to bring you this course. As you might guess, education is one of my favorite application areas of HCI. In fact, as I'm recording this, I've been teaching educational technology at Georgia Tech for about a year, and a huge portion of designing educational technology is really just straightforward HCI. But education puts some unique twists on the HCI process. Most fascinatingly, education is an area where you might not always want to make things as easy as possible. You might use HCI to introduce some desirable difficulties, some learning experiences for students. But it's important to ensure that the cognitive load students experience during a learning task is based on the material itself, not based on trying to figure out our interfaces. The worst thing you can do in HCI for education is raise the student's cognitive load because they're too busy thinking about your interface instead of the subject matter itself. Lots of very noble efforts in designing technology for education have failed due to poor HCI. So, if you're interested in going into educational technology, you'll find a lot of valuable lessons in human-computer interaction.

1.3.15 - Domain: Healthcare

A lot of current efforts in healthcare are about processing the massive quantities of data that are recorded every day. But in order to make that data useful, it has to connect to real people at some point. Maybe it's equipping doctors with tools to more easily visually evaluate and compare different diagnoses. Maybe it's giving patients the tools necessary to monitor their own health and treatment options. Maybe that's information visualization so patients can understand how certain decisions affect their well-being. Maybe it's context-aware computing that can detect when patients are about to do something they probably shouldn't do. There are also numerous applications of HCI to personal health, like Fitbit for exercise monitoring or MyFitnessPal for tracking your diet. Those interfaces succeed if they're easily usable for users. Ideally, they'd be almost invisible. But perhaps the most fascinating upcoming intersection of HCI and healthcare is in virtual reality. Virtual reality exercise programs are already pretty common to make living an active lifestyle more fun, but what about virtual reality for therapy? That's actually already happening. We can use virtual reality to help people confront fears and anxieties in a safe, but highly authentic place. Healthcare in general is concerned with the health of humans, and computers are pretty commonly used in modern healthcare. So the applications of human-computer interaction to healthcare are really huge.

1.3.16 - Domain: Security

Classes on network security are often most concerned with the algorithms and encryption methods that must be safeguarded to ensure secure communications. But the most secure communication strategies in the world are weakened if people just refuse to use them. And historically, we've found people have very little patience for instances where security measures get in the way of them doing their tasks. For security to be useful, it has to be usable. If it isn't usable, people just won't use it. HCI can increase the usability of security in a number of ways. For one, it can make those actions simply easier to perform. CAPTCHAs are forms that are meant to ensure users are humans. They used to involve recognizing letters in complex images, but now they're often as simple as a check box. The computer recognizes human-like mouse movements and uses that to evaluate whether the user is a human. That makes it much less frustrating to participate in that security activity. But HCI can also make security more usable by visualizing and communicating the need. Many people get frustrated when systems require passwords that meet certain standards of complexity, but that's because it seems arbitrary. If the system instead expresses to the user the rationale behind the requirement, the requirement can be much less frustrating. I've even seen a password form that treats password selection like a game where you're ranked against others for how difficult your password would be to guess. That's a way to incentivize strong password selection, making security more usable.
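As a concrete illustration of communicating the rationale rather than just rejecting a password, here is a minimal sketch of a checker that pairs each unmet requirement with an explanation. The specific rules and wording are assumptions for illustration, not any real system's policy.

```python
import re

# A minimal sketch of usable security: instead of a bare "password rejected",
# pair each requirement with the reason it exists, so the rule feels less arbitrary.
# The rules and explanations below are illustrative assumptions, not a real policy.
REQUIREMENTS = [
    (lambda p: len(p) >= 12,
     "At least 12 characters: length is the biggest factor in how long a password takes to guess."),
    (lambda p: re.search(r"\d", p) is not None,
     "At least one digit: mixing character types enlarges the space an attacker must search."),
    (lambda p: p.lower() not in {"password", "letmein", "qwerty123"},
     "Not a common password: attackers try lists of popular passwords first."),
]

def explain_password_feedback(password: str) -> list:
    """Return the rationale for every requirement the password does not yet meet."""
    return [reason for check, reason in REQUIREMENTS if not check(password)]

if __name__ == "__main__":
    for message in explain_password_feedback("password"):
        print("-", message)
```

The design choice being illustrated is simply that feedback which explains why a rule exists tends to feel less arbitrary than feedback which only states that the rule was violated.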

1.3.17 - Domain: Games

Video games are one of the purest examples of HCI. They're actually a great place to study HCI, because so many of the topics we discuss are so salient. For example, we discussed the need for logical mapping between actions and effects. A good game exemplifies that. The actions that the user takes with the controller should feel like they're actually interacting within the game world. We discussed the power of feedback cycles. Video games are near-constant feedback cycles as the user performs actions, evaluates the results, and adjusts accordingly. In fact, if you read through video game reviews, you'll find that many of the criticisms are actually criticisms of bad HCI. The controls are tough to use, it's hard to figure out what happened, the penalty for failure is too low or too high. All of these are examples of poor interface design. In gaming, though, there's such a tight connection between the task and the interface that frustrations with a task can help us quickly identify problems with the interface.

1.3.18 - Reflections: Exploring HCI

Throughout our conversations, we're going to explore some of the fundamental principles and methods of HCI. Depending on the curriculum surrounding this material, you'll complete assignments, projects, exams, and other assessments in some of these design areas. However, we'd also like you to apply what you learn to an area of your choice. So, pick an area, either one we've mentioned here or one you'd like to know about separately, and keep it in mind as we go through the course. Our hope is that by the end of the course, you'll be able to apply what you learn here to the area in which you're interested in working.

1.3.19 - Conclusion to Exploring HCI

In this lesson, our goal has been to give you an overview of the exciting, expansive, ongoing HCI research and development. We encourage you to select a topic you find interesting, read about it a little bit more, and think about it as you go through the course. Then, in unit four, we'll provide some additional readings and materials on many of these topics for you to peruse. In fact, you can feel free to jump ahead to there now as well. But before we get too far into what we want to design, we first must cover the fundamental principles and methods of HCI.