Public notes for CS6750 - HCI Spring 2022 at Georgia Tech

Public webpage for sharing information about Dr. Joyner's CS6750 - Human Computer Interaction course in Spring 2022.


Unit 5: Conclusion

5.1 Course Recap

5.1.1 - Introduction

We’ve reached the end of our HCI course. To close things off, let’s briefly recap what we’ve covered. The purpose of this lesson isn’t just to give you an inventory of the course content. The real reason for this lesson is to give us an excuse to have this cool inception effect over here. No, really, it’s to repeat all that content again to load it into your working memory one more time. After all, we know that the more often you load some content into short-term memory, the more strongly it remains in long-term memory. That was a principle we learned in the lesson on human abilities, and it’s why we start and end every lesson by repeating the material that we covered. Each repeat loads it into your short-term memory one more time, further solidifying it. So, to close this course, we’ll do this one more time, although we hope you’ll come back again and again to watch some of the material whenever you need a refresher.

5.1.2 - Recap: HCI Principles

Our first unit was the unit on design principles. To start that unit off, we began by investigating the process of identifying a task. We discussed how a task is not just the actions that a user performs, but the combination of their motivations, their goals, their context, and so on. We emphasized that we’re not just interface designers, we are task designers. We design tasks and then we create interfaces that make those tasks possible. We then explored three views on the user’s role in a particular task. We might view them as an information processor, like another computer in the system. We might view them as a predictor or interpreter, someone operating a mental model of the system. We might view them as a participant, someone working in a larger context beyond just our interface. And then, finally, we discussed how the views we take inform the designs we create.

5.1.3 - Recap: Feedback Cycle

We started the first unit by discussing feedback cycles. Feedback cycles, as we discussed, are ubiquitous in basically all areas of life. They’re how we learn and adapt to our environments, and they’re also how we learn and adapt to the interfaces available to us. We then described the two parts of feedback cycles: gulfs of execution covered how users go from personal goals to external actions, and gulfs of evaluation covered how users then evaluate whether the results of those actions met their original goals. We can describe a lot of HCI as designing ways to bridge these two gulfs, helping users more easily accomplish their goals and helping users more quickly identify when their goals have been accomplished.

5.1.4 - Recap: Direct Manipulation and Invisible Interfaces

We then moved on to direct manipulation, which was one way to create interfaces with very short gulfs of execution and evaluation. Direct manipulation involved creating interfaces where the user felt like they were directly interacting with the object of the task. Instead of typing commands or selecting operators, they would participate more physically with the interface. The goal of this, and really of any good interface design, was to create interfaces that become invisible. Invisible interfaces are those that allow the user to focus completely on the task instead of on the interface. We noted that nearly any interface can become invisible once the user has enough practice and expertise, but our goal is to create interfaces that vanish sooner through good design.

5.1.5 - Recap: Human Abilities

We’re discussing human-computer interaction, and that means we have to understand the human portion of the equation. So, we also took a crash course through some basic psychology. We broke the human down into three systems: perception, cognition, and the motor system. With perception, we covered the strengths and limitations of the visual, auditory, and kinesthetic senses. We discussed how each of them can be useful for different kinds of information. Then, we discussed some of the limitations of human cognition. We focused a lot on memory: how many things we can store at a time and how things can be stored more permanently. We also discussed the notion of cognitive load. We focused especially on how we should use our interfaces to reduce the user’s cognitive load. Finally, we discussed the limitations of the human motor system, especially how those limitations change with age or in the presence of certain distractions. We’re designing for humans, so these limitations and advantages are key to how we design our interfaces.

5.1.6 - Recap: Design Principles and Heuristics

Human-computer interaction has a long and rich history, initially drawn from human factors engineering before becoming a field of its own. During that history, it’s developed lots of principles and heuristics for how to design good interfaces, lots and lots and lots in fact, literally thousands of principles. In this lesson, we covered 15 of the most significant ones, drawn from four sets of design principles: those of Don Norman, Jakob Nielsen, Larry Constantine and Lucy Lockwood, and the Center for Universal Design. Among these, I would probably argue that the most significant principles to remember were affordances, mappings, and constraints. Affordances are parts of the interface that, by their very design, tell the user what they should do. A good mapping then tells the user what the effect of that interaction will likely be, and constraints ensure that the user only chooses to do the correct things. With those three combined, as well as the other heuristics that we covered, we can create interfaces that vanish between the user and the task very quickly.

5.1.7 - Recap: Mental Models and Representations

Every one of our users has some mental understanding of their task, as well as where our interface fits into that task. We call that their mental model. They use that mental model to simulate and predict the effects of certain actions; that’s why we call this the predictor model of the user. Our goal is for the user’s mental model to match the reality of the task and the interface. To accomplish that, we try to design representations with clear mappings to the underlying task. That’s how we can ensure that the user’s mental model of the system is both accurate and useful. Here we discussed mistakes, which are errors that occur based on inaccurate mental models, and slips, which are errors that occur despite accurate mental models. We also talked about a couple of the challenges that can arise in trying to help users build accurate mental models. One of those was expert blind spot, which occurs when we lose sight of our own expertise and forget what it’s like to be a novice. The other was learned helplessness, which is when users learn that they have no real ability to accomplish their goals because of a broken feedback cycle. In designing representations that lead to accurate mental models, we need to make sure to avoid both of these problems.

5.1.8 - Recap: Task Analysis

We’ve discussed repeatedly that HCI is in large part about understanding tasks. As designers, we design tasks that feature interfaces, not just interfaces alone. To accomplish that, it’s important to have a very clear understanding of the task for which we’re designing. So, toward that end, we discussed task analysis. Task analyses are ways of breaking down tasks into formal workflows to aid the design process. We covered two general kinds of task analyses. We talked about information processor models like the GOMS model, which focuses on the user’s goals, operators, and methods; those are primarily concerned with what we can observe. We also talked about cognitive task analysis, which tries to get inside the user’s head so we can understand the user’s thought process during the task as well. Both of these approaches are valuable for designing good tasks with usable interfaces.
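As a small illustrative aside (not part of the original lecture), a GOMS-style breakdown can be sketched as a simple data structure: a goal achieved by alternative methods, each method a sequence of observable operators, with a selection rule choosing between them. The task, goal, method, and operator names in this sketch are hypothetical examples, not anything prescribed by the course or the GOMS literature.

```python
# Illustrative sketch only: a toy GOMS-style breakdown of a hypothetical
# "send a status update email" task. All names here are invented for this
# example and are not part of the course material.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Method:
    name: str              # one way of achieving a goal
    operators: List[str]   # observable, low-level actions


@dataclass
class Goal:
    name: str                                         # what the user is trying to accomplish
    methods: List[Method] = field(default_factory=list)
    selection_rule: str = ""                           # when to prefer one method over another


send_email = Goal(
    name="Send a status update email",
    methods=[
        Method("keyboard method",
               ["press Ctrl+N", "type recipient", "type body", "press Ctrl+Enter"]),
        Method("mouse method",
               ["click Compose", "click the To field", "type recipient",
                "type body", "click Send"]),
    ],
    selection_rule="Prefer the keyboard method when hands are already on the keyboard.",
)

# A GOMS analysis stays at the level of observable goals, operators, and
# methods, which is exactly the shape this structure captures.
for method in send_email.methods:
    print(f"{method.name}: {', '.join(method.operators)}")
```

The point of the sketch is only to show that GOMS describes what we can observe about a task, in contrast with cognitive task analysis, which also tries to capture what the user is thinking.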

5.1.9 - Recap: Distributed Cognition

Earlier, we discussed the idea of cognitive load. Cognitive load was the principle that humans have a set amount of cognitive resources, and if those resources are overloaded, their performance suffers and they get frustrated. So, how do we reduce the user’s cognitive load? Well, we can make the task easier, sure, but we can also add to their cognitive resources. That’s the principle of distributed cognition: the interactions of humans and artifacts together have more cognitive resources than individuals acting alone. Devices and interfaces can exhibit cognitive properties like memory and reasoning, offloading those needs from the user. We also discussed three related theories: social cognition, situated action, and activity theory. All three of these put a strong emphasis on the context of a task, whether it be the physical context, the social context, or the societal context.

5.1.10 - Recap: HCI Methods

The Design Principles unit of the course covered the fundamental principles and ideas developed over decades of work in this space, but we can’t create good interfaces just by applying old principles to new problems. Those old principles can help us make progress much faster, but to design good interfaces, we have to involve the user. That’s perhaps the most important principle of HCI: user-centered design. User-centered design advocates keeping the user at the heart of all of our design activities. For us, that isn’t just the person using the tool, but also all the people affected by the tool’s very existence. So, to keep the user in mind, we use an iterative design life cycle that focuses on getting feedback from the user early and often. The methods of that life cycle were the core of the Methods unit of this course.

5.1.11 - Recap: Ethics and Human Research

When we’re doing research in HCI, we have access to some pretty sensitive personal data about our participants. There are huge ethical considerations about privacy and coercion that we have to keep in mind when participating in the design lifecycle. So, we discussed the role of the institutional review board, or IRB, for university research. IRBs oversee studies and make sure we’re preserving our participants’ rights. They also make sure that the benefits of our research outweigh the risks, and as part of that, they help ensure our methods are sound enough to have benefits in the first place. We also discussed how industry doesn’t have the same kind of oversight, but some companies have partnered with universities to participate in their IRBs, while other companies have formed their own internal IRBs. All of this is driven by the need to preserve the rights of our users; that’s a key part of user-centered design.

5.1.12 - Recap: Needfinding and Requirements Gathering

The first stage of the design life cycle was need-finding. Need-finding was how we developed a keen understanding of the needs of our users. One of the biggest mistakes a designer can make is assuming they already understand the user and the task before ever interacting with them. There are several questions about the user we need to answer before we’re ready to start designing. So, to get a good understanding of the user and the task, we discussed several methods. We might start with methods that have little direct interaction with the users, like watching them in the wild or trying out the task ourselves. We might use those to inform some more targeted need-finding exercises like interviews, focus groups, and surveys. By combining multiple need-finding approaches, we can build a strong model of the user and the task that will help us design usable interfaces.

5.1.13 - Recap: Design Alternatives

Once we have a solid understanding of the user and the task, we want to start brainstorming possible designs. The important thing here is to make sure we don’t get fixated on one idea too early. That sort of tunnel vision risks missing lots of fantastic ideas. So, we want to make sure to engage in a well-defined brainstorming process. I personally recommend that we start with individual brainstorming, and then set up a group brainstorming session that ensures everyone’s individual ideas get heard. From there, we proposed some different ways to explore those design alternatives through methods like personas, scenarios, and timelines. Our end goal here was to arrive at a set of designs worth moving on to the prototyping stage.

5.1.14 - Recap: Prototyping

In user-centered design, our goal is to get user feedback early and often. So, once we have some design alternatives, our goal is to get them in front of users as quickly as possible. That’s the prototyping stage, where we take those design alternatives and build prototypes we can show to actual users. At first, these prototypes might be very low fidelity. We might start just by describing our ideas or drawing them on paper; those are verbal or paper prototypes. We want to keep our designs easy to revise. We might even revise them live while working with users. As we get more and more feedback, we build higher fidelity prototypes to explore more detailed questions about our designs. We might use wireframes or set up live simulated demos. At every stage of the process, we design our prototypes in a way that gets the user’s feedback and informs the next iteration of our design life cycle.

5.1.15 - Recap: Evaluation

The goal of the design lifecycle is to get frequent user feedback, or to put it differently, to frequently evaluate our interface ideas. Frequent, rapid feedback cycles are important for users of our interface, but they’re also important to us as designers of interfaces. So, that is where evaluation comes into play. Once we’ve designed an interface, whether it’s just an idea in our head or a full-fledged working version, it’s time to evaluate it with users. Early on, that may be qualitative evaluation to get the full picture of the user experience. Later on, that might be empirical evaluation to more formally capture the results of the interface. Along the way, we might also employ predictive evaluation to try to anticipate how users will react to our designs. These methods are the foundation of user-centered design: design that features frequent user evaluation of our ideas.

5.1.16 - Recap: HCI and Agile Development

Traditional HCI can be a slow, deliberate process built on constant interaction with users as we slowly ramp up the fidelity of our prototypes. In the past, that was because every phase of the process was very expensive, from development to distribution to evaluation to updating. But now, in some contexts, those steps have become much, much cheaper. Some web development can be done with simple drag-and-drop interfaces. We can now distribute applications to millions of users essentially for free. We can then pull back enormous quantities of live data from them and push updates to all of them in real time. Given all that, sometimes it might be more prudent to put out working versions quickly to start getting real user data faster. So, we also discussed agile methods for development while keeping an eye on the design life cycle. These methods aren’t appropriate for all areas, especially those with a high cost of failure, but in many places they can really increase the iteration speed of our design lifecycle.

5.1.17 - Exploring HCI: Recap

Over the course of our conversations, I’ve asked you to revisit the area of HCI in which you’re most interested during each topic. I’ve asked you to brainstorm how the various design principles and theories apply to the area that you chose. I’ve asked you to think of how a design lifecycle might be created that addresses your chosen application area. We’ve also given you lots of information to read about your chosen topic. You have all the tools necessary to start developing. I’m looking forward to seeing what you come up with.

5.1.18 - Conclusion

One of the famous models for communicating is: tell them what you’re going to tell them, tell them, then tell them what you told them. That’s what we’ve tried to do here at several different levels of abstraction. At the beginning of the course, we told you the overall structure of the course. Within each unit, we outlined the content of that unit. Within each lesson, we previewed the content of that particular lesson, and then we delivered the content. Then, we summarized each lesson, and we summarized each unit. Now, we’ve summarized the course as a whole. So, we’re done now, right? Not quite. There are two other useful things to cover: the fields closely related to HCI, and where you might want to go next.

5.2 Related Fields

5.2.1 - Introduction

At the beginning of our conversations, we talked about how HCI is part of a broader hierarchy of fields. It draws a lot from human factors engineering, and in fact, in many ways, it’s human factors engineering applied specifically to software. It also has numerous subfields, like user interface design, user experience design, computer-supported cooperative work, and more. In this lesson, we want to let you know where you might want to go next in your exploration of HCI, in terms of subject matter. Note that these are different from the areas of application of HCI. When we talk about things like virtual reality or educational technology, we’re describing fields to which HCI applies. Here, we’re talking about HCI subfields themselves.

5.2.2 - Human Factors and Industrial Design

Human-computer interaction is concerned with the interaction between users and tasks as mediated by things with computing power. Nowadays that’s more and more common, but it’s a relatively recent phenomenon that things like watches and cameras became examples of computers. Where a device doesn’t have computational power behind it, though, there are still lots of design considerations to make. In fact, many of our principles and many of our methods apply just as well to non-computational interfaces. What makes human factors engineering particularly interesting is that it deals with more constraints, physical constraints. Things like the height of the user or the size of a hand come up in human factors engineering. What’s interesting, though, is that as many devices start to have computational resources added to them, human factors engineering and human-computer interaction start to interact more and more. My watch, for example, has no computational resources; it’s completely within the human factors area. But smartwatches see some interesting interactions between human factors and HCI. Human factors determines the size of the watch, which determines the size of the battery or the sensors that can be placed inside. Those things then influence what we can do from the HCI perspective. So, if you’re dealing with anything related to ubiquitous computing, wearable devices, or contextual computing, human factors engineering is a great place to start, and we’ll throw you some resources in the notes below.

5.2.3 - User Interface Design

Likely the most significant subfield of HCI is user interface design. Colloquially, user interface design most often refers to design between a user and some rectangular screen, be it a traditional computer, a tablet, a laptop, a smartphone, and so on. For a long time, user interface design and HCI were actually pretty much synonymous, because the vast majority of interaction between users and computers happened via a rectangular on-screen interface connected to a mouse and keyboard. It’s relatively recent that we’ve started to see interaction break out of that literal box, and to a certain extent, the term UI design, or user interface design, captures this as well. Interfaces don’t have to be screens. Colloquially, though, I find most classes on user interface design focus on designing screens and interacting with screens. That’s a massive and well-developed field. In fact, a lot of this course’s material comes from the user interface design space. There are also some narrower things that user interface design is concerned with, though. UI design has its own set of design principles that apply more narrowly to the design of traditional computer software or websites. Some of these principles guide how people visually group things together, and these are called Gestalt grouping principles. When things are close together, we mentally put them into groups: in the on-screen example, you likely see two groups of three and one group of six. When there are implicit lines, we see a continuation: you likely see a cube, even though the lines are broken. We also group things together based on similarity: you probably see four blue squares, two gray diamonds, and three orange circles. And while at first you see a three-by-three grid of green circles, watch what happens when they move: even after they stop moving, you likely still see the diamond that was formed by those moving circles. These Gestalt principles in some ways underlie user interface design’s emphasis on designing with good grids in mind, just the way magazines and newspapers have done for centuries. We’ve already discussed this example a lot, and part of it is because this new interface leverages the analogy to the old interface. But its value isn’t just in the analogy; its value is also in the way it instantiates the same Gestalt principles that guided the layout of a newspaper. Finally, user interface design touches on one of my favorite topics, typography. Typography is often covered in user interface design courses. So, generally speaking, while in HCI we’ve taken special care to talk about general methods that deal with any kind of computer interface between users and tasks, user interface design zooms more closely in on the design of interfaces on screens. If you want to study user interface design some more, we’ll add some links to related courses and materials online in the notes below.

5.2.4 - User Experience Design

For a lot of people, HCI and user experience design are essentially the same thing. For me, though, UX design is a little bit more prescriptive while HCI is a little bit more descriptive. HCI describes how people learn and interpret interfaces, while UX design prescribes what you should do with that information. But in many ways, UX design is just applied HCI: the content is the same; the question is how you use it. If you choose to continue your studies in UX design, though, there are a few new things you’ll encounter. You’ll see an increased focus on the mood and the joy of the user; we want them to have a great experience, not just accomplish a task. You’ll see more attention paid to the user’s motivations behind engaging with the interface. You’ll also see a greater emphasis on the interaction between the user, the interface, and the contexts all around them, as these are all parts of the user experience. These are all things we’ve talked about, but user experience design gets into prescribing them with more specificity. So, if you want some more information on user experience design, we’ll put some resources in the notes below.

5.2.5 - HCI & Psychology

Early in our conversations, we described how HCI is generally about an interaction between design and research. We use research findings to inform our designs, and we use the success of our designs to create new research findings. As you explore more HCI, though, you’ll find that there’s plenty of room to specialize in one side or the other. Professional designers focus almost exclusively on the creation side, but there are lots of people who focus exclusively on the research side of HCI. Many universities, like Georgia Tech and Carnegie Mellon, have research faculty dedicated to understanding the way people interact with technology at individual, group, and societal levels. So, if you’re more interested in understanding how people interact with interfaces than in designing new ones, you might be interested in taking more of a research bent on HCI. This class is actually built from the research perspective more than the designer perspective, so you’ve already got a great foundation. We’ll add some links below if you want to explore some more.

5.2.6 - Human-Centered Computing

HCI research broadens to a field called human-centered computing. While much of HCI research is concerned with the immediate interactions between people and computers, human-centered computing is interested more generally in how computers and humans affect each other at the societal level. There’s a lot of really fascinating research going on in this area. Some of the questions people are addressing are: How did participants in the Arab Spring use tools like Facebook and Twitter to coordinate? How can information visualizations be employed to help people better understand their diet and their energy usage? How does access to computing resources influence early childhood education? Now, notice that these issues don’t necessarily involve designing new interfaces or creating new tools. They involve looking at the way people interact with computers more generally, and not just specific tools, but the ubiquity of technology as a whole. So, if you find this interesting, we’ll put some more materials in the notes below.

5.2.7 - Cognitive Science

When we described mental models, we were actually briefly touching on a deep body of literature from the cognitive science field. Cognitive science is the general study of human thought, mental organization, and memory. Now, cognitive science isn’t a subfield of HCI, but HCI informs a good portion of cognitive science research. That’s because HCI gives us a way to explore aspects of human cognition. We can design interfaces that assume humans think or process in a certain way, and we can use the results of those interfaces to further develop our theories. In this way, HCI is a probe into human thought that informs the development of cognitive science, and cognitive science, in turn, gives us theories on human abilities that inform the interfaces that we design. So, if you’re interested in studying people more closely using HCI as a probe or a tool, we’ll give you some more resources to explore.

5.2.8 - Computer-Supported Collaboration

When we discussed feedback cycles, we mentioned that the user experience applies not only at the individual level but also at the group level. Distributed cognition, too, was interested in how interactions between people can be mediated by interfaces, and how the output and knowledge of those interactions can’t be attributed narrowly to one particular part of the system, but rather to the system as a whole. The ubiquity of human interaction and the potential of computers to mediate interactions between people give rise to fields that investigate collaboration across interfaces. These fields ask questions like: how can computers be used to allow people to collaborate across distance and across time, and how can computers be used to enhance the collaboration of people working together in the same place at the same time? These fields look at how computers can support things like cooperative work and collaborative learning. For example, how does Wikipedia enable people across enormous variations in location and time to work together to capture knowledge? How do online courses allow teachers and students to interact and learn asynchronously across distances? How can computers be used to facilitate conversations between people with different backgrounds, expertise, and even languages? These are pretty well-developed fields, so if you’d like to learn more, we’ll put some more information in the notes below.

5.2.9 - Intelligent User Interfaces

To close out, my work is at the intersection of artificial intelligence, human-computer interaction, and education. My research is largely on how to use AI to create valuable learning experiences. Setting aside the education part for a second, though, there’s a rich interaction between artificial intelligence and human-computer interaction in the form of what we call intelligent user interfaces. This field looks at how we can apply AI techniques to adapting user interfaces to their users. Now, an infamous example of this is Clippy, the Microsoft Office assistant. He tried to infer what you were working on and give you in-context feedback on it. Intelligent user interfaces have come a long way since then, though. Google Now, for example, is consistently trying to learn from your routine and give you information when you need it. One of my favorite experiences with intelligent user interfaces came from the interaction between Google Maps, Gmail, and Google Calendar. Google Calendar had automatically imported a restaurant reservation I had made from Gmail, along with the location of the restaurant. Then Google Maps, knowing where I was, detected that there was unusual traffic between me and the reservation, and it buzzed me to let me know when to leave to arrive on time. I hadn’t checked traffic, but I was on time for my reservation because of the intelligence of that user interface. It knew what I needed to know and when I needed to know it. So, if you’d like to hear more about the overlap between artificial intelligence, human-computer interaction, and intelligent user interfaces, we’ll put some information in the notes below.

5.2.10 - Conclusion

Human-computer interaction is a massive field with lots of subdomains. This course has covered a combination of the fundamental methods and principles of HCI, but there are lots of directions you could go from here. In this lesson, I’ve attempted to give you some ideas of where you might look next. You might be interested in the research side of HCI, exploring more about how technology influences the way people think and act. You might be interested in the design side, creating excellent user interfaces and experiences. You might be interested in collaboration and helping people interact across technology, or you might be interested in artificial intelligence and designing interfaces that adapt to the user’s needs. Whatever your interest, there’s a rich amount of content in HCI in front of you. Now, the last thing we need to think about is: how do you best get that content?

5.3 Next Steps

5.3.1 - Introduction

To close our journey through HCI, let’s take a look at where you might go from here. We’ve already talked about different application areas of HCI, like virtual reality and educational technology, and those certainly apply to what you could do next. We’ve also talked elsewhere about the deeper topics in HCI you might investigate, like intelligent user interfaces or human-centered computing. But to close, let’s talk about the formal educational steps you might take going forward to get deeper into HCI.

5.3.2 - Ongoing Research

The quickest way to get more involved in HCI, if you’re at Georgia Tech, is to see about joining a professor’s research team. On the Georgia Tech HCI faculty listing, you’ll find listings for every HCI faculty member along with their research interests. Find someone who’s working on the same kinds of things that you want to work on, and see if they’d let you join one of their ongoing research projects. Professors are extremely busy people, but one of the things I love about Georgia Tech is the school’s focus on fostering student education and involvement in addition to fostering quality research. So, it’s quite likely that you’ll find someone willing to let you join up and prove that you can contribute. Let’s check out a list of the kinds of projects that are going on. New research projects are coming up all the time, and new faculty are joining every year; that’s why I’m not listing names of specific projects. But if any of these domains are interesting to you, check out the HCI faculty websites and see if there’s anything to which you’d like to contribute.

5.3.3 - MOOCs

HCI is a popular topic among MOOCs as well. So, if you’re looking to continue your HCI education in a slightly more formal way but don’t want to shell out the money for a formal degree, there are a lot of great places you can start. First, interaction-design.org is a treasure trove of HCI information. It has a fantastic free, open-access body of literature to study independently, including additional information on many topics we’ve covered throughout this course. The site also runs quite a few of its own closed courses as well. For more traditional MOOCs, Udacity has free courses on HCI as it applies to product design and mobile app design, as well as a MOOC by Don Norman based on his famous book, The Design of Everyday Things. Over at edX.org, MIT has released a series of MOOCs targeting user experience in mobile app development. The University of Michigan has an XSeries on UX research, and Shanghai University has a course on user experience design with a special emphasis on human factors and culture. On Coursera, Scott Klemmer, one of the most prominent HCI researchers, has a specialization entitled Interaction Design. The University of Minnesota also has a specialization on UI design that covers a lot of the same topics that we covered here, developed in part by another Georgia Tech alum [inaudible]. The University of London also has a specialization on responsive web design. Georgia Tech is planning a specialization on human-computer interaction as well that might be live by the time you see this. That’s all just core HCI courses. All of these providers and more have courses on applied areas of HCI like video game design, educational technology, virtual reality, and more. Most of these courses are available to watch for free. Note also that new MOOCs are coming online all the time, so there are quite likely a lot that I haven’t mentioned here. So, check out Udacity, check out edX, check out Coursera, check out FutureLearn. Also check out classcentral.com for a list of MOOCs across several different platforms, or just google “HCI MOOC”. This space is so new and fast-paced that by the time you view this video, half the MOOCs I’ve mentioned might be gone and twice as many new ones may have been created. We’ll try to keep an updated list of available courses in the notes below.

5.3.4 - MSCS-HCI

If you want to take this a step further, though, you might get an actual Master’s in Computer Science, specializing in HCI. Now, if you’re a Georgia Tech student watching this, you might already be working towards a Master’s in CS, and you might be wondering if the HCI specialization is available online. Right now, while I’m recording this, it’s not, but I’ll pause for a second so Amanda can let us know if that’s changed. If you’re an on-campus student watching this, or if you’re just watching the open MOOC, then you might want to look into an MS in CS with a focus on HCI. Most MSCS programs I’ve seen have an HCI specialization, or at least an HCI focus. The specialization lets you go far deeper into the field, taking several classes on topics like core HCI, User Interface Design, Educational Technology, Mixed Reality Design, Information Visualization, and more. We’ll gather a list of schools with MSCS programs with HCI specializations and provide it in the notes below.

5.3.5 - MS-HCI

If you already have a strong background in CS, you might want to go all the way to getting a master’s specifically in HCI. These programs aren’t as common as MSCS programs with HCI specializations, but many universities do have them, including Georgia Tech, Carnegie Mellon, the University of Washington, the University of Maryland, Iowa State University, and the Rochester Institute of Technology. I myself completed the MS-HCI program here at Georgia Tech before starting my PhD. In focusing an entire master’s degree on HCI, you’ll find you have even more time to spend getting into the relationship between HCI and some other fields. At Georgia Tech, for example, the MS-HCI program has specializations in interactive computing, psychology, industrial design, and digital media. That allows the flexibility to focus on different areas of HCI, like how HCI integrates with physical devices in industrial design, or how it helps us understand human cognition in psychology. Most master’s programs in HCI that I’ve seen are also heavily project-focused. Carnegie Mellon, for example, provides sponsored capstone projects from industry, and every student in Georgia Tech’s MS-HCI program completes a six-credit-hour independent project. So, if you’re really set on moving forward with a career in HCI, a dedicated master’s in the field is a great way to move forward.

5.3.6 - PhD-HCI

If research really is your calling, though, then you might want to move on to a PhD, which can have a specialization in HCI as well. Now, a PhD isn’t for everyone. There’s a tendency to view it as the next logical step after a master’s degree, but a PhD program is far more of an apprenticeship program. At Georgia Tech, at least, a PhD actually requires fewer classes to complete than a master’s, but that’s because 90 percent of your time is spent working closely with a faculty research adviser. If you’re very interested in research, though, the PhD may very well be the way to go. So, as a reminder, here were some of the project areas with ongoing HCI research here at Georgia Tech. A PhD program is a huge commitment. It’s your full-time job for typically five years. It’s absolutely not for everyone, but it absolutely is for some people. So, if you’re really passionate about what you’ve learned here, a PhD focusing on HCI may be the way to go.

5.3.7 - PhD-HCC

Finally, the highest level of educational achievement in HCI is likely a PhD specifically in HCI or a similar field. Here at Georgia Tech, and at schools like Clemson and Maryland, that’s a PhD in Human-Centered Computing, which is actually what my PhD was in. Other schools have PhDs in similar fields; Carnegie Mellon, Iowa State University, IUPUI, and others have PhD programs in HCI specifically. Pursuing a PhD in HCI or HCC lets you take a deep dive into how humans and computers interact in our modern world. You might dive deep into artificial intelligence and cognitive science, using AI agents to reflect on how humans think. You might delve into learning sciences and technology, studying the intersection between computers and education in depth. You might focus on social computing and how online communities function or how social media is changing our society, or you might stick closer to the core of HCI and focus on how people make sense of new technologies. You might answer questions like, “How do we give immediate feedback on motion controls?” or, “How do we adapt a user interface to the user’s mood?” There are enormous questions to answer. I’m excited to see some of you move on to change the world in very profound ways.

5.3.8 - Thank you - and good luck

No matter what you do next, I hope you’ve enjoyed this foray into the world of human-computer interaction. If you end this course feeling like you actually know less than when you started, that’s perfectly fine. My goal was not only to teach you about HCI, but also to help you understand how big this community is. I look forward to seeing you all go further in this community and make a difference in the world. To close, I have to give a special thank you to Georgia Tech for creating this fantastic online master’s program in which I’m developing this course. I’d also like to thank the HCI faculty at Georgia Tech for letting me be the one to record this and bring it to you. Most of all, I’d like to thank Amanda and Morgan, my partners in creating this course, for being totally responsible for how amazing it’s looked. I like to think this isn’t just a course about human-computer interaction, but also an example of human-computer interaction: humans using computers to teach about HCI in new and engaging ways. Thank you for watching.