Public notes for CS6750 - HCI Spring 2022 at Georgia Tech

Public webpage for sharing information about Dr. Joyner's CS6750 - Human Computer Interaction course in Spring 2022.


Unit 4: Applications

4.1 Applications: Technology

4.1.1 - Introduction to Applications

If you're watching this course in the order it was originally produced, you've now learned the foundational principles of HCI and the research methods behind developing interfaces. At the beginning of the course, we also previewed some of the open areas of HCI development and research. Now, what we'd like to do is give you a jump-start in looking into what areas are most interesting to you. In the lessons that follow, we'll replay the preview videos for each topic from the beginning of the course, and then provide a small library of information on each topic. We certainly don't expect you to go through every portion of all these lessons. Find what's interesting to you and use these materials as your jumping-off point.

4.1.2 - Technology: Virtual Reality

The year that I'm recording this is what many have described as the year that virtual reality finally hits the mainstream. By the time you watch this, you'll probably be able to assess whether or not that was true, so come back in time and let me know. Virtual reality is an entirely new class of interaction and visualization, and we're definitely still at the beginning of figuring out what we can do with these new tools. You could be one of the ones who figures out the best way to resolve motion sickness or how to give proper feedback on gestural interactions. A lot of the press around virtual reality has been around video games, but that's definitely not the only application. Tourism, commerce, art, education: virtual reality has applications to dozens of spaces. For example, there's a lab in Michigan that's using virtual reality to treat phobias. They're creating a safe space where people can very authentically and realistically confront their fears. The possible applications of virtual reality are really staggering, so I'd encourage you to check them out as you go through this class.

4.1.3 - Technology: Augmented Reality

Virtual reality generally works by replacing the real world's visual, auditory, and sometimes even olfactory or kinesthetic stimuli with its own input. Augmented reality, on the other hand, complements what you see and hear in the real world. So, for example, imagine a headset like Google Glass that automatically overlays directions right on your visual field. If you were driving, it would highlight the route to take instead of just popping up some visual reminder. The input it provides complements stimuli coming from the real world instead of just replacing them. That creates some enormous challenges, but also some really incredible opportunities as well. Imagine devices that can integrate directly into our everyday lives, enhancing our reality. Imagine systems that could, for example, automatically translate text or speech in a foreign language, or could show you reviews for restaurants as you walk down the street. Imagine a system that students could use while touring national parks or museums that would automatically point out interesting information, custom-tailored to that student's own interests. The applications of augmented reality could be truly stunning, but it relies on cameras to take input from the world, and that actually raises some interesting societal problems. There are questions about what putting cameras everywhere would mean. So keep those in mind when we get to interfaces and politics in unit two.

4.1.4 - Technology: UbiComp and Wearables

Ubiquitous computing refers to the trend towards embedding computing power in more and more everyday objects. You might also hear it referred to as pervasive computing. It's deeply related to the emerging idea of an Internet of Things. A few years ago, you wouldn't have found computers in refrigerators and wristwatches, but as microprocessors became cheaper and as the world became increasingly interconnected, computers are becoming more and more ubiquitous. Modern HCI means thinking about whether someone might use a computer while they're driving a car or going on a run. It means figuring out how to build smart devices that offload some of the cognitive load from the user, like refrigerators that track their own contents and deliver advice to users at just the right time. This push for increasing pervasiveness has also led to the rise of wearable technologies. Exercise monitors are probably the most common examples of this, but smartwatches, Google Glass, augmented reality headsets, even things like advanced hearing aids and robotic prosthetic limbs are all examples of wearable technology. This push carries us into areas usually reserved for human factors engineering and industrial design, which exemplifies the increasing role of HCI in the design of new products.

4.1.5 - Technology: Robotics

A lot of the current focus on robotics is on robots' physical construction and abilities, or on the artificial intelligence that underlies their physical forms. But as robotics becomes more and more mainstream, we're going to see the emergence of a new sub-field of human-computer interaction: human-robot interaction. The field actually already exists. The first conference on human-robot interaction took place in 2006 in Salt Lake City, and several similar conferences have been created since then. As robots move into the mainstream, we're going to have to answer some interesting questions about how we interact with them. For example, how do we ensure that robots don't harm humans through faulty reasoning? How do we integrate robots into our social lives, or do we even need to? As robots are capable of more and more, how do we deal with the loss of demand for human work? Now, these questions all lie at the intersection of HCI, artificial intelligence, and philosophy in general, but there are some more concrete questions we can answer as well. How do we pragmatically equip robots with the ability to naturally interact with humans based on things like voice and touch? How do we provide tacit, subtle feedback to humans interacting with robots to confirm their input is being received and properly understood? How do we support humans in teaching things to robots instead of just programming them, or, alternatively, can we create robots that can teach things to humans? We already see robotics advances applied to things like healthcare and disability services, and I'm really excited to see where you take it next.

4.1.6 - Technology: Mobile

One of the biggest changes to computing over the past several years has been the incredible growth of mobile as a computing platform. We really live in a mobile-first world, and that introduces some significant design challenges. Screen real estate is now far more limited, the input methods are less precise, and the user is distracted. But mobile computing also presents some really big opportunities for HCI. Thanks in large part to mobile, we're no longer interested just in a person sitting in front of a computer. With mobile phones, most people have a computer with them at all times anyway. We can use that to support experiences from navigation to stargazing. Mobile computing is deeply related to fields like context-aware computing, ubiquitous computing, and augmented reality, as it possesses the hardware necessary to complement those efforts. But even on its own, mobile computing presents some fascinating challenges to address. For me, the big one is that we haven't yet reached a point where we can use mobile phones for all the tasks we do on computers. Smartphones are great for social networking, personal organization, games, and lots of other things. But we haven't yet reached a point where the majority of people would sit down to write an essay or do some programming on smartphones. Why haven't we? What do we need to do to make smartphones into true replacements for traditional desktop and laptop computers?

4.2 Applications: Ideas

4.2.1 - Idea: Context-Sensitive Computing

What time is it? You can go ahead and go to lunch. Did that exchange make any sense? I asked Amanda for the time, and she replied by saying I can go ahead and go get lunch. The exchange seems completely nonsensical, and yet hearing that, you may have filled in the context that makes this conversation logical. You might think that I asked a while ago what time we were breaking for lunch, or maybe I mentioned that I forgot to eat breakfast. Amanda would have that context, and she'd use it to understand why I'm probably asking for the time. Context is a fundamental part of the way humans interact with other humans. Some lessons we'll talk about even suggest that we are completely incapable of interacting without context. If context is such a pervasive part of the way humans communicate, then to build good interfaces between humans and computers, we must equip computers with some understanding of context. That's where context-sensitive computing comes in. Context-sensitive computing attempts to give computer interfaces the contextual knowledge that humans have in their everyday lives. For example, I use my mobile phone differently depending on whether I'm sitting on the couch at home, using it in my car, or walking around on the sidewalk. Imagine I didn't have to deliberately inform my phone of what mode I was in, though. Imagine if it just detected that I was in my car and automatically brought up Google Maps and Audible for me. Services have started to emerge to provide this, but there's an enormous amount of research to be done on context-sensitive computing, especially as it relates to things like wearables, augmented reality, and ubiquitous computing.
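To make the idea concrete, here is a minimal sketch of context-sensitive mode switching. All of the names and signals here (speed, a paired Bluetooth device) are hypothetical illustrations, not any real phone API:

```python
# Hypothetical sketch: infer the user's context from ambient signals
# and adapt the interface, instead of asking the user to switch modes.
from dataclasses import dataclass

@dataclass
class Context:
    speed_mph: float        # e.g., estimated from GPS
    bluetooth_device: str   # e.g., name of the currently paired device

def infer_mode(ctx: Context) -> str:
    """Guess the user's situation from the available signals."""
    if ctx.bluetooth_device == "car-stereo" and ctx.speed_mph > 10:
        return "driving"
    if ctx.speed_mph > 3:
        return "walking"
    return "stationary"

def suggest_apps(mode: str) -> list[str]:
    """Surface apps appropriate to the inferred context."""
    return {
        "driving": ["navigation", "audiobooks"],   # hands-free tasks
        "walking": ["music", "step counter"],
        "stationary": ["email", "reading"],
    }[mode]

# In the car, the phone brings up navigation and audiobooks on its own.
print(suggest_apps(infer_mode(Context(speed_mph=35.0, bluetooth_device="car-stereo"))))
```

The design choice worth noticing is that the user never declares a mode; the interface infers it, which is exactly what makes context sensitivity both powerful and error-prone.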

4.2.2 - Idea: Gesture-Based Interaction

As this course goes on, you'll find that I'm on camera more often than you might be accustomed to seeing in any Udacity course. Around half this course actually takes place with me on camera, and there are a couple of reasons for that. The big one is that this is human-computer interaction, so it makes sense to put a strong emphasis on the human. But another big one is that when I'm on camera, I can express myself through gestures instead of just through words and voice intonations. I can, for example, make a fist and really drive home and emphasize a point. I can explain that a topic applies to a very narrow portion of the field or a very wide portion of the field. We communicate naturally with gestures every day. In fact, we even have an entire language built out of gestures. So, wouldn't it be great if our computers could interpret our gestures as well? That's the emerging field of gesture-based interaction. You've seen this with things like the Microsoft Kinect, which has far-ranging applications from healthcare to gaming. We've started to see some applications of gesture-based interaction on the go as well, with wristbands that react to certain hand motions. Gesture-based interaction has enormous potential. The fingers have some of the finest muscle movements, meaning that a system based on finger movements could support an incredible number of interactions. We might see a day when it's possible to type invisibly in the air in front of you based on the system's recognition of the movement of the muscles in your wrist. That might finally allow mobile devices to displace traditional computers altogether.

4.2.3 - Idea: Pen- and Touch-Based Interaction

I always find it interesting how certain technologies seem to come around full circle. For centuries we only interacted directly with the things that we built, and then computers came along. And suddenly we needed interfaces between us and our tasks. Now, computers are trying to actively capture the natural ways we've always interacted. Almost every computer I encounter nowadays has a touch screen. That's a powerful technique for creating simple user interfaces because it shortens the distance between the user and the tasks they're trying to accomplish. Think about someone using a mouse for the first time. He might need to look back and forth from the screen to the mouse to see how interacting down here changes things he sees up here. With a touch-based interface, he interacts the same way he uses things in the real world around him. A challenge can sometimes be a lack of precision, but to make up for that we've also created pen-based interaction. Just like a person can use a pen on paper, they can also use a pen on a touch screen. And in fact, you might be quite familiar with that, because most Udacity courses use exactly that technology: they record someone writing on a screen. That gives us the precision necessary to interact very delicately and specifically with our task. As a result, tablet-based interaction methods have been used in fields like art and music. Most comics you find on the internet are actually drawn exactly like this, combining the precision of human fingers with the power of computation.

4.2.4 - Idea: Information Visualization

One of the biggest trends of the information age is the incredible availability of data. Scientists and researchers use data science and machine learning to look at lots of data and draw conclusions. But oftentimes, these conclusions are only useful if we can turn around and communicate them to ordinary people. That's where information visualization comes in. Now, at first glance you might not think of data visualization as an example of HCI. After all, I could draw a data visualization on a napkin or print it in a newspaper, and there's no computer involved anywhere in that process. But computers give us a powerful way to re-represent data in complex, animated, and interactive ways. We'll put links to some excellent examples in the notes. Now, what's particularly notable about data visualization in HCI is the degree to which it fits perfectly with our methodologies for designing good interfaces. One goal of a good interface is to match the user's mental model to the reality of the task at hand; in the same way, the goal of information visualization is to match the reader's mental model of the phenomenon to the reality of it. So, the same principles we discussed for designing good representations apply directly to designing good visualizations. After all, a visualization is just a representation of data.

4.2.5 - Idea: CSCW

CSCW stands for computer-supported cooperative work. The field is just what the name says: how do we use computers to support people working together? You're watching this course online, so you likely know the experience closely. Maybe you've worked on a group project with a geographically distributed group. Maybe you've had a job working remotely. Distributed teams are one example of CSCW in action, but there are others. The community often breaks things down into two dimensions: time and place. We can think of design as whether we're designing for users in the same time and place, or for users at different times and in different places. This course is an example of designing for a different time and a different place. You're watching this long after I've recorded it, likely from far away from our studio. Workplace chat utilities, like Slack and HipChat, would be examples of same time, different place. They allow people to communicate instantly across space, mimicking the real-time office experience. Now, imagine a kiosk at a museum that asks visitors to enter their location to create a map of where everyone comes from. That would be different time, same place. Everyone uses the interface at the same place, but across time. Even when we're in the same time and place, computers can still support cooperation. In fact, right now, someone is running our camera, Ben's running the teleprompter, and I'm standing up here talking at you. These different computers are supporting us in cooperating to create this course. So, we can often think of CSCW as mediating cooperation across traditional geographic or temporal borders, but it can also help us with co-located, simultaneous cooperation.
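The time/place breakdown above is just a two-by-two matrix. Here is a small sketch, for illustration only, arranging the lecture's examples into that matrix:

```python
# The CSCW time/place matrix, populated with the examples from the lecture.
matrix = {
    ("same time", "same place"): "studio equipment supporting a co-located recording crew",
    ("same time", "different place"): "workplace chat like Slack and HipChat",
    ("different time", "same place"): "a museum kiosk collecting visitor hometowns",
    ("different time", "different place"): "a recorded online course like this one",
}

# Print the four quadrants of the matrix with their examples.
for (time, place), example in matrix.items():
    print(f"{time:>14} / {place:<15} -> {example}")
```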

4.2.6 - Idea: Social Computing

Social computing is the portion of HCI that's interested in how computers affect the way we interact and socialize. One thing that falls under this umbrella is the idea of re-creating social norms within computational systems. So, for example, when you chat online, you might often use emojis or emoticons. Those are virtual re-creations of some of the tacit interaction we have with each other on a day-to-day basis. The same message can take on different meanings depending on the emoji provided. Social computing is interested in a lot more than just emojis, of course, from online gaming and Wikipedia to social media to dating websites. Social computing is really interested in all areas where computing intersects with our social lives.

4.3 Applications: Domains

4.3.1 - Domain: Special Needs

One of the most exciting application areas for HCI is in helping people with special needs. Computing can help us compensate for disabilities, injuries, and aging. Think of a robotic prosthetic, for example. Of course, part of that is engineering, and part of it is neuroscience. But it's also important to understand how the person intends to use such a limb and the tasks they need to perform. That's HCI intersecting with robotics. Or take another example from some work done here at Georgia Tech by Bruce Walker. How do you communicate data to a blind person? We've talked about information visualization, but if it's a visualization, it's leaving out a significant portion of the population. So, Dr. Walker's Sonification Lab works on communicating data using sound. A lot of the emerging areas of HCI technology could have extraordinary significance to people with special needs. Imagine virtual reality for people suffering from some form of paralysis, or imagine using artificial intelligence with context-aware computing to create an autonomous wheelchair. These projects would only target a small portion of the population. But the impact on that portion would be absolutely indescribable.

4.3.2 - Domain: Education

Hi, and welcome to Educational Technology. My name is David Joyner, and I'm thrilled to bring you this course. As you might guess, education is one of my favorite application areas of HCI. In fact, as I'm recording this, I've been teaching educational technology at Georgia Tech for about a year, and a huge portion of designing educational technology is really just straightforward HCI. But education puts some unique twists on the HCI process. Most fascinatingly, education is an area where you might not always want to make things as easy as possible. You might use HCI to introduce some desirable difficulties into learning experiences for students. But it's important to ensure that the cognitive load students experience during a learning task is based on the material itself, not on trying to figure out our interfaces. The worst thing you can do in HCI for education is raise the students' cognitive load because they're too busy thinking about your interface instead of the subject matter itself. Lots of very noble efforts in designing technology for education have failed due to poor HCI. So, if you're interested in going into educational technology, you'll find a lot of valuable lessons in human-computer interaction.

4.3.3 - Domain: Healthcare

A lot of current efforts in healthcare are about processing the massive quantities of data that are recorded every day. But in order to make that data useful, it has to connect to real people at some point. Maybe it's equipping doctors with tools to more easily visually evaluate and compare different diagnoses. Maybe it's giving patients the tools necessary to monitor their own health and treatment options. Maybe that's information visualization, so patients can understand how certain decisions affect their well-being. Maybe it's context-aware computing that can detect when patients are about to do something they probably shouldn't do. There are also numerous applications of HCI to personal health, like Fitbit for exercise monitoring or MyFitnessPal for tracking your diet. Those interfaces succeed if they're easily usable for users. Ideally, they'd be almost invisible. But perhaps the most fascinating upcoming intersection of HCI and healthcare is in virtual reality. Virtual reality exercise programs are already pretty common to make living an active lifestyle more fun, but what about virtual reality for therapy? That's actually already happening. We can use virtual reality to help people confront fears and anxieties in a safe but highly authentic place. Healthcare in general is concerned with the health of humans, and computers are pretty commonly used in modern healthcare. So the applications of human-computer interaction to healthcare are really huge.

4.3.4 - Domain: Security

Classes on network security are often most concerned with the algorithms and encryption methods that must be safeguarded to ensure secure communications. But the most secure communication strategies in the world are weakened if people just refuse to use them. And historically, we've found people have very little patience for instances where security measures get in the way of them doing their tasks. For security to be useful, it has to be usable. If it isn't usable, people just won't use it. HCI can increase the usability of security in a number of ways. For one, it can make those actions simply easier to perform. CAPTCHAs are forms that are meant to ensure users are human. They used to involve recognizing letters in complex images, but now they're often as simple as a check-box. The computer recognizes human-like mouse movements and uses that to evaluate whether the user is a human. That makes it much less frustrating to participate in that security activity. But HCI can also make security more usable by visualizing and communicating the need. Many people get frustrated when systems require passwords that meet certain standards of complexity, but that's because it seems arbitrary. If the system instead expresses to the user the rationale behind the requirement, the requirement can be much less frustrating. I've even seen a password form that treats password selection like a game, where you're ranked against others for how difficult your password would be to guess. That's a way to incentivize strong password selection, making security more usable.
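To illustrate the rationale-over-rejection idea, here is a minimal sketch of a password checker that explains each requirement rather than just refusing input. The specific rules and wording are illustrative assumptions, not any standard's actual requirements:

```python
# Hypothetical sketch: pair every unmet password requirement with a
# human-readable rationale, instead of an arbitrary-seeming rejection.
import string

def explain_requirements(pw: str) -> list[str]:
    """Return a rationale for each requirement the password misses."""
    feedback = []
    if len(pw) < 12:
        feedback.append("Use 12+ characters: every added character multiplies "
                        "the number of guesses an attacker has to try.")
    if not any(c in string.digits for c in pw):
        feedback.append("Include a digit: mixing character classes pushes the "
                        "password out of plain dictionary-word lists.")
    if pw.lower() in {"password", "letmein", "qwerty123"}:
        feedback.append("Avoid common passwords: attackers try leaked "
                        "password lists before anything else.")
    return feedback

# A weak password gets explanations, not just a red X.
for reason in explain_requirements("qwerty123"):
    print("-", reason)
```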

4.3.5 - Domain: Games

Video games are one of the purest examples of HCI. They're actually a great place to study HCI, because so many of the topics we discuss are so salient. For example, we discussed the need for logical mapping between actions and effects. A good game exemplifies that: the actions that the user takes with the controller should feel like they're actually interacting within the game world. We discussed the power of feedback cycles. Video games are near-constant feedback cycles, as the user performs actions, evaluates the results, and adjusts accordingly. In fact, if you read through video game reviews, you'll find that many of the criticisms are actually criticisms of bad HCI: the controls are tough to use, it's hard to figure out what happened, the penalty for failure is too low or too high. All of these are examples of poor interface design. In gaming, though, there's such a tight connection between the task and the interface that frustrations with a task can help us quickly identify problems with the interface.