Evaluation: Heuristic Analysis
Heuristic Analysis Script

Now we get to the analytical evaluation. The analytical approach to evaluation consists of methods such as heuristic evaluation, modeling, and cognitive walkthroughs. It generally relies on experts rather than users. On the next slides, we will discuss this approach in more detail.
The heuristic evaluation is often based on Nielsen's 10 usability heuristics, which we already talked about in the analysis section. For this process, we rely on experts rather than actual users. These experts use our prototype, sometimes slipping into the role of a persona we identified earlier, and try to spot problems that can occur while interacting with the product. After the individual evaluations, a joint session where the experts can exchange their thoughts is possible.

For the heuristic evaluation, we have to brief the experts, and it is important that every expert receives the same briefing. Afterwards, the experts spend one to two hours independently inspecting the prototype. It is often helpful when the experts take at least two passes through the interface: the first pass gives them a feeling for the flow of interaction, while the second pass allows a closer look at specific interface elements in the context of the whole prototype or product. This way, the experts can identify potential usability problems. In the debriefing session, the experts come together to discuss their findings, prioritize problems, and suggest solutions.

When conducting a heuristic evaluation, we can use Nielsen's severity rating scale. When an evaluator rates an aspect of our prototype with a 4, we should feel obliged to fix this problem before we release the prototype as a product. We recommend that you look at the different rating attributes and click the link on this topic for further information. This becomes especially interesting when you are about to evaluate your own prototype for this course.
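To make the rating scale concrete, here is a minimal sketch (in Python) of how an expert's findings could be recorded with Nielsen's 0–4 severity scale and then prioritized. The class, field names, and example findings are purely illustrative assumptions; the scale descriptions follow Nielsen's online article linked below.

```python
# Minimal sketch of recording heuristic-evaluation findings with
# Nielsen's 0-4 severity scale; class and example names are illustrative.
from dataclasses import dataclass

# Nielsen's severity ratings (0 = lowest, 4 = highest priority).
SEVERITY = {
    0: "Not a usability problem",
    1: "Cosmetic problem only - fix if extra time is available",
    2: "Minor usability problem - low priority fix",
    3: "Major usability problem - high priority fix",
    4: "Usability catastrophe - must be fixed before release",
}

@dataclass
class Finding:
    heuristic: str      # e.g. "Visibility of system status"
    description: str    # what the expert observed
    severity: int       # 0-4 on the scale above

def prioritize(findings):
    """Sort findings so the most severe problems come first."""
    return sorted(findings, key=lambda f: f.severity, reverse=True)

# Hypothetical findings from one expert's pass through a prototype.
findings = [
    Finding("Error prevention", "No confirmation before deleting a note", 3),
    Finding("Consistency and standards", "Two different icons for 'save'", 1),
    Finding("Visibility of system status", "Upload gives no progress feedback", 4),
]

for f in prioritize(findings):
    print(f"[{f.severity}] {SEVERITY[f.severity]} - {f.heuristic}: {f.description}")
```

In the debriefing session, a shared list like this makes it easy for the experts to compare ratings and agree on which problems must be fixed first.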
Another form of analytic evaluation is the cognitive walkthrough. Cognitive walkthroughs involve simulating a user's problem-solving process at each step in the human-computer dialog, checking whether the user's goals and memory for actions can be assumed to lead to the next correct action. For this method, we walk through a task from the user's perspective and focus on issues at the cognitive level: does the user have problems understanding the interaction, do they know what to do, and how do they actually interact? When our system provides feedback, does the user understand whether their actions were correct or not? During this walkthrough, we want to find out whether the user recognizes the next interaction step towards the task, or whether they do not know what to do next. Furthermore, we want to know whether it is apparent to the user that the correct action is available, and whether the user can interpret the feedback from the system correctly.

On this slide, the process of the cognitive walkthrough is summed up. For this method, experts slip into the personas we created before and then perform specific tasks using our product. They answer our questions, try to identify issues, and document what we need to change about the system. Afterwards, we can revise our design based on the experts' suggestions.
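As a sketch of how the walkthrough questions above could be recorded per step, here is a minimal Python example. The task, step names, question wording, and data structure are illustrative assumptions based on the questions mentioned in the transcript, not a fixed part of the method.

```python
# Minimal sketch of a cognitive-walkthrough record: for each step of a
# task, the expert answers the questions from the transcript above.
# Names and the example task are illustrative.
from dataclasses import dataclass

QUESTIONS = [
    "Does the user know what they want to achieve at this step?",
    "Is it apparent that the correct action is available?",
    "Does the user recognize this action as the next step toward the goal?",
    "Can the user interpret the system's feedback after the action?",
]

@dataclass
class StepResult:
    step: str
    answers: dict  # question -> (passed: bool, note: str)

def problems(results):
    """Collect every step/question where the walkthrough found an issue."""
    issues = []
    for r in results:
        for question, (ok, note) in r.answers.items():
            if not ok:
                issues.append((r.step, question, note))
    return issues

# Hypothetical walkthrough of a single step in a note-taking prototype.
walkthrough = [
    StepResult("Open the 'new note' screen", {
        QUESTIONS[1]: (True, ""),
        QUESTIONS[3]: (False, "No visual confirmation that the screen changed"),
    }),
]

for step, question, note in problems(walkthrough):
    print(f"Issue at '{step}': {question} -> {note}")
```

The resulting list of issues is what feeds the design revision mentioned above.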
Besides the heuristic evaluation and the cognitive walkthrough, we can also create models that predict how users would interact and how efficiently they would achieve a task, based on the physical and mental operations needed for that specific task. For this method, we do not even need experts or users.

Here, you can find examples of GOMS models. GOMS stands for "goals, operators, methods and selection rules". The model was developed in the early 1980s as an attempt to model the knowledge and cognitive processes involved when users interact with systems. We ask what the users want to achieve, which actions need to be taken to achieve the goal, what the steps towards this goal are, and which methods are used in which context.

KLM, the keystroke-level model, is one variant of the GOMS family. It is computational and provides actual numerical predictions of user performance: it predicts how long it will take an expert user to accomplish a routine task without errors using an interactive computer system. As you can see here, the keystroke-level model consists of six operators. Here is an example of the keystroke-level model; you can find further details in the publication by Sharp et al., which is linked below this video.

You might wonder why we should use models when it is apparent that any assumption could be wrong. The answer is quite simple: it is cheaper and often less time consuming than an evaluation with real users or experts.
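To illustrate how a KLM prediction works, here is a minimal sketch in Python. The operator times are the commonly cited average values from the KLM literature, and the example task is hypothetical, so treat the numbers as rough defaults rather than exact figures for any particular system.

```python
# Minimal sketch of a Keystroke-Level Model (KLM) prediction.
# The operator times below are commonly cited averages; treat them
# as rough defaults, not exact values for a specific system or user.
OPERATOR_TIME = {
    "K": 0.2,    # keystroke or button press (average skilled typist)
    "P": 1.1,    # pointing at a target with a mouse
    "H": 0.4,    # homing the hand between keyboard and mouse
    "M": 1.35,   # mental preparation before a step
    # D (drawing) and R (system response) depend on the task and system
}

def klm_predict(sequence, response_time=0.0):
    """Sum operator times for an expert performing the task without errors."""
    return sum(OPERATOR_TIME[op] for op in sequence) + response_time

# Hypothetical task: move the hand to the mouse, point at a text field,
# click it, mentally prepare, then type a four-character word.
sequence = ["H", "P", "K", "M", "K", "K", "K", "K"]
print(f"Predicted task time: {klm_predict(sequence):.2f} s")  # 3.85 s
```

Because the prediction is just a sum over listed operators, comparing two interface designs only requires writing down their operator sequences, which is exactly why this kind of model is cheaper than testing with real users or experts.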
Do a heuristic usability evaluation of the POP prototype from your partner group (another team in this course) as experts, using the template we provide. Again consider Nielsen's (1994) paradigm of Discount Usability Engineering, as well as Nielsen's collection of information on heuristic evaluations. Details can be found in Nielsen (1993, Chapter 5.11).
Guerrilla HCI: Using discount usability engineering to penetrate the intimidation barrier by Jakob Nielsen (1994), paper
Heuristic Evaluation by Nielsen Norman Group, various online articles
Severity Ratings for Usability Problems by Jakob Nielsen, online article
Usability Engineering, Chapter 5.11: Heuristic Evaluation by Jakob Nielsen, book chapter
Interaction Design: beyond human-computer interaction by Sharp et al., book
Upload the evaluation report to a folder iteration-2/heuristic-evaluation in the GitHub master branch. Offer your evaluation and feedback to your partner group (another team in this course) so they can complete Task 10.