Week 6 Lecture Notes: Interaction Design Frameworks & Experience Analysis

Virtual, Augmented and Spatial Computing

1 Overview

Week 6 is the analytical pivot of the module. Having established hardware knowledge (Weeks 1–2), perceptual foundations (Week 3), and interaction mechanics (Weeks 4–5), we now develop the critical vocabulary needed to analyse, evaluate, and improve XR experiences. This week also launches Assessment 2.


2 1. Why Frameworks Matter

Design frameworks are not rigid rules — they are structured lenses that help us:

  • Analyse existing experiences systematically rather than impressionistically
  • Communicate design decisions to collaborators and stakeholders
  • Identify problems before committing to implementation
  • Compare experiences across different platforms and contexts

Without frameworks, XR critique tends to collapse into “it felt good” or “it felt bad” — subjective impressions that cannot drive improvement.


3 2. Norman’s Action Cycle

Donald Norman’s seven-stage action cycle (from The Design of Everyday Things, 1988, revised 2013) describes how humans interact with any system:

  1. Form a goal: What does the user want to achieve?
  2. Form an intention: How do they plan to achieve it?
  3. Specify an action: Which specific action will they take?
  4. Execute the action: Perform the physical action
  5. Perceive the state of the world: What changed?
  6. Interpret the state: What does the change mean?
  7. Evaluate the outcome: Did it achieve the goal?
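The seven stages can double as an audit checklist for a single interaction. The sketch below is a hypothetical illustration: the stage names follow the list above, and the "virtual door" observations are invented, not from a real evaluation.

```python
# Hypothetical sketch: auditing one XR interaction against Norman's
# seven stages. A stage marked False is where the interaction breaks down.

STAGES = [
    "form_goal", "form_intention", "specify_action", "execute_action",
    "perceive_state", "interpret_state", "evaluate_outcome",
]

def audit(observations: dict) -> list:
    """Return the stages where the interaction fails."""
    return [s for s in STAGES if not observations.get(s, False)]

# Invented example: a virtual door with no handle and no open/close feedback.
door = {
    "form_goal": True,        # user knows they want to go through
    "form_intention": True,   # they plan to open the door
    "specify_action": False,  # no visible handle: gulf of execution
    "execute_action": True,
    "perceive_state": False,  # no feedback: gulf of evaluation
    "interpret_state": False,
    "evaluate_outcome": False,
}
print(audit(door))
# → ['specify_action', 'perceive_state', 'interpret_state', 'evaluate_outcome']
```

Running the audit this way makes the failures nameable: the door example fails once on the execution side and three times on the evaluation side.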

3.1 2.1 Applying the Cycle to XR

The cycle is particularly useful for XR because it maps directly onto the interaction design challenges we’ve covered:

  • Form goal: Is the goal clear from the environment?
  • Form intention: Does the user know what actions are possible?
  • Specify action: Is the correct gesture/input discoverable?
  • Execute action: Is the input reliable and comfortable?
  • Perceive state: Is there clear feedback?
  • Interpret state: Is the feedback unambiguous?
  • Evaluate outcome: Does the user know if they succeeded?

3.2 2.2 The Gulf of Execution and Gulf of Evaluation

Norman identifies two key gaps:

Gulf of Execution: The gap between what the user wants to do and what actions are available. In XR, this manifests as invisible affordances — objects that can be interacted with but give no indication of this.

Gulf of Evaluation: The gap between the system’s state and the user’s perception of it. In XR, this manifests as missing or ambiguous feedback — the user performs an action but doesn’t know if it worked.

Good XR design minimises both gulfs.


4 3. Milgram’s Reality-Virtuality Continuum

Paul Milgram and Fumio Kishino (1994) proposed a continuum from fully real to fully virtual environments:

Real ←————————————————————————→ Virtual
Environment    AR    MR    AV    Environment

Augmented Reality (AR): Primarily real world with virtual overlays. The user sees the real world and virtual content is added. Examples: Snap Spectacles, HoloLens 2 in AR mode, phone-based AR.

Mixed Reality (MR): Real and virtual content coexist and interact. Virtual objects can occlude real ones and vice versa. Examples: HoloLens 2, Quest 3 colour passthrough.

Augmented Virtuality (AV): Primarily virtual with real elements incorporated. Rare in consumer products.

Virtual Reality (VR): Fully virtual environment. Examples: Quest 2, Vive Pro, Pico Neo Eye.

4.1 3.1 Design Implications

The position on the continuum determines fundamental interaction design decisions:

  • AR: Must work in unpredictable real environments; lighting varies; the user is mobile
  • MR: Spatial anchoring is critical; virtual objects must respect real-world physics
  • VR: Full control of the environment; can design for specific hardware; comfort is the primary concern

5 4. Presence Theory (Slater)

Mel Slater’s presence model (2009) distinguishes two components:

Place Illusion (PI): The sense of “being there” — the feeling that you are physically located in the virtual space. PI is primarily driven by perceptual factors: field of view, tracking quality, latency, stereoscopy.

Plausibility Illusion (Psi): The sense that “this is really happening” — that events in the virtual world are real events, not simulations. Psi is driven by the coherence and responsiveness of the virtual world.

5.1 4.1 How Interaction Affects Presence

Poor interaction design breaks both components:

  • Breaks PI: When you notice the interface (floating menus, visible ray casters, controller models that don’t match your hands), you are reminded that you are in a simulation.
  • Breaks Psi: When the world doesn’t respond consistently (objects that don’t fall, physics that behaves unexpectedly, interactions that don’t work), the plausibility of the experience collapses.

This is why interaction design is not just a usability concern — it is a fundamental component of the XR experience itself.


6 5. Interaction Patterns

6.1 5.1 Progressive Disclosure

Reveal complexity gradually. Present only the options relevant to the current task; reveal additional options as the user progresses.

XR application: Don’t present all menu options at once. Start with the primary action; reveal secondary options through exploration or explicit request.
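A minimal sketch of this pattern, with invented action names: only the primary action is visible at first, and secondary options are revealed one at a time as the user succeeds, rather than all at once.

```python
# Hypothetical progressive-disclosure menu. Action names are invented
# for illustration; a real app would tie reveals to task progress.

class Menu:
    def __init__(self):
        self.visible = ["grab"]                        # primary action only
        self._secondary = ["scale", "duplicate", "delete"]

    def on_primary_used(self):
        # Reveal one more option per successful use, not the whole set.
        if self._secondary:
            self.visible.append(self._secondary.pop(0))

menu = Menu()
menu.on_primary_used()
menu.on_primary_used()
print(menu.visible)  # → ['grab', 'scale', 'duplicate']
```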

6.2 5.2 Spatial Consistency

Objects stay where the user places them. The virtual world has memory. This is a fundamental expectation users bring from the real world.

XR application: Avoid resetting object positions between scenes or sessions without explicit user action. If objects must reset, provide clear visual indication.
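One way to sketch the "world has memory" expectation is to persist object poses between sessions. The flat id-to-position dictionary and JSON file below are assumptions for illustration; a real application would use the platform's spatial anchor APIs.

```python
# Sketch of spatial consistency: save object positions on exit and
# restore them on the next launch. Object names and poses are invented.

import json

def save_layout(objects: dict, path: str) -> None:
    with open(path, "w") as f:
        json.dump(objects, f)

def load_layout(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}  # first run: nothing to restore

layout = {"lamp": [0.4, 1.1, -0.3], "notes": [-0.2, 1.0, 0.5]}
save_layout(layout, "layout.json")
print(load_layout("layout.json") == layout)  # → True
```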

6.3 5.3 Embodied Metaphor

Map virtual actions to real-world physical actions. Leverage the user’s existing physical knowledge.

Examples:

  • Pull a lever to open a door (not press a button)
  • Throw an object by physically throwing it (not pressing a button)
  • Write by physically writing (not typing)

6.4 5.4 Graceful Degradation

When tracking fails, input is lost, or the system encounters an error, degrade gracefully rather than crashing or freezing.

XR application: If hand tracking is lost, fall back to controller input. If a gesture fails, provide an alternative button input. Always give the user a way out.
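The fallback behaviour described above can be sketched as a priority chain: try input methods in order and degrade to the next one when the current method stops working. The method names and chain order are illustrative assumptions.

```python
# Hypothetical graceful-degradation chain for input methods.
# Order expresses preference; the last resort should always exist.

FALLBACK_CHAIN = ["hand_tracking", "controller", "gaze_and_dwell"]

def select_input(available: set) -> str:
    """Pick the highest-priority input method that is still working."""
    for method in FALLBACK_CHAIN:
        if method in available:
            return method
    raise RuntimeError("no input available: pause and tell the user")

print(select_input({"hand_tracking", "controller"}))  # → hand_tracking
print(select_input({"controller", "gaze_and_dwell"}))  # hand tracking lost → controller
```

The design point is the final branch: when everything fails, the system pauses and tells the user rather than freezing silently.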


7 6. Interaction Anti-Patterns

  • Floating menus: break immersion and create vergence conflict. Better: diegetic UI or a wrist menu.
  • Tiny targets: violate Fitts’ Law. Better: minimum 2 cm at arm’s length.
  • No feedback: the user doesn’t know if the action worked. Better: visual + audio + haptic feedback.
  • Gorilla-arm UI: fatigue within minutes. Better: a waist-height interaction zone.
  • Forced smooth locomotion: sickness for sensitive users. Better: always offer teleport.
  • Invisible affordances: the user can’t discover interactions. Better: highlight, glow, or audio cues.
  • Irreversible actions: the user is afraid to explore. Better: confirmation dialogs and undo.
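The "tiny targets" guideline can be made concrete with two quick calculations: the visual angle a 2 cm target subtends, and the Shannon-formulation Fitts' index of difficulty ID = log2(D/W + 1). The 60 cm arm length and 30 cm reach distance are assumptions for illustration.

```python
import math

# Back-of-envelope numbers behind the tiny-targets anti-pattern.

def angular_size_deg(width_m: float, distance_m: float) -> float:
    """Visual angle (degrees) subtended by a target of the given width."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

def fitts_id(distance_m: float, width_m: float) -> float:
    """Shannon-formulation index of difficulty, in bits."""
    return math.log2(distance_m / width_m + 1)

# A 2 cm target at an assumed 60 cm arm's length:
print(round(angular_size_deg(0.02, 0.60), 2))  # → 1.91 (degrees)
# Reaching 30 cm to that target:
print(round(fitts_id(0.30, 0.02), 2))  # → 4.0 (bits)
```

Anything much smaller than roughly two degrees of visual angle pushes the index of difficulty up quickly, which is why the 2 cm floor is a sensible default.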

8 7. Evaluation Methods

8.1 7.1 Heuristic Evaluation

Expert review against a set of established principles. Fast and cheap — can be done without users.

Process:

  1. Select 3–5 evaluators
  2. Each evaluator independently reviews the experience against the heuristics
  3. Evaluators rate the severity of each violation (0–4 scale)
  4. Aggregate findings and prioritise

Nielsen’s 10 heuristics, adapted for XR (see the lecture slides for the full list).
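The aggregation step can be as simple as pooling the independent severity ratings (0–4) and ranking violations by mean severity. The violation names and scores below are invented for illustration.

```python
# Sketch of aggregating heuristic-evaluation findings: three evaluators'
# severity ratings per violation, ranked by mean severity (worst first).

from statistics import mean

ratings = {
    "no feedback on grab": [4, 3, 4],
    "menu text too small": [2, 3, 2],
    "teleport arc unclear": [1, 2, 1],
}

prioritised = sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True)
for issue, scores in prioritised:
    print(f"{mean(scores):.2f}  {issue}")
# → 3.67  no feedback on grab
#   2.33  menu text too small
#   1.33  teleport arc unclear
```

Keeping the ratings independent before pooling matters: evaluators who confer first tend to converge on the most vocal person's severity scale.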

8.2 7.2 Think-Aloud Protocol

Users verbalise their thoughts while using the experience. Reveals mental models and points of confusion.

XR challenge: Verbalising while wearing a headset is awkward. Consider:

  • Recording audio from inside the headset
  • Using a mirrored display so an observer can see what the user sees
  • Briefing the user before starting: they often go quiet in XR

8.3 7.3 Standardised Questionnaires

  • SUS (System Usability Scale): overall usability; 10 items
  • SSQ (Simulator Sickness Questionnaire): comfort/sickness; 16 items
  • IPQ (Igroup Presence Questionnaire): sense of presence; 14 items
  • NASA-TLX: cognitive load; 6 dimensions

For Assessment 2, SUS is recommended as a minimum. SSQ is recommended if your prototype involves locomotion.
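SUS scoring, as defined by Brooke (1996), is mechanical and worth getting right: each of the ten items is rated 1–5; odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating); the sum is multiplied by 2.5 to give a 0–100 score. The example ratings below are invented.

```python
# SUS scoring per Brooke (1996). ratings[0] is item 1 (odd-numbered).

def sus_score(ratings: list) -> float:
    assert len(ratings) == 10, "SUS has exactly 10 items"
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i=0 is item 1 (odd)
                for i, r in enumerate(ratings))
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # → 80.0
```

Note that a SUS score is not a percentage: 68 is roughly the empirical average, so treat scores relative to that benchmark rather than as "68% usable".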


9 Self-Check Questions

  1. Apply Norman’s Action Cycle to a specific interaction in an XR experience you have used. Where does it fail?
  2. Where on Milgram’s continuum would you place the HoloLens 2? The Quest 2? The Snap Spectacles?
  3. What is the difference between Place Illusion and Plausibility Illusion?
  4. Name three interaction anti-patterns and explain why each fails.
  5. When would you use a heuristic evaluation rather than a user test?

10 References

  • Norman, D. (2013) The Design of Everyday Things (revised ed.). Basic Books.
  • Milgram, P. & Kishino, F. (1994) “A taxonomy of mixed reality visual displays.” IEICE Transactions on Information and Systems, E77-D(12), 1321–1329.
  • Slater, M. (2009) “Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments.” Philosophical Transactions of the Royal Society B, 364(1535), 3549–3557.
  • Nielsen, J. (1994) “Heuristic evaluation.” In Nielsen, J. & Mack, R.L. (eds.) Usability Inspection Methods. Wiley.
  • Brooke, J. (1996) “SUS: A ‘quick and dirty’ usability scale.” In Jordan et al. (eds.) Usability Evaluation in Industry. Taylor & Francis.
  • Kennedy, R.S. et al. (1993) “Simulator sickness questionnaire.” Presence: Teleoperators and Virtual Environments, 2(3), 203–220.