Dr Charles Martin
Markdown Formatting Check: There is a CI/CD job that
checks your Markdown formatting using the markdownlint-cli
tool. The syntax rules are listed here;
in our script, rules MD013 and MD041 are
disabled. All other rules are active.
For the final project you will have to choose a research question to explore.
This is a clear question (one sentence) that guides the design of your research project.
RQs have been called survival beacons because they should guide all aspects of our research plans.
How do we choose a research question and write it clearly?
This is an important skill for any research activity.
This framework is inspired by Lennart Nacke, everybody’s favourite HCI writing coach on LinkedIn.
To be clear, a research question starts with a question word (what, how, why, can, do, should) and ends with a question mark. It can just be one sentence.
Seems too easy… let’s try it together.
What effects can a haptic wearable interface have on lack of awareness during meetings and later work performance?
Encodes the broad area, the problem, the justification, the context, etc.
Interfaces:
Problems to Solve:
Let’s write a research question!
Together, let’s spin the wheels to decide on a broad area and a problem.
Then, decide on a “justification” and write a research question.
Remember that the RQ should include the broad area, the problem, and the justification.
Use the poll everywhere link to suggest research questions and vote on the best ones.
Write for 2-3 minutes, vote for 1 minute, then let’s discuss.
Evaluation: collecting and analysing data from user experiences with an artefact.
Goal: to improve the artefact’s design.
Addresses: functionality, usability, user experience
Appropriate for all kinds of artefacts and prototypes.
Methods vary according to goals.
Does the design do what the users need and want?
Examples:
Six usability goals:
Depends on your evaluation goal!
Evaluation serves different purposes at different stages of the design process.
A controlled evaluation setting is not the normal place for using a technology or for the user to be.
Evaluating a technology or context of use in the normal setting for the user.
Field studies can:
Helps to establish ecological validity.
E.g., designers ask colleagues for design feedback: the early design process of Yichen Wang’s arMIDI system with supervisor and colleagues (Wang et al., 2025).
The evaluation setting guides certain dimensions of developed artefacts.
You’re all HCI researchers and we need to evaluate this interactive toy.
We need to choose:
Talk for 2-3 minutes and then we will hear some answers 🗣️🎤⭐️
What do we need to keep in mind to plan evaluations?
Universities have processes, following established rules, to approve the ethical aspects of research that collects data from humans (National Health and Medical Research Council (NHMRC) et al., 2025).
We don’t go deeply into research ethics in this course but the four issues above are the core ones.
E.g.:
A blue background in the user interface leads to faster task completion.
Hypotheses must be falsifiable: they can be rejected, but never definitively proven! (A bit different from the more general “research questions”.)
To reject or support a hypothesis we generally need quantitative methods and significance testing.
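For example, a between-participants comparison of task-completion times under two conditions could be analysed with an independent-samples t-test. A minimal sketch using only the Python standard library; the data values are invented for illustration, not real results:

```python
import math
import statistics


def independent_t(a, b):
    """Independent-samples t statistic (pooled, equal-variance form)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    # Pool the sample variances, weighted by degrees of freedom
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))


# Hypothetical task-completion times (seconds) for two conditions
blue_background = [12.1, 10.8, 11.5, 9.9, 12.7, 11.0, 10.4, 11.8]
white_background = [13.2, 12.9, 14.1, 12.5, 13.8, 12.0, 13.5, 14.0]

t = independent_t(blue_background, white_background)
print(f"t = {t:.2f}")
# For df = 14 and a two-tailed alpha of .05, the critical value is
# roughly 2.145; if |t| exceeds it, we reject the null hypothesis
# that the two condition means are equal.
```

In practice a statistics package (e.g., SciPy, R) would also report the p-value; the point here is just the shape of the comparison.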
Which participants test which conditions?
| Design | Advantages | Disadvantages |
|---|---|---|
| Different participants (between-participants design) | No order effects | Requires many participants; individual differences can affect results (random assignment helps minimize them) |
| Same participants (within-participants design) | Eliminates individual differences between conditions | Requires counterbalancing; risk of order effects (e.g., learning or fatigue) |
| Matched participants (pair-wise design) | No order effects; reduces impact of individual differences | Time-consuming to find matched pairs; may miss other influential variables |
Reveal insights about actual use and long-term integration that lab studies often miss.
HCI is hard. To do a study, you usually need to:
Is there any way to do evaluation without users?
Budd (2007) introduces further heuristics focussed on the web; here are some from the list:


Estimate user performance without needing real users, using formulas to assess task efficiency — useful in early design stages or when testing with users is difficult.
Fitts’ Law (Fitts, 1954):
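In the Shannon formulation commonly used in HCI, Fitts’ Law predicts movement time as MT = a + b · log₂(D/W + 1), where D is the distance to the target, W is the target width, and a and b are constants fitted empirically for a given device. A minimal sketch; the constant values below are invented for illustration:

```python
import math


def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) via Fitts' Law, Shannon form.

    distance and width are in the same units (e.g., pixels);
    a and b are device-specific constants (invented values here).
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty


# A small, distant target has a higher index of difficulty
# (and so a longer predicted movement time) than a large, nearby one.
print(fitts_movement_time(800, 20))   # small target, far away
print(fitts_movement_time(100, 100))  # large target, close by
```

Fitting a and b to observed pointing data lets us compare input devices or screen layouts without recruiting new users for every design variant.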
Who has a question?