
How Can Artificial Intelligence (AI) Be Applied to Human Computer Interaction (HCI) Design Testing? – a draft paper 28/01/2026


Hello UX/HCI Researchers – please see our draft introduction below. If you are interested in this topic of applying AI to HCI/UX, please email Dr Anthony Basiel at abasiel@gmail.com

LSICT – India conference
ICICDA’26 | Intelligent Systems and Robotics | Human-Computer Interaction
https://www.icicdasrmvdp.com/call-for-papers

Dr Anthony ‘Skip’ Basiel
Academic Director – London School of Intelligent Computing and Technology
a.basiel@lsict.org.uk | https://lsict.org.uk

Dr Mike Howarth
Education Media Consultant
michael.howarth@mhmvr.co.uk | http://mhmvr.co.uk

Introduction

According to Luo (2025), ‘For the past few decades, user research has inherently come with a trade-off: scale or quality.’ This research explores the role of Artificial Intelligence (AI) in human-computer interaction and user experience (UX) design.

‘Don’t start with AI, start with the problem’ is the mantra of Caleb Sponheim (2026), a Human-Computer Interaction (HCI) design consultant from the NN/G UX (User Experience) Expert Group. This paper explores the proposition that if you start with a technology, delivering real value to your website users and customers may be more difficult. To start, several HCI terms are offered to help form a common language between authors and readers. Strategies for addressing the question of how AI can be applied to HCI design testing are then discussed, followed by several case studies that provide real-world examples of AI supporting humans interacting with technology. The strategies are analysed in detail through critical review, and conclusions and recommendations are offered to the reader as a way to link theory with practice and inform the future of integrating AI with HCI design.

Definition of AI

The White Paper (White and Case, 2025) describes “AI,” “AI systems” and/or “AI technologies” as “products and services that are ‘adaptable’ and ‘autonomous’”. Generative AI can be seen as “deep or large language models able to generate text and other content based on the data on which they were trained”. AI systems often develop the ability to perform new forms of inference not directly envisioned by their human programmers.

AI technology enables the programming or training of a device or software to (see the sketch after this list):

(a)    Perceive environments through the use of data

(b)    Interpret data using automated processing designed to approximate cognitive abilities

(c)    Make recommendations, predictions or decisions with a view to achieving a specific objective.
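To make points (a) to (c) concrete, below is a minimal, hypothetical Python sketch of the perceive-interpret-recommend loop. The function names and the toy click-log data are illustrative assumptions for this paper, not part of any specific AI product or library.

```python
# Minimal, hypothetical sketch of the perceive -> interpret -> recommend loop
# described in (a)-(c). All names and the toy data are illustrative assumptions.

from statistics import mean

def perceive(click_log):
    """(a) Perceive the environment through data: collect raw interaction events."""
    return [event["time_on_page"] for event in click_log]

def interpret(times_on_page):
    """(b) Interpret the data with automated processing that approximates a judgement."""
    return "engaged" if mean(times_on_page) > 30 else "bouncing"

def recommend(user_state):
    """(c) Make a recommendation aimed at a specific objective (reducing bounce rate)."""
    if user_state == "bouncing":
        return "Simplify the landing page and surface the primary call to action."
    return "Keep the current layout; users appear engaged."

# Example usage with toy data (seconds spent on a page per visit).
log = [{"time_on_page": 12}, {"time_on_page": 8}, {"time_on_page": 15}]
print(recommend(interpret(perceive(log))))
```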

Definition of Human-Computer Interaction Personas

Personas are representations of archetypal or “median” user groups created from user data (e.g., interview, observation, survey, or log data). Apart from simply summarising user data, personas should personify user data to encourage perspective-taking and evoke empathy toward user groups (Cooper, 2014).

In the Personas: Study Guide on the NN/G website, Kaplan (2022, p.1) describes a persona as ‘a fictional, yet realistic, description of a typical or target user of the product [or website]. Through the development of a persona you may promote empathy, increase awareness and memorability of target users, prioritise features, and inform [UX] design decisions.’
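As one illustration of how a persona might be captured in a structured, machine-readable form for AI-assisted testing, here is a minimal hypothetical Python sketch. The field names and the example persona are assumptions for illustration only and are not drawn from Cooper (2014) or Kaplan (2022).

```python
# Hypothetical sketch: a persona as structured data that an AI-assisted testing
# workflow could reuse. Field names and example values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Persona:
    name: str                # fictional, yet realistic, name
    archetype: str           # the "median" user group this persona personifies
    goals: list              # what this user is trying to achieve
    frustrations: list       # pain points gathered from interviews, surveys, or logs
    context: str = ""        # situational detail that supports empathy

# Example persona built from imaginary interview and survey data.
maria = Persona(
    name="Maria",
    archetype="Returning distance-learning student",
    goals=["Find course deadlines quickly", "Submit assignments from a phone"],
    frustrations=["Deeply nested menus", "PDF-only course materials"],
    context="Studies in the evening after work, mostly on mobile data.",
)

print(f"{maria.name}: {maria.archetype}; top goal: {maria.goals[0]}")
```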

Using AI in UX/HCI design testing

When using any system, technological or human, there will be potential advantages and disadvantages. The table below offers some insight into how we might adapt and apply AI to design testing for UX and HCI user- or task-centred designs; a short code sketch after the table illustrates the first of these uses.

Table 1: Advantages and disadvantages of using AI for UX & HCI design testing

Potential advantages | Potential disadvantages
Saving time to generate UX survey questions | Artificial optimism: Synthetic users can be “too agreeable” and lack the emotional depth or unpredictability of real humans.
Saving time to get UX responses for surveys and interviews based on AI-generated personas | Hallucinations: AI personas may “invent” data if asked questions outside their training data or persona description.
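To ground the first advantage in Table 1, below is a minimal hypothetical Python sketch of drafting UX survey questions from a persona description with a large language model. The helper `call_llm` is a placeholder assumption, not a real library call; the prompt wording and parameters are likewise illustrative.

```python
# Hypothetical sketch: asking a large language model to draft UX survey questions
# for a persona. `call_llm` is a placeholder, not a real API; swap in whichever
# model provider the research team adopts.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted or local language model."""
    raise NotImplementedError("Connect this to your chosen model provider.")

def draft_survey_questions(persona_description: str, n_questions: int = 5) -> str:
    prompt = (
        "You are assisting a UX researcher.\n"
        f"Persona: {persona_description}\n"
        f"Draft {n_questions} open-ended survey questions that probe this persona's "
        "goals and frustrations. Avoid leading questions."
    )
    return call_llm(prompt)
```

Given the disadvantages in the right-hand column of Table 1 (artificial optimism and hallucination), any AI-drafted questions or AI-generated persona responses would still need review by a human researcher before use.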

Hawthorne Effect

Will a human respond differently to an HCI/UX test if they know they are being watched or the UX Consultant is in the same room? What about using an AI agent to respond to HCI/UX tests?

Writing on the UX Planet website (Medium), Purwar and Kamuni (2019) discuss the issue of humans responding differently to HCI/UX testing questions when the UX Consultant is present in person (or via web video conferencing). This observer effect, known as the Hawthorne effect, is the tendency of people to work harder and perform better when they are participants in an experiment or UX research study. It suggests that individuals may change their behaviour because of the attention they are receiving from UX testers rather than because of the manipulation of independent variables. While an AI bot is not human, its dataset comes from human-generated information, which may cause the AI's responses to hallucinate or to mimic a Hawthorne effect.

Here is a sample PDF of a case study we are starting to explore, using AI to design UX persona surveys: https://abasiel.uk/wp-content/uploads/2026/01/ai-for-ux-testing.pdf