
How Can Artificial Intelligence (AI) Be Applied to Human Computer Interaction (HCI) Design Testing? – a draft paper 28/01/2026

Posted by abasiel in Uncategorized.

Hello UX/HCI Researchers – Please see our draft introduction below. If you are interested in this topic of AI and HCI/UX application, please do email Dr Anthony Basiel at abasiel@gmail.com

LSICT – India conference
ICICDA’26 | Intelligent Systems and Robotics | Human-Computer Interaction
https://www.icicdasrmvdp.com/call-for-papers

Dr Anthony ‘Skip’ Basiel
Academic Director – London School of Intelligent Computing and Technology
a.basiel@lsict.org.uk | https://lsict.org.uk

Dr Mike Howarth
Education Media Consultant
michael.howarth@mhmvr.co.uk | http://mhmvr.co.uk

Introduction

According to Luo (2025), ‘For the past few decades, user research has inherently come with a trade-off: scale or quality.’ This research explores the role of Artificial Intelligence (AI) in human-computer interaction (HCI) and user experience (UX) design.

‘Don’t start with AI, start with the problem’ is the mantra from Caleb Sponheim (2026), Human Computer Interaction (HCI) Design Consultant with the NN/G UX (User Experience) Expert Group. This paper explores the proposition that starting with a technology, rather than a problem, makes it harder to deliver real value to your website users and customers. To start, several HCI terms are offered to help form a common language between authors and readers. Strategies to address the question of how AI can be applied to HCI design testing are then discussed. Several case studies follow as real-world examples of applying AI to support humans interacting with technology. The strategies are then analysed in detail through critical review. Finally, conclusions and recommendations are offered as a way to connect theory with practice and inform the future of integrating AI with human-computer interaction design.

Definition of AI

The white paper by White and Case (2025) describes “AI,” “AI systems” and “AI technologies” as “products and services that are ‘adaptable’ and ‘autonomous’”. Generative AI can be seen as “deep or large language models able to generate text and other content based on the data on which they were trained”. AI systems often develop the ability to perform new forms of inference not directly envisioned by their human programmers.

AI technology enables the programming or training of a device or software to:

(a)    Perceive environments through the use of data

(b)    Interpret data using automated processing designed to approximate cognitive abilities

(c)    Make recommendations, predictions or decisions with a view to achieving a specific objective.

Definition of Human Computer Interaction Personas

Personas are representations of archetypal or “median” user groups created from user data (e.g., interview, observation, survey, or log data). Apart from simply summarising user data, personas should personify user data to encourage perspective-taking and evoke empathy toward user groups (Cooper, 2014).

In the NN/G website Personas: Study Guide, Kaplan (2022, p.1) sees personas as ‘a fictional, yet realistic, description of a typical or target user of the product [or website]. Through the development of a persona you may promote empathy, increase awareness and memorability of target users, prioritise features, and inform [UX] design decisions.’
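To make the definitions above concrete, a persona can be stored as a small data structure summarising user data. The sketch below is only illustrative; the field names and example values are hypothetical, not drawn from any actual study:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A fictional, yet realistic, description of a typical target user,
    summarised from user data (interviews, observations, surveys, logs)."""
    name: str                                              # fictional name to aid memorability
    goals: list[str] = field(default_factory=list)         # what this user group wants to achieve
    frustrations: list[str] = field(default_factory=list)  # pain points drawn from the data
    tech_comfort: str = "medium"                           # e.g. inferred from survey or log data

# Hypothetical persona aggregated from (imaginary) interview and survey data
sam = Persona(
    name="Sam the Site Editor",
    goals=["publish a page in under five minutes"],
    frustrations=["cannot find the preview button"],
)
```

Capturing personas in a structured form like this also makes them easy to feed to an AI agent as a prompt, or to compare across projects.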

Using AI in UX/HCI design testing

In using any system, technological or human, there will be potential advantages and disadvantages. The table below provides some insight into how we may adapt and apply AI into the design testing for UX and HCI user or task-centred designs.

Table 1 Dis/advantages of using AI for UX & HCI design testing

Potential advantages | Potential disadvantages
Saving time to generate UX survey questions | Artificial optimism: synthetic users can be “too agreeable” and lack the emotional depth or unpredictability of real humans.
Saving time to get UX responses for surveys and interviews based on AI-generated personas | Hallucinations: AI personas may “invent” data if asked questions outside their training data or persona description.
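The “hallucinations” risk in Table 1 can be partially mitigated by checking whether a survey question even falls within a persona’s description before trusting a synthetic answer. The naive keyword check below is only an illustrative sketch; the persona text, stopword list and threshold logic are invented for the example:

```python
def flags_out_of_scope(question: str, persona_description: str) -> bool:
    """Naive guard: flag survey questions whose content words never
    appear in the persona description, since an AI persona may 'invent'
    an answer for topics outside its persona data."""
    stopwords = {"how", "do", "you", "the", "a", "what", "is", "your",
                 "when", "often", "which"}
    content = {w.strip("?.,").lower() for w in question.split()} - stopwords
    described = persona_description.lower()
    # If no content word appears in the description, treat as out of scope
    return not any(word in described for word in content)

# Hypothetical persona description for the example
persona = "A retired teacher who uses the library website weekly to reserve books."
```

A real safeguard would need semantic matching rather than substring matching, but the principle of validating questions against the persona data holds either way.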

Hawthorne Effect

Will a human respond differently to an HCI/UX test if they know they are being watched, or if the UX Consultant is in the same room? And what happens when an AI agent responds to HCI/UX tests?

In UX Planet (a Medium publication), Purwar and Kamuni (2019) discuss the issue of humans responding differently to HCI/UX testing questions when the UX Consultant is present in person (or by web video conferencing). This observer effect is the tendency of people to work harder and perform better when they know they are participants in an experiment or UX research study. It suggests that individuals may change their behaviour because of the attention they are receiving from UX testers rather than because of the manipulation of independent variables. While an AI bot is not human, its dataset comes from human-generated information, which may cause the AI to hallucinate or produce responses that mimic a Hawthorne effect.

Here is a sample PDF file of a case study we are starting to explore using AI to design UX persona surveys: https://abasiel.uk/wp-content/uploads/2026/01/ai-for-ux-testing.pdf

SchoRes ICSSH 2025 21/04/2025

Posted by abasiel in Uncategorized.

SchoRes International Conference on Social Sciences & Humanities

I have been asked to be keynote speaker at this conference. See https://www.schores.org/conferences_details.php?acid=124

Here is the abstract and conclusion for the paper we are submitting below. Please email abasiel@gmail.com if you would like the full draft paper. This is a sample AI video I made to demonstrate how we can use AI to make a learning simulation:

Abstract
This paper explores the theory and practice of using artificial intelligence (AI) in curriculum and assessment design. The objectives are to apply AI in four stages. First, using AI in a multiple-choice question student-generated pre-test. This helps establish prior or tacit knowledge of the learners and stakeholders. Second, using AI in conjunction with blended learning role-plays or simulations. The third stage uses AI to construct and evaluate a post-simulation debriefing. Finally, in assessment stage four, AI is used to map the learning objectives to the evidence in the video recording transcripts. A case study methodology gathers evidence through surveys, interviews, video-recorded observations and AI analysis of transcript text. The study concludes that AI can be integrated into the various stages of curriculum and assessment design in the role of a ‘thought partner’ as well as content producer. In these early days of applying AI to teaching, learning pedagogy and assessment, an evidence-based approach to this innovative convergence of technologies is recommended.

Keywords: artificial intelligence, assessment, curriculum, debriefing, pre-test, role-play, simulation, transcript evaluation.

Conclusion

The objectives of this paper are to apply AI in four stages of the curriculum and assessment of learning simulation designs. The first stage used AI in a multiple-choice question student-generated pre-test: learners answered questions and critically reviewed topics. AI was then demonstrated in conjunction with blended learning role-plays or simulations by producing possible scenario descriptions and/or scripts. In the next stage of the model, AI was used to construct and evaluate a post-simulation debriefing script; learning objectives were linked to key words as evidence to support its use in assessment.

Limitations

The various pilot case studies presented in this paper were conducted with small sample groups. Further case studies using the 4-stage AI learning simulations model are needed to gather more evidence of cross-disciplinary use.

Recommendations

Each stage of the application of AI to the curriculum and assessment models presented in this paper can be applied to the context of the reader through these recommendations:

  • As the pre-test question pool grows, a review of the quality of the MCQs can filter out weaker samples.
  • Simulations and role plays can be synthesised with traditional instructional materials to provide learners a safe setting to develop knowledge, skills and confidence.
  • AI has the potential to support debriefing and reflection on learning through more applications of transcript analysis.

The authors encourage readers to contact them for collaboration opportunities to further research and develop the learning models and technologies introduced in this paper.

British Congress Presentation: AI for Healthcare Training 11/06/2024

Posted by abasiel in Uncategorized.
Introduction Video – British Congress 2024: Dr Anthony ‘Skip’ Basiel www.britishcongress.co.uk.

AI and Big Data Session: 27 June 2024
‘An Artificial Intelligence (AI) Learning Simulation Model to Support Healthcare Professionals’ – Dr Anthony Basiel

This research explores ways to use AI integrated in a learning simulation. The aim is to measure the impact of using AI on the confidence level of healthcare professionals doing continuous professional development (CPD) training.

Learning and development events are recorded and auto-transcribed during a 4-stage process:

Stage 1: Pre-test – First we establish the learner’s prior knowledge. Stakeholders complete a quiz that sets the baseline of the academic and tacit (hands-on) knowledge. The questions are generated from the Tutor and Learners aided by an AI agent.

Stage 2: Simulation – A real-world scenario or role-play simulation based on the training learning objectives is used to test the application of the healthcare professional’s knowledge and skills. An AI generator is used by the Tutor to produce a draft of the script for the Facilitator(s). The learners respond to the prompts of the facilitator(s) to demonstrate their expertise and skills in the context of the problem addressed. The event is recorded.

Stage 3: Debriefing – After the role-play a pair of Simulation Evaluators (SE) moderate a debriefing discussion group. This Socratic circle model produces evidence to validate the healthcare professional’s capability and confidence level. One SE is human asking empathetic prompts. An AI generated digital twin provides generic debriefing questions through a computer webinar interface.

Stage 4: Evaluation – The learning outcomes are mapped to the transcript text with assistance from an AI agent. The simulation debriefing results are matched to the starting quiz marks to determine the level of learning. An online survey is used to get quantitative and qualitative data to measure confidence levels of the professionals.
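Stage 4’s mapping of learning outcomes to transcript evidence can be sketched, in much simplified form, as a keyword search over the auto-transcribed text. The outcome labels, key words and transcript excerpt below are hypothetical stand-ins for the AI-assisted step described above:

```python
def map_outcomes_to_transcript(outcomes: dict[str, list[str]],
                               transcript: str) -> dict[str, list[str]]:
    """Return, for each learning outcome, the transcript lines containing
    any of its key words -- a simplified stand-in for the AI-assisted
    Stage 4 mapping step."""
    lines = [ln for ln in transcript.splitlines() if ln.strip()]
    evidence: dict[str, list[str]] = {}
    for label, keywords in outcomes.items():
        evidence[label] = [ln for ln in lines
                           if any(kw.lower() in ln.lower() for kw in keywords)]
    return evidence

# Hypothetical learning outcomes and a toy transcript excerpt
outcomes = {
    "LO1: assess triage priorities": ["triage", "priority"],
    "LO2: communicate with the patient": ["patient", "reassure"],
}
transcript = "First I assessed the triage priority.\nThen I reassured the patient."
evidence = map_outcomes_to_transcript(outcomes, transcript)
```

An AI agent would of course match meaning rather than literal key words, but the output has the same shape: each learning outcome paired with the transcript passages that evidence it.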

This interactive presentation will give the audience an opportunity, via an online survey and with the help of an AI agent, to compose a pre-simulation quiz question with a correct multiple-choice answer linked to a reference.
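A quiz question of the kind the audience is asked to compose could be captured in a structure like the following; the field names and example content are assumptions for illustration, not the survey’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class QuizQuestion:
    stem: str            # the question text
    options: list[str]   # multiple-choice options
    correct: int         # index of the correct option
    reference: str       # citation backing the correct answer

# Hypothetical example question for the pre-simulation quiz
q = QuizQuestion(
    stem="Which stage of the model establishes the learner's prior knowledge?",
    options=["Pre-test", "Simulation", "Debriefing", "Evaluation"],
    correct=0,
    reference="4-stage AI learning simulation model (this presentation)",
)
```

Requiring a `reference` field for every question is one simple way to keep AI-generated items anchored to verifiable sources.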


Hello Congress Delegates. I hope this video introduction finds you well.

I am Dr Anthony ‘Skip’ Basiel from the School of Computing Science at Bournemouth University, Dorset – UK. My research explores blended learning solutions to promote creativity and innovation for healthcare professionals. Recently I was a Post-doc Research Fellow at the Faculty of Health and Social Science where I investigated various technologies and learning designs for face-to-face and online learning simulations. My Doctorate in Learning Technology Design has provided a strong pedagogic and technical foundation to research and develop immersive webinar designs using blended Socratic discussion circles with augmented reality.

I’m presenting my keynote entitled, ‘An Artificial Intelligence (AI) Learning Simulation Model to Support Healthcare Professionals’ Continuing Professional Development’ at the IKSAD British Congress.

With over 60 international publications since 1996, I invite you to visit my website at https://abasiel.uk where you can find journal papers, online learning toolkits and interactive web video tutorials to sample. Please do email abasiel@gmail.com if you are interested in collaborating on any research projects.

I look forward to meeting you in London on the 27th June and encourage you to register for this global event at: www.britishcongress.co.uk.

Thanks for your interest.

Free Online Workshop: AI Turing Test using Digital Twins 23/02/2024

Posted by abasiel in Uncategorized.

Transforming 360° Immersive Webinar Design to Promote Interactivity: Synthesising Augmented Reality and Artificial Intelligence Agents

Anthony Basiel abasiel@bournemouth.ac.uk, UK
David Wortley david@davidwortley.com, UK
Mike Howarth michael.howarth@mhmvr.co.uk, UK
Steve Humphrey executivetrainerswh@gmail.com, UK

See the video introduction by Dr Anthony Basiel: https://youtu.be/RghF6Pg8-a4

Hello Conference Delegates

You are invited to join us on Thursday 29 Feb. ’24 at 3pm for our exciting interactive workshop experiment to debate the conference theme through the use of AI digital twins.

Our next-generation Turing Test gives you the opportunity to vote online on which presenter is human and whose script is AI-generated. Have a look at our pre-event activity at: https://app.klaxoon.com/participate/board/EGEKSHR
or scan the QR code below:

Webinar info: https://www.eventbrite.com/e/bournemouth-university-occe-24-conference-workshop-tickets-839949139487?utm-campaign=social&utm-content=attendeeshare&utm-medium=discovery&utm-term=listing&utm-source=cp&aff=ebdsshcopyurl