
A 4-stage AI-assisted assessment model 21/05/2024

Posted by abasiel in Uncategorized.

Dr Anthony ‘Skip’ Basiel | abasiel@gmail.com

This Project Proposal aims to use AI to develop an innovative assessment model by blending it with learning simulations. Education institutions in the United Kingdom (UK) use learning assurance procedures to produce evidence of a learner’s academic knowledge, capabilities or skills (Hoecht 2006). There are three general types of summative assessment. Traditional exams use a criterion-based methodology (Scarpa & Connelly 2011). This is often an individual written essay (open or closed book, online or offline) responding to a sequence of questions. An external set of marking criteria, or rubric, is applied to the responses to establish a grade ranking of learning outcome (LO) mastery from distinction to merit to pass or fail (Cox et al. 2015). This assessment system does not consider the learner’s prior knowledge. The initial assumption is ‘tabula rasa’: the learner is a blank slate (DLE 2024). Rather than measuring the learner’s assessment evidence against a fixed external set of criteria, a norm-referenced model can be used (Hoko 1986). Here the learner’s capabilities are ranked against the others in their assessment group. The highest marks are awarded a distinction in relation to a grading curve (Aviles 2001), but the assessment rubric is still an external criterion.
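To make the contrast concrete, the minimal sketch below grades the same (hypothetical) cohort both ways: against fixed external thresholds and against the cohort’s own distribution. The score thresholds and quartile cut-offs are illustrative assumptions, not any institution’s rubric.

```python
# Illustrative sketch: criterion-referenced vs norm-referenced grading.
# Thresholds, names and percentile cut-offs are assumptions for demonstration only.
from statistics import quantiles

scores = {"Ana": 82, "Ben": 67, "Cai": 55, "Dee": 48, "Eli": 71}

def criterion_grade(score):
    # Fixed external criteria: the same rubric applies regardless of the cohort.
    if score >= 70: return "Distinction"
    if score >= 60: return "Merit"
    if score >= 50: return "Pass"
    return "Fail"

def norm_grades(all_scores):
    # Norm-referenced: rank learners against their own cohort (a simple grading curve).
    q1, q2, q3 = quantiles(all_scores.values(), n=4)
    return {name: ("Distinction" if s >= q3 else
                   "Merit" if s >= q2 else
                   "Pass" if s >= q1 else "Fail")
            for name, s in all_scores.items()}

for name, s in scores.items():
    print(name, criterion_grade(s))   # same cut-offs for every cohort
print(norm_grades(scores))            # cut-offs move with the group's distribution
```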

What assessment model can factor in the learner’s prior and professional experiential, or tacit, knowledge? An ipsative assessment design uses a pre-test comparison to the final evaluation (Hughes 2011). In this way we set learning benchmarks. First, we establish the baseline of the learner’s knowledge and skills. Next, stakeholders progress through the learning event’s curriculum. A summative assessment concludes the sequence. Marking is done by measuring the difference between the pre-test and the endpoint.
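A minimal worked sketch of that marking step follows: the grade reflects each learner’s gain from their own pre-test baseline rather than an external ranking. The learning-outcome codes and scores are hypothetical.

```python
# Ipsative marking sketch: measure the difference between the pre-test baseline
# and the summative endpoint for each learning outcome. Data are hypothetical.
pre_test  = {"LO1": 40, "LO2": 55, "LO3": 30}
post_test = {"LO1": 72, "LO2": 68, "LO3": 65}

gains = {lo: post_test[lo] - pre_test[lo] for lo in pre_test}
overall_gain = sum(gains.values()) / len(gains)

print(gains)          # per-outcome improvement, e.g. {'LO1': 32, ...}
print(overall_gain)   # average improvement used as the ipsative benchmark
```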

These summative assessment models can also use a web 2.0 learner-generated content methodology (Basiel & Coyne 2010). A learner-centred pedagogy uses a bottom-up curriculum/assessment design, demonstrated in two interventions:
- Self-assessment – Pre-set learning outcomes are mapped to the evidence in the deliverables of the assessment media. For example, in a written exam the LO is linked to the page providing evidence of mastery. In a dynamic role-play (learning simulation) assessment, the link could be to the transcript or video of the skill performed (see the sketch after this list).
- Peer review – A study partner or group provides the learner with peer feedback from the stakeholders’ perspective on the LOs. A read-aloud protocol (Gibson 2008) can be used to hear grammar mistakes or unclear narration.
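The sketch below shows one way the self-assessment mapping could be represented as data: each learning outcome points to the evidence the learner claims demonstrates it, whether an essay page or a timestamp in a role-play recording. The field names and helper are illustrative assumptions, not part of any existing system.

```python
# Minimal sketch of a learner-generated self-assessment map (Web 2.0 style):
# each LO is linked to the artefact and location that evidences it.
# Field names and example values are illustrative assumptions.
evidence_map = {
    "LO1": {"artefact": "written_exam.pdf", "locator": "page 3",
            "note": "applies data-protection principles to the case"},
    "LO2": {"artefact": "role_play_recording.mp4", "locator": "00:12:40",
            "note": "demonstrates active-listening technique"},
}

def missing_outcomes(required, mapped):
    # Flag any pre-set learning outcome with no evidence attached.
    return [lo for lo in required if lo not in mapped]

print(missing_outcomes(["LO1", "LO2", "LO3"], evidence_map))  # -> ['LO3']
```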

This project explores how AI can be used in combination with learning simulations as a process supported by the content and/or skills required to be successful (Linser & Ip 2002). Shaharuddin et al. (2012) recognise the importance of underpinning the learning simulation choreography with a social constructivist foundation. The stakeholders in the simulated learning event make meaning as they address the issue or solve the problem of the scenario. The learner can be an individual interacting with an avatar or with other humans via a webinar platform. The simulation facilitator(s) provide an induction and context to the scenario before the role-play is initiated.

This proposed blended learning model uses a convergence of AI with interactive webinar design. It is a trans-disciplinary approach that aims to promote the development of procedural knowledge, creativity and innovation, as well as (if needed) the demonstration of specific skills. For example, if the simulation were used to assess the SGS counselling courses (SGS 2024), a scenario validating the required counselling practice could be demonstrated through the virtual role-play.

Where does the AI component fit into the framework design? There are three stages in our AI Blended Simulation Assessment model:

  1. Pre-test
  2. Virtual Simulation
  3. Debriefing
Stage 1: A generative AI tool such as Teachermatic (2024) can provide a pre-test multiple choice quiz (MCQ) with correct responses and links to core reading references. The prompt may include the knowledge domain, academic/professional level, course learning outcomes/competencies to acquire, links to government standards (e.g. data protection, GDPR 2024), ethical guidelines (e.g. BERA 2024), etc. The tutor can then review and refine the pre-assessment tool to help establish the learner’s starting knowledge. A variation of this stage is to have the students contribute to the MCQ pool of questions, answers and reference sources. This is another example of a learner-generated (Web 2.0) assessment approach.
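Teachermatic is used through its own interface, so the sketch below is only a generic illustration of how the prompt elements listed above might be assembled for any large-language-model tool. The template, function name and example values are assumptions, not Teachermatic’s actual prompt or API.

```python
# Sketch of assembling the Stage 1 prompt elements (domain, level, LOs, standards,
# ethics) into a pre-test MCQ request for a generic generative AI tool.
# Everything below is an illustrative assumption.
def build_pretest_prompt(domain, level, outcomes, standards, ethics, n_questions=10):
    return (
        f"Create a {n_questions}-question multiple choice pre-test for a {level} "
        f"course in {domain}.\n"
        f"Cover these learning outcomes: {'; '.join(outcomes)}.\n"
        f"Observe these standards: {', '.join(standards)} and ethical guidelines: "
        f"{', '.join(ethics)}.\n"
        "For each question give the correct answer and a link to a core reading reference."
    )

prompt = build_pretest_prompt(
    domain="counselling practice",
    level="Further Education",
    outcomes=["LO1: apply active listening", "LO2: maintain client confidentiality"],
    standards=["UK GDPR (https://gdpr-info.eu/)"],
    ethics=["BERA 2024 guidelines"],
)
print(prompt)  # the tutor reviews and refines the generated quiz before use
```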
Stage 2: The production of the learning simulation is linked to the academic partners for this pilot case study. An output of the study will be establishing the appropriate blend of AI, video and human role-play to be used with each tutor and their subject-specific requirements. Using the module learning outcomes as a guide, a script can be played out by human participants and/or digital twins. An example of how AI and digital twins can be used is seen in a blended workshop conducted at Bournemouth University (2024) and presented at a conference at the University of London (Basiel 2024). Recordings of the event provide evidence that can validate the module learning outcomes. Variations of digital simulation environments will be explored. Klaxoon (2024) is a platform we can use that has partnered with us in the past.
Stage 3: Arguably the most important part of the learning situation is the debriefing at the end (Fanning & Gaba 2007). One or two facilitators (on- or offline) ask open-ended questions of the key simulation actors (Cheng et al. 2015). The discussion identifies general reflection issues such as what went well, where could you improve and what would you do differently. Subject-specific questions linked to the learning outcomes can be added to the debriefing script. One variation to be explored is the use of digital twins as one of the debriefing facilitators for non-empathetic questions. The virtual debriefing model can use a Socratic discussion circle (Howarth & Basiel 2022). Key actors sit in a central circle with the observers in an outer circle. Responses start with the inner-circle actors, with the outer-circle learners then joining with their perspectives. In-person debriefing can use a 360° augmented reality camera to record the event, allowing the viewer to click on the video to see around the room from different angles. See an example at https://abasiel.uk/2024/03/15/360-immersive-blended-webinar-ai-turing-test/ .
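A small sketch of how a Stage 3 debriefing script could be sequenced is shown below: the general reflection questions from the text, plus a subject-specific question tied to a learning outcome, put first to the inner (actor) circle and then opened to the outer (observer) circle. The participant names and the LO-specific question are illustrative assumptions.

```python
# Sketch of a Socratic-circle debriefing script for Stage 3.
# Names and the LO-specific question are hypothetical examples.
GENERAL_QUESTIONS = [
    "What went well?",
    "Where could you improve?",
    "What would you do differently?",
]

def debriefing_order(questions, inner_circle, outer_circle):
    # Socratic circle: every question goes to the key actors before the observers.
    for q in questions:
        for speaker in inner_circle + outer_circle:
            yield speaker, q

lo_questions = ["How did you evidence LO2 (client confidentiality) in the role-play?"]
for speaker, question in debriefing_order(GENERAL_QUESTIONS + lo_questions,
                                          inner_circle=["Actor A", "Actor B"],
                                          outer_circle=["Observer 1", "Observer 2"]):
    print(f"{speaker}: {question}")
```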

SMART (specific, measurable, achievable, relevant and time-bound) Project Objectives:
This proposal aims to address the BERA goals to research and develop the innovative use of AI in a Further Education context. The SMART objectives are:
- Establish a sound project management strategy to promote communication between the study partner organisations and the stakeholders.
- Identify appropriate technologies to support the case study for the AI Blended Simulation Assessment model.
- Apply underpinning pedagogic designs to research and develop the case study stages (e.g. pre-test, simulation, debriefing).
- Comply with BERA ethics and GDPR policy to collect and analyse data to attempt to validate project conclusions.
- Build a virtual learning community to disseminate project findings.

Evaluation
The project designs will be validated with data collected in a pre-/post-event survey and webinar focus groups. We establish the stakeholders’ expectations going into the learning event, which are then matched against the summative assessment results.
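A minimal sketch of that matching step follows: each stakeholder’s pre-event expectation rating is paired with their summative result to show whether expectations were met. The 1–5 scale and the data are illustrative assumptions, not collected results.

```python
# Evaluation sketch: pair pre-event expectation ratings with summative outcomes.
# The 1-5 scale and all values are hypothetical.
pre_expectations = {"S1": 4, "S2": 2, "S3": 5}    # survey before the learning event
summative_results = {"S1": 3, "S2": 4, "S3": 5}   # assessment outcome on the same scale

for stakeholder in pre_expectations:
    diff = summative_results[stakeholder] - pre_expectations[stakeholder]
    label = "exceeded" if diff > 0 else "met" if diff == 0 else "fell short of"
    print(f"{stakeholder}: result {label} expectation (difference {diff:+d})")
```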

References

Aviles, C. B. (2001). Grading with norm-referenced or criterion-referenced measurements: To curve or not to curve, that is the question. Social Work Education, 20(5), 603–608. https://doi.org/10.1080/02615470120072869
Basiel, A. & Coyne, P. (2010). Exploring a Professional Social Network System to Support Learning in the Workplace. In Social Web Evolution: Integrating Semantic Applications and Web 2.0 Technologies. DOI: 10.4018/978-1-60566-272-5.ch001
Basiel, A. (2024) https://abasiel.uk/2024/04/26/u-of-london-ride-conference-24/

BERA (2024) https://www.bera.ac.uk/publication/ethical-guidelines-for-educational-research-2018

Cheng, A., Palaganas, J., Eppich, W., Rudolph, J., Robinson, T. & Grant, V. (2015). Co-debriefing for Simulation-based Education: A Primer for Facilitators. Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare, 10(2), 69–75. DOI: 10.1097/SIH.0000000000000077
Cox, G., Morrison, J. & Brathwaite, B. (2015). The Rubric: An Assessment Tool to Guide Students and Markers. In 1st International Conference on Higher Education Advances (HEAd’15). Editorial Universitat Politècnica de València, 26–32. https://doi.org/10.4995/HEAd15.2015.414
Discourses on Learning in Education (DLE 2024) https://learningdiscourses.com/discourse/tabula-rasa/

Fanning, R. M. & Gaba, D. M. (2007). The Role of Debriefing in Simulation-Based Learning. Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare, 2(2), 115–125. DOI: 10.1097/SIH.0b013e3180315539
GDPR (2024) https://gdpr-info.eu/

Gibson S. (2008), Reading aloud: a useful learning tool?, ELT Journal, Volume 62, Issue 1, January 2008, Pages 29–36, https://doi.org/10.1093/elt/ccm075

Hoecht, A. (2006). Quality Assurance in UK Higher Education: Issues of Trust, Control, Professional Autonomy and Accountability. Higher Education, 51, 541–563. https://doi.org/10.1007/s10734-004-2533-2
Hoko, J. A. (1986). Evaluating Instructional Effectiveness: Norm-Referenced and Criterion-Referenced Tests. Educational Technology, 26(10), 44–47. JSTOR, http://www.jstor.org/stable/44424737 (accessed 28 Apr. 2024).
Howarth, M.S., Basiel, A. (2022). Production of the 70:20:10 Webinar. In: MacCallum, K., Parsons, D. (eds) Industry Practices, Processes and Techniques Adopted in Education. Springer, Singapore. https://doi.org/10.1007/978-981-19-3517-6_13
Hughes, G. (2011) ‘Towards a personal best: a case for introducing ipsative assessment in higher education’, Studies in Higher Education, 36(3), pp. 353–367. doi: 10.1080/03075079.2010.486859.
Klaxoon (2024) – https://klaxoon.com

Linser, R. & Ip, A. (2002). Beyond the Current E-Learning paradigm: Applications of Role Play Simulations (RPS) – case studies. In M. Driscoll & T. Reeves (Eds.), Proceedings of E-Learn 2002–World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 606-611). Montreal, Canada: Association for the Advancement of Computing in Education (AACE). Retrieved April 29, 2024 from https://www.learntechlib.org/primary/p/15277/.

Scarpa, R. & Connelly, P. E. (2011). Innovations in Performance Assessment: A Criterion Based Performance Assessment for Advanced Practice Nurses Using a Synergistic Theoretical Nursing Framework. Nursing Administration Quarterly, 35(2), 164–173. DOI: 10.1097/NAQ.0b013e31820fface
SGS (2024) Counselling Courses https://www.sgscol.ac.uk/study/counselling

Shaharuddin Md. Salleh, Zaidatun Tasir & Nurbiha A. Shukor (2012). Web-Based Simulation Learning Framework to Enhance Students’ Critical Thinking Skills. Procedia – Social and Behavioral Sciences, 64, 372–381. ISSN 1877-0428. https://doi.org/10.1016/j.sbspro.2012.11.044 (https://www.sciencedirect.com/science/article/pii/S1877042812050215)
Teachermatic (2024) https://teachermatic.com/