Date of Award
Fall 11-21-2017
Degree Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Psychology
First Advisor
Jane Halpert, PhD
Second Advisor
Goran Kuljanin, PhD
Third Advisor
Doug Cellar, PhD
Abstract
Pedagogical agents are "conversational virtual characters employed in electronic learning environments to serve various instructional functions" (Veletsianos & Miller, 2008). They can take a variety of forms, and have been designed to serve various instructional roles, such as mentors, experts, motivators, and others. Given the increased availability and sophistication of technology in recent decades, these agents have become increasingly common as facilitators of training in educational settings, private institutions, and the military. Software to aid in the creation of pedagogical agents is widely available. Additionally, software use and agent creation often require little formal training, affording nearly anyone the opportunity to create content and digital trainers to deliver it. While the popularity of these instructional agents has increased rapidly in practice, it has outpaced research into best practices for agent design and instructional methods. The personas programmed into pedagogical agents are recognizable by the people interacting with them, and have been shown to impact various learning outcomes. The form and realism of training agents have also been shown to have substantial impacts on people's perceptions of and relationships with these agents. Additionally, agents can be designed in environments that utilize different methods of content delivery (e.g., spoken words versus text), resulting in varying levels of cognitive load (and thus, varying learning outcomes). In an educational setting, agent perceptions and interactions could impact the effectiveness of a training program. This meta-analysis uses the Integrated Model of Training Evaluation and Effectiveness (IMTEE) as an over-arching framework to examine the effects of training characteristics on training evaluation measures (Alvarez, Salas, & Garofano, 2004).
Training characteristics refer to any training-specific qualities that may impact learning outcomes compared to other training programs that offer the same or similar content. Training evaluation refers to the practice of measuring important training outcomes to determine whether a training initiative meets its stated objectives. The pedagogical agent training characteristics evaluated in this study include agent iconicity (level of detail and realism), agent roles, and agent instructional modalities. The evaluation measures being examined include post-training self-efficacy, cognitive learning, training performance, and transfer performance. Based on the Uncanny Valley Theory (Mori, 1970), agent iconicity was expected to relate to training evaluation measures differently for human-like and non-human-like agents, such that low levels of iconicity (high realism) in non-human-like agents and moderate levels of iconicity in human-like agents would result in optimal training outcomes. These hypotheses were partially supported in that trainees achieved the highest levels of performance on transfer tasks when working with moderately realistic human-like trainers. No significant effects were seen for non-human-like trainers. Additionally, it was expected that the relationship between instructional modality and all training evaluation measures would be positive and stronger for modalities that produce deeper cognitive processing (Explaining and Questioning) than for modalities that produce shallower processing (Executing and Showing). This hypothesis was not supported. The relationship between agent role and all training evaluation measures was expected to be positive and stronger for roles that produce deeper cognitive processing (Coaching and Testing) than for roles that produce shallower processing (Supplanting and Demonstrating). This hypothesis was not supported.
Additionally, agents that minimize extraneous cognitive processing were expected to outperform those that require excess cognitive demands. Agents that utilize speech, personalized messages, facial expressions, and gestures were expected to lead to improved training outcomes compared to those that primarily utilize text, speak in monologue, are expressionless, and/or are devoid of gestures. This hypothesis was partially supported in that agents that were merely present on-screen (physically directing learner attention) resulted in the lowest transfer task performance compared to more active agents that delivered actual content (via speech or text). Learner control (versus trainer control) over support delivery was expected to contribute to improved training outcomes, and support that is delayed in its delivery was expected to hinder performance on training evaluation measures. These hypotheses were not supported. This meta-analysis, backed by an integration of theories from computer science and multiple disciplines within psychology, contributes to the field of employee training by informing decisions regarding when and how pedagogical agents can best be used in applied settings as viable training tools.
Recommended Citation
Quesnell, Timothy J., "Effects of Pedagogical Agent Design on Training Evaluation Measures: A Meta-Analysis" (2017). College of Science and Health Theses and Dissertations. 242.
https://via.library.depaul.edu/csh_etd/242
SLP Collection
no