Doctor of Philosophy (PhD)
Suzanne T. Bell, Ph.D.
Jane Halpert, Ph.D.
Alice Stuhlmacher, Ph.D.
It is common practice to administer personality assessments in personnel selection because they cost-effectively predict organizationally relevant criteria with relatively small subgroup differences. However, concerns are often raised about test-taker response bias. The current research focuses on one issue related to personality test accuracy, namely faking. Also called response distortion or inflation, faking is a multidimensional behavior that is intentional, deceptive, and intended to serve the test-taker's own interests. The current study uses the Theory of Planned Behavior (Ajzen, 1985) and expectancy theories of motivation (e.g., Vroom, 1964) as a theoretical basis for understanding faking. Prevalence estimates vary, but around 30% of applicants are commonly classified as fakers. Faking on personality assessments can influence response scores, the validity of inferences drawn from those scores, and ultimately selection decisions. To the extent that any or all of these factors are altered, the utility of the selection system is critically undermined.
Given the prevalence of faking, research has focused on preventing faking on personality assessments and/or reducing its negative impact on organizational decision-making. This dissertation uses meta-analysis to evaluate the efficacy of different faking interventions, which fall into two main categories: preventive and remedial. Remedial interventions alter the interpretations or decisions made from personality scores after test-taker data have been collected. Preventive strategies, on the other hand, seek to limit faking before the behavior occurs: warnings target faking intention, while time limits, forced-choice formats, and decreased item transparency target faking ability. Meta-analysis quantitatively aggregates the results of multiple primary studies, can test relationships not addressed in any single primary study, and can provide summary statements about effects observed in the largely disjointed faking literature. Two meta-analyses were conducted to better understand the effectiveness of faking interventions. First, effect size estimates of the differences between personality scale mean scores (i.e., sample-weighted d's) across intervention conditions were computed. Second, a meta-analysis of the relationships between personality traits and performance outcomes (i.e., correlations) was conducted to estimate the criterion-related validities of personality traits across different intervention conditions.
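The first meta-analytic step described above, aggregating per-study standardized mean differences into a sample-weighted mean d, can be sketched as follows. This is a minimal illustration only: the study values below are hypothetical, and the dissertation's actual analyses involve additional corrections and moderator breakdowns not shown here.

```python
# Minimal sketch of sample-weighted effect size aggregation.
# The (d, n) pairs below are hypothetical, for illustration only.

def weighted_mean_d(effects):
    """Aggregate per-study d's, weighting each by its total sample size n."""
    total_n = sum(n for _, n in effects)
    return sum(d * n for d, n in effects) / total_n

# Hypothetical primary studies: (observed d, sample size).
with_intervention = [(0.20, 120), (0.35, 80), (0.10, 200)]
no_intervention = [(0.60, 150), (0.75, 90)]

# A smaller weighted mean d for the intervention condition would be
# read as the intervention reducing score inflation.
print(round(weighted_mean_d(with_intervention), 3))  # 0.18
print(round(weighted_mean_d(no_intervention), 3))    # 0.656
```

Weighting by sample size gives larger primary studies proportionally more influence on the summary estimate, which is the rationale behind reporting sample-weighted rather than simple mean d's.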
Results suggest that interventions are generally effective at reducing faking behavior, as evidenced by smaller sample-weighted mean d's for studies with a faking intervention than for those without one. Warnings are generally more effective than forced-choice or item-transparency interventions at reducing faking behavior; randomizing items, on the other hand, does little to influence faking. Although based on a limited number of primary studies, the criterion-related validity of personality scores for performance outcomes was not enhanced by the presence of a faking intervention. Taken together, these results suggest that faking interventions can influence observed personality scores but do not appear to influence the ability to make valid inferences based on those scores.
Adair, Christopher, "Interventions for Addressing Faking on Personality Assessments for Employee Selection: A Meta-Analysis" (2014). College of Science and Health Theses and Dissertations. 93.