
WORK AND ORGANISATIONAL PSYCHOLOGY

SAGE BENCHMARKS IN PSYCHOLOGY

WORK AND ORGANISATIONAL PSYCHOLOGY VOLUME II Assessment and Selection

Edited by

Gregory J. Boyle, John G. O’Gorman and Gerard J. Fogarty

Los Angeles | London | New Delhi Singapore | Washington DC

SAGE Publications Ltd 1 Oliver’s Yard 55 City Road London EC1Y 1SP

© Introduction and editorial arrangement by Gregory J. Boyle, John G. O’Gorman and Gerard J. Fogarty, 2015 First published 2015

SAGE Publications Inc. 2455 Teller Road Thousand Oaks, California 91320 SAGE Publications India Pvt Ltd B 1/I 1, Mohan Cooperative Industrial Area Mathura Road New Delhi 110 044 SAGE Publications Asia-Pacific Pte Ltd 3 Church Street #10-04 Samsung Hub Singapore 049483

Editor: Luke Block Assistant editor: Colette Wilson Permissions: Enid Andrew Production controller: Bhairav Dutt Sharma Proofreader: Marketing manager: Teri Williams Cover design: Wendy Scott Typeset by Diligent Typesetter, Delhi Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY [for Antony Rowe]

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act, 1988, this publication may be reproduced, stored or transmitted in any form, or by any means, only with the prior permission in writing of the publishers, or in the case of reprographic reproduction, in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers. Every effort has been made to trace and acknowledge all the copyright owners of the material reprinted herein. However, if any copyright owners have not been located and contacted at the time of publication, the publishers will be pleased to make the necessary arrangements at the first opportunity.

Library of Congress Control Number: 2015936340 British Library Cataloguing in Publication data A catalogue record for this book is available from the British Library

At SAGE we take sustainability seriously. Most of our products are printed in the UK using FSC papers and boards. When we print overseas we ensure sustainable papers are used as measured by the Egmont grading system. We undertake an annual audit to monitor our sustainability.

ISBN: 978-1-4739-1671-5 (set of five volumes)

Contents

Volume II: Assessment and Selection

Introduction: Personnel Assessment and Selection
Gregory J. Boyle, John G. O'Gorman and Gerard J. Fogarty (p. vii)

17. The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings
Frank L. Schmidt and John E. Hunter (p. 1)

18. Which Personality Attributes Are Most Important in the Workplace?
Paul R. Sackett and Philip T. Walmsley (p. 29)

19. A Meta-Analytic Study of General Mental Ability Validity for Different Occupations in the European Community
Jesús F. Salgado, Neil Anderson, Silvia Moscoso, Cristina Bertua, Filip de Fruyt and Jean Pierre Rolland (p. 51)

20. Predicting Training Success with General Mental Ability, Specific Ability Tests, and (Un)Structured Interviews: A Meta-analysis with Unique Samples
Matthias Ziegler, Erik Dietl, Erik Danay, Markus Vogel and Markus Bühner (p. 81)

21. Extending Boundaries of Human Resource Concepts and Practices: An Innovative Recruitment Method for Indigenous Australians in Remote Regions
Cecil A.L. Pearson and Sandra Daff (p. 103)

22. Fact and Fiction in Cognitive Ability Testing for Admissions and Hiring Decisions
Nathan R. Kuncel and Sarah A. Hezlett (p. 123)

23. Employment Interview Reliability: New Meta-analytic Estimates by Structure and Format
Allen I. Huffcutt, Satoris S. Culbertson and William S. Weyhrauch (p. 135)

24. Work Sample Selection Tests and Expected Reduction in Adverse Impact: A Cautionary Note
Philip Bobko, Philip L. Roth and Maury A. Buster (p. 159)

25. Incremental Validity of Assessment Center Ratings over Cognitive Ability Tests: A Study at the Executive Management Level
Diana E. Krause, Martin Kersting, Eric D. Heggestad and George C. Thornton, III (p. 177)

26. Career Assessment and the Sixteen Personality Factor Questionnaire
J.M. Schuerger (p. 197)

27. Personality and Employee Selection: Credibility Regained
Cynthia D. Fisher and Gregory J. Boyle (p. 217)

28. "Dark Side" Personality Styles as Predictors of Task, Contextual, and Job Performance
Silvia Moscoso and Jesús F. Salgado (p. 235)

29. Personality and Job Satisfaction: The Mediating Role of Job Characteristics
Timothy A. Judge, Joyce E. Bono and Edwin A. Locke (p. 247)

30. Intelligence, Personality, and Interests in the Career Choice Process
Phillip L. Ackerman and Margaret E. Beier (p. 275)

31. A Different Look at Why Selection Procedures Work: The Role of Candidates' Ability to Identify Criteria
Martin Kleinmann, Pia V. Ingold, Filip Lievens, Anne Jansen, Klaus G. Melchers and Cornelius J. König (p. 289)

32. Social Media for Selection? Validity and Adverse Impact Potential of a Facebook-based Assessment
Chad H. Van Iddekinge, Stephen E. Lanivich, Philip L. Roth and Elliott Junco (p. 311)

Introduction: Personnel Assessment and Selection Gregory J. Boyle, John G. O’Gorman and Gerard J. Fogarty

Selecting the right person for the job and the right job for the person was an early focus in the application of modern psychology to the world of work (Koppes & Pickren, 2007). Selection, in the more limited sense of finding the right person for the job, was to continue as a major theme in industrial/organisational (I/O) psychology throughout the 20th century (Zickar & Gibby, 2007). Selecting the right job for the person became a separate field known as vocational guidance (cf. Noty, 1986; Super, 1955). Early advocates of the use of psychological measures in selection saw their value in improving industrial efficiency. First in Germany with William Stern (1911), and later in the United States with Hugo Münsterberg (1913), the psychological methods of controlled observation and measurement were shown to be powerful tools for adding value to decisions about whom to employ. Stern showed an early interest in vocational aptitude and in the use of Binet's methods for studying the role of cognitive ability in performance (Allport, 1938; Heider, 1968). In many ways, Stern pioneered the study of differential psychology, which has been an integral part of psychological selection work ever since (Boyle & Saklofske, 2004). Münsterberg, like Stern, applied the measurement of individual differences to many fields of human endeavour. His work on the selection of motormen for the Boston tramway service and of operators for the Bell Telephone Company is still regarded as foundational to the field (Moscovitz, 1977). Entry of the United States into World War I provided a major impetus for the measurement of individual differences, especially intelligence, in selection and classification. Robert Yerkes, Lewis Terman, and other applied psychologists administered the Army Alpha and Beta cognitive ability tests to well over a million military recruits, paving the way for large-scale application of testing to selection within industry (Rogers, 1995).
Measures of personality traits and vocational interests were subsequently added to measures of intelligence and used widely within both government and private enterprise sectors. Some of the major interest inventories include the Self-Directed Search (SDS; Holland & Messer, 2013), the Vocational Preference Inventory (VPI; Holland, 1985), the Kuder Occupational Interest Survey (KOIS; Kuder, 1974) and the Vocational Preference Record (VPR; Kuder, 1991), the Strong–Campbell Interest Inventory (SCII; Strong et al., 1994), the Jackson Vocational Interest Survey (JVIS; Jackson, 1999), the Vocational Interest Measure (VIM; Sweney & Cattell, 1980), and the Rothwell–Miller Interest Blank (RMIB; Miller, 1968), to mention just a few.

Although subject to criticism almost from the outset (Cronbach, 1975), use of psychological methods for selection became a major focus of controversy with the rise of the civil rights movement in the United States. The possibility of perpetuating disadvantage by employing methods that discriminated against minority groups became the subject of inquiry in the courts and the legislature. To the charge of bias were added charges of invading citizens' privacy and of limiting variation in the workforce to a corporate stereotype of the "right" employee (Rogers, 1995). Bias was a major concern, underlining that selection can be as much a political process as a psychological and technical one.

Schmidt and Hunter (1998) provide a broad overview of the field of selection. In recent years, meta-analytic techniques have been used extensively in I/O research (e.g., see DeGeest & Schmidt, 2011; Fernandez & Boyle, 1996, in Volume I). Sackett and Walmsley (2014) review the meta-analytic evidence but supplement it with findings about what employers consider to be other important attributes. Using meta-analysis, Schmidt and Hunter aggregate the findings of years of research into the practical value of the most widely used methods of selection. They first summarize previous work on the methods of assessment that provide the best prediction of performance, including structured interviews, standardized tests of general cognitive ability or GCA (Spearman, 1904), and work sample tests that seek to simulate the tasks or job environment.
Schmidt and Hunter then examine the value of combining these methods and show that GCA plus a structured interview, GCA plus a test of integrity, and GCA plus work sample tests maximize the prediction of workplace criteria.

General Cognitive Ability

Using meta-analysis as their primary tool, Salgado et al. (2003) report substantial operational validity (the extent of association between test and criterion) for GCA against training and performance criteria across occupational groups. They also report that job complexity serves as a moderator variable, such that the more complex the task, the higher the predictive validity. As DeGeest and Schmidt (2011) point out, "work with the GATB test provided the first meta-analytic evidence that GCA was a highly valid predictor of job performance for all jobs [and] that GCA validity varies from 0.74 for the most mentally demanding jobs to 0.39 for unskilled jobs (Hunter et al., 2006)." DeGeest and Schmidt also found that GCA is a valid predictor of "occupational status, income, job performance, and rate of career advancement."


The concept of GCA necessarily combines specific cognitive abilities into an aggregate, which has been a source of debate since the early days of factor analysis of measures of cognitive abilities (cf. Thurstone, 1947). The utility of a single general factor has been compared with that of an array of specific cognitive abilities (e.g., see Boyle, 1988, 1995; Brody, 1992; Cattell, 1987, 1998; Horn, 1988; McGrew, 2009; Schneider & McGrew, 2012; Woliver & Saeks, 1986; Ziegler et al., 2011). For example, the Comprehensive Ability Battery (CAB; Hakstian & Cattell, 1982) is a measure of 20 primary cognitive abilities that is still in use today. While the utility of GCA is confirmed, measures of specific abilities add to the predictive variance.

The selection of minority group members was investigated by Pearson and Daff (2011). Their study was set in a mining company located in northern Australia and involved recruitment of Indigenous men and women to work in the mine or in the adjacent township. The authors report the use of six tasks involving reasoning with shapes and patterns that can be administered using only oral communication, which they argue respects the cultural background of applicants and acknowledges the poor English literacy skills of Indigenous people in remote areas of Australia. Importantly, little formal education (no more than the end of high school) is required of those who administer the tasks. The tasks purport to assess 18 different aptitudes, although only data using total scores are reported. These show acceptable levels of interrater agreement and capacity to discriminate between recruits who are subsequently retained and those who are lost to employment for various reasons. The report has several limitations but is included here because of the difficulties encountered in Australia and elsewhere when tests developed in one culture are used in another.
Note that, in using an aggregate score across the six tasks, Pearson and Daff implicitly assume that a single underlying dimension, such as GCA, influences performance (cf. Brody, 1992). Kuncel and Hezlett (2010) summarize the predictive value of GCA measures in a variety of organisational settings and briefly touch on some of the myths surrounding this construct. They refer to the problem of test bias and its adverse impact and examine the role of socioeconomic status in accounting for test–criterion relations.

Interviews

The most widely used selection device is the interview. It may be used alone or in combination with other methods, and it has widespread acceptance both among those who have the task of selecting applicants and among applicants themselves. For much of the 20th century, however, interviews were held in low regard by psychologists involved in selection because their predictive validity appeared to be low. Wiesner and Cronshaw (1988) drew attention to the degree of structure in the interview as a variable moderating interview–criterion relations. Structured interviews, as we see from Schmidt and Hunter's (1998) work, have validities approaching those of the best psychometric instruments, whereas unstructured interviews perform poorly. Huffcutt et al. (2013) help us understand why structured interviews fare so much better: the answer lies in their reliability, which according to psychometric theory (e.g., Nunnally, 1978) places an upper limit on validity. The authors provide a careful analysis and show that there is a monotonic, although not linear, increase in reliability with increasing structure. They note too that individual interviews, even when structured, are not as reliable as panel interviews in which several interviewers participate.
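The ceiling that reliability places on validity follows from classical test theory: an observed validity coefficient cannot exceed the square root of the product of the reliabilities of predictor and criterion. A minimal sketch of that bound; the reliability values are hypothetical illustrations, not figures from Huffcutt et al.:

```python
import math

def validity_ceiling(rel_predictor, rel_criterion=1.0):
    # Classical test theory: the correlation between a predictor and a
    # criterion cannot exceed sqrt(r_xx * r_yy), where r_xx and r_yy are
    # the reliabilities of the predictor and criterion (cf. Nunnally, 1978).
    return math.sqrt(rel_predictor * rel_criterion)

# Hypothetical reliabilities for illustration: a loosely structured
# one-on-one interview vs. a structured panel interview.
print(validity_ceiling(0.45))  # ceiling of roughly 0.67
print(validity_ceiling(0.80))  # ceiling of roughly 0.89
```

Whatever the true relation between interview judgements and later performance, an unreliable interview cannot show it, which is why increasing structure (and pooling several interviewers' ratings) raises the attainable validity.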

Work Samples, Situational Judgement Tests, and Assessment Centres

Bobko et al. (2005) return us to work samples and the issue of adverse impact. The latter they operationalize in terms of the standardized difference between the means for minority and majority samples rather than in terms of selection ratios for the two groups. They examine the "general understanding" that work sample tests show less adverse impact than other forms of selection technique such as GCA. Their results indicate that this is not the case, at least for the two samples they employed: the standardized difference for aggregate work sample scores is about the same as that for a test of GCA. The reason for this contradiction of the "general understanding", they suggest, is that they employed applicant samples (job seekers), where range restriction is less of a problem, rather than incumbent samples (those already on the job). A selection method somewhat related to work samples is the Situational Judgement Test (SJT). SJTs simulate the work environment, although not necessarily with high fidelity (e.g., only a written description might be provided rather than, say, a video recording), and pose questions to the applicant about situations that might be encountered. A series of optional responses is provided, and the applicant must select one or rank order them in terms of what he or she considers most appropriate; scoring reflects the degree of match between the responses chosen and those of "experts". SJTs have become popular (e.g., they are widely used in medical school admissions) because of their discernible face validity. Lievens et al. (2012) raise the potential problem of "coachability", whereby scores on SJTs can be enhanced by training on situations similar to those being used in a particular context. They report an advantage of about half a standard deviation in the scores of coached over uncoached groups.
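The standardized difference between group means can be computed as the mean difference divided by the pooled standard deviation (Cohen's d). A minimal sketch of that metric; the score lists below are invented for illustration, not data from Bobko et al.:

```python
import statistics

def standardized_difference(group_a, group_b):
    # Mean difference divided by the pooled standard deviation:
    # the adverse-impact metric used in place of a comparison of
    # selection ratios for the two groups.
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = (((n_a - 1) * statistics.variance(group_a)
                   + (n_b - 1) * statistics.variance(group_b))
                  / (n_a + n_b - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_var ** 0.5

# Invented work-sample scores for two applicant groups:
majority = [72, 75, 78, 80, 83]
minority = [65, 70, 72, 74, 77]
print(round(standardized_difference(majority, minority), 2))
```

Expressing adverse impact this way makes results comparable across predictors (work samples, GCA tests) regardless of their raw score scales, which is what allows Bobko et al.'s like-for-like comparison.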
An approach that involves high-fidelity simulation of the workplace is the Assessment Centre (AC) method, frequently used in executive selection (Krause et al., 2006). Small groups of applicants are observed over an extended period of time as they complete tasks that might be encountered on the job or that challenge group and interpersonal skills such as leadership. The method dates back to officer selection in the German Army in World War I and has been used in military and corporate contexts in a number of countries ever since. Kleinmann et al. (2011) argue that, in complex situations where AC observations or structured interviews are predictive, it is because the applicants understand the criteria to be predicted and behave accordingly. Rather than being a source of artefact, they argue, this capacity to "see through" the purpose of the method is fundamental to its value. Krause et al. examine the additive predictive value of the AC over GCA. This question of additive predictive value, or incremental validity, is important given the high cost of the AC. The authors report that adding the AC increases the predictive variance of GCA alone by approximately 5%. This may not seem much, but there is the additional benefit of face validity for all stakeholders with this form of selection.
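With only two predictors, incremental validity can be expressed directly in terms of zero-order correlations via the standard two-predictor multiple-correlation formula. A minimal sketch; the correlation values below are hypothetical illustrations, not the figures reported by Krause et al.:

```python
def incremental_r2(r_gca_perf, r_ac_perf, r_gca_ac):
    # Delta R^2 from adding AC ratings to GCA, using the two-predictor
    # multiple-R formula: R^2 = (r1^2 + r2^2 - 2*r1*r2*r12) / (1 - r12^2).
    r2_gca = r_gca_perf ** 2
    r2_both = (r_gca_perf ** 2 + r_ac_perf ** 2
               - 2 * r_gca_perf * r_ac_perf * r_gca_ac) / (1 - r_gca_ac ** 2)
    return r2_both - r2_gca

# Hypothetical correlations: GCA-performance .5, AC-performance .4,
# and a GCA-AC overlap of .5, which leaves roughly 3% incremental variance.
print(round(incremental_r2(0.5, 0.4, 0.5), 3))
```

The formula makes the cost-benefit issue concrete: the more the AC ratings overlap with GCA, the smaller the increment in explained variance, however valid the AC is on its own.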

Measures of Personality

As well as measures of GCA, samples of behaviour from simulated work environments, and structured interviews, selection specialists have included a variety of personality measures (Prewett et al., 2013; Tett & Christiansen, 2008). One widely used self-report personality measure in selection and guidance work has been Cattell's Sixteen Personality Factor Questionnaire (16PF; Cattell et al., 1970; Cattell & Mead, 2008). Schuerger (1995) reported on the use of the 16PF in matching individuals, on the basis of their profiles, to the average profile of individuals employed in specific occupations. According to Schuerger, "Occupational research on the 16PF has been extensive. The 1970 handbook includes profiles for 73 occupations and 13 performance prediction equations . . . Krug's book (Krug, 1981) cites 81 available occupational profiles. . . . there is published material on well over 200 separate occupations." As measures of the putative "Big Five" constructs account for little more than half the known trait variance within the normal personality sphere alone (Boyle, 2008; Boyle et al., 1995; Cattell, 1995), it is useful to consider other personality measures (e.g., see Boyle et al., 2015). As Fisher and Boyle (1997) point out, in the selection context, aside from widely used self-report personality measures such as the 16PF, the California Psychological Inventory (CPI), and the Occupational Personality Questionnaire (OPQ), "homogeneous item composites" (HICs) in the Hogan Personality Inventory (HPI) have been used to construct "job-relevant personality-like tests of honesty, integrity, and employee reliability (Ones et al., 1993)." In addition, Moscoso and Salgado (2004) included measures of abnormal personality derived from the DSM-IV as predictors of job performance some eight months after initial personality assessment. They report that correlations between dysfunctional personality styles and overa...

