Title R&S Chapter 2 - Foundations of Recruitment & Selection I Reliability & Validity
Author Emma Johnston
Course Human Resources Management
Institution George Brown College

R&S CHAPTER 2 - FOUNDATIONS OF RECRUITMENT AND SELECTION I: RELIABILITY AND VALIDITY

Learning Outcomes

After reading this chapter, you should be able to:
• discuss the basic components that make up a traditional personnel selection model;
• explain the concepts of reliability and validity;
• recognize the importance and necessity of establishing the reliability and validity of measures used in personnel selection;
• identify common strategies that are used to provide evidence on the reliability and validity of measures used in personnel selection;
• discuss the requirement for measures used in personnel selection to evaluate applicants fairly and in an unbiased fashion; and
• describe the practical steps needed to develop a legally defensible selection system.

1. Components of Selection Systems



• KSAOs: the knowledge, skills, abilities, and other attributes necessary for a new incumbent (i.e., the employee formally holding the job) to do well on the job; also referred to as job, employment, or worker specifications (in other words, a detailed description of the requirements for the job)
• Competencies: groups of related behaviors or attributes that are needed for successful job performance in an organization

2. Recruitment and Selection: The Hiring Process

• Recruitment and Selection Today 2.1 outlines the hiring practices used by the Toronto Police Service.
• This process illustrates the major components of personnel selection that will be discussed in class and in the text.
• The model emphasizes the importance of valid and reliable recruitment and selection testing.



Hiring decisions should meet legal requirements and should not be based upon gut feelings or intuition (see Table 2.1).

3. Constructs and Variables

Construct: refers to ideas or concepts constructed or invoked to explain relationships between observations – For example, the construct “extraversion” has been invoked to explain the relationship between “social forthrightness” and sales.



Variable: refers to how someone or something varies on the construct of interest – For example, the variable “IQ” is used to represent variability in intelligence.

4. Reliability

• Reliability: the degree to which observed scores are free from random measurement errors; provides an indication of the stability or dependability of a set of measurements over repeated applications of the measurement procedure

• Reliability estimates are subject to measurement error.
– Measurement errors affect how reliability estimates should be interpreted.
– Figure 2.2 illustrates error in reliability.
❖ The textbook provides a good run-through, using an example of measurement error, in the "Reliability" section leading up to Figure 2.2.

4.1. Interpreting Reliability Coefficients

True score: the average score that an individual would earn on an infinite number of administrations of the same test or parallel versions of the same test



Error score: the hypothetical difference between an observed score and a true score

4.2. Measurement Error

Measurement error: the hypothetical difference between an observed score and a true score; comprises both random error and systematic error
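A compact way to summarize the relationships among observed, true, and error scores is classical test theory notation. This is standard psychometric shorthand rather than something given in these notes, so treat it as a supplementary sketch; here X is an observed score, T the true score, E the error score, and r_XX the reliability of the measure:

\[
X = T + E, \qquad r_{XX} = \frac{\sigma_T^2}{\sigma_X^2}
\]

In words: each observed score is the sum of a true score and an error score, and reliability is the proportion of observed-score variance that is true-score variance.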



Standard error of measurement: a statistical index that summarizes information related to measurement error and reflects how an individual’s score would vary, on average, over repeated observations that were made under identical conditions

❖ Professional standards simply say that reliability should be “sufficiently high.” The U.S. Department of Labor produced an employer’s best practices guide for testing and assessment, which is available through the O*NET Resource Center.

❖ The guide presents guidelines for interpreting reliability coefficients, which are reproduced in Table 2.2, GENERAL GUIDELINES FOR INTERPRETING RELIABILITY COEFFICIENTS, on page 40 of the textbook.

4.3. Factors Affecting Reliability

• The factors that introduce error into any set of measurements, and thus affect reliability estimates, can be organized into three broad categories:
– temporary individual characteristics
– lack of standardization
– chance
• See Recruitment & Selection Today 2.2, Examples of Factors That May Affect Reliability.
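As a quick numerical illustration of the standard error of measurement from Section 4.2 (the numbers are made up, not from the textbook): suppose a selection test has an observed-score standard deviation of 15 and a reliability of .89. Then

\[
\mathrm{SEM} = \sigma_X \sqrt{1 - r_{XX}} = 15\sqrt{1 - 0.89} \approx 5
\]

so an applicant who scores 100 would be expected to score within roughly 95 to 105 about two-thirds of the time on repeated testing under identical conditions. The higher the reliability coefficient, the narrower this band, which is one practical reason standards call for reliability to be “sufficiently high.”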

4.4. Methods of Estimating Reliability

• Test and Retest – The identical measurement procedure is used to assess the same characteristic over the same group of people on different occasions.
• Alternate Forms – For example, when using a second round of interviews, HR managers use a different set of questions than was used in the first round of interviews.
• Internal Consistency – Rather than select any particular pair of items, the correlations between the scores on all possible pairs of items are calculated and then averaged.
• Inter-rater Reliability – When scores from different independent raters are similar, we say there is inter-rater reliability.
• Choosing an Index of Reliability – It remains within the professional judgment of the human resources specialist to choose an appropriate index of reliability and to determine the level of reliability that is acceptable for use of a specific measure. (A brief computational sketch of two of these indices appears after Section 5.1 below.)

5. Validity

• Validity: the degree to which accumulated evidence and theory support specific interpretations of test scores in the context of the test’s proposed use
• Validity refers to the legitimacy or correctness of the inferences that are drawn from a set of measurements or other specified procedures.

5.1. Validity Strategies

• Validation strategies
– Construct and content validity are validation strategies that provide evidence based on test content.
– Criterion-related validity provides evidence based on relationships to other variables.
• Content validity: whether the items on a test appear to match the content or subject matter they are intended to assess; assessed through judgments of experts in the subject area






• Construct validity: the degree to which a test or procedure assesses an underlying theoretical construct it is supposed to measure; assessed through multiple sources of evidence showing that it measures what it purports to measure and no other constructs; e.g., an IQ test must measure intelligence and not personality
• Criterion-related validity: the relationship between a predictor (test score) and an outcome measure; assessed by obtaining the correlation between the predictor and outcome scores
• Face validity: the degree to which the test takers (not subject matter experts) view the content of a test or test items as relevant to the context in which the test is being administered

• Predictive and concurrent validation designs
– Predictive evidence: obtained through research designs that establish a correlation between predictor scores collected before applicants are hired and criterion (performance) scores collected after they are hired
– Concurrent evidence: obtained through research designs that establish a correlation between predictor and criteria scores collected at approximately the same time from a specific group of workers
❖ See Recruitment and Selection Notebook 2.2 in the textbook.
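To make the reliability indices from Section 4.4 and the criterion-related validity coefficient from Section 5.1 concrete, here is a minimal computational sketch. The applicant scores, item ratings, and performance ratings are invented, and the use of numpy is an assumption; nothing in this block comes from the textbook.

```python
# Illustrative sketch only: six hypothetical applicants, all numbers invented.
import numpy as np

# --- Test-retest reliability (Section 4.4) ---
# Correlate scores from two administrations of the same test to the same people.
time1 = np.array([78, 85, 62, 90, 71, 88], dtype=float)
time2 = np.array([80, 83, 65, 92, 70, 85], dtype=float)
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# --- Internal consistency: Cronbach's alpha (one common index) ---
# Rows = applicants, columns = test items scored on a 1-5 scale.
items = np.array([
    [4, 5, 4, 3],
    [3, 3, 4, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
], dtype=float)
k = items.shape[1]                               # number of items
item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
alpha = (k / (k - 1)) * (1 - item_var_sum / total_var)

# --- Criterion-related (predictive) validity (Section 5.1) ---
# Correlate pre-hire predictor scores with later job-performance ratings.
predictor = time1
performance = np.array([3.4, 4.1, 2.9, 4.5, 3.1, 4.0])
validity_r = np.corrcoef(predictor, performance)[0, 1]

print(f"test-retest reliability: {test_retest_r:.2f}")
print(f"Cronbach's alpha:        {alpha:.2f}")
print(f"validity coefficient:    {validity_r:.2f}")
```

Running the sketch prints a test-retest correlation, a Cronbach's alpha, and a validity coefficient; in practice an HR specialist would compute these on far larger samples and compare the results against guidelines such as those in Table 2.2.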

5.2. Validity Generalization

• Validity generalization: the application of validity evidence, obtained through meta-analysis of data obtained from many situations, to other situations that are similar to those on which the meta-analysis is based

5.3. Factors Affecting Validity Coefficients

• Range restriction
– When measurements are made on a subgroup that is more homogeneous than the larger group from which it is selected, validity coefficients obtained on the subgroup are likely to be smaller than those obtained on the larger group.
– This reduction in the size of the validity coefficient due to the selection process is called range restriction.



• Measurement error
– The reliability of a measure places an upper limit on validity (see the formulas after this list).



• Sampling error
– Estimates of validity within a population may vary considerably between samples; estimates from small samples are especially likely to be variable.
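Two standard psychometric formulas, not presented in the chapter, make the measurement-error and range-restriction points above concrete; treat them as a supplementary sketch. For measurement error, the observed validity coefficient r_xy between a predictor x and a criterion y is bounded by the reliabilities of the two measures, and the "correction for attenuation" estimates what the correlation would be if both measures were perfectly reliable:

\[
r_{xy} \le \sqrt{r_{xx}\, r_{yy}}, \qquad \hat{r}_{xy} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}
\]

For direct range restriction, a commonly used correction estimates the validity R in the unrestricted applicant pool from the validity r observed in the restricted (hired) group, where S and s are the predictor's standard deviations in the unrestricted and restricted groups:

\[
R = \frac{r\,(S/s)}{\sqrt{1 - r^2 + r^2 (S/s)^2}}
\]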

6. Bias and Fairness

Bias: systematic errors in measurement or inferences made from measurements that are related to different identifiable group membership characteristics, such as age, sex, or race



• Differential prediction (see the illustrative sketch at the end of this section)
– occurs when the predicted average performance score of a subgroup (e.g., males or females) is systematically higher or lower than the average score predicted for the group as a whole
– results in a larger proportion of the lower-scoring group being rejected on the basis of their test scores, even though they would have performed successfully had they been hired

• Fairness in measurement
– refers to the value judgments people make about the decisions or outcomes that are based on measurements
– the principle that every test taker should be assessed in an equitable manner

❖ The Principles for the Validation and Use of Personnel Selection Procedures states this about fairness: “Fairness is a social rather than a psychometric concept. Its definition depends on what one considers to be fair. Fairness has no single meaning, and, therefore, no single statistical or psychometric definition.”
❖ The Principles goes on to identify three meanings of fairness that are relevant in selection (Society for Industrial and Organizational Psychology, 2003; see text).
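The following is a minimal, hypothetical sketch of one way to look for differential prediction: fit a single regression line of job performance on test scores for everyone, then check whether that common line systematically over- or under-predicts performance for an identifiable subgroup. The data, group labels, and use of numpy are invented for illustration and are not from the textbook.

```python
# Hypothetical illustration of differential prediction; all numbers invented.
import numpy as np

test = np.array([60, 65, 70, 75, 80, 85, 62, 68, 74, 79, 83, 88], dtype=float)
perf = np.array([2.9, 3.1, 3.4, 3.6, 3.9, 4.2, 3.3, 3.5, 3.8, 4.0, 4.3, 4.6])
group = np.array(["A"] * 6 + ["B"] * 6)

# Fit one common regression line of performance on test scores for everyone.
slope, intercept = np.polyfit(test, perf, 1)
predicted = intercept + slope * test
residual = perf - predicted

# If the common line systematically under- or over-predicts one group's
# performance, the predictor shows differential prediction for that group.
for g in ("A", "B"):
    print(g, "mean residual:", round(residual[group == g].mean(), 2))
```

A clearly nonzero average residual for one group (the common line consistently under- or over-predicting that group's performance) is the pattern described above; in practice this would be tested formally, for example by adding group terms to the regression, rather than eyeballed on a handful of cases.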



7. The Legal Environment

• Selection programs and practices must operate within the current legal context.
• Recruitment and Selection Notebook 2.3 provides practical steps to ensure that a test is both reliable and valid.
• The legal environment for recruitment and selection is discussed in detail in Chapter 3 of the textbook.

8. Chapter Summary

• The best way of ensuring good tests is to be familiar with measurement, reliability, and validity issues and to use only those procedures that will withstand legal scrutiny.
• The reliability and validity of the information used as part of personnel selection procedures must be established empirically.
• The methods used to establish reliability and validity can be complex and require a good statistical background.

TEST 2: Explain the concept of reliability, the three categories of factors affecting reliability, and the methods of estimating reliability; explain the concepts of validity, fairness, and bias; discuss the importance of establishing the reliability and validity of measures used in personnel selection (testing); DQ 1, 2, 6 (Chapter 2)...

