Industrial Psychology (Michael Aamodt)



Chapter 5 EMPLOYEE SELECTION: REFERENCES AND TESTING

Predicting Performance Using References and Letters of Recommendation
• Reference check - the process of confirming the accuracy of information provided by an applicant.
• Reference - the expression of an opinion, either orally or through a written checklist, regarding an applicant’s ability, previous performance, work habits, character, or potential for future success.
• Letter of recommendation - a letter expressing an opinion regarding an applicant’s ability, previous performance, work habits, character, or potential for future success. The content and format of a letter of recommendation are determined by the letter writer.

Reasons for Using References and Recommendations
1. Confirming Details on a Résumé
• Résumé fraud - the intentional placement of untrue information on a résumé.
2. Checking for Discipline Problems
• Negligent hiring - a situation in which an employee with a previous criminal record commits a crime as part of his/her employment.
3. Discovering New Information About the Applicant
4. Predicting Future Performance
• Validity coefficient - the correlation between scores on a selection method (e.g., interview, cognitive ability test) and a measure of job performance (e.g., supervisor rating, absenteeism) (see the sketch below).
• Corrected validity - a term usually found with meta-analysis, referring to a correlation coefficient that has been corrected for predictor and criterion reliability and for range restriction. Corrected validity is sometimes called “true validity.”
5. Ethical Issues

Predicting Performance Using Applicant Training and Education
For many jobs, it is common that applicants must have a minimum level of education or training to be considered. That is, an organization might require that managerial applicants have a bachelor’s degree to pass the initial applicant screening process.

Predicting Performance Using Applicant Ability
Ability tests tap the extent to which an applicant can learn or perform a job-related skill.
1. Cognitive ability - includes such dimensions as oral and written comprehension, oral and written expression, numerical facility, originality, memorization, reasoning (mathematical, deductive, inductive), and general learning.
• Cognitive ability test - a test designed to measure the level of intelligence or the amount of knowledge possessed by an applicant.
2. Perceptual ability - a measure of facility with such processes as spatial relations and form perception.
3. Psychomotor ability - a measure of facility with such processes as finger dexterity and motor coordination.
4. Physical ability tests - tests that measure an applicant’s level of physical ability required for a job.
• Job simulation - applicants actually demonstrate job-related physical behaviors.
• Physical agility tests
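Since the validity coefficient defined above is just the Pearson correlation between selection-test scores and a job-performance criterion, here is a minimal computational sketch; all of the scores and ratings below are invented for illustration only.

```python
# Minimal sketch: a validity coefficient is the Pearson correlation between
# applicants' selection-test scores and a later measure of job performance.
# The numbers below are hypothetical.
import statistics

def validity_coefficient(test_scores, performance):
    """Pearson r between a predictor (test) and a criterion (job performance)."""
    n = len(test_scores)
    mean_x, mean_y = statistics.mean(test_scores), statistics.mean(performance)
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(test_scores, performance)) / (n - 1)
    return cov / (statistics.stdev(test_scores) * statistics.stdev(performance))

# Hypothetical data: cognitive-ability scores and supervisor ratings for six hires
scores  = [82, 90, 75, 60, 88, 70]
ratings = [4.0, 4.5, 3.5, 2.5, 4.2, 3.0]
print(round(validity_coefficient(scores, ratings), 2))
# Roughly 0.99 for these made-up numbers; real selection validities are far lower
# (for example, the r = .27 meta-analytic value for experience cited later).
```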

Predicting Performance Using Applicant Skill
Rather than measuring an applicant’s current knowledge or potential to perform a job (ability), some selection techniques measure the extent to which an applicant already has a job-related skill.
1. Work sample - the applicant performs actual job-related tasks. For example, an applicant for a job as automotive mechanic might be asked to fix a torn fan belt; a secretarial applicant might be asked to type a letter; and a truck-driver applicant might be asked to back a truck up to a loading dock.
2. Assessment center - a selection technique characterized by the use of multiple assessment methods that allow multiple assessors to actually observe applicants perform simulated job tasks.
Development and components of an assessment center:
• In-basket technique - an assessment center exercise designed to simulate the types of information that daily come across a manager’s or employee’s desk, in order to observe the applicant’s responses to such information.
• Simulation - an exercise designed to place an applicant in a situation that is similar to the one that will be encountered on the job.
• Work sample - a method of selecting employees in which an applicant is asked to perform samples of actual job-related tasks.
• Leaderless group discussion - applicants meet in small groups and are given a job-related problem to solve or a job-related issue to discuss.
• Business game - an exercise, usually found in assessment centers, designed to simulate the business and marketing activities that take place in an organization.

Predicting Performance Using Prior Experience
1. Experience ratings - the basis for experience ratings is the idea that past experience will predict future performance. Support for this notion comes from a meta-analysis by Quiñones, Ford, and Teachout (1995) that found a significant relationship between experience and future job performance (r = .27).
2. Biodata - a method of selection involving application blanks that contain questions that research has shown will predict job performance.

Development of a Biodata Instrument
1. File approach - the gathering of biodata from employee files rather than by questionnaire.
2. Questionnaire approach - the method of obtaining biodata from questionnaires rather than from employee files.
3. Criterion group - division of employees into groups based on high and low scores on a particular criterion.
4. Vertical percentage method - for scoring biodata, the percentage of unsuccessful employees responding in a particular way is subtracted from the percentage of successful employees responding in the same way (see the sketch after this list).
5. Derivation sample - a group of employees who were used in creating the initial weights for a biodata instrument.
6. Hold-out sample - a group of employees who are not used in creating the initial weights for a biodata instrument but instead are used to double-check the accuracy of the initial weights.
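A rough sketch of how the vertical percentage method could be applied to a single biodata item; the item, responses, and success labels below are invented for illustration.

```python
# Sketch of the vertical percentage method for weighting one biodata item:
# weight = % of successful employees choosing a response
#        - % of unsuccessful employees choosing the same response.
# All data below are hypothetical.

def vertical_percentage_weights(responses, successful):
    """responses: answers to one item; successful: parallel list of True/False."""
    n_succ = sum(successful)
    n_unsucc = len(successful) - n_succ
    weights = {}
    for opt in set(responses):
        pct_succ = sum(1 for r, s in zip(responses, successful) if s and r == opt) / n_succ
        pct_unsucc = sum(1 for r, s in zip(responses, successful) if not s and r == opt) / n_unsucc
        weights[opt] = round(100 * (pct_succ - pct_unsucc))
    return weights

# Derivation sample: answers to a hypothetical item plus success flags
answers   = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
succeeded = [True,  True,  False, True, False, True, True, False]
print(vertical_percentage_weights(answers, succeeded))
# A hold-out sample (employees not used to derive the weights) would then be
# scored with these weights to double-check that the item still predicts success.
```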



Predicting Performance Using Personality, Interests, and Character
Personality inventories - psychological assessments designed to measure various aspects of an applicant’s personality.
1. Tests of normal personality - measure the traits exhibited by normal individuals in everyday life. Examples of such traits are extraversion, shyness, assertiveness, and friendliness.
2. Tests of psychopathology - determine whether individuals have serious psychological problems such as depression, bipolar disorder, and schizophrenia.
• Minnesota Multiphasic Personality Inventory-2 (MMPI-2) - the most widely used objective test of psychopathology.
• Projective tests - a subjective test in which a subject is asked to perform relatively unstructured tasks, such as drawing pictures, and in which a psychologist analyzes his or her responses.
• Rorschach Inkblot Test - a projective personality test.
• Thematic Apperception Test (TAT) - a projective personality test in which test-takers are shown pictures and asked to tell stories. It is designed to measure various need levels.
• Objective tests - a type of personality test that is structured to limit the respondent to a few answers that will be scored by standardized keys.
3. Interest inventory - a psychological test designed to identify vocational areas in which an individual might be interested.
• Strong Interest Inventory (SII) - a popular interest inventory used to help people choose careers.
• Vocational counseling - the process of helping an individual choose and prepare for the most suitable career.
4. Integrity test - also called an honesty test; a psychological test designed to predict an applicant’s tendency to steal. (Shrinkage - the amount of goods lost by an organization as a result of theft, breakage, or other loss.)
• Polygraph - an electronic test intended to determine honesty by measuring an individual’s physiological changes after being asked questions.
• Voice stress analyzer - an electronic test to determine honesty by measuring an individual’s voice changes after being asked questions.
• Overt integrity test - a type of honesty test that asks questions about applicants’ attitudes toward theft and their previous theft history.
• Personality-based integrity test - a type of honesty test that measures personality traits thought to be related to antisocial behavior.
5. Conditional reasoning test - a test designed to reduce faking by asking test-takers to select the reason that best explains a statement.
6. Credit history - according to a survey by the Society for Human Resource Management, 47% of employers conduct credit checks for at least some jobs (SHRM, 2012). These credit checks are conducted for two reasons: (1) employers believe that people who owe money might be more likely to steal or accept bribes, and (2) employees with good credit are more responsible and conscientious and thus will be better employees.
7. Graphology - also called handwriting analysis; a method of measuring personality by looking at the way in which a person writes.

Predicting Performance Limitations Due to Medical and Psychological Problems
1. Drug Testing
2. Psychological Exams
3. Medical Exams

Chapter 6 EVALUATING SELECTION TECHNIQUES AND DECISIONS

Characteristics of Effective Selection Techniques
1. Reliability - the extent to which a score from a test or from an evaluation is consistent and free from error.
• Test-retest reliability - the extent to which repeated administration of the same test will achieve similar results. (Temporal stability - the consistency of test scores across time.)
• Alternate-forms reliability - the extent to which two forms of the same test are similar. (Counterbalancing - a method of controlling for order effects by giving half of a sample Test A first, followed by Test B, and giving the other half of the sample Test B first, followed by Test A. Form stability - the extent to which the scores on two forms of a test are similar.)
• Internal reliability - a third way to determine the reliability of a test or inventory is to look at the consistency with which an applicant responds to items measuring a similar dimension or construct (e.g., personality trait, ability, area of knowledge). The extent to which similar items are answered in similar ways is referred to as internal consistency. (Item stability - the extent to which responses to the same test items are consistent. Item homogeneity - the extent to which test items measure the same construct.)

• Scorer reliability - the extent to which two people scoring a test agree on the test score, or the extent to which a test is scored correctly.
2. Validity - the degree to which inferences from test scores are justified by the evidence.
• Content validity - the extent to which tests or test items sample the content that they are supposed to measure.
• Criterion validity - the extent to which a test score is related to some measure of job performance. (Criterion - a measure of job performance, such as attendance, productivity, or a supervisor rating. Concurrent validity - a form of criterion validity that correlates test scores with measures of job performance for employees currently working for an organization. Predictive validity - a form of criterion validity in which test scores of applicants are compared at a later date with a measure of job performance.)
• Construct validity - the extent to which a test actually measures the construct that it purports to measure.
Choosing a Way to Measure Validity:
• Face validity - the extent to which a test appears to be valid. (Barnum statements - statements, such as those used in astrological forecasts, that are so general that they can be true of almost anyone.)

Establishing the Usefulness of a Selection Device
1. Taylor-Russell tables (Taylor & Russell, 1939) - designed to estimate the percentage of future employees who will be successful on the job if an organization uses a particular test. The philosophy behind the Taylor-Russell tables is that a test will be useful to an organization if (1) the test is valid, (2) the organization can be selective in its hiring because it has more applicants than openings, and (3) there are plenty of current employees who are not performing well, thus there is room for improvement. To use the Taylor-Russell tables, three pieces of information must be obtained:
• Criterion validity coefficient
• Selection ratio - the percentage of applicants an organization hires
• Base rate - the percentage of current employees who are successful
2. Proportion of correct decisions - a utility method that compares the percentage of times a selection decision was accurate with the percentage of successful employees.
3. Lawshe tables - tables that use the base rate, test validity, and applicant percentile on a test to determine the probability of future success for that applicant.
4. Brogden-Cronbach-Gleser utility formula - another way to determine the value of a test in a given situation is by computing the amount of money an organization would save if it used the test to select employees (see the sketch below).
• Utility formula - a method of ascertaining the extent to which an organization will benefit from the use of a particular selection system.
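A minimal sketch of the utility idea, using the commonly cited Brogden-Cronbach-Gleser form (number hired × average tenure × validity × SDy × mean standardized test score of those hired, minus total testing cost); every input value below is an assumption for illustration, not a figure from the text.

```python
# Hedged sketch of a Brogden-Cronbach-Gleser-style utility estimate, in its
# commonly cited form; all input values are made-up assumptions.
def utility_savings(n_hired, tenure_years, validity, sd_performance_dollars,
                    mean_z_of_hires, n_applicants, cost_per_applicant):
    """Estimated dollar gain from selecting with the test instead of at random."""
    gain = n_hired * tenure_years * validity * sd_performance_dollars * mean_z_of_hires
    testing_cost = n_applicants * cost_per_applicant
    return gain - testing_cost

print(utility_savings(
    n_hired=10,                   # employees hired per year
    tenure_years=2.0,             # average time hires stay on the job
    validity=0.40,                # criterion validity coefficient of the test
    sd_performance_dollars=8000,  # SDy: dollar value of one SD of job performance
    mean_z_of_hires=1.0,          # average standardized test score of those hired
    n_applicants=100,             # applicants tested (selection ratio = 10/100)
    cost_per_applicant=25,        # cost to administer the test to one applicant
))  # -> 61500.0 for these assumed numbers
```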

Determining the Fairness of a Test
1. Measurement bias - group differences in test scores that are unrelated to the construct being measured.
• Adverse impact - an employment practice that results in members of a protected class being negatively affected at a higher rate than members of the majority class. Adverse impact is usually determined by the four-fifths rule (see the sketch below).
2. Predictive bias - a situation in which the predicted level of job success falsely favors one group over another.
• Single-group validity - the characteristic of a test that significantly predicts a criterion for one class of people but not for another.
• Differential validity - the characteristic of a test that significantly predicts a criterion for two groups, such as both minorities and nonminorities, but predicts significantly better for one of the two groups.

Making the Hiring Decision
1. Unadjusted top-down selection - selecting applicants in straight rank order of their test scores.
• Compensatory approach - a method of making selection decisions in which a high score on one test can compensate for a low score on another test. For example, a high GPA might compensate for a low GRE score.
2. Rule of three - a variation on top-down selection in which the names of the top three applicants are given to a hiring authority who can then select any of the three.
3. Passing score - the minimum test score that an applicant must achieve to be considered for hire.
• Multiple-cutoff approach - a selection strategy in which applicants must meet or exceed the passing score on more than one selection test.
• Multiple-hurdle approach - the practice of administering one test at a time so that applicants must pass that test before being allowed to take the next test.
4. Banding - a statistical technique based on the standard error of measurement that allows similar test scores to be grouped.
• Standard error of measurement (SEM) - the number of points that a test score could be off due to test unreliability.
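A small numeric sketch of the four-fifths rule and of SEM-based banding; the selection counts, test statistics, and the particular banding formula used (1.96 × SEM × √2 around the top score, a common variant) are assumptions for illustration only.

```python
import math

# Four-fifths rule: adverse impact is suspected when one group's selection rate
# is less than 80% of the other group's rate. Counts below are invented.
def four_fifths_violation(hired_a, applied_a, hired_b, applied_b):
    rate_a, rate_b = hired_a / applied_a, hired_b / applied_b
    lower, higher = sorted([rate_a, rate_b])
    return (lower / higher) < 0.80

print(four_fifths_violation(hired_a=10, applied_a=50, hired_b=25, applied_b=60))
# True here: .20 / .4167 is about .48, which is below .80

# Banding sketch: scores within roughly 1.96 * SEM * sqrt(2) of the top score are
# treated as statistically indistinguishable (one common banding formula, assumed here).
def band_with_top(scores, sd, reliability):
    sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
    band_width = 1.96 * sem * math.sqrt(2)
    top = max(scores)
    return [s for s in scores if top - s <= band_width]

print(band_with_top([95, 92, 90, 84, 78], sd=10, reliability=0.90))
# Scores grouped into the same band as the top score of 95
```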

Chapter 7 EVALUATING EMPLOYEE PERFORMANCE

Step 1: Determine the Reason for Evaluating Employee Performance
The first step in the performance appraisal process is to determine the reasons your organization wants to evaluate employee performance. That is, does the organization want to use the results to improve performance? Give raises on the basis of performance? This determination is important because the various performance appraisal techniques are appropriate for some purposes but not for others.
1. Providing Employee Training and Feedback
• Performance appraisal review - a meeting between a supervisor and a subordinate for the purpose of discussing performance appraisal results.
2. Determining Salary Increases
3. Making Promotion Decisions
• Peter Principle - the idea that organizations tend to promote good employees until they reach the level at which they are not competent; in other words, their highest level of incompetence.
4. Making Termination Decisions
5. Conducting Personnel Research

Step 2: Identify Environmental and Cultural Limitations

Step 3: Determine Who Will Evaluate Performance
(360-degree feedback - a performance appraisal system in which feedback is obtained from multiple sources such as supervisors, subordinates, and peers. Multiple-source feedback - a performance appraisal strategy in which an employee receives feedback from sources (e.g., clients, subordinates, peers) other than just his or her supervisor.)
1. Supervisors
2. Peers
3. Subordinates
4. Customers
5. Self-appraisal

Step 4: Select the Best Appraisal Methods to Accomplish Your Goals
The next step in the performance appraisal process is to select the performance criteria and appraisal methods that will best accomplish your goals for the system. Criteria are ways of describing employee success.

Decision 1: Focus of the Appraisal Dimensions
1. Trait-Focused Performance Dimensions - a trait-focused system concentrates on such employee attributes as dependability, honesty, and courtesy. Though commonly used, trait-focused performance appraisal instruments are not a good idea because they provide poor feedback for improving performance.
2. Competency-Focused Performance Dimensions - rather than concentrating on an employee’s traits, competency-focused dimensions concentrate on the employee’s knowledge, skills, and abilities.
3. Task-Focused Performance Dimensions - task-focused dimensions are organized by the similarity of tasks that are performed. For a police officer, such dimensions might include following radio procedures or court testimony.
4. Goal-Focused Performance Dimensions - the fourth type of performance dimension is to organize the appraisal on the basis of goals to be accomplished by the employee.
5. Contextual Performance - the four ways to focus performance dimensions above all concentrate on the technical aspects of performing a job. In recent years, psychologists have begun to study contextual performance, that is, the effort an employee makes to get along with peers, improve the organization, and perform tasks that are needed but are not necessarily an official part of the employee’s job description.

Decision 2: Should Dimensions Be Weighted?
Once the type of dimension has been determined, the next decision is whether the dimensions should be weighted so that some are more important than others. Grading systems in the classes you have taken provide good examples of weighting dimensions. For example, you may have had a class where the final exam was given more weight than other exams, or a class in which a particular project carried more weight than others. (A small numeric sketch of weighting appears after the list below.)

Decision 3: Use of Employee Comparisons, Objective Measures, or Ratings
Once the types of dimensions have been considered, the next decision is whether to evaluate performance by comparing employees with one another (ranking), using objective measures such as attendance and number of units sold, or having supervisors rate how well the employee has performed on each of the dimensions.
1. Employee Comparisons - to reduce leniency, employees can be compared with one another instead of being rated individually on a scale. The easiest and most common of these methods is the rank order, in which employees are ranked in order by their judged performance for each relevant dimension.
2. Objective Measures
• Quantity of Work - a type of objective criterion used to measure job performance by counting the number of relevant job behaviors that occur.
• Quality of Work - a type of objective criterion used to measure job performance by comparing a job behavior with a standard.
• Attendance
• Safety - employees who follow safety rules and who have no occupational accidents do not cost an organization as much money as those who break rules, equipment, and possibly their own bodies.
3. Ratings of Performance
• Graphic rating scale - a method of performance appraisal that involves rating employee performance on an interval or ratio scale.
• Behavioral checklists - consist of a list of behaviors, expectations, or results for each dimension. This list is used to focus the rater on the specific behaviors that fall under each dimension.
• Comparison with Other Employees - supervisors can rate performance on a dimension by comparing the employee’s level of performance with that of other employees.
• Frequency of Desired Behaviors
• Extent to which organizational expectations a...
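As a small illustration of Decision 2's weighting idea, here is a sketch of a weighted overall appraisal score; the dimensions, ratings, and weights below are invented for illustration.

```python
# Sketch of weighting appraisal dimensions: the overall appraisal score is a
# weighted average of dimension ratings. Dimensions, ratings (1-5 scale), and
# weights are hypothetical.
ratings = {"report writing": 4.0, "public relations": 3.0, "radio procedures": 5.0}
weights = {"report writing": 0.25, "public relations": 0.25, "radio procedures": 0.50}

overall = sum(ratings[d] * weights[d] for d in ratings) / sum(weights.values())
print(round(overall, 2))  # 4.25 for these made-up numbers
```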

