FINAL EXAM 2015, questions and answers

Course: Psychological Assessment
Institution: University of South Africa

JANUARY/FEBRUARY 2013 EXAM



Question 1

Define the two types of criterion-related validity, describe the process of determining each, and give an example of each. Briefly explain what it means if a test is biased in terms of criterion-related validity.

ANSWER:

There are three types of validity, or validation procedures, namely:

- content-description procedures,
- construct-identification procedures, and
- criterion-prediction procedures.

Criterion-prediction validity, or criterion-related validity, falls under the criterion-prediction procedures.



Criterion-related validity is a quantitative, statistical form of validity: it involves the calculation of a correlation coefficient between one or more predictors and a criterion.



Criterion-related validity can also be defined as the degree to which a measure is related to some standard or criterion that is known to indicate the construct accurately.

There are two types of criterion-related validity, distinguished by the temporal positioning of the criterion measure in relation to the measure being validated:

- Concurrent validity: the degree to which a new measure is related to pre-existing measures of the construct. It concerns the accuracy with which a measure can identify or diagnose the current behaviour or status of an individual regarding specific skills or characteristics, and implies the correlation of two (or more) concurrent sets of behaviours or constructs.
- Predictive validity: the accuracy with which a measure can predict the future behaviour or category status of an individual that is logically related to the construct. It implies that psychological measures can be used for decision-making.
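In practice, both forms of criterion-related validity come down to the same computation: correlating predictor scores with criterion scores. The sketch below uses hypothetical data and a hand-rolled `pearson_r` helper; neither the scores nor the helper come from any real test or library.

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    n = len(x)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical data: selection-test scores (predictor) and later
# job-performance ratings (criterion) for the same eight applicants.
test_scores = [12, 15, 11, 18, 14, 20, 9, 16]
performance = [55, 60, 50, 72, 58, 80, 45, 65]

# The validity coefficient is simply the correlation between the two.
validity = pearson_r(test_scores, performance)
print(round(validity, 2))
```

For predictive validity the criterion scores would be collected some time after testing; for concurrent validity the two sets of scores are collected at (roughly) the same time. The arithmetic is identical; only the timing differs.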


The distinction between these two types of criterion-related validity is based on the purpose for which the measure is used. To establish criterion-related validity, one has to compare the measure with another measure of the same construct, called the criterion measure.

Test bias in terms of criterion-related validity is referred to as criterion contamination. This is the effect of any factor or variable on a criterion such that the criterion is no longer a valid measure. The criterion must be free from any form of bias, as bias will influence the correlation coefficient with a predictor. Rating scales (e.g. performance ratings) are often used as criteria, but they are subject to rating biases, where the rater may err on the side of leniency or make judgements based on a "general impression" of a person.

The essential characteristics of predictive bias are:

- it is a type of invalidity that prejudices one group more than another group;
- group differences in test achievement are not reflected by corresponding differences in the behaviour domain that the test is meant to measure;
- it involves constant and systematic errors (e.g. attenuation as a result of the unreliability of the criterion), in contrast to errors that can be ascribed to coincidental or chance factors (sampling errors) in the estimation of the criterion score; the constant or systematic errors are usually associated with group membership;
- it leads to unfair discrimination against the group whose criterion score is under-predicted, i.e. in practice the group does better in respect of the criterion than is predicted on the basis of the test scores.

Bias in the predictive validity of a test can be investigated by making use of:

- the validity coefficient (i.e. the correlation between the test score and the criterion score),
- the slope and cut-off (intercept) of the regression line, and
- the standard error of estimate.
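As a sketch of the regression approach: fit a separate regression line of criterion on test score for each group and compare the slopes and intercepts. Equal slopes with different intercepts mean a common regression line would systematically under-predict one group, which is the pattern of unfair discrimination described above. All scores and group names here are hypothetical.

```python
from statistics import mean

def regression_line(x, y):
    """Least-squares slope and intercept for predicting y from x."""
    mx, my = mean(x), mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical (test score, criterion score) data for two groups.
group_a = ([10, 12, 14, 16, 18], [40, 46, 52, 58, 64])
group_b = ([10, 12, 14, 16, 18], [35, 41, 47, 53, 59])

slope_a, int_a = regression_line(*group_a)
slope_b, int_b = regression_line(*group_b)

# Here the slopes are equal (3.0) but the intercepts differ (10 vs 5):
# a single common regression line would over-predict group B's
# criterion scores and under-predict group A's.
print(slope_a, int_a)
print(slope_b, int_b)
```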


Possible criterion measures include:

- academic achievement,
- performance in specialised training,
- job performance,
- psychiatric diagnoses, and
- ratings and other validated tests.


Question 2

(a) Critically discuss the psychometric approach to intelligence and relate your discussion to the South African context.

ANSWER:

The major focus of the psychometric or correlational approach to psychology is the study and measurement of individual differences in psychological characteristics, most notably latent or inferred traits such as intelligence. The primary methods of statistical analysis, correlational and in particular factor-analytic methods, are designed to discover the underlying sources of variation among individuals. Research is directed primarily toward determining or postulating the structure of mental abilities. Because the interest is in latent (unobservable) traits, the nature of such traits is inferred from theory and from research findings, such as the results of factor-analytic studies and validity studies.

While the major psychometric theories are diverse in emphasis, each has contributed to the kinds of inferences we make about the nature of the latent variables of intelligence and thus to our understanding of what is measured by intelligence tests. Binet's emphasis on judgment and reasoning and, similarly, Spearman's principle of the eduction of relations and correlates formed the foundation of most current conceptions of intelligence. Intelligence (g) is not what we know at a given time, but how well we can reason, solve problems, think abstractly, and manipulate information flexibly and efficiently, particularly when the stimulus materials present some degree of novelty. Novelty is common to tests of g because the subject cannot fall back on already acquired knowledge or skill.

The multiple-factor theorists contributed the now-accepted concepts and measures of group factors of ability, for example verbal, numerical, and spatial abilities. They thus contributed the suggestion that intelligence is not a single unitary ability, but rather a composite of several or many components of ability, each of which may be important for different kinds of human endeavours.

Vernon's hierarchical model is particularly useful in summarizing those aspects of intelligence related to academic performance. While present-day IQ tests measure g, they are often also heavily loaded with the cluster of abilities summarized by Vernon as v:ed, especially verbal abilities, and to a lesser extent numerical and spatial abilities. Because one of the major uses of intelligence tests has been to predict level of performance in schools, including colleges and universities, an emphasis on testing the verbal and symbolic (primarily numerical and spatial) abilities has developed. Tests emphasizing Vernon's v:ed are often referred to as "school ability" or "scholastic aptitude" tests. The important point about such tests is that although they are considered "intelligence" tests, they tend to assess those aspects of intelligence that are highly related to academic performance, but may not assess other important aspects of ability.

Cattell's concepts of "fluid" and "crystallized" intelligence have been useful in clarifying the differences between intelligence and achievement and in suggesting the addition of "learning ability" to our definition of intelligence. Essentially, Cattell's crystallized intelligence (gc) might be termed "achievement", since it pertains to and measures acquired knowledge. Cattell's fluid intelligence (gf) is more similar in conception to Spearman's g, that is, the ability to see relationships in any content, familiar or novel. However, the causal analysis of the high correlations between gf and gc suggests that persons high in gf learn more readily and are thus more likely to score high on gc as well. An important aspect of intelligence, then, may be learning ability.

To summarize these concepts, intelligence may be considered a combination of:

1. a general, or g, component reflecting overall reasoning and problem-solving abilities, judgment, and learning ability; and
2. subcomponents reflecting school ability and more specific group factors of ability representing various content areas or types of mental operations.

The existence of g is inferred from the positive correlations among tests of mental ability varying in content and type of intellectual process involved, as long as those processes involve some form of mental manipulation rather than a simple demonstration of acquired knowledge.


The existence of separate components of g is inferred from factor-analytic studies. Different tests of intelligence differentially emphasize these components of mental ability, so an understanding of these components and their importance in different tests is important to the careful and effective use of tests of human mental ability.

(b) Discuss the purpose and rationale of the Junior South African Individual Scale (JSAIS) and indicate how the test relates to Spearman's theory of intelligence.

ANSWER:

Purpose: The first aim of the JSAIS is to measure as many intelligence-related mental abilities as possible, emphasizing those that are more closely associated with effective functioning at school, and therefore with the prediction of scholastic achievement. The aim is thus to obtain a profile of the strong and weak aspects of a testee's intellectual functioning. The second aim is to measure the general factor of intelligence (Spearman's g factor). The third aim is to provide an instrument that can be used as an aid in diagnosing different levels of mental retardation in children, with the further aim of providing them with appropriate levels of special education.

Rationale: In constructing the JSAIS, the assumption was made that intelligence may be regarded as consisting of related problem-solving abilities, some of which are more closely associated with effective functioning at school, and therefore with the prediction of scholastic achievement, than others. The further assumption was made that the total score on the tests included in the intelligence scale represents an underlying general factor of intelligence (Spearman's g factor), and that the so-called primary mental abilities are revealed at a lower level of generality and are related to different mental processes and different test content. The rationale for the selection of tests for the JSAIS was based on a process x content model in which each of five processes is combined with three types of test material. The process and content facets are shown in Figure 1.

Figure 1. Process x content model of the JSAIS

Processes: concept attainment (cognition), convergent production, evaluation, divergent production
Content: verbal, numerical, spatial (non-verbal)

One of the primary aims of the JSAIS is to provide reliable profile data for the evaluation of a child's relatively strong and weak points in respect of certain important facets of intelligence. It is important to remember that a particular profile of abilities does not necessarily remain constant as the child grows older. A large variation among test scores in a profile can sometimes result from temporary mental inefficiencies, while in other cases it may reflect a more permanent inability caused, for instance, by disturbed school experiences, organic injury or hereditary factors. The dynamic and functional nature of a child's learning problems can often be better understood with the aid of a profile analysis.

Another aim of the JSAIS is to provide an instrument that can be used to identify children needing special education because of mental retardation. The JSAIS manual describes children with GIQs of 69 and below as cognitively handicapped, GIQs between 70 and 79 as borderline cases, between 80 and 89 as low average, and between 90 and 109 as average. It is essential to remember that these descriptions apply when norms are established for a population with a relatively low proportion of socio-economically deprived children. Socio-economic deprivation may play a crucial causal role in determining the level of a child's GIQ. There may also be emotional and/or motivational factors that temporarily influence a child's measured GIQ. Therefore, there is no justification for pigeon-holing a child on the basis of a specific GIQ obtained at a single testing.

It was previously mentioned that there are test-age norms in the JSAIS manual for the subtests included in the GIQ Scale. This implies that the JSAIS may also be used to determine a test age for each subtest in the GIQ Scale for a child older than eight years, if his mean test age for the subtests in the GIQ Scale is probably less than eight years but more than three years. The mean test age of a testee for the subtests of the GIQ Scale can be regarded as a summary of his intellectual capabilities. Conversion of test age to a new index by dividing the test age by the testee's chronological age is not recommended.
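The GIQ bands described in the manual can be sketched as a simple lookup. Note the label for scores above 109 is an assumption for illustration; the text above does not specify it.

```python
def jsais_giq_category(giq):
    """Classify a GIQ score using the bands described in the JSAIS manual.

    Assumption: the source lists no label above 109, so scores above
    that band are reported here simply as 'above average'.
    """
    if giq <= 69:
        return "cognitively handicapped"
    elif giq <= 79:
        return "borderline"
    elif giq <= 89:
        return "low average"
    elif giq <= 109:
        return "average"
    return "above average"  # label not specified in the source

print(jsais_giq_category(75))  # prints "borderline"
```

As the text stresses, such a label should never be assigned on the basis of a single GIQ obtained at one testing.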


Question 1

A motor manufacturer developed the Technical Aptitude Test as part of their selection battery for potential trainees. Two forms of the test, Form A and Form B, were developed to prevent cheating when testing large groups of candidates. The test-retest reliability coefficients for the two forms are 0.68 and 0.70. The alternate-form reliability coefficient for the test is 0.45.

(a) Define reliability.
(b) Explain how test-retest reliability is determined.
(c) Explain how alternate-form reliability is determined.
(d) Critically discuss the reliability of the Technical Aptitude Test, commenting on the use of the test in this context.

ANSWER:

(a) Define reliability

The reliability of a measure refers to the consistency with which it measures whatever it measures. However, consistency always implies a certain amount of error in measurement. A person's performance on one administration of a measure does not reflect with complete accuracy the "true" amount of the trait that the individual possesses. There may be other systematic or chance factors present, such as the person's emotional state of mind, fatigue, or noise outside the test room, which may affect his/her score on the measure.


We can capture the true and error components of measurement in equation form as:

X = T + E

where:
- X = observed score (the total score),
- T = true-score component (the proportion of true-score variance determines the reliability of the measure), and
- E = error component (unexplained variance).
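A minimal simulation of this model: generate true scores T, add independent random error E to form observed scores X, and recover reliability as the share of observed-score variance that is true-score variance. All distribution parameters are illustrative, not from any real test.

```python
import random

random.seed(0)

# Simulate X = T + E for 10,000 test takers: true scores with
# standard deviation 10, independent errors with standard deviation 5.
n = 10_000
true_scores = [random.gauss(50, 10) for _ in range(n)]   # T
errors = [random.gauss(0, 5) for _ in range(n)]          # E
observed = [t + e for t, e in zip(true_scores, errors)]  # X = T + E

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Reliability = true-score variance / observed-score variance.
# Theoretical value here: 10**2 / (10**2 + 5**2) = 0.8.
reliability = variance(true_scores) / variance(observed)
print(round(reliability, 2))
```

The larger the error variance relative to the true-score variance, the lower the reliability, which is exactly the sense in which "consistency implies a certain amount of error".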

(b) Explain how test-retest reliability is determined

Test-retest reliability is obtained by administering the same test twice, over a period of time, to the same group of individuals under similar conditions. The reliability coefficient (r_tt) in this case is simply the correlation between the scores obtained by the same persons on the two administrations of the test. The error variance corresponds to the random fluctuations of performance from one test session to the other. These variations may result in part from uncontrolled testing conditions, such as extreme changes in weather, sudden noises and other distractions, or a broken pencil point. To some extent, however, they arise from changes in the condition of the test takers themselves, as illustrated by illness, fatigue, emotional strain and the like. Retest reliability shows the extent to which scores on a test can be generalized over different occasions: the higher the reliability, the less susceptible the scores are to random daily changes in the condition of the test takers or of the testing environment.

(c) Explain how alternate-form reliability is determined

Alternate-form reliability is determined by administering two equivalent forms of the test (e.g. Form A and Form B) to the same group of persons. The correlation between the scores obtained on the two forms represents the reliability coefficient of the test. It will be noted that such a reliability coefficient is a measure of both temporal stability and consistency of response to different item samples (or test forms).


This coefficient thus combines two types of reliability; since both types are important for most testing purposes, alternate-form reliability provides a useful measure for evaluating many tests.

(d) Critically discuss the reliability of the Technical Aptitude Test, commenting on the use of the test in this context

For selection purposes, we would not normally use the test-retest method. We choose the top candidates by looking at everyone's scaled scores (stanines), where 1 is poor, 5 is average and 9 is superior. Aptitude tests assess a person's potential in specific areas/abilities.

However, this aptitude test was developed by a motor manufacturer. We usually obtain stanines from a norm table in a manual, and the test maker, a motor manufacturer, cannot be assumed to be a registered and qualified professional in the psychological field. He has to be if the test is used for selection purposes, and the test has to be classified and listed with the HPCSA. Thus the test has not been standardised, nor are norms available for it. Consequently, this test will yield invalid scores and cannot be used.

There are also legal implications: the motor manufacturer can be imprisoned if he uses this test for selection purposes. The Employment Equity Act (No. 55 of 1998, section 8) refers specifically to psychological tests and assessment, and states that psychological testing and other forms of assessment of an employee are prohibited unless the test or assessment being used:

a. has been scientifically shown to be valid and reliable;
b. can be applied fairly to all employees; and
c. is not biased against any employee or group.

In fact, the candidates can sue him if they are not selected.
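The stanine scale mentioned above can be sketched as a conversion from percentile rank. The cut-offs below are the standard normal-curve stanine boundaries, not values from any particular test manual.

```python
def stanine(percentile):
    """Convert a percentile rank (0-100) to a stanine (1-9).

    Uses the standard stanine cut-offs, under which roughly 4%, 7%,
    12%, 17%, 20%, 17%, 12%, 7% and 4% of a normal distribution fall
    into stanines 1 through 9 respectively.
    """
    cutoffs = [4, 11, 23, 40, 60, 77, 89, 96]  # upper percentile bounds
    for score, bound in enumerate(cutoffs, start=1):
        if percentile < bound:
            return score
    return 9

print(stanine(50))  # prints 5 (average)
print(stanine(97))  # prints 9 (superior)
```

A properly standardised test supplies such a norm table in its manual; without standardisation, as argued above, no defensible stanines exist for the Technical Aptitude Test.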


Question 2

(a) Describe any four situational techniques that can be used for making employment decisions.
(b) Discuss the person-environment fit approach to assessment in career counselling. Refer in your answer to each of the domains that should be assessed as part of this approach.

ANSWER:

(a) Describe any four situational techniques that can be used for making employment decisions

Simulations

Simulations (role play) attempt to recreate an everyday work situation. Participants are requested to play a particular role and to deal with a specific problem. Other role players are instructed to play certain fixed roles in order to create consistency across different simulations. Trained observers observe the candidate to assess specified behavioural dimensions. The reliability and validity of simulations can vary according to the training and experience of the observers, knowledge of the behaviours that need to be observed, and the consistency of the other role players.


Vignettes

Vignettes are similar to simulations but are based on a video or film presentation in which the candidate is requested to play the role of a particular person and to deal with a problem displayed in the vignette. Vignettes are more consistent than simulations in terms of the presentation of the scenario, but are open to many possible solutions from which a candidate may choose.

Leaderless group exercises

In this instance, a group of candidates is requested to perform a particular task or to deal with a specific problem while being observed. Trained observers rate the leadership qualities and other behavioural dimensions that the candidates display during their interactions.

In-basket tests

The in-basket test typically consists of a number of letters, memos and reports of the kind the average manager or supervisor is confronted with in his/her in-basket or email inbox. The candidate is requested to deal with the correspondence in an optimal way. The responses are then evaluated by a trained observer.

(b) Discuss the person-environment-fit approach to assessment in career counselling. Refer in your answer to each of the domains that should be assessed as part of this approach.

The Person-Environment fit approach

Parsons stated the basic principles of the trait-and-factor approach to career counselling, which has since evolved into the person-environment fit approach.

