Sample/practice exam 15 November, questions
Course: Psychology
Institution: Far Eastern University

PSYCHOLOGICAL ASSESSMENT: RELIABILITY & VALIDITY

1. In legal terminology, a valid contract is a contract that A. measures what it purports to measure. B. has been executed with the proper formalities. C. is well grounded on principles of evidence. D. none of these

2. As the term is applied to a test, validity is a judgment or estimate of how well a test A. measures what it purports to measure. B. measures what it purports to measure in a particular context. C. satisfies the deductions that could logically be made from inferences about it. D. All of these

3. A test reviewer comes to the conclusion that a certain test is a valid test. This means that the reviewed test has been shown to be valid for A. a particular use with a particular population for the life of the test. B. a particular use with a universal population of test takers for a limited time. C. universal use with all test takers for the life of the test. D. a particular use with a particular population at a particular time.

4. Each of the three approaches to validity assessment in the Trinitarian model should be thought of as A. mutually exclusive as evidence of a test's validity, with any one source necessary and sufficient for demonstrating test validity. B. one type of evidence that, with others, contributes to a judgment concerning the validity of a test. C. insufficient, either by themselves or together with the other two, to demonstrate the validity of a test. D. none of these

5. The validation of a test is a process A. that can be carried out by the test author. B. that can be carried out by the test user. C. of gathering evidence of the test's validity. D. All of these

6. Comedian Rodney Dangerfield was cited in the text to illustrate a point about how which of the following is viewed? A. test validation B. content validity C. face validity D. construct validity

7. “It’s a measure of validity arrived at by a comprehensive analysis of how scores on the test relate to other test scores.” This statement is a reference to A. face validity B. content validity C. the Trinitarian index D. construct validity

8. As mentioned in Chapter 6 of your text, the measurement of content validity is particularly important in A. classroom settings, where tests will form the basis of a grade. B. employment settings, where tests may be used to promote employees. C. courtroom settings, where tests may be used to determine competence. D. screening for the potential emission of violent or aggressive behavior.

9. If a test developer has only a “fuzzy” vision of the construct being measured, then A. the content validity of the test is likely to suffer. B. the construct validity of the test is likely to suffer. C. content irrelevant to the targeted construct may be measured. D. All of these

10. Test blueprinting is applied in the design of A. an attitude test. B. a personality test. C. an aptitude test. D. All of these

11. In order to remain consistent with a test's blueprint, a test administered on a regular basis is likely to require A. item pool management. B. base rate maintenance. C. predictive validity certification. D. none of these

12. Criterion-related validity is to predictive validity as criterion-related validity is to A. construct validity. B. content validity. C. concurrent validity. D. test bias.

13. A test is considered valid when the test A. measures what it purports to measure. B. measures whatever it is that it measures consistently. C. can be administered efficiently and cost effectively. D. has little or no error associated with it.

14. Which is NOT a method of evaluating the validity of a test? A. evaluating scores on the test as compared to scores obtained on other tests B. evaluating the content of the test C. evaluating the percentage of passing and failing grades on the test D. evaluating test scores as they relate to predictions from a particular theory

15. Predictive and concurrent validity can be subsumed under A. content validity. B. criterion-related validity. C. face validity. D. true score validity.
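Items 12 and 15 both rest on the idea that predictive and concurrent validity are two varieties of criterion-related validity, each expressed as a correlation between test scores and a criterion measure; the only real difference is when the criterion data are collected. A minimal Python sketch with invented scores and variable names (none of these numbers come from the exam or its source text):

```python
# Sketch only: criterion-related validity as a correlation between test and criterion.
import numpy as np

test_scores = np.array([12, 15, 9, 20, 17, 11, 14, 18])        # hypothetical test scores

# Concurrent validity: criterion obtained at roughly the same time as the test.
supervisor_rating_now = np.array([3, 4, 2, 5, 4, 3, 3, 5])

# Predictive validity: criterion obtained some time after the test.
job_performance_later = np.array([55, 62, 48, 80, 70, 50, 60, 75])

r_concurrent = np.corrcoef(test_scores, supervisor_rating_now)[0, 1]
r_predictive = np.corrcoef(test_scores, job_performance_later)[0, 1]

print(f"concurrent validity coefficient: {r_concurrent:.2f}")
print(f"predictive validity coefficient: {r_predictive:.2f}")
```

The statistic is the same Pearson correlation in both cases; only the timing of the criterion measurement distinguishes the two labels.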

16. Relating scores obtained on a test to other test scores or data from other assessment procedures is typically done in an effort to establish the _______ validity of a test. A. content-related B. criterion-related C. face D. about face

17. Face validity refers to A. the most preferred method for determining validity. B. another name for content validity. C. the appearance of relevancy of the test items. D. validity determined by means of face-to-face interviews.

18. Face validity A. may influence the way the test taker approaches the situation. B. relates more to what the test appears to measure than what the test may actually measure. C. is given short shrift as compared to other indices of validity. D. All of these

19. Before constructing a comprehensive final examination that covers everything you have studied since Day 1 of your course, your instructor reviews the objectives of the course, the textbook, and all lecture notes. Your instructor is clearly making a diligent effort to maximize the ________ validity of the final examination. A. content B. criterion-related C. predictive D. internal consistency

20. A standard against which a test or test score is evaluated is known as A. a facet. B. a correlation coefficient. C. a validity coefficient. D. a criterion.

21. Which of the following is BEST viewed as varieties of criterion-related validity? A. concurrent validity and face validity B. content validity and predictive validity C. concurrent validity and predictive validity D. concurrent validity and content validity

22. The form of criterion-related validity that reflects the degree to which a test score is correlated with a criterion measure obtained at the same time that the test score was obtained is known as A. predictive validity. B. construct validity. C. concurrent validity. D. content validity.

23. The form of criterion-related validity that reflects the degree to which a test score correlates with a criterion measure that was obtained some time subsequent to the test score is known as A. predictive validity. B. construct validity. C. concurrent validity. D. content validity.

24. A key difference between concurrent and predictive validity has to do with A. the time frame during which data on the criterion measure is collected. B. the magnitude of the reliability coefficient that will be considered significant at the 0.05 level. C. the magnitude of the validity coefficient that will be considered significant at the 0.05 level. D. Both B and C

25. Which is an example of a criterion? A. achievement test scores B. success in being able to repair a defective toaster C. student ratings of teaching effectiveness D. All of these

26. The magnitude of a validity coefficient may be affected by A. attrition of the sample. B. restriction of range. C. inflation of range. D. All of these

27. Which magnitude of validity coefficient is typically acceptable to conclude that a test is valid? A. .50 B. .80 C. above .90 D. none of these

28. A construct is A. unobservable. B. something that describes behavior. C. something that is assumed to exist. D. All of these
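Items 26 and 27 concern the size of a validity coefficient. Because the coefficient is simply a correlation, it cannot exceed 1.0, and restriction of range in the sample tends to shrink it. A rough simulation with made-up data and an arbitrary selection cutoff (everything below is an illustrative assumption, not material from the exam):

```python
# Sketch only: restriction of range shrinking a validity coefficient.
import numpy as np

rng = np.random.default_rng(0)
n = 500
test = rng.normal(size=n)                                 # hypothetical predictor (z-scores)
criterion = 0.6 * test + rng.normal(scale=0.8, size=n)    # criterion built to correlate ~.6 with the test

r_full = np.corrcoef(test, criterion)[0, 1]

# Suppose only applicants scoring above the mean are selected, so criterion data
# exist only for that restricted group.
selected = test > 0
r_restricted = np.corrcoef(test[selected], criterion[selected])[0, 1]

print(f"validity coefficient, full range:       {r_full:.2f}")
print(f"validity coefficient, restricted range: {r_restricted:.2f}")
```

With a strong selection cutoff, the restricted-sample coefficient typically comes out noticeably smaller than the full-range value, even though the underlying test-criterion relationship is unchanged.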

29. Which qualifies as a construct? A. depression B. intelligence C. mechanical aptitude D. All of these

30. All validity evidence can be interpreted as _________ validity. A. content B. criterion-related C. predictive D. construct

31. Which statistic is appropriate for estimating the heterogeneity of a test composed of multiple-choice items? A. point-biserial correlation coefficient B. Pearson product-moment correlation coefficient C. coefficient alpha D. chi-square

32. Test scores may be affected in pre- and post-testing by A. therapy. B. medication. C. education. D. All of these

33. Which of the following is TRUE of test bias as compared to test fairness? A. test bias is dependent on statistical analyses while test fairness relates to values. B. test bias is dependent on values while test fairness relates to statistical analyses. C. whether a test is fair can be answered with certainty while

C. a test or testing practice that systematically favors the performance of one group of test takers over another. D. All of these
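Item 31 points to coefficient alpha as the usual internal-consistency statistic for a set of scored items. A small sketch of the computation, using an invented 0/1 (right/wrong) response matrix rather than any real data:

```python
# Sketch only: coefficient alpha for a made-up item-response matrix.
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = examinees, columns = items (e.g., 0/1 for MC items)."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical right/wrong (1/0) responses of 6 examinees to 5 multiple-choice items.
responses = np.array([
    [1, 1, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0],
    [1, 0, 1, 1, 1],
])

print(f"coefficient alpha: {cronbach_alpha(responses):.2f}")
# With dichotomous (0/1) items, alpha reduces to KR-20, which item 44 refers to.
```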

35. Which of the following is the BEST way to minimize test bias? A. create separate norm groups for different groups so that any potential bias is reduced. B. have a panel of experts review the test items at various stages during the test's development. C. pre-screen examiners to be used in the test administration for any signs of bias or prejudice. D. employ the multitrait-multimethod matrix to screen items for bias.

36. If a newly developed test designed to measure happiness correlates with other tests of happiness but not with tests of sadness, this is referred to as ______ and ______ evidence of validity, respectively. A. convergent; discriminant B. discriminant; convergent C. homogeneous; concurrent D. concurrent; homogeneous

37. An estimate of test-retest reliability is often referred to as a coefficient of stability when the time interval between the test and retest is more than: A. 6 months B. 3 months C. 60 days D. 30 days

38. Poorly worded items that cause students to differentially respond to the same questions contribute to what type of error variance? A. test administration error B. content sampling C. content sampling and test-scoring and interpretation variance D. test-scoring and interpretation variance

39. Test-retest reliability estimates would be least appropriate for: A. IQ tests B. tests that measure art aptitude C. tests that measure moment-to-moment mood(s) D. academic achievement tests on topics such as ancient history

40. Which source of error variance affects parallel- or alternate-forms reliability estimates, but does not affect test-retest estimates? A. fatigue B. item sampling C. learning D. practice

41. As the degree of reliability increases, the proportion of: A. none are correct B. total variance attributed to error variance increases C. total variance attributed to true variance increases D. total variance attributed to true variance decreases

42. As the confidence interval increases, the range of scores a single test score is likely to fall into: A. increases B. first increases, then decreases C. decreases D. remains the same

43. Which of the following factors may influence a split-half reliability estimate? A. fatigue B. anxiety C. item difficulty D. all are correct

44. If items from a test are measuring the same trait, then estimates of reliability yielded from split-half methods will typically be ____________ compared with KR-20. A. higher B. lower...
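Items 43 and 44 deal with split-half reliability and its relationship to KR-20. The sketch below uses an odd-even split (one of several possible ways to halve a test) on a made-up 0/1 response matrix and applies the Spearman-Brown correction to estimate full-length reliability; all names and data are illustrative assumptions:

```python
# Sketch only: split-half reliability with the Spearman-Brown correction.
import numpy as np

def split_half_reliability(item_scores):
    """Odd-even split; item_scores rows = examinees, columns = items."""
    x = np.asarray(item_scores, dtype=float)
    odd_half = x[:, 0::2].sum(axis=1)
    even_half = x[:, 1::2].sum(axis=1)
    r_halves = np.corrcoef(odd_half, even_half)[0, 1]
    # Spearman-Brown: estimate full-length reliability from the half-test correlation.
    return (2 * r_halves) / (1 + r_halves)

# The same kind of hypothetical 0/1 response matrix used in the alpha sketch above.
responses = np.array([
    [1, 1, 1, 0, 1, 1],
    [1, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0, 1],
    [1, 0, 1, 1, 1, 1],
])

print(f"split-half (Spearman-Brown corrected): {split_half_reliability(responses):.2f}")
```

Because a split-half estimate depends on how the halves are formed, it can come out higher or lower than KR-20/alpha for the same data; item 44 asks about the typical direction of that difference when the items are homogeneous.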

