Assessment of Learning 1 Module (PDF)

Title: Assessment of Learning 1 Module
Author: Giovanni Alcain
Course: Education
Institution: Philippine College of Technology
Pages: 67



Description

ASSESSMENT OF LEARNING 1

TEACHING MATERIALS

Compiled by Giovanni A. Alcain, CST, LPT, (MAEd-Eng on-going)

EDUC10 ASSESSMENT OF LEARNING 1

MODULE 1 – BASIC CONCEPTS IN ASSESSMENT OF LEARNING

Assessment – refers to the process of gathering, describing or quantifying information about student performance. It includes paper-and-pencil tests, extended responses (e.g., essays) and performance assessment tasks, usually referred to as "authentic assessment" (e.g., presentation of research work).

Measurement – is the process of obtaining a numerical description of the degree to which an individual possesses a particular characteristic. Measurement answers the question "How much?"

Evaluation – refers to the process of examining the performance of students. It also determines whether or not the student has met the lesson's instructional objectives.

Test – is an instrument or systematic procedure designed to measure the quality, ability, skill or knowledge of students by giving a set of questions in a uniform manner. Since a test is a form of assessment, tests also answer the question "How does an individual student perform?"

Testing – is a method used to measure the level of achievement or performance of the learners. It also refers to the administration, scoring and interpretation of an instrument (procedure) designed to elicit information about performance in a sample of a particular area of behavior.

Types of Measurement

There are two ways of interpreting student performance in relation to classroom instruction: norm-referenced tests and criterion-referenced tests.

A norm-referenced test is a test designed to measure the performance of a student compared with other students. Each individual is compared with the other examinees and assigned a score, usually expressed as a percentile, a grade-equivalent score or a stanine. The achievement of students is reported for broad skill areas, although some norm-referenced tests do report student achievement for individual skills. The purpose is to rank each student with respect to the achievement of others in broad areas of knowledge and to discriminate between high and low achievers.

A criterion-referenced test is a test designed to measure the performance of students with respect to some particular criterion or standard. Each individual is compared with a predetermined set of standards for acceptable achievement; the performance of the other examinees is irrelevant. A student's score is usually expressed as a percentage, and student achievement is reported for individual skills. The purpose is to determine whether each student has achieved specific skills or concepts, and to find out how much students know before instruction begins and after it has finished. Other terms less often used for criterion-referenced are objective-referenced, domain-referenced, content-referenced and universe-referenced.
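The two interpretations differ mainly in how a raw score is converted: a norm-referenced report locates a student relative to the group (for example, a percentile rank), while a criterion-referenced report compares the student with a fixed standard (for example, percent of items correct against a passing cutoff). The short Python sketch below merely illustrates that contrast; the class scores, the 75% cutoff and the function names are hypothetical assumptions, not data from this module.

```python
# Illustrative sketch only: contrasts norm-referenced and criterion-referenced
# interpretations of the same raw scores. Data and cutoff are hypothetical.

def percentile_rank(score, group_scores):
    """Norm-referenced view: percent of the group scoring below this score."""
    below = sum(1 for s in group_scores if s < score)
    return 100.0 * below / len(group_scores)

def criterion_percentage(score, total_items, cutoff=75.0):
    """Criterion-referenced view: percent correct against a fixed standard."""
    percent = 100.0 * score / total_items
    return percent, percent >= cutoff

class_scores = [12, 15, 18, 20, 22, 25, 27, 28, 29, 30]  # raw scores on a 30-item test

for raw in (18, 27):
    pr = percentile_rank(raw, class_scores)
    pct, passed = criterion_percentage(raw, total_items=30)
    print(f"raw={raw}: percentile rank={pr:.0f}, percent correct={pct:.0f}% "
          f"({'meets' if passed else 'below'} the 75% criterion)")
```

Note that the same raw score can look strong under one interpretation and weak under the other, which is why the purpose of the test should dictate which reporting frame is used.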

Robert L. Linn and Norman E. Gronlund (1995) pointed out the common characteristics of, and differences between, norm-referenced tests and criterion-referenced tests.

Common Characteristics of Norm-Referenced Tests and Criterion-Referenced Tests

1. Both require specification of the achievement domain to be measured.
2. Both require a relevant and representative sample of test items.
3. Both use the same types of test items.
4. Both use the same rules for item writing (except for item difficulty).
5. Both are judged by the same qualities of goodness (validity and reliability).
6. Both are useful in educational assessment.

Differences between Norm-Referenced Tests and Criterion-Referenced Tests

Norm-Referenced Tests
1. Typically cover a large domain of learning tasks, with just a few items measuring each specific task.
2. Emphasize discrimination among individuals in terms of relative level of learning.
3. Favor items of average difficulty and typically omit very easy and very hard items.
4. Interpretation requires a clearly defined group.

Criterion-Referenced Tests
1. Typically focus on a delimited domain of learning tasks, with a relatively large number of items measuring each specific task.
2. Emphasize description of what learning tasks individuals can and cannot perform.
3. Match item difficulty to the learning tasks, without altering item difficulty or omitting easy or hard items.
4. Interpretation requires a clearly defined and delimited achievement domain.

TYPES OF ASSESSMENT

There are four types of assessment in terms of their functional role in relation to classroom instruction: placement assessment, diagnostic assessment, formative assessment and summative assessment.

A. Placement Assessment is concerned with the entry performance of the student. The purpose of placement evaluation is to determine the prerequisite skills, the degree of mastery of the course objectives and the best mode of learning.

B. Diagnostic Assessment is a type of assessment given before instruction. It aims to identify the strengths and weaknesses of the students regarding the topics to be discussed. The purposes of diagnostic assessment are:
1. To determine the level of competence of the students;
2. To identify the students who already have knowledge about the lesson;
3. To determine the causes of learning problems and formulate a plan for remedial action.

C. Formative Assessment is a type of assessment used to monitor the learning progress of the students during or after instruction. The purposes of formative assessment are:
1. To provide feedback immediately to both student and teacher regarding the success and failure of learning;
2. To identify the learning errors that are in need of correction;
3. To provide information to the teacher for modifying instruction and for improving learning and instruction.

D. Summative Assessment is a type of assessment usually given at the end of a course or unit. The purposes of summative assessment are:
1. To determine the extent to which the instructional objectives have been met;
2. To certify student mastery of the intended outcomes and to assign grades;
3. To provide information for judging the appropriateness of the instructional objectives;
4. To determine the effectiveness of instruction.

MODULE 2 – PRINCIPLES OF HIGH-QUALITY CLASSROOM ASSESSMENT

1. Clarity of learning targets
2. Appropriateness of assessment methods
3. Validity
4. Reliability
5. Fairness
6. Positive consequences
7. Practicality and efficiency
8. Ethics

1. CLARITY OF LEARNING TARGETS

Assessment can be made precise, accurate and dependable only if what is to be achieved is clearly stated and feasible. The learning targets, involving knowledge, reasoning, skills, products and effects, need to be stated in behavioral terms, which denote something that can be observed through the behavior of the students.

Cognitive Targets
Benjamin Bloom (1956) proposed a hierarchy of educational objectives at the cognitive level. These are:
- Knowledge – acquisition of facts, concepts and theories
- Comprehension – understanding; involves cognition or awareness of the interrelationships
- Application – transfer of knowledge from one field of study to another, or from one concept to another concept in the same discipline
- Analysis – breaking down of a concept or idea into its components and explaining the concept as a composition of these components
- Synthesis – opposite of analysis; entails putting together the components in order to summarize the concept
- Evaluation and Reasoning – valuing and judgment, or putting the "worth" of a concept or principle

Skills, Competencies and Abilities Targets
- Skills – specific activities or tasks that a student can proficiently do
- Competencies – clusters of skills
- Abilities – made up of related competencies, categorized as cognitive, affective and psychomotor

Products, Outputs and Project Targets
- Tangible and concrete evidence of a student's ability
- Need to clearly specify the level of workmanship of projects: expert, skilled, novice

2. APPROPRIATENESS OF ASSESSMENT METHODS

Written-Response Instruments
- Objective tests – appropriate for assessing the various levels of the hierarchy of educational objectives
- Essays – can test the students' grasp of the higher-level cognitive skills
- Checklists – lists of several characteristics or activities presented to the subjects of a study, where they will analyze and place a mark opposite the characteristics

Product Rating Scales
- Used to rate products like book reports, maps, charts, diagrams, notebooks and creative endeavors
- Need to be developed to assess various products over the years

Performance Tests – Performance Checklist
- Consists of a list of behaviors that make up a certain type of performance
- Used to determine whether or not an individual behaves in a certain way when asked to complete a particular task

Oral Questioning – an appropriate assessment method when the objectives are to:
- Assess the students' stock knowledge, and/or
- Determine the students' ability to communicate ideas in coherent verbal sentences

Observation and Self-Reports
- Useful supplementary methods when used in conjunction with oral questioning and performance tests

3. VALIDITY

- Something valid is something fair.
- A valid test is one that measures what it is supposed to measure.

Types of Validity
- Face: What do students think of the test?
- Construct: Am I testing in the way I taught?
- Content: Am I testing what I taught?
- Criterion-related: How does this compare with an existing valid test?

Tests can be made more valid by making them more subjective (open items).

MORE ON VALIDITY

Validity – the appropriateness, correctness, meaningfulness and usefulness of the specific conclusions that a teacher reaches regarding the teaching-learning situation.

Content validity – concerns the content and format of the instrument:
- Students' adequate experience
- Coverage of sufficient material
- Reflects the degree of emphasis

Face validity – the outward appearance of the test; the lowest form of test validity.

Criterion-related validity – the test is judged against a specific criterion.

Construct validity – the test is loaded on a "construct" or factor.
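In practice, criterion-related validity is usually reported as a correlation coefficient between scores on the new test and scores on the criterion measure (for example, an established valid test). The sketch below is a minimal illustration of that idea; the two score lists and the function name are hypothetical, not data or procedures from this module.

```python
# Minimal sketch: criterion-related validity expressed as a Pearson correlation
# between a teacher-made test and an established criterion measure.
# All scores below are hypothetical.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

new_test  = [14, 18, 22, 25, 27, 30, 33, 35]   # scores on the teacher-made test
criterion = [40, 45, 55, 60, 62, 70, 75, 80]   # scores on an established valid test

print(f"validity coefficient r = {pearson_r(new_test, criterion):.2f}")
```

A coefficient near 1.0 suggests the new test ranks students much as the criterion does; a coefficient near 0 suggests it is measuring something else.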

4. RELIABILITY

- Something reliable is something that works well and that you can trust.
- A reliable test is a consistent measure of what it is supposed to measure.

Questions:
- Can we trust the results of the test?
- Would we get the same results if the test were taken again and scored by a different person?

Tests can be made more reliable by making them more objective (controlled items).



- Reliability is the extent to which an experiment, test, or any measuring procedure yields the same result on repeated trials.

- Equivalency reliability is the extent to which two items measure identical concepts at an identical level of difficulty. Equivalency reliability is determined by relating two sets of test scores to one another to highlight the degree of relationship or association.

- Stability reliability (sometimes called test-retest reliability) is the agreement of measuring instruments over time. To determine stability, a measure or test is repeated on the same subjects at a future date.

- Internal consistency is the extent to which tests or procedures assess the same characteristic, skill or quality. It is a measure of the precision between the observers or of the measuring instruments used in a study.

- Interrater reliability is the extent to which two or more individuals (coders or raters) agree. Interrater reliability addresses the consistency of the implementation of a rating system.
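A common first look at interrater reliability is simple percent agreement: the share of items on which two raters assign the same rating. The sketch below is an illustrative assumption rather than a procedure prescribed by this module; more refined indices such as Cohen's kappa additionally correct this figure for chance agreement.

```python
# Illustrative sketch: percent agreement between two raters judging the same set
# of student products. Ratings are hypothetical.

def percent_agreement(rater_a, rater_b):
    """Share of items on which the two raters gave the same rating."""
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return 100.0 * matches / len(rater_a)

rater_a = ["expert", "skilled", "novice", "skilled", "expert", "skilled"]
rater_b = ["expert", "skilled", "skilled", "skilled", "expert", "novice"]

print(f"percent agreement = {percent_agreement(rater_a, rater_b):.0f}%")
```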

RELIABILITY – CONSISTENCY, DEPENDABILITY, STABILITY – CAN BE ESTIMATED BY:

- Split-half method – calculated using the Spearman-Brown prophecy formula and the Kuder-Richardson formulas (KR-20 and KR-21)
- Test-retest method – the consistency of test results when the same test is administered at two different time periods, obtained by correlating the two sets of test results
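The split-half approach correlates scores on the two halves of a test and then steps the half-test correlation up to full length with the Spearman-Brown prophecy formula, r_full = 2·r_half / (1 + r_half), while KR-20 estimates internal consistency directly from item-level right/wrong data. The Python sketch below illustrates both under assumed data; the item matrix and function names are hypothetical, not taken from the module.

```python
# Illustrative sketch of two internal-consistency estimates on hypothetical
# dichotomous item data (1 = correct, 0 = wrong): split-half with the
# Spearman-Brown correction, and KR-20.
import math

# Rows = students, columns = items (hypothetical scores on a 6-item quiz).
items = [
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 0],
]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_spearman_brown(rows):
    """Correlate odd-item and even-item half scores, then apply Spearman-Brown."""
    odd = [sum(r[0::2]) for r in rows]
    even = [sum(r[1::2]) for r in rows]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)

def kr20(rows):
    """KR-20: (k / (k - 1)) * (1 - sum(p*q) / variance of total scores)."""
    k, n = len(rows[0]), len(rows)
    totals = [sum(r) for r in rows]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    p = [sum(r[i] for r in rows) / n for i in range(k)]
    sum_pq = sum(pi * (1 - pi) for pi in p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

print(f"split-half (Spearman-Brown) = {split_half_spearman_brown(items):.2f}")
print(f"KR-20                       = {kr20(items):.2f}")
```

Test-retest reliability, in turn, is simply the same correlation applied to two administrations of the test separated in time.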

5. FAIRNESS

The concept that assessment should be 'fair' covers a number of aspects:
- Student knowledge of the learning targets and assessments
- Opportunity to learn
- Prerequisite knowledge and skills
- Avoiding teacher stereotypes
- Avoiding bias in assessment tasks and procedures

6. POSITIVE CONSEQUENCES

Learning assessments provide students with effective feedback and potentially improve their motivation and/or self-esteem. Moreover, assessments of learning give students the tools to assess themselves and understand how to improve. Assessment should have a positive consequence on students, teachers, parents and other stakeholders.

7. PRACTICALITY AND EFFICIENCY

- Something practical is something effective in real situations.
- A practical test is one which can be practically administered.

Questions:
- Will the test take longer to design than to apply?
- Will the test be easy to mark?

Tests can be made more practical by making them more objective (more controlled items).

Factors to consider:
- Teacher familiarity with the method
- Time required
- Complexity of administration
- Ease of scoring
- Ease of interpretation
- Cost

Teachers should be familiar with the test; it should not require too much time and should be implementable.

8. ETHICS

- Informed consent
- Anonymity and confidentiality

Ethical considerations apply to:
1. Gathering data
2. Recording data
3. Reporting data

ETHICS IN ASSESSMENT – "RIGHT AND WRONG"

- Conforming to the standards of conduct of a given profession or group
- Ethical issues that may be raised:
  1. Possible harm to the participants
  2. Confidentiality
  3. Presence of concealment or deception
  4. Temptation to assist students

MODULE 3 – DEVELOPMENT OF CLASSROOM TOOLS FOR MEASURING KNOWLEDGE AND UNDERSTANDING

DIFFERENT TYPES OF TESTS – MAIN POINTS FOR COMPARISON

Purpose
  Psychological Test
  - Aims to measure students' intelligence or mental ability to a large degree without reference to what the student has learned
  - Measures the intangible characteristics of an individual (e.g., aptitude tests, personality tests, intelligence tests)
  Educational Test
  - Aims to measure the result of instruction and learning (e.g., performance tests)

Scope of Content
  Survey Test
  - Covers a broad range of objectives
  - Measures general achievement in certain subjects
  - Constructed by trained professionals
  Mastery Test
  - Covers a specific objective
  - Measures fundamental skills and abilities
  - Typically constructed by the teacher

Interpretation
  Norm-Referenced Test
  - Result is interpreted by comparing one student's performance with other students' performance
  - Some will really pass
  - There is competition for a limited percentage of high scores
  - Describes a pupil's performance compared to others
  Criterion-Referenced Test
  - Result is interpreted by comparing a student's performance against a predefined standard
  - All or none may pass
  - There is no competition for a limited percentage of high scores
  - Describes a pupil's mastery of the course objectives

Language Mode
  Verbal Test
  - Words are used by students in attaching meaning to or responding to test items
  Non-Verbal Test
  - Students do not use words in attaching meaning to or in responding to test items (e.g., graphs, numbers, 3-D objects)

Construction
  Standardized Test
  - Constructed by a professional item writer
  - Covers a broad range of content within a subject area
  - Uses mainly multiple-choice items
  - Items written are screened and the best items are chosen for the final instrument
  - Can be scored by a machine
  - Interpretation of results is usually norm-referenced
  Informal Test
  - Constructed by a classroom teacher
  - Covers a narrow range of content
  - Various types of items are used
  - Teacher picks or writes items as needed for the test
  - Scored manually by the teacher
  - Interpretation is usually criterion-referenced

Manner of Administration
  Individual Test
  - Mostly given orally or requires actual demonstration of skill
  - One-on-one situations, thus many opportunities for clinical observation
  - Chance to follow up an examinee's response in order to clarify or comprehend it more clearly
  Group Test
  - This is a paper-and-pen test
  - Loss of rapport, insight and knowledge about each examinee
  - The same amount of time is needed to gather information from many students

Effect of Biases
  Objective Test
  - Scorer's personal judgment does not affect the scoring
  - Worded so that only one answer is acceptable
  - Little or no disagreement on what is the correct answer
  Subjective Test
  - Affected by the scorer's personal opinions, biases and judgment
  - Several answers are possible
  - Disagreement on what is the correct answer is possible

Time Limit and Level of Difficulty
  Power Test
  - Consists of a series of items arranged in ascending order of difficulty
  - Measures the student's ability to answer more and more difficult items
  Speed Test
  - Consists of items approximately equal in difficulty
  - Measures the student's speed or rate and accuracy in responding

Format
  Selective Test
  - There are choices for the answer
  - Multiple choice, true or false, matching type
  - Can be answered quickly
  - Prone to guessing
  - Time consuming to construct
  Supply Test
  - There are no choices for the answer
  - Short answer, completion, restricted or extended essay
  - May require a longer time to answer
  - Less chance of guessing but prone to bluffing
  - Time consuming to answer and score

TYPES OF TESTS ACCORDING TO FORMAT

1. Selective Type – provides choices for the answer
   a. Multiple Choice – consists of a stem, which describes the problem, and three or more alternatives, which give the suggested solutions. The incorrect alternatives are the distracters.
   b. True-False or Alternative Response – consists of a declarative statement that one has to mark true or false, right or wrong, correct or incorrect, yes or no, fact or opinion, and the like.
   c. Matching Type – consists of two parallel columns: Column A, the column of premises from which a match is sought; Column B, the column of responses from which the selection is made.

2. Supply Test
   a. Short Answer – uses a direct question that can be answered by a word, a phrase, a number, or a symbol.
   b. Completion Test – consists of an incomplete statement.

3. Essay Test
   a. Restricted Response – limits the content of the response by restricting the scope of the topic.
   b. Extended Response – allows the students to select any factual information that they think is pertinent and to organize their answers in accordance with their best judgment.

Projective Test
- A psychological test that uses images in order to evoke responses from a subject and reveal hidden aspects of the subject's mental life.
- These were developed in an attempt to eliminate some of the major problems inherent in the use of self-report measures, such as the tendency of some respondents to give "socially responsible" responses.

Important Projective Techniques
1. Word Association Test – an individual is given a clue or hint and asked to respond with the first thing that comes to mind.
2. Completion Test – the respondents are asked to complete an incomplete sentence or st...

