Lab Practical - Week 1 - Operationalising Variables

Title: Lab Practical - Week 1 - Operationalising Variables
Course: Research Skills 3b (PY32004)
Institution: University of Dundee

Summary

This document contains the revised and organised lecture notes for the first lab practical of Psychological Research Skills 3b (PY32004) at the University of Dundee. The lecture was delivered by Amrita Ahluwalia and covered the topic of operationalising variables. For copyright purposes, some ...


Description

Operationalising Variables

Operationalising refers to the way we define and measure variables in our studies. Operationalising is an important part of the scientific method. The scientific method is a cyclical process, and its important parts include:

1. Developing Theories
2. Making Observations (thoughts)
3. Developing Research Questions
4. Formulating Hypotheses
5. Developing Testable Predictions
6. Collecting Data to Test Predictions
7. The cycle starts again.

Operationalising is how we turn hypotheses about general concepts (Step 4) into testable predictions (Step 5). Research needs to be precise. Operational definitions state exactly how a variable will be measured, and this is important for replication: making clear what has been measured makes studies replicable. It is also important to report the methodological details needed to reproduce a study. The "Open Materials" badge is awarded for making publicly available the digitally shareable materials/methods necessary to reproduce the reported results.

Psychology (mostly) studies people, and people are complicated subjects: they are dynamic and change in response to things (including being measured). This makes being precise hard. Some variables are easier to measure (e.g. age, weight, height, gender, nationality), while, more often, other variables are harder to measure (intelligence, anxiety, self-esteem, depression, political attitudes, emotions, personality, social attitudes). Psychological constructs are often not directly observable and represent complex patterns of behaviour and internal processes. An important goal of psychological research is to define complex psychological constructs in ways that accurately describe them and can be measured. Through operationalisation, researchers can systematically measure processes and phenomena that are not directly observable.

For any conceptual variable, there will be many different ways of measuring it. The use of multiple, converging operational definitions is common in Psychology. This is both a good and a bad thing. There are three kinds of measures generally used in Psychology:

1. Self-Report -> Participants report on their own thoughts, feelings, attitudes, and behaviours (e.g. Likert scales, open-ended questions).

2. Behavioural -> Some aspects of participants' behaviour are observed and recorded. This could be in a controlled lab setting or in a more natural setting (e.g. the doll experiment).

3. Physiological -> Recording any of a wide variety of physiological responses and processes (e.g. heart rate, blood pressure, hormone levels, electrical activity, and blood flow in the brain).

It is important to note that these measures are used as indicators of the things researchers are interested in measuring. The measures are just proxies and, generally, are not themselves the things researchers are actually interested in. Researchers have an obligation to show that the measures they are using are good. For a measure to be good, it needs to be:

Reliable: Measures need to be consistent across different situations, and there are statistical ways to measure reliability. A measure needs to be consistent:
a) Over Time -> Test-Retest Reliability
b) Across Items -> Internal Consistency
c) Across Researchers -> Interrater Reliability
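The statistics behind two of these reliability checks can be computed directly. Below is a minimal, hedged Python sketch (the questionnaire data, scores, and function names are invented for illustration, not taken from the lecture) showing a test-retest correlation (Pearson's r between two testing sessions) and Cronbach's alpha as a measure of internal consistency across items.

```python
# Illustrative sketch only: all data and names below are hypothetical.
from statistics import pvariance

def pearson_r(x, y):
    """Pearson correlation; used here as a test-retest reliability coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is one list of participant scores per item."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per participant
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Hypothetical questionnaire: the same 5 participants tested twice.
time1 = [10, 12, 9, 15, 11]
time2 = [11, 12, 10, 14, 12]
test_retest = pearson_r(time1, time2)  # close to 1 -> consistent over time

# Hypothetical 3-item scale: scores per item for 5 participants.
items = [[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [3, 5, 4, 4, 1]]
alpha = cronbach_alpha(items)  # close to 1 -> items hang together
```

Values near 1 indicate high reliability; by a common rule of thumb, an alpha around 0.7 or above is considered acceptable for a scale.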

Valid: Measures need to measure what they are supposed to measure. Reliability (consistency) is part of this: if a measure is reliable, it is more likely to be valid. But reliability alone is not enough, as a measure could be highly reliable yet not be a valid measure of the construct. For example, measuring a child's finger as a way to measure how the child is growing might be reliable, but it is not valid. Assessing a measure involves balancing different kinds of validity:
a) Face Validity -> The measure seems, 'on the face of it', to be measuring the right construct.
b) Content Validity -> The measure seems to adequately 'cover' all the aspects that are relevant to the construct.
c) Criterion Validity -> The measure correlates with other measures that it should be related to.
d) Discriminant Validity -> The measure does not correlate with other measures that it should be distinct from.

There are no statistical ways of assessing face and content validity; researchers use their judgment to assess these. Criterion and discriminant validity, by contrast, can be assessed statistically (e.g. by examining correlations between measures).
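The correlational checks for criterion and discriminant validity can be sketched in a few lines of Python. All scale names and scores below are invented for illustration: a new measure should correlate strongly with an established measure of a related construct (criterion validity) and only weakly with a measure of a distinct construct (discriminant validity).

```python
# Illustrative sketch only: scales and scores are hypothetical.

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

new_scale     = [2, 5, 3, 4, 1, 5]  # hypothetical new anxiety measure
related_scale = [3, 5, 2, 4, 1, 4]  # established anxiety measure (the criterion)
unrelated     = [8, 7, 6, 8, 7, 6]  # measure of a conceptually distinct construct

criterion_validity    = pearson_r(new_scale, related_scale)  # should be high
discriminant_validity = pearson_r(new_scale, unrelated)      # should be near 0
```

With these made-up numbers the criterion correlation is strong while the discriminant correlation is weak, which is the pattern a valid new measure should show.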

Activity:
1. Think of a research question.
2. Identify the key concepts relevant to your research question.
3. Identify variables relevant to those concepts.
4. Identify ways to measure those variables.
5. How would you assess reliability and validity?
