Chapter 6 notes

Title: Chapter 6 notes
Author: Paris Chey
Course: Research Design in Psychology
Institution: University of Georgia

Summary

PSYC3980 with Trina Cyterski...


Description

Surveys and Observations: Describing What People Do

Construct Validity of Surveys and Polls
● The word survey is often used when people are asked about a consumer product, whereas the word poll is used when people are asked about their social or political opinions
● The two terms are interchangeable; in this book, poll and survey mean the same thing: a method of posing questions to people on the phone, in personal interviews, on written questionnaires, or online
● Construct validity
○ How well the measurement was taken
○ Operationalization, accuracy of measurement
● External validity
○ Can it be generalized?
○ Is the sample representative?
● Statistical validity is present but not as important
● Surveys can follow four basic formats
● Researchers may ask open-ended questions that allow respondents to answer any way they like
○ Drawback: the responses must be coded and categorized, a process that is often difficult and time-consuming
● Open-ended
○ Pros: broad
○ Cons: hard to analyze the data / subjective
● One specific way to ask survey questions uses forced-choice questions, in which people give their opinion by picking the best of two or more options
○ Often used in political polls
● Forced choice
○ "Which of the two statements most fits you?"
○ Loss of nuanced/subtle information
○ Works if people are fence-sitting → some people pick all neutral options; some are genuinely neutral, but some do it on purpose
○ Can fix this by using a scale with an even number of options so there is no middle (prevents fence-sitting)
○ Rating scales
● Likert scale- people are presented with a statement and are asked to use a rating scale to indicate their degree of agreement
○ A true Likert scale contains more than one item, and each response value is labeled with the specific terms strongly agree, agree, neither agree nor disagree, disagree, strongly disagree
○ If it does not follow this format exactly, it may be called a Likert-type scale
● Semantic differential format- respondents might be asked to rate a target object using a numeric scale that is anchored with adjectives

Writing Well-Worded Questions

Question Wording Matters
● Surveys/questionnaires

○ Carefully prepared questions improve the construct validity of a poll or survey
○ Question wording can change the results of a survey
○ Take the language down to a middle-school reading level, because people will answer a question even if they don't understand the words → compromises construct validity
● The issue is always compromised construct validity
● Go with simpler questions to be safe; don't use abbreviations or slang
● Leading question- one whose wording leads people to a particular response
● Leading question
○ Might suggest one choice, or word the question so that one choice sounds unappealing
● Wording matters; when people answer questions that suggest a particular viewpoint, at least some people change their answers
● In general, if the intention of a survey is to capture respondents' true opinions, the survey writers might attempt to word every question as neutrally as possible
○ When researchers want to measure how much the wording matters for their topic, they word each question more than one way
○ If the results are the same regardless of the wording, they can conclude that question wording does not affect people's responses to that particular topic
○ If the results are different, they may need to report the results separately for each version of the question

Double-Barreled Questions
● The wording of a question is sometimes so complicated that respondents have trouble answering in a way that accurately reflects their opinions
● It is always best to ask a simple question in a survey
○ If people understand the question, they can give a clear, direct, and meaningful answer
● Double-barreled question- asks two questions in one
○ Poor construct validity- people might be responding to the first half of the question, the second half, or both
○ The item could be measuring the first construct, the second construct, or both
● Double-barreled questions
○ Ask two questions in one
○ Respondents might be answering only one of the questions

Negative Wording
● Negatively worded questions are another way survey items can be unnecessarily complicated
○ Whenever a question contains negative phrasing, it can cause confusion, thereby reducing the construct validity of a survey or poll
○ "Impossible," "never"
○ If you ask a question in more than one way, the researcher can study the items' internal consistency to see whether people respond similarly to both questions (using Cronbach's alpha)
○ Negatively worded questions can reduce construct validity because they might capture people's ability or motivation to figure out the question rather than their

true opinions
● Double-negative questions can get confusing to follow

Question Order
● The order in which questions are asked can affect the responses to a survey
○ Earlier questions can change the way respondents understand and answer the later questions
○ The most direct way to control for the effect of question order is to prepare different versions of a survey, with the questions in different sequences
■ If the results for the first order differ from the results for the second order, researchers can report both sets of results separately
■ They also might be safe in assuming that people's endorsement of the first question on any survey is unaffected by previous questions
● Question order
○ People's responses to question 3 can be different because it came after question 2

Encouraging Accurate Responses
● People might give inaccurate answers because they don't make an effort to think about each question, because they want to look good, or because they are simply unable to report accurately about their motivations and memories

People Can Give Meaningful Responses
● Self-reports are often ideal
○ They provide the most meaningful information you can get

Sometimes People Use Shortcuts
● Response sets- also known as nondifferentiation, are a type of shortcut respondents can take when answering survey questions
○ Although response sets do not cause many problems for a single stand-alone item, people might adopt a consistent way of answering all of the questions, especially toward the end of a long questionnaire
○ Response sets weaken construct validity because these survey respondents are not saying what they actually think
● One common response set is acquiescence, or yea-saying
○ This occurs when people say "yes" or "strongly agree" to every item instead of thinking carefully about each one
○ People have a bias to agree with any item no matter what it states
○ Acquiescence can threaten construct validity because instead of measuring the construct of true feelings of well-being, the survey could be measuring the tendency to agree, or the lack of motivation to think carefully
● Response sets
○ Acquiescence- agreeable tendency ("I agree"/"yea-saying")
● The most common way to tell the difference between a respondent who is yea-saying and one who actually agrees is by including reverse-worded items
○ One benefit is that reverse-worded items might slow people down so they answer more carefully
○ More construct validity, because high or low averages would be measuring true

happiness or unhappiness, instead of acquiescence
○ Drawback- the result is sometimes negatively worded items, which are harder to answer
● Another specific response set is fence sitting- playing it safe by answering in the middle of the scale, especially when survey items are controversial
○ Confusing or unclear questions can result in people answering in the middle or answering "I don't know"
○ Fence sitters can weaken a survey's construct validity when middle-of-the-road scores suggest that some responders don't have an opinion when they actually do
○ It can be difficult to distinguish those who are unwilling to take a side from those who are ambivalent
○ One way to try to avoid this is to take away any neutral option
■ When a scale contains an even number of response options, the person has to choose one side or the other because there is no neutral choice
■ Drawback- sometimes people really do not have an opinion or an answer, so having to choose a side is an invalid representation of their truly neutral state
○ Another way is to use forced-choice questions, in which people must pick one of two answers
■ Can frustrate people who feel their own opinion is somewhere in the middle of the two options

Trying To Look Good
● When survey respondents give answers that make them look better than they are, the responses decrease the survey's construct validity- socially desirable responding, or faking good
○ Socially desirable answering- inherent
○ False information in surveys can always happen
○ The idea- because respondents are embarrassed, shy, or worried about giving an unpopular opinion → they won't tell the truth on a survey/self-report measure
○ Faking bad- a similar but less common phenomenon
● Researchers might try to avoid this by ensuring that participants know their responses are anonymous
○ Drawback- anonymous respondents may treat surveys less seriously
○ Less likely to accurately report a simple behavior
○ One way to minimize this problem- include special survey items that identify socially desirable responders alongside the target items
○ If people agree with many such items, the researcher may discard that individual's data from the final set, under suspicion that they are exaggerating on the other survey items or not paying close enough attention
● Researchers can also ask people's friends to rate them
○ Others know us better than we know ourselves
● Researchers increasingly use special computerized measures to evaluate people's

implicit opinions about sensitive topics
○ Implicit Association Test- asks people to respond quickly to positive and negative words on the right and left of a computer screen
■ Intermixed with the words may be faces from different social groups
■ People respond to all possible combinations, including positive words with Black faces, negative words with White faces, etc.
■ When people respond more efficiently to one combination, researchers infer that the person may hold negative attitudes on an implicit or unconscious level

Self-Reporting "More Than They Can Know"
● As researchers strive to encourage accurate responses, they also ask whether people are capable of reporting accurately on their own feelings, thoughts, and actions
● In some cases, self-reports can be inaccurate
● Researchers cannot assume the reasons people give for their own behavior are their actual reasons → people may not be able to accurately explain why they acted as they did

Self-Reporting Memories or Events
● Psychological research has shown that people's memories about events in which they participated are not very accurate
● People's confidence in the accuracy of their memories is virtually unrelated to how accurate the memories actually are
○ Question the construct validity of studies like these

Rating Products
● Consumers' ratings are correlated with the cost of the product and the prestige of the brand

Construct Validity of Behavioral Observations
● Survey and poll results are among the most common types of data used to support a frequency claim
● Researchers also study people simply by watching them in action
● When a researcher watches people or animals and systematically records how they behave or what they are doing, it is called observational research
○ Some scientists believe observing behavior is better than collecting self-reports through surveys, because people cannot always report on their behavior or past events accurately
● Observational research can be the basis for frequency claims

Some Claims Based on Observational Data
● Examples of how observational methods have been used to answer research questions in psychology

Observations Can Be Better Than Self-Report

Making Reliable and Valid Observations
● Observational research is a way to operationalize a conceptual variable, so when interrogating a study we need to ask about the construct validity of any observational measure
○ What is the variable of interest, and did the observations accurately measure that

variable?
● The construct validity of observations can be threatened by three problems:

Observer Bias: When Observers See What They Expect to See
● Observer bias occurs when observers' expectations influence their interpretation of the participants' behaviors or the outcome of the study
○ Observers rate behaviors according to their own expectations or hypotheses instead of rating behaviors objectively
● Observer bias- the observer sees what they want to see
○ Blind/masked studies- when observers and/or researchers are unaware of the hypothesis/design

Observer Effects: When Participants Confirm Observer Expectations
● Problematic when observer biases affect researchers' own interpretations of what they see
● Even worse when observers inadvertently change the behavior of those they are observing, such that participant behavior changes to match observer expectations
● Observer effects, or expectancy effects- this phenomenon can occur even in seemingly objective observations
● Observer effects/expectancy
○ Blind/masked studies
● Objectivity
○ Higher construct validity
○ "I expected you to be nicer, I conveyed it to you, and it elicited more kindness"

Preventing Observer Bias and Observer Effects
● Researchers must ensure the construct validity of observational measures by taking steps to avoid observer bias and observer effects
● Careful researchers train their observers well
○ They develop clear rating instructions, often called codebooks, so the observers can make reliable judgments with less bias
○ Codebooks are precise statements of how the variables are operationalized
○ The more precise and clear the codebook statements are → the more valid the operationalizations will be
● Researchers can assess the construct validity of a coded measure by using multiple observers → this allows the researchers to assess the interrater reliability of their measures
○ ICC (intraclass correlation) is a correlation that quantifies degree of agreement
○ The closer the correlation is to 1.0, the more observers agree with one another
● If two observers of the same event agree on what happened, the researchers can be more confident
● Common observations are not necessarily always valid, because two observers could have the same bias
○ Interrater reliability is not the only important thing

Masked Research Designs
● A common way to prevent observer bias and observer effects is to use a masked

design, or blind design, in which the observers are unaware of the purpose of the study and the conditions to which participants have been assigned

Reactivity: When Participants React To Being Watched
● Sometimes the mere presence of an outsider is enough to change the behavior of those being watched
● Reactivity is a change in behavior when study participants know another person is watching
○ Occurs with animal subjects too
● Target reactivity
○ Unobtrusive observations
○ People behave differently when they are observed or think they're being observed

Solution 1: Blend In
● One way to avoid observer effects is to make unobtrusive observations- make yourself less noticeable

Solution 2: Wait It Out
● Let people get used to your presence

Solution 3: Measure the Behavior's Results
● Use unobtrusive data- measure the traces a particular behavior leaves behind

Observing People Ethically
● Certain ethical decisions may be influenced by the policies of a university where a study is conducted
○ IRBs assess each study to decide whether it can be conducted ethically...
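The interrater-reliability idea above (a correlation near 1.0 means observers agree) can be sketched with a toy calculation. The ratings below are invented, and a true ICC also accounts for mean differences between raters; plain Pearson correlation is used here only as a simpler stand-in for an agreement index:

```python
# Invented data: two observers independently code the same 8 children's
# aggression on a 1-5 scale. Agreement between their ratings is one way
# to operationalize interrater reliability.
observer_a = [2, 4, 3, 5, 1, 4, 2, 3]
observer_b = [2, 5, 3, 4, 1, 4, 2, 2]

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(observer_a, observer_b)
print(round(r, 2))  # the closer to 1.0, the more the observers agree
```

With real data you would use a dedicated ICC routine from a statistics package rather than this hand-rolled correlation, since the ICC also penalizes raters who systematically rate higher or lower than each other.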

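The chapter also mentions reverse-worded items (to catch acquiescence) and checking internal consistency with Cronbach's alpha. Here is a minimal sketch of both ideas together; the scale, the responses, and the choice of which items are reverse-worded are all invented for illustration:

```python
# Invented responses: 6 people x 5 Likert items on a 1-5 well-being scale.
# Items at indices 3 and 4 are hypothetically reverse-worded, so they are
# recoded as (6 - score) before computing internal consistency.
responses = [
    [5, 4, 5, 1, 2],
    [4, 4, 3, 2, 2],
    [2, 1, 2, 4, 5],
    [5, 5, 4, 1, 1],
    [3, 3, 3, 3, 3],
    [4, 5, 4, 2, 1],
]
REVERSE_CODED = {3, 4}  # zero-based indices of the reverse-worded items

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total variance)."""
    k = len(rows[0])
    cols = [[(6 - row[i]) if i in REVERSE_CODED else row[i] for row in rows]
            for i in range(k)]
    totals = [sum(person) for person in zip(*cols)]
    return (k / (k - 1)) * (1 - sum(variance(c) for c in cols) / variance(totals))

alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # a high alpha means the items hang together
```

If people were yea-saying, the raw (un-recoded) reverse-worded items would agree with the others and alpha on the recoded items would drop, which is exactly the signal reverse-worded items are designed to provide.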
