Research Methods Exam 1 Study Guide

Course: Psychology Research Methods
Institution: George Washington University


Chapters 1-3

Definition of Psychology: the scientific study of the behavior of individuals and their mental processes
- We often directly observe behavior and infer mental processes

Scientific Knowledge vs. Ordinary Knowledge:
o Scientific knowledge:
  - Approach: empirical
  - Observation: systematic, controlled
  - Concepts: operational specificity
  - Hypothesis: specific and testable
  - Measurement: reliable and valid
  - Attitude: critical and skeptical
o Ordinary knowledge:
  - Approach: intuitive
  - Observation: casual, uncontrolled
  - Concepts: ambiguous
  - Hypothesis: less specific and testable
  - Measurement: not reliable/valid
  - Attitude: accepting

Physical Sciences vs. Social Sciences

6 Steps of a Research Project
1. Ask a question – stems from theory
   o Theory: attempts to understand precisely why certain events or processes occur as they do
2. Develop a hypothesis
   o Hypothesis: a specific and testable proposition that describes a relationship between two or more variables (a prediction)
3. Select a method and design the study
4. Collect data – bigger sample = more power = more likely to support the hypothesis
5. Analyze data – draw conclusions – do I have support for the hypothesis?
   o Need to conduct statistical analysis before drawing conclusions; can't just look at raw numbers – how many people did you talk to, etc.?
6. Report findings

Anatomy of a Research Article
o Abstract: summarize what you did in 140 words or less – easiest to write last
o Introduction: hypothesis is clearly stated at the end
  - Review of research to support the hypothesis

o Method: 3 subheadings/sections
  - Participants: how many, where they came from, how they were assigned to conditions, average age, race, gender – only include info that is relevant to the hypothesis
  - Materials: things used: questionnaires, room setup
  - Procedure: what participants did from the time they arrived to the time they left – don't go on tangents (explain questionnaires, etc., in the Materials section)
o Results: what the results were
o Discussion: starts off narrow and then broadens
  - Were the hypotheses supported or not?
  - Give a summary of the study – what did you do?
  - Any problems or limitations of the study
  - Future research?
  - Write in past tense
o References: APA format

Ethics: the discipline dealing with good and evil; moral duty
- Through research we want to seek information that could be used for the greater good
- However, "evil" can occur → the effect it can have on participants: psychological harm, physical harm, invasion of privacy
- Need to find the balance between the benefits/good and the harm
- Time period and culture change the standards of ethics



-Milgram:
o Four prompts: "the experiment requires that you continue," "please continue," "you have no choice," "you must go on"
  - Kept going unless the participant refused after all four
  - Shows the power of authority
  - 2/3 of participants gave the maximum shock



-Animals in research: up for debate – personal opinion
o About 20 million animals are used in research each year
o 95% of psychological studies using animals use rats, mice, rabbits, or birds
o Of all the cats and dogs euthanized each year by researchers and animal shelters, only 2% are euthanized by researchers


-Protecting Participants – minimize harm to participants by:
o IRB: Institutional Review Board – made up of community members and university members
  - Have to submit forms with all research plans, forms, scripts, etc.
  - Board members must approve research as ethically acceptable before the study can be administered
  - Must consider:
    - Importance of research vs. potential harm to participants
    - Regardless of the quality of research, will there be harm?
    - Will temporary harm be alleviated by the end of the study?
    - Would you let your friend, sister, etc. participate?
o Informed consent: provides participants with as full a description as possible of the procedures they will be asked to take part in, before they participate
  - Should include an overview of the tasks required in the study
  - Should include that the participant has the ability to leave without losing credit or money
  - Responses that you give will be maintained in confidence
o Debriefing: provides participants with a full explanation of the study after the study is over
  - Don't need to hold anything back
  - Includes explanation of the hypothesis
  - Includes removal of deception – must do so sensitively
  - Use of participants as consultants – what did they think of the study? – the researcher can learn from participants
  - "Scouting" goal – leave participants better than when they arrived – they shouldn't leave feeling worse about themselves than when they came in


-10 Ethically Questionable Practices:
1. Involving people in research without their knowledge or consent
   o Might be okay in observational research – e.g., watching how many cars stop at a crosswalk – no personal info recorded
   o In some cases, people might behave differently if they know they are being observed
2. Coercing people to participate – always inappropriate – never do!
3. Withholding the true nature of the research
   o Sometimes use a cover story so you can observe natural behavior
   o Inappropriate to do if it would change a participant's willingness to participate
4. Deceiving the participants – avoid if possible, but if you have to use deception you must do a deception debriefing afterwards
5. Leading subjects to commit acts that diminish their self-respect
   o Never do this purposefully
6. Collecting data that could be used to slander a participant's social group
7. Exposing subjects to physical or mental stress
8. Invading the privacy of participants
9. Withholding benefits from participants in control groups
10. Failing to treat participants fairly / not showing them consideration and respect

Chapter 4 – Fundamental Research Issues




Experimental Method: systematically varying a specific factor (variable) to see if it has an effect on the social behavior of interest
-Independent Variable (IV):
o Experimental variable: systematically varied by the researcher
  - Researcher assigns participants to categories/levels using random assignment
o Quasi-Experimental (Subject) Variable: researcher cannot randomly assign subjects to the various levels
  - No random assignment, but still categories/levels
  - e.g., gender, age, smoker vs. non-smoker, height, ethnicity, weight – something the participant brings to the study
-Dependent Variable (DV): the outcome variable – what is measured in an experiment


-Experimental hypothesis template: (One level of IV) will be (higher/lower) on (DV) than (other level of IV)






-True Experiments: two necessary features
1. Experimental variables: manipulated by the experimenter
2. Random assignment: each subject has an equal chance of being assigned to any of the levels of the IV
→ How random assignment allows us to discuss causality (IV → DV):
  - Random assignment spreads out all other factors that aren't being measured across all conditions
  - The only thing that systematically differs is what you manipulated (the IV)
  - Can't talk about causality with a quasi-experimental design
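Not part of the original notes: a minimal Python sketch of random assignment, assuming two hypothetical conditions ("experimental" and "control") and a made-up participant list. It illustrates the idea that shuffling gives every subject an equal chance of landing in each level of the IV.

```python
import random

def randomly_assign(participants, conditions=("experimental", "control"), seed=None):
    """Give every participant an equal chance of being in each condition.

    Shuffling, then dealing round-robin into conditions, keeps group sizes
    balanced while leaving assignment to chance, so unmeasured factors
    (mood, ability, time of day) spread evenly across conditions.
    """
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    # Deal the shuffled participants into conditions like cards.
    return {cond: pool[i::len(conditions)] for i, cond in enumerate(conditions)}

groups = randomly_assign(range(1, 21), seed=42)
print({cond: len(members) for cond, members in groups.items()})
# Each of the two conditions receives 10 of the 20 participants.
```

This is why a subject variable like gender can't be randomly assigned: the researcher has no shuffle to perform, so other factors may differ systematically between groups.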

Correlational Method: the researcher observes how changes in one variable are associated with changes in another
- No manipulation or categorizing
- e.g., what is the relationship between hours studied and test score?
-Correlation Coefficient: a measure of the strength and direction of the relationship between two variables (indicated by r)
o r ranges from -1 to +1
o Positive numbers = variables move together (positive relationship)
o Negative numbers = variables move in opposite directions (negative relationship)
o Close to zero = no relationship
o Close to +1/-1 = very strong relationship
o Curvilinear relationship: increases in one variable are accompanied by both increases and decreases in the other variable
-Correlation vs. Causation: correlation does not mean causation
o Correlational studies show relationships between variables, but you cannot make causal conclusions


-Pearson Correlation (r): requires interval- or ratio-level data and a linear relationship
-Line of best fit (Regression Line)
-3 Criteria for Causality (in correlational studies):
1. There is a relationship between the variables – look at the data, compute a statistic
2. The causal variable precedes the affected variable (e.g., healthy breakfast and test score)
3. There is no possibility of a 3rd variable affecting both of the first two (e.g., children who wear a bigger shoe size are better readers – the 3rd variable is age)
   o The 3rd variable usually prevents us from determining causality
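Not part of the original notes: a self-contained Python sketch of the Pearson correlation coefficient, using made-up hours-studied and test-score data. It shows the standard formula (covariance of the two variables scaled by their spreads), which is what the criteria above ask you to "compute" when checking for a relationship.

```python
import math

def pearson_r(x, y):
    """Pearson r: covariance of x and y divided by the product of their spreads."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var_x = sum((xi - mean_x) ** 2 for xi in x)
    var_y = sum((yi - mean_y) ** 2 for yi in y)
    return cov / math.sqrt(var_x * var_y)

hours_studied = [1, 2, 3, 4, 5]        # hypothetical data
test_scores   = [60, 70, 75, 85, 90]
print(round(pearson_r(hours_studied, test_scores), 3))  # 0.993 (strong positive)
```

A strong r like this satisfies criterion 1 only; it says nothing about whether studying caused the scores or whether a 3rd variable (say, motivation) drives both.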


-Reasons for Correlational Research: sometimes experimental designs aren't possible or ethical, so we have to do correlational research
-Correlational Hypothesis Template: (VOI 1) will be (positively/negatively) related to (VOI 2)
o Ex: exam scores will be positively related to time studying
  - As exam scores go up, time studying goes up

Pros and Cons of experimental vs. correlational research

-Confounding Variable: an uncontrolled variable that varies systematically with an IV
o Cannot separate the effect of the IV from that of the confounding variable
o Threatens internal validity

Defining Variables:
-Conceptual definition: the abstract, theoretical construct we are trying to assess
-Operational definition: specifying the way in which the researcher is measuring or manipulating a concept
o Have to operationalize variables when we conduct research
o "This is what I did in our study" – clearly state what you did – e.g., how you measured intelligence
o Need to clearly operationalize all variables (IV, DV, VOIs)

Validity: how true something is
-Construct validity: the degree to which the variables accurately reflect or measure the constructs of interest (how well did we operationalize the variables?)
o Are we getting at what we think we are getting at?
-Internal validity: the extent to which causal conclusions about the effect of the IV can be substantiated
o Did the IV → DV? (causality within the study)
-External validity: the extent to which the results of a study can be generalized to other populations and settings
o Want a realistic sample to increase external validity

Ward, Zanna, and Cooper study: do stereotypes bring out self-fulfilling prophecies / affect performance in an interview environment?
-Experiment 1:
o IV: race of applicant (black vs. white)
  - Quasi-experimental variable – can't talk about causality
o DV: immediacy behaviors (conceptual) → operationalized as: physical distance, forward lean, eye contact, shoulder lean
-Experiment 2:
o IV: immediacy (low vs. high) – confederates were trained to treat people the way the black vs. white applicants were treated in Experiment 1
o DV: applicant interview performance (conceptual) → operationalized by judges' ratings of performance
→ Experiment 2 was created so that the IV could be experimental rather than quasi-experimental – allows them to determine causality

Chapter 5 – Measurement

-Observed Score = true score + systematic error + random error
o Systematic error: we aren't measuring the construct we hoped to
  - Has to do with validity
  - The more we maximize validity, the more we minimize systematic error
o Random error: has to do with reliability – not directly related to content

Reliability: the extent to which a measure is consistent or free from random error
o Reliable measures give you similar results time after time
o The more the instructions are standardized, the more reliable the results will be
o Try to keep the environment as free from distraction as possible

-Ways to Assess Reliability:
o Test-retest reliability: measuring the same individuals at two points in time
  - Correlate the two scores
  - Want tests close enough together to get similar responses but far enough apart that participants don't remember their exact answers
o Internal consistency: determines whether the individual items correlate well with one another
  - In a 100-question IQ test, all questions are trying to get at IQ
  - Split-half reliability: correlating half of your items with the other half
  - Cronbach's alpha: looks at the correlation between every item
o Inter-rater reliability: examines the agreement of observations made by two or more judges
  - Judges should agree on what they are looking for and how they are measuring it
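Not part of the original notes: a minimal Python sketch of Cronbach's alpha, using a made-up 3-item questionnaire answered by 5 hypothetical respondents. It shows the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores): when items rise and fall together, the total-score variance swamps the item variances and alpha approaches 1.

```python
def variance(xs):
    """Sample variance (divides by n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, holding every respondent's score."""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # total score per respondent
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 3-item scale, 5 respondents; the items move together,
# so internal consistency should be high.
items = [
    [2, 4, 4, 5, 1],
    [3, 4, 5, 5, 2],
    [2, 5, 4, 4, 1],
]
print(round(cronbach_alpha(items), 3))  # 0.959
```

Note that a high alpha shows only low random error (reliability); the items could all consistently measure the wrong construct, which is why validity must be assessed separately.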






Be able to explain:
- What the observed score should ideally be (close, far, the same?)
- Why it is important to have reliability
- Pros and cons of the different ways of assessing reliability

Construct validity: the degree to which a variable accurately reflects the theoretical construct it is designed to measure (free from systematic error)
o Does your measure reflect what it is supposed to measure?
o Are you getting at the theoretical concept that you want?
→ You can have a reliable measure that is not valid, but you can't have a valid measure that is not reliable

-Ways to Assess Construct Validity
→ Do items seem to get at what we want to measure?
o Face validity: how obvious is it to the subject what the test is measuring?
  - Don't always want high face validity
  - Participants think the test is measuring what it says it is measuring
o Content validity: experts believe the measure relates to the concept being assessed
→ How does the measure compare to other measures?
o Predictive validity: the measure's ability to predict a future behavior or outcome (aka criterion validity)
  - Predicts future behavior
o Concurrent validity: the extent to which the measure corresponds with another (current) measure of the construct
  - e.g., does a psychologist's assessment of a subject's depression correspond with the results of the Beck Depression Inventory?
  - Corresponds with a current behavior
o Convergent validity: the measure overlaps with different measures that are intended to tap the same theoretical construct
  - e.g., on the GRE, want math, verbal, and logic scores to correlate because we think they are measuring the same thing (aptitude for grad school)
  - Correlates with other measures of the same construct
o Discriminant validity: the measure does not overlap with other measures that are intended to tap different theoretical constructs
  - e.g., ideally you want a low correlation between the SAT and test-taking ability – you want the SAT to reflect aptitude to do well in college, NOT how good you are at taking tests
  - Does not correlate with measures of different constructs

-Measurement Scales
o Nominal: numbers stand for categories but mean absolutely nothing
  - Male = 1, female = 2 → makes no sense to take an average








o Ordinal: numbers indicate rank order – indicates preference but not by how much (doesn't tell you the distance between numbers)
  - e.g., ranking professors in order of how much you like them
o Interval: the distances between numbers on the scale are all of equal size
  - Zero is an arbitrary reference point
  - -5 … 0 … +5 (-5 = hate, +5 = love)
o Ratio: the only scale that measures a true amount of the variable
  - 0 means zero amount of that variable – no negative numbers
  - e.g., weight on a scale: 200 is twice as much as 100

Be able to explain:
- Why operationalizing is important & why it is difficult to do
- Key differences between the different types of validity
- The difference between reliability and validity, and why each is important
- How each of the scales differ, and examples of each
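Not part of the original notes: a tiny Python illustration of the four scale types, with all values invented for the example. It shows which arithmetic operations are interpretable at each level, matching the definitions above.

```python
# Hypothetical values illustrating each level of measurement.
nominal  = [1, 2, 2, 1]    # 1 = male, 2 = female: labels only
ordinal  = [1, 2, 3]       # rank order; gaps between ranks are unknown
interval = [-5, 0, 5]      # equal spacing, but 0 is an arbitrary midpoint
ratio    = [100, 200]      # true zero exists, so ratios are meaningful

# The mean of nominal codes is computable but meaningless as a description:
print(sum(nominal) / len(nominal))   # 1.5 -- "halfway between male and female"?

# Differences ARE interpretable on an interval scale:
print(interval[2] - interval[0])     # 10

# ...but "twice as much" only makes sense with a true zero (ratio scale):
print(ratio[1] / ratio[0])           # 2.0
```

This is why the Pearson correlation in Chapter 4 requires interval- or ratio-level data: it relies on differences from the mean being meaningful quantities.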

Rubin study: is love conceptually different from liking? – Rubin thinks so – how can we measure/figure this out?
-Love questionnaire:
o Content validity: looking to see what experts think love is (reviewing past literature)
o Face validity: based on what faculty and students thought
-Questionnaire study – fill out the questionnaire for a romantic partner or a friend – fill it out two times, once for each
o Interval scale: rating scale with equal distances between each value
o Discriminant validity: love for one's romantic partner was uncorrelated with scores on the Marlowe-Crowne Social Desirability Scale
  - The two scales are measuring different things – low correlation = discriminant validity → helps show love and liking are different
o Concurrent validity: people who perceive themselves to be in love score higher on the love scale than those who do not
o Internal consistency: on the love scale, Cronbach's alpha shows the correlation between every item (reliability)
-Experimental study:
o IV – strength of love (strong or weak) – quasi
o IV – together or apart in room – quasi
o DV – eye contact/gazing
o Hypothesis: strong-love couples that are together will gaze more than weak-love couples that are apart
o Reliability – inter-rater: judges are in agreement
o Predictive validity

Chapter 9 – Conducting Experiments

Assessing construct validity of the operationalized IV:

o Pre-test: conducted before the actual study (with different subjects) to determine if the IV manipulation will work as you predict
o Manipulation check: done with the same participants from your actual study – assesses whether the manipulation of the IV had the intended effect on the participants
  - Can ask a series of questions after the dependent variable is collected


Independent Variable: can be quasi-experimental or experimental
o Figure out the number of levels and what the levels are (of the IV)
-Strength of Manipulation: want the strongest manipulation possible that is ethical, reasonable, and realistic
o Want it to be reasonable for external validity reasons
o Want good distance between levels (e.g., hours spent studying)

Dependent Variable
-Types of Measures
o Self-report: most common – questionnaires, etc.
  - Pros: fast, cheap, easy, often accurate
  - Cons: relative (open to interpretation), self-serving bias, requires participant awareness (can they accurately tell you information about themselves?)
o Behavioral measures: directly observing the behavior of interest – must operationalize – how are you rating/measuring the behavior?
  - Pros: less relative – operational definitions, consistent, direct measure of natural behavior and environment
  - Cons: reactivity (change in behavior due to the presence of the experimenter), have to hire and train judges, expensive, time consuming
o Physiological measures: directly measuring a physical aspect of the respondent – blood pressure, sweat, heart rate
  - Pros: objective measure of strength – no subjectivity
  - Cons: reactivity, valence – know strength but not direction
→ Reactivity: minimize by being unintrusive and giving people time to get used to the measurement tools – gets rid of nerves



-Variability in scale (measuring the DV)
o Need to allow the broad range of answers you could get for a question
o Might need to do prior research or a pre-test
  - Can't just offer: never, sometimes, always – not enough information



-Number of Dependent Variables: how many DVs do you want to measure?
o Extra DVs can help show that there aren't alternative explanations for the IV causing the DV
o Order effect with multiple DVs: could impact results



Participant Expectations

-Cover story: provides a rationale for the study so participants don't provide their own rationale – don't always need a cover story
o Cover stories help gather people's natural responses – sometimes if participants know what is being assessed, they might alter their behavior
o Cover stories don't blatantly deceive the participant; they just get the participant's mind off the focus so as to get natural responses
-Face validity: when it's obvious to the participant what's being assessed
o Want low face validity in cases where the participant might have a self-serving bias (e.g., questions about race)
o Filler items: keep a person from knowing what you are focusing on (in a questionnaire)

