2020-1 HKU PSYC3052 Advanced Social Psychology Course Summary


Summary

In addition to the overview of each area provided by the text, we will generally focus on three to four articles each week in depth. Social psychology has been defined as “an attempt to understand and explain how the thoughts, feelings, and behaviors of individuals are influenced by the actual, imag...


Description

HKU PSYC3052A/B Advanced Social Psychology 2020-1 Autumn Course Summary

This is a collaborative group summary of the PSYC3052 advanced social psychology course given at the University of Hong Kong in Autumn 2020-1 by Gilad Feldman.

If you are enrolled in PSYC3052 but do not have edit access, please request access by clicking on "view only" and submitting a request.

General course resources
Course materials: https://osf.io/7aj3n/
Course summary: https://mgto.org/psyc30522020coursesummary
Course syllabus: https://mgto.org/hku2020psyc3052
Course videos: https://www.youtube.com/playlist?list=PLRAF6P3W1K4cc3y3eVtUdu9KJPwg6Ydp0

Course summary credits
Big thanks to all the wonderful students who helped make this a wonderful summary:
● Ritika Satish Sukhani (3+, 4+, 6b+, 7+, 9b, 12b)
● Cheuk Hei Peony Chung (3+, 4b, 5b, 6b, 9+, 11b)
● Hong Pak Kwong (3+, 4+, 5+, 8+, 11+)
● Hoi Yan Chu (3b+, 4b, 6+, 8, 10+, 11+)
● Hoi Laam Chong (2+, 3+, 5+, 6+, 9b+, 10b)
● Kam Hung Tsui (1+, 3+, 4+, 6, 7m)
● Maria Tahir (3b)

Week #1: Introduction to the science crisis

Case studies
1. The chocolate leads to weight loss hoax
   a. Article titles appeal to claims readers want to be true. With little demand for the work to be 'open', readers take a broad phenomenon at face value out of a desire for positive, easy solutions to their relatable quick-fix issues.
   b. From an economic standpoint: reader demand for easy weight-loss solutions, which readers do not intend to dispute, gives publishers and scientists an incentive (income and status) to supply them. So readers' demand can contribute as much to the science crisis as the research itself.
2. The ego depletion saga
   a. An RRR (published in 2016):
      i. 24 labs
      ii. Over 2,000 participants
      iii. No discernible effect on low-level inhibitory control
   b. Michael Inzlicht (an important figure in the field of ego depletion): "The problem is that ego depletion might not even be a thing."
   c. Roy F. Baumeister & Kathleen D. Vohs (original authors) raised two questions about the RRR:
      i. The generality of causal principles
      ii. The reliable effectiveness of particular lab procedures
   d. Kathleen D. Vohs: a pre-registered depletion replication project taking the paradigmatic replication approach
      i. More than 40 labs from around the world contributed data
      ii. Meta-analytic and Bayesian models
      iii. Weak effect
   e. Conclusion: → the end of ego depletion
3. Many Labs: mass pre-registered replication efforts in science
4. Feeling the future / Daryl Bem

Reasons for the crisis (there are many others)
1. Questionable research practices / "p-hacking" (all reduce the authenticity of significance; see the simulation sketch after this list):
   a. Stop collecting data once p < .05
   b. Only report measures with p < .05
   c. Use covariates to reach p < .05
   d. Exclude participants
   e. Transform data
   f. Remove outliers
      i. The removal of items that are very different from the rest of the sample
   g. Delete experimental conditions
      i. Remove conditions that did not have a significant effect (do not publish those results)
      ii. E.g. if among the 'high', 'medium', and 'low' conditions only the 'high' condition gets a significant result, only this condition is presented in the research paper.
   h. Measure other variables (originally unplanned)
   i. Add more participants (the significant p-value might disappear if more participants are added, because of random noise)
      i. → Further data collection/analysis may change the p-value in a way that makes the research unpublishable (the "dance of the p-values")
   j. Use multiple measures to analyse the results but only report those with p < .05
      i. → Other measures may suggest a result that contradicts the one with p < .05 (i.e., the result may not be as significant as researchers thought it would be)
   k. Analyse different conditions but only report those with p < .05 (i.e., a significant result)
      i. → The analysis of other conditions may suggest opposite results, fail to echo the hypothesis, or suggest that the results of a certain experiment are not significant
   l. Manipulate the exclusion criteria to get p < .05
   m. Manipulate the data by calculations or other means to get p < .05
   n. Use covariates to get p < .05
2. Lacking power: using a sample size that is too small
   a. Small sample sizes tend to yield significant results more easily than large sample sizes.
3. Over-reliance on "significant results"
   a. "The dance of the p-values"
      i. P-hacking simulator: experience the statistics
      ii. Getting opposite results from the same dataset: https://fivethirtyeight.com/features/science-isnt-broken/#part1
      iii. Video explaining the "dance of the p-values"
   b. "All or nothing" thinking
4. Incentive system in science (psychology included):
   a. Journals encourage new, positive, and counterintuitive results, making researchers feel pressure to find positive, unlikely results
   b. Replication is discouraged: replications do not receive as much attention as new and positive results, and journals are less likely to publish replication articles in general, especially failures to replicate
   c. Original researchers might respond to the negative results of a replication by attributing them to a poor replication (methods, participants, analysis, etc.)
   d. Strong emphasis on publishing regardless of getting things right; the number of publications is the top factor for an individual's career path, e.g. chance of being hired, salary, promotion
5. Hypothesizing After the Results are Known (HARKing):
   a. Hypothesizing after getting the result, to make the hypothesis fit the result
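To make the optional-stopping practice in 1a concrete, here is a minimal simulation sketch (not from the course materials; all names and parameters are illustrative) showing that repeatedly peeking at the data and stopping as soon as p < .05 inflates the false-positive rate even when there is no real effect:

```python
# Minimal sketch (illustrative, not from the course materials):
# simulate "stop collecting data once p < .05" under a true null effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_finds_significance(max_n=100, batch=10, alpha=0.05):
    """Collect data in batches, test after each batch, stop at p < alpha."""
    control, treatment = [], []
    while len(control) < max_n:
        control.extend(rng.normal(0, 1, batch))    # both groups come from
        treatment.extend(rng.normal(0, 1, batch))  # the same distribution
        if stats.ttest_ind(control, treatment).pvalue < alpha:
            return True                            # "significant" by luck
    return False

runs = 2000
hits = sum(peeking_finds_significance() for _ in range(runs))
print(f"False-positive rate with optional stopping: {hits / runs:.1%}")
# Typically well above the nominal 5%, which is why optional stopping
# counts as a questionable research practice.
```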

Recommendations to address the crisis
1. Transparency / open science:
   a. Share data, code, methods/stimuli, and all decisions made (exclusions, reporting, etc.)
   b. Use open-source software (and avoid licensed software such as SPSS)
   c. Focus on collaboration and on getting things right (it's OK to be insignificant)
2. Replications:
   a. Any finding needs attempted replications, to examine whether the phenomenon is dependable or just a fluke (a one-time occurrence). Operational definitions are critically important in aiding replication because an operational definition spells out exactly how to measure something. To replicate an experiment, one must know how the original researcher performed measurements. Hence operational definitions must be known, and known precisely, in order to replicate research.
   b. Replication should be encouraged, by multiple/different labs
3. Pre-registrations / Registered Reports
   a. Submit the planned research design and methods in advance, in order to avoid potential p-hacking moves like changing the hypothesis or reporting different measures to fit the result
   b. Minimize flexibility in decisions
4. Meta-analyses
   a. Encouraged, to get an estimate of the overall effects (and to address/test publication bias)
5. Well-powered samples: larger sample sizes, conduct power analyses (see the sketch after this list)
   a. Conduct the research with a larger sample size, as this gives a narrower distribution and hence larger power to detect differences
6. Multiple analyses from different researchers
7. It is important for educators to recognize the crisis and the low replicability of some 'popular' psychological studies, and to properly convey the uncertainty and problems of those studies to students.
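As a companion to recommendation 5, here is a hedged sketch of an a-priori power analysis using statsmodels; the assumed effect size and power target are illustrative values, not numbers from the course:

```python
# Sketch of an a-priori power analysis for a two-group comparison
# (the assumed effect size and power target are illustrative).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.4,        # assumed Cohen's d (hypothetical)
    alpha=0.05,             # significance level
    power=0.95,             # desired power
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")
# Assuming a smaller effect or demanding more power pushes the required
# sample size up quickly, which is why small convenience samples are
# usually under-powered.
```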

Suggested resources:
● Podcast: Two Psychologists Four Beers - Is Ego Depletion Real?

Week #2: Misunderstanding and misinterpreting stats

Bits of stats introduction
1. Misunderstandings of statistics + abusing statistics to reach a goal
   a. Null hypothesis testing cannot prove or disprove a hypothesis.
   b. Replication fallacy: the level of significance ≠ the results of the experiment are replicable
2. Daniel Lakens: "A p-value is the probability of (observed) data, under the assumption that the null hypothesis is true"
   a. The likelihood of observing the effect when there is no effect
      i. Question: p = .05 can lead to drawing wrong conclusions what % of the time?
      ii. Answer: it depends on the power; it could be up to 35% error
   b. If power decreases, the rate of Type I and Type II errors increases
   c. Type I error (false positive) depends on the p-value
   d. Type II error (false negative) depends on power
   e. Power + p-value and accuracy
      i. Power: significant and true
      ii. p-value: not true and significant
   f. In order to drop the error rate to 5%, we should have at least 95% power at p = .05
   g. We aim for 99% power!
3. Likelihood shiny app (inputs):
   a. Number of studies
   b. Number of successes
   c. Type 1 error rate (p-value)
   d. Assumed power
4. "The wolf is coming" (狼來了) - an easy memorization technique for Type 1 and Type 2 errors (from lecture)
   a. The boy cries wolf but in fact there is no wolf (Type 1 error - false positive)
   b. Then, when there is a wolf, no one believes the boy (Type 2 error - false negative)
5. The p-value only captures the Type 1 error (rejecting the null hypothesis incorrectly)
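A quick back-of-the-envelope calculation, along the lines of the likelihood shiny app above, shows how power and the significance threshold combine to determine how many "significant" results are actually false; the prior probability of a true effect used below is an illustrative assumption:

```python
# Sketch: what share of "significant" results are false positives, given
# alpha, power, and the prior probability that a tested effect is real?
# The prior (0.5) and the power values are illustrative assumptions.
def false_discovery_share(alpha, power, prior_true):
    true_positives = prior_true * power          # real effects detected
    false_positives = (1 - prior_true) * alpha   # null effects flagged anyway
    return false_positives / (true_positives + false_positives)

for power in (0.95, 0.80, 0.50, 0.20):
    share = false_discovery_share(alpha=0.05, power=power, prior_true=0.5)
    print(f"power={power:.2f} -> {share:.1%} of significant results are false")
# The lower the power, the larger the share of "significant" findings
# that are wrong -- one reason the lecture pushes for 95%+ power.
```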

Questionable research practices (QRPs)
1. Examples of QRPs
   a. Selective reporting
      i. A form of reporting bias
      ii. Certain components of conducted research are not fully presented, based on the nature or direction of the results
      iii. Hutton, J. L., & Williamson, P. R. (2000). Bias in meta-analysis due to outcome variable selection within studies. Journal of the Royal Statistical Society: Series C (Applied Statistics), 49(3), 359-370.
   b. Cherry picking
      i. Suppressing evidence / the fallacy of incomplete evidence
      ii. A fallacy of selective attention (common example: confirmation bias)
      iii. Confirming a particular position while ignoring a significant portion of related and similar cases or data that may contradict that position
   c. P-hacking
      i. The misuse of data analysis to find patterns in data that can be presented as statistically significant
      ii. Can be done by performing many statistical tests on the data and only reporting those that come back with significant results
      iii. Smith, G. D., & Ebrahim, S. (2002). Data dredging, bias, or confounding: They can all get you into the BMJ and the Friday papers.
   d. Outcome switching
      i. Experimenters "move the goal posts" during a trial, which may be done to achieve the desired results
   e. HARKing
      i. Refers to the questionable research practice of "Hypothesizing After the Results are Known"
2. How common are these?
   a. See John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532.

Additional resources:
1. Lakens' blog post, Improving Education about P-values: https://daniellakens.blogspot.com/2019/09/improving-education-about-p-values.html
2. YouTube lecture: https://www.youtube.com/watch?v=RVxHlsIw_Do
3. Home exercise: https://docs.google.com/document/d/1tm9nM5vlGovkoKSqZ1zO8nsUYFLsKyYa2aI1JgU0xU/edit
4. One more very clear video about Type I and Type II errors: https://www.youtube.com/watch?v=FEBLOnUdtoM
5. Part of the free online course Lakens gives that will help you understand the basics of science: https://www.coursera.org/learn/statistical-inferences

Week #3: Registered Reports | RRR assessment and case studies

Relevant resources
1. RRR assessment template: http://mgto.org/RRRassessment
2. RRR assessment examples from 2019: https://mgto.org/2019-rrr-assessments
3. RRR assessment tutorial: https://youtu.be/kjfHRCfZuvI
4. Gilad's pre-registration and registered report workshop: https://www.youtube.com/watch?v=0lkjMtLpDZM

Pre-registration
● Once we have generated and specified a hypothesis and designed a study, we post it on a public website such as OSF with a time-stamp
  ○ This helps with a clear distinction between confirmatory and exploratory research
    ■ Confirmatory research tests hypotheses, while exploratory research browses through the data
  ○ Confirmatory research is where we can actually make sense of the p-value: it has a known number of hypothesis tests and allows error rate control (the complete opposite of exploratory research)
● If we pre-register, we have evidence that the work is confirmatory research rather than the result of exploring the data with various analyses

Registered reports
● The traditional publishing model is broken: publication bias.
● Four central aspects of Registered Reports:
  ○ 1) Decide the hypotheses, procedures, and analyses BEFORE data collection
  ○ 2) Peer review BEFORE experiments are conducted, so that you can attain in-principle acceptance, which means 3) your study will be published no matter the results
  ○ 4) Registered Reports include original studies and valuable replications
● It does not matter whether hypotheses are supported, whether results are novel, or whether the p-value is below .05.
● Benefits of Registered Reports:
  ○ No publication bias, because in-principle acceptance is given before data is produced
  ○ Eliminates various forms of researcher bias such as p-hacking
  ○ Increases reproducibility
  ○ Incentivizes doing replications
  ○ Incorporates public archiving of data and materials
  ○ In general, more reproducible, transparent, and credible
  ○ Get timely feedback
  ○ Get expert reviewer feedback when it is most useful
  ○ Higher acceptance rate compared to regular articles

Peer review:
● Stage 1 peer review occurs after designing the study and before data collection
  ○ Ensures the hypotheses are well founded; the methods and proposed analyses are feasible and sufficiently detailed; the study is well-powered; there are sufficient controls
● Stage 2 peer review occurs after writing the report and before publishing it
  ○ Ensures the authors followed the approved protocol; positive controls succeeded; the conclusions are justified by the data
  ○ Makes sure that the data collection and analysis are in line with the stage 1 peer review
● Important note on the Registered Report model: after stage 1 peer review, if students would like to test other hypotheses and research questions, they must explicitly state that the new hypotheses were not pre-registered in stage 1; those analyses then become exploratory, and someone else needs to replicate them. This does not mean that students should be discouraged from introducing new research questions after the pre-registration peer review; we should then adopt internal peer reviews to ensure the credibility of our studies/replications.

RRR examples
1) Influence of verb aspect (was doing vs. did)
   a) Target report: Hart, W., & Albarracín, D. (2011). Learning about what others were doing: Verb aspect and attributions of mundane and criminal intent for past actions. Psychological Science, 22, 261-266.
      i) How verb aspect (was doing vs. did) influences people's perception of intention
      ii) Results: participants who read what a person was doing showed enhanced accessibility of intention-related concepts and attributed more intentionality to the person, compared with participants who read what the person did.
         (1) p = .001, d = 1.00
   b) Multi-lab RRR: Eerland, A., Sherrill, A. M., Magliano, J. P., Zwaan, R. A., Arnal, J. D., Aucoin, P., ... & Crocker, C. (2016). Registered replication report: Hart & Albarracín (2011). Perspectives on Psychological Science, 11, 158-171.
      i) Two studies found an effect, and two others did not produce significant effects
   c) Response from the original authors: we already successfully replicated the study in the past
      i) We need to revisit the original authors' own replications: examine whether the data and designs are reliable and, more importantly, require that their replication process be 100% open, to make sure issues like data control and selective reporting are avoided
   d) Replicators analyzed the original research data and found a lot of duplications (only in the direction that supports the phenomenon) → manipulated data
   e) Retraction Watch: an unnamed graduate student was the source of the manipulation, and the author was unaware of what had occurred
2) Priming effect - Doyen et al. (2012) failed to replicate priming studies
   a) Elderly priming (Bargh et al.): a p-curve suggests p-hacking for these studies (many results are just below the p value of .05)
   b) Hot and cold priming → manipulating the beliefs of experimenters led to results going in the expected direction (experimenter bias?)
- Yet, we need to differentiate between "social priming" and "cognitive priming" effects. Cognitive priming seems to replicate far better. Therefore we should be alert about which specific phenomenon we are looking at and not treat all the studies under one big phenomenon/effect as the same.

Week #4: Improving psychological science

How to read a forest plot (see the plotting sketch below)
1. Is the result significant or not? General rule of thumb (though not 100% accurate):
   a. The confidence interval does not overlap with the null → significant
   b. The confidence interval does overlap with the null → not significant
2. How to see the sample size?
   a. Look at the size of the marker at the center of the confidence interval (bigger rectangle → larger sample size)
   b. When we have a large sample, the noise goes down (larger sample size → narrower CI)
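To tie those reading rules to an actual figure, here is a minimal matplotlib sketch of a forest plot; the lab names, effects, standard errors, and sample sizes are made up for illustration:

```python
# Minimal forest-plot sketch with made-up studies: marker size is scaled
# by sample size, and a 95% CI that crosses the null (0) is read as
# "not significant".
import numpy as np
import matplotlib.pyplot as plt

studies = ["Lab A", "Lab B", "Lab C", "Lab D"]   # hypothetical labs
effects = np.array([0.45, 0.10, 0.30, -0.05])    # effect-size estimates
ses = np.array([0.15, 0.20, 0.08, 0.12])         # standard errors
ns = np.array([80, 45, 300, 150])                # sample sizes

fig, ax = plt.subplots()
y = np.arange(len(studies))
ax.errorbar(effects, y, xerr=1.96 * ses, fmt="none", ecolor="black")  # 95% CIs
ax.scatter(effects, y, s=ns, marker="s", color="black")  # bigger square = bigger N
ax.axvline(0, linestyle="--", color="grey")              # the null value
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Effect size (with 95% CI)")
ax.invert_yaxis()
plt.show()
```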

Assessing classics in social psychology / JDM
- Multiple-study replications allow for more flexibility in terms of examining moderators

Hindsight bias (Baruch Fischhoff)
- Receiving information about the outcome makes people assign a greater likelihood to the known outcome than they would otherwise do
- A common tendency for people to misperceive past events as easily predictable, or to think they would have predicted the result accurately if given the chance; this makes people think they are good at predicting many events, but this is a misconception

Outcome bias (Jonathan Baron)
- Receiving information about a treatment outcome (success/failure) leads people to adjust their perceived quality of the decision, the competence of the decision maker, and the likelihood of the outcome to match that outcome

→ Successful replications were conducted on these two biases
→ Do these biases affect how we view scientific evidence, including classic findings and replication studies? (e.g. using research results to judge the quality of a study)

Priming studies
→ Collectively, hypotheses regarding priming are not rigorously interrogated
- Priming studies (e.g. from Srull & Wyer, 1979) have implausibly large effect sizes and significant results despite small sample sizes (large F ratios, small p values, etc.)
- Results of the RRR (using the same methods as the original study) are inconsistent with those reported by Srull & Wyer (1979)
→ Time to be more critical and decrease our confidence in the methods that produce hostile priming effects
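One informal way to check whether a reported result looks "implausibly large" is to convert the reported test statistic back into a standardized effect size. The sketch below uses the standard conversion for a two-group between-subjects design; the F value and cell sizes are placeholders, not numbers from Srull & Wyer (1979):

```python
# Sketch: convert a reported F(1, df) from a two-group between-subjects
# design into Cohen's d to sanity-check its plausibility. The F value
# and group sizes below are placeholders, not the original paper's.
import math

def f_to_cohens_d(f_value, n1, n2):
    """For 1 numerator df, t = sqrt(F) and d = t * sqrt(1/n1 + 1/n2)."""
    t = math.sqrt(f_value)
    return t * math.sqrt(1 / n1 + 1 / n2)

d = f_to_cohens_d(f_value=12.0, n1=12, n2=12)
print(f"Implied Cohen's d: {d:.2f}")
# A very large d from a very small sample is a cue to read the methods
# (and any replications) more sceptically, not proof of wrongdoing.
```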

7 sins of psychology
1. Sin of bias - seeing what we want to see rather than the truth
   - Outcome bias: judging the quality of research based on the results it produces rather than the theory and method
   - Confirmation bias: favouring evidence that confirms our prior beliefs
   - Publication bias: deciding what research gets published, and with what prominence, based on the results
   - Hindsight bias: fooling ourselves (and others) into believing that we predicted results that were unexpected
2. Sin of hidden flexibility - cherry picking, testing to a foregone conclusion
   - P-hacking
   - Selective reporting
3. Sin of unreliability - failure of rigour and self-correction
   - Lack of replication: ~1 in 1000 papers reports an independent direct replication
   - Lack of statistical power...

