Lecture 14 - Instructor: Timothy Peterka

Course: Scientific Study of Politics
Institution: University of California Davis




MARCH 13 Lecture 14 (Research Design)

Research Design
- Causal inference
  - Convincing comparisons are key
  - Need to compare outcomes across different values of the IV
  - Requires controlling for confounding variables
    - Use cases that are as close to identical as possible
    - Eliminate Z as a possible alternative explanation

Experiments
- Why?
  - We have reason to think that a confounder could be responsible
  - Protect against competing explanations
  - It may be too difficult to rule out confounding variables w/ just observational data
  - Experiments help break the link between confounders and the IV

Causality w/ Experiments
- Help rule out Z
- Goal: make the 2 groups (treatment group and control group) as similar as possible
  - The only difference is whether they received the treatment or not (to approximate the counterfactual condition)

Treatment
- Ex. Suppose we're interested in the effects of campaign ads
  - IV (the causal factor of interest): exposure to a TV ad
  - DV (the outcome of interest): support for the candidate
- The researcher controls the value
- Most of the time a treatment is binary (you get it or you don't)
- Sometimes there are different levels of treatment
  - Ex. hearing something for 30 secs, 60 secs, or ??

Random Assignment of Subjects
- Control group: does not receive the treatment (measure the DV)
  - Forms the baseline for comparison: what we compare the treatment group to
  - Did the treatment make a difference? Is the value of the DV from the treatment group different from the value of the DV from the control group?
- Treatment group: receives the treatment
  - Measure the DV and compare it to the value from the control group
  - If the treatment makes a difference, the DV should take a different value
- Subjects are assigned to groups via random assignment
- Random assignment: each subject has an equal chance of being assigned the treatment
  - None of their attributes affects whether they get the treatment
  - Ex. flipping a coin to determine who receives the treatment
- ≠ random sampling, which deals with which cases from the population go into the sample

  - Population: the entire collection of cases (all that exists)
  - Sample: a subset of the population

Random Assignment and Z
- The control and treatment groups are not systematically different
  - They generally have the same values on background characteristics
  - Confounding factors are "balanced" between the groups
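The "balance" point above can be seen in a quick simulation. This is a minimal sketch with invented numbers (nothing here comes from the lecture's data): each subject gets a pre-existing "political interest" score standing in for the confounder Z, and a coin flip assigns treatment, so Z cannot influence group membership.

```python
import random
import statistics

# Hypothetical illustration: "interest" is a confounder Z that exists
# before the experiment. Values are invented for demonstration.
random.seed(1)
subjects = [{"interest": random.gauss(50, 10)} for _ in range(10_000)]

# Random assignment: a coin flip decides treatment, so nothing about the
# subject (including Z) affects which group they land in.
for s in subjects:
    s["treated"] = random.random() < 0.5

treated = [s["interest"] for s in subjects if s["treated"]]
control = [s["interest"] for s in subjects if not s["treated"]]

# With enough subjects, the confounder is "balanced": the two group means
# on Z are nearly identical.
print(round(statistics.mean(treated), 1), round(statistics.mean(control), 1))
```

With a large sample, the two printed means differ only by chance noise, which is what "confounding factors are balanced" means in practice.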

- A very good approximation of the counterfactual condition
  - The FPCI (fundamental problem of causal inference) still applies
- **Breaks the link between Z and X by breaking the correlation between the confounder and the IV of interest
  - Does not remove confounders
- What if there wasn't random assignment?
  - People in the treatment group could be systematically different from the control group, because they can choose treatment or control
    - Ex. Z could determine their value for X
  - Ex. the TV ad
    - People choose whether or not to watch the ad (treatment)
    - Perhaps the people who chose to watch are more into politics than the people who chose the control → the effects we measure could be due to interest, not the treatment

Experimental Designs
- A research design where the researcher controls and randomly assigns values of the IV
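The self-selection problem above can also be simulated. In this hedged sketch (all probabilities invented for illustration), political interest raises both the chance of choosing to watch the ad and support for the candidate, while the ad itself has no effect at all. The naive observational comparison still shows a large "ad effect"; the randomized comparison shows roughly zero.

```python
import random

# Hypothetical simulation: interest (Z) drives both ad-watching (X) and
# support (Y); by construction the ad has NO causal effect on support.
random.seed(2)

def support(interested):
    # Interested people support the candidate more, regardless of the ad.
    return 1 if random.random() < (0.7 if interested else 0.3) else 0

n = 20_000
results_selfselect = {"watched": [], "skipped": []}
results_random = {"watched": [], "skipped": []}

for _ in range(n):
    interested = random.random() < 0.5

    # Observational world: interested people tend to choose to watch.
    chose_ad = random.random() < (0.8 if interested else 0.2)
    results_selfselect["watched" if chose_ad else "skipped"].append(support(interested))

    # Experimental world: a coin flip assigns ad exposure.
    assigned_ad = random.random() < 0.5
    results_random["watched" if assigned_ad else "skipped"].append(support(interested))

def mean(xs):
    return sum(xs) / len(xs)

# Self-selection produces a large spurious difference; randomization ~0.
naive = mean(results_selfselect["watched"]) - mean(results_selfselect["skipped"])
exp = mean(results_random["watched"]) - mean(results_random["skipped"])
print(round(naive, 2), round(exp, 2))
```

The spurious difference in the observational arm is exactly the "effects we measure could be due to interest, not the treatment" problem from the lecture.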

  - Ex. political interest is not correlated w/ watching the ad, because the researcher controlled ad-watching via coin flip
- Contrasts w/ observational designs
  - Researchers don't control the IV; they can only see what's going on (what's observed)
  - Ex. political interest may be correlated w/ watching the ad

Validity
- Internal validity
  - Does the experiment test what it says it tests? How sure are we of the effect?
  - Comes from the treatment we control
  - Experiments do very well on this
- External validity
  - How well does the result generalize to the rest of the population?
  - Experiments can struggle on this

Benefits
- Accounts for Z: isolates the effect of the causal factor
- DV is measured after the IV is applied (no reverse causality)
- The study can be replicated
- Great for identifying and isolating a causal factor

Drawbacks
- Not all IVs are amenable to experimental manipulation (researchers don't have control over these variables)
  - Ex. income, ideology, military spending → limits experimental research designs
- External validity
  - Samples of convenience raise concerns about generalizability
    - Ex. college students: the effect may hold for the subject pool, but the pool could be systematically different from the rest of the population
  - "Realness" of the experiment
    - The lab environment is very different from the real world; in the lab, people choose to pay more attention
    - Ways to improve: move out of the lab (ex. field experiments), which can also make results more generalizable

Ethical Concerns
- Even if we can control a treatment, it may not always be a good idea to do so
- The treatment could be beneficial, so withholding it from the control group raises ethical concerns

McClendon - Social Esteem and Participation in Contentious Politics: A Field Experiment at an LGBT Pride Rally

Research Question: Can the promise of social esteem make people participate in a political rally?

Why an experiment?
- Studies show a link between social ties and participation
- McClendon would like to know whether the relationship is causal (whether it's actually due to perception by peers)
- It's costly to participate in a political rally, especially when it's on a contentious topic

Theory: Even when the cause is controversial, the prospect of social esteem may offset those costs
- Receive adulation in exchange for risk
- Maintain or improve social standing within the group

Hypothesis: "Individuals who receive an explicit promise of admiration from ingroup members for participating in contentious politics on behalf of the group should on average be more likely to participate than those who learn simply about the goals and logistics of the event."
- In other words: telling someone that the organization will think highly of them for participating should drive participation more than just giving them a time and place

Experimental Design
- Subjects were members of an LGBT advocacy and support organization
- The organization was throwing a rally to support repeal of "Don't Ask, Don't Tell" and support for marriage equality
- This made the experiment more real: participants were already part of the organization, so being invited via email was not irregular
- Subjects were invited via an email listserv
- 3,651 participants
- Randomized treatments

- Expects higher participation from people who received the newsletter or Facebook invitation (the esteem conditions) than from the info-only group
- IV: treatment group
- DV: participation, measured in 2 ways
  - Intention to participate
  - Actual attendance (by use of a raffle ticket)
- HA: the difference in participation between the Info-Only group and the Newsletter/Facebook groups should be greater than 0
- Use a difference-of-means test to test the difference
  - Look for a small p-value on the difference
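The difference-of-means test above can be sketched numerically. All counts below are invented for illustration (they are NOT McClendon's actual data); since participation is binary, the difference of means is a difference of proportions, tested here with a standard pooled two-sample z-test.

```python
import math
from statistics import NormalDist

# Hypothetical counts: group sizes and participant counts are invented.
n_info, part_info = 1200, 21   # info-only condition
n_fb, part_fb = 1200, 36       # an esteem condition (e.g., Facebook)

p_info = part_info / n_info
p_fb = part_fb / n_fb
diff = p_fb - p_info           # difference of means (proportions)

# Pooled standard error for the difference of two proportions.
p_pool = (part_info + part_fb) / (n_info + n_fb)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_info + 1 / n_fb))

# One-sided test of HA: difference > 0. A small p-value supports HA.
z = diff / se
p_value = 1 - NormalDist().cdf(z)
print(f"diff = {diff:.4f}, z = {z:.2f}, p = {p_value:.4f}")
```

A p-value well below 0.05 is the "small p-value on the difference" the lecture says to look for.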

Results
- "Similarly, the Facebook email induced:"
  - A 2.40 percentage-point increase in intended participation
  - A 1.24 percentage-point increase in actual participation
- The difference in participation rates was small, but the actual number of people who attended went up by about three-quarters
  - **"The effect of assignment to the newsletter treatment therefore represented a 76% increase in actual participation relative to the info-only condition, resulting in 37 people participating from that treatment group"
  - **"The effect of assignment to the Facebook treatment likewise represented a 71% increase in actual participation (to 36 people from that group), relative to the info-only condition"
- How do we know it's due to esteem? Because people's stated reason for going was: "I went because people in my community would think highly of people who attend"

Key Points
- Treatment was randomly assigned → a good approximation of the counterfactual
  - What if people assigned to info-only had instead been assigned to an esteem condition (newsletter or Facebook)?
  - What if people assigned to an esteem condition had instead been assigned to the info-only condition?
- Findings: the promise of esteem induced people to participate more...
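The reported percentage increases are internally consistent, which is worth checking: a minimal arithmetic sketch (using only the numbers quoted above) shows that 37 people at a 76% increase and 36 people at a 71% increase both imply the same info-only baseline of about 21 participants.

```python
# Back out the implied info-only baseline from each quoted effect size.
# 37 participants = baseline * 1.76  ->  baseline = 37 / 1.76
# 36 participants = baseline * 1.71  ->  baseline = 36 / 1.71
baseline_from_newsletter = 37 / 1.76
baseline_from_facebook = 36 / 1.71
print(round(baseline_from_newsletter), round(baseline_from_facebook))
```

Both calculations land on roughly 21 people, so the two quoted percentages describe increases over the same info-only group.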

