Ch. 7 - Experimental Design I: Single-Factor Designs

Course: Research Methods in Psy
Institution: Baylor University

Summary

Textbook Notes for Ch. 7
Professor Latendresse ...


Description

Ch. 7 Experimental Design I: Single-Factor Designs

Introduction
• This chapter considers designs that feature a SINGLE independent variable with two or more levels.
• Ch. 8 will add independent variables, which creates factorial designs.

7.1 Single Factor – Two Levels
• There are 4 basic research designs that can be called single-factor designs.
  o Factor = independent variable
• Independent groups design = between-subjects design, IV manipulated, random assignment used to form equivalent groups.
• Matched groups design = between-subjects design, IV manipulated, matching used to form equivalent groups.
  o After matched pairs are formed, they are randomly assigned to groups.
  o Ch. 6 recall: decisions about whether to use RA or matching have to do with sample size and the need to be wary of extraneous variables that are highly correlated with the dependent variable.
• Ex post facto design = between-subjects design, IV is a subject variable, groups intrinsically not equal, possible matching to increase equivalence.
  o Subject variable → groups composed of different types of individuals (e.g., male or female, introvert or extrovert, etc.)
  o Matching used here is different from the matched groups design!
  o Random assignment is not possible, because subjects are already in one group or another by virtue of the subject variable being investigated (e.g., gender).
• Repeated-measures design = within-subjects design, IV manipulated by definition, each condition can be tested once (complete/partial counterbalancing) or more than once (reverse/block counterbalancing).
  o "Within subjects" → each participant in the study experiences each level of the IV.

Table 7.1 Attributes of Four Single-Factor Designs

Design             | Minimum Levels of IV | Between or Within? | Independent Variable Type? | Creating Equivalent Groups
Independent groups | 2                    | Between            | Manipulated                | RA
Matched groups     | 2                    | Between            | Manipulated                | Matching
Ex post facto      | 2                    | Between            | Subject                    | Matching may increase equivalence
Repeated measures  | 2                    | Within             | Manipulated                | n/a
Between-Subjects, Single-Factor Designs (p. 191)
• Not as common as one would think; most researchers like to use more complex designs.
• 3 examples – comparing independent groups, matched groups, and ex post facto designs.

Within-Subjects, Single-Factor Designs (p. 194)
• RECAP, requires: (a) fewer participants, (b) more sensitivity to small differences between means, and (c) typically counterbalancing to control for order effects.
• Famous example – the Stroop effect (p. 195-196).

7.2 Single Factor – More than 2 Levels (p. 198-209)

• Most single-factor studies use 3 or more levels and, for that reason, are called single-factor multilevel designs. This allows researchers to…
  o (1) discover nonlinear effects
  o (2) test specific alternative hypotheses and perhaps rule them out while supporting the researcher's hypothesis (falsification thinking).
• Nonlinear effects example – Yerkes and Dodson (1908)
  o Found that performance is POOR with LOW arousal, improves as arousal INCREASES, and then DECLINES when arousal becomes too HIGH.
  o If they had used only 2 levels of arousal as the IV (low and moderate)… then they might not have discovered that performance declines at high arousal.
• Alternative hypothesis example – Bransford and Johnson (1972)
  o Gave participants a paragraph and assigned them to 3 groups – no topic, topic before, topic after – and tested how well they understood the passage to evaluate memory.
  o Results showed that providing a framework helps memory, but ONLY IF the framework is provided before the material is read (ruled out the hypothesis that "giving a framework helps regardless of when it's given").
• Multilevel designs include BOTH between- and within-subjects designs of the same 4 types → independent groups, matched groups, ex post facto, and repeated-measures designs.
  o Going beyond two levels makes all the counterbalancing options available (a short sketch of these options follows below).
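A minimal Python sketch of what "all the counterbalancing options" can look like for a three-level within-subjects IV. The condition labels are hypothetical, and these are common generic implementations rather than the textbook's own procedures.

```python
import itertools
import random

conditions = ["low", "moderate", "high"]  # hypothetical levels of a 3-level IV

# Complete counterbalancing: every possible order of the conditions is used.
complete_orders = list(itertools.permutations(conditions))  # 3! = 6 orders

# Partial counterbalancing (one simple variant): each participant receives a
# randomly chosen order, so only a subset of the 6 possible orders may appear.
def random_order():
    return random.sample(conditions, k=len(conditions))

# Block randomization (for testing each condition more than once): conditions
# are re-shuffled within each block, so every condition appears once per block.
def block_randomized(n_blocks=4):
    return [cond for _ in range(n_blocks)
            for cond in random.sample(conditions, k=len(conditions))]

if __name__ == "__main__":
    print("Complete counterbalancing orders:", complete_orders)
    print("One partially counterbalanced participant:", random_order())
    print("Block-randomized sequence:", block_randomized())
```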

Analyzing Data from Single-Factor Designs (p. 202)

Presenting the Data [3 choices]
1. Numbers can be presented in sentence form, an approach that might be fine for reporting the results of experimental studies with 2 or 3 levels… but makes for tedious reading as the amount of data increases.
2. Construct a table of results – with means and standard deviations for each condition presented.
   a. Preferred when data points are so numerous that a graph would be uninterpretable… or
   b. when the researcher wishes to inform the reader of the precise values of the means and standard deviations.
3. In the form of a graph – the dependent variable is always on the vertical (Y) axis, and the independent variable on the horizontal (X) axis.
   a. When there are large differences to report,
   b. (especially) if nonlinear effects occur,
   c. or if the result is an interaction between two factors.
RULE OF THUMB – NEVER present the same data in both table and graph form; clarity is key.

Types of Graphs
• If the IV is manipulated between-subjects (or is a subject variable), then use a BAR GRAPH.
  o Remember: Between-subjects = Bar graph
  o Because the levels of the IV represent separate groups of individuals, the data in the graph should reflect separate groups (separate bars in this case).
  o The top of each bar represents the mean for each condition.
  o Researchers will also place error bars at the top of each bar, reflecting SDs or CIs.
• If the IV is manipulated within-subjects, then use a LINE GRAPH.
  o Participants experience all levels of that independent variable, so the data should "connect" in a more continuous way.
  o Points usually represent the mean of each condition.
  o Error bars are typically placed on each point of the graph (see the plotting sketch below).
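A minimal matplotlib sketch of the two graph types described above. The condition labels, means, and standard deviations are made up; the point is only the bar-versus-line distinction and the error bars.

```python
import matplotlib.pyplot as plt

conditions = ["Level 1", "Level 2", "Level 3"]  # hypothetical IV levels
means = [4.2, 6.8, 5.1]                          # hypothetical condition means (DV)
sds = [0.9, 1.1, 1.0]                            # hypothetical standard deviations

fig, (ax_between, ax_within) = plt.subplots(1, 2, figsize=(9, 4))

# Between-subjects IV -> bar graph: separate bars for separate groups,
# with error bars (here SDs) on top of each bar.
ax_between.bar(conditions, means, yerr=sds, capsize=5)
ax_between.set_title("Between-subjects: bar graph")
ax_between.set_xlabel("Independent variable")
ax_between.set_ylabel("Dependent variable (mean)")

# Within-subjects IV -> line graph: points are condition means, connected
# because the same participants contribute to every level.
x = list(range(len(conditions)))
ax_within.errorbar(x, means, yerr=sds, marker="o", capsize=5)
ax_within.set_xticks(x)
ax_within.set_xticklabels(conditions)
ax_within.set_title("Within-subjects: line graph")
ax_within.set_xlabel("Independent variable")
ax_within.set_ylabel("Dependent variable (mean)")

plt.tight_layout()
plt.show()
```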


Analyzing the Data (p. 206)
• To determine whether differences found between the conditions of a single-factor design are significant or due to chance… inferential statistical analysis is required.
  o It depends on 2 types of variability. The first is variability between groups → caused by a combination of systematic and error variance.
  o Systematic variance = the result of an identifiable factor, either the variable of interest or some factor that you have failed to control adequately (such as a confound).
  o Error variance = nonsystematic variability due to individual differences between subjects in the groups and any number of random, unpredictable effects that might have occurred during the study.
  o The second is variability within each group, which results only from individual differences and other random effects (error variance).
• Inferential statistic = [variability between conditions (systematic + error)] / [variability within each condition (error)]
• IDEAL OUTCOME → variability between conditions is LARGE and variability within conditions is SMALL.
  o An inferential statistic will then test whether the researcher can reject the null hypothesis and conclude, with a certain degree of confidence, that there is a significant difference between the levels of the independent variable.
• Inferential statistics (Ch. 4 recall) = used to infer from a sample what might occur in the population of interest; can be either parametric or nonparametric tests.
  o Parametric tests = have certain assumptions (or parameters) that are required to best estimate the population… for example →
    - Assume the scores on the dependent variable are normally distributed.
    - Homogeneity of variance – the variability of each set of scores being compared ought to be similar… so if the SD in one group is significantly larger than the SD of another group → violation → use a nonparametric test.
  o Nonparametric tests = DO NOT have the same assumptions as parametric tests; can be used if violations of the parameters for the ideal statistical tests occur.
• Scales of measurement are also important to consider…
  o t-tests or a one-way ANOVA (ANalysis Of VAriance) → used when interval or ratio scales of measurement are used (and the aforementioned parameters are met).
  o Other techniques are required when nominal or ordinal scales of measurement are used. For example, a chi-square test of independence could be used with nominal data.

Statistics for Single-Factor, Two-Level Designs
• There are 2 varieties of the t-test for comparing two sets of scores →
• Independent samples t-test = involves two groups of participants that are completely independent of each other; occurs whenever we use RA to create equivalent groups, or if the variable being studied is a subject variable involving two different groups.
  o Independent groups design
  o Ex post facto design
• Dependent samples t-test (paired t-test) = used if the IV is a within-subjects factor, or if two groups of people are formed in such a way that some relationship exists between them.
  o Matched groups design
  o Repeated-measures design
• REMEMBER: the t-test is a parametric statistic, so it must meet the assumptions of normally distributed data and homogeneity of variance.
• You can determine the magnitude of the difference by calculating effect size (Cohen's d); see Ch. 4 (a short computational sketch follows below).
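To make the two t-test varieties and Cohen's d concrete, here is a minimal SciPy sketch. The two sets of scores are made up, and which test applies depends on the design, as described above; the Cohen's d formula shown is one common pooled-SD version.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two conditions of a single-factor, two-level design
group1 = np.array([12, 15, 11, 14, 13, 16, 12, 15])
group2 = np.array([10, 12,  9, 13, 11, 12, 10, 11])

# Independent samples t-test: independent groups or ex post facto designs
t_ind, p_ind = stats.ttest_ind(group1, group2)

# Dependent (paired) samples t-test: matched groups or repeated-measures designs,
# where the two sets of scores are linked pair by pair
t_rel, p_rel = stats.ttest_rel(group1, group2)

# Cohen's d (one common formula): mean difference divided by the pooled SD
pooled_sd = np.sqrt((group1.var(ddof=1) + group2.var(ddof=1)) / 2)
cohens_d = (group1.mean() - group2.mean()) / pooled_sd

print(f"Independent samples: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"Paired samples:      t = {t_rel:.2f}, p = {p_rel:.3f}")
print(f"Cohen's d = {cohens_d:.2f}")
```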


Statistics for Single-Factor, Multilevel Designs
• The problem with completing multiple t-tests is that it INCREASES the risk of making a Type I error – that is, the more t-tests you calculate, the greater the chance that one of them will yield significant differences between conditions by accident.
• The chance of making at least ONE Type I error when doing multiple t-tests can be estimated with the formula 1 – (1 – α)^c, where c = the number of comparisons being made.
  o Example: if all possible t-tests are completed in a study with 5 levels (10 pairwise comparisons), there is a very good chance (4 in 10) of making at least one Type I error →
  o 1 – (1 – 0.05)^10 = 1 – (0.95)^10 ≈ 1 – 0.60 = 0.40, or 40%
• TO AVOID THE PROBLEM → researchers use a procedure called a one-way (one-factor) analysis of variance, or one-way ANOVA (ANalysis Of VAriance).
  o "One" in one-way means one independent variable.
  o Tests for the presence of an OVERALL significant effect that could exist somewhere between the levels of the independent variable.
  o Hence, in a study with three levels → the null hypothesis is "level 1 = level 2 = level 3."
• Rejecting the null hypothesis DOES NOT identify which condition differs from which.
• Determining precisely which condition is significantly different from another requires subsequent testing, or post hoc (after the fact) analysis.
• Selecting which post hoc analysis to use depends on sample size and how conservative the researcher wishes to be when testing for differences between conditions.
  o Tukey's HSD test (Honestly Significant Difference) = one of the most popular post hoc choices, but requires equal numbers of participants in each level of the IV.
  o Bonferroni correction = a more conservative test for comparisons of groups with unequal sample sizes per condition.
  o SPSS provides many options for different kinds of tests.
• If the ANOVA does not find any significance, subsequent testing is normally not done… unless specific predictions about particular pairs of levels of the independent variable have been made ahead of time. These are called "planned comparisons."
• ANOVA yields an F score, or F ratio – this examines the extent to which the obtained mean differences could be due to chance or are the result of some other factor (presumably the independent variable).
  o The F ratio is typically portrayed in a table called an ANOVA source table.
• One-way ANOVA for independent groups
  o Multilevel independent groups design
  o Multilevel ex post facto design
• One-way ANOVA for repeated measures
  o Multilevel matched groups design
  o Multilevel repeated-measures design
• Be mindful of the parametric assumptions (normal distribution + homogeneity of variance) → if either or both are violated, alternate nonparametric tests should be used.
(A short computational sketch of the Type I error formula, one-way ANOVA, and Tukey's HSD follows the placebo section below.)

7.3 Special-Purpose Control Group Designs

Placebo Control Group Designs (p. 209)
• Placebo (Latin: "I shall please") = a substance or treatment given to a participant in a form suggesting a specific effect when, in fact, the substance or treatment has no genuine effect.
• Placebo control group = participants led to believe they are receiving a particular treatment when, in fact, they aren't.
• Still include a straight control group – to yield a simple baseline measure.
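Returning to the multilevel statistics above, here is a minimal SciPy sketch of the familywise Type I error formula, a one-way ANOVA for independent groups, and Tukey's HSD as a post hoc test. The three groups of scores are made up, and the use of scipy.stats.tukey_hsd assumes a reasonably recent SciPy (1.8+).

```python
import numpy as np
from scipy import stats

alpha = 0.05

# Familywise Type I error: 1 - (1 - alpha)^c, where c = number of comparisons.
# With 5 levels, all possible pairwise t-tests means c = 10.
c = 10
familywise_error = 1 - (1 - alpha) ** c
print(f"P(at least one Type I error) with {c} t-tests: {familywise_error:.2f}")  # ~0.40

# Hypothetical data from a single-factor, three-level independent groups design
level1 = np.array([12, 15, 11, 14, 13, 16])
level2 = np.array([10, 12,  9, 13, 11, 12])
level3 = np.array([17, 19, 16, 18, 20, 17])

# One-way ANOVA: tests for an overall effect somewhere among the levels
f_stat, p_value = stats.f_oneway(level1, level2, level3)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc analysis: Tukey's HSD identifies which pairs of levels differ
# (equal n per level here, as Tukey's HSD expects)
if p_value < alpha:
    print(stats.tukey_hsd(level1, level2, level3))
```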


Wait List Control Group Designs
• Wait list control groups = often used in research designed to assess the effectiveness of a program (Ch. 11) or in studies on the effects of psychotherapy.
• In this design, the participants in the experimental group are in a program because they are experiencing a problem the program is designed to alleviate; wait list controls are also experiencing the problem.
• Criticism – it is unethical to make some patients wait longer.
• 3 strong arguments for it:
  o (1) Hindsight – you know that "a program as effective as this one ought to be available to everyone" only after it has actually been shown to be effective.
  o (2) Researchers point out that in research evaluating a new treatment or program, the comparison is not between the new treatment and no treatment – it is between the new treatment and the most favored current treatment.
  o (3) Treatments cost money, and it is certainly worthwhile to spend the bucks on the best treatment.

Yoked Control Group Designs
• Yoked control group = used when each subject in the experimental group, for one reason or another, participates for varying amounts of time or is subjected to different types of events in the study.
• Each member of the control group is then matched, or "yoked," to a member of the experimental group so that, for the groups as a whole, the time spent participating or the types of events encountered is kept constant.
