Single System Research Designs (SSRD) Notes

Author: Cris Hernandez
Course: Research and Evaluation Methods II
Institution: The University of Texas at Arlington

Experimental Designs vs Single System Designs

Experimental: Consists of a sample of multiple cases. There are several types:

• Experimental
  o Classic experimental design (pretest-posttest control group design)
  o Posttest-only control group design
  o Solomon four-group design
• Quasi-Experimental
  o Time-series
  o Cross-sectional
  o Longitudinal
  o Non-equivalent comparison groups
• Pre-Experimental
  o Pilot studies
  o One-shot case study
  o One-group pretest-posttest design
  o Posttest-only design with nonequivalent groups

Single System: Consists of studying a single case (not multiple cases)

Single-System Research Design (SSRD) Basics
• Study a single case
• Apply the logic of the time-series design
• Internal validity is supported by the unlikelihood of repeated coincidences
• Obtain repeated measures of a dependent variable
• Require a baseline of the outcome behavior
• Compare the baseline measure to a post-intervention measure of the same outcome

Potential Uses of Single-System Designs
• Assess and monitor change
• Evaluate whether change has occurred
• Determine whether a change in intervention is needed
• Determine whether the intervention caused the observed change
• Compare the relative effectiveness of interventions

Single System Designs in Social Work
• Originally used in behavior modification studies
• Appeared in the social work literature in the 1960s
  o Staats & Butterfield (1965); Stuart (1967)
  o Advocated by those who wanted to integrate research and practice
• Grounded in quantitative, positivistic research
  o Replication of single-case studies supports the generalizability of findings

Single System Sample Size and Unit of Analysis

Sample Size
• In experimental designs, the sample size may appear as N = 30 or N = 30,000
• In single system designs, N = 1

Unit of Analysis
• Single-subject (individual)
• Single-system (group, family, community)

Key Characteristics of Single-System Designs
• Different phases
  o Baseline
  o Intervention
  o Follow-up
• Comparison of baseline and intervention phases
• Change the intervention, as needed

Questions in SSRD
• Evaluation: Did the client system improve during the course of the intervention?
• Experimentation: Did the client system improve because of the intervention?
  o Here, the "because" indicates causality

Planning
• Planning a single-case experiment
  o Identify the target problem
  o Identify desired goals (short, medium, and long term)
  o Define the problem and goals in operational terms
• Triangulation
  o Use multiple measurement strategies
  o Agreement among measures indicates valid and reliable outcomes

Phases
• Period of time during which distinctive evaluation activity occurs



• "A" represents baseline phases
• "B" through "Z" represent intervention phases
  o "B," "BC," etc. represent multiple interventions (see p. 266, Figure 11.1, for a visual reference)
  o "B1," "B2," "B3," etc. represent changes in intervention intensity
• Length of phases
  o Long enough to obtain a clear, representative, and stable picture of the target

  o Adjacent phases should ideally be of equal length

Baseline Phase
• Period of time during which no formal intervention is implemented
• Usually implemented first, prior to the intervention

Intervention Phase
• Period of time during which formal, planned, systematic practitioner actions designed to change a target take place
• One intervention or a combination of interventions is implemented
• Should be related clearly to goals
• Should be specified clearly

Follow-Up Phase
• Period of time after completion of an intervention during which maintenance of change is monitored and, perhaps, reinforced

Issues in Single Case Designs
• Operationally defining a problem: What is actually going to be measured?
  o Use precise and observable terms; can we see what you are trying to measure?
  o For example, depression: we can't really see depression
  o Indirect indicators of depression: excessive sleep, frequent crying, isolation, mood swings, etc.
  o The indicators can be measured, but they must be included in your definition of "depression"
• Triangulation
  o Use multiple measurement strategies (self-reporting, observation, journaling, etc.)
  o As a rule of thumb, 2-3 indicators are considered appropriate

  o Agreement among the multiple measures indicates valid and reliable outcomes

Internal Validity (Causality)
• Did X cause Y?
• The confidence with which the researcher can assert that an observed change was caused by a prior intervention, event, phenomenon, etc.
• Can we really say that implementing deep breathing exercises changed an individual's self-report of stress level?
• Be careful with causality: instead of saying X caused Y, you might say that implementing X may have supported a change in Y
• Stay away from "absolutes" to be safe

Criteria for Inferring Causality
• How can we be more confident in inferring causality?
  o Temporal ordering: the intervention comes before the change in behavior (think of an arrow)
  o The effect must follow the cause within a reasonable amount of time
  o The effect must covary with the cause
  o The effect must be a plausible result of the cause
  o Alternative explanations for the cause-effect relationship must be ruled out
  o When these criteria are met, we can reasonably assume a cause-effect relationship

Potential Threats to Causal Inferences

Internal Validity: Was the change caused by the intervention? Can alternative explanations account for our results? (This goes back to causality: can we really say our intervention is what caused the change in behavior?)
• Threats to Internal Validity:
  o History: unaccounted-for events that may affect the dependent variable during the course of the research (e.g., an earthquake, a hurricane, a marriage)
  o Maturation: naturally occurring mental or physical changes in participants over the course of the study (e.g., child growth, stress diminishing)
  o Testing: the testing experience affects the outcome variable (e.g., the pretest affecting posttest scores)
  o Instrumentation: measurement bias that affects outcomes (e.g., an instrument fails to detect depression consistently)
  o Statistical regression to the mean: the tendency of extreme scores to move toward the mean score upon retesting (e.g., severe behaviors fall back toward average)

  o Dropout/experimental mortality: losing participants before study completion (e.g., the most severely depressed participants drop out of the depression study)

Construct Validity: Did you implement your intervention and measure your outcomes accurately?
• Threats to Construct Validity:
  o Mono-operation/method bias: using only one measure of change
  o Hypothesis guessing: the patient guesses at the practitioner's expectations
  o Evaluation apprehension: the patient is nervous about being evaluated
  o Practitioner expectations: the patient performs according to the practitioner's expectations
  o Interaction of interventions: multiple interventions obscure the causal relationship

Generalizability (External Validity): Can your results be generalized?
• The degree to which an intervention works with multiple individuals
• For example: you have 5 clients, all of whom are depressed. Does cognitive behavioral therapy reduce depression among just one, three, or all five?
• For example: you have a great intervention that worked at Agency A, but will it work at Agency B?
• Threats to Generalizability:
  o Different/unrepresentative settings and conditions
  o Practitioner effect: different practitioners, same intervention, different outcomes
  o Different outcome variables: each practitioner/client conceptualizes the outcome differently
  o History: extraneous events affect intervention effects
  o Measurement differences
  o Client differences
  o Testing bias: the client is "primed" by testing to respond to the intervention
  o Reactive effects: the client changes based on participation in the evaluation

How to Enhance Generalizability
• Direct replication
  o Same intervention, same practitioner
• Clinical replication
  o Multiple, distinct interventions; same practitioner; multiple clients with similar problems
• Systematic replication
  o Same intervention; different practitioners, settings, and problems
  o 80% similarity in outcomes suggests generalizability
• Probabilities
  o Statistically significant evidence that the probability of success is greater than the probability of failure (see the sketch below)
• Meta-analysis
  o Analyzing multiple studies to compare findings
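The probabilities point can be illustrated with a one-sided sign (binomial) test across replications: are successful replications more frequent than chance alone would predict? The sketch below is a minimal Python illustration; the replication counts and the choice of an exact binomial test are assumptions, not part of the notes.

```python
# Minimal sketch of a one-sided binomial (sign) test across replications:
# is the probability of a successful replication greater than the probability
# of failure (i.e., greater than 0.5)? The counts below are hypothetical.
from math import comb

def prob_success_exceeds_failure(successes: int, trials: int, alpha: float = 0.05) -> bool:
    """One-sided exact binomial test of H0: P(success) = 0.5 vs H1: P(success) > 0.5."""
    # P(X >= successes) under H0, where X ~ Binomial(trials, 0.5)
    p_value = sum(comb(trials, k) for k in range(successes, trials + 1)) / 2 ** trials
    print(f"{successes}/{trials} successful replications, one-sided p = {p_value:.4f}")
    return p_value < alpha

# e.g., 9 of 10 systematic replications improved the target behavior
prob_success_exceeds_failure(9, 10)   # p ≈ 0.0107, so success appears more probable than failure
```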

Types of Single System Designs
• Pre-designs
• Experimental designs

Case Studies/Pre-Designs
• A
  o e.g., monitoring to detect whether or not a problem exists or develops
• B
  o e.g., impossible to get baseline data
• B-C
  o e.g., impossible to get baseline data and the first intervention is not sufficiently effective

Single System Baseline Phase
• Observations gathered pre-intervention
• Many observation points (5-10)
• Stable pattern in the desired outcome

Types of Baselines
• Concurrent/prospective
• Reconstructed/retrospective
  o Best for specific events
  o Should be recent if based on memories
• Combined retrospective and prospective

The B Design
• No baseline
• Assessment of the outcome measure(s) and implementation of the intervention begin simultaneously
• Repeated measures are taken while the treatment continues
• Cannot rule out alternative hypotheses (e.g., spontaneous remission, maturational factors, biological and psychological factors) as explanations for change

When Are Baselines Unnecessary?
• A prospective baseline is unwarranted in some crisis situations
• A prospective baseline is unnecessary when there is no history of the desired behavior having occurred

The A-B Design
• Requires assessment of the target problem prior to implementation of the intervention
• A phase: baseline period; no attempt to effect change
• B phase: intervention
• The baseline/intervention comparison assumes that if the intervention had not occurred, the baseline pattern would have continued unchanged (a numeric sketch of this comparison follows these lists)
• Limited in making causal inferences

Maximizing Internal Validity
• Experimental single-system designs:
  o A-B-A
  o A-B-A-B
  o B-A-B
  o A-B-C-D (multiple component)
• Recall the principle of "unlikely successive coincidences"
• Control for rival hypotheses
  o Demonstrate there was more than one coincidence in which treatment began and the client system improved, or treatment was discontinued and the client deteriorated

Single System Designs Advantages
• Participation between worker and client
• Client feedback
• Knowledge building for practice
• Resource efficient (time and money)
• Qualitative and quantitative methods available

Single System Designs Disadvantages
• Internal and external validity are weaker than in an experimental group design
• Limited applications
• Analysis of the results
  o Largely judgmental/subjective
  o Statistics may require assumptions about the data that are often not reasonable
  o May require an unfeasible number of observations in each phase
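As a complement to visual inspection, the baseline/intervention comparison in the A-B design can be summarized numerically. The sketch below is a minimal Python illustration; the scores are hypothetical and it assumes lower scores indicate improvement. It is not a prescribed analysis from the notes, which stress that interpretation remains largely judgmental.

```python
# Minimal sketch of the A-B comparison: contrast repeated baseline (A) measures
# with repeated intervention (B) measures of the same outcome.
# Hypothetical scores; assumes lower scores indicate improvement.
from statistics import mean, stdev

baseline = [7, 8, 7, 9, 8]          # A phase: no formal intervention
intervention = [6, 5, 5, 4, 3, 3]   # B phase: intervention in place

a_mean, b_mean = mean(baseline), mean(intervention)
band = 2 * stdev(baseline)          # a simple "typical range" around the baseline mean

# How many intervention points fall below the baseline's typical range?
improved = sum(1 for score in intervention if score < a_mean - band)

print(f"Baseline mean: {a_mean:.1f}, intervention mean: {b_mean:.1f}")
print(f"{improved} of {len(intervention)} intervention points fall outside the baseline range")
```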

Results

• Build a chart in Microsoft Excel displaying the results of an SSRD
• Follow this protocol (Royse, Thyer & Padgett, 2009); a scripted version appears in the sketch below:
  o Use heavier lines to indicate the vertical and horizontal axes
  o Black ink only
  o Clearly identify each data point
  o Separate phases with a dashed vertical line
  o Use abbreviations sparingly
  o If one of the data points is a "zero," elevate it slightly above the horizontal axis so that the data point does not rest on it...
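For readers who prefer to script the chart rather than build it in Excel, the sketch below applies the same protocol with Python's matplotlib, reusing the hypothetical scores from the earlier sketch. The outcome label and output file name are assumptions.

```python
# Minimal matplotlib sketch of an A-B results chart following the protocol above
# (heavier axes, black ink only, marked data points, dashed line between phases).
# The scores and the outcome label are hypothetical.
import matplotlib.pyplot as plt

baseline = [7, 8, 7, 9, 8]          # "A" phase observations
intervention = [6, 5, 5, 4, 3, 3]   # "B" phase observations
scores = baseline + intervention
sessions = list(range(1, len(scores) + 1))

fig, ax = plt.subplots()
ax.plot(sessions, scores, color="black", marker="o", linewidth=1.5)

# Dashed vertical line separating the A and B phases
ax.axvline(x=len(baseline) + 0.5, color="black", linestyle="--")

# Heavier lines for the horizontal and vertical axes
ax.spines["left"].set_linewidth(2)
ax.spines["bottom"].set_linewidth(2)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)

# Phase labels and axis labels (abbreviations used sparingly)
ax.text(len(baseline) / 2 + 0.5, max(scores) + 0.6, "A (baseline)", ha="center")
ax.text(len(baseline) + len(intervention) / 2 + 0.5, max(scores) + 0.6, "B (intervention)", ha="center")
ax.set_xlabel("Observation")
ax.set_ylabel("Self-reported stress level")

# Start the y-axis at 0; a "zero" point would be nudged just above the axis
ax.set_ylim(0, max(scores) + 1.5)

plt.savefig("ssrd_ab_chart.png", dpi=300)
```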

