Stats for Psych Final Study Guide
Course: Statistics for Psychology
Institution: Temple University

1. Chapter 1: Intro to Stats
  1.1. Statistics
    1.1.1. Used to organize and summarize information so that the researcher can see what happened in the research study and can communicate the results to others
    1.1.2. Helps the researcher answer the questions that initiated the research by determining exactly what general conclusions are justified based on the specific results that were obtained
  1.2. Population
    1.2.1. The set of all individuals of interest in a particular study
  1.3. Sample
    1.3.1. A set of individuals selected from a population, usually intended to represent the population in a research study
  1.4. Parameter
    1.4.1. A value that describes a population
  1.5. Statistic
    1.5.1. A value that describes a sample
  1.6. Variable
    1.6.1. A characteristic or condition that changes or has different values for different individuals
  1.7. Data
    1.7.1. Measurements or observations
  1.8. Types of Variables
    1.8.1. Discrete variable
      1.8.1.1. Consists of separate, indivisible categories
      1.8.1.2. No values exist between two neighboring categories
    1.8.2. Continuous variable
      1.8.2.1. Contains an infinite number of possible values that fall between any two observed values
      1.8.2.2. Divisible into an infinite number of fractional parts
    1.8.3. Measuring Variables
      1.8.3.1. To establish relationships between variables, researchers must observe the variables and record their observations
      1.8.3.2. Uses scales of measurement
    1.8.4. Four Types of Measurement Scales
      1.8.4.1. Nominal scale
        1.8.4.1.1. An unordered set of categories identified only by name
        1.8.4.1.2. Can only determine whether two individuals are the same or different
        1.8.4.1.3. E.g., gender, class year, type of degree
      1.8.4.2. Ordinal scale
        1.8.4.2.1. An ordered set of categories
        1.8.4.2.2. Tells you the direction of difference between two individuals
        1.8.4.2.3. E.g., rank order, dessert preference
      1.8.4.3. Interval scale
        1.8.4.3.1. An ordered series of equal-sized categories
        1.8.4.3.2. Identifies the direction and magnitude of a difference
        1.8.4.3.3. E.g., temperature; a value of zero does not mean the absence of temperature
      1.8.4.4. Ratio scale
        1.8.4.4.1. An interval scale where a value of zero indicates none of the variable
        1.8.4.4.2. Identifies the direction and magnitude of differences and allows ratio comparisons of measurements
        1.8.4.4.3. E.g., height, weight
  1.9. Descriptive Statistics
    1.9.1. Statistical procedures used to summarize, organize, and simplify data
  1.10. Inferential Statistics
    1.10.1. Techniques that are used to study samples and then make generalizations about the populations from which they were selected
  1.11. Sampling Error
    1.11.1. The naturally occurring discrepancy, or error, that exists between a sample statistic and the corresponding population parameter
  1.12. Constructs
    1.12.1. Internal attributes that cannot be directly observed but are useful for describing and explaining behavior
  1.13. Operational Definition
    1.13.1. Identifies a measurement procedure for measuring an external behavior and uses the resulting measurements as a definition and a measurement of a hypothetical construct
  1.14. Experimental and Nonexperimental Methods
    1.14.1. Examine the relationship between variables by using one of the variables to define the groups and measuring the second variable to obtain scores for each group
    1.14.2. Goal of an experimental study
      1.14.2.1. To demonstrate a cause-and-effect relationship
      1.14.2.2. Participant variables: characteristics such as age, gender, and intelligence that vary from one individual to another
      1.14.2.3. Environmental variables: characteristics of the situation such as lighting, time of day, and weather conditions

2. Chapter 2: Frequency Distributions
  2.1. A method for simplifying and organizing data
  2.2. Frequency Distribution Tables
    2.2.1. Consist of at least two columns: one for the score categories (X) and one for the frequencies (f) (see the sketch below)
  2.3. Regular Frequency Distribution
    2.3.1. A frequency distribution that lists all the individual score categories
  2.4. Grouped Frequency Distribution
    2.4.1. The X column lists groups of scores (intervals) instead of individual values
  2.5. Frequency Distribution Graphs
    2.5.1. The score categories are listed on the X axis and the frequencies on the Y axis
    2.5.2. The graph should be a histogram or a polygon
  2.6. Histogram
    2.6.1. A bar is centered over each score
    2.6.2. The height corresponds to the frequency; the width extends to the real limits, so adjacent bars touch
    2.6.3. Scores on the X axis, frequencies on the Y axis
  2.7. Polygon
    2.7.1. A dot is centered above each score
    2.7.2. The height of the dot corresponds to the frequency
    2.7.3. A continuous line is drawn connecting the dots
    2.7.4. The graph is completed by drawing a line down to the X axis at each end of the distribution
  2.8. Bar Graphs
    2.8.1. Used for measurements from nominal or ordinal scales
    2.8.2. Gaps are left between adjacent bars
  2.9. Relative Frequencies
    2.9.1. Used when the exact number of individuals is not known
  2.10. Frequency Distribution Graphs (uses)
    2.10.1. Show the entire set of scores
    2.10.2. Show whether the scores are clustered or scattered
  2.11. Describing Frequency Distributions
    2.11.1. Three characteristics
      2.11.1.1. Central tendency
        2.11.1.1.1. Measures where the center of the distribution is located
      2.11.1.2. Variability
        2.11.1.2.1. Measures the degree to which the scores are spread over a wide range or are clustered together
      2.11.1.3. Shape
        2.11.1.3.1. Symmetrical or skewed; the sections where the scores taper off are called the tails of the distribution
        2.11.1.3.2. Positively and negatively skewed distributions
          2.11.1.3.2.1. Positively skewed: the scores tend to pile up on the left side of the distribution with the tail tapering off to the right
          2.11.1.3.2.2. Negatively skewed: the scores tend to pile up on the right side and the tail points to the left
  2.12. Stem and Leaf Displays
    2.12.1. Provide an efficient method for obtaining and displaying a frequency distribution
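A minimal Python sketch (not part of the original guide) that builds a regular frequency distribution table, with a relative-frequency column, from a set of made-up quiz scores:

    from collections import Counter

    scores = [8, 9, 8, 7, 10, 9, 8, 6, 9, 8, 7, 10]   # made-up quiz scores
    freq = Counter(scores)                            # f = frequency of each score category
    n = len(scores)                                   # total number of scores (N)

    print("X\tf\trel. f")
    for x in sorted(freq, reverse=True):              # categories listed from highest to lowest
        print(f"{x}\t{freq[x]}\t{freq[x] / n:.2f}")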

3. Chapter 3: Central Tendency
  3.1. Central Tendency
    3.1.1. A statistical measure that determines a single value that accurately describes the center of the distribution and represents the entire distribution of scores
    3.1.2. Goal: to identify the single value that best represents the entire data set; the average or typical value
    3.1.3. Three methods for determining central tendency (see the sketch below)
      3.1.3.1. Mean: the arithmetic average
      3.1.3.2. Median: the middle score when the scores are listed in order
      3.1.3.3. Mode: the most frequently occurring score
    3.1.4. Central tendency and the shape of the distribution
      3.1.4.1. In a symmetrical distribution, the mean and median will always be equal
    3.1.5. Selecting a measure of central tendency
      3.1.5.1. Extreme scores / skewed distributions
        3.1.5.1.1. The mean is not appropriate; the median is
      3.1.5.2. Ordinal data
        3.1.5.2.1. The mean is not appropriate; the median is
      3.1.5.3. When to use the mode
        3.1.5.3.1. When evaluating nominal data
        3.1.5.3.2. Useful in describing discrete variables
        3.1.5.3.3. Gives an indication of the shape of the distribution
    3.1.6. Central tendency and the shape of the distribution
      3.1.6.1. Symmetrical distribution
        3.1.6.1.1. The median is exactly at the center
        3.1.6.1.2. The mean and mode are also exactly at the center
      3.1.6.2. Skewed distribution
        3.1.6.2.1. Positively skewed distribution
          3.1.6.2.1.1. From smallest to largest: mode, median, mean
        3.1.6.2.2. Negatively skewed distribution
          3.1.6.2.2.1. From smallest to largest: mean, median, mode
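As a quick illustration (not from the original guide), the following minimal Python sketch computes all three measures for a small made-up data set; note how the single extreme score pulls the mean above the median.

    import statistics

    scores = [2, 3, 3, 4, 5, 5, 5, 12]            # made-up scores; 12 is an extreme score

    print("mean  :", statistics.mean(scores))     # arithmetic average (4.875, pulled up by 12)
    print("median:", statistics.median(scores))   # middle value (4.5), less affected by the extreme score
    print("mode  :", statistics.mode(scores))     # most frequently occurring score (5)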

4. Chapter 4: Variability
  4.1. Variability
    4.1.1. Goal: to obtain a measure of how spread out the scores are in a distribution
    4.1.2. Serves as a descriptive measure and an important component of most inferential statistics
      4.1.2.1. In descriptive statistics, variability measures the degree to which the scores are spread out or clustered together in a distribution
      4.1.2.2. In inferential statistics, variability provides a measure of how accurately any individual score represents the entire population
  4.2. Central Tendency and Variability
    4.2.1. Central tendency describes the central point of the distribution, and variability describes how the scores are scattered around that central point
  4.3. Range
    4.3.1. The distance covered by the scores in a distribution
  4.4. Standard Deviation and Variance
    4.4.1. Deviation
      4.4.1.1. Distance from the mean: deviation = X - mean
    4.4.2. Variance
      4.4.2.1. The mean of the squared deviations
      4.4.2.2. The average squared distance from the mean
    4.4.3. Standard deviation
      4.4.3.1. The square root of the variance; provides a measure of the standard, or average, distance from the mean
      4.4.3.2. Compute the deviation for each score, square each deviation, compute the mean of the squared deviations (the variance), and take the square root of the variance (see the sketch below)
  4.5. Properties of the Standard Deviation
    4.5.1. If a constant is added to every score in a distribution, the standard deviation will not change
    4.5.2. In a roughly normal distribution, about 68% of the scores fall within one standard deviation of the mean and about 95% fall within two standard deviations
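A minimal Python sketch (not from the original guide) that follows the steps in 4.4.3.2 for a small set of made-up scores:

    import math

    scores = [1, 9, 5, 8, 7]                       # made-up scores
    mean = sum(scores) / len(scores)               # the mean (6)
    deviations = [x - mean for x in scores]        # step 1: deviation = X - mean
    squared = [d ** 2 for d in deviations]         # step 2: square each deviation
    variance = sum(squared) / len(scores)          # step 3: mean of the squared deviations (population variance = 8)
    sd = math.sqrt(variance)                       # step 4: square root of the variance (about 2.83)

    print(mean, variance, round(sd, 2))
    # statistics.pvariance(scores) and statistics.pstdev(scores) return the same population values.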

5. Chapter 5: Z-Scores
  5.1. Purposes of z-scores
    5.1.1. Used to identify and describe the exact location of each score in a distribution
    5.1.2. A z-score specifies the precise location of each X value within a distribution
    5.1.3. The sign of the z-score (+ or -)
      5.1.3.1. Signifies whether the score is above the mean (+) or below the mean (-)
    5.1.4. Numerical value
      5.1.4.1. Specifies the distance from the mean by counting the number of standard deviations between the score and the mean
  5.2. The z-score formula
    5.2.1. z = (X - mean) / standard deviation (see the sketch below)
  5.3. Determining a raw score from a z-score
    5.3.1. X = mean + (z * SD)
  5.4. Probability, T Scores, and Standard Scores
    5.4.1. The unit normal table
      5.4.1.1. Column A lists z-score values
      5.4.1.2. Columns B and C list the proportions in the body and in the tail of the distribution
      5.4.1.3. Column D lists the proportion between the mean and the z-score location
  5.5. Within-Group Norms
    5.5.1. Compare an individual against his or her peers
      5.5.1.1. Percentile rank
        5.5.1.1.1. The percentage of the sample that earned scores equal to or lower than the score obtained by the individual
      5.5.1.2. z-scores
      5.5.1.3. T scores
        5.5.1.3.1. Transform z-scores so that the mean corresponds to T = 50 and the standard deviation corresponds to 10 T-score units
        5.5.1.3.2. Formula: T = 10z + 50
        5.5.1.3.3. Reverse formula: z = (T - 50) / 10
      5.5.1.4. Standard scores
        5.5.1.4.1. Converting standard scores to z-scores
          5.5.1.4.1.1. z = (SS - 100) / 15
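A minimal Python sketch (not from the original guide) of the Chapter 5 formulas, assuming a made-up population with mean 100 and SD 15; NormalDist plays the role of the unit normal table:

    from statistics import NormalDist

    mu, sigma = 100, 15                  # assumed population parameters (made-up)
    x = 130

    z = (x - mu) / sigma                 # z = (X - mean) / SD               -> 2.0
    x_back = mu + z * sigma              # X = mean + (z * SD)               -> 130 (recovers the raw score)
    t = 10 * z + 50                      # T = 10z + 50                      -> 70
    body = NormalDist().cdf(z)           # proportion in the body (column B) -> about 0.9772
    tail = 1 - body                      # proportion in the tail (column C) -> about 0.0228

    print(z, x_back, t, round(body, 4), round(tail, 4))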

6. Chapter 7: Probability and Samples: The Distribution of Sample Means
  6.1. Sampling Error
    6.1.1. The natural discrepancy, or amount of error, between a sample statistic and its corresponding population parameter
  6.2. Characteristics of the Distribution of Sample Means
    6.2.1. The sample means should pile up around the population mean
    6.2.2. The pile of sample means should tend to form a normal-shaped distribution
    6.2.3. The larger the sample size, the closer the sample means should be to the population mean
  6.3. The Mean of the Distribution of Sample Means
    6.3.1. The average value of all the sample means is exactly equal to the value of the population mean
      6.3.1.1. This mean value is called the expected value of M
  6.4. The Standard Error of M
    6.4.1. The standard deviation of the distribution of sample means
    6.4.2. Provides a measure of how much distance is expected, on average, between a sample mean and the population mean (see the sketch below)
    6.4.3. The magnitude of the standard error is determined by two factors
      6.4.3.1. The size of the sample
        6.4.3.1.1. The law of large numbers states that the larger the sample size, the more probable it is that the sample mean will be close to the population mean
      6.4.3.2. The standard deviation of the population from which the sample is selected
        6.4.3.2.1. There is an inverse relationship between the sample size and the standard error
    6.4.4. Probability and the distribution of sample means
      6.4.4.1. The primary use of the distribution of sample means is to find the probability associated with any sample
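The guide does not write the formula out, but the standard definition consistent with 6.4 is standard error = population SD divided by the square root of n. A minimal Python sketch with a made-up population SD shows how the standard error shrinks as the sample size grows:

    import math

    sigma = 20                               # made-up population standard deviation
    for n in (4, 25, 100):
        se = sigma / math.sqrt(n)            # standard error of M = sigma / sqrt(n)
        print(f"n = {n:3d} -> standard error = {se}")
    # Prints 10.0, 4.0, 2.0: larger samples put M closer to the population mean on average.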

7. Sampling Techniques
  7.1. Sampling
    7.1.1. The method used to select participants from a population
  7.2. Determining the Sample
    7.2.1. Define the population of interest: who, what, where, and when
    7.2.2. Identify the relevant characteristics for sampling
    7.2.3. Find a group that is accessible and has the needed characteristics
    7.2.4. Attempt to recruit participants for the study
  7.3. Sampling Methods
    7.3.1. Probability sampling
      7.3.1.1. The probability (chance) of any member of the population being sampled can be defined (two of these methods are illustrated in the sketch below)
        7.3.1.1.1. Simple random sampling
          7.3.1.1.1.1. Every member of the population has an equal chance of being selected
          7.3.1.1.1.2. Uses random selection
          7.3.1.1.1.3. Requires a full list of the population
        7.3.1.1.2. Systematic random sampling
          7.3.1.1.2.1. In a given population, every nth individual is selected to participate
          7.3.1.1.2.2. Performed by estimating the needed sample size and dividing the number of names on the list by the estimated sample size
          7.3.1.1.2.3. The order of the list might matter
        7.3.1.1.3. Stratified random sampling
          7.3.1.1.3.1. The population is divided into subgroups (strata), and individuals are then randomly selected from each subgroup
          7.3.1.1.3.2. Can use proportional or disproportional sampling
        7.3.1.1.4. Cluster sampling
          7.3.1.1.4.1. Uses naturally occurring groups (clusters) and randomly samples from those clusters
          7.3.1.1.4.2. Helpful when a full list of the population is not available but a list of clusters is
          7.3.1.1.4.3. A multistage approach is common
    7.3.2. Nonprobability sampling
      7.3.2.1. The probability of selection is not known; selection is based on other factors
        7.3.2.1.1. Haphazard / convenience sampling
          7.3.2.1.1.1. Select people based on availability
          7.3.2.1.1.2. Accidental sample: take them where you find them
          7.3.2.1.1.3. Often reliant on volunteers
        7.3.2.1.2. Purposive sampling
          7.3.2.1.2.1. Obtain a sample of people based on some predetermined criterion
          7.3.2.1.2.2. The goal is not necessarily generalization, but the sample can present interesting or important findings
        7.3.2.1.3. Quota sampling
          7.3.2.1.3.1. Choose the sample from pre-specified strata and choose individuals based on population-specific proportions
          7.3.2.1.3.2. Similar to stratified random sampling, but not random
  7.4. Sample Size
    7.4.1. Sample size is directly related to the type of research you are conducting
  7.5. Accessibility
    7.5.1. Accessibility to the sample or population is an important factor
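A minimal Python sketch (not from the original guide) of simple random sampling and systematic random sampling, using a made-up population list of 100 ID numbers:

    import random

    population = list(range(1, 101))        # made-up full list of the population
    sample_size = 10

    # Simple random sampling: every member has an equal chance of being selected.
    simple = random.sample(population, sample_size)

    # Systematic random sampling: divide the list size by the needed sample size to get
    # the interval, pick a random starting point, then take every nth individual.
    interval = len(population) // sample_size
    start = random.randrange(interval)
    systematic = population[start::interval]

    print(sorted(simple))
    print(systematic)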

8. Chapter 8: Intro to Hypothesis Testing
  8.1. Hypothesis Testing
    8.1.1. A statistical method that uses sample data to evaluate a hypothesis about a population
    8.1.2. General goal: to rule out chance as a plausible explanation for the results from a research study
    8.1.3. Steps (see the sketch below)
      8.1.3.1. State the hypotheses about the population
        8.1.3.1.1. Null hypothesis: states that there is no change in the general population before and after an intervention
        8.1.3.1.2. Alternative hypothesis: states that there is a change in the general population following an intervention
      8.1.3.2. Use the hypothesis to predict the characteristics the sample should have
        8.1.3.2.1. The alpha level establishes a cutoff for making a decision about the null hypothesis and determines the risk of a Type I error
        8.1.3.2.2. Critical region: consists of outcomes that are very unlikely to occur if the null hypothesis is true
      8.1.3.3. Obtain a sample from the population
        8.1.3.3.1. Compare the sample mean with the null hypothesis
      8.1.3.4. Compare the data with the hypothesis prediction
        8.1.3.4.1. If the test statistic falls in the critical region, we conclude that the difference is significant, or that the treatment has a significant effect
        8.1.3.4.2. If the mean difference is not in the critical region, we conclude that the evidence from the sample is not sufficient, and the decision is to fail to reject the null hypothesis
    8.1.4. If the individuals in the sample are noticeably different from the individuals in the entire population, we have evidence that the treatment has an effect
    8.1.5. However, the difference could simply be sampling error
    8.1.6. The purpose is to decide between two explanations
      8.1.6.1. The difference between the sample and the population can be explained by sampling error
      8.1.6.2. The difference between the sample and the population is too large to be explained by sampling error
  8.2. Uncertainty and Errors in Hypothesis Testing
    8.2.1. Hypothesis testing uses limited information as the basis for reaching a general conclusion
  8.3. Type I Errors
    8.3.1. Occur when a researcher rejects a null hypothesis that is actually true
    8.3.2. Occur when a researcher unknowingly obtains an extreme, nonrepresentative sample
    8.3.3. The alpha level for a hypothesis test is the probability that the test will lead to a Type I error
  8.4. Type II Errors
    8.4.1. Occur when a researcher fails to reject a null hypothesis that is actually false
    8.4.2. Occur when the sample mean is not in the critical region even though the treatment has an effect on the sample
    8.4.3. The research data do not show the results that the researcher hoped to obtain
    8.4.4. An exact probability for this error cannot be determined
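A minimal Python sketch (not from the original guide) of the four steps in 8.1.3, using a one-sample z test with made-up numbers: an assumed population mean of 80 and SD of 12, a treated sample of n = 36 with M = 85, and a two-tailed alpha of .05.

    from statistics import NormalDist

    mu, sigma, n, M, alpha = 80, 12, 36, 85, 0.05

    # Step 1: H0 says the treatment has no effect (the population mean is still 80).
    # Step 2: locate the critical region for a two-tailed test at the chosen alpha level.
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)     # about 1.96
    # Step 3: compute the test statistic: z = (M - mu) / standard error of M.
    z = (M - mu) / (sigma / n ** 0.5)                # 2.5
    # Step 4: make a decision.
    decision = "reject H0" if abs(z) > z_crit else "fail to reject H0"
    print(f"z = {z:.2f}, critical values = +/-{z_crit:.2f} -> {decision}")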

9. Chapter 10: The t Test for Two Independent Samples
  9.1. Independent-Measures Designs
    9.1.1. Evaluate the mean difference between two populations using the data from two separate samples
    9.1.2. Independent measures = between-subjects design
    9.1.3. Used to test for mean differences between two distinct populations or between two different treatment conditions
    9.1.4. Used when the researcher has no prior knowledge about the two populations being compared
    9.1.5. The population means and standard deviations are all unknown
  9.2. Actual Mean Differences vs. Sampling Error
    9.2.1. The goal is to determine whether the sample mean difference obtained in a research study indicates an actual mean difference between the two populations or whether the obtained difference is simply the result of sampling error
  9.3. Steps in the Independent-Samples t Test (see the combined sketch below)
    9.3.1. State the hypotheses and select an alpha level
    9.3.2. Locate the critical region
    9.3.3. Compute the test statistic
      9.3.3.1. Formula: t = (M1 - M2) / s(M1-M2), where s(M1-M2) is the estimated standard error of the mean difference, i.e., the amount of error expected when a sample mean difference is used to represent a population mean difference
    9.3.4. Make a decision
  9.4. Parametric Assumptions
    9.4.1. Independent observations
    9.4.2. Normality: the populations from which the samples are selected are normally distributed
    9.4.3. Homogeneity of variance: the populations from which the samples are selected have equal variances
  9.5. Matched-Subjects Design / Related-Samples t Test
    9.5.1. Each individual in one sample is matched with an individual in the other sample
    9.5.2. Matching is accomplished by selecting pairs of subjects so that the two subjects in each pair have identical scores on the variable that is being used for matching
    9.5.3. A difference score is computed for each matched pair of individuals

10. Chapter 11: The t Test for Two Related Samples
  10.1. t Test for Two Related Samples
    10.1.1. Allows researchers to evaluate the mean difference between two treatment conditions using the data from a single sample
    10.1.2. A single group of individuals is obtained, and each individual is measured in both of the treatment conditions being compared
    10.1.3. The data consist of two scores for each individual
    10.1.4. Uses the same sample for each test
  10.2. Hypothesis Test
    10.2.1. Compute a difference score for each individual
    10.2.2. Null hypothesis: the mean difference for the general population is zero
    10.2.3. Alternative hypothesis: there is a treatment effect that causes the scores in one treatment condition to be systematically higher or lower than the scores in the other condition
    10.2.4. Steps
      10.2.4.1. State the hypotheses and select the alpha level
      10.2.4.2. Locate the critical region
      10.2.4.3. Calculate the t statistic
      10.2.4.4. Make a decision

11. Chapter 12: ANOVA (Independent Measures)
  11.1. ANOVA
    11.1.1. A hypothesis-testing procedure that is used to evaluate mean differences between three or more treatments
  11.2. Parametric Assumptions
    11.2.1. The observations within each sample must be independent
    11.2.2. The populations from which the samples are selected must be normal
    11.2.3. The populations from which the samples are selected must have equal variances
  11.3. Post Hoc Tests
    11.3.1. Additional hypothesis tests that are done after an ANOVA to determine exactly which mean differences are significant and which are not

12. Repeated-Measures ANOVA
  12.1. Overview of repeated-measures ANOVA
    12.1.1. One group of individuals participates in all of the different treatment conditions
  12.2. When this test is used: ...
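A minimal Python sketch (not from the original guide) showing how the three tests from Chapters 10-12 are typically run in practice, assuming SciPy is available; all scores are made up and only illustrate the calls:

    from scipy import stats

    # Chapter 10: independent-measures t test (two separate samples).
    treatment = [12, 14, 11, 15, 13, 16]
    control = [10, 9, 12, 8, 11, 10]
    t_ind, p_ind = stats.ttest_ind(treatment, control)

    # Chapter 11: related-samples t test (the same individuals measured in both conditions).
    before = [20, 23, 19, 25, 22]
    after = [24, 26, 20, 29, 25]
    t_rel, p_rel = stats.ttest_rel(before, after)

    # Chapter 12: one-way ANOVA for three or more independent treatment conditions.
    group_a, group_b, group_c = [4, 5, 6, 5], [7, 8, 6, 9], [10, 9, 11, 12]
    f_val, p_anova = stats.f_oneway(group_a, group_b, group_c)

    print(f"independent t = {t_ind:.2f}, p = {p_ind:.3f}")
    print(f"related t     = {t_rel:.2f}, p = {p_rel:.3f}")
    print(f"ANOVA F       = {f_val:.2f}, p = {p_anova:.3f}")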
