7988474098 | calculate the IQR; anything above Q3+1.5(IQR) or below Q1-1.5(IQR) is an outlier | How do you check if there are outliers? | 0 | |
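A minimal Python sketch of the 1.5×IQR rule above; the data values are hypothetical and chosen only for illustration.

```python
import numpy as np

data = np.array([2, 4, 5, 5, 6, 7, 8, 9, 25])        # hypothetical sample
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < low_fence) | (data > high_fence)]
print(low_fence, high_fence, outliers)                # values outside the fences are outliers
```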
7988474099 | median; it is resistant to skews and outliers | If a graph is skewed, should we calculate the median or the mean? Why? | 1 | |
7988474100 | mean; generally is more accurate if the data has no outliers | If a graph is roughly symmetrical, should we calculate the median or the mean? Why? | 2 | |
7988474101 | Minimum, Q1, Median, Q3, Maximum | What is in the five number summary? | 3 | |
7988474102 | variance=(standard deviation)^2 | Relationship between variance and standard deviation? | 4 | |
7988474103 | the variance is roughly the average of the squared differences between each observation and the mean | variance definition | 5 | |
7988474104 | the standard deviation is the square root of the variance | standard deviation | 6 | |
7988474105 | IQR | What should we use to measure spread if the median was calculated? | 7 | |
7988474106 | standard deviation | What should we use to measure spread if the mean was calculated? | 8 | |
7988474107 | Q3-Q1; 50% | What is the IQR? How much of the data does it represent? | 9 | |
7988474108 | 1. Type data into L1 2. Find the mean with 1-Var Stats 3. Set L2 = (L1 - mean) 4. Set L3 = (L2)^2 5. Go to 2nd STAT, over to MATH, select sum( 6. Type in L3 7. Multiply the sum by 1/(n-1) 8. Take the square root | How do you calculate standard deviation? | 10 | |
7988474109 | s = √(Σ(xᵢ - x̄)² / (n-1)) | What is the formula for standard deviation? | 11 | |
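A short Python sketch that mirrors the by-hand calculator steps above on a hypothetical data list; numpy's `ddof=1` uses the same 1/(n-1) divisor.

```python
import numpy as np

x = np.array([3.0, 7.0, 7.0, 19.0])          # hypothetical data (the calculator's L1)
mean = x.mean()                              # step 2: the mean
sq_dev = (x - mean) ** 2                     # steps 3-4: squared deviations
variance = sq_dev.sum() / (len(x) - 1)       # step 7: multiply the sum by 1/(n-1)
sd = variance ** 0.5                         # step 8: square root
print(sd, np.std(x, ddof=1))                 # both give the sample standard deviation
```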
7988474110 | Categorical: individuals can be assigned to one of several groups or categories Quantitative: takes numerical values | Categorical variables vs. Quantitative Variables | 12 | |
7988474111 | No | If a possible outlier is on the fence, is it an outlier? | 13 | |
7988474112 | Center (Mean or Median), Unusual Gaps or Outliers, Spread (Standard Deviation or IQR), Shape (Roughly Symmetric, slightly/heavily skewed left or right, bimodal, range) | Things to include when describing a distribution | 14 | |
7988474113 | Subtract the distribution mean and then divide by standard deviation. Tells us how many standard deviations from the mean an observation falls, and in what direction. | Explain how to standardize a variable. What is the purpose of standardizing a variable? | 15 | |
7988474114 | shape would be the same as the original distribution, the mean would become 0, the standard deviation would become 1 | What effect does standardizing the values have on the distribution? | 16 | |
7988474115 | a curve that (a) is on or above the horizontal axis, and (b) has exactly an area of 1 | What is a density curve? | 17 | |
7988474116 | when you know the area (percentile) and want the corresponding value: invNorm(area to the left, mean, standard deviation) | Inverse Norm | 18 | |
7988474117 | (x-mean)/standard deviation | z | 19 | |
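A sketch of standardizing and of the invNorm idea with made-up mean and standard deviation; `scipy.stats.norm` plays the role of normalcdf/invNorm.

```python
from scipy.stats import norm

mu, sigma = 100, 15                      # hypothetical mean and standard deviation
x = 130
z = (x - mu) / sigma                     # z = (x - mean) / standard deviation
area = norm.cdf(z)                       # proportion of observations below x (like normalcdf)
value = norm.ppf(0.90, mu, sigma)        # like invNorm(area, mean, SD): the 90th percentile value
print(z, area, value)
```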
7988474118 | the value with p percent of the observations less than it | pth percentile | 20 | |
7988474119 | can be used to describe the position of an individual within a distribution or to locate a specified percentile of the distribution | cumulative relative frequency graph | 21 | |
7988474120 | STAT PLOT, scatterplot, L1 and L2 (Plot 1: ON); STAT --> CALC --> 8:LinReg(a+bx). No r? --> 2nd 0 (Catalog), scroll down to DiagnosticOn | How to find and interpret the correlation coefficient r for a scatterplot | 22 | |
7988474121 | tells us the strength of a LINEAR association. -1 to 1. Not resistant to outliers | r | 23 | |
7988474122 | the proportion (percent) of the variation in the values of y that can be accounted for by the least squares regression line | r^2 | 24 | |
7988474123 | a scatterplot of the residuals against the explanatory variable. Residual plots help us assess how well a regression line fits the data. It should have NO PATTERN | residual plot | 25 | |
7988474124 | a line that describes how a response variable y changes as an explanatory variable x changes. We often use a regression line to predict the value of y for a given value of x. | regression line | 26 | |
7988474125 | residual=y-y(hat) aka observed y - predicted y | residual formula | 27 | |
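A sketch of the regression cards above on hypothetical (x, y) data; `np.polyfit` stands in for LinReg(a+bx) and `np.corrcoef` for r.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # hypothetical explanatory variable
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])          # hypothetical response variable

b, a = np.polyfit(x, y, 1)                       # slope b and intercept a of y-hat = a + b*x
y_hat = a + b * x
residuals = y - y_hat                            # residual = observed y - predicted y
r = np.corrcoef(x, y)[0, 1]                      # correlation coefficient r
print(a, b, r, r ** 2, residuals)                # plot residuals vs. x to check for NO pattern
```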
7988474126 | BINS: 1. Binary: there are only two outcomes (success and failure) 2. Independent: the trials are independent of one another 3. Number: there is a fixed number of trials 4. Success: the probability of success is the same in each trial | What method do you use to check if a distribution or probability is binomial? | 28 | |
7988474127 | BITS: 1. Binary: there are only two outcomes (success and failure) 2. Independent: the trials are independent of one another 3. Trials: there is NOT a fixed number of trials (you count trials until the first success) 4. Success: the probability of success is the same in each trial | What method do you use to check if a distribution or probability is geometric? | 29 | |
7988474128 | number of trials | n | 30 | |
7988474129 | probability of success | p | 31 | |
7988474130 | number of successes | k | 32 | |
7988474131 | (n choose k) p^k (1-p)^(n-k) | Binomial Formula for P(X=k) | 33 | |
7988474132 | binompdf(n,p,k) | Binomial Calculator Function to find P(X=k) | 34 | |
7988474133 | binomcdf(n,p,k) | Binomial Calculator Function for P(X≤k) | 35 | |
7988474134 | 1-binomcdf(n,p,k-1) | Binomial Calculator Function for P(X≥k) | 36 | |
7988474135 | np | mean of a binomial distribution | 37 | |
7988474136 | √(np(1-p)) | standard deviation of a binomial distribution | 38 | |
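A sketch of the binomial cards using `scipy.stats.binom` with hypothetical n, p, and k; the lines mirror binompdf, binomcdf, the ≥ trick, a range, and the mean/SD formulas.

```python
from scipy.stats import binom

n, p, k = 10, 0.3, 4                               # hypothetical values
print(binom.pmf(k, n, p))                          # P(X = k), like binompdf(n, p, k)
print(binom.cdf(k, n, p))                          # P(X <= k), like binomcdf(n, p, k)
print(1 - binom.cdf(k - 1, n, p))                  # P(X >= k) = 1 - binomcdf(n, p, k-1)
print(binom.cdf(7, n, p) - binom.cdf(2, n, p))     # P(3 <= X <= 7), a range of values
print(n * p, (n * p * (1 - p)) ** 0.5)             # mean np and SD sqrt(np(1-p))
```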
7988474137 | (1-p)^(k-1) x p | Geometric Formula for P(X=k) | 39 | |
7988474138 | geometpdf(p,k) | Geometric Calculator Function to find P(X=k) | 40 | |
7988474139 | geometcdf(p,k) | Geometric Calculator Function for P(X≤k) | 41 | |
7988474140 | 1-geometcdf(p,k-1) | Geometric Calculator Function for P(X≥k) | 42 | |
7988474141 | 1/p=expected number of trials until success | Mean of a geometric distribution | 43 | |
7988474142 | √((1-p)/(p²)) | Standard deviation of a geometric distribution | 44 | |
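A matching sketch for the geometric cards; `scipy.stats.geom` counts the trial on which the first success occurs, the same convention as geometpdf/geometcdf. The p and k values are hypothetical.

```python
from scipy.stats import geom

p, k = 0.2, 3                               # hypothetical success probability and trial number
print(geom.pmf(k, p))                       # P(X = k) = (1-p)^(k-1) * p, like geometpdf(p, k)
print(geom.cdf(k, p))                       # P(X <= k), like geometcdf(p, k)
print(1 - geom.cdf(k - 1, p))               # P(X >= k) = 1 - geometcdf(p, k-1)
print(1 / p, ((1 - p) / p ** 2) ** 0.5)     # mean 1/p and SD sqrt((1-p)/p^2)
```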
7988474143 | Take binomcdf(n,p,maximum) - binomcdf(n,p,minimum-1) | What do you do if the binomial probability is for a range, rather than a specific number? | 45 | |
7988474144 | type "n" on the home screen, go to MATH --> PRB --> 3: nCr, then type "k" | how do you enter n choose k into the calculator? | 46 | |
7988474145 | Measures of center (median and mean). Does NOT affect measures of spread (IQR and standard deviation) or shape. | What does adding or subtracting a constant affect? | 47 | |
7988474146 | Both measures of center (median and mean) and measures of spread (IQR and standard deviation). Shape is not affected. For variance, multiply by a² (if y = ax + b). | What does multiplying or dividing by a constant affect? | 48 | |
7988474147 | √(σ²x+σ²y) --> the variances still add for a difference, because subtracting an independent variable adds variability and spread cannot be negative (requires X and Y independent) | σ(x-y) | 49 | |
7988474148 | μx = x₁p₁ + x₂p₂ + ... + xₖpₖ (Σ xᵢpᵢ) | calculate μx by hand | 50 | |
7988474149 | Var(X) = (x₁-μx)²p₁ + (x₂-μx)²p₂ + ... (Σ (xᵢ-μx)²pᵢ) | calculate var(x) by hand | 51 | |
7988474150 | square root of variance | Standard deviation | 52 | |
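A sketch of the by-hand formulas for μx, Var(X), and the standard deviation, using a hypothetical probability distribution.

```python
import numpy as np

x = np.array([0, 1, 2, 3])                 # hypothetical values of X
p = np.array([0.1, 0.4, 0.3, 0.2])         # their probabilities (must sum to 1)

mu = np.sum(x * p)                         # mu_x = sum of x_i * p_i
var = np.sum((x - mu) ** 2 * p)            # Var(X) = sum of (x_i - mu_x)^2 * p_i
sd = var ** 0.5                            # standard deviation = sqrt(variance)
print(mu, var, sd)
```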
7988474151 | takes a fixed set of possible values with gaps between them (e.g., whole numbers) | discrete random variables | 53 | |
7988474152 | -x takes all values in an interval of numbers -can be represented by a density curve (area of 1, on or above the horizontal axis) | continuous random variables | 54 | |
7988474153 | (σx)²+(σy)², but ONLY if x and y are independent. | What is the variance of the sum of 2 random variables X and Y? | 55 | |
7988474154 | no outcomes in common | mutually exclusive | 56 | |
7988474155 | P(A)+P(B) | addition rule for mutually exclusive events P (A U B) | 57 | |
7988474156 | 1-P(A) | complement rule P(A^C) | 58 | |
7988474157 | P(A)+P(B)-P(A n B) | general addition rule (not mutually exclusive) P(A U B) | 59 | |
7988474158 | both A and B will occur | intersection P(A n B) | 60 | |
7988474159 | P(A n B) / P(B) | conditional probability P (A | B) | 61 | |
7988474160 | A and B are independent if P(A|B) = P(A) and P(B|A) = P(B) | independent events (how to check independence) | 62 | |
7988474161 | P(A) x P(B) | multiplication rule for independent events P(A n B) | 63 | |
7988474162 | P(A) x P(B|A) | general multiplication rule (non-independent events) P(A n B) | 64 | |
7988474163 | a list of possible outcomes | sample space | 65 | |
7988474164 | a description of some chance process that consists of 2 parts: a sample space S and a probability for each outcome | probability model | 66 | |
7988474165 | any collection of outcomes from some chance process, designated by a capital letter (an event is a subset of the sample space) | event | 67 | |
7988474166 | P(A) = (number of outcomes corresponding to event A)/(total number of outcomes in sample space) | What is the P(A) if all outcomes in the sample space are equally likely? | 68 | |
7988474167 | probability that an event does not occur | Complement | 69 | |
7988474168 | 1 | What is the sum of the probabilities of all possible outcomes? | 70 | |
7988474169 | P(A U B)= P(A)+P(B) | What is the probability of two mutually exclusive events? | 71 | |
7988474170 | 1. for any event A, 0≤P(A)≤1 2. P(S)=1 3. If all outcomes in the sample space are equally likely, P(A) = number of outcomes corresponding to event A / total number of outcomes in sample space 4. P(A^C) = 1-P(A) 5. If A and B are mutually exclusive, P(A U B) = P(A) + P(B) | five basic probability rules | 72 | |
7988474171 | displays the sample space for probabilities involving two events more clearly | When is a two-way table helpful | 73 | |
7988474172 | could have either event or both | In statistics, what is meant by the word "or"? | 74 | |
7988474173 | visually represents events that are not mutually exclusive (overlapping events) and their probabilities | When can a Venn Diagram be helpful? | 75 | |
7988474174 | If A and B are any two events resulting from some chance process, then the probability of A or B (or both) is P(A U B)= P(A)+P(B)-P(A n B) | What is the general addition rule for two events? | 76 | |
7988474175 | both event A and event B occur | What does the intersection of two or more events mean? | 77 | |
7988474176 | either event A or event B (or both) occurs | What does the union of two or more events mean? | 78 | |
7988474177 | If we observe more and more repetitions of any chance process, the proportion of times that a specific outcome occurs approaches a single value, which we can call the probability of that outcome | What is the law of large numbers? | 79 | |
7988474178 | is a number between 0 and 1 that describes the proportion of times the outcome would occur in a very long series of repetitions | the probability of any outcome... | 80 | |
7988474179 | A probability describes the proportion of times the outcome would occur in a very long (essentially infinite) series of repetitions of the chance process | How do you interpret a probability? | 81 | |
7988474180 | 1. Short-run regularity --> people believe probability is predictable in the short run (it is not) 2. Law of Averages --> people expect future outcomes to "balance out" past ones (e.g., a tail is "due" after several heads) | What are the two myths about randomness? | 82 | |
7988474181 | the imitation of chance behavior, based on a model that accurately reflects the situation | simulation | 83 | |
7988474182 | 1. State: What is the question of interest about some chance process? 2. Plan: Describe how to use a chance device to imitate one repetition of the process; clearly identify the outcomes and measured variables 3. Do: Perform many repetitions of the simulation 4. Conclude: Use the results to answer the question of interest | Name and describe the four steps in performing a simulation | 84 | |
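A sketch of the Do and Conclude steps for a made-up question (the probability of at least 6 heads in 10 fair coin flips), estimated by many repetitions.

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 10_000                                    # Do: perform many repetitions
flips = rng.integers(0, 2, size=(reps, 10))      # one repetition = 10 coin flips (1 = heads)
heads = flips.sum(axis=1)
estimate = np.mean(heads >= 6)                   # Conclude: proportion of repetitions with >= 6 heads
print(estimate)                                  # close to the true probability, about 0.377
```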
7988474183 | not providing a clear description of the simulation process for the reader to replicate the simulation | What are some common errors when using a table of random digits? | 85 | |
7988474185 | The part of the population from which we actually collect information. We use information from a sample to draw conclusions about the entire population | sample | 87 | |
7988474186 | In a statistical study, this is the entire group of individuals about which we want information | population | 88 | |
7988474187 | A study that uses an organized plan to choose a sample that represents some specific population. We base conclusions about the population on data from the sample. | sample survey | 89 | |
7988474188 | A sample selected by taking the members of the population that are easiest to reach; particularly prone to large bias. | convenience sample | 90 | |
7988474189 | The design of a statistical study shows ______ if it systematically favors certain outcomes. | bias | 91 | |
7988474190 | People decide whether to join a sample based on an open invitation; particularly prone to large bias. | voluntary response sample | 92 | |
7988474191 | The use of chance to select a sample; is the central principle of statistical sampling. | random sampling | 93 | |
7988474192 | every set of n individuals has an equal chance to be the sample actually selected | simple random sample (SRS) | 94 | |
7988474193 | Groups of individuals in a population that are similar in some way that might affect their responses. | strata | 95 | |
7988474194 | To select this type of sample, first classify the population into groups of similar individuals, called strata. Then choose a separate SRS from each stratum to form the full sample. | stratified random sample | 96 | |
7988474195 | To take this type of sample, first divide the population into smaller groups. Ideally, these groups should mirror the characteristics of the population. Then choose an SRS of the groups. All individuals in the chosen groups are included in the sample. | cluster sample | 97 | |
7988474196 | Drawing conclusions that go beyond the data at hand. | inference | 98 | |
7988474197 | Tells how close the estimate tends to be to the unknown parameter in repeated random sampling. | margin of error | 99 | |
7988474198 | The list from which a sample is actually chosen. | sampling frame | 100 | |
7988474199 | Occurs when some members of the population are left out of the sampling frame; a type of sampling error. | undercoverage | 101 | |
7988474200 | Occurs when a selected individual cannot be contacted or refuses to cooperate; an example of a nonsampling error. | nonresponse | 102 | |
7988474201 | The most important influence on the answers given to a survey. Confusing or leading questions can introduce strong bias, and changes in wording can greatly change a survey's outcome. Even the order in which questions are asked matters. | wording of questions | 103 | |
7988474202 | Observes individuals and measures variables of interest but does not attempt to influence the responses. | observational study | 104 | |
7988474203 | Deliberately imposes some treatment on individuals to measure their responses. | experiment | 105 | |
7988474204 | A variable that helps explain or influences changes in a response variable. | explanatory variable | 106 | |
7988474205 | A variable that measures an outcome of a study. | response variable | 107 | |
7988474206 | a variable that is not among the explanatory or response variables in a study but that may influence the response variable. | lurking variable | 108 | |
7988474207 | A specific condition applied to the individuals in an experiment. If an experiment has several explanatory variables, a treatment is a combination of specific values of these variables. | treatment | 109 | |
7988474208 | the smallest collection of individuals to which treatments are applied. | experimental unit | 110 | |
7988474209 | Experimental units that are human beings. | subjects | 111 | |
7988474210 | the explanatory variables in an experiment are often called this | factors | 112 | |
7988474211 | An important experimental design principle. Use some chance process to assign experimental units to treatments. This helps create roughly equivalent groups of experimental units by balancing the effects of lurking variables that aren't controlled on the treatment groups. | random assignment | 113 | |
7988474212 | An important experimental design principle. Use enough experimental units in each group so that any differences in the effects of the treatments can be distinguished from chance differences between the groups. | replication | 114 | |
7988474213 | An experiment in which neither the subjects nor those who interact with them and measure the response variable know which treatment a subject received. | double-blind | 115 | |
7988474214 | An experiment in which either the subjects or those who interact with them and measure the response variable, but not both, know which treatment a subject received. | single-blind | 116 | |
7988474215 | an inactive (fake) treatment | placebo | 117 | |
7988474216 | Describes the fact that some subjects respond favorably to any treatment, even an inactive one | placebo effect | 118 | |
7988474217 | A group of experimental units that are known before the experiment to be similar in some way that is expected to affect the response to the treatments. | block | 119 | |
7988474218 | Using information from a sample to draw conclusions about the larger population. Requires that the individuals taking part in a study be randomly selected from the population of interest. | inference about the population | 120 | |
7988474219 | Using the results of an experiment to conclude that the treatments caused the difference in responses. Requires a well-designed experiment in which the treatments are randomly assigned to the experimental units. | inference about cause and effect | 121 | |
7988474220 | When the treatments, the subjects, or the environment of an experiment are not realistic. Lack of realism can limit researchers' ability to apply the conclusions of an experiment to the settings of greatest interest. | lack of realism | 122 | |
7988474221 | A basic principle of data ethics. All planned studies must be approved in advance and monitored by _____________ charged with protecting the safety and well-being of the participants. | institutional review board | 123 | |
7988474222 | A basic principle of data ethics. Individuals must be informed in advance about the nature of a study and any risk of harm it may bring. Participating individuals must then consent in writing. | informed consent | 124 | |
7988474223 | a model of random events | simulation | 125 | |
7988474224 | a sample that includes the entire population | census | 126 | |
7988474225 | a number that measures a characteristic of a population | population parameter | 127 | |
7988474226 | every fifth individual, for example, is chosen | systematic sample | 128 | |
7988474227 | a sampling design where several sampling methods are combined | multistage sample | 129 | |
7988474228 | the naturally occurring variability found in samples | sampling variability | 130 | |
7988474229 | the values that the experimenter used for a factor | levels | 131 | |
7988474230 | control, randomization, replication, and blocking | the four principles of experimental design | 132 | |
7988474231 | a design where all experimental units have an equal chance of receiving any treatment | completely randomized design | 133 | |
7988474232 | Assuming the true mean/proportion of the population is (null value), the probability of getting a sample mean/proportion as extreme as or more extreme than ours is (p-value). | interpreting p value | 134 | |
7988474233 | center: p1-p2; shape: approximately Normal if n1p1, n1(1-p1), n2p2, and n2(1-p2) are all ≥ 10; spread (if the 10% condition checks): √(p1(1-p1)/n1 + p2(1-p2)/n2) | p̂1-p̂2 center, shape, and spread | 135 | |
7988474234 | plug in center and spread into bell curve, find probability | probability of getting a certain p̂1-p̂2 (ex. less than .1) | 136 | |
7988474235 | (p̂1-p̂2) plus or minus z*√(p̂1(1-p̂1)/n1 + p̂2(1-p̂2)/n2) | Confidence intervals for difference in proportions formula | 137 | |
7988474236 | t for mean z for proportions | When do you use t and z test/intervals? | 138 | |
7988474237 | two-proportion z test: Ho: p1-p2 = 0; z = (p̂1-p̂2)/√(p̂c(1-p̂c)(1/n1 + 1/n2)), where p̂c = (x1+x2)/(n1+n2) is the pooled (combined) proportion | Significance test for difference in proportions | 139 | |
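A sketch of both two-proportion procedures with made-up counts: the significance test pools the two samples in its standard error, while the confidence interval uses the unpooled formula from the card above.

```python
from scipy.stats import norm

x1, n1, x2, n2 = 60, 200, 45, 180                 # hypothetical successes and sample sizes
p1, p2 = x1 / n1, x2 / n2

# Significance test of Ho: p1 - p2 = 0 (pooled proportion in the SE)
pc = (x1 + x2) / (n1 + n2)
se_pooled = (pc * (1 - pc) * (1 / n1 + 1 / n2)) ** 0.5
z = (p1 - p2) / se_pooled
p_value = 2 * (1 - norm.cdf(abs(z)))              # two-sided alternative

# 95% confidence interval for p1 - p2 (unpooled SE)
se = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5
z_star = norm.ppf(0.975)
ci = ((p1 - p2) - z_star * se, (p1 - p2) + z_star * se)
print(z, p_value, ci)
```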
7988474238 | The claim of "no difference" or "no effect" that we assume to be true; a significance test assesses the strength of the evidence against it. Abbreviated Ho. | What is a null hypothesis? | 140 | |
7988474239 | the claim about the population that we are trying to find evidence FOR, abbreviated by Ha | What is an alternative hypothesis? | 141 | |
7988474240 | when Ha uses < or > (states a direction) | When is the alternative hypothesis one-sided? | 142 | |
7988474241 | when Ha uses ≠ (no direction specified) | When is the alternative hypothesis two-sided? | 143 | |
7988474242 | a fixed value α that we compare with the P-value; choosing it is a matter of judgment about what counts as "statistically significant" | What is a significance level? | 144 | |
7988474243 | α=.05 | What is the default significance level? | 145 | |
7988474244 | Assuming the true mean/proportion of the population is (null value), the probability of getting a sample mean/proportion as extreme as or more extreme than ours is (p-value). | Interpreting the p-value | 146 | |
7988474245 | We reject our null hypothesis. There is sufficient evidence to say that (Ha) is true. | p value ≤ α | 147 | |
7988474246 | We fail to reject our null hypothesis. There is insufficient evidence to say that (Ha) is true. | p value > α | 148 | |
7988474247 | Type I Error | reject Ho when it is actually true | 149 | |
7988474248 | Type II Error | fail to reject Ho when it is actually false | 150 | |
7988474249 | probability of rejecting Ho when it is false | Power definition | 151 | |
7988474250 | α | probability of Type I Error | 152 | |
7988474251 | 1-power | probability of Type II Error | 153 | |
7988474252 | increase sample size/significance level α | two ways to increase power | 154 | |
7988474253 | State --> Ho/Ha, define the parameter Plan --> one sample, z test Check --> random/normal/independent Do --> find p hat, find the test statistic (z), use the test statistic to find the p-value Conclude --> p value ≤ α: reject Ho; p value > α: fail to reject Ho | 5 step process: z/t test | 155 | |
7988474254 | t = (x̄ - μ₀)/(Sx/√n); use z = (x̄ - μ₀)/(σ/√n) if the population standard deviation σ is known | Formula for test statistic (μ) | 156 | |
7988474255 | (p̂-p)/√(p(1-p)/n) | Formula for test statistic (p̂) (where p represents the null) | 157 | |
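A sketch of the one-proportion z statistic and its two-sided p-value for hypothetical values of the null proportion, sample size, and successes.

```python
from scipy.stats import norm

p0, n, x = 0.5, 100, 62                            # hypothetical null value, sample size, successes
p_hat = x / n
z = (p_hat - p0) / (p0 * (1 - p0) / n) ** 0.5      # (p-hat - p0) / sqrt(p0(1-p0)/n)
p_value = 2 * (1 - norm.cdf(abs(z)))               # two-sided p-value
print(z, p_value)
```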
7988474256 | sketch the Normal distributions under the null and the true (alternative) values, find the rejection boundary under the null, then use normalcdf to find the area under the true distribution on the fail-to-reject side | probability of a Type II Error? | 158 | |
7988474257 | for proportions (or for means when the population standard deviation σ is known) | when do you use z tests? | 159 | |
7988474258 | for mean (population standard deviation unknown) | when do you use t tests? | 160 | |
7988474259 | tcdf(min, max, df) | finding p value for t tests | 161 | |
7988474260 | state --> Ho: μd = 0 (the mean difference is 0) plan --> one-sample paired t test check --> random, normal, independent do --> find the test statistic and p-value conclude --> standard conclusion | Sample paired t test | 162 | |
7988474261 | The sample mean/proportion is far enough away from the hypothesized (null) value that it is unlikely to have happened by chance alone | What does statistically significant mean in context of a problem? | 163 | |
7988474262 | make a histogram of the differences and check that it looks roughly Normal (symmetric, no outliers) | When doing a paired t-test, to check normality, what do you do? | 164 | |
7988474263 | In C% of all possible samples of size n, we will construct an interval that captures the true parameter (in context). | How to interpret a C% Confidence Level | 165 | |
7988474264 | We are C% confident that the interval (_,_) will capture the true parameter (in context). | How to interpret a C% Confidence Interval | 166 | |
7988474265 | random, normal, independent | What conditions must be checked before constructing a confidence interval? | 167 | |
7988474266 | State: Construct a C% confidence interval to estimate... Plan: one sample z-interval for proportions Check: Random, Normal, Independent Do: Find the standard error and z*, then p hat +/- z*(standard error) Conclude: We are C% confident that the interval (_,_) will capture the true parameter (in context). | C% confidence intervals of sample proportions, 5 step process | 168 | |
7988474267 | √(p̂(1-p̂)/n) | What's the z interval standard error formula? | 169 | |
7988474268 | invNorm(area to the left of z*); e.g., for 95% confidence, invNorm(0.975) | How do you find z*? | 170 | |
7988474269 | average the two endpoints of the confidence interval: (lower + upper)/2 | How do you find the point estimate of a sample? | 171 | |
7988474270 | Ask, "What am I adding or subtracting from the point estimate?" So find the point estimate, then find the difference between the point estimate and the interval ends | How do you find the margin of error, given the confidence interval? | 172 | |
7988474271 | use p hat=.5 | Finding sample size proportions: When p hat is unknown, or you want to guarantee a margin of error less than or equal to: | 173 | |
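A sketch of the sample-size calculation with the conservative guess p̂ = 0.5; the 95% confidence level and 0.03 margin of error are hypothetical.

```python
import math
from scipy.stats import norm

z_star = norm.ppf(0.975)      # z* for 95% confidence
me = 0.03                     # desired margin of error
p_hat = 0.5                   # conservative guess when p-hat is unknown
n = math.ceil((z_star / me) ** 2 * p_hat * (1 - p_hat))
print(n)                      # round up: about 1068
```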
7988474272 | x bar +/- z*(σ/√n) | Finding the confidence interval when the standard deviation of the population is *known* | 174 | |
7988474273 | the population distribution is stated to be Normal, or n ≥ 30 (Central Limit Theorem) | Checking normal condition for z* (population standard deviation known) | 175 | |
7988474274 | x bar +/- t*(Sx/√n) | Finding the confidence interval when the standard deviation of the population is *unknown* (which is almost always true) | 176 | |
7988474275 | n-1 | degrees of freedom | 177 | |
7988474276 | InvT(area to the left, df) | How do you find t*? | 178 | |
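A sketch of a one-sample t interval on a hypothetical small data set; `t.ppf` plays the role of InvT(area to the left, df).

```python
import numpy as np
from scipy.stats import t

data = np.array([10.2, 9.8, 11.1, 10.5, 9.9, 10.7])   # hypothetical sample
n = len(data)
x_bar, s = data.mean(), data.std(ddof=1)
df = n - 1                                 # degrees of freedom
t_star = t.ppf(0.975, df)                  # like InvT(0.975, df) for 95% confidence
se = s / n ** 0.5                          # standard error Sx / sqrt(n)
ci = (x_bar - t_star * se, x_bar + t_star * se)
print(ci)
```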
7988474277 | same as standard deviation, but we call it "standard error" because we plugged in p hat for p (we are estimating) | What is the standard error? | 179 | |
7988474278 | provides an estimate of a population parameter. | a point estimator is a statistic that... | 180 | |
7988474279 | Confidence level C decreases, sample size n increases | Explain the two conditions when the margin of error gets smaller. | 181 | |
7988474280 | NO; the confidence interval gives us a set of plausible values for the parameter | Does the confidence level tell us the chance that a particular confidence interval captures the population parameter? | 182 | |
7988474281 | Sx is for a sample, σx is for a population | Sx and σx: which is which? | 183 | |
7988474282 | you are not given the population standard deviation | How do we know when do use a t* interval instead of a z interval? | 184 | |
7988474283 | Check by sample size: n ≥ 30 --> safe (Central Limit Theorem); 15 ≤ n < 30 --> OK unless there is strong skewness or outliers; n < 15 --> only if the data appear close to Normal (roughly symmetric, single peak, no outliers) | Checking normal condition for t* (population standard deviation unknown) | 185 | |
7988474284 | plug the data into List 1 and look at the histogram. Conclude with "The histogram looks roughly symmetric, so we should be safe to use the t distribution." | How to check if a distribution is normal for t*, population n<15 | 186 | |
7988474285 | State: Construct a __% confidence interval to estimate... Plan: one sample t interval for a population mean Check: Random, Normal, Independent (for Normal, look at sample size and go from there) Do: Find the standard error (Sx/√n) and t*, then do x bar +/- t*(standard error) Conclude: We are __% confident that the interval (_,_) will capture the true parameter (in context). | t* confidence interval, 5 step process | 187 | |
7988474286 | (z* or t*) × (standard error) | margin of error formula | 188 | |
7988474287 | x bar plus or minus t* (Sx/√n) -get x bar and Sx using 1-Var Stats -t* = InvT(area to the left, df) -the sample size n will be given | When calculating t interval, what is it and where do you find the data? | 189 | |
7988474288 | z* or t* (the critical value) | What is it looking for if it asks for the appropriate critical value? | 190 |
AP Statistics review Flashcards