Reviewing some modeling tests has sent me back to looking at the power of a given test, which analysts often overlook. We want to minimize the probability of committing a Type I error (a false positive), and we want to maximize power, which means minimizing beta, the probability of a Type II error: a false negative, failing to reject a null hypothesis that is false, an error of omission.
Starting out there are two types of errors:
- Type I error – we reject the null hypothesis Ho when the null is true; alpha = P(Type I error). This is the quantity we control in the standard procedure: testing at 95% confidence means accepting a 5% chance we are wrong. We calculate the p-value and then determine whether it falls above or below the alpha threshold.
- Type II error – we fail to reject Ho when Ha is true; beta = P(Type II error). The power of a test is 1 - beta, the probability of correctly rejecting a false null.
- Type I: "I falsely think the alternate hypothesis is true" (one false)
- Type II: "I falsely think the alternate hypothesis is false" (two falses)
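A quick way to see what alpha means in practice is to simulate the standard procedure under a true null: reject whenever the p-value falls below the threshold, and the rejection rate settles near alpha. This is a small sketch using only the standard library; the sample size, trial count, and one-sided z-test are my own illustrative choices.

```python
import random
from math import sqrt
from statistics import NormalDist, mean

random.seed(0)
nd = NormalDist()
alpha, trials, n = 0.05, 2000, 30

# Under H0 (true mean 0, sigma 1) the p-value is uniform on [0, 1],
# so we should reject in roughly 5% of trials: the Type I error rate.
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = mean(sample) / (1 / sqrt(n))   # z statistic under H0
    p = 1 - nd.cdf(z)                  # one-sided p-value
    if p < alpha:
        rejections += 1

print(rejections / trials)             # hovers around 0.05
```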
These two types of error are inversely related: shrinking the risk of a Type I error (lowering alpha) increases the likelihood of a Type II error. Note that we cannot compute beta unless we specify how false the null actually is, i.e., pick a particular alternative value, which is what makes the probability of a Type II error difficult to pin down.
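To make that concrete, here is a sketch of computing beta for a one-sided z-test once we commit to a specific alternative mean. The function and the numbers plugged in (null mean 100, true mean 103, sigma 15, n = 50) are hypothetical, chosen only to illustrate the calculation.

```python
from math import sqrt
from statistics import NormalDist

def type_ii_error(mu0, mu1, sigma, n, alpha=0.05):
    """Beta for an upper-tail z-test of H0: mu = mu0
    against a specific true mean mu1 > mu0 (known sigma)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)   # critical value
    shift = (mu1 - mu0) / (sigma / sqrt(n))     # standardized true effect
    return NormalDist().cdf(z_alpha - shift)    # P(fail to reject | mu1)

# Hypothetical numbers: the test misses this true 3-point shift
# more often than not.
beta = type_ii_error(100, 103, 15, 50)
power = 1 - beta
```

Notice that beta depends on mu1: move the true mean closer to the null and beta climbs toward 1 - alpha, which is exactly why there is no single "the" Type II error probability for a test.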
Calculating the power of a test allows us to determine the sample size necessary to keep the Type II error rate acceptably low. Generally, you need a large sample for most tests to declare a significant difference with any strong likelihood. We often don't have that luxury, so it is important to consider both Type I and Type II errors.
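The sample-size calculation above can be sketched with the textbook formula for a one-sided z-test with known sigma, n = ((z_alpha + z_beta) * sigma / delta)^2; the function name and example inputs are mine, and real studies would use a power routine matched to the actual test.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size(delta, sigma, alpha=0.05, power=0.80):
    """Smallest n so a one-sided z-test detects a true shift of
    delta with the given power (illustrative, known-sigma case)."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # Type I critical value
    z_b = NormalDist().inv_cdf(power)       # Type II (power) quantile
    return ceil(((z_a + z_b) * sigma / delta) ** 2)

# Detecting a 3-point shift with sigma = 15 at 80% power
n = sample_size(3, 15)
```

Halving the detectable shift roughly quadruples the required n, which is the "you need a large sample" point in practice.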