© 2004, S. D. Cochran. All rights reserved.

MEASUREMENT ERROR

  1. So far we have learned a few key concepts that are relevant to this next section 

  1. We've started to think about distributions  

  1. Sample distributions--the collections of values we observe in our sample

  2. Normal distribution--the probability density curve 

  1. These distributions have a central point, a middle, that we can define in various ways

  2. These distributions have variability around the middle point--the spread, the variance 

  1. Now we are going to carry the idea of distributions, or uncertainty, a step further. Whenever we measure or observe something, the value we obtain can also be thought of as an element in a distribution of all the possible observations that we might have made 

  1. In that hypothetical distribution, there is a central point

  2. Like all distributions, there is also variability 

  1. Statisticians also think about their observations, or measurements, as having three components. 

Observed score = True value + Chance Error + Bias 

  1. The first part is the true value, or true score--the part of an observed value that is absolutely real 

  2. The second part is chance error 

  1. These are differences that tend to show variation around a central point; the central point is the true score--if we measure something repeatedly we won't get the same answer each time, but the answers will be centered around a particular score

  2. Chance error is bidirectional--the perturbations it causes both inflate and deflate the observed score

  3. Chance error itself can be thought of as having its own distribution

  1. Sometimes error in a measurement is large, sometimes small

  2. Sometimes it adds, sometimes it subtracts

  3. You can think of the size of a chance error as a deviation from no error at all
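The behavior of chance error described above can be simulated. The sketch below assumes a hypothetical true value of 100 and chance errors drawn from a normal distribution with standard deviation 5 (both numbers are illustrative, not from the notes): repeated measurements scatter above and below the true value, but their average stays close to it.

```python
import random

random.seed(0)  # reproducible draws

TRUE_VALUE = 100.0  # hypothetical true score

# Simulate 10,000 measurements, each equal to the true value plus a
# chance error drawn from a normal distribution centered at zero.
measurements = [TRUE_VALUE + random.gauss(0, 5) for _ in range(10_000)]

mean_observed = sum(measurements) / len(measurements)

# Chance error is bidirectional: some observations land above the
# true value, some below.
above = sum(1 for m in measurements if m > TRUE_VALUE)
below = sum(1 for m in measurements if m < TRUE_VALUE)

print(f"mean of observations: {mean_observed:.2f}")  # close to 100
print(f"above true value: {above}, below: {below}")  # roughly equal
```

Individual measurements can miss by 10 or more in either direction, yet the distribution of observations is centered on the true score, which is exactly the point made above.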

  1. Observed scores in a distribution that are very far from the average of the distribution are referred to as outliers

  1. Outliers have a disproportionate effect on summary statistics such as the mean, and so statisticians disagree over whether or not outliers should be thrown out

  2. Example: Imagine you are asked to sprint a 100-yard dash 10 times. Nine of the 10 times go just fine; you come in somewhere between 13 and 17 seconds. Once you trip and fall, skin your knee, and take 5 minutes to get across the finish line. The time for that trial is an outlier. Should we include it? Some would argue that it reflects the total range of your performance and should be included. Others would argue that it is such an aberration that including it distorts your true performance capabilities in the 100-yard dash.

  3. There is no hard and fast rule
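The sprint example lends itself to a quick numerical check. The nine "fine" times below are made-up values in the 13-17 second range, with the fall recorded as 300 seconds (5 minutes); the sketch shows how one outlier drags the mean while barely moving the median.

```python
from statistics import mean, median

# Nine typical sprint times in seconds (hypothetical values in the
# 13-17 second range) plus one outlier trial where the runner fell.
typical = [13.2, 14.0, 14.5, 15.1, 15.8, 16.0, 16.4, 16.9, 17.0]
with_outlier = typical + [300.0]

# The single outlier pulls the mean far outside the 13-17 second
# range, while the median hardly changes.
print(f"mean without outlier:   {mean(typical):.1f}")        # ~15.4
print(f"mean with outlier:      {mean(with_outlier):.1f}")   # ~43.9
print(f"median without outlier: {median(typical):.1f}")      # 15.8
print(f"median with outlier:    {median(with_outlier):.1f}") # ~15.9
```

This is why the decision matters: whether the outlier stays in or out changes the mean by a factor of nearly three, while a resistant summary like the median gives almost the same answer either way.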

  1. The third part of an observed value is bias or systematic error

  1. Bias is unidirectional

  2. Positive bias inflates a score

  3. Negative bias decreases a score

  4. Example: Imagine that we measure inches with a ruler that is mismarked so that 13 inches is indicated as being 12 inches. The bias is one inch. Every measurement using that ruler will be biased by the same amount in the same direction.
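The mismarked-ruler example can be sketched directly. The function and length values below are hypothetical: since the ruler's 13-inch mark reads "12", every reading comes out one inch below the true length, so the error is identical for every object measured.

```python
# A hypothetical mismarked ruler: its 13-inch mark is labeled "12",
# so every reading is one inch lower than the true length.
BIAS = -1.0

def mismarked_ruler(true_length_inches):
    """Return the reading the faulty ruler would give."""
    return true_length_inches + BIAS

true_lengths = [5.0, 13.0, 24.0]
readings = [mismarked_ruler(t) for t in true_lengths]

# Bias is unidirectional: the error is the same size and the same
# sign for every measurement, unlike chance error.
errors = [r - t for r, t in zip(readings, true_lengths)]
print(readings)  # [4.0, 12.0, 23.0]
print(errors)    # [-1.0, -1.0, -1.0]
```

Contrast this with the chance-error simulation: averaging more readings from this ruler would never cancel the one-inch error, because bias does not vary around zero.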