
Even the best experiments have sources of error, but a smart experimentalist considers the likely sources of error and the effect they have on the experiment’s results and conclusions. To answer questions relating to how specific errors lead to final conclusions, first consider how the error would affect your raw data, then rework calculations that led to your conclusions.

There are two (or three?) types of error that arise in chemistry lab:

  1. Random error (or, indeterminate error)
    • Random error can change your results randomly in either direction;
    • It is impossible to predict which direction random error will shift your results because it is different every time – a large spread in duplicate experiments suggests random error is a problem in your experiment;
    • Random errors can be difficult to identify because they come from uncontrolled variables. For example, the density of water is temperature dependent. If you were measuring the density of water and failed to notice that the temperature of the water was drifting randomly, this would produce a random error;
    • Another example of a random error is contamination from dirty glassware. If the amount and identity of the contamination is unknown, it would have a random effect on the experiment;
    • Limitations in measurement tools are a source of random error that contributes to all experiments – if you measure the mass of a marble 5 times, you will always get a slightly different mass.
  2. Systematic error (or, determinate error)
    • Flaws inherent to the experiment that cause results to shift in one direction every time;
    • Inaccurate data with a narrow distribution in duplicate experiments suggests systematic error;
    • For example, if you are supposed to count the number of chocolate candies in a bowl, but you tend to eat them while counting, then you will tend to count fewer candies than were actually in the bowl. Your results will always be too low due to this error;
    • Systematic errors can be eliminated by changing the procedure. In the candy counting example, you could improve your experiment in the future by counting candies with duct tape over your mouth;
    • More realistically, a poorly calibrated thermometer that always reads higher than the actual temperature could be recalibrated to eliminate systematic error;
    • Impurities, such as water contamination in a substance presumed to be dry, are another example of a systematic error. You could eliminate this error by drying the substance more thoroughly. Note that contamination can be a random or a systematic error: for it to be systematic, you must be able to predict the effect of the contamination on the results, whereas random errors are unpredictable;
    • Transfer errors are the systematic loss of substances when they change containers. One way that transfer errors can be eliminated is by measuring the mass of empty containers after transfer.
  3. Human error
    • You didn’t follow instructions, forgot to tare the balance, accidentally threw away your product, didn’t write down a critical piece of data, etc.;

If human error is to blame in your experiment, then you are in luck: you can do the experiment again and try not to make human errors!
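The contrast between random and systematic error can be sketched numerically. This is an illustrative simulation with assumed values (a true marble mass of 5.000 g, balance noise of 0.002 g, and a calibration offset of 0.010 g), not data from a real experiment:

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

TRUE_MASS = 5.000          # hypothetical true mass of a marble, in grams
RANDOM_SPREAD = 0.002      # assumed balance noise (random error), in grams
SYSTEMATIC_OFFSET = 0.010  # assumed miscalibration (systematic error), in grams

def measure(n, offset=0.0):
    """Simulate n repeated mass readings: true value + fixed offset + random noise."""
    return [TRUE_MASS + offset + random.gauss(0, RANDOM_SPREAD) for _ in range(n)]

random_only = measure(5)                          # scatters around 5.000 g
with_bias = measure(5, offset=SYSTEMATIC_OFFSET)  # scatters around 5.010 g

print("random error only:    mean = %.4f g" % statistics.mean(random_only))
print("with systematic bias: mean = %.4f g" % statistics.mean(with_bias))
```

Averaging more readings shrinks the effect of the random noise, but no amount of averaging removes the fixed offset – only a procedural fix (recalibration) does.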

Quantitative error analysis

by K. Jewett and S. Sontum

Quantitative error analysis not only provides a method of communication but also lends itself to better design of experiments. As mentioned previously, it is impossible to perform a chemical measurement that is totally free of error. A scientist’s objective is to keep these errors to a tolerable level and to estimate their magnitude and source. Error analysis tells us which aspect of the measurement contributes most to the overall error, thus aiding in minimizing the experimental uncertainty. It is therefore important, in every chemical measurement, to estimate the error along with the actual value.

To estimate error, the uncertain digit must be measured. At some level all physical measurements have an uncertain digit: the digit that changes due to random errors when repeated measurements are made. On analog instruments like a graduated cylinder, the uncertain digit is measured by estimating the measured value to 1/5 of the smallest division. For 50 mL burets with 0.1 mL divisions this means reading to the nearest 0.02 mL, with the digit in the hundredths place being the uncertain digit. On digital instruments like an analytical balance, the assumption is made that the smallest stable displayed digit is the uncertain digit. When the uncertain digit is measured, the methods of statistics can be used to evaluate errors in repeated measurements. Knowing which measurement error limits the accuracy of an experiment, and whether that error is systematic or random, is the primary information used to improve the experimental design.
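The 1/5-of-the-smallest-division convention can be sketched as small helper functions. These are illustrative, not standard routines: reading_uncertainty applies the 1/5 rule, and record_reading rounds a raw value to the nearest multiple of that uncertainty (an assumption about how a reading would be written down):

```python
def reading_uncertainty(smallest_division):
    """Uncertainty of an analog reading: 1/5 of the smallest division."""
    return round(smallest_division / 5, 10)  # round away float noise

def record_reading(value, smallest_division):
    """Round a raw reading to the nearest multiple of the uncertainty."""
    u = reading_uncertainty(smallest_division)
    return round(round(value / u) * u, 10)

# A 50 mL buret with 0.1 mL divisions is read to the nearest 0.02 mL:
print(reading_uncertainty(0.1))      # 0.02
print(record_reading(23.4738, 0.1))  # 23.48
```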

Accuracy vs. Precision

The accuracy and precision of an experiment are independent of each other when systematic errors are present. The accuracy of a result is an expression of the overall uncertainty, including both random and systematic errors. How close something is to the truth is hard to measure, but in the absence of systematic errors the accuracy is related to the precision, or reproducibility, of groups of identical measurements. For example, let’s say you are trying to measure the position of a star with a telescope as a function of the vertical (y) and horizontal (x) angle above the horizon due east. When you look at a star, you know that there is a multitude of causes of error; stars, after all, twinkle. So you take several readings and hope that the best estimate of the star’s position is the average – the center of the scatter. Here are three examples of results:

We can push this idea further by interpreting the scatter of the measurements. In 1795, at the age of eighteen, the German mathematician Carl Friedrich Gauss asked this very question. He devised the Gaussian curve, or normal distribution (symmetrical about the average), in which the scatter is summarized by the standard deviation, or spread, of the curve. The scatter marks an area of uncertainty. We are not sure the star lies at the exact center. All we can say is that the star lies in the area of uncertainty, and the size of that area is calculable from the observed scatter of the individual observations. Gauss found that the best estimate of the area of uncertainty was the standard deviation, which for n measurements x₁, …, xₙ with average x̄ is given by:

s = √[ Σ(xᵢ − x̄)² / (n − 1) ]
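The standard deviation described above can be computed directly; Python’s statistics.stdev uses the same sample (n − 1) definition, which this sketch checks against using hypothetical angle readings:

```python
import math
import statistics

def sample_std(xs):
    """Sample standard deviation: sqrt( sum((x - mean)^2) / (n - 1) )."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

# Hypothetical repeated readings of a star's horizontal angle, in degrees:
angles = [10.1, 10.3, 9.9, 10.2, 10.0]

s = sample_std(angles)
print(round(s, 4))  # 0.1581
assert math.isclose(s, statistics.stdev(angles))
```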

The Gaussian curve has turned out to be a nearly universal description of the distribution of random measurement errors. Students’ grades on an exam follow this bell-shaped curve, as does the distribution of the weight of each grain of wheat in a field, or of repeated measurements of the weight of a single wheat grain. In the bell-shaped Gaussian curves depicted above, about 68% of the points fall within one standard deviation on each side of the average, while 95% of the points are within ±2 standard deviations of the average. It is very rare for random errors to cause deviations greater than two standard deviations. When two averages are two or more standard deviations apart, we can say they are significantly different at the 95% confidence level. In our example of two experiments to measure the star’s position, the averages of 0 and 2 are significantly different because they are two standard deviations apart.
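The two-standard-deviation criterion can be written as a simple comparison. This is a rough sketch of the rule of thumb above, not a formal significance test:

```python
def significantly_different(mean1, mean2, std):
    """Rough 95% criterion: means differing by at least 2 standard deviations."""
    return abs(mean1 - mean2) >= 2 * std

# The star example from the text: averages of 0 and 2, one standard deviation apart twice over
print(significantly_different(0, 2, 1))  # True
print(significantly_different(0, 1, 1))  # False
```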