Traceability and calibration
Traceability is an unbroken chain of comparisons that links every measurement result, or the value of a standard, to references of successively higher accuracy, ending at the highest level of measurement: the primary standard.
In Greece, since 2000, an increasing share of measurements in industrial production has been traceable, through calibration laboratories accredited to ISO/IEC 17025:2005, to the national standards of the country, which are maintained by the Hellenic Institute of Metrology.
The basic means of ensuring the traceability of a measurement is the calibration of the measuring instrument used, which consists of determining its metrological characteristics. Calibration, therefore, is the process of checking the accuracy of a measuring instrument (tape measure, caliper, micrometer, scale, etc.). Every measuring system measures a quantity in specific units of measurement according to the SI (Système International d'Unités). These quantities are time, temperature, mass, length, amount of substance, electric current, and luminous intensity (see Table 1). A precise definition of calibration is: "the relationship of the results to a standard based on the SI system".
Table 1. The seven base units of the SI.

| Quantity | Unit | Symbol |
|---|---|---|
| Length (l) | metre | m |
| Mass (m) | kilogram | kg |
| Time (t) | second | s |
| Electric current (I) | ampere | A |
| Thermodynamic temperature (T) | kelvin | K |
| Luminous intensity (Iv) | candela | cd |
| Amount of substance (n) | mole | mol |
Table 2. Prefixes used with SI units.

| Prefix | Symbol | Multiplier | Meaning |
|---|---|---|---|
| kilo | k | 10³ | thousand |
| hecto | h | 10² | hundred |
| deca | da | 10¹ | ten |
| – | – | 10⁰ = 1 | one (unit) |
| deci | d | 10⁻¹ | tenth |
| centi | c | 10⁻² | hundredth |
| milli | m | 10⁻³ | thousandth |
| micro | μ | 10⁻⁶ | millionth |
| nano | n | 10⁻⁹ | billionth |
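As a small illustration of how the multipliers in Table 2 work, the sketch below converts a value between prefixed forms of the same unit; the prefix-to-exponent dictionary and the `convert` helper are just the table above expressed in Python, not part of the original text.

```python
# SI prefixes from Table 2, expressed as powers of ten ("" = no prefix).
PREFIXES = {"k": 3, "h": 2, "da": 1, "": 0, "d": -1, "c": -2, "m": -3, "μ": -6, "n": -9}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Convert a value between two prefixed forms of the same SI unit."""
    return value * 10 ** (PREFIXES[from_prefix] - PREFIXES[to_prefix])

print(convert(2.5, "k", ""))   # 2.5 km  -> 2500.0 m
print(convert(350, "m", "c"))  # 350 mm  -> 35.0 cm
```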
Main reasons that require the calibration of a measuring instrument:
1. Ensuring that the indications read from the instrument agree with other measurements.
2. Determining the accuracy of the instrument's indications.
3. Assuring the reliability of the instrument's indications.
The consequences of calibrating measuring instruments are as follows:
- The result of the calibration may allow either the acceptance of the instrument's indications or the correction of those indications.
- Calibration may determine other metrological properties, such as the effect of other influencing factors.
- The result of the calibration is recorded in a document, in the form of a calibration certificate or a calibration report.
Measurement uncertainty
Put more simply, calibration is the process of calculating the difference between the values given by the instrument under test and the values of a standard. Consequently, performing a calibration requires a series of measurements. When the calibration is carried out by an accredited laboratory, it is accompanied by a certificate that lists in detail the measurements performed. The same applies when the calibration is carried out within the laboratory itself (internal calibration), although in that case the measurements are far fewer. As we will see, these measurements are always repeated, although they are carried out at different value levels.

In general, the dispersion of these repeated values is calculated in the form of the standard deviation (SD, which expresses the amount of variation or dispersion of a set of values) or of the coefficient of variation (CV%, which is used to compare the variability of samples from different populations), together with the difference between the values measured on the instrument and those of the standard. In calibration, the dispersion of values arises from these differences, and the mean of these differences is called the uncertainty (U).
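As a minimal sketch of the dispersion statistics just mentioned, the following Python code computes the SD and CV% of a series of repeated readings; the numerical readings are hypothetical examples, not values from the text.

```python
import statistics

# Hypothetical repeated readings of an instrument at one value level.
readings = [10.02, 10.05, 9.98, 10.01, 10.03]

mean = statistics.mean(readings)
sd = statistics.stdev(readings)   # sample standard deviation (SD)
cv_percent = 100 * sd / mean      # coefficient of variation (CV%)

print(f"mean = {mean:.3f}, SD = {sd:.4f}, CV% = {cv_percent:.2f}")
```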
The standard uncertainty (u) is calculated as:

Standard uncertainty (u) = mean instrument value − standard value

The standard uncertainty has the same units of measurement as the instrument's readings. When the uncertainty enters further calculations, however, the relative uncertainty is used instead:

Relative uncertainty = (mean instrument value − standard value) / standard value

The relative uncertainty relates the standard uncertainty to the magnitude of the standard (theoretical) value.
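The two formulas above can be written directly in code. A minimal sketch, reusing the hypothetical readings from the previous example and an assumed standard value of 10.00:

```python
import statistics

readings = [10.02, 10.05, 9.98, 10.01, 10.03]  # hypothetical instrument readings
standard_value = 10.00                          # assumed value of the standard

mean_reading = statistics.mean(readings)

# Standard uncertainty: mean instrument value minus standard value.
u = mean_reading - standard_value

# Relative uncertainty: the same difference, relative to the standard value.
u_rel = u / standard_value

print(f"u = {u:+.4f} (same units as the instrument)")
print(f"relative uncertainty = {u_rel:+.4%}")
```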
The uncertainty we take into account in the laboratory is the expanded uncertainty (U), which is twice the standard uncertainty for a confidence level of approximately 95%; that is, U = 2u. When more than one calibrated instrument, each with a known standard uncertainty, takes part in an experiment, the individual standard uncertainties are combined into a single uncertainty; for independent sources this combination is the square root of the sum of their squares, and the result is then multiplied by 2 to obtain the expanded uncertainty.
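As a sketch of these last two rules, the code below computes U = 2u for a single standard uncertainty and combines several independent standard uncertainties by the root-sum-of-squares rule; the three uncertainty values are hypothetical.

```python
import math

def expanded_uncertainty(u: float, k: float = 2.0) -> float:
    """Expanded uncertainty U = k * u (k = 2 for ~95% confidence)."""
    return k * u

def combined_standard_uncertainty(u_list: list[float]) -> float:
    """Combine independent standard uncertainties (root sum of squares)."""
    return math.sqrt(sum(u ** 2 for u in u_list))

# Hypothetical standard uncertainties of three calibrated instruments.
u_instruments = [0.018, 0.005, 0.010]

u_combined = combined_standard_uncertainty(u_instruments)
print(f"combined u = {u_combined:.4f}")
print(f"expanded U = {expanded_uncertainty(u_combined):.4f}")
```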
