Calibration of Experimental Research

For most of the primary and secondary measures, there are measuring instruments: rulers for length, balances for weight, thermometers for temperature. In daily life, we take such instruments for granted. If a newly bought thermometer shows a person's temperature to be 104°F, one does not react with suspicion, wondering, "How do I know the thermometer is not wrong and that the actual temperature is four degrees lower?" An experimenter in science, on the other hand, needs to wonder exactly that about a thermometer used in his experiments. He cannot act on faith and let things be as they are. The chance that a thermometer in his experiment is wrong may be only one in a thousand, but he must know whether it is reading the correct temperature and, if not, how far off it is. The steps taken to answer these questions, together with any corrective measures then implemented, are known as calibration.

No measuring instrument, however new, expensive, or fancy looking, is beyond the doubt that leads to calibration. This is a task that demands scruples and conscience on the part of the experimenter, not to be delegated to someone else lightly, for a simple reason: if the experimenter begins with less than dependable measurements, he cannot make dependable correlations. If, after making some progress, he is required to repeat the experiments with better readings, the fatigue is self-inflicted, besides the waste of time.

As to the means and methods of calibration, however, no general rules can be laid out. Depending on the instrument, the accuracy required, and the references and standards available, a wide range of possibilities and levels of "acceptability" prevail. Most of the basic standards, for example the "correct" length of a meter, are maintained in each country by an organization like the U.S. Bureau of Standards. Copies of such standards are likely to be available in many significant reference places, such as the metrology lab of a local college, university, or research organization. But not every experimenter who needs to measure a length must run to a metrology lab. More often than not, a copy of a copy of . . . the locally accessible standard is adequate. This is where the question of the "needed accuracy" arises. An investigator heat-treating a weld specimen, for example, does not have to measure the soaking period to the accuracy of seconds; minutes are good enough. Here again, the experimenter needs to decide how much accuracy is required for each of the measurements involved. A sense of proportion, not to be taken for granted, is an asset in an experimenter. Fortunately, there are statistical methods to help settle such questions when in doubt. But in most cases, comparison with one or two nearby standards is adequate. For example, a dial indicator used in an experiment can be calibrated against gage blocks on a surface plate within the same lab. All that is needed is to check, and correct if need be, the measuring instruments in use, beyond any reasonable doubt.
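The dial-indicator check just described can be sketched in a few lines of code: compare each reading against a known standard, compute the error, and flag anything outside the accuracy this particular experiment actually needs. The gage-block sizes, readings, and tolerance below are invented for illustration, not taken from the text.

```python
# Hypothetical check of a dial indicator against gage blocks (all values mm).
# Known gage-block heights and the indicator's readings at each are invented.
gage_blocks = [1.000, 2.000, 5.000, 10.000, 20.000]
readings =    [1.003, 2.004, 5.006, 10.008, 20.012]

tolerance = 0.005  # the accuracy this experiment actually needs, decided up front

for standard, reading in zip(gage_blocks, readings):
    error = reading - standard
    status = "OK" if abs(error) <= tolerance else "needs correction"
    print(f"{standard:7.3f} mm block: read {reading:7.3f} mm, "
          f"error {error:+.3f} mm -> {status}")
```

The point of the tolerance line mirrors the author's "sense of proportion": the same indicator might pass for one experiment and fail for another, depending on the accuracy actually required.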

There is another kind of calibration we have not yet dealt with, which is, in fact, what is more often meant by "calibration" without qualification; the issue we have dealt with so far is then considered a matter of "standards." When a quantity to be measured has a one-to-one correspondence, existing or reasonably established, with another quantity that is more easily or more precisely measurable, the instrument used to measure the second quantity is said to be "calibrated" in terms of the first. Often, reading electrical quantities, like current and voltage, is more convenient than reading some physical property, say temperature. For example, if we are required to measure, over a range, the temperature of molten glass, we cannot think of using a thermometer with a glass stem and mercury bulb. Instead, we can use a thermocouple made of a "standard" pair of metal wires, which can be dipped safely into the liquid glass bath. The (emf) voltage generated in this circumstance, using that particular pair of wires, has a one-to-one correspondence with the temperature of the bath; this relation is well established in the body of science and available in many standard handbooks, which we normally need not doubt. We then record the voltage values corresponding to the temperatures obtained by adding heat energy to, or removing it from, the glass bath. (It is assumed here that the voltmeter was independently calibrated for its voltage readings.) Using the correspondence relation, in either graphical or tabular form, we can find a temperature value for every voltage value; that is, for every reading on the voltmeter, there is a corresponding temperature of the bath. The voltmeter, at this level of use, is said to be "calibrated for reading temperature." This is indeed the principle used in all commercially available temperature recorders.
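The tabular correspondence described above amounts to a lookup with interpolation: given a voltmeter reading, find the bracketing entries in the recorded table and interpolate the temperature between them. A minimal sketch, in which the (voltage, temperature) pairs are invented for illustration rather than taken from any real thermocouple table:

```python
from bisect import bisect_left

# Recorded correspondence table: thermocouple emf (mV) vs. bath temperature (°C).
# These pairs are illustrative; real values come from standard handbooks.
voltages_mV = [0.0, 4.1, 8.5, 13.2, 18.1, 23.2]
temps_C = [0.0, 100.0, 200.0, 300.0, 400.0, 500.0]

def temperature_from_voltage(v_mV):
    """Interpolate the correspondence table for a given voltmeter reading."""
    if not voltages_mV[0] <= v_mV <= voltages_mV[-1]:
        raise ValueError("reading outside the calibrated range")
    i = bisect_left(voltages_mV, v_mV)  # first table entry >= the reading
    if voltages_mV[i] == v_mV:
        return temps_C[i]  # exact table entry, no interpolation needed
    # Linear interpolation between the two bracketing table entries.
    v0, v1 = voltages_mV[i - 1], voltages_mV[i]
    t0, t1 = temps_C[i - 1], temps_C[i]
    return t0 + (t1 - t0) * (v_mV - v0) / (v1 - v0)
```

With this lookup in place, every voltmeter reading maps to a bath temperature, which is exactly what "calibrated for reading temperature" means here; a commercial recorder does the same mapping internally. Note that readings outside the recorded range are rejected rather than extrapolated, since the correspondence was only established over the range actually measured.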

Source: Srinagesh, K. (2005), The Principles of Experimental Research, Butterworth-Heinemann, 1st edition.
