Measurement accuracy, methods, tools and equipment. Physical quantities

The great Russian scientist Dmitry Ivanovich Mendeleev said: "Science begins where measurement begins." During this lesson you will learn what measurement is, what the division value of a measuring instrument's scale is and how to calculate it, and also how to determine the error (inaccuracy) of measurement results.

Subject: Introduction

Lesson number 2: Physical quantities and their measurement.

Accuracy and error of measurements.

The purpose of the lesson: get acquainted with the concept of "physical quantities"; learn to measure physical quantities using the simplest measuring instruments and determine the measurement error.

Equipment: ruler, beaker, thermometer, ammeter, voltmeter.

1. Checking homework (15 minutes).

1) The first student solves problem number 5 at the blackboard.

2) The second student solves problem number 6 at the blackboard.

3) The rest write a physical dictation.

4) Ask the students solving problems at the board additional questions on the paragraph and the basic definitions.

6) At 7 "A" as an additional question, ask about the messages on the leaflet (what conclusions did you make).

2. Learning new material (20 minutes).

You already know that in order to study the various physical phenomena that occur with physical bodies, you have to experiment. And during experiments it is necessary to measure various physical quantities, such as mass, speed, time, height, length, width, and so on. To measure physical quantities, various measuring instruments are required.

2.1. What does it mean to measure a physical quantity?

(PZ): To measure a physical quantity means to compare it with another similar (as they say, homogeneous) physical quantity taken as a unit.

For example, the length of an object is compared with a unit of length, and the mass of a body with a unit of mass. But if one researcher measures, say, the distance traveled in fathoms, and another measures it in feet, it will probably be difficult for them to understand each other right away.

Therefore, all over the world people try to measure physical quantities in the same units. In 1963 the International System of Units, SI (from the French Système International), was adopted for use. And it is in this system of units of measurement of physical quantities that we will continue to work.

For example, the most common physical quantities are length, mass, and time. In the International System of Units SI it is accepted:

Length is measured in meters (m); the unit of measurement is 1 m;

Mass is measured in kilograms (kg); the unit of measurement is 1 kg;

Time is measured in seconds (s); the unit of measurement is 1 s.

Of course, you also know other units of measurement. For example, time can be measured in minutes or hours. But keep in mind that we will try to carry out all our subsequent calculations in SI units.

Units are often used that are 10, 100, 1000, 1,000,000, and so on, times larger than the accepted units (so-called multiple units), as well as units that are correspondingly smaller (submultiple units).

For example: deca (da) - 10, hecto (h) - 100, kilo (k) - 1000, mega (M) - 1,000,000, deci (d) - 0.1, centi (c) - 0.01, milli (m) - 0.001.

Example: the length of a table is 95 cm. Express this length in meters (m).

95 cm = 95 * 0.01 m = 0.95 m
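To illustrate how decimal prefixes work in calculations, here is a minimal sketch in Python (the prefix factors are the standard SI ones; the numbers repeat the table-length example above):

```python
# Standard decimal prefix factors used when converting to the base SI unit.
PREFIX_FACTORS = {
    "da": 10,         # deca
    "h": 100,         # hecto
    "k": 1_000,       # kilo
    "M": 1_000_000,   # mega
    "d": 0.1,         # deci
    "c": 0.01,        # centi
    "m": 0.001,       # milli
}

def to_base_unit(value, prefix):
    """Convert a value given with a prefixed unit to the base SI unit."""
    return value * PREFIX_FACTORS[prefix]

# Example from the lesson: a table 95 cm long, expressed in meters.
print(to_base_unit(95, "c"))  # 0.95 (m)
```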

2.2. Scale division value of the measuring instrument

When taking measurements, it is very important to use measuring instruments correctly. You are already familiar with some instruments, such as a ruler, a thermometer. You have yet to get acquainted with others - with a measuring cylinder, a voltmeter, an ammeter. But all these devices have one thing in common: they have a scale.

To work correctly with a measuring device, you must first pay attention to its measuring scale.

For example, consider the measuring scale of the most ordinary ruler.

We consider an example with a ruler in a class together.

Using this ruler, you can measure the length of any object, but not in SI units, but in centimeters. Units of measurement must be indicated on the scale of any instrument.

On the scale you see strokes (the so-called dashes printed on the scale). The spaces between the strokes are called scale divisions. Do not confuse strokes with divisions!

Next to some strokes are numbers.

In order to start working with any device, it is necessary to determine the value of the division of the scale of this device.

(PZ): The division value of the scale of a measuring instrument is the distance between the nearest scale marks, expressed in units of the measured quantity (in centimeters or millimeters for a ruler, in degrees for a thermometer, and so on).

To determine the division value of the scale of any measuring instrument, select the two nearest strokes next to which numerical values of the quantity are marked, for example, two and one. Now subtract the smaller value from the larger one. The result must be divided by the number of divisions between the selected strokes.

In our example, for the student's ruler: (2 cm - 1 cm) / 10 = 0.1 cm = 1 mm.

Another example is the scale of a thermometer.

Fig. 2. Thermometer scale

We select the two nearest strokes with numbers, for example, 20 and 10 degrees Celsius (note that the units of measurement, °C, are also indicated on this scale). There are 2 divisions between the selected strokes. Thus, we get: (20 °C - 10 °C) / 2 = 5 °C.
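The rule for finding the division value can be written as a short helper function. This is only an illustrative sketch; the numbers repeat the ruler and thermometer examples from the text:

```python
def division_value(mark_high, mark_low, divisions_between):
    """Division value of a scale: the difference between two neighbouring
    numbered marks divided by the number of divisions between them."""
    return (mark_high - mark_low) / divisions_between

# Student's ruler: numbered marks at 2 cm and 1 cm, 10 divisions between them.
print(division_value(2, 1, 10))   # 0.1 (cm), i.e. 1 mm

# Thermometer: numbered marks at 20 °C and 10 °C, 2 divisions between them.
print(division_value(20, 10, 2))  # 5.0 (°C)
```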

2.3. Measurement error and its determination.

For correct measurements it is not enough to be able to determine the division value of the instrument scale. Recall that when we talk about the distance from one point to another, we sometimes use expressions like "plus or minus half a kilometer." This means that we do not know the exact distance, that some inaccuracy was made in its measurement, or, as they say, an error.

The error is present in any measurement, absolutely accurate instruments do not exist. And the magnitude of the error can also be determined on the scale of the measuring instrument.

(PZ): The measurement error is half the division value of the scale of the measuring instrument.

Example 1. A regular student's ruler has a division value of 1 mm. Suppose that with its help we measured the thickness of a piece of chalk and got 12 mm. Half of the ruler's division value is 0.5 mm. This is the measurement error. If we denote the thickness of the piece of chalk by the letter b, then the result of the measurement is written as follows:

b = 12 ± 0.5 (mm)

The ± sign (plus or minus) means that when measuring we could make an error both up and down, that is, the thickness of the piece of chalk lies in the range from 11.5 mm to 12.5 mm.

I draw example No. 2 on the board with a smaller number of divisions; together with the class we calculate the division value and find the error.

Fig. 1. The scale of an ordinary ruler

Division value = (2 cm - 1 cm) / 5 = 0.2 cm = 2 mm

Half of the ruler's division value in this case will be equal to 1 mm.

Then the thickness of the piece of chalk is b = 12 ± 1 (mm), that is, in this case it lies in the range from 11 mm to 13 mm. The spread of the measurement is greater.

In both cases, we made the correct measurements, but in the first case, the measurement error was smaller, and the accuracy was higher than in the second, since the division value of the ruler was less.

Thus, from these two examples we can conclude:

(PZ): The smaller the division value of the instrument scale, the greater the accuracy (the smaller the error) of measurements made with this instrument.

When recording a value with its error taken into account, use the formula:

(PZ): A = a ± Δa,

where A is the measured quantity, a is the measurement result, and Δa is the measurement error.
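As a small sketch of how the rule A = a ± Δa is applied (the values repeat the chalk example above; taking the error as half the division value is the convention adopted in this lesson):

```python
def measurement_with_error(reading, division_value):
    """Return the lower and upper bounds of a reading, taking the error
    as half the division value of the instrument scale."""
    error = division_value / 2
    return reading - error, reading + error

# Chalk thickness read as 12 mm with a ruler whose division value is 1 mm.
low, high = measurement_with_error(12, 1)
print(f"b = 12 ± 0.5 mm, i.e. from {low} mm to {high} mm")  # 11.5 ... 12.5

# The same reading with a coarser ruler (division value 2 mm).
low, high = measurement_with_error(12, 2)
print(f"b = 12 ± 1 mm, i.e. from {low} mm to {high} mm")    # 11.0 ... 13.0
```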

3. Consolidation of the studied material (10 minutes).

Tutorial: exercise number 1.

4. Homework.

Textbook: § 4, 5.

Problem book: No. 17, No. 39. (detailed description of the tasks)

(explain how to write a detailed solution of problems!!!)

The error is the deviation of the result of measuring a physical quantity (for example, pressure) from the true value of the measured quantity. The error arises as a result of the imperfection of the measurement method or of the measuring instruments, insufficient account of the influence of external conditions on the measurement process, the specific nature of the measured quantities themselves, and other factors.

The accuracy of measurements is characterized by the closeness of their results to the true value of the measured quantities. There are the concepts of absolute and relative measurement error.

The absolute measurement error is the difference between the measurement result and the actual value of the measured quantity:

ΔX = Q - X, (6.16)

The absolute error is expressed in units of the measured quantity (kgf/cm², etc.).

The relative measurement error characterizes the quality of the measurement results and is defined as the ratio of the absolute error DX to the actual value of the quantity:

δX = ΔX / X, (6.17)

Relative error is usually expressed as a percentage.
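As a minimal sketch of formulas (6.16) and (6.17), assuming the actual value is known; the pressure values below are invented purely for illustration:

```python
def absolute_error(measured, actual):
    """Absolute error: difference between the result and the actual value, formula (6.16)."""
    return measured - actual

def relative_error(measured, actual):
    """Relative error: absolute error divided by the actual value, in percent, formula (6.17)."""
    return absolute_error(measured, actual) / actual * 100

# Hypothetical pressure measurement in kgf/cm².
measured, actual = 10.25, 10.0
print(absolute_error(measured, actual))   # 0.25 (kgf/cm²)
print(relative_error(measured, actual))   # 2.5 (%)
```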

Depending on the reasons leading to the measurement error, there are systematic and random errors.

Systematic measurement errors are errors that, when measurements are repeated under the same conditions, manifest themselves in the same way, that is, they remain constant or change according to a certain law. Such measurement errors can be determined quite accurately.

Random errors are errors whose values change in a random way during repeated measurements of a physical quantity performed in the same way.

The assessment of the error of instruments is carried out as a result of their verification, i.e., a set of actions (measures) aimed at comparing instrument readings with the actual value of the measured value. The value of exemplary measures or indications of exemplary instruments is taken as the actual value of the measured quantity when checking working instruments. When assessing the error of exemplary measuring instruments, the actual value of the measurement of a quantity is taken to be the value of reference measures or the readings of reference instruments.

The basic error is the error inherent in the measuring instrument under normal conditions (atmospheric pressure, air temperature 20 °C, humidity 50-80%).

An additional error is an error caused by one of the influencing quantities (for example, the temperature of the measurement environment) going outside the normal conditions.

The concept of accuracy classes. The accuracy class is a generalized characteristic of measuring instruments, determined by the limits of the permissible basic and additional errors, as well as by other properties of these instruments that may affect their accuracy. The accuracy class is expressed as a number coinciding with the value of the permissible error.

An exemplary pressure gauge (sensor) of accuracy class 0.4 has a permissible error of 0.4% of the measurement limit; that is, the error of such a pressure gauge with a measurement limit of 30 MPa should not exceed ±0.12 MPa.

Accuracy classes of pressure measuring instruments: 0.16; 0.25; 0.4; 0.6; 1.0; 1.5; 2.5.
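The pressure-gauge example can be checked with a couple of lines; the sketch below simply multiplies the measurement limit by the accuracy class expressed as a fraction:

```python
def permissible_error(accuracy_class_percent, measurement_limit):
    """Permissible basic error implied by an accuracy class given as a percent of the limit."""
    return accuracy_class_percent / 100 * measurement_limit

# Exemplary pressure gauge: accuracy class 0.4, measurement limit 30 MPa.
print(permissible_error(0.4, 30))  # 0.12 (MPa), i.e. readings must stay within ±0.12 MPa
```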

The sensitivity of a device is the ratio of the displacement of its pointer Δn (the deflection of the arrow) to the change in the measured quantity that caused this displacement. Thus, as a rule, the higher the accuracy of the instrument, the higher its sensitivity.

The main characteristics of measuring instruments are determined in the process of special tests, including calibration, in which the calibration characteristic of the instrument is determined, i.e. the relationship between its readings and the values of the measured quantity. The calibration characteristic is compiled in the form of graphs, formulas or tables.

In the practical use of measurement results it is important to evaluate their accuracy. The term "measurement accuracy", i.e. the degree of approximation of the measurement results to some actual value, does not have a strict definition and is used for a qualitative comparison of measurement operations. For quantitative assessment, the concept of "measurement error" is used (the smaller the error, the higher the accuracy).

The error is the deviation of the measurement result from the actual (true) value of the measured quantity. In this case it should be borne in mind that the true value of a physical quantity is considered unknown and is used in theoretical studies. The actual value of a physical quantity is established experimentally under the assumption that the result of the experiment (measurement) approaches the true value as closely as possible. Evaluation of the measurement error is one of the important measures for ensuring the uniformity of measurements.

Measurement errors are usually given in the technical documentation for measuring instruments or in regulatory documents. True, if we take into account that the error also depends on the conditions in which the measurement itself is carried out, on the experimental error of the methodology and the subjective characteristics of a person in cases where he is directly involved in the measurements, then we can talk about several components of the measurement error, or about the total error .

The number of factors affecting the measurement accuracy is quite large, and any classification of measurement errors (Fig. 2) is conditional to a certain extent, since various errors, depending on the conditions of the measurement process, appear in different groups.

2.2 Types of errors

The measurement error is the deviation of the measurement result X from the true value of the measured quantity. When measurement errors are determined in practice, the actual value Xd is used instead of the true value of the physical quantity.

Depending on the form of expression, there are absolute, relative and reduced measurement errors.

The absolute error is defined as the difference Δ = X - Xtrue or, in practice, Δ = X - Xd, and the relative error as the ratio δ = ±Δ/Xd · 100%.

The reduced error is γ = ±Δ/XN · 100%, where XN is the normalizing value of the quantity, for which the measuring range of the device, the upper limit of measurement, etc. is used.
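To make the difference between the three forms concrete, here is a small sketch; the values and the instrument range are invented, and the measuring range is taken as the normalizing value XN:

```python
def errors(measured, actual, normalizing_value):
    """Absolute, relative and reduced errors for a single measurement."""
    delta = measured - actual                  # absolute error
    relative = delta / actual * 100            # relative error, % of the actual value
    reduced = delta / normalizing_value * 100  # reduced error, % of the normalizing value
    return delta, relative, reduced

# Hypothetical instrument with a 0-100 unit range reading 50.5 when the actual value is 50.0.
print(errors(50.5, 50.0, 100))  # (0.5, 1.0, 0.5)
```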

As the actual value for multiple measurements of a parameter, the arithmetic mean is used:

X̄ = (1/n) · Σ Xi,

where Xi is the result of the i-th measurement and n is the number of measurements.

The value X̄ obtained in one series of measurements is a random approximation to the true value. To assess its possible deviations, the estimate of the standard deviation of the arithmetic mean is determined:

S(X̄) = √( Σ(Xi - X̄)² / (n(n - 1)) ).

To evaluate the dispersion of the individual measurement results Xi with respect to the arithmetic mean, the sample standard deviation is determined:

σ = √( Σ(Xi - X̄)² / (n - 1) ).

These formulas are used provided that the measured quantity remains constant during the measurement process.

These formulas correspond to the central limit theorem of probability theory, according to which the arithmetic mean of a series of measurements always has a smaller error than the error of each individual measurement:

S(X̄) = σ / √n

This formula reflects the fundamental law of error theory. It follows from it that if the accuracy of the result (with the systematic error excluded) needs to be increased by a factor of 2, the number of measurements must be increased by a factor of 4; if the accuracy needs to be increased by a factor of 3, the number of measurements must be increased by a factor of 9, and so on.

It is necessary to clearly distinguish between the use of the values S and σ: the first is used in assessing the error of the final result, and the second in assessing the error of the measurement method. The most probable error of a single measurement is approximately 0.67S.
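A sketch of these estimates using only the Python standard library (the sample values are invented purely to show the calculation):

```python
import math
import statistics

# Hypothetical repeated measurements of one and the same quantity.
x = [10.2, 10.4, 10.3, 10.5, 10.1]
n = len(x)

mean = statistics.fmean(x)       # arithmetic mean of the series
sigma = statistics.stdev(x)      # sample standard deviation (divisor n - 1)
s_mean = sigma / math.sqrt(n)    # standard deviation of the arithmetic mean

print(mean, sigma, s_mean)
```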

Depending on the nature of the manifestation, the causes of occurrence and the possibilities of elimination, there are systematic and random measurement errors, as well as gross errors (misses).

The systematic error remains constant or regularly changes with repeated measurements of the same parameter.

Random error changes randomly under the same measurement conditions.

Gross errors (misses) arise due to erroneous actions of the operator, malfunction of measuring instruments or sudden changes in measurement conditions. As a rule, gross errors are revealed as a result of processing the measurement results using special criteria.

The random and systematic components of the measurement error appear simultaneously, so that their total error is equal to the sum of the errors when they are independent.

The value of the random error is not known in advance, it arises due to many unspecified factors. Random errors cannot be excluded from the results, but their influence can be reduced by processing the measurement results.

For practical purposes, it is very important to be able to correctly formulate the requirements for measurement accuracy. For example, if we take Δ = 3σ as the permissible manufacturing error, then by increasing the requirements for accuracy (for example, to Δ = σ), while maintaining the manufacturing technology, we increase the probability of rejects.
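Assuming the manufacturing error is approximately normally distributed, the effect described above can be illustrated with the standard library; the 3σ and 1σ tolerances repeat the example in the text:

```python
from statistics import NormalDist

def reject_probability(tolerance, sigma):
    """Probability that a normally distributed error falls outside the tolerance."""
    d = NormalDist(mu=0.0, sigma=sigma)
    return 1 - (d.cdf(tolerance) - d.cdf(-tolerance))

print(reject_probability(3.0, 1.0))  # ~0.0027 for a tolerance of 3 sigma
print(reject_probability(1.0, 1.0))  # ~0.317 for a tolerance of 1 sigma
```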

As a rule, it is believed that systematic errors can be detected and eliminated. However, in real conditions, it is impossible to completely eliminate these errors. There are always some non-excluded residuals that need to be taken into account in order to evaluate their boundaries. This will be the systematic measurement error.

In other words, in principle, the systematic error is also random and the indicated division is due only to the established traditions of processing and presenting the measurement results.

Unlike a random error, which is generally identified regardless of its sources, a systematic error is considered in terms of its components, depending on the sources of its occurrence. There are subjective, methodological and instrumental components of the error.

The subjective component of the error is associated with the individual characteristics of the operator. As a rule, this error occurs due to reading errors (approximately 0.1 scale divisions) and incorrect operator skills. Basically, the systematic error arises due to the methodological and instrumental components.

The methodological component of the error is due to the imperfection of the measurement method, the methods of using measuring instruments, the incorrectness of the calculation formulas and the rounding of the results.

The instrumental component arises due to the inherent error of the measuring instruments, determined by the accuracy class, the influence of the measuring instruments on the result and the limited resolution of the measuring instruments.

The expediency of dividing the systematic error into methodological and instrumental components is explained by the following:

To improve the accuracy of measurements, limiting factors can be identified, and, therefore, a decision can be made to improve the methodology or choose more accurate measuring instruments;

It becomes possible to determine the component of the total error, which increases with time or under the influence of external factors, and, therefore, purposefully carry out periodic verification and certification;

The instrumental component can be assessed before the development of the methodology, and only the methodological component will determine the potential accuracy of the selected method.

2.3 Measurement quality indicators

The uniformity of measurements, however, cannot be ensured by the coincidence of errors alone. When making measurements it is also important to know the measurement quality indicators. The quality of measurements is understood as the set of properties that ensure results with the required accuracy characteristics, in the required form and on time.

The quality of measurements is characterized by such indicators as accuracy, correctness and reliability. These indicators should be determined by estimates, which are subject to the requirements of consistency, unbiasedness and efficiency.

The true value of the measured quantity differs from the arithmetic mean of the observation results by the value of the systematic error Δc, i.e. X = X̄ - Δc. If the systematic component is excluded, then X = X̄.

However, due to the limited number of observations, the value X̄ cannot be determined exactly either. One can only estimate its value and indicate, with a certain probability, the boundaries of the interval in which it lies. An estimate of a numerical characteristic of the distribution law of X that is represented by a point on the number axis is called a point estimate. Unlike the numerical characteristics themselves, estimates are random variables, and their values depend on the number of observations n. An estimate is called consistent if, as n → ∞, it converges in probability to the estimated value.

An estimate is called unbiased if its mathematical expectation is equal to the estimated value.

An efficient estimate is one that has the smallest variance (σ² = min).

The listed requirements are satisfied by the arithmetic mean X̄ of the results of n observations.

Thus, the result of a single measurement is a random variable. The measurement accuracy is then the closeness of the measurement results to the true value of the measured quantity. If the systematic components of the error are excluded, then the accuracy of the measurement result is characterized by the degree of dispersion of its value, i.e. by the variance. As shown above, the variance of the arithmetic mean is n times smaller than the variance of an individual observation result.

Figure 3 shows the distribution density of the individual and total measurement results. The narrower shaded area refers to the probability density of the distribution of the mean. The correctness of the measurements is determined by the closeness of the systematic error to zero.

The reliability of measurements is determined by the degree of confidence in the result and is characterized by the probability that the true value of the measured quantity lies within the indicated neighborhood of the actual one. These probabilities are called confidence probabilities, and the boundaries (neighborhoods) are called confidence limits. In other words, the reliability of a measurement is the closeness to zero of its non-excluded systematic error.

A confidence interval with boundaries (confidence limits) from -Δd to +Δd is the interval of random error values which, with a given confidence probability Pd, covers the true value of the measured quantity.

Pd(X̄ - Δd ≤ X ≤ X̄ + Δd).

With a small number of measurements (n < 20) it is not possible to determine the confidence interval using the normal law, since the normal distribution law describes the behavior of a random error, strictly speaking, only for an infinitely large number of measurements.

Therefore, with a small number of measurements, the Student distribution, or t-distribution (proposed by the English statistician Gosset, who published under the pseudonym "Student"), is used; it makes it possible to determine confidence intervals with a limited number of measurements. The boundaries of the confidence interval are determined by the formula:

Δd = t · S(X̄),

where t is the Student distribution coefficient, which depends on the chosen confidence probability Pd and the number of measurements n.

As the number of observations n increases, the Student's distribution quickly approaches the normal one and coincides with it already at n ≥ 30.
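A sketch of the confidence-interval calculation; it assumes SciPy is available for the Student coefficient t, and the sample values are invented:

```python
import math
import statistics
from scipy import stats  # assumed to be installed; used only for the Student coefficient

x = [10.2, 10.4, 10.3, 10.5, 10.1]   # hypothetical repeated measurements
n = len(x)
mean = statistics.fmean(x)
s_mean = statistics.stdev(x) / math.sqrt(n)

p = 0.95                                # chosen confidence probability
t = stats.t.ppf((1 + p) / 2, df=n - 1)  # Student coefficient for n - 1 degrees of freedom
delta = t * s_mean                      # half-width of the confidence interval

print(f"{mean:.2f} ± {delta:.2f} (P = {p})")
```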

It should be noted that measurement results that do not have reliability, that is, the degree of confidence in their correctness, are of no value. For example, a measuring circuit sensor can have very high metrological characteristics, but the influence of errors from its installation, external conditions, recording and signal processing methods will lead to a large final measurement error.

Along with such indicators as accuracy, reliability and correctness, the quality of measuring operations is also characterized by the convergence and reproducibility of the results. These indicators are most common in assessing the quality of tests and characterize their accuracy.

Obviously, two tests of the same object by the same method do not give identical results. An objective measure of this can be statistically sound estimates of the expected closeness of the results of two or more tests obtained with strict adherence to their methodology. Convergence and reproducibility are taken as such statistical estimates of the consistency of test results.

Convergence is the closeness of the results of two tests obtained by the same method, on identical setups, in the same laboratory. Reproducibility differs from convergence in that both results must be obtained in different laboratories.

The ability to measure quantities accurately is what allows people to bring order to their lives and their environment. It would be impossible to imagine a life without the familiar, universally accepted concepts of time, length or mass. However, in addition to being able to distinguish these quantities, it is equally important to learn how to determine and calculate distances and segments, weight, the speed of moving objects, and the passage of intervals of time. Over the thousands of years of its existence, humanity has acquired a great deal of invaluable knowledge and managed to systematize it into separate sciences.

Concepts and notation - the basics of metrology

Metrology is the science of measuring different quantities. It explains what a measure is and what the unity and standardization of quantities mean, defines such concepts as measurement accuracy and error, and introduces a variety of measuring instruments and tools.

The measurement process is associated with determining data about a particular quantity through experiments and then correlating the values obtained with generally accepted standards and units. Thus, we can assume that measurement accuracy directly depends on how close the data obtained as a result of the experiments are to the true values of the quantity, which in principle are not subject to dispute and serve as an axiom.

Absolute inaccuracy

Scientists say that it is almost impossible to measure anything absolutely correctly. The fact is that there are too many factors that influence the process of determining the value, independent of human actions. In this regard, metrology admits the possibility of the existence of errors, which are inaccuracies obtained in the measurement process, as well as a certain indicator that shows deviations from the generally accepted truth and norm.

An error can be systematic or random. The first is practically impossible to exclude during an experiment, because it is a factor that distorts the result every time, while a random error can be the result of a blunder or of inaccuracy in the analytical work.

It is also possible to reduce the probability of error by using more advanced methods and tools and by minimizing external influences during the experimental determination of values. An elementary example of reducing error is the use of clocks in which time is measured not in hours and minutes but in fractions of a second, which is possible with electronic stopwatches.

Measure seven times...

The need for absolutely accurate knowledge of quantities stems from the high technological level of the modern world. If the first piece of furniture was a roughly knocked-together stool whose parts were cut out by eye, modern technologies help create the parts of the same stools with an error of no more than a millimeter. Such microscopic values may be completely unimportant in a person's everyday life, but when measurement accuracy concerns science, medicine or production, it becomes a decisive factor in the success of an enterprise.

If you look closely, every person has the simplest measuring instruments at home. Elementary examples are a construction tape measure, a ruler, kitchen or floor scales, steelyards, electricity, water and gas meters, various timers and clocks, and thermometers. Using the latter as an example, one can once again demonstrate the methods and accuracy of measurement. Thus, an ordinary thermometer installed indoors to determine the air temperature in a room has a scale with divisions of ten degrees, while a mercury thermometer designed to measure the temperature of the human body is graduated in tenths of a degree, which helps to reduce the likelihood of errors when taking a patient's history.

What is length and how to measure it?

One of the most recognizable and well-defined quantities is length. Initially, people probably measured distance in steps, but now units of distance are standardized. The world standard is the metric system, in which the largest distances are measured in kilometers, which in turn are divided into meters, centimeters and millimeters. There are also intermediate values (decimeters, micrometers), but they are generally used only in highly specialized areas.

In order to determine a length, it is necessary to select a specific segment that has a beginning and an end (points A and B); the length is then the greatest distance in the plane between these points. Many tools have been created to measure length, from elementary ones such as a tape measure and a ruler to control and measuring equipment of a high degree of accuracy with a minimal error.

Home appliances for measuring length

It is unlikely that an ordinary person will need to measure long distances, each of us approximately knows the length of his routes, such data can be clarified using a car speedometer, a sports and tourist pedometer, or even using a smartphone by downloading a special program into it.

At home, measuring tools are most often used for construction and repair. A construction tape measure is something any man has in his pantry: a metal tape with a scale from 0 to 3, 5, 7.5 or 30 meters applied to one or both of its sides, with additional centimeter and millimeter divisions. An alternative to a simple tape measure is a laser rangefinder, with which distances of up to 250 m can be measured; moreover, measuring a length with it is easy even on your own. There are also models that display the area and volume of a room.

Calipers

Measurement with a caliper gives the most accurate result. This is a device used in industry that makes it possible to find the linear dimensions of parts ranging in size from 0.1 mm to 15 cm with a minimal error. To find out how close the scale is to the true value, comparative methods can be used: comparison with an already verified tool or with a finished part of a suitable size.

There are several types of this device, their principle of operation is similar, they differ in the length of the millimeter scale and the mechanism by which the actual measurement is made. Vernier calipers are the most difficult to work with, but this option makes it possible to minimize systematic errors. In a device with a dial or digital screen, measurements are made using electronics, and if the instrument is of good quality, then its results are reliable with a high degree of probability.

Complex technologies

Even more sophisticated computing equipment is the instrumentation used in industrial enterprises and in organizations involved in installing power lines and laying television, telephone and Internet cables. This equipment performs several functions at once. Its main task is to measure the length of the cable; along the way, however, the device can detect faults in the wire and indicate the location of a break, which significantly reduces the money and time required for repair work.

There are different classes of measuring instruments. The most elementary are manual installations with cable length meters, more complex options are able to calculate not only the length of wires, but also measure wide rolls with fabrics, paper, and various types of cords. In addition to the fact that their use is advisable on production lines, the introduction of such equipment in warehouses and large retail outlets is spreading.

How to embrace the immensity

Measuring time is also a difficult and important task. In everyday situations, few people pay attention to the fact that a personal watch may run several minutes fast or slow relative to the generally accepted standard. However, public organizations and enterprises cannot afford such liberties, and therefore they check their time against the indicators of state institutions, which in turn are guided by data obtained from satellites.

It is worth noting that such a concept as exact time is rather arbitrary. The time zones into which the planet is divided are largely a matter of convention and depend directly on state borders, and sometimes on the political will of the governments of different countries.

As is known, when measuring (testing, controlling, analyzing) a physical quantity, the result must be expressed with an accuracy corresponding to the task and established requirements.

Measurement result accuracy is a qualitative indicator which, when the results of observations (single observed values) are processed, must be expressed through quantitative characteristics. According to GOST R 50779.10-2000 (ISO 3534.1-93) "Statistical methods. Probability and fundamentals of statistics. Terms and definitions", an observed value is the value of a characteristic obtained as the result of a single observation in multiple measurements.

A number of accuracy indicators are currently used in existing regulatory documents. Our analysis of normative and legislative documents showed that the Federal Law “On Ensuring the Uniformity of Measurements” does not contain a definition of the fundamental metrological concept “indicators of measurement accuracy”.

In the recently used (RMG 29-99) and new (RMG 29-2013) terminological documents, the concept of "measurement accuracy indicators" and its definition are also not regulated.

Among the relevant documents (interstate - GOST, national - GOST R, as well as methodological instructions and recommendations - MI, R, RD), we also did not find a standard regulating the indicators of measurement accuracy and the form of their expression.

However, in the note to the concept of “measurement result” given in RMG 29-2013, it is indicated that “... accuracy indicators include, for example, the standard deviation, confidence limits of the error, standard measurement uncertainty, total standard and extended uncertainty”.

GOST R ISO 5725-1-2002 defines accuracy as the degree of closeness of a measurement result to an accepted reference value. The normative document reflects the concept of "accepted reference value" used in international metrological practice instead of the concept of "true value of a physical quantity" characteristic of domestic metrology until 2003 (before the adoption of MS ISO 5725 in our country).

As a note (with reference to the international standard), the document explains that, in relation to multiple measurements, the term "accuracy", when it refers to a series of measurement results (tests), includes a combination of random components and a total systematic error (ISO 3534-1), which does not contradict the approach of expressing accuracy through the error components of the measurement result. In addition to the general concept of accuracy as a qualitative characteristic, an explanation is given of which parameters can be taken as quantitative characteristics of multiple measurements (tests).

However, until 1986 accuracy indicators in our country were regulated by GOST 8.011-72 "GSI. Indicators of measurement accuracy and forms of expression of measurement results". Currently, GOST 8.011-72 has been replaced by MI 1317 (the document is in force in its 2004 edition).

In metrological practice, the accuracy of measurements is described by a number of indicators shown in Figure 1.3; some of them are expressed in terms of error and the others in terms of uncertainty.

The new version of the International Vocabulary of Metrology, VIM 3 (2010), emphasizes that the concept of "measurement accuracy" is not a quantity and cannot be assigned a numerical value. A measurement is considered more accurate if it has a smaller measurement error. In addition, VIM 3 notes that a complete characterization of measurement accuracy can be obtained by evaluating both accuracy indicators: correctness and precision. The term "measurement accuracy" should not be used to mean measurement correctness, and the term "measurement precision" should not be used to mean "measurement accuracy", although the latter is related to both of these concepts.

Figure 1.3 - Indicators of the accuracy of the results, traditionally used in regulatory documents

Of all the accuracy indicators presented and traditionally used in metrological practice, we have singled out only those that give a complete picture of the accuracy of measurement results. The results of the analysis are summarized in Tables 1.1 and 1.2.

As "indicators of measurement accuracy", as follows from the diagram (Figure 1.4), characteristics can also be used,

regulated by GOST R 8.563-2009:

Characteristics of measurement error according to MI 1317-2004;

Characteristics of uncertainty according to RMG 43-2001 (the application of this methodological document on the territory of the Russian Federation was discontinued from 01.10.2012);

Accuracy indicators according to GOST R ISO 5725-2002.

Table 1.1 - Analysis of the possibility of applying error characteristics as indicators of the accuracy of the measurement result

(Columns: Indicator; Characteristic or mathematical expression in the concept of error or uncertainty; Comment)

1 Measurement error

Expression (1) is theoretical, since the true value of the measured quantity always remains unknown, therefore equation (2) is used in practice. As a model of measurement error, a model of a random variable (or a random process) is taken. Therefore, metrologists do not consider the possibility of using expression (2) to develop ideas about the indicators of measurement accuracy.

2 Limits within which the measurement error lies with a given probability

The limits of measurement error for a given probability give full reason to judge the possible degree of closeness of the measurement result to the actual value of the measured quantity.

3 Mean square deviation of the error

Knowledge of σΔ makes it possible (under certain assumptions about the form of the probability density function of the error) to estimate the range of values within which the actual value Xd can lie.

4 Standard deviation of the random component of the measurement error

Knowing only the standard deviation of the random component of the measurement error does not, in the general case, allow one to judge the possible degree of closeness of the measurement results to the actual value Xd, since in addition to the random component of the measurement error there may also be a systematic component.


5 Convergence of measurement results (estimated by measures of convergence)

By itself, the convergence of measurements does not give the slightest idea of ​​the limits in which the measurement error may lie.

6 Reproducibility of measurement results (estimated by measures of reproducibility)

Like the convergence of measurements, reproducibility also does not give an idea of ​​the limits in which the measurement error can lie.

7 Standard deviation of the systematic component of the measurement error

By themselves, the characteristics of the systematic component of the measurement error (however satisfactory they may seem) do not allow one to judge the boundaries within which the total measurement error can lie (with a given probability), because the role of the random component of the measurement error is not taken into account.

8 Limits within which the non-excluded systematic component of the measurement error lies with a given probability

9 Measurement precision

Characterizes the degree of closeness between independent measurement results obtained under specified accepted conditions.

Knowing only the standard deviation that characterizes precision does not allow one to judge the degree of possible closeness of the measurement results to the actual value Xd.

The measurement accuracy indicators regulated by the national standard GOST R ISO 5725-2002, which is harmonized with international requirements, are shown in Figure 1.5.


Figure 1.4 - Measurement accuracy indicators of the methodology regulated by GOST R 8.563-2009


Figure 1.5 - Indicators of measurement accuracy, regulated in GOST R ISO 5725-1-2002

Table 1.2 - Analysis of the possibility of applying uncertainty characteristics as indicators of the accuracy of a measurement result

(Columns: Indicator; Characteristic or mathematical expression in the concept of error or uncertainty)