
Analysis of the reasons for the significant differences in weighing data after replacing and repairing analog sensors with the same measuring range

2025-11-28

In the daily operation and maintenance of industrial weighing systems, the following problem is frequently encountered: after an analog load cell is replaced or repaired, the weighing result still deviates significantly even though the new cell's nominal range is the same as the original's. In some cases the error even exceeds the normal allowable range, seriously affecting the measurement accuracy of production.

The phenomenon seems simple, but it is in fact closely tied to subtle differences in the manufacturing process, performance parameter control, and national standard requirements of analog load cells. Drawing on the Chinese national standard GB/T 7551-2019 Load Cells, this article starts from the manufacturing requirements for the core performance parameters of load cells and analyzes the deeper reasons why data can deviate even when ranges are identical.


1. Manufacturing Requirements for Core Performance Parameters of Analog Load Cells in National Standards

The standard GB/T 7551-2019 Load Cells, as the core standard for the production and testing of analog load cells in China, clearly specifies the manufacturing accuracy requirements for multiple key performance parameters of load cells with the same range. These parameters directly determine the weighing accuracy of the load cell, and are also the key source of subsequent data differences.

Among them, the parameters most closely related to data deviation mainly include the following four categories:


(1) Sensitivity and Sensitivity Temperature Coefficient

Sensitivity is one of the core indicators of an analog load cell. It refers to the sensor's output signal per volt of excitation at the rated load (the upper limit of the measuring range), expressed in mV/V.

According to the standard, the typical sensitivity of analog load cells is generally 2.0 mV/V ± 0.02 mV/V (or another fixed nominal value with a small allowable deviation).

At the same time, the standard also specifies the limit of the sensitivity temperature coefficient:
Within the operating temperature range of −10°C to +40°C, the variation of sensitivity with temperature must be ≤ 0.002% FS/°C (FS = full scale).

This means that even if two load cells have the same nominal range, small differences in sensitivity (for example, 2.01 mV/V versus 1.99 mV/V) or in the sensitivity temperature coefficient will lead to different analog output signals (voltage/current) under the same load, which are eventually converted into deviations in the weighing data.
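
A minimal sketch of this effect, with all names and values illustrative; signals are expressed per volt of excitation (mV/V), so the excitation voltage cancels out:

```python
def output_mv_v(load_kg: float, capacity_kg: float, sensitivity_mv_v: float) -> float:
    """Ideal output of a strain-gauge load cell, in mV per volt of excitation."""
    return load_kg / capacity_kg * sensitivity_mv_v

def displayed_kg(signal_mv_v: float, assumed_mv_v: float, capacity_kg: float) -> float:
    """Weight the indicator computes if it assumes a given sensitivity."""
    return signal_mv_v / assumed_mv_v * capacity_kg

CAPACITY = 100.0   # kg, rated capacity (assumed)
NOMINAL = 2.0      # mV/V, sensitivity the indicator is spanned for

for actual in (2.01, 1.99):   # two cells, both within the ±0.02 mV/V allowance
    sig = output_mv_v(50.0, CAPACITY, actual)
    print(f"{actual} mV/V cell -> {sig:.4f} mV/V -> displays "
          f"{displayed_kg(sig, NOMINAL, CAPACITY):.2f} kg")
# 2.01 mV/V cell -> 1.0050 mV/V -> displays 50.25 kg
# 1.99 mV/V cell -> 0.9950 mV/V -> displays 49.75 kg
```

The same 50 kg load thus reads 0.5 kg apart on two cells that both meet the sensitivity tolerance.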


(2) Nonlinearity Error

Nonlinearity error refers to the maximum deviation of the sensor's actual output-versus-load curve from the ideal straight line.

The national standard requires that the nonlinearity error of analog load cells be:

  • ≤ 0.02% FS (Class C), or

  • ≤ 0.01% FS (Class B).

For load cells with the same range, differences in nonlinearity may arise due to variations in manufacturing processes, such as:

  • Machining precision of the elastic element

  • Flatness and thickness uniformity of the strain gauge area

  • Deviations in strain gauge bonding position

For example:
The original load cell has a nonlinearity error of 0.01% FS, while the replacement has 0.018% FS.
At a load close to full capacity (e.g., a 100 kg load cell loaded with 90 kg), the output signal difference can reach (0.018% − 0.01%) × 100 kg = 0.008 kg.

If the range is larger (e.g., 1000 kg), the deviation expands further to (0.018% − 0.01%) × 1000 kg = 0.08 kg.

This is already sufficient to significantly affect weighing accuracy.
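
The worst-case contribution of nonlinearity alone follows from a one-line conversion; a minimal sketch using the figures from the example above:

```python
def nonlinearity_spread_kg(err_old_pct_fs: float, err_new_pct_fs: float,
                           capacity_kg: float) -> float:
    """Difference between two cells' maximum nonlinearity deviations,
    given in % FS, converted to kilograms."""
    return (err_new_pct_fs - err_old_pct_fs) / 100 * capacity_kg

print(nonlinearity_spread_kg(0.01, 0.018, 100.0))   # ≈ 0.008 kg on a 100 kg cell
print(nonlinearity_spread_kg(0.01, 0.018, 1000.0))  # ≈ 0.08 kg on a 1000 kg cell
```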




(3) Hysteresis Error

Hysteresis error refers to the maximum difference in the output signal of a load cell under the same load during the loading and unloading processes.

According to the national standard, the hysteresis error should be ≤ 0.02% FS (Class C) or ≤ 0.01% FS (Class B).

This error mainly originates from the material properties of the elastic element of the load cell (such as mechanical hysteresis characteristics) and inconsistencies in the bonding properties of the strain gauge. If the elastic structure uses different batches of alloy materials, or the curing characteristics of the bonding adhesive for strain gauges are inconsistent, hysteresis errors will differ from those of the original sensor.

For example, in frequent loading–unloading applications (such as dynamic conveyor weighing):

  • The original load cell outputs 1.000 mV at 50 kg loading, and 0.999 mV at 50 kg unloading, resulting in a hysteresis error of 0.001 mV.

  • The replacement load cell outputs 1.000 mV at 50 kg loading, and 0.997 mV at 50 kg unloading, resulting in a hysteresis error of 0.003 mV.

Over long-term operation, this will lead to repeatability deviations in the weighing data.
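
Converted into weight, the extra hysteresis of the replacement cell can be quantified directly; a minimal sketch, assuming a 2 mV full-scale signal for the 100 kg cell as in the figures above:

```python
def hysteresis_kg(loading_mv: float, unloading_mv: float,
                  full_scale_mv: float, capacity_kg: float) -> float:
    """Loading/unloading output difference at the same load, expressed in kg."""
    return abs(loading_mv - unloading_mv) / full_scale_mv * capacity_kg

print(hysteresis_kg(1.000, 0.999, 2.0, 100.0))  # original cell:    ≈ 0.05 kg
print(hysteresis_kg(1.000, 0.997, 2.0, 100.0))  # replacement cell: ≈ 0.15 kg
```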


(4) Zero Drift and Zero Temperature Coefficient

Zero drift refers to the variation in the output signal of the load cell over time under the no-load (zero) condition.
The zero temperature coefficient indicates the magnitude of zero-point variation with temperature changes.

According to the national standard, the zero temperature coefficient should be ≤ 0.002% FS/°C.

The zero stability of analog load cells largely depends on the temperature stability of the strain gauge and the compensation design of the circuit. If the new load cell does not undergo sufficient temperature compensation during production (e.g., deviation in the selection of compensation resistor values), or if the temperature sensitivity of the strain gauge differs from that of the original sensor, environmental temperature changes (such as day–night temperature differences or thermal effects from equipment operation) will cause significant zero-point output deviations.

For example:

  • The original load cell outputs 0.000 mV at 20°C under no load, and 0.001 mV at 30°C.

  • The replacement load cell outputs 0.000 mV at 20°C, and 0.003 mV at 30°C.

A temperature change of only 10°C results in an additional 0.002 mV of zero drift relative to the original sensor, which, when converted into weight data, may cause the scale to display a positive or negative value at zero load, seriously affecting actual weighing results.
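
A minimal sketch of how a zero temperature coefficient translates into kilograms of drift (capacity and temperature swing are assumptions):

```python
def zero_drift_kg(tc_pct_fs_per_c: float, delta_t_c: float, capacity_kg: float) -> float:
    """Zero-point shift for a temperature change, coefficient given in % FS/°C."""
    return tc_pct_fs_per_c / 100 * delta_t_c * capacity_kg

# At the standard's 0.002% FS/°C limit, a 10 °C swing on a 100 kg cell:
print(zero_drift_kg(0.002, 10.0, 100.0))   # ≈ 0.02 kg
```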


2. Practical Scenarios and Cause Analysis of Data Deviations Despite Same Rated Range

Even if the rated range of the replacement load cell is identical to that of the original, the subtle parameter differences described above are amplified during actual replacement and maintenance through the entire chain of signal acquisition → transmission → processing, and ultimately appear as significant deviations in the weighing data.

Based on actual operation and maintenance scenarios, the specific causes can be classified into the following three categories:


(I) Production Process Variations: "Hidden Performance Differences" in Sensors of the Same Range

National standards specify the allowable ranges for performance parameters but do not require parameters of sensors with the same range to be identical. As long as they fall within the limits, sensors from different manufacturers or batches can still have minor differences, which become directly exposed after replacement.

For example, a factory uses a 100 kg analog sensor (Class C). The original sensor from Manufacturer A has a sensitivity of 2.005 mV/V, a nonlinearity error of 0.012% FS, and a zero temperature coefficient of 0.0015% FS/°C. The newly replaced sensor from Manufacturer B has a sensitivity of 1.995 mV/V, a nonlinearity error of 0.018% FS, and a zero temperature coefficient of 0.0018% FS/°C. From a standards perspective, both meet Class C requirements. However, in practical application:

* When a 50 kg load is applied, the original sensor's output is (50 kg / 100 kg) × 2.005 mV/V = 1.0025 mV per volt of excitation (the mV/V figure is independent of the excitation voltage, typically 10 V), while the new sensor's output is (50 kg / 100 kg) × 1.995 mV/V = 0.9975 mV/V. The sensitivity difference alone causes a signal deviation of 0.005 mV/V, which for an instrument spanned at the nominal 2.0 mV/V corresponds to a weight data deviation of 0.005 mV/V ÷ (2.0 mV/V / 100 kg) = 0.25 kg.
* If the ambient temperature increases from 20°C to 30°C, the zero drift of the original sensor is 0.0015% FS/°C × 10°C × 100 kg = 0.015 kg, while for the new sensor it is 0.0018% FS/°C × 10°C × 100 kg = 0.018 kg, adding another 0.003 kg of deviation. The combined total deviation reaches about 0.25 kg, as the sketch below reproduces. If the scale is used for food packaging (e.g., requiring ±0.05 kg accuracy), this deviation would directly cause products to be over- or under-weight.
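
A minimal sketch reproducing the two contributions, using the assumed Manufacturer A/B values and per-volt signals:

```python
CAPACITY_KG = 100.0
NOMINAL_MV_V = 2.0   # sensitivity the indicator is spanned for

# Span mismatch: reading difference between the two cells at a 50 kg load.
dev_span = 50.0 * abs(2.005 - 1.995) / NOMINAL_MV_V       # ≈ 0.25 kg

# Extra zero drift of the new cell over a 10 °C rise (coefficients in % FS/°C).
dev_zero = (0.0018 - 0.0015) / 100 * 10 * CAPACITY_KG     # ≈ 0.003 kg

print(f"total deviation ≈ {dev_span + dev_zero:.3f} kg")  # ≈ 0.253 kg
```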

Furthermore, some smaller manufacturers, to reduce costs, may not strictly calibrate parameters according to national standards. For instance, the actual sensitivity deviation might reach 0.05 mV/V (exceeding the standard requirement of ±0.02 mV/V), yet the sensor is still labeled as a "100 kg range" product. Data differences after replacing with such a sensor are even more pronounced.

(II) Installation and Calibration Processes: Failing to Meet the "Signal Adaptation Requirements" of the Original System

The accuracy of data from analog sensors depends not only on their own performance but is also closely related to the installation method and system calibration. Even if a replacement sensor's parameters comply with national standards, failure to operate according to the original system's adaptation requirements during replacement can lead to data deviations.

1. Installation Position and Load State Deviation
The output signal of an analog sensor is directly related to the direction of force and the flatness of installation. National standards require that during installation the load act vertically on the center of the elastic element, and that the flatness error of the mounting surface be ≤ 0.1 mm/m. If the replacement sensor is installed with a positional offset (e.g., a 5 mm shift from the original center position) or on an unleveled mounting surface (e.g., a 0.2 mm/m tilt), the actual force on the sensor will not align with the rated load direction of its nominal range. For instance, a 100 kg sensor might experience 98 kg of vertical load while also bearing an additional 2 kg of lateral force, causing the output signal to be lower than normal and manifesting as a weighing data deviation.
Additionally, in multi-sensor assemblies (e.g., vehicle scales, hoppers), national standards require that the load distribution uniformity deviation among sensors be ≤ 1% FS. If the height of the replacement sensor is not adjusted (e.g., leaving a height difference of more than 0.5 mm relative to the other sensors), the load may concentrate on the other sensors, leaving the new sensor under-loaded and the overall weighing data lower than expected.


2. Failure to Re-perform System Calibration

The signal from an analog sensor must pass through amplification, filtering, and analog-to-digital conversion in the instrument before it can be converted into weighing data. National standards require that an analog weighing system undergo system calibration again after a sensor is replaced: standard weights are loaded, and the instrument's amplification factor and zero-point compensation are adjusted so that the sensor's output signal matches the standard weight.

If calibration is not performed after replacement and the instrument continues to use the parameters of the original sensor (e.g., the original sensitivity of 2.005 mV/V versus the new sensor's 1.995 mV/V), the weight calculated by the instrument will deviate. For example, when a 50 kg standard weight is loaded, the new sensor outputs 0.9975 mV/V (as in the previous case); if the instrument still calculates based on the 2.005 mV/V sensitivity, the resulting weight is 0.9975 mV/V ÷ (2.005 mV/V / 100 kg) ≈ 49.75 kg, which differs from the actual 50 kg by 0.25 kg, a deviation far exceeding the standard allowable range.
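
A minimal sketch of this failure mode; the helper name displayed_kg and all values are illustrative assumptions:

```python
def displayed_kg(signal_mv_v: float, spanned_mv_v: float, capacity_kg: float) -> float:
    """Weight the instrument shows if it was spanned for a given sensitivity."""
    return signal_mv_v / spanned_mv_v * capacity_kg

new_cell_signal = 50.0 / 100.0 * 1.995              # new cell's output at 50 kg, in mV/V
print(displayed_kg(new_cell_signal, 2.005, 100.0))  # ≈ 49.75 kg instead of 50 kg
```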

Some users mistakenly believe that "sensors with the same range can be directly replaced" and overlook the system calibration step, which is a common cause of data discrepancies.

(III) Aging and Wear: "Performance Decay Differences" Between Old and New Sensors

After long-term use, analog sensors experience performance parameter shifts from their initial state due to aging and wear. New sensors are in their "initial performance state." Even if the range is the same, the parameter differences between the old and new sensors can lead to data deviations—a phenomenon particularly evident when replacing sensors that have been in use for over 5 years.

According to national standards, the typical service life of an analog sensor is 10 years. However, performance decay accelerates in harsh environments (e.g., high temperature, humidity, dust):
* The elastic element may undergo "plastic deformation" under long-term load, leading to decreased sensitivity (e.g., from 2.0 mV/V to 1.98 mV/V).
* Aging of the strain gauge bonding layer can increase hysteresis error (e.g., from 0.01% FS to 0.03% FS).
* Oxidation of compensation resistors in the circuit can exacerbate zero drift (e.g., from 0.001 mV/h to 0.005 mV/h).

When a new sensor is installed, its parameters comply with the initial requirements of the national standard (e.g., sensitivity 2.005 mV/V, hysteresis error 0.012% FS). However, the system's instrument may have been adapted to the "decayed parameters" of the old sensor (e.g., calculating based on an effective sensitivity of 1.98 mV/V). If not recalibrated, the output signal of the new sensor will be "over-amplified" by the instrument, manifesting as heavier weighing data. For instance, under a 50 kg load, the new sensor outputs 1.0025 mV/V. If the instrument calculates using the old sensor's sensitivity of 1.98 mV/V, the resulting weight is 1.0025 mV/V ÷ (1.98 mV/V / 100 kg) ≈ 50.63 kg, differing from the actual 50 kg by 0.63 kg.
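
The same assumed helper reproduces the aging case:

```python
# Indicator still spanned for the old cell's decayed 1.98 mV/V sensitivity.
displayed_kg = lambda sig, span, cap: sig / span * cap   # same assumed helper as above
print(displayed_kg(1.0025, 1.98, 100.0))   # ≈ 50.63 kg for a true 50 kg load
```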

3. Solutions: Reducing Data Discrepancies through Standard Compliance and Operational Optimization

To prevent data discrepancies after replacing analog sensors of the same range during maintenance, the entire process of selection → installation → calibration must be managed, strictly adhering to national standard requirements while optimizing operations for the actual application scenario:


(I) Selection: Prioritize Compliant Products with Matched Parameters

* During replacement, priority should be given to products of the same manufacturer and same model as the original sensor, to ensure that parameters such as sensitivity, nonlinearity error, and temperature coefficients are consistent (deviation ≤ 0.01 mV/V or 0.005% FS).

* If the same model is unavailable, request parameter test reports compliant with GB/T 7551-2019 from the manufacturer, focusing on key indicators such as sensitivity, nonlinearity error, and zero temperature coefficient, and keep deviations as small as possible (e.g., sensitivity deviation ≤ 0.005 mV/V).

(II) Installation: Strictly Adhere to Standard Requirements to Ensure Uniform Load Distribution

* Before installation, check the flatness of the mounting surface (use a level to ensure the error is ≤ 0.1 mm/m). During installation, ensure the force acts vertically on the sensor and avoid lateral forces.

* For multi-sensor assemblies, use height gauges to adjust the height difference between sensors to ≤ 0.2 mm, ensuring even load distribution.

(III) Calibration: System Calibration is Mandatory After Replacement

* According to the national standard "GB/T 14249.1-2008 Weighing Instruments - General Technical Requirements", after replacing an analog sensor, "multi-point calibration" must be performed using standard weights (accuracy class not lower than M1), including at least five points: zero, 25% FS, 50% FS, 75% FS, and 100% FS.
* Adjust the amplification factor and zero-point compensation via the instrument so that the weighing data error at each calibration point is within the range allowed by the national standard (e.g., for Class III instruments, the permitted error is ≤ 0.1%), as sketched below.
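
A minimal sketch of that five-point check, taking 0.1% of full scale as the illustrative Class III tolerance; the capacity and indicator readings are hypothetical:

```python
CAPACITY_KG = 100.0
TOL_KG = 0.001 * CAPACITY_KG   # 0.1% FS -> 0.1 kg (illustrative tolerance basis)

points_kg = [0.0, 25.0, 50.0, 75.0, 100.0]          # zero, 25/50/75/100% FS
readings_kg = [0.01, 25.03, 49.96, 75.08, 100.09]   # hypothetical indicator readings

for applied, shown in zip(points_kg, readings_kg):
    status = "OK" if abs(shown - applied) <= TOL_KG else "RECALIBRATE"
    print(f"{applied:6.1f} kg -> {shown:7.2f} kg  {status}")
```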

4. Summary

The occurrence of weighing data discrepancies after replacing analog sensors of the same range essentially stems from the conflict between the "parameter deviations permitted by national standards" and the "precision requirements of practical application scenarios," coupled with operational oversights in installation and calibration.

Although "GB/T 7551-2019" provides a compliant framework for sensor production, it does not eliminate the subtle performance variations among products of the same range. These variations are amplified in practice through the signal processing chain, ultimately affecting weighing accuracy.