I've spent many hours reading posts across the forums on this site and I have learned a great deal, but I find myself stuck trying to understand a load cell calculation I have inherited. I am in the process of rewriting software that obtains readings from a load cell via a microcontroller in real time and displays the result in a graph. While I could just take what is in the code and move it to the rewrite, I was hoping to understand why it is written the way it is.

What I have so far ...

The load cell specifications can be found here: http://discountloadcells.com/doc/spec_sheets/s_type/hbm_rsc.pdf. Note that mine has a 500 lb rating rather than kg.

The calculation in code is:

LC Factor = ((LC Rating) /( LC Calibration * LC Sensitivity * Excitation V)) * Zero Offset

Scale Reading = (LC Rating * V measured) / LC Factor

where:

LC Rating = 500 lbs (from the load cell markings/spec sheet)

LC Calibration = 2 <-- don't understand where this came from

LC Sensitivity = 2 (mV/V, from the spec sheet)

Excitation V = 5 (from the microcontroller, spec sheet, and digi-meter)

Zero Offset = .004882 V (looks to be the measured voltage for the load cell unloaded; it's roughly what the digi-meter shows, at least)

Given the above, the load cell factor is 0.12207. At a voltage reading of 0.015, the code outputs 61.44 lbs.
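To make the inherited calculation concrete, here is a minimal Python sketch of it as I read it. The variable names are mine, not from the original code, and I'm using the zero offset exactly as written above (the code's stored value presumably has a few more digits, which is why my factor comes out at 0.12205 rather than 0.12207):

```python
# Sketch of the inherited calculation; names are my own.
LC_RATING = 500.0        # lbs, from the load cell markings/spec sheet
LC_CALIBRATION = 2.0     # the mystery constant
LC_SENSITIVITY = 2.0     # mV/V, from the spec sheet
EXCITATION_V = 5.0       # V
ZERO_OFFSET = 0.004882   # V, output with the cell unloaded

# "Load cell factor" exactly as the inherited code computes it
lc_factor = (LC_RATING / (LC_CALIBRATION * LC_SENSITIVITY * EXCITATION_V)) * ZERO_OFFSET

def scale_reading(v_measured):
    """Pounds as reported by the inherited code."""
    return (LC_RATING * v_measured) / lc_factor

print(lc_factor)             # ~0.12205
print(scale_reading(0.015))  # ~61.45 lbs
```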

Given everything I have read, the calculation does not seem correct. I especially do not understand where the LC Calibration value originates or why a load cell factor is involved at all. Everything I have read seems to indicate that the calculation to derive lbs should be as simple as (LC Rating * V measured) / (LC Sensitivity * Excitation V), with the offset subtracted (i.e. Y = MX + B). Could someone help me:

1) Understand why this calculation is written the way it is (i.e. Load Cell factor)

2) Where the LC Calibration is coming from and what it represents
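For comparison, here is a minimal sketch of the simpler calculation I expected. The names and the mV-to-V conversion factor are my own assumptions; full-scale output is taken as sensitivity (mV/V) times excitation:

```python
# Sketch of the textbook Y = MX + B calculation I expected; names are my own.
LC_RATING = 500.0        # lbs
LC_SENSITIVITY = 2.0     # mV/V, from the spec sheet
EXCITATION_V = 5.0       # V
ZERO_OFFSET = 0.004882   # V, output with the cell unloaded

# Output voltage at full rated load: 2 mV/V * 5 V = 0.010 V
full_scale_v = LC_SENSITIVITY * 1e-3 * EXCITATION_V

def expected_reading(v_measured):
    """Pounds from the simple linear model, offset subtracted."""
    return LC_RATING * (v_measured - ZERO_OFFSET) / full_scale_v

print(expected_reading(0.015))  # ~505.9 lbs, nowhere near the code's 61.44
```

The two sketches disagree badly for the same input, which is exactly what I'm trying to reconcile.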

Again, I could take it at face value and move on, but it's driving me nuts trying to understand it and I just can't let it go.

Thanks in advance.

-Ed