Arduino-controlled battery charger circuit questions

This came up in another thread and I want to understand more about the circuit.

More than two years ago, there was an AAC project to charge a battery with an Arduino: https://www.allaboutcircuits.com/projects/create-an-arduino-controlled-battery-charger/

I want to advance my understanding about how the circuit works. I am NOT looking to disparage the article. I am posting to get further insight and would appreciate input from those that know more than I do on these matters. Also, I apologize for any errors I may be making in presentation.

Details are in the article, but for convenience, here is the circuit schematic:

[Schematic image from the linked article]
Looking at this TI application note as background (http://www.ti.com/lit/an/snva557/snva557.pdf), I see these two relevant graphs:
[BC3.jpg: graphs from the TI application note]

Temperature monitoring is pretty straightforward. It is the charging circuit that I want to understand better.

The program varies the charging by applying PWM to an IRF510 (with a resistor and capacitor for “smoothing” the gate drive). Battery temperature, battery charge current, battery voltage, and charger operating time are all measured.
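As I read it, the Arduino's PWM output reaches the IRF510 gate through that RC network, so the duty cycle sets an effectively analog gate voltage. A minimal sketch of that drive side, assuming pin 9 and an 8-bit duty value (the pin number is my placeholder, not necessarily the article's):

C:
const int mosfetPin = 9;   // PWM output to the IRF510 gate via the RC "smoothing" network (pin assumed for illustration)
int pwmValue = 150;        // 0-255 duty cycle; a higher duty raises the gate voltage and lets more charge current flow

void setup() {
  pinMode(mosfetPin, OUTPUT);
}

void loop() {
  analogWrite(mosfetPin, pwmValue);  // the RC network averages the PWM into a quasi-DC gate voltage
}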

I am looking at the battery, charging resistor and MOSFET as three resistors:

[bc2.jpg: the battery, charging resistor, and MOSFET drawn as three series resistances]

The program sets the following globals [Edited to add: see the article for the complete program]:

C:
int batteryCapacity = 2500;  //capacity rating of battery in mAh
float resistance = 10.0;  //measured resistance of the power resistor
int cutoffVoltage = 1600;  //maximum battery voltage (in mV) that should not be exceeded
float cutoffTemperatureC = 35;  //maximum battery temperature that should not be exceeded (in degrees C)
//float cutoffTemperatureF = 95;  //maximum battery temperature that should not be exceeded (in degrees F)
long cutoffTime = 46800000;  //maximum charge time of 13 hours that should not be exceeded
float targetCurrent = batteryCapacity / 10;  //target output current (in mA) set at C/10 or 1/10 of the battery capacity per hour
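For context, the loop code below also refers to probe and pin variables that are not in the quoted globals. They would be declared roughly along these lines; the pin numbers are placeholders, the article has the actual assignments, and the probe placement in the comments is my reading of how the quoted code uses them:

C:
int analogPinOne = 0;        // ADC input at the resistor/MOSFET-drain junction (placeholder pin)
int analogPinTwo = 1;        // ADC input at the battery/resistor junction (placeholder pin)
float valueProbeOne = 0;     // raw ADC reading at probe one
float voltageProbeOne = 0;   // probe one voltage in mV
float valueProbeTwo = 0;     // raw ADC reading at probe two
float voltageProbeTwo = 0;   // probe two voltage in mV
float batteryVoltage = 0;    // calculated battery voltage in mV
float current = 0;           // calculated charge current in mA
float currentError = 0;      // difference from targetCurrent in mA
// Declared float here so the (value * 5000) intermediate doesn't overflow a 16-bit int; the article may differ.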
The main loop sets a PWM value on the pin driving the MOSFET and then calculates:

C:
valueProbeOne = analogRead(analogPinOne);  // read the input value at probe one
voltageProbeOne = (valueProbeOne * 5000) / 1023; //calculate voltage at probe one in milliVolts
valueProbeTwo = analogRead(analogPinTwo);  // read the input value at probe two
voltageProbeTwo = (valueProbeTwo * 5000) / 1023; //calculate voltage at probe two in milliVolts
…and then the battery voltage

batteryVoltage = 5000 - voltageProbeTwo; //calculate battery voltage

…and the charging current

current = (voltageProbeTwo - voltageProbeOne) / resistance; //calculate charge current

…calculates a difference from the target charge

currentError = targetCurrent - current; //difference between target current and measured current

… gets the battery temperature (code not shown here)
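To check my reading of the arithmetic, here is a worked pass with made-up but (I think) plausible numbers; the values are mine, not the article's:

C:
// Suppose the two probes read:
//   voltageProbeTwo = 3600 mV   (battery/resistor junction)
//   voltageProbeOne = 1100 mV   (resistor/MOSFET-drain junction)
// Then the quoted formulas give:
//   batteryVoltage = 5000 - 3600          = 1400 mV  (a plausible NiMH cell voltage)
//   current        = (3600 - 1100) / 10.0 = 250      (mV across ohms comes out in mA)
//   currentError   = 250 - 250            = 0        (already at targetCurrent, so no PWM change needed)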

Then…
A new PWM value is calculated (if necessary) to bring the charging current closer to the target current defined in the globals. At this point the value is only calculated, not yet written to the output.

Battery temperature, battery voltage, and total charging time are then checked against the limits in the globals; if any limit is exceeded, the PWM value is set to 0 (off). Again, the value is only calculated here, not yet applied.

At this point the code loops.

Once the time limit is reached, the PWM value will stay at zero and no further charging will take place. If I am understanding correctly, when the battery temperature or battery voltage limit is exceeded, the PWM is set to 0 but need not remain there, i.e., charging can resume once the values are back within the limits.
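Putting that together, this is how I picture the tail end of the loop. It is a sketch of my understanding, not the article's exact code; the names outputValue, outputPin, and temperatureC and the 10 mA dead band are my own placeholders:

C:
// Nudge the duty cycle toward the target current (simple proportional-style adjustment).
if (currentError > 10 && outputValue < 255) {
  outputValue++;                      // current too low: raise the gate voltage
} else if (currentError < -10 && outputValue > 0) {
  outputValue--;                      // current too high: lower the gate voltage
}

// Safety limits override the adjustment.
if (temperatureC > cutoffTemperatureC) outputValue = 0;  // over-temperature: stop, but can resume once it cools
if (batteryVoltage > cutoffVoltage)    outputValue = 0;  // over-voltage: stop, but can resume if it drops back
if (millis() > cutoffTime)             outputValue = 0;  // past 13 hours: stays forced to 0 on every later pass

analogWrite(outputPin, outputValue);  // only here does the newly calculated PWM value take effect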

My general question is whether or not I understand the circuit as it is intended to operate.


A specific question is how the author is going from mV to mA when calculating the charge current. Is this reasonable because the supply voltage is constant? This issue perplexes me and is likely due simply to my EE ignorance, so I thought I would ask.
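For what it's worth, my own unit bookkeeping on that line is below; someone please correct me if this is the wrong way to look at it:

C:
// current = (voltageProbeTwo - voltageProbeOne) / resistance
//         =            [mV]                     / [ohms]
// Since 1 mV / 1 ohm = 1 mA, the result comes out directly in mA with no extra scaling.
// That is just Ohm's law across the 10 ohm resistor, so it doesn't depend on the 5 V supply being constant.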
 