Hi, I have a little piece of code I can't quite crack:
c=c*0.0196;
Here c receives input from interfaced hardware (essentially strings of 1's and 0's), and 0.0196 is the resolution of the ADC. How is such a multiplication possible between a real decimal number and a matrix made up of strings of 1's and 0's?
thanks
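For what it's worth, here is a minimal sketch (in Python rather than the MATLAB-style snippet above; the bit strings and variable names are illustrative, not real hardware data) of the idea in question: the ADC's bit patterns represent integer counts, and once they are read as numbers, multiplying by the resolution is an ordinary elementwise scaling.

```python
# Hypothetical sketch: how raw ADC bit patterns become real-valued readings.
# Once each bit string is interpreted as an integer count, multiplying by
# the ADC resolution (units per count) is plain scalar multiplication,
# just like c = c*0.0196 applied elementwise to a numeric matrix.

RESOLUTION = 0.0196  # resolution from the original snippet (units per count)

# Raw samples as bit strings (illustrative values, not real hardware data)
raw_bits = ["00000101", "00001010", "11111111"]

# Step 1: interpret each bit string as an unsigned integer count
counts = [int(b, 2) for b in raw_bits]   # [5, 10, 255]

# Step 2: scale the counts to physical units, elementwise
values = [n * RESOLUTION for n in counts]

print(counts)
print(values)
```

In MATLAB the same thing happens implicitly: the interface driver typically hands back a numeric array (e.g. uint16 counts), not literal character strings, so `c*0.0196` is just elementwise scaling of those counts.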