I am implementing an FPU (floating-point unit) in a simple CPU. When I try to do multiplication, I am unable to work out how many clock cycles it will take. The problem is explained briefly below.
I have read some algorithms; briefly, the result of a multiplication is:
1. sign bit = s1 XOR s2
2. exponent = e1 + e2 - 127
3. mantissa = m1 * m2
Steps 1 and 2 are fine, but step 3 is an integer multiplication, which I plan to do by repeated addition with shifting. All of that is combinational logic, so no clock plays a role there. How should I account for the delay, given that this combinational delay may be larger than the clock period, and how can I determine the combinational delay in the first place?
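To make the cycle-count question concrete, here is a minimal Python model of the flow I described (the function names and structure are just for illustration, not actual hardware). The key point is the loop: a bit-serial shift-and-add multiplier performs one conditional add and one shift per multiplier bit, so the iteration count is what would become the cycle count if each iteration were registered:

```python
import struct

def decode(f):
    """Split an IEEE-754 single into (sign, biased exponent, 24-bit mantissa)."""
    bits = struct.unpack(">I", struct.pack(">f", f))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF
    mant = (bits & 0x7FFFFF) | 0x800000  # restore the hidden leading 1 (normal numbers)
    return sign, exp, mant

def fp_mul_model(a, b):
    s1, e1, m1 = decode(a)
    s2, e2, m2 = decode(b)
    sign = s1 ^ s2                 # step 1: sign bit
    exp = e1 + e2 - 127            # step 2: add exponents, subtract one bias
    prod, iterations = 0, 0
    while m2:                      # step 3: shift-and-add integer multiply
        if m2 & 1:
            prod += m1             # add the current partial product
        m1 <<= 1                   # shift multiplicand left
        m2 >>= 1                   # shift multiplier right
        iterations += 1
    return sign, exp, prod, iterations  # normalization and rounding omitted

print(fp_mul_model(1.5, 2.5))  # iterations == 24 for normal single-precision inputs
```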
The same question applies to division.
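Division has the same shape: a simple restoring divider produces one quotient bit per iteration, so the iteration count equals the dividend width. A small Python sketch of the integer core (again just an illustration; the mantissas would need pre-scaling so the quotient lands in the right range):

```python
def restoring_divide(dividend, divisor, width):
    """Restoring division: one quotient bit per iteration (one cycle if registered)."""
    rem, quot = 0, 0
    for i in range(width - 1, -1, -1):
        rem = (rem << 1) | ((dividend >> i) & 1)  # bring in the next dividend bit
        if rem >= divisor:                        # trial subtraction succeeds
            rem -= divisor
            quot |= 1 << i
    return quot, rem

print(restoring_divide(100, 7, 7))  # (14, 2) after 7 iterations for a 7-bit dividend
```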
Should I treat one shift as equal to one clock period, so that a multiplication takes at most 48 clock cycles, or should I treat some number of shifts (1, 2, 3, 4, 5, 6, 7, 8) as equal to one clock cycle?
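Comparing the two options numerically: if one iteration (shift plus conditional add) fits in one clock, a 24-bit multiplier needs 24 cycles; counting the shift and the add as separate cycles would give the 48 I mentioned. Retiring k multiplier bits per clock (adding k partial products combinationally each cycle) divides the cycle count by k but lengthens the critical path, which can lower the maximum clock frequency:

```python
import math

MANT_BITS = 24  # 23 fraction bits + hidden 1, single precision
for k in (1, 2, 4, 8):                    # multiplier bits retired per clock
    print(k, "bit(s)/cycle ->", math.ceil(MANT_BITS / k), "cycles")
# 1 -> 24, 2 -> 12, 4 -> 6, 8 -> 3 cycles (plus any normalize/round cycles)
```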
Thank you,