floating point multiplication/division

Discussion in 'Embedded Systems and Microcontrollers' started by manikanta.mashetti, Jan 13, 2014.

  1. manikanta.mashetti

    Thread Starter New Member

    Jan 13, 2014
    I am implementing an FPU (floating point unit) in a simple CPU. When I try to do multiplication, I am unable to work out how many clock cycles it will take. The problem is explained briefly below.

    I have read some algorithms. Briefly, the result of the multiplication is built in three steps:
    1. sign bit = s1 xor s2
    2. exponent = e1 + e2 - bias
    3. mantissa = m1 x m2 (then normalize)
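    To make the three steps concrete, here is a rough C sketch of the data path (single precision, both inputs assumed normalized; rounding, exponent overflow/underflow and special values such as zero, infinity and NaN are left out):

    Code:
    #include <stdint.h>

    /* Sketch of a single-precision multiply with sign, exponent and mantissa
       handled separately.  Inputs are assumed normalized; rounding, exponent
       overflow/underflow and special values (0, Inf, NaN, subnormals) are
       ignored. */
    uint32_t fp_mul(uint32_t a, uint32_t b)
    {
        uint32_t s1 = a >> 31,          s2 = b >> 31;
        uint32_t e1 = (a >> 23) & 0xFF, e2 = (b >> 23) & 0xFF;
        uint32_t m1 = (a & 0x7FFFFF) | 0x800000;    /* restore hidden 1 */
        uint32_t m2 = (b & 0x7FFFFF) | 0x800000;

        uint32_t sign = s1 ^ s2;                    /* step 1 */
        int32_t  exp  = (int32_t)(e1 + e2) - 127;   /* step 2: add, remove bias */
        uint64_t prod = (uint64_t)m1 * m2;          /* step 3: 24x24 -> 48 bits */

        /* normalize: the product of two values in [1,2) lies in [1,4) */
        if (prod & (1ULL << 47)) { prod >>= 1; exp++; }

        uint32_t mant = (uint32_t)(prod >> 23) & 0x7FFFFF;  /* truncate, no rounding */
        return (sign << 31) | (((uint32_t)exp & 0xFF) << 23) | mant;
    }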

    Steps 1 and 2 are fine, but step 3 is an integer multiplication, which I plan to do by repeated addition with shifting. All of this is combinational logic, so no clock plays a role here. How should I account for the delay, given that this combinational delay may be larger than the clock period, and how can I even determine this combinational delay?
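    One common way to handle this is to make the multiplier sequential instead of purely combinational: keep the partial product and the remaining multiplier bits in registers and retire one multiplier bit per clock, so the combinational path per cycle is just one add plus one shift. In the sketch below (plain C, only to illustrate the idea; the names are made up), each loop iteration stands in for one clock cycle, so a 24-bit mantissa multiply takes 24 cycles:

    Code:
    #include <stdint.h>

    /* Sequential shift-and-add multiply of two 24-bit mantissas.
       The working register models hardware state: the upper half holds the
       partial product, the lower half the remaining multiplier bits.  Each
       loop iteration corresponds to one clock cycle (at most one add plus
       one shift), giving 24 cycles and a 48-bit product. */
    uint64_t shift_add_mul(uint32_t m1, uint32_t m2)
    {
        uint64_t acc = m2;                    /* load multiplier in the low 24 bits */
        for (int cycle = 0; cycle < 24; cycle++) {
            if (acc & 1)                      /* current multiplier bit */
                acc += (uint64_t)m1 << 24;    /* add multiplicand into the upper half */
            acc >>= 1;                        /* shift the whole register right */
        }
        return acc;                           /* 48-bit product m1 * m2 */
    }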

    The same question comes up for division.
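    A shift-and-subtract (restoring) divider can be organized the same way, one quotient bit per clock. A rough C sketch, again with each iteration standing in for one clock cycle and the exponent handling (e1 - e2 + bias) left to the caller:

    Code:
    #include <stdint.h>

    /* Sequential restoring division of two 24-bit mantissas (hidden 1 already
       restored in both).  One trial subtraction and shift per loop iteration,
       i.e. one clock cycle in hardware: 24 cycles for a 24-bit quotient.
       Returns floor(m1 * 2^23 / m2); normalization and rounding are omitted. */
    uint32_t shift_sub_div(uint32_t m1, uint32_t m2)
    {
        uint32_t rem = m1;                    /* partial remainder register */
        uint32_t quo = 0;                     /* quotient register */
        for (int cycle = 0; cycle < 24; cycle++) {
            quo <<= 1;
            if (rem >= m2) {                  /* trial subtraction succeeds */
                rem -= m2;
                quo |= 1;
            }
            rem <<= 1;                        /* shift remainder left each cycle */
        }
        return quo;
    }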

    Should I count one shift as one clock period, so that the multiplication takes at most 48 clock cycles, or should I count some number of shifts (1, 2, 3, 4, 5, 6, 7, 8) as one clock cycle?
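    For what it is worth, that trade-off can be written down directly: if the datapath retires k multiplier bits per clock, the mantissa multiply needs roughly ceil(24/k) cycles, but the combinational path inside each cycle grows with k, so the clock period may have to grow too. A throwaway C sketch of just that arithmetic (mant_bits and the range of k are illustrative numbers, not from any real design):

    Code:
    #include <stdio.h>

    /* Rough cycle-count comparison: retiring k multiplier bits per clock
       needs ceil(24/k) cycles for a 24-bit mantissa.  The per-cycle
       combinational delay (not modeled here) grows with k. */
    int main(void)
    {
        const int mant_bits = 24;
        for (int k = 1; k <= 8; k++)
            printf("%d bit(s) per cycle -> %d cycles\n",
                   k, (mant_bits + k - 1) / k);
        return 0;
    }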

    Thank you,
  2. MrChips


    Oct 2, 2009
    What CPU or MCU are you using?
    Which compiler are you using?
    Show your code.