Here's what I've figured out so far about converting floating-point to decimal:
The most significant bit (bit 31 for single precision, bit 63 for double precision) is the sign: 0 is +, 1 is -.
For single precision (32-bit), the next 8 bits (bits 30-23) are the exponent; for double precision (64-bit), the next 11 bits (bits 62-52) are the exponent.
In single precision, a bias of 127 is added to the exponent. In double, a bias of 1023 is added.
The remaining bits are the mantissa (or significand). Since the first bit of a normalized number is always 1, it is not explicitly stored in the floating-point format.
To decode, subtract the bias from the exponent and convert the result to decimal.
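To make the layout concrete, here's a quick Python sketch (the variable names are mine, just for illustration) that pulls the three fields out of the 32-bit pattern used in the example below and removes the bias:

```python
bits = 0b01000011100110011001100110011001   # the example pattern below

sign     = bits >> 31            # bit 31: 0 -> +, 1 -> -
biased   = (bits >> 23) & 0xFF   # bits 30-23
mantissa = bits & 0x7FFFFF       # bits 22-0

print(sign, bin(biased), biased - 127)      # 0 0b10000111 8
```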
If the unbiased exponent E is positive, the integer portion of the number is the implicit leading 1 followed by the first E bits of the mantissa, i.e. bits 22 down to 23 - E for single precision (bits 51 down to 52 - E for double).
For example, if the floating-point number is stored as
01000011100110011001100110011001 (32 bit)
the sign is +,
the biased exponent is 10000111,
and the mantissa is 00110011001100110011001
The unbiased exponent is 10000111 - 01111111 = 1000 base 2, or 8 base 10.
The integer part of the number is 100110011 (notice the implicit 1 in the msb position followed by the first 8 bits of the mantissa).
Convert this to decimal to get 307.
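In the same sketch style (again, hypothetical names), that integer-part rule reproduces the 307:

```python
bits     = 0b01000011100110011001100110011001
mantissa = bits & 0x7FFFFF
exp      = ((bits >> 23) & 0xFF) - 127            # 8

# Implicit leading 1, then the first exp bits of the mantissa.
integer_part = (1 << exp) | (mantissa >> (23 - exp))
print(integer_part)                               # 307
```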
For the fractional part, take the remaining bits of the mantissa as a binary fraction: after shifting the binary point right by the exponent, these bits sit just to its right, in positions 2^-1, 2^-2, and so on, so their value is the bits read as an integer divided by 2^n, where n is the number of remaining bits.
Here the remaining 15 bits are 001100110011001, or 6553 base 10, so the fraction is 6553 / 2^15 = 6553 / 32768 = 0.199981689453125.
The final result then is 307.199981689453125, which is the closest single-precision value to 307.2.
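Putting it all together in one self-contained sketch (my own variable names; the standard-library struct module is only there as a cross-check on the same bit pattern):

```python
import struct

bits     = 0b01000011100110011001100110011001
sign     = bits >> 31
exp      = ((bits >> 23) & 0xFF) - 127                 # 8
mantissa = bits & 0x7FFFFF

integer_part = (1 << exp) | (mantissa >> (23 - exp))   # 307
frac_bits    = mantissa & ((1 << (23 - exp)) - 1)      # 0b001100110011001 = 6553
fraction     = frac_bits / 2 ** (23 - exp)             # 6553 / 32768 = 0.199981689453125

value = (-1) ** sign * (integer_part + fraction)
print(value)                                           # ~307.19998

# Let the machine decode the same 32 bits and compare.
print(struct.unpack('>f', bits.to_bytes(4, 'big'))[0])
```

Note this only handles the normalized, positive-exponent case described above; zero or negative exponents, subnormals, infinities, and NaNs each need their own handling.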