Floating-Point Number Converter: Decimal, Binary, and Hexadecimal

Convert between decimal, binary, and hexadecimal representations of floating-point numbers. This calculator shows the IEEE 754 representation and provides a detailed breakdown of the bits.


Calculator Usage Guide

Learn how to use the floating-point converter and understand the principles behind it

How to Use This Calculator

  1. Enter a floating-point number in the input field (e.g., 123.456, -0.1, 3.14e8)
  2. Select the precision (32-bit for single precision or 64-bit for double precision)
  3. Check which output formats you want to see (binary and/or hexadecimal)
  4. Click the "Calculate" button to see the conversion results
  5. Click "Reset" to clear all inputs and results
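Under the hood, the Calculate step amounts to reading the number's IEEE 754 bit pattern and formatting it. A minimal Python sketch using the standard `struct` module; the `convert` helper and its output shape are illustrative, not the calculator's actual code:

```python
import struct

def convert(value: float, bits: int = 32) -> dict[str, str]:
    """Produce the binary and hexadecimal forms of a float's IEEE 754 encoding."""
    if bits == 32:
        pack_fmt, int_fmt = ">f", ">I"   # single precision, 32-bit unsigned int
    else:
        pack_fmt, int_fmt = ">d", ">Q"   # double precision, 64-bit unsigned int
    # Pack the value as big-endian IEEE 754, then reinterpret the raw bytes
    # as an unsigned integer to get the bit pattern.
    (raw,) = struct.unpack(int_fmt, struct.pack(pack_fmt, value))
    return {
        "binary": f"{raw:0{bits}b}",
        "hex": f"0x{raw:0{bits // 4}X}",
    }

print(convert(123.456, bits=32))  # hex: 0x42F6E979
print(convert(-0.1, bits=64))
```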

Understanding IEEE 754 Representation

The IEEE 754 standard is a widely used floating-point representation that encodes real numbers in three parts:

  • Sign bit (1 bit): Indicates if the number is positive (0) or negative (1)
  • Exponent (8 bits for 32-bit, 11 bits for 64-bit): Represents the power of 2 by which the mantissa is multiplied. A biased encoding (bias 127 for 32-bit, 1023 for 64-bit) lets both positive and negative exponents be stored as unsigned values
  • Mantissa (fraction) (23 bits for 32-bit, 52 bits for 64-bit): Contains the significant digits of the number. For normal (non-zero) numbers it is normalized to start with a leading 1, which is implied and not stored
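The three fields described above can be extracted programmatically. A short Python sketch using the standard `struct` module; the `decompose_float32` name is illustrative:

```python
import struct

def decompose_float32(value: float) -> tuple[int, int, int]:
    """Split a number's 32-bit IEEE 754 encoding into sign, exponent, mantissa."""
    # Pack as big-endian single precision, then read the raw 32 bits.
    (bits,) = struct.unpack(">I", struct.pack(">f", value))
    sign = bits >> 31               # 1 bit
    exponent = (bits >> 23) & 0xFF  # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF      # 23 bits, implicit leading 1 not stored
    return sign, exponent, mantissa

sign, exponent, mantissa = decompose_float32(-123.456)
print(sign)                # 1 (negative)
print(exponent - 127)      # 6 (unbiased exponent)
print(f"{mantissa:023b}")  # 11101101110100101111001
```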

Example

Let's take the number 123.456 in single precision (32-bit):

  1. Decimal to binary: 123 = 1111011, 0.456 ≈ 0.0111010010111100011010... (non-terminating)
  2. Combined binary: 1111011.0111010010111100011010...
  3. Normalized: 1.1110110111010010111100011010... × 2⁶
  4. Exponent (6) + bias (127) = 133 = 1000 0101 in binary
  5. Sign bit = 0 (positive)
  6. Mantissa, rounded to 23 bits: 111 0110 1110 1001 0111 1001
  7. Final 32-bit representation: 0 10000101 11101101110100101111001
  8. Hexadecimal: 0x42F6E979
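The worked example can be checked with a few lines of Python using the standard `struct` module:

```python
import struct

# Reproduce the worked example: encode 123.456 as a 32-bit float
# and print the resulting bit pattern and hexadecimal form.
(bits,) = struct.unpack(">I", struct.pack(">f", 123.456))
print(f"{bits:032b}")   # 01000010111101101110100101111001
print(f"0x{bits:08X}")  # 0x42F6E979
```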

Precision Considerations

Single precision (32-bit) stores 24 significand bits (about 7 decimal digits), while double precision (64-bit) stores 53 (about 15-16 decimal digits). This means that:

  • Values that differ only in low-order bits may round to the same single-precision number
  • Very large or very small numbers may overflow, underflow, or lose precision
  • For scientific calculations requiring high precision, use double precision (64-bit)
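The precision loss described above can be demonstrated by rounding a double-precision value down to single precision and back. A minimal Python sketch; the `to_float32` helper name is illustrative:

```python
import struct

def to_float32(value: float) -> float:
    """Round a Python float (double precision) to single precision and back."""
    return struct.unpack(">f", struct.pack(">f", value))[0]

# 0.1 has no exact binary representation; single precision keeps
# fewer of its bits than double precision does, so the round trip
# does not return the original value.
print(repr(to_float32(0.1)))   # 0.10000000149011612
print(repr(0.1))               # 0.1
print(to_float32(0.1) == 0.1)  # False
```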