Floating-point numbers (numbers that contain a fractional part) are stored in memory using a format known as floating-point representation. Most modern computers follow the IEEE 754 standard for representing floating-point numbers.
This standard defines several formats for representing floating-point numbers, including single precision (32 bits) and double precision (64 bits).
1. Single Precision (32 bits):
In single precision, a floating-point number is represented using 32 bits, divided into three fields: a sign bit, an exponent, and a significand (also called the mantissa). The format is as follows:
  31      30-23          22-0
[Sign]  [Exponent]  [Significand]
- Sign Bit (1 bit): Represents the sign of the number. 0 indicates positive and 1 indicates negative.
- Exponent (8 bits): Represents the exponent of the number, biased by a fixed value (127 for single precision), so the stored field equals the actual exponent plus 127.
- Significand (23 bits): Represents the fractional part of the number. For normalized numbers a leading 1 is implied and not stored, so the encoded value is (-1)^sign * (1 + fraction) * 2^(exponent - 127).
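To make the layout concrete, here is a minimal sketch in Python that uses the standard struct module to reinterpret a value's 32-bit pattern and pull out the three fields (the helper name decode_float32 and the test value -6.25 are illustrative choices, not part of the standard):

```python
import struct

def decode_float32(x):
    """Split a number into its IEEE 754 single-precision fields."""
    # Reinterpret the 32-bit pattern of x as an unsigned integer.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31                 # bit 31
    exponent = (bits >> 23) & 0xFF    # bits 30-23, biased by 127
    significand = bits & 0x7FFFFF     # bits 22-0, fraction without the implicit 1
    # Normalized value: (-1)^sign * (1 + significand/2^23) * 2^(exponent - 127)
    value = (-1) ** sign * (1 + significand / 2 ** 23) * 2.0 ** (exponent - 127)
    return sign, exponent, significand, value

# -6.25 = -1.5625 * 2^2, so sign = 1, exponent = 127 + 2 = 129,
# and the stored fraction is 0.5625 * 2^23 = 4718592.
print(decode_float32(-6.25))   # (1, 129, 4718592, -6.25)
```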
2. Double Precision (64 bits):
In double precision, a floating-point number is represented using 64 bits, divided into the same three fields: a sign bit, an exponent, and a significand. The format is as follows:
  63      62-52          51-0
[Sign]  [Exponent]  [Significand]
- Sign Bit (1 bit): Represents the sign of the number.
- Exponent (11 bits): Represents the exponent of the number, biased by a fixed value (1023 for double precision).
- Significand (52 bits): Represents the fractional part of the number; as in single precision, the leading 1 of a normalized number is implicit and not stored.
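The same decoding idea carries over to double precision. This sketch (again an illustrative Python example built on struct, with the hypothetical helper name decode_float64) decodes 0.1, a value with no exact binary representation, to show the wider fields at work:

```python
import struct

def decode_float64(x):
    """Split a number into its IEEE 754 double-precision fields."""
    # Reinterpret the 64-bit pattern of x as an unsigned integer.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63                       # bit 63
    exponent = (bits >> 52) & 0x7FF         # bits 62-52, biased by 1023
    significand = bits & ((1 << 52) - 1)    # bits 51-0, fraction without the implicit 1
    value = (-1) ** sign * (1 + significand / 2 ** 52) * 2.0 ** (exponent - 1023)
    return sign, exponent, significand, value

# 0.1 has no finite binary expansion, so the nearest double is stored instead:
# 0.1 ~ 1.6 * 2^-4, giving a biased exponent field of 1023 - 4 = 1019.
sign, exponent, significand, value = decode_float64(0.1)
print(sign, exponent, significand)   # 0 1019 2702159776422298
print(value == 0.1)                  # True: decoding reproduces the stored bits exactly
```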