ARM Technical Support Knowledge Articles
Applies to: General Topics
I understand that single-precision floating-point values have only a 24-bit mantissa, but the printf %f format specifier seems to display even less precision than that. This is demonstrated by the following test program:
#include <stdio.h>

void main (void)  {
  long itemp;
  float ftemp;

  /* initialize serial output here! */

  printf ("RESULTS:\n");

  for (itemp = 16777200; itemp < 16777216; itemp++)  {
    ftemp = itemp;
    printf ("long = %ld, float = %10.5f, %lx\n", itemp, ftemp, ftemp);
  }
}
The output of this program is as follows:
RESULTS:
long = 16777200, float = 16777200.00000, 4b7ffff0
long = 16777201, float = 16777200.00000, 4b7ffff1
long = 16777202, float = 16777200.00000, 4b7ffff2
long = 16777203, float = 16777200.00000, 4b7ffff3
long = 16777204, float = 16777200.00000, 4b7ffff4
long = 16777205, float = 16777210.00000, 4b7ffff5
long = 16777206, float = 16777210.00000, 4b7ffff6
long = 16777207, float = 16777210.00000, 4b7ffff7
long = 16777208, float = 16777210.00000, 4b7ffff8
long = 16777209, float = 16777210.00000, 4b7ffff9
long = 16777210, float = 16777210.00000, 4b7ffffa
long = 16777211, float = 16777210.00000, 4b7ffffb
long = 16777212, float = 16777210.00000, 4b7ffffc
long = 16777213, float = 16777210.00000, 4b7ffffd
long = 16777214, float = 16777210.00000, 4b7ffffe
long = 16777215, float = 16777220.00000, 4b7fffff
It seems that printf() displays fewer than 21 bits of precision. Why does the %f format specifier not display the results correctly?
Single-precision floating-point numbers have a 24-bit mantissa, which corresponds to approximately 7.2 decimal digits. The word "approximately" implies that the conversion between decimal and binary representations can be inexact. The following table shows this conversion problem: the first column gives a decimal value, and the second column contains the closest single-precision float representation of that number.
Decimal Value    Closest 'float' Representation
-------------    ------------------------------
1.0              1.0
1.1              1.10000002384
1.01             1.00999999046
1.001            1.00100004673
1.0001           1.00010001659
0.1              0.10000000149
0.01             0.0099999997765
0.001            0.0010000000475
0.0001           0.000099999997474
The representation problem becomes even more apparent when you compute the difference of two numbers that are close. For example, the operation:
1.001 - 1.0 = 0.001
After conversion to single-precision floating-point, this gives the following result:
1.00100004673 - 1.0 = 0.00100004673
The precision of actual floating-point operations will not exceed these 7.2 digits, so it makes no sense to give users the impression that a floating-point number has more precision. Therefore, the printf routine rounds the output after 7 significant decimal digits. If you look at the output of your test program, you can see the effect of this rounding wherever the displayed value jumps at the seventh significant digit:
long = 16777204, float = 16777200.00000, 4b7ffff4
long = 16777205, float = 16777210.00000, 4b7ffff5
long = 16777214, float = 16777210.00000, 4b7ffffe
long = 16777215, float = 16777220.00000, 4b7fffff
Article last edited on: 2005-07-15 10:22:19