Thorsten Kettner wrote:

##### Quote

Hi all.

I know that floating point types are inaccurate. I knew about this and used them only when it didn't really matter. This time, however, it matters: doubles are passed to my function, I multiply them by 100, store them as int, and calculate on an int basis. I thought that would be okay, since a double has a precision of 15 digits, and I expected that as long as I deal with smaller numbers there'd be no problem. But if I print, say, 17 digits, I get the 15 correct digits followed by two random digits.

```cpp
double d = 33.3L;
int i = d * 100.0L;
```

I expected d to be stored as 33.3000000000000xyz... where x, y and z are random digits. However, i becomes 3329. That looks like a precision of only two digits to me. Where is the error? In my understanding? Or in the compiler?

Hi,

Yes, a float is not very accurate. It stores only about 6 or 7 significant decimal digits, which is fine in most cases. So one might prefer to do one's calculations in doubles. A double takes twice as much storage as a float: 8 bytes instead of 4. That gives roughly 15 to 16 significant decimal digits. After these digits the numbers become effectively random, as you said. But you expect this randomness only in the positive direction; it can just as easily go in the negative direction, so d might also be stored as 33.2999999999999xyz.

Because you are assigning to an int, the conversion truncates: the fractional part is simply discarded after your multiplication by 100.0. (For negative values truncation goes toward zero, not down.) So when d is represented as I indicated, your result is exactly what I would expect from your expression. For non-negative values the assignment to i should therefore be:

```cpp
int i = d * 100.0L + 0.5;
```

With that, rounding occurs as you expect.

Remember that a 32-bit int only has about 9 significant decimal digits (INT_MAX is 2147483647). Although that is more than a float has, you will most likely use far fewer. Why fewer? Because if the range of your values does not exceed 100.0 in the above example, the scaled values never exceed 10000, so you keep only about 4 or 5 significant digits, which is less than even a float offers.

So my question is: why use doubles to get your required accuracy, but then revert to int during your computations? Using ints might compute faster, but what about the intended accuracy?

Wiljo.