Difference between decimal, float and double in programming

By Game Changer - Tuesday, April 25, 2017


Hi guys!
Let's get a clear picture of float, double and decimal. Do you know what separates them? Precision is the main difference. First, I'm going to show you some practical examples!

C: Difference between float and double

A float has 23 bits of significand precision, 8 bits of exponent, and 1 sign bit. A double has 52 bits of significand precision, 11 bits of exponent, and 1 sign bit.

#include <stdio.h>

int main(void) {
    float  x = 3.141592653589793238;   /* rounded down to float precision */
    double z = 3.141592653589793238;
    printf("x=%f\n", x);
    printf("z=%f\n", z);
    printf("x=%20.18f\n", x);          /* print 18 digits after the point */
    printf("z=%20.18f\n", z);
    return 0;
}

This gives you the output:

x=3.141593
z=3.141593
x=3.141592741012573242
z=3.141592653589793116

See how x is only right for about the first 7 significant digits? That's all the precision a 23-bit significand buys you, while z stays correct to about 15-16 digits.
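By the way, you don't have to take those digit counts on faith - here's a minimal C sketch using the <float.h> limits, where FLT_DIG and DBL_DIG are the number of decimal digits guaranteed to survive a round trip through each type:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* decimal digits guaranteed to round-trip through each type */
    printf("float:  %zu bytes, %d digits\n", sizeof(float), FLT_DIG);
    printf("double: %zu bytes, %d digits\n", sizeof(double), DBL_DIG);
    return 0;
}

On a typical IEEE-754 platform this prints 4 bytes/6 digits for float and 8 bytes/15 digits for double - slightly more conservative than the ~7 figure above, because these constants count only digits that always survive.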

.NET: Difference between float, double and decimal

float gives you ~7 significant digits (in 32 bits), double ~15-16 digits (in 64 bits), and decimal ~28-29 digits (in 128 bits)!

float flt = 1F/3;
double dbl = 1D/3;
decimal dcm = 1M/3;
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);
Result:

float: 0.3333333
double: 0.333333333333333
decimal: 0.3333333333333333333333333333

As you can see, decimal has much higher precision and is usually used in financial applications that require a high degree of accuracy.
Okay, the problem is that decimal is much slower (up to 20x in some tests) than double/float.

A decimal and a float/double cannot be compared without a cast, whereas a float and a double can. Decimal also allows the encoding of trailing zeros, as the sketch below shows.
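To make those last two points concrete, here is a small C# sketch (the class name is mine) showing the cast requirement and the trailing-zero encoding:

using System;

class DecimalVsDouble
{
    static void Main()
    {
        double dbl = 1D / 3;
        decimal dcm = 1M / 3;

        // bool same = (dcm == dbl);   // compile-time error: decimal and
        //                             // double have no implicit conversion
        bool same = (dcm == (decimal)dbl);   // explicit cast is required
        Console.WriteLine(same);             // False: double 1/3 and decimal 1/3
                                             // are not the same number

        // decimal keeps trailing zeros in its encoding:
        Console.WriteLine(1.0m);             // prints 1.0
        Console.WriteLine(1.00m);            // prints 1.00
    }
}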



Java: Difference between int, float and double

Someone asked: can you explain what makes the difference between float and double?

Sure. Imagine you had two decimal types, one with five significant digits, and one with ten.

What value would you use to represent pi in each of those types? In both cases you'd be trying to get as close as you could to a number you can't represent exactly - but you wouldn't end up with the same value, would you?

It's the same for float and double - both are binary floating point types, but double has more precision than float.

12 {int}
12.345678 {float}
12.345678910111213 {double}

float is represented in 32 bits, with 1 sign bit, 8 bits of exponent, and 23 bits of significand (the significand is the digits part of a number written in scientific notation: in 2.33728*10^12, the significand is 2.33728).
double is represented in 64 bits, with 1 sign bit, 11 bits of exponent, and 52 bits of significand.
By default, Java uses double to represent its floating-point values (so a literal 3.14 is typed as double). It's also the data type that gives you a much larger range, so I would strongly encourage its use over float.
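Here is the C experiment from above redone as a minimal Java sketch (the class name is mine); note the f suffix the float assignment needs:

public class FloatVsDouble {
    public static void main(String[] args) {
        // 3.14... is a double literal by default; assigning it to a
        // float needs an explicit f suffix (or a cast):
        float x = 3.141592653589793238f;
        double z = 3.141592653589793238;

        System.out.println(x); // 3.1415927         (~7 significant digits)
        System.out.println(z); // 3.141592653589793 (~15-16 significant digits)
    }
}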

JavaScript:

12 {int}
12.345678 {float}
12.345678910111213 {double}

Actually, all of these are doubles in JavaScript: every number is stored as a 64-bit IEEE-754 double.

So the goal is not to get a "double": the goal is to get the string representation of a number formatted as "YYY.XX". For that, consider Number.toFixed, for instance:

(100).toFixed(2)
The result is the string (not a "double"!) "100.00". The parentheses are required to avoid a grammar ambiguity in this case (it could also have been written as 100.0.toFixed(2) or 100..toFixed(2)), but they would not be required if 100 were in a variable.
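A short sketch tying the two points together (the price variable is just for illustration):

// Every number is an IEEE-754 double, so binary rounding shows through:
console.log(0.1 + 0.2);              // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);      // false

// toFixed returns a *string* with a fixed number of decimals:
var price = 100;
console.log(price.toFixed(2));       // "100.00" - no parentheses needed here
console.log((0.1 + 0.2).toFixed(2)); // "0.30"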

Happy coding!!



Sael

I'm Sael, an expert coder and system admin. I enjoy making code easy for novices.

Website: fb/Fujael
