Why do we add "d" at the end of a double?

Thread Starter

yef smith

Joined Aug 2, 2020
756
Hello, I have built the following code with d and without d at the end,
and as you can see in the output, I get the same result.
So why do we add "d" at the end?
Thanks.
Code:
double x = 23234.234d;
double y = 23234.234;
Console.WriteLine("with d " + x);
Console.WriteLine("without d " + y);

[Attached screenshot: console output showing the same value printed for both x and y]
 

djsfantasi

Joined Apr 11, 2010
9,163
We can’t be sure without knowing the language, but we can make some assumptions. The number 23234.234 can be specified as a normal floating point number. So, in both cases, it is converted (cast) from a floating point number to a double precision number.

In the first case (with the d), the compiler is told explicitly to store the number in double precision.

In the second case, the compiler stores the number as a single precision floating point value. Then, when the assignment is executed, it changes the representation from a single precision to a double precision number.

So why would you include the d? First, your program clearly states your intent. Second, there is no performance hit* during execution from changing the storage format (single -> double).

* Note that some optimizing compilers will recognize this situation and create the executable as if you had specified the d.
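
If the language turns out to be C# (the Console.WriteLine calls suggest it), here is a minimal sketch of places where a suffix genuinely changes what the compiler does, assuming nothing beyond a standard C# compiler:

Code:
// float f = 1.5;   // compile error: the unsuffixed literal will not implicitly narrow to float
float g = 1.5f;     // fine: the f suffix makes the literal a float

decimal m = 1.5m;   // the m suffix is required for decimal literals

var v = 1.5;        // with var, the literal's own type decides what v is
var w = 1.5f;       // here the suffix changes the inferred type of w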
 

BobTPH

Joined Jun 5, 2013
8,998
I don’t use C#, so I can’t try this, but you might want to try this:

double x = 1.0 / 3.0;
double y = 1.0d / 3.0d;

Then print them with 15 digits of precision.

I don’t know what it will do for sure, but it might well show a difference.
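
For reference, a sketch of how that check might look in C# (the "G17" round-trip format, or "F15" for 15 decimal places, forces more digits out than the default formatting shows):

Code:
double x = 1.0 / 3.0;
double y = 1.0d / 3.0d;
Console.WriteLine(x.ToString("G17"));   // full round-trip precision
Console.WriteLine(y.ToString("G17"));
Console.WriteLine(x.ToString("F15"));   // 15 digits after the decimal point
Console.WriteLine(y.ToString("F15"));

If the unsuffixed literals are already doubles, both pairs of lines should print identical digits, which would answer the question one way or the other.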
 

MrSalts

Joined Apr 2, 2020
2,767
I don’t use C#, so I can’t try this, but you might want to try this:

double x = 1.0 / 3.0;
double y = 1.0d / 3.0d;

Then print them with 15 digits of precision.

I don’t know what it will do for sure, but it might well show a difference.
See below...
Ignore the myNum variable.

[Attached screenshot: C# code and output for several 2/3 division expressions assigned to float and double variables, plus an unrelated myNum variable]
 

BobTPH

Joined Jun 5, 2013
8,998
Well, your first result is doing integer arithmetic, which explains the result.

Try it again with 2.0/3.0 so it uses floating point.
And maybe 2.0f / 3.0f as well.
 

WBahn

Joined Mar 31, 2012
30,072
In the second case, the compiler stores the number as a single precision floating point value. Then, when the assignment is executed, it changes the representation from a single precision to a double precision number.
In most recent versions of most languages, floating point literals are interpreted as type double if no type suffix is supplied.

Now that we know that the language in question is C#, this is specifically the case per the C# 6.0 Language Specification: "If no Real_Type_Suffix is specified, the type of the Real_Literal is double."

https://docs.microsoft.com/en-us/do...fication/lexical-structure#6454-real-literals
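
A quick way to confirm the literal types from code (a sketch, using nothing beyond GetType and Console):

Code:
Console.WriteLine(23234.234.GetType());    // System.Double
Console.WriteLine(23234.234d.GetType());   // System.Double
Console.WriteLine(23234.234f.GetType());   // System.Single
Console.WriteLine(23234.234m.GetType());   // System.Decimal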
 

WBahn

Joined Mar 31, 2012
30,072
Interesting, it looks like it does arithmetic in double always.
Not at all. The first four (non-commented) expressions were NOT done using doubles.

In the first case, 2/3 was done using integer division and then promoted to a float for assignment.
In the second case, 2/3f was done using float division with the 2 being promoted just far enough to match the 3f.
In the third case, 2f/3f was done using float division.
In the fourth case, 2/3 was done using integer division and then promoted to a double for assignment.
Some of those lines involved arithmetic done using floats.
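
Roughly, the cases being described would look like this (a sketch based on the description above, not the exact code in the screenshot):

Code:
float a = 2 / 3;       // integer division happens first, so a == 0
float b = 2 / 3f;      // 2 is promoted to float, then float division
float c = 2f / 3f;     // float division
double d = 2 / 3;      // integer division again, so d == 0

Console.WriteLine(a);  // 0
Console.WriteLine(b);  // ~0.6666667
Console.WriteLine(c);  // ~0.6666667
Console.WriteLine(d);  // 0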

There are a few missing cases that would be instructive, such as:

double myDouble = 2/3.0f;
double myDouble = 2.0f/3.0f;
double myDouble = 2f/3;

All of these should evaluate the right hand side using single-precision arithmetic (promoting any integers as needed first) and then cast the result to a double when it goes to assign it. So I would expect to see it print a result that is good to about seven sig figs and then garbage after it.
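
As a sketch of those missing cases (printing with "G17" to expose the noise past float precision):

Code:
double d1 = 2 / 3.0f;     // int promoted to float, float division, result widened to double
double d2 = 2.0f / 3.0f;  // float division, result widened to double
double d3 = 2f / 3;       // float division again
double d4 = 2.0 / 3.0;    // double division, for comparison

Console.WriteLine(d1.ToString("G17"));   // ~7 good significant digits, then noise
Console.WriteLine(d2.ToString("G17"));
Console.WriteLine(d3.ToString("G17"));
Console.WriteLine(d4.ToString("G17"));   // good to roughly 16 significant digits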

Most modern compilers use type double as the default representation for floating point values. This has been the case for a couple of decades, but I'm sure there are some exceptions floating around (no pun intended), most likely in the embedded world.

The only implicit casts that C# will do are widening (unlike C), so it will implicitly cast from a float to a double, but not the other way around, hence the error in the one line.
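
For the widening point, a small sketch:

Code:
float f = 0.5f;
double d = f;         // fine: implicit widening from float to double
// float g = d;       // compile error: no implicit narrowing from double to float
float h = (float)d;   // an explicit cast is required to go the other way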

Languages like Python (starting in Python 3) use double-precision floats for the result of / division (unless you jump through hoops), so 2/3 will give the same result as 2.0/3.0.
 

BobTPH

Joined Jun 5, 2013
8,998
Okay, I missed the fact that there was no case where 2.0f / 3.0f was assigned to a double. I would expect that to result in float precision.
 

WBahn

Joined Mar 31, 2012
30,072
You don’t know that. The fact that it prints in lower precision is because it was assigned to a float variable, which cannot hold double precision. It is possible that they took the left side into account when doing the operation; I, as a compiler developer, doubt it. If it were a C compiler, the results would not conform to the language specification.
This is why I said that there were missing examples that would be instructive.

That THIS compiler performed integer division despite the fact that the target was a float is strongly suggestive that the target data type is not considered during expression evaluation, only at assignment time. Most compilers evaluate expressions piecemeal, making typing decisions at each point in the evaluation. Some language specs require this; others put looser constraints on it.
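
A one-line way to see that the target type does not reach into the expression (assuming C#):

Code:
double d = 1 / 2 + 0.5;   // 1 / 2 is integer division (0) even though the target is a double
Console.WriteLine(d);     // prints 0.5, not 1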

No, examples do not reveal the spec (especially for C, which was intentionally left with lots of undefined and implementation-defined behavior, but C# is pretty tightly spec'ed).
 

WBahn

Joined Mar 31, 2012
30,072
I was just perusing the C# language spec and found that it specifies that operands in expressions are evaluated left to right, regardless of the order of operator evaluation. So, for example, the order of evaluation in the following expression is as shown:

a / (b + c) * d // Evaluation order: a, b, c, +, /, d, *

I couldn't find where it specifies the order of evaluation of subexpressions. So, for instance:

(a + b) / (c + d)

Is this evaluated as: { a, b, +, c, d, +, / }?

My strong guess is that this is the case and that it is probably spec'ed someplace. Certainly it is strongly encouraged by the left-to-right operand evaluation, as this likely maps well to stack-oriented postfix expression processing. But I don't know that that covers all the corner cases.
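
One way to watch the operand order from code would be something like this (a sketch; the Trace helper is made up for illustration):

Code:
using System;

class Program
{
    static int Trace(string name, int value)
    {
        Console.WriteLine(name);   // record the moment each operand is evaluated
        return value;
    }

    static void Main()
    {
        int r = (Trace("a", 6) + Trace("b", 2)) / (Trace("c", 1) + Trace("d", 3));
        Console.WriteLine(r);      // expected: a, b, c, d printed in that order, then 2
    }
}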
 

WBahn

Joined Mar 31, 2012
30,072
In your second example, c+d can be evaluated before a+b as long as they have no side effects.
That's the crux of my curiosity. Most languages do not specify the order in which subexpressions have to be evaluated, but they also don't specify the order in which operands have to be evaluated except in specific situations, such as when short-circuiting is possible. But C# explicitly requires that the 'a' and 'b' operands MUST be evaluated before 'c' and 'd'. If they are going to go to the trouble of requiring that, I'm suspicious that they also give no leeway in the order in which subexpressions are evaluated and that I just couldn't find where that requirement is laid out in the language spec. In the C language spec, the fact that the order is not specified is explicitly stated; I could not find any similar statement in the C# spec. But then there's a LOT of spec there and I only spent a few minutes skimming it.
 

MrChips

Joined Oct 2, 2009
30,821
The other factor that needs to be investigated is which expressions are evaluated at compile time vs run time.

One might expect differences in
y = 2.0/3.0;
and
y = x/3.0;
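
One way to probe that from C# would be to compare the bit patterns of the constant-folded result and the run-time result (a sketch; BitConverter.DoubleToInt64Bits gives the exact bits of a double):

Code:
double compileTime = 2.0 / 3.0;   // both operands are constants, so the compiler can fold this
double x = 2.0;                   // a non-const variable keeps the next division at run time
double runTime = x / 3.0;

Console.WriteLine(BitConverter.DoubleToInt64Bits(compileTime) ==
                  BitConverter.DoubleToInt64Bits(runTime));   // True if the two agree bit for bit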
 

BobTPH

Joined Jun 5, 2013
8,998
That's the crux of my curiosity. Most languages do not specify the order in which subexpressions have to be evaluated, but they also don't specify the order in which operands have to be evaluated except in specific situations, such as when short-circuiting is possible. But C# explicitly requires that the 'a' and 'b' operands MUST be evaluated before 'c' and 'd'. If they are going to go to the trouble of requiring that, I'm suspicious that they also give no leeway in the order in which subexpressions are evaluated and that I just couldn't find where that requirement is laid out in the language spec. In the C language spec, the fact that the order is not specified is explicitly stated; I could not find any similar statement in the C# spec. But then there's a LOT of spec there and I only spent a few minutes skimming it.
The ordering of operands is to make the order of the side effects predictable. For instance, if each of the operands is a function call, and the functions have side effects, then you cannot change the order in which those functions are called.

However, you can evaluate the operands, assign the results to registers or temps, then evaluate the expression from those temps in any order that produces the correct value.
 