Literals and casting
All numeric literals defined in code are by default integers unless they are marked otherwise: longs are denoted with an L or l, floats with an F or f, and doubles with a D or d (a literal containing a decimal point is also a double by default).
What is the consequence of this and how does it affect casting?
The first issue to consider is the maximum and minimum value of an integer. No integer literal can be outside the range of an int, which runs from -2,147,483,648 to 2,147,483,647 (inclusive). Code that contains an integer literal outside this range will not compile.
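The range limits above can be checked directly, since Java exposes them as constants on the Integer class. A minimal sketch (the class name is my own choice for the example):

```java
public class IntLiteralRange {
    public static void main(String[] args) {
        // The int range is fixed by the language specification.
        System.out.println(Integer.MIN_VALUE); // -2147483648
        System.out.println(Integer.MAX_VALUE); // 2147483647

        // int tooBig = 2147483648;  // would not compile: integer number too large
        long fits = 2147483648L;     // fine: the L suffix makes it a long literal
        System.out.println(fits);
    }
}
```

Note that the commented-out line is rejected at compile time, not at runtime; the same value is accepted once the L suffix turns it into a long literal.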
The second issue relates to explicit and implicit casting. I will demonstrate with an example:
short s1 = 32767;
short s2 = (short) 32768;
short s3 = (short) 2147483648;
short s4 = (short) 2147483648f;
short s5 = (short) 2147483648d;
A short has a range from -32,768 to 32,767 (inclusive), therefore in:
line 1 the literal is within the range of a short, so the compiler implicitly narrows the constant to a short.
line 2 the literal is outside the range of a short (but within the range of an int) and therefore must be explicitly cast down to a short. The cast truncates the value to the low 16 bits, so information is lost.
line 3 the literal is outside the range of an integer and therefore this code will not compile.
line 4 the literal is a float (notice the f at the end of the number) and is within the range of a float; the explicit cast first converts the float to an int and then narrows the int to a short.
line 5 the literal is a double (notice the d at the end of the number) and is within the range of a double; the same two-step narrowing applies.
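The casts above do not just "round down"; they produce surprising concrete values. A runnable sketch (the compiling lines from the example, with the resulting values worked out; the class name is my own choice):

```java
public class ShortCasts {
    public static void main(String[] args) {
        short s1 = 32767;               // in range: implicit narrowing of a constant
        short s2 = (short) 32768;       // 32768 is 0x8000; truncated to 16 bits it is -32768
        short s4 = (short) 2147483648f; // float -> int saturates at 2147483647, then -> short gives -1
        short s5 = (short) 2147483648d; // same two-step narrowing for double, also -1
        System.out.println(s1 + " " + s2 + " " + s4 + " " + s5);
    }
}
```

The float and double cases show why the order of conversions matters: the floating-point value is first clamped to the int range, and only then are the low 16 bits kept.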
As you can see, working with literals in code can be tricky.