katox Mon 1 Apr 2013
The as operator seems to cause problems in type inference rules:
fansh> Decimal a := 2d
2
fansh> Decimal b := 3d
3
fansh> b-a
1
fansh> (Decimal)b - (Decimal)a
1
fansh> (b as Decimal) - (a as Decimal)
ERROR(6): No operator method found: sys::Decimal? - sys::Decimal?
fansh> (Decimal?) b - (Decimal?) a
1
brian Mon 1 Apr 2013
Promoted to ticket #2122 and assigned to brian
brian Mon 20 May 2013
Ticket cancelled
Actually, I think this is working correctly. I treat certain special expressions, such as a null literal or an as expression, as known nullable, and I do not allow them to be used where a non-nullable is expected. By definition we expect those expressions to sometimes return null, or you wouldn't be using them. It works the same with any method:
static Void main() { a := 2d; foo(a as Decimal) }
static Decimal foo(Decimal d) { d }
Invalid args foo(sys::Decimal), not (sys::Decimal?)
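If you genuinely expect the cast to succeed, one way to satisfy the checker is a sketch like the following, using Fantom's ?: elvis operator to coalesce the nullable result back to a non-nullable value (the 0d fallback is purely illustrative):

```
static Void main()
{
  a := 2d
  // `a as Decimal` is statically typed Decimal?, so supply a
  // non-nullable fallback before passing it to foo
  foo((a as Decimal) ?: 0d)
}

static Decimal foo(Decimal d) { d }
```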
katox Thu 23 May 2013
I don't really get the difference between the last two. Both should subtract Decimal? types. In both cases a smart enough compiler could infer that the operands can't be null in this particular case.
What is the rationale?
brian Thu 23 May 2013
It's the same reason we don't allow this:
(b as Decimal) - null
We know that Decimal.subtract requires a non-nullable parameter, so the compiler flags that as a known error. By the same token, by using the as operator you are telling the compiler that, by definition, you expect (b as Decimal) to sometimes be null, in which case the compiler flags it as a compile-time error. Any expression which is known to be null is not allowed to be used where a non-nullable is expected.
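In other words, the rule above can be sketched as (variable names here are illustrative):

```
Decimal b := 3d
x := b - null            // compile-time error: null literal is known nullable
y := b - (b as Decimal)  // compile-time error: `as` yields Decimal?
z := b - (Decimal)b      // ok: the cast's target type is non-nullable
```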
katox Fri 24 May 2013
"Any expression which is known to be null is not allowed" - that makes sense, and it is the same principle used for other things in the compiler.
However, there is no way a non-nullable Decimal could turn into null when cast with as Decimal. It would have to be a completely different (nullable or non-nullable) type, and the compiler should know that isn't the case. It should probably let the code compile when unsure, because it might work.
It is also quite easy to write code that compiles and is obviously wrong - but one needs to use a normal cast, not the as operator:
class Main
{
Void main()
{
Decimal? b := 3d
Decimal x := 2d
Decimal? y := null
echo( (b as Decimal) - x ) // -> 1
echo( (b as Decimal) - (Decimal?) x ) // -> 1
echo( (b as Decimal) - (x as Decimal) ) // compiler error
echo( (b as Decimal) - y ) // compiled, runtime error
echo( (b as Decimal) - (Decimal?) y ) // compiled, runtime error
echo( (b as Decimal) - (Decimal) y ) // compiled, runtime error
}
}
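For completeness, a sketch of handling the nullable operands explicitly with ?:, which makes the runtime-error cases above well-defined (the 0d fallback is illustrative only):

```
class Main
{
  Void main()
  {
    Decimal? b := 3d
    Decimal? y := null
    // coalesce each nullable operand to a non-nullable Decimal
    echo( ((b as Decimal) ?: 0d) - (y ?: 0d) )  // -> 3
  }
}
```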
brian Fri 24 May 2013
echo( (b as Decimal) - (Decimal?) x ) // -> 1
I don't think I've actually ever used a nullable cast, but you could definitely make the case that it should cause a compile-time error where a non-nullable is expected. If there were community agreement on making that a new error, I wouldn't mind adding it.