#1743 Decimal operation failure?

saltnlight5 Thu 29 Dec 2011

Hi,

I got a pretty strange error while doing some decimal operation.

fansh> n := 106751.0 / 365.0
sys::Err: java.lang.ArithmeticException: Non-terminating decimal expansion; no exact representable decimal result.
java.math.BigDecimal.divide (BigDecimal.java:1616)
fan.sys.FanDecimal.div (FanDecimal.java:81)

Can someone shed some light on this?

Thanks

~ Zemian

DanielFath Thu 29 Dec 2011

It's relatively simple. The division of the two numbers results in a number whose decimal representation isn't finite. For instance:

n := 1.0/3.0 //java.lang.ArithmeticException 

because n = 0.3333333333333333333333....(ad infinitum).
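In plain Java terms, BigDecimal.divide throws unless you supply a precision; a minimal illustration (class name is ours, for demonstration):

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class DivideDemo {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal("3");
        try {
            one.divide(three); // throws: 0.333... never terminates
        } catch (ArithmeticException e) {
            System.out.println("exact division failed: " + e.getMessage());
        }
        // Supplying a MathContext makes the result representable
        System.out.println(one.divide(three, new MathContext(16, RoundingMode.HALF_UP)));
        // prints 0.3333333333333333
    }
}
```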

dsav Thu 29 Dec 2011

So, this works as intended?

Yuri Strot Thu 29 Dec 2011

I think it might be intended for Java's BigDecimal, but it's not very good for the default floating-point type in Fantom.

saltnlight5 Thu 29 Dec 2011

Ah, I see it now. The behavior is the same as in Java:

BigDecimal a = new BigDecimal(106751.0);
BigDecimal b = new BigDecimal(365.0);
System.out.println(a.divide(b)); // ArithmeticException here!

I also read that Fantom's Float is the same as Java's double, so I can get a result without an error like this:

n := 106751.0F / 365.0F

So now, I have a couple of observations and a question:

1) As a programmer, I am aware that binary cannot represent some decimal values exactly, but when working with a programming language in general, I would like the default behavior to at least round the result and follow the principle of least surprise. It's very inconvenient, if not surprising, to have a language that fails on 1.0/3.0.

2) I have worked with Groovy a bit, and it also uses BigDecimal as its default decimal literal type, yet it does not have this unexpected error. It seems Groovy actually catches the ArithmeticException and performs some sensible rounding automatically when this happens. See divideImpl
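For illustration, the catch-and-round idea might look like this in Java. This is only a sketch: Groovy's actual divideImpl chooses its own scale and rounding, and the 10-digit precision here is an arbitrary choice.

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class GroovyStyleDivide {
    // Try exact division first; fall back to a fixed precision
    // (10 significant digits, chosen arbitrarily) when the
    // decimal expansion does not terminate.
    static BigDecimal div(BigDecimal a, BigDecimal b) {
        try {
            return a.divide(b);
        } catch (ArithmeticException e) {
            return a.divide(b, new MathContext(10, RoundingMode.HALF_UP));
        }
    }

    public static void main(String[] args) {
        System.out.println(div(new BigDecimal("106751.0"), new BigDecimal("365.0"))); // 292.4684932
        System.out.println(div(new BigDecimal("1.0"), new BigDecimal("3.0")));        // 0.3333333333
    }
}
```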

3) In Fantom's sys::Decimal, I don't see a rounding method, so how would I do the equivalent of this Java?

BigDecimal a = new BigDecimal(106751.0);
BigDecimal b = new BigDecimal(365.0);
System.out.println(a.divide(b, 4, BigDecimal.ROUND_UP));

andy Thu 29 Dec 2011

Rounding division results seems like a sensible idea - I have run into that before - it is a bit annoying.

DanielFath Thu 29 Dec 2011

There still needs to be some mechanism with which you can control rounding. Not all rounding is appropriate in all cases.

brian Mon 2 Jan 2012

Decimal should have same semantics as Java's BigDecimal I think.

I'm wondering if we should actually force the d or f suffix in all literals to make it clear what kind of number you are working with.

helium Tue 3 Jan 2012

Decimal should have same semantics as Java's BigDecimal I think.

Why? It's a Fantom type and Fantom runs on .Net, JavaScript, ... .

DanielFath Thu 12 Jan 2012

The way I see it, Fantom can either raise an error or produce a meaningful result (e.g. fractions, floats, etc.). Each has its pros and cons.

Errors will confuse newcomers. A meaningful result will be nicer for newcomers but might breach the contract, e.g. it might introduce rounding errors in the case of floats.

jodastephen Fri 13 Jan 2012

The example by the OP is not acceptable behaviour for Fantom. The Groovy approach looks suitable as an alternative.

brian Fri 13 Jan 2012

For now I just view sys::Decimal as pretty much exactly what BigDecimal is. If there is a method missing or you'd like to see some behavior changed, then please post a patch. I don't ever use this class and don't have much of an opinion on it.

go4 Sat 14 Jan 2012

I don't ever use this class and don't have much of an opinion on it.

Why is the default type Decimal and not Float?

Xan Sat 14 Jan 2012

There is another possibility: make a class for representing decimals with infinitely repeating digits. If we "know" that a division has a periodic repetition, a computer can know it too. In decimal base, there are only three possibilities:

  • exact numbers
  • purely periodic numbers, e.g. 2.4444444...
  • mixed periodic numbers, e.g. 2.56777777...

By a theorem of Gauss, if we divide by a prime p, the periodic number 1/p has a maximum period of length p-1.

On the other hand, if the denominator is only divisible by 2 or 5, the result is exact; if it contains 2 or 5 and any other prime, it is mixed periodic; and if it contains only primes different from 2 and 5, it is a purely periodic number.

So it is easy to see whether a division is periodic or not, and so such a class could be made.
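The terminating/periodic test described above can be sketched in Java: reduce the fraction, strip factors of 2 and 5 from the denominator, and check what is left. Class and method names are illustrative.

```java
import java.math.BigInteger;

public class PeriodicCheck {
    // A reduced fraction p/q has a terminating decimal expansion
    // iff q has no prime factors other than 2 and 5.
    static boolean terminates(long num, long den) {
        long g = BigInteger.valueOf(num).gcd(BigInteger.valueOf(den)).longValue();
        long q = den / g;
        while (q % 2 == 0) q /= 2;
        while (q % 5 == 0) q /= 5;
        return q == 1;
    }

    public static void main(String[] args) {
        System.out.println(terminates(1, 4));        // true:  0.25
        System.out.println(terminates(1, 3));        // false: 0.333...
        System.out.println(terminates(106751, 365)); // false: 365 = 5 * 73
    }
}
```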

KevinKelley Sat 14 Jan 2012

Why is the default type Decimal and not Float?

Indeed. A full Scheme-y numeric tower with Rationals and Complex and so on, while it seems like it'd be kind of nice, is mostly beyond common need. Using Decimal as the default for unadorned floating-point literals, as a nod toward extra precision, doesn't actually buy us much, and it means that the by-far most common case (I think) is more awkward and has gotchas for newcomers.

Porting code from other languages, a lot of stuff comes across fairly cleanly, but not floating-point numbers. You've got to manually add fs, all over the place.

I don't know how hard it'd be to flip the switch, but I'd be in favor.

brian Mon 16 Jan 2012

Why is the default type Decimal and not Float?

At the time we made the switch it seemed reasonable, but in retrospect it might have been a mistake.

I'm sort of thinking for now that we should just always be explicit with an "f" or "d" suffix. Then that leaves the issue open for the future (and it would be the transition plan no matter what).

Does anyone object to requiring an "f" or "d" suffix on all float/decimal literals?

jodastephen Mon 16 Jan 2012

Requiring f or d would be a mistake. Decimal is the correct default type for a modern practical language, as the vast majority of developers don't really understand floating point numbers.

The decimal requirement does not require infinite decimal precision, it simply requires a sensible default. Groovy has already shown the way.

brian Mon 16 Jan 2012

Requiring f or d would be a mistake.

The issue at hand isn't really to pick a default, but rather to decide whether it is sensible to force code to be explicit and therefore more readable. Decimal might be a reasonable default for some operations, but it can be a bad default in terms of performance. More importantly, my concern is that all C/Java-like languages default literals to floats, so having Float not be the default is quite non-obvious to most existing programmers. So it seems like a reasonable trade-off to force "d" or "f" to be explicit in code for now. Then it is obvious when reading code what you are doing. And it leaves the future open to making decimal, or perhaps a proper numeric tower, the default when the suffix is omitted.

brian Mon 30 Jan 2012

Stephen Viles (not sure of his fantom.org handle?) provided a patch to make Decimal.divide work like Groovy. changeset.

I'm still going to add a warning in the next build so we can get to a point where "d" and "f" suffixes are required. I do believe this is the best way to make code most readable and handle the transition for programmers coming from the C/Java world.

qualidafial Mon 30 Jan 2012

I agree with Stephen. I would rather not be required to add an f or d suffix to every floating-point number. That's annoying as hell.

This may sound counter-intuitive, but I don't think performance considerations are a good enough reason to pollute the language syntax. Performance characteristics change all the time due to changes/improvements in the underlying platform.

Java reflection is lightning fast now, compared to only a few years ago; advances in garbage collection mean it is now accepted to create thousands or even millions of short-lived objects without affecting performance.

For years in JavaScript, the best practice for string concatenation has been to use Array.push() followed by Array.join(""). Now improvements in JS engine performance mean that using simple + for concatenation is just as fast or faster than other alternatives.

Let's not drag the language syntax into a performance debate. Let's keep the default syntax using the principle of least surprise (BigDecimal for floating point), and just fix this one problem brought up by saltnlight5.

andy Mon 30 Jan 2012

We made the switch to the Decimal default a few years ago. And after thousands of lines of code using Ints and Floats, I can definitely say that having to postfix numbers with f just wasn't an issue - in fact, it feels more "right" now - seeing Decimals with no postfix looks like a compiler error to me, so I add the d anyway ;)

I'm not sure I have a strong opinion about what should be the default type (and don't think I did before when we made that change). But I do know, having the experience of the f change, it just wasn't a big issue. I picked it up right away, and never gave it a second thought.

So +1 for mandatory d and f postfixes. And I believe we should leave it like that and not re-introduce a default type - but at least we leave that door open.

KevinKelley Mon 30 Jan 2012

epsilon := 1e-15f made me think about this, the other day -- not that I don't understand, but just that the literal looks so much like it wants to be a subtraction.

I guess what I probably wish for is something like Google Go's ideal constants, "Numeric constants represent values of arbitrary precision and do not overflow." But there's an issue on the Go bug-tracker mentioning 0.1+0.2!=0.3, so I guess they're still working on the details. :-)
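The 0.1 + 0.2 != 0.3 surprise mentioned above is easy to reproduce in Java, since it is inherent to binary doubles:

```java
public class FloatGotcha {
    public static void main(String[] args) {
        // Neither 0.1 nor 0.2 is exactly representable in binary,
        // so the rounding errors show up after one addition:
        System.out.println(0.1 + 0.2 == 0.3); // false
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
    }
}
```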

qualidafial's right with the Hotspot love; a lot of things that used to be performance issues now get optimized away. Still, there are some things that naturally need to be done with floats -- you don't want arbitrary precision for things that only have a real-world number of significant digits anyway, and if you want to (say) run a multiple-thousand-point convolution against every point in your database...

That patch (a few posts up) looks like it fixes the issue at the top of this thread, leaving the question of the required f and d suffixes. I wonder if we even need them -- would it work to leave literals as type Decimal, but just get rid of the error on assignment of a Decimal to a Float? The only loss of precision is what doesn't fit in 15 significant digits and/or numbers greater than 1e308.

I'm remembering some LISP-y discussion about "floating-point contagion" -- any expression involving a machine float, always yields a float; introducing a floating point number says you're discarding the exactness property. If we wanted to follow that philosophy, then we could say that Float is known to be an inexact but pretty accurate approximation, so it's okay to assign anything to one. Going the other way... Decimal can hold anything that Float does, but assigning a Float to a Decimal could be an error -- because it implies precision that wasn't there.

I dunno. Not a big deal, long as stuff works.

StephenViles Tue 31 Jan 2012

The only loss of precision is what doesn't fit in 15 significant digits and/or numbers greater than 1e308.

It's not possible to exactly represent 0.1 (or any other negative power of ten) in binary floating-point. See Why is 0.1 not 0.1? in the IEEE 754 FAQ.
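The point is easy to see in Java: the BigDecimal(double) constructor exposes the binary value that the literal 0.1 actually stores, while BigDecimal.valueOf goes through the decimal string representation:

```java
import java.math.BigDecimal;

public class PointOne {
    public static void main(String[] args) {
        // The double literal 0.1 already carries binary rounding error,
        // and the BigDecimal(double) constructor preserves it exactly:
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // BigDecimal.valueOf goes through Double.toString and recovers "0.1":
        System.out.println(BigDecimal.valueOf(0.1)); // prints 0.1
    }
}
```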

Still there are some things that naturally need to be done with floats

Calculation of monetary values needs to be done with Decimals, not Floats. As http://floating-point-gui.de/formats/exact/ explains:

While binary floating-point numbers are better for computers to work with, and usually good enough for humans, sometimes they are just not appropriate. Sometimes, the numbers really must add up to the last bit, and no technical excuses are acceptable - usually when the calculations involve money.

I like Fantom's first-class support for Decimal calculations, and I'd like the language to help me keep my possibly-slower-but-exact calculations (using Decimal) separate from my near-enough-is-good-enough calculations (using Float). So I would not find floating-point contagion useful.

KevinKelley Tue 31 Jan 2012

It's not possible to exactly represent 0.1

You're arguing something I didn't say: 0b0.0001100110011... represents it just fine, carried out to 15 or so decimal places. No numbering system can represent everything.

Of course there's a need for fixed-point Decimal, for some uses. But they're not ideal for everything; nor is Float.

What do you do when you want a square root, or a sine? There's a lot of stuff that Decimal doesn't do.

This is not a new discussion. Floating-point is etched into the hardware of a billion microprocessors for a reason; it handles the common cases, fast.

It seems odd to me that the default here is the opposite of what I usually do. It would be nice if the literal syntax were more convenient than it is. I think we could have that: arbitrary precision until you bring in a radical, and as much precision as is available then. Trying to assign from a Float to a declared Decimal should continue to be an error, requiring a conversion.

DanielFath Tue 31 Jan 2012

There is something nice about using safer but slower operations. As the old adage goes:

  1. Make it work,
  2. Make it right,
  3. Make it fast.

If we are going to make Decimal the default then it needs proper rounding. However, if Fantom wants to appeal to Java refugees, then 0.1 being a Decimal will cause some upset. Other than that, Decimal (with rounding) would be my preferred default.

dobesv Wed 1 Feb 2012

This is definitely an interesting debate. I can see how having decimal numbers work "properly" by using the arbitrary-precision Decimal/BigDecimal implementation is very nice. But it will, of course, be slower.

I would probably suggest against silently "downgrading" decimals to floats when they are used together - most people are familiar with systems that generally auto-upgrade to the highest precision representation involved. i.e. mixing float and double yields a double, mixing int and long yields a long, mixing int and float yields a float. If mixing float and decimal yielded a float, I'd be pretty annoyed.
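The auto-upgrade behavior described above matches Java's usual numeric widening, which a tiny snippet can confirm (illustrative only):

```java
public class Promotion {
    public static void main(String[] args) {
        // Java widens mixed arithmetic to the "larger" operand type:
        long l = 1 + 2L;        // int + long    -> long
        float f = 1 + 0.5f;     // int + float   -> float
        double d = 0.5f + 0.5;  // float + double -> double
        System.out.println(l + " " + f + " " + d); // 3 1.5 1.0
    }
}
```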

Requiring people to explicitly show whether a number is decimal or float by suffixing it would be helpful in that it makes people think about precision. It could be a bit of clutter, though in practice I think you could require the suffix only where there is ambiguity.

For example, if you have an arithmetic operation involving some decimal literals where all the variables involved (including the variable/parameter the value is stored into) are the same decimal type, allow the suffix to be omitted. If there's a mix, then require a suffix.

With that approach you have the benefit of having people give some bit of thought about precision and also encourage people to declare their constants ahead of time rather than stick them in the middle of their mathematical expressions.

It may also be smart to require that conversions between IEEE floating point and a Decimal type be done explicitly.

brian Wed 1 Feb 2012

Fantom is not a scripting language where we want numbers to be changing implementation under the covers. It is a systems language where performance and specificity of types is a core philosophy. In practical applications you use Decimal for currency and Float for measurements and graphics. You would never want numbers to "magically" turn into Decimals if you were performing graphical coordinate calculations.

Every mainstream C-like language uses an unsuffixed literal to mean float, so a different default is extremely confusing for newcomers, who are the target audience for Fantom.

Andy and I exclusively use floats since our entire business is based on processing sensor data. Having the "f" suffix doesn't bother me; in fact, when looking at Java code I find the lack of a suffix wrong-looking. So I really don't think those using Decimals will find a "d" suffix overburdensome.

dobesv Fri 3 Feb 2012

Isn't the "d" suffix used to indicate a double values normally?

http://www.dotnetperls.com/suffix

Maybe it's not necessary to follow the same convention, of course. Just something that popped into my head about using d.

StephenViles Fri 17 Feb 2012
