Pensando en programar

Why is decimal math so funky?

Question: This one comes from reddit's r/javascript and is a recurring one: Why is decimal math so funky?
Additional question: Why is NaN != NaN?

Unrelated to JavaScript

The answer is actually pretty simple and it's mostly unrelated to JavaScript.

The thing is that JavaScript has only one number type: the 64-bit binary floating-point format defined by the IEEE 754 standard. Once you're limited to doing floating-point arithmetic in binary, a number of limitations appear; most notably, many decimal fractions such as 0.1 have no exact binary representation (just as 1/3 has no finite decimal one), so the nearest representable value is stored instead and tiny rounding errors creep in. These limitations have already been described nicely enough elsewhere, so I won't repeat them here. There's an additional excellent link on the subject as well.
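To make this concrete, here is a minimal sketch you can paste into any JavaScript console. The nearlyEqual helper is just an illustrative name for this sketch, not a built-in:

```javascript
// Classic example: neither 0.1 nor 0.2 has an exact binary
// representation, so their sum picks up a tiny rounding error.
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// Printing more digits reveals the value actually stored for 0.1:
console.log((0.1).toFixed(20));  // 0.10000000000000000555

// The usual workaround is to compare within a tolerance; an absolute
// epsilon like this is fine for values near 1.
const nearlyEqual = (a, b, eps = Number.EPSILON) => Math.abs(a - b) < eps;
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```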

We could wonder why we have only one number type, and why that one in particular, but I guess it once again boils down to trying to keep the language simple, both for programmers and for implementers.

NaN

Note that this also answers why NaN != NaN, since the IEEE 754 standard also defines the behaviour of NaN, at least in this respect. The underlying reason why NaN != NaN is simple logic: knowing that X is not a number and that Y is also not a number tells you nothing about whether X and Y are equal or different. We could make NaN == NaN throw some sort of exception, or return undefined, but:

  1. allowing a simple comparison to throw an exception seems hardly practical in any situation
  2. allowing a comparison to return anything other than true or false would rather hilariously complicate boolean logic
  3. undefined evaluates to false in a boolean context anyway, and returning false is less risky than returning true.
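Returning false is indeed what happens, and it has a practical upshot: NaN is the only JavaScript value that is not equal to itself, which is why dedicated checks like Number.isNaN exist. A short sketch (the isReallyNaN helper is just an illustrative name):

```javascript
const x = NaN;

console.log(x == x);   // false — NaN compares unequal to everything,
console.log(x === x);  // false — including itself, per IEEE 754

// Since equality can't detect NaN, use the dedicated check instead:
console.log(Number.isNaN(x));    // true
console.log(Number.isNaN("x"));  // false — no coercion, unlike global isNaN

// Self-inequality is also a classic (if cryptic) NaN test:
const isReallyNaN = v => v !== v; // true only for NaN
console.log(isReallyNaN(x));      // true

// Object.is is the one exception: it treats NaN as equal to itself.
console.log(Object.is(NaN, NaN)); // true
```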