What is the safest way to divide two IEEE 754 floating point numbers?

In my case the language is JavaScript, but I guess this isn't important. The goal is to avoid the normal floating point pitfalls.

I've read that one could use a "correction factor" (cf), e.g. 10 raised to some power, for instance 10^10, like so:

(a * cf) / (b * cf)

But I'm not sure whether this makes any difference for division.

Incidentally, I've already looked at the other floating point posts on Stack Overflow and I've still not found a single post on how to divide two floating point numbers. If the answer is that there is no difference between the solutions for working around floating point issues when adding and when dividing, then just answer that please.

Edit:

I've been asked in the comments which pitfalls I'm referring to, so I thought I'd add a quick note here as well for the people who don't read the comments:

When adding 0.1 and 0.2, you would expect to get 0.3, but with floating point arithmetic you get 0.30000000000000004 (at least in JavaScript). This is just one example of a common pitfall.

The above issue is discussed many times here on Stack Overflow, but I don't know what can happen when dividing and whether it differs from the pitfalls found when adding or multiplying. It might be that there are no risks, in which case that would be a perfectly good answer.
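To make that concrete, here is roughly what I mean in JavaScript (cf = 1e10 is just an arbitrary choice on my part):

// The pitfall I mean, and the "correction factor" expression I am asking about.
console.log(0.1 + 0.2);           // 0.30000000000000004, not the expected 0.3

const a = 0.3;
const b = 0.1;
const cf = 1e10;                  // arbitrary "correction factor"
console.log(a / b);               // plain division
console.log((a * cf) / (b * cf)); // does the scaling make any difference?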

asked Jul 17, 2014 at 8:20 by Thomas Watson; edited Nov 2, 2016 at 23:16 by halfer
  • 6 What specific pitfall are you alluding to? What's your exact problem? In the general case there's nothing better than / to divide two js numbers to get another js number. – Denys Séguret Commented Jul 17, 2014 at 8:21
  • 4 Avoiding the "normal floating-point pitfalls" is far too wide a goal, and likely unachievable. What specific problems are you trying to avoid? What specifically is going wrong with a / b? Adding so-called "correction factors" out of sheer superstition is a fairly horrible idea. :-) – Mark Dickinson Commented Jul 17, 2014 at 8:56
  • 4 just divide them already! – Alnitak Commented Jul 17, 2014 at 9:16
  • 2 Nope - you'll get exactly the same problems as when adding and multiplying - there's nothing "special" or "different" about division. The only mitigation for all "floating point errors" is to conceptually decouple the values stored from the values presented (i.e. use .toFixed(n) when outputting). – Alnitak Commented Jul 17, 2014 at 9:19
  • 2 @HexedAgain: Yes, really. a/b doesn’t wipe out someone’s funds “because infinity”. Planes don’t fall out of the sky “because infinity”. Infinity is not an exceptional value in floating-point, and does not cause bugs on its own. Software that misbehaves when presented with infinity is buggy software, full stop. Consider also that infinity is a strictly better result than any other result that could be returned; when signed integer arithmetic overflows, the result is undefined, but I rarely see people claiming on SO that “a*b could be disastrous!”. – Stephen Canon Commented Jul 18, 2014 at 9:48

3 Answers

The safest way is to simply divide them. Any prescaling will either do nothing, or increase rounding error, or cause overflow or underflow.

If you prescale by a power of two, you may cause overflow or underflow, but otherwise it will make no difference to the result.

If you prescale by any other number, you will introduce additional rounding steps on the multiplications, which may lead to increased rounding error on the division result.

If you simply divide, the result will be the closest representable number to the ratio of the two inputs.
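As a rough JavaScript sketch of the three cases above (the particular values and scale factors are just examples I picked):

// Compare plain division with power-of-two and power-of-ten prescaling.
const a = 2;
const b = 1 / 3;

console.log(a / b);                   // plain division: closest double to the true ratio
console.log((a * 1024) / (b * 1024)); // power-of-two scale: exact, same result (barring overflow/underflow)
console.log((a * 10) / (b * 10));     // power-of-ten scale: extra rounding, may change the last bit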

IEEE 754 64-bit floating point numbers are incredibly precise. A difference of one part in almost 10^16 can be represented.

There are a few operations, such as floor and exact comparison, that make even extremely low significance bits matter. If you have been reading about floating point pitfalls you should have already seen examples. Avoid those. Round your output to an appropriate number of decimal places. Be careful adding numbers of very different magnitude.
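For instance, in JavaScript the low-order bits only become visible through exact comparison or truncation, and rounding at output hides them:

// Exact comparison and floor expose the low-order bits:
console.log(0.1 + 0.2 === 0.3);            // false
console.log(Math.floor((0.1 + 0.7) * 10)); // 7, not the 8 you might expect

// Rounding only for display hides them again:
console.log((0.1 + 0.2).toFixed(1));       // "0.3"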

The following program demonstrates the effects of using each power of 10 from 10 through 1e20 as scale factor. Most get the same result as not multiplying, 6.0, which is also the rational number arithmetic result. Some get a slightly larger result.

You can experiment with different division problems by changing the initializers for a and b. The program prints their exact values, after rounding to double.

import java.math.BigDecimal;

public class Test {
  public static void main(String[] args) {
    double mult = 10;
    double a = 2;
    double b = 1.0 / 3.0;
    // Print the exact values of a and b after rounding to double.
    System.out.println("a=" + new BigDecimal(a));
    System.out.println("b=" + new BigDecimal(b));
    System.out.println("No multiplier result="+(a/b));
    // Try each power of 10 from 10 through 1e20 as the scale factor.
    for (int i = 0; i < 20; i++) {
      System.out.println("mult="+mult + " result="+((a * mult) / (b * mult)));
      mult *= 10;
    }
  }
}

Output:

a=2
b=0.333333333333333314829616256247390992939472198486328125
No multiplier result=6.0
mult=10.0 result=6.000000000000001
mult=100.0 result=6.000000000000001
mult=1000.0 result=6.0
mult=10000.0 result=6.000000000000001
mult=100000.0 result=6.000000000000001
mult=1000000.0 result=6.0
mult=1.0E7 result=6.000000000000001
mult=1.0E8 result=6.0

Floating point division will produce exactly the same "pitfalls" as addition or multiplication operations, and no amount of pre-scaling will fix it - the end result is the end result and it's the internal representation of that in IEEE-754 that causes the "problem".

The solution is to completely forget about these precision issues during the calculations themselves, and to perform rounding as late as possible, i.e. only when displaying the results of the calculation, at the point at which the number is converted to a string using the .toFixed() function provided precisely for that purpose.
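For example (a minimal sketch; two decimal places is just a display choice):

const a = 0.1 + 0.2;            // stored internally as 0.30000000000000004
const b = 0.1;

const result = a / b;           // keep full precision during the calculation
console.log(result.toFixed(2)); // round only when displaying: "3.00"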

.toFixed() is not a good solution for dividing floating point numbers. Using JavaScript, try 4.11 / 100 and you will be surprised.

4.11 / 100 = 0.041100000000000005

Not all browsers get the same results. The right solution is to convert the floats to integers:

parseInt(4.11 * Math.pow(10, 10)) / (100 * Math.pow(10, 10)) = 0.0411
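Wrapped up as a helper, this is roughly the idea (divideScaled is a name I made up, Math.trunc stands in for parseInt, and the fixed 1e10 scale is arbitrary, so treat it as a sketch rather than a general fix):

// Sketch of the integer-scaling idea: scale both operands, truncate the
// numerator to an integer, then divide. A large enough input or scale will
// still overflow or lose precision, so this is not a general solution.
function divideScaled(a, b, scale = 1e10) {
  return Math.trunc(a * scale) / (b * scale);
}

console.log(4.11 / 100);              // 0.041100000000000005
console.log(divideScaled(4.11, 100)); // 0.0411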
