I just started reading about the inaccuracies with writing decimals in binary. Then I was given this challenge:
You are given two functions which take float inputs a and b and return the value
a ** 2 - b ** 2. There are two ways to calculate this: (a + b) * (a - b) and a ** 2 - b ** 2. Both calculations would give the same result if not performed in binary. Write a program that checks which of these functions gives the more accurate result.
Here is my thought process:
The difference between the two functions is that the second method performs two multiplications while the first performs only one. I don’t know how multiplication and addition/subtraction are done internally in Python, but both methods perform a total of 3 mathematical operations, so if the second method gives a less accurate answer, that suggests multiplication introduces more inaccuracy.
This could be done by comparing both methods’ results to a totally accurate answer, but I don’t have one unless I calculate the correct answer by hand. Does anyone know another method that could be used to solve this problem?
>Solution :
It doesn’t have to do with "binary" so much as with the fact that floating-point formats (regardless of base) have limited precision. For example, consider a decimal calculator that can only work with 4 significant digits. Say a = 999 and b = 998. Then a+b is computed exactly as 1997, and a-b exactly as 1, and their product is also exact: 1997.
But, the other way, you’re in trouble at once trying to compute a**2 = 999*999 = 998001. The calculator only stores the first 4 digits, so rounds to 9980e2. b**2 similarly rounds to 9960e2. Their difference is 20e2, and 2000 is not 1997.
Do read the Python docs appendix on floating-point issues. As it suggests, if you want exact results you can often use the fractions or decimal modules instead.
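One way to set up the comparison the question asks for is to use fractions.Fraction as the exact reference: converting a float to Fraction is lossless, so exact rational arithmetic gives the true value of a ** 2 - b ** 2 to compare each float result against. A sketch (the function names and test values here are my own, not part of the challenge):

```python
from fractions import Fraction

def factored(a, b):
    return (a + b) * (a - b)

def squared(a, b):
    return a ** 2 - b ** 2

def errors(a, b):
    # Fraction(float) converts exactly, so this subtraction is the
    # true mathematical value of a**2 - b**2 for these float inputs.
    exact = Fraction(a) ** 2 - Fraction(b) ** 2
    return (abs(Fraction(factored(a, b)) - exact),
            abs(Fraction(squared(a, b)) - exact))

err_factored, err_squared = errors(1e8, 1e8 - 1)
print(err_factored, err_squared)   # 0 1 -- the factored form is exact here
```

Note that neither form wins for every input; this just measures which is more accurate for the particular a and b you feed in.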