Opened 8 years ago
Closed 8 years ago
#15921 closed defect (fixed)
work around Maxima fpprintprec bug and other ARM-specific problems
Reported by:  dimpase  Owned by:  

Priority:  major  Milestone:  sage-6.3 
Component:  calculus  Keywords:  Maxima, fpprintprec, ARM 
Cc:  vbraun, snark  Merged in:  
Authors:  Dmitrii Pasechnik  Reviewers:  Peter Bruin, Volker Braun 
Report Upstream:  Reported upstream. Developers acknowledge bug.  Work issues:  
Branch:  b8fea1c (Commits, GitHub, GitLab)  Commit:  b8fea1cb11554aff202cd669342136d5a4ea084c 
Dependencies:  Stopgaps: 
Description (last modified by )
Maxima uses the CL FORMAT function wrongly, resulting in outputting the wrong number of digits for floats (one extra), contradicting its own manual on fpprintprec. In particular it outputs too many digits on x86 and x86_64, which got into Sage's doctests. As a result, doctests fail on ARM.
The fixes are to convert the results into RealField(prec), with appropriate prec (at least 54, or sometimes more). This ticket also fixes ARM-specific numerical noise stemming from various other upstream problems, such as eglibc loss of precision in lgamma.
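The printing discrepancy behind this ticket can be illustrated in plain Python (outside of Sage); this is a hedged sketch, not ticket code. Both decimal literals below denote the same IEEE 754 double, so any difference in what gets printed comes from how many significant digits the printer is asked to emit, not from the stored value:

```python
# Both decimal strings parse to the same 53-bit IEEE 754 double, so a
# printing difference must come from the printer (number of significant
# digits requested), not from the value itself.
a = float('.4980113944988315')
b = float('.49801139449883150')
assert a == b

# 17 significant decimal digits always round-trip a double exactly:
assert float(format(a, '.17g')) == a
```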
Attachments (3)
Change History (44)
comment:1 Changed 8 years ago by
 Branch set to u/dimpase/arm_fixes
 Commit set to 7c532abee02b2c1e5005da408fb6420fc2dece89
 Description modified (diff)
 Summary changed from work around Maxima fpprintprec bug to work around Maxima fpprintprec bug and other ARM-specific problems
comment:2 Changed 8 years ago by
 Branch changed from u/dimpase/arm_fixes to u/dimpase/arm_fixes_etc
 Commit changed from 7c532abee02b2c1e5005da408fb6420fc2dece89 to 079bb9af4f12892268a19f0d218ac96bd72466f4
comment:3 Changed 8 years ago by
 Cc snark added
 Status changed from new to needs_review
strangely enough, I sometimes get test timeouts in plot/plot.py
when doing make ptest
, but never for make test
. Anyhow, please review.
comment:4 Changed 8 years ago by
 Milestone changed from sage-6.2 to sage-6.3
comment:5 followup: ↓ 6 Changed 8 years ago by
"doctest" is both "doc" and "test"; for the "test" part, making the precision of computations explicit is nice, but for the "doc" part, those computations are ugly: isn't it possible to do something better-looking?
comment:6 in reply to: ↑ 5 ; followup: ↓ 7 Changed 8 years ago by
Replying to Snark:
"doctest" is both "doc" and "test"; for the "test" part, making the precision of computations explicit is nice, but for the "doc" part, those computations are ugly: isn't it possible to do something better-looking?
Do you mean to ask me to document adding things like RealField(54)?
comment:7 in reply to: ↑ 6 ; followup: ↓ 8 Changed 8 years ago by
Replying to dimpase:
Replying to Snark:
"doctest" is both "doc" and "test"; for the "test" part, making the precision of computations explicit is nice, but for the "doc" part, those computations are ugly: isn't it possible to do something better-looking?
Do you mean to ask me to document adding things like RealField(54)?
No: I mean if some poor user types "help(Gamma)" and is told in the examples to type "RealField(54)(Gamma(6))", then that is bad.
I propose to let the computation be just "Gamma(6)" and add a tolerance comment to check the answer ("# abs tol 1e-11"). That way we have a good doc and a correct test, hence a nice doctest.
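The check behind such a marker amounts to comparing the computed value against the expected one within the stated absolute tolerance. A minimal sketch of the idea (not the actual doctest-framework code):

```python
def within_abs_tol(expected, got, tol):
    """Absolute-tolerance comparison, the idea behind '# abs tol 1e-11'."""
    return abs(got - expected) <= tol

# gamma(6) = 120 exactly; a value off only in the last few bits passes:
assert within_abs_tol(120.0, 119.99999999999997, 1e-11)
# A genuinely wrong value does not:
assert not within_abs_tol(120.0, 119.9999, 1e-11)
```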
comment:8 in reply to: ↑ 7 Changed 8 years ago by
Replying to Snark:
No: I mean if some poor user types "help(Gamma)" and is told in the examples to type "RealField(54)(Gamma(6))", then that is bad.
I propose to let the computation be just "Gamma(6)" and add a tolerance comment to check the answer ("# abs tol 1e-11"). That way we have a good doc and a correct test, hence a nice doctest.
How about putting what you suggest into the Examples section, and things like RealField(54)(Gamma(6)) into the Tests section (the latter with a proper comment explaining what's going on)?
comment:9 followup: ↓ 10 Changed 8 years ago by
It seems to me that converting to a RealField is not only (possibly confusing) noise for the reader; it could also hide potential future precision bugs.
Is there any chance that the Maxima bug will get fixed soon?
Also, maybe I misunderstand, but what makes conversion of floats into a RealField with higher precision suppress numerical noise (mentioned in the ticket description)?
comment:10 in reply to: ↑ 9 ; followup: ↓ 11 Changed 8 years ago by
Replying to pbruin:
It seems to me that converting to a RealField is not only (possibly confusing) noise for the reader; it could also hide potential future precision bugs. Is there any chance that the Maxima bug will get fixed soon?
all I know about this is in the thread cited in the ticket description.
Also, maybe I misunderstand, but what makes conversion of floats into a RealField with higher precision suppress numerical noise (mentioned in the ticket description)?
probably it should not be classified as "numerical noise", but as "precision/formatting/rounding bugs". And the latter are Maxima's, and perhaps also ECL's interpretation of a particular case of undefined behaviour. Hmm, perhaps it's also some specifically Sage problem:
sage: .4980113944988315
0.498011394498831
sage: .49801139449883150
0.498011394498832
Conversion into RealField(54) appears to fix this discrepancy (I don't know why).
sage: RealField(54)(.49801139449883150)
0.498011394498832
sage: RealField(54)(.4980113944988315)
0.498011394498832
By the way, Python behaves differently, but is still confusing:
$ python
Python 2.7.5 (default, Aug 25 2013, 00:04:04)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> .4980113944988315
0.4980113944988315
>>> .49801139449883150
0.4980113944988315
>>> .49801139449883155
0.49801139449883153
>>> .49801139449883156
0.4980113944988316
>>>
In particular, a trailing 55 is printed as 53, and a trailing 56 is rounded to 60, rather than 55 being kept as just 55.
One reason against just putting ellipses is that this is a sure way to hide this problem forever...
comment:11 in reply to: ↑ 10 ; followup: ↓ 12 Changed 8 years ago by
Replying to dimpase:
Replying to pbruin:
Also, maybe I misunderstand, but what makes conversion of floats into a RealField with higher precision suppress numerical noise (mentioned in the ticket description)?

probably it should not be classified as "numerical noise", but as "precision/formatting/rounding bugs". And the latter are Maxima's, and perhaps also ECL's interpretation of a particular case of undefined behaviour. Hmm, perhaps it's also some specifically Sage problem:
sage: .4980113944988315
0.498011394498831
sage: .49801139449883150
0.498011394498832

Conversion into RealField(54) appears to fix this discrepancy (I don't know why).

sage: RealField(54)(.49801139449883150)
0.498011394498832
sage: RealField(54)(.4980113944988315)
0.498011394498832
Part of the subtlety is that the real number whose decimal expansion is .4980113944988315000000... does not have a finite binary expansion:
sage: a=.4980113944988315
sage: a.prec()
53
sage: a.str(base=2)
'0.011111110111110110101100101111000110100001100111111010'
sage: b=.49801139449883150
sage: b.prec()
54
sage: b.str(base=2)
'0.0111111101111101101011001011110001101000011001111110101'
sage: c=.498011394498831500000000000
sage: c.str(base=2)
'0.0111111101111101101011001011110001101000011001111110100111111111110011011100001100100101'
At least this is consistent with normal rounding rules.
There is another subtlety, namely that a is a RealLiteral and remembers its decimal expansion, which has an effect when you convert a into a field with higher precision. (See also this sage-devel discussion.) Instead of extending by 0, it gives the same binary expansion as c:
sage: type(a)
<type 'sage.rings.real_mpfr.RealLiteral'>
sage: RealField(c.prec())(a).str(base=2)
'0.0111111101111101101011001011110001101000011001111110100111111111110011011100001100100101'
Now I don't know if this is what is behind your observation; at least the output of elliptic_e() seems to be a RealNumber, not a RealLiteral.
One reason against just putting ellipses is that this is a sure way to hide this problem forever...
Yes, but I'd say the same holds for the RealField(prec)(0.123) solution... I tend towards # abs tol / # rel tol as the least invasive solution, although that could also hide precision bugs.
comment:12 in reply to: ↑ 11 Changed 8 years ago by
Replying to pbruin:
Now I don't know if this is what is behind your observation; at least the output of elliptic_e() seems to be a RealNumber, not a RealLiteral.
on different platforms Maxima's elliptic_e() outputs a different number of digits.
Maxima on ARM:

(%i1) elliptic_e(0.5, 0.1);
(%o1) .4980113944988315

as compared to x86_64:

(%i1) elliptic_e(0.5, 0.1);
(%o1) .49801139449883153
(and Maxima with different Lisps does different things on the same platform, too; in some cases just 14 digits...)
Actually, as you mention, on x86_64 the number of digits is too big for RealField(53). I have no idea whether this is handled gracefully by Sage, or not.
Yes, but I'd say the same holds for the RealField(prec)(0.123) solution... I tend towards # abs tol / # rel tol as the least invasive solution, although that could also hide precision bugs.
Well, it is a mess anyway. I'd be glad to have it off my shoulders one way or another.
comment:13 Changed 8 years ago by
Could it be solved by setting Maxima's fpprintprec option to the correct value depending on the platform? I think RealField(53) prints numbers with 15 significant digits. In Maxima with ECL on x86_64 (sage -maxima), I get
(%i19) fpprintprec: 13;
(%o19) 13
(%i20) elliptic_e(0.5, 0.1);
(%o20) .49801139449883
(%i21) fpprintprec: 14;
(%o21) 14
(%i22) elliptic_e(0.5, 0.1);
(%o22) .498011394498832
(%i23) fpprintprec: 15;
(%o23) 15
(%i24) elliptic_e(0.5, 0.1);
(%o24) .4980113944988315
(%i25) fpprintprec: 16;
(%o25) 16
(%i26) elliptic_e(0.5, 0.1);
(%o26) .49801139449883153
so fpprintprec should presumably be set to 14. Maybe we could do something like first setting fpprintprec to 15, counting with how many digits Maxima prints 1/3.0, and then adjusting fpprintprec by how much this differs from 15.
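The calibration idea above can be sketched outside of Maxima. Here the Maxima reply is a hypothetical sample string, and the digit counting mirrors the `30 - s.count('3')` adjustment used in the patch below:

```python
def calibrated_fpprintprec(one_third_output, target=15):
    """Hypothetical sketch: set fpprintprec to the target of 15, then
    adjust it by however many digits Maxima's printing of 1/3.0
    deviates from that target (i.e. 30 - s.count('3'))."""
    shown = one_third_output.count('3')  # digits actually printed
    return 2 * target - shown

# If Maxima prints one digit too many (16 threes), dial the setting to 14;
# if it prints exactly 15 threes, leave it at 15.
assert calibrated_fpprintprec('0.3333333333333333') == 14
assert calibrated_fpprintprec('0.333333333333333') == 15
```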
comment:14 Changed 8 years ago by
This seems to correctly set fpprintprec; on my x86_64 system it leads to lots of doctest failures where the number of digits printed is now one less than before.

src/sage/interfaces/maxima.py

diff --git a/src/sage/interfaces/maxima.py b/src/sage/interfaces/maxima.py
index 7603881..7a36885 100644
--- a/src/sage/interfaces/maxima.py
+++ b/src/sage/interfaces/maxima.py
@@ -136,6 +136,8 @@ http://maxima.sourceforge.net/docs/intromax/intromax.html.
 
     sage: maxima.eval('fpprec : 100')
     '100'
+    sage: maxima.eval('fpprintprec : 100')
+    '100'
     sage: a.bfloat()
     8.20121933088197564152489730020812442785204843859314941221237124017312418754011041266612384955016056b1
 
@@ -362,6 +364,8 @@ Obtaining digits of `\pi`::
 
     sage: maxima.eval('fpprec : 100')
     '100'
+    sage: maxima.eval('fpprintprec : 100')
+    '100'
     sage: maxima(pi).bfloat()
     3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068b0
 
@@ -594,6 +598,12 @@ class Maxima(MaximaAbstract, Expect):
         # Remove limit on the max heapsize (since otherwise it defaults
         # to 256MB with ECL).
         self._sendline(":lisp (ext:set-limit 'ext:heap-size 0)")
+
+        # Adjust the precision (see Trac #15921)
+        self._sendline('fpprintprec : 15;')
+        s = self._eval_line('1/3.0;')
+        self._sendline('fpprintprec : {};'.format(30 - s.count('3')))
+
         self._eval_line('0;')
 
     def __reduce__(self):
src/sage/interfaces/maxima_lib.py

diff --git a/src/sage/interfaces/maxima_lib.py b/src/sage/interfaces/maxima_lib.py
index 643ac0f..5c94370 100644
--- a/src/sage/interfaces/maxima_lib.py
+++ b/src/sage/interfaces/maxima_lib.py
@@ -331,6 +331,12 @@ class MaximaLib(MaximaAbstract):
         self.__init_code = init_code
 
         MaximaAbstract.__init__(self,"maxima_lib")
+
+        # Adjust the precision (see Trac #15921)
+        self._eval_line('fpprintprec : 15')
+        s = self._eval_line('1/3.0;')
+        self._eval_line('fpprintprec : {}'.format(30 - s.count('3')))
+
         self.__seq = 0
 
     def _coerce_from_special_method(self, x):
src/sage/symbolic/constants.py

diff --git a/src/sage/symbolic/constants.py b/src/sage/symbolic/constants.py
index 6f76834..5917fba 100644
--- a/src/sage/symbolic/constants.py
+++ b/src/sage/symbolic/constants.py
@@ -160,5 +160,7 @@ EXAMPLES: Arithmetic with constants
     sage: pim = maxima(pi)
     sage: maxima.eval('fpprec : 100')
     '100'
+    sage: maxima.eval('fpprintprec: 100')
+    '100'
     sage: pim.bfloat()
     3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068b0
Do you think this approach would be worth trying?
comment:15 Changed 8 years ago by
Let me try this on ARM. Could you please add your patch as an attachment to the ticket?
comment:16 Changed 8 years ago by
If we take this approach, we should probably have two separate tickets, one for the Maxima bug and one for ARM-specific issues. Also, the duplicated code in this patch should be in maxima_abstract.py.
comment:17 followup: ↓ 18 Changed 8 years ago by
I'm starting to think that we shouldn't try to limit fpprintprec after all, because this will only introduce unnecessary rounding errors. A more robust solution would be to leave it at 0 and to set maxfpprintprec (which is 16 by default) to 20 or some other sufficiently high value, so that the output precision is only controlled by the Lisp implementation (and the platform). At least on x86_64 with ECL, both the length and the least significant digits are unpredictable: dividing some powers of 10 by 3 gives a result that looks like

3.333333333333333493e-5
3.333333333333333222e-4
0.003333333333333333
0.03333333333333333
0.3333333333333333
3.3333333333333335
33.333333333333336
333.3333333333333
3333.3333333333335
33333.333333333336
333333.3333333333
3333333.3333333335
3.3333333333333332093e+7
3.3333333333333331347e+8
The '...335' at the end of 3.3333333333333335 is clearly wrong, but limiting the precision would turn that into ...34, which is even worse. I have the feeling that we should just live with the above values and insert # abs tol and # rel tol in doctests where appropriate.
After doing the above computation with three Lisp variants (ECL, GCL and SBCL) I have the impression that GCL has the best floating-point accuracy, ECL the worst, and SBCL is in between. I don't know whether this is caused by differences in the floating-point internals or in the Lisp format function.
comment:18 in reply to: ↑ 17 Changed 8 years ago by
Replying to pbruin:
I'm starting to think that we shouldn't try to limit fpprintprec after all, because this will only introduce unnecessary rounding errors. A more robust solution would be to leave it at 0 and to set maxfpprintprec (which is 16 by default) to 20 or some other sufficiently high value, so that the output precision is only controlled by the Lisp implementation (and the platform). At least on x86_64 with ECL, both the length and the least significant digits are unpredictable: dividing some powers of 10 by 3 gives a result that looks like

3.333333333333333493e-5
3.333333333333333222e-4
0.003333333333333333
0.03333333333333333
0.3333333333333333
3.3333333333333335
33.333333333333336
333.3333333333333
3333.3333333333335
33333.333333333336
333333.3333333333
3333333.3333333335
3.3333333333333332093e+7
3.3333333333333331347e+8
how do you get this output? Is it printed by Sage, or by Maxima, or by ECL?
For me ECL prints something like
>> (float (/ (expt 10. 3) 3))
333.33334
even though
>> (float-precision (float (/ (expt 10. 3) 3)))
24
comment:19 Changed 8 years ago by
This is using Maxima, which outputs floating point numbers using the Lisp format function. On ARM the commands (the same ones I used to get the above output on x86_64) and output look as follows:
pbruin@node3-eth0:~/src/sage$ ./sage -maxima
...
Maxima 5.29.1 http://maxima.sourceforge.net
using Lisp ECL 12.12.1
...
(%i1) maxfpprintprec: 100;
(%o1) 100
(%i2) for i: -4 thru 9 do print(10^i/3.0);
3.3333333333333337e-5
3.333333333333333e-4
0.003333333333333333
0.03333333333333333
0.3333333333333333
3.3333333333333335
33.333333333333336
333.3333333333333
3333.3333333333335
33333.333333333336
333333.3333333333
3333333.3333333335
3.333333333333333e+7
3.333333333333333e+8
(%o2) done
Changed 8 years ago by
set maxfpprintprec to 21 (better alternative to fpprintprec.patch; passes tests)
comment:20 Changed 8 years ago by
On both ARM and x86_64, the only noticeable effect of setting maxfpprintprec: 21 (see maxfpprintprec.patch) is that Maxima now prints floating point numbers in (-1, 1) starting with (-)0. instead of just (-). This is of course fairly limited, but still nice. In a standalone Maxima compiled with GCL or SBCL, it actually yields one extra digit of accuracy (meaning that the last digit is in general wrong, but it is not rounded away, which would corrupt the previous digit). I could not find an example where this happens on ECL, such that Sage would benefit from it.
It seems that we still need a separate fix for the following doctest failures on ARM (note that the last two are identical, so we could remove the first one):
File "src/sage/symbolic/expression.pyx", line 7340, in sage.symbolic.expression.Expression.gamma
Failed example:
    SR(10.0r).gamma()
Expected:
    362880.0
Got:
    362880.00000000047

File "src/sage/functions/bessel.py", line 1618, in sage.functions.bessel._Bessel.__call__
Failed example:
    _Bessel(20,algorithm='maxima')(5.0)
Expected:
    27.703300521289436e-12
Got:
    27.70330052128944e-12

File "src/sage/functions/other.py", line 663, in sage.functions.other.Function_gamma.__init__
Failed example:
    gamma1(float(6))
Expected:
    120.0
Got:
    119.99999999999997

File "src/sage/functions/special.py", line 705, in sage.functions.special.elliptic_j.EllipticE
Failed example:
    elliptic_e(0.5, 0.1)
Expected:
    0.498011394498832
Got:
    0.498011394498831

File "src/sage/functions/special.py", line 715, in sage.functions.special.elliptic_j.EllipticE.__init__
Failed example:
    elliptic_e(0.5, 0.1)
Expected:
    0.498011394498832
Got:
    0.498011394498831
Patch coming soon. (Sticking to patches instead of a Git branch to make it easier to sort out what to do here and what to do on a separate ticket.)
comment:21 Changed 8 years ago by
As a mostly irrelevant side remark, here is a curious error message from the doctest framework:
File "src/sage/functions/special.py", line 705, in sage.functions.special.elliptic_j.EllipticE
Failed example:
    elliptic_e(0.5, 0.1)  # abs tol 1e-15
Expected:
    0.498011394498832
Got:
    0.498011394498831
Tolerance exceeded:
    1e-15 > 1e-15
I'll just change it to 2e-15, then...
comment:22 followup: ↓ 23 Changed 8 years ago by
See attachment. I also discovered (by comparing with PARI) that the output of the Bessel function was in fact less correct (on both ARM and x86_64) than the doctest suggested. I changed the output to match the actual value instead of Maxima's approximation, and added a suitable rel tol marker. Is there a general rule saying whether one should resolve such discrepancies like this?
comment:23 in reply to: ↑ 22 Changed 8 years ago by
Replying to pbruin:
See attachment. I also discovered (by comparing with PARI) that the output of the Bessel function was in fact less correct (on both ARM and x86_64) than the doctest suggested. I changed the output to match the actual value instead of Maxima's approximation, and added a suitable rel tol marker. Is there a general rule saying whether one should resolve such discrepancies like this?
perhaps, report upstream? By the way, do you know whether Maxima uses double-precision CL floats in those kinds of functions?
comment:24 Changed 8 years ago by
It seems so; the following is based on bessel.lisp (case of real and positive order and argument) and reproduces the numerical inaccuracy (enter to_lisp(); in Maxima and then input this):
(let ((order 20) (arg 5))
  (multiple-value-bind (n alpha) (floor (float order))
    (let ((jvals (make-array (1+ n) :element-type 'flonum)))
      (slatec:dbesj (abs (float arg)) alpha (1+ n) jvals 0)
      (aref jvals n))))
The Lisp function slatec:dbesj was apparently automatically translated from Fortran.
comment:25 Changed 8 years ago by
It doesn't matter so much; the default implementation of the user-accessible Bessel J-function is OK:
sage: bessel_J(20, 5).N(100)
2.7703300521289416873940187368e-11
sage: pari(20).besselj(5, precision=100)
2.77033005212894168739401873681733200777 E-11
comment:26 followup: ↓ 27 Changed 8 years ago by
 Reviewers set to Peter Bruin
I redid a few tests after upgrading to Maxima 5.33.0 (#13973), and nothing really changed, except that the change from .123... to 0.123... is also done (upstream, differently than in maxfpprintprec.patch) by this upgrade. May I propose that we just apply the changes from special_functions_precision.patch to solve the precision problems in this ticket?
I'm happy with the other changes you made, even though they don't really seem to be related to ARM, Maxima or precision. Two trivial remarks:
- The message "Doctesting files changed since the last git commit" doesn't seem to have a full stop at the end when I run sage -t --new.
- Does the show_default() doctest in graphics.py really take a long time?
comment:27 in reply to: ↑ 26 ; followup: ↓ 28 Changed 8 years ago by
Replying to pbruin:
I redid a few tests after upgrading to Maxima 5.33.0 (#13973), and nothing really changed, except that the change from .123... to 0.123... is also done (upstream, differently than in maxfpprintprec.patch) by this upgrade. May I propose that we just apply the changes from special_functions_precision.patch to solve the precision problems in this ticket?
I wonder about the # For ARM: rel tol 2e-15 tag. How does it work? Or is this just a comment?
I'm happy with the other changes you made, even though they don't really seem to be related to ARM, Maxima or precision. Two trivial remarks:
- The message "Doctesting files changed since the last git commit" doesn't seem to have a full stop at the end when I run sage -t --new.
- Does the show_default() doctest in graphics.py really take a long time?
I did timings of tests on ARM and added # long time to those above certain thresholds.
It's not 100% clear what changes you propose. Can you do a reviewer's git branch (or a patch)?
comment:28 in reply to: ↑ 27 Changed 8 years ago by
Replying to dimpase:
Replying to pbruin:
I redid a few tests after upgrading to Maxima 5.33.0 (#13973), and nothing really changed, except that the change from .123... to 0.123... is also done (upstream, differently than in maxfpprintprec.patch) by this upgrade. May I propose that we just apply the changes from special_functions_precision.patch to solve the precision problems in this ticket?

I wonder about the # For ARM: rel tol 2e-15 tag. How does it work? Or is this just a comment?
The rel tol 2e-15 is a magic marker telling the doctest framework not to complain about numerical errors smaller than this tolerance; see Special markup to influence tests. The comment "for ARM" is just a comment to clarify that the error can be this large on ARM. Normally the relative error in basic functions like the gamma function should be roughly bounded by 2e-16 (since this is approximately 2^-52 and doubles have 53 bits of precision) and hence be invisible when printing with 16 decimal digits of precision.
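The relative-tolerance check amounts to the following; a minimal sketch of the idea, not the actual doctest-framework code:

```python
def within_rel_tol(expected, got, tol):
    """Relative-tolerance comparison, the idea behind '# rel tol 2e-15'."""
    return abs(got - expected) <= tol * abs(expected)

# The ARM result for gamma(6) differs from 120 by only a few ulps:
assert within_rel_tol(120.0, 119.99999999999997, 2e-15)
# A genuinely wrong value still fails:
assert not within_rel_tol(120.0, 119.9999, 2e-15)

# 2^-52 is about 2.2e-16, the relative granularity of a 53-bit double:
assert 2.0 ** -52 < 3e-16
```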
I'm happy with the other changes you made, even though they don't really seem to be related to ARM, Maxima or precision. Two trivial remarks:
- The message "Doctesting files changed since the last git commit" doesn't seem to have a full stop at the end when I run sage -t --new.
- Does the show_default() doctest in graphics.py really take a long time?

I did timings of tests on ARM and added # long time to those above certain thresholds.
I wonder why it takes so long on ARM; when testing show_default() on x86_64 it only seems to take 316 microseconds the first time (%timeit -n 1 -r 1 show_default()) and an average of 11.4 microseconds when running it many times.
It's not 100% clear what changes you propose. Can you do a reviewer's git branch (or a patch)?
OK, coming soon.
comment:29 followup: ↓ 30 Changed 8 years ago by
 Branch changed from u/dimpase/arm_fixes_etc to u/pbruin/15921arm_fixes
 Commit changed from 079bb9af4f12892268a19f0d218ac96bd72466f4 to b8fea1cb11554aff202cd669342136d5a4ea084c
Here is a reviewer branch, let me know if you think this is a good approach.
I am actually not too happy with making these error tolerances too large in the case of the gamma function; I would say this is such an elementary function that it should really be correct up to the last one or two bits. Hence it would also be reasonable to decide that these are really bugs in whichever library this is implemented in on ARM, and hence should keep giving doctest failures on ARM.
New commits:
a7b7bf2  Numerical noise patches for ARM. As well, added # long time to many
079bb9a  replacing HG with git in examples
b8fea1c  Trac 15921: fix some special function values and precision
comment:30 in reply to: ↑ 29 ; followup: ↓ 31 Changed 8 years ago by
Replying to pbruin:
Here is a reviewer branch, let me know if you think this is a good approach.
I am actually not too happy with making these error tolerances too large in the case of the gamma function; I would say this is such an elementary function that it should really be correct up to the last one or two bits. Hence it would also be reasonable to decide that these are really bugs in whichever library this is implemented in on ARM, and hence should keep giving doctest failures on ARM.
The problem is that then it is hard to run patchbots on an ARM system, as they will always return failures...
New commits:
a7b7bf2 Numerical noise patches for ARM. As well, added # long time to many
079bb9a replacing HG with git in examples
b8fea1c Trac 15921: fix some special function values and precision
comment:31 in reply to: ↑ 30 Changed 8 years ago by
Replying to dimpase:
Replying to pbruin:
I am actually not too happy with making these error tolerances too large in the case of the gamma function; I would say this is such an elementary function that it should really be correct up to the last one or two bits. Hence it would also be reasonable to decide that these are really bugs in whichever library this is implemented in on ARM, and hence should keep giving doctest failures on ARM.
The problem is that then it is hard to run patchbots on an ARM system, as they will always return failures...
If I understand correctly (from your comments on this ticket and here) the failure arises because of a bad eglibc implementation of lgamma (log gamma), so it should really be fixed upstream.
Now I just noticed on http://www.eglibc.org/ that eglibc is no longer maintained. Is it recommended to use the normal glibc on ARM nowadays, and does the bug also exist there?
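The amplification effect can be sketched with Python's math module (note this uses CPython's own gamma implementation, not necessarily libm's tgamma, so the exact last digits here are platform-dependent; the ticket's 119.99999999999997 value comes from eglibc on ARM):

```python
import math

# Computing gamma(6) directly vs. via exp(lgamma(6)): the exponential
# amplifies the small error in lgamma, which is how a weak lgamma
# (as in eglibc on ARM) produces 119.99999999999997 instead of 120.0.
direct = math.gamma(6.0)
via_lgamma = math.exp(math.lgamma(6.0))

assert abs(direct - 120.0) < 1e-9
assert abs(via_lgamma - 120.0) < 1e-9  # close, but possibly not exact
```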
comment:32 Changed 8 years ago by
The error is actually 2 ulp, so it is correct "up to the last two bits".
Edit: referring to the gamma(float(6)) example.
comment:33 followup: ↓ 34 Changed 8 years ago by
Also, Dima's arm box in Oxford uses Ubuntu EGLIBC 2.15-0ubuntu20.2
comment:34 in reply to: ↑ 33 Changed 8 years ago by
Replying to vbraun:
Also, Dima's arm box in Oxford uses Ubuntu EGLIBC 2.15-0ubuntu20.2
I'm presently building glibc-2.19 on this arm box, to see if any good comes out of it.
By the way, even Ubuntu 14 still uses EGLIBC.
comment:35 Changed 8 years ago by
glibc 2.19 provides a proper implementation of tgamma on ARM. With it I get tgamma(6.0)=120.000000..., while still exp(lgamma(6.0))=119.99999999999997157829, just as with eglibc.
On the ARM box glibc 2.19 is installed in /home/dimpase/sage. I link with the static /home/dimpase/sage/lib/libm.a.
The sources are in /home/dimpase/sage/glibc-2.19.
comment:36 followups: ↓ 37 ↓ 38 Changed 8 years ago by
What is the upstream status for the Maxima fpprintprec bug? If there is a fix then we should just go with upstream instead of increasing our tolerances, imho. It's not cool of Maxima to print too many digits.
As for the eglibc issue, I'm in favor of increasing the tolerance there. At the end of the day that seems to be a valid implementation choice as for how to implement tgamma, and 2 ulp isn't the end of the world.
comment:37 in reply to: ↑ 36 Changed 8 years ago by
Replying to vbraun:
What is the upstream status for the Maxima fpprintprec bug? If there is a fix then we should just go with upstream instead of increasing our tolerances, imho. It's not cool of Maxima to print too many digits.
I reminded Robert Dodier about it recently, and he was going to have a look this week. No further word from him yet.
As for the eglibc issue, I'm in favor of increasing the tolerance there. At the end of the day that seems to be a valid implementation choice as for how to implement tgamma, and 2 ulp isn't the end of the world.
comment:38 in reply to: ↑ 36 ; followup: ↓ 39 Changed 8 years ago by
Replying to vbraun:
What is the upstream status for the Maxima fpprintprec bug? If there is a fix then we should just go with upstream instead of increasing our tolerances, imho. It's not cool of Maxima to print too many digits.
I don't think a change in upstream will help us. Besides, I see printing too many digits as a feature in this case, since printing 17 decimal digits guarantees exact conversion of floats with 53 bits of precision.
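The 17-digit round-trip guarantee mentioned above is easy to check in plain Python (a quick sketch, not ticket code):

```python
import random

# 17 significant decimal digits uniquely determine a 53-bit double,
# so printing with '.17g' and parsing back never loses bits.
random.seed(0)
for _ in range(1000):
    x = random.uniform(-1e12, 1e12)
    assert float(format(x, '.17g')) == x
```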
What explains the difference between ARM and x86_64 in the cases of interest for us is just floating point noise. There are only two Maxima-related affected doctests. The first one is
sage: from sage.functions.bessel import _Bessel
sage: _Bessel(20,algorithm='maxima')(5.0)
27.70330052128944e-12   # ARM
27.703300521289436e-12  # x86_64
In this case, the number of digits does differ, but the correct answer is 2.770330052128941687...e-11, so the last 1-2 digits are wrong in both cases.
Another issue here is that this example has two digits before the decimal point, instead of one. This is apparently an ECL issue, since Maxima on GCL or SBCL only prints one digit before the point.
The second relevant doctest is
sage: elliptic_e(0.5, 0.1)
0.498011394498831  # ARM
0.498011394498832  # x86_64
Here the number of digits is the same, and the correct answer is 0.49801139449883153311546...
So I think for these two doctests we should just specify a tolerance.
As for the eglibc issue, I'm in favor of increasing the tolerance there. At the end of the day that seems to be a valid implementation choice as for how to implement tgamma, and 2 ulp isn't the end of the world.
For the gamma(10) doctest the relative error is roughly 1.3e-15 (9 ulp). OK, it isn't the end of the world, and it doesn't seem to increase for larger arguments, but one would expect the gamma function to have a smaller error (which is the case for other architectures, according to the glibc Info manual).
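The quoted figures can be recomputed directly from the values in the doctest failure above (math.ulp requires Python 3.9 or later; the "Got" value is the one reported on ARM in this ticket):

```python
import math

expected = 362880.0       # gamma(10) = 9!
got = 362880.00000000047  # value observed on ARM in the doctest failure

# Relative error, roughly 1.3e-15 as stated:
rel_err = abs(got - expected) / expected
assert 1e-15 < rel_err < 2e-15

# Expressed in units in the last place of the expected value,
# it is on the order of 9 ulp:
ulps = abs(got - expected) / math.ulp(expected)
assert 5 < ulps < 12
```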
comment:39 in reply to: ↑ 38 Changed 8 years ago by
Replying to pbruin:
What explains the difference between ARM and x86_64 in the cases of interest for us is just floating point noise.
perhaps I should clarify the issue that is behind the gamma-function related things.
One can have the same C code for x86_64 and for ARM which will produce different FP results, if one uses the long double type. The latter is 16 bytes on x86_64, but only 8 bytes on ARM. I'm 99% sure this is the problem behind the eglibc issue.
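The size difference is visible via ctypes (results are ABI-dependent: 16 bytes on x86_64 Linux, where long double is the padded 80-bit x87 type; 8 bytes on 32-bit ARM, where it is plain IEEE double; 12 bytes on 32-bit x86):

```python
import ctypes

# sizeof(long double) varies by ABI, so identical C code can carry
# different intermediate precision on x86_64 vs. ARM.
size = ctypes.sizeof(ctypes.c_longdouble)
assert size in (8, 12, 16)
print("sizeof(long double) =", size)
```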
comment:40 Changed 8 years ago by
 Reviewers changed from Peter Bruin to Peter Bruin, Volker Braun
 Status changed from needs_review to positive_review
Agree. In general we should probably err on the lenient side for RDF evaluation; it's a speed vs. accuracy tradeoff.
comment:41 Changed 8 years ago by
 Branch changed from u/pbruin/15921arm_fixes to b8fea1cb11554aff202cd669342136d5a4ea084c
 Resolution set to fixed
 Status changed from positive_review to closed
New commits:
Merge branch 'develop' of trac.sagemath.org:sage into arm_fixes
oops, wrong commit/branch...