Opened 6 years ago
Last modified 6 years ago
#17066 needs_work defect
always simplify hypergeometric() when it's a polynomial
Reported by: | rws | Owned by: | |
---|---|---|---|
Priority: | major | Milestone: | sage-6.4 |
Component: | symbolics | Keywords: | special, functions, pFq, evaluation |
Cc: | | Merged in: | |
Authors: | Ralf Stephan | Reviewers: | |
Report Upstream: | N/A | Work issues: | |
Branch: | u/rws/work_around_mpmath_problem_with_hypergeometric___zeroes (Commits) | Commit: | 0e0fef65a021a9b0a9db30a2bc6d300ede68bf14 |
Dependencies: | | Stopgaps: | |
Description
See https://groups.google.com/forum/?hl=en#!topic/sage-devel/8z2HyH5Y8Dc with Fredrik's assessment:
"For Sage, fixing the problem is actually trivial: when the hypergeometric function is a polynomial (and at least when the inputs are exact), don't call mpmath; just evaluate the polynomial directly and then call .n() on the result."
Change History (26)
comment:1 Changed 6 years ago by
- Branch set to u/rws/work_around_mpmath_problem_with_hypergeometric___zeroes
comment:2 Changed 6 years ago by
- Commit set to b6f0fbc3983b520c28330e49ec38d79aadf1796f
- Status changed from new to needs_review
- Summary changed from work around mpmath problem with hypergeometric() zeroes to always try to simplify hypergeometric()
comment:3 Changed 6 years ago by
- Status changed from needs_review to needs_work
Maybe document that the problem from ask.sagemath is fixed, too?
hypergeometric([-2,-1],[2],-1).n(100)
was the example Dima posted on sage-devel...
comment:4 Changed 6 years ago by
The "solution" is tongue-in-cheek, anyway, at least its application should be restricted in order to not burn CPU needlessly with higher orders of the function.
comment:5 Changed 6 years ago by
Perhaps, but it wouldn't be hard to restrict it properly, right?
comment:6 follow-up: ↓ 8 Changed 6 years ago by
This is related to the following ask.sagemath question as well.
comment:7 Changed 6 years ago by
- Commit changed from b6f0fbc3983b520c28330e49ec38d79aadf1796f to dff348c1c81579de07e88485e05d5e40b0d31942
comment:8 in reply to: ↑ 6 Changed 6 years ago by
Replying to kcrisman:
This is related to the following ask.sagemath question as well.
Indeed. While what you're saying there about decidability is certainly right, we can bite off a good chunk using known closed forms. I have now fixed a bug in closed_form(), and it looks like using that Sage function (provided by Fredrik with the hypergeometric code) instead of Maxima is not only faster but also gives better results:
sage: hypergeometric([3/2],[1/2,-1/2],5)  # using closed_form()
-2*sqrt(5)*sinh(2*sqrt(5)) - 19*cosh(2*sqrt(5))
sage: from sage.interfaces.maxima_lib import maxima_lib, max_to_sr
sage: max_to_sr(maxima_lib.hgfred([3/2],[1/2,-1/2],5).ecl())
-4*(5*5^(1/4))*sqrt(1/5)*sqrt(pi)*sqrt(sqrt(5)/pi)*cosh(2*sqrt(5)) - 2*5^(3/4)*sqrt(pi)*bessel_I(-3/2, 2*sqrt(5))
The latter, while numerically correct, is difficult to simplify further.
There are several doctests failing where I need to decide how to fix or rewrite them.
comment:9 Changed 6 years ago by
- Commit changed from dff348c1c81579de07e88485e05d5e40b0d31942 to a2253833d3edd9cd6fa9ccfc9509592ad2b9fad4
Branch pushed to git repo; I updated commit sha1. New commits:
a225383 | 17066: restrict to nonpositive first arg; fix doctests
comment:10 Changed 6 years ago by
- Commit changed from a2253833d3edd9cd6fa9ccfc9509592ad2b9fad4 to 66ed0c0586f2ec1b78a28e76a1f34f258b6f4f96
Branch pushed to git repo; I updated commit sha1. New commits:
66ed0c0 | 17066: more doctests
comment:11 Changed 6 years ago by
- Status changed from needs_work to needs_review
Had to restrict the simplification, after all, as advised.
comment:12 Changed 6 years ago by
Yeah, make sure nothing is wrong!
I know that I won't be able to finish the review of this immediately. But I do notice that all the hold=True will be quite confusing in the doctests. I recommend that we add some more doctests that test the methods in question with non-simplifying stuff, and then make it clear to the reader why we are all of a sudden using hold=True. On the plus side, hold=True seems to have worked great!
comment:13 Changed 6 years ago by
- Commit changed from 66ed0c0586f2ec1b78a28e76a1f34f258b6f4f96 to 55daf81080f46c54429e629b65e8049acfdb6d00
Branch pushed to git repo; I updated commit sha1. New commits:
55daf81 | 17066: explain why we add hold=True to some doctests
comment:14 Changed 6 years ago by
- Summary changed from always try to simplify hypergeometric() to always simplify hypergeometric() when it's a polynomial
comment:15 Changed 6 years ago by
Does this take #10169 comment 4 into account?
comment:16 follow-ups: ↓ 17 ↓ 23 Changed 6 years ago by
Expanding hypergeometric functions in polynomial closed form before evaluating them can lead to numerical cancellation. Compare for example
sage: hypergeometric([1,-100],[3], 0.7)
0.0278923450568346
sage: hypergeometric([1,-100],[3], Reals(1000)(0.7)).n()
0.0278923450568346
sage: sage.functions.hypergeometric.closed_form(hypergeometric([1,-100],[3],x)).subs(x=0.7)
-376.859920391941
Won't that be a problem with this ticket? (I didn't read beyond the ticket's description, so please ignore my comment if it is beside the point.)
comment:17 in reply to: ↑ 16 ; follow-up: ↓ 19 Changed 6 years ago by
Replying to mmezzarobba:
Won't that be a problem with this ticket?
It appears the only solutions are (apart from relying on the user to supply needed precision):
A) restrict the range of parameters where the ticket is applied
B) do not simplify; just fix the evaluation problem
C) declare the problem to be in n()
I mean really, n() makes bold claims:
Returns a numerical approximation of an object "x" with at least "prec" bits (or decimal "digits") of precision.
but won't deliver when x is a polynomial?
comment:18 Changed 6 years ago by
When it's a polynomial of low degree and the inputs are not too large, you could do everything with rational arithmetic and convert to a correctly rounded floating-point number in the end. Or you could use interval arithmetic.
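A rough illustration of the rational-arithmetic route, using the example from comment:16 (plain Python; the coefficient recurrence is written out by hand here and is not code from this branch):

```python
from fractions import Fraction

# Coefficients c_k of the terminating series 2F1(1, -100; 3; x) = sum_k c_k * x^k.
coeffs, c = [], Fraction(1)
for k in range(101):
    coeffs.append(c)
    c *= Fraction((1 + k) * (-100 + k), (3 + k) * (k + 1))

# Everything stays rational; only the final result is rounded to a float.
x = Fraction(7, 10)
exact = sum(ck * x**k for k, ck in enumerate(coeffs))
print(float(exact))   # about 0.0278923450568346, agreeing with the stable evaluation above
```

When the inputs themselves are already inexact, interval arithmetic (e.g. Sage's RealIntervalField or mpmath's iv context) plays the same role of certifying the final rounding.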
comment:19 in reply to: ↑ 17 Changed 6 years ago by
Replying to rws:
It appears the only solutions are A) restrict the range of parameters where the ticket is applied B) do not simplify; just fix the evaluation problem C) rely on the user to supply needed precision; add warning
If (as you seem to say in your post to sage-devel) the result of the ticket is that hypergeometric([1,-100],[3],x) returns a polynomial while hypergeometric([1,-100],[3],0.7) still calls a numerically stable evaluation algorithm, then I don't think there is a problem. (Ideally, hypergeometric([1,-100],[3],x,hold=True).subs(x=0.7) should also call the numerically stable evaluation algorithm, though.)
What I was concerned about was that the implementation of n() for an "atomic-looking" expression with dedicated numerical evaluation code would end up using a numerically unstable formula internally without adapting the precision or warning the user. I'm not saying such a behaviour is an absolute no-go, but I do think it is something we should avoid when possible.
comment:20 Changed 6 years ago by
It's not that I don't see your point. The real fix for this cannot be provided by hypergeometric, though, but must be done in subs, if at all.
comment:21 Changed 6 years ago by
Since mpmath uses a numerically stable algorithm but fails on zero, you could potentially also try calling mpmath, catch the exception, and then check symbolically if it's exactly zero.
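A sketch of that fallback strategy (a hypothetical wrapper; sympy's hyperexpand stands in for Sage's closed_form(), and this is not code from this branch):

```python
import mpmath
import sympy

def hyper_n(a, b, z, prec=53):
    """Numerically evaluate pFq(a; b; z), falling back to a symbolic
    zero check when mpmath's adaptive algorithm gives up."""
    mpmath.mp.prec = prec
    try:
        return mpmath.hyper(a, b, z)
    except Exception:
        # mpmath cannot confirm an exact zero by raising the working
        # precision alone, so decide it symbolically before giving up.
        closed = sympy.hyperexpand(sympy.hyper(a, b, z))
        if sympy.simplify(closed) == 0:
            return mpmath.mpf(0)
        raise

# Prints 0.0 whether or not mpmath copes with the exact zero itself.
print(hyper_n([-2, -1], [2], -1))
```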
comment:22 Changed 6 years ago by
- Commit changed from 55daf81080f46c54429e629b65e8049acfdb6d00 to 0e0fef65a021a9b0a9db30a2bc6d300ede68bf14
comment:23 in reply to: ↑ 16 ; follow-up: ↓ 24 Changed 6 years ago by
Expanding hypergeometric functions in polynomial closed form before evaluating them can lead to numerical cancellation. Compare for example
sage: hypergeometric([1,-100],[3], 0.7)
0.0278923450568346
sage: hypergeometric([1,-100],[3], Reals(1000)(0.7)).n()
0.0278923450568346
sage: sage.functions.hypergeometric.closed_form(hypergeometric([1,-100],[3],x)).subs(x=0.7)
-376.859920391941
Won't that be a problem with this ticket? (I didn't read beyond the ticket's description, so please ignore my comment if it is beside the point.)
Just so I follow, what you are saying is that the polynomial (rational function, I guess) given will be correct, but that substituting floating point numbers is likely to be inaccurate unless carried out to a very high degree of precision? I'm torn here because, on the one hand, presumably that could happen with any rational function (there are standard examples given in a lot of calculus books), but on the other hand we "expect" hg to return a hg function and not a polynomial.
The real fix for this cannot be provided by hypergeometric though but must be done in subs, if at all.
Well, it could provide a good reason not to "always simplify hypergeometric when it's a polynomial"... I didn't realize that people meant rational function when they said polynomial. Is it possible to really only simplify when it's a true polynomial, or is that a very unlikely event?
comment:24 in reply to: ↑ 23 ; follow-up: ↓ 25 Changed 6 years ago by
Replying to kcrisman:
Just so I follow, what you are saying is that the polynomial (rational function, I guess)
No, it is really a polynomial—but one of degree 100, with large coefficients that alternate in sign.
given will be correct, but that substituting floating point numbers is likely to be inaccurate unless carried out to a very high degree of precision?
Precisely. (Well, not necessarily a very high precision, but probably something like 80-100 guard bits in my example.)
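To illustrate the guard-bit estimate, a rough mpmath experiment on the expanded polynomial (just a sketch; the exact threshold depends on the parameters):

```python
import mpmath
from fractions import Fraction

# Exact coefficients of the degree-100 polynomial 2F1(1, -100; 3; x).
coeffs, c = [], Fraction(1)
for k in range(101):
    coeffs.append(c)
    c *= Fraction((1 + k) * (-100 + k), (3 + k) * (k + 1))

def horner_at_07(prec):
    """Evaluate the expanded polynomial at 0.7 with prec working bits."""
    with mpmath.workprec(prec):
        x = mpmath.mpf(7) / 10
        acc = mpmath.mpf(0)
        for ck in reversed(coeffs):
            acc = acc * x + mpmath.mpf(ck.numerator) / ck.denominator
        return acc

print(horner_at_07(53))    # garbage of order a few hundred: the cancellation from comment:16
print(horner_at_07(160))   # ~0.0278923450568346 once ~100 guard bits are added
```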
comment:25 in reply to: ↑ 24 Changed 6 years ago by
Just so I follow, what you are saying is that the polynomial (rational function, I guess)
No, it is really a polynomial—but one of degree 100, with large coefficients that alternate in sign.
Oh, I misread some of the output in the example I tested - hard to see where the dividing signs go :-) So it's the alternation that is the cancellation problem, got it.
given will be correct, but that substituting floating point numbers is likely to be inaccurate unless carried out to a very high degree of precision?
Precisely. (Well, not necessarily a very high precision, but probably something like 80-100 guard bits in my example.)
Anyway, the default 53 bits yields madness, as we see in comment:16. Not good, but I'm not sure what the "right" answer is; I'm leaning toward saying we should not do this automatically, then, but we should make the simplification a lot easier to find (which is probably already done in this ticket). There are likely as many people using hg functions for numerics as for symbolics, right?
comment:26 Changed 6 years ago by
- Status changed from needs_review to needs_work
sage -t --long src/sage/symbolic/expression.pyx  # 4 doctests failed
sage -t --long src/sage/functions/bessel.py  # 1 doctest failed
sage -t --long src/sage/functions/hypergeometric.py  # 102 doctests failed
This patch just changes hypergeometric() to always invoke Maxima_lib.hgfred().
New commits:
17066: always try to simplify