#10550 closed defect (duplicate)
integration not working
Reported by: | mariah | Owned by: | burcin
---|---|---|---
Priority: | minor | Milestone: | sage-duplicate/invalid/wontfix
Component: | calculus | Keywords: |
Cc: | kcrisman, burcin | Merged in: |
Authors: | | Reviewers: | Karl-Dieter Crisman, Burcin Erocal
Report Upstream: | N/A | Work issues: |
Branch: | | Commit: |
Dependencies: | | Stopgaps: |
Description
```
sage: g = abs(x^2-1)^(-2/3)
sage: for i in range(3,15):
....:     RR(g.integrate(x, -10^i, 10^i))
....:
12.0193735393553
12.3400570107857
12.4880961536322
12.5548875258538
12.6007931754768
12.5769884999380
12.6035401600491
12.4014006867096
11.1318879504315
7.82165950876205
6.48361691536824
6.32209292067043
```
Values should increase as g is a positive function!
Change History (19)
comment:1 Changed 10 years ago by
comment:2 Changed 10 years ago by
It might also be worth noting that integrating over the entire real line is special-cased by integrate_numerical, so we get:
```
sage: RR(g.integrate(x, -Infinity, Infinity))
12.6178509639583
```
This is a little low; Mathematica gives 12.6195, but it's hard to improve the precision. This is an intrinsically hard sort of problem to automate (but we can clearly improve on the current behavior).
comment:3 Changed 10 years ago by
The reason this is *broken* is that the implementation of the method _evalf_ of DefiniteIntegral in sage/symbolic/integration/integral.py completely ignores the parent and the precision of a and b. This function has to be rewritten to take the precision of the inputs into account:
```python
def _evalf_(self, f, x, a, b, parent=None):
    from sage.gsl.integration import numerical_integral
    return numerical_integral(f, a, b)[0]
```
comment:4 Changed 10 years ago by
My personal opinion is that it should work like this:
```
sage: RR(g.integrate(x, -10^15, 10^15))
error message: you have to specify some params to numerical_integral when constructing the definite integral
sage: RR(g.integrate(x, -10^15, 10^15, max_points=10^5))
correct answer
```
Or maybe not. I hate this.
comment:5 Changed 10 years ago by
Mike Hansen: One option -- just rewrite _evalf_ to call mpmath, which is not perfect, but way better than GSL.
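A rough sketch of what that could look like, written here against mpmath directly rather than inside Sage. The helper name, the `singularities` parameter, and the guard bits are assumptions of this sketch, not Sage API, and the split points ±1 still have to come from somewhere:

```python
import mpmath as mp

def evalf_definite_integral(f, a, b, prec=53, singularities=()):
    """Hypothetical helper (not Sage's actual _evalf_): integrate f
    over [a, b] at roughly `prec` bits, splitting the interval at any
    known singularities so tanh-sinh quadrature can handle them as
    interval endpoints."""
    old = mp.mp.prec
    try:
        mp.mp.prec = prec + 10  # a few guard bits
        points = [a, *sorted(singularities), b]
        return mp.quad(f, points)
    finally:
        mp.mp.prec = old

g = lambda t: abs(t**2 - 1) ** (mp.mpf(-2) / 3)
q = evalf_definite_integral(g, mp.ninf, mp.inf, prec=53, singularities=[-1, 1])
print(q)  # close to 12.6196..., cf. comment 13 below
```

With the interval split at the singularities, tanh-sinh quadrature converges here; without the split it returns +inf, as comment 10 shows.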
comment:6 Changed 10 years ago by
Be sure to use fast_callable:
```
sage: f(x) = cos(x)^10 - exp(x*sin(cos(x)))
sage: g = fast_callable(f, RealField(200), [x])
sage: timeit('f(10.0)')
^C625 loops, best of 3: 1.25 ms per loop
sage: a = RealField(200)(10)
sage: f(a)
0.17239044387825963285967105367392659309547067375921972300438
sage: g(a)
0.17239044387825963285967105367392659309547067375921972300438
sage: timeit('f(a)')
625 loops, best of 3: 1.38 ms per loop
sage: timeit('g(a)')
625 loops, best of 3: 40.7 µs per loop
sage: 1380/40.7
33.9066339066339
```
comment:7 Changed 10 years ago by
Some more information about integration in mpmath and other programs:
http://groups.google.com/group/mpmath/browse_thread/thread/2fb6d36501c49a73
comment:8 Changed 10 years ago by
Your timings with fast_callable seem misleading to me since you are not including the overhead of constructing the fast callable. I got:
```
sage: f(x) = cos(x)^10 - exp(x*sin(cos(x)))
sage: g = fast_callable(f, RealField(200), [x])
sage: def gmaker(afunc): return fast_callable(afunc, RealField(200), [x])
sage: timeit('f(a)')
125 loops, best of 3: 1.46 ms per loop
sage: timeit('g(a)')
625 loops, best of 3: 65.2 µs per loop
sage: timeit('gmaker(f)(a)')
125 loops, best of 3: 1.55 ms per loop
```
comment:9 Changed 10 years ago by
Not entirely misleading though, if you call the fast_callable many many times, as would seem to be the case here.
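The trade-off is generic: a one-off compilation only pays for itself once the compiled object is reused enough times. A plain-Python analogue, with `compile`/`eval` of a source string standing in for symbolic evaluation versus fast_callable (the names and the expression are illustrative only):

```python
import timeit

SRC = "abs(x * x - 1) ** (-2.0 / 3.0)"
code = compile(SRC, "<expr>", "eval")  # pay the parse/compile cost once

def slow(x):
    # re-parses and re-compiles the string on every call
    return eval(SRC, {}, {"x": x})

def fast(x):
    # reuses the precompiled code object
    return eval(code, {}, {"x": x})

assert slow(3.0) == fast(3.0)
n = 20_000
t_slow = timeit.timeit(lambda: slow(3.0), number=n)
t_fast = timeit.timeit(lambda: fast(3.0), number=n)
print(t_slow / t_fast)  # expected to be well above 1 once amortized
```

If the function is evaluated only once, the compilation overhead dominates, which is the point of the benchmark objection above.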
comment:10 Changed 10 years ago by
Getting this right will be tricky. While simply changing from a blind application of GSL's numerical_integral() to mpmath's quad() may often be an improvement, in this situation it makes things worse:
```
sage: f = abs(x^2-1)^(-2/3)
sage: g = fast_callable(f, RealField(100), [x])
sage: import mpmath as mp
sage: mp.mp.prec = 100
sage: for i in [mp.quad(g, [-10^i, 10^1]) for i in range(15)]:
....:     print i
....:
+inf
8.8396932208278888892057527426
8.3969760821447628003659891605
9.7953936385023669571121865442
9.2339519516756815076612878988
8.2441701346282922281629165563
8.044969834509411578891250552
11.635099287297832304011545535
11.084646393563409097851890886
7.5468622876138167706495612921
7.0790386862087011969090472793
8.4443867135724817987750671703
10.690050592136777734927819129
32.336415464969432639295368434
10.533897172238729354640752465
sage: mp.quad(g, [mp.mpf('-inf'),mp.mpf('+inf')])
mpf('+inf')
```
Switching from tanh-sinh quadrature to Gauss-Legendre gives
```
sage: for i in [mp.quadgl(g, [-10^i, 10^1]) for i in range(15)]:
....:     print i
....:
6.3780650727028379121031555789
7.9564100202151404330674115745
8.3052587750738550159021915092
11.156320092824073496435254839
9.5630214580799381652111853848
2.8857506804320681789970931663
1.7298436143374586397937735747
0.63762636717628812549878116562
0.29045600775700733200460854022
0.13456948161705169202354793393
0.062450129901992433601218328054
0.02898624937362903007167714756
0.013454200394425034174020875196
0.0062448854829133373771884652354
0.002898619019151035832588794256
sage: mp.quadgl(g, [mp.mpf('-inf'),mp.mpf('+inf')])
mpf('11.350773575585495163385287559838')
```
Hand-tuning with Gauss-Legendre:
```
sage: mp.quadgl(g, [mp.mpf('-inf'),-1,1,mp.mpf('+inf')])
mpf('12.350413467045610425723223632513')
sage: mp.quadgl(g, [mp.mpf('-inf'),-1,1,mp.mpf('+inf')], maxdegree=12)
mpf('12.592883474959440301710672862944')
```
And with tanh-sinh:
```
sage: mp.quad(g, [mp.mpf('-inf'),-1,1,mp.mpf('+inf')])
mpf('+inf')
sage: mp.quad(g, [mp.mpf('-inf'),-1,1,mp.mpf('+inf')], maxdegree=12)
mpf('+inf')
```
It seems this integral requires at least some hand tuning to be reasonable.
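The hand tuning can go beyond supplying split points: an algebraic substitution removes the singularity altogether. Near x = 1 the integrand behaves like (x-1)^(-2/3), so setting x = 1 + t^3 (dx = 3t^2 dt) turns the piece over [1, 2] into the integral of 3*(2 + t^3)^(-2/3) over [0, 1], which is smooth. A plain-Python sketch (floats and composite Simpson, just to show the transformed integrand is now easy; the numbers here are this sketch's, not from the ticket):

```python
def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# x = 1 + t^3 turns (x^2-1)^(-2/3) dx into 3*(2 + t^3)^(-2/3) dt,
# smooth on [0, 1] (the image of x in [1, 2]).
smooth = lambda t: 3.0 * (2.0 + t**3) ** (-2.0 / 3.0)

coarse = simpson(smooth, 0.0, 1.0, 100)
fine = simpson(smooth, 0.0, 1.0, 1000)
print(fine, abs(fine - coarse))  # the two estimates agree to many digits
```

The untransformed integrand cannot even be evaluated at the endpoint x = 1, while the transformed one is perfectly tame; the same substitution works at the other singular endpoints.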
comment:11 Changed 10 years ago by
There's probably a whole class of functions -- positive functions with singularities at a finite number of points that go to zero sufficiently fast at plus/minus infinity, maybe -- which exhibit this sort of breakage.
comment:12 Changed 10 years ago by
I tried gp:

```
%gp
for(n=1,10, print(intnum(x=-10^n, 10^n, (abs(x^2-1))^(-2/3))))
```
and got:

```
7.9618235972792581897233298717819451083
6.7165473168753097702622726677655297805
25.355067401739496118829772887176008798
235.26460811139471211432045489286370351
2344.1629174317876121014814723385308094
23437.691659284423942658077351512842284
234375.08896035939931707271806395149682
2343750.0412917410819973945084611231908
23437500.019165928432661951605079616090
234375000.00889603593988653387093897736
```
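The near-exact factor of 10 per step is telling: with (roughly) equally spaced nodes, the node at x = 0, where the integrand equals 1, carries a weight proportional to the step size, so the estimate grows linearly with the interval length instead of converging. A plain-Python sketch reproducing the pattern (composite Simpson stands in for intnum, which of course uses a different rule; the 100-subinterval budget is an arbitrary choice of this sketch):

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

g = lambda x: abs(x * x - 1) ** (-2.0 / 3.0)

prev = None
for i in range(4, 9):
    # nodes are multiples of 2*10^(i-2), so they never hit x = +1 or -1
    est = simpson(g, -10.0**i, 10.0**i)
    print(i, est, None if prev is None else round(est / prev, 3))
    prev = est
```

Each factor-of-10 widening multiplies the step, and hence the weight on the spike at 0, by 10; refining the grid merely scales the spurious estimates down rather than fixing them.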
comment:13 Changed 10 years ago by
Just for the record, if given enough precision and the singularity locations, mpmath does okay in tanh-sinh on the infinite range:
```
sage: import mpmath
sage: dpss = [int(10*1.25**i) for i in [0..15]]
sage: qq = []
sage: for dps in dpss:
....:     mpmath.mp.dps = dps
....:     q = mpmath.quad(lambda x: abs(x^2 - 1)^(-2/3), (-infinity, -1, 1, infinity))
....:     delta = float(abs((q-qq[-1])/qq[-1])) if qq else 0.0
....:     print dps, RealField(250)(q), delta
....:     qq.append(q)
....:
10 12.619415730000000000000000000000000000000000000000000000000000000000000000 0.000000000000000
12 12.619588240700000000000000000000000000000000000000000000000000000000000000 1.36705461222e-05
15 12.619632936725900000000000000000000000000000000000000000000000000000000000 3.54179740682e-06
19 12.619638678800019440000000000000000000000000000000000000000000000000000000 4.55011184915e-07
24 12.619638942183136436066100000000000000000000000000000000000000000000000000 2.08708920835e-08
30 12.619638947875526360028104307100000000000000000000000000000000000000000000 4.51073913449e-10
38 12.619638947928984872554788554326908474000000000000000000000000000000000000 4.2361364495e-12
47 12.619638947929088225923816071123105700944331096000000000000000000000000000 8.18988320141e-15
59 12.619638947929088350562971317084699893895128992807003256090000000000000000 9.87660231487e-18
74 12.619638947929088350575171596623717955924858573775002931329627429009895074 9.66769302146e-22
93 12.619638947929088350575171711452592280507575074285006026048789429945225951 9.0992202549e-27
116 12.619638947929088350575171711452647219166086056903447254987222698078233522 4.35342554075e-33
145 12.619638947929088350575171711452647219167199848815619049997783218559507282 8.82586195031e-41
181 12.619638947929088350575171711452647219167199848815874865637945653424795957 2.02712328949e-50
227 12.619638947929088350575171711452647219167199848815874865637945883686562894 1.82463038671e-62
284 12.619638947929088350575171711452647219167199848815874865637945883686562894 9.97427124086e-78
```
Plotting it shows a very plausible logarithmic decrease in the size of the change (one can't really call it an error). You have to give it far more digits of precision than it actually gets right, but it is possible that it converged in the end.
comment:14 Changed 10 years ago by
- Cc kcrisman added
comment:15 Changed 10 years ago by
- Cc burcin added
Burcin, I think that this might be a dupe of #8321, at least for practical purposes. What do you think?
comment:16 Changed 10 years ago by
- Cc jdemeyer added
- Reviewers set to Burcin Erocal
- Status changed from new to needs_review
Yep, this is a duplicate. I suggest we close this. I'll write a comment on #8321 so the discussion here is not lost. We were looking for examples on that ticket. It would be great if the examples here could be transferred somehow. Any volunteers?
comment:17 Changed 10 years ago by
- Status changed from needs_review to positive_review
comment:18 Changed 10 years ago by
- Milestone changed from sage-4.7.1 to sage-duplicate/invalid/wontfix
- Resolution set to duplicate
- Status changed from positive_review to closed
comment:19 Changed 10 years ago by
- Cc jdemeyer removed
- Reviewers changed from Burcin Erocal to Karl-Dieter Crisman, Burcin Erocal
Seems like there might be two issues here. One is that the default for integrate_numerical (which is what I think gets called here after symbolic integration fails) only uses a maximum of 87 sample points. This parameter can be set by calling integrate_numerical directly, but I don't think it can be set through integrate.
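To see how little a budget of that order can resolve here: almost all of the integral's mass sits in [-2, 2], and with 87 equally spaced samples (equal spacing is a stand-in for the actual rule's nodes; 87 is the figure quoted above) no sample lands there once the interval is at all wide:

```python
def samples_near_mass(limit, n=87):
    """Count how many of n+1 equally spaced samples on [-limit, limit]
    land in [-2, 2], where nearly all the mass of |x^2-1|^(-2/3) is."""
    h = 2.0 * limit / n
    return sum(1 for k in range(n + 1) if abs(-limit + k * h) <= 2.0)

for i in (2, 3, 6, 12):
    print(10**i, samples_near_mass(10.0**i))
# 10^2 still catches 2 samples; every wider interval shown catches 0
```

So on the ticket's intervals the quadrature never even sees the region that matters, which is consistent with the estimates drifting toward 0 as the interval grows.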
The second possible issue is that it seems that integrate_numerical tries to create a fast version of the function, but this is not done in an accurate way. I believe this code was written prior to fast_callable, which seems to help: