numerical integration with arbitrary precision #8321
Comments
comment:1
Note that PARI/GP can do (arbitrary precision) numerical integration:
I don't know why it does not work from Sage: …
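For reference, a GP session along these lines (a sketch, assuming a recent PARI/GP; `intnum` is GP's built-in numerical integrator and `\p` sets the working precision in decimal digits):

```
? \p 100
? intnum(x = 0, 1, sin(x))
%1 = 0.45969769413186028...
```

The exact value is 1 - cos(1), so the digits are easy to check.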
comment:2
There is an example doctest in the file …
comment:3
mpmath also supports it and can handle Python functions.
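For instance, a minimal sketch using mpmath's `quad` (assuming mpmath is importable; precision is set via `mp.dps`, and the integrand is an ordinary Python callable):

```python
from mpmath import mp, quad, sin, cos

mp.dps = 50                      # work with 50 significant decimal digits
result = quad(sin, [0, 1])       # numerically integrate sin over [0, 1]
exact = 1 - cos(1)               # closed form of the same integral
# the quadrature result should agree with the closed form to roughly 50 digits
assert abs(result - exact) < mp.mpf(10) ** (-45)
```

Any callable works here, not just mpmath's own functions, which is what makes it attractive as a backend for symbolic integrals.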
comment:4
I think I have the solution for this trac ticket, using mpmath:
The tests just run fine:
greez maldun
Attachment: trac_8321_numeric_int_mpmath.patch.gz (numerical evaluation of symbolic integrals with arbitrary precision, with the help of mpmath)
comment:8
Thanks, Maldun, this is a good addition to have. I don't have time to review this immediately, but it would be helpful to know whether you detected any errors, compared this with symbolic integrals and their evaluation, etc.; basically, that the results from this really are as accurate as advertised. Also, you might as well leave the GSL stuff in as comments, as in the patch you posted above, or even as an optional argument, though that may not be compatible with …
comment:9
Does this work for double integrals?
comment:10
Replying to @kcrisman:
I will consider this, but hopefully it is not necessary and mpmath will do the whole thing. I played around a little, and I didn't find any differences between the other evaluation methods. In some cases it works even better (I had an example recently on Ask Sage, which motivated me to switch to this form of …)
comment:11
Replying to @mwhansen:
mpmath does, this function doesn't, but the current version in Sage didn't either, so it's no problem.
comment:12
Thank you for the patch, Stefan. This was much needed for quite a while now. Replying to @sagetrac-maldun:
I agree. I like how the patch looks overall. It would be good to see comparisons on …
Maybe Fredrik can comment on this as well. Using … Does anyone know of good examples to add as tests for numerical integration?
Unfortunately, ATM, the numerical evaluation framework for symbolic expressions doesn't support specifying different methods. This could (probably; I didn't check the code) be done by changing the interpretation of the Python object we pass around, to keep the algorithm parameter and the parent instead of just the parent. Is this a desirable change? Shall we open a ticket for this?
I guess this is based on a comment I made in the context of orthogonal polynomials and scipy vs. mpmath. Instead of a general policy, I'd like to consider each function separately. Overall, I'd lean toward using …
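The kind of change being discussed could be sketched roughly as follows (hypothetical names; this is not the actual Sage evalf protocol, just an illustration of threading an `algorithm` selector through numerical evaluation alongside the target parent):

```python
def midpoint_rule(f, a, b, n):
    """Toy quadrature rule standing in for a real backend."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def numerical_integral(f, a, b, parent=float, algorithm="gsl"):
    """Dispatch on an algorithm keyword, then coerce into the parent."""
    backends = {
        "gsl": lambda: midpoint_rule(f, a, b, n=1000),
        "mpmath": lambda: midpoint_rule(f, a, b, n=10000),
    }
    return parent(backends[algorithm]())

# usage: integral of x^2 over [0, 1] is 1/3
approx = numerical_integral(lambda x: x * x, 0.0, 1.0, algorithm="gsl")
assert abs(approx - 1.0 / 3.0) < 1e-6
```

The point is only that the evaluation call would carry both pieces of information (target parent and algorithm), which is what the comment above says the current framework cannot do.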
Author: Stefan Reiterer |
Changed keywords from none to numerics, integration |
comment:13
Replying to @burcin:
Ok, I will try this + do some tests in the near future!
I think I should find some, since I did/do a lot of work with finite elements, spectral methods, …
I personally would highly recommend this. Consider, for example, highly oscillating integrals like …
That's true. I think we should provide, as mentioned above, method parameters. greez maldun
comment:15
Replying to @sagetrac-maldun:
That would be great. I suggest making that a new enhancement ticket, though. Let's fix this bug first and use mpmath for numerical evaluation. We should also open a new ticket for numerical integration of double integrals, as Mike was asking in comment:9.
comment:16
On page 132 of http://cannelle.lateralis.org/sagebook-1.0.pdf you'll find 10 examples. Paul
comment:17
I suggest the following doctests for integral.py:
Further ideas?
comment:18
Those doctests are not in arbitrary precision (or do you suggest taking them as a basis …?)
comment:19
I have now had a little time to think about it, and I suggest adding even more tests. But yesterday I found out that if Sage knows the analytical solution, it just evaluates that, and I don't think this is the best way. I put it up for discussion on sage-devel: see http://groups.google.com/group/sage-devel/browse_thread/thread/886efb8ca8bdcff2 Why do I have this concern?
I will give more examples today or tomorrow.
comment:37
I've run some more comparisons which include Burcin's relative error for mpmath's quad. Here are some results for 6 test functions. The code that runs the tests is here: https://gist.github.com/1166436 In the output below:
All of the times are listed in seconds.
It looks like mpmath is now just as accurate as GSL, but the times are obviously a lot longer. I agree with @kcrisman's suggestion of calling GSL when the precision is the default (float or RDF, i.e. 53 bits?) and calling mpmath with relative errors when the precision is higher. Perhaps adding the mpmath relative errors should be on a different ticket which this one will depend on?
comment:38
The singleton trick definitely needs to be implemented; it can save a factor of 4x or more. Then there is the question of whether to use GaussLegendreRel or TanhSinhRel by default. Gauss-Legendre is somewhat faster for smooth integrands; tanh-sinh is much better for something like … Anyway, I agree that it would be sensible to use GSL by default.
comment:39
Replying to @fredrik-johansson:
I understand what you are saying about the class being instantiated on every call. Can you explain what you mean by the "singleton trick"?
For …
P.S. Somehow I read your second-to-last comment (about the relative error) and thought it was written by Burcin (hence my reference to his name in my reply). Sorry :)
comment:40
Replying to @benjaminfjones:
To make a class a singleton, add something like this (warning: untested):
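Reconstructing the gist of the suggestion (the concrete version appears in comment:41 below; this generic sketch also avoids forwarding the constructor arguments to object.__new__, which is what later triggers the deprecation warning):

```python
class Quadrature:
    """Stand-in for an mpmath quadrature rule with expensive node setup."""

class SingletonQuadrature(Quadrature):
    _instance = None

    def __new__(cls, *args, **kwds):
        # Create the instance once and reuse it, so cached quadrature
        # nodes survive between calls instead of being recomputed.
        if cls._instance is None:
            # note: do not forward *args/**kwds to object.__new__
            cls._instance = super().__new__(cls)
        return cls._instance

# every "construction" returns the same object
assert SingletonQuadrature() is SingletonQuadrature()
```

This is what saves the repeated node generation on every call to the integrator.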
comment:41
I tried making TanhSinhRel the default and including the singleton pattern:

```python
class TanhSinhRel(TanhSinh):
    instance = None

    def __new__(cls, *args, **kwds):
        if not cls.instance:
            cls.instance = super(TanhSinhRel, cls).__new__(cls, *args, **kwds)
        return cls.instance

    def estimate_error(self, results, prec, epsilon):
        mag = abs(results[-1])
        if len(results) == 2:
            return abs(results[0] - results[1]) / mag
        try:
            if results[-1] == results[-2] == results[-3]:
                return self.ctx.zero
            D1 = self.ctx.log(abs(results[-1] - results[-2]) / mag, 10)
            D2 = self.ctx.log(abs(results[-1] - results[-3]) / mag, 10)
            if D2 == 0:
                D2 = self.ctx.inf
            else:
                D2 = self.ctx.one / D2
        except ValueError:
            return epsilon
        D3 = -prec
        D4 = min(0, max(D1**2 * D2, 2 * D1, D3))
        return self.ctx.mpf(10) ** int(D4)
```

I couldn't see a timing difference by adding that code to TanhSinhRel (but maybe this isn't the right place to add it?). Also, I get a deprecation warning about arguments passed to … Here are two tests in a row; one shows the deprecation warning, the second doesn't:

and again in the same session to check if timings have improved:

The timings do improve after the first call, which includes …
comment:42
Ugh, I should clearly have tested this feature properly in mpmath. The problem is that the …
This ought to work. The speedup from doing this should be greater for GaussLegendre than for TanhSinh, since node generation is more expensive for the former.

Some further speedup would be possible by overriding the transform_nodes and sum_next methods so that Sage numbers are used most of the time. This way, most of the calls to sage_to_mpmath and mpmath_to_sage could be avoided when an interval is reused. transform_nodes would just have to be replaced by a simple wrapper function that takes the output (a list of pairs of numbers) from the transform_nodes method of the parent class and converts it to Sage numbers. sum_next computes the sum over …

This should all just require a few lines of code. One just needs to be careful to get the precision right in the conversions, and to support both real and complex numbers. Unless someone else is extremely eager to implement this, I could look at it tomorrow.
comment:43
Sorry, make that …
Changed keywords from numerics, integration to numerics, integration, sd32 |
comment:45
OK, here's a version also avoiding type conversions:
This prints:
The estimate_error method still needs some work, though...
comment:46
It turns out that the current top-level function, numerical_integral, isn't even in the reference manual. See #11916. I don't think this is addressed here yet, though if it eventually is then that ticket would be closed as a dup. |
comment:47
Just FYI - the Maxima list has a similar discussion going on right now, starting here in the archives. We should continue to allow access to their methods as optional, of course :) |
comment:48
Replying to @kcrisman:
I especially like this quote. I think it is a better way forward than trying to guess what the user needs - maybe just allowing many options is better.
comment:50
Replying to @zimmermann6:
It does not work for two reasons: …
The latter point seems the most fundamental to me. For arbitrary-precision numerical integration, GP/PARI is probably our best bet, though, and it seems that the PARI C API should be quite usable, because the integrand gets passed as a black-box function. From the PARI handbook, we get the signature: …
so as long as we can provide a way to evaluate our integrand (say) … PARI's high-precision numerical integration is supposed to be of quite high quality. This approach would be much easier than trying to symbolically translate an arbitrary Sage symbolic expression to GP (and it is more general, because we would be able to use any Python callable, provided we find a way to pass along the desired precision).
comment:51
Sage 4.8 can now integrate the formula in this ticket, thus I propose to change it to: …
Paul |
Reviewer: Paul Zimmermann |
Work Issues: add more arbitrary precision tests |
comment:53
Replying to @zimmermann6:
You are right that, for this ticket, the original example doesn't test generic numerical integration. The numerical approximation of the resulting expressions in gamma functions seems suspect, though: …
Also note that the equality tests as stated in the examples are not direct evidence that something is going wrong: …
I guess the numbers are coerced into the parent with the least precision before being compared ...
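A standard-library analogue of that coercion effect, using `decimal` (not Sage's coercion system, but the same failure mode: two numbers that differ beyond the lower precision compare equal once both are rounded to it):

```python
from decimal import Decimal, getcontext

a = Decimal("0.123456789123456789")   # "high-precision" value
b = Decimal("0.123456789000000000")   # differs from a beyond 9 digits

getcontext().prec = 9                 # compare at only 9 significant digits
# unary plus rounds a Decimal to the current context precision
assert +a == +b                       # equal after rounding to 9 digits
assert a != b                         # the raw values really do differ
```

This is why an equality test between values of different precision can pass even when the high-precision result is wrong.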
comment:59
Is there current interest in adding the functionality of this ticket? I read through the benchmarks in this thread, but on the other hand I didn't understand what the proposed way to plug this into Sage is. How about the following: does it make sense to have the numerical integration algorithms called via a new keyword argument in …
comment:60
For reference, one can find at https://members.loria.fr/PZimmermann/sagebook/english.html a translation into English of the book "Calcul mathématique avec Sage", which was also updated to Sage 7.6. Chapter 14 discusses numerical integration, in particular with arbitrary precision. (We have also added a section about multiple integrals.)
From sage-devel:

The `_evalf_` function defined on line 179 of `sage/symbolic/integration/integral.py` calls the GSL `numerical_integral()` function and ignores the precision. We should raise a `NotImplementedError` for high precision, or find a way to do arbitrary-precision numerical integration.

CC: @sagetrac-maldun @fredrik-johansson @kcrisman @sagetrac-mariah @sagetrac-bober @eviatarbach @mforets
Component: symbolics
Keywords: numerics, integration, sd32
Work Issues: add more arbitrary precision tests
Author: Stefan Reiterer
Reviewer: Paul Zimmermann
Issue created by migration from https://trac.sagemath.org/ticket/8321