#18356 closed enhancement (fixed)
special resultants ``composed_sum`` and ``composed_product``
Reported by: | pernici | Owned by: | |
---|---|---|---|
Priority: | major | Milestone: | sage-7.3 |
Component: | algebra | Keywords: | |
Cc: | | Merged in: | |
Authors: | Mario Pernici, Vincent Delecroix | Reviewers: | Vincent Delecroix, Marc Mezzarobba |
Report Upstream: | N/A | Work issues: | |
Branch: | c6b1c35 (Commits) | Commit: | |
Dependencies: | | Stopgaps: | |
Description
Add a new operation `composed_op` on polynomials to compute the composed sum, difference, product and quotient of two polynomials, that is, the polynomial whose roots are the sums (resp. differences, products, quotients) of the roots of the two polynomials.

Two implementations are provided: a straightforward one using resultants, and the algorithm from [BFSS]. The latter currently only supports polynomials with coefficients in `ZZ` or `QQ`, but it is asymptotically faster than the resultant method.
For instance
```
sage: p1 = minpoly(cos(pi/43))
sage: p2 = minpoly(cos(pi/47))
sage: time r1 = p1.composed_op(p2, operator.add)
CPU times: user 108 ms, sys: 4 ms, total: 112 ms
Wall time: 111 ms
sage: time r2 = p1.composed_op(p2, operator.add, algorithm="resultant")
CPU times: user 47.1 s, sys: 0 ns, total: 47.1 s
Wall time: 47.1 s
sage: r1 == r2
True
sage: r1.is_irreducible()
True
```
In the example above, `r1` is the minimal polynomial of `cos(pi/43) + cos(pi/47)`.
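As a smaller, hedged illustration of the operation (the output shown is the expected monic result, the minimal polynomial of `sqrt(2) + sqrt(3)`; the exact printed form may differ):

```
sage: import operator
sage: R.<x> = QQ[]
sage: p1 = x^2 - 2                      # roots +/- sqrt(2)
sage: p2 = x^2 - 3                      # roots +/- sqrt(3)
sage: p1.composed_op(p2, operator.add)  # roots are the four sums of roots
x^4 - 10*x^2 + 1
```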
Reference:
[BFSS] A. Bostan, P. Flajolet, B. Salvy and E. Schost, "Fast Computation of special resultants", Journal of Symbolic Computation 41 (2006), 1-29
Attachments (1)
Change History (47)
comment:1 Changed 6 years ago by
- Branch set to u/pernici/ticket/18356
comment:2 Changed 6 years ago by
- Commit set to 3806e8403764abf4d391161f836f0ec422201cc5
comment:3 Changed 6 years ago by
- Status changed from new to needs_review
comment:4 Changed 6 years ago by
- Status changed from needs_review to needs_work
Hi,
If you look at the branch field in the ticket description you will see that it is red. That means there is a merge conflict with the latest beta. What you have to do is start not from the stable release but from the development release (the name of the branch on the server is "develop"). Do you know how to do that?
Vincent
comment:5 Changed 6 years ago by
I had a look at the branch, just some comments:
- It is better to use `Return XYZ` rather than `Compute XYZ` in the one-line description of functions. What the function does internally is mostly a technical question; what you want to know when reading the documentation is what the function returns.
- This is old Cython:
```
cdef int i
for i from 0 <= i <= n:
    ...
```
you can just write
```
cdef int i
for i in range(n):
    ...
```
which Cython will also turn into a C loop.
- In `hadamard_product` I would define precisely what the pointwise product is. With a naive look at Wikipedia, the pointwise product of two functions `f` and `g` is just `x -> f(x)g(x)` (see the sketch below).
Vincent
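For concreteness, a hedged sketch of the coefficient-wise product presumably meant here; the helper name, the polynomials and the printed result are illustrative, not the branch's code:

```
# Illustrative only: coefficient-wise product of two polynomials,
# as opposed to the pointwise product of functions x -> f(x)*g(x).
def coeffwise_product(f, g):
    R = f.parent()
    x = R.gen()
    n = min(f.degree(), g.degree())
    return sum(f[i] * g[i] * x^i for i in range(n + 1))

R.<x> = QQ[]
f = 1 + 2*x + 3*x^2
g = 4 + 5*x
print(coeffwise_product(f, g))   # 10*x + 4
```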
comment:6 Changed 6 years ago by
Hi Vincent,
I opened #18373 based on the development branch, replacing this ticket. I followed your suggestions here and in #18242 :
- In `newton_sum` the ring `R` is added as an argument.
- Avoided using fraction fields (apart from `R(QQ(1)/p1)` there was also a division by `x`).
- Eliminated duplicate code, merging `composed_add` and `composed_product`.
- I did some benchmarks; there is a strong dependence on the sparseness of the polynomials. The resultant algorithm is faster for some polynomials with few coefficients and high degree. By default `composed_op` selects which algorithm to use, based on the degree and the number of nonvanishing coefficients. The size of the coefficients does not seem to make much difference.
- I eliminated `hadamard_product`.
comment:7 Changed 6 years ago by
Hello,
You should not open a new ticket each time you create a branch. Now we have three duplicated tickets!! Delete #18373 (i.e. remove the branch field and set the milestone to `sage-duplicate/invalid/won't fix`) and simply modify the branch of this ticket.
Vincent
comment:8 follow-up: ↓ 9 Changed 6 years ago by
Sorry for this mess.
For this ticket I worked with the stable release. Then I downloaded the development release, made a branch there, and tried to modify this ticket with
```
./sage -dev push --ticket 18356
```
but I got an error message
```
Not pushing your changes because they would discard some of the commits on the remote branch "u/pernici/ticket/18356".
```
and did not know how to proceed, so I opened #18373. Can I delete this ticket and keep #18373?
comment:9 in reply to: ↑ 8 Changed 6 years ago by
Replying to pernici:
For this ticket I worked with the stable release. Then I downloaded the development release, made a branch there, and tried to modify this ticket with `./sage -dev push --ticket 18356` but I got an error message `Not pushing your changes because they would discard some of the commits on the remote branch "u/pernici/ticket/18356".` and did not know how to proceed, so I opened #18373.
You should have asked.
Can I delete this ticket and keep #18373?
If you want me to review it, no. There are comments here that are precious for the history. Moreover, if you do not know how to properly pull and push with git, choosing your solution will not help you.

You should first stop using the sage dev scripts, which have been deprecated for at least two versions of Sage. Then you can read the manual and start either using the `git trac` command or using `git` by itself. Everything is explained in detail.
Vincent
comment:10 Changed 6 years ago by
- Commit changed from 3806e8403764abf4d391161f836f0ec422201cc5 to 1898678cfccd18df76b57a015c03d96fb44542ae
comment:11 Changed 6 years ago by
- Description modified (diff)
comment:12 Changed 6 years ago by
- Status changed from needs_work to needs_review
comment:13 Changed 6 years ago by
- Status changed from needs_review to needs_work
Hello,
Thanks for using this ticket.
- In the description of the function you wrote a very precise definition. But then, in the `NOTE`, you claim that the two algorithms might behave differently...
- Why do you only check the parent for `p1`?
```
if p1.parent().base_ring() not in (ZZ, QQ):
    raise ValueError('implemented only for polynomials on integer or rational ring')
```
Moreover, why is it only done in the case of `BFSS`?
- What is this argument `_cached_QQ`!? You can certainly not use that. Note that in `sage.rings.qqbar` there are already `QQxy`, `QQxy_x` and `QQxy_y` defined. So either it is time critical, in which case you could create a function like
```
_QQxy = None
def QQxy_with_gens():
    global _QQxy
    if _QQxy is None:
        from sage.rings.rational_field import QQ
        R = QQ['x', 'y']
        _QQxy = (R, R.gen(0), R.gen(1))
    return _QQxy
```
and move it near the `PolynomialRing` function. Then this function should also be used in `sage.rings.qqbar`. You can also try to see why calling `PolynomialRing` is so slow.
- It would be more natural to me if `op` were an operator and not a string. In other words, one of
```
sage: import operator
sage: operator.add
<built-in function add>
sage: operator.sub
<built-in function sub>
sage: operator.mul
<built-in function mul>
sage: operator.div
<built-in function div>
```
It can be checked in the method with
```
if op is operator.add:
    ...
elif op is operator.sub:
    ...
```
It is a very personal feeling, what do you think?
- This takes time:
```
from sage.rings.power_series_ring import PowerSeriesRing
from sage.rings.big_oh import O as big_O
```
so you had better move it where it is used, i.e. in the case where `algorithm='BFSS'`. Similarly, in `newton_sum` you should move `from sage.rings.power_series_ring import PowerSeriesRing`.
- You can use the fact that `bool` is a subtype of `int` when you decide the algorithm (a sketch follows this list):
```
sage: sum(bool(i) for i in [0,1,3,0,2,0,4])
4
```
- In `newton_sum` the arguments `prec` and `R` are redundant. But I do not know how to do better.
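A hedged sketch of how the bool-as-int remark above could feed into the algorithm choice; the polynomials and the threshold are invented for illustration:

```
R.<x> = QQ[]
p1 = x^8 - 3*x - 1
p2 = x^2 - 3
# bool(c) is 0 or 1, and bool is a subtype of int, so summing
# counts the nonvanishing coefficients
nonzero1 = sum(bool(c) for c in p1.list())
nonzero2 = sum(bool(c) for c in p2.list())
# purely illustrative heuristic: prefer the resultant for sparse input
algorithm = 'resultant' if nonzero1 * nonzero2 <= 9 else 'BFSS'
print(algorithm)
```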
comment:14 Changed 6 years ago by
- Commit changed from 1898678cfccd18df76b57a015c03d96fb44542ae to 5f888c83095eab89acbd96ee09e067918a60c258
comment:15 Changed 6 years ago by
Hello,
I made changes according to your comments:
- The description corresponds to the monic form; now by default the result of `composed_op` is in monic form.
- Added a check that `p1` and `p2` are in the same polynomial ring.
- Put `QQxy_with_gens` in place of `cached_QQ`.
- Used `operator.add`, etc.
- Moved the import statements.
- Used `bool`.
- Eliminated the `prec` argument in `newton_sum`.

Other changes:

- Previously `composed_op` returned a polynomial in `x` regardless of the variable name of the input polynomials; fixing this gave a speed-up of the "resultant" algorithm for small polynomials.
- Added some more checks on the input polynomials.
- Eliminated the call to `integral` in the case of `operator.add`.
comment:16 Changed 6 years ago by
- Description modified (diff)
comment:17 Changed 6 years ago by
- Description modified (diff)
- Status changed from needs_work to needs_review
comment:18 Changed 6 years ago by
- Status changed from needs_review to needs_info
The algorithm for `newton_sum` is not the one from BFSS. It is Schönhage's formula, isn't it? There is no inversion in BFSS.
Vincent
comment:19 follow-up: ↓ 20 Changed 6 years ago by
The algorithm for `newton_sum` is slightly changed with respect to the one written in BFSS, Lemma 1. I will change it to the latter form, which is simpler and slightly faster. In either case there is no inversion; there is a reversion, though.

Lemma 1 comes from Schönhage, who used it in a numerical context. Should I put a reference to Schönhage's paper, or is it sufficient to write "See [BFSS] and references therein."?
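For reference, a hedged sketch of the Lemma 1 / Schönhage-type identity under discussion: for a monic `p` of degree `n` with roots `a_j`, the generating series of the power sums `p_k = sum_j a_j^k` is `rev(p')/rev(p)`, where `rev` denotes coefficient reversal. The code below is only an illustration, not the branch's implementation:

```
R.<x> = QQ[]
p = x^3 - 2*x + 5                 # example monic polynomial, n = 3
n = p.degree()
prec = 8                          # number of Newton sums wanted
S.<t> = PowerSeriesRing(QQ, default_prec=prec)
num = S(p.derivative().reverse().list())      # rev(p')(t)
den = S(p.reverse().list()).add_bigoh(prec)   # rev(p)(t) = prod_j (1 - a_j*t)
newton = num / den                # sum_{k>=0} p_k * t^k
print(newton)                     # coefficients: p_0 = 3, p_1 = 0, p_2 = 4, p_3 = -15, ...
```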
comment:20 in reply to: ↑ 19 ; follow-up: ↓ 21 Changed 6 years ago by
Replying to pernici:
The algorithm for `newton_sum` is slightly changed with respect to the one written in BFSS, Lemma 1. I will change it to the latter form, which is simpler and slightly faster. In either case there is no inversion; there is a reversion, though.

As far as I understand, Lemma 1 is precisely Schönhage's formula (but I did not open the relevant paper). The algorithm of BFSS is Proposition 1. The inversion is only computed up to the degree of the polynomial. Then there are some iterations that depend on `prec`. Am I wrong?
Lemma 1 comes from Schönhage, who used it in a numerical context. Should I put a reference to Schönhage's paper, or is it sufficient to write "See [BFSS] and references therein."?

I think it would make sense.

About the name, what do you think about `newton_series` instead? It would make the result clearer. And please, add the mathematical definition in the function.
Vincent
comment:21 in reply to: ↑ 20 Changed 6 years ago by
Replying to vdelecroix:
Replying to pernici:
The algorithm for `newton_sum` is slightly changed with respect to the one written in BFSS, Lemma 1. I will change it to the latter form, which is simpler and slightly faster. In either case there is no inversion; there is a reversion, though.

As far as I understand, Lemma 1 is precisely Schönhage's formula (but I did not open the relevant paper). The algorithm of BFSS is Proposition 1. The inversion is only computed up to the degree of the polynomial. Then there are some iterations that depend on `prec`. Am I wrong?
You are right. I used that formula, not the algorithm following it.
Lemma 1 comes from Schönhage, who used it in a numerical context. Should I put a reference to Schönhage's paper, or is it sufficient to write "See [BFSS] and references therein."?

I think it would make sense.

About the name, what do you think about `newton_series` instead? It would make the result clearer. And please, add the mathematical definition in the function.
A problem with calling it `newton_series` is that `newton_sum` returns a polynomial, not a series.
comment:22 Changed 6 years ago by
- Commit changed from 5f888c83095eab89acbd96ee09e067918a60c258 to 1fe9b052c8f1e49f9b0480f40dfa1937ef9f8dcf
Branch pushed to git repo; I updated commit sha1. New commits:
1fe9b05 | eliminated `newton_sum`
comment:23 follow-up: ↓ 24 Changed 6 years ago by
I eliminated `newton_sum`, since it can be inlined, and it is not yet clear to me which arguments `newton_sum` should have (`prec` or `R`), nor what it should return (a polynomial or a series).

Implementing `composed_op` using FLINT would be faster. For instance, `mullow` (present in FLINT's `nmod_poly`) is not implemented in Sage, so one has to compute the product of the polynomials and then truncate.
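A hedged illustration of the workaround just described (full product, then truncation); the polynomials are invented, and `_mul_trunc_` refers to the #18420 method discussed in the following comments:

```
R.<x> = QQ[]
prec = 10
p1 = sum((i + 1) * x^i for i in range(30))
p2 = sum((i + 2) * x^i for i in range(30))
# without a truncated multiplication one computes the full product
# (degree 58) and then throws most of it away
long_way = (p1 * p2).truncate(prec)
# with #18420 merged this becomes: p1._mul_trunc_(p2, prec)
print(long_way.degree())   # 9
```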
comment:24 in reply to: ↑ 23 Changed 6 years ago by
Replying to pernici:
I eliminated `newton_sum`, since it can be inlined, and it is not yet clear to me which arguments `newton_sum` should have (`prec` or `R`), nor what it should return (a polynomial or a series).
indeed
Implementing `composed_op` using FLINT would be faster. For instance, `mullow` (present in FLINT's `nmod_poly`) is not implemented in Sage, so one has to compute the product of the polynomials and then truncate.
see #18420 (needs review)
comment:25 follow-up: ↓ 26 Changed 6 years ago by
I will try to implement `composed_op` in `polynomial_rational_flint.pyx`; for the moment I have an implementation of `composed_mul`; for small polynomials (degree 2) it is a few times faster than the resultant algorithm, and 20x faster than the previous implementation of the BFSS algorithm. Therefore the resultant algorithm will be removed.

For large polynomials the implementation in `polynomial_rational_flint.pyx` is currently slow; the reason is that there is no `hadamard_product` for polynomials in FLINT, and my implementation of it is very slow. I have to study FLINT a bit more to do better. Once I fix this I expect the two implementations of BFSS to be equally fast for large polynomials.
comment:26 in reply to: ↑ 25 Changed 6 years ago by
Replying to pernici:
I will try to implement `composed_op` in `polynomial_rational_flint.pyx`; for the moment I have an implementation of `composed_mul`; for small polynomials (degree 2) it is a few times faster than the resultant algorithm, and 20x faster than the previous implementation of the BFSS algorithm.
Did you have a look at #18420 that I already mentioned in comment:23?! There is no need to implement your methods in `polynomial_rational_flint.pyx`. Just use `_mul_trunc_`, which is implemented in #18420.
Vincent
comment:27 follow-up: ↓ 28 Changed 6 years ago by
I tried some benchmarks with some small polynomials in #18420; I find that `p1._mul_trunc_(p2, prec)` is slightly faster than `(p1*p2).truncate(prec)` for polynomials of degree `prec-1`.

I will look further at #18420, but it is not the way to speed up `composed_op`, as the following benchmark shows.
```
def test2():
    K.<x> = QQ[]
    p1 = x^2 - 2
    p2 = x^2 - 3
    t0 = time()
    r = p1.composed_op(p2, operator.mul, algorithm='BFSS')
    #r = p1.composed_mul(p2)
    t1 = time()
    print 'r=', r
    print '%.6f' %(t1 - t0)
```
Here are the results for the best out of 10 runs:
```
composed_op 'BFSS':
  up to PowerSeriesRing(...) included: 0.000524
  newton_sums:   0.001369
  hadamard prod: 0.000030
  integral:      0.000073
  exp:           0.001195
  reverse:       0.000002
  total:         0.003264
```
Only the Newton sums could use `_mul_trunc_`, although it is not clear to me how best to do it; anyway, it would make things only slightly faster. But even if the Newton sums took zero time, this algorithm would be slower than the resultant algorithm and much slower than `composed_mul`:
```
composed_op 'resultant': 0.000584
composed_mul:            0.000026
```
comment:28 in reply to: ↑ 27 ; follow-up: ↓ 29 Changed 6 years ago by
Hi,
Did you completely avoid the power series ring? We should provide more methods on polynomials like:

- `inverse_series(self, prec, as_polynomial=False)`: return the inverse series, and if `as_polynomial` is true return it as a polynomial
As FLINT has some native implementation, we should use it where it makes sense. But this should be done in another ticket.
In all your code, you know in advance the precision needed. Right?
Vincent
comment:29 in reply to: ↑ 28 ; follow-up: ↓ 30 Changed 6 years ago by
Replying to vdelecroix:
Hi,
Did you completely avoid the power series ring? We should provide more methods to polynomial like:
- `inverse_series(self, prec, as_polynomial=False)`: return the inverse series, and if `as_polynomial` is true return it as a polynomial

Yes, I used `fmpq_poly_inv_series` and `fmpq_poly_exp_series`, so the power series ring is not used.
As FLINT has some native implementation, we should use it where it makes sense. But this should be done in another ticket.
In all your code, you know in advance the precision needed. Right?
Yes.
comment:30 in reply to: ↑ 29 ; follow-up: ↓ 31 Changed 6 years ago by
Replying to pernici:
Replying to vdelecroix:
Hi,
Did you completely avoid the power series ring? We should provide more methods to polynomial like:
- `inverse_series(self, prec, as_polynomial=False)`: return the inverse series, and if `as_polynomial` is true return it as a polynomial

Yes, I used `fmpq_poly_inv_series` and `fmpq_poly_exp_series`, so the power series ring is not used.
I think this is bad. This will make your code specific to Q. You might consider implementing (in another ticket) an `inverse_series` and an `exp_series` method on polynomials that handle the option of returning a polynomial. These methods would have a default implementation in `polynomial_element.pyx` and specialized ones for the FLINT classes.
comment:31 in reply to: ↑ 30 ; follow-up: ↓ 32 Changed 6 years ago by
Replying to vdelecroix:
Replying to pernici:
Replying to vdelecroix:
Hi,
Did you completely avoid the power series ring? We should provide more methods to polynomial like:
- `inverse_series(self, prec, as_polynomial=False)`: return the inverse series, and if `as_polynomial` is true return it as a polynomial

Yes, I used `fmpq_poly_inv_series` and `fmpq_poly_exp_series`, so the power series ring is not used.

I think this is bad. This will make your code specific to Q. You might consider implementing (in another ticket) an `inverse_series` and an `exp_series` method on polynomials that handle the option of returning a polynomial. These methods would have a default implementation in `polynomial_element.pyx` and specialized ones for the FLINT classes.
I have rewritten `composed_op` using `inverse_series` and `exp_series` methods in `polynomial_element.pyx` (without default implementations yet), using `_mul_trunc_` from #18420 as a template. It is faster than the "resultant" algorithm in all the cases tested.

I think that `inverse_series`, `exp_series`, etc. in `polynomial_element.pyx` should always return a polynomial.

I could open a ticket on implementing these methods, depending on #18420. Do you think that is ok?
comment:32 in reply to: ↑ 31 Changed 6 years ago by
Replying to pernici:
Replying to vdelecroix:
Replying to pernici:
Replying to vdelecroix:
Hi,
Did you completely avoid the power series ring? We should provide more methods to polynomial like:
- `inverse_series(self, prec, as_polynomial=False)`: return the inverse series, and if `as_polynomial` is true return it as a polynomial

Yes, I used `fmpq_poly_inv_series` and `fmpq_poly_exp_series`, so the power series ring is not used.

I think this is bad. This will make your code specific to Q. You might consider implementing (in another ticket) an `inverse_series` and an `exp_series` method on polynomials that handle the option of returning a polynomial. These methods would have a default implementation in `polynomial_element.pyx` and specialized ones for the FLINT classes.

I have rewritten `composed_op` using `inverse_series` and `exp_series` methods in `polynomial_element.pyx` (without default implementations yet), using `_mul_trunc_` from #18420 as a template. It is faster than the "resultant" algorithm in all the cases tested.

I think that `inverse_series`, `exp_series`, etc. in `polynomial_element.pyx` should always return a polynomial.
For a programmer this is clear, but not for a user: if you ask for an `inverse_series` you expect a series... but I do not have a strong opinion. A possible alternative is to write:

- one global `def inverse_series(self, prec, as_polynomial=False)` in `polynomial_element.pyx` that would use
- several `cpdef Polynomial _inverse_series(self, long prec)` that return polynomials

(see the sketch below).
I could open a ticket on implementing these methods, depending on #18420. Do you think that it is ok?
I think it would be good. Power series are by far too slow to be usable. Please put me in cc.
Vincent
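A minimal sketch of the kind of truncated-inverse helper discussed above, using a generic Newton iteration; the name, signature and iteration are illustrative assumptions, not the code of this branch or of #18420:

```
def inverse_series(p, prec):
    # return a polynomial q with q*p = 1 modulo x^prec (p(0) must be a unit);
    # a FLINT-backed class would instead call fmpq_poly_inv_series
    R = p.parent()
    q = R(~p[0])                       # inverse of the constant coefficient
    current = 1
    while current < prec:              # Newton iteration, doubling the precision
        current = min(2 * current, prec)
        q = (2*q - q*q*p).truncate(current)
    return q

R.<x> = QQ[]
p = 1 - x - x^2
print(inverse_series(p, 8))   # Fibonacci coefficients: 21*x^7 + 13*x^6 + ... + x + 1
```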
comment:33 Changed 6 years ago by
- Commit changed from 1fe9b052c8f1e49f9b0480f40dfa1937ef9f8dcf to dcb3748ffdc3748b6968cf91c188f8ab266ab626
Branch pushed to git repo; I updated commit sha1. New commits:
dcb3748 | Eliminated `QQxy_with_gens`; added some doctests.
comment:34 Changed 6 years ago by
Eliminated `QQxy_with_gens`, which was used to optimize the "resultant" algorithm, since the latter will not be used anyway after introducing `inverse_series` and `exp_series` in the "BFSS" algorithm.
comment:35 Changed 5 years ago by
comment:36 Changed 5 years ago by
- Branch changed from u/pernici/ticket/18356 to u/vdelecroix/18356
- Commit changed from dcb3748ffdc3748b6968cf91c188f8ab266ab626 to 885eb5d695818eeeed34362e37e5271960896c8b
- Description modified (diff)
- Milestone changed from sage-6.7 to sage-7.1
- Status changed from needs_info to needs_review
New commits:
51a3835 | Trac 18356: composed_sum and composed_product for polynomials
8c609a8 | Added `composed_op` replacing `composed_sum` and `composed_product`.
830d9a7 | Trac 18356: various improvements and bug fixes
bef78f5 | Eliminated call to big_O
8fcbb1a | Used `operator.add` instead of `+`.
896f9bd | Added checks on the input polynomials.
81ec213 | eliminated `newton_sum`
f720b8b | Trac 18356: Eliminated `QQxy_with_gens`; added some doctests.
de9f047 | Trac 18356: fix documentation
885eb5d | Trac 18356: do not use power series ring
comment:37 Changed 5 years ago by
comment:38 Changed 5 years ago by
Some benchmarks (see composed_op_benchmark.py) showing that the threshold between the algorithms should depend on the operator:
```
p1 = x^3 - x^2 - x - 1    p2 = x^3 + x^2 + x + 1
operation     BFSS      resultant  better
operator.add  0.199ms   0.188ms    resultant x1.06
operator.sub  0.129ms   0.184ms    BFSS x1.43
operator.mul  0.102ms   0.187ms    BFSS x1.82
operator.div  0.106ms   0.177ms    BFSS x1.67

p1 = x^4 - x^2 - 1        p2 = x^4 - x^2 - 3
operation     BFSS      resultant  better
operator.add  0.254ms   0.174ms    resultant x1.46
operator.sub  0.194ms   0.176ms    resultant x1.11
operator.mul  0.127ms   0.167ms    BFSS x1.32
operator.div  0.145ms   0.170ms    BFSS x1.17

p1 = x^8 - x - 1          p2 = x^8 - x^2 - 3
operation     BFSS      resultant  better
operator.add  0.992ms   2.647ms    BFSS x2.67
operator.sub  0.908ms   2.632ms    BFSS x2.90
operator.mul  0.350ms   0.271ms    resultant x1.29
operator.div  0.408ms   0.214ms    resultant x1.91

p1 = x^8 - 3*x - 1        p2 = x^2 - 3
operation     BFSS      resultant  better
operator.add  0.274ms   0.177ms    resultant x1.55
operator.sub  0.186ms   0.185ms    resultant x1.01
operator.mul  0.106ms   0.170ms    BFSS x1.61
operator.div  0.125ms   0.168ms    BFSS x1.35
```
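For reference, a hedged sketch of the kind of timing loop behind these numbers; the attached composed_op_benchmark.py is the actual script, this is not its code, and it deliberately uses the Python 2 idioms of the era (`operator.div`, `print` statement):

```
import operator
from time import time

R.<x> = QQ[]
p1 = x^8 - x - 1
p2 = x^8 - x^2 - 3
for op in (operator.add, operator.sub, operator.mul, operator.div):
    for algo in ('BFSS', 'resultant'):
        t0 = time()
        for _ in range(100):
            r = p1.composed_op(p2, op, algorithm=algo)
        # average time per call, in milliseconds
        print op.__name__, algo, '%.3fms' % ((time() - t0) * 10)
```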
comment:39 Changed 5 years ago by
- Commit changed from 885eb5d695818eeeed34362e37e5271960896c8b to 1ec84bb78d9e3a5a1e1c966d78e1797ec70afac0
Branch pushed to git repo; I updated commit sha1. New commits:
1ec84bb | Trac 18356: use _mul_trunc_ when possible
comment:40 Changed 5 years ago by
- Commit changed from 1ec84bb78d9e3a5a1e1c966d78e1797ec70afac0 to bfb6379a8ab8b59d37cb99204f40322b911fab90
Branch pushed to git repo; I updated commit sha1. New commits:
bfb6379 | Trac 18356: composed_op for any field
comment:41 Changed 5 years ago by
- Description modified (diff)
comment:42 Changed 5 years ago by
- Branch changed from u/vdelecroix/18356 to public/ticket/18356-composed-op
- Commit changed from bfb6379a8ab8b59d37cb99204f40322b911fab90 to a4d2df69e9dae4715093d18c991f0427b7f16bf9
- Milestone changed from sage-7.1 to sage-7.3
- Reviewers set to Vincent Delecroix, Marc Mezzarobba
I think that the code is good to go except for a few details which I tried to fix. Please set the ticket to positive_review if you are okay with my changes.
New commits:
d8474c9 | Merge branch 'develop' into review/composed
66ae0a0 | #18356 fix doctest
8fa5d3b | #18356 improve docstring of Polynomial.composed_op()
a4d2df6 | #18536 minor improvements to Polynmomial.composed_op()
comment:43 Changed 5 years ago by
- Commit changed from a4d2df69e9dae4715093d18c991f0427b7f16bf9 to c6b1c35de85957fc501d6f34b4de6b8def0a8ea0
comment:44 Changed 5 years ago by
- Status changed from needs_review to positive_review
Cool! Thank you!
comment:45 Changed 5 years ago by
- Branch changed from public/ticket/18356-composed-op to c6b1c35de85957fc501d6f34b4de6b8def0a8ea0
- Resolution set to fixed
- Status changed from positive_review to closed
comment:46 Changed 5 years ago by
- Commit c6b1c35de85957fc501d6f34b4de6b8def0a8ea0 deleted
Here is a benchmark.
Here are some results; for sparse polynomials the resultant algorithm performs better than for dense polynomials.