Opened 7 years ago

Closed 6 years ago

# special resultants ``composed_sum`` and ``composed_product``

Reported by: pernici
Owned by: (none)
Priority: major
Milestone: sage-7.3
Component: algebra
Authors: Mario Pernici, Vincent Delecroix
Reviewers: Vincent Delecroix, Marc Mezzarobba
Report Upstream: N/A
Commit: c6b1c35

Add a new operation `composed_op` on polynomials to compute the composed sum, difference, product and quotient of two polynomials, that is, the polynomial whose roots are the sums (resp. differences, products, quotients) of the roots of the two polynomials.

Two implementations are proposed: a straightforward one using resultants, and the algorithm from [BFSS]. The latter only supports polynomials with coefficients in `ZZ` or `QQ`, but it is asymptotically faster than using the `resultant` method.

For instance

```
sage: p1 = minpoly(cos(pi/43))
sage: p2 = minpoly(cos(pi/47))
sage: time r1 = p1.composed_op(p2, operator.add)
CPU times: user 108 ms, sys: 4 ms, total: 112 ms
Wall time: 111 ms
sage: time r2 = p1.composed_op(p2, operator.add, algorithm="resultant")
CPU times: user 47.1 s, sys: 0 ns, total: 47.1 s
Wall time: 47.1 s
sage: r1 == r2
True
sage: r1.is_irreducible()
True
```

In the example above, `r1` is the minimal polynomial of `cos(pi/43) + cos(pi/47)`.
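The defining property, that the composed sum has roots `a_i + b_j`, can also be checked numerically outside Sage. Below is a hypothetical plain-Python sketch (not part of the ticket's branch) for `p1 = x^2 - 2` and `p2 = x^2 - 3`, whose composed sum is the minimal polynomial of `sqrt(2) + sqrt(3)`:

```python
from itertools import product
from math import sqrt

def poly_from_roots(roots):
    # Expand prod_i (x - r_i) into a coefficient list, highest degree first.
    coeffs = [1.0]
    for r in roots:
        coeffs = [a - r * b for a, b in zip(coeffs + [0.0], [0.0] + coeffs)]
    return coeffs

roots1 = [sqrt(2), -sqrt(2)]    # roots of p1 = x^2 - 2
roots2 = [sqrt(3), -sqrt(3)]    # roots of p2 = x^2 - 3
sums = [a + b for a, b in product(roots1, roots2)]
r = [round(c) for c in poly_from_roots(sums)]
# r is [1, 0, -10, 0, 1], i.e. x^4 - 10*x^2 + 1
```

Rounding recovers the exact integer coefficients because the floating-point error is far below 1/2 here; the real implementations work exactly over `QQ` instead.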

Reference:

[BFSS] A. Bostan, P. Flajolet, B. Salvy and E. Schost, "Fast Computation of special resultants", Journal of Symbolic Computation 41 (2006), 1-29

### comment:1 Changed 7 years ago by pernici

• Branch set to u/pernici/ticket/18356
• Created changed from 05/03/15 17:41:29 to 05/03/15 17:41:29
• Modified changed from 05/03/15 17:41:29 to 05/03/15 17:41:29

### comment:2 Changed 7 years ago by pernici

• Commit set to 3806e8403764abf4d391161f836f0ec422201cc5

Here is a benchmark

```
from time import time

def bench(p, op, nmax):
    print 'p=', p
    for n in range(1, nmax):
        q = p^n
        p1 = q*(q + 1)
        p2 = (q + 2)*(q + 3)
        f = p1.composed_sum if op == '+' else p1.composed_product
        t0 = time(); r1 = f(p2, "resultant", op)
        t1 = time(); r2 = f(p2, "BFSS", op)
        t2 = time()
        assert r1.monic() == r2
        print 'op = %s n=%d resultant:%.4f BFSS:%.4f' % (op, n, t1 - t0, t2 - t1)

```

Here are some results. In the case of sparse polynomials the resultant algorithm performs better than in the case of dense polynomials:

```
p= x^4
op = + n=1 resultant:0.0020 BFSS:0.0141
op = + n=2 resultant:0.0146 BFSS:0.0350
op = + n=3 resultant:0.0790 BFSS:0.1117
op = + n=4 resultant:0.2977 BFSS:0.2673
op = + n=5 resultant:0.9597 BFSS:0.5701
op = + n=6 resultant:2.7090 BFSS:1.1049
op = + n=7 resultant:7.3010 BFSS:2.0877

p= x^4 + 7*x^3 + 3*x^2 + 9*x + 11
op = + n=1 resultant:0.0171 BFSS:0.0170
op = + n=2 resultant:1.0814 BFSS:0.0852
op = + n=3 resultant:22.6607 BFSS:0.3979
```
Last edited 7 years ago by pernici (previous) (diff)

### comment:3 Changed 7 years ago by pernici

• Status changed from new to needs_review

### comment:4 Changed 7 years ago by vdelecroix

• Status changed from needs_review to needs_work

Hi,

If you look at the branch field in the ticket description you will see that it is red. This means that there is a merge conflict with the latest beta. What you have to do is start not from the stable release but from the development release (the branch is named "develop" on the server). Do you know how to do that?

Vincent

### comment:5 Changed 7 years ago by vdelecroix

I had a look at the branch; just a few comments.

1. It is better to use `Return XYZ` rather than `Compute XYZ` in the one line description of functions. What the function is internally doing is mostly a technical question. What you want to know when reading the documentation is what does the function return.
1. This is old Cython:
```
cdef int i
for i from 0 <= i <= n:
    ...
```
you can just write
```
cdef int i
for i in range(n + 1):
    ...
```
and this will be transformed into a C loop by Cython.

1. In `hadamard_product` I would define precisely what the pointwise product is. If you take a naive look at Wikipedia, the pointwise product of two functions `f` and `g` is just `x -> f(x)g(x)`.
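For the record, the product meant in this ticket is the coefficientwise (Hadamard) product of power series, `sum_n a_n b_n x^n`, not the pointwise product of functions. A minimal sketch on coefficient lists (a hypothetical helper, not the ticket's code):

```python
def hadamard_product(a, b):
    # Coefficientwise product of two series given as coefficient lists
    # (lowest degree first); the result is truncated to the shorter input.
    return [x * y for x, y in zip(a, b)]

# (1 + 2x + 3x^2) Hadamard (4 + 5x + 6x^2) = 4 + 10x + 18x^2
```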

Vincent

### comment:6 Changed 7 years ago by pernici

Hi Vincent,

I opened #18373, based on the development branch, replacing this ticket. I followed your suggestions here and in #18242:

In `newton_sum` the ring `R` is added as an argument

Avoided using fraction fields (apart from R(QQ(1)/p1) there was also a division by `x`).

Eliminated duplicate code, merging `composed_add` and `composed_product`.

I did some benchmarks; there is a strong dependence on the sparseness of the polynomials. The resultant algorithm is faster for some polynomials with few coefficients and high degree. By default `composed_op` selects which algorithm to use based on the degree and the number of nonvanishing coefficients. The size of the coefficients does not seem to make much difference.

I eliminated `hadamard_product`.

### comment:7 Changed 7 years ago by vdelecroix

Hello,

You should not open a new ticket each time you create a branch. Now we have three duplicated tickets!! Delete #18373 (i.e. remove the branch field and set the milestone to `sage-duplicate/invalid/won't fix`) and simply modify the branch of this ticket.

Vincent

### comment:8 follow-up: ↓ 9 Changed 7 years ago by pernici

Sorry for this mess.

For this ticket I worked with the stable release. Then I downloaded the development release, made a branch there, and tried to modify this ticket with `./sage -dev push --ticket 18356` but I got an error message

```
Not pushing your changes because they would discard some of the commits on the
remote branch "u/pernici/ticket/18356".
```

and did not know how to proceed, so I opened #18373. Can I delete this ticket and keep #18373?

### comment:9 in reply to: ↑ 8 Changed 7 years ago by vdelecroix

> For this ticket I worked with the stable release. Then I downloaded the development release, made a branch there, and tried to modify this ticket with `./sage -dev push --ticket 18356` but I got an error message
>
> ```
> Not pushing your changes because they would discard some of the commits on the
> remote branch "u/pernici/ticket/18356".
> ```
>
> and did not know how to proceed, so I opened #18373.
>
> Can I delete this ticket and keep #18373?

If you want me to review it, no. There are comments here that are precious for the history. Moreover, if you do not know how to properly pull and push with git, choosing your solution will not help you.

You should first stop using the Sage dev scripts, which have been deprecated for at least two versions of Sage. Then you can read the manual and start using either the `git trac` command or `git` by itself. Everything is explained in detail.

Vincent

### comment:10 Changed 7 years ago by git

• Commit changed from 3806e8403764abf4d391161f836f0ec422201cc5 to 1898678cfccd18df76b57a015c03d96fb44542ae

Branch pushed to git repo; I updated commit sha1. New commits:

• `428c46c` Merge branch 'develop' of git://trac.sagemath.org/sage into mp1a
• `1898678` Added `composed_op` replacing `composed_sum` and `composed_product`.

### comment:11 Changed 7 years ago by pernici

• Description modified (diff)

### comment:12 Changed 7 years ago by pernici

• Status changed from needs_work to needs_review

### comment:13 Changed 7 years ago by vdelecroix

• Status changed from needs_review to needs_work

Hello,

Thanks for using this ticket.

1. In the description of the function you wrote a very precise definition. But then, in the `NOTE`, you claim that the two algorithms might behave differently...
1. Why do you only check the parent for `p1`?
```
if p1.parent().base_ring() not in (ZZ, QQ):
    raise ValueError('implemented only for polynomials on '
                     'integer or rational ring')
```
Moreover, why is it only in the case of `BFSS`?
1. What is this argument `_cached_QQ`!? You can certainly not use that. Note that in `sage.rings.qqbar` there are already `QQxy`, `QQxy_x` and `QQxy_y` defined. So either it is time critical, in which case you could create a function like
```
_QQxy = None

def QQxy_with_gens():
    global _QQxy
    if _QQxy is None:
        from sage.rings.rational_field import QQ
        R = QQ['x', 'y']
        _QQxy = (R, R.gen(0), R.gen(1))
    return _QQxy
```
and move it near the `PolynomialRing` function. Then this function should also be used in `sage.rings.qqbar`. You can also try to see why calling `PolynomialRing` is so slow.
1. It would be more natural to me if `op` were an operator and not a string. In other words, one of
```
sage: import operator
sage: operator.sub
<built-in function sub>
sage: operator.mul
<built-in function mul>
sage: operator.div
<built-in function div>
```
It can be checked in the method with
```
if op is operator.add:
    ...
elif op is operator.sub:
    ...
```
It is a very personal feeling; what do you think?
1. This takes time
```
from sage.rings.power_series_ring import PowerSeriesRing
from sage.rings.big_oh import O as big_O
```
so you had better move these imports to where they are used, i.e. in the `algorithm="BFSS"` case. Similarly, in `newton_sum` you should move `from sage.rings.power_series_ring import PowerSeriesRing`.
1. You can use the fact that `bool` is a subtype of `int` when you decide the algorithm:
```
sage: sum(bool(i) for i in [0,1,3,0,2,0,4])
4
```
1. In `newton_sum` the arguments `prec` and `R` are redundant. But I do not know how to do better.

### comment:14 Changed 7 years ago by git

• Commit changed from 1898678cfccd18df76b57a015c03d96fb44542ae to 5f888c83095eab89acbd96ee09e067918a60c258

Branch pushed to git repo; I updated commit sha1. New commits:

• `23f379f` Added `QQxy_with_gens`; taken the `prec` argument out of `newton_sum`
• `1cb8ac6` Eliminated call to big_O
• `aafe62e` Used `operator.add` instead of `+`.
• `5f888c8` Added checks on the input polynomials.

### comment:15 Changed 7 years ago by pernici

Hello,

1. The description corresponds to the monic form; now by default the result of `composed_op` is in monic form.
1. Added the check that `p1` and `p2` are on the same polynomial ring.
1. Put `QQxy_with_gens` in place of `cached_QQ`.
1. Used `operator.add`, etc.
1. Moved the import statements.
1. Used `bool`.
1. Eliminated the `prec` argument in `newton_sum`.

Other changes:

• Previously `composed_op` returned a polynomial in `x` regardless of the variable name of the input polynomials; fixing this gave a speed-up of the "resultant" algorithm for small polynomials.
• Added some more checks on the input polynomials.
• Eliminated the call to `integral` in the case of `operator.add`.

### comment:16 Changed 7 years ago by pernici

• Description modified (diff)

### comment:17 Changed 7 years ago by pernici

• Description modified (diff)
• Status changed from needs_work to needs_review

### comment:18 Changed 7 years ago by vdelecroix

• Status changed from needs_review to needs_info

The algorithm for `newton_sum` is not the one from [BFSS]. It is Schönhage's formula, isn't it? There is no inversion in BFSS.

Vincent

### comment:19 follow-up: ↓ 20 Changed 7 years ago by pernici

The algorithm for `newton_sum` is slightly changed with respect to the one written in BFSS, Lemma 1. I will change it to the latter form, which is simpler and slightly faster. In either case there is no inversion, though there is reversion.

Lemma 1 comes from Schönhage, who used it in a numerical context. Should I put the reference to Schönhage's paper, or is it sufficient to write "See [BFSS] and references therein."?

### comment:20 in reply to: ↑ 19 ; follow-up: ↓ 21 Changed 7 years ago by vdelecroix

> The algorithm for `newton_sum` is slightly changed with respect to the one written in BFSS, Lemma 1. I will change it to the latter form, which is simpler and slightly faster. In either case there is no inversion, though there is reversion.

As far as I understand, Lemma 1 is precisely Schönhage's formula (but I did not open the relevant paper). The algorithm of BFSS is Proposition 1. The inversion is only computed up to the degree of the polynomial. Then there are some iterations that depend on `prec`. Am I wrong?

> Lemma 1 comes from Schönhage, who used it in a numerical context. Should I put the reference to Schönhage's paper, or is it sufficient to write "See [BFSS] and references therein."?

I think it would make sense.

About the name, what do you think about `newton_series` instead? It would make the result clearer. And please, add the mathematical definition in the function.

Vincent
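For reference, the series being named here is the generating series of the power sums of the roots: by Lemma 1 of [BFSS], `sum_{k>=0} s_k x^k = rev(p')(x) / rev(p)(x)`, where `s_k = sum_i alpha_i^k` over the roots `alpha_i` of `p` and `rev` denotes coefficient reversal. A hypothetical pure-Python sketch of this definition (naive series division; not the ticket's implementation and not the asymptotically fast Proposition 1 algorithm):

```python
from fractions import Fraction

def newton_series(p, prec):
    # p: coefficient list [c0, ..., cn] of a degree-n polynomial,
    # lowest degree first, with cn != 0.
    # Returns [s_0, ..., s_{prec-1}], where s_k is the sum of the k-th
    # powers of the roots, via sum_k s_k x^k = rev(p') / rev(p).
    n = len(p) - 1
    dp = [i * p[i] for i in range(1, n + 1)]     # coefficients of p'
    num = list(reversed(dp)) + [0] * prec        # rev(p'), zero-padded
    den = list(reversed(p))                      # rev(p); den[0] = lc(p) != 0
    s = []
    for k in range(prec):                        # power series division
        c = Fraction(num[k]) / den[0]
        s.append(c)
        for j in range(1, min(len(den), prec - k)):
            num[k + j] -= c * den[j]
    return s

# x^2 - 3x + 2 = (x - 1)(x - 2): s_0 = 2, s_1 = 1 + 2 = 3, s_2 = 5, s_3 = 9
```

For example `newton_series([2, -3, 1], 4)` gives `[2, 3, 5, 9]`, matching the roots 1 and 2.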

### comment:21 in reply to: ↑ 20 Changed 7 years ago by pernici

> > The algorithm for `newton_sum` is slightly changed with respect to the one written in BFSS, Lemma 1. I will change it to the latter form, which is simpler and slightly faster. In either case there is no inversion, though there is reversion.
>
> As far as I understand, Lemma 1 is precisely Schönhage's formula (but I did not open the relevant paper). The algorithm of BFSS is Proposition 1. The inversion is only computed up to the degree of the polynomial. Then there are some iterations that depend on `prec`. Am I wrong?

You are right. I used that formula, not the algorithm following it.

> > Lemma 1 comes from Schönhage, who used it in a numerical context. Should I put the reference to Schönhage's paper, or is it sufficient to write "See [BFSS] and references therein."?
>
> I think it would make sense.
>
> About the name, what do you think about `newton_series` instead? It would make the result clearer. And please, add the mathematical definition in the function.

A problem with calling it `newton_series` is that `newton_sum` returns a polynomial, not a series.

### comment:22 Changed 7 years ago by git

• Commit changed from 5f888c83095eab89acbd96ee09e067918a60c258 to 1fe9b052c8f1e49f9b0480f40dfa1937ef9f8dcf

Branch pushed to git repo; I updated commit sha1. New commits:

• `1fe9b05` eliminated `newton_sum`

### comment:23 follow-up: ↓ 24 Changed 7 years ago by pernici

I eliminated `newton_sum`, since it can be inlined, and it is not yet clear to me which arguments `newton_sum` should have (`prec` or `R`), nor what it should return (a polynomial or a series).

Implementing `composed_op` using FLINT would be faster. For instance, `mullow`, present in FLINT's `nmod_poly`, is not implemented in Sage, so one has to compute the full product of the polynomials and then truncate.
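For clarity, `mullow` is the "short product": compute `p1*p2 mod x^prec` without ever producing the coefficients of degree `>= prec`. A schoolbook sketch of the semantics (a hypothetical helper; FLINT's `nmod_poly_mullow` does this far more efficiently):

```python
def mul_trunc(a, b, prec):
    # (a * b) mod x^prec on coefficient lists (lowest degree first):
    # only the coefficients of degree < prec are ever computed.
    res = [0] * min(prec, max(len(a) + len(b) - 1, 1))
    for i, ai in enumerate(a[:prec]):
        for j in range(min(len(b), prec - i)):
            res[i + j] += ai * b[j]
    return res

# (1 + x)^2 = 1 + 2x + x^2; truncated mod x^2 it is 1 + 2x
```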

### comment:24 in reply to: ↑ 23 Changed 7 years ago by vdelecroix

> I eliminated `newton_sum`, since it can be inlined, and it is not yet clear to me which arguments `newton_sum` should have (`prec` or `R`), nor what it should return (a polynomial or a series).

indeed

> Implementing `composed_op` using FLINT would be faster. For instance, `mullow`, present in FLINT's `nmod_poly`, is not implemented in Sage, so one has to compute the full product of the polynomials and then truncate.

see #18420 (needs review)

### comment:25 follow-up: ↓ 26 Changed 7 years ago by pernici

I will try to implement `composed_op` in `polynomial_rational_flint.pyx`; for the moment I have an implementation of `composed_mul`; for small polynomials (degree 2) it is a few times faster than the resultant algorithm, and 20x faster than the previous implementation of the BFSS algorithm.

Therefore the resultant algorithm will be removed.

For large polynomials the implementation in `polynomial_rational_flint.pyx` is currently slow; the reason is that there is no `hadamard_product` for polynomials in FLINT, and my implementation of it is very slow. I have to study FLINT a bit more to do better. Once I fix this, I expect the two implementations of BFSS to be equally fast for large polynomials.

### comment:26 in reply to: ↑ 25 Changed 7 years ago by vdelecroix

> I will try to implement `composed_op` in `polynomial_rational_flint.pyx`; for the moment I have an implementation of `composed_mul`; for small polynomials (degree 2) it is a few times faster than the resultant algorithm, 20x faster than the previous implementation of the BFSS algorithm.

Did you have a look at #18420 that I already mentioned in comment:23?! There is no need to implement your methods in `polynomial_rational_flint.pyx`. Just use `_mul_trunc_` that is implemented in #18420.

Vincent

### comment:27 follow-up: ↓ 28 Changed 7 years ago by pernici

I tried some benchmarks with small polynomials using #18420; I find that `p1._mul_trunc_(p2, prec)` is slightly faster than `(p1*p2).truncate(prec)` for polynomials of degree `prec-1`.

I will look further at #18420, but it is not the way to speed up `composed_op`, as the following benchmark shows.

```
from time import time
import operator

def test2():
    K.<x> = QQ[]
    p1 = x^2 - 2
    p2 = x^2 - 3
    t0 = time()
    r = p1.composed_op(p2, operator.mul, algorithm='BFSS')
    #r = p1.composed_mul(p2)
    t1 = time()
    print 'r=', r
    print '%.6f' % (t1 - t0)
```

Here are the results for the best out of 10 runs:

```
composed_op 'BFSS':
  up to PowerSeriesRing(...) included: 0.000524
  newton_sums:    0.001369
  hadamard prod:  0.000030
  integral:       0.000073
  exp:            0.001195
  reverse:        0.000002
  total:          0.003264
```

Only the Newton sums could use `_mul_trunc_`, though it is not clear to me how best to do it; in any case it would make things only slightly faster. Even if the Newton sums took zero time, this algorithm would still be slower than the resultant algorithm and much slower than `composed_mul`.

`composed_op` 'resultant': 0.000584

`composed_mul`: 0.000026

### comment:28 in reply to: ↑ 27 ; follow-up: ↓ 29 Changed 7 years ago by vdelecroix

Hi,

Did you completely avoid the power series ring? We should provide more methods on polynomials, like:

• `inverse_series(self, prec, as_polynomial=False)`: return the inverse series and if `as_polynomial` is true returns it as a polynomial
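Such an `inverse_series` is typically implemented by Newton iteration: if `q` satisfies `p*q = 1 mod x^k`, then `q*(2 - p*q)` satisfies it mod `x^(2k)`, so the precision doubles at each step. A hypothetical, self-contained pure-Python sketch of this scheme (the names `_mul_trunc` and `inverse_series` here are illustrative, not Sage's API):

```python
from fractions import Fraction

def _mul_trunc(a, b, prec):
    # schoolbook (a * b) mod x^prec on coefficient lists, lowest degree first
    res = [0] * prec
    for i, ai in enumerate(a[:prec]):
        for j in range(min(len(b), prec - i)):
            res[i + j] += ai * b[j]
    return res

def inverse_series(p, prec):
    # Inverse of p as a power series mod x^prec; p[0] must be nonzero.
    inv = [Fraction(1) / p[0]]
    k = 1
    while k < prec:
        k = min(2 * k, prec)
        t = _mul_trunc(p, inv, k)               # p * inv = 1 + O(x^(k/2))
        t = [2 - t[0]] + [-c for c in t[1:]]    # 2 - p*inv
        inv = _mul_trunc(inv, t, k)             # inv <- inv * (2 - p*inv)
    return inv

# 1/(1 - x) = 1 + x + x^2 + x^3 + ...
```

Since the precision needed is known in advance (as noted below), the iteration can stop exactly at `prec`, which is also what FLINT's `*_inv_series` functions do internally.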

As FLINT has some native implementation, we should use it where it makes sense. But this should be done in another ticket.

In all your code, you know in advance the precision needed. Right?

Vincent

### comment:29 in reply to: ↑ 28 ; follow-up: ↓ 30 Changed 7 years ago by pernici

Hi,

> Did you completely avoid the power series ring? We should provide more methods on polynomials, like:
>
> • `inverse_series(self, prec, as_polynomial=False)`: return the inverse series and if `as_polynomial` is true returns it as a polynomial

Yes, I used `fmpq_poly_inv_series`, `fmpq_poly_exp_series`, so the power series ring is not used.

> As FLINT has some native implementation, we should use it where it makes sense. But this should be done in another ticket.
>
> In all your code, you know in advance the precision needed. Right?

Yes.

### comment:30 in reply to: ↑ 29 ; follow-up: ↓ 31 Changed 7 years ago by vdelecroix

Hi,

> > Did you completely avoid the power series ring? We should provide more methods on polynomials, like:
> >
> > • `inverse_series(self, prec, as_polynomial=False)`: return the inverse series and if `as_polynomial` is true returns it as a polynomial
>
> Yes, I used `fmpq_poly_inv_series`, `fmpq_poly_exp_series`, so the power series ring is not used.

I think this is bad. It makes your code specific to Q. You might consider implementing (in another ticket) an `inverse_series` and an `exp_series` method on polynomials that handle the option of returning a polynomial. These methods would have a default implementation in `polynomial_element.pyx` and specialized ones for the FLINT classes.

### comment:31 in reply to: ↑ 30 ; follow-up: ↓ 32 Changed 7 years ago by pernici

Hi,

> > > Did you completely avoid the power series ring? We should provide more methods on polynomials, like:
> > >
> > > • `inverse_series(self, prec, as_polynomial=False)`: return the inverse series and if `as_polynomial` is true returns it as a polynomial
> >
> > Yes, I used `fmpq_poly_inv_series`, `fmpq_poly_exp_series`, so the power series ring is not used.
>
> I think this is bad. It makes your code specific to Q. You might consider implementing (in another ticket) an `inverse_series` and an `exp_series` method on polynomials that handle the option of returning a polynomial. These methods would have a default implementation in `polynomial_element.pyx` and specialized ones for the FLINT classes.

I have rewritten `composed_op` using `inverse_series`, `exp_series`, methods in `polynomial_element.pyx` (without default implementation yet), using `_mul_trunc_` in #18420 as a template. It is faster than the "resultant" algorithm in all the cases tested.

I think that `inverse_series`, `exp_series`, etc. in `polynomial_element.pyx` should always return a polynomial.

I could open a ticket on implementing these methods, depending on #18420. Do you think that it is ok?

### comment:32 in reply to: ↑ 31 Changed 7 years ago by vdelecroix

Hi,

> > > > Did you completely avoid the power series ring? We should provide more methods on polynomials, like:
> > > >
> > > > • `inverse_series(self, prec, as_polynomial=False)`: return the inverse series and if `as_polynomial` is true returns it as a polynomial
> > >
> > > Yes, I used `fmpq_poly_inv_series`, `fmpq_poly_exp_series`, so the power series ring is not used.
> >
> > I think this is bad. It makes your code specific to Q. You might consider implementing (in another ticket) an `inverse_series` and an `exp_series` method on polynomials that handle the option of returning a polynomial. These methods would have a default implementation in `polynomial_element.pyx` and specialized ones for the FLINT classes.
>
> I have rewritten `composed_op` using `inverse_series`, `exp_series`, methods in `polynomial_element.pyx` (without default implementation yet), using `_mul_trunc_` in #18420 as a template. It is faster than the "resultant" algorithm in all the cases tested.
>
> I think that `inverse_series`, `exp_series`, etc. in `polynomial_element.pyx` should always return a polynomial.

For a programmer this is clear, but not for a user: if you ask for `inverse_series` you expect a series... but I do not have a strong opinion. A possible alternative is to write:

• one global `def inverse_series(self, prec, as_polynomial=False)` in `polynomial_element.pyx` that would use
• several `cpdef Polynomial _inverse_series(self, long prec)` that return polynomials.

> I could open a ticket on implementing these methods, depending on #18420. Do you think that it is ok?

I think it would be good. Power series are by far too slow to be usable. Please put me in cc.

Vincent

### comment:33 Changed 7 years ago by git

• Commit changed from 1fe9b052c8f1e49f9b0480f40dfa1937ef9f8dcf to dcb3748ffdc3748b6968cf91c188f8ab266ab626

Branch pushed to git repo; I updated commit sha1. New commits:

 ​dcb3748 `Eliminated `QQxy_with_gens`; added some doctests.`

### comment:34 Changed 7 years ago by pernici

Eliminated `QQxy_with_gens`, which was used to optimize the "resultant" algorithm, since the latter will not be used anyway after introducing `inverse_series` and `exp_series` in the "BFSS" algorithm.

### comment:35 Changed 7 years ago by vdelecroix

See #19005 and #19006 for respectively `inverse_series` and `exp_series`.

### comment:36 Changed 7 years ago by vdelecroix

• Authors set to Marco Pernici, Vincent Delecroix
• Branch changed from u/pernici/ticket/18356 to u/vdelecroix/18356
• Commit changed from dcb3748ffdc3748b6968cf91c188f8ab266ab626 to 885eb5d695818eeeed34362e37e5271960896c8b
• Description modified (diff)
• Milestone changed from sage-6.7 to sage-7.1
• Status changed from needs_info to needs_review

New commits:

• `51a3835` Trac 18356: composed_sum and composed_product for polynomials
• `8c609a8` Added `composed_op` replacing `composed_sum` and `composed_product`.
• `830d9a7` Trac 18356: various improvements and bug fixes
• `bef78f5` Eliminated call to big_O
• `8fcbb1a` Used `operator.add` instead of `+`.
• `896f9bd` Added checks on the input polynomials.
• `81ec213` eliminated `newton_sum`
• `f720b8b` Trac 18356: Eliminated `QQxy_with_gens`; added some doctests.
• `de9f047` Trac 18356: fix documentation
• `885eb5d` Trac 18356: do not use power series ring

### comment:37 Changed 7 years ago by vdelecroix

I cherry-picked the commits of the old branch and added two commits on top of them:

• `de9f047`: documentation improvements
• `​885eb5d`: avoid `PowerSeriesRing` (using #19005 and #19006) and some more cdef declarations

### Changed 7 years ago by vdelecroix

Utils for benchmarks between BFSS and resultant

### comment:38 Changed 7 years ago by vdelecroix

Some benchmarks (see composed_op_benchmark.py) showing that the threshold between the two algorithms should depend on the operator:

```
p1 = x^3 - x^2 - x - 1
p2 = x^3 + x^2 + x + 1
operation     BFSS      resultant  better
operator.sub  0.129ms   0.184ms    BFSS x1.43
operator.mul  0.102ms   0.187ms    BFSS x1.82
operator.div  0.106ms   0.177ms    BFSS x1.67

p1 = x^4 - x^2 - 1
p2 = x^4 - x^2 - 3
operation     BFSS      resultant  better
operator.sub  0.194ms   0.176ms    resultant x1.11
operator.mul  0.127ms   0.167ms    BFSS x1.32
operator.div  0.145ms   0.170ms    BFSS x1.17

p1 = x^8 - x - 1
p2 = x^8 - x^2 - 3
operation     BFSS      resultant  better
operator.sub  0.908ms   2.632ms    BFSS x2.90
operator.mul  0.350ms   0.271ms    resultant x1.29
operator.div  0.408ms   0.214ms    resultant x1.91

p1 = x^8 - 3*x - 1
p2 = x^2 - 3
operation     BFSS      resultant  better
operator.sub  0.186ms   0.185ms    resultant x1.01
operator.mul  0.106ms   0.170ms    BFSS x1.61
operator.div  0.125ms   0.168ms    BFSS x1.35
```
Last edited 7 years ago by vdelecroix (previous) (diff)

### comment:39 Changed 7 years ago by git

• Commit changed from 885eb5d695818eeeed34362e37e5271960896c8b to 1ec84bb78d9e3a5a1e1c966d78e1797ec70afac0

Branch pushed to git repo; I updated commit sha1. New commits:

• `1ec84bb` Trac 18356: use _mul_trunc_ when possible

### comment:40 Changed 7 years ago by git

• Commit changed from 1ec84bb78d9e3a5a1e1c966d78e1797ec70afac0 to bfb6379a8ab8b59d37cb99204f40322b911fab90

Branch pushed to git repo; I updated commit sha1. New commits:

• `bfb6379` Trac 18356: composed_op for any field

### comment:41 Changed 7 years ago by vdelecroix

• Description modified (diff)

### comment:42 Changed 6 years ago by mmezzarobba

• Branch changed from u/vdelecroix/18356 to public/ticket/18356-composed-op
• Commit changed from bfb6379a8ab8b59d37cb99204f40322b911fab90 to a4d2df69e9dae4715093d18c991f0427b7f16bf9
• Milestone changed from sage-7.1 to sage-7.3
• Reviewers set to Vincent Delecroix, Marc Mezzarobba

I think that the code is good to go except for a few details which I tried to fix. Please set the ticket to positive_review if you are okay with my changes.

New commits:

• `d8474c9` Merge branch 'develop' into review/composed
• `66ae0a0` #18356 fix doctest
• `8fa5d3b` #18356 improve docstring of Polynomial.composed_op()
• `a4d2df6` #18536 minor improvements to Polynmomial.composed_op()

### comment:43 Changed 6 years ago by git

• Commit changed from a4d2df69e9dae4715093d18c991f0427b7f16bf9 to c6b1c35de85957fc501d6f34b4de6b8def0a8ea0

Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:

• `b339d44` #18356 improve docstring of Polynomial.composed_op()
• `c6b1c35` #18536 minor improvements to Polynmomial.composed_op()

### comment:44 Changed 6 years ago by vdelecroix

• Status changed from needs_review to positive_review

Cool! Thank you!

### comment:45 Changed 6 years ago by vbraun

• Branch changed from public/ticket/18356-composed-op to c6b1c35de85957fc501d6f34b4de6b8def0a8ea0
• Resolution set to fixed
• Status changed from positive_review to closed

### comment:46 Changed 6 years ago by chapoton

• Authors changed from Marco Pernici, Vincent Delecroix to Mario Pernici, Vincent Delecroix
• Commit c6b1c35de85957fc501d6f34b4de6b8def0a8ea0 deleted