Opened 7 years ago

# Faster qqbar operations using resultants — at Version 5

• Reported by: gagern
• Owned by: —
• Priority: major
• Milestone: sage-6.6
• Keywords: number fields qqbar resultant exactify minpoly
• Cc: mmezzarobba, mjo
• Author: Martin von Gagern
• Reviewer: N/A
• Branch: u/gagern/ticket/17886
• Commit: 234b2c4f3435531007c095d278a4c02da8ee2365

This is a spin-off from comment:31:ticket:16964.

Many operations on algebraic numbers can become painfully slow. Most of these operations can be expressed in terms of resultants, and surprisingly the corresponding computations are sometimes far faster than what Sage currently does. So much faster, in fact, that I'm not sure whether to consider this ticket a request for enhancement or even a defect.

Take for example the difference between two algebraic numbers `r1` and `r2`, which are defined as follows:

```
sage: x = polygen(ZZ)
sage: p1 = x^5 + 6*x^4 - 42*x^3 - 142*x^2 + 467*x + 422
sage: p2 = p1((x-1)^2)
sage: r1 = QQbar.polynomial_root(p2, CIF(1, (2.1, 2.2)))
sage: r2 = QQbar.polynomial_root(p2, CIF(1, (2.8, 2.9)))
```

Computing their exact difference takes like forever:

```
sage: r4 = r1 - r2
sage: %time r4.exactify()
CPU times: user 2h 57min 1s, sys: 2.16 s, total: 2h 57min 3s
Wall time: 2h 57min 5s
```

On the other hand, computing a polynomial which has the difference as one root can be achieved fairly easily using resultants, and the resulting number is obtained in under one second:

```
sage: a,b = polygens(QQ, 'a,b')
sage: %time p3 = r1.minpoly()(a + b).resultant(r2.minpoly()(b), b)
CPU times: user 62 ms, sys: 0 ns, total: 62 ms
Wall time: 68 ms
sage: rs = [r for f in p3.factor()
....:       for r in f.univariate_polynomial().roots(QQbar, False)
....:       if r._value.overlaps(r1._value - r2._value)]
sage: assert len(rs) == 1
sage: r3, = rs
sage: %time r3.exactify()
CPU times: user 599 ms, sys: 0 ns, total: 599 ms
Wall time: 578 ms
```

One way to satisfy both input polynomials is `b = r2` together with `a + b = r1`, i.e. `a = r1 - r2`. So eliminating `b` gives a (reducible, non-minimal) polynomial in `a` which has that difference among its roots. I try to identify the right root by looking at the roots `r` of the factors `f` and checking which one overlaps the numeric interval of `r1._value - r2._value`.
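The elimination step itself is nothing Sage-specific; it can be sketched in plain Python with a bare-bones Sylvester-matrix resultant. The following is a minimal illustration, not the proposed implementation (the helper names `padd`, `pmul`, `sylvester`, `det` are made up for this sketch), using `r1 = sqrt(2)`, `r2 = sqrt(3)` so the numbers stay small: eliminating `b` from `(a+b)^2 - 2` and `b^2 - 3` yields `a^4 - 10*a^2 + 1`, which has `sqrt(2) - sqrt(3)` among its roots.

```python
from itertools import permutations

# Polynomials in `a` are integer coefficient lists, lowest degree first.
def padd(p, q):
    r = [0] * max(len(p), len(q))
    for i, c in enumerate(p):
        r[i] += c
    for i, c in enumerate(q):
        r[i] += c
    return r

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, c in enumerate(p):
        for j, d in enumerate(q):
            r[i + j] += c * d
    return r

def sylvester(P, Q):
    # P, Q: polynomials in `b` whose coefficients (lowest b-degree first)
    # are themselves polynomials in `a`.
    m, n = len(P) - 1, len(Q) - 1
    M = [[[0]] * (m + n) for _ in range(m + n)]
    for i in range(n):                       # n shifted copies of P
        for j, c in enumerate(reversed(P)):  # highest b-degree first
            M[i][i + j] = c
    for i in range(m):                       # m shifted copies of Q
        for j, c in enumerate(reversed(Q)):
            M[n + i][i + j] = c
    return M

def det(M):
    # Leibniz expansion; fine for the tiny matrices used here.
    n = len(M)
    total = [0]
    for perm in permutations(range(n)):
        inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        term = [-1] if inv % 2 else [1]
        for i in range(n):
            term = pmul(term, M[i][perm[i]])
        total = padd(total, term)
    return total

# P(a, b) = (a + b)^2 - 2, as a polynomial in b: (a^2 - 2) + 2a*b + b^2
P = [[-2, 0, 1], [0, 2], [1]]
# Q(b) = b^2 - 3
Q = [[-3], [0], [1]]

res = det(sylvester(P, Q))
while len(res) > 1 and res[-1] == 0:  # drop trailing zero coefficients
    res.pop()
print(res)  # coefficients of a^4 - 10*a^2 + 1, lowest degree first
```

The resulting polynomial has degree `deg(p1) * deg(p2)`, which is exactly the worst-case degree of the union field, but no polynomial reduction or field arithmetic in the compositum is needed to obtain it.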

The way I understand the current code, most exact binary operations are implemented by exactifying both operands to number field elements, constructing the union of the two number fields, converting both operands into it, and performing the operation there. But there is no reason why the number field for the result should have to contain the operands. I guess dropping that requirement is the main reason why direct resultant computations are faster.

I propose that we try to build all binary operations on algebraic numbers on resultants instead of union fields. I furthermore propose that we try to build the equality comparison directly on resultants of two univariate polynomials, without bivariate intermediate steps.
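For the equality comparison, the relevant fact is that the resultant of two univariate polynomials vanishes exactly when they share a root, so a single resultant plus interval refinement can decide equality without any bivariate elimination. A rough self-contained sketch of that criterion in plain Python (integer Sylvester determinant; the function name is made up for this sketch, and a real implementation would use a subresultant PRS rather than a determinant expansion):

```python
from itertools import permutations

def resultant(p, q):
    # p, q: integer coefficient lists, lowest degree first.
    m, n = len(p) - 1, len(q) - 1
    N = m + n
    M = [[0] * N for _ in range(N)]
    for i in range(n):                       # n shifted rows of p
        for j, c in enumerate(reversed(p)):  # highest degree first
            M[i][i + j] = c
    for i in range(m):                       # m shifted rows of q
        for j, c in enumerate(reversed(q)):
            M[n + i][i + j] = c
    # Leibniz determinant; fine for small degrees.
    total = 0
    for perm in permutations(range(N)):
        inv = sum(perm[i] > perm[j] for i in range(N) for j in range(i + 1, N))
        term = -1 if inv % 2 else 1
        for i in range(N):
            term *= M[i][perm[i]]
        total += term
    return total

# x^2 - 2 and x^4 - 4 share the roots ±sqrt(2): the resultant vanishes.
print(resultant([-2, 0, 1], [-4, 0, 0, 0, 1]))  # 0
# x^2 - 2 and x^2 - 3 have no common root.
print(resultant([-2, 0, 1], [-3, 0, 1]))        # 1
```

A zero resultant only says *some* roots coincide; the numeric intervals of the two algebraic numbers would still be needed to check whether the particular roots in question are equal.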

I can think of two possible problems. One is that the example above might be a special case, and that number field unions could in general be cheaper than resultants. The other is that the resultant could factor into several distinct polynomials, some of which might share a root; if that were the case, numeric refinement wouldn't be able to pick the right factor. Should we perhaps not factor the resultant polynomial, but instead compute the roots of the fully expanded form?

I'll try to come up with a branch which implements this approach.

### comment:1 Changed 7 years ago by gagern

• Branch set to u/gagern/ticket/17886

### comment:2 Changed 7 years ago by gagern

• Authors set to Martin von Gagern
• Commit set to db116160d1d082b38c8cd1f8f90bd20feaef942b

OK, here is some work in progress, so you can see what I have in mind. This doesn't pass tests yet, so it will definitely require some more work.

The division `r1/r2` for the example from the ticket description still takes extremely long. Surprisingly, the step which takes so long is not the computation of the resultant, its factors or its roots. No, it's the `candidate.exactify()` step, which turns an `ANRoot` element into an `ANExtensionElement`: the `do_polred` in there is taking ages. Any suggestions for how that might be avoided? It's “only” a degree-80 polynomial we're dealing with.

New commits:

• db11616 `trac #17886: Compute binary operations using resultants.`

### comment:3 Changed 7 years ago by git

• Commit changed from db116160d1d082b38c8cd1f8f90bd20feaef942b to 234b2c4f3435531007c095d278a4c02da8ee2365

Branch pushed to git repo; I updated commit sha1. New commits:

• 8d4b91b `Return a descriptor, not an algebraic number.`
• 234b2c4 `Choose roots field based on approximation field.`

### comment:4 Changed 7 years ago by gagern

OK, the doctests look a lot better now. Most remaining differences are arbitrary choices made differently, like sign changes or using a different root as the reference generator, stuff like that. In several cases I obtain simpler results, i.e. polynomials of lower degree and the like.

One thing that has me worried is cyclotomics. If both arguments are from cyclotomic fields, then we should do the union (which is fast in that case) instead of the minpoly and resultant computation. I haven't figured out how best to check for that case, though.

Another thing I fail on is the test taken from the ARPREC paper. That example is really fast in the current implementation, precisely because it operates in a single number field, so it doesn't require any unions at all. Should we try to detect this, i.e. check whether both arguments are rational or elements of the same number field? My failure only shows up later, where the original code somehow magically knows that the difference of two equal numbers is zero. I guess that if we did introduce a special case for equal number fields, we might get that for free, even though I don't know exactly how it works.

### comment:5 Changed 7 years ago by gagern

• Description modified (diff)

Including total CPU time for `r4.exactify()` using existing implementation.
