Opened 8 years ago
Closed 8 years ago
#18589 closed enhancement (fixed)
isogeny efficiency improvement
Reported by:  John Cremona  Owned by:  

Priority:  major  Milestone:  sage-6.8 
Component:  elliptic curves  Keywords:  isogeny 
Cc:  Luca De Feo  Merged in:  
Authors:  John Cremona  Reviewers:  Jeroen Demeyer 
Report Upstream:  N/A  Work issues:  
Branch:  3d687e5 (Commits, GitHub, GitLab)  Commit:  3d687e5225f67808eb6c5af5fbf4cb93f2000c62 
Dependencies:  Stopgaps: 
Description (last modified by )
Computation of isogenies of prime degree p is expensive when the degree is neither a "genus zero" prime [2,3,5,7,13] nor a "hyperelliptic prime" [11, 17, 19, 23, 29, 31, 41, 47, 59, 71] (for these there is special code written). In one situation we can save time: after factoring the degree-(p^2-1)/2
division polynomial, if there is exactly one factor of degree (p-1)/2, or exactly one subset of factors whose product has that degree, then that factor (or product) of degree (p-1)/2 must be a kernel polynomial. Then we do not need to check consistency, which is very expensive.
The example which led me to this was with p=89 over a quadratic number field, where E.isogeny_class() was taking days. After the change here that goes down to 3 hours. (There are 4 curves in the isogeny class and the code requires factoring the 89-division polynomial of each!) I used a less extreme example for a doctest: a 37-isogeny.
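A toy, pure-Python model of the shortcut (hypothetical names; the real patch works with the actual irreducible factors, not just their degrees): if exactly one subset of the factor degrees sums to (p-1)/2, the corresponding product is forced to be a kernel polynomial and the consistency check can be skipped.

```python
from itertools import combinations

def unique_kernel_factor_subset(degrees, p):
    """Given the degrees of the irreducible factors of the
    degree-(p^2-1)/2 division polynomial, return the unique index
    subset whose degrees sum to (p-1)/2, or None when such a subset
    is absent or ambiguous (in which case the expensive consistency
    check is still needed)."""
    target = (p - 1) // 2
    hits = []
    for r in range(1, len(degrees) + 1):
        for c in combinations(range(len(degrees)), r):
            if sum(degrees[i] for i in c) == target:
                hits.append(c)
                if len(hits) > 1:
                    return None  # ambiguous: fall back to checking
    return hits[0] if hits else None
```

For p=13 with fourteen degree-6 factors (the F_3 example discussed in the comments) every single factor already has the right degree, so the shortcut does not apply there.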
Change History (56)
comment:1 Changed 8 years ago by
Description:  modified (diff) 

comment:2 Changed 8 years ago by
Branch:  → u/cremona/18589 

Commit:  → 47ccfd587402b953c612fcd3cddaa541a6847bd3 
Description:  modified (diff) 
Status:  new → needs_review 
comment:3 Changed 8 years ago by
Cc:  Luca De Feo added 

comment:4 Changed 8 years ago by
Interesting, the hardest part of the computation is checking the consistency, not computing the actual result.
I had a look at what PARI can do for isogenies: ellisogeny
computes the isogeny, so it could be used to implement E.isogeny()
, but I assume it does no checking.
Minor comment: in the doctest, you don't need
from sage.schemes.elliptic_curves.isogeny_small_degree import isogenies_prime_degree_general
comment:5 followups: 6 7 Changed 8 years ago by
Thanks for the comments, I noticed you over at pari-dev and wondered what you were up to.
I don't know how general the pari code is -- here we can handle arbitrary fields (but only separable isogenies). Over number fields the really hard part is determining which prime degrees to test to get the whole class, and I put a lot of work into this (including CM and "potential CM" cases).
Some checking is necessary: I have an example where the 13-division poly factors as 14 degree-6 factors, but only 2 of them are kernel polys! For the rest you need to make a quadratic extension which splits each into two cubics, and then you have to match the factors. The Sage code does this!
comment:6 followup: 8 Changed 8 years ago by
Replying to cremona:
I have an example where the 13-division poly factors as 14 degree-6 factors, but only 2 of them are kernel polys!
Is this example in Sage? It would be good to have (if it takes too long, just add # not tested (10 minutes) or whatever)
comment:7 followup: 9 Changed 8 years ago by
Replying to cremona:
For the rest you need to make a quadratic extension which splits each into two cubics, and then you have to match the factors.
I'm wondering if you could use resultants instead...
comment:8 followups: 15 21 Changed 8 years ago by
Replying to jdemeyer:
Replying to cremona:
I have an example where the 13-division poly factors as 14 degree-6 factors, but only 2 of them are kernel polys!
Is this example in Sage? It would be good to have (if it takes too long, just add # not tested (10 minutes) or whatever)
No, but it is Example 3.1.1 on page 28 of [KT2013] which is here: http://wrap.warwick.ac.uk/57568/ The example is over F_3 so is not slow.
comment:9 followups: 10 18 Changed 8 years ago by
Replying to jdemeyer:
Replying to cremona:
For the rest you need to make a quadratic extension which splits each into two cubics, and then you have to match the factors.
I'm wondering if you could use resultants instead...
I thought that we did use resultants... but looking again at Kimi's code, the relevant function is mult() on line 1960: the effect of this function is to take the polynomial f and the rational function m, under the assumption that f is coprime to the denominator of m, and return the poly whose roots are m(alpha) as alpha runs over the roots of f. If m were a polynomial that would be res_Y(X-m(Y), f(Y)). I can see now that we could still use that formula with m(Y) replaced by the polynomial p(Y) which satisfies p*d=n (mod f) where m=n/d, which would amount to one extended gcd and one resultant. This may be better than the current code since it does not use rational functions.
Can we keep this idea for another ticket?
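A minimal sketch of the extended-gcd step just described (pure Python over GF(q) with hypothetical helper names, not the Sage implementation): replace the rational map m = n/d by the polynomial p with p*d = n (mod f), after which the resultant formula res_Y(X - p(Y), f(Y)) applies.

```python
# Polynomials are coefficient lists, lowest degree first; q is a prime.

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def polymul(a, b, q):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return trim(out)

def polydivmod(a, b, q):
    a, quo = a[:], [0] * max(len(a) - len(b) + 1, 0)
    inv_lead = pow(b[-1], -1, q)
    while len(a) >= len(b):
        c = a[-1] * inv_lead % q
        shift = len(a) - len(b)
        quo[shift] = c
        for i, bi in enumerate(b):
            a[shift + i] = (a[shift + i] - c * bi) % q
        trim(a)
    return trim(quo), a

def polymod(a, f, q):
    return polydivmod(a, f, q)[1]

def polysub(a, b, q):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0)
                  - (b[i] if i < len(b) else 0)) % q for i in range(n)])

def inverse_mod(d, f, q):
    # Extended Euclid in GF(q)[Y]: find u with u*d = 1 (mod f),
    # assuming d coprime to f (true here: f is coprime to the
    # denominator of the multiplication map).
    r0, r1, u0, u1 = f[:], polymod(d, f, q), [], [1]
    while r1:
        quo, r2 = polydivmod(r0, r1, q)
        r0, r1, u0, u1 = r1, r2, u1, polysub(u0, polymul(quo, u1, q), q)
    c = pow(r0[0], -1, q)  # r0 is a nonzero constant: the gcd
    return trim([x * c % q for x in u0])

def rational_as_poly_mod(n, d, f, q):
    """The polynomial p with p*d = n (mod f), so p(alpha) = n(alpha)/d(alpha)
    for every root alpha of f: one extended gcd, then res_Y(X - p(Y), f(Y))
    gives the polynomial whose roots are the images of the roots of f."""
    return polymod(polymul(inverse_mod(d, f, q), n, q), f, q)
```

For example over GF(7) with f = Y^2+1 and m = (3+Y)/(1+Y), the computed p satisfies p*d = n (mod f), so the resultant formula can be used without ever manipulating rational functions.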
comment:10 Changed 8 years ago by
comment:11 followup: 24 Changed 8 years ago by
In your code, "special case 1" is really a special case of "special case 2". For simplicity, I would therefore keep only special case 2.
comment:12 followup: 25 Changed 8 years ago by
There is a pre-existing typo: otain instead of obtain.
comment:14 followup: 33 Changed 8 years ago by
OK, I will fix the redundant imports; I agree that Case 1 is a special case of Case 2. And I am working on the resultant improvement -- so setting to "needs work" is fine. (Without the patch, though, I would not have been able to compute this: http://beta.lmfdb.org/EllipticCurve/2.2.89.1/81.1/a/ !!)
comment:15 followup: 27 Changed 8 years ago by
Since the constant (l-1)//2 appears in several places in the algorithm, I would prefer to see a variable assigned to it (let's call it D or whatever).
comment:16 Changed 8 years ago by
Also, can't you avoid some conversions? In F(f(m(x)).numerator()) the call m(x) is really a conversion which should be done just once, probably best as F(m). Then, I don't think the F() is needed and neither do you need x.
comment:17 followup: 28 Changed 8 years ago by
In this loop,

    for i in range((l-1)/(2*d)-1):
        g = mult(S[i])
        S.append(g)
        if g in factors:
            factors.remove(g)

when can it happen that g is not in factors?
comment:18 Changed 8 years ago by
Replying to cremona:
I thought that we did use resultants... but looking again at Kimi's code, the relevant function is mult() on line 1960: the effect of this function is to take the polynomial f and the rational function m, under the assumption that f is coprime to the denominator of m, and return the poly whose roots are m(alpha) as alpha runs over the roots of f. If m were a polynomial that would be res_Y(X-m(Y), f(Y)). I can see now that we could still use that formula with m(Y) replaced by the polynomial p(Y) which satisfies p*d=n (mod f) where m=n/d, which would amount to one extended gcd and one resultant. This may be better than the current code since it does not use rational functions.
I don't quite get what you're saying here. I think I said "resultants" too quickly before looking at the actual code.
Mathematically, we need to compute gcd(numerator(f(a/b)), psi), where f, a, b, psi are univariate polynomials and only f changes. I guess we know that the denominator of f(a/b) is b^d with d the degree of f. So we need gcd(b^d f(a/b), psi). Now suppose c is such that a/b = c mod psi; then we really need gcd(b^d f(c), psi), which equals gcd(f(c), psi). Now compute this in R[x]/psi(x) and this should be the most efficient way.
(this was written up quickly, I haven't really checked that it's correct)
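The cancellation of the b^d factor above only uses the general fact that gcd(u·v, w) = gcd(v, w) whenever gcd(u, w) = 1 (here u = b^d is invertible mod psi, since b is coprime to psi). A quick integer check of that identity, as a toy stand-in for the polynomial ring:

```python
from math import gcd

# u plays the role of b^d (invertible mod w), v of f(c), w of psi
for u, v, w in [(5, 12, 18), (7, 9, 24), (11, 30, 42)]:
    assert gcd(u, w) == 1              # hypothesis: u is a unit mod w
    assert gcd(u * v, w) == gcd(v, w)  # so it cancels from the gcd
print("identity holds on all samples")
```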
comment:19 Changed 8 years ago by
Actually, my idea works, but it's not so efficient since you're working mod psi, which is huge.
comment:20 Changed 8 years ago by
Status:  needs_review → needs_work 

I'm going to set this ticket to needs_work so that you can address the several minor points I mentioned (and answer comment 17, which I need in order to understand the algorithm better). Never mind the computation of mult().
comment:21 Changed 8 years ago by
Replying to cremona:
No, but it is Example 3.1.1 on page 28 of [KT2013] which is here: http://wrap.warwick.ac.uk/57568/ The example is over F_3 so is not slow.
Wouldn't it be a good test case for the current code (both over F_3 and F_9)?
comment:22 followup: 29 Changed 8 years ago by
Since you write "Every kernel polynomial is a product of irreducible factors of the division polynomial of the same degree", could special case 2 apply by just considering the polynomials of a given degree?
comment:23 Changed 8 years ago by
Thanks for all the comments -- I was busy with other things so have not replied to them individually yet, but perhaps it would be clearer if I now did so!
comment:24 Changed 8 years ago by
Replying to jdemeyer:
In your code, "special case 1" is really a special case of "special case 2". For simplicity, I would therefore keep only special case 2.
Agreed  done.
comment:25 Changed 8 years ago by
comment:26 Changed 8 years ago by
comment:27 Changed 8 years ago by
Replying to jdemeyer:
Since the constant (l-1)//2 appears in several places in the algorithm, I would prefer to see a variable assigned to it (let's call it D or whatever).
Done -- I called it l2.
comment:28 followup: 30 Changed 8 years ago by
Replying to jdemeyer:
In this loop,

    for i in range((l-1)/(2*d)-1):
        g = mult(S[i])
        S.append(g)
        if g in factors:
            factors.remove(g)

when can it happen that g is not in factors?
Very good question! I think it always happens, since the operation f -> mult(f) applies the multiplication-by-a map to the roots, so must take f to another irreducible factor of the same degree; the fact being checked is that the resulting cycle has the correct length, which is (l-1)/2 divided by deg(f), so that the product of the f's in the cycle gives the kernel polynomial. I will delete the check.
comment:29 Changed 8 years ago by
Replying to jdemeyer:
Since you write "Every kernel polynomial is a product of irreducible factors of the division polynomial of the same degree", could special case 2 apply by just considering the polynomials of a given degree?
Yes: if for some degree d dividing l2=(l-1)/2 there are exactly (l-1)/(2d) factors of that degree, then their product must be a kernel poly.
Some more theory: each subgroup has l2 x-coordinates and the rational function m(x) permutes these in a single cycle (by definition of "semi-primitive root" as a class mod l which generates the multiplicative group modulo <-1>). A kernel poly is a poly of degree l2 whose set of roots is invariant under this map. Since the map is defined over the ground field, each of the x-coordinates in any subgroup has the same degree d over the base, so d is a divisor of l2 and the kernel poly is a product of l2/d polys of degree d. On the other hand, an irreducible factor f of degree d (where d divides l2) need not be a factor of a kernel poly, as the F_3 example with l=13 shows. If that is the case then the orbit of f under mult() will be longer than l2/d and we will detect that after l2/d steps we have not returned to the start. In this circumstance we will have removed l2/d factors from the list but this is only *part of* an orbit, so later on when we start with one of the remaining factors in that orbit, we may well iterate onto a factor which is already removed.
There, I have proved that g need not be in factors! And more -- if we find a g which is not in factors then we have hit a part-orbit already seen, so we can break.
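The orbit bookkeeping just described can be modelled on a toy permutation (pure Python with hypothetical names; in the real code `step` is f -> mult(f) acting on the irreducible factors):

```python
def collect_kernel_orbits(factors, step, orbit_len):
    """Collect orbits of exact length orbit_len (= l2/deg(f)) under step.
    An orbit that closes after exactly orbit_len steps corresponds to a
    kernel polynomial (the product of its members); stepping onto an
    already-removed factor means we are on part of a longer orbit seen
    before, so that orbit is abandoned."""
    pool = list(factors)
    kernels = []
    while pool:
        start = pool.pop(0)
        orbit = [start]
        for _ in range(orbit_len - 1):
            g = step(orbit[-1])
            if g not in pool:       # part-orbit already consumed: break
                orbit = None
                break
            pool.remove(g)
            orbit.append(g)
        if orbit is not None and step(orbit[-1]) == start:
            kernels.append(tuple(orbit))
    return kernels
```

Two disjoint 3-cycles yield two kernel orbits, while a single 6-cycle walked with orbit_len=3 yields none, matching the "cycle has the correct length" check.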
comment:30 followup: 31 Changed 8 years ago by
Replying to cremona:
Very good question! I think it always happens, since the operation f -> mult(f) applies the multiplication-by-a map to the roots, so must take f to another irreducible factor of the same degree; the fact being checked is that the resulting cycle has the correct length, which is (l-1)/2 divided by deg(f), so that the product of the f's in the cycle gives the kernel polynomial. [I will delete the check.] I tried removing the check and got runtime errors, so my mathematics must be wrong...
I also used the same thinking as you, and I also don't understand the problem...
comment:31 Changed 8 years ago by
Replying to jdemeyer:
Replying to cremona:
Very good question! I think it always happens, since the operation f -> mult(f) applies the multiplication-by-a map to the roots, so must take f to another irreducible factor of the same degree; the fact being checked is that the resulting cycle has the correct length, which is (l-1)/2 divided by deg(f), so that the product of the f's in the cycle gives the kernel polynomial. [I will delete the check.] I tried removing the check and got runtime errors, so my mathematics must be wrong...
I also used the same thinking as you, and I also don't understand the problem...
See comment 29!
comment:32 Changed 8 years ago by
OK, I have done quite a rewrite which makes the logic much clearer (I think) and also treats all the special cases together. New commit soon...
comment:33 followup: 34 Changed 8 years ago by
Replying to cremona:
Without the patch though I would not have been able to compute this: http://beta.lmfdb.org/EllipticCurve/2.2.89.1/81.1/a/ !!
I'm actually curious why you get a huge speedup here. I can see that the patch here improves things, but to go from "not being able to compute" to "being able to compute" surprises me. Doesn't the computation and factoring of the division polynomial dominate the whole computation anyway?
comment:34 followup: 37 Changed 8 years ago by
Replying to jdemeyer:
Replying to cremona:
Without the patch though I would not have been able to compute this: http://beta.lmfdb.org/EllipticCurve/2.2.89.1/81.1/a/ !!
I'm actually curious why you get a huge speedup here. I can see that the patch here improves things, but to go from "not being able to compute" to "being able to compute" surprises me. Doesn't the computation and factoring of the division polynomial dominate the whole computation anyway?
Its computation is short, and its factoring takes < 1 hour. In the code I had, computing the complete isogeny class of 4 curves, I had to do that 4 times. (I am working on not having to recompute the kernel polys for other curves in the isogeny class, but that will definitely be another ticket.) I also had some lines like E1.is_isogenous(E2,proof=True) which triggered computation of the whole isogeny class of E1. (With proof=False it just checks traces of Frobenius, but I also had curves which were not isogenous but whose traces agreed at a lot of primes, so I was playing safe.)
comment:35 Changed 8 years ago by
Commit:  47ccfd587402b953c612fcd3cddaa541a6847bd3 → ad56369f5def8060575b6567342dedb28a447293 

Branch pushed to git repo; I updated commit sha1. New commits:
ad56369  #18589 simplified code + further improvements

comment:36 Changed 8 years ago by
Status:  needs_work → needs_review 

Please take a look at this. The code is (I hope) simpler to understand, deals with factors of each degree separately, and has more explanatory comments. I experimented with a couple of other versions of mult(), one of which is there but commented out, but they were no faster. I also added a new example as discussed above.
comment:37 followup: 40 Changed 8 years ago by
Replying to cremona:
Its computation is short, and its factoring takes < 1 hour.
Then why did the computation of the isogenies without this patch take days? What was the bottleneck? I find it hard to believe that the calls to mult() can take so much time.
comment:38 Changed 8 years ago by
In the current implementation, you are computing
    from sage.rings.arith import gcd
    a = _least_semi_primitive(l)
    m = E.multiplication_by_m(a, x_only=True)
    F = psi_l.parent()
    x = F.gen()
    d = F(m.denominator())
    n = F(m.numerator())
even when factors_by_degree is empty.
An easy optimization would be
    if all(factors == [] for factors in factors_by_degree):
        return ...
just before that block.
comment:39 followup: 41 Changed 8 years ago by
I think you can simplify

    factors = [h for h,e in psi_l.factor() if l2 % h.degree() == 0]

to

    factors = [h for h,e in psi_l.factor()]

(in the line below, you are only selecting the degrees you care about anyway)
comment:40 followup: 43 Changed 8 years ago by
Replying to jdemeyer:
Replying to cremona:
Its computation is short, and its factoring takes < 1 hour.
Then why did the computation of the isogenies without this patch take days? What was the bottleneck? I find it hard to believe that the calls to mult() can take so much time.
Me too, but as I tried to explain the script I was running was computing the isogeny class many times. I am rerunning just E.isogeny_class() under 6.7 now and will report back (but this ticket need not wait).
comment:41 Changed 8 years ago by
Replying to jdemeyer:
I think you can simplify

    factors = [h for h,e in psi_l.factor() if l2 % h.degree() == 0]

to

    factors = [h for h,e in psi_l.factor()]

(in the line below, you are only selecting the degrees you care about anyway)
Fine, I will make these simplifications!
comment:42 followup: 44 Changed 8 years ago by
Commit:  ad56369f5def8060575b6567342dedb28a447293 → 2f83c20a3753b9141c635ab26ed88015a9fed499 

Branch pushed to git repo; I updated commit sha1. New commits:
2f83c20  #18589: 2 simplifications following review

comment:43 followup: 45 Changed 8 years ago by
Reviewers:  → Jeroen Demeyer 

Replying to cremona:
Me too, but as I tried to explain the script I was running was computing the isogeny class many times. I am rerunning just E.isogeny_class() under 6.7 now and will report back (but this ticket need not wait).
I agree that this ticket makes sense by itself. It's just that it only does some easy optimizations, it doesn't fundamentally improve the algorithm.
I have been thinking a bit about the computation of mult() this afternoon and I might have an idea to compute mult^-1(f) (that is, find g such that mult(g) == f) much faster than the current mult(). Of course, theoretically, mult() and mult^-1() play the same role.
I'm wondering if it's worth pursuing further and for that I need to know if mult() is ever the bottleneck of the algorithm. (this is of course for a hypothetical follow-up ticket)
comment:44 followup: 47 Changed 8 years ago by
Status:  needs_review → needs_work 

Replying to git:
Branch pushed to git repo; I updated commit sha1. New commits:
2f83c20 #18589: 2 simplifications following review
The check not factors_by_degree only checks that the dict has no keys, but you really need to check that all values are empty lists:

    all(factors == [] for factors in factors_by_degree.values())
comment:45 Changed 8 years ago by
Replying to jdemeyer:
Replying to cremona:
Me too, but as I tried to explain the script I was running was computing the isogeny class many times. I am rerunning just E.isogeny_class() under 6.7 now and will report back (but this ticket need not wait).
I agree that this ticket makes sense by itself. It's just that it only does some easy optimizations, it doesn't fundamentally improve the algorithm.
Agreed.
I have been thinking a bit about the computation of mult() this afternoon and I might have an idea to compute mult^-1(f) (that is, find g such that mult(g) == f) much faster than the current mult(). Of course, theoretically, mult() and mult^-1() play the same role.
That is true -- you can use either. You can see in Prop 3.3.1 of the thesis that both are natural! It would be good to improve this.
I'm wondering if it's worth pursuing further and for that I need to know if mult() is ever the bottleneck of the algorithm. (this is of course for a hypothetical follow-up ticket)
comment:46 Changed 8 years ago by
Commit:  2f83c20a3753b9141c635ab26ed88015a9fed499 → 3d687e5225f67808eb6c5af5fbf4cb93f2000c62 

Branch pushed to git repo; I updated commit sha1. New commits:
3d687e5  #18589 further oneliner

comment:47 Changed 8 years ago by
Replying to jdemeyer:
Replying to git:
Branch pushed to git repo; I updated commit sha1. New commits:
2f83c20 #18589: 2 simplifications following review
The check not factors_by_degree only checks that the dict has no keys, but you really need to check that all values are empty lists:

    all(factors == [] for factors in factors_by_degree.values())
You are right, I had forgotten that in creating the dict it might have had some empty lists. I should have known better than to do more simplification than you recommended!
New commits:
3d687e5  #18589 further oneliner

comment:48 Changed 8 years ago by
Status:  needs_work → needs_review 

Any further changes will have to wait until tomorrow....
comment:50 followup: 51 Changed 8 years ago by
Many thanks for a fantastic job reviewing this (excellent in both mathematical and coding aspects). I will continue the discussion over at #18611.
I suspect that there have been other changes since 6.7 which have sped up my l=89 example, to do with factoring the 89-division polynomial over a quadratic field. As you say, one computation of mult() could not take such a long time even before this patch, yet a single run of E.isogenies_prime_degree(89) is still running after 14 hours, and all that has to do is run isogenies_prime_degree_general once with l=89.
comment:51 followup: 52 Changed 8 years ago by
Replying to cremona:
yet a single run of E.isogenies_prime_degree(89) is still running after 14 hours
Can you interrupt (CTRL-C) it and look at the traceback to see what it's doing?
comment:52 Changed 8 years ago by
Replying to jdemeyer:
Replying to cremona:
yet a single run of E.isogenies_prime_degree(89) is still running after 14 hours
Can you interrupt (CTRL-C) it and look at the traceback to see what it's doing?
As expected: it is in the lines

    > 1939     if mult(S[1]) == f:

and

    > 1928     return gcd(F(f(m(x)).numerator()),psi_l).monic()

and

    > 1588     return a.gcd(b, **kwargs)

and below that

    /usr/local/sage/sage2/src/sage/rings/polynomial/polynomial_number_field.pyx in sage.rings.polynomial.polynomial_number_field.Polynomial_absolute_number_field_dense.gcd (build/cythonized/sage/rings/polynomial/polynomial_number_field.c:1761)()
        203    h1 = self._pari_with_name('x')
        204    h2 = other._pari_with_name('x')
    --> 205    g = h1.gcd(h2)

and at the bottom, pari/gen.pyx
comment:53 followup: 54 Changed 8 years ago by
Thanks for the info.
Interesting, it looks like the gcd() computation in mult() takes so much time. I would assume that some kind of coefficient explosion occurs.
comment:54 followup: 55 Changed 8 years ago by
Replying to jdemeyer:
Thanks for the info.
Interesting, it looks like the gcd() computation in mult() takes so much time. I would assume that some kind of coefficient explosion occurs.
I think that the real point here is http://trac.sagemath.org/ticket/18461 which reimplemented univariate polynomial gcd! So it is possible that without any of the changes on this ticket or #18611 we would have seen a noticeable improvement. Anyway, between these two tickets we certainly have more efficient code so the exercise was worth carrying out. Thanks!
comment:55 Changed 8 years ago by
Replying to cremona:
Replying to jdemeyer:
Thanks for the info.
Interesting, it looks like the gcd() computation in mult() takes so much time. I would assume that some kind of coefficient explosion occurs.
I think that the real point here is http://trac.sagemath.org/ticket/18461 which reimplemented univariate polynomial gcd!
From the traceback, you see that PARI is used to compute this gcd, so the generic Sage code doesn't matter.
comment:56 Changed 8 years ago by
Branch:  u/cremona/18589 → 3d687e5225f67808eb6c5af5fbf4cb93f2000c62 

Resolution:  → fixed 
Status:  positive_review → closed 
New commits:
#18589 isogeny improvement