Opened 21 months ago
Closed 20 months ago
#31767 closed enhancement (fixed)
Enhanced checks for sign() with qqbar elements with addition and subtraction
Reported by:  tscrim  Owned by:  

Priority:  major  Milestone:  sage-9.4 
Component:  performance  Keywords:  qqbar 
Cc:  vdelecroix, mmezzarobba, gh-mwageringel, fredrik.johansson, saraedum, slelievre  Merged in:  
Authors:  Travis Scrimshaw  Reviewers:  Vincent Delecroix, Sebastian Oehms 
Report Upstream:  N/A  Work issues:  
Branch:  642b7c2 (Commits, GitHub, GitLab)  Commit:  642b7c2adc0d2a54c1097eb1cf923bb2e648df0f 
Dependencies:  Stopgaps: 
Description
The following are painfully slow:
sage: x1 = AA(2^(1/100))
sage: x2 = AA(2^(1/100))
sage: y = x1 - x2
sage: z = x1 - x2
sage: y == y
sage: y == 0
sage: y == z
Note that for testing, you need to recreate y before each comparison, as otherwise it has been made exact.
We avoid computing the minimal polynomial (as it calls exactify()) in the rich comparison until we need to. We also avoid calling exactify() for a binary operation in sign() until we absolutely need to. Additionally, the identity test is now done before reaching _richcmp_.
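The lazy strategy described above (refine an interval enclosure first, fall back to an exact representation only when forced) can be sketched outside of Sage. Everything below is a hypothetical toy model — ToyAlgebraic and its methods are made-up names, not qqbar's API — but it shows why sign() can often avoid the expensive exactify() step:

```python
from fractions import Fraction

class ToyAlgebraic:
    """Toy model of a lazily-evaluated real number (hypothetical, not
    Sage's API).  The number is given by a callable producing an
    enclosing interval (lo, hi) at a requested precision, plus an exact
    fallback playing the role of qqbar's expensive exactify()."""
    def __init__(self, enclosure, exact):
        self.enclosure = enclosure  # prec -> (lo, hi)
        self.exact = exact          # () -> Fraction, expensive in real life

    def sign(self, max_prec=64):
        # Refine the interval first; only if refinement never excludes
        # zero do we pay for the exact representation.
        for prec in range(1, max_prec + 1):
            lo, hi = self.enclosure(prec)
            if lo > 0:
                return 1
            if hi < 0:
                return -1
        v = self.exact()
        return (v > 0) - (v < 0)

# A number that is exactly zero: every enclosure contains 0, so only the
# exact fallback can decide.
zero = ToyAlgebraic(lambda p: (Fraction(-1, 2**p), Fraction(1, 2**p)),
                    lambda: Fraction(0))
# A number near 2^-10: refinement alone decides once prec exceeds 10.
small = ToyAlgebraic(lambda p: (Fraction(1, 2**10) - Fraction(1, 2**p),
                                Fraction(1, 2**10) + Fraction(1, 2**p)),
                     lambda: Fraction(1, 2**10))
```

For a truly nonzero element the loop terminates quickly, which is the whole point of the ticket: the expensive path is reached only when the value may genuinely be zero.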
Change History (27)
comment:1 followups: 3 10 Changed 21 months ago by
Branch:  → public/performance/qqbar_sign-31767 

Commit:  → fc64ba2ec7d3337e2799fc57f78dcbb30dfa0fe7 
Status:  new → needs_review 
comment:2 Changed 21 months ago by
See also #31768 for a followup subclassing ANUnaryExpr for some finer control and trivial simplifications.
comment:3 followup: 4 Changed 21 months ago by
This proposal looks good to me but
+        # Rationals
+        if type(sd._left._descr) is ANRational and type(sd._right._descr) is ANRational:
+            ret = sd._op(sd._left._descr._value, sd._right._descr._value)
+            if ret == 0:
+                self._set_descr(ANRational(QQ.zero()))
+                return 0
+            return ret.sign()
Do you have an instance where the above code path is used? When both operands are rationals the descriptor should be automatically updated. We don't want to repeat this block in all possible places: cmp, sign, _add_, ...
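The fast path being discussed can be illustrated in plain Python. This is a hedged sketch, not Sage code: Python's Fraction stands in for ANRational descriptors, and binary_expr_sign is a hypothetical name mimicking the patched sign() logic:

```python
from fractions import Fraction
import operator

def binary_expr_sign(left, op, right):
    """Sketch of the rational fast path quoted above: when both operands
    of a binary expression are known rationals, compute the sign directly
    instead of exactifying the whole expression tree."""
    if isinstance(left, Fraction) and isinstance(right, Fraction):
        ret = op(left, right)
        # In the real patch, a zero result also rewrites the descriptor
        # to an exact rational zero via self._set_descr(ANRational(QQ.zero())).
        if ret == 0:
            return 0
        return 1 if ret > 0 else -1
    raise NotImplementedError("non-rational operands need refinement or exactify()")
```

The point of Vincent's question is exactly that this shortcut, while correct, duplicates logic that the descriptor update should already handle in one place.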
Replying to tscrim:
I have not run any timings on this, but things to check for speed regressions would be low degree minimal polynomials. Suggestions for good timing tests would be useful.
I don't completely understand why I need to update some of the tests for the eigenvalues, but the real precision seems to have significantly improved. So there might be some other doctests that need fixing; waiting for the patchbot.
When precision increases it is typically that more enclosure refinement occurred. With the new code it looks to me that we should have less refinement. If my diagnostic is correct, this is bad. Would be nice to check what are the diameters of the intervals before and after the change.
comment:4 Changed 21 months ago by
Replying to vdelecroix:
This proposal looks good to me but
+        # Rationals
+        if type(sd._left._descr) is ANRational and type(sd._right._descr) is ANRational:
+            ret = sd._op(sd._left._descr._value, sd._right._descr._value)
+            if ret == 0:
+                self._set_descr(ANRational(QQ.zero()))
+                return 0
+            return ret.sign()

Do you have an instance where the above code path is used? When both operands are rationals the descriptor should be automatically updated. We don't want to repeat this block in all possible places: cmp, sign, _add_, ...
This is occurring after a call to exactify(). Thus, we could have something that ends up being a rational that didn't know it before. For example, b = a - (a + 1) for some irrational a, then taking b + 1 == 0. Adding a print("rat check") just after the if statement gives:
sage: a = AA(sqrt(2))
sage: b = a - (a + 1)
sage: (b + 1).sign()
rat check
0
I will add this as a test and push. I also need to figure out why the test in src/sage/geometry/polyhedron/backend_field.py
is failing too...
Replying to tscrim:
I have not run any timings on this, but things to check for speed regressions would be low degree minimal polynomials. Suggestions for good timing tests would be useful.
I don't completely understand why I need to update some of the tests for the eigenvalues, but the real precision seems to have significantly improved. So there might be some other doctests that need fixing; waiting for the patchbot.
When precision increases it is typically that more enclosure refinement occurred. With the new code it looks to me that we should have less refinement. If my diagnostic is correct, this is bad. Would be nice to check what are the diameters of the intervals before and after the change.
Here is what is happening. Because we are sorting the (complex) roots, we compare things lexicographically on the (real, imag) pair. Eventually it has to check that the real parts of the two numbers are not equal (see case 3:). This calls the real _richcmp_ method. In here, it previously computed the minpoly() (which called exactify() of the real parts), whereas now it only gets the minpoly if it is known, so it first tries the refinement before ultimately calling exactify() (as both are supposed to actually be 0). This gets called when comparing these roots with a root that is explicitly known to have a real part of 0.
I don't think this is a bad thing, as I believe the refinement is a relatively fast operation, especially if we ultimately compute exactify() on most elements anyway. Also, getting rid of calls to exactify() in cases where it is not needed is a larger benefit IMO. However, some more actual use cases would be useful here for testing timings.
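The root-sorting order described above can be sketched quickly. The comparator below is a toy using Python complex floats (qqbar compares interval enclosures and only falls back to exact methods when real parts may coincide); lex_cmp_complex is a hypothetical name for illustration:

```python
def lex_cmp_complex(a, b):
    """Compare complex numbers lexicographically on the (real, imag)
    pair, the root-sorting order described above.  The real-part branch
    is the one that, in qqbar, reaches the real _richcmp_ and may
    trigger refinement or (formerly) exactify()."""
    if a.real != b.real:          # "case 3": real parts differ
        return -1 if a.real < b.real else 1
    if a.imag != b.imag:          # tie-break on imaginary parts
        return -1 if a.imag < b.imag else 1
    return 0
```

In qqbar the hard case is exactly when the real parts are equal but only approximately known — then float-style comparison is not enough and the refinement loop (or ultimately exactify()) must decide.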
comment:5 Changed 21 months ago by
Commit:  fc64ba2ec7d3337e2799fc57f78dcbb30dfa0fe7 → b069b81e5e5fc26b917f22eef825e5746f5510fb 

Branch pushed to git repo; I updated commit sha1. New commits:
b069b81  Fixing bug with addition x + (-x), adding doctests, removing redundancy.

comment:6 Changed 21 months ago by
Commit:  b069b81e5e5fc26b917f22eef825e5746f5510fb → 3b8e217e54446b5cf6d7a3a18770fe4455f4cc94 

Branch pushed to git repo; I updated commit sha1. New commits:
3b8e217  Removing one other redundancy.

comment:7 Changed 21 months ago by
Commit:  3b8e217e54446b5cf6d7a3a18770fe4455f4cc94 → 5da8b2774eb2e024cc7a850789e1ed1a6816d354 

Branch pushed to git repo; I updated commit sha1. New commits:
5da8b27  Removing same redundant checks for the QQbar elements.

comment:8 followup: 9 Changed 21 months ago by
I have fixed a bug with addition x + (-x) giving the wrong sign. This was the issue with backend_field.py. I added a few other doctests and removed some redundant checks.
I just don't know how much this will affect timings when refining the precision instead of calling exactify() via minpoly(). Something else that might be useful is adding a way to compute the minpoly without computing exactify() for elements that have an idea of what their minpoly is; mainly I am thinking of elements using ANRoot. We could also fashion something that passes the minpoly around when we do operations that preserve it, such as conjugation. This can be done on a separate ticket. What do you think?
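The idea of passing the minimal polynomial through cheap operations can be made concrete. For instance, if p is the minpoly of a, then ±p(-x) vanishes at -a, so negation never requires a new exactification. The sketch below is an illustration of that fact, not Sage's implementation; negate_root_poly and evaluate are hypothetical helper names:

```python
def negate_root_poly(coeffs):
    """Given coefficients [c0, c1, ..., cn] of p(x) = sum c_i x^i with
    root a, return the coefficients of +-p(-x), which has root -a.
    Illustrates propagating a known minpoly through negation."""
    out = [c if i % 2 == 0 else -c for i, c in enumerate(coeffs)]
    if out[-1] < 0:              # normalize the leading coefficient
        out = [-c for c in out]
    return out

def evaluate(coeffs, x):
    """Evaluate a coefficient list (lowest degree first) at x."""
    return sum(c * x**i for i, c in enumerate(coeffs))
```

For example, x^3 - 2 (coefficients [-2, 0, 0, 1], root 2^(1/3)) maps to x^3 + 2, whose root is -2^(1/3). Conjugation is even simpler: a and its complex conjugate share the same real minpoly, so nothing at all needs recomputing.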
comment:9 Changed 21 months ago by
Replying to tscrim:
I have fixed a bug with addition x + (-x) giving the wrong sign. This was the issue with backend_field.py. I added in a few other doctests and removed some redundant checks.
I just don't know how much this will affect timings refining the precision instead of calling the exactify() via the minpoly(). Something else that might be useful is adding a way to compute the minpoly without computing exactify() for elements that have an idea about what their minpoly is. Mainly here I am thinking elements using ANRoot. We could also fashion something that passes around the minpoly when we do operations that preserve it, such as conjugation. This can be done on a separate ticket. What do you think?
Computing the minpoly is doable and desirable for all descriptors, without exactification. This is the reason why the function composed_op was introduced on polynomials:
sage: x = polygen(QQ)
sage: p = x^3 + 2*x - 1
sage: q = x^3 - x + 3
sage: a = p.roots(QQbar, multiplicities=False)[0]
sage: b = q.roots(QQbar, multiplicities=False)[0]
sage: r = p.composed_op(q, operator.add)
sage: r(a + b)  # not necessarily the minpoly, but at least a multiple
0.?e-15
More generally, I think that QQbar elements beyond degree 8 (let's say) should not try to find a reasonable representation of the number field they represent (which is what exactify is currently doing via polred, and it is costly). They should just be considered as the root of some irreducible polynomial. Computing the minpoly of a + b, abs(a), etc. is doable.
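The key fact behind composed_op is that the sums a_i + b_j of roots of p and q are the roots of a single polynomial with rational coefficients (Sage computes it with resultants). A numeric illustration of that fact — not Sage's implementation, and poly_from_roots is a made-up helper — using p = x^2 - 2 and q = x^2 - 3:

```python
import math
from itertools import product

def poly_from_roots(roots):
    """Expand prod(x - r) into a coefficient list, lowest degree first."""
    coeffs = [1.0]
    for r in roots:
        shifted = [0.0] + coeffs          # multiply current poly by x
        for i, c in enumerate(coeffs):
            shifted[i] -= r * c           # subtract r * (current poly)
        coeffs = shifted
    return coeffs

roots_p = [math.sqrt(2), -math.sqrt(2)]   # roots of x^2 - 2
roots_q = [math.sqrt(3), -math.sqrt(3)]   # roots of x^2 - 3
# The polynomial with roots {a_i + b_j} has rational coefficients, which
# is what composed_op(p, q, operator.add) computes exactly via resultants:
sums = [a + b for a, b in product(roots_p, roots_q)]
composed = [round(c) for c in poly_from_roots(sums)]
# composed == [1, 0, -10, 0, 1], i.e. x^4 - 10*x^2 + 1
```

So a (multiple of the) minpoly of a + b is available without ever exactifying a + b itself, which is exactly the degree-based strategy being advocated here.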
Though, all this logic is now in Fredrik's Calcium, and I am not sure how desirable it is to improve the version in Sage compared to having a Calcium version of AA/QQbar.
comment:10 Changed 21 months ago by
Replying to tscrim:
New commits:
fc64ba2 Improvements to sign() and comparisons for qqbar elements.
Why not use (good enough refinements of) approximate values to compute the sign of x-y when x and y are known to have different minimal polynomials? (cf. https://groups.google.com/g/sage-devel/c/OaUlf5VHBbc/m/doZC1KpMBAAJ )
comment:11 followup: 14 Changed 21 months ago by
comment:9: I agree that it is better not to invest a lot of time in improving AA/QQbar, since it will almost certainly be replaced by Calcium. However, I think we can do a few improvements around the current code, and some of these changes might still be applicable once we switch the backend over (unlike, say, #31768). Perhaps we can include the current patch; maybe I will do a followup to try and lessen the impact of the changes.
Fredrik, do you have any comments about this?
comment:10: Once you have different minimal polynomials, then I believe you essentially have everything. Perhaps I have misunderstood something?
Green patchbot (the pyflakes warning existed before this ticket).
comment:12 Changed 21 months ago by
Actually, to construct a better refinement of a + b knowing the minimal polynomials m_a(x) and m_b(x), there are at least two strategies:
- compute m_{a+b}(x) (e.g. via composed_op) and do refinement with it
- compute better refinements of a and b and add them
Depending on deg(m_a), deg(m_b), deg(m_{a+b}), it might be better to use the second option.
comment:13 Changed 21 months ago by
To compare (the real parts of) two qqbars in Calcium, I just compare enclosures at a couple of different levels of precision, otherwise do an exact subtraction and check the sign of the difference. Most of the work will then be in the subtraction (which factors the resultant polynomial and then refines the precision of the inputs if needed to get the right root). This is not really optimized, but so far it has not been a bottleneck that I needed to fix. In fact, this issue reminds me that I did not even optimize for both inputs being real, in which case it will suffice to increase the precision until convergence...
More generally, it's an interesting problem to optimize computing sign(a1+a2+...+an) or proving a1+a2+...+an == 0 without computing the entire sum explicitly.
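One half of that problem is easy to sketch: interval arithmetic can certify that a sum is nonzero without ever forming it exactly, by summing enclosures at increasing precision. Proving that the sum is exactly zero is the hard half and needs exact/symbolic methods. The sketch below is a toy with hypothetical names (sum_excludes_zero, about), where each term is a callable producing an interval at a requested precision:

```python
from fractions import Fraction

def sum_excludes_zero(terms, max_prec=100):
    """Try to certify a1 + ... + an != 0 by summing interval enclosures
    at increasing precision.  This can only ever prove 'nonzero'; an
    inconclusive result may mean the sum really is zero."""
    for prec in range(1, max_prec + 1):
        lo = sum(t(prec)[0] for t in terms)
        hi = sum(t(prec)[1] for t in terms)
        if lo > 0 or hi < 0:
            return True    # zero excluded: the sum is provably nonzero
    return False           # inconclusive

def about(value):
    """An enclosure of width 2/2^prec around a hidden exact value."""
    v = Fraction(value)
    return lambda p: (v - Fraction(1, 2**p), v + Fraction(1, 2**p))
```

For a sum that is actually zero the loop never terminates successfully, which is why qqbar ultimately still needs an exactify()-style fallback.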
comment:14 followups: 15 21 Changed 21 months ago by
Replying to tscrim:
comment:10: Once you have different minimal polynomials, then I believe you essentially have everything. Perhaps I have misunderstood something?
Consider the following computation:
x = AA(1+100*2^(-1000))^(1/100)
y = AA(1+101*2^(-1000))^(1/101)
x < y
(which calls sign(y-x) internally). With your patch (if I read it correctly; I did not actually run it), Sage will notice that the minimal polynomials are different, but will not take advantage of that information. It will still trigger exact computation of y-x (which takes forever).
I suggest adding for this case (different minimal polynomials) the code
while self._value.contains_zero():
    self._more_precision()
return self._value.unique_sign()
Is there any reason not to do so? (For the record: in this particular example, this loop takes 82ms on my machine.)
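The proposed loop is easy to model in isolation. Below is a hedged toy version — RefinableInterval and sign_by_refinement are invented names mimicking qqbar's contains_zero()/_more_precision()/unique_sign() interface — showing why the loop terminates whenever the value is known in advance to be nonzero (here, because the minimal polynomials differ):

```python
from fractions import Fraction

class RefinableInterval:
    """Toy stand-in for a qqbar value: an interval around a hidden exact
    value whose radius halves on each call to _more_precision()."""
    def __init__(self, exact):
        self._exact = Fraction(exact)
        self._radius = Fraction(1)

    def _more_precision(self):
        self._radius /= 2

    def contains_zero(self):
        return self._exact - self._radius <= 0 <= self._exact + self._radius

    def unique_sign(self):
        return 1 if self._exact - self._radius > 0 else -1

def sign_by_refinement(value, max_steps=200):
    """The loop from the comment: refine until the enclosure excludes
    zero.  Guaranteed to terminate only when the value is known to be
    nonzero; otherwise we must eventually fall back to exactify()."""
    for _ in range(max_steps):
        if not value.contains_zero():
            return value.unique_sign()
        value._more_precision()
    raise RuntimeError("still contains zero; fall back to exactify()")
```

Even a value as small as 2^-50 is resolved in about fifty cheap halvings, which matches the 82ms observation above versus the "takes forever" exact path.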
comment:15 Changed 21 months ago by
Replying to gh-IliaSmilga:
I suggest adding for this case (different minimal polynomials) the code
while self._value.contains_zero():
    self._more_precision()
return self._value.unique_sign()

Is there any reason not to do so? (For the record: in this particular example, this loop takes 82ms on my machine.)
I see what you're saying, but that is a more invasive change. It probably is better to just do that here. However, I don't quite understand Dima's suggestion on the sage-devel thread. It is probably something simple, and I am just too tired to see it right now.
comment:16 followup: 18 Changed 21 months ago by
In this part of the new code
if sd._left.minpoly() == sd._right.minpoly():
    # Negating the element does not change the minpoly
    right = sd._right if sd._op is operator.sub else -sd._right
    c = cmp_elements_with_same_minpoly(sd._left, right, sd._left.minpoly())
    if c == 0:
        self._set_descr(ANRational(QQ.zero()))
        return 0
    elif c is not None:
        return c
you seemed to ignore that the operator could be mul or truediv.
comment:17 Changed 21 months ago by
I wonder if we can improve exactification instead of avoiding it. At the moment, any two instances of AlgebraicGenerator are recognized to be different, even if they describe the same root of the same polynomial. Is there a special reason for this? What is the idea behind the attribute _index?
In any case, I made the following attempt: I modified the method _richcmp_ of this class, changed self is other in union to self == other, and accordingly super is self to super == self in super_poly (diff shown with respect to stable 9.3):

src/sage/rings/qqbar.py

diff --git a/src/sage/rings/qqbar.py b/src/sage/rings/qqbar.py
index 81e333b..b94bea7 100644
--- a/src/sage/rings/qqbar.py
+++ b/src/sage/rings/qqbar.py
@@ class AlgebraicGenerator(SageObject):
             sage: gen > qq_generator
             True
         """
+        sf = self._field
+        of = other._field
+        if type(sf) != type(of):
+            return rich_to_bool(op, 1)
+        if sf != of:
+            return richcmp_not_equal(sf, of, op)
+        sa = self.root_as_algebraic()
+        oa = other.root_as_algebraic()
+        if sa in QQ and oa in QQ:
+            return richcmp(QQ(sa), QQ(oa), op)
+        c = cmp_elements_with_same_minpoly(sa, oa, sf.polynomial())
+        if c is not None:
+            return rich_to_bool(op, c)
         return richcmp(self._index, other._index, op)

     def is_complex(self):
@@ class AlgebraicGenerator(SageObject):
             return other
         if other._trivial:
             return self
-        if self is other:
+        if self == other:
             return self
         if other in self._unions:
             return self._unions[other].parent
@@ class AlgebraicGenerator(SageObject):
         if checked is None:
             checked = {}
         checked[self] = True
-        if super is self:
+        if super == self:
             return self._field.gen()
         for u in self._unions.values():
             if u.parent in checked:
This change passes all doctests (including long ones) and makes exactification possible for the test case of the ticket description:
sage: x1 = AA(2^(1/100))
....: x2 = AA(2^(1/100))
....: y = x1 - x2
....: z = x1 - x2
sage: %time y._exact_field()
CPU times: user 160 ms, sys: 16 ms, total: 176 ms
Wall time: 174 ms
Trivial generator
sage: %time y == z
CPU times: user 260 ms, sys: 0 ns, total: 260 ms
Wall time: 259 ms
True
Of course, this does not solve the test case of comment:14. Concerning that, it seems the function composed_op could do a good job here:
sage: x = AA(1+2*2^(-1000))^(1/2); y = AA(1+3*2^(-1000))^(1/3)
sage: p = x.minpoly()
sage: q = y.minpoly()
sage: %time r = p.composed_op(q, operator.sub)
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 5.53 ms
sage: %time F = r.factor()
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 2.65 ms
I would prefer such a solution, since it would not just fix this issue but also improve exactification in general. On the other hand, I agree with Vincent that this only makes sense (in view of the upcoming Calcium) if it is easy to realize.
But note that this issue differs from the former one: it is not hanging in Pari's nffactor, but already one step before, in the construction of the Pari number field (of degree two!):
sage: AG = x._exact_field()
sage: AG.pari_field()
^C
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-8-2b02365498e1> in <module>
----> 1 AG.pari_field()

~/devel/sage/local/lib/python3.9/site-packages/sage/rings/qqbar.py in pari_field(self)
   3025         if self._pari_field is None:
   3026             pari_pol = self._field.pari_polynomial("y")
-> 3027             self._pari_field = pari_pol.nfinit(1)
   3028         return self._pari_field
   3029

cypari2/auto_gen.pxi in cypari2.gen.Gen_base.nfinit()

KeyboardInterrupt:
Thus, I would take this to be a bug in Pari, wouldn't you?
comment:18 Changed 20 months ago by
Replying to vdelecroix:
In this part of the new code

if sd._left.minpoly() == sd._right.minpoly():
    # Negating the element does not change the minpoly
    right = sd._right if sd._op is operator.sub else -sd._right
    c = cmp_elements_with_same_minpoly(sd._left, right, sd._left.minpoly())
    if c == 0:
        self._set_descr(ANRational(QQ.zero()))
        return 0
    elif c is not None:
        return c

you seemed to ignore that the operator could be mul or truediv.
Those cases already would have been picked up earlier in this block:
elif type(sd) is ANBinaryExpr:
    ls = sd._left.sign()
    rs = sd._right.sign()
    if sd._op is operator.mul or sd._op is operator.truediv:
-        return sd._left.sign() * sd._right.sign()
+        return ls * rs
comment:19 Changed 20 months ago by
@soehms That seems like a good idea. However, we then have to come up with a better hash. My guess about the _index idea is that they wanted very fast comparison checks and hashing that is consistent with equality. This line actually dates back to the initial implementation of algebraic reals (commit 07704df513). So there is room for improvement here.
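One standard way to keep hashing consistent with a semantic equality is to hash on a coarsening of the equality key. The sketch below is a hypothetical design (ToyGenerator is an invented name, not AlgebraicGenerator): equality compares the defining polynomial plus an isolating root index, while the hash uses only the polynomial. Equal objects then always hash equally; distinct roots of the same polynomial may collide, which Python permits:

```python
class ToyGenerator:
    """Sketch of hash/equality consistency for a generator identified by
    (defining polynomial, root index) instead of an arbitrary _index."""
    def __init__(self, poly_coeffs, root_index):
        self.poly = tuple(poly_coeffs)   # defining polynomial, low degree first
        self.root_index = root_index     # which root of that polynomial

    def __eq__(self, other):
        return (isinstance(other, ToyGenerator)
                and self.poly == other.poly
                and self.root_index == other.root_index)

    def __hash__(self):
        # Coarser than __eq__: only the polynomial contributes, so the
        # hash/equality contract holds even though root comparison is
        # the expensive part.
        return hash(self.poly)
```

The trade-off is exactly the one suspected above: _index gave O(1) comparison and hashing, while semantic equality makes hashing easy but comparison potentially expensive.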
comment:20 followup: 22 Changed 20 months ago by
What about postponing further improvements to other tickets? This one is ready and I think it is worth setting to positive review.
comment:21 Changed 20 months ago by
Replying to gh-IliaSmilga:
Replying to tscrim:
comment:10: Once you have different minimal polynomials, then I believe you essentially have everything. Perhaps I have misunderstood something?
Consider the following computation:
x = AA(1+100*2^(-1000))^(1/100)
y = AA(1+101*2^(-1000))^(1/101)
x < y
I agree that for a single + operation on two elements for which you know the minimal polynomial, infinite refinement is the way to go.
However, you need something to compute minimal polynomials in order to handle

sage: a = AA(1 + 2^(-1000))
sage: x = a^(1/100) + a^(1/104) - a^(1/102) - a^(1/103)
I opened #31889 for that purpose.
comment:22 Changed 20 months ago by
Replying to vdelecroix:
What about postponing further improvements to other tickets? This one is ready and I think it is worth setting to positive review.
Agreed!
comment:23 Changed 20 months ago by
Commit:  5da8b2774eb2e024cc7a850789e1ed1a6816d354 → 642b7c2adc0d2a54c1097eb1cf923bb2e648df0f 

comment:24 Changed 20 months ago by
The bot was green modulo one pyflakes before I pushed. The last change just removes an unused import. Tests still pass for me.
comment:25 Changed 20 months ago by
Reviewers:  → Vincent Delecroix, Sebastian Oehms 

Status:  needs_review → positive_review 
comment:27 Changed 20 months ago by
Branch:  public/performance/qqbar_sign-31767 → 642b7c2adc0d2a54c1097eb1cf923bb2e648df0f 

Resolution:  → fixed 
Status:  positive_review → closed 
I have not run any timings on this, but things to check for speed regressions would be low degree minimal polynomials.
I don't completely understand why I need to update some of the test for the eigenvalues, but the real precision seems to have significantly improved. So there might be some other doctests that need fixing; waiting for the patchbot.
I also took the chance to mark a test as # long time that was taking nearly 2 seconds on my computer.
New commits:
Improvements to sign() and comparisons for qqbar elements.