#10140 closed enhancement (fixed)
Base sage.geometry.cone on the Parma Polyhedra Library (PPL)
Reported by: | vbraun | Owned by: | mhampton |
---|---|---|---|
Priority: | major | Milestone: | sage-4.7.1 |
Component: | geometry | Keywords: | ppl |
Cc: | novoselt, nthiery | Merged in: | sage-4.7.1.alpha1 |
Authors: | Volker Braun, Andrey Novoseltsev | Reviewers: | Andrey Novoseltsev, Volker Braun |
Report Upstream: | N/A | Work issues: | |
Branch: | Commit: | ||
Dependencies: | #10039 | Stopgaps: |
Description (last modified by )
As a first useful application of the PPL Cython library interface, I have changed sage.geometry.cone.Cone to use the PPL wrapper instead of cddlib. Here is a quick benchmark with a fan that was somewhat challenging:
sage: from sage.schemes.generic.toric_variety_library import toric_varieties_rays_cones
sage: rays, cones = toric_varieties_rays_cones['BCdlOG']
sage: timeit('Fan(cones,rays)')
5 loops, best of 3: 1.95 s per loop
With the old Polyhedron/cddlib interface, I got instead
5 loops, best of 3: 42.1 s per loop
Apply
- trac_10140_sublattice_intersection.patch
- trac_10140_base_cone_on_ppl_original.patch
- trac_10140_reviewer.patch
- trac_10140_switch_point_containment_to_PPL.patch
Attachments (6)
Change History (63)
comment:1 Changed 9 years ago by
- Cc novoselt added
- Status changed from new to needs_work
comment:2 Changed 9 years ago by
Wonderful speed gain! However, when do you expect PPL to become a standard library? My impression is that it is currently difficult to add new standard packages, and PPL is more likely to become an optional one for a while. In that case there should be an option to use PPL, but toric geometry should keep working whether or not it is present.
Little nitpicks about the patch:
1. I don't see why it is necessary to remove the possibility of Cone construction from a Polyhedron.
2. I think that the new possible input C_Polyhedron should be documented in the INPUT section rather than in a note at the end.
3. Is it possible to make C_Polyhedrons immutable (on demand)?
4. I prefer all line generators to be put at the end of the ray list for non-strictly convex cones, if they were determined automatically. Because if one works with faces of such a cone, then all these generators are the same and, therefore, the others are more "important". That's very minor, of course.
5. I am trying to follow the PEP8 style guide http://www.python.org/dev/peps/pep-0008/ which says: When raising an exception, use "raise ValueError('message')" instead of the older form "raise ValueError, 'message'". New line 1167 tries to change this recommended form to the old one ;-)
6. Looking at what you have done with facet normals, maybe we should change the cone.dual_lattice() method to always return ZZ^n if there is no dual method for the lattice? Thinking of the class group situation, it seems like a more sensible default.
7. I think cones should not be printed explicitly (i.e. using "print") when ValueError is raised due to an attempt to intersect cones in lattices of different dimension. The reason is that some users may want to intercept this exception and deal with it in their own way without showing any output. Also, I think that the check should be done not for equality of lattice dimensions, but for equality of the lattices themselves, because cones from different lattices should not be intersected.
comment:3 Changed 9 years ago by
1. Well, you can always write Cone(Polyhedron.rays()). I took out the possibility to directly pass a Polyhedron to make sure that I caught all instances of old code.
2. PPL stuff should not be visible to the user, I think. If anything, I'll probably move the note to a comment in the code.
3. In the C++ library there is no particular support for that. Of course, in C++ you just declare things const if you don't want them mutable. So I guess we should emulate that with get/set_immutable() methods.
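The get/set_immutable() idiom mentioned here could look roughly like the following plain-Python sketch (class and method names are illustrative, not the actual implementation): the object stays mutable until it is explicitly frozen.

```python
class RayList:
    """Sketch of a mutable-until-frozen container guarded by set_immutable()."""

    def __init__(self, rays):
        self._rays = list(rays)
        self._is_immutable = False

    def set_immutable(self):
        # One-way switch: once frozen, the object stays frozen.
        self._is_immutable = True

    def is_immutable(self):
        return self._is_immutable

    def append(self, ray):
        if self._is_immutable:
            raise ValueError("this object is immutable; modify a copy instead")
        self._rays.append(ray)
```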
4,5,6. Sounds good.
7. Oops, a left-over print() from debugging things :-)
comment:4 Changed 9 years ago by
1. My concern was that it removes a publicly exposed feature that has been in Sage for several months (and is likely to stay there a few months more). So I vote for resurrecting this possibility, which is actually very easy when things are PPL-based - you just replace Polyhedron with Polyhedron.rays(), as you said. More importantly, if PPL will become something that users have to install separately, then ALL of the old code related to polyhedron -> cone should remain untouched, but there should be some new path to go around it. By the way, I am not quite familiar with packages - is it possible to change the Sage library during package installation? If not, and all PPL-related code must be in the library whether PPL is installed or not, then "doctest-fixing" may be a bad approach here. Maybe we should change the doctests to make them deterministic, although I am not quite sure how.
2. I see your point, but having undocumented input rubs me the wrong way. If you don't want to mention PPL in the documentation (which is quite reasonable), maybe we can add a function like _Cone_from_PPL for such input? For the actual class there is no need for this - it is already not supposed to be called directly by users.
3. Yes, that's what I had in mind. My concern is that when you use a PPL representation, you may need to actually change it somehow to get the result. So far in your code you were constructing a new object based on the existing one, so that the original stays intact, but it seems to me that this step is quite easy to forget, and it would be nice to have something forcing you to do so.
7. That explains it ;-)
comment:5 Changed 9 years ago by
This is quite possibly a stupid question, but does it mean we can get rid of cddlib if this gets merged?
comment:6 Changed 9 years ago by
No! Currently cddlib is required by Gfan. It may be possible to convince Anders Jensen, the author of Gfan, to switch to PPL as well.
comment:7 Changed 9 years ago by
- Status changed from needs_work to needs_review
The updated version now cleans up all doctests.
comment:8 Changed 9 years ago by
comment:9 Changed 9 years ago by
comment:10 Changed 9 years ago by
I have trouble applying trac_10140_base_cone_on_ppl.patch; at least the following section doesn't apply:
@@ -1578,7 +1667,7 @@
 Let's take a 3-d cone on 4 rays::

-    sage: c = Cone([(1,0,1), (0,1,1), (-1,0,1), (0,-1,1)])
+    sage: c = Cone([(1,0,1), (0,1,1), (-1,0,1), (0,-1,1)], check=False)

 Then any ray generates a 1-d face of this cone, but if you construct such a face directly, it will not "sit" inside the cone::
Is it due to #9972? I can see that particular section in #9972 but slightly shifted in line number.
comment:11 Changed 9 years ago by
It is quite likely, since #9972 was invasive and written mostly by me, i.e. it was not coordinated with this one. What exactly do you mean by "have trouble"? I just tried to apply it on top of my queue and it succeeded with two fuzzy hunks.
For the record: I am going to review this ticket but I am waiting for PPL package to get approved ;-)
comment:12 Changed 9 years ago by
comment:13 Changed 9 years ago by
- Status changed from needs_review to needs_work
- Work issues set to rebasement
This is on a fresh installation of 4.6.2.alpha2 without any patches applied:
applying trac_10140_base_cone_on_ppl.patch
patching file sage/geometry/cone.py
Hunk #30 succeeded at 1981 with fuzz 1 (offset 10 lines).
Hunk #39 succeeded at 2332 with fuzz 2 (offset 29 lines).
now at: trac_10140_base_cone_on_ppl.patch
comment:14 Changed 9 years ago by
- Status changed from needs_work to needs_review
I've uploaded my rebased patch.
comment:15 Changed 9 years ago by
- Work issues rebasement deleted
François, does the new version work for you? (Now I don't get any fuzz for either patch.)
comment:16 Changed 9 years ago by
It applies if I start from 4.6.2.alpha2, but I have fuzz:
patching file sage/geometry/cone.py
Hunk #29 succeeded at 1969 (offset 10 lines).
Hunk #30 succeeded at 1980 with fuzz 1 (offset 10 lines).
Hunk #31 succeeded at 2076 (offset 30 lines).
Hunk #32 succeeded at 2110 (offset 30 lines).
Hunk #33 succeeded at 2129 (offset 30 lines).
Hunk #34 succeeded at 2217 (offset 30 lines).
Hunk #35 succeeded at 2237 (offset 30 lines).
Hunk #36 succeeded at 2271 (offset 30 lines).
Hunk #37 succeeded at 2289 (offset 30 lines).
Hunk #38 succeeded at 2314 (offset 30 lines).
Hunk #39 succeeded at 2330 with fuzz 2 (offset 29 lines).
Hunk #40 succeeded at 2464 (offset 29 lines).
Hunk #41 succeeded at 2492 (offset 29 lines).
Hunk #42 succeeded at 2523 (offset 29 lines).
Hunk #43 succeeded at 2608 (offset 29 lines).
Hunk #44 succeeded at 2874 (offset 38 lines).
Hunk #45 succeeded at 3100 (offset 38 lines).
Hunk #46 succeeded at 3119 (offset 38 lines).
Hunk #47 succeeded at 3189 (offset 38 lines).
Hunk #48 succeeded at 3204 (offset 38 lines).
patching file sage/geometry/fan.py
Hunk #3 succeeded at 1824 (offset 13 lines).
patching file sage/geometry/fan_morphism.py
Presumably the same fuzz you had before. I wonder if it is because of #10336 which was merged in alpha2? But it applies and that's the main thing.
comment:17 Changed 9 years ago by
On a positive note, not only does it apply, but the tests pass without problem on a variety of platforms (linux-x86, linux-amd64, OS X 10.5 (x86)).
comment:18 Changed 9 years ago by
- Milestone changed from sage-feature to sage-4.7
- Reviewers set to Andrey Novoseltsev
Now that PPL package got positively reviewed, I will go over this one shortly.
comment:19 Changed 9 years ago by
This looks good to me, and #10039 has positive review now. It seems that Andrey's request to let Polyhedron objects be allowed as Cone input has been addressed, as well as the other issues he raised.
All tests in the geometry module pass with these patches on linux (64-bit), solaris (mark on skynet), and OS X 10.6.
I would give a positive review, but I will defer to Andrey in case of design disagreements.
comment:20 Changed 9 years ago by
Thank you, I definitely want to look over it again.
In particular, I don't like that the order of rays has changed in the fan_morphism doctests (the very end of the first patch). So I want to figure out why it happens there and how to fix it, if possible.
comment:21 Changed 9 years ago by
comment:22 Changed 9 years ago by
- Status changed from needs_review to needs_info
So - am I right that PPL does not preserve the order of rays when it computes the minimal set of generators?
I am a strong believer in preserving this order (i.e. if the given set is already minimal, the output should be the same, and if there are "extras" then they should be removed, but the rest should be still in order). If PPL does not preserve it, I propose adding such "sorting" to the cone constructor. It should also eliminate the need for many doctest fixes.
The only argument against, I think, is performance, but I doubt that it will be noticeable in toric geometry computations, where one usually wants to work with simplicial cones that don't have many rays.
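The proposed "sorting" could be cheap: after the minimal generating set is computed, reorder it to follow the user's input where possible. A sketch under that assumption (the function name is hypothetical, and rays are assumed to be hashable tuples):

```python
def reorder_as_given(user_rays, minimal_rays):
    """Return the minimal generators reordered to match the user's input.

    Rays the user supplied keep their original relative order; anything in
    the minimal set that the user did not supply (e.g. generators added for
    lines) is appended at the end in the backend's order.
    """
    minimal = set(minimal_rays)
    ordered, seen = [], set()
    for r in user_rays:
        if r in minimal and r not in seen:
            ordered.append(r)
            seen.add(r)
    for r in minimal_rays:
        if r not in seen:
            ordered.append(r)
            seen.add(r)
    return ordered
```

With this, a user-supplied list that is already minimal comes back unchanged, and redundant rays such as (1,1) in the first quadrant are simply dropped.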
comment:23 Changed 9 years ago by
One more suggestion: can we please completely restore my code for constructing a cone from a Polyhedron? (Modulo parameter names, of course.) Because:
- I had checks that it is a cone and its vertex is at the origin.
- While it is easy to get rays from it and proceed using PPL, it is unnecessary since it is already known that rays are minimal generators, and we also get splitting into rays/lines for free, which was cached.
comment:24 Changed 9 years ago by
Neither PPL nor cddlib preserves ray orders. PPL does return them in a canonical (internal) order, regardless of the original order.
comment:25 follow-up: ↓ 26 Changed 9 years ago by
I think it's easier to just go with PPL's ordering. It also matches the behaviour of the Polyhedron class, which will reorder generators. We already have Cone_of_fan to keep track of ray orderings with respect to some particular enumeration of rays.

I can add back in a check that the polyhedron is really a cone, perhaps tomorrow. Generating a Cone from a Polyhedron is only for convenience; I don't see any need to make it fast. All actual computations are now done with PPL.
comment:26 in reply to: ↑ 25 Changed 9 years ago by
Replying to vbraun:
I think it's easier to just go with PPL's ordering. It also matches the behaviour of the Polyhedron class, which will reorder generators. We already have Cone_of_fan to keep track of ray orderings with respect to some particular enumeration of rays.
It is very annoying to deal with several objects that are "the same" but slightly different, e.g. have different orderings of rays. Also, users may have some idea of a "convenient order" and provide rays in it: if I create the first quadrant as Cone([(1,0), (0,1)]), I expect that the first ray will be the first basis vector and the second the second, as given. Remembering to add check=False is annoying. Making check=False the default is very dangerous, as I have learned the hard way. I really really want to preserve the order whenever possible. I even plan to add the possibility to manually provide the order of facets and points for lattice polytopes, since it will allow convenient work with Cayley cones and polytopes of nef-partitions - they are easy to create with the current code, but keeping track of face duality is very ugly.
I will be very happy to add the sorting myself on top your patch, fixing all affected doctests, so pleeease let me do it ;-)
I can add back in a check that the polyhedron is really a cone, perhaps tomorrow. Generating a Cone from a Polyhedron is only for convenience, I don't see any need to make it fast. All actual computations are now done with PPL.
I am more concerned about validity checks than performance here. And I am also happy to do the necessary copy-pasting here. So you don't really have to do anything, just say that you are OK with it!
comment:27 follow-up: ↓ 28 Changed 9 years ago by
I also want to remind you of the case of non-strict cones, where the ray representation is not unique; there need not be a subset of the initial rays that is minimal.
I understand that it's sometimes convenient to keep rays ordered in some particular way, but then you are really working with decorated rays and not just with cones. Instead of enforcing some particular order, maybe we can have a DecoratedCone (though how is that different from IntegralRayCollection?) / DecoratedLatticePolytope?
Having said that, I'm not totally opposed to it. But we should at least think a little bit about the points I mentioned...
comment:28 in reply to: ↑ 27 Changed 9 years ago by
It was indeed useful to think about! But I didn't change my mind ;-)
Regarding non-strictly convex cones, I think that there is no point in trying to preserve the user's order, but it would be nice to state it in the documentation: if users care about the order in these cases, they must use the check=False option. Otherwise I think the output should be as it is now:
strict_ray_0, ..., strict_ray_k, line_1, -line_1, ..., line_m, -line_m
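Assembled in code, that documented output order could look like this sketch (hypothetical names, not the actual cone.py implementation):

```python
def ray_output_order(strict_rays, line_generators):
    """Place strict rays first, then each line generator followed by its
    negative, matching strict_ray_0, ..., strict_ray_k, line_1, -line_1, ...
    """
    rays = list(strict_rays)
    for line in line_generators:
        rays.append(line)
        rays.append(tuple(-c for c in line))  # the opposite generator
    return rays
```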
For strictly convex cones I indeed want to work with decorated cones, but I'd rather not call them that explicitly everywhere. The reason is that these cones are designed for toric geometry, and so their rays (or ray generators) correspond to homogeneous coordinates. If I want to have rays r1, r2, r3 and I want to associate to them variables x, y, z, then I would probably create a cone with rays r1, r2, r3, and then create a toric variety based on this cone with variable names x, y, z. If the order of rays may change during construction, it means that I need to construct the cone, look at the order of rays, and then rearrange my variable names accordingly, if necessary. This is inconvenient, so while mathematically the cone is determined by the set {r1, r2, r3} and the associated variables are not x, y, z or even x1, x2, x3, but x_r1, x_r2, x_r3, and one should refer to them by specifying the ray, in practice it is convenient to have some fixed order. Since there is no natural order on rays, the best one seems to be the one given by the user. I think that in most cases users will actually provide the generating set, so computing it is almost redundant, but sometimes users (if they are like me ;-)) THINK that they provide the generating set when they do not - that's why it is important to have check=True by default. Regarding this option, it also seems natural that if it is acceptable to give check=False for certain input, then with check=True the output will be the same, only slower.
I am not sure if there is any sense in having separate classes for cones and decorated cones. It is also not obvious which one should be the base class. On the one hand, decoration is extra structure. On the other hand, storing stuff as tuples gives this structure "for free". Personally, I have never wished to have cones without any order. That does not mean that others don't need them, but right now one can use the current cones with is_equivalent instead of == and ray_set instead of rays to disregard the order.
Some more personal experience:
- The first time I had problems due to an order change was with getting nef-partitions from nef.x. In general PALP preserves vertices as vertices, but nef.x was reordering them when computing nef-partitions (I think this is no longer true in the latest version of PALP, but I started using it a little earlier). The solution was to add sorting to some function in the guts of lattice_polytope, so that this reordering is not exposed to the user.
- The second annoying issue with PALP was that the i-th face of dimension 0 could be generated by j-th vertex for i != j. While in principle vertices as points of the ambient space and vertices as elements of the face lattice are different, this discrepancy can be quite inconvenient: you need to remember that it exists and you need to insert extra code to do translation from one to another. So eventually I added sorting for 0-dimensional faces.
- Faces of reflexive polytopes and their polars are in bijection. I find it very convenient to write things like for y, yv in zip(p.faces(dim=2), p.polar().faces(codim=3)). This eliminates the need for each face to be able to compute its dual. While that is not terribly difficult, it is certainly longer than having no need to do any computations at all. In order to accomplish it, there is some twisted logic in lattice polytopes that ensures that only one polytope in the polar pair computes its faces and the other one then "copies it with adjustments". I hope to redo it in a better way, but at least it is isolated from users, for whom the dual of the i-th face of dimension d is the i-th face of codimension d.
Somewhat related behaviour of other parts of Sage:
- Submodules compute their own internal basis by default, but there is an option to give a user basis. In the latter case the internal basis is still computed and one has access to either basis. One can also get coordinates of elements in terms of the user basis or in terms of the standard basis, using coordinates for one and coordinate_vector for the other, and I never remember which one is which. Personally, I think that this design can be improved. But in any case they do have some order on the generators, as do other Parents, without a strict distinction between decorated and "plain" objects.
- QQ["x, y"] == QQ["y, x"] evaluates to False, so while these rings are generated by the same variables, the order matters. (To my surprise, one can also define QQ["x, x"] and it will be a ring in two variables with the same name...)
By the way - currently cones DON'T try to preserve the given order of rays, they just happen to do it often and I didn't really notice it in the beginning (or maybe I assumed that cddlib preserves the order and there is no need to do anything). But I consider it a bug.
As I understand, some people need to work with polyhedra that have a lot of vertices, maybe thousands. In this case sorting according to the "original order" may be a significant performance penalty, so it makes sense that PPL does not preserve it. It may also be difficult for general polyhedra with generators of different type and non-uniqueness of minimal generating set (as it is the case for non-strictly convex cones). But cones for toric geometry in practice tend to be relatively simple and strictly convex (maybe with non-strictly convex dual, but users are less likely to construct those directly), so I think that this is not an issue. We can add an option to turn sorting on or off, but, of course, I'd like to have it on by default.
As I said, I will be very happy to do writing of sorting and doctest fixing myself!-)
comment:29 follow-up: ↓ 30 Changed 9 years ago by
Marshall, what's your opinion on the order topic?
comment:30 in reply to: ↑ 29 Changed 9 years ago by
Replying to novoselt:
Marshall, what's your opinion on the order topic?
I don't have strong feelings on it. I think it might be worth sorting things to some canonical order - sorting rays or vertices or whatever other input should be fast, even in python, compared to the actual polyhedral work. But I know with cddlib that was a bit of a hassle since the internal ordering might be different.
comment:31 follow-up: ↓ 32 Changed 9 years ago by
for y, yv in zip(p.faces(dim=2), p.polar().faces(codim=3))
This is absolutely terrible and the prime reason why the face lattice shouldn't make any promises with regards to ordering. Just so that nobody can use a construct like this. For anybody but the author it is completely illegible and utterly mysterious that the faces end up being dual. At the same time, it would be so much easier to write
for face in p.faces(dim=3): print face, face.dual()
Also, I'm totally opposed to requiring rays in cones to always be decorated. The cones of a fan are not the only place that uses Cone. In fact, the main reason for this ticket is that the current (PALP-based) implementation of cones is unsuitable for Kahler cones because it is constrained to low dimension.
We could have an OrderedCone() function that constructs the cone while keeping the ray order intact if possible (at a performance penalty). This function could then even raise a ValueError if there is no unique minimal representation. But I still think that it's a bad idea to implicitly have some particular order.
As far as associating rays with homogeneous variables, I think we really should subclass the multivariate polynomial ring. Then we can convert rays to homogeneous variables and properly check homogeneity of polynomials.
Finally, about QQ['x, x']: The variable name is just a string and can be pretty much anything. The issue was raised during the recent bug days, and the real problem is that conversion to e.g. Maxima is broken in such a case. Sage generally only cares about the order of the variables.
comment:32 in reply to: ↑ 31 Changed 9 years ago by
Replying to vbraun:
for y, yv in zip(p.faces(dim=2), p.polar().faces(codim=3))
This is absolutely terrible and the prime reason why the face lattice shouldn't make any promises with regards to ordering. Just so that nobody can use a construct like this. For anybody but the author it is completely illegible and utterly mysterious that the faces end up being dual. At the same time, it would be so much easier to write
for face in p.faces(dim=3): print face, face.dual()
I agree that the second form is better (and I think that it is implemented). But in order to implement it efficiently, one still has to copy the face lattice of the original polytope.
Now let's consider faces of a polytope P and of the polytope 2*P. It is also more natural to write
for y in P.faces(dim=3): print y, 2*y
but if you would like 2*y to be a face of 2*P, not just a polytope obtained from y, how should it be implemented? If they are stored in the same order, you can easily and very efficiently iterate over faces and their multiples.
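The point about matched orders can be illustrated with a toy model in plain Python (this is not the lattice polytope API; faces are modeled as tuples of vertices): when P and 2*P store their faces in the same order, related pairs line up under zip with no searching at all.

```python
# Toy "faces" of P, each face recorded as a tuple of its vertices.
P_faces = [((0, 0), (1, 0)), ((1, 0), (0, 1))]

def scale_face(face, k):
    """Scale every vertex of a face by k."""
    return tuple(tuple(k * c for c in v) for v in face)

# Faces of 2*P kept in the SAME order as the faces of P.
twoP_faces = [scale_face(f, 2) for f in P_faces]

# Matched order means related pairs come straight out of zip.
pairs = list(zip(P_faces, twoP_faces))
```

If the two face lists were in unrelated orders, each pairing would instead require a search through one list per face.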
Moving on to cones: let tau be a face of sigma. What is tau.dual()? Is it the cone dual to the cone tau, or the face of the cone dual to sigma which is dual to the face tau? So we need a different method name then, tau.dual_face() perhaps.
Now let P, P*, C, and C* be Cayley polytopes and cones with their duals. Faces of P and C are in direct relation; let F be a face of P. How should I get the face of C corresponding to it? Perhaps C(F) is a natural notation, but how is it going to be implemented? Finding a face generated by rays coinciding with vertices of F is quite inefficient, and no matter how it is done, the result probably should be cached somewhere. Where? Maybe in P or C, and perhaps it is easy to achieve with a special class for supporting polytopes and supported cones. It is probably a good idea to implement this in the long run. But there are very many good ideas that have to be implemented, and if faces are ordered in a consistent way, then those ideas can wait a little, because there is satisfactory access to related faces now. And even when they are implemented, I think that the most efficient implementation will still rely on internal consistency of orders.
Maybe it is mysterious currently that the face order represents duality. Then it is my fault for not documenting it correctly and I will try to fix it. If it was clearly stated in the documentation, with an example how to use it, there would be nothing mysterious about it.
Note also, that I don't want to SORT faces in any particular way. I just want to have related faces in the same order as the "initial ones" happened to be. If nothing else, it leads to efficient implementation, so it is good to do it even if end-users will use some other interface.
Also, I'm totally opposed to requiring rays in cones to always be decorated. The cones of a fan are not the only place that uses Cone. In fact, the main reason for this ticket is that the current (PALP-based) implementation of cones is unsuitable for Kahler cones because it is constrained to low dimension.
What I was trying to say is that some kind of decoration occurs on its own just because rays are internally stored as tuples. If you are really against it and want to store them as sets, that means that anything that uses rays of that cone should refer to complete rays and instead of other lists and tuples with order matching the order of rays you need to use dictionaries with rays as keys. Which certainly can be convenient but you can do it even if cones have some internal order of rays, so I don't see a reason for having extra classes.
We could have an OrderedCone() function that constructs the cone while keeping the ray order intact if possible (at a performance penalty). This function could then even raise a ValueError if there is no unique minimal representation. But I still think that it's a bad idea to implicitly have some particular order.
I'd rather go with UnorderedCone() ;-) I suggested adding a parameter that will turn off sorting for a performance gain. I also don't want to have an IMPLICIT order; I want to completely and clearly document, in the description of the Cone constructor and its face methods, when/if there is any particular order on anything!
As far as associating rays with homogeneous variables, I think we really should subclass the multivariate polynomial ring. Then we can convert rays to homogeneous variables and properly check homogeneity of polynomials.
Another good idea, but even after this conversion do you really want to use names like x_(1,0,0,0,0), x_(0,1,0,0,0), x_(0,0,1,0,0), x_(0,0,0,1,0), x_(0,0,0,0,1), x_(-1,-1,-1,-1,-1) for coordinates on P^5?! And for more complicated varieties, expressions in such variables will be completely unreadable! So it is very natural to fix some order of rays and create variables whose index refers to the index of the corresponding ray. Even better is to have both "order matching" and "direct conversion".
Finally, about QQ['x, x']: The variable name is just a string and can be pretty much anything. The issue was raised during the recent bug days, and the real problem is that conversion to e.g. Maxima is broken in such a case. Sage generally only cares about the order of the variables.
Let's leave QQ["x, x"] aside and come back to QQ["x, y"]. This notation asks for a polynomial ring over QQ with variables named x and y, correct? Well, QQ["y, x"] asks for exactly the same, yet the rings are different. So, really, QQ["x, y"] asks for a polynomial ring over QQ with the first variable named "x" and the second one "y". In the same way I think that Cone([(1,0), (0,1)]) asks for a cone with the first ray (1,0) and the second ray (0,1).
If we do have
def Cone(..., keep_ray_order=True)
then interactive usage will be pretty, yet there will be no performance penalty for internal computations. Users don't construct the Kaehler cone explicitly; they call Kaehler_cone, and somewhere inside there is a call to Cone, which is written once and may have as many extra parameters as necessary.
People refer to the first/second/third equation of a system, generator of an ideal, etc. all the time, even without assigning labels/indexed names to them. It is natural to assume that you have specified the order of something when you have written it in a certain order. Perhaps the optimal solution is to let Cone([(1,0), (1,1), (0,1)]) be the cone generated by three (ordered) rays - all of the given ones. And then have a function like minimal_ray_generators. I guess that's kind of what ideals are doing. I didn't think about this option before, and now I am a bit hesitant to implement it since I am not sure how many places in the code rely on the minimality of the set of generators in the strictly convex case...
I do realize that all your points are valid and my arguments significantly reflect my personal taste, however I also do use the toric code in addition to writing it and if I find something to be convenient it is likely that some other people also will find it convenient. Unfortunately, it is a bit hard to say how many (or how few) of these other people would like my way. Although I suspect that not very many will strongly dislike it since it is harmless (except for some potential slow down, which should not be a noticeable issue and I suggest to have a way around it when it is.) I'll ask on sage-combinat for some input.
At the very least I would like to be able to call a function sage.geometry.cone.keep_ray_order(True) that will turn on user-order preservation for the current session, even if the default is not to keep the order. But I still strongly believe in trying to preserve the order whenever possible, i.e. in the strictly convex case. (Overall, this issue is terribly similar to the ambient_ray_indices discussion some time ago on #9972.)
comment:33 Changed 9 years ago by
comment:34 follow-up: ↓ 35 Changed 8 years ago by
Since the faces of P and (2*P).polar() have no particular relationship, I see it as a feature that we don't pretend otherwise. In the worst case you have to separately construct the face lattice of 2*P. Is it really that time-critical? Why can't you just work with P alone in that case?
The underlying implementation may very well rely on some ray order; this is why we have the check=False option. Just make sure that it is exposed by some reasonable class/method hierarchy, and make sure to spell out that the underlying implementation is subject to change and everyone has to go through the dual_face() method, or whatever it is called.
It's funny that you mention the ideals, because they make no promise about the generators. The MPolynomialIdeal class tries to abstract the mathematical notion of an ideal; there is no particular choice of generators implied (and any particular choice is subject to change without notice). This is why methods like ideal.groebner_basis() return a PolynomialSequence and not an MPolynomialIdeal.
comment:35 in reply to: ↑ 34 Changed 8 years ago by
Replying to vbraun:
Since the faces of P and (2*P).polar() have no particular relationship I see it as a feature that we don't pretend otherwise. In the worst case you have to separately construct the face lattice of 2*P. Is it really that time-critical? Why can't you just work with P alone in that case?
I was talking about P and 2*P, whose faces are in as direct a relationship as it gets. But it also implies that faces of P and (2*P).polar() are in a canonical inclusion-reversing bijection. If the order of their faces "match", one can iterate over related pairs without any extra effort/code, and I found that extremely convenient and natural (having said that, I admit that I am biased and there may be a cleaner interface). The necessity of such iterations arises when working with Hodge numbers and generating functions corresponding to nef-partitions. Regarding time: my first straightforward implementation, using 2*y as just a polytope obtained from y, was taking about an hour on a certain (simple) example. Some optimization, in particular enforcing matching face order on P and 2*P and using 2*y as a face of 2*P, brought it down to less than a minute. Of course, part of the problem is that PALP is used via system calls, so combining many of them into a single one helps a lot, but in any case nothing can beat zip(P.faces(2), twoP.faces(2)) ;-) Anyway, I plan to redo lattice polytopes once I'm done with my thesis, and I'll try to both explain this efficient method in the documentation and provide a more natural interface whenever possible.
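The matched-order iteration described above can be sketched in plain Python. The face lists here are toy strings, not objects from sage.geometry.lattice_polytope; the only point is that when two face lists are kept in the same order, related pairs come for free from zip.

```python
# Toy stand-ins for P.faces(2) and (2*P).faces(2), kept in the SAME order.
P_faces = ["f0", "f1", "f2"]
twoP_faces = ["2*f0", "2*f1", "2*f2"]

# With matching order, iterating over related pairs needs no lookup code:
for f, g in zip(P_faces, twoP_faces):
    print(f, "<->", g)
```

If the orders did not match, each pairing would require searching one list for the face related to the other, which is exactly the overhead the comment describes avoiding.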
It's funny that you mention the ideals, because they make no promise about the generators.
I meant that by default they don't even compute a minimal set of generators. But I don't recall any use for a non-minimal representation of cones, so I don't think there is any point in doing the same for cones.
We got a reply from Nicolas on sage-combinat, which I copy below:
Just two cents from an outsider (I will certainly have a need for Cones at some point, but don't have practical experience).
When there is no clear cut answer for a design decision, I tend, whenever possible, to just postpone it; more often than not, the answer will become clear by itself after accumulating practical experience. In that case, there could be an option like:
Cone(rays=[(1,0),(0,1)], keep_order = True)
and the documentation could explicitly specify that the default value is currently *undefined*, and will be chosen later on. I guess for the moment I would unofficially set it to False, since that's the cheapest while True is somehow "adding a feature"; so that's less likely to break code in case of change later on.
So, I guess, I am currently losing 1:2, unless you have changed your mind ;-)
If not, I propose the following:
- State in the documentation of Cone that by default the order of rays is going to be "fixed but random" and in particular may change in future versions of Sage. (Which may very well happen due to an upgrade of PPL. Personally, I don't like when these "random orders" change, and it is yet another reason to stick with the user ordering ;-))
- Add keep_order=False to the list of parameters. If keep_order=True, keep the order as much as possible in the strictly convex case, i.e. throw away extra generators from the original ordered list. If keep_order=True and the cone is not strictly convex, perhaps give a UserWarning like "keep_order=True does not affect not strictly convex cones, see check=False instead!"
- Add a function sage.geometry.cone.keep_order(True/False), without importing it into the global name space, that will switch the default behaviour for the current session, so that users who always want keep_order=True don't have to repeat it all the time. Mention this function in the description of the keep_order parameter in Cone.
- Perhaps in the documentation of that function we may mention that if users feel strongly that it should be the default always, they can explain it on the above sage-combinat discussion. (I suppose it is OK to include such links in docstrings.)
- In the documentation examples where the ray order is important, use keep_order=True instead of check=False (there are some examples in the patch where you have added this option).
- Maybe keep_ray_order is better than just keep_order.
Does it sound like a good compromise? (I.e. the one that leaves everyone mad (c) Calvin ;-))
comment:36 Changed 8 years ago by
How about ordered=[True|False], just to throw out yet another possibility. Though either name is fine with me. I agree that passing this option is preferable to check=False.
I don't see any point in documenting a way to change the default behavior. If anybody wants that then they can just overwrite Cone() in the global namespace with their own version. At least then it's expected that no doctests work any more. I don't remember seeing a similar function anywhere else in the Sage library.
comment:37 Changed 8 years ago by
ordered seems unintuitive to me, let's stick with keep_order as the middle-length option.
I think that functions adjusting default behaviour are awesome and convenient (for those who use them). automatic_names comes to mind, and there is something else similar in spirit. But if you are totally against it, scratch 3&4.
comment:38 Changed 8 years ago by
I don't see automatic_names as anything in that spirit. It only changes the preparser, which is special since you don't usually call it yourself, so you can't pass options to it.
comment:39 Changed 8 years ago by
- Cc nthiery added
Hi Volker, I have started work on the reviewer patch, so please don't change the attached ones. (They don't seem to apply cleanly anymore, but I will take care of it.)
Hi Nicolas, I am adding you to the cc field in case you have further comments on the ray order ;-)
I have realized that
- I mostly care about preserving the order of rays only if they are already the minimal generating rays of a strictly convex cone, and
- implementing this does not introduce any performance penalty: once we have constructed the PPL object, we know whether the cone is strictly convex, and we can just compare the number of its rays with the input - if these numbers are the same, then we can use the input rays as minimal generators directly.
So instead of keep_order I would like to add the above check. Advantages:
- It is convenient if the user cares about the ray order, e.g. for constructing an affine toric variety straight from a cone, when one may want to associate particular variable names with particular rays.
- It allows easy writing of doctests which explicitly show rays without worrying that the order will change in future releases and a bunch of trivial adjustments will be required.
- It constructs the same cones with check=True and check=False in cases when it is indeed OK to use the check=False option.
- It is a bit more intuitive to the user and in particular nice for tutorials and live demonstrations.
- It makes me happy ;-)
Frankly, I cannot really think of any disadvantages...
So - can I proceed and implement this?
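The check proposed above can be sketched in a few lines of plain Python. The function name choose_rays and both argument names are hypothetical; this is only the count-comparison idea, not the actual Cone constructor.

```python
def choose_rays(input_rays, minimal_rays):
    """Return rays in the user's order if the input was already minimal.

    For a strictly convex cone, if the backend's minimal generating set
    has the same number of rays as the input, the input rays ARE the
    minimal generators, so we can keep the user's order for free.
    """
    if len(input_rays) == len(minimal_rays):
        return list(input_rays)
    # Otherwise some input rays were redundant; fall back to the
    # backend's order.
    return list(minimal_rays)

user = [(0, 1), (1, 0)]           # minimal, but not in backend order
backend = [(1, 0), (0, 1)]        # order produced by the backend
print(choose_rays(user, backend))           # → [(0, 1), (1, 0)]

redundant = [(1, 0), (1, 1), (0, 1)]        # (1, 1) is not a generator
print(choose_rays(redundant, backend))      # → [(1, 0), (0, 1)]
```

As the comment says, this costs nothing beyond a length comparison once the backend has already computed the minimal generators.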
comment:40 Changed 8 years ago by
Why was lattice = rays[0].parent() replaced with lattice = rays[0].parent().ambient_module() on the new line 378? I think that if the input rays live in some sublattice, then the constructed cone should also live in the same sublattice. This distinction is important e.g. for constructing dual cones (I don't think that such duals will currently work, but that is not a reason to prohibit using sublattices ;-))
There are also several doctests where you have replaced things like Cone([(0,0)]) with Cone([], lattice=ToricLattice(2)) - what is your objection against the first form? I think it is a convenient and natural way to construct the origin cone if you don't care about lattices.
comment:41 Changed 8 years ago by
In the intersection method of cones there is now

if self.lattice_dim() != other.lattice_dim():
    raise ValueError('The cones must be in same-dimensional lattices.')
Why? And why in this form? I think that we probably should check that lattices are compatible, but not by dimension equality. We probably don't want to allow intersecting a cone in N with a cone in M. On the other hand, it is reasonable to allow intersection of cones living in sublattices of the same lattice. The lattice of the intersection should be the intersection of sublattices. Any objections?
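To illustrate why dimension equality is a weaker compatibility test than comparing ambient lattices, here is a toy sketch. The ToyLattice class and all names are hypothetical stand-ins, not Sage's ToricLattice.

```python
# Toy stand-in for a toric lattice: a full lattice is its own ambient,
# a sublattice remembers which full lattice it sits in.
class ToyLattice:
    def __init__(self, name, dim, ambient=None):
        self.name, self.dim = name, dim
        self.ambient = ambient if ambient is not None else self

N = ToyLattice("N", 2)
M = ToyLattice("M", 2)                 # the dual lattice: same dimension
N_sub = ToyLattice("N'", 1, ambient=N) # a sublattice of N

def same_dim(l1, l2):       # the check quoted above
    return l1.dim == l2.dim

def same_ambient(l1, l2):   # the check suggested in this comment
    return l1.ambient is l2.ambient

# N and M pass the dimension check even though intersecting a cone in N
# with a cone in M makes no sense:
assert same_dim(N, M) and not same_ambient(N, M)
# A sublattice of N fails the dimension check even though the
# intersection there is perfectly meaningful:
assert not same_dim(N, N_sub) and same_ambient(N, N_sub)
```

So a dimension comparison both admits incompatible lattices and rejects compatible ones, which is the objection raised here.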
comment:42 Changed 8 years ago by
- Description modified (diff)
- Status changed from needs_info to needs_work
- Work issues set to intersect sublattices
First of all, what I have done with Volker's patches:
- Folded and rebased them (on top of 3 FanMorphism patches, as I wanted to check that nothing breaks; without them there is one fuzzy hunk but it still applies).
- Removed doctest fixes related to the ray order, as it is restored by the reviewer patch.
- Removed the modification to the cone-from-polyhedron code since the old one still works fine and makes all the checks.
- Left the original of the line mentioned in http://trac.sagemath.org/sage_trac/ticket/10140#comment:40 since it seems to be the correct one to me.
The reviewer patch does the following:
- Preserves ray order for strictly convex cones given by the minimal generating set of rays, without performance penalty.
- Removes the self.dual_lattice() is self.lattice() check, as dual_lattice now returns ZZ^n if there is no "honest dual". (This was not the case when this ticket was created.)
- Modifies the intersection code (discussed above) slightly. This is not its final version; I need to play a bit with lattice intersection, but I think it is the way to go.
While I was writing it, I ran tests on top of clean sage-4.7.alpha4 and there are breaks which seem to be related to rays[0].parent() without .ambient_module(). I believe that they are fixed by #10882.
comment:43 Changed 8 years ago by
Sounds good. I don't remember why I added rays[0].parent().ambient_module() but I'm pretty sure you'll find out when you run the doctests.
I don't know any place where we actually use intersections of cones in different sublattices. I would be fine with leaving this NotImplemented.
comment:44 Changed 8 years ago by
Fan morphisms (will) use cones in sublattices: the kernel fan naturally lives either in the domain lattice or the kernel sublattice, which corresponds to different toric varieties.
I also suspect that, while you were working on this patch, zero rays passed to the cone constructor were breaking the code, and that's why you replaced the related doctests. However, there is now a catch for this case, so they should be fine. In general, I think it is important to allow zero rays in the input, e.g. if you are constructing a projected cone and some of the rays are mapped to the origin.
Anyway, I'll take care of sublattices and move here appropriate chunks from #10943, #10882, and #11200 (unless they get reviewed in the near future and can be left in front of this one ;-))
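The zero-ray situation mentioned above is easy to see in plain Python. The projection helper below is a hypothetical toy (dropping the last coordinate), not a Sage function; the point is that projecting rays can collapse some of them to the origin, and a robust constructor should accept and discard such zeros.

```python
# Toy projection: drop the last coordinate of each ray.
def project_drop_last(ray):
    return ray[:-1]

rays = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
projected = [project_drop_last(r) for r in rays]
print(projected)   # → [(1, 0), (0, 1), (0, 0)]  - one ray became zero

# A constructor that rejects zero input rays would choke here; one that
# catches them can simply filter:
nonzero = [r for r in projected if any(r)]
print(nonzero)     # → [(1, 0), (0, 1)]
```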
comment:45 Changed 8 years ago by
Either syntax for constructing the trivial cone is fine with me. Personally I prefer the one I used since it is a bit more obvious.
comment:46 Changed 8 years ago by
- Description modified (diff)
- Work issues changed from intersect sublattices to remove/review dependencies
I have removed the first change of the trivial cone construction since it was in the place where two variants are explicitly described.
The new patch slightly changes the general method of intersection modules and adds an extending method to toric lattices. Now there is no need to check lattice compatibility in the cone intersection: it will fail if they are wrong.
Added #10882 as a dependency for now, but may remove it on the weekend by moving some code here. It is not difficult, but it will break my "thesis queue" so I want to do it carefully and take care of consequences.
Changed 8 years ago by
comment:47 Changed 8 years ago by
- Dependencies set to #10039
- Description modified (diff)
- Reviewers changed from Andrey Novoseltsev to Andrey Novoseltsev, Volker Braun
- Status changed from needs_work to needs_review
- Work issues remove/review dependencies deleted
OK, the new patches apply cleanly to sage-4.7.alpha4 and pass all long tests, documentation builds without warnings. The first patch now includes some adjustments to lattice treatment, moved here from #10882 (I'll update the patch there shortly.)
Volker, if you are fine with my patches, please switch it to positive review!
Changed 8 years ago by
comment:48 Changed 8 years ago by
- Description modified (diff)
One more patch: I was learning how to use PPL and trying to switch the point containment check in cones so that it does not call the polyhedron method. In the process I discovered a bug with constructing cones without rays, i.e. like Cone([], lattice=N): the PPL representation in this case didn't know the ambient space of the cone, leading to mistakes. It is fixed in the second hunk of the last patch.
Regarding original goal, here are the timings before
sage: timeit("c = Cone([(1,0), (0,1)]); (1,1) in c", number=1000)
1000 loops, best of 3: 27.8 ms per loop
sage: c = Cone([(1,0), (0,1)])
sage: timeit("(1,1) in c", number=1000)
1000 loops, best of 3: 729 µs per loop
and after
sage: timeit("c = Cone([(1,0), (0,1)]); (1,1) in c", number=1000)
1000 loops, best of 3: 2.3 ms per loop
sage: c = Cone([(1,0), (0,1)])
sage: timeit("(1,1) in c", number=1000)
1000 loops, best of 3: 290 µs per loop
As we see, even on a very simple example we get a 10x speedup for "single checks", when most of the time is spent constructing different representations of the cone. When everything is precached and we count only the actual containment check, we have a 3x speedup.
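The two measurement styles above (cold: construction plus check; warm: check only, cone built once) can be mimicked in plain Python with the timeit module. The cone and containment function here are trivial toys with hypothetical names; only the measurement pattern matters.

```python
import timeit

# Toy stand-ins: pretend construction is the expensive step.
def make_cone(rays):
    return {"rays": rays}

def contains(cone, pt):
    # First-quadrant toy cone: containment is just sign checks here.
    return all(c >= 0 for c in pt)

# "Cold" timing: construction AND check inside the timed statement.
cold = timeit.timeit(
    "c = make_cone([(1, 0), (0, 1)]); contains(c, (1, 1))",
    globals=globals(), number=1000)

# "Warm" timing: the cone is built once outside, only the check is timed.
c = make_cone([(1, 0), (0, 1)])
warm = timeit.timeit("contains(c, (1, 1))", globals=globals(), number=1000)

print("cold: %.6fs  warm: %.6fs" % (cold, warm))
```

The cold number includes everything cached on first use, so it is the right one to compare when a cone is checked only once, while the warm number isolates the containment algorithm itself - exactly the distinction drawn in the timings above.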
A more complicated example before:
sage: c = Cone([(4, 0, 1, 12, 1), (-2, -2, -73, 1, 0), (1, -2, 0, 1, -3), (5, -1, -1, 1, 1), (-4, 5, 1, 2, 3), (0, -1, -23, 0, 1), (-2, 1, -1, 1, -1), (0, 2, -3, 1, 0), (-1, 3, 2, -1, 0), (0, 2, -1, 4, 15)])
sage: c.ray_matrix()
[ -4  -2  -2  -1   0   5   0   1]
[  5  -2   1   3  -1  -1   2  -2]
[  1 -73  -1   2 -23  -1  -1   0]
[  2   1   1  -1   0   1   4   1]
[  3   0  -1   0   1   1  15  -3]
sage: timeit("(1,2,3,4,5) in c", number=1000)
1000 loops, best of 3: 4.52 ms per loop
and after
sage: c = Cone([(4, 0, 1, 12, 1), (-2, -2, -73, 1, 0), (1, -2, 0, 1, -3), (5, -1, -1, 1, 1), (-4, 5, 1, 2, 3), (0, -1, -23, 0, 1), (-2, 1, -1, 1, -1), (0, 2, -3, 1, 0), (-1, 3, 2, -1, 0), (0, 2, -1, 4, 15)])
sage: timeit("(1,2,3,4,5) in c", number=1000)
1000 loops, best of 3: 1.3 ms per loop
(I didn't bother with fresh start timing here.)
Conclusion: Volker's PPL wrapper rocks!
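For a full-dimensional simplicial cone, the containment check being timed above amounts to expressing the point as a combination of the rays and testing that all coefficients are nonnegative. Here is a plain-Python sketch of that criterion using exact rational Gaussian elimination; this is an illustration of the mathematics, not PPL's algorithm, and it assumes the rays are linearly independent.

```python
from fractions import Fraction

def in_simplicial_cone(point, rays):
    """Check point = sum(a_i * ray_i) with all a_i >= 0.

    Assumes len(rays) == len(point) and linearly independent rays
    (a full-dimensional simplicial cone), so the system has a unique
    exact rational solution.
    """
    n = len(rays)
    # Augmented matrix with the rays as columns and the point on the right.
    A = [[Fraction(rays[j][i]) for j in range(n)] + [Fraction(point[i])]
         for i in range(n)]
    for col in range(n):
        # Find a pivot row and bring it into place.
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [x / A[col][col] for x in A[col]]
        # Eliminate the column from all other rows.
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[col])]
    # The last column now holds the coefficients a_i.
    return all(A[i][n] >= 0 for i in range(n))

print(in_simplicial_cone((1, 1), [(1, 0), (0, 1)]))    # → True
print(in_simplicial_cone((-1, 1), [(1, 0), (0, 1)]))   # → False
```

For non-simplicial cones one checks the inequalities of the H-representation instead, which is essentially what the PPL-backed code does once that representation is cached.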
comment:49 Changed 8 years ago by
- Status changed from needs_review to positive_review
I'm happy with the reviewer patch, so positive review altogether ;-)
comment:50 Changed 8 years ago by
- Milestone changed from sage-4.7 to sage-4.7.1
comment:51 Changed 8 years ago by
- Status changed from positive_review to needs_work
@ Andrey Novoseltsev: Can you upload your reviewer patch again using hg export tip? The user and date fields are missing.
Changed 8 years ago by
comment:53 Changed 8 years ago by
- Merged in set to sage-4.7.1.alpha1
- Resolution set to fixed
- Status changed from positive_review to closed
comment:54 Changed 8 years ago by
- Description modified (diff)
I'm adding an unformatted "Apply" section in the ticket description to help the patch buildbot figure out the correct patches.
comment:55 Changed 8 years ago by
Apply trac_10140_sublattice_intersection.patch, trac_10140_base_cone_on_ppl_original.patch, trac_10140_reviewer.patch, trac_10140_switch_point_containment_to_PPL.patch
comment:56 follow-up: ↓ 57 Changed 8 years ago by
How hard would it be to set up PPL to do Vrepresentation and Hrepresentation of Polyhedron?
In the current patch I fixed all doctest errors in cone.py and fan.py. There are some more broken doctests in other modules due to a different output ray ordering, and I will fix these once the ppl spkg and wrapper are reviewed.