Opened 6 years ago
Last modified 6 years ago
#17920 needs_work enhancement
Reimplement IntegerLists using Polyhedron.integral_points()
Reported by: | jdemeyer | Owned by: | |
---|---|---|---|
Priority: | major | Milestone: | sage-6.6 |
Component: | combinatorics | Keywords: | |
Cc: | ncohen, aschilling, tscrim, nthiery | Merged in: | |
Authors: | Jeroen Demeyer | Reviewers: | |
Report Upstream: | N/A | Work issues: | |
Branch: | u/jdemeyer/ticket/17920 (Commits, GitHub, GitLab) | Commit: | 621d467827d8d0c6a0cb1113cb4b861cca936f41 |
Dependencies: | #17937, #18087 | Stopgaps: | |
Description (last modified)
This fixes #17548.
It also adds new features to IntegerLists:
- Negative integers are allowed (but the default is still min_part=0).
- There does not need to be a fixed sum; one can do, for example, IntegerLists(max_part=2) for all lists of integers <= 2. One can also give a lower/upper bound for the sum.
Note that the current implementation requires, for a given length, that there are only finitely many lists of that length. This limitation could be lifted in the future.
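For illustration only, a hypothetical session exercising the new features (the keyword names length, min_part, max_part, min_sum, max_sum are assumed to match the branch, and the enumeration order is not guaranteed, hence the sorted() calls):
sage: sorted(IntegerLists(0, length=2, min_part=-1, max_part=1))   # negative entries allowed
[[-1, 1], [0, 0], [1, -1]]
sage: sorted(IntegerLists(min_sum=1, max_sum=2, length=1, max_part=2))   # no fixed sum, only bounds on it
[[1], [2]]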
Change History (81)
comment:1 Changed 6 years ago by
- Cc ncohen added
comment:2 Changed 6 years ago by
- Milestone changed from sage-6.6 to sage-duplicate/invalid/wontfix
- Resolution set to invalid
- Status changed from new to closed
comment:3 Changed 6 years ago by
- Milestone changed from sage-duplicate/invalid/wontfix to sage-6.6
- Resolution invalid deleted
- Status changed from closed to new
- Summary changed from Reimplement IntegerLists using MILP to Reimplement IntegerLists using Polyhedron.integral_points()
comment:4 Changed 6 years ago by
- Branch set to u/jdemeyer/ticket/17920
comment:5 Changed 6 years ago by
- Commit set to 492696fb8bbca610dd8b5415b44910196e9d9222
I do not understand what this is... Did you copy/paste the original file? It seems that you copy/pasted the original files and made some modifications to it O_o
Nathann
New commits:
492696f | Reimplement IntegerLists using Polyhedron.integral_points()
comment:6 follow-up: ↓ 7 Changed 6 years ago by
Yes, that's what I did. Anyway, this is still very much work in progress...
comment:7 in reply to: ↑ 6 Changed 6 years ago by
Yes, that's what I did. Anyway, this is still very much work in progress...
Oh, okay!
Nathann
comment:8 Changed 6 years ago by
- Dependencies set to #17937
comment:9 Changed 6 years ago by
- Commit changed from 492696fb8bbca610dd8b5415b44910196e9d9222 to 5896b1160e7dd123abfc03325583ffe805d0912b
comment:10 Changed 6 years ago by
- Commit changed from 5896b1160e7dd123abfc03325583ffe805d0912b to c835117a1e14e84d3f823b4111a092a80b4c3ef4
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
c835117 | Reimplement IntegerLists using Polyhedron.integral_points()
comment:11 Changed 6 years ago by
- Commit changed from c835117a1e14e84d3f823b4111a092a80b4c3ef4 to 755b67a1c9516bcd44f59aab6a8b6032d257591d
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
755b67a | Reimplement IntegerLists using Polyhedron.integral_points()
comment:12 follow-up: ↓ 13 Changed 6 years ago by
This now seems to work reasonably well. Not yet ready, but good enough for example to compare with the existing implementation. That's how I found all the bugs at #17548.
Due to the polyhedra overhead, it is generally (a lot) slower than the existing code.
comment:13 in reply to: ↑ 12 Changed 6 years ago by
Replying to jdemeyer:
This now seems to work reasonably well. Not yet ready, but good enough for example to compare with the existing implementation. That's how I found all the bugs at #17548.
Due to the polyhedra overhead, it is generally (a lot) slower than the existing code.
Is it worth changing this branch so that it changes the existing code instead of adding a new file? As you said: let's be correct first, *then* fast.
Nathann
comment:14 Changed 6 years ago by
My code does not yet support all (undocumented!) features of the old IntegerListsLex, so we cannot yet replace IntegerListsLex.
comment:15 Changed 6 years ago by
- Commit changed from 755b67a1c9516bcd44f59aab6a8b6032d257591d to 674a518d20e6b220f668d88ec748376217b243da
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
674a518 | Reimplement IntegerLists using Polyhedron.integral_points()
comment:16 Changed 6 years ago by
- Commit changed from 674a518d20e6b220f668d88ec748376217b243da to 4e5dbaec98f2d779a6c73a69d5b71581be7fb5da
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
4e5dbae | Reimplement IntegerLists using Polyhedron.integral_points()
comment:17 Changed 6 years ago by
This now implements Compositions using my new IntegerListsLex (is the lex ordering really important? I guess not; it is certainly nowhere documented). However, doing the same for Partitions leads to all kinds of breakage and I don't understand why.
comment:18 Changed 6 years ago by
- Commit changed from 4e5dbaec98f2d779a6c73a69d5b71581be7fb5da to c0772f8981d3856bdf85cb3eb07094944aa73262
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
c0772f8 | Reimplement IntegerLists using Polyhedron.integral_points()
comment:19 Changed 6 years ago by
- Commit changed from c0772f8981d3856bdf85cb3eb07094944aa73262 to 631c47652412b518bb4bcd8cb4465a4fbfe16a83
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
631c476 | Reimplement IntegerLists using Polyhedron.integral_points()
comment:20 Changed 6 years ago by
- Commit changed from 631c47652412b518bb4bcd8cb4465a4fbfe16a83 to 3b7f4cd93be25051cc9ecf08d573b2aacdc02010
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
3b7f4cd | Reimplement IntegerLists using Polyhedron.integral_points()
comment:21 Changed 6 years ago by
- Commit changed from 3b7f4cd93be25051cc9ecf08d573b2aacdc02010 to 936a2c73d3612c76cf115d1b5dd690417e6ce4ab
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
936a2c7 | Reimplement IntegerLists using Polyhedron.integral_points()
comment:22 Changed 6 years ago by
- Cc aschilling tscrim added
Note that this changes 3 tests (just reordering the output) in src/sage/tests/book_schilling_zabrocki_kschur_primer.py
comment:23 Changed 6 years ago by
The code on this ticket is essentially complete, I just need to add more doctests to comply with the "coverage" policy.
comment:24 Changed 6 years ago by
- Commit changed from 936a2c73d3612c76cf115d1b5dd690417e6ce4ab to c9fa1c403feb7efb7b0c272b36e9062b3f0988d3
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
c9fa1c4 | Reimplement IntegerLists using Polyhedron.integral_points()
comment:25 Changed 6 years ago by
- Commit changed from c9fa1c403feb7efb7b0c272b36e9062b3f0988d3 to b0a04aa5a4454766ed9802d8e99abcd7fb3e105b
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
b0a04aa | Reimplement IntegerLists using Polyhedron.integral_points()
comment:26 Changed 6 years ago by
- Status changed from new to needs_review
comment:27 follow-ups: ↓ 28 ↓ 29 ↓ 32 Changed 6 years ago by
Do you have some timings?
Also, both of these are wrong:
sage: Compositions(3, max_length=2, inner=[1,1,1]).list()
[]
sage: Compositions(10, outer=[4], inner=[1,1]).list()
[]
The first should be [[2, 1], [1, 2]], since the inner (or outer) argument is not related to the min or max lengths. For the second, the inner composition is extended by the minimum part, so there are many such compositions, such as [4,6], [2,8], etc.
comment:28 in reply to: ↑ 27 Changed 6 years ago by
comment:29 in reply to: ↑ 27 Changed 6 years ago by
Replying to tscrim:
Do you have some timings?
My code is much slower.
Also, both of these are wrong:
sage: Compositions(3, max_length=2, inner=[1,1,1]).list()
[]
sage: Compositions(10, outer=[4], inner=[1,1]).list()
[]
The first should be [[2, 1], [1, 2]]
I don't think so. The Compositions code explicitly adds the length of the inner argument as the minimal length. I didn't change this.
comment:30 Changed 6 years ago by
- Priority changed from major to blocker
comment:31 Changed 6 years ago by
- Description modified (diff)
comment:32 in reply to: ↑ 27 Changed 6 years ago by
Replying to tscrim:
The first should be [[2, 1], [1, 2]], since the inner (or outer) argument is not related to the min or max lengths.
To clarify: you might be confusing this with the floor and ceiling arguments of IntegerListsLex. Those do not have any effect on the length, but inner/outer do add lower/upper bounds on the length. With both the existing code and my code, we have for example
sage: Compositions(3, inner=[1,1,1]).list()
[[1, 1, 1]]
comment:33 follow-ups: ↓ 34 ↓ 35 ↓ 36 ↓ 51 Changed 6 years ago by
- Cc nthiery added
On the ordering, from the title of the class, the output should be in lexicographical order. Moreover, since these are EnumeratedSets, the change in the ordering could lead to subtle changes that break people's code.
Compositions also does a similar thing with the max length and outer, so I agree that those should be empty. However, IMO these tests should be in Compositions.
@vdelecroix It's mostly correct and there is a lot of code which depends on this being fast. Minimal slowdowns are okay (IMO), but significant slowdowns are unacceptable.
comment:34 in reply to: ↑ 33 Changed 6 years ago by
On the ordering, from the title of the class, the output should be in lexicographical order. Moreover, since these are EnumeratedSets, the change in the ordering could lead to subtle changes that break people's code.
We can sort it before returning it I guess.
@vdelecroix It's mostly correct
Jeroen compiled many related bugs in the description of #17548.
Minimal slowdowns are okay (IMO), but significant slowdowns are unacceptable.
Significant slowdowns can be a problem, that's for sure. If they turn out to be our only way to have code which does not return wrong results, however, we will learn to live with them.
Nathann
comment:35 in reply to: ↑ 33 Changed 6 years ago by
I think it is great that Jeroen implemented this to get the correct results! Of course we do want fast code at the end.
As I mentioned on sage-devel, the order of lists of tableaux does not matter very much.
As Travis mentioned, there might be some subtle places where the order matters. One example that comes to mind is that the representations of S_n and characters are returned as matrices with rows and columns indexed by integers instead of partitions. So if the order of partitions changes, the interpretation of the results might change!
comment:36 in reply to: ↑ 33 Changed 6 years ago by
Replying to tscrim:
On the ordering, from the title of the class, the output should be in lexicographical order.
The name is now IntegerLists, and I do provide IntegerListsLex for "backwards compatibility", which does sort.
Since the documentation of neither Partitions nor Compositions says anything about the order, I think it's allowed to change the order.
comment:37 Changed 6 years ago by
About the speed: if you manage to fix all existing bugs in the IntegerListsLex code, you can again use that implementation for Compositions and Partitions. It's probably good to have two independent implementations anyway.
Even better, of course, would be for somebody to speed up Polyhedron().integral_points(), which would benefit everybody using polyhedra.
comment:38 Changed 6 years ago by
- Description modified (diff)
comment:39 follow-ups: ↓ 40 ↓ 41 Changed 6 years ago by
Dear Jeroen,
Thanks a lot for taking action! It's definitely a good thing to have a good connection between IntegerListsLex and Polyhedron, as there is some non-trivial overlap. The main difference is that IntegerListsLex was specifically designed to allow (essentially) constant amortized time lexicographic iteration in almost constant memory, which is an important feature.
So I can see a work path along the following lines:
- Get this ticket in to have a robust implementation of list
- Completely rewrite the current IntegerListsLex iterator to be robust (it's definitely possible); keep the Polyhedron implementation for testing purposes as well as for counting, ...
- Optimize the iterator (Cythonization, using ClonableIntArray, ...).
Please do not change the enumeration order, at least not by default: quite some code depends on it (I agree, this should be made explicit in the documentation). The proposed generalizations (n in a range, negative entries) are fine, since the iterator could be made to handle them.
Cheers,
Nicolas
comment:40 in reply to: ↑ 39 ; follow-ups: ↓ 52 ↓ 53 Changed 6 years ago by
Replying to nthiery:
Please do not change the enumeration order, at least as default
I disagree with this: the default should be "do not sort, return stuff in the fastest possible way". Sorting an iterator is very expensive and should only be done if really needed.
quite some code depends on it
Is that really true? The only doctest failures that I saw were "obvious" failures where some list order changed; I didn't see anything subtle.
comment:41 in reply to: ↑ 39 ; follow-ups: ↓ 42 ↓ 54 Changed 6 years ago by
Replying to nthiery:
keep the Polyhedron implementation for testing purposes as well as for counting, ...
I'm not sure about the counting... I guess a well-written Cython implementation of IntegerListsLex will usually be faster than the current polyhedra code. Profiling shows that a lot of time is spent in just constructing the polyhedra (if there are not so many points, enumerating them takes a lot less time than constructing the polyhedron in the first place).
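To see this on a concrete case, one can time construction and enumeration separately. A rough sketch (the constraints below are made up purely for illustration, and no timings are quoted since they depend on the machine):
sage: # inequalities x1, x2, x3 >= 0 and x1 + x2 + x3 <= 10, in the [b, a1, a2, a3] convention meaning b + a.x >= 0
sage: ieqs = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [10, -1, -1, -1]]
sage: timeit('Polyhedron(ieqs=ieqs, backend="ppl")')    # construction cost
sage: P = Polyhedron(ieqs=ieqs, backend='ppl')
sage: timeit('P.integral_points()')                     # enumeration cost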
comment:42 in reply to: ↑ 41 ; follow-up: ↓ 43 Changed 6 years ago by
Hello,
I'm not sure about the counting... I guess a well-written Cython implementation of IntegerListsLex will usually be faster than the current polyhedra code.
True. This being said, your polyhedron code may very well be 'all we can do' to implement this feature while handling all possible combinations of parameters.
Profiling shows that a lot of time is spent in just constructing the polyhedra
True. Do you have any idea where that comes from? I had similar troubles with the Poset constructor (related to memory usage).
Nathann
comment:43 in reply to: ↑ 42 ; follow-up: ↓ 44 Changed 6 years ago by
Replying to ncohen:
Profiling shows that a lot of time is spent in just constructing the polyhedra
True. Do you have any idea where that comes from ?
It's just PPL.
Interestingly, arithmetic with infinity also shows up quite high in the profiling reports (up to 10% of the time), so optimizing src/sage/rings/infinity.py will also give some speedup.
comment:44 in reply to: ↑ 43 ; follow-up: ↓ 45 Changed 6 years ago by
Interestingly, arithmetic with infinity also shows up quite high in the profiling reports (up to 10% of the time), so optimizing src/sage/rings/infinity.py will also give some speedup.
Aahahah. Yeah, Vincent has been fighting a lot with some abstractions we have, which make code run *much* slower. For +oo he advises to solve the problem by using float("inf") instead of Infinity. It is *MUCH* faster.
At some point he got some crazy speedup somewhere by adding 'from math import sqrt' in a module to overwrite Sage's sqrt function.
Nathann
comment:45 in reply to: ↑ 44 ; follow-up: ↓ 46 Changed 6 years ago by
Replying to ncohen:
Interestingly, arithmetic with infinity also shows up quite high in the profiling reports (up to 10% of the time), so optimizing src/sage/rings/infinity.py will also give some speedup.
Aahahah. Yeah, Vincent has been fighting a lot with some abstractions we have, which make code run *much* slower. For +oo he advises to solve the problem by using float("inf") instead of Infinity.
In general, I don't like the "X is slow, therefore let's not use X" mentality. My idea is: "let's use X and then optimize X".
comment:46 in reply to: ↑ 45 Changed 6 years ago by
Replying to jdemeyer:
Interestingly, arithmetic with infinity also shows up quite high in the profiling reports (up to 10% of the time), so optimizing src/sage/rings/infinity.py will also give some speedup.
Aahahah. Yeah, Vincent has been fighting a lot with some abstractions we have, which make code run *much* slower. For +oo he advises to solve the problem by using float("inf") instead of Infinity.
In general, I don't like the "X is slow, therefore let's not use X" mentality. My idea is: "let's use X and then optimize X".
On a tangential note: if someone makes major changes to the infinity rings, please consider adding a NaN element to them.
comment:47 follow-up: ↓ 48 Changed 6 years ago by
In general, I don't like the "X is slow, therefore let's not use X" mentality. My idea is: "let's use X and then optimize X".
Well, for the sqrt problem we were only computing on floats, and sqrt(5) in Sage is not a float. Having this symbolic value in the code, we compared it with floats, and this had a cost we had no reason to pay, so from math import sqrt made total sense; there was nothing in Sage's sqrt that we could have wanted to change.
As for Sage's Infinity... Well, it also seems to have been designed with a different aim in mind. I usually want speed, and I do not want to pay for pointless abstraction. In infinity.py you will find parents, elements, rings and generators, while the feature I need is already provided by the constant LONG_MAX.
This Sage object is called 'infinity', but it turns out that one definition of "infinity" cannot cover all uses that we have for infinity on a computer. And I don't think that we could beat a single CPU instruction while dealing with parents and elements in a .py file.
Nathann
comment:48 in reply to: ↑ 47 Changed 6 years ago by
Replying to ncohen:
while the feature I need is already provided by the constant LONG_MAX.
In this case, we shouldn't artificially restrict to longs:
sage: IntegerLists(10^100, max_length=1).list()
[[10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000]]
This Sage object is called 'infinity', but it turns out that one definition of "infinity" cannot cover all uses that we have for infinity on a computer.
I think we can have one definition which covers all uses within Sage. There is nothing fundamentally wrong with Infinity, it's just slow.
comment:49 Changed 6 years ago by
Note the difference of a factor 20 between the following:
sage: b = ZZ(1); a = Infinity; timeit('a < b', repeat=20, number=10^4)
10000 loops, best of 20: 11.7 µs per loop
sage: b = ZZ(1); a = QQ(2); timeit('a < b', repeat=20, number=10^4)
10000 loops, best of 20: 681 ns per loop
A proper Cython implementation of Infinity should match the speed for QQ.
comment:50 Changed 6 years ago by
- Commit changed from b0a04aa5a4454766ed9802d8e99abcd7fb3e105b to 5f81a0a08bed2dc5c73f8f99dd32deafdfb3c9a1
Branch pushed to git repo; I updated commit sha1. New commits:
5f81a0a | Move doctests
comment:51 in reply to: ↑ 33 Changed 6 years ago by
comment:52 in reply to: ↑ 40 Changed 6 years ago by
comment:53 in reply to: ↑ 40 ; follow-ups: ↓ 56 ↓ 61 Changed 6 years ago by
Replying to jdemeyer:
Replying to nthiery:
Please do not change the enumeration order, at least as default
I disagree with this: the default should be "do not sort, return stuff in the fastest possible way". Sorting an iterator is very expensive and should only be done if really needed.
For IntegerLists itself, any default is fine, and sorting is certainly very bad. But for Partitions, Compositions, ... users are really expecting lexicographic order. Besides, this is only a temporary solution, and we will switch back to lexicographic once we have a proper implementation of IntegerListsLex.
Is that really true? The only doctest failures that I saw were "obvious" failures where some list order changed; I didn't see anything subtle.
That's a point indeed; my feeling is that we have been lucky, although it could be that some changes w.r.t. MuPAD-Combinat make the code less dependent on that feature. I am worried about users' personal code out there. Well, if you are willing to help those guys ...
Cheers,
Nicolas
comment:54 in reply to: ↑ 41 ; follow-up: ↓ 55 Changed 6 years ago by
Replying to jdemeyer:
I'm not sure about the counting... I guess a well-written Cython implementation of IntegerListsLex will usually be faster than the current polyhedra code. Profiling shows that a lot of time is spent in just constructing the polyhedra (if there are not so many points, enumerating them takes a lot less time than constructing the polyhedron in the first place).
Agreed, especially if we further go parallel: counting through polyhedral methods only becomes relevant for relatively large polyhedra. But this would be a very useful feature. So count would eventually have some threshold to choose between the two methods.
By the way: we don't yet use Barvinok-like algorithms for counting (e.g. through LattE), or do we? This could make a difference too.
Cheers,
Nicolas
comment:55 in reply to: ↑ 54 ; follow-ups: ↓ 58 ↓ 66 Changed 6 years ago by
Replying to nthiery:
By the way: we don't yet use Barvinok-like algorithms for counting (e.g. through LattE), or do we? This could make a difference too.
I just read the first paragraph of the LattE manual and it does exactly what we need for counting:
1.1 What is LattE? The name “LattE” is an abbreviation for “Lattice point Enumeration.” LattE was developed in 2001 to count lattice points contained in convex polyhedra defined by linear equations and inequalities with integer coefficients. The polyhedra can be of any (reasonably small) dimension.
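As a pointer only: more recent Sage versions expose this kind of counting through Polyhedron.integral_points_count() (not necessarily available at the time of this ticket, and possibly requiring the optional latte_int package for general polyhedra). An illustrative session:
sage: P = Polyhedron(ieqs=[[0, 1, 0], [0, 0, 1], [5, -1, -1]])   # x, y >= 0 and x + y <= 5
sage: P.integral_points_count()   # expected count: 1 + 2 + ... + 6 = 21
21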
comment:56 in reply to: ↑ 53 Changed 6 years ago by
Replying to nthiery:
But for Partitions, Compositions, ... users are really expecting lexicographic order.
Well, certainly for Compositions, the current order is not defined:
sage: Compositions(2).list()
[[1, 1], [2]]
sage: Compositions(2, max_slope=0).list()
[[2], [1, 1]]
comment:57 Changed 6 years ago by
http://trac.sagemath.org/ticket/17529#comment:11 (we already have a LattE package)
comment:58 in reply to: ↑ 55 Changed 6 years ago by
comment:59 Changed 6 years ago by
- Commit changed from 5f81a0a08bed2dc5c73f8f99dd32deafdfb3c9a1 to 6f3164941f2565627afc1128ace01973c788f767
Branch pushed to git repo; I updated commit sha1. New commits:
6f31649 | Add two extra tests
comment:60 Changed 6 years ago by
FYI - float('inf') works quite well as a good and fast substitute for Infinity (which is why it is used in the current IntegerListsLex).
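A minimal illustration of the point, in the spirit of the timings in comment:49 (timing output omitted since it is machine-dependent):
sage: b = 1; a = float('inf'); timeit('a < b', repeat=20, number=10^4)   # plain Python comparison; compare with the Infinity timing in comment:49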
comment:61 in reply to: ↑ 53 Changed 6 years ago by
Replying to nthiery:
But for Partitions users are really expecting lexicographic order.
For which applications does this really matter?
I know that's how partitions are traditionally written down and how things are done in Sage historically. But I don't think that's enough reason to not change it, especially given the fact that the documentation doesn't say anything about the order. Any order of the list of partitions is a good answer mathematically.
comment:62 follow-up: ↓ 63 Changed 6 years ago by
I'm worried this could lead to errors being raised when trying to convert between different bases of the symmetric functions (which are indexed by partitions). IIRC the code relies on some of the (graded) transition matrices being upper triangular, which requires the order be compatible with dominance ordering.
comment:63 in reply to: ↑ 62 ; follow-up: ↓ 64 Changed 6 years ago by
Replying to tscrim:
I'm worried this could lead to errors being raised when trying to convert between different bases of the symmetric functions (which are indexed by partitions). IIRC the code relies on some of the (graded) transition matrices being upper triangular, which requires the order be compatible with dominance ordering.
To be honest, I don't know what you mean mathematically. But, like I said, the fact that there are no strange doctest failures shows that the issue cannot be so serious.
And in the cases where the order really matters, I think those places should simply sort explicitly or use IntegerListsLex. Slowing down all of Partitions() just because one or two applications require it seems stupid.
comment:64 in reply to: ↑ 63 ; follow-up: ↓ 65 Changed 6 years ago by
But, like I said, the fact that there are no strange doctest failures shows that the issue cannot be so serious.
This might be an issue, though:
sage: S = SymmetricGroup(3)
sage: S.character_table()
[ 1 -1  1]
[ 2  0 -1]
[ 1  1  1]
sage: type(S.character_table())
<type 'sage.matrix.matrix_cyclo_dense.Matrix_cyclo_dense'>
The character table should really be indexed by partitions (or conjugacy classes of S_n). So the meaning of the matrix changes if the order of the list of partitions changes.
comment:65 in reply to: ↑ 64 Changed 6 years ago by
Replying to aschilling:
This might be an issue, though:
sage: S = SymmetricGroup(3)
sage: S.character_table()
[ 1 -1  1]
[ 2  0 -1]
[ 1  1  1]
sage: type(S.character_table())
<type 'sage.matrix.matrix_cyclo_dense.Matrix_cyclo_dense'>
The character table should really be indexed by partitions (or conjugacy classes of S_n). So the meaning of the matrix changes if the order of the list of partitions changes.
Okay, so the meaning of the matrix changes. There is nothing wrong with output changing as long as things stay mathematically correct and internally consistent.
comment:66 in reply to: ↑ 55 Changed 6 years ago by
Replying to jdemeyer:
Replying to nthiery:
By the way: we don't yet use Barvinok-like algorithms for counting (e.g. through LattE), or do we? This could make a difference too.
I just read the first paragraph of the LattE manual and it does exactly what we need for counting:
Moreover, counting the points is faster than enumerating them: there could be exponentially many points to enumerate, but for fixed dimension their number can still be computed in time polynomial in the input size, and LattE provides a truly polynomial-time procedure for this counting.
comment:67 Changed 6 years ago by
- Status changed from needs_review to needs_work
A badly formatted doc in composition.py:
TESTS::
+ Check that :trac:`17548` is fixed::
comment:68 Changed 6 years ago by
- Commit changed from 6f3164941f2565627afc1128ace01973c788f767 to 62b56ba0e0fa855ab89d832755d0219bd9f6b5f5
Branch pushed to git repo; I updated commit sha1. New commits:
62b56ba | Improve IntegerLists_polyhedron; sort Partitions by default
comment:69 Changed 6 years ago by
- Commit changed from 62b56ba0e0fa855ab89d832755d0219bd9f6b5f5 to 5096e5f0fb03432307efea20493216bf713eb020
Branch pushed to git repo; I updated commit sha1. New commits:
5096e5f | Fix documentation
comment:70 Changed 6 years ago by
- Status changed from needs_work to needs_review
comment:71 Changed 6 years ago by
- Dependencies changed from #17937 to #17937, #18087
comment:72 Changed 6 years ago by
- Commit changed from 5096e5f0fb03432307efea20493216bf713eb020 to 621d467827d8d0c6a0cb1113cb4b861cca936f41
comment:73 Changed 6 years ago by
- Priority changed from blocker to major
- Status changed from needs_review to needs_info
What's your plan for the code here? It might be useful to check. Thoughts?
comment:74 Changed 6 years ago by
- Status changed from needs_info to needs_review
This code is useful as it works in a more general context than the new IntegerListsLex: it allows entries in ZZ (instead of just NN) and works when the lex ordering doesn't hit every element in finite time. Plus, I like alternative algorithms for testing, and this is currently faster in certain cases as well. I just need to find some time to review this.
comment:75 follow-up: ↓ 80 Changed 6 years ago by
Indeed, we want this code in Sage! I promised Jeroen I would rebase his code and work on the review. This could be a good sprint for Sage Days 67. But yes, this is certainly not a blocker for 6.6 anymore.
comment:76 Changed 6 years ago by
- Status changed from needs_review to needs_work
Needs to be rebased in any case. I might also try to make it work in all cases of infinite iteration (currently, I still require that there are only finitely many lists of every given length).
comment:77 follow-up: ↓ 78 Changed 6 years ago by
I would actually like to redesign IntegerListsLex and IntegerLists_polyhedron such that they share code: they could be the same class but with a different implementation for __iter__ and __contains__.
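A very rough toy sketch of what such a shared design could look like (purely illustrative: the single length/min_part/max_part constraints and the brute-force iteration below are not the code on this branch):
class IntegerLists(object):
    """Shared constraint storage and containment check; subclasses pick the iteration strategy."""
    def __init__(self, length, min_part=0, max_part=0):
        self.length, self.min_part, self.max_part = length, min_part, max_part

    def __contains__(self, x):
        # shared __contains__: check the constraints directly
        return (len(x) == self.length and
                all(self.min_part <= v <= self.max_part for v in x))

    def __iter__(self):
        # stand-in for the Polyhedron.integral_points() based enumeration (no particular order)
        from itertools import product
        return (list(t) for t in product(range(self.min_part, self.max_part + 1),
                                         repeat=self.length))

class IntegerListsLex(IntegerLists):
    def __iter__(self):
        # same constraints, but with a guaranteed (reverse) lexicographic order
        return iter(sorted(IntegerLists.__iter__(self), reverse=True))
With this toy code, list(IntegerListsLex(2, max_part=1)) would give [[1, 1], [1, 0], [0, 1], [0, 0]].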
comment:78 in reply to: ↑ 77 ; follow-up: ↓ 79 Changed 6 years ago by
Replying to jdemeyer:
I would actually like to redesign IntegerListsLex and IntegerLists_polyhedron such that they share code: they could be the same class but with a different implementation for __iter__ and __contains__.
+1
why not several iterators on the same class? (with a reasonable one by default in __iter__).
comment:79 in reply to: ↑ 78 ; follow-up: ↓ 81 Changed 6 years ago by
Replying to vdelecroix:
Replying to jdemeyer:
I would actually like to redesign IntegerListsLex and IntegerLists_polyhedron such that they share code: they could be the same class but with a different implementation for __iter__ and __contains__.
+1
why not several iterators on the same class? (with a reasonable one by default in __iter__).
Well, it will be something along those lines. But I haven't thought too much about the actual design.
comment:80 in reply to: ↑ 75 Changed 6 years ago by
Replying to nthiery:
I promised Jeroen I would rebase his code
Thanks for the offer, but that will not be needed (in any case, it's just merging with -X theirs, essentially).
comment:81 in reply to: ↑ 79 Changed 6 years ago by
Replying to jdemeyer:
Replying to vdelecroix:
Replying to jdemeyer:
I would actually like to redesign IntegerListsLex and IntegerLists_polyhedron such that they share code: they could be the same class but with a different implementation for __iter__ and __contains__.
+1
why not several iterators on the same class? (with a reasonable one by default in __iter__).
Well, it will be something along those lines. But I haven't thought too much about the actual design.
+1 to sharing code between the classes; in fact I had put a mental note on doing this when I offered to do the merge :-)
Having a class that can handle any set of constraints, even if it only does containment checks, would be useful. We would need this in particular to properly refactor integer vectors.
I am not sure whether we want a single class, or two classes:
class IntegerLists:
    __iter__ -> __iter__ on the polyhedron
class IntegerListsLex(IntegerLists):
With the second one having the additional specification that the enumeration shall be lexicographic.
Thoughts?
Cheers,
Nicolas
The Sage MILP solvers cannot enumerate all solutions => closing as invalid.