which shall detect whether `_current_list` has a chance to be a
prefix of some valid list. Using the above `area` method of the
envelopes, it should be possible to detect most dead branches.
Using dichotomy on the area / parts, it should be possible to
achieve good complexity in most cases.
Putting back the length and sum constraints, there exists a valid
list if and only if the interval `I_k = [sum(f_k), sum(c_k)]`
intersects the interval `[min_n, max_n]` for some `k` between
`min_length` and `max_length`.

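This characterization can be sketched directly; the helper below is hypothetical (`sum_f` and `sum_c` stand in for the floor and ceiling `Envelope.sum` methods discussed next):

```python
def exists_valid_list(sum_f, sum_c, min_n, max_n, min_length, max_length):
    """Whether some interval I_k = [sum_f(k), sum_c(k)] meets [min_n, max_n].

    sum_f / sum_c are stand-ins for the floor/ceiling Envelope.sum.
    """
    # Two intervals [a, b] and [c, d] intersect iff a <= d and c <= b.
    return any(sum_f(k) <= max_n and min_n <= sum_c(k)
               for k in range(min_length, max_length + 1))
```

For instance, with floor `1` and ceiling `3` per part, `sum_f(k) = k` and `sum_c(k) = 3*k`, so a list of sum `5` exists for lengths `1..2` (take `k = 2`), while sums `7..8` are out of reach.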
- Extend `Envelope` with a method `.sum(k)` which computes `sum(f_k)`.

  It should use appropriate algorithms and caching so that the
  overall time complexity of computing `sum(f_k)` for all `k <= l` is
  linear in `l`.

  Trick: store, for each `j`, the smallest `i <= j` such that between
  `i` and `j` the slope is exactly `max_slope`. The point is that
  computing the sum of the parts of `f_k` between `i` and `j` takes
  constant time. Use that incrementally to quickly derive `sum(f_k)`
  from `sum(f_j)` for some `j <= k`.

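A minimal sketch of the two ingredients, assuming a hypothetical stripped-down `Envelope` (no smoothing): incremental caching of prefix sums, and the constant-time sum over a span of exact slope `max_slope`, which is just an arithmetic progression:

```python
class Envelope:
    """Hypothetical stand-in: f is any function on positions 0, 1, ..."""

    def __init__(self, f):
        self._f = f
        self._sums = [0]  # _sums[k] == f(0) + ... + f(k-1)

    def sum(self, k):
        # Extend the cached prefix sums on demand; calling sum(k) for
        # all k <= l costs O(l) overall.
        while len(self._sums) <= k:
            j = len(self._sums) - 1
            self._sums.append(self._sums[-1] + self._f(j))
        return self._sums[k]

def span_sum(f_i, i, j, max_slope):
    """Sum of f_i, f_{i+1}, ..., f_j when the slope between consecutive
    parts on [i, j] is exactly max_slope (arithmetic progression)."""
    n = j - i + 1
    return n * f_i + max_slope * n * (n - 1) // 2
```

For example, `span_sum(5, 0, 2, -1)` sums the three parts `5, 4, 3` in constant time.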
  Suggestion for a faster development cycle: experiment with this by
  extracting the current code for `Envelope` into a separate file.

- Use the above characterization and the `.sum(k)` method to
  implement a method `IntegerListsLex.is_empty` which terminates with
  a guaranteed result in non-degenerate cases. This is essentially what
  the current `lookahead` method does, but starting from `0` and with
  guaranteed results thanks to the reverse smoothing.

  Determine precisely the degenerate cases where `is_empty` may take
  an arbitrarily long time (something like: the ceiling can be `0` for
  an arbitrarily long time, without a known `limit=0` and `limit_start`).

  Decide on appropriate metrics to tightly bound the complexity in the
  other cases (something around `max_length`, `max_n`, `limit_start`).

  As a by-product, we probably want to store for later usage the
  information of which `n` admit a valid list, as a collection of
  non-overlapping intervals. Then, deciding whether there exists a
  valid list of sum `n` for a given `n` can be achieved efficiently by
  dichotomy.

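A sketch of that lookup, assuming the hypothetical representation of the admissible sums as a sorted list of non-overlapping `(lo, hi)` tuples; the dichotomy is `bisect`:

```python
import bisect

def admits_sum(intervals, n):
    """Whether n lies in one of the sorted, non-overlapping closed
    intervals (lo, hi); O(log len(intervals)) by binary search."""
    # Find the last interval whose lower bound is <= n.
    i = bisect.bisect_right(intervals, (n, float('inf'))) - 1
    return i >= 0 and intervals[i][0] <= n <= intervals[i][1]
```

With `intervals = [(0, 2), (5, 7)]`, sums `0..2` and `5..7` admit a valid list while `3`, `4` do not.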
- Assuming that `l` is a prefix of a valid list, find out how to
  determine for which `i` `l+[i]` is still a prefix of a valid list.

  Potential building block: the choice of a prefix imposes more
  constraints on the floor and ceiling later on (our former `adapt`
  method), and we want to keep track of that. Given a prefix `l`,
  write `f_k(l)` for the list obtained by extending `l` to a list of
  length `k` using the floor `f_k`, with appropriate forward
  smoothing; similarly for `c_k(l)`. Write `I_k(l) = [sum(f_k(l)),
  sum(c_k(l))]`.

  Now, `l` is a prefix of a valid list if and only if one of the
  `I_k(l)` intersects `[min_sum, max_sum]`. So, while increasing the
  length of `l`, we want to incrementally adapt the sequence of
  intervals `I_k(l)` to always make sure we stay within one of them.

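A toy version of this test, assuming plain position-wise floor and ceiling functions `f`, `c` (no smoothing, no `min_length`); the quadratic loops recompute each `I_k(l + [i])` from scratch where the real implementation would adapt the intervals incrementally:

```python
def valid_extensions(l, f, c, min_sum, max_sum, max_length):
    """Parts i for which l + [i] is still a prefix of a valid list,
    under per-position floor f and ceiling c (hypothetical sketch)."""
    pos, s = len(l), sum(l)
    good = []
    for i in range(f(pos), c(pos) + 1):
        for k in range(pos + 1, max_length + 1):
            # I_k(l + [i]): complete to length k with the floor / ceiling.
            lo = s + i + sum(f(j) for j in range(pos + 1, k))
            hi = s + i + sum(c(j) for j in range(pos + 1, k))
            if lo <= max_sum and hi >= min_sum:
                good.append(i)
                break
    return good
```

For instance, with parts in `0..2`, sum exactly `3`, and length at most `3`, the prefix `[2]` can be extended by `0` or `1` but not by `2`.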
- Use this to reimplement `lookahead(self)`, with a guaranteed result
  and good complexity, and, if at all possible, constant time
  complexity when the constraints are simple (e.g. `floor=0`, ...).