Opened 4 years ago

Closed 4 years ago

# Add discrete_complementarity_set() method for cones

• Reported by: mjo (Michael Orlitzky)
• Reviewer: novoselt (Andrey Novoseltsev)
• Priority: major
• Milestone: sage-6.9
• Component: geometry
• Dependencies: N/A
• Commit: 75b2f43 (75b2f438073042e310adbcaea224959ba5f2105f)

### Description

Any sort of "linear property" of a cone can usually be verified on a generating set of the cone instead of on the whole thing, which for a polyhedral cone reduces the check to a finite number of steps.

The discrete complementarity set is a finite subset of the usual complementarity set that appears in complementarity problems and in mathematical programming; it consists only of pairs of generators. Having it available lets us check complementarity properties on the cone.
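For concreteness, here is a minimal plain-Python sketch of the idea (not the Sage implementation; the `dot` helper and the generator lists are illustrative assumptions) for the nonnegative quadrant, whose generators and dual generators are both the standard basis vectors:

```python
# Minimal plain-Python sketch of a discrete complementarity set.
# For the nonnegative quadrant, the cone and its dual are both
# generated by the standard basis vectors e1 and e2.

def dot(x, s):
    """Standard inner product of two vectors given as tuples."""
    return sum(a * b for a, b in zip(x, s))

gens = [(1, 0), (0, 1)]       # generators of the cone K
dual_gens = [(1, 0), (0, 1)]  # generators of the dual cone K*

# Keep only the orthogonal (complementary) pairs of generators.
dcs = [(x, s) for x in gens for s in dual_gens if dot(x, s) == 0]
print(dcs)  # [((1, 0), (0, 1)), ((0, 1), (1, 0))]
```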

### comment:1 Changed 4 years ago by mjo

• Authors set to Michael Orlitzky
• Branch set to u/mjo/ticket/19332
• Commit set to 88b74bba339c1ff53063ebe43819b222fd478f1b
• Status changed from new to needs_review

The only thing "new" here is the name (which I coin in the reference). The idea of taking a finite subset of the complementarity set is old, but usually only in the context of extreme vectors and proper cones.

New commits:

 88b74bb `Trac #19332: Add discrete_complementarity_set() method for cones.`

### comment:2 follow-up: ↓ 4 Changed 4 years ago by novoselt

• Status changed from needs_review to needs_work

I do not like the implementation at all. Why do we need to do

```
sage: self = Cone([(1,0),(0,1)])
sage: V = self.lattice().vector_space()
sage: G1 = [ V(x) for x in self.rays() ]
sage: G2 = [ V(s) for s in self.dual().rays() ]
sage: [ (x,s) for x in G1 for s in G2 if x.inner_product(s) == 0 ]
[((1, 0), (0, 1)), ((0, 1), (1, 0))]
```

when exactly the same is achieved with

```
sage: [(x, s) for x in self for s in self.dual() if x * s == 0]
[(N(1, 0), M(0, 1)), (N(0, 1), M(1, 0))]
```

which uses only memory for tuples (but not their elements) and can be cached???

Products work just fine between rays of dual cones, and to a large extent the whole point of introducing toric lattices was to allow only "correct" ways of mixing them, i.e. you cannot multiply rays that live in the same lattice. If the user is not happy with the type of the above output and wants to do illegal products, that user is welcome to do an explicit conversion.

Note also that the result can be achieved using

```
sage: [[(r, n) for r in f] for f, n in zip(self.facets(), self.facet_normals())]
[[(N(1, 0), M(0, 1))], [(N(0, 1), M(1, 0))]]
sage: self.orthogonal_sublattice()
Sublattice <>
```

which is not as short as the code above, but may be faster if the face lattice is used (and thus computed/cached) for something else. I am not sure there is any point in using it for the implementation, but perhaps the relation can be mentioned in the documentation (and I definitely like your detailed docstrings!).

A bipartite graph also seems to be a natural structure for the output ;-)

### comment:3 Changed 4 years ago by git

• Commit changed from 88b74bba339c1ff53063ebe43819b222fd478f1b to 872932952e7db998d013637636ca82124ace544e

Branch pushed to git repo; I updated commit sha1. New commits:

 5852e1c `Trac #19332: Use reviewer's implementation of discrete_complementarity_set().`
 8729329 `Trac #19332: Documentation updates for discrete_complementarity_set().`

### comment:4 in reply to: ↑ 2 Changed 4 years ago by mjo

• Status changed from needs_work to needs_review

> I do not like the implementation at all, why do we need to do ... when exactly the same is achieved with
>
> ```
> sage: [(x, s) for x in self for s in self.dual() if x * s == 0]
> [(N(1, 0), M(0, 1)), (N(0, 1), M(1, 0))]
> ```
>
> which uses only memory for tuples (but not their elements) and can be cached???

A few months ago, I'm sure I had a reason... but this works just fine now. I used your implementation and updated all of the doctests. I did use an explicit `x.inner_product(s)` because it's not easy to figure out what `x*s` does.

> Note also that the result can be achieved using ... which is not as short as the code above, but may be faster if the face lattice is used (and thus computed/cached) for something else.

You got my hopes up: this is an order of magnitude faster. But it also outputs the wrong answer =)

```
sage: K = Cone([(1,0)])
sage: [[(r, n) for r in f] for f, n in zip(K.facets(), K.facet_normals())]
[[]]
sage: K.discrete_complementarity_set()
[(N(1, 0), M(0, 1)), (N(1, 0), M(0, -1))]
```

> A bipartite graph also seems to be a natural structure for the output ;-)

Now that it's outputting lattice elements, sure, but can we do anything cool with the graph? The only thing I ever do with the complementarity set is loop through it and check properties.

The complementarity set can be huge, so I don't want to build a graph unless I can do something with it.
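Since the stated use is just looping over the pairs and checking a property, one way to avoid materializing a huge set at all would be a lazy generator. This is a plain-Python sketch, with `complementary_pairs` a hypothetical helper name, not anything in Sage:

```python
def complementary_pairs(gens, dual_gens):
    """Yield pairs (x, s) of generators with <x, s> = 0, lazily."""
    for x in gens:
        for s in dual_gens:
            if sum(a * b for a, b in zip(x, s)) == 0:
                yield (x, s)

# Properties can be checked without ever building the full list:
ok = all(x != s for x, s in complementary_pairs([(1, 0), (0, 1)],
                                                [(1, 0), (0, 1)]))
print(ok)  # True
```

The trade-off is that a generator cannot be cached and re-iterated the way a tuple can, which matters for the caching discussion below.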

### comment:5 follow-up: ↓ 6 Changed 4 years ago by novoselt

It does work correctly if you take into account the `orthogonal_sublattice` mentioned earlier; I was just too lazy to write out the combination. But how about

```
sage: self = Cone([(1,0)])
sage: dcs = [(r, n) for f, n in zip(self.facets(), self.facet_normals()) for r in f]
sage: dcs.extend((r, n) for r in self for n in self.orthogonal_sublattice().basis())
sage: dcs.extend((r, -n) for r in self for n in self.orthogonal_sublattice().basis())
sage: dcs
[(N(1, 0), M(0, 1)), (N(1, 0), M(0, -1))]

sage: self = Cone([(1,0),(0,1)])
sage: dcs = [(r, n) for f, n in zip(self.facets(), self.facet_normals()) for r in f]
sage: dcs.extend((r, n) for r in self for n in self.orthogonal_sublattice().basis())
sage: dcs.extend((r, -n) for r in self for n in self.orthogonal_sublattice().basis())
sage: dcs
[(N(1, 0), M(0, 1)), (N(0, 1), M(1, 0))]
```

Note that some rays here will be immutable while others will not be, so perhaps the actual implementation should be

```
@cached_method
def discrete_complementarity_set(self):
    dcs = [(r, n) for f, n in zip(self.facets(), self.facet_normals())
                  for r in f]
    # Extra elements for non-full-dimensional cones.
    orthogonal_generators = list(self.orthogonal_sublattice().gens())
    orthogonal_generators += [-g for g in orthogonal_generators]
    for g in orthogonal_generators:
        g.set_immutable()
    dcs.extend((r, g) for r in self for g in orthogonal_generators)
    return tuple(dcs)
```

with doc fixes noting that a tuple is returned.

If your tests for large cones show that it is indeed faster, let's use this one. I am quite positive that it will be faster on its own, but if you take constructing the face lattice into account it is not so obvious anymore.

The graph was more of a joke; a typical use would probably be iterating over the edges, and that's exactly what your function returns directly.

### comment:6 in reply to: ↑ 5 Changed 4 years ago by mjo

> It does work correctly if you take into account `orthogonal_sublattice` mentioned earlier, I was just too lazy to write the combination, but how about...

Ohhh OK. I see what you're doing. I actually use that trick in a few places in the paper, but I never thought to apply it to the complementarity set. Thank you!

> Note that some rays here will be immutable while others not, so perhaps the actual implementation should be... with doc fixes that a tuple is returned.

Why return a tuple instead of a list? This was the fastest I could make it (prior to making it a cached method):

```
dcs = [ (r,n) for (f,n) in zip(self.facets(), self.facet_normals())
for r in f ]
orthogonal_gens = list(self.orthogonal_sublattice().gens())
orthogonal_gens += [ (-g).set_immutable() for g in orthogonal_gens ]
return dcs + [ (r,g) for r in self for g in orthogonal_gens ]
```

> If your tests for large cones show that it is indeed faster, let's use this one. I am quite positive that it will be faster on its own, but if you take constructing the face lattice into account it is not so obvious anymore.

It is still much faster.

### comment:7 Changed 4 years ago by mjo

Whoops, this is obviously wrong, since `set_immutable()` returns `None` rather than the vector:

```
orthogonal_gens += [ (-g).set_immutable() for g in orthogonal_gens ]
```

However this still speeds things up a bit:

```
return dcs + [ (r,g) for r in self for g in orthogonal_gens ]
```

Does the tuple/list choice matter for caching?

### comment:8 Changed 4 years ago by novoselt

A list is mutable, a tuple is not; that's the only difference between the two I am aware of. So if you cache the result, it had better be a tuple (with all rays made immutable), not a list. Otherwise things may still work, but when something modifies your cache you'll get hard-to-debug bugs.
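The hazard described here is easy to demonstrate in plain Python, with `functools.lru_cache` used below as a stand-in for Sage's `@cached_method`: a cached list can be mutated by one caller, and every later caller then sees the corruption, while a cached tuple cannot be modified in place.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def cached_list():
    return [1, 2, 3]   # mutable: every caller shares this one object

@lru_cache(maxsize=None)
def cached_tuple():
    return (1, 2, 3)   # immutable: safe to hand out from a cache

cached_list().append(99)  # a careless caller mutates the cached value
print(cached_list())      # [1, 2, 3, 99] -- later callers see the corruption
print(cached_tuple())     # (1, 2, 3) -- tuples cannot be modified in place
```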

I also doubt there is much point in optimizing this implementation any further; after all, you don't want just to get this set as fast as possible, you also want to do something interesting with it. But if `+` is better than `extend` here, of course we can use it.

By the way, when benchmarking, take into account that the face lattice, faces, facet normals - everything is cached, and so will not show up in any timings that run multiple times or one after another on the same object. The best way of getting a "from scratch" timing I could come up with in such situations is to write a function that constructs a new cone and calls the new methods, then time calls to that function.
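The fresh-object timing pattern can be illustrated in plain Python, with `functools.cached_property` standing in for Sage's per-object caches (the `Toy` class is a made-up example): timing the same instance repeatedly mostly measures cache lookups, while constructing a fresh instance inside the timed callable measures the real cost on every call.

```python
import timeit
from functools import cached_property

class Toy:
    @cached_property
    def value(self):
        # Stand-in for an expensive computation like a face lattice.
        return sum(i * i for i in range(100_000))

t = Toy()
t.value  # warm the per-object cache
cached = timeit.timeit(lambda: t.value, number=50)       # cache lookups only
scratch = timeit.timeit(lambda: Toy().value, number=50)  # recomputes each call
print(scratch > cached)  # True: fresh objects expose the real cost
```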

### comment:9 Changed 4 years ago by git

• Commit changed from 872932952e7db998d013637636ca82124ace544e to 75b2f438073042e310adbcaea224959ba5f2105f

Branch pushed to git repo; I updated commit sha1. New commits:

 75b2f43 `Trac #19332: Return a cached tuple from discrete_complementarity_set().`

### Changed 4 years ago by mjo

Attachment added: a cone with a large discrete complementarity set.

### comment:10 Changed 4 years ago by mjo

I made an embarrassing mistake in my benchmarks. I've been using one big cone for all my tests, but I accidentally saved the cone after its face lattice was cached.

After fixing that mistake, the naive implementation consistently takes around 400 ms. The "fast" one takes over a second for the first call, and is then much faster afterwards. But if we're going to cache the result anyway, the naive implementation will do fine.

I've attached the cone object (without the face lattice cached!) if you want to check. This is how I've been timing a single call.

```
sage: K = load('/home/mjo/K.sobj')
sage: timeit('K.discrete_complementarity_set()', repeat=1, number=1)
```

### comment:11 Changed 4 years ago by novoselt

• Reviewers set to Andrey Novoseltsev
• Status changed from needs_review to positive_review

Let's go with the naive one, as it is also the clearest. If it ever becomes a bottleneck, one can try to be smarter (e.g. using the face lattice if it is already in the cache, or doing matrix multiplication instead of vector-by-vector products).
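The matrix-multiplication idea mentioned here can be sketched in plain Python (no numpy; the variable names are illustrative): stack the generators as rows of G and the dual generators as rows of H, form all inner products at once as P = G * H^T, and read the complementary pairs off the zero entries of P.

```python
gens = [(1, 0), (0, 1)]       # rows of G: generators of the cone
dual_gens = [(1, 0), (0, 1)]  # rows of H: generators of the dual cone

# P[i][j] = <gens[i], dual_gens[j]>, i.e. P = G * H^T.
P = [[sum(a * b for a, b in zip(x, s)) for s in dual_gens] for x in gens]

# The discrete complementarity set corresponds to the zero entries of P.
dcs = [(gens[i], dual_gens[j])
       for i, row in enumerate(P) for j, p in enumerate(row) if p == 0]
print(dcs)  # [((1, 0), (0, 1)), ((0, 1), (1, 0))]
```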

### comment:12 Changed 4 years ago by vbraun

• Branch changed from u/mjo/ticket/19332 to 75b2f438073042e310adbcaea224959ba5f2105f
• Resolution set to fixed
• Status changed from positive_review to closed