Opened 9 years ago

Closed 2 years ago

#10046 closed defect (wontfix)

Polyhedron's incidence_matrix performs poorly on some numerical input

Reported by: mhampton Owned by: mhampton
Priority: major Milestone: sage-duplicate/invalid/wontfix
Component: geometry Keywords: days88
Cc: vbraun Merged in:
Authors: Reviewers: Vincent Delecroix
Report Upstream: N/A Work issues:
Branch: Commit:
Dependencies: Stopgaps:


The following 3D simplex shouldn't be all that challenging, but we don't handle it well:

test = Polyhedron(vertices=[[0.8333333333, 0.8333333333, 0.8333333333], [1.022799569, 0.6321252838, 0.0], [0.7354697967, 0.280924808, 0.4545456098], [1.348361678, 0.0, 0.5150289612]], field=RDF)

[[0, [0, 1, 3]], [1, [1, 2, 3]], [2, []], [3, [0, 1, 2]]]

The root of this seems to be that the incidence_matrix is built on the _is_zero method:

H = list(test.Hrep_generator())[2]
print H
for V in test.Vrep_generator():
    if test._is_zero(H*V):
        test._incidence_matrix[V.index(),H.index()] = 1
        print H*V, ' for vertex ', V.vector()

An inequality (848702.5784, 1373226.161, -2221927.54) x + -1.0 >= 0
-0.000499999849126  for vertex  (0.8333333333, 0.8333333333, 0.8333333333)
1736102.60814  for vertex  (1.022799569, 0.6321252838, 0.0)
-0.000186597928405  for vertex  (0.7354697967, 0.280924808, 0.4545456098)
-5.35207800567e-05  for vertex  (1.348361678, 0.0, 0.5150289612)

and in the _init_field method of Polyhedron:

        # 1e-6 is the cddf+ default fuzzy zero cutoff
        self._is_zero = lambda x: abs(x) <= 1e-6

I realize that there will always be some problems like this for numerical (i.e. inexact) polyhedra, but it seems pretty sad that we can't handle a nice, reasonably sized simplex in 3D. The only thing that immediately occurs to me is to either change how the inequalities are scaled or do something more complicated than the current _is_zero.
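The failure is easy to reproduce outside Sage with plain Python floats (a sketch; the inequality coefficients, vertex data, and the 1e-6 cutoff are copied from the session above):

```python
# Inequality from cdd: a . x + b >= 0, with the scale fixed by b = -1
# (coefficients copied from the inequality printed above).
a = (848702.5784, 1373226.161, -2221927.54)
b = -1.0

vertices = [
    (0.8333333333, 0.8333333333, 0.8333333333),
    (1.022799569, 0.6321252838, 0.0),
    (0.7354697967, 0.280924808, 0.4545456098),
    (1.348361678, 0.0, 0.5150289612),
]

is_zero = lambda x: abs(x) <= 1e-6  # the cddf+ fuzzy-zero cutoff

residuals = [sum(ai * vi for ai, vi in zip(a, v)) + b for v in vertices]
print([is_zero(r) for r in residuals])
# -> [False, False, False, False]
# With the raw coefficients, *no* vertex passes the cutoff, so this
# facet appears to contain no vertices at all -- the empty row 2 in
# the incidence output above.
```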

Change History (19)

comment:1 Changed 9 years ago by vbraun

This is not a reasonable simplex; it is one with razor-sharp teeth ready to gnaw your leg off:

sage: v = test.Vrepresentation()
sage: matrix([vector(v[0]), vector(v[2]), vector(v[3])]).det()

That said, it would be nice if cdd could work with doubles instead of floats for ~16 accurate digits. PPL can do that easily using templates and C++.

comment:2 Changed 9 years ago by vbraun

  • Status changed from new to needs_review

I vote to close this ticket as "wontfix". There is nothing we can do except get rid of cdd.

comment:3 Changed 9 years ago by mhampton

Well, that seems far too pessimistic to me. This isn't very far from a regular simplex. It's unacceptable to me that we can't handle this, even with cddlib. It means that a variety of graphical output fails completely.

That said, I'm all in favor of switching to something better for our default convex hull method.

comment:4 Changed 9 years ago by vbraun

It has only 10^-7 of the volume of a standard simplex; I call that quite a drastic deviation. If you find a way to get that accuracy reliably with floats then please do tell. But I'm not holding my breath...

comment:5 Changed 9 years ago by mhampton

I'm confused about your volume statement. A regular simplex with edge length 1 has a volume of about 0.12. This simplex has a volume of about 0.048, about 40% of the regular one. The largest edge length is about 1.03, the smallest about 0.64. I don't think this problem really has much to do with cddlib using floats instead of doubles.
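The ~0.048 figure can be checked with plain floats via the scalar triple product, vol = |det(v1-v0, v2-v0, v3-v0)| / 6 (a sketch; vertex data copied from the ticket description):

```python
# Vertices of the simplex from the ticket description.
v = [
    (0.8333333333, 0.8333333333, 0.8333333333),
    (1.022799569, 0.6321252838, 0.0),
    (0.7354697967, 0.280924808, 0.4545456098),
    (1.348361678, 0.0, 0.5150289612),
]

def sub(p, q):
    return tuple(pi - qi for pi, qi in zip(p, q))

e1, e2, e3 = sub(v[1], v[0]), sub(v[2], v[0]), sub(v[3], v[0])

# 3x3 determinant via the scalar triple product e1 . (e2 x e3)
cross = (e2[1]*e3[2] - e2[2]*e3[1],
         e2[2]*e3[0] - e2[0]*e3[2],
         e2[0]*e3[1] - e2[1]*e3[0])
det = sum(a*b for a, b in zip(e1, cross))
volume = abs(det) / 6
print(round(volume, 3))  # ~0.048, nowhere near 10^-7 of a regular simplex
```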

comment:6 Changed 9 years ago by vbraun

You are right, the problem is that cdd fixes the overall scale of the <0,2,3> supporting inequality by scaling the constant term to unity, not that the simplex is degenerate. cdd should normalize the normal vector to unity instead, but not blow up the coefficients just because the supporting hypersurface comes close to the origin.

This needs to be addressed within cddlib; we can't just renormalize the cddlib output. I think it would be a waste of time to try to fix up cddlib, though.

comment:7 Changed 9 years ago by mhampton

I don't understand why we can't just renormalize the cddlib output.

This sort of thing is still a problem with PPL: with this same input, the PPL output for the H-representation is integer-valued, and the integers are big (presumably converted from double precision - ?). I think this would have to be rescaled somehow when using numerical 'fields'. (Btw, the output of PPL is different on linux versus OS X, which is mildly disturbing).

comment:8 Changed 9 years ago by mhampton

I'm thinking of changing the Inequality and Equation initializations to have:

if polyhedron._cdd_type == 'real':
    normal_norm = vector(data[1:]).norm()
    data = [x/normal_norm for x in data]

This solves the problem I had above, and all tests pass. Some of those tests include numerical polytopes. I also tested it on a few additional high dimensional cases.
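As a sanity check (a plain-Python sketch, with the inequality and vertex data copied from the ticket description), applying this rescaling to the problematic facet brings the residuals of its three incident vertices well under the 1e-6 cutoff:

```python
from math import sqrt

# cdd row [b, a1, a2, a3] for the problematic facet, as printed above.
data = [-1.0, 848702.5784, 1373226.161, -2221927.54]

# The proposed rescaling: divide the whole row by |a|.
normal_norm = sqrt(sum(x*x for x in data[1:]))
data = [x / normal_norm for x in data]

vertices = [
    (0.8333333333, 0.8333333333, 0.8333333333),
    (1.022799569, 0.6321252838, 0.0),
    (0.7354697967, 0.280924808, 0.4545456098),
    (1.348361678, 0.0, 0.5150289612),
]

residuals = [data[0] + sum(ai*vi for ai, vi in zip(data[1:], v))
             for v in vertices]
print([abs(r) <= 1e-6 for r in residuals])
# -> [True, False, True, True]
# Vertices 0, 2 and 3 now register as incident to this facet; vertex 1,
# which really is off the supporting hyperplane, still does not.
```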

I won't make a patch yet since I suspect I will hear some counter-arguments...

comment:9 Changed 9 years ago by vbraun

If cddlib internally computes with the wrong normalization then its output cannot be trusted. In your case you are presumably lucky since you have a simplex, so there is not much that can go wrong. But once you have lots of points it will blow up in your face.

If you insist, can you put the normalization somewhere in the cdd I/O routines, so that there are no crufty hacks in the main Polyhedron logic?

Which binary are you using to test PPL? The ppl_lcdd emulation supports only rational cdd input files.

comment:10 Changed 9 years ago by mhampton

I disagree, and I would like to fix this problem. For rigorous computation I don't trust numerical computation of convex hulls anyway, unless there is some additional way to certify the correctness. The numerical routines are important for having fast graphical output.

The current behavior is just terrible, and I think this is a real improvement for lots of actual use cases.

To keep the main Polyhedron logic clean I will put in _init_from_cdd_output as follows:

        self._Hrepresentation = Sequence([])
        equations = cdd_linearities()
        l = cddout.pop(0).split()
        assert self._ambient_dim == int(l[1])-1, "Different ambient dimension?"
        for i in range(int(l[0])):
            l = cddout.pop(0)
            data = cdd_convert(l)
            if self._cdd_type == 'real':
                normal_norm = vector(data[1:]).norm()
                data = [x/normal_norm for x in data]
            if i in equations:
                Equation(self, data)
            else:
                Inequality(self, data)

comment:11 Changed 9 years ago by mhampton

  • Status changed from needs_review to needs_work

comment:12 Changed 9 years ago by mhampton

For PPL, I was using ppl_lcdd. It accepted my input file with 'real' input specified, and on linux the output was correct in the sense of being equivalent to cddlib's up to the lower precision. I haven't checked the correctness of the OS X output, just noticed it was different.

For numerical input I'd like to test Qhull and if it performs better we might want to make it the default for numerical polyhedra.

comment:13 Changed 6 years ago by jdemeyer

  • Milestone changed from sage-5.11 to sage-5.12

comment:14 Changed 6 years ago by vbraun_spam

  • Milestone changed from sage-6.1 to sage-6.2

comment:15 Changed 6 years ago by vbraun_spam

  • Milestone changed from sage-6.2 to sage-6.3

comment:16 Changed 5 years ago by vbraun_spam

  • Milestone changed from sage-6.3 to sage-6.4

comment:17 Changed 2 years ago by jipilab

  • Milestone changed from sage-6.4 to sage-duplicate/invalid/wontfix
  • Status changed from needs_work to needs_review

This is out of date. It is to be expected that polyhedra over RDF will give such erroneous results on inputs like this.

I set this as wontfix.

comment:18 Changed 2 years ago by vdelecroix

  • Keywords days88 added
  • Reviewers set to Vincent Delecroix
  • Status changed from needs_review to positive_review

comment:19 Changed 2 years ago by embray

  • Resolution set to wontfix
  • Status changed from positive_review to closed