Opened 4 years ago

Last modified 4 years ago

#18765 new enhancement

Add Cython wrappers for GLPK's interface glpssx.h (exact rational simplex)

Reported by: mkoeppe Owned by:
Priority: minor Milestone: sage-6.8
Component: numerical Keywords: lp
Cc: yzh, ncohen, dimpase Merged in:
Authors: Reviewers:
Report Upstream: N/A Work issues:
Branch: Commit:
Dependencies: Stopgaps:

Description

Compare with #18764 / #18735.

In this ticket, we would use GLPK's header file glpssx.h, gaining direct access to the exact rational simplex data. In contrast to #18764 + #18735, there would then be no need to reconstruct the solution using possibly slow rational matrix computations on the Sage side. The downside is that glpssx.h is not installed and not advertised as a public API; see:
http://lists.gnu.org/archive/html/help-glpk/2007-10/msg00031.html
http://lists.gnu.org/archive/html/help-glpk/2008-06/msg00006.html
http://lists.gnu.org/archive/html/help-glpk/2013-11/msg00019.html

One could make a new MixedIntegerLinearProgram backend that maintains both a standard glp problem (double floats) and a glpssx problem (GMP rationals): first solve the double-float problem using the standard glp_ functions, then copy the basis to the glpssx problem and continue there with the exact solver.
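The two-phase handoff can be illustrated without GLPK at all. The sketch below is a self-contained toy in Python, not GLPK code: the "float phase" is played by brute-force vertex enumeration of a tiny 2-variable LP, and the "exact phase" re-solves the chosen basis with rational arithmetic (`fractions.Fraction` standing in for GMP rationals). All names here are made up for illustration.

```python
from fractions import Fraction
from itertools import combinations

# Tiny LP: maximize x + y subject to x + 2y <= 4, 3x + y <= 6, x >= 0, y >= 0.
# Nonnegativity is rewritten as -x <= 0, -y <= 0 so every constraint is a row.
A = [[1, 2], [3, 1], [-1, 0], [0, -1]]
b = [4, 6, 0, 0]
c = [1, 1]

def solve_basis(basis, exact=False):
    """Solve the 2x2 system of the two tight constraints in `basis`.
    exact=True uses Fraction arithmetic (the 'glpssx' phase);
    exact=False uses plain floats (the 'glp_' phase)."""
    i, j = basis
    conv = Fraction if exact else float
    a11, a12 = conv(A[i][0]), conv(A[i][1])
    a21, a22 = conv(A[j][0]), conv(A[j][1])
    b1, b2 = conv(b[i]), conv(b[j])
    det = a11 * a22 - a12 * a21
    if det == 0:
        return None                      # degenerate basis, skip
    x = (b1 * a22 - b2 * a12) / det      # Cramer's rule
    y = (a11 * b2 - a21 * b1) / det
    # Check feasibility of the remaining constraints (with a float tolerance).
    for k, row in enumerate(A):
        if row[0] * x + row[1] * y > conv(b[k]) + (0 if exact else 1e-9):
            return None
    return (x, y)

# Phase 1 (float): find the best feasible basis, as a float solver would.
best_basis, best_val = None, None
for basis in combinations(range(len(A)), 2):
    pt = solve_basis(basis, exact=False)
    if pt is not None:
        val = c[0] * pt[0] + c[1] * pt[1]
        if best_val is None or val > best_val:
            best_basis, best_val = basis, val

# Phase 2 (exact): hand the *basis* (not the float point) to the exact
# solver and re-solve it in rational arithmetic.
exact_pt = solve_basis(best_basis, exact=True)
print(exact_pt)  # (Fraction(8, 5), Fraction(6, 5))
```

The point of the design is that only the basis (a discrete object) crosses from the float side to the rational side, so no floating-point values contaminate the exact phase.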

Change History (3)

comment:1 follow-up: Changed 4 years ago by dimpase

I don't understand how using standard glp_ functions would not lead to loss of precision, rendering subsequent exact computations meaningless. Are you going to watch for numerical trouble in the double-float phase?

Further, I don't think using non-public, undocumented features is a good idea. The next version could break them, and we'd be stuck maintaining a fork... Perhaps we first have to find a way to get the GLPK folks to finally make these things public?

comment:2 in reply to: ↑ 1 ; follow-up: Changed 4 years ago by mkoeppe

Replying to dimpase:

I don't understand how using standard glp_ functions would not lead to loss of precision, rendering subsequent exact computations meaningless. Are you going to watch for numerical trouble in the double-float phase?

One just uses double floats to navigate to some basis that is hopefully close to an optimal one, then moves to the same basis in the exact problem and starts the exact simplex from there. This is always correct, no matter what numerical troubles the double-float phase ran into.
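To see why correctness does not depend on the float phase, note that the exact phase can certify optimality of whatever basis it receives: at a vertex, the basis is optimal iff the objective is a nonnegative rational combination of the normals of the tight constraints. A minimal sketch (illustrative Python with `fractions`, not GLPK code; the LP and basis are made-up toy data) for the tiny LP "maximize x + y subject to x + 2y <= 4, 3x + y <= 6, x, y >= 0":

```python
from fractions import Fraction

F = Fraction
# Suppose the float phase handed over the basis {x + 2y <= 4, 3x + y <= 6}.
A_B = [[F(1), F(2)], [F(3), F(1)]]  # rows: normals of the two tight constraints
c = [F(1), F(1)]                    # objective

# Solve A_B^T * lam = c exactly by Cramer's rule (2x2); lam are the dual
# multipliers of the tight constraints.
a11, a12 = A_B[0][0], A_B[1][0]     # entries of A_B^T
a21, a22 = A_B[0][1], A_B[1][1]
det = a11 * a22 - a12 * a21
lam1 = (c[0] * a22 - c[1] * a12) / det
lam2 = (a11 * c[1] - a21 * c[0]) / det
print(lam1, lam2)  # 2/5 1/5

# Both multipliers are nonnegative, so this basis is exactly optimal.
# If either were negative, the exact simplex would simply pivot further;
# a bad float basis costs extra exact pivots, never correctness.
optimal = lam1 >= 0 and lam2 >= 0
```

So the float phase is purely a heuristic warm start: in the worst case the exact solver starts from a poor basis and does more rational pivots.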

Further, I don't think using non-public, undocumented features is a good idea. The next version could break them, and we'd be stuck maintaining a fork... Perhaps we first have to find a way to get the GLPK folks to finally make these things public?

With this ticket I first want to find out whether it will be worth it, performance-wise, compared to the other options. If it is, we can look into asking the GLPK developers.

comment:3 in reply to: ↑ 2 Changed 4 years ago by dimpase

Replying to mkoeppe:

Replying to dimpase:

I don't understand how using standard glp_ functions would not lead to loss of precision, rendering subsequent exact computations meaningless. Are you going to watch for numerical trouble in the double-float phase?

One just uses double float to navigate to some basis that's hopefully close to an optimal one.

A more realistic scenario is "try to use double floats" to navigate to "something that is hopefully close enough to a basis". This is what I learnt while trying to solve LPs that are too hard for a floating-point simplex (as implemented in CPLEX, say). A typical scenario:

  • CPLEX: your problem is infeasible
  • me: OK, give me a certificate of this
  • CPLEX: oops, I cannot :(