#22684 closed defect (fixed)
pynormaliz fails to build on 32bit system
Reported by:  tmonteil  Owned by:  

Priority:  major  Milestone:  sage-8.0 
Component:  packages: optional  Keywords:  days86, sdl 
Cc:  mkoeppe, vdelecroix, Winfried  Merged in:  
Authors:  Thierry Monteil  Reviewers:  Matthias Koeppe 
Report Upstream:  N/A  Work issues:  
Branch:  f64520f (Commits, GitHub, GitLab)  Commit:  f64520f609260af0fbab6949bdb69f8a8fa3ccc8 
Dependencies:  Stopgaps: 
Description (last modified by )
See the attached log.
Upstream tarballs:
Attachments (3)
Change History (66)
Changed 5 years ago by
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
 Description modified (diff)
comment:3 Changed 5 years ago by
 Branch set to u/tmonteil/pynormaliz_fails_to_build_on_32bit_system
comment:4 Changed 5 years ago by
 Commit set to c3a81c0657ac87c1f3d1e15f5c30de416cb77809
 Description modified (diff)
 Keywords days86 added
 Status changed from new to needs_info
Updating pynormaliz requires an update of normaliz, which installs and self-checks correctly, but I get the following doctest failure:
**********************************************************************
File "backend_normaliz.py", line 500, in sage.geometry.polyhedron.backend_normaliz.Polyhedron_normaliz.integral_points
Failed example:
    len(P.integral_points())  # optional - pynormaliz
Exception raised:
    Traceback (most recent call last):
      File "/opt/sagemath/sagesource/local/lib/python2.7/site-packages/sage/doctest/forker.py", line 503, in _run
        self.compile_and_execute(example, compiler, test.globs)
      File "/opt/sagemath/sagesource/local/lib/python2.7/site-packages/sage/doctest/forker.py", line 866, in compile_and_execute
        exec(compiled, globs)
      File "<doctest sage.geometry.polyhedron.backend_normaliz.Polyhedron_normaliz.integral_points[14]>", line 1, in <module>
        len(P.integral_points())  # optional - pynormaliz
      File "/opt/sagemath/sagesource/local/lib/python2.7/site-packages/sage/geometry/polyhedron/backend_normaliz.py", line 578, in integral_points
        for g in PyNormaliz.NmzResult(cone, "ModuleGenerators"):
    interface_error: std::bad_alloc
**********************************************************************
New commits:
645811e  #22684 update normaliz to 3.2.1.

c3a81c0  #22684 update PyNormaliz to 1.5.

comment:5 Changed 5 years ago by
 Cc Winfried added
comment:6 followup: ↓ 7 Changed 5 years ago by
Thierry, is this doctest failure on a 32-bit platform? On Mac OS X (64-bit), I can't reproduce it.
comment:7 in reply to: ↑ 6 Changed 5 years ago by
Replying to mkoeppe:
Thierry, is this doctest failure on a 32-bit platform? On Mac OS X (64-bit), I can't reproduce it.
Sorry for not being clear, I got that on a 64-bit system (I have to move to pynormaliz 1.5 for 32-bit reasons, but it is easier for me to work first on my 64-bit laptop). I will try to rebuild.
comment:8 Changed 5 years ago by
I rebuilt normaliz and pynormaliz; I still get the same error (plus some change in the ordering):
$ sage -t --long src/sage/geometry/polyhedron/backend_normaliz.py
too few successful tests, not using stored timings
Running doctests with ID 2017-04-23-01-20-48-2c49990f.
Git branch: t/22684/pynormaliz_fails_to_build_on_32bit_system
Using --optional=4ti2,d3js,gdb,giacpy_sage,git_trac,latte_int,lidia,lrslib,mpir,normaliz,ore_algebra,pandoc_attributes,pandocfilters,pynormaliz,python2,qepcad,rst2ipynb,saclib,sage
Doctesting 1 file.
sage -t --long src/sage/geometry/polyhedron/backend_normaliz.py
**********************************************************************
File "src/sage/geometry/polyhedron/backend_normaliz.py", line 412, in sage.geometry.polyhedron.backend_normaliz.Polyhedron_normaliz.integral_hull
Failed example:
    set(PI.Vrepresentation())  # optional - pynormaliz
Expected:
    {A vertex at (1, 0), A vertex at (0, 1), A ray in the direction (1, 0)}
Got:
    {A ray in the direction (1, 0), A vertex at (1, 0), A vertex at (0, 1)}
**********************************************************************
File "src/sage/geometry/polyhedron/backend_normaliz.py", line 420, in sage.geometry.polyhedron.backend_normaliz.Polyhedron_normaliz.integral_hull
Failed example:
    set(PI.Vrepresentation())  # optional - pynormaliz
Expected:
    {A vertex at (1, 0), A ray in the direction (1, 0), A line in the direction (1, 1)}
Got:
    {A line in the direction (1, 1), A ray in the direction (1, 0), A vertex at (1, 0)}
**********************************************************************
File "src/sage/geometry/polyhedron/backend_normaliz.py", line 500, in sage.geometry.polyhedron.backend_normaliz.Polyhedron_normaliz.integral_points
Failed example:
    len(P.integral_points())  # optional - pynormaliz
Exception raised:
    Traceback (most recent call last):
      File "/opt/sagemath/sagesource/local/lib/python2.7/site-packages/sage/doctest/forker.py", line 503, in _run
        self.compile_and_execute(example, compiler, test.globs)
      File "/opt/sagemath/sagesource/local/lib/python2.7/site-packages/sage/doctest/forker.py", line 866, in compile_and_execute
        exec(compiled, globs)
      File "<doctest sage.geometry.polyhedron.backend_normaliz.Polyhedron_normaliz.integral_points[14]>", line 1, in <module>
        len(P.integral_points())  # optional - pynormaliz
      File "/opt/sagemath/sagesource/local/lib/python2.7/site-packages/sage/geometry/polyhedron/backend_normaliz.py", line 578, in integral_points
        for g in PyNormaliz.NmzResult(cone, "ModuleGenerators"):
    interface_error: std::bad_alloc
**********************************************************************
2 items had failures:
   2 of  10 in sage.geometry.polyhedron.backend_normaliz.Polyhedron_normaliz.integral_hull
   1 of  33 in sage.geometry.polyhedron.backend_normaliz.Polyhedron_normaliz.integral_points
    [95 tests, 3 failures, 28.21 s]
----------------------------------------------------------------------
sage -t --long src/sage/geometry/polyhedron/backend_normaliz.py  # 3 doctests failed
----------------------------------------------------------------------
Total time for all tests: 28.4 seconds
    cpu time: 29.7 seconds
    cumulative wall time: 28.2 seconds
comment:9 Changed 5 years ago by
But when I rerun the test, the doctest crashes; I attach the backtrace.
Changed 5 years ago by
comment:10 Changed 5 years ago by
As far as I can see, 2 of the failures reported are simply a permutation of the output data. This has nothing to do with Normaliz.
The crash is another matter. Could I have the input data to run this outside of Sage?
I am not sure whether I can find a genuine 32-bit system in my neighborhood, and I don't have a Mac.
It could be useful to add debugging output in matrix.cpp before line 778 to see the value of nc (the number of columns of "this"), because the allocation of the vector w fails.
comment:11 followup: ↓ 12 Changed 5 years ago by
It must be an example with large numbers, because Normaliz calculates with GMP integers. It only does this if it is afraid of an overflow or is forced to do it.
This could explain why the problem arises with 32 bit and not with 64 bit.
comment:12 in reply to: ↑ 11 Changed 5 years ago by
Replying to Winfried:
This could explain why the problem arises with 32 bit and not with 64 bit.
To clarify: I had to upgrade to 1.5 because pynormaliz 1.0 was not building on 32-bit.
However, the errors reported on the ticket appear on 64-bit Debian jessie.
comment:13 followup: ↓ 14 Changed 5 years ago by
Thanks for the clarification.
But let me repeat my request: can you please isolate the input data that cause the crash? I am not familiar enough with Sage to do this in reasonable time.
comment:14 in reply to: ↑ 13 Changed 5 years ago by
Replying to Winfried:
Thanks for the clarification.
But let me repeat my request: can you please isolate the input data that cause the crash? I am not familiar enough with Sage to do this in reasonable time.
I will try, but I will first rebuild Sage from scratch on my laptop, just in case. Indeed, this ticket passes on my 32-bit VM!
comment:15 Changed 5 years ago by
Winfried, this is the test case. I added verbose=True, which displays the PyNormaliz command that the Sage code translates to.
sage: P = Polyhedron(vertices=((0, 0), (1789345, 37121))) + 1/1000*polytopes.hypercube(2)
sage: P = Polyhedron(vertices=P.vertices_list(),       # optional - pynormaliz
....:                backend='normaliz', verbose=True)
# Calling PyNormaliz.NmzCone(['vertices', [[-1, -1, 1000], [-1, 1, 1000], [1, -1, 1000], [1789345001, 37121001, 1000], [1789345001, 37120999, 1000], [1789344999, 37121001, 1000]], 'cone', [], 'subspace', []])
sage: len(P.integral_points())                         # optional - pynormaliz
# Calling PyNormaliz.NmzResult(cone, "ModuleGenerators")
3654
The error that Thierry noticed appears when calling PyNormaliz.NmzResult(cone, "ModuleGenerators")
on this cone. I can't reproduce it on Mac OS, however.
comment:16 Changed 5 years ago by
I have made the following input file for Normaliz:
amb_space 2
vertices
[[-1, -1, 1000], [-1, 1, 1000], [1, -1, 1000], [1789345001, 37121001, 1000], [1789345001, 37120999, 1000], [1789344999, 37121001, 1000]]
ModuleGenerators
I get 3654 lattice points. Normaliz uses approximation as desired (by me). There is 1 local transition to GMP, but no global switch to GMP. I will have another look at the backtrace.
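(Editorial cross-check, not part of the original ticket.) The 3654 count can be verified independently of Normaliz. P is the segment from (0, 0) to (1789345, 37121) thickened by the square [-1/1000, 1/1000]^2, so an integer point (x, y) lies in P exactly when some t in [0, 1] satisfies |x - 1789345 t| <= 1/1000 and |y - 37121 t| <= 1/1000; cross-multiplying the two t-intervals gives exact integer inequalities. A brute-force sketch (the helper in_P and the candidate-y shortcut are my own constructions):

```python
# Count lattice points of P = conv{(0,0), (a,b)} + [-1/1000, 1/1000]^2
# with a, b = 1789345, 37121 (the polytope from this ticket).
# (x, y) is in P iff the t-intervals [(1000x-1)/(1000a), (1000x+1)/(1000a)]
# and [(1000y-1)/(1000b), (1000y+1)/(1000b)] intersect inside [0, 1].
a, b = 1789345, 37121

def in_P(x, y):
    # 0 <= x <= a and 0 <= y <= b pin the intersection inside [0, 1];
    # the two products compare the interval endpoints with exact integers.
    return (0 <= x <= a and 0 <= y <= b
            and b * (1000 * x - 1) <= a * (1000 * y + 1)
            and a * (1000 * y - 1) <= b * (1000 * x + 1))

count = 0
for x in range(a + 1):
    y0 = b * x // a                            # the admissible y-interval is
    count += in_P(x, y0) or in_P(x, y0 + 1)    # ~0.002 wide: only y0 or y0+1
print(count)
```

Run as-is this prints the total, which should agree with the 3654 module generators Normaliz reports for the plop.in input above; spot checks like in_P(241, 5) and not in_P(1, 0) match the generator list in the plop.out below.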
For some reason the formatter puts a question mark behind the last line "ModuleGenerators?" of the input file.
comment:17 Changed 5 years ago by
PS. It could be helpful to run this example in Normaliz with the option -c and to post the terminal output (or to produce the terminal output via PyNormaliz).
comment:18 Changed 5 years ago by
I rebuilt Sage from scratch and the error persists. Here are the details you asked for:
If the file plop.in contains:

amb_space 2
vertices
[[-1, -1, 1000], [-1, 1, 1000], [1, -1, 1000], [1789345001, 37121001, 1000], [1789345001, 37120999, 1000], [1789344999, 37121001, 1000]]
ModuleGenerators

then running normaliz -c plop.in from a sage -sh shell leads to:
\.....
Normaliz 3.2.1
\....
\...
(C) The Normaliz Team, University of Osnabrueck
\..
February 2017
\.
\
************************************************************
Command line: -c plop.in
Compute: ModuleGenerators
************************************************************
starting primal algorithm (only support hyperplanes) ...
Generators sorted lexicographically
Start simplex 1 2 3
gen=4, 4 hyp
gen=5, 5 hyp
gen=6, 6 hyp
Checking pointedness ... done.
Select extreme rays via comparison ... done.
transforming data... done.
Computing approximating polytope
************************************************************
starting primal algorithm with partial triangulation ...
Roughness 1
Generators sorted by degree and lexicographically
Generators per degree:
1: 12
Start simplex 1 2 3
gen=4, 4 hyp, 0 simpl
gen=5, 4 hyp, 0 simpl
gen=6, 5 hyp, 0 simpl
gen=7, 5 hyp, 2 simpl
gen=8, 6 hyp, 3 simpl
gen=9, 7 hyp, 3 simpl
gen=10, 6 hyp, 4 simpl
gen=11, 7 hyp, 4 simpl
gen=12, 8 hyp, 4 simpl
Pointed since graded
Select extreme rays via comparison ... done.
evaluating 4 simplices
4 simplices, 3578686 deg1 vectors accumulated.
Total number of pyramids = 4, among them simplicial 4
GMP transitions: matrices 0 hyperplanes 1 vector operations 0
transforming data... done.
The plop.out file contains:

3654 module generators
0 Hilbert basis elements of recession monoid
6 vertices of polyhedron
0 extreme rays of recession cone
6 support hyperplanes of polyhedron (homogenized)

embedding dimension = 3
affine dimension of the polyhedron = 2 (maximal)
rank of recession monoid = 0
internal index = 4000

dehomogenization:
0 0 1

module rank = 3654

***********************************************************************

3654 module generators:
 0 0 1
 241 5 1
 482 10 1
 723 15 1
 2603 54 1
 2844 59 1
 3085 64 1
 3326 69 1
 3567 74 1
 3808 79 1
 5688 118 1
 5929 123 1
 6170 128 1
 6411 133 1
 6652 138 1
 6893 143 1
 8773 182 1
 9014 187 1
 9255 192 1
 9496 197 1
 9737 202 1
 9978 207 1
 10219 212 1
[ ... OUTPUT MANUALLY TRUNCATED ... ]
 1780090 36929 1
 1780331 36934 1
 1780572 36939 1
 1782452 36978 1
 1782693 36983 1
 1782934 36988 1
 1783175 36993 1
 1783416 36998 1
 1783657 37003 1
 1785537 37042 1
 1785778 37047 1
 1786019 37052 1
 1786260 37057 1
 1786501 37062 1
 1786742 37067 1
 1788622 37106 1
 1788863 37111 1
 1789104 37116 1
 1789345 37121 1

0 Hilbert basis elements of recession monoid:

6 vertices of polyhedron:
 -1 -1 1000
 -1 1 1000
 1 -1 1000
 1789344999 37121001 1000
 1789345001 37120999 1000
 1789345001 37121001 1000

0 extreme rays of recession cone:

6 support hyperplanes of polyhedron (homogenized):
 18560500 894672500 913233
 1000 0 1789345001
 0 1000 37121001
 0 1000 1
 1000 0 1
 18560500 894672500 913233
Running the following from comment:15 within a Sage shell leads to:
┌────────────────────────────────────────────────────────────────────┐
│ SageMath version 8.0.beta2, Release Date: 2017-04-12               │
│ Type "notebook()" for the browser-based notebook interface.        │
│ Type "help()" for help.                                            │
└────────────────────────────────────────────────────────────────────┘
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Warning: this is a prerelease version, and it may be unstable.     ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
sage: P = Polyhedron(vertices=((0, 0), (1789345, 37121))) + 1/1000*polytopes.hypercube(2)
sage: P = Polyhedron(vertices=P.vertices_list(),
....:                backend='normaliz', verbose=True)
# Calling PyNormaliz.NmzCone(['vertices', [[-1, -1, 1000], [-1, 1, 1000], [1, -1, 1000], [1789345001, 37121001, 1000], [1789345001, 37120999, 1000], [1789344999, 37121001, 1000]], 'cone', [], 'subspace', []])
sage: len(P.integral_points())
---------------------------------------------------------------------------
interface_error                           Traceback (most recent call last)
<ipython-input-3-19b874c06a0f> in <module>()
----> 1 len(P.integral_points())

/opt/sagemath/sagesource/local/lib/python2.7/site-packages/sage/geometry/polyhedron/backend_normaliz.pyc in integral_points(self, threshold)
    576         cone = self._normaliz_cone
    577         assert cone
--> 578         for g in PyNormaliz.NmzResult(cone, "ModuleGenerators"):
    579             assert g[1] == 1
    580             points.append(vector(ZZ, g[:-1]))

interface_error: std::bad_alloc
Regarding the question mark: this is because ModuleGenerators is CamelCase, which the wiki formatter interprets as a link. If you want a block to be displayed raw, just enclose it between triple braces.
comment:19 Changed 5 years ago by
Thanks, but the problem is now more mysterious. The standalone version runs as expected, but the access from Sage fails. Please forget what I said about transition to GMP. I will have a look at the backtrace and the source.
comment:20 Changed 5 years ago by
I have analyzed what is going on. As said, the lattice points in this polytope are computed by "approximation": Normaliz computes the lattice points in an INTEGRAL over-polytope and then selects the ones that lie in the given polytope. This process creates > 3 million lattice points in the over-polytope; it would actually be better in this case to suppress approximation by NoApproximation. I will go over this issue anyway in the next days.
Could it be that the access from Sage simply fails because of lack of memory for the > 3 million lattice points that must even coexist in long long and GMP? It might be good to have a look at "top" in another window.
std::bad_alloc is the exception that I see when I run out of memory.
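(Editorial note.) The order of magnitude of this hypothesis can be sanity-checked with a rough estimate. The 3578686 vector count comes from the Normaliz log above, and 4 entries per vector matches the nc value printed later in comment:48; the per-object byte costs are assumed guesses, not measurements:

```python
# Rough memory estimate for the ~3.6 million deg-1 vectors that the
# approximation step accumulates, held simultaneously as long long and GMP.
n_vectors = 3578686   # "3578686 deg1 vectors accumulated" in the log
entries = 4           # entries per vector (the nc value seen in comment:48)

BYTES_LONG_LONG = 8   # sizeof(long long)
BYTES_GMP = 40        # assumed footprint of a small GMP-backed integer
NODE_OVERHEAD = 48    # assumed per-vector list/vector bookkeeping

long_long_mib = n_vectors * (entries * BYTES_LONG_LONG + NODE_OVERHEAD) / 2**20
gmp_mib = n_vectors * (entries * BYTES_GMP + NODE_OVERHEAD) / 2**20
print(f"long long copy: ~{long_long_mib:.0f} MiB")
print(f"GMP copy:       ~{gmp_mib:.0f} MiB")
print(f"together:       ~{long_long_mib + gmp_mib:.0f} MiB")
```

Under these assumptions the two copies together land near 1 GiB, consistent with the ~1 GB resident size that htop reports in comment:21; this makes memory exhaustion plausible on a smaller machine, though it does not prove it.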
comment:21 Changed 5 years ago by
Here is what (h)top gives just before the crash:
  PID USER     PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
12752 thierry   20   0 2833M 1038M 44812 S  0.0 21.1  0:01.90 python /opt/sagemath/sagesource/src/bin/sage-runtests --long src/sage/
12751 thierry   20   0 2833M 1038M 44812 R 99.0 21.1  0:28.30 python /opt/sagemath/sagesource/src/bin/sage-runtests --long src/sage/
So it eats 21% of the RAM (i.e. 1GB out of 5).
comment:22 Changed 5 years ago by
If it is not the memory, then I have no idea at the moment. I personally would insert debugging code in order to locate the cause. Let me know if you are willing to test this. Then I can send you patches. It may of course take several rounds.
For the next release 3.3.0: I have worked on the function that computes the lattice points in the polytope. It is now much more efficient, and uses much less memory.
comment:23 Changed 5 years ago by
Sure, I can run tests for you (just tell me the commands to run).
comment:24 Changed 5 years ago by
By the way, I removed the Singular stuff from spkg-install, since I could not find the Singular directory in the normaliz tarball, and because the Singular stuff was commented out in the corresponding Makefiles. Am I doing this correctly, or is there something to do for Singular support?
comment:25 followup: ↓ 26 Changed 5 years ago by
O.K. I will send some code for debugging.
I am sure that the Singular stuff has nothing to do with it. Normaliz does not use Singular in any way. It has a Singular library so that it can be accessed from Singular. The data exchange uses files.
comment:26 in reply to: ↑ 25 Changed 5 years ago by
Replying to Winfried:
I am sure that the Singular stuff has nothing to do with it.
Sure, I am just asking whether it was safe to remove the last two lines of spkg-install, or if there should be some replacement procedure.
Normaliz does not use Singular in any way. It has a Singular library so that it can be accessed from Singular. The data exchange uses files.
What is the current procedure to let Singular know about Normaliz? Is there a replacement for Singular/normaliz.lib, which was shipped in previous versions? Or does Singular just detect the existence of a normaliz binary itself now?
comment:27 Changed 5 years ago by
I think it will not do any harm if you remove the last two lines of spkg-install, but I am not absolutely sure. Matthias should know.
There is no replacement for Singular/normaliz.lib yet, not even an update. The current version works well with the current version(s) of Normaliz, but cannot use all Normaliz features.
Before we try any debugging, please rebuild libnormaliz after replacing libnormaliz/cone.cpp, line 3266 by
const Matrix<Integer>& Raw=ApproxCone.getDeg1ElementsMatrix();
I was too careless at this point. The line above should reduce the memory usage, since it no longer copies the matrix but accesses it via the const reference.
comment:28 Changed 5 years ago by
Correction: line 3226
comment:29 Changed 5 years ago by
 Commit changed from c3a81c0657ac87c1f3d1e15f5c30de416cb77809 to c4e3cc31ef78814eb5c6b355e31e462e3297d994
comment:30 Changed 5 years ago by
I added a patch to implement your suggestion (see the current branch), but the problem persists.
comment:31 Changed 5 years ago by
If this can help: I reverted normaliz to 3.1.4, and testing the suggestion from comment:15 again, I got something that fails earlier:
sage: P = Polyhedron(vertices=((0, 0), (1789345, 37121))) + 1/1000*polytopes.hypercube(2)
sage: P = Polyhedron(vertices=P.vertices_list(),
....:                backend='normaliz', verbose=True)
# Calling PyNormaliz.NmzCone(['vertices', [[-1, -1, 1000], [-1, 1, 1000], [1, -1, 1000], [1789345001, 37121001, 1000], [1789345001, 37120999, 1000], [1789344999, 37121001, 1000]], 'cone', [], 'subspace', []])
python: malloc.c:3695: _int_malloc: Assertion `(unsigned long) (size) >= (unsigned long) (nb)' failed.
Then Sage hangs.
comment:32 Changed 5 years ago by
Again with normaliz 3.1.4, if I test comment:16, I get:
$ normaliz -c plop.in
\.....
Normaliz 3.1.4
\....
\...
(C) The Normaliz Team, University of Osnabrueck
\..
November 2016
\.
\
************************************************************
Command line: -c plop.in
Compute: ModuleGenerators
************************************************************
starting primal algorithm (only support hyperplanes) ...
Generators sorted lexicographically
Start simplex 1 2 3
gen=4, 4 hyp
gen=5, 5 hyp
gen=6, 6 hyp
Checking pointedness ... done.
Select extreme rays via comparison ... done.
transforming data... done.
Computing approximating polytope
************************************************************
starting primal algorithm with partial triangulation ...
Roughness 1
Generators sorted by degree and lexicographically
Generators per degree:
1: 12
Start simplex 1 2 3
gen=4, 4 hyp, 0 simpl
gen=5, 4 hyp, 0 simpl
gen=6, 5 hyp, 0 simpl
gen=7, 5 hyp, 2 simpl
gen=8, 6 hyp, 3 simpl
gen=9, 7 hyp, 3 simpl
gen=10, 6 hyp, 4 simpl
gen=11, 7 hyp, 4 simpl
gen=12, 8 hyp, 4 simpl
Pointed since graded
Select extreme rays via comparison ... done.
evaluating 4 simplices
4 simplices, 3578686 deg1 vectors accumulated.
Total number of pyramids = 4, among them simplicial 4
GMP transitions: matrices 0 hyperplanes 1 vector operations 0
transforming data... done.
Erreur de segmentation
(note the last line, which is French for "Segmentation fault").
and the produced plop.out ends with:

 1786501 37062 1
 1786742 37067 1
 1788622 37106 1
 1788863 37111 1
 1789104 37116 1
 1789345 37121 1

0 Hilbert basis elements of recession monoid:

6 vertices of polyhedron:
 -1 -1 1000
 -1 1 1000
 1 -1 1000
 1789344999 37121001 1000
 1789345001 37120999 1000
 1789345001 37121001 1000

0 extreme rays of recession cone:

6 support hyperplanes of polyhedron (homogenized):
 18560500 894672500 913233
 1000 0 1789345001
 0 1000 37121001
 0 1000 1
 1000 0 1
 18560500 894672500 913233

3 basis elements of lattice:
 1 0 0
 0 1 0
 0 0 1
comment:33 Changed 5 years ago by
I will try the computation in 3.1.4. So far I have no idea where the segmentation fault comes from.
It is difficult to have an idea why Normaliz standalone works while the computations fail within PyNormaliz/Sage. With 3.1.4 the computation takes another route than with 3.2.1 (it is the route that you get in 3.2.1 with the option NoSymmetrization).
Let us stick to 3.2.1. Tomorrow I will send some code that could get us closer to the problem.
Next week Sebastian Gutsche will visit me. He has written PyNormaliz.
comment:34 followup: ↓ 36 Changed 5 years ago by
On version 3.1.4:
The failure shown in comment:31 is caused by Python, not by Normaliz. I will show it to Sebastian Gutsche.
The segmentation fault in comment:32 does not show up in my system. Can you produce a gdb backtrace?
A correction: for this example, 3.1.4 = 3.2.1 with NoApproximation.
comment:35 followup: ↓ 37 Changed 5 years ago by
For 3.2.1 I suggest to first find out when the problem comes up. Please replace the block starting at line 2437 in cone.cpp by the following. It just inserts a counter. (This is of course meant only for debugging.) If the computation runs successfully, then 3578698 is the last value shown. According to the backtrace you sent, you will not reach it.
if (FC.isComputed(ConeProperty::Deg1Elements)) {
    Deg1Elements = Matrix<Integer>(0,dim);
    typename list< vector<IntegerFC> >::const_iterator DFC(FC.Deg1_Elements.begin());
    vector<Integer> tmp;
    long counter=0;
    for (; DFC != FC.Deg1_Elements.end(); ++DFC) {
        counter++;
        cout << counter << endl;
        BasisChangePointed.convert_from_sublattice(tmp,*DFC);
        Deg1Elements.append(tmp);
    }
    Deg1Elements.sort_by_weights(WeightsGrad,GradAbs);
    is_Computed.set(ConeProperty::Deg1Elements);
}
comment:36 in reply to: ↑ 34 ; followup: ↓ 38 Changed 5 years ago by
Replying to Winfried:
On version 3.1.4:
The failure shown in comment:31 is caused by Python, not by Normaliz. I will show it to Sebastian Gutsche.
The segmentation fault in comment:32 does not show up in my system. Can you produce a gdb backtrace?
I do not know much about gdb, so after reading some webpages I ran this command from a sage -sh shell (please tell me if I am wrong):
$ gdb --args /opt/sagemath/sage/local/bin/normaliz -c plop.in
And then I typed
(gdb) run
This leads to this output:
Starting program: /opt/sagemath/sagesource/local/bin/normaliz -c plop.in
Got object file from memory but can't read symbols: File truncated.
warning: Could not load shared library symbols for linux-vdso.so.1.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
\.....
Normaliz 3.1.4
\....
\...
(C) The Normaliz Team, University of Osnabrueck
\..
November 2016
\.
\
************************************************************
Command line: -c plop.in
Compute: ModuleGenerators
************************************************************
starting primal algorithm (only support hyperplanes) ...
Generators sorted lexicographically
Start simplex 1 2 3
[New Thread 0x7ffff6591700 (LWP 8292)]
gen=4, 4 hyp
gen=5, 5 hyp
gen=6, 6 hyp
Checking pointedness ... done.
Select extreme rays via comparison ... done.
transforming data... done.
Computing approximating polytope
************************************************************
starting primal algorithm with partial triangulation ...
Roughness 1
Generators sorted by degree and lexicographically
Generators per degree:
1: 12
Start simplex 1 2 3
gen=4, 4 hyp, 0 simpl
gen=5, 4 hyp, 0 simpl
gen=6, 5 hyp, 0 simpl
gen=7, 5 hyp, 2 simpl
gen=8, 6 hyp, 3 simpl
gen=9, 7 hyp, 3 simpl
gen=10, 6 hyp, 4 simpl
gen=11, 7 hyp, 4 simpl
gen=12, 8 hyp, 4 simpl
Pointed since graded
Select extreme rays via comparison ... done.
evaluating 4 simplices
4 simplices, 3578686 deg1 vectors accumulated.
Total number of pyramids = 4, among them simplicial 4
GMP transitions: matrices 0 hyperplanes 1 vector operations 0
transforming data... done.

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
Then if I type:
(gdb) backtrace
I got:
#0  0x0000000000000000 in ?? ()
#1  0x0000000000000000 in ?? ()
The plop.out is similar to the previous one (comment:32).
By the way, notice that the plop.out in comment:32 (with normaliz 3.1.4) has the following additional lines, which the one from comment:18 (with normaliz 3.2.1) does not have:
3 basis elements of lattice:
 1 0 0
 0 1 0
 0 0 1
comment:37 in reply to: ↑ 35 ; followup: ↓ 40 Changed 5 years ago by
Replying to Winfried:
For 3.2.1 I suggest to first find out when the problem comes up. Please replace the block starting at line 2437 in cone.cpp by the following. It just inserts a counter. (This is of course meant only for debugging.) If the computation runs successfully, then 3578698 is the last value shown. According to the backtrace you sent, you will not reach it.
if (FC.isComputed(ConeProperty::Deg1Elements)) {
    Deg1Elements = Matrix<Integer>(0,dim);
    typename list< vector<IntegerFC> >::const_iterator DFC(FC.Deg1_Elements.begin());
    vector<Integer> tmp;
    long counter=0;
    for (; DFC != FC.Deg1_Elements.end(); ++DFC) {
        counter++;
        cout << counter << endl;
        BasisChangePointed.convert_from_sublattice(tmp,*DFC);
        Deg1Elements.append(tmp);
    }
    Deg1Elements.sort_by_weights(WeightsGrad,GradAbs);
    is_Computed.set(ConeProperty::Deg1Elements);
}
I will try this. Should I keep the const Matrix<Integer>& Raw=ApproxCone.getDeg1ElementsMatrix(); suggested in comment:27?
comment:38 in reply to: ↑ 36 Changed 5 years ago by
Replying to tmonteil:
Replying to Winfried:
I do not know much about gdb, so after reading some webpages I ran this command from a sage -sh shell (please tell me if I am wrong):
$ gdb --args /opt/sagemath/sage/local/bin/normaliz -c plop.in
And then I typed
(gdb) run
Absolutely correct.
This leads to this output:
...
Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
Then if i type :
(gdb) backtrace
I got:
#0  0x0000000000000000 in ?? ()
#1  0x0000000000000000 in ?? ()
Hard to tell where the segmentation fault occurs. It does not look like Normaliz.
The plop.out is similar to the previous one (comment:32).
By the way, notice that the plop.out in comment:32 (with normaliz 3.1.4) has the following additional lines, which the one from comment:18 (with normaliz 3.2.1) does not have:
3 basis elements of lattice:
 1 0 0
 0 1 0
 0 0 1
I will check why the lattice basis is shown. It is of course correct, but should only show up if it is not the trivial one. At least the newer version has the expected behavior.
comment:39 Changed 5 years ago by
Regarding comment:35 and comment:37 (on 3.2.1): I kept the const Matrix<Integer>& Raw=ApproxCone.getDeg1ElementsMatrix(); change.
The last number I got is 2680870. I will attach the backtrace as crash.comment_39.log.
Changed 5 years ago by
comment:40 in reply to: ↑ 37 ; followup: ↓ 41 Changed 5 years ago by
Replying to tmonteil:
I will try this. Should i keep the
const Matrix<Integer>& Raw=ApproxCone.getDeg1ElementsMatrix();
suggested in comment:27 ?
Yes
comment:41 in reply to: ↑ 40 Changed 5 years ago by
Replying to Winfried:
Replying to tmonteil:
I will try this. Should i keep the
const Matrix<Integer>& Raw=ApproxCone.getDeg1ElementsMatrix();
suggested in comment:27 ?Yes
OK, this is what I did (see comment:39).
comment:42 Changed 5 years ago by
 Commit changed from c4e3cc31ef78814eb5c6b355e31e462e3297d994 to 5858acb6a03d2f354cff381e85fa68c69abc2ffa
Branch pushed to git repo; I updated commit sha1. New commits:
5858acb  #22684 : add tests suggested at comment 35.

comment:43 Changed 5 years ago by
I added the patch corresponding to the test to the current branch for information and if people want to try.
comment:44 followup: ↓ 45 Changed 5 years ago by
Let us continue our experiment. I still assume that you are facing a memory problem. The current line 2445 of cone.cpp is
Deg1Elements.append(tmp);
We double it to
Deg1Elements.append(tmp);
Deg1Elements.append(tmp);
This means that every vector is stored twice at this point. This will give a wrong result, but that is irrelevant at the moment. If the conjecture on memory holds true, then you should reach only a considerably smaller value of the counter.
comment:45 in reply to: ↑ 44 Changed 5 years ago by
This time I get to 2671113; I do not get a crash but a bad_alloc caught by the interface. Here are the last lines:
2671112
2671113
---------------------------------------------------------------------------
interface_error                           Traceback (most recent call last)
<ipython-input-3-19b874c06a0f> in <module>()
----> 1 len(P.integral_points())

/opt/sagemath/sagesource/local/lib/python2.7/site-packages/sage/geometry/polyhedron/backend_normaliz.py in integral_points(self, threshold)
    576         cone = self._normaliz_cone
    577         assert cone
--> 578         for g in PyNormaliz.NmzResult(cone, "ModuleGenerators"):
    579             assert g[1] == 1
    580             points.append(vector(ZZ, g[:-1]))

interface_error: std::bad_alloc
If I rerun the sage: len(P.integral_points()) command again, from the same Sage command line, it only goes to 82:
79
80
81
82
---------------------------------------------------------------------------
interface_error                           Traceback (most recent call last)
<ipython-input-4-19b874c06a0f> in <module>()
----> 1 len(P.integral_points())

/opt/sagemath/sagesource/local/lib/python2.7/site-packages/sage/geometry/polyhedron/backend_normaliz.py in integral_points(self, threshold)
    576         cone = self._normaliz_cone
    577         assert cone
--> 578         for g in PyNormaliz.NmzResult(cone, "ModuleGenerators"):
    579             assert g[1] == 1
    580             points.append(vector(ZZ, g[:-1]))

interface_error: std::bad_alloc
comment:46 Changed 5 years ago by
 Commit changed from 5858acb6a03d2f354cff381e85fa68c69abc2ffa to 28aaa7d3b46daf19861bbefd02bc96dc5c4cecf8
Branch pushed to git repo; I updated commit sha1. New commits:
28aaa7d  #22684 : add tests suggested at comment 44.

comment:47 Changed 5 years ago by
Let us try to go one step further.
The std::bad_alloc exception shown by crash.log occurs when the vector w in line 778 of matrix.cpp is to be allocated:
vector<Integer> w(nc,0);
The only reason why this could fail because of a Normaliz bug is a corrupted value of nc (the number of columns of the matrix *this). Therefore I suggest inserting
cout << nc << endl;
before this line. When our counter starts running, its printed value must always be followed by 4 (in the critical example). If there is a 4 immediately before the exception is thrown, then the std::bad_alloc is a system problem.
comment:48 Changed 5 years ago by
Here is how it ends:
sage: P = Polyhedron(vertices=((0, 0), (1789345, 37121))) + 1/1000*polytopes.hypercube(2)
....: P = Polyhedron(vertices=P.vertices_list(), backend='normaliz', verbose=True)
....: len(P.integral_points())
....:
# Calling PyNormaliz.NmzCone(['vertices', [[-1, -1, 1000], [-1, 1, 1000], [1, -1, 1000], [1789345001, 37121001, 1000], [1789345001, 37120999, 1000], [1789344999, 37121001, 1000]], 'cone', [], 'subspace', []])
3
3
3
3
3
3
3
[snip]
2698103
4
2698104
4
2698105
4
2698106
4
2698107
4
2698108
4
2698109
4
2698110
4
---------------------------------------------------------------------------
interface_error                           Traceback (most recent call last)
<ipython-input-1-2df9fc6cdad6> in <module>()
      1 P = Polyhedron(vertices=((Integer(0), Integer(0)), (Integer(1789345), Integer(37121)))) + Integer(1)/Integer(1000)*polytopes.hypercube(Integer(2))
      2 P = Polyhedron(vertices=P.vertices_list(), backend='normaliz', verbose=True)
----> 3 len(P.integral_points())

/opt/sagemath/sagesource/local/lib/python2.7/site-packages/sage/geometry/polyhedron/backend_normaliz.pyc in integral_points(self, threshold)
    576         cone = self._normaliz_cone
    577         assert cone
--> 578         for g in PyNormaliz.NmzResult(cone, "ModuleGenerators"):
    579             assert g[1] == 1
    580             points.append(vector(ZZ, g[:-1]))

interface_error: std::bad_alloc
So indeed, the last thing before the exception is a 4.
comment:49 Changed 5 years ago by
 Commit changed from 28aaa7d3b46daf19861bbefd02bc96dc5c4cecf8 to 2d371277f9245f7a6749f69c7685a56f78c49e95
comment:50 Changed 5 years ago by
If I understand your last comment correctly, it is a system problem. I upgraded my kernel and libc (Debian jessie), rebooted, and rebuilt normaliz, but the problem persists. Is there something I can do? Does this mean that the problem comes from my computer and not from the normaliz code?
comment:51 Changed 5 years ago by
I really don't think that the problem comes from the Normaliz code. The last thing Normaliz tries to do is to allocate a vector with 4 components, and the system responds with a std::bad_alloc. I am not a system expert. The only explanation I can think of is that you ran out of memory.
comment:52 Changed 5 years ago by
 Commit changed from 2d371277f9245f7a6749f69c7685a56f78c49e95 to 797a9673a8fdbb8d123161ac70eb31164b5d5b06
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
797a967  Merge branch 'develop' into HEAD

comment:53 Changed 5 years ago by
 Commit changed from 797a9673a8fdbb8d123161ac70eb31164b5d5b06 to 240ce84c240df0130bef65efb609e427a1c8fcfe
Branch pushed to git repo; I updated commit sha1. New commits:
240ce84  #22684 : lower memory requirement of a normaliz doctest.

comment:54 Changed 5 years ago by
 Commit changed from 240ce84c240df0130bef65efb609e427a1c8fcfe to 2dd1ca5ad23b4bf1760554f3397cc5422abf129f
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
2dd1ca5  22684 : lower memory requirement of a normaliz doctest.

comment:55 followup: ↓ 57 Changed 5 years ago by
 Status changed from needs_info to needs_review
Replying to Winfried:
I really don't think that the problem comes from the Normaliz code. The last thing Normaliz tries to do is to allocate a vector with 4 components, and the system responds with a std::bad_alloc. I am not a system expert. The only explanation I can think of is that you ran out of memory.
You are right: it seems that the weirdness of the error came from a ulimit (3 GB for Sage) I had set to protect my computer's memory. If I remove it, the process runs until my computer freezes, out of memory.
Hence, I reduced the size of the culprit polytope by a factor of 10 in one direction (I hope it is still illustrative; please tell me).
Also, I kept the patch suggested at comment:27.
comment:56 Changed 5 years ago by
The checksum for pynormaliz is wrong.
comment:57 in reply to: ↑ 55 Changed 5 years ago by
Replying to tmonteil:
I reduced the size of the culprit polytope by a factor of 10 in one direction (I hope it is still illustrative; please tell me).
Yes, I agree with this change.
comment:58 Changed 5 years ago by
 Commit changed from 2dd1ca5ad23b4bf1760554f3397cc5422abf129f to f64520f609260af0fbab6949bdb69f8a8fa3ccc8
Branch pushed to git repo; I updated commit sha1. New commits:
f64520f  #22684 : update PyNormaliz checksums.

comment:59 Changed 5 years ago by
Weird indeed. I checked: the current tarball contains:
PyNormaliz-1.5/
PyNormaliz-1.5/README
PyNormaliz-1.5/PyNormaliz.py
PyNormaliz-1.5/COPYING
PyNormaliz-1.5/GPLv2
PyNormaliz-1.5/NormalizModule.cpp
PyNormaliz-1.5/PKG-INFO
PyNormaliz-1.5/setup.py
while the previous contains:
PyNormaliz-1.5/
PyNormaliz-1.5/.gitignore
PyNormaliz-1.5/.travis-install.sh
PyNormaliz-1.5/.travis-test.sh
PyNormaliz-1.5/.travis.yml
PyNormaliz-1.5/COPYING
PyNormaliz-1.5/GPLv2
PyNormaliz-1.5/MANIFEST.in
PyNormaliz-1.5/Makefile
PyNormaliz-1.5/NormalizModule.cpp
PyNormaliz-1.5/PyNormaliz.py
PyNormaliz-1.5/README
PyNormaliz-1.5/Readme.md
PyNormaliz-1.5/examples/
PyNormaliz-1.5/examples/PyNormaliz_Tutorial.ipynb
PyNormaliz-1.5/examples/first.py
PyNormaliz-1.5/examples/first_long.py
PyNormaliz-1.5/examples/simple.py
PyNormaliz-1.5/setup.py
The common files are the same. I updated the checksums but perhaps something went wrong there.
comment:60 Changed 5 years ago by
In the next version 3.3.0 the algorithm that computes the lattice points in this example will be significantly improved. In particular it will use less memory by discarding superfluous vectors as early as possible (and not as late as possible).
comment:61 Changed 5 years ago by
 Reviewers set to Matthias Koeppe
 Status changed from needs_review to positive_review
comment:62 Changed 5 years ago by
 Branch changed from u/tmonteil/pynormaliz_fails_to_build_on_32bit_system to f64520f609260af0fbab6949bdb69f8a8fa3ccc8
 Resolution set to fixed
 Status changed from positive_review to closed
comment:63 Changed 3 years ago by
 Keywords sdl added
The new upstream version 1.5 has some 32-/64-bit work. Could you test this?