Opened 13 years ago
Closed 13 years ago
#723 closed enhancement (fixed)
[with patch] Make Sage's LLL faster: Magma seems to totally blow Sage (i.e., NTL) away for large-ish problems.
Reported by: | was | Owned by: | was |
---|---|---|---|
Priority: | major | Milestone: | sage-2.8.8 |
Component: | linear algebra | Keywords: | |
Cc: | Merged in: | ||
Authors: | Reviewers: | ||
Report Upstream: | Work issues: | ||
Branch: | Commit: | ||
Dependencies: | Stopgaps: |
Description (last modified by )
Magma's LLL is... way way way faster than the one in NTL. E.g., (this requires patch #325)
sage: a = random_matrix(ZZ,200)
sage: time b=a.lll()
CPU times: user 8.74 s, sys: 0.08 s, total: 8.83 s
Wall time: 8.85
sage: m = magma(a)
sage: time c=m.LLL()
Wall time: 1.02
sage: a = random_matrix(ZZ,400)
sage: time b=a.lll()
CPU times: user 202.89 s, sys: 1.54 s, total: 204.44 s
Wall time: 206.16
sage: time c=magma(a)
CPU times: user 0.24 s, sys: 0.02 s, total: 0.26 s
Wall time: 0.38
sage: time d=c.LLL()
Wall time: 13.23
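For reference (not part of the ticket), the exact-arithmetic LLL that these timings measure can be sketched in pure Python. Every step recomputes a rational Gram-Schmidt pass, which is exactly the cost that makes exact implementations slow and motivates the floating-point variants discussed in the comments below. This is a textbook sketch with illustrative names, not how NTL or Magma actually implement it:

```python
from fractions import Fraction

def dot(u, v):
    # Exact rational inner product.
    return sum(Fraction(x) * Fraction(y) for x, y in zip(u, v))

def gram_schmidt(b):
    # Exact Gram-Schmidt: returns orthogonal vectors b* and coefficients mu.
    n = len(b)
    bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in b[i]]
        for j in range(i):
            mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [v[k] - mu[i][j] * bstar[j][k] for k in range(len(v))]
        bstar.append(v)
    return bstar, mu

def lll(b, delta=Fraction(3, 4)):
    # Textbook LLL reduction of an integer basis (exact, hence slow).
    b = [list(row) for row in b]
    n, k = len(b), 1
    while k < n:
        bstar, _ = gram_schmidt(b)
        # Size-reduce b[k] against b[k-1], ..., b[0].
        for j in range(k - 1, -1, -1):
            q = round(dot(b[k], bstar[j]) / dot(bstar[j], bstar[j]))
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt(b)
        # Lovasz condition: accept row k, or swap and step back.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k - 1], b[k] = b[k], b[k - 1]
            k = max(k - 1, 1)
    return b
```

The repeated full Gram-Schmidt recomputation is deliberate here for clarity; production code updates the coefficients incrementally, and the floating-point variants replace the `Fraction` arithmetic with doubles.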
It would also be good to benchmark PARI's LLL and compare. Make sure to pass the flag 1 to qflll, so it knows the matrix is integral:
sage: a = random_matrix(ZZ,100)
sage: time b=a.lll()
CPU times: user 0.53 s, sys: 0.00 s, total: 0.53 s
Wall time: 0.53
sage: c = pari(a)
sage: time d=c.qflll(1)
CPU times: user 0.47 s, sys: 0.00 s, total: 0.47 s
Wall time: 0.47
sage: a = random_matrix(ZZ,200)
sage: time b=a.lll()
CPU times: user 9.02 s, sys: 0.06 s, total: 9.08 s
Wall time: 9.14
sage: c = pari(a)
sage: time d=c.qflll(1)
CPU times: user 9.88 s, sys: 0.05 s, total: 9.93 s
Wall time: 9.95
Attachments (1)
Change History (13)
comment:1 Changed 13 years ago by
comment:2 Changed 13 years ago by
- Milestone changed from sage-wishlist to Sage-2.10
comment:3 Changed 13 years ago by
The attached patch exposes NTL's floating-point LLL implementations, which are significantly faster than the exact one. However, they compute only approximate reductions.
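To illustrate what "approximate" means here (a plain-Python sketch, not NTL's actual code; `mu_exact` and `mu_float` are made-up names): the Gram-Schmidt coefficient mu = &lt;u,v&gt;/&lt;v,v&gt; computed in double precision silently drops low-order bits once basis entries exceed the 53-bit mantissa of an IEEE double, which is the exactness the floating-point variants trade away for speed:

```python
from fractions import Fraction

def mu_exact(u, v):
    # Gram-Schmidt coefficient <u,v>/<v,v> in exact rational arithmetic.
    return sum(Fraction(a) * b for a, b in zip(u, v)) / sum(Fraction(b) * b for b in v)

def mu_float(u, v):
    # The same coefficient with IEEE doubles, as a floating-point LLL would compute it.
    return sum(float(a) * float(b) for a, b in zip(u, v)) / sum(float(b) ** 2 for b in v)

# 10**17 + 1 needs 57 bits, more than a double's 53-bit mantissa,
# so the "+ 1" is silently lost in the float computation.
u, v = [10**17 + 1, 1], [10**17, 1]
print(mu_exact(u, v) == mu_float(u, v))  # False: the double dropped low-order bits
```

The two values agree to many digits, which is why the approximate variants usually work; the failure mode only appears when those lost bits matter for a reduction decision.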
Changed 13 years ago by
comment:4 Changed 13 years ago by
after patch:
double precision vs. MAGMA:
sage: a = random_matrix(ZZ,200)
sage: time b=a.LLL(algorithm="NTL:LLL_FP")
Time: CPU 0.91 s, Wall: 0.93 s
sage: m = magma(a)
sage: time c=m.LLL()
Wall: 1.33 s
sage: a = random_matrix(ZZ,400)
sage: time b=a.LLL(algorithm="NTL:LLL_FP")
Time: CPU 6.75 s, Wall: 6.94 s
sage: c=magma(a)
sage: time d=c.LLL()
Wall: 20.41 s
quad float precision vs. MAGMA:
sage: a = random_matrix(ZZ,200)
sage: time b=a.LLL(algorithm="NTL:LLL_QP")
Time: CPU 2.56 s, Wall: 2.60 s
sage: m = magma(a)
sage: time c=m.LLL()
Wall: 1.34 s
sage: a = random_matrix(ZZ,400)
sage: time b=a.LLL(algorithm="NTL:LLL_QP")
Time: CPU 31.36 s, Wall: 32.00 s
sage: c=magma(a)
sage: time d=c.LLL()
Wall: 20.13 s
However:
sage: B = MatrixSpace(IntegerRing(), 50, 51)(0)
sage: for i in range(50): B[i,0] = ZZ.random_element(2**10000)
sage: for i in range(50): B[i,i+1] = 1
sage: B.LLL(algorithm="NTL:LLL_FP")
LLL_FP: numbers too big...use LLL_XD
...
<type 'exceptions.RuntimeError'>
sage: time B.LLL(algorithm="NTL:LLL_XD")
50 x 51 dense matrix over Integer Ring
CPU time: 43.55 s, Wall time: 44.73 s
sage: M = magma(B)
sage: time C = M.LLL()
Wall: 15.53 s
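A rough illustration, in plain Python, of why LLL_FP gives up on these entries (this mirrors, but does not reproduce, NTL's internal check; `fits_in_double` is an illustrative name): 10000-bit integers exceed the range of an IEEE double, which overflows near 2**1024, so an extended-exponent representation like LLL_XD's is required:

```python
def fits_in_double(n):
    # An integer can be represented (possibly rounded) as an IEEE double
    # only if it lies below the overflow threshold near 2**1024.
    try:
        float(n)
        return True
    except OverflowError:
        return False

print(fits_in_double(2 ** 1000))    # True: still within double range
print(fits_in_double(2 ** 10000))   # False: the ticket's 10000-bit entries overflow
```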
comment:5 Changed 13 years ago by
fpLLL, as mentioned above, is patched into Sage by the patch found at
. Using this code, we can achieve quite good performance.
sage: B = MatrixSpace(IntegerRing(), 50, 51)(0)
sage: for i in range(50): B[i,0] = ZZ.random_element(2**10000)
....:
sage: for i in range(50): B[i,i+1] = 1
....:
sage: time C = B.LLL('fpLLL:fast')
CPU times: user 9.54 s, sys: 0.00 s, total: 9.54 s
Wall time: 9.56
sage: C.is_LLL_reduced()
True
sage: BM = B._magma_()
sage: time CM = BM.LLL()
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 15.20
sage: time magma.eval("CM := LLL(%s:Fast:=1)"%BM.name())
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 11.68
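The benchmark matrix in this comment is a knapsack-style lattice basis: one column of huge random integers beside an identity block. The same construction can be sketched in plain Python (`knapsack_basis` is an illustrative name, and `getrandbits` stands in here for `ZZ.random_element(2**10000)`):

```python
import random

def knapsack_basis(n=50, bits=10000, seed=0):
    # n x (n+1) basis matching the transcript's B: column 0 holds n random
    # integers of up to `bits` bits, columns 1..n form an identity block.
    rng = random.Random(seed)
    B = [[0] * (n + 1) for _ in range(n)]
    for i in range(n):
        B[i][0] = rng.getrandbits(bits)
        B[i][i + 1] = 1
    return B

B = knapsack_basis()
```

Bases of this shape are a standard stress test for LLL because reducing them amounts to finding small relations among the large column-0 entries.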
comment:6 Changed 13 years ago by
- Summary changed from Make Sage's LLL faster: Magma seems to totally blow Sage (i.e., NTL) away for large-ish problems. to [with patch] Make Sage's LLL faster: Magma seems to totally blow Sage (i.e., NTL) away for large-ish problems.
comment:7 Changed 13 years ago by
- Milestone changed from Sage-2.10 to sage-2.8.8
comment:8 Changed 13 years ago by
- Description modified (diff)
comment:9 Changed 13 years ago by
This patch has fplll.c, which is Cython autogenerated code. Please make a clean new patch that doesn't have fplll.c or any other autogenerated code in it.
comment:10 Changed 13 years ago by
I *have* applied ntls-fp-lll.patch. The patch that isn't clean is
comment:11 Changed 13 years ago by
- Summary changed from [with patch] Make Sage's LLL faster: Magma seems to totally blow Sage (i.e., NTL) away for large-ish problems. to [with patch that needs work] Make Sage's LLL faster: Magma seems to totally blow Sage (i.e., NTL) away for large-ish problems.
comment:12 Changed 13 years ago by
- Resolution set to fixed
- Status changed from new to closed
- Summary changed from [with patch that needs work] Make Sage's LLL faster: Magma seems to totally blow Sage (i.e., NTL) away for large-ish problems. to [with patch] Make Sage's LLL faster: Magma seems to totally blow Sage (i.e., NTL) away for large-ish problems.
The fpLLL part has been moved to #929.
David Kohel had by far the most interesting response on sage-devel to this ticket: