Opened 5 years ago

Closed 5 years ago

#18137 closed enhancement (fixed)

Centrality betweenness in Sage

Reported by: ncohen
Owned by:
Priority: major
Milestone: sage-6.6
Component: graph theory
Keywords:
Cc: dcoudert, borassi
Merged in:
Authors: Nathann Cohen
Reviewers: David Coudert
Report Upstream: N/A
Work issues:
Branch: 2db68fb
Commit: 2db68fb0f56ce38214484cd4619e2fb2a8f4aa7b
Dependencies:
Stopgaps:

Description (last modified by ncohen)

I hate that we do not appear in comparisons like the following, just because we are slower than the worst library :-P

http://graph-tool.skewed.de/performance

With this branch we can compute the betweenness centrality in Sage with a decent speed.

Nathann

P.S.: The version of the code that deals with rationals instead of floats has been removed because it is *much* slower (60x in some cases), and because I did not see how to make the two coexist without copy/pasting most of the code.

Change History (55)

comment:1 Changed 5 years ago by ncohen

  • Branch set to u/ncohen/18137
  • Commit set to 0402abffb7656e123ce64d38e6a8cb8392eecf89
  • Status changed from new to needs_review

New commits:

7b63297 trac #18137: Add new centrality module
0402abf trac #18137: keep only the 'double' version, get rid of the rationals

comment:2 follow-up: Changed 5 years ago by dcoudert

Hello,

I have only small remarks:

  • Wouldn't it be slightly faster to use arrays of bint rather than bitsets? It would use more memory, but since you want to be fast, any saving is important.
  • You could add a doctest to compare the result of your implementation with networkx
  • You could pre-compute the ((n-1)*(n-2)), although it is a minor improvement.
  • You could add cdef double x
  • Variables k and d are not used.
  • You wrote from centrality import centrality_betweenness. Shouldn't it be from sage.graphs.centrality import centrality_betweenness ?

comment:3 in reply to: ↑ 2 Changed 5 years ago by ncohen

Hello,

  • Wouldn't it be slightly faster to use arrays of bint rather than bitsets? It would use more memory, but since you want to be fast, any saving is important.

It would not be much faster, because most of what this array would contain is zeroes (bint is an int in memory). Plus the bottleneck is float computation in this case :-/
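
Back-of-envelope, to make the memory argument concrete (a hypothetical comparison, not code from this branch):

n = 10**6                   # number of vertices
bitset_bytes = n // 8       # bitset: one bit per vertex = 125 KB, cache friendly
bint_array_bytes = 4 * n    # bint array: one C int per vertex = 4 MB, 32x more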

  • You could add a doctest to compare the result of your implementation with networkx

There is one, isn't there? In centrality.pyx. The one with a "tolerance" flag.

  • You could pre-compute the ((n-1)*(n-2)), although it is a minor improvement.

I don't think that it is worth it. Save a linear number of multiplications after all this work, really?.. :-P

  • You could add cdef double x

Done.

  • Variables k and d are not used.

Done.

  • You wrote from centrality import centrality_betweenness. Shouldn't it be from sage.graphs.centrality import centrality_betweenness ?

They are in the same folder, so it works. It is even more robust as a result, as we can move them wherever we want and the path does not change.

I also changed a Cython flag which checks for exceptions when doing float divisions.

Nathann

comment:4 Changed 5 years ago by git

  • Commit changed from 0402abffb7656e123ce64d38e6a8cb8392eecf89 to 47291d7a004cef908625eeb40ac3997645a7cce3

Branch pushed to git repo; I updated commit sha1. New commits:

47291d7 trac #18137: Review

comment:5 Changed 5 years ago by chapoton

There is numerical noise; add a tolerance, see the patchbot report.

comment:6 Changed 5 years ago by chapoton

  • Status changed from needs_review to needs_work

comment:7 Changed 5 years ago by git

  • Commit changed from 47291d7a004cef908625eeb40ac3997645a7cce3 to 421dc01c948b631dd227c17aeba0459080293b94

Branch pushed to git repo; I updated commit sha1. New commits:

421dc01 trac #18137: Numerical noise

comment:8 Changed 5 years ago by ncohen

  • Status changed from needs_work to needs_review

Another thing from which patchbot saves us :-P

Thanks,

Nathann

comment:9 Changed 5 years ago by ncohen

  • Description modified (diff)

comment:10 follow-up: Changed 5 years ago by vdelecroix

The advantage of using rationals is that it was exact! Here you are using floats, but without any guarantee on the result. Aren't you? Do you have an estimate on the error depending on the number of vertices/edges? One solution would be to use ball arithmetic, which also produces a bound on the error (see the recently added arb package). Or interval arithmetic (but that is slower).
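
For illustration, here is roughly what ball arithmetic buys (a sketch assuming the arb-based RealBallField; output abridged):

sage: RBF = RealBallField(53)
sage: s = sum(RBF(1)/n for n in range(1, 10^7))
sage: s   # a midpoint together with a rigorous radius bounding the error
[16.695311... +/- ...]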

Vincent

comment:11 Changed 5 years ago by dcoudert

Although Nathann would prefer not to, we could have 2 versions of the code: the fast one as default, and a slower exact one.

David.

comment:12 in reply to: ↑ 10 ; follow-up: Changed 5 years ago by ncohen

Hello,

The advantage of using rationals is that it was exact!

And this is the very reason why I wrote both implementations.

I am not so sure that it is a very big problem, however, as the algorithm will not add noise to noise, as can happen in PDE computations.

The current version of centrality_betweenness, from NetworkX shipped with Sage, also computes on floats (in Python):

    sage: import networkx
    sage: networkx.algorithms.centrality.betweenness._accumulate_basic??

I wanted to check how Boost does it, but I was not able to locate the source code (God, how can anyone read those files???).

(15 minutes later)

Here it is! Line 338 of:

http://boost.cvs.sourceforge.net/viewvc/boost/boost/boost/graph/betweenness_centrality.hpp?annotate=1.2.6.1

So the answer is that "it depends on dependency_type", which is... a template.

For igraph it is apparently a double too: https://github.com/igraph/igraph/blob/master/src/centrality.c#L1685 https://github.com/igraph/igraph/blob/master/src/centrality.c#L1804

For graph-tool (the last of the libraries compared at the link in the ticket's description) it is apparently a double too, though I cannot be sure, as I could not find the get_betweenness function:

https://git.skewed.de/count0/graph-tool/blob/master/src/graph_tool/centrality/__init__.py#L326

Sooooooo please don't just limit your argumentation to "not exact = BAD". I care about this, and for this reason I implemented both (which definitely took more than a couple of minutes, as you can imagine), but I do believe that for this kind of computation working on floats is not that bad, for I know when the divisions occur and, well, we do not mind much.

I would personally be very happy to have both in Sage, with an easy flag to switch from one implementation to the other. If you just check out the first of my commits you will see that only one variable needs to be changed so that doubles become rationals. My trouble is that using Cython's preprocessor instructions requires running sage -b, and we do not want that.

I would also like NOT to have the same code copy/pasted twice, and not to pay for an 'if' inside the loops.

I would be happy to have both if there is a free (in terms of computations) way to handle both at once, and a cheap (in terms of lines of code) way to have both.

So far I did not find any way out, and I thought that the best was to have what everybody seems interested in: computations on double (we can also turn them into 'long double' if necessary).

Nathann

P.S.: I uploaded a commit with both versions so that it will be available somewhere (and not only on my computer) if we ever need that implementation. I did that on purpose, to have it archived somewhere.

comment:13 in reply to: ↑ 12 ; follow-up: Changed 5 years ago by vdelecroix

Replying to ncohen:

Sooooooo please don't just limit your argumentation to "not exact=BAD".

Do not oversimplify. My argumentation was "not exact => extra care needed". Floats are wonderful because they are very fast.

And this is the very reason why I wrote both implementations.

It would be interesting to investigate (experimentally) the error propagation.

I am not so sure that it is a very big problem, however, as the algorithm will not add noise to noise like it can happen for PDE computations.

Already, summing (a lot of) floating point numbers creates problems. A simple (not so dramatic) example:

sage: sum(1.r/n for n in range(1,10000000))
16.69531126585727
sage: sum(1.r/n for n in range(9999999,0,-1))
16.695311265859964

If you mix that with division, it is of course even worse.

I care about this, and for this reason I implemented both (which definitely took more than a couple of minutes as you can imagine), but I do believe that for this kind of computations working on floats is not that bad, for I know when the divisions occur and, well, we do not mind much.

I also believe so, but it would be better if we were sure and the documentation mentioned it. Something like: if you have a graph with m vertices and n edges, then the worst case is an error of function(m,n).

Vincent

comment:14 in reply to: ↑ 13 Changed 5 years ago by ncohen

Hello !

I agree that float operations introduce errors, but I do not know how to evaluate them. I expect the relative error to stay very, very small in these cases, and in the graphs that are of interest to the networks community.

Would you know a trick to have both implementations available in the code (without recompilation)? I do not think that we can have 'real templates' in Cython, can we?

Nathann

comment:15 follow-up: Changed 5 years ago by ncohen

Okay. Here it is. It cost me the last four hours.

Nathann

comment:16 Changed 5 years ago by git

  • Commit changed from 421dc01c948b631dd227c17aeba0459080293b94 to 2f2fbd47b8527ca6a4dac02f01f8c2167907cd20

Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:

2f2fbd4 trac #18137: Add new centrality module

comment:17 in reply to: ↑ 15 ; follow-up: Changed 5 years ago by vdelecroix

Replying to ncohen:

Okay. Here it is. It cost me the last four hours.

Youhou! You initiated me into the world of Cython templating!

I am having a careful look right now.

comment:18 in reply to: ↑ 17 Changed 5 years ago by ncohen

Youhou! You initiated me into the world of Cython templating!

I am having a careful look right now.

It is cool without being great. There are several limitations that you will only notice if you write some code yourself. In particular, the templating forced me to introduce a pure C function.
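
For readers who have not met the feature, a minimal fused-type sketch (illustrative only, not this branch's code):

# One function body, compiled once per member of the fused type.
ctypedef fused number_t:
    float
    double

cdef number_t average(number_t a, number_t b):
    return (a + b) / 2

def demo():
    # a specialization can also be selected explicitly with [] syntax
    return average[double](1.0, 2.0)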

Nathann

comment:19 Changed 5 years ago by vdelecroix

At least on the following example, the computation with doubles is perfectly valid:

sage: g = graphs.RandomGNP(100,.2)
sage: cb0 = g.centrality_betweenness(exact=0)
sage: cb1 = g.centrality_betweenness(exact=1)
sage: max(cb0[i]-cb1[i] for i in cb0)
6.938893903907228e-18

The relative precision of a double is 2^(-53) ~ 10^(-16), and the numbers are about 0.01, which makes the error affect only the last one or two bits.

comment:20 Changed 5 years ago by vdelecroix

All right. I did not check what the algorithm does, but at least I can propose a simplification in the Cython fused business. Have a look at public/18137.

Vincent

comment:21 Changed 5 years ago by ncohen

  • Branch changed from u/ncohen/18137 to public/18137
  • Commit changed from 2f2fbd47b8527ca6a4dac02f01f8c2167907cd20 to 6dfcbc3b9043f911ab3f706d4b18dbce34b94d66

Works for me. It would be cool if you reviewed the code, however. For some reason, these days people come, set my tickets to 'needs_work', and leave forever, leaving my ticket in needs_review.


New commits:

6dfcbc3 Trac 18137: some simplification + tiny doc changes

comment:22 Changed 5 years ago by dcoudert

Hello,

You should add a description of the exact parameter in the Cython centrality_betweenness method.

It does not matter much, but you could specify the return type of the C method (dict).

Otherwise, the method works perfectly on my mac ;)

David.

comment:23 Changed 5 years ago by git

  • Commit changed from 6dfcbc3b9043f911ab3f706d4b18dbce34b94d66 to f21ae32a86cfe0c9d86f75906491cb804af45b68

Branch pushed to git repo; I updated commit sha1. New commits:

f21ae32 trac #18137: Review

comment:24 Changed 5 years ago by ncohen

  • Cc borassi added

comment:26 Changed 5 years ago by chapoton

still not passing the tests

and please use the ....: new-style doctest continuation

comment:27 Changed 5 years ago by git

  • Commit changed from f21ae32a86cfe0c9d86f75906491cb804af45b68 to eaa6a3eccacf8f90064cde79febef149c7b8e5e8

Branch pushed to git repo; I updated commit sha1. New commits:

eaa6a3e trac #18137: ... -> ....:

comment:28 Changed 5 years ago by ncohen

Done. Note that I did not introduce any '...'; I only added a "# tol" flag on an existing one.

Nathann

comment:29 follow-up: Changed 5 years ago by borassi

Hello, this is my first comment: please forgive me if I make mistakes due to my lack of experience; I am happy to receive suggestions. First of all, congrats on the excellent running time! Some comments follow (I think the first is important, while the others are only "style" comments):

  1. bitset_union(seen,seen,next_layer): this operation takes time O(n), and it is performed at each layer. This means that, if there are many layers, then the running time might be O(n^2) to perform a BFS, and O(mn^2) overall (for instance, if the input is a path). I would change the code by storing the distances from the source, and testing distances instead. Example: it takes more time to analyze a path of length 10,000 than a G(N,p) random graph with N=10,000 and p=0.001 (with about 100,000 edges).
  2. Only to improve readability of the code: why do you use layer_current_beginning, layer_current_end, and layer_next_end? I would rather use a variable for the start of the queue and another for the end of the queue, as in a standard BFS. Is there any particular reason why you do not do it?
  3. In the comment part, Section "ALGORITHM", I would add something to explain what happens if k=0.

Hope it is useful!

Michele

comment:30 in reply to: ↑ 29 ; follow-up: Changed 5 years ago by ncohen

Hello,

this is my first comment: please forgive me if I make mistakes due to my lack of experience; I am happy to receive suggestions.

Your comments are rather good, actually.

  1. bitset_union(seen,seen,next_layer): this operation takes time O(n), and it is performed at each layer. This means that, if there are many layers, then the running time might be O(n^2) to perform a BFS, and O(mn^2) overall (for instance, if the input is a path). I would change the code by storing the distances from the source, and testing distances instead. Example: it takes more time to analyze a path of length 10,000 than a G(N,p) random graph with N=10,000 and p=0.001 (with about 100,000 edges).

HMmm.. Well, indeed when thinking of a long path this may actually make a big difference. Do you feel like giving it a try and telling us about the difference?

I believe that it may make performances worse in some case, though I do not know by how much. Storing everything as a bitset (instead of storing distances) also makes things more compact in memory, and the cpu cache works better. Though again, I have no idea how much time that represents.

  1. Only to improve readability of the code: why do you use layer_current_beginning, layer_current_end, and layer_next_end? I would rather use a variable for the start of the queue and another for the end of the queue, as in a standard BFS. Is there any particular reason why you do not do it?

I do not understand your question. What do you mean by "a variable for the start of the queue and another for the end of the queue"? This is precisely what layer_current_beginning and layer_current_end do, in my understanding.

  1. In the comment part, Section "ALGORITHM", I would add something to explain what happens if k=0.

In this case the sum is empty, and thus equal to 0. Do you think that it should be emphasized?

Nathann

comment:31 Changed 5 years ago by ncohen

Okay, two news:

1) You are right: when profiling the code on a long path, the bottleneck is this 'or'.

2) We can have everything at the same time. Instead of clearing the new layer bitset and running this big 'or', we can keep adding stuff to it without ever clearing it, AND only add the new elements to the 'seen' bitset at the end of the loop (using the content of the queue). 100% win. I will upload a commit in a second.
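
Schematically, in plain Python with a set standing in for the bitset (hypothetical names, not the branch's actual code): the per-layer bulk union seen |= next_layer, which is O(n) per layer and hence O(n^2) on a path, becomes an O(1) insertion per newly enqueued vertex, i.e. O(n) over the whole BFS.

def bfs_queue(adj, source):
    enqueued = {source}            # only ever grown one vertex at a time
    queue = [source]
    layer_begin, layer_end = 0, 1
    while layer_begin < layer_end:
        for i in range(layer_begin, layer_end):
            for v in adj[queue[i]]:
                if v not in enqueued:
                    enqueued.add(v)
                    queue.append(v)
        layer_begin, layer_end = layer_end, len(queue)
    return queue                   # vertices in BFS (layer) order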

Nathann

comment:32 Changed 5 years ago by ncohen

FYI, before the optimization:

sage: g = graphs.PathGraph(10000)
sage: %time _=g.centrality_betweenness()
CPU times: user 10 s, sys: 0 ns, total: 10 s
Wall time: 10 s

comment:33 Changed 5 years ago by git

  • Commit changed from eaa6a3eccacf8f90064cde79febef149c7b8e5e8 to c84caef2c24cf2a42bf802c6d2e829886fd74149

Branch pushed to git repo; I updated commit sha1. New commits:

c84caef trac #18137: Thanks Michele!!!

comment:34 Changed 5 years ago by ncohen

Here it is :-)

sage: g = graphs.PathGraph(10000)
sage: %time _=g.centrality_betweenness()
CPU times: user 1.76 s, sys: 8 ms, total: 1.77 s
Wall time: 1.75 s

Thanks,

Nathann

comment:35 in reply to: ↑ 30 ; follow-up: Changed 5 years ago by borassi

Replying to ncohen:

Hello,

this is my first comment: please forgive me if I make mistakes due to my lack of experience; I am happy to receive suggestions.

Your comments are rather good, actually.

  1. bitset_union(seen,seen,next_layer): this operation takes time O(n), and it is performed at each layer. This means that, if there are many layers, then the running time might be O(n^2) to perform a BFS, and O(mn^2) overall (for instance, if the input is a path). I would change the code by storing the distances from the source, and testing distances instead. Example: it takes more time to analyze a path of length 10,000 than a G(N,p) random graph with N=10,000 and p=0.001 (with about 100,000 edges).

HMmm.. Well, indeed when thinking of a long path this may actually make a big difference. Do you feel like giving it a try and telling us about the difference?

I believe that it may make performances worse in some case, though I do not know by how much. Storing everything as a bitset (instead of storing distances) also makes things more compact in memory, and the cpu cache works better. Though again, I have no idea how much time that represents.

Well, it seems you have already done it! :-)

  1. Only to improve readability of the code: why do you use layer_current_beginning, layer_current_end, and layer_next_end? I would rather use a variable for the start of the queue and another for the end of the queue, as in a standard BFS. Is there any particular reason why you do not do it?

I do not understand your question. What do you mean by "a variable for the start of the queue and another for the end of the queue"? This is precisely what layer_current_beginning and layer_current_end do, in my understanding.

My point is that I do not like the nested loops:

while layer_current_beginning<layer_current_end:
    for j in range(layer_current_beginning,layer_current_end):

I would rather use only the while loop, by replacing j with layer_current_beginning, and adding layer_current_beginning += 1 at the end of the while. However, I realize now that, if you do not want to store distances, this is probably not possible, because you have two for loops. So, maybe the best is to leave it as it is.

  1. In the comment part, Section "ALGORITHM", I would add something to explain what happens if k=0.

In this case the sum is empty, and thus equal to 0. Do you think that it should be emphasized?

That's what I meant, but if you think it's not worth doing it, no problem!

Nathann

comment:36 in reply to: ↑ 35 Changed 5 years ago by ncohen

Hello,

Well, it seems you have already done it! :-)

Well.. Three lines of code for an "all-cases improvement" and an order of magnitude for the worst case... Of course :-P

My point is that I do not like the nested loops:

I would rather use only the while loop, by replacing j with layer_current_beginning, and adding layer_current_beginning++ at the end of the while. However, I realize now that, if you do not want to store distances, this is probably not possible, because you have two for loops. So, maybe the best is to leave it as it is.

Yep yep, I am forced to do that if I want to know where exactly I am without storing distances ^^;

That's what I meant, but if you think it's not worth doing it, no problem!

I just looked at this explanation, and I do not see any way to hint at it (without saying the obvious 'if the sum is empty, well, it's zero'). SOooooo I will pass.

Thanks again for the speedup,

Nathann

comment:37 Changed 5 years ago by dcoudert

One possible extra speedup: decompose the graph into biconnected components ;) The centrality of cut vertices is obvious to compute. For other vertices it might be more difficult to gather all the parts.
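
Sage already exposes the decomposition, for whoever picks this up some day (output illustrative):

sage: g = graphs.PathGraph(4)
sage: g.blocks_and_cut_vertices()
([[0, 1], [1, 2], [2, 3]], [1, 2])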

David.

comment:38 Changed 5 years ago by ncohen

Ahahah. Well, honestly I don't mind much. I am only implementing this so that, 6 months from now (next stable release), I can tell anybody who compares graph libraries that they should also include Sage, and that we are faster than everybody else. Boost included :-P

Nathann

comment:39 Changed 5 years ago by dcoudert

That's fair enough ;)

We will certainly have future patches for other stuff.

One point: apparently Boost computes the vertex and the edge betweenness simultaneously. This may explain some slow-down.

David.

comment:40 Changed 5 years ago by ncohen

Well. We can have it for free too if we care. Instead of

betweenness_source[v] += (betweenness_source[u]+1)*(n_paths_from_source[v]/n_paths_from_source[u])

you must save the right value somewhere (that's your edge betweenness). It may induce a small loss in speed, but certainly not an integer factor.
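
Schematically (hypothetical names, following the line above): during the accumulation, the quantity added for a predecessor v is exactly the shortest-path load of the edge (v,u), so keeping it costs one extra write per edge.

def accumulate(u, predecessors, n_paths_from_source, betweenness_source, edge_betweenness):
    for v in predecessors[u]:   # v is one step closer to the source than u
        share = (betweenness_source[u] + 1) * (n_paths_from_source[v] / n_paths_from_source[u])
        edge_betweenness[(v, u)] = edge_betweenness.get((v, u), 0.0) + share
        betweenness_source[v] += share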

Nathann

P.S.: Okay, now that I have implemented so many cool things, would anybody feel like finishing the review? :-P

comment:41 Changed 5 years ago by borassi

  • Status changed from needs_review to positive_review

comment:42 Changed 5 years ago by ncohen

HMmmmmmmmmmm O_o

Well, I don't mind much your setting my ticket to positive review, but given that you are new here... Have you looked at this page?

http://www.sagemath.org/doc/developer/reviewer_checklist.html

I am not saying that newcomers are not allowed to change the status -- everybody is, and everybody can review anything here -- it is just that I want to make sure that you know what reviewing a ticket involves (running the tests, wondering if they are sufficient, building the doc).

Thanks anyway,

Nathann

comment:43 Changed 5 years ago by borassi

  • Status changed from positive_review to needs_review

comment:44 Changed 5 years ago by borassi

Sorry, this is the "mistake due to lack of experience". I thought "positive review" meant that I was happy with the code, but now I understand it is much more. I think it's better to leave this issue to more experienced people.

comment:45 Changed 5 years ago by ncohen

No proooooob!!! If you have some spare time you can read our manual a bit. Reviewing a ticket is not very complicated and the 'technical checks' do not take more than a couple of minutes once you get used to them. And of course you can ask us any question if the manual isn't clear ;-)

Nathann

comment:46 follow-up: Changed 5 years ago by dcoudert

  • Status changed from needs_review to needs_work

One issue:

sage: G = Graph()
sage: G.centrality_betweenness()
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
...
ValueError: bitset capacity must be greater than 0

Did you know that your code also works for multigraphs?

sage: G = Graph(multiedges=True)
sage: G.add_edge(0,1)
sage: G.add_edge(0,1)
sage: G.add_edge(1,2)
sage: G.add_edge(2,3)
sage: G.add_edge(0,3)
sage: G.centrality_betweenness(exact=1)
{0: 2/9, 1: 2/9, 2: 1/9, 3: 1/9}

comment:47 Changed 5 years ago by git

  • Commit changed from c84caef2c24cf2a42bf802c6d2e829886fd74149 to c9f83e7edf6d3b5a7e74b22a98127fbdbbc8cee6

Branch pushed to git repo; I updated commit sha1. New commits:

c9f83e7 trac #18137: Trivial cases... As always.

comment:48 in reply to: ↑ 46 Changed 5 years ago by ncohen

One issue:

Right. Fixed.

Did you know that your code is also working for multi-graphs?

Yeah, that was good news! At some point I wondered whether I should add a 'scream if not simple' somewhere, then figured out that it worked fine. It also extends the definition in the most natural way, i.e. by considering a path as a set of edges instead of a set of vertices.

And it also works for loops :-PPPP

Nathann

Last edited 5 years ago by ncohen

comment:49 Changed 5 years ago by dcoudert

  • Reviewers set to David Coudert
  • Status changed from needs_work to positive_review

Good.

comment:50 Changed 5 years ago by vbraun

  • Status changed from positive_review to needs_work

I'm getting this on 32-bit. You should probably add an # abs tol

sage -t --long src/sage/graphs/graph.py
**********************************************************************
File "src/sage/graphs/graph.py", line 4662, in sage.graphs.graph.Graph.?
Failed example:
    (graphs.ChvatalGraph()).centrality_betweenness(
      normalized=False) # abs tol abs 1e10
Expected:
    {0: 3.833333333333333, 1: 3.833333333333333, 2: 3.333333333333333,
     3: 3.333333333333333, 4: 3.833333333333333, 5: 3.833333333333333,
     6: 3.333333333333333, 7: 3.333333333333333, 8: 3.333333333333333,
     9: 3.333333333333333, 10: 3.333333333333333,
     11: 3.333333333333333}
Got:
    {0: 3.833333333333333,
     1: 3.833333333333333,
     2: 3.333333333333333,
     3: 3.333333333333333,
     4: 3.833333333333333,
     5: 3.833333333333333,
     6: 3.333333333333333,
     7: 3.3333333333333335,
     8: 3.333333333333333,
     9: 3.333333333333333,
     10: 3.333333333333333,
     11: 3.333333333333333}
**********************************************************************
1 item had failures:
   1 of  66 in sage.graphs.graph.Graph.?
    [652 tests, 1 failure, 15.40 s]

comment:51 Changed 5 years ago by git

  • Commit changed from c9f83e7edf6d3b5a7e74b22a98127fbdbbc8cee6 to dddf5024500101cae695b56943f69d433351276e

Branch pushed to git repo; I updated commit sha1. New commits:

dddf502 trac #18137: broken doctest

comment:52 Changed 5 years ago by ncohen

  • Status changed from needs_work to positive_review

comment:53 Changed 5 years ago by git

  • Commit changed from dddf5024500101cae695b56943f69d433351276e to 2db68fb0f56ce38214484cd4619e2fb2a8f4aa7b
  • Status changed from positive_review to needs_review

Branch pushed to git repo; I updated commit sha1 and set ticket back to needs_review. This was a forced push. New commits:

2db68fb trac #18137: broken doctest

comment:54 Changed 5 years ago by ncohen

  • Status changed from needs_review to positive_review

comment:55 Changed 5 years ago by vbraun

  • Branch changed from public/18137 to 2db68fb0f56ce38214484cd4619e2fb2a8f4aa7b
  • Resolution set to fixed
  • Status changed from positive_review to closed