#31520 closed enhancement (fixed)

refresh similarity class

Reported by: Frédéric Chapoton Owned by:
Priority: major Milestone: sage-9.3
Component: combinatorics Keywords:
Cc: Samuel Lelièvre Merged in:
Authors: Frédéric Chapoton Reviewers: Samuel Lelièvre
Report Upstream: N/A Work issues:
Branch: 39c3abf Commit: 39c3abf6b3611124c1b0faf5badbbddcc59a6d7e
Dependencies: Stopgaps:

Description

Using autopep8 to clean up the file,

and making the long-time doctest much shorter.
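
For reference, a minimal sketch of this kind of cleanup pass using autopep8's Python API; the file path here is only illustrative, not necessarily the exact one touched by the branch:

import autopep8

# read the module, let autopep8 apply its PEP 8 fixes, and write it back
path = "src/sage/combinat/similarity_class_type.py"  # illustrative path
with open(path) as f:
    source = f.read()
with open(path, "w") as f:
    f.write(autopep8.fix_code(source))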

Change History (5)

comment:1 Changed 19 months ago by Frédéric Chapoton

Branch: u/chapoton/31520
Commit: 39c3abf6b3611124c1b0faf5badbbddcc59a6d7e
Status: new → needs_review

New commits:

39c3abf  refresh /similarity_class_type.py and shorter doctest

comment:2 Changed 19 months ago by Samuel Lelièvre

Cc: Samuel Lelièvre added
Reviewers: Samuel Lelièvre

Good improvements.

Some optional further steps follow; feel free to consider them out of scope here.

Reindex to clarify and avoid subtractions:

-    return prod(1 - q**(-i - 1) for i in range(n))
+    return prod(1 - q**i for i in range(-n, 0))
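
As a sanity check, here is a standalone sketch (hypothetical helper names, exact rationals standing in for q) confirming that the reindexed product equals the original one:

from fractions import Fraction

def fq_old(n, q):
    # original indexing: i runs over 0..n-1, so the exponents are -1, ..., -n
    result = Fraction(1)
    for i in range(n):
        result *= 1 - Fraction(q) ** (-i - 1)
    return result

def fq_new(n, q):
    # reindexed: i runs over -n..-1, same factors without the subtraction
    result = Fraction(1)
    for i in range(-n, 0):
        result *= 1 - Fraction(q) ** i
    return result

assert all(fq_old(n, 3) == fq_new(n, 3) for n in range(8))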

Use iterator rather than list for product:

-    return q**centralizer_algebra_dim(la)*prod([fq(m, q=q) for m in la.to_exp()])
+    return q**centralizer_algebra_dim(la)*prod(fq(m, q=q) for m in la.to_exp())

or maybe even use product with starting point:

-    return q**centralizer_algebra_dim(la)*prod([fq(m, q=q) for m in la.to_exp()])
+    return prod((fq(m, q=q) for m in la.to_exp()), q**centralizer_algebra_dim(la))
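
The idea of a starting point, illustrated in plain Python (math.prod accepts a start value, playing the role of the q**centralizer_algebra_dim(la) factor above):

from math import prod

factors = [3, 4, 5]
start = 7

# folding the start value into the product avoids a separate multiplication
assert prod(factors, start=start) == start * prod(factors)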

Use parentheses instead of end-of-line backslashes:

-        return isinstance(other, PrimarySimilarityClassType) and \
-            self.degree() == other.degree() and \
-            self.partition() == other.partition()
+        return (isinstance(other, PrimarySimilarityClassType)
+                and self.degree() == other.degree()
+                and self.partition() == other.partition())

Use iterator rather than list for products:

-        return prod([PT.centralizer_group_card(q=q) for PT in self])
+        return prod(PT.centralizer_group_card(q=q) for PT in self)
-        numerator = prod([prod([primitives(d+1, invertible=invertible, q=q)-i for i in range(list_of_degrees.count(d+1))]) for d in range(maximum_degree)])
+        numerator = prod(prod(primitives(d+1, invertible=invertible, q=q) - i
+                              for i in range(list_of_degrees.count(d + 1)))
+                         for d in range(maximum_degree))

In that last case, would a double comprehension be faster than a product of products?

-        numerator = prod([prod([primitives(d+1, invertible=invertible, q=q)-i for i in range(list_of_degrees.count(d+1))]) for d in range(maximum_degree)])
+        numerator = prod(primitives(d + 1, invertible=invertible, q=q) - i
+                         for d in range(maximum_degree)
+                         for i in range(list_of_degrees.count(d + 1)))

(or maybe a falling factorial or Pochhammer symbol?)
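
For what it's worth, the inner product over i is just a falling factorial (Pochhammer-type product); a minimal sketch with hypothetical values, where P stands for primitives(d + 1, invertible=invertible, q=q) and k for list_of_degrees.count(d + 1):

from math import prod

P, k = 10, 4  # hypothetical values

# prod(P - i for i in range(k)) is the falling factorial P*(P-1)*...*(P-k+1)
assert prod(P - i for i in range(k)) == 10 * 9 * 8 * 7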

Iterator vs list for products and sums:

-        return prod([PT.statistic(func, q=q) for PT in self])
+        return prod(PT.statistic(func, q=q) for PT in self)
-            return sum([tau.statistic(stat, q=q)*tau.number_of_matrices(invertible=invertible, q=q) for tau in self])
+            return sum(tau.statistic(stat, q=q)
+                       * tau.number_of_matrices(invertible=invertible, q=q)
+                       for tau in self)
-            return sum([tau.statistic(stat, q=q)*tau.number_of_classes(invertible=invertible, q=q) for tau in self])
+            return sum(tau.statistic(stat, q=q)
+                       * tau.number_of_classes(invertible=invertible, q=q)
+                       for tau in self)
-            return sum([tau.statistic(stat, invertible=invertible, q=q) for tau in self])
+            return sum(tau.statistic(stat, invertible=invertible, q=q)
+                       for tau in self)
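
A generic illustration (not the ticket's code) of what these changes buy: sum and prod consume a generator expression lazily, so no intermediate list is materialized:

from math import prod

values = range(1, 6)

# the list comprehension builds the whole list before summing ...
assert sum([v * v for v in values]) == 55
# ... while the generator expression feeds sum/prod one term at a time
assert sum(v * v for v in values) == 55
assert prod(v + 1 for v in values) == 720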

Line break:

-        yield (tau.centralizer_group_card(q=q), tau.number_of_classes(invertible=invertible, q=q))
+        yield (tau.centralizer_group_card(q=q),
+               tau.number_of_classes(invertible=invertible, q=q))

Iterator vs list in sums and products:

-        return prod([ext_orbits(PT, q=q, selftranspose=selftranspose) for PT in tau])
+        return prod(ext_orbits(PT, q=q, selftranspose=selftranspose)
+                    for PT in tau)
-    return sum([tau.number_of_classes(invertible=invertible, q=q)*ext_orbits(tau, q=q, selftranspose=selftranspose) for tau in SimilarityClassTypes(n)])
+    return sum(tau.number_of_classes(invertible=invertible, q=q)
+               * ext_orbits(tau, q=q, selftranspose=selftranspose)
+               for tau in SimilarityClassTypes(n))
-            size = prod([list(entry)[0] for entry in item])
-            freq = prod([list(entry)[1] for entry in item])
+            size = prod(list(entry)[0] for entry in item)
+            freq = prod(list(entry)[1] for entry in item)

Grab pair components at iteration:

-        for pair in ext_orbit_centralizers(tau, q=q, selftranspose=selftranspose):
-            yield (q**tau.centralizer_algebra_dim()*pair[0], tau.number_of_classes(invertible=invertible, q=q)*pair[1])
+        for a, b in ext_orbit_centralizers(tau, q=q, selftranspose=selftranspose):
+            yield (q**tau.centralizer_algebra_dim()*a,
+                   tau.number_of_classes(invertible=invertible, q=q)*b)
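
The unpacking suggested above, shown on a standalone toy generator (hypothetical names) yielding 2-tuples:

def pairs():
    # stand-in for ext_orbit_centralizers(...), which yields pairs
    yield (2, 3)
    yield (5, 7)

# indexing into each pair ...
assert [(p[0] * 10, p[1]) for p in pairs()] == [(20, 3), (50, 7)]
# ... versus unpacking the components directly in the for clause
assert [(a * 10, b) for a, b in pairs()] == [(20, 3), (50, 7)]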

comment:3 Changed 19 months ago by Frédéric Chapoton

Sure, there is still work to do on this file. I don't really want to spend more time on it. The main goal is to get a more reasonable doctest.

comment:4 Changed 19 months ago by Samuel Lelièvre

Status: needs_review → positive_review

Then let's leave it at that for today.

comment:5 Changed 19 months ago by Volker Braun

Branch: u/chapoton/31520 → 39c3abf6b3611124c1b0faf5badbbddcc59a6d7e
Resolution: fixed
Status: positive_review → closed