#10963 closed enhancement (fixed)
Axioms and more functorial constructions
Reported by:  nthiery  Owned by:  stumpc5 

Priority:  major  Milestone:  sage-6.3 
Component:  categories  Keywords:  days54 
Cc:  sagecombinat, SimonKing, saliola, aschilling, vbraun, nbruin, zabrocki  Merged in:  
Authors:  Nicolas M. Thiéry  Reviewers:  Volker Braun, Nils Bruin, Peter Bruin, Frédéric Chapoton, Darij Grinberg, Florent Hivert, Simon King, Travis Scrimshaw 
Report Upstream:  N/A  Work issues:  To be merged simultaneously with #15801 
Branch:  c16f18b (Commits)  Commit:  
Dependencies:  #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394, #15150, #15506, #15757, #15759, #16244, #16269  Stopgaps: 
Description (last modified by )
This ticket implements:
 Support for full subcategories defined by an axiom (Finite, Infinite, Facade, Commutative, Associative, Unital, Inverse, Distributive, NoZeroDivisors, Division, FiniteDimensional, Connected, WithBasis, Irreducible), and joins thereof:
sage: Groups() & Sets().Finite()
Category of finite groups
sage: Algebras(QQ).Finite() & Monoids().Commutative()
Category of finite commutative algebras over Rational Field
sage: (Monoids() & CommutativeAdditiveGroups()).Distributive()
Category of rings
sage: Rings().Division() & Sets().Finite()
Category of finite fields
 New categories:
 AdditiveSemigroups, AdditiveMonoids, AdditiveGroups
 DistributiveMagmasAndAdditiveMagmas
 MagmaticAlgebras (will replace Algebras in #15043)
 AssociativeAlgebras
 UnitalAlgebras
 Algebras of additive semigroups and monoids
 More mathematical rules:
 A subquotient of a finite set is a finite set
 The algebra of a finite set is finite dimensional
 The algebra of a commutative magma is commutative
 A finite division ring is a field
 ...
 Documentation:
 More documentation for IsomorphicObjects
 Complete revamping of sage.categories.primer
 Misc
 Use SubcategoryMethods to put the functorial constructions where they belong. E.g. DualObjects, TensorProducts, and Graded are now only defined for subcategories of Modules.
 More lazy imports, removed a bunch of unused imports, ...
This ticket is dedicated to the town of Megantic where I was so warmly welcomed and a good chunk of this ticket got implemented!
Attachments (7)
Change History (807)
comment:1 Changed 6 years ago by
 Dependencies set to #11224
comment:2 Changed 6 years ago by
 Description modified (diff)
comment:3 Changed 6 years ago by
 Owner changed from nthiery to stumpc5
comment:4 Changed 6 years ago by
 Dependencies changed from #11224 to #11224, #8327
comment:5 Changed 6 years ago by
 Cc SimonKing added
comment:6 Changed 6 years ago by
 Status changed from new to needs_info
 Work issues set to Find dependencies
comment:7 Changed 6 years ago by
 Work issues changed from Find dependencies to Find dependencies. Finite dimensional vector spaces
Working with the combinat queue, I could do some first tests. What I find very strange is the fact that the category of vector spaces coincides with the category of finite-dimensional vector spaces:
sage: VectorSpaces(QQ).FiniteDimensional() is VectorSpaces(QQ)
True
Is that really intended? I thought that the idea of this ticket is to create new categories dynamically. Hence, even though there previously was no specific implementation of the category of finite dimensional vector spaces, the construction VectorSpaces(QQ).FiniteDimensional()
would automatically create one. Or am I misunderstanding something?
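The behaviour Simon expected here, namely that imposing an axiom dynamically creates a new, distinct (and cached) category, can be illustrated with a small toy model. This is plain Python with made-up names, not Sage's actual implementation:

```python
# Toy model (hypothetical, not Sage code): calling an axiom method
# returns a distinct category object, created on demand and cached.

class ToyCategory:
    _cache = {}

    def __init__(self, name, axioms=frozenset()):
        self.name = name
        self.axioms = axioms

    def _with_axiom(self, axiom):
        # Adding an axiom yields a *new* category, unless it is already present.
        key = (self.name, self.axioms | {axiom})
        if key not in ToyCategory._cache:
            ToyCategory._cache[key] = ToyCategory(self.name, self.axioms | {axiom})
        return ToyCategory._cache[key]

    def FiniteDimensional(self):
        return self._with_axiom("FiniteDimensional")

V = ToyCategory("vector spaces over QQ")
FV = V.FiniteDimensional()
print(FV is V)                       # False: a genuinely new category
print(FV.FiniteDimensional() is FV)  # True: adding the axiom twice is idempotent
```

In this toy model the two categories are distinct objects, which is the behaviour expected in the comment above.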
comment:8 Changed 5 years ago by
 Cc saliola added
comment:9 Changed 5 years ago by
 Cc aschilling added
comment:10 Changed 5 years ago by
Would you mind actually uploading the patch in question here?
comment:11 Changed 5 years ago by
 Description modified (diff)
comment:12 Changed 4 years ago by
Little update: after two good weeks of work, here is the status of the patch in the Sage-Combinat queue:
 100% doctested
 Passes all long tests (with the patches above it; well, there is a failure in sage_object.pyx, but it is caused by one of the patches above)
 Reasonably documented, but needs a pass of proofreading and an overview documentation
 Has not yet been tested for performance; but creating an instance of each of the categories in sage.categories (120 of them) takes less than 0.1s, so nothing horrible a priori.
 Needs some discussions for good naming conventions and later directions. I have made a precise list which I'll post later on.
 Has some trivial dependencies on unrelated patches that are close to positive review; I need to figure out the exact list.
In short: getting ready for review next week!
comment:13 Changed 4 years ago by
 Dependencies changed from #11224, #8327 to #11224, #8327, #10193 #12895, #14516, #14722
comment:14 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193 #12895, #14516, #14722 to #11224, #8327, #10193, #12895, #14516, #14722, #13589
 Work issues Find dependencies. Finite dimensional vector spaces deleted
comment:15 Changed 4 years ago by
Just for the record: I currently have applied
trac_12876_category_abstract_classes_for_hom.patch
trac11935_weak_pickling_by_constructionnt.patch
trac_11935weak_pickling_by_constructionreviewts.patch
trac_12895subcategorymethodsnt.patch
trac_12895review.patch
trac_10193graded_setsrebased.patch
trac_10193graded_setsreviewts.patch
trac_13589categoriesc3_under_controlnt.patch
trac13589_cmp_key_attribute.patch
trac13589_improve_startuptime.patch
trac_12630_quivers_v2.patch
trac12630_refactor_code.2.patch
trac_14722lazy_import_at_startupnt.patch
trac_14266_ascii_art_13_05_15_EliXjbp.patch
trac_14266ascii_artreviewts.patch
trac_14266_terminal_width.patch
trac_14402tensor_product_infinite_crystalsts.patch
trac_14143alcovepathal.3.patch
trac_14413elementary_crystalsbs.patch
trac_14516crystals_speedupts.patch
on top of sage-5.10.rc1 (I think these are all dependencies).
So, as soon as Nicolas tells me how to get the patch from git and what is meant by "and followups", I can start reviewing!
comment:16 Changed 4 years ago by
With the list I gave above, the patch does not apply. Part of the blame might lie with the latest version of my trac13589_improve_startuptime.patch. So, let's try to remove this. But there seem to be further dependencies.
comment:17 Changed 4 years ago by
Even when I remove trac13589_improve_startuptime.patch, I still get 4 mismatches and 1 noise in category.py, 4 mismatches in category_singleton.pyx, and 1 mismatch and 2 noises in c3_controlled.pyx.
comment:18 Changed 4 years ago by
 Reviewers set to Simon King
 Status changed from needs_info to needs_review
Back at work. These patches on top of sage-5.11.b3 do apply:
trac_14516crystals_speedupts.2.patch
trac_14722lazy_import_at_startupnt.patch
trac_13589categoriesc3_under_controlnt.patch
trac_10963more_functorial_constructionsnt.patch
(the last patch applies with a little fuzz)
However, if we decide to include the two additional patches from #13589, then the last patch needs to be rebased.
For now, I'll try without the two additional patches, since they only concern performance (and seem to have disappointingly little effect).
comment:19 Changed 4 years ago by
Hi Simon!
Great that the patches apply.
I am happy to handle the rebase on top of the extra patches for #13589. I also have some modifications in primer.py that I need to finish merging. I'll try to finish this today. I guess there is enough to review elsewhere to keep you busy until then :)
Thanks a lot!
Cheers,
Nicolas
comment:20 Changed 4 years ago by
 Work issues set to Rebase wrt. #13589
comment:21 followup: ↓ 22 Changed 4 years ago by
Just to make sure I understand correctly: During __init__ of a group algebra, only the coercion from the group is registered, since the coercion from the base ring is registered during __init_extra__, which is obtained from the category?
comment:22 in reply to: ↑ 21 Changed 4 years ago by
Replying to SimonKing:
Just to make sure I understand correctly: During __init__ of a group algebra, only the coercion from the group is registered, since the coercion from the base ring is registered during __init_extra__, which is obtained from the category?
Yes indeed!
There is nothing specific to do for GroupAlgebras about this feature, since it's already provided by Algebras.
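The mechanism discussed in this exchange can be sketched in plain Python. This is a toy illustration with made-up class names, not the actual Sage Parent/category code: the parent's __init__ registers only the coercion it knows about, while a category-provided mixin contributes an __init_extra__ hook that registers the base-ring coercion.

```python
# Toy sketch: the category contributes parent methods via a mixin,
# and the parent's __init__ ends by calling the __init_extra__ hook.

class AlgebrasParentMethods:
    # stands in for the parent methods provided by the Algebras category
    def __init_extra__(self):
        self.coercions.append("base ring -> algebra")

class ToyGroupAlgebra(AlgebrasParentMethods):
    def __init__(self):
        # the parent itself only knows about the coercion from the group
        self.coercions = ["group -> group algebra"]
        # stand-in for Sage's Parent.__init__ calling the category hook:
        self.__init_extra__()

A = ToyGroupAlgebra()
print(A.coercions)
# ['group -> group algebra', 'base ring -> algebra']
```

The point of the design is that ToyGroupAlgebra needs no code of its own for the base-ring coercion; it is inherited from the category mixin.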
comment:23 followup: ↓ 24 Changed 4 years ago by
There is now a category of non-associative algebras. But that's misleading, because it certainly contains all associative algebras too, doesn't it? I'd say that "non-associative non-unital (non-commutative) (non-finite-dimensional) algebras" should simply be "algebras".
In other words, I am against mentioning the absence of an axiom in the category name. Only the presence of an axiom should play a role.
comment:24 in reply to: ↑ 23 ; followup: ↓ 25 Changed 4 years ago by
Replying to SimonKing:
There is now a category of non-associative algebras. But that's misleading, because it certainly contains all associative algebras too, doesn't it? I'd say that "non-associative non-unital (non-commutative) (non-finite-dimensional) algebras" should simply be "algebras".
In other words, I am against mentioning the absence of an axiom in the category name. Only the presence of an axiom should play a role.
Yeah, that's been a recurrent issue. I agree that this is not nice, even though it's relatively common practice in maths to label as "non-foo things" the larger field of study where one is interested in things that are "not necessarily foo". For non-associative non-unital algebras, Florent mentioned yesterday that "magmatic algebras" is fairly standard, and I am happy to go with it. Do you have a better name for "non-unital algebras"? I am not really keen on "NotNecessarilyUnitalAlgebras".
Cheers,
comment:25 in reply to: ↑ 24 ; followup: ↓ 26 Changed 4 years ago by
Replying to nthiery:
Do you have a better name for "non-unital algebras"? I am not really keen on "NotNecessarilyUnitalAlgebras".
Yes. A not necessarily unital, not necessarily associative, not necessarily finite-dimensional, not necessarily Noetherian, not necessarily ... algebra is commonly known as an algebra.
In other words, I suggest naming the categories exactly parallel to the axioms they provide. Actually, before reading your patch, I thought that you aimed to automatically create a category of "associative algebras", given the category of algebras and the axiom "associative".
Hence, I think it should be
               algebras
              /        \
  associative algebras  unital algebras
              \        /
       associative unital algebras
and similar for commutative algebras, commutative associative algebras, commutative associative unital algebras, and so on.
comment:26 in reply to: ↑ 25 ; followup: ↓ 27 Changed 4 years ago by
Replying to SimonKing:
Yes. A not necessarily unital, not necessarily associative, not necessarily finite-dimensional, not necessarily Noetherian, not necessarily ... algebra is commonly known as an algebra.
Yup, and that's indeed what Wikipedia says, which is a good point. However, in many textbooks and other pieces of literature "algebra" implicitly includes "associative" and "unital" (for the same reason that it would be heavy for us to write Algebras().Associative().Unital() almost everywhere).
More importantly: changing the semantics of the current "Algebras" in Sage would be seriously backward-incompatible. And we would have to think about what we want to do about categories like "HopfAlgebras" to keep things consistent.
So I definitely see your point, but at this point I am not keen on opening yet another can of worms (both technical and social) for this already too big patch.
Actually, before reading your patch, I thought that you aim to automatically create a category of "associative algebras", given the category of algebras and the axiom "associative".
Up to the names, that's precisely what it's doing :)
Hence, I think it should be
               algebras
              /        \
  associative algebras  unital algebras
              \        /
       associative unital algebras
What about, at least as a temporary measure, going for:
                  magmatic algebras
                 /                 \
  associative magmatic algebras   unital magmatic algebras
                 \                 /
                      algebras
(or any other not-yet-used name you like instead of "magmatic algebra")
Cheers,
Nicolas
comment:27 in reply to: ↑ 26 ; followup: ↓ 28 Changed 4 years ago by
Replying to nthiery:
However in many textbooks and other pieces of literature "algebra" implicitly includes "associative" and "unital"
Certainly there also exist textbooks that will for simplicity say "algebra" when they in fact mean "commutative algebra". But I would expect that all these textbooks state at some point the definition of (plain) algebras and later say that "for simplicity" or "unless stated otherwise" they assume whatever additional axioms.
And even "better": There were times when a certain algebraic community would only talk about finite groups. I recently heard colleagues talk about these times. It was like "they provided generators and relations and then needed to prove that it is a group", which in today's language is "they provided a group presentation and needed to prove that the group is finite".
You see: There are certain conventions peculiar to certain fields of research.
But I think a general computer algebra system should not be biased towards any of these peculiar conventions. Hence, it should use the "greatest common divisor" of the notions, which is: An R-algebra is an R-module and a multiplicative magma, such that multiplication is R-bilinear.
(for the same reason that it would be heavy for us to write Algebras().Associative().Unital() almost everywhere).
We can certainly have a shortcut for defining this thing.
More importantly: changing the semantics of the current "Algebras" in Sage would be seriously backward-incompatible.
Backward compatibility is indeed important. It would be difficult to switch from Algebras in the current Sage use to Algebras in the (I think) normal mathematical use.
However, I do think that at the very least we should let Algebras() print as "Category of unital associative algebras".
And we would have to think about what we want to do about categories like "HopfAlgebras" to keep things consistent.
Wikipedia does not assume associativity for algebras, but it does assume coassociativity for coalgebras. Weird.
So I definitely see your point but at this point I am not keen on opening yet another can of worms (both technical and social) to this already too big patch.
Concerning the social aspect: I vividly remember many talks in the séminaire quantique in Strasbourg, entitled along the lines of "quasi-commutative quasi-cocommutative quasi-Hopf algebras". I think these people would be unhappy about tacitly assuming too many axioms for algebras. And I just checked: there also is the notion of quasi-associative algebras in the literature...
What about, at least as a temporary measure, going for:
                  magmatic algebras
                 /                 \
  associative magmatic algebras   unital magmatic algebras
                 \                 /
                      algebras
(or any other not-yet-used name you like instead of "magmatic algebra")
I have never heard about "magmatic algebras" before. But I have no better idea ("plain algebras"?).
Sage-devel poll? Sage-algebra poll (although this list seems dead)? Sage-combinat-devel poll?
comment:28 in reply to: ↑ 27 Changed 4 years ago by
Replying to SimonKing:
Sage-devel poll? Sage-algebra poll (although this list seems dead)? Sage-combinat-devel poll?
I think a CC to all three is in order in this case. I'll try to launch the poll tomorrow.
Thanks for your work on the review!
comment:29 Changed 4 years ago by
 Work issues changed from Rebase wrt. #13589 to Finish merging some changes in the primer
Patch rebased on top of #13589
comment:30 Changed 4 years ago by
The patch applies with fuzz, but it does apply.
comment:31 followup: ↓ 46 Changed 4 years ago by
Why is there a double-underscore __neg__ method as an element method of additive groups? The reason for the single-underscore arithmetic methods is, to my understanding, to enable the coercion model. But the coercion model is not involved in the case of __neg__, is it? Hence, I think one should keep it double underscore, and should not ask for an implementation via a single-underscore method.
comment:32 Changed 4 years ago by
Mental note: A lot of things happen when joining categories. I recall that in some examples forming the join of categories was the reason for slowness in algebraic constructions. Hence, we should have a look at the speed.
comment:33 Changed 4 years ago by
I see that you define a method _cmp_key(self) for join categories that just says one shouldn't call it on join categories. That's bad, because _cmp_key is meanwhile a lazy attribute (or an optimized version of a lazy attribute); hence, it is not a method anyway. Can we remove this method?
comment:34 Changed 4 years ago by
You ask:
# TODO: find a better way to check that cls is an abstract class
What about a class attribute? Something like
"abstract" in cls.__base__.__dict__
Or is it generally not the case that cls.__base__ coincides with cls.__mro__[1]?
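The question whether cls.__base__ coincides with cls.__mro__[1] can be settled with a small experiment: the two differ as soon as one base contributes a non-trivial instance layout, since __base__ is CPython's "best base" (chosen by layout) while __mro__[1] is simply the next class in the method resolution order. A minimal, self-contained demonstration with hypothetical class names:

```python
# cls.__base__ vs cls.__mro__[1]: they do not always coincide.

class Plain:                 # no special instance layout (plain object)
    pass

class IntLike(int):          # instance layout inherited from int
    pass

class Both(Plain, IntLike):  # MRO: Both, Plain, IntLike, int, object
    pass

print(Both.__mro__[1])   # Plain: next class in the MRO
print(Both.__base__)     # IntLike: chosen as "best base" for its int layout
```

So a check based on cls.__base__.__dict__ would not, in general, look at the same class as a check based on cls.__mro__[1].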
comment:35 followup: ↓ 37 Changed 4 years ago by
make ptest resulted in
sage -t devel/sage/sage/geometry/polyhedron/plot.py  # 1 doctest failed
sage -t devel/sage/sage/categories/category.py  # 3 doctests failed
sage -t devel/sage/sage/quivers/free_small_category.py  # 2 doctests failed
sage -t devel/sage/sage/categories/category_with_axiom.py  # 1 doctest failed
comment:36 Changed 4 years ago by
I don't see why the patchbot had trouble applying the patch. Let's kick it:
Apply trac_10963more_functorial_constructionsnt.patch
comment:37 in reply to: ↑ 35 Changed 4 years ago by
Replying to SimonKing:
make ptest resulted in
sage -t devel/sage/sage/geometry/polyhedron/plot.py  # 1 doctest failed
sage -t devel/sage/sage/categories/category.py  # 3 doctests failed
sage -t devel/sage/sage/quivers/free_small_category.py  # 2 doctests failed
sage -t devel/sage/sage/categories/category_with_axiom.py  # 1 doctest failed
Yes, as you can see, I have #12630 applied as well. But it only introduces new modules and does not interfere with old modules. Hence, I don't think the errors come from it.
comment:38 followup: ↓ 45 Changed 4 years ago by
Here are the failures in detail: The first is noise:
sage -t devel/sage/sage/geometry/polyhedron/plot.py
    [178 tests, 5.55 s]
----------------------------------------------------------------------
All tests passed!
----------------------------------------------------------------------
Total time for all tests: 5.8 seconds
    cpu time: 4.3 seconds
    cumulative wall time: 5.6 seconds
The second:
sage -t devel/sage/sage/categories/category.py
**********************************************************************
File "devel/sage/sage/categories/category.py", line 1940, in sage.categories.category.Category.join
Failed example:
    type(TCF)
Expected:
    <class 'sage.categories.category_with_axiom.TestObjects.Commutative.Facade_with_category'>
Got:
    <class 'sage.categories.category_with_axiom.Commutative.Facade_with_category'>
**********************************************************************
File "devel/sage/sage/categories/category.py", line 1950, in sage.categories.category.Category.join
Failed example:
    type(TCF)
Expected:
    <class 'sage.categories.category_with_axiom.TestObjects.Commutative.FiniteDimensional_with_category'>
Got:
    <class 'sage.categories.category_with_axiom.Commutative.FiniteDimensional_with_category'>
**********************************************************************
File "devel/sage/sage/categories/category.py", line 1963, in sage.categories.category.Category.join
Failed example:
    type(TUCF)
Expected:
    <class 'sage.categories.category_with_axiom.TestObjects.FiniteDimensional.Unital.Commutative_with_category'>
Got:
    <class 'sage.categories.category_with_axiom.Unital.Commutative_with_category'>
**********************************************************************
1 item had failures:
    3 of 47 in sage.categories.category.Category.join
[388 tests, 3 failures, 6.89 s]
----------------------------------------------------------------------
sage -t devel/sage/sage/categories/category.py  # 3 doctests failed
----------------------------------------------------------------------
Total time for all tests: 7.3 seconds
    cpu time: 6.8 seconds
    cumulative wall time: 6.9 seconds
The third needs to be taken care of only if #12630 finally gets a review.
The last one:
sage -t devel/sage/sage/categories/category_with_axiom.py
**********************************************************************
File "devel/sage/sage/categories/category_with_axiom.py", line 755, in sage.categories.category_with_axiom.CategoryWithAxiom.__reduce__
Failed example:
    C.__class__
Expected:
    <class 'sage.categories.distributive_magmas_and_additive_magmas.DistributiveMagmasAndAdditiveMagmas.AdditiveAssociative.AdditiveCommutative_with_category'>
Got:
    <class 'sage.categories.distributive_magmas_and_additive_magmas.AdditiveAssociative.AdditiveCommutative_with_category'>
**********************************************************************
1 item had failures:
    1 of 8 in sage.categories.category_with_axiom.CategoryWithAxiom.__reduce__
[179 tests, 1 failure, 0.25 s]
So, nothing dramatic.
comment:39 Changed 4 years ago by
 Status changed from needs_review to needs_work
 Work issues changed from Finish merging some changes in the primer to Reduce startup time by 5%. Avoid "recursion depth exceeded (ignored)". Trivial doctest fixes.
Patchbot finds
sage -t --long /mnt/storage2TB/patchbot/Sage/sage-5.11.beta3/devel/sage/sage/rings/number_field/number_field.py  # 1 doctest failed
sage -t --long /mnt/storage2TB/patchbot/Sage/sage-5.11.beta3/devel/sage/doc/ru/tutorial/tour_groups.rst  # 1 doctest failed
sage -t --long /mnt/storage2TB/patchbot/Sage/sage-5.11.beta3/devel/sage/sage/geometry/polyhedron/plot.py  # 1 doctest failed
sage -t --long /mnt/storage2TB/patchbot/Sage/sage-5.11.beta3/devel/sage/sage/categories/category.py  # 3 doctests failed
sage -t --long /mnt/storage2TB/patchbot/Sage/sage-5.11.beta3/devel/sage/sage/categories/category_with_axiom.py  # 1 doctest failed
This sounds serious:
sage -t --long /mnt/storage2TB/patchbot/Sage/sage-5.11.beta3/devel/sage/sage/rings/number_field/number_field.py
**********************************************************************
File "/mnt/storage2TB/patchbot/Sage/sage-5.11.beta3/devel/sage/sage/rings/number_field/number_field.py", line 309, in sage.rings.number_field.number_field.?
Failed example:
    RR.coerce_map_from(K)
Expected:
    Composite map:
      From: Number Field in a with defining polynomial x^3 - 2
      To:   Real Field with 53 bits of precision
      Defn: Generic morphism:
              From: Number Field in a with defining polynomial x^3 - 2
              To:   Real Lazy Field
              Defn: a |--> 1.259921049894873?
            then
              Conversion via _mpfr_ method map:
              From: Real Lazy Field
              To:   Real Field with 53 bits of precision
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <function remove at 0x2820668> ignored
    Composite map:
      From: Number Field in a with defining polynomial x^3 - 2
      To:   Real Field with 53 bits of precision
      Defn: Generic morphism:
              From: Number Field in a with defining polynomial x^3 - 2
              To:   Real Lazy Field
              Defn: a |--> 1.259921049894873?
            then
              Conversion via _mpfr_ method map:
              From: Real Lazy Field
              To:   Real Field with 53 bits of precision
File "/mnt/storage2TB/patchbot/Sage/sage-5.11.beta3/devel/sage/doc/ru/tutorial/tour_groups.rst", line 14, in doc.ru.tutorial.tour_groups
Failed example:
    G = PermutationGroup(['(1,2,3)(4,5)', '(3,4)'])
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <function remove at 0xfe16e0> ignored
File "/mnt/storage2TB/patchbot/Sage/sage-5.11.beta3/devel/sage/sage/geometry/polyhedron/plot.py", line 461, in sage.geometry.polyhedron.plot.Projection.__init__
Failed example:
    p = polytopes.icosahedron()
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while getting the str of an object' in <function remove at 0x2e03e60> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while getting the str of an object' in <function remove at 0x2e03e60> ignored
And the result of the startup time plugin is also a bad news.
+Average increase of 0.057 secs or 8.1%.
+With 100% confidence, startup time increased by at least 5%
+With 100% confidence, startup time increased by at least 2.5%
+With 100% confidence, startup time increased by at least 1%
+With 100% confidence, startup time increased by at least 0.5%
+With 100% confidence, startup time increased by at least 0.25%
comment:40 Changed 4 years ago by
There is a naked
assert False
in sage.categories.category.Category.__init__, followed(!) by a deprecation warning. How can the warning ever appear after asserting a false statement?
comment:41 Changed 4 years ago by
I have a question on how to implement a new category with support of axioms.
If I understand correctly, the method _with_axiom_categories tells which categories need to be added to self in order to get the category with the axiom added. Is one supposed to overload this method? Sorry, I did not read your patch completely yet. Is this question answered somewhere? If not, I think it must be added to a tutorial.
comment:42 Changed 4 years ago by
Here is some cryptic phrase from category_with_axiom:
The later two are implemented using respectively :meth:`__classcall__` and :meth:`__classget__` which see.
It ends in the middle of the phrase.
comment:43 Changed 4 years ago by
I try to summarize how I understand how the patch works; so, please correct me if I misunderstood.
Generally, adding an axiom A to a category C means forming the join of C with C._with_axiom_categories(A), unless there is a class getattr(C, A); for example, Magmas().Associative is Semigroups. The categories returned by C._with_axiom_categories(A) would, for example, provide new parent and element methods (such as: prod() only makes sense in an associative magma, _test_one() only makes sense for unital magmas).
If I understand correctly, the same "axiom category" (here: Magmas.Associative) is available to all subcategories of Magmas(), because it is defined in Magmas.SubcategoryMethods. Or am I mistaken and it is not the same? Is another dynamic class involved here? This could also give rise to slowness.
Apart from hard-coded cases, the rôle of JoinCategory has generally increased, and this indicates it might make sense to spend some optimization effort there. A highly significant increase of at least 5% in startup time is rather much. I don't know if JoinCategory is the only problem here.
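The lookup summarized in this comment can be sketched roughly as follows. This is plain Python with made-up names (the real code is considerably more involved): first look for a dedicated axiom class attached to the category's class, and only fall back to a join when there is none.

```python
# Toy sketch of axiom resolution: a hard-coded axiom class wins,
# otherwise we would fall back to forming a join.

class Magmas:
    class Associative:
        # stands in for the category of semigroups, the hard-coded
        # target of Magmas().Associative()
        name = "Semigroups"

def with_axiom(category_class, axiom):
    axiom_class = getattr(category_class, axiom, None)
    if axiom_class is not None:
        return axiom_class.name
    # fallback: join of the category with its axiom categories
    return "join of %s with its %s axiom categories" % (
        category_class.__name__, axiom)

print(with_axiom(Magmas, "Associative"))  # Semigroups
print(with_axiom(Magmas, "Unital"))
```

The nested-class pattern also explains why the same axiom category is visible to all subcategories: attribute lookup on the class walks the MRO.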
comment:44 followup: ↓ 47 Changed 4 years ago by
If I understand correctly, the reason for creating a JoinCategory is to get the correct supercategories. But there are alternative ways to get the supercategories right. I could imagine using a dynamic class instead. So, the aim of this post is to present an alternative approach that avoids joins.
If C is a category and one wants to have C.MyAxiom(), then I suggest creating a dynamic class cls out of C.__class__ (and perhaps also using the class C.MyAxiom?), and setting a class attribute cls._used_axioms, which is a (frozen) set formed from C.__class__._used_axioms and "MyAxiom".
Note: The order in which the axioms are given should not matter. Hence, the dynamic class should be cached by a class that has no axioms and by C.__class__._used_axioms.
We would like to call cls with the same __init__ arguments that were used for creating C. So, how do we get the init data? No problem, since C uses UniqueRepresentation! For example:
sage: C = Bimodules(ZZ, QQ)
sage: C._reduction
(sage.categories.bimodules.Bimodules, (Integer Ring, Rational Field), {})
So, C.MyAxiom() would eventually do something like this:
cls = dynamic_class("MyAxiom" + C.__class__.__name__,
                    (C.__class__, C.MyAxiom),
                    C.__class__, <take care of caching>)
return cls(*(C._reduction[1][0]), **(C._reduction[1][1]))
Note that by way of caching the dynamic class, I guess the above would automatically cover the corner case that C.__class__._used_axioms contains "MyAxiom". Namely, in this case, cls is C.__class__ by means of caching the dynamic class, and then cls(*..., **...) coincides with C, since it is a UniqueRepresentation.
By explicitly overloading the cache of the dynamic class, one could even ensure that DivisionRings.Finite() returns Fields.Finite(), I guess.
Let's denote C2 = C.MyAxiom(). And then the critical question is: How do we determine the super categories of C2?
I guess for each axiom A in C2.__class__._used_axioms, we want to return C2._without_axiom(A), and we want to return D._with_axiom(A) for all D in C2._without_axiom(A).super_categories(), of course removing duplicates.
So, there only remains to answer: What is C2._without_axiom(A)?
Again, we can use C2._reduction to get the input data, but how do we get the class of D = C2._without_axiom(A)? Note that C2 might have several axioms, and we do not order the axioms.
However, we know what D.__class__._used_axioms is supposed to look like: It is C2.__class__._used_axioms.difference(["MyAxiom"]).
Thus, we get something like this:
@cached_method
def _without_axiom(self, axiom):
    if axiom not in self.__class__._used_axioms:
        <raise some error>
    new_axioms = self.__class__._used_axioms.difference([axiom])
    for cls in self.__class__.__mro__:
        if getattr(cls, "_used_axioms", None) == new_axioms:
            break
    if cls is object:
        <raise some error>
    return cls(*(self._reduction[1][0]), **(self._reduction[1][1]))
Do you think this would make sense?
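A runnable toy version of the proposal above may help make it concrete. Everything here (ToyCategory, _with_axiom, _used_axioms, the two caches) is a heavily simplified stand-in, not Sage's actual API; the point is only to show that caching dynamic classes by (axiom-free base class, frozenset of axioms) makes axiom order irrelevant and axiom addition idempotent.

```python
# Toy model of axiom bookkeeping via cached dynamic classes.

_instances = {}      # stand-in for UniqueRepresentation's cache
_axiom_classes = {}  # dynamic classes, keyed by (axiom-free base, axioms)

class ToyCategory:
    _used_axioms = frozenset()

    def __new__(cls, *args):
        key = (cls, args)
        if key not in _instances:
            obj = object.__new__(cls)
            obj._init_args = args
            _instances[key] = obj
        return _instances[key]

    @classmethod
    def _axiom_free_class(cls):
        # the first class in the MRO carrying no axioms
        for klass in cls.__mro__:
            if getattr(klass, "_used_axioms", None) == frozenset():
                return klass
        raise TypeError("no axiom-free base found")

    def _with_axiom(self, axiom):
        base = self._axiom_free_class()
        axioms = self._used_axioms | {axiom}      # a frozenset: unordered
        key = (base, axioms)
        if key not in _axiom_classes:
            name = base.__name__ + "_" + "_".join(sorted(axioms))
            # derive from the current class so intermediate axiom
            # classes' methods would be inherited
            _axiom_classes[key] = type(name, (self.__class__,),
                                       {"_used_axioms": axioms})
        return _axiom_classes[key](*self._init_args)

class VectorSpaces(ToyCategory):
    pass

V = VectorSpaces("QQ")
FD = V._with_axiom("FiniteDimensional")
FDC = FD._with_axiom("Commutative")
CFD = V._with_axiom("Commutative")._with_axiom("FiniteDimensional")
print(FD is V)     # False: a new category
print(FDC is CFD)  # True: the order of the axioms does not matter
print(FD._with_axiom("FiniteDimensional") is FD)  # True: idempotent
```

As in the proposal, one could pre-seed _axiom_classes to redirect a key, for instance mapping the "division rings + Finite" key to a fields class.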
comment:45 in reply to: ↑ 38 Changed 4 years ago by
Replying to SimonKing:
sage -t devel/sage/sage/categories/category.py
**********************************************************************
File "devel/sage/sage/categories/category.py", line 1940, in sage.categories.category.Category.join
Failed example:
    type(TCF)
Expected:
    <class 'sage.categories.category_with_axiom.TestObjects.Commutative.Facade_with_category'>
Got:
    <class 'sage.categories.category_with_axiom.Commutative.Facade_with_category'>
Ah yes, good point: I have #9107 in my queue. So we will have to either add it as a dependency if it gets ready soon, or update the doctests. As you point out, nothing dramatic.
comment:46 in reply to: ↑ 31 Changed 4 years ago by
Replying to SimonKing:
Why is there a double-underscore __neg__ method as an element method of additive groups? The reason for the single-underscore arithmetic methods is, to my understanding, to enable the coercion model. But the coercion model is not involved in the case of __neg__, is it? Hence, I think one should keep it double underscore, and should not ask for an implementation via a single-underscore method.
All I did was lift this method from sage.structure.element.ModuleElement, as a step toward deprecating this class.
I agree that the _neg_ feature itself is questionable (it has no purpose besides consistency). So one could think about removing it (and fixing the couple of modules in Sage that implement _neg_). But that would require a discussion on sage-devel and is in any case a matter for a different ticket.
For this ticket, do you think I should add a little comment about this in the doc?
comment:47 in reply to: ↑ 44 ; followups: ↓ 48 ↓ 49 ↓ 50 Changed 4 years ago by
Hi Simon!
Thanks for all your work on the review of this ticket! I am currently on vacations, so my answers might be slow.
Replying to SimonKing:
If I understand correctly, the reason for creating a JoinCategory is to get the correct supercategories.
The reason to call "join" is indeed to get the correct supercategories for C.MyAxiom(). Note that, on the other hand and unless I screwed up somewhere, there should be no JoinCategory produced (unless of course the end result of C.MyAxiom() itself is such a JoinCategory).
But there are alternative ways to get the supercategories right. I could imagine using a dynamic class instead. So, the aim of this post is to present an alternative approach that avoids joins.
In general, I agree that joins are called quite often and it would be nice to optimize them and/or call them less often. However, I think we really want to call a join to get the full power of the architecture. Imagine for example that:
 C is a super category of A and B
 A.MyAxiom() implies A.MyOtherAxiom()
 B.MyOtherAxiom() is non-trivial
Then we want C.MyAxiom().super_categories()
to automatically
include B.MyOtherAxiom()
, for otherwise we would need to
basically replicate the information that A.MyAxiom()
implies
A.MyOtherAxiom()
over and over in subcategories, and this would
not scale.
Handling this kind of stuff is precisely the core of the logic in join. So if you see a way to optimize the computation of the super categories of C.MyAxiom() while preserving the above feature, then I believe you actually have found a way to optimize join in the first place.
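The deduction Nicolas describes can be illustrated with a toy sketch in plain Python (not Sage's actual machinery; the axiom names and the axiom_closure helper are hypothetical): an implication declared once on a supercategory A propagates automatically through a closure computation, which is essentially the work the join has to do.

```python
# Toy model (plain Python, not Sage internals).  A rule declared once on
# a supercategory A -- "MyAxiom implies MyOtherAxiom" -- must reach every
# subcategory C, so C.MyAxiom() has to recompute the closure rather than
# having the rule restated in every subcategory.

def axiom_closure(axioms, rules):
    """Close a set of axioms under rules of the form (premises, conclusion)."""
    closure = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= closure and conclusion not in closure:
                closure.add(conclusion)
                changed = True
    return closure

rules = [(frozenset({"MyAxiom"}), "MyOtherAxiom")]  # declared once, on A

print(sorted(axiom_closure({"MyAxiom"}, rules)))
# ['MyAxiom', 'MyOtherAxiom']
```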
Cheers,
Nicolas
PS: let's keep in mind this idea of using the reduction. It could indeed be that it could be used in a place or two to simplify the logic.
comment:48 in reply to: ↑ 47 Changed 4 years ago by
Replying to nthiery:
Thanks for all your work on the review of this ticket! I am currently on vacations, so my answers might be slow.
So am I. Perhaps I can try to provide a proof of concept, IF I manage to deal with the scenario that you mentioned.
Have nice holidays!
Simon
comment:49 in reply to: ↑ 47 Changed 4 years ago by
Replying to nthiery:
The reason to call "join" is indeed to get the correct supercategories for C.MyAxiom(). Note that, on the other hand and unless I screwed up somewhere, there should be no JoinCategory produced (unless of course the end result of C.MyAxiom() itself is such a JoinCategory).
Really? So, then I was misled by a couple of doctests that demonstrate that a certain category is in fact a join category, even though it is not printed as such, and also misled by the code that uses self._with_axiom_categories(...), which I thought does in fact form a join.
comment:50 in reply to: ↑ 47 ; followup: ↓ 55 Changed 4 years ago by
Hi Nicolas,
Replying to nthiery:
- C is a super category of A and B
- A.MyAxiom() implies A.MyOtherAxiom()
- B.MyOtherAxiom() is non trivial
I suppose you mean: C is a subcategory of A and B.
What is an axiom?
First of all, I wonder if we have the same understanding of "axiom". For me, an axiom is defined in terms of an algebraic structure that is provided by a certain category without this axiom. In particular, A.Associative() is actually not well-defined: One should in theory have A.MultiplicativeAssociative(), where MultiplicativeAssociative is provided by Magmas(), or A.AdditiveAssociative(), where AdditiveAssociative is provided by AdditiveMagmas().
Granted, if A = Algebras(ZZ), then A.Associative() should be a synonym of A.MultiplicativeAssociative(). So, we might want to introduce reasonable shortcuts in some cases.
Your Example
Now, in your example, if MyAxiom is defined for both A and B, then the meet of A and B is a subcategory of a category M, for which MyAxiom and MyOtherAxiom are defined. In your example, MyAxiom implies MyOtherAxiom for A but not for B. Hence, A can be written as a subcategory of M.SpecialAxiom(), and SpecialAxiom together with MyAxiom implies MyOtherAxiom.
Now, you consider a category C defined by C.__class__(data), which is a common subcategory of A and B, and you wonder about the supercategories of C.MyAxiom().
Since A satisfies SpecialAxiom, C satisfies it as well. Hence, D = C.MyAxiom() will also satisfy MyOtherAxiom. I guess the logic of this implication is encoded in the way D.__class__._used_axioms is determined. Hence, D.__class__._used_axioms contains SpecialAxiom, MyAxiom and MyOtherAxiom.
In a previous post, I presented an algorithm for determining D.super_categories(). Let us study what it will return. Recall that it returns D._without_axiom(axiom) for all axiom in D.__class__._used_axioms, after removing duplicates. Hence:
- axiom=SpecialAxiom: We go along the mro of D.__class__ until we find something that does not have SpecialAxiom. This will be a certain supercategory X of C that is a subcategory of B (supposing that B does not satisfy SpecialAxiom). This will result in X....MyAxiom().MySpecialAxiom(), applying all axioms (except SpecialAxiom) that hold for D but not for X.
- axiom=MyAxiom: This will yield C.MyOtherAxiom().
- axiom=MyOtherAxiom: This will yield C.MyAxiom(), which coincides with D and is thus removed as a duplicate.
Note that in this explanation I have modified my previous suggestion for D._without_axiom(this_axiom): We cannot expect that D.__class__.__mro__ provides something that has the same axioms as D, with just this_axiom omitted. But since applying axioms commutes, it is fine to take the first class in D.__class__.__mro__ that does not have this_axiom, then create a category from this class (with the known input data of D), and apply all other missing axioms.
Anyway, I think that the above answer for C.MyAxiom().super_categories() looks fine. Or what else would you like to have?
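The omit-one-axiom procedure described above can be sketched in plain Python (a toy model, not Sage code; the close and super_axiom_sets helpers are hypothetical, and the axiom names are the ones from this discussion):

```python
# Toy model of the procedure: SpecialAxiom together with MyAxiom implies
# MyOtherAxiom; omit each axiom in turn, re-close, drop duplicates.

def close(axioms, rules):
    """Close a set of axioms under rules of the form (premises, conclusion)."""
    closure = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= closure and conclusion not in closure:
                closure.add(conclusion)
                changed = True
    return frozenset(closure)

def super_axiom_sets(axioms, rules):
    """Omit each axiom in turn and re-close; discard results equal to the
    full axiom set (those are duplicates of the category itself)."""
    full = close(axioms, rules)
    supers = []
    for a in sorted(full):
        candidate = close(full - {a}, rules)
        if candidate != full and candidate not in supers:
            supers.append(candidate)
    return supers

rules = [(frozenset({"SpecialAxiom", "MyAxiom"}), "MyOtherAxiom")]
for s in super_axiom_sets({"SpecialAxiom", "MyAxiom"}, rules):
    print(sorted(s))
# ['MyOtherAxiom', 'SpecialAxiom']
# ['MyAxiom', 'MyOtherAxiom']
```

Dropping MyOtherAxiom re-derives the full set (a duplicate, removed), while dropping either of the other two axioms yields a genuine supercategory, as in the prose above.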
Cheers,
Simon
comment:51 followup: ↓ 52 Changed 4 years ago by
I guess I should rethink the above in a more concrete scenario. Let D = DivisionRings(). What do we do with D.Finite()?
Would we agree on D = Rings().WithMultiplicativeInverses()? I guess we would obtain Fields() = D.Commutative(). So, as in the situation above, we have the rule that if WithMultiplicativeInverses() is applied to Rings(), then the additional axiom Finite() implies the axiom Commutative().
Hence, D.Finite() yields Fields().Finite() = FiniteFields(). To be discussed: Should this be created dynamically, or should there be a hardcoded separate class definition?
So, what would FiniteFields().super_categories() return by the algorithm I presented above?
- Omit Commutative: We still have the axioms WithMultiplicativeInverses and Finite; hence, we recover FiniteFields(), which is thus a duplicate and not part of FiniteFields().super_categories().
- Omit Finite: The remaining axioms are those of commutative division rings, which yields Fields().
- Omit WithMultiplicativeInverses: Yields finite commutative rings.
So, FiniteFields().super_categories() returns [Fields(), Rings().Commutative().Finite()]. Do you think this answer makes sense?
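The same omit-one-axiom idea, applied to this concrete scenario as a plain-Python toy (not Sage code); Wedderburn's little theorem is modelled as the single rule "Division together with Finite implies Commutative":

```python
# Toy model (plain Python, not Sage code) of the concrete example above.

def close(axioms, rules):
    """Close a set of axioms under rules of the form (premises, conclusion)."""
    closure = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= closure and conclusion not in closure:
                closure.add(conclusion)
                changed = True
    return frozenset(closure)

rules = [(frozenset({"Division", "Finite"}), "Commutative")]
full = close({"Division", "Finite"}, rules)   # the axioms of finite fields

supers = []
for a in sorted(full):
    candidate = sorted(close(full - {a}, rules))
    if candidate != sorted(full) and candidate not in supers:
        supers.append(candidate)

print(supers)
# [['Commutative', 'Finite'], ['Commutative', 'Division']]
# i.e. finite commutative rings and fields, matching
# [Fields(), Rings().Commutative().Finite()] up to order.
```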
comment:52 in reply to: ↑ 51 ; followup: ↓ 53 Changed 4 years ago by
Replying to SimonKing:
So, FiniteFields().super_categories() returns [Fields(), Rings().Commutative().Finite()]. Do you think this answer makes sense?
Actually I'd like this answer more than the current answers.
Without your patch:
sage: FiniteFields().super_categories()
[Category of fields, Category of finite enumerated sets]
With your patch:
sage: FiniteFields().super_categories()
[Category of fields, Category of finite monoids]
It seems to me that
sage: FiniteFields().super_categories()
[Category of fields, Category of finite commutative rings]
would be more accurate.
comment:53 in reply to: ↑ 52 ; followup: ↓ 54 Changed 4 years ago by
Replying to SimonKing:
It seems to me that
sage: FiniteFields().super_categories()
[Category of fields, Category of finite commutative rings]
would be more accurate.
But this would mean constructing a trivial category for finite commutative rings (there is currently no category code for finite commutative rings). The point of the axioms infrastructure is precisely to avoid such trivial categories in the category hierarchy in order to prevent the potential combinatorial explosion.
Besides: should this be finite commutative rings? Or finite domains? Or finite euclidean rings? ...
comment:54 in reply to: ↑ 53 ; followup: ↓ 56 Changed 4 years ago by
Replying to nthiery:
But this would mean constructing a trivial category for finite commutative rings (there is currently no category code for finite commutative rings).
That's the point: In my approach, this category would be constructed on the fly, by means of a dynamic construction.
Besides: should this be finite commutative rings? Or finite domains? Or finite euclidean rings? ...
To be discussed. At the end of the day, this is a matter of what axioms we have for fields that do not hold for all division rings, and which are thus implied by adding Finite() to Rings().Division().
However, I do think that the category of finite commutative rings should be a supercategory of the category of finite fields. But (with your patch):
sage: Rings().Commutative().Finite() in Fields().Finite().all_super_categories()
False
even though
sage: (Fields().Finite()).is_subcategory(Rings().Commutative().Finite())
True
comment:55 in reply to: ↑ 50 Changed 4 years ago by
Replying to SimonKing:
I suppose you mean: C is a subcategory of A and B.
Oops, yes, sure.
What is an axiom?
First of all, I wonder if we have the same understanding of "axiom". For me, an axiom is defined in terms of an algebraic structure that is provided by a certain category without this axiom.
Yes.
In particular, A.Associative() is actually not well-defined: One should in theory have A.MultiplicativeAssociative(), where MultiplicativeAssociative is provided by Magmas(), or A.AdditiveAssociative(), where AdditiveAssociative is provided by AdditiveMagmas(). Granted, if A = Algebras(ZZ), then A.Associative() should be a synonym of A.MultiplicativeAssociative(). So, we might want to introduce reasonable shortcuts in some cases.
Of course. But that would be heavy and would require an infrastructure for shortcuts. So I just followed the previously set convention (as in CommutativeRings w.r.t. CommutativeAdditiveMonoids): by default, the associative/commutative/unital/... axioms are about the multiplicative structure, and I think that's ok.
Your Example
Now, in your example, if MyAxiom is defined for both A and B, then the meet of A and B is a subcategory of a category M, for which MyAxiom and MyOtherAxiom are defined. In your example, MyAxiom implies MyOtherAxiom for A but not for B. Hence, A can be written as a subcategory of M.SpecialAxiom(), and SpecialAxiom together with MyAxiom implies MyOtherAxiom.
Now, you consider a category C defined by C.__class__(data), which is a common subcategory of A and B, and you wonder about the supercategories of C.MyAxiom().
Since A satisfies SpecialAxiom, C satisfies it as well. Hence, D = C.MyAxiom() will also satisfy MyOtherAxiom. I guess the logic of this implication is encoded in the way D.__class__._used_axioms is determined. Hence, D.__class__._used_axioms contains SpecialAxiom, MyAxiom and MyOtherAxiom.
In a previous post, I presented an algorithm for determining D.super_categories(). Let us study what it will return. Recall that it returns D._without_axiom(axiom) for all axiom in D.__class__._used_axioms, after removing duplicates. Hence:
- axiom=SpecialAxiom: We go along the mro of D.__class__ until we find something that does not have SpecialAxiom. This will be a certain supercategory X of C that is a subcategory of B (supposing that B does not satisfy SpecialAxiom). This will result in X....MyAxiom().MySpecialAxiom(), applying all axioms (except SpecialAxiom) that hold for D but not for X.
- axiom=MyAxiom: This will yield C.MyOtherAxiom().
- axiom=MyOtherAxiom: This will yield C.MyAxiom(), which coincides with D and is thus removed as a duplicate.
Note that in this explanation I have modified my previous suggestion for D._without_axiom(this_axiom): We cannot expect that D.__class__.__mro__ provides something that has the same axioms as D, with just this_axiom omitted. But since applying axioms commutes, it is fine to take the first class in D.__class__.__mro__ that does not have this_axiom, then create a category from this class (with the known input data of D), and apply all other missing axioms.
Anyway, I think that the above answer for C.MyAxiom().super_categories() looks fine. Or what else would you like to have?
Honestly I don't have the time to check all the details. If you believe that computing A.Axiom() is intrinsically simpler than computing a join (I don't and would favor optimizing join instead), feel free to write a prototype. The test suite in category_with_axiom.py should be a good guide. Just be warned that it took me a good two weeks of solid work to get things right, and that after at least two iterations :)
Enjoy your vacations too!
comment:56 in reply to: ↑ 54 ; followup: ↓ 57 Changed 4 years ago by
Replying to SimonKing:
Replying to nthiery:
But this would mean constructing a trivial category for finite commutative rings (there is currently no category code for finite commutative rings).
That's the point: In my approach, this category would be constructed on the fly, by means of a dynamic construction.
We do not even want to construct it on the fly! FiniteFields satisfies at least four axioms that can apply to Magmas (Associative, Finite, Unital, Commutative). We do not want the category hierarchy above FiniteFields to contain 2^4 categories (most of which would be trivial) just for Magmas, and as many again for additive magmas.
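The count is trivial but worth spelling out; assuming the four axioms mentioned above, a plain-Python enumeration of the possible axiom subsets gives:

```python
# With four axioms applicable to Magmas there are 2^4 possible axiom
# subsets, i.e. 2^4 potential categories, most of them trivial.
from itertools import chain, combinations

axioms = ["Associative", "Finite", "Unital", "Commutative"]
subsets = list(chain.from_iterable(
    combinations(axioms, r) for r in range(len(axioms) + 1)))
print(len(subsets))  # 16
```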
To be discussed. At the end of the day, this is a matter of what axioms we have for fields that do not hold for all division rings, and which are thus implied by adding Finite() to Rings().Division().
Note that this is resolved automatically by the current mechanism, by looking at which axioms are defined/implemented by the various categories.
However, I do think that the category of finite commutative rings should be a supercategory of the category of finite fields. But (with your patch):
sage: Rings().Commutative().Finite() in Fields().Finite().all_super_categories()
False
even though
sage: (Fields().Finite()).is_subcategory(Rings().Commutative().Finite())
True
Which is exactly what I want since finite commutative rings is trivial, and realized as a join category. There is no point in adding join categories in all_super_categories.
Cheers,
Nicolas
comment:57 in reply to: ↑ 56 ; followups: ↓ 58 ↓ 61 Changed 4 years ago by
Replying to nthiery:
Replying to SimonKing:
That's the point: In my approach, this category would be constructed on the fly, by means of a dynamic construction.
We do not even want to construct it on the fly! FiniteFields satisfies at least four axioms that can apply to Magmas (Associative, Finite, Unital, Commutative). We do not want the category hierarchy above FiniteFields to contain 2^4 categories (most of which would be trivial) just for Magmas.
OK, that's a considerable change. In the "good" old times, a category C was (by definition) a subcategory of another category D, if and only if D was contained in C.all_super_categories(). So, you say this shall change (or already has?).
Which is exactly what I want since finite commutative rings is trivial, and realized as a join category. There is no point in adding join categories in all_super_categories.
OK, it somehow convinces me that we don't want to create categories "on the fly" that do not provide any additional information (methods etc) beyond the categories that were created anyway.
But then, I still don't see why this should be implemented by a plain join category.
Do we agree that there is a category Magmas().Commutative(), such that all information on Algebras(ZZ).Commutative() is provided by Algebras(ZZ) together with Magmas().Commutative()? Sure, we could then implement Algebras(ZZ).Commutative() by a JoinCategory.
But then, I would expect that we can have a class which is similar to JoinCategory but is specially designed and thus faster. After all, creating the join of a list of categories should be more complicated than adding a list of "axiom categories" (such as Magmas().Commutative() and Magmas().Division() and Sets().Finite()) to a given category (such as Rings()).
Anyway, I think my original suggestion of creating classes for categories-with-axiom on the fly was probably not so good. But I think I will try to experiment with the other idea (using a specially designed "mock join" for adding axioms).
comment:58 in reply to: ↑ 57 ; followup: ↓ 62 Changed 4 years ago by
Replying to SimonKing:
After all, creating the join of a list of categories should be more complicated than adding a list of "axiom categories" (such as Magmas().Commutative() and Magmas().Division() and Sets().Finite()) to a given category (such as Rings()).
Or perhaps rather Rngs().Division(), because we ask for inverses for all nonzero elements; hence Division() requires a category that has a notion of a zero and is at the same time a multiplicative monoid.
comment:59 followup: ↓ 63 Changed 4 years ago by
A question:
Why should we have a hardcoded category Fields(), if all information is encoded in the combination of Rings().Division() and Rings().Commutative()? Should we not aim at removing sage.categories.fields if we take the axiomatic approach seriously?
comment:60 followup: ↓ 64 Changed 4 years ago by
And a more general question we should answer: What is the semantics of super_categories()?
It used to be like this, if I understood correctly: C.super_categories() should return a list of all categories S1, S2, ... constructible in Sage such that C is a proper subcategory of S1, S2, ..., and there is no category D constructible in Sage such that C is a proper subcategory of D and D is a proper subcategory of any of the S1, S2, ....
The problem with this old meaning of C.super_categories() is, of course, that "constructible in Sage" is a moving target, and hence it won't scale.
Now it seems that you want to change the old semantics, and I wonder about the exact definition of the new semantics.
It seems to me that you suggest the following: C.super_categories() shall return categories S1, S2, ... such that C is a proper subcategory of S1, S2, ..., and such that all "named classes" of C (i.e., results of calls to C._make_named_class(...)) can be constructed from the corresponding named classes of S1, S2, ... and from attributes of C.__class__ (for example, from C.__class__.ParentMethods)?
comment:61 in reply to: ↑ 57 ; followup: ↓ 66 Changed 4 years ago by
Replying to SimonKing:
OK, that's a considerable change. In the "good" old times, a category C was (by definition) a subcategory of another category D, if and only if D was contained in C.all_super_categories(). So, you say this shall change (or already has?).
This was already like this for join categories. E.g. with plain sage 5.11.beta3:
sage: C1 = Category.join([Magmas(), CommutativeAdditiveMonoids()])
sage: C2 = Rings()
sage: C2.is_subcategory(C1)
True
But then, I still don't see why this should be implemented by a plain join category.
Do we agree that there is a category Magmas().Commutative(), such that all information on Algebras(ZZ).Commutative() is provided by Algebras(ZZ) together with Magmas().Commutative()?
Those two pieces are indeed sufficient to recover the category:
sage: C = Algebras(ZZ) & Magmas().Commutative(); C
Category of commutative algebras over Integer Ring
But the join calculation is non trivial since Sage discovers by introspection that there is a specific category for commutative rings, so we get:
sage: C.super_categories()
[Category of algebras over Integer Ring, Category of commutative rings]
Granted, the example is not so great since the commutative rings category is actually currently empty; so we could think about removing it, though it's likely to eventually contain something. A more interesting example is:
sage: Rings().Finite().super_categories()
[Category of rings, Category of finite monoids]
And some good tests (compare with the sources!):
sage: from sage.categories.category_with_axiom import TestObjectsOverBaseRing
sage: C = TestObjectsOverBaseRing(QQ)
sage: C.Facade().Commutative().FiniteDimensional().Finite().Unital().super_categories()
[Category of finite finite dimensional test objects over base ring over Rational Field,
 Category of finite commutative test objects over base ring over Rational Field,
 Category of facade commutative test objects over base ring over Rational Field,
 Category of finite dimensional commutative unital test objects over base ring over Rational Field]
sage: C.Facade().Commutative().Finite().Unital().super_categories()
[Category of finite commutative test objects over base ring over Rational Field,
 Category of facade commutative test objects over base ring over Rational Field,
 Category of unital test objects over base ring over Rational Field]
Sure, we could then implement Algebras(ZZ).Commutative() by a JoinCategory. But then, I would expect that we can have a class which is similar to JoinCategory but is specially designed and thus faster. After all, creating the join of a list of categories should be more complicated than adding a list of "axiom categories" (such as Magmas().Commutative() and Magmas().Division() and Sets().Finite()) to a given category (such as Rings()).
I guess I don't see at this point what can be made really simpler/lighter for a join category when it comes from adding axioms to a category. I still believe we can't spare the join calculation.
Cheers,
Nicolas
comment:62 in reply to: ↑ 58 Changed 4 years ago by
Replying to SimonKing:
Or perhaps rather Rngs().Division(), because we ask for inverses for all nonzero elements; hence Division() requires a category that has a notion of a zero and is at the same time a multiplicative monoid.
Yes; and pushing your argument to its conclusion, that could even be DistributiveMagmasAndAssociativeMagmas().AdditiveUnital().Division(). I guess DivisionRings will do for now.
comment:63 in reply to: ↑ 59 ; followup: ↓ 65 Changed 4 years ago by
Replying to SimonKing:
Why should we have a hardcoded category Fields(), if all information is encoded in the combination of Rings().Division() and Rings().Commutative()? Should we not aim at removing sage.categories.fields if we take the axiomatic approach seriously?
Fields is already implemented as a CategoryWithAxiom. But it's a nontrivial category (there are quite a few parent and element methods), so we want to keep it around.
Cheers,
Nicolas
comment:64 in reply to: ↑ 60 ; followup: ↓ 67 Changed 4 years ago by
Replying to SimonKing:
And a more general question we should answer: What is the semantics of super_categories()?
It used to be like this, if I understood correctly: C.super_categories() should return a list of all categories S1, S2, ... constructible in Sage such that C is a proper subcategory of S1, S2, ..., and there is no category D constructible in Sage such that C is a proper subcategory of D and D is a proper subcategory of any of the S1, S2, ....
I very much like this definition, and think it's still perfectly up to date. Maybe one would use "implemented in Sage" rather than "constructible in Sage" to rule out join categories.
(A short rephrasing: super_categories gives the covering relations in the poset of categories implemented in Sage.)
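This specification can be modelled with a small plain-Python sketch (the categories and markers below are purely illustrative): represent each implemented category by the set of structure/axiom markers it carries, so that a subcategory carries a strictly larger set; super_categories() then returns the covers in this poset.

```python
# Toy model (plain Python; categories and markers are illustrative).
# A subcategory carries a strictly larger marker set, so the
# supercategories of C are the proper subsets of C's markers among the
# implemented categories, and super_categories(C) keeps the maximal ones
# (the covering relations in the poset).

implemented = {
    "Sets": frozenset(),
    "FiniteSets": frozenset({"Finite"}),
    "Monoids": frozenset({"Associative", "Unital"}),
    "FiniteMonoids": frozenset({"Associative", "Unital", "Finite"}),
}

def super_categories(name):
    markers = implemented[name]
    above = {k: v for k, v in implemented.items() if v < markers}
    # Keep only the maximal candidates (those not below another candidate).
    return sorted(k for k, v in above.items()
                  if not any(v < w for w in above.values()))

print(super_categories("FiniteMonoids"))
# ['FiniteSets', 'Monoids']
```

Sets() is dropped from the answer because it lies below both FiniteSets() and Monoids(): it is a supercategory, but not a cover.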
The only difference after this ticket is that there is a new syntax to implement a category, e.g. Blahs().Finite(), as:
class Blahs(Category):
    class Finite(CategoryWithAxiom):
but that's not really different from what we were already doing for e.g. Blahs().CartesianProducts().
The problem with this old meaning of C.super_categories() is, of course, that "constructible in Sage" is a moving target, and hence it won't scale.
Let me rephrase this as: since "constructible in Sage" is a moving target, maintaining the information in super_categories() by hand often does not scale. So whenever possible, super_categories should be calculated automatically.
Cheers,
Nicolas
comment:65 in reply to: ↑ 63 ; followup: ↓ 70 Changed 4 years ago by
Replying to nthiery:
Fields is already implemented as a CategoryWithAxiom?. But it's a non trivial category (there are quite a few parent and element methods), so we want to keep it around.
Sure, it has quite a few parent and element methods. But the point is: since the category of fields is nothing but Rings().Division().Commutative(), all these methods should be defined somewhere else.
comment:66 in reply to: ↑ 61 ; followup: ↓ 69 Changed 4 years ago by
Replying to nthiery:
Replying to SimonKing:
OK, that's a considerable change. In the "good" old times, a category C was (by definition) a subcategory of another category D, if and only if D was contained in C.all_super_categories(). So, you say this shall change (or already has?).
This was already like this for join categories.
You're right. But I think this has been the *only* exception.
comment:67 in reply to: ↑ 64 ; followup: ↓ 68 Changed 4 years ago by
Replying to nthiery:
Replying to SimonKing:
And a more general question we should answer: What is the semantics of super_categories()?
It used to be like this, if I understood correctly: C.super_categories() should return a list of all categories S1, S2, ... constructible in Sage such that C is a proper subcategory of S1, S2, ..., and there is no category D constructible in Sage such that C is a proper subcategory of D and D is a proper subcategory of any of the S1, S2, ....
I very much like this definition, and think it's still perfectly up to date.
This totally surprises me now.
Back to the Fields().Finite().super_categories() example. I have argued that we have a couple of axioms, and keeping all axioms but one gives us (after removing duplicates) a list of super categories that exactly follows the specification above. And in comment:51, I have shown that this definition more or less forces us to have Fields().Finite().super_categories() = [Category of fields, Category of finite commutative rings].
And you argued against this answer (because of having 2^4 additional "empty" categories in the list of all super categories). You seemed to be in favour of Fields().Finite().super_categories() = [Category of fields, Category of finite enumerated sets].
Actually, this is why I came up with the other specification of C.super_categories(). That's why it surprises me that you now say you like this specification less.
comment:68 in reply to: ↑ 67 Changed 4 years ago by
Replying to SimonKing:
This totally surprises me now.
Hmm, it feels like there is a rolling confusion here :) Trac communication is not so easy!
Back to the Fields().Finite().super_categories() example. I have argued that we have a couple of axioms, and keeping all axioms but one gives us (after removing duplicates) a list of super categories that exactly follows the specification above. And in comment:51, I have shown that this definition more or less forces us to have Fields().Finite().super_categories() = [Category of fields, Category of finite commutative rings].
And you argued against this answer (because of having 2^4 additional "empty" categories in the list of all super categories). You seemed to be in favour of Fields().Finite().super_categories() = [Category of fields, Category of finite enumerated sets].
Yes and no: I indeed don't want all 2^4 potential categories. But I do want those that are *implemented* in Sage. In the current state, we have no category implemented for finite commutative rings (in other words, Rings().Commutative().Finite() is a join category), but we do have one for finite monoids (in Monoids.Finite). Hence the current answer:
[Category of fields, Category of finite monoids]
Cheers,
comment:69 in reply to: ↑ 66 ; followup: ↓ 83 Changed 4 years ago by
Replying to SimonKing:
You're right. But I think this has been the *only* exception.
And this still is the only exception: Rings().Commutative().Finite() is a join category.
sage: type(Rings().Finite().Commutative())
<class 'sage.categories.category.JoinCategory_with_category'>
comment:70 in reply to: ↑ 65 Changed 4 years ago by
Replying to SimonKing:
Sure, it has quite a few parent and element methods. But the point is: since the category of fields is nothing but Rings().Division().Commutative(), all these methods should be defined somewhere else.
Where else? Of course some of the methods currently in Fields might actually work in a more general setting and could be lifted to some super categories. But others are really about fields (like the trivial is_field :)), so that's their natural spot, isn't it?
Btw:
sage: Rings.Division.Commutative
<class 'sage.categories.fields.Fields'>
comment:71 Changed 4 years ago by
Hi Simon!
Back to work after some good vacations :)
The updated patch includes a complete refactoring of the primer and fixes the continuations in docstrings as reported by the patchbot. The next step is to handle the renaming of NonAssociativeNonUnitalAlgebras.
Did you manage to reproduce the doctest errors reported by the patchbot about recursion loop?
Cheers,
Nicolas
comment:72 Changed 4 years ago by
 Work issues changed from Reduce startup time by 5%. Avoid "recursion depth exceeded (ignored)". Trivial doctest fixes. to Rename NonAssociativeNonUnitalAlgebras. Reduce startup time by 5%. Avoid "recursion depth exceeded (ignored)".
comment:73 Changed 4 years ago by
What shall we do about #9107 (mangling of nested class names)? The dependency is rather trivial: just a doctest or two to update.
comment:74 Changed 4 years ago by
 Work issues changed from Rename NonAssociativeNonUnitalAlgebras. Reduce startup time by 5%. Avoid "recursion depth exceeded (ignored)". to RenReduce startup time by 5%. Avoid "recursion depth exceeded (ignored)".
comment:75 Changed 4 years ago by
The updated patch also fixes doctests in the primer (I had forgotten to run the tests after the revamping).
comment:76 Changed 4 years ago by
 Work issues changed from RenReduce startup time by 5%. Avoid "recursion depth exceeded (ignored)". to Rename NoReduce startup time by 5%. Avoid "recursion depth exceeded (ignored)".
Replacing the current Algebras by MagmaticAlgebras is now the follow-up #15043.
comment:77 Changed 4 years ago by
 Work issues changed from Rename NoReduce startup time by 5%. Avoid "recursion depth exceeded (ignored)". to Reduce startup time by 5%. Avoid "recursion depth exceeded (ignored)".
comment:78 Changed 4 years ago by
Reworked the renaming: we might as well directly use AssociativeAlgebras rather than AssociativeMagmaticAlgebras. I added appropriate pointers to #15043.
comment:79 Changed 4 years ago by
 Description modified (diff)
comment:80 Changed 4 years ago by
 Description modified (diff)
comment:81 Changed 4 years ago by
 Description modified (diff)
comment:82 Changed 4 years ago by
Hi Simon,
Let me know when you will be back to reviewing this patch, and I'll be more careful with providing incremental patches.
My recent changes concern:
 Fixing broken links
 Fixing imports
 Moving functorial constructions where they belong (e.g. TensorProducts is only defined for subcategories of Modules)
I am now going to try to investigate the "recursion" errors.
comment:83 in reply to: ↑ 69 Changed 4 years ago by
Hi Nicolas,
concerning the notion of "super_categories":
Replying to nthiery:
Replying to SimonKing:
You're right. But I think this has been the *only* exception.
And this still is the only exception: Rings().Commutative().Finite() is a join category.
sage: type(Rings().Finite().Commutative())
<class 'sage.categories.category.JoinCategory_with_category'>
Exactly. And my impression is that your patch drastically increases the use of join categories. Perhaps this is actually not the case, but if I recall correctly, creating a join is the default when adding an axiom.
And this means that you implicitly really change the meaning of C.super_categories(). It used to be "the list of all supercategories of C such that Sage does not implement categories properly between C and the supercategory", except in the case of joins. But if joins are ubiquitous, then C.super_categories() will usually (i.e., implicitly, by default) be "the list of all supercategories of C such that all named classes associated with C can be defined by means of the named classes of these supercategories together with attributes of C.__class__".
My question (or even suggestion!) is to make this implicit policy official. Rationale:
- The notion of a "category implemented in Sage" is not a mathematical notion, but refers to implementation. Since we don't talk about mathematics here, we should give priority to the question: Where is super_categories() used in our implementations?
- If I am not mistaken, we currently use super_categories() for two purposes:
  - Construct named classes.
  - Create a list of all_super_categories, which is used to test for subcategories in the case of non-join categories.
Since in the future many categories will be join categories, only the first point remains really relevant. And this means: We seek a notion of super_categories in terms of named classes. That's where my suggestion comes from.
comment:84 Changed 4 years ago by
In some posts (I can't find them right now), I said that adding axioms is easier than forming a join category, and thus one should try to find an implementation that does not rely on plain joins. Let me try to elaborate on it.
It seems to me that there are (in Sage) two types of categories. My suggestion boils down to: "Implement two different Python classes for these two types of categories".
Firstly, there are categories that stipulate the existence of certain algebraic operations. Most notably: AdditiveMagmas() (providing the operator "+") and Magmas() (providing the operator "*"), but conceivably there should also be something like RightGSets(G) (providing the operator "^" or perhaps another version of "*") and LeftGSets(G). Probably one can formulate all of them as the categories of sets with actions by some free operad. Let's call them "free operator categories".
And secondly, there are "axiom categories" that stipulate certain axioms that hold for the operators defined in one of the "free operator categories". For example, AdditiveCommutative() = AdditiveMagmas().Commutative() is a category which is a subcategory of AdditiveMagmas() in which the "+" operator is commutative; and the law of (left/right/two-sided) distributivity is encoded as a subcategory of the join of AdditiveMagmas() and Magmas().
Hence, an "axiom category" should tell what operations it is referring to, and this can be expressed by a (join of) "free operator categories". For instance, Rings().free_operator_category() would return Category.join([AdditiveMagmas(), Magmas()]).
And I guess the join of such "free operator categories" is trivial: we have one non-join free operator category for each algebraic operation (*, +, /, ...), and forming the join of two free operator categories just amounts to removing duplicates.
What about the join of two "axiom categories"? Well, first of all, we need to form the join of the underlying free operator categories (which is trivial), and then combine the two lists of axioms. This is where one can implement theorems such as "a finite commutative division ring is a field". But again, the default is to remove duplicate axioms.
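To make the envisioned join of two axiom categories concrete, here is a small, purely illustrative Python sketch (none of these names exist in Sage, and axioms are modeled as plain strings): joining means taking the union of the axiom sets and then applying rewrite rules that encode theorems, such as Wedderburn's little theorem that a finite division ring is commutative.

```python
# Hypothetical sketch: a "theorem" maps a set of axioms (premise) to extra
# axioms (conclusion) that are known to be implied by it.
THEOREMS = {
    # Wedderburn's little theorem: a finite division ring is commutative.
    frozenset({'Finite', 'Division'}): frozenset({'Commutative'}),
}

def join_axioms(axioms1, axioms2):
    """Join two axiom sets: remove duplicates, then saturate under theorems."""
    axioms = set(axioms1) | set(axioms2)   # duplicate removal is just set union
    changed = True
    while changed:                         # apply rewrite rules to a fixed point
        changed = False
        for premise, conclusion in THEOREMS.items():
            if premise <= axioms and not conclusion <= axioms:
                axioms |= conclusion
                changed = True
    return axioms

print(sorted(join_axioms({'Finite'}, {'Division', 'Associative'})))
```

Joining "Finite" with "Division, Associative" here yields the extra "Commutative" axiom; by default (no matching theorem) the join is just the duplicate-free union.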
Possible approach to an implementation:
- I guess class FreeOperatorCategory should provide exactly one operation (multiple operations will be implemented by joins). It should have a method C.operator() returning operator.mul or operator.add or operator.sub and so on, telling what Python operator it corresponds to. And it should request certain abstract methods by which the operation is implemented (i.e., _add_ or _mul_).
- class CategoryWithAxioms keeps
  - a non-empty duplicate-free list FOC of non-join "free operator categories", and
  - a duplicate-free list EAC of "elementary (non-join) axiom categories". In a non-join axiom category, this second list is empty.

In addition, it has to provide some _test_... method(s) for parent and/or element classes, testing against the axioms.
Hence, C = Rings() would have
C.FOC = [AdditiveMagmas(), SubtractiveMagmas(), Magmas()]
and
C.EAC = [Distributive(), AdditiveCommutative(), AdditiveAssociative(), AdditiveInverses(), Associative()]
(perhaps I forgot some axioms).
Now, about C.super_categories():
- If C is a non-join free operator category, then it returns [Objects()], which is the only operator-free category (note that Sets() is a free operator category for the operator "in", requesting an abstract parent method __contains__).
- If C is a join of non-join free operator categories, then it returns the list of these non-join free operator categories.
- If C is an axiom category, then it returns the list C.FOC + C.EAC after removing those items of C.FOC which are already covered by one of the elementary axiom categories from C.EAC.
Example
Enumerated rings ER are a join of rings (with FOC and EAC as above) and enumerated sets. The latter is a free operator category with empty operator but requested abstract parent method __iter__. Hence, I would suggest to have
ER.super_categories() == [EnumeratedSets(), Distributive(), AdditiveCommutative(), AdditiveAssociative(), AdditiveInverses(), Associative()]
This is because EnumeratedSets() is the only FOC that is not subject to one of the EAC. Further,
AdditiveInverses().super_categories() == [AdditiveMagmas(), SubtractiveMagmas()]
i.e., the super categories just state that the axioms are formulated in terms of + and -.
Hm. We also have "0" (zero). Should one say that "0" is a nullary operator, and thus "0" is defined in a free operator category, and that the axiomatic properties of "0" as additive unit are then defined in an elementary axiom category WithAdditiveUnit()?
Anyway. This is the approach that I would have taken. Do you find anything useful in this approach?
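For concreteness, the two-class design sketched above can be written down as a rough executable toy in plain Python. All class names, the FOC/EAC attributes, and the super_categories() rule come from the proposal itself; the constructors and string representations are invented for illustration, and the "enumerated rings" example is cut down to fewer axioms than in the full example above. None of this is actual Sage API.

```python
# Illustrative stand-ins for the proposed design, not actual Sage classes.
class FreeOperatorCategory:
    """A category providing exactly one operation (e.g. '+' via _add_)."""
    def __init__(self, name, abstract_method):
        self.name = name
        self.abstract_method = abstract_method  # e.g. '_add_' or '__iter__'
    def __repr__(self):
        return self.name

class AxiomCategory:
    """An axiom formulated over some free operator categories."""
    def __init__(self, name, foc):
        self.name = name
        self.foc = list(foc)    # operator categories the axiom talks about
    def __repr__(self):
        return self.name

class CategoryWithAxioms:
    """A category given by operator categories (FOC) plus axioms (EAC)."""
    def __init__(self, foc, eac):
        self.foc = list(foc)    # duplicate-free free operator categories
        self.eac = list(eac)    # elementary axiom categories
    def super_categories(self):
        # Rule from the proposal: drop those FOC entries that are already
        # covered by one of the elementary axiom categories, then append
        # the axiom categories themselves.
        covered = {f for axiom in self.eac for f in axiom.foc}
        return [f for f in self.foc if f not in covered] + self.eac

AdditiveMagmas = FreeOperatorCategory("AdditiveMagmas", "_add_")
Magmas = FreeOperatorCategory("Magmas", "_mul_")
EnumeratedSets = FreeOperatorCategory("EnumeratedSets", "__iter__")

AdditiveCommutative = AxiomCategory("AdditiveCommutative", [AdditiveMagmas])
Associative = AxiomCategory("Associative", [Magmas])
Distributive = AxiomCategory("Distributive", [AdditiveMagmas, Magmas])

# A cut-down version of the "enumerated rings" example:
ER = CategoryWithAxioms(
    foc=[EnumeratedSets, AdditiveMagmas, Magmas],
    eac=[Distributive, AdditiveCommutative, Associative])
print(ER.super_categories())
```

EnumeratedSets survives in the output because no axiom refers to it, matching the rule that super_categories() lists the uncovered FOC followed by the EAC.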
comment:85 followup: ↓ 86 Changed 4 years ago by
Hi Simon,
Would you have time for a phone / skype call? It probably would be the quickest to discuss the matter. I am at home and available all afternoon.
Cheers,
comment:86 in reply to: ↑ 85 ; followup: ↓ 87 Changed 4 years ago by
Replying to nthiery:
Would you have time for a phone / skype call? It probably would be the quickest to discuss the matter. I am at home and available all afternoon.
Yes, but I need to get some late lunch first.
comment:87 in reply to: ↑ 86 Changed 4 years ago by
comment:88 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471
comment:89 Changed 4 years ago by
The updated patch has been trivially rebased on top of #14471 which just got merged.
Changed 4 years ago by
comment:90 Changed 4 years ago by
Hi Simon,
I am investigating the recursion error. It is definitely caused by the weak reference handling. If you run
sage -tp 8 bla.py bla.py
with the attached extract of pushout.py, you get a bunch of error messages like:
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x347af68> ignored
And if you comment out the "del" line in TripleDictEraser.__call__, then the error message disappears.
I am now going to proceed reducing bla.py further, to get something that hopefully triggers the bug without the functorial construction patch. The hard part is that removing basically any line of bla.py makes the error go away.
Let me know if you have ideas ...
comment:91 followups: ↓ 92 ↓ 93 Changed 4 years ago by
Ok, from Volker's suggestion on sage-devel, one can raise a similar error message when garbage collecting the entries of a MonoDict (it would be the same with a TripleDict) involves a big recursion, because deleting an entry triggers the deletion of another entry, and so on:
from sage.structure.coerce_dict import MonoDict
M = MonoDict(11)
class A: pass
a = A()
prev = a
for i in range(1000):
    newA = A()
    M[prev] = newA
    prev = newA
len(M)
del a
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.MonoDictEraser object at 0x5a13788> ignored
At this point, my guess is that our weak dictionary infrastructure currently has an intrinsic limitation on the depth of the reference graph, and that all the functorial construction patch does is put a bit more stress on it and reach this limitation. So now the question is: is it possible to fix the weak dict infrastructure to let it scale properly, by somehow unrolling the recursion as Volker suggests in [1]?
Cheers,
[1] https://groups.google.com/d/msg/sagedevel/us0JCrRwGz0/McDlwepFve4J
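To illustrate what "unrolling the recursion" could mean here, the following is a hedged, pure-Python toy, not Sage's actual coerce_dict code: a chain of objects whose weakref callbacks drop the only strong reference to the next object. The naive callback nests (stack depth grows with the chain length, which is what blows up with 1000 links), while the "unrolled" variant queues its work and drains the queue iteratively from the outermost call, so the depth stays at 1. The Node/holder machinery is invented purely for the demonstration.

```python
import weakref

class Node:
    """Minimal object type that supports weak references."""

def build_chain(n, on_erase, stats):
    # Build head -> node_1 -> ... -> node_n, where each node is kept alive
    # only by a 'holder' dict, and a weakref callback on the previous node
    # erases that holder (mimicking an eraser deleting a dict entry).
    refs = []                   # keep the weakrefs (and their callbacks) alive
    head = Node()
    prev = head
    for _ in range(n):
        nxt = Node()
        holder = {'v': nxt}     # the only strong reference to nxt
        refs.append(weakref.ref(prev, lambda wr, h=holder: on_erase(h, stats)))
        prev = nxt
    return head, prev, refs     # the tail (prev) stays alive in the caller

def naive_erase(holder, stats):
    # Dropping the value here immediately deallocates the next node, whose
    # callback runs *inside* this one: the callbacks nest, and the stack
    # depth grows linearly with the chain length.
    stats['depth'] += 1
    stats['max'] = max(stats['max'], stats['depth'])
    holder.clear()
    stats['depth'] -= 1

pending = []                    # work queue shared by all callbacks
draining = False

def unrolled_erase(holder, stats):
    # "Unrolled" variant: nested invocations only enqueue their work; the
    # outermost invocation drains the queue in a plain loop.
    global draining
    pending.append(holder)
    if draining:
        return
    draining = True
    while pending:
        stats['depth'] += 1
        stats['max'] = max(stats['max'], stats['depth'])
        pending.pop().clear()   # may re-enter unrolled_erase, which enqueues
        stats['depth'] -= 1
    draining = False

s_naive = {'depth': 0, 'max': 0}
head, tail, refs = build_chain(50, naive_erase, s_naive)
del head                        # triggers the nested cascade

s_unrolled = {'depth': 0, 'max': 0}
head, tail2, refs2 = build_chain(50, unrolled_erase, s_unrolled)
del head                        # triggers the iterative drain

print(s_naive['max'], s_unrolled['max'])
```

With a chain of 50 nodes the naive version reaches a nesting depth around the chain length while the unrolled version stays at depth 1; at a couple of thousand nodes, the naive version would hit CPython's recursion limit in the same way as the MonoDict example above.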
comment:92 in reply to: ↑ 91 ; followup: ↓ 94 Changed 4 years ago by
Replying to nthiery:
Ok, from Volker's suggestion on sage-devel, one can raise a similar error message when garbage collecting the entries of a MonoDict (it would be the same with a TripleDict) involves a big recursion, because deleting an entry triggers the deletion of another entry, and so on:
This sounds like we need a different ticket. What do Python's weak dictionaries do? Don't they have similar problems?
comment:93 in reply to: ↑ 91 Changed 4 years ago by
Replying to nthiery:
len(M)
del a
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.MonoDictEraser object at 0x5a13788> ignored
Are you sure that the error in bla.py is the same as the error reported by the patchbot? After all, the patchbot did not mention MonoDictEraser, but named a function remove(), didn't it?
comment:94 in reply to: ↑ 92 Changed 4 years ago by
Replying to SimonKing:
This sounds like we need a different ticket. What do Python's weak dictionaries do? Don't they have similar problems?
They do. In your example, just replace M = MonoDict(11) by M = weakref.WeakKeyDictionary(), and you get essentially the same error:
sage: del a
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <function remove at 0x5f9d578> ignored
And this actually sounds much closer to the error reported by the patchbot.
comment:95 Changed 4 years ago by
Note that for the join, we are using a WeakValueDictionary, and with your patch we are making increased use of joins. Could this be the source of trouble?
But to my surprise, with a WeakValueDictionary, one cannot get the same error (here, of course, we need to delete the first value, not the first key):
sage: class A: pass
sage: M = weakref.WeakValueDictionary()
sage: a = A()
....: prev = a
....: for i in range(1000):
....:     newA = A()
....:     M[newA] = prev
....:     prev = newA
....:
sage: len(M)
1000
sage: del a
sage: len(M)
0
comment:96 followup: ↓ 97 Changed 4 years ago by
... and a WeakKeyDictionary is only used in sage.misc.randstate, nowhere else! WeakValueDictionary is used more often.
Hm.
Let me try to summarize:
- The patchbot reported
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <function remove at 0x2820668> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <function remove at 0xfe16e0> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while getting the str of an object' in <function remove at 0x2e03e60> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while getting the str of an object' in <function remove at 0x2e03e60> ignored
- Your bla.py fails with
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x347af68> ignored
- It is possible to get an error message similar to the first two examples of the patchbot by using a WeakKeyDictionary, but seemingly not by using a WeakValueDictionary. But only the latter is used in Sage.
Where shall one start to try and analyse the problem?
comment:97 in reply to: ↑ 96 Changed 4 years ago by
Replying to SimonKing:
Yup. I should add that it produces a combination of "remove" and "TripleDictEraser" errors; I pointed to the latter because it was more specific. Also, the whole thing is very sensitive to changes: if you change a line in bla.py, you can switch from one message to the other. My bet is that we have a recursive data structure which is a mix of TripleDict and other weak dictionaries, so depending on how deep the recursion breaks, you get one message or the other.
Where shall one start to try and analyse the problem?
Looking at the sources of WeakValueDictionary to see how they work around the recursion issue?
Cheers,
Nicolas
comment:98 followup: ↓ 99 Changed 4 years ago by
WeakValueDictionary uses
def remove(wr, selfref=ref(self)):
    self = selfref()
    if self is not None:
        del self.data[wr.key]
and WeakKeyDictionary uses
def remove(k, selfref=ref(self)):
    self = selfref()
    if self is not None:
        del self.data[k]
as a callback.
And I think I see why WeakValueDictionary does not crash. Recall from comment:95 that I did (of course with more layers)
M[b] = a
M[c] = b
M[d] = c
and the only elements with a strong reference being kept are d and a. When deleting a, then successively the items keyed by b, c and d are removed from the WeakValueDictionary.
But think for a moment about what is happening during the callback, in the line
del self.data[wr.key]
When this is first called, wr is a weak reference pointing to a, and wr.key is b. Hence, when del self.data[wr.key] is executed, there is still a strong reference to b, namely in wr.key. Only when the call to remove() is finished will wr be released, and in this moment the last reference to b is gone. Hence, del self.data[wr.key] is called again, but this time wr points to b and wr.key is a strong reference to c.
Conclusion: during the deletion of the dictionary item (b,a), there is a strong reference to the key b. Hence, the deletion of the item (c,b) will only be started after the deletion of the item (b,a) is completed. Hence, no recursion.
But for a WeakKeyDictionary things are different. There, we have (of course with more layers)
M[b] = a
M[c] = b
M[d] = c
and we only keep a reference to a and d. When we delete d, the item (d,c) will be removed. In the line
del self.data[k]
there is no strong reference to the value of self.data[k]. Hence, while self.data[k] is deleted, it could be that the callback of a weak reference pointing to this value is invoked.
And this analysis gives rise to a solution: in the TripleDictEraser and MonoDictEraser, one should not simply do del bucket[i:i+3] or del bucket[i:i+7]; instead, one should assign a temporary variable to bucket[i+2] or bucket[i+6] that will keep the value alive until the call to the eraser is completed, thus avoiding the recursion.
I suggest to open a separate ticket for this issue.
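In plain Python, the proposed change to the eraser would look roughly like the sketch below. The real code is Cython in sage/structure/coerce_dict.pyx, and the bucket layout here is simplified to flat (hash, keyref, value) triples; note also that comment 100 below reports that this change alone did not make the RuntimeError disappear, so this only illustrates the intended mechanism.

```python
def erase_entry(bucket, i):
    """Remove the (hash, keyref, value) triple starting at index i.

    A temporary strong reference to the value is kept in ``val`` so that
    any weakref callback triggered by the value's deallocation fires only
    as this eraser call finishes, not in the middle of the bucket surgery.
    """
    val = bucket[i + 2]      # keep the value alive during the deletion
    del bucket[i:i + 3]      # remove hash, key reference and value
    # ``val`` is released when the local scope is torn down, i.e. after
    # the slice deletion above has completed.
```

For example, erasing the first triple of a bucket holding two entries leaves only the second triple behind, with the erased value kept alive until erase_entry returns.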
comment:99 in reply to: ↑ 98 Changed 4 years ago by
Replying to SimonKing:
And this analysis gives rise to a solution: in the TripleDictEraser and MonoDictEraser, one should not simply do del bucket[i:i+3] or del bucket[i:i+7]; instead, one should assign a temporary variable to bucket[i+2] or bucket[i+6] that will keep the value alive until the call to the eraser is completed, thus avoiding the recursion.
Cool! I am looking forward to seeing this work! Thanks for investigating.
I suggest to open a separate ticket for this issue.
Definitely!
Cheers,
Nicolas
comment:100 Changed 4 years ago by
 Cc vbraun nbruin added
Hmmmm. It is not as easy as I thought. Therefore I put Volker and Nils on Cc, because they know a lot more about Python than I do.
By inserting print statements, I verified that with unpatched Sage in the MonoDict example, the eraser is called recursively: when invoking the eraser for a key K1, the deletion of the key-value pair (K1,K2) results in calling the eraser for K2 before the eraser of K1 has finished. Hence, the order is like this:
- the eraser is called recursively, because a deletion happening inside of the eraser triggers the call to the next eraser.
- the RuntimeError is reported after the last deletion has happened
- after reporting the error, the 1000 nested erasers return one after the other.
With a tentative patch, I can make the callback function work in a seemingly good way: the eraser is invoked for a key K1, and by assigning K2 to a local variable, the key-value pair (K1,K2) can be "safely" removed. The order is like this:
- one eraser is called, and inside of it a deletion happens.
- the next eraser is only called when the first eraser returns.
And now comes the big surprise: in the very end, the RuntimeError is still reported! Even though the inserted print statements show that the calls are not nested!
comment:101 Changed 4 years ago by
Two more data points.
When I provide the class A with a __del__ method printing a debug message, I see that first the (now non-nested) calls to the eraser happen, and then FOUR Exception RuntimeError: ... ignored messages are reported, namely:
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.MonoDictEraser object at 0x5610440> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <bound method A.__del__ of <__main__.A instance at 0x573b3f8>> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <bound method A.__del__ of <__main__.A instance at 0x573b368>> ignored
Exception RuntimeError: 'maximum recursion depth exceeded' in <bound method A.__del__ of <__main__.A instance at 0x573b2d8>> ignored
and only in the very end do all calls to __del__ happen. I.e., the calls to __del__ are not mixed with calls to the eraser.
However, when I define A as a cdef class with __weakref__ and a __dealloc__ method (again printing debug info), the picture is different: the __dealloc__ of K1 is called, then the eraser for K1 is called (deleting the key-value pair (K1,K2) from the MonoDict), then the eraser for K1 returns, then __dealloc__ for K2 is called, followed by the eraser for K2, and so on. So, __del__ is not mixed with calling the erasers, but __dealloc__ is mixed. And in the very end, there are two errors reported:
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <_home_simon__sage_temp_linux_sqwp_site_5429_tmp_oooXg4_spyx_0.A object at 0x570e738> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.MonoDictEraser object at 0x560f440> ignored
Frankly I'm puzzled.
comment:102 followup: ↓ 103 Changed 4 years ago by
For the record, I currently work with this patch

sage/structure/coerce_dict.pyx
diff --git a/sage/structure/coerce_dict.pyx b/sage/structure/coerce_dict.pyx
@@ -187,14 +187,18 @@
         h,offset = r.key
         cdef list bucket = <object>PyList_GET_ITEM(buckets, (<size_t>h) % PyList_GET_SIZE(buckets))
         cdef Py_ssize_t i
+        cdef object val
         for i from 0 <= i < PyList_GET_SIZE(bucket) by 3:
             if PyInt_AsSsize_t(PyList_GET_ITEM(bucket,i))==h:
                 if PyList_GET_ITEM(bucket,i+offset)==<void *>r:
+                    val = <object>PyList_GET_ITEM(bucket,i+2)
+                    print "deletion for",<size_t>h,"with value",<size_t><void*>val
                     del bucket[i:i+3]
                     D._size -= 1
                     break
                 else:
                     break
+        print "last line for",<size_t>h,"with value",<size_t><void*>val

 cdef class TripleDictEraser:
     """
and the examples are
sage: from sage.structure.coerce_dict import MonoDict
sage: M = MonoDict(11)
sage: class A:
....:     def __del__(self):
....:         print "__del__",id(self)
....:
sage: a = A()
sage: prev = a
sage: M = MonoDict(11)
sage: for i in range(1000):
....:     newA = A()
....:     M[prev] = newA
....:     prev = newA
....:
sage: del a
deletion for 91294536 with value 89650384
last line for 91294536 with value 89650384
deletion for 89650384 with value 89650600
last line for 89650384 with value 89650600
deletion for 89650600 with value 89660160
last line for 89650600 with value 89660160
deletion for 89660160 with value 89660232
last line for 89660160 with value 89660232
deletion for 89660232 with value 89660016
last line for 89660232 with value 89660016
...
deletion for 91409944 with value 91410016
last line for 91409944 with value 91410016
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.MonoDictEraser object at 0x54169f0> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <bound method A.__del__ of <__main__.A instance at 0x572ce60>> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <bound method A.__del__ of <__main__.A instance at 0x572ce18>> ignored
Exception RuntimeError: 'maximum recursion depth exceeded' in <bound method A.__del__ of <__main__.A instance at 0x572cdd0>> ignored
__del__ 91409800
__del__ 91409728
__del__ 91409656
__del__ 91409584
__del__ 91409512
...
__del__ 89660160
__del__ 89650600
__del__ 89650384
__del__ 91294536
respectively
sage: from sage.structure.coerce_dict import MonoDict
sage: M = MonoDict(11)
sage: cython("""
....: cdef class A:
....:     cdef __weakref__
....:     def __dealloc__(self):
....:         print "__dealloc__",id(self)
....: """)
....:
sage: a = A()
sage: prev = a
sage: for i in range(1000):
....:     newA = A()
....:     M[prev] = newA
....:     prev = newA
....:
sage: len(M)
1000
sage: del a
__dealloc__ 140403054971016
deletion for 140403054971016 with value 140403054971064
last line for 140403054971016 with value 140403054971064
__dealloc__ 140403054971064
deletion for 140403054971064 with value 140403054971184
last line for 140403054971064 with value 140403054971184
__dealloc__ 140403054971184
deletion for 140403054971184 with value 140403054971160
last line for 140403054971184 with value 140403054971160
__dealloc__ 140403054971160
deletion for 140403054971160 with value 140403054971208
last line for 140403054971160 with value 140403054971208
__dealloc__ 140403054971208
deletion for 140403054971208 with value 140403054971088
last line for 140403054971208 with value 140403054971088
__dealloc__ 140403054971088
deletion for 140403054971088 with value 140403054971112
last line for 140403054971088 with value 140403054971112
__dealloc__ 140403054971112
deletion for 140403054971112 with value 140403054971232
last line for 140403054971112 with value 140403054971232
...
__dealloc__ 91285256
deletion for 91285256 with value 91285280
last line for 91285256 with value 91285280
__dealloc__ 91285280
deletion for 91285280 with value 91285304
last line for 91285280 with value 91285304
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <_home_simon__sage_temp_linux_sqwp_site_5429_tmp_oooXg4_spyx_0.A object at 0x570e738> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.MonoDictEraser object at 0x560f440> ignored
Conclusion: I thought I understood what was happening, but these two examples prove me wrong.
comment:103 in reply to: ↑ 102 ; followup: ↓ 105 Changed 4 years ago by
Hi Simon,
Thanks for your ongoing investigation! I just created #15070 for it. I guess the discussion might as well continue there.
Cheers,
Nicolas
comment:104 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15070
comment:105 in reply to: ↑ 103 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15070 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069
comment:106 followup: ↓ 107 Changed 4 years ago by
 Work issues changed from Reduce startup time by 5%. Avoid "recursion depth exceeded (ignored)". to Reduce startup time by 5%.
Thanks so much Simon and Volker for #15069! Now we just have to handle the startup time.
I am now kicking the patchbot to see the result of the tests.
comment:107 in reply to: ↑ 106 Changed 4 years ago by
Any further progress on this patch? Many other patches are waiting to get in and depend on it :) !
Anne
comment:108 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094
There's a very minor dependency from #15094 in qsym.py
.
comment:109 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688
There is an additional dependency from #11688.
comment:110 Changed 4 years ago by
 Status changed from needs_work to needs_review
 Work issues Reduce startup time by 5%. deleted
I thereby pledge to pay off my startup time tax by working on lazily importing more combinat stuff. See #15293 for a proof of concept showing that one can easily gain back the current 5% we might lose here. I have thus removed the "needs work" status for this ticket.
The only remaining issue is the dependency upon #9107, which we can either finalize (I hope to get to it by the end of the week), or trivially work around.
Simon, pleeeeeeeaase, do you have a chance to get back to the review of this ticket? So many things depend on it ...
comment:111 followup: ↓ 115 Changed 4 years ago by
IMHO it's ridiculous to hold up this ticket for 2 months because of a tiny startup time increase. Just get this merged, especially if lots of other stuff depends on it.
comment:112 Changed 4 years ago by
This needs to be rebased on 5.13.beta0, see the latest report of patchbot
comment:113 Changed 4 years ago by
Thanks for the notice! I am about to upload the latest version on the combinat server which Travis rebased recently. That is likely to do the job.
Cheers,
Nicolas
comment:114 Changed 4 years ago by
There are some failing doctests, see the patchbot report.
comment:115 in reply to: ↑ 111 Changed 4 years ago by
Replying to vbraun:
IMHO it's ridiculous to hold up this ticket for 2 months because of a tiny startup time increase. Just get this merged, especially if lots of other stuff depends on it.
Yes, especially with the Sage Days coming up in November it would be good to get this ticket merged!!! A lot of stuff depends on it.
comment:116 Changed 4 years ago by
Hello,
here is a patch correcting the failing doctests. Nicolas, could you please check that I have not made a mistake? And maybe qfold it if you want?
I have been forced to remove (or rather disconnect) the example of graded module with basis. But this can wait for another ticket.
There remains a strange failing doctest in sagenb.notebook.interact.list_of_first_n. I am not able to solve it. Could somebody else help?
apply trac_10963more_functorial_constructionsnt.patch trac_10963_doctest_correctionfc.patch
comment:117 Changed 4 years ago by
The patches now pass the tests.
Is there something else to do before this can be positively reviewed?
comment:118 Changed 4 years ago by
I am sorry for my long silence. The current 8% of regression in the startup time is, of course, not good. But, as Nicolas has already said, he might be able to improve the startup time in other places.
Also, I don't think that the suggested solution is final. In some posts a long time ago, I indicated some further ideas. However, the suggested solution is certainly better (in the sense of "will scale better") than the status quo. Hence, it would be silly to wait longer just because one could perhaps solve the issue with a different (and not necessarily better) approach.
I have studied the code in the past, and it made sense to me. The doctests work. Startup time regression has been taken care of in a different ticket. And further improvements (performance and conception) may be done in future. So, it seems good to go from my perspective.
Since I had such a long break, I think it would be fair that one declares Frédéric's patch as review patch, he adds himself to the list of reviewers, and changes it to a positive review (unless he finds further problems, of course).
comment:119 Changed 4 years ago by
 Reviewers changed from Simon King to Simon King, Frédéric Chapoton
Ok, then I agree to give a positive review, although I have not looked at the code.
But first Nicolas has to refresh his patch (there is a HUNK).
comment:120 Changed 4 years ago by
To make the release manager comfortable: I state that the code and the maths behind it look good, and Frédéric please make sure that the patch is fine wrt. the current version of Sage.
comment:121 Changed 4 years ago by
The hunk comes from #12453 in the combinat queue (it does some refactoring and cleanup), which doesn't quite commute past this, but it can be made to, i.e. there is no functional dependency AFAIK. The lazy/cunning part of me would say, "Let's review #12453 first; I don't think it's too difficult of a review job (it looks bigger because of the reorganization)," but I don't want to hold this up if you don't agree with that sentiment.
comment:122 Changed 4 years ago by
Well, I guess one might then have to wait longer (days? months? years?) unless an enthusiastic reviewer for #12453 enters the scene right now.
comment:123 Changed 4 years ago by
Yippee! Thanks for the (almost) positive review!
I am going to double check Frederic's patch and the HUNK in the train later this morning.
Cheers,
Changed 4 years ago by
comment:124 Changed 4 years ago by
I checked Frederic's doctest fixes. Some of them were the fixes expected while waiting for #9107. Thanks for handling those! The others actually stemmed from a regression which was revealed by the commutation with #11688: GradedModules(QQ) was constructed as a join category. I fixed that (well, it's more like a workaround, but that will do for now) in :attachment:trac_10963more_functorial_constructionsgradedmodulesfixnt.patch, and reuploaded Frederic's patch without the corresponding hunks.
As for the commutation issue in the queue with #12453: this does not affect this ticket. It just requires rebasing #12453 in the queue, which I am about to do now, and rerunning the tests. Note that this will be for 5.12, since I don't have the latest beta installed.
Please have a quick look to the updated patches, and set the ticket to positive review if you are happy!
Time permitting, I'll have a look at #9107, in case we could just get it done and avoid Frederic's patch altogether; but let's not wait for that.
comment:125 Changed 4 years ago by
The patches in this ticket are now high up in the queue. I rebased #12453 accordingly (trivial).
comment:126 Changed 4 years ago by
 Description modified (diff)
Hello Nicolas,
On 5.13.beta1, one got
applying trac_10963more_functorial_constructionsnt.patch
patching file sage/combinat/integer_vector_weighted.py
Hunk #1 succeeded at 122 with fuzz 2 (offset 141 lines).
So I have refreshed your patch and uploaded a version with no hunks.
for the patchbot:
apply trac_10963more_functorial_constructionsntrefreshed.patch trac_10963_doctest_correctionfc.patch trac_10963more_functorial_constructionsgradedmodulesfixnt.patch
comment:127 Changed 4 years ago by
 Status changed from needs_review to positive_review
comment:128 followup: ↓ 130 Changed 4 years ago by
I don't remember why I am the owner of this patch, but let me congratulate that you finally got it positively reviewed!
My main motivation for implementing stuff in Sage was that I wanted to work on #11187. But I accidentally built it on top of this patch, so I never finished it, and it never made it into main Sage. I hope to find the motivation for working it out again!
Best, Christian
Changed 4 years ago by
comment:129 Changed 4 years ago by
Sorry to do a change on a patch with positive review. trac_10963more_functorial_constructionsgradedmodulesfixnt.patch now includes two trivial updates to failing doctests it caused in c3_controlled.pyx. All tests now pass on my machine, so I allow myself to leave it on positive review.
comment:130 in reply to: ↑ 128 Changed 4 years ago by
Replying to stumpc5:
I don't remember why I am the owner of this patch, but let me congratulate that you finally got it positively reviewed!
Thanks!
My main motivation for implementing stuff in Sage was that I wanted to work on #11187. But I accidentally built it on top of this patch, so I never finished it, and it never made it into main Sage. I hope to find the motivation for working it out again!
Yeah, I am sorry for all the good features that have been postponed forever due to this patch; I paid the price myself and know how frustrating this can be! But now I am looking forward to your coming back to this topic!
Cheers,
Nicolas
comment:131 Changed 4 years ago by
-class BialgebrasWithBasis(Category_over_base_ring):
+def BialgebrasWithBasis(base_ring):
     """
-    The category of bialgebras with a distinguished basis
+    The category of finite dimensional coalgebras with a distinguished basis
Shouldn't the "coalgebras" still be "bialgebras"?
Similarly:
-class GradedBialgebras(Category_over_base_ring):
+def GradedBialgebras(base_ring):
     """
-    The category of bialgebras with several bases
+    The category of finite dimensional coalgebras with a distinguished basis
Also, not sure where the "finite dimensional" has come from...
comment:132 Changed 4 years ago by
 Milestone set to sage5.13
 Status changed from positive_review to needs_review
Various changes/comments after positive_review => please review
comment:133 Changed 4 years ago by
Thanks Darij for spotting this. I uploaded a new version of the patch that fixes those and a couple others. Here is the patch diff:
@@ -2099,8 +2102,7 @@ diff --git a/sage/categories/bialgebras_
  class BialgebrasWithBasis(Category_over_base_ring):
 +def BialgebrasWithBasis(base_ring):
      """
--    The category of bialgebras with a distinguished basis
-+    The category of finite dimensional coalgebras with a distinguished basis
+     The category of bialgebras with a distinguished basis
 
      EXAMPLES::
@@ -7273,8 +7275,7 @@ diff --git a/sage/categories/finite_dime
  class FiniteDimensionalBialgebrasWithBasis(Category_over_base_ring):
 +def FiniteDimensionalBialgebrasWithBasis(base_ring):
      """
--    The category of finite dimensional bialgebras with a distinguished basis
-+    The category of finite dimensional coalgebras with a distinguished basis
+     The category of finite dimensional bialgebras with a distinguished basis
 
      EXAMPLES::
@@ -8366,7 +8367,7 @@ diff --git a/sage/categories/graded_bial
 +def GradedBialgebras(base_ring):
      """
 -    The category of bialgebras with several bases
-+    The category of finite dimensional coalgebras with a distinguished basis
++    The category of graded bialgebras
 
      EXAMPLES::
...skipping...
++    The category of graded coalgebras
 
      EXAMPLES::
@@ -8608,7 +8607,7 @@ diff --git a/sage/categories/graded_hopf
 +def GradedHopfAlgebras(base_ring):
      """
 -    The category of Graded Hopf algebras with several bases
-+    The category of graded coalgebras with a distinguished basis
++    The category of graded Hopf algebras
 
      EXAMPLES::
@@ -15116,7 +15115,7 @@ diff --git a/sage/categories/with_realiz
 diff --git a/sage/combinat/all.py b/sage/combinat/all.py
 --- a/sage/combinat/all.py
 +++ b/sage/combinat/all.py
-@@ -135,6 +135,8 @@ from cluster_algebra_quiver.all import *
+@@ -133,6 +133,8 @@ from cluster_algebra_quiver.all import *
  #import lrcalc
@@ -15205,15 +15204,15 @@ diff --git a/sage/combinat/free_module.p
 diff --git a/sage/combinat/integer_vector_weighted.py b/sage/combinat/integer_vector_weighted.py
 --- a/sage/combinat/integer_vector_weighted.py
 +++ b/sage/combinat/integer_vector_weighted.py
-@@ -261,7 +261,7 @@ class WeightedIntegerVectors_all(Disjoin
--        sage: C = WeightedIntegerVectors([2,1,3])
+@@ -120,7 +120,7 @@ class WeightedIntegerVectors_all(Disjoin
+         sage: C.__class__
+         <class 'sage.combinat.integer_vector_weighted.WeightedIntegerVectors_all_with_category'>
          sage: C.category()
 -        Join of Category of infinite enumerated sets and Category of sets with grading
-+        Join of Category of sets with grading and Category of infinite enumerated sets
++        Join of Category of sets with grading and Category of infinite enumerated sets
          sage: TestSuite(C).run()
          """
 -        self._weights = weight
 +        self._weights = weights
comment:134 Changed 4 years ago by
 Description modified (diff)
comment:135 followup: ↓ 137 Changed 4 years ago by
All long tests passed for me on 5.12.beta1, except for one of those random failures:
sage -t --long devel/sage/sage/schemes/toric/weierstrass_covering.py
**********************************************************************
File "devel/sage/sage/schemes/toric/weierstrass_covering.py", line 72, in sage.schemes.toric.weierstrass_covering
Failed example:
    P2_112 = toric_varieties.P2_112()
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <function remove at 0x194e398> ignored
As for the patchbot failure, it's due to a wrong order in the application of the patches. Does anyone know how to fix this?
comment:136 Changed 4 years ago by
comment:137 in reply to: ↑ 135 Changed 4 years ago by
Replying to nthiery:
All long tests passed for me on 5.12.beta1, except for one of those random failures:
sage -t --long devel/sage/sage/schemes/toric/weierstrass_covering.py
**********************************************************************
File "devel/sage/sage/schemes/toric/weierstrass_covering.py", line 72, in sage.schemes.toric.weierstrass_covering
Failed example:
    P2_112 = toric_varieties.P2_112()
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <function remove at 0x194e398> ignored
That's bad and needs to be fixed.
Can you test whether #13394 fixes it? I am pretty confident that it does.
As for the patchbot failure, it's due to a wrong order in the application of the patches. Does anyone know how to fix this?
Give the order by saying "apply: patch1 patch2 patch3" in some comment, and then kick the patchbot?
comment:138 followup: ↓ 139 Changed 4 years ago by
Could you export a mercurial patch for #13394 for me? I only have an old version of the sage-git dev tools at hand right now ...
Apply: :attachment:trac_10963-more_functorial_constructions-nt.patch :attachment:trac_10963_doctest_correction-fc.patch :attachment:trac_10963-more_functorial_constructions-graded-modules-fix-nt.patch
(redundant with the description ...)
comment:139 in reply to: ↑ 138 Changed 4 years ago by
Replying to nthiery:
Could you export a mercurial patch for #13394 for me? I only have an old version of the sage-git dev tools at hand right now ...
See trac13394.patch. It is a diff patch, hence, it does not contain the commit messages of the separate commits.
Hence, you may try to apply it, and then do what the patchbot should do:
apply trac_10963-more_functorial_constructions-nt.patch trac_10963_doctest_correction-fc.patch trac_10963-more_functorial_constructions-graded-modules-fix-nt.patch
comment:140 Changed 4 years ago by
PS: Note the different folder layout: in the diff patch for #13394, we have $SAGE_ROOT/src/sage. So, it could be that you need to tell mercurial how to apply my diff patch...
Anyway, it seems that I can apply your patch to my git repository.
comment:141 Changed 4 years ago by
Spoke too soon. It appears that I have not been able to import your patches, even though git did not give errors.
comment:142 Changed 4 years ago by
Too bad. I needed to change filenames a lot, but then I still obtain
error: patch application failed: src/doc/en/reference/categories/index.rst:127
error: src/doc/en/reference/categories/index.rst: patch could not be applied
error: patch application failed: src/sage/algebras/group_algebra_new.py:143
error: src/sage/algebras/group_algebra_new.py: patch could not be applied
error: patch application failed: src/sage/categories/algebras.py:67
error: src/sage/categories/algebras.py: patch could not be applied
error: patch application failed: src/sage/categories/algebras_with_basis.py:122
error: src/sage/categories/algebras_with_basis.py: patch could not be applied
error: patch application failed: src/sage/categories/category.py:106
error: src/sage/categories/category.py: patch could not be applied
error: patch application failed: src/sage/categories/coalgebras_with_basis.py:3
error: src/sage/categories/coalgebras_with_basis.py: patch could not be applied
error: patch application failed: src/sage/categories/commutative_rings.py:5
error: src/sage/categories/commutative_rings.py: patch could not be applied
error: patch application failed: src/sage/categories/groups.py:430
error: src/sage/categories/groups.py: patch could not be applied
error: patch application failed: src/sage/categories/infinite_enumerated_sets.py:31
error: src/sage/categories/infinite_enumerated_sets.py: patch could not be applied
error: patch application failed: src/sage/categories/rngs.py:3
error: src/sage/categories/rngs.py: patch could not be applied
error: patch application failed: src/sage/combinat/debruijn_sequence.pyx:287
error: src/sage/combinat/debruijn_sequence.pyx: patch could not be applied
error: patch application failed: src/sage/rings/finite_rings/integer_mod_ring.py:259
error: src/sage/rings/finite_rings/integer_mod_ring.py: patch could not be applied
error: patch application failed: src/sage/schemes/generic/morphism.py:268
error: src/sage/schemes/generic/morphism.py: patch could not be applied
error: patch application failed: src/sage/structure/parent.pyx:293
error: src/sage/structure/parent.pyx: patch could not be applied

(error messages translated from German)
Need to see what is happening.
comment:143 Changed 4 years ago by
The following did not apply (clearly, because it mentions weakref, which is not used any longer because of #13394).
diff a/src/sage/categories/category.py b/src/sage/categories/category.py (rejected hunks)
@@ -106,66 +107,13 @@
 from sage.structure.sage_object import SageObject
 from sage.structure.unique_representation import UniqueRepresentation
 from sage.structure.dynamic_class import DynamicMetaclass, dynamic_class
-from weakref import WeakValueDictionary
-
-_join_cache = WeakValueDictionary()
-
-def _join(categories, as_list):
-    """
-    This is an auxiliary function for :meth:`Category.join`
-
-    INPUT:
-
-    - ``categories``: A tuple (no list) of categories.
-    - ``as_list`` (boolean): Whether or not the result should be represented as a list.
-
-    EXAMPLES::
-
-        sage: Category.join((Groups(), CommutativeAdditiveMonoids())) # indirect doctest
-        Join of Category of groups and Category of commutative additive monoids
-        sage: Category.join((Modules(ZZ), FiniteFields()), as_list=True)
-        [Category of finite fields, Category of modules over Integer Ring]
-    """
-    # Since Objects() is the top category, it is the neutral element of join
-    if len(categories) == 0:
-        from objects import Objects
-        return Objects()
-
-    if not as_list:
-        try:
-            return _join_cache[categories]
-        except KeyError:
-            pass
-
-    # Ensure associativity by flattening JoinCategory's
-    # Invariant: the super categories of a JoinCategory are not JoinCategories themselves
-    categories = sum( (tuple(category._super_categories) if isinstance(category, JoinCategory) else (category,)
-                      for category in categories), ())
-
-    # canonicalize, by removing redundant categories which are super
-    # categories of others, and by sorting
-    result = ()
-    for category in categories:
-        if any(cat.is_subcategory(category) for cat in result):
-            continue
-        result = tuple( cat for cat in result if not category.is_subcategory(cat) ) + (category,)
-    result = tuple(sorted(result, key = category_sort_key, reverse=True))
-    if as_list:
-        return list(result)
-    if len(result) == 1:
-        out = _join_cache[categories] = result[0]
-    else:
-        out = _join_cache[categories] = JoinCategory(result)
-    return out
-
 class Category(UniqueRepresentation, SageObject):
     r"""
     The base class for modeling mathematical categories, like for example:
-    - Groups(): the category of groups
-    - EuclideanRings(): the category of euclidean rings
-    - VectorSpaces(QQ): the category of vector spaces over the field of rational
+    - ``Groups()``: the category of groups
+    - ``EuclideanDomains()``: the category of euclidean rings
+    - ``VectorSpaces(QQ)``: the category of vector spaces over the field of rational
     See :mod:`sage.categories.primer` for an introduction to categories in Sage,
     their relevance, purpose and usage. The
Do I understand correctly that you want to remove the _join_cache weak value dictionary and the _join function entirely?
comment:144 Changed 4 years ago by
To be precise: I notice that there is a module-level _join_cache and a _join_cache that appears to be an attribute of Category. You want to remove the module-level cache, but not the class-level cache, right?
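For readers following along, the redundancy-removal step that _join performs (dropping any category that is a super-category of another member of the join) can be sketched in a few lines of plain Python. The Cat class and canonicalize_join function below are illustrative toys, not Sage's actual classes:

```python
class Cat:
    """Toy category: a name plus the set of its (strict) super-categories."""
    def __init__(self, name, supers=()):
        self.name = name
        self.supers = frozenset(supers)

    def is_subcategory(self, other):
        # A category is a subcategory of itself and of all its supers.
        return other is self or other in self.supers


def canonicalize_join(categories):
    """Drop categories implied by (i.e. super-categories of) other members."""
    result = []
    for c in categories:
        if any(r.is_subcategory(c) for r in result):
            continue  # c is redundant: something finer is already present
        # conversely, drop members that c refines
        result = [r for r in result if not c.is_subcategory(r)] + [c]
    return result


sets_ = Cat("Sets")
monoids = Cat("Monoids", supers=[sets_])
groups = Cat("Groups", supers=[monoids, sets_])

joined = canonicalize_join([sets_, groups, monoids])
assert [c.name for c in joined] == ["Groups"]
```

In the real code the same idea runs over tuples of genuine category objects and the result is additionally sorted and cached; the toy only shows the pruning logic.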
comment:145 followup: ↓ 146 Changed 4 years ago by
I think I managed to turn your patches into a git branch on top of #13394. But I'll run tests before I push it to Trac.
comment:146 in reply to: ↑ 145 Changed 4 years ago by
Replying to SimonKing:
I think I managed to turn your patches into a git branch on top of #13394. But I'll run tests before I push it to Trac.
Meanwhile I have put a "proper" mercurial patch to #13394, so that
 there might be a chance to get this into Sage before version 6.0
 you can easily get the patch from #13394 and can test if everything works. I did test, and all tests pass for me.
Question: Shall I create a git branch for this ticket and push it? Or do you want to keep it in the mercurial world (which means it might be mergeable before 6.0)?
Changed 4 years ago by
comment:147 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394
I just ran all tests on sage-5.12 with #13394 applied, and they all passed! Yippee! Thanks Simon! I am running them on 5.13.beta1.
I have reuploaded the main patch here after the trivial rebase upon #13394 you mentioned (indeed, _join_cache is now only an attribute of Category), and added #13394 as a dependency. If the patchbot goes back to green, and you confirm that you agree with :attachment:trac_10963-more_functorial_constructions-graded-modules-fix-nt.patch, then this ticket can go back to positive review, I guess.
Cheers,
Nicolas
comment:148 Changed 4 years ago by
All tests passed on 5.13.beta1 with the following patches applied:
trac_15237-fix_crystals_graphviz-ts.patch
trac_9290-geometric_coxeter_groups-ts.patch
trac9290-review.patch
trac_15195-family_cardinality-ts.patch
trac_15309-sga_alg_gens_fix-ts.patch
trac_10358-oeis-tm_rebase.patch
trac_10358-oeis-review_1-tm.patch
trac_10358-oeis-review_2-tm.patch
trac_10358-oeis-review_3-nc-tm.patch
trac13394-weak_value_dictionary.patch
trac_10963-more_functorial_constructions-nt.patch
trac_10963-more_functorial_constructions-graded-modules-fix-nt.patch
trac_10963_doctest_correction-fc.patch
comment:149 followup: ↓ 150 Changed 4 years ago by
comment:150 in reply to: ↑ 149 ; followup: ↓ 151 Changed 4 years ago by
comment:151 in reply to: ↑ 150 Changed 4 years ago by
Replying to SimonKing:
Replying to nthiery:
Note: I haven't pushed the latest version of the patch and #13394 on the sage-combinat queue yet, for #13394 triggers quite a bit of recompilation.
How can this be? #13394 does not touch any pxd file at all.
Indeed; actually you are right, it seemed like a lot but in fact it's just because it was for #10963 and #13394 together; the overhead of #13394 is minor. I am about to push.
Cheers,
comment:152 Changed 4 years ago by
for the patchbots:
appply trac_10963-more_functorial_constructions-nt.patch trac_10963_doctest_correction-fc.patch trac_10963-more_functorial_constructions-graded-modules-fix-nt.patch
comment:153 Changed 4 years ago by
for the patchbots:
apply trac_10963-more_functorial_constructions-nt.patch trac_10963_doctest_correction-fc.patch trac_10963-more_functorial_constructions-graded-modules-fix-nt.patch
comment:154 Changed 4 years ago by
It seems that this patch increases the memory usage in Sage. This used to pass, but now fails:
$ ulimit -v 2300000; ./sage -t --long devel/sage/sage/schemes/elliptic_curves/heegner.py
Running doctests with ID 2013-11-04-21-49-06-b972df36.
Doctesting 1 file.
sage -t --long devel/sage/sage/schemes/elliptic_curves/heegner.py
Process DocTestWorker-1:
Traceback (most recent call last):
  File "/scratch/release/merger/sage-5.13.beta3/local/lib/python/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/scratch/release/merger/sage-5.13.beta3/local/lib/python2.7/site-packages/sage/doctest/forker.py", line 1802, in run
    task(self.options, self.outtmpfile, msgpipe, self.result_queue)
  File "/scratch/release/merger/sage-5.13.beta3/local/lib/python2.7/site-packages/sage/doctest/forker.py", line 2113, in __call__
    result_queue.put(result, False)
  File "/scratch/release/merger/sage-5.13.beta3/local/lib/python/multiprocessing/queues.py", line 107, in put
    self._start_thread()
  File "/scratch/release/merger/sage-5.13.beta3/local/lib/python/multiprocessing/queues.py", line 191, in _start_thread
    self._thread.start()
  File "/scratch/release/merger/sage-5.13.beta3/local/lib/python/threading.py", line 743, in start
    _start_new_thread(self.__bootstrap, ())
error: can't start new thread
Bad exit: 1
**********************************************************************
Tests run before process (pid=22245) failed:
sage: E = EllipticCurve('433a')  ## line 13 ##
sage: P = E.heegner_point(-8,3)  ## line 14 ##
sage: z = P.point_exact(201); z  ## line 15 ##
(-4/3 : -1/27*a - 4/27 : 1)
[...]
sage: E.heegner_index(-8)  ## line 6478 ##
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
**********************************************************************
----------------------------------------------------------------------
sage -t --long devel/sage/sage/schemes/elliptic_curves/heegner.py  # Bad exit: 1
----------------------------------------------------------------------
For me, this doesn't imply needs_work, but somebody should at least confirm that this is normal and expected.
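For anyone wanting to reproduce this kind of check outside the doctester: the reporter's `ulimit -v` cap can be approximated from within Python via `resource.setrlimit`, so a memory regression shows up as a quick MemoryError instead of thrashing. This is a Linux-oriented sketch with arbitrary limit and allocation sizes, not part of the Sage test framework:

```python
import resource

# Cap this process's virtual address space, analogously to the reporter's
# `ulimit -v 2300000` (the shell value is in KiB, setrlimit takes bytes).
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (2_300_000 * 1024, hard))
try:
    blob = bytearray(10 ** 10)  # ~10 GB, far above the ~2.3 GB cap
    outcome = "allocation succeeded"
except MemoryError:
    outcome = "allocation refused under the limit"
finally:
    # restore the original limit so the rest of the process is unaffected
    resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
print(outcome)
```

On platforms where RLIMIT_AS is not enforced (some BSD/macOS setups), the allocation may succeed instead.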
comment:155 Changed 4 years ago by
The lines around the error (line 6478) are
sage: E = EllipticCurve([0, 0, 1, -34874, -2506691])
sage: E.heegner_index(-8)
Traceback (most recent call last):
...
RuntimeError: ...
Hence, we test against a runtime error with unspecified error message. Let us see which, and let us test the memory consumption.
First, with the branch from #13394:
sage: get_memory_usage()
190.30078125
sage: E = EllipticCurve([0, 0, 1, -34874, -2506691])
sage: get_memory_usage()
190.7265625
sage: E.heegner_index(-8)
Unable to compute the rank with certainty (lower bound=0).
This could be because Sha(E/Q)[2] is nontrivial.
Try calling something like two_descent(second_limit=13) on the curve then trying this command again.
You could also try rank with only_use_mwrank=False.
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
...
RuntimeError: Rank not provably correct.
sage: get_memory_usage()
193.0625
Now, with my local branch that I've created for the patch from here:
sage: get_memory_usage()
191.50390625
sage: E = EllipticCurve([0, 0, 1, -34874, -2506691])
sage: get_memory_usage()
191.6328125
sage: E.heegner_index(-8)
...
RuntimeError: Rank not provably correct.
sage: get_memory_usage()
194.25390625
So, it seems that the memory consumption has just increased by a constant amount. Also note that
ulimit -v 2300000; ./sage -t --long src/sage/schemes/elliptic_curves/heegner.py
Running doctests with ID 2013-11-04-23-46-03-6fbd5fa5.
Doctesting 1 file.
sage -t --long src/sage/schemes/elliptic_curves/heegner.py
    [1072 tests, 104.05 s]
----------------------------------------------------------------------
All tests passed!
----------------------------------------------------------------------
Total time for all tests: 106.1 seconds
    cpu time: 97.4 seconds
    cumulative wall time: 104.0 seconds
works for me. I'll try to decrease the memory limit.
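As a side note, a portable way to do this kind of before/after memory comparison in plain Python (analogous to, but not the same as, Sage's `get_memory_usage`) is the stdlib `tracemalloc` module; the workload below is an arbitrary stand-in for the computation being measured:

```python
import tracemalloc

# Snapshot Python-level allocations around a workload to see how much
# memory a code path retains and what its peak was.
tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()

data = [bytes(1000) for _ in range(10_000)]  # ~10 MB retained by `data`

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

grew = current - before
print(f"net growth: {grew} bytes, peak: {peak} bytes")
```

Unlike `get_memory_usage`, this only sees allocations made through Python's allocator, so C-library allocations (e.g. inside PARI/mwrank) would not show up.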
comment:156 followup: ↓ 157 Changed 4 years ago by
Thanks Jeroen for reporting the issue!
Thanks Simon for investigating!
We certainly should investigate this further later on, but as Jeroen mentions it's not critical. Simon, could you double check the little patch :attachment:trac_10963-more_functorial_constructions-graded-modules-fix-nt.patch, and then I guess we can go back to positive review and get this ticket merged!
comment:157 in reply to: ↑ 156 Changed 4 years ago by
Replying to nthiery:
Thanks Jeroen for reporting the issue!
Thanks Simon for investigating!
We certainly should investigate this further later on, but as Jeroen mentions it's not critical. Simon, could you double check the little patch :attachment:trac_10963-more_functorial_constructions-graded-modules-fix-nt.patch, and then I guess we can go back to positive review and get this ticket merged!
Do I understand correctly: This is not a new patch, but replaces an old patch? How does it differ?
comment:158 followup: ↓ 160 Changed 4 years ago by
Again, if I understand correctly, I have tested stuff with the three patches from here applied (after moving them to git). My git log starting with the master branch is this:
*   660f126 (HEAD, ticket/10963) Merge branch 'ticket/13394' into ticket/10963, because one commit has been added there (4 days ago) <Simon King>
|\
| * 11bd210 Remove some trailing whitespace (4 days ago) <Simon King>
* | d0ffcfa Trac #10963: More functorial constructions (fix graded modules with basis) (4 days ago) <Nicolas M. Thiery>
* | 1d8f75a Trac #10963: doctests corrections (4 days ago) <Frederic Chapoton>
* | b700f98 Trac #10963: More functorial constructions (4 days ago) <Nicolas M. Thiery>
|/
* 1a12ce6 Fix some typos. Better tests for WeakValueDict iteration guard. (5 days ago) <Simon King>
* e60890e Add direct and indirect stresstests for the weak value callbacks (6 days ago) <Simon King>
* 851cc95 Avoid some pointer casts in WeakValueDict callbacks (6 days ago) <Simon King>
* 246518f Use <dict>'s internals in WeakValueDictionary and do not reinvent the bucket. (6 days ago) <Simon King>
* fab0ed4 Use WeakValueDict's iteration guard more consequently (7 days ago) <Simon King>
* e4adaeb Implement copy and deepcopy for WeakValueDictionary (8 days ago) <Simon King>
* 70a7b8a Guard WeakValueDictionary against deletions during iteration (8 days ago) <Simon King>
* c3dba98 Replace weakref.WeakValueDictionary by sage.misc.weak_dict.WeakValueDictionary (9 days ago) <Simon King>
* 17b0236 Documentation for WeakValueDictionary (9 days ago) <Simon King>
* f0ed60f Initial version of a safer and faster WeakValueDictionary (9 days ago) <Simon King>
* 0d00bf7 (trac/master, origin/master, origin/HEAD, master) Merge branch 'build_system' (4 weeks ago) <R. Andrew Ohana>
So, if the "graded module" patch has not changed in the past 4 days, then I did successfully test it ("make ptest"). Also, the changes seem reasonable to me.
Jeroen, would this be good enough (from your perspective) to put it back to positive review? I cannot reproduce the memory problems you mentioned.
comment:159 followups: ↓ 161 ↓ 162 Changed 4 years ago by
 Status changed from needs_review to needs_work
This happens sometimes:
sage -t --long devel/sage/sage/rings/function_field/function_field_element.pyx
**********************************************************************
File "devel/sage/sage/rings/function_field/function_field_element.pyx", line 714, in sage.rings.function_field.function_field_element.FunctionFieldElement_rational.inverse_mod
Failed example:
    O = K.maximal_order(); I = O.ideal(x^2+1)
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x2421410> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x21e64d0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x21e64d0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
**********************************************************************
and
sage -t --long devel/sage/doc/en/constructions/elliptic_curves.rst
**********************************************************************
File "devel/sage/doc/en/constructions/elliptic_curves.rst", line 72, in doc.en.constructions.elliptic_curves
Failed example:
    G = E.abelian_group()
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x8c71d0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x8c71d0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x8c71d0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x1781a98> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x21e64d0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x21e64d0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x8c71d0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored
**********************************************************************
comment:160 in reply to: ↑ 158 Changed 4 years ago by
Replying to SimonKing:
So, if the "graded module" patch did not change since 4 days, then I did successfully test it ("make ptest"). Also, the changes seem reasonable to me.
Confirmed: it's the original patch I uploaded 8 days ago.
comment:161 in reply to: ↑ 159 Changed 4 years ago by
Replying to jdemeyer:
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x21e64d0> ignored
Really?? Blimey! The callback of the new WeakValueDictionary was supposed to be absolutely safe against this kind of problem. So, we need work...
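For context, the invariant such a callback has to maintain is the one the stdlib's weakref.WeakValueDictionary implements: once the last strong reference to a value dies, the callback must remove the stale entry. A minimal illustration using the stdlib version (not Sage's reimplementation from #13394):

```python
import gc
import weakref


class V:
    """Values must support weak references (plain object subclasses do)."""
    pass


cache = weakref.WeakValueDictionary()
v = V()
cache["key"] = v
assert cache["key"] is v       # entry is visible while v is strongly referenced

del v                           # drop the last strong reference
gc.collect()                    # make collection deterministic for the check
assert "key" not in cache       # the weakref callback removed the stale entry
```

The failures above show that this callback itself can be invoked while the interpreter is already at its recursion limit, which is what makes writing a robust replacement tricky.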
comment:162 in reply to: ↑ 159 Changed 4 years ago by
Replying to jdemeyer:
This happens sometimes:
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x17814b0> ignored Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x2421410> ignored
Looks similar to #15069, so the most likely scenario is that there is a very complicated data structure that gets garbage collected and that the decref of something initiates a chain of subsequent decrefs that is more than 1000 deep.
It seems there are unresolved issues in python with this stuff. See http://bugs.python.org/issue483469 for an even worse (segmentation fault inducing!) problem with __del__. It looks like the python "maximum recursion depth" is avoided there via a trick similar to #15069, leading to a C stack overflow as a result (hence the harder crash). Indeed:

sage: class A: pass
sage: a = A(); prev = a
sage: from sage.structure.coerce_dict import MonoDict
sage: M = MonoDict(11)
sage: for i in range(10^5): newA = A(); M[prev] = newA; prev = newA
sage: del a
Segmentation fault

(the value 10^5 may need adjustment, depending on your C stack), showing that with the fix on #15069 we only postpone the problem by some orders of magnitude, and get a worse problem instead.
I suspect we're hitting the same problem here (note that for a WeakValueDictionary we have to chain in the other direction):

sage: a = A(); prev = a
sage: M = WeakValueDictionary()
sage: for i in range(10^3+10): newA = A(); M[newA] = prev; prev = newA
sage: del a
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x6a527d0> ignored
This problem goes away if we instead define

sage: class A(object): pass

Probably, old-style objects do not participate in the "trashcan" but new-style objects do (see #13901 and Cython ticket #797; we need this on Cython classes too), which flattens call stacks during deallocation. The problem also doesn't occur with weakref.WeakValueDictionary, probably also because there are sufficiently many general python structures involved to let the trashcan kick in. Oddly enough, replacing object above by SageObject or Parent seems to also work, so the scenario we're running into is probably not exactly what is described here.
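For what it's worth, the trashcan behaviour described above can be checked in plain Python 3 (where every class is new-style). A minimal sketch, not Sage-specific:

```python
# A deep chain of instance deallocations: naively, freeing the head would
# recurse ~10^6 C frames deep, since each Node's dealloc drops the last
# reference to the next Node. CPython's "trashcan" mechanism flattens this
# into an iterative loop, so the deletion completes without a C-stack
# overflow.

class Node:
    __slots__ = ("next",)

    def __init__(self, nxt):
        self.next = nxt

def build_and_drop(n):
    """Build a chain of n Nodes, then drop the only reference to it."""
    head = None
    for _ in range(n):
        head = Node(head)
    del head  # the dealloc cascade runs here; the trashcan bounds C recursion
    return True

ok = build_and_drop(10**6)
print("survived:", ok)
```

If the trashcan did not kick in, this would crash the interpreter rather than raise a catchable Python exception, which is exactly the failure mode discussed for MonoDict above.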
comment:163 Changed 4 years ago by
 Cc zabrocki added
comment:164 Changed 4 years ago by
I must say that the error occurred only once; I haven't been able to reproduce it.
comment:165 followup: ↓ 194 Changed 4 years ago by
On a different machine:
sage -t -long devel/sage/sage/categories/sets_cat.py
**********************************************************************
File "devel/sage/sage/categories/sets_cat.py", line 188, in sage.categories.sets_cat.Sets
Failed example:
    TestSuite(Sets()).run()
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x26531d0> ignored
**********************************************************************
comment:166 Changed 4 years ago by
There is a way to "borrow" the trashcan: make sure that some standard python container type is involved in the chain of deletions. For sage.misc.weak_dict this can be done by modifying del_dictitem_by_exact_value to return the key (so it's more of a "pop_dictitem") and then in the callback do:

...
v = (del_dictitem_by_exact_value(<PyDictObject *>cself, <PyObject *>r, r.key),)
del v
...

i.e., delete a tuple containing the key rather than the key by itself. The tuple deallocation routine does participate in the trashcan. Drawbacks:
 this is a little slower
 we're doing a memory allocation (creation of a tuple) in a weakref callback. That's not particularly forbidden, but it can trigger a GC.
We could do something similar in MonoDictEraser
etc. to properly solve #15069.
Again, the real solution is for cython to participate in the trashcan.
comment:167 Changed 4 years ago by
 Branch set to public/ticket/10963
 Commit set to 362fd5e462adea5860d52fd99db94a2623044d89
comment:168 Changed 4 years ago by
 Keywords days54 added
comment:169 followup: ↓ 171 Changed 4 years ago by
How about we just create our own trashcan for weakref callbacks?
cdef bool trashcan_closed = True
cdef object trashcan = []

def remove(wr):
    if trashcan_closed:
        trashcan_closed = False
        trashcan = []  # empty trash, possible call to remove()
        trashcan_closed = False
    trashcan.append(wr.subobject)
    trashcan_close = True
Limits Python and C recursion depth to 2 frames...
comment:170 Changed 4 years ago by
darij@travis-virtualbox:~/gitsage/sage-5.13.beta1$ git pull origin public/ticket/10963
Enter passphrase for key '/home/darij/.ssh/id_rsa':
From trac.sagemath.org:sage
 * branch            public/ticket/10963 -> FETCH_HEAD
Auto-merging src/sage/structure/parent.pyx
Auto-merging src/sage/misc/misc.py
Auto-merging src/sage/combinat/posets/posets.py
Auto-merging src/sage/combinat/free_module.py
Auto-merging src/sage/combinat/all.py
Auto-merging src/sage/categories/posets.py
Auto-merging src/sage/categories/finite_posets.py
Auto-merging src/sage/categories/category.py
CONFLICT (content): Merge conflict in src/sage/categories/category.py
Automatic merge failed; fix conflicts and then commit the result.
darij@travis-virtualbox:~/gitsage/sage-5.13.beta1$

From a (vain) attempt to merge this I can tell, in agreement with Simon, that meld *is* confusing. Too bad kdiff3 is buggy...
comment:171 in reply to: ↑ 169 ; followup: ↓ 174 Changed 4 years ago by
Replying to vbraun:
How about we just create our own trashcan for weakref callbacks?
cdef bool trashcan_closed = True
cdef object trashcan = []

def remove(wr):
    if trashcan_closed:
        trashcan_closed = False
        trashcan = []  # empty trash, possible call to remove()
        trashcan_closed = False
    trashcan.append(wr.subobject)
    trashcan_close = True

Limits Python and C recursion depth to 2 frames...
It's easier than that. You can take the same approach as in #15367:
def remove(wr):
    key_tmp = [key under which wr occurs in dict]
    remove (key:wr) entry from dict
    del key_tmp
Instead of deleting a bare key, we're deleting a list containing the key. Since python lists participate in the trashcan, we get to borrow the trashcan count from there.
There's a slight cost to this approach: we're allocating a list only to delete it, but that is a very small cost indeed (and a cost we'd incur with our home-rolled trashcan too), and we still get to benefit from all the other performance tunings that have gone into python's trashcan.
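In plain Python terms (the Sage version lives in Cython inside sage.misc.weak_dict; this sketch just illustrates the shape of the trick):

```python
# Sketch of the "delete a container instead of the bare object" trick:
# the last strong reference to an object is dropped by deallocating a list
# that holds it. List (and tuple) deallocation goes through CPython's
# trashcan, so a deep chain of follow-on deallocations is flattened
# instead of recursing on the C stack.
import weakref

class Thing:
    pass

t = Thing()
wr = weakref.ref(t)  # lets us observe when t is actually freed

tmp = [t]   # the list co-owns t
del t       # now the list holds the only strong reference
del tmp     # t is freed inside the list's dealloc, which uses the trashcan

print("collected:", wr() is None)
```

On CPython (reference counting) the object is freed immediately at `del tmp`, so the weakref reports it as collected; the point is only *where* the final decref happens, namely inside a container deallocation that participates in the trashcan.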
comment:172 Changed 4 years ago by
 Commit changed from 362fd5e462adea5860d52fd99db94a2623044d89 to 80d55fe6f2c1ed2496a02cab4ae0db4cb31a7b06
Branch pushed to git repo; I updated commit sha1. New commits:
80d55fe  merging sage/categories/category.py 
comment:173 Changed 4 years ago by
comment:174 in reply to: ↑ 171 ; followup: ↓ 180 Changed 4 years ago by
Replying to nbruin:
Replying to vbraun:
How about we just create our own trashcan for weakref callbacks? ...
It's easier than that. You can take the same approach as in #15367: .. Instead of deleting a bare key, we're deleting a list containing the key. Since python lists participate in the trashcan, we get to borrow the trashcan count from there.
There's a slight cost to this approach: we're allocating a list only to delete it, but that is a very small cost indeed (and a cost we'd incur with our homerolled trashcan too), and we still get to benefit from all the other performance tunings that have gone into python's trashcan.
I am glad to hear that there seems to be an easy solution. I can't wait to see it implemented, tested, reviewed, merged! And then see #10963 finally merged!
Whoever participates in this will get a triple beer from me (chocolates work too :)) at the next occasion! Who's in?
Cheers,
Nicolas
comment:175 Changed 4 years ago by
I don't understand git...
So I've typed
git diff -c 362fd5e4..80d55fe | gedit

to check my merge again. It gives a lot of changes (of course, the update from beta2 to beta4), including this:
diff --git a/src/sage/categories/category.py b/src/sage/categories/category.py
index 69373b7..732c233 100644
--- a/src/sage/categories/category.py
+++ b/src/sage/categories/category.py
@@ -108,6 +108,60 @@
 from sage.structure.sage_object import SageObject
 from sage.structure.unique_representation import UniqueRepresentation
 from sage.structure.dynamic_class import DynamicMetaclass, dynamic_class
+import sage.misc.weak_dict
+from sage.misc.weak_dict import WeakValueDictionary
+_join_cache = WeakValueDictionary()
+
+def _join(categories, as_list):
+    """
+    This is an auxiliary function for :meth:`Category.join`
+
+    INPUT:
+
+    - ``categories``: A tuple (no list) of categories.
+    - ``as_list`` (boolean): Whether or not the result should be represented as a list.
+
+    EXAMPLES::
+
+        sage: Category.join((Groups(), CommutativeAdditiveMonoids()))  # indirect doctest
+        Join of Category of groups and Category of commutative additive monoids
+        sage: Category.join((Modules(ZZ), FiniteFields()), as_list=True)
+        [Category of finite fields, Category of modules over Integer Ring]
+
+    """
+    # Since Objects() is the top category, it is the neutral element of join
+    if len(categories) == 0:
+        from objects import Objects
+        return Objects()
+
+    if not as_list:
+        try:
+            return _join_cache[categories]
+        except KeyError:
+            pass
+
+    # Ensure associativity by flattening JoinCategory's
+    # Invariant: the super categories of a JoinCategory are not JoinCategories themselves
+    categories = sum( (tuple(category._super_categories) if isinstance(category, JoinCategory) else (category,)
+                       for category in categories), ())
+
+    # canonicalize, by removing redundant categories which are super
+    # categories of others, and by sorting
+    result = ()
+    for category in categories:
+        if any(cat.is_subcategory(category) for cat in result):
+            continue
+        result = tuple( cat for cat in result if not category.is_subcategory(cat) ) + (category,)
+    result = tuple(sorted(result, key = category_sort_key, reverse=True))
+    if as_list:
+        return list(result)
+    if len(result) == 1:
+        out = _join_cache[categories] = result[0]
+    else:
+        out = _join_cache[categories] = JoinCategory(result)
+    return out
+
+
 class Category(UniqueRepresentation, SageObject):
     r"""
     The base class for modeling mathematical categories, like for example:
Why is def _join marked as newly added? It did not enter Sage between beta2 and beta4. Instead, git blame shows most of it is from 2011:

darij@travis-virtualbox:~/gitsage/sage-5.13.beta1$ git blame src/sage/categories/category.py
[...]
fa807558 (Simon King  2013-11-02 22:51:17 +0100 110) import sage.misc.weak_dict
fa807558 (Simon King  2013-11-02 22:51:17 +0100 111) from sage.misc.weak_dict import WeakValueDiction
f89e19cf (Simon King  2011-12-25 00:48:56 +0100 112) _join_cache = WeakValueDictionary()
f89e19cf (Simon King  2011-12-25 00:48:56 +0100 113)
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 114) def _join(categories, as_list):
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 115)     """
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 116)     This is an auxiliary function for :meth:`Cat
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 117)
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 118)     INPUT:
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 119)
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 120)     - ``categories``: A tuple (no list) of categ
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 121)     - ``as_list`` (boolean): Whether or not the
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 122)
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 123)     EXAMPLES::
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 124)
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 125)         sage: Category.join((Groups(), Commutati
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 126)         Join of Category of groups and Category
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 127)         sage: Category.join((Modules(ZZ), Finite
71230653 (Nicolas M. Thiery 2013-05-31 13:11:13 -0400 128)     [Category of finite fields, Category of
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 129)
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 130)     """
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 131)     # Since Objects() is the top category, it is
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 132)     if len(categories) == 0:
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 133)         from objects import Objects
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 134)         return Objects()
a3c5958b (Simon King  2011-10-06 14:17:11 +0200 135)
f89e19cf (Simon King  2011-12-25 00:48:56 +0100 136)     if not as_list:
f89e19cf (Simon King  2011-12-25 00:48:56 +0100 137)         try:
f89e19cf (Simon King  2011-12-25 00:48:56 +0100 138)             return _join_cache[categories]
f89e19cf (Simon King  2011-12-25 00:48:56 +0100 139)         except KeyError:
[...]
comment:176 Changed 4 years ago by
 Commit changed from 80d55fe6f2c1ed2496a02cab4ae0db4cb31a7b06 to a410d05b692eead348214b0378dfc78113a3bf5a
comment:177 followup: ↓ 178 Changed 4 years ago by
Pushed a couple of trivial (hopefully...) changes -- someone please check!
Also I've confirmed that the version of #13394 in the patch equals the one in the master, so there was no error in that.
Also, am I seeing it right that the category of group algebras is now no longer a subcategory of Hopf algebras, and the category of monoid algebras no longer one of that of bialgebras? Or were they never? EDIT: Yeah, they never were. I guess they should be, at least once bialgebras and Hopf algebras get any useful methods like integrals?
New commits:
a410d05  typos and unused imports 
b21dde5  duplication introduced in my own merge removed 
91b4e23  Merge branch 'master' into 10963 
comment:178 in reply to: ↑ 177 Changed 4 years ago by
Replying to darij:
Pushed a couple trivial (hopefully...) changes  someone please check!
I checked them, and I am happy with them in principle. Thanks for the proofreading! However this ticket is more or less in frozen state, and still officially to be merged on the mercurial side (the git branch is just to allow for development of tickets depending on this one on the git side). In fact, I consider this ticket as positive review modulo the dependency on this memory deallocation bug.
So I'd rather postpone those changes to a later ticket instead of spending time backporting them to mercurial.
Also I've confirmed that the version of #13394 in the patch equals the one in the master, so there was no error in that.
Ok
Also, am I seeing it right that the category of group algebras is now no longer a subcategory of Hopf algebras, and the category of monoid algebras no longer one of that of bialgebras? Or were they never? EDIT: Yeah, they never were. I guess they should be, at least once bialgebras and Hopf algebras get any useful methods like integrals?
Group algebras are still Hopf algebras as they used to be. Making monoid algebras into bialgebras is certainly a desirable feature, in a later ticket.
Cheers,
Nicolas
comment:179 followup: ↓ 182 Changed 4 years ago by
Ooooooh, so the branch is experimental?
Guise! Pleeease don't put experimental branches in the public namespace!
comment:180 in reply to: ↑ 174 Changed 4 years ago by
Replying to nthiery:
Replying to nbruin:
Replying to vbraun:
How about we just create our own trashcan for weakref callbacks? ...
It's easier than that. You can take the same approach as in #15367: .. Instead of deleting a bare key, we're deleting a list containing the key. Since python lists participate in the trashcan, we get to borrow the trashcan count from there.
There's a slight cost to this approach: we're allocating a list only to delete it, but that is a very small cost indeed (and a cost we'd incur with our homerolled trashcan too), and we still get to benefit from all the other performance tunings that have gone into python's trashcan.
Nils, if you see precisely how to handle this, pleeeeeeaaaaase proceed! I have created #15506 for this issue.
Cheers,
Nicolas
comment:181 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394, #15506
 Work issues set to Fix #15506. Other than that this ticket is good to go and in frozen state.
comment:182 in reply to: ↑ 179 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394, #15506 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394
 Work issues Fix #15506. Other than that this ticket is good to go and in frozen state. deleted
Replying to darij:
Ooooooh, so the branch is experimental?
Guise! Pleeease don't put experimental branches in the public namespace!
The branch is/was not experimental. It was just the git version of the hg patch since we needed it to develop other code that is based on this patch. Since it is already positively reviewed it is not a good idea to modify the git branch.
Best,
Anne
comment:183 Changed 4 years ago by
Ah! The fact that this used to be positive_review should be made clearer IMHO (but this is a trac issue among many...).
comment:184 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394, #15506
 Status changed from needs_work to needs_review
So if I'm understanding everything properly, #15506 is a dependency and this can be set to positive review modulo Darij's tweaks?
comment:185 Changed 4 years ago by
 Milestone changed from sage5.13 to sage6.0
Jeroen has just decided that #15506, and therefore this ticket, is postponed to Sage 6.0.
I consider this ticket as positive review, assuming that #15506 indeed fixes the issue.
I postpone Darij's tweaks to a followup ticket, in order to keep the git and mercurial version of this ticket in sync. Darij, can you open a ticket?
Cheers,
Nicolas
comment:186 Changed 4 years ago by
 Milestone changed from sage6.0 to sage6.1
6.0 will be 5.13 + git (and released at the same time). All tickets that are not related to the git transition (such as this one) will go into 6.1+.
comment:187 followup: ↓ 189 Changed 4 years ago by
comment:188 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394, #15506 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394, #15150 #15506
Since this is getting pushed back, I've put #15150 ahead of this. The only difference is the ordering of the output of categories in sage/combinat/ncsym/bases.py. If things change, I can easily reverse the ordering.
comment:189 in reply to: ↑ 187 Changed 4 years ago by
comment:190 followup: ↓ 192 Changed 4 years ago by
Hmm. I have a strong *desire* to get this ticket in very soon and get it off my plate. And I *fear* that if it's bounced to 6.1 it will go into another long cycle of conflicts/fixes/... whereas it's currently (almost) good to go. Now, besides frustration and emotions, is there a *rationale* justifying special treatment? Well, I am in a conflict of interest to judge. The main fact is that many tickets depend on this one and it already took a *long* while to get it straight.
Cheers,
Nicolas
comment:191 followup: ↓ 193 Changed 4 years ago by
Seconding Nicolas. There are a couple of changes I wanted to make that I am deferring until #10963 gets merged, so as not to create yet more conflicts.
Either way, I am worried about the branch-vs.-patch issue. If the branch is not going to be merged into sage as it is, won't we get a rebase cascade on all the branches that depend on it?
comment:192 in reply to: ↑ 190 ; followup: ↓ 196 Changed 4 years ago by
I can confirm that not having this patch in sage is hampering a lot of development. First of all, there are already many patches that depend on it (such as #14102, for example), but also projects that are stagnant since it has not yet been merged. For example, Franco has code on the representation theory of the symmetric group that would need this (and #11111), Christian has code on Coxeter groups, etc. It has been a long wait! When code cannot be merged (like the dependent tickets), it usually does not get maintained and rots away! So I strongly, strongly recommend trying to merge this ticket soon!
comment:193 in reply to: ↑ 191 Changed 4 years ago by
Replying to darij:
Either way I am worried about the branchvs.patch issue. If the branch is not going to be merged into sage as it is, won't we get a rebase cascade on all the branches that depend on it?
The branch may be based on an earlier version of sage, but that doesn't preclude it from being merged into a later version without rebasing. There may still be conflicts that need to be resolved, but a merge rather than a rebase should allow git to be a lot more intelligent about what to do. The same holds for other branches that are based off this one: On the whole, a merge rather than rebase should pretty much limit problems to conflicts that actually are conflicts.
comment:194 in reply to: ↑ 165 Changed 4 years ago by
Replying to jdemeyer:
On a different machine:
sage -t -long devel/sage/sage/categories/sets_cat.py
**********************************************************************
File "devel/sage/sage/categories/sets_cat.py", line 188, in sage.categories.sets_cat.Sets
Failed example:
    TestSuite(Sets()).run()
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <cyfunction WeakValueDictionary.__init__.<locals>.callback at 0x26531d0> ignored
**********************************************************************
Circumstantial evidence suggests that this ticket makes these kinds of events more likely than they were before. While we know a way to avoid the actual error (and the condition happening is in itself not an error condition either), it would be good to see exactly what deletion chains are responsible for this, since those might be indicative of an "almost" memory leak: I find it hard to conceive of a valid situation where such a deep chain of weakref callbacks triggering further weakref callbacks would occur. It shouldn't hold up the merge of this ticket, but it could well be a worthwhile investigation into whether our data structures are still sane.
comment:195 Changed 4 years ago by
Absolutely nothing personal, but I'm actually more convinced that this should not go into Sage 5.13. If the patch has potential to create subtle issues which are hard to reproduce/debug/understand, then it's not something which should be merged into an rc1.
comment:196 in reply to: ↑ 192 ; followup: ↓ 198 Changed 4 years ago by
Replying to aschilling:
I can confirm that not having this patch in sage is hampering a lot of development.
Anne, that might be true, but postponing Sage 5.13 to wait for this patch isn't going to solve this problem.
In fact, I would say that Sage development is hampered more by all the other patches waiting to get merged in Sage 6.x. If you think that this one ticket here is more important than those 73 others, you need a good reason.
comment:197 Changed 4 years ago by
Thing is, I have experienced two spooky and unexplainable bugs in the coercion system (I believe) in the last few weeks:
https://groups.google.com/forum/#!topic/sage-devel/LapvScfoBuI
http://trac.sagemath.org/ticket/15473#modify
which both magically disappeared when I added in the #10963 branch. I don't know if this is because #10963 fixes some subtle underlying cause, or just because some recently merged patches were already built under the assumption that #10963 was there and would not really work without it. So at least from my viewpoint, the #10963 branch is more stable than the current master...
comment:198 in reply to: ↑ 196 Changed 4 years ago by
Replying to jdemeyer:
Anne, that might be true, but postponing Sage 5.13 to wait for this patch isn't going to solve this problem.
In fact, I would say that Sage development is hampered more by all the other patches waiting to get merged in Sage 6.x. If you think that this one ticket here is more important than those 73 other, you need a good reason.
If 5.13 and 6.0 are almost ready to be released, so that this ticket can be merged very soon in some beta release of 6.1, I guess that's a reasonable alternative. Especially if this ticket gets some priority :)
comment:199 followup: ↓ 201 Changed 4 years ago by
I'll be happy to merge this as the first ticket in 6.1, but that'll also mean that I am counting on you to fix any newly-discovered breakage over the xmas holidays :P
comment:200 Changed 4 years ago by
comment:201 in reply to: ↑ 199 Changed 4 years ago by
Replying to vbraun:
I'll be happy to merge this as the first ticket in 6.1,
Great, thanks!
but that'll also mean that I am counting on you to fix any newly-discovered breakage over the xmas holidays :P
Fair enough :) Hmm, between the 26th and 31st I'll be offline, but other than this, I'll do my best!
comment:202 Changed 4 years ago by
I am currently at reviewing #15367.
comment:203 Changed 4 years ago by
Together with #15367 I get:
sage -t -long src/sage/combinat/ncsym/bases.py               # 1 doctest failed
sage -t -long src/sage/structure/unique_representation.py    # 4 doctests failed
Not sure which ticket needs the fix but one of them does.
comment:204 Changed 4 years ago by
You're a tease, Volker. :) What tests fail?
How do I get these two files up to the version you have? I tried "git pull origin master" and "git pull origin develop", but neither of them leaves me with #14912 merged in (which I suppose is what you have applied to unique_representation.py). Since #14912 is supposedly closed, I assume something is going wrong.
Once I find the issues, can I fix them on the public/ticket/10963 branch and keep my older changes? Or should I revert to hg somehow?
comment:205 Changed 4 years ago by
Log:
I'll release 6.0.beta0 shortly; you can merge that into the current ticket to reproduce the error. Or merge #10963 into this ticket if it is indeed the cause.
comment:206 followups: ↓ 207 ↓ 210 Changed 4 years ago by
Thanks!
Ohkay, the ncsym issue is simple (one should simply replace the claimed output by the actual new output), but the unique_representation one is tricky. Simon??
comment:207 in reply to: ↑ 206 Changed 4 years ago by
Replying to darij:
Ohkay, the ncsym issue is simple (one should simply replace the claimed output by the actual new output), but the unique_representation one is tricky. Simon??
Scary. The failing doctest is not using cyclic garbage collection, nor categories, nor parents, nor coercion. Absolutely no idea why this could possibly be failing.
comment:208 Changed 4 years ago by
Interaction with #14912, by any chance?
comment:209 Changed 4 years ago by
comment:210 in reply to: ↑ 206 ; followup: ↓ 212 Changed 4 years ago by
Replying to darij:
Ohkay, the ncsym issue is simple (one should simply replace the claimed output by the actual new output), but the unique_representation one is tricky. Simon??
The NCSym is this ticket (not #15367); see comment:188 and #15150. As for the unique representations, there seems to be a memory leak. Using the failing example from #14912:
sage: import gc
sage: O = SomeClass(1)
creating new instance for argument 1
sage: del O
sage: gc.garbage
[]
sage: gc.collect()
3
sage: O = SomeClass(1)
sage: x = get_memory_usage()
sage: x
176.6640625
sage: L = [SomeClass(i) for i in range(10^6)]  # I removed the creation printing
sage: del L
sage: get_memory_usage()
652.1171875
sage: import gc
sage: gc.collect()
0
sage: get_memory_usage()
652.1171875
comment:211 Changed 4 years ago by
 Commit changed from a410d05b692eead348214b0378dfc78113a3bf5a to 588c27689bafc908b4e4f68646f9c5f3a2726a86
Branch pushed to git repo; I updated commit sha1. Last 10 new commits:
588c276  Fixed ncsym/bases.py doctest.
1e0990d  Merge branch 'public/ticket/10963' of ssh://trac.sagemath.org:2222/sage into public/ticket/10963
3735dfd  Updated Sage version to 6.1.beta0
47c9c75  Trac #15224: Iterate over the points of a toric variety
f382aec  Trac #15403: knapsack's docstring doesn't document an useful feature
0c95d3d  Trac #15228: Default embedding of Ljubljana graph (typo)
09fd00b  Trac #12217: Finite field polynomials allow division by zero
9acb905  Trac 12217: correctly handle division by zero
8094791  Trac #14912: UniqueRepresentation tutorial could use more love
4ce6a49  Trac #15442: MILP solver CBC : undefined symbol: dgetrf_
comment:212 in reply to: ↑ 210 ; followup: ↓ 213 Changed 4 years ago by
Replying to tscrim:
The NCSym is this ticket (not #15367); see comment:188 and #15150. As for the unique representations, there seems to be a memory leak. Using the failing example from #14912:
Do the usual: look up the objects using [a for a in gc.get_objects() if type(a) is ...] and use objgraph or gc.get_referrers to track who's keeping it alive. Should be pretty quick.
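The "usual" recipe above, spelled out as a tiny self-contained example (the class and holder names are made up for illustration):

```python
# Leak hunting with the gc module: first enumerate all live instances of
# the suspicious type, then ask the garbage collector who refers to them.
import gc

class Leaky:
    pass

suspect = Leaky()
hidden_holder = {"keep": suspect}  # the reference we pretend not to know about

# Step 1: find all live instances of the suspicious type.
alive = [o for o in gc.get_objects() if type(o) is Leaky]
print("instances found:", len(alive))

# Step 2: ask the GC who refers to one of them. Note that our own `alive`
# list (and the module globals) will show up as referrers too, so filter
# with that in mind.
referrers = gc.get_referrers(alive[0])
print("holder found:", any(r is hidden_holder for r in referrers))
```

objgraph builds on exactly these primitives and can additionally draw the reference graph, which is handy when the chain of referrers is several levels deep.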
comment:213 in reply to: ↑ 212 Changed 4 years ago by
Replying to nbruin:
Replying to tscrim:
The NCSym is this ticket (not #15367); see comment:188 and #15150. As for the unique representations, there seems to be a memory leak. Using the failing example from #14912:
Do the usual: look up the objects using [a for a in gc.get_objects() if type(a) is ...] and use objgraph or gc.get_referrers to track who's keeping it alive. Should be pretty quick.
OK, I did not realise that comment:210 is about something that can be done in an interactive session. I thought that it was something like "it fails in the doctest, but not interactively", which would be considerably more difficult to debug. It will take a while to build the branch of this ticket :/
...
comment:214 Changed 4 years ago by
 Dependencies changed from #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394, #15150 #15506 to #11224, #8327, #10193, #12895, #14516, #14722, #13589, #14471, #15069, #15094, #11688, #13394, #15150, #15506
comment:215 Changed 4 years ago by
 Commit changed from 588c27689bafc908b4e4f68646f9c5f3a2726a86 to 0c907cf81efeb9bd2d0a44f73539c4e32583c1be
Branch pushed to git repo; I updated commit sha1. New commits:
0c907cf  Reverted cached_function to weak_cached_function.

comment:216 followups: ↓ 217 ↓ 218 Changed 4 years ago by
Okay, I figured out the problem. The weak_cached_function was changed to a cached_function for CachedRepresentation.__classcall__() in unique_representation.py. IDK what impact this might have on the rest of things with this patch, but it fixes the memory leak.
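The difference between the two caching strategies is easy to reproduce outside Sage with a toy cached-instance class. This is only a rough stand-in for CachedRepresentation (built with a metaclass rather than __classcall__), not the Sage implementation:

```python
# Strong vs. weak instance caches: the strong cache keeps every instance
# alive forever (the leak found above), the weak one lets unreferenced
# instances be collected, like weak_cached_function.
import gc
import weakref

class StrongCached(type):
    """Toy metaclass caching instances in an ordinary dict."""
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        cls._cache = {}
    def __call__(cls, *args):
        if args not in cls._cache:
            cls._cache[args] = super().__call__(*args)
        return cls._cache[args]

class WeakCached(type):
    """Same idea, but the cache holds its values weakly."""
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        cls._cache = weakref.WeakValueDictionary()
    def __call__(cls, *args):
        try:
            return cls._cache[args]
        except KeyError:
            obj = cls._cache[args] = super().__call__(*args)
            return obj

class S(metaclass=StrongCached):
    def __init__(self, n): self.n = n

class W(metaclass=WeakCached):
    def __init__(self, n): self.n = n

L = [S(i) for i in range(1000)] + [W(i) for i in range(1000)]
del L
gc.collect()
print(len(S._cache), len(W._cache))  # 1000 0 on CPython
```

After dropping the only external references, the strong cache still pins all 1000 S instances, while the weak cache has been pruned by its weakref callbacks; this mirrors the get_memory_usage experiment in comment:210.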
comment:217 in reply to: ↑ 216 Changed 4 years ago by
Replying to tscrim:
Okay, I figured out the problem.
Excellent work! I'm also happy to see that we're finally getting to a point that introducing memory leaks leads to failing doctests (at least sometimes).
comment:218 in reply to: ↑ 216 ; followup: ↓ 219 Changed 4 years ago by
Replying to tscrim:
Okay, I figured out the problem. The weak_cached_function was changed to a cached_function for CachedRepresentation.__classcall__() in unique_representation.py. IDK what impact this might have on the rest of things with this patch, but it fixes the memory leak.
WHAT??????
I made CachedRepresentation.__classcall__()
a @weak_cached_function
quite a long time ago, and I think I have also added doctests to show that a weak cache is used. So, how could it be possible that such a change almost went unnoticed?
git blame shows that this change has been done by Nicolas in revision 9d9cae, and

  *  362fd5e  # Tue Oct 29 20:14:19 2013 +0100 (6 weeks ago) <Nicolas M. Thiery>
  *  b2914f3  # Sun Oct 27 13:58:49 2013 +0100 (6 weeks ago) <Frederic Chapoton>
  *  9d9cae3  # Sat Oct 19 11:50:04 2013 +0200 (6 weeks ago) <Nicolas M. Thiery>

Why is there no proper commit message? Is this stuff from here? Have I really been the reviewer of this change :\ ?
I notice that there are further uses of @cached_function in the changeset. So, I guess in the next round of review I need to take more care of this point.
comment:219 in reply to: ↑ 218 ; followup: ↓ 222 Changed 4 years ago by
Replying to SimonKing:
Why is there no proper commit message? Is this stuff from here? Have I really been the reviewer of this change :\ ?
Yep ... It's right there at the end of trac_10963-more_functorial_constructions-nt.patch:

diff --git a/sage/structure/unique_representation.py b/sage/structure/unique_representation.py
--- a/sage/structure/unique_representation.py
+++ b/sage/structure/unique_representation.py
@@ -29,7 +29,7 @@ AUTHORS:
 #                  http://www.gnu.org/licenses/
 #******************************************************************************
-from sage.misc.cachefunc import weak_cached_function
+from sage.misc.cachefunc import cached_function
 from sage.misc.classcall_metaclass import ClasscallMetaclass, typecall
 from sage.misc.fast_methods import WithEqualityById
@@ -428,7 +428,7 @@ class CachedRepresentation:
     _included_private_doc_ = ["__classcall__"]
-    @weak_cached_function # automatically a staticmethod
+    @cached_function # automatically a staticmethod
     def __classcall__(cls, *args, **options):
         """
         Constructs a new object of this class or reuse an existing one.
comment:220 Changed 4 years ago by
I dug through the history of the patch on the mercurial queue, and I apparently introduced this hunk back in August 2012 (changeset 7492). Why in the hell I did that, I don't remember. I guess I must have improperly rebased the patch upon the then-in-development #12215.
In any case, thanks Travis for catching this!
comment:221 Changed 4 years ago by
Note that this branch currently does not merge cleanly with #15303.
comment:222 in reply to: ↑ 219 ; followup: ↓ 223 Changed 4 years ago by
 Status changed from needs_review to needs_info
Replying to nbruin:
Replying to SimonKing:
Why is there no proper commit message? Is this stuff from here? Have I really been the reviewer of this change :\ ?

Yep ... It's right there at the end of trac_10963-more_functorial_constructions-nt.patch

Argh. Sorry that I didn't notice it.
But why can this not have led to doctest failures before? I am sure that I added doctests showing that CachedRepresentation uses a weak cache! Have these tests been removed by the patch?
Something else (and this is a question to Volker, hence, "needs info"):
Why is there no proper commit message in the git log? I thought that sage dev import-patch would preserve the commit messages from the mercurial patch.
And what shall we do about it? By git's idiosyncratic notion of history, adding a proper commit message would imply a history change, and since this branch is already in use by people, we can't change the history, it would create merge conflicts, etc.
AFAIK, adding a commit message in the mercurial workflow was trivial, since only the code matters for whether or not subsequent patches apply cleanly.
comment:223 in reply to: ↑ 222 Changed 4 years ago by
Replying to SimonKing:
Why is there no proper commit message in the git log? I thought that sage dev import-patch would preserve the commit messages from the mercurial patch.

Looks like a bug in sage dev import-patch. The commit message of trac_10963-more_functorial_constructions-nt.patch looks like

commit fc1993ce33bef97f50fbf3d52aef525bd1f3da8d
Author: Nicolas M. Thiery <nthiery@users.sf.net>
Date:   Sat Oct 19 09:50:04 2013 +0000

    # Sat Oct 19 11:50:04 2013 +0200
    # Node ID f98c0b44c17dbb718c8449f3eabcbc7b8bdc825d
    # Parent  2306f17ea8f3e40d1a3668c24695a50bfad34d1f
    #10963: More functorial constructions
comment:224 Changed 4 years ago by
Judging from Anne's posts, this branch is way too widely used for git amend...
Can we just leave the nameless commit there, it being the least of the problems?
comment:225 Changed 4 years ago by
 Status changed from needs_info to needs_work
Leave the commit as it is. Maybe have a look at the converted commit message next time. In any case, there is only a finite number of mercurial patches to convert ;)
comment:226 Changed 4 years ago by
Hi,
I would like to understand that:
sage: from sage.categories.category_with_axiom import CategoryWithAxiom_over_base_ring
sage: class A(CategoryWithAxiom_over_base_ring):
....:     pass
sage: class B(CategoryWithAxiom_over_base_ring):
....:     pass
sage: setattr(A, "B", B)
sage: getattr(A, "B")
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
...
ValueError: could not infer axiom for the nested class <class '__main__.B'> of <class '__main__.A'>
Why do we have this limitation?
Merry Christmas, Jean-Baptiste
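[Editorial note: the inference that fails above is name-based: roughly, the machinery only accepts a nested class if it is reachable under the name of a known axiom, and "B" is not one. A toy sketch of this kind of lookup, with hypothetical names and not Sage's actual implementation:]

```python
# Hypothetical axiom list; Sage's real one lives in sage.categories.category_with_axiom.
KNOWN_AXIOMS = ("Finite", "Infinite", "Commutative", "Unital", "WithBasis")

class Category:
    """Toy stand-in for a category class carrying axiom nested classes."""

class Finite(Category):
    pass

class B(Category):
    pass

def infer_axiom(base, nested):
    # Accept the nested class only if it sits under a known axiom name.
    for axiom in KNOWN_AXIOMS:
        if getattr(base, axiom, None) is nested:
            return axiom
    raise ValueError("could not infer axiom for the nested class "
                     "%r of %r" % (nested, base))

Category.Finite = Finite
inferred = infer_axiom(Category, Finite)   # "Finite"

Category.B = B
try:
    infer_axiom(Category, B)               # "B" is not an axiom name
    failed = False
except ValueError:
    failed = True
```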
comment:227 Changed 4 years ago by
Jean-Baptiste, if you have questions about functionality you should probably ask on sage-devel. If you think it is a bug or a missing feature, feel free to open a separate ticket.
comment:228 followup: ↓ 229 Changed 4 years ago by
I'm now getting a fairly reproducible error in set_species.py:

sage -t --long src/sage/combinat/species/set_species.py
**********************************************************************
File "src/sage/combinat/species/set_species.py", line 172, in sage.combinat.species.set_species.SetSpecies._cis
Failed example:
    g = S.cycle_index_series()
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded' in ignored
**********************************************************************
comment:229 in reply to: ↑ 228 Changed 4 years ago by
comment:230 Changed 4 years ago by
 Branch changed from public/ticket/10963 to u/SimonKing/ticket/10963
 Modified changed from 12/25/13 16:34:58 to 12/25/13 16:34:58
comment:231 Changed 4 years ago by
 Commit changed from 0c907cf81efeb9bd2d0a44f73539c4e32583c1be to 14e63b6feebddaf3dc7ab1d569d219690a765ce8
 Status changed from needs_work to needs_review
I expected that sage dev push would push to the existing branch public/ticket/10963, but instead it created a branch in u/SimonKing/. Anyway, the new branch has #15506 merged in. Could you test if this fixes the problem?
comment:232 Changed 4 years ago by
All tests passed for me.
comment:233 Changed 4 years ago by
But then again, this is not failing for me previously (with the most recent develop branch merged in).
comment:234 Changed 4 years ago by
 Branch changed from u/SimonKing/ticket/10963 to public/ticket/10963
 Commit changed from 14e63b6feebddaf3dc7ab1d569d219690a765ce8 to 5ccf253b17c151d8e773037ac634a64f84f03075
comment:235 Changed 4 years ago by
You can try to build it on mod, that seems like it's going to trigger it: http://build.sagemath.org/sage/builders/%20%20fast%20UW%20mod%20%28Ubuntu%208.04%20x86_64%29%20incremental/builds/34
comment:236 Changed 4 years ago by
Unfortunately I too get no errors with both new and old state of the branch.

I notice that there are further uses of @cached_function in the changeset. So, I guess in the next round of review I need to take more care of this point.
Has this been done? (I only see one revert in 0c907cf81efeb9bd2d0a44f73539c4e32583c1be.)
comment:237 Changed 4 years ago by
 Status changed from needs_review to needs_work
 Work issues set to Detect and fix Heisenbugs
Sigh.
It seems, according to Darij and Volker, that the existence of errors depends on the machine and so on.
With the new branch, I get two errors, namely:
sage -t src/sage/sets/set.py
**********************************************************************
File "src/sage/sets/set.py", line 1046, in sage.sets.set.Set_object_union.__cmp__
Failed example:
    Y = Set(ZZ^2).union(Set(ZZ^3))
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0xb668971c> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b06554> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b06554> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b064ac> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b064ac> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.MonoDictEraser object at 0x9cdc77c> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b06554> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b06554> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0xa6b814c> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b064ac> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b064ac> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0xb668971c> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0xb6689584> ignored
**********************************************************************
1 item had failures:
   1 of   6 in sage.sets.set.Set_object_union.__cmp__
    [325 tests, 1 failure, 1.01 s]
and
sage -t src/sage/combinat/ncsf_qsym/tutorial.py
**********************************************************************
File "src/sage/combinat/ncsf_qsym/tutorial.py", line 30, in sage.combinat.ncsf_qsym.tutorial
Failed example:
    QSym = QuasiSymmetricFunctions(QQ)
Expected nothing
Got:
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0x9adac8c> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0x9adac8c> ignored
    Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0xb6689584> ignored
**********************************************************************
1 item had failures:
   1 of  93 in sage.combinat.ncsf_qsym.tutorial
    [92 tests, 1 failure, 4.67 s]
Needless to say, since we all knew already: it is a Heisenbug. I get the above with make ptest. There is no error if I do

king@linux-etl7:~/Sage/git/sage> ./sage -t src/sage/combinat/ncsf_qsym/tutorial.py
Running doctests with ID 2013-12-25-19-38-34-bd3e15f5.
Doctesting 1 file.
sage -t src/sage/combinat/ncsf_qsym/tutorial.py
    [92 tests, 4.56 s]
----------------------------------------------------------------------
All tests passed!
----------------------------------------------------------------------
Total time for all tests: 4.6 seconds
    cpu time: 4.6 seconds
    cumulative wall time: 4.6 seconds
king@linux-etl7:~/Sage/git/sage> ./sage -t src/sage/sets/set.py
Running doctests with ID 2013-12-25-19-38-48-d7f7d326.
Doctesting 1 file.
sage -t src/sage/sets/set.py
    [325 tests, 0.92 s]
----------------------------------------------------------------------
All tests passed!
----------------------------------------------------------------------
Total time for all tests: 1.1 seconds
    cpu time: 0.8 seconds
    cumulative wall time: 0.9 seconds
Nils, could it be that I forgot to merge another of your fixes to the weak value dictionary or to TripleDict/MonoDict?
comment:238 followup: ↓ 239 Changed 4 years ago by
AFAIK, #15506 contains all fixes to the "recursion depth exceeded" problem that Nils came up with.
Interestingly, in one of the errors, the "recursion depth exceeded" combines weak value dictionary and triple dict:
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0xb668971c> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b06554> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b06554> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b064ac> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b064ac> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.MonoDictEraser object at 0x9cdc77c> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b06554> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b06554> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0xa6b814c> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b064ac> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.structure.coerce_dict.TripleDictEraser object at 0x9b064ac> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0xb668971c> ignored
Exception RuntimeError: 'maximum recursion depth exceeded while calling a Python object' in <sage.misc.weak_dict.WeakValueDictEraser object at 0xb6689584> ignored
comment:239 in reply to: ↑ 238 Changed 4 years ago by
Replying to SimonKing:
AFAIK, #15506 contains all fixes to the "recursion depth exceeded" problem that Nils came up with.
Well, we observed the error, came up with an example that produced the same error, and made sure that error didn't occur any more. It may be we're looking at a different cause here (for one thing, it's hard to imagine that doctests produce such deeply nested structures that even the naive deletion code would trigger a recursion depth error). Another hypothesis is that there is a dealloc somewhere that is genuinely recursing on itself, perhaps creating a new parent upon deletion, that then immediately becomes available for deallocation as well. It could be that the first "deeper" call in those cases is always an eraser. In that case, it's just that we're forcing the eraser to work with increasingly smaller (python) stack headroom, in a way that apparently even the trashcan can't avoid.
I'd say the most promising debugging technique is to set a breakpoint on this RuntimeError (or patch Python to throw a segfault or something like it) so that we get to see the stack backtrace for when this problem arises. I'd hope that seeing which routines are involved (I expect it not to be just dictionaries) will show where the real problem may lie.
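[Editorial note: on Python 3.8+ there is a less invasive alternative to patching PyErr_WriteUnraisable: sys.unraisablehook intercepts exactly the exceptions that are otherwise printed as "Exception ... ignored". This was not available in the Python 2 used at the time of this ticket; a sketch:]

```python
import sys
import weakref

caught = []

def unraisable_hook(args):
    # Receives exceptions that CPython would otherwise report via
    # PyErr_WriteUnraisable as "Exception ... ignored".
    caught.append((args.exc_type.__name__, str(args.exc_value)))

sys.unraisablehook = unraisable_hook  # Python 3.8+

class Node:
    pass

def bad_callback(ref):
    raise RuntimeError("boom")   # would normally just be printed and ignored

n = Node()
r = weakref.ref(n, bad_callback)
del n                            # callback fires; the hook records the exception
```

Inside the hook one could call traceback.print_stack(), or drop into a debugger, to get the backtrace Nils asks for.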
comment:240 Changed 4 years ago by
Any thoughts? Is the problem that we need a better traceback?
comment:241 Changed 4 years ago by
Here is another stab at debugging this: put an abort() in PyErr_WriteUnraisable to just die and let gdb produce a backtrace where an exception would be ignored. See attachment. Then recompile Python, Cython, and the Sage library with CFLAGS='-O0 -g3' for good measure.
The first surprise is that this always hits PyErr_WriteUnraisable when quitting Sage; it's only that printing the message is disabled while Python shuts down (stderr == None). So skip over that, too.
Then force a failure by running

sage -btp --long --global-iterations=100 src/sage/combinat/species/

for a while. Stack backtrace of the eventual failure is at http://boxen.math.washington.edu/home/vbraun/logs/crash_functorial_constructions.log
comment:242 Changed 4 years ago by
Would it make sense to put this debugging on a different ticket? The length of this one makes working on it excruciating (especially tickets with a lot of comments seem particularly slow with trac), and the debugging involved in this is a bit of a side-issue.
Since it is a python recursion bug, we should probably look at the ".py" files involved, so I did a grep:
$ grep "^#.*py:[0-9]*$" crash_functorial_constructions.log
#24   0x00007fee3aa07580 in __classget__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:507
#38   0x00007fee3aa07580 in base_category_class_and_axiom() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:243
#47   0x00007fee3aa07580 in _base_category_class_and_axiom() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:381
#57   0x00007fee3aa07580 in axiom_of_nested_class() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:281
#66   0x00007fee3aa07580 in __classget__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:507
...
#710  0x00007fee3aa07580 in base_category_class_and_axiom() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:243
#719  0x00007fee3aa07580 in _base_category_class_and_axiom() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:381
#729  0x00007fee3aa07580 in axiom_of_nested_class() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:281
#738  0x00007fee3aa07580 in __classget__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:507
...
#2768 0x00007fee3aa07580 in base_category_class_and_axiom() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:243
#2777 0x00007fee3aa07580 in _base_category_class_and_axiom() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:381
#2787 0x00007fee3aa07580 in axiom_of_nested_class() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:281
#2796 0x00007fee3aa07580 in __classget__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py:507
#2809 0x00007fee3aa07580 in extra_super_categories() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/sets_cat.py:1826
#2812 0x00007fee3aa07580 in super_categories() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/covariant_functorial_construction.py:399
#2815 0x00007fee3aa07580 in _super_categories() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category.py:1015
#2823 0x00007fee3aa07580 in _all_super_categories() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category.py:885
#2831 0x00007fee3aa07580 in _super_categories_for_classes() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category.py:1040
#2839 0x00007fee3aa07580 in _make_named_class() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category.py:1246
#2843 0x00007fee3aa07580 in subcategory_class() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category.py:1290
#2851 0x00007fee3aa07580 in __init__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category.py:504
#2856 0x00007fee3aa07580 in __init__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/covariant_functorial_construction.py:355
#2867 0x00007fee3aa07580 in __classcall__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/structure/unique_representation.py:1021
#2875 0x00007fee3aa07580 in __classcall__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/category.py:465
#2883 0x00007fee3aa07580 in category_of() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/covariant_functorial_construction.py:269
...
#3180 0x00007fee3aa07580 in WithRealizations() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/with_realizations.py:181
#3183 0x00007fee3aa07580 in __init__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/combinat/sf/sf.py:767
#3194 0x00007fee3aa07580 in __classcall__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/structure/unique_representation.py:1021
#3206 0x00007fee3aa07580 in __init__() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/combinat/species/generating_series.py:327
#3217 0x00007fee3aa07580 in CycleIndexSeriesRing() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/combinat/species/generating_series.py:314
...
So it would seem that the creation of a SymmetricFunctions object leads via a long chain of WithRealizations calls to the creation of a category (frame #2851), which calls extra_super_categories (frame #2809) and then goes off to a long chain of calls involving axiom_of_nested_class and base_category_class_and_axiom (66 deep apparently).
What probably causes the eventual error is that building such a deep call chain requires memory and thus can trigger a garbage collection, at which point there may be some extra room required on top of the python call stack to execute the various weakref callbacks. Apparently that room isn't there.
66 calls is suspiciously deep, but perhaps symmetric functions realizations are indeed extremely complicated. Let's look at the relevant frames:

#2807 0x00007fee2dcdcd6d in __pyx_tp_descr_get_4sage_4misc_11lazy_import_LazyImport() at /home/vbraun/Code/sage/src/sage/misc/lazy_import.c:7252
#2808 0x00007fee3a9a2c00 in _PyObject_GenericGetAttrWithDict() at /home/vbraun/Code/sage/local/var/tmp/sage/build/python-2.7.5.p1/src/Objects/object.c:1439
#2809 0x00007fee3aa07580 in extra_super_categories() at /home/vbraun/Code/sage/local/lib/python2.7/site-packages/sage/categories/sets_cat.py:1826
> 1826            return [Sets().Facade()]
There seems to be a lazy import involved, and indeed, there seems to be a cycle of 24 frames that keeps repeating from this point on. So one hypothesis might be that somehow some lazy import resolution doesn't go right, which confuses the base_category_class_and_axiom code, which then keeps recursing. Perhaps a scenario like this rings some bells with the people who are familiar with the code?
comment:243 Changed 4 years ago by
If you look at the Cython backtrace (2nd half of the crash log) then most stack frames (24 .. 2796) have something to do with category_with_axiom.py. E.g. search for base_category_class.

So it seems that this is not a garbage collection issue at all, it is really this ticket recursing very deeply. Is the recursion really needed? My guess is not, since the depth has something to do with when the garbage collection is running, so there must be too many temporary objects. Everything recursive can be written without recursion...
comment:244 Changed 4 years ago by
After instrumenting base_category_class_and_axiom a bit with a counter for the recursion level (incremented on entry, decremented on the exit points) we get:
File "src/sage/combinat/species/empty_species.py", line 42, in sage.combinat.species.empty_species.EmptySpecies Failed example: X.cycle_index_series().coefficients(4) Expected: [0, 0, 0, 0] Got: base_cat_and_ax LVL 1, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 2, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 3, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 4, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 5, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 6, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 7, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 8, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 9, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 10, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 11, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 12, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 13, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 14, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 15, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 16, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 17, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 18, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 19, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 20, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 21, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 22, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 23, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 24, cls=<class 
'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 25, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 26, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 27, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 28, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 29, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 30, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 31, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 32, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 33, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 34, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 35, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 36, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 37, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 38, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 39, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 40, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 41, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 42, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 43, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 44, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 45, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 46, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 47, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 48, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 49, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 50, 
cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 51, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 52, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 53, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 54, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 55, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 56, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 57, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 58, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 59, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 60, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 61, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 62, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 63, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 64, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 65, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 66, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 67, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 68, cls=<class 'sage.categories.facade_sets.FacadeSets'> base_cat_and_ax LVL 1, cls=<class 'sage.categories.hopf_algebras_with_basis.HopfAlgebrasWithBasis'> base_cat_and_ax LVL 2, cls=<class 'sage.categories.hopf_algebras_with_basis.HopfAlgebrasWithBasis'> base_cat_and_ax LVL 3, cls=<class 'sage.categories.hopf_algebras_with_basis.HopfAlgebrasWithBasis'> base_cat_and_ax LVL 4, cls=<class 'sage.categories.hopf_algebras_with_basis.HopfAlgebrasWithBasis'> base_cat_and_ax LVL 5, cls=<class 'sage.categories.hopf_algebras_with_basis.HopfAlgebrasWithBasis'> base_cat_and_ax LVL 6, cls=<class 
base_cat_and_ax LVL 7, cls=<class 'sage.categories.hopf_algebras_with_basis.HopfAlgebrasWithBasis'>
[... the same line repeated for LVL 8 through LVL 75 ...]
base_cat_and_ax LVL 1, cls=<class 'sage.categories.algebras_with_basis.AlgebrasWithBasis'>
[... LVL 2 through LVL 71 ...]
base_cat_and_ax LVL 1, cls=<class 'sage.categories.magmatic_algebras.MagmaticAlgebras.WithBasis'>
base_cat_and_ax LVL 1, cls=<class 'sage.categories.vector_spaces.VectorSpaces.WithBasis'>
base_cat_and_ax LVL 1, cls=<class 'sage.categories.modules_with_basis.ModulesWithBasis'>
[... LVL 2 through LVL 60 ...]
base_cat_and_ax LVL 1, cls=<class 'sage.categories.coalgebras_with_basis.CoalgebrasWithBasis'>
[... LVL 2 through LVL 71 ...]
[0, 0, 0, 0]
**********************************************************************
1 item had failures:
   1 of 21 in sage.combinat.species.empty_species.EmptySpecies
    [37 tests, 1 failure, 0.08 s]
----------------------------------------------------------------------
sage -t src/sage/combinat/species/empty_species.py  # 1 doctest failed
----------------------------------------------------------------------
Total time for all tests: 0.1 seconds
    cpu time: 0.1 seconds
    cumulative wall time: 0.1 seconds
so it does seem that even when this code doesn't produce errors, it recurses ridiculously deeply. Also, it keeps re-importing the same file! (That's not apparent from the output given here, but it was when I put a print statement at the import instruction.)
This seems another case where an obscure bug report leads us to finding serious deficiencies in code.
comment:245 followup: ↓ 250 Changed 4 years ago by
A little further experimenting shows that the recursion occurs in the assert statement in the fragment

base_module = importlib.import_module("sage.categories."+base_module_name)
base_category_class = getattr(base_module, base_name)
assert getattr(base_category_class, axiom, None) is cls, \
    "Missing (lazy import) link for %s to %s for axiom %s?"%(base_category_class, cls, axiom)
return base_category_class, axiom
So my guess is that the recursions happen when the relevant module has to be actually (lazily) imported. The getattr on an object defined in the module triggers the actual loading of the module, and the initialization code of the module triggers the execution of the same base_category_class_and_axiom incantation. This is a cached method, but since it is still running through its first call, the cache isn't there yet, so another call happens. The surprising thing is that the code finishes at all. It wouldn't surprise me if this is just another incarnation of a circular import problem.
As the tracebacks above suggest, the lazy class attribute _base_category_class_and_axiom is probably to blame (it calls the function base_category_class_and_axiom). Its computation is triggered by the class attribute _axiom, and it looks quite probable that there is some (meta)class magic somewhere that causes this to be called during the initialization of some modules.
It surprises me that the code does finish. A hash collision would be too rare an event to explain it (especially because most of Python only suffers in performance from a bad hash, not in correctness), so my bet is that things depend on a certain order of import/execution and, as we know, execution order during imports isn't completely deterministic.
To fix this: it seems that a lot of magic (looking at module names etc.) is there for convenience (i.e., that a FiniteSet is a Set with the Finite axiom). Perhaps it's worthwhile to consider coding these bits a little more conservatively, exchanging some convenience for enhanced maintainability.
When Nicolas looks at this (he's the only one who has a chance of being able to fix this stuff), I hope he will reconsider some of the design decisions. I think there are a lot of potential contributors who would appreciate it if the fundamental infrastructure of Sage continued to read a little more like normal Python code, if at all possible.
The code here looks amazingly ingenious and smart, but unfortunately that is not a desirable trait for basic infrastructure that needs to be maintained for a long time and by many people.
I think the bug/behaviour we're running into now illustrates why "smart" code is often not a good idea: as we see here, the behaviour can be hard to predict.
comment:246 followups: ↓ 247 ↓ 248 Changed 4 years ago by
If it finishes (but nondeterministically) then it must be that it recurses until it chances upon a hash collision in a cache. I don't have any other explanation...
comment:247 in reply to: ↑ 246 Changed 4 years ago by
Replying to vbraun:
If it finishes (but nondeterministically) then it must be that it recurses until it chances upon a hash collision in a cache. I don't have any other explanation...
I'm wondering... would that mean that hash collisions leave active objects in Sage prone to getting garbage-collected? Isn't that a use-after-free danger?
comment:248 in reply to: ↑ 246 Changed 4 years ago by
Replying to vbraun:
If it finishes (but nondeterministically) then it must be that it recurses until it chances upon a hash collision in a cache. I don't have any other explanation...
Well, we could test against this by modifying the behavior of @cached_method: add a key with some dummy class like CatchedMethodCalledNotReturned, and if that class is the returned value, raise an error. Although I don't see why it should terminate if there is a hash collision, because there should not be an equal key in the cache; unless something scarier is happening: == is True but the corresponding hashes aren't...
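That "scarier" scenario is easy to demonstrate in isolation: when two equal objects hash differently, dict-based caches silently miss and store duplicates. A hypothetical sketch (the class name is made up for illustration):

```python
# Equal objects with distinct hashes break dict-based caching: the
# lookup compares hashes before ever consulting __eq__, so an "equal"
# key lands in a different bucket and the cache misses.
class BadKey:
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return isinstance(other, BadKey) and self.name == other.name
    def __hash__(self):
        # Deliberately broken: equal objects get distinct hashes.
        return id(self)

cache = {}
a, b = BadKey("Finite"), BadKey("Finite")
assert a == b          # the keys compare equal...
cache[a] = "first"
print(b in cache)      # -> False: ...but the cache misses anyway
cache[b] = "second"
print(len(cache))      # -> 2: two entries for "the same" key
```

So if == were True while the hashes disagreed, a cached method could indeed be recomputed arbitrarily often without ever raising an error.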
comment:249 Changed 4 years ago by
A cheap way out is simply removing the assert statement. I haven't seen excessive recursion without it. However, since this code is infrastructure for all of sage, it would be good to know exactly what the bad condition is that happens with the assert and why avoiding the assert is enough to avoid the bad scenarios.
comment:250 in reply to: ↑ 245 Changed 4 years ago by
Replying to nbruin:
A little further experimenting shows that the recursion occurs in the assert statement in the fragment

base_module = importlib.import_module("sage.categories."+base_module_name)
base_category_class = getattr(base_module, base_name)
assert getattr(base_category_class, axiom, None) is cls, \
    "Missing (lazy import) link for %s to %s for axiom %s?"%(base_category_class, cls, axiom)
return base_category_class, axiom
Indeed, this looks suspicious. It could result in importing something while importing it.
This is a cached method, but since it is still running through its first call the cache isn't there yet, so another call happens.
This could be a workaround: start the function by filling the cache of this function with some value (say (cls, ''), or some special value like NotImplemented that can be tested against when calling the function elsewhere), so that the function really will be called only once for each input.

Or rather: we could wrap it in try: ... finally:, removing the wrong value from the cache. Then, in case of an error being raised, the cache will be clean afterwards, and if there is no error then the cache will be filled with the correct value anyway.
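A generic sketch of this sentinel-in-the-cache idea, written as a standalone decorator (hypothetical helper names; this is not Sage's actual cached_method):

```python
import functools

_RUNNING = object()  # sentinel marking "first call still in progress"

def reentrancy_safe_cache(func):
    # Memoize func, but pre-fill the cache with a sentinel before
    # computing so a re-entrant call is detected instead of recursing,
    # and clean up on error so the cache never holds a bad value.
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args in cache:
            value = cache[args]
            if value is _RUNNING:
                raise RuntimeError("re-entrant call for %r" % (args,))
            return value
        cache[args] = _RUNNING
        try:
            result = func(*args)
        except BaseException:
            del cache[args]   # leave the cache clean after a failure
            raise
        cache[args] = result
        return result
    return wrapper

@reentrancy_safe_cache
def fib(n):
    # Ordinary recursion with *different* arguments is fine; only a
    # cycle back to the same argument would trip the sentinel.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(10))  # -> 55
```

With this shape, a cycle such as the lazy-import re-entry above would surface immediately as a RuntimeError instead of recursing dozens of levels deep.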
BTW, note the typo in the docstring: however we ca notdo it robustly
...
The surprising thing is that the code finishes at all.
Indeed.
To fix this: it seems that a lot of magic (looking at module names etc.) is there for convenience (i.e., that a FiniteSet is a Set with the Finite axiom). Perhaps it's worthwhile to consider coding these bits a little more conservatively, exchanging some convenience for enhanced maintainability.
+1. In some discussion above, I suggested keeping track of the axioms of a category by a tuple stored as an attribute. Actually, I suggested a different approach to implementing axioms. But I think this ticket has progressed too far, and if (really "if", I don't know whether I will) I want to try to implement the alternative approach, then I can still do it after merging Nicolas' approach.
comment:251 Changed 4 years ago by
I tried to trace how often the lazy class attribute _base_category_class_and_axiom is called for each class. Note that (as a lazy attribute) it should be called only once!! This is how often it is called:

sage.categories.finite_semigroups.FiniteSemigroups: 76
sage.categories.sets_cat.Sets.Infinite: 1
sage.categories.commutative_rings.CommutativeRings: 82
sage.categories.distributive_magmas_and_additive_magmas.AdditiveAssociative.AdditiveCommutative: 1
sage.categories.distributive_magmas_and_additive_magmas.DistributiveMagmasAndAdditiveMagmas.AdditiveAssociative: 1
sage.categories.additive_magmas.AdditiveMagmas.AdditiveCommutative: 1
sage.categories.additive_magmas.AdditiveUnital.AdditiveInverse: 1
sage.categories.additive_magmas.AdditiveMagmas.AdditiveUnital: 1
sage.categories.distributive_magmas_and_additive_magmas.AdditiveCommutative.AdditiveUnital: 1
sage.categories.magmas.Magmas.Commutative: 1
sage.categories.distributive_magmas_and_additive_magmas.AdditiveUnital.Associative: 1
sage.categories.magmas.Unital.Inverse: 1
sage.categories.magmas.Magmas.Unital: 1
sage.categories.distributive_magmas_and_additive_magmas.AdditiveUnital.AdditiveInverse: 1
sage.categories.commutative_algebras.CommutativeAlgebras: 81
sage.categories.finite_fields.FiniteFields: 82
sage.categories.finite_enumerated_sets.FiniteEnumeratedSets: 82
sage.categories.finite_monoids.FiniteMonoids: 76
sage.categories.finite_sets.FiniteSets: 79
sage.categories.unital_algebras.UnitalAlgebras.WithBasis: 1
sage.categories.infinite_enumerated_sets.InfiniteEnumeratedSets: 81
That's bad.
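For contrast, the intended once-per-class behavior of a lazy class attribute can be sketched with a minimal descriptor (hypothetical and far simpler than Sage's lazy_class_attribute): the first access computes the value and overwrites the descriptor on the class, so the function can never be called again.

```python
calls = {"n": 0}  # count how often the computation runs

class lazy_class_attribute:
    # Non-data descriptor: __get__ computes the value once, then
    # replaces itself on the class with the plain value.
    def __init__(self, func):
        self.func = func
    def __get__(self, instance, cls):
        value = self.func(cls)
        setattr(cls, self.func.__name__, value)  # cache by overwriting
        return value

class FiniteSemigroups:
    @lazy_class_attribute
    def _base_category_class_and_axiom(cls):
        calls["n"] += 1
        return ("Semigroups", "Finite")

print(FiniteSemigroups._base_category_class_and_axiom)  # -> ('Semigroups', 'Finite')
print(FiniteSemigroups._base_category_class_and_axiom)  # same value, no recomputation
print(calls["n"])                                       # -> 1
```

Counts of 76 or 82, as traced above, mean this overwrite-on-first-access contract is being defeated, most likely by the repeated (re-)initialization of the modules involved.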
comment:252 Changed 4 years ago by
PS: Probably this can also explain the startup time regression. Hence, we should think how to fix it.
comment:253 Changed 4 years ago by
PPS: I think I understand why the code is finishing at all. The "assert" statement involves importing something. Hence, it may fail with an ImportError. This is caught (except (ImportError, AttributeError)), in which case the axiom under consideration is skipped.

In the worst case, this means that the correct axiom is skipped, so that eventually TypeError("Could not retrieve base category class for %s"%cls) is raised (EDIT: and the code certainly deals well with this type error, so that it does not surface).
comment:254 Changed 4 years ago by
Here is the proof of concept of a cached function that prevents infinite recursion by storing some value in its cache:
sage: @cached_function
....: def bla(x):
....:     bla.cache[(x,),()] = None
....:     try:
....:         if x:
....:             return bla(x-1)+2
....:         return bla(x) or 100
....:     finally:
....:         try:
....:             del bla.cache[(x,),()]
....:         except KeyError:
....:             print x, "not stored"
....:
sage: bla(3)
106
comment:255 Changed 4 years ago by
Doing what I sketched above, Sage starts and I get these numbers:
sage: from sage.categories.category_with_axiom import lazy_cls_attr_counter
sage: lazy_cls_attr_counter
{sage.categories.sets_cat.Sets.Infinite: 1,
 sage.categories.commutative_rings.CommutativeRings: 2,
 sage.categories.distributive_magmas_and_additive_magmas.AdditiveAssociative.AdditiveCommutative: 1,
 sage.categories.distributive_magmas_and_additive_magmas.DistributiveMagmasAndAdditiveMagmas.AdditiveAssociative: 1,
 sage.categories.additive_magmas.AdditiveMagmas.AdditiveCommutative: 1,
 sage.categories.additive_magmas.AdditiveUnital.AdditiveInverse: 1,
 sage.categories.additive_magmas.AdditiveMagmas.AdditiveUnital: 1,
 sage.categories.distributive_magmas_and_additive_magmas.AdditiveCommutative.AdditiveUnital: 1,
 sage.categories.magmas.Magmas.Commutative: 1,
 sage.categories.distributive_magmas_and_additive_magmas.AdditiveUnital.Associative: 1,
 sage.categories.magmas.Unital.Inverse: 1,
 sage.categories.magmas.Magmas.Unital: 1,
 sage.categories.distributive_magmas_and_additive_magmas.AdditiveUnital.AdditiveInverse: 1,
 sage.categories.commutative_algebras.CommutativeAlgebras: 2,
 sage.categories.finite_fields.FiniteFields: 2,
 sage.categories.finite_sets.FiniteSets: 2,
 sage.categories.finite_enumerated_sets.FiniteEnumeratedSets: 2,
 sage.categories.finite_semigroups.FiniteSemigroups: 2,
 sage.categories.finite_monoids.FiniteMonoids: 2,
 sage.categories.unital_algebras.UnitalAlgebras.WithBasis: 1,
 sage.categories.infinite_enumerated_sets.InfiniteEnumeratedSets: 2}
but in the end, things seem to be correct, for example:
sage: sage.categories.finite_semigroups.FiniteSemigroups._base_category_class_and_axiom
(sage.categories.semigroups.Semigroups, 'Finite')
sage: sage.categories.finite_enumerated_sets.FiniteEnumeratedSets._base_category_class_and_axiom
(sage.categories.enumerated_sets.EnumeratedSets, 'Finite')
Hence, I think my approach solves the problem.
comment:256 Changed 4 years ago by
 Commit changed from 5ccf253b17c151d8e773037ac634a64f84f03075 to 8eaf51a82c4e2194769db13457979ae601ebbc04
comment:257 Changed 4 years ago by
With the new commit, all tests pass for me. Please check whether you think it could be a good solution.
comment:258 Changed 4 years ago by
PS: Perhaps one should remove the print "this should not happen!", cls that I have put at the end of the cached function. Or one should make the warning provide a clear message asking the user to submit a bug report.
comment:259 Changed 4 years ago by
I'm not completely convinced, because you're still getting some calls twice. It seems that with your changes, the ones that get called twice are those that are not nested or called via a cached_method. In fact, it seems like those are the same classes that fell into this recursion loop, so while this might help (and even be a fix for the problem at hand), I don't think this is the "right" solution.
Perhaps this is a question more for Nicolas, but is there a reason why we need a separate (cached) function base_category_class_and_axiom instead of just including it in the lazy attribute? It seems that it's only called in the lazy attribute.
comment:260 followup: ↓ 261 Changed 4 years ago by
I am afraid that Simon's solution, while directly addressing the apparent issue, is insufficient. You are using a caching mechanism (something that should be used as a performance tool) to avoid an infinite recursion. That's bad. You're putting program logic in a place where people won't expect it.
It also relies on the cache being of a particular form and in a particular place. That's an implementation detail of our cached function decorator. It's already hard to figure out where that cache is stored. I can easily see the location changing in the future. Then you'll get stuff cached in two places, or some hard-to-debug error.
Another issue (and this is already in the code) is that base_category_and_axiom is a module-level function, so it has a global cache. Any input to it (including erroneous input?) will be stored for eternity: a memory leak.
In addition, _base_category_and_axiom is a lazy class attribute, so it has a cache of its own. Why are we caching things twice?
Finally, there are some design decisions in the ticket itself: currently, there is code that relies on packages being in sage.categories by looking at the __name__ and doing string mangling on it. Furthermore, it relates the class name CamelCase to the module name camel_case. These are all fine as programming conventions, but I find it questionable if it's a good idea to engrave them in program logic. This is what I referred to when I wrote "doesn't read like python" above. If this were happening in a specialized little corner it wouldn't be so alarming, but categories are supposed to be a fundamental tool to govern inheritance in Sage and hence part of the infrastructure. That means a lot of people will have to touch and maintain it in the future, so such code should really be held to a higher standard in terms of understandability, cleanness, and design.
Writing a document that explains how to work with it and maintain the code might help, but isn't enough: documents tend to get out of sync with code over time (even docstrings!). But at least it would mean we have a record of the original author's intent.
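For concreteness, the CamelCase-to-module-name convention under discussion amounts to something like the following (a hypothetical re-implementation for illustration; the helper on the branch is called uncamelcase and its exact behavior may differ):

```python
import re

def uncamelcase(name, separator="_"):
    # Split a CamelCase class name before each capital letter that
    # follows a lowercase letter or digit, then lowercase the result.
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", separator, name).lower()

# The convention then ties the class name to its module name, e.g.
# FiniteSemigroups lives in sage.categories.finite_semigroups:
print(uncamelcase("FiniteSemigroups"))     # -> finite_semigroups
print(uncamelcase("CommutativeAlgebras"))  # -> commutative_algebras
```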
comment:261 in reply to: ↑ 260 ; followup: ↓ 263 Changed 4 years ago by
Replying to nbruin:
Another issue (and this is already in the code) is that base_category_and_axiom is a module-level function, so has a global cache. Any input to it (including erroneous input?) will be stored for eternity: memory leak.
No, it is not a memory leak. The stored items are formed by classes that are defined in modules anyway.
In addition, _base_category_and_axiom is a lazy class attribute, so it has a cache of its own. Why are we caching things twice?
When making this function an uncached function (thus relying on the lazy attribute), and hence when removing the hack of temporarily filling the cache, one gets this many calls of base_category_class_and_axiom(cls), sorted by cls:
sage.categories.sets_cat.Sets.Infinite: 1,
sage.categories.commutative_rings.CommutativeRings: 98,
sage.categories.distributive_magmas_and_additive_magmas.AdditiveCommutative.AdditiveUnital: 1,
sage.categories.distributive_magmas_and_additive_magmas.AdditiveAssociative.AdditiveCommutative: 1,
sage.categories.distributive_magmas_and_additive_magmas.DistributiveMagmasAndAdditiveMagmas.AdditiveAssociative: 1,
sage.categories.additive_magmas.AdditiveMagmas.AdditiveCommutative: 1,
sage.categories.additive_magmas.AdditiveUnital.AdditiveInverse: 1,
sage.categories.additive_magmas.AdditiveMagmas.AdditiveUnital: 1,
sage.categories.distributive_magmas_and_additive_magmas.AdditiveUnital.Associative: 1,
sage.categories.magmas.Magmas.Commutative: 1,
sage.categories.distributive_magmas_and_additive_magmas.AdditiveUnital.AdditiveInverse: 1,
sage.categories.magmas.Unital.Inverse: 1,
sage.categories.magmas.Magmas.Unital: 1,
sage.categories.commutative_algebras.CommutativeAlgebras: 97,
sage.categories.finite_fields.FiniteFields: 98,
sage.categories.finite_enumerated_sets.FiniteEnumeratedSets: 98,
sage.categories.finite_sets.FiniteSets: 94,
sage.categories.finite_semigroups.FiniteSemigroups: 91,
sage.categories.finite_monoids.FiniteMonoids: 91,
sage.categories.unital_algebras.UnitalAlgebras.WithBasis: 1,
sage.categories.infinite_enumerated_sets.InfiniteEnumeratedSets: 97
So, it may be a hack, but what else do you suggest to avoid this high number of function calls?
Finally, and there are some design decisions in the ticket itself: Currently, there is code that relies on packages being in sage.categories by looking at the __name__ and doing string mangling on it. Furthermore, it relates the class name CamelCase to the module camel_case. These are all fine as programming conventions but I find it questionable if it's a good idea to engrave them in program logic.
+1.
Ceterum censeo: Trac sucks. My browser keeps jumping to the middle of this long ticket while I write.
comment:262 Changed 4 years ago by
I tried to trace at what point redundant calls are happening, and I can confirm what others have stated: They occur in the assert statement. More precisely: they occur when doing getattr(base_category_class, axiom, None).
So, we have several options:
- Temporarily fill the cache with (None, None), as I have suggested. The lazy attribute raises a TypeError when encountering that (None, None), and this error seems to result in an import error, which is then caught.
- Remove the assertion. However, I believe that it is reasonable to have an assertion, so that future errors with categories and axioms will be found more easily.
- Change how the axiom binds to the base_category_class. I guess this involves a __classget__: apparently this __classget__ somehow requests the value of the lazy class attribute while it is computed.
Probably the third approach is the cleanest, but so far I am not sure if we really have a __classget__ that we can blame.
comment:263 in reply to: ↑ 261 ; followup: ↓ 265 Changed 4 years ago by
Replying to SimonKing:
No, it is not a memory leak. The stored items are formed by classes that are defined in modules anyway.
That holds for the things that get fed to it that it's designed for. But does it always raise an error if it gets something else? It could leak that. So (especially when you scribble in the cache beforehand) you'll have to ensure that the cache is cleansed of undesirable things.
In addition, _base_category_and_axiom is a lazy class attribute, so it has a cache of its own. Why are we caching things twice?
When making this function an uncached function (thus relying on the lazy attribute), and hence when removing the hack with temporarily filling the cache, one gets this many calls of base_category_class_and_axiom(cls), sorted by cls: ... So, it may be a hack, but what else do you suggest to avoid this high number of function calls?
That's just the bug we had before. If you remove the assertion, the recursion is also avoided and you'll see a much lower number of calls. The bug here is that executing base_category_and_axiom (or rather the assert) apparently queries the attribute _base_category_and_axiom. That's bad, so don't do that. That's exactly what I mean by using a cache to cover up flawed programming logic.
Basically, the whole use of base_category_class_and_axiom is questionable. Categories should just define a _base_category_and_axiom attribute outright. As far as base_category_class_and_axiom is concerned, the value is a function of only their name anyway. It's just a tool to save (a small amount of) typing and, as we're seeing, it causes nasty problems while trying to do it.
comment:264 Changed 4 years ago by
In __classget__, we find
if "_base_category_class_and_axiom" not in cls.__dict__:
    cls._base_category_class_and_axiom = (base_category_class,
                                          axiom_of_nested_class(base_category_class, cls))
    cls._base_category_class_and_axiom_was_guessed = True
else:
    assert cls._base_category_class_and_axiom[0] is base_category_class, \
        "base category class for %s mismatch; expected %s, got %s"%(
            cls, cls._base_category_class_and_axiom[0], base_category_class)
What does that mean? Can _base_category_class_and_axiom be in the dict before the call to the lazy class attribute is completed?
comment:265 in reply to: ↑ 263 Changed 4 years ago by
Replying to nbruin:
Replying to SimonKing:
No, it is not a memory leak. The stored items are formed by classes that are defined in modules anyway.
Not for things that get fed to it for which it's designed. Does it always raise an error if it gets something else? It could leak that. So (especially when you scribbe in the cache beforehand) you'll have to ensure that the cache is cleansed of undesirable things.
That's what I do in the "finally" clause (EDIT: which is only executed if there is a manually inserted cache value, as can be seen by the absence of the warning message when starting Sage).
Basically, the whole use of base_category_class_and_axiom is questionable. Categories should just define a _base_category_and_axiom attribute outright.
+1, because this would avoid headache.
-1, because this is what one would have to do in an abundance of categories. This may even include (in the future?) categories that are dynamically created.
Note that in addition to the lazy attribute, CategoryWithAxiom.__classget__ overrides cls._base_category_class_and_axiom. So, why is there this lazy class attribute?
I am not sure yet if I come to the same conclusions as you do. But I agree that the relationship of these three things
- a lazy class attribute,
- a __classget__ that, as a side effect, overrides the lazy class attribute, and
- a cached function that calls the __classget__ while it is computing stuff for the lazy class attribute
should be straightened.
As far as base_category_class_and_axiom is concerned, the value is a function of only their name anyway. It's just a tool to save (a small amount of) typing, and as we're seeing, causing nasty problems while trying to do it.
Yes, for what we want to compute, it would be enough to let the input be cls.__name__.
But no, if we want to make consistency checks (and in the current implementation we do want consistency checks), then we need to input the class.
comment:266 Changed 4 years ago by
Here is my attempt at explaining the interrelation of the cached function base_category_class_and_axiom, the __classget__, and the lazy attribute _base_category_class_and_axiom.
- The lazy attribute needs the cached function to compute its value.
- If the classget is invoked, then the value of the lazy attribute does not need to be guessed, but can be set explicitly (but then: why is cls._base_category_class_and_axiom_was_guessed = True? It should be False, since the value is in fact not guessed by the classget).
- The cached function invokes the classget to do a consistency test.
Now, it can obviously happen that everything starts by calling the lazy attribute. Then, classget is called, and overrides the lazy attribute before its computation is finished. Then, the lazy attribute finishes its computation, and is overridden again (by the value it just computed).
I guess this is why the lazy attribute / cached function is called twice for some classes, even with our fixes.
comment:267 Changed 4 years ago by
Ahahah! There is yet another level of indirection!
Namely, I found that the redundant calls to base_category_class_and_axiom happen because of axiom_of_nested_class. It starts with this:
if hasattr(nested_cls, "_base_category_class_and_axiom"):
    axiom = nested_cls._base_category_class_and_axiom[1]
I guess it is wrong to do hasattr here, since this triggers the (re!)computation of the lazy class attribute. What we should do instead is to look up nested_cls.__dict__!!
EDIT: That's to say
try:
    axiom = nested_cls.__dict__["_base_category_class_and_axiom"][1]
except KeyError:
    ...
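A toy model of the difference (the minimal lazy_class_attribute below is a hypothetical sketch; Sage's real implementation in sage.misc.lazy_attribute is more elaborate): ordinary attribute lookup, and hence hasattr, invokes the descriptor and triggers the computation, while a __dict__ lookup merely checks whether the value has already been stored:

```python
class lazy_class_attribute:
    # Toy lazy class attribute: the first ordinary lookup computes the
    # value and stores it on the (sub)class, shadowing the descriptor.
    def __init__(self, func):
        self.func = func
        self.computations = 0

    def __get__(self, obj, cls):
        self.computations += 1
        value = self.func(cls)
        setattr(cls, self.func.__name__, value)  # cache on the class itself
        return value

class Base:
    @lazy_class_attribute
    def _base_category_class_and_axiom(cls):
        return ("<guessed base category>", "Finite")

class Sub(Base):
    pass

descr = Base.__dict__["_base_category_class_and_axiom"]

# A __dict__ lookup answers "was it computed for Sub?" without side effects:
assert "_base_category_class_and_axiom" not in Sub.__dict__
assert descr.computations == 0

# hasattr goes through normal attribute lookup and triggers the computation:
assert hasattr(Sub, "_base_category_class_and_axiom")
assert descr.computations == 1
assert "_base_category_class_and_axiom" in Sub.__dict__
```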
comment:268 Changed 4 years ago by
Wow!!! When doing what I proposed in my previous comment, the number of calls to the lazy class attribute drops drastically! No surprise, since now it will be directly written as an attribute:
sage.categories.commutative_rings.CommutativeRings: 1,
sage.categories.commutative_algebras.CommutativeAlgebras: 1,
sage.categories.finite_fields.FiniteFields: 1,
sage.categories.finite_enumerated_sets.FiniteEnumeratedSets: 1,
sage.categories.infinite_enumerated_sets.InfiniteEnumeratedSets: 1
That's all, no further calls.
Hence, a new commit is soon to come.
comment:269 followup: ↓ 273 Changed 4 years ago by
 Commit changed from 8eaf51a82c4e2194769db13457979ae601ebbc04 to bdefe0daeb7a4154a506f5ac69a064b6150f8de6
 Work issues Detect and fix Heisenbugs deleted
Here is the new commit. What it does:
- Do not access the lazy attribute (since this may happen while it is being computed) when what we want to know is in fact whether the attribute is in the __dict__ of the class.
- Remove the cache from base_category_class_and_axiom, since it is cached by the lazy attribute anyway.
- Correctly state cls._base_category_class_and_axiom_was_guessed = False if the attribute is in fact not guessed but explicitly set by invoking the classget.
Consequences:
- Most of the time, the lazy attribute is not guessed but explicitly set. I think this should save a lot of computation time. In fact, during startup, a "guess" only happens precisely five times!
- Since there are some tests expecting cls._base_category_class_and_axiom_was_guessed == True, I suppose I have to change some doctests.
Doing make ptest now. But I guess you can already have a look at the code, to see if it is clearer now.
comment:270 followup: ↓ 271 Changed 4 years ago by
Very nice. However I think we should move the cached function into the lazy attribute too.
comment:271 in reply to: ↑ 270 ; followup: ↓ 274 Changed 4 years ago by
Replying to tscrim:
Very nice. However I think we should move the cached function into the lazy attribute too.
Which one do you mean? base_category_class_and_axiom() is not a cached function with my new commit.
comment:272 Changed 4 years ago by
BTW: This explains why the code was more or less working, in spite of the recursion. Namely, apparently the import or recursion error resulted in hasattr(nested_cls, "_base_category_class_and_axiom") returning False (without error!), and from there, things worked, since then the lazy attribute was overridden with its correct value by the classget.
Now, we override the lazy attribute with its correct value by the classget right away, without spending time waiting for an error to happen...
comment:273 in reply to: ↑ 269 ; followup: ↓ 277 Changed 4 years ago by
Replying to SimonKing:
Do not access the lazy attribute (since this may happen while it is being computed) when what we want to know is in fact whether the attribute is in the __dict__ of the class.
That sounds like it's not going to work correctly if one is inheriting from a class that has a value for the attribute already. I don't think it's a good idea to move away from normal attribute lookup semantics in python.
comment:274 in reply to: ↑ 271 Changed 4 years ago by
comment:275 followup: ↓ 278 Changed 4 years ago by
Branch conflicts with #15588; can you merge in either the branch from there or 6.1.beta4? Then I'll give it a whirl on the buildbot...
comment:276 Changed 4 years ago by
The lazy class attribute's documentation states:
The base category class is often another category with axiom, therefore having a special ``__classget__`` method. Storing the base category class and the axiom in a single tuple attribute -- instead of two separate attributes -- has the advantage of not triggering, for example, ``Semigroups.__classget__`` upon ``Monoids._base_category_class``.
I think this is actually not correct. If I am not mistaken, Finite.__classget__ is involved when doing Fields().Finite, but not when calling Fields()._base_category_class.
In any case, a remark should be added that the classget will set the _base_category_class_and_axiom attribute of Fields().Finite().
comment:277 in reply to: ↑ 273 Changed 4 years ago by
Replying to nbruin:
That sounds like it's not going to work correctly if one is inheriting from a class that has a value for the attribute already.
- If I understand correctly, we are not supposed to subclass a category with axiom.
- If we do subclass a category with axiom, hasattr(nested_cls, "_base_category_class_and_axiom") would indeed return True when nested_cls is a subclass of something that has this attribute. However, this is in fact not what we want. We would rather want the subclass to have its own attribute. At least this is my guess at what we would want, if we were to create a subclass (which we don't).
I don't think it's a good idea to move away from normal attribute lookup semantics in python.
I think the normal attribute lookup semantics in Python are simply the wrong tool here. Actually, we don't need hasattr at all, since we know for sure that the class has the attribute (namely: the lazy class attribute). What we want to know is: has this lazy attribute been computed or not? And hasattr certainly cannot answer that question.
comment:278 in reply to: ↑ 275 Changed 4 years ago by
comment:279 Changed 4 years ago by
 Commit changed from bdefe0daeb7a4154a506f5ac69a064b6150f8de6 to 408e0545d832e83eab41e88740ab16c18ccde426
Branch pushed to git repo; I updated commit sha1. New commits:
408e054  Merge branch 'develop' into public/ticket/10963

3008fe1  Merge branch 'public/ticket/10963' of trac.sagemath.org:sage into public/ticket/10963

ffbc6a7  Merge branch 'public/ticket/10963' of trac.sagemath.org:sage into testing_10963

comment:280 Changed 4 years ago by
Here's the branch with the latest develop (6.1.beta4) merged in; I'm building and testing integer_mod_ring.py currently.
comment:281 Changed 4 years ago by
Before you start with the tests: I have a new commit. What do I need to do to push it to trac, after you have changed the branch?
Does it suffice to pull (which automatically merges into my current branch) and then to push again?
New commits:
408e054  Merge branch 'develop' into public/ticket/10963

3008fe1  Merge branch 'public/ticket/10963' of trac.sagemath.org:sage into public/ticket/10963

ffbc6a7  Merge branch 'public/ticket/10963' of trac.sagemath.org:sage into testing_10963

comment:282 Changed 4 years ago by
 Commit changed from 408e0545d832e83eab41e88740ab16c18ccde426 to 9dcafa54b452ffe8a340663ed38087ed7c2d8a4d
comment:283 Changed 4 years ago by
It seems the answer is "yes". My additional commit is in the branch. Now you may test...
comment:284 followup: ↓ 285 Changed 4 years ago by
Doing a pull should be sufficient (but I think you've figured that out now). Now to wait for it to recompile...
comment:285 in reply to: ↑ 284 Changed 4 years ago by
Replying to tscrim:
Doing a pull should be sufficient (but I think you've figured that out now).
Yes. But it makes git create another merge commit. Git is ugly.
comment:286 followups: ↓ 287 ↓ 293 Changed 4 years ago by
Travis went a bit overboard with merging here; you only need to merge once. But oh well. In any case, you can't merge without a merge commit. That's a feature.
comment:287 in reply to: ↑ 286 Changed 4 years ago by
Replying to vbraun:
Travis went a bit overboard with merging here, you only need to merge once. But oh well. In any case, you can't merge without a merge commit. Thats a feature.
I had one commit at the tip of my local branch, and what I really wanted was to rebase it on top of Travis' branch. But pulling means merging, not rebasing.
However, in the end, the code is what it should be, and code is what matters to me.
comment:288 Changed 4 years ago by
sage -t src/sage/symbolic/expression.pyx  # Timed out
Sigh.
comment:289 Changed 4 years ago by
The good news is that the timeout can easily be demonstrated on the command line.
sage: %time integral(exp(x + x^2)/(x+1), x)
CPU times: user 4.97 s, sys: 0.10 s, total: 5.07 s
Wall time: 5.28 s
integrate(e^(x^2 + x)/(x + 1), x)
sage: %time ascii_art(integral(exp(x + x^2)/(x+1), x))
<HANGS>
The bad news is that there are more timeouts.
comment:290 Changed 4 years ago by
 Work issues set to analyse and fix timeouts
comment:291 Changed 4 years ago by
Note: Without merging develop, I'd get
sage: %time integral(exp(x + x^2)/(x+1), x)
CPU times: user 2.11 s, sys: 0.06 s, total: 2.18 s
Wall time: 2.26 s
integrate(e^(x^2 + x)/(x + 1), x)
sage: %time ascii_art(integral(exp(x + x^2)/(x+1), x))
CPU times: user 24.33 s, sys: 0.10 s, total: 24.43 s
Wall time: 24.83 s
  /
 |
 |   2
 |  x  + x
 | e
 | -------- dx
 |  x + 1
 |
/
This is bad enough. I mean, why does it take more than 20 seconds to compute the ascii art representation of this integral?
And the question is: How can categories make ascii art and integration so dog slow? Are categories even used there??
comment:292 Changed 4 years ago by
 Commit changed from 9dcafa54b452ffe8a340663ed38087ed7c2d8a4d to ec340363a811bbafbb8cd5ff8f39e75db9872f9f
Branch pushed to git repo; I updated commit sha1. New commits:
ec34036  Fixed failing doctest in integer_mod_ring.py from (my bad) merging.

comment:293 in reply to: ↑ 286 Changed 4 years ago by
Replying to vbraun:
Travis went a bit overboard with merging here, you only need to merge once. But oh well. In any case, you can't merge without a merge commit. Thats a feature.
Yea, that wasn't the cleanest bit of git I've done...
Anyways, the hanging comes from the call to sympify:
def _ascii_art_(self):
    ...
    from sympy import pretty, sympify
    ...
    try:
        s = pretty(sympify(self), use_unicode=False)
    ...
Although this is hanging with a clean develop branch...
comment:294 Changed 4 years ago by
The slow ascii art is the sympy update (#15512). Now sympy spends a lot of time trying to solve that integral. Drawing the ascii art is still fast.
comment:295 Changed 4 years ago by
 Status changed from needs_work to needs_review
 Work issues analyse and fix timeouts deleted
The slow ascii art issue is now #15636.
comment:296 Changed 4 years ago by
How is one supposed to review this, then? In the good old mercurial workflow, I would take the patches, apply them to a slightly older beta version (before #15512 got merged), and test. But in the git workflow, this would only be possible by rebasing, which changes history (for some notion of history) and is supposed to be bad.
Or what else can we do? Wait for #15636? Why is #15636 not a blocker? After all, the current beta hangs.
comment:297 followup: ↓ 298 Changed 4 years ago by
It doesn't hang for me, just takes a while to draw the ascii art. Does it actually hang for you?
comment:298 in reply to: ↑ 297 Changed 4 years ago by
Replying to vbraun:
It doesn't hang for me, just takes a while to draw the ascii art. Does it actually hang for you?
I didn't run make ptest with the "pure" develop branch. But with the branch that is currently attached to this ticket, I get several timeouts. As stated in comment:291, without merging develop into the branch of this ticket, it is quite slow (22 seconds to translate an integral into ascii art), but it terminates.
And with the pure develop branch, it hangs in the sense of "I lost patience after a couple of minutes". To be precise:
sage: %time integral(exp(x + x^2)/(x+1), x)  # this is fine
CPU times: user 2.12 s, sys: 0.10 s, total: 2.22 s
Wall time: 3.53 s
integrate(e^(x^2 + x)/(x + 1), x)
sage: %time alarm(120); ascii_art(integral(exp(x + x^2)/(x+1), x))
Traceback (most recent call last):
...
AlarmInterrupt:
comment:299 Changed 4 years ago by
Aha! I have not been patient enough.
sage: %time ascii_art(integral(exp(x + x^2)/(x+1), x))
CPU times: user 178.11 s, sys: 1.77 s, total: 179.88 s
Wall time: 180.71 s
  /
 |
 |   2
 |  x  + x
 | e
 | -------- dx
 |  x + 1
 |
/
with the develop branch. But that's too slow.
comment:300 Changed 4 years ago by
When I tested yesterday, I got
sage -t src/sage/symbolic/expression.pyx  # Timed out
sage -t src/sage/plot/plot.py  # Timed out
sage -t src/sage/combinat/crystals/littelmann_path.py  # Timed out
before I interrupted after testing only 6 out of 2496 files. That's why I thought #15636 should be a blocker.
comment:301 Changed 4 years ago by
PS: But today, again with the branch from here, I am getting
sage -t src/sage/symbolic/expression.pyx  [2139 tests, 278.47 s]
sage -t src/sage/plot/plot.py  [373 tests, 147.14 s]
sage -t src/sage/combinat/crystals/littelmann_path.py  [210 tests, 130.32 s]
Strange.
comment:302 followups: ↓ 304 ↓ 305 Changed 4 years ago by
Back to the topic.
Do you think that it is a valid solution to let ._base_category_class_and_axiom be explicitly set, with the exception of 5 cases in which it is computed/guessed by a lazy attribute? Even with the caveat that this "explicit setting" happens as a side-effect of a __classget__? I think it is, since the __classget__ method does not need to guess: it merely documents how it has constructed the category that __classget__ returns.
comment:303 Changed 4 years ago by
Now, all tests work:
All tests passed!
----------------------------------------------------------------------
Total time for all tests: 4016.6 seconds
    cpu time: 6508.4 seconds
    cumulative wall time: 7741.2 seconds
Very strange. But from my perspective, we can be back at review now.
comment:304 in reply to: ↑ 302 Changed 4 years ago by
Replying to SimonKing:
Do you think that it is a valid solution to let ._base_category_class_and_axiom be explicitly set, with the exception of 5 cases in which it is computed/guessed by a lazy attribute? Even with the caveat that this "explicit setting" happens as a side-effect of a __classget__? I think it is, since the __classget__ method does not need to guess: it merely documents how it has constructed the category that __classget__ returns.
From reading the discussion so far, this sounds reasonable. I'll be back to Sage development tomorrow (yeah, finally!), and review in details your changes then.
Thanks to all of you for figuring out the issue in my black magic!
Happy new year!
Nicolas
comment:305 in reply to: ↑ 302 ; followup: ↓ 306 Changed 4 years ago by
Replying to SimonKing:
Do you think that it is a valid solution to let ._base_category_class_and_axiom be explicitly set, with the exception of 5 cases in which it is computed/guessed by a lazy attribute?
Would it be too onerous to just change/hand-code those 5 cases? It would get rid of the necessity to have an incredibly fragile magic fallback that has program logic attached to __name__. When I stumbled into it I was unpleasantly surprised. Sure, as a guess it's not a bad heuristic, but as the Zen of Python says: "In the face of ambiguity, refuse the temptation to guess". I think that's often good advice, and I think it applies here too.
comment:306 in reply to: ↑ 305 Changed 4 years ago by
Replying to nbruin:
Replying to SimonKing:
Do you think that it is a valid solution to let ._base_category_class_and_axiom be explicitly set, with the exception of 5 cases in which it is computed/guessed by a lazy attribute?
Would it be too onerous to just change/hand-code those 5 cases?
These 5 are those that appear when starting Sage. Nicolas, can you guarantee that all categories-with-axiom that are created after starting Sage will be constructed by means of the __classget__?
I think it would be good to get rid of the guesswork! For example, it makes it impossible to create a new category with axiom just somewhere: The module name must be chosen according to the name of the category class, or creation of an instance of this class will fail.
sage: from sage.categories.category_with_axiom import CategoryWithAxiom
sage: class MyAxiom(CategoryWithAxiom): pass
sage: C = MyAxiom()
Traceback (most recent call last):
...
/home/king/Sage/git/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.pyc in base_category_class_and_axiom(cls)
    224         module = uncamelcase(name, "_")
    225         assert cls.__module__ == "sage.categories."+module,\
--> 226             "%s should be implemented in `sage.categories.%s`"%(cls, module)
    227         for axiom in all_axioms:
    228             if axiom == "WithBasis" and name.endswith(axiom):
AssertionError: <class '__main__.MyAxiom'> should be implemented in `sage.categories.my_axiom`
And this is certainly not very pleasant. Occasionally I like to create a category class on the fly, interactively, and I don't want to be forced to implement it in a specific submodule of sage.categories.
comment:307 Changed 4 years ago by
For the record, the buildbot didn't find any further issues. I agree with Nils about not doing string munging if it can be avoided.
comment:308 Changed 4 years ago by
Since we only have 5 (additional) exceptional cases, I agree it would make sense to provide them directly, and otherwise rely on the classget. If you don't mind, I'll prepare a commit accordingly. I could imagine that we can get rid of the guessing function base_category_class_and_axiom(cls), but this would (perhaps) be a second commit.
comment:309 followup: ↓ 310 Changed 4 years ago by
Aha, it is not so easy. It turns out that the lazy attribute is involved much more often, but not during startup of Sage.
For example, when you start Sage, FinitePermutationGroups is not constructed by means of applying an axiom. Hence, the attribute is not available, and things crash.
sage: FinitePermutationGroups()  # Note: here I have replaced the lazy attribute by something that just raises an error
Traceback (most recent call last):
...
AttributeError: <class 'sage.categories.finite_permutation_groups.FinitePermutationGroups'> does not know its base category and axiom
If one then constructs the category class (not instance!) by means of an axiom, things work:
sage: PermutationGroups.Finite
<class 'sage.categories.finite_permutation_groups.FinitePermutationGroups'>
sage: FinitePermutationGroups()
Category of finite permutation groups
So, it seems that we have to live with the guesswork. But I wonder if we can simplify the logic, and can make it so that categories with axiom can be defined interactively.
comment:310 in reply to: ↑ 309 ; followups: ↓ 311 ↓ 313 Changed 4 years ago by
Replying to SimonKing:
So, it seems that we have to live with the guesswork.
Or change the way FinitePermutationGroups is constructed. It's not that the base category of FinitePermutationGroups is ever going to be anything else than PermutationGroups.
comment:311 in reply to: ↑ 310 ; followup: ↓ 326 Changed 4 years ago by
Replying to nbruin:
Replying to SimonKing:
So, it seems that we have to live with the guesswork.
Or change the way FinitePermutationGroups is constructed. It's not that the base category of FinitePermutationGroups is ever going to be anything else than PermutationGroups.
How?
The current logic is: there is a category FinitePermutationGroups that, in the first place, is standalone and only knows that it is obtained by some axiom, but not which one. To work properly, it needs to find out which axiom it was obtained from.
With the current logic, the category class has three ways to learn the construction information:
- If someone did PermutationGroups().Finite(), then FinitePermutationGroups would be returned, and at the same time the construction information would be stored in FinitePermutationGroups._base_category_class_and_axiom.
- We could handcode FinitePermutationGroups._base_category_class_and_axiom.
- The correct value of FinitePermutationGroups._base_category_class_and_axiom could be guessed from the name, and then verified by calling the base category with the axiom. If an inconsistency arises, an error is raised stating that the value cannot be guessed.
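The first way can be sketched in plain Python with a descriptor playing the role of __classget__ (all class names here are hypothetical stand-ins, not Sage's actual classes):

```python
class AxiomLink:
    # Toy stand-in for the __classget__ hook: when the nested attribute
    # is looked up on the base category class, record the construction
    # information on the category-with-axiom class before returning it.
    def __init__(self, cls, axiom):
        self.cls, self.axiom = cls, axiom

    def __get__(self, instance, owner):
        self.cls._base_category_class_and_axiom = (owner, self.axiom)
        return self.cls

class FiniteThings:            # plays the role of FinitePermutationGroups
    pass

class Things:                  # plays the role of PermutationGroups
    Finite = AxiomLink(FiniteThings, "Finite")

# Accessing Things.Finite triggers __get__, which stores the link:
assert Things.Finite is FiniteThings
assert FiniteThings._base_category_class_and_axiom == (Things, "Finite")
```

If `FiniteThings` is constructed directly without that access ever happening, the attribute is missing, which is exactly the crash described above.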
So, questions to Nils:
- Do you intend to handcode everything? Then you need to cover many cases.
- Do you intend to put PermutationGroups().Finite() into code that is executed at startup time, and similarly for all other categories with axiom? Then the startup time would likely increase, and it would not be less work than handcoding the construction.
- Do you intend to change the logic totally? This would mean rewriting Nicolas' patch from scratch.
comment:312 Changed 4 years ago by
For the record: I am in the process of writing a long answer to the discussion.
comment:313 in reply to: ↑ 310 Changed 4 years ago by
Hi Simon, Nils, Volker,
When designing infrastructure, one of my main guiding principles is to strive hard for concise, expressive, redundancy-free idioms. In particular, I believe that a bit of well-localized black magic in the infrastructure is acceptable if it avoids redundancy in many places in the code using that infrastructure: trading ugliness in one spot for beauty in many. For example, ClasscallMetaclass is definitely ugly black magic, but I believe it was the price to pay for the nice UniqueRepresentation idiom.
Now, in the case at hand, is this going overboard?
Here are some necessary conditions that should be satisfied:
(1) The black magic should be robust.
(2) The black magic should be well documented.
(3) The black magic should be transparent to the casual user; that is, it should be invisible most of the time, and when it appears on the surface, it should be easy to understand what it does without understanding the implementation details.
(4) In particular, in case of error, the black magic should raise clear enough messages.
(5) There should be a couple of developers who understand the implementation details.
(6) It should be worth it :)
Let's take those points one by one.
(1,5) Well, hopefully this is now the case :)
Thanks so much and congratulations for your work on this! The recursion triggered by an assertion check was really a tricky one.
(2) Granted, this is not yet up to speed. In general there is no documentation yet on how axioms work, which is very bad. I created the follow-up ticket #15643 for this.
(3) From the feedback I got from users / implementers of axioms, this seems to be not so bad.
(4) This is hopefully better now. See below.
(6) the guesswork is currently used in roughly 20 categories:
mistral> grep -L base_category `grep -l "^class.*CategoryWithAxiom" *.py` | wc -l
20
I expect this number to increase considerably with time, as more and more categories with axioms are added. The existing follow-up code in the Sage-Combinat queue adds at least as many.

About Simon's issue with interactively creating categories with axiom: the error message was confusing; it is in fact possible. One just needs to specify _base_category_class_and_axiom if the guessing does not work:
sage: class Cs(Category):
....:     def super_categories(self): return [Sets()]
sage: class Ds(CategoryWithAxiom):
....:     _base_category_class_and_axiom = [Cs, "Finite"]
sage: Cs.Finite = Ds
sage: Ds()
Category of ds
sage: Ds().super_categories()
[Category of finite sets, Category of cs]
In an upcoming commit, I improved the error message and also the logic: for the guessing to work, only the base category needs to be in the standard location, not the category with axiom itself. So now one can do:
sage: class FacadeSemigroups(CategoryWithAxiom):
....:     pass
sage: Semigroups.Facade = FacadeSemigroups
sage: FacadeSemigroups()
Category of facade semigroups
Thanks Simon for pointing those deficiencies!
Altogether, I think it's worth keeping the feature. The implementation is definitely disputable and any further improvement is more than welcome: simplified logic, better error handling, documentation, ... But maybe we can consider it good enough for now.
Cheers,
Nicolas
PS: about Python's Zen about guessing: I believe this is more about human guessing rather than computer heuristics; but that might be just my interpretation!
comment:314 Changed 4 years ago by
 Commit changed from ec340363a811bbafbb8cd5ff8f39e75db9872f9f to 32b3c4e13aca58a7dfc9a33528164a5ef1b273e7
Branch pushed to git repo; I updated commit sha1. New commits:
32b3c4e  Improved the guessing logic for categories with axioms + typo fixes

comment:315 followup: ↓ 317 Changed 4 years ago by
Hi Simon,
I just went through your changes to the guessing, and I am happy with them. Just one thing: did you have a specific rationale for switching the value of "was_guessed" to False when it's set by __classget__?
A priori, I meant the value to be True whenever the base category and axiom was discovered by the system, and not set explicitly in the category with axiom.
Cheers,
Nicolas
comment:316 Changed 4 years ago by
Note: I'll have a small review patch which I'll push after lunch.
comment:317 in reply to: ↑ 315 ; followup: ↓ 319 Changed 4 years ago by
Replying to nthiery:
I just went through your changes to the guessing, and I am happy with them. Just one thing: did you have a specific rationale for switching the value of "was_guessed" to False when it's set by __classget__?
Yes. If it is set by __classget__ then it wasn't guessed. Hence, "was_guessed" should be false.
Moreover, it makes debugging slightly easier, as the "was_guessed" attribute tells how the base category class and axiom were obtained:
- If "was_guessed" is missing: handcoded.
- If "was_guessed == True": the lazy class attribute used name mangling.
- If "was_guessed == False": the class was explicitly constructed that way, hence guessing (and name mangling) was not needed.
A priori, I meant the value to be True whenever the base category and axiom was discovered by the system, and not set explicitly in the category with axiom.
See above: This can still be seen in the absence of "was_guessed".
comment:318 Changed 4 years ago by
 Commit changed from 32b3c4e13aca58a7dfc9a33528164a5ef1b273e7 to 478de48553d203516cddb47e0cb89c34ccc210ee
Branch pushed to git repo; I updated commit sha1. New commits:
478de48  Categories with axioms: improved names for the protocol to recover how _base_category_class_and_axiom was set.

comment:319 in reply to: ↑ 317 ; followup: ↓ 320 Changed 4 years ago by
Replying to SimonKing:
Yes. If it is set by __classget__ then it wasn't guessed. Hence, "was_guessed" should be false.

Moreover, it makes debugging slightly easier, as the "was_guessed" attribute tells how the base category class and axiom were obtained:
- If "was_guessed" is missing: handcoded.
- If "was_guessed == True": the lazy class attribute used name mangling.
- If "was_guessed == False": the class was explicitly constructed that way, hence guessing (and name mangling) was not needed.
A priori, I meant the value to be True whenever the base category and axiom was discovered by the system, and not set explicitly in the category with axiom.
See above: This can still be seen in the absence of "was_guessed".
Ok; from this discussion it became clear that the name of this attribute was bad, since we did not interpret it in the same way. I reworked the protocol a tiny bit so that it's less ambiguous, and improved the doc accordingly.

If all tests pass, if you are happy with the above change, and if Nils and Volker are ok with keeping the guessing strategy, then we could go back to positive review!
Cheers,
Nicolas
comment:320 in reply to: ↑ 319 ; followups: ↓ 321 ↓ 325 Changed 4 years ago by
Replying to nthiery:
Ok; from this discussion it became clear that the name of this attribute was bad since we did not interpret it in the same way. I reworked a tiny bit the protocol so that it's less ambiguous, and improved the doc accordingly.
So, now you want _base_category_class_and_axiom_origin to be explicitly set to 'hardcoded' whenever someone hardcodes _base_category_class_and_axiom? Said "someone" will probably forget to set it. That's why I still think it is better not to set it if it is hardcoded, and to set it automatically in __classget__ resp. in the lazy attribute.
If all test pass, if you are happy with the above change,
Up to the above criticism, I am happy with the previous two commits. But let's see if tests pass.
Some meta-remark:

It seems to me that you want to turn sage.categories into a database, but without using existing implementations of databases (aka "reinventing the wheel"). Relations between the items stored in that database are encoded in _base_category_class_and_axiom on the one hand, and by providing nested classes on the other hand, e.g.

    PermutationGroups.Finite = LazyImport('sage.categories.finite_permutation_groups', 'FinitePermutationGroups')

Here, _base_category_class_and_axiom is either hardcoded, guessed from the names (which requires sticking to certain naming conventions), or explicitly obtained if __classget__ happens to be involved.
Perhaps (in a second step, certainly not now) one should think of using dedicated database tools?
comment:321 in reply to: ↑ 320 ; followup: ↓ 322 Changed 4 years ago by
Replying to SimonKing:
So, now you want that _base_category_class_and_axiom_origin is explicitly set to 'hardcoded' whenever someone hardcodes _base_category_class_and_axiom?
Luckily, not! That would be very bad indeed: cluttering the code with redundant information. If you look right after the _base_category_class_and_axiom attribute, you will see that _base_category_class_and_axiom_origin is set to 'hardcoded' which does the trick.
Some metaremark:
It seems to me that you want to turn sage.categories into a database, but without using existing implementations of databases (aka "reinventing the wheel"). Relations between the items stored in that database are encoded in _base_category_class_and_axiom on the one hand, and by providing nested classes (e.g., PermutationGroups.Finite = LazyImport('sage.categories.finite_permutation_groups', 'FinitePermutationGroups')) on the other hand. Here, _base_category_class_and_axiom is either hardcoded, guessed from the names (which requires sticking to certain naming conventions), or explicitly obtained if __classget__ happens to be involved.

Perhaps (in a second step, certainly not now) one should think of using dedicated database tools?
Yes, from the beginning, Categories are definitely some sort of database of algorithms and math knowledge (in particular deduction rules defined programmatically). At first sight, it does not seem obvious that this could be implemented using standard database tools, but we can certainly think about it.
Cheers,
Nicolas
comment:322 in reply to: ↑ 321 Changed 4 years ago by
Replying to nthiery:
Luckily, not! That would be very bad indeed: cluttering the code with redundant information. If you look right after the _base_category_class_and_axiom attribute, you will see that _base_category_class_and_axiom_origin is set to 'hardcoded' which does the trick.
Ahaha, you are right. The "origin" attribute is there by default and overridden as soon as __classget__ or the lazy attribute is involved. OK, then I'm fine with both recent commits, modulo doctests passing.
comment:323 Changed 4 years ago by
For the record, all tests passed on my machine.
comment:324 followup: ↓ 336 Changed 4 years ago by
+    @cached_method
+    def DualObjects(self):
+        r"""
+        Return the category of duals of objects of ``self``.
+
+        The dual of a vector space `V` is the space consisting of
+        all linear functionals on `V` (see :wikipedia:`Dual_space`).
+        Additional structure on `V` can endow its dual with
+        additional structure; e.g. if `V` is an algebra, then its
+        dual is a coalgebra.
+
+        This returns the category of dual of spaces in ``self`` endowed
+        with the appropriate additional structure.
+
+        .. SEEALSO::
+
+            - :class:`.dual.DualObjectsCategory`
+            - :class:`~.covariant_functorial_construction.CovariantFunctorialConstruction`.
+
+        .. TODO:: add support for graded duals.
+
+        EXAMPLES::
+
+            sage: VectorSpaces(QQ).DualObjects()
+            Category of duals of vector spaces over Rational Field
+
+        The dual of a vector space is a vector space::
+
+            sage: VectorSpaces(QQ).DualObjects().super_categories()
+            [Category of vector spaces over Rational Field]
+
+        The dual of an algebra is a coalgebra::
+
+            sage: sorted(Algebras(QQ).DualObjects().super_categories(), key=str)
+            [Category of coalgebras over Rational Field,
+             Category of duals of vector spaces over Rational Field]
I know this is not a big issue, since the dual() of an algebra *is* a coalgebra in all cases in which dual() is implemented (not least because in the infinite-dimensional cases it usually means the graded dual). But I'm still unhappy with the docstring lying in my face. Can anyone write a reasonably worded .. WARNING about this dual() not being the actual vector-space dual?
comment:325 in reply to: ↑ 320 ; followup: ↓ 334 Changed 4 years ago by
Also, I suspect this to be a typo:
+    class Unital(CategoryWithAxiom):
+
+        class SubcategoryMethods:
+
+            @cached_method
+            def Inverse(self):
+                r"""
+                Returns the full subcategory of the unital objects of ``self``.
+
+                EXAMPLES::
+
+                    sage: Magmas().Unital().Inverse()
+                    Category of inverse unital magmas
+                    sage: Monoids().Inverse()
+                    Category of groups
Should be the full subcategory of the *inverse*, not the unital, objects of self.
comment:326 in reply to: ↑ 311 ; followup: ↓ 328 Changed 4 years ago by
Nicolas has explained his original intent to a large degree in a post above and I mostly agree with him, except in one place. Apparently, in his model every category is constructed from a supercategory and an axiom. In that case there is no need for guessing data: all the data is there when the category gets constructed.
Apparently some categories do not get constructed explicitly from a supercategory and an axiom. Instead, the system splits the __name__ into pieces and uses one part as the name of an axiom and another part as the name of the supercategory. I guess that this seems a convenient shortcut because most of those categories are defined by literal source code. I think it's crossing a line. Nowhere else in Python does the makeup of a name determine the inheritance of properties (which is what categories eventually lead to). As Thiéry points out, it's a heuristic one can avoid, but I think it departs so much from what Python normally does that it will frequently lead to confusion. I think you should look for a cleaner paradigm to express these relations concisely, and now is the time to get it right. It will be more painful to change it afterwards.
It doesn't sound to me like the category information forms a database in the sense of a bunch of tables filled with rows and columns. To me it seems closer to a (rooted?) tree, with edges labelled by axioms. Python has syntax to express rooted trees: inheritance. The only thing missing is the label. That can be replaced by a class attribute, simply set to a string. There is a small amount of boilerplate involved with writing classes, e.g., one usually has to write def __init__(self):. Writing a single line axiom = "Finite" can be considered acceptable boilerplate in my opinion and removes all guessing. Do you think that having to write that "Finite" is redundancy because "Finite" already occurs in the __name__? In that case I disagree: everywhere in Python people are free to choose names that they think are most informative (avoiding collisions and keywords), regardless of the semantics of the code.
Replying to SimonKing:
So, questions to Nils:
 Do you intend to handcode everything? Then you need to cover many cases.
I may misunderstand what is required in the process, but at the moment I think "yes". If an axiom is a required property for a Category then it needs to be supplied somewhere, not derived by chopping a part off a __name__.
Do you intend to put PermutationGroups().Finite() into code that is executed at startup time, and similarly for all other categories with axiom? Then the startup time would likely increase, and it would not be less work than handcoding the construction.
You do not make that sound attractive. I don't know what that code would do, so I was not intending to put that anywhere.
 Do you intend to change the logic totally? This would mean to rewrite Nicolas patch from scratch.
If the present patch unavoidably leads to convoluted constructs, then that's a strong indication that the design is flawed. In that case it might be advisable to either redesign, or carefully argue what the current design is and indicate why the nasty bits are really unavoidable.
This is infrastructure. We'll be living with this for a long time (see coercion framework). It's worth trying to get it right.
comment:327 Changed 4 years ago by
Let me just remind you of the standard lore against using string contents for program flow: it makes it much harder to debug. Python, like pretty much any programming language, protects you from typos by raising errors at the parser stage. But if you encode information in the class name then no parser in the world is going to tell you whether you have a typo or not. Well-designed code may construct names from code, but getting code from names is a bad idea.
comment:328 in reply to: ↑ 326 Changed 4 years ago by
Replying to nbruin:
Nicholas has explained his original intent to a large degree in a post above and I mostly agree with him, except on one place. Apparently, in his model every category is constructed from a supercategory and an axiom. In that case there is no need for guessing data: all the data is there when the category gets constructed.
I think it is two times "no".

- Some categories have no axiom.
sage: Magmas.mro()
[sage.categories.magmas.Magmas,
 sage.categories.category_singleton.Category_singleton,
 sage.categories.category.Category,
 sage.structure.unique_representation.UniqueRepresentation,
 sage.structure.unique_representation.CachedRepresentation,
 sage.misc.fast_methods.WithEqualityById,
 sage.structure.sage_object.SageObject,
 object]
No axiom in it.
- If you look at sage.categories.finite_permutation_groups.FinitePermutationGroups, you'll find that this is not coded as the result of applying an axiom to a supercategory. However, it can be constructed by applying an axiom to a supercategory: the result will be the same.
I think it's crossing a line. Nowhere else in Python does the makeup of a name determine the inheritance of properties (which eventually categories will lead to). As Thierry points out, it's a heuristic one can avoid, but I think it departs so much from what Python normally does that it will frequently lead to confusion. I think you should look for a cleaner paradigm to express these relations concisely, and now is the time to get it right. It will be more painful to change it afterwards.
+1
With the caveat that the code does sufficient tests to get the axioms right. These tests make it relatively robust, I think.
It doesn't sound to me like the category information forms a database in the sense of a bunch of tables filled with rows and columns. To me it seems closer to a (rooted?) tree,
No, it is not a tree. It is a digraph, with edges being labelled by axioms, and I guess one also has a poset structure. But certainly it is not a tree.
with edges labelled by axioms. Python has syntax to express rooted trees: inheritance. The only thing missing is the label. That can be replaced by a class attribute, simply set to a string.
This is what large parts of Nicolas' model does. However, since we don't have a tree, some complication arises.
There is a small amount of boilerplate involved with writing classes, e.g., one usually has to write def __init__(self):. Writing a single line axiom = "Finite" can be considered acceptable boilerplate in my opinion and removes all guessing. Do you think that having to write that "Finite" is redundancy because "Finite" already occurs in the __name__? In that case I disagree: everywhere in Python people are free to choose names that they think are most informative (avoiding collisions and keywords), regardless of the semantics of the code.
+1.
Replying to SimonKing:
So, questions to Nils:
 Do you intend to handcode everything? Then you need to cover many cases.
I may misunderstand what is required in the process, but at the moment I think "yes". If an axiom is a required property for a Category then it needs to be supplied somewhere, not derived by chopping a part off a __name__.
OK, but there is quite some additional (and rather dull) work involved.
Do you intend to put PermutationGroups().Finite() into code that is executed at startup time, and similarly for all other categories with axiom? Then the startup time would likely increase, and it would not be less work than handcoding the construction.

You do not make that sound attractive.
Correct. It would increase startup time. By the way: Did the startup time improve after removing the deep recursion?
If the present patch unavoidably leads to convoluted constructs that that's a strong indication that the design is flawed. In that case it might be advisable to either redesign or carefully argue what the current design is and indicate why the nasty bits are really unavoidable.
I think the design is good, robust and clear, but one implementation detail is flawed: I think you agree with me that a naming scheme is not the right tool to implement large parts of a digraph structure. It should better be hardcoded in what currently is a lazy class attribute, even though it is dull work to put it in.
This is infrastructure. We'll be living with this for a long time (see coercion framework). It's worth trying to get it right.
I think the coercion framework isn't totally flawed either. And it is about digraphs, too...
comment:329 Changed 4 years ago by
Concerning startup time, I notice that with commit 5ccf253b17c151d8e773037ac634a64f84f03075 the startup_time plugin does not complain. And that commit is before we fixed the recursion problem. I have kicked the patchbot now; I hope we will soon get more relevant data.
comment:330 followups: ↓ 331 ↓ 332 Changed 4 years ago by
Thanks guys for bringing in your different perspective!
A couple comments:
 We should leave the database discussion aside for now. It's just going to pollute that thread which is already too long.
 Nils is right in that the category *code* is indeed structured as a tree. Of course, the inheritance diagram between the categories forms an acyclic digraph.
 The guessing is just about allowing for the shorthand
FiniteDimensionalAlgebras(QQ)  ->  Algebras(QQ).FiniteDimensional()
Those shorthands are mostly used interactively or for backward compatibility.
The guessing does not prevent you from using any name you like for your class. But if you adhere to the standards, which is what you would do naturally most of the time anyway, you get rewarded with some syntactic sugar and don't have to write down redundant information. I like the idea of encouraging people to follow the standards.
 In case there is a typo in the class name, you get an explicit error the first time you try to create the category. So, assuming the black magic is now reasonably robust (I believe so, but ...), the issues should only pop up when creating a new category, and thus be well localized and easy to debug.
 By setting explicitly _base_category_and_axiom, you are putting in information which is redundant with the reverse link from the base category. For the above example, here are the two pieces of information:
In Algebras:
FiniteDimensional = LazyImport('sage.categories.finite_dimensional_algebras', 'FiniteDimensionalAlgebras')
In FiniteDimensionalAlgebras:
_base_category_and_axiom = [Algebras, "FiniteDimensional"]
This violates the single point of truth and opens the door for inconsistent information. It also makes restructuring the code a tiny bit more brittle. Granted, it's not so bad since, in principle, any such inconsistency will be detected and barked about at runtime.
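The kind of runtime consistency detection alluded to here can be sketched in plain Python (all names are hypothetical stand-ins, not Sage's actual checker):

```python
class Algebras:
    pass

class FiniteDimensionalAlgebras:
    # Backward link, hardcoded on the category with axiom.
    _base_category_and_axiom = (Algebras, "FiniteDimensional")

# Forward link on the base category (would be a LazyImport in Sage).
Algebras.FiniteDimensional = FiniteDimensionalAlgebras

def check_links(cls):
    # Hypothetical checker: the forward link on the base category must
    # point back at the very class that declares the backward link.
    base, axiom = cls._base_category_and_axiom
    if getattr(base, axiom, None) is not cls:
        raise AssertionError("%s.%s does not point back to %s"
                             % (base.__name__, axiom, cls.__name__))

check_links(FiniteDimensionalAlgebras)   # consistent: no error raised
```

If someone renames one side without the other, the check fails loudly at the first use, which is the "barked about at runtime" behaviour mentioned above.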
The act of not specifying _base_category_and_axiom is actually a statement: this category is following the naming standards. This statement is used in the name building for category objects (see _repr_object_names). Getting nice names for category objects is an important feature that we will have to support one way or the other.
 For the record, there is already some name mangling occurring in the Sage code, e.g. for unpickling instances of nested classes, or for compiling the documentation thereof with Sphinx. And of course in repr for constructing the names of the objects of the categories; but I agree that the latter is not as touchy since this only affects the output, not the semantic.
Altogether, I really don't like adding this redundant information everywhere. It feels to me as spreading dirt over my carefully crafted idioms :) But I understand your being conservative after the bad bug that hit us. And we need to move forward and get this done. If you *really* can't stand this guessing, go ahead, add the redundant information everywhere this is needed, and fix the implementation of _repr_object_names accordingly.
If instead you have a protocol in mind that avoids this redundant information without doing name mangling and while leaving the same flexibility in terms of code organization (in particular supporting to implement a category with axioms either as a nested class or in a separate file, typically lazy imported), I am all ears! But I converged to this protocol after three years of practical usage. I may of course have missed something obvious, but I don't foresee myself finding a better protocol in the coming days.
Cheers,
Nicolas
comment:331 in reply to: ↑ 330 ; followups: ↓ 333 ↓ 337 Changed 4 years ago by
Replying to nthiery:
In Algebras:

    FiniteDimensional = LazyImport('sage.categories.finite_dimensional_algebras', 'FiniteDimensionalAlgebras')

In FiniteDimensionalAlgebras:

    _base_category_and_axiom = [Algebras, "FiniteDimensional"]
Right. I was expecting something like that. The fact that each has to refer to the other is suspicious to me. What is the scenario that makes this absolutely necessary to be there right from the start? Wouldn't it be possible for there to be only one link? Perhaps when that link is actually exercised the other one can be put in place.
My default reaction would be that FiniteDimensionalAlgebras needs to know about Algebras because it needs to inherit the properties of Algebras, but is it absolutely necessary for Algebras to know about a FiniteDimensional version, even before it is instantiated? In general I expect the LazyImport to be a bad sign. LazyImports have trouble getting resolved in the first place. In fact, in the traceback that Volker produced, you can see there is a LazyImport involved in the deep recursion, so I suspect that the LazyImport in this case indeed does not get cleared properly.
My default would be that FiniteDimensionalAlgebras only registers itself with Algebras when it gets instantiated, if that is necessary at all. That requires finding a solution for Algebras().FiniteDimensional() (is that how you call it?) when that happens before FiniteDimensionalAlgebras gets imported. Does that happen at all? If so, what's the scenario? Perhaps we can solve that scenario.
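This register-on-instantiation idea could look roughly like the following (toy classes with hypothetical names, not Sage code):

```python
class Algebras:
    pass

class FiniteDimensionalAlgebras:
    _base_category_and_axiom = (Algebras, "FiniteDimensional")

    def __init__(self):
        # Only install the reverse link on the base category when the
        # subcategory is actually instantiated, avoiding an up-front
        # LazyImport attribute on Algebras.
        base, axiom = self._base_category_and_axiom
        setattr(base, axiom, type(self))

assert not hasattr(Algebras, "FiniteDimensional")   # no link yet
FiniteDimensionalAlgebras()                         # instantiation registers it
assert Algebras.FiniteDimensional is FiniteDimensionalAlgebras
```

The open question raised above remains: this only works if nobody needs Algebras.FiniteDimensional before the subcategory has been imported and instantiated at least once.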
(About the coercion framework: I do not intend to imply it's bad; it's just trying to solve a hard problem, and I know for a fact that the current iteration we have was very carefully designed, and certainly wasn't the first version that was tried.)
comment:332 in reply to: ↑ 330 ; followup: ↓ 338 Changed 4 years ago by
Replying to nthiery:
 We should leave the database discussion aside for now. It's just going to pollute that thread which is already too long.
+1. I said that we should get the correct logic into the code right now. But using a "proper" database is something for the future.
 Nils is right in that the category *code* is indeed structured as a tree.
That's why I said "this is what large parts of Nicolas' model do". But:
Of course, the inheritance diagram between the categories forms an acyclic digraph.
You explained to me that the categories being arranged in something that is not a tree gave you some headache. See "DivisionRings().Finite() == Fields().Finite()".
 The guessing is just about allowing for the shorthand
FiniteDimensionalAlgebras(QQ)  ->  Algebras(QQ).FiniteDimensional()
Those shorthands are mostly used interactively or for backward compatibility.
And, as I have pointed out above, only 5 shorthands are used when Sage starts.
 In case there is a typo in the class name, you get an explicit error the first time you try to create the category. So, assuming the black magic is now reasonably robust (I believe so, but ...), the issues should only pop up when creating a new category, and thus be well localized and easy to debug.
+1. I think it is important that the consistency tests take place, and I think they guarantee robustness of the code.
I can't check the code right now; but didn't one of your last commits remove the "assert" statement that originally triggered the recursion? I think after my commit the assertion would not trigger a recursion, and it would perhaps be better to keep it in, unless you can point out that an equivalent consistency check is happening anyway.
 By setting explicitly _base_category_and_axiom, you are putting in information which is redundant with the reverse link from the base category. For the above example, here are the two pieces of information:
In Algebras:

    FiniteDimensional = LazyImport('sage.categories.finite_dimensional_algebras', 'FiniteDimensionalAlgebras')

In FiniteDimensionalAlgebras:

    _base_category_and_axiom = [Algebras, "FiniteDimensional"]

This violates the single point of truth and opens the door for inconsistent information.
In a digraph, it is useful that any node knows both the in-arrows and the out-arrows. So, the data structure should be such that both Algebras.FiniteDimensional and FiniteDimensionalAlgebras._base_category_and_axiom are available (and of course this is what your code provides).
However, it would indeed be nice to have a single point of truth. I believe that this single point of truth should (in FUTURE!!) not be the naming scheme, but a database. You'd register the fact that MyNiceCategory is obtained from MyUglyCategory by adding the axiom Makeup in the database, and the database would automatically add the relevant information both to MyNiceCategory and to MyUglyCategory.
 The act of not specifying _base_category_and_axiom is actually a statement:
this category is following the naming standards. This statement is used in the name building for category objects (see _repr_object_names). Getting nice names for category objects is an important feature that we will have to support one way or the other.
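The name building mentioned above can be illustrated with a toy version (not Sage's actual _repr_object_names; the helper name is invented): when a class follows the naming standards, its printed name can be derived by splitting the CamelCase words:

```python
import re

def repr_object_names(class_name):
    """Toy sketch: derive a category's printed name from a
    standards-following class name by splitting CamelCase words
    and lowercasing them."""
    words = re.findall(r'[A-Z][a-z]*', class_name)
    return " ".join(w.lower() for w in words)

print(repr_object_names("FiniteDimensionalAlgebras"))
# finite dimensional algebras
```

This is why declaring _base_category_and_axiom explicitly amounts to declaring "my name does not follow the convention": the nice printed name can no longer be derived mechanically.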
+1. I'd find it annoying to override _repr_object_names whenever implementing a new category.
Altogether, I really don't like adding this redundant information everywhere. It feels to me like spreading dirt over my carefully crafted idioms :)
I agree that it would be annoying to add the same info in two places.
If instead you have a protocol in mind that avoids this redundant information without doing name mangling and while leaving the same flexibility in terms of code organization (in particular supporting implementing a category with axioms either as a nested class or in a separate file, typically lazy imported), I am all ears!
Let's try...
Consider the DivisionRings().Finite() == Fields().Finite() == FiniteFields() example. My Sage version is on a different branch now, but I hope it is correct that FiniteFields._base_category_class_and_axiom == (Fields, 'Finite'). Hence, FiniteFields() knows one possible way of being constructed. The other possible way is encoded in DivisionRings.Finite_extra_super_categories. In addition to that, we have Fields.Finite = FiniteFields (lazily imported).

It seems evident to me that each category-with-axiom should know one (namely the default) way of construction, as encoded in _base_category_class_and_axiom, either hardcoded or encoded in the class name. In the example, mathematics forces us to encode a second way of construction, namely DivisionRings.Finite_extra_super_categories. However, why should we additionally hardcode Fields.Finite = FiniteFields, as this information is already encoded in FiniteFields?
Couldn't a database do the job? If I am not mistaken, there are databases that can not only store items (here: category classes), but also relations between items (here: construction of a category by applying an axiom). Hence, the database knows the default construction of FiniteFields() (in particular, it knows that it involves Fields()), and it knows an additional construction of FiniteFields() (namely starting with DivisionRings()).
Registering default and additional constructions into the database would be your "single point of truth".
It would be possible to have the database act as a metaclass for CategoryWithAxiom. Hence, during creation of a category class cls, the database would be called, and could certainly be made to look up
- the default construction of cls, registering it in cls._base_category_class_and_axiom
- any known construction starting with cls, registering it in lazily imported class attributes (similar to Fields.Finite and some attribute of DivisionRings).
To summarise my suggestion (again: for the future, not for now):
- Have a database that acts as metaclass of CategoryWithAxiom.
- As a single point of truth, constructions are registered in the database.
- Of course, the database is stored, and loaded at startup of Sage. Note that the database knows where to find the category classes; it is not the case that loading the database implies loading all possible category classes during startup!
- When being asked to return a category class cls, the metaclass (= the database) would handle the incoming and outgoing constructions of cls lazily.
- It is possible to register further constructions into the database in an interactive session.
So, we'd have flexibility/interactivity, no name mangling, lazy imports, and a single point of truth.
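The registry idea sketched in the bullets above could look roughly like this. All class and method names here are invented for illustration; this is not actual Sage code, just a minimal single-point-of-truth table answering both directions of each link:

```python
# Toy registry: each (base, axiom) -> result construction is registered
# once, and both the "outgoing" link (base.Axiom) and the "incoming"
# link (result's default construction) are answered from the same table.
class AxiomRegistry:
    def __init__(self):
        self._by_base = {}     # (base_name, axiom) -> result_name
        self._by_result = {}   # result_name -> (base_name, axiom)

    def register(self, base, axiom, result):
        self._by_base[(base, axiom)] = result
        # the first registered construction counts as the default one
        self._by_result.setdefault(result, (base, axiom))

    def apply_axiom(self, base, axiom):        # outgoing link
        return self._by_base[(base, axiom)]

    def default_construction(self, result):    # incoming link
        return self._by_result[result]

reg = AxiomRegistry()
reg.register("Fields", "Finite", "FiniteFields")          # default construction
reg.register("DivisionRings", "Finite", "FiniteFields")   # extra construction

print(reg.apply_axiom("DivisionRings", "Finite"))   # FiniteFields
print(reg.default_construction("FiniteFields"))     # ('Fields', 'Finite')
```

The point of the sketch is that neither end of a link stores the link itself; both lookups read the one registered fact.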
comment:333 in reply to: ↑ 331 Changed 4 years ago by
Replying to nbruin:
Right. I was expecting something like that. The fact that each has to refer to the other is suspicious to me.
+1.
What is the scenario that makes this absolutely necessary to be there right from the start? Wouldn't it be possible for there to be only one link? Perhaps when that link is actually exercised the other one can be put in place.
You cannot know which end of the link will be called first. The user could first do FinitePermutationGroups(), or PermutationGroups().Finite(). You have no way to know what will happen first. Of course, you can make it so that starting with one end of the link will make the other end of the link work. But you don't know at which end you start and thus need to implement both directions of the link.
... unless you forget to think about the ends of the link, and start thinking about the link itself, and make it so that any attempt to access either end of the link will use the information that is stored in the link (see the suggestion in my previous post).
My default reaction would be that FiniteDimensionalAlgebras needs to know about Algebras because it needs to inherit the properties of Algebras, but is it absolutely necessary for Algebras to know about a FiniteDimensional version, even before it is instantiated?
I think so.
My default would be that FiniteDimensionalAlgebras only registers itself with Algebras when it gets instantiated, if that is necessary at all. That requires finding a solution for Algebras().FiniteDimensional() (is that how you call it?) when that happens before FiniteDimensionalAlgebras gets imported. Does that happen at all?
Sure. This is the default scenario.
comment:334 in reply to: ↑ 325 Changed 4 years ago by
Replying to darij:
Also, I suspect this to be a typo:
Also, I suspect this to be a typo:

    + class Unital(CategoryWithAxiom):
    +
    +     class SubcategoryMethods:
    +
    +         @cached_method
    +         def Inverse(self):
    +             r"""
    +             Returns the full subcategory of the unital objects of ``self``.
    +
    +             EXAMPLES::
    +
    +                 sage: Magmas().Unital().Inverse()
    +                 Category of inverse unital magmas
    +                 sage: Monoids().Inverse()
    +                 Category of groups

Should be the full subcategory of the *inverse*, not the unital, objects of self.
Thanks for spotting this. Fixed and pushed!
comment:335 Changed 4 years ago by
 Commit changed from 478de48553d203516cddb47e0cb89c34ccc210ee to dbb17b11bb9e8f94b5d9d3424cd34c5efc82564c
Branch pushed to git repo; I updated commit sha1. New commits:
dbb17b1  Fixed typo and improved documentation for Magmas.Unital.Inverse

comment:336 in reply to: ↑ 324 Changed 4 years ago by
Hi Darij,
I know this is not a big issue since the dual() of an algebra *is* a coalgebra in all cases in which dual() is implemented (not least because in the infinite-dimensional cases it usually means the graded dual). But I'm still unhappy with the docstring lying in my face. Can anyone write a reasonably worded .. WARNING about this dual() not being the actual vector-space dual?
Yes, we need to clean up the distinction between dual and graded dual; this is not completely obvious to set the things up so that we can still share some code between the two. But this issue predates this ticket: the code and documentation about dual objects is just being moved around. Please create a separate ticket about this!
Cheers,
Nicolas
comment:337 in reply to: ↑ 331 Changed 4 years ago by
Replying to nbruin:
Replying to nthiery: Right. I was expecting something like that. The fact that each has to refer to the other is suspicious to me. What is the scenario that makes this absolutely necessary to be there right from the start? Wouldn't it be possible for there to be only one link? Perhaps when that link is actually exercised the other one can be put in place.
My default reaction would be that FiniteDimensionalAlgebras needs to know about Algebras because it needs to inherit the properties of Algebras, but is it absolutely necessary for Algebras to know about a FiniteDimensional version, even before it is instantiated? In general I expect the LazyImport to be a bad sign. LazyImports have trouble getting resolved in the first place. In fact, in the traceback that Volker produced, you can see there is a LazyImport involved in the deep recursion, so I suspect that the LazyImport in this case indeed does not get cleared properly.
My default would be that FiniteDimensionalAlgebras only registers itself with Algebras when it gets instantiated, if that is necessary at all. That requires finding a solution for Algebras().FiniteDimensional() (is that how you call it?) when that happens before FiniteDimensionalAlgebras gets imported. Does that happen at all? If so, what's the scenario? Perhaps we can solve that scenario.
As pointed out by Simon, Algebras().FiniteDimensional() is usually
the default scenario. And it was an important design goal for me that
importing Algebras did not trigger the import of all its axiom
subcategories, because there can be many. For example, with my
upcoming patch on semigroups, I would not want that importing
semigroups (which is done at startup time) would trigger the import of
the categories of respectively LTrivial, RTrivial, JTrivial, DTrivial,
HTrivial, and Band semigroups, and their finite variants, since those
are only relevant to a small public.
The reverse scenario (where FiniteDimensionalAlgebras is called first) is not as important but still is a nice and natural feature. One would not usually think of using DivisionRings().Commutative() for Fields() :)
We could think of FiniteDimensionalAlgebras triggering the import of Algebras, with some magic to put in place the link in FiniteDimensionalAlgebras from that in Algebras. But it sounds to me that this logic would be somewhat more complicated (searching through the code to get the link) and not necessarily more robust than the current name mangling.
Cheers,
Nicolas
comment:338 in reply to: ↑ 332 Changed 4 years ago by
Replying to SimonKing:
+1. I think it is important that the consistency tests take place, and I think they guarantee robustness of the code.
I can't check the code right now; but didn't one of your last commits remove the "assert" statement that has originally triggered the recursion? I think after my commit, the assertion would not trigger a recursion, and it would perhaps be better to keep it in, unless you can point out that an equivalent consistency check is happening anyway.
My commit removed the assertion enforcing that the axiom category class itself had to be in a standard location. On the other hand, the assertion about the consistency of links is still there (line 262).
Cheers,
Nicolas
comment:339 Changed 4 years ago by
For the record: all long tests passed for me.
comment:340 followups: ↓ 344 ↓ 347 Changed 4 years ago by
I have to think about your explanation. I have a hunch there's a problem with it but as long as I cannot point it out explicitly I cannot really object.
Independently, the lazy import indeed doesn't seem to clear properly, as was indicated by the tracebacks above already. With

    diff --git a/src/sage/misc/lazy_import.pyx b/src/sage/misc/lazy_import.pyx
    index 051a99b..2b96582 100644
    --- a/src/sage/misc/lazy_import.pyx
    +++ b/src/sage/misc/lazy_import.pyx
    @@ cdef class LazyImport(object):
             documentation of :meth:`_get_object` for an explanation of
             this.
             """
    +        print "lazy_import.__get__(%s,%s,%s)"%(self, instance, owner)
             obj = self._get_object(owner=owner)
             if hasattr(obj, "__get__"):
                 return obj.__get__(instance, owner)
I get:
    sage: Algebras(GF(13))
    lazy_import.__get__(<class 'sage.categories.associative_algebras.AssociativeAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.algebras.Algebras'>,None,<class 'sage.categories.associative_algebras.AssociativeAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.unital_algebras.UnitalAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    Category of algebras over Finite Field of size 13
    sage: Algebras(GF(5))
    lazy_import.__get__(<class 'sage.categories.associative_algebras.AssociativeAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.algebras.Algebras'>,None,<class 'sage.categories.associative_algebras.AssociativeAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.unital_algebras.UnitalAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    Category of algebras over Finite Field of size 5
    sage: Algebras(GF(7))
    lazy_import.__get__(<class 'sage.categories.associative_algebras.AssociativeAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.algebras.Algebras'>,None,<class 'sage.categories.associative_algebras.AssociativeAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.unital_algebras.UnitalAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    Category of algebras over Finite Field of size 7
After the first call, all the lazy importing should have happened already, so subsequent invocations shouldn't have a lazy_import in between any more. I suspect that this happens because there is a lazy import object somewhere that doesn't get replaced by the reference to the actual object once it gets loaded. This is a known issue: lazy_importing objects from modules doesn't actually work very well (lazy importing modules and then referring to an object in the module works better, if I'm not mistaken).
So, you should probably not use lazy import for this but rather do it manually: just keep the string first and do an actual import, replacing the string, once you really need the object.
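Nils's suggestion (keep the strings, import on first access, and replace the placeholder) can be sketched with a small descriptor. This is toy code with invented names, and collections.Counter merely stands in for a lazily imported category class:

```python
import importlib

class SelfReplacingImport(object):
    """Toy placeholder: stores only module/name strings; on first
    attribute access it imports the object and overwrites itself in the
    owner class, so later lookups hit the real object directly."""
    def __init__(self, module, name, as_name):
        self.module, self.name, self.as_name = module, name, as_name

    def __get__(self, instance, owner):
        obj = getattr(importlib.import_module(self.module), self.name)
        setattr(owner, self.as_name, obj)   # replace ourselves for good
        return obj

class Magmas(object):
    # 'Counter' stands in for a lazily imported subcategory class
    Finite = SelfReplacingImport("collections", "Counter", "Finite")

Magmas.Finite                                    # triggers the import once
print(Magmas.__dict__["Finite"] is Magmas.Finite)  # True: placeholder is gone
```

The crucial step is that the replacement is keyed by the attribute name under which the placeholder lives, not by the imported object's own name.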
comment:341 followup: ↓ 342 Changed 4 years ago by
Helloooo guys !
I was reading the modifications that this branch makes to code I care for, and I read the following in posets.py
    sage: Q = Poset(DiGraph({'a':['b'],'b':['c'],'c':['d']}), facade = True)
    sage: Q.category()
    - Category of facade finite posets
    + Join of Category of finite posets
    +     and Category of finite enumerated sets
    +     and Category of facade sets
Well. Isn't a finite poset always an enumerated set too ? And so why do the two appear ? Besides, if I can suspect what the mathematical meaning of "posets" and "enumerated sets" is, is "facade sets" really a mathematical category, or just a programming trick ? Thus should it be a category ?
It would be cool (=necessary) to have an index somewhere of all categories used in Sage (let's say in our own code/doctests) with an explanation of what they mean. Especially if their meaning is non-mathematical, and so possibly not documented in textbooks.
Nathann
comment:342 in reply to: ↑ 341 ; followup: ↓ 343 Changed 4 years ago by
Hi Nathann,
Replying to ncohen:
I was reading the modifications that this branch makes to code I care for, and I read the following in posets.py

    sage: Q = Poset(DiGraph({'a':['b'],'b':['c'],'c':['d']}), facade = True)
    sage: Q.category()
    - Category of facade finite posets
    + Join of Category of finite posets
    +     and Category of finite enumerated sets
    +     and Category of facade sets
Well. Isn't a finite poset always an enumerated set too?
With our current implementation FinitePoset, yes. But in general we will eventually want to implement other posets where you can do poset operations on the elements without necessarily having an algorithm to generate them all.
In practice, for FinitePoset only the output of the category changes; the categories and thus the features are the same as before.
is "facade sets" really a mathematical category, or just a programming trick? Thus should it be a category ?
It's indeed technical. But that's ok: the category infrastructure is not necessarily only about mathematical categories. In fact other systems like Axiom go much further into using technical categories. See: http://www.axiom-developer.org/axiom-website/bookvol10.2full.html
It would be cool (=necessary) to have an index somewhere of all categories used in Sage (let's say in our own code/doctests) with an explanation of what they mean. Especially if their meaning is nonmathematical, and so possibly not documented in textbooks.
There it is, and there is even documentation about facades :)
http://www.sagemath.org/doc/reference/categories/index.html
Cheers,
Nicolas
comment:343 in reply to: ↑ 342 Changed 4 years ago by
Yooooooo !!
With our current implementation FinitePoset, yes. But in general we will eventually want to implement other posets where you can do poset operations on the elements without necessarily having an algorithm to generate them all.
Oh. Okayyyyyyyyyyyy.
It's indeed technical. But that's ok: the category infrastructure is not necessarily only about mathematical categories. In fact other systems like Axiom go much further into using technical categories. See: http://www.axiom-developer.org/axiom-website/bookvol10.2full.html
There it is, and there is even documentation about facades :)
Excellent ! THaaaaaaaaaaanks !!
Nathann
comment:344 in reply to: ↑ 340 Changed 4 years ago by
Replying to nbruin:
Independently, the lazy import indeed doesn't seem to clear properly, as was indicated by the tracebacks above already.
I get:

    sage: Algebras(GF(13))
    lazy_import.__get__(<class 'sage.categories.associative_algebras.AssociativeAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.algebras.Algebras'>,None,<class 'sage.categories.associative_algebras.AssociativeAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.unital_algebras.UnitalAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    Category of algebras over Finite Field of size 13
    sage: Algebras(GF(5))
    lazy_import.__get__(<class 'sage.categories.associative_algebras.AssociativeAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.algebras.Algebras'>,None,<class 'sage.categories.associative_algebras.AssociativeAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.unital_algebras.UnitalAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    Category of algebras over Finite Field of size 5
    sage: Algebras(GF(7))
    lazy_import.__get__(<class 'sage.categories.associative_algebras.AssociativeAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.algebras.Algebras'>,None,<class 'sage.categories.associative_algebras.AssociativeAlgebras_with_category'>)
    lazy_import.__get__(<class 'sage.categories.unital_algebras.UnitalAlgebras'>,None,<class 'sage.categories.magmatic_algebras.MagmaticAlgebras_with_category'>)
    Category of algebras over Finite Field of size 7
Obvious solution: Replace

    Finite = LazyImport('...')

by

    @lazy_class_attribute
    def Finite(cls):
        from ... import ...
        return ...
Then, the import would happen only when the lazy class attribute is invoked. Afterwards, the lazy class attribute is replaced by an actual class attribute, hence, the import won't happen again.
comment:345 Changed 4 years ago by
 Commit changed from dbb17b11bb9e8f94b5d9d3424cd34c5efc82564c to 48dc0c06e567d07a70f1b45018f1e2a02cd434e7
Branch pushed to git repo; I updated commit sha1. New commits:
48dc0c0  Category with axioms: workaround limitation in lazy import to avoid lazy reimporting over and over

comment:346 Changed 4 years ago by
I don't agree with the solution proposed in the previous commit. Please revert it.
Since these lazy imports happen quite often, I think it does not suffice at all to just fix one of these lazy imports.
comment:347 in reply to: ↑ 340 Changed 4 years ago by
Hi Nils!
Replying to nbruin:
I have to think about your explanation. I have a hunch there's a problem with it but as long as I cannot point it out explicitly I cannot really object.
Ok; thanks for your thinking about it, and let us know soon if you pinpoint a problem!
Independently, the lazy import indeed doesn't seem to clear properly, as was indicated by the tracebacks above already.
Oh, right, it's good you are raising this issue again.
After the first call, all the lazy importing should have happened already, so subsequent invocations shouldn't have a lazy_import in between any more. I suspect that this happens because there is a lazy import object somewhere that doesn't get replaced by the reference to the actual object once it gets loaded. This is a known issue: lazy_importing objects from modules doesn't actually work very well (lazy importing modules and then referring to an object in the module works better, if I'm not mistaken).
Yeah, I also consider this as a shortcoming of lazy import and created #15648 for it.
So, you should probably not use lazy import for this but rather do it manually: just keep the string first and do an actual import, replacing the string, once you really need the object.
I am not sure there is an easy solution for #15648. Luckily, there is an easy workaround in the case at hand, which I have implemented in the commit I just pushed. I checked, and now the files are lazily imported only once.
So I think we can stick to the standard lazy import idiom which I very much like (concise, explicit, and the reader does not need to learn a new idiom).
What do you think?
Cheers,
Nicolas
comment:348 Changed 4 years ago by
I suggest to create a combination of lazy class attribute and __import__. Such as:

    def imported_lazy_class_attribute(module_name, cls_name):
        return lazy_class_attribute(lambda cls: getattr(__import__(module_name, {}, {}, [cls_name]), cls_name))
Proof of concept:

    sage: def imported_lazy_class_attribute(module_name, cls_name):
    ....:     return lazy_class_attribute(lambda cls: getattr(__import__(module_name, {}, {}, [cls_name]), cls_name))
    ....:
    sage: class Test(object):
    ....:     Finite = imported_lazy_class_attribute('sage.categories.finite_permutation_groups', 'FinitePermutationGroups')
    ....:
    sage: Test.Finite
    <class 'sage.categories.finite_permutation_groups.FinitePermutationGroups'>

So, we have a simple wrapper that can be used to replace LazyImport.
comment:349 Changed 4 years ago by
If you don't mind, I'll create a commit for my suggestion. But it will take until this evening.
comment:350 Changed 4 years ago by
Hmmmm. Perhaps you are right, and one can make LazyImport behave as a lazy attribute (lazy class attribute or lazy instance attribute, in fact!). Then, we wouldn't need to change the code here, but would instead just change a few lines in LazyImport.__get__. Hence, I'll now focus on #15648.
comment:351 followup: ↓ 355 Changed 4 years ago by
Hmmm. I think I see the problem: #15648 is probably invalid, as LazyImport already puts an imported object into the class' dict (provided it is called on a class).

The problem is: The attribute name coincides with the name of the imported object (here: 'FinitePermutationGroups'), but it should be assigned to the attribute 'Finite'. Hence, I suppose it is enough to provide as_name="Finite" to the lazy import.
comment:352 Changed 4 years ago by
    sage: from sage.misc.lazy_import import LazyImport
    sage: class A:
    ....:     Associative = LazyImport('sage.categories.magmas', 'Magmas', 'Associative')
    ....:
    sage: A.Associative
    <class 'sage.categories.magmas.Magmas'>
    sage: A.__dict__['Associative']
    <class 'sage.categories.magmas.Magmas'>

Hence, we won't need #15648.
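The effect of the as_name argument (which, per the session above, LazyImport takes as its third argument) can be mimicked in plain Python. LazyPlaceholder below is an invented toy class, and collections.OrderedDict merely stands in for a category class; the point is that without as_name the object is cached under the wrong key, so the placeholder keeps re-importing:

```python
import importlib

class LazyPlaceholder(object):
    """Toy lazy importer: caches the imported object on the owner class
    under self.as_name.  If as_name differs from the attribute the
    placeholder actually lives under, the placeholder never gets
    replaced and every access re-imports."""
    def __init__(self, module, name, as_name=None):
        self.module, self.name = module, name
        self.as_name = as_name if as_name is not None else name
        self.import_count = 0

    def __get__(self, instance, owner):
        self.import_count += 1
        obj = getattr(importlib.import_module(self.module), self.name)
        setattr(owner, self.as_name, obj)   # cache under as_name
        return obj

class WithoutAsName(object):
    Associative = LazyPlaceholder("collections", "OrderedDict")

class WithAsName(object):
    Associative = LazyPlaceholder("collections", "OrderedDict", "Associative")

WithoutAsName.Associative; WithoutAsName.Associative
WithAsName.Associative; WithAsName.Associative
print(WithoutAsName.__dict__["Associative"].import_count)  # 2: re-imported
print(WithAsName.__dict__["Associative"])  # the real class replaced the placeholder
```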
comment:353 Changed 4 years ago by
If you don't mind: I am now trying to create a new commit that uses the as_name argument, to avoid multiple imports.
comment:354 followup: ↓ 356 Changed 4 years ago by
Sorry for the delay, just coming out of four hours of classes ...
I now agree that #15648 is not a bug, but rather a dream feature that might not be possible to implement, and that can be worked around easily by using as_name. Nice finding Simon!
Back to the issue here. Having to specify as_name works but is a bit redundant. In particular, it's likely that one will occasionally forget to put it in, or put it in with a typo, and then the induced reiterating lazy import will probably go unnoticed.
That is, unless we add an assertion test about this in __classget__; but then it's barely different from the current workaround I implemented.
Altogether, I am hesitant in the trade-off between adding four admittedly-not-so-nice lines in one spot, and adding an extra argument in 43 (and increasing) spots spread over the category code. In fact, if it was just for me, you know what my choice would be :)
What do you think?
Cheers,
Nicolas
comment:355 in reply to: ↑ 351 Changed 4 years ago by
Replying to SimonKing:
The problem is: The attribute name coincides with the name of the imported object (here: 'FinitePermutationGroups'), but it should be assigned to the attribute 'Finite'. Hence, I suppose it is enough to provide as_name="Finite" to the lazy import.
You really want to check that this works properly. LazyImport proxies seem deceivingly easy to use but have nasty catches. For instance, if such a proxy gets returned by a caching routine then the proxy gets nailed in the cache, out of reach of the replacement code.
comment:356 in reply to: ↑ 354 Changed 4 years ago by
Replying to nthiery:
Altogether, I am hesitant in the trade off between adding four admittedlynotsonice lines in one spot, and adding an extra argument in 43 (and increasing) spots spread over the category code. In fact, if it was just for me, you know what my choice would be :)
Then what about my other suggestion: Make a shortcut (say, LazilyImportedClassAttribute) that guarantees that stuff is put into the class' dict (see comment:348). Then, we would change the 43 places now, and have something that is easier to maintain, since we don't need to specify as_name.
Note that there would be one difference: Doing LazyImport will bind the imported object to the class resp. the instance, but my suggestion from comment:348 just puts stuff into an attribute, without binding it (i.e., without calling the imported object's __get__). This may be what we want, or perhaps it isn't? Not totally clear to me.
comment:357 followup: ↓ 358 Changed 4 years ago by
Oops, forget my suggestion. As it turns out, one has
    sage: def imported_lazy_class_attribute(module_name, cls_name):
    ....:     return lazy_class_attribute(lambda cls: getattr(__import__(module_name, {}, {}, [cls_name]), cls_name))
    ....:
    sage: class Test(object):
    ....:     Finite = imported_lazy_class_attribute('sage.categories.finite_permutation_groups', 'FinitePermutationGroups')
    ....:
    sage: Test.Finite
    <class 'sage.categories.finite_permutation_groups.FinitePermutationGroups'>
    sage: Test.__dict__['Finite']
    <sage.misc.lazy_attribute.lazy_class_attribute at 0xc29702c>
So, what I suggested in comment:348 is not a solution.
I think we should solve the problem now. Currently, I am running tests for the "add as_name in 43 spots" approach.
It would be a good idea (for maintainability) to have an assertion that the lazily imported object in fact ends up in the class' dict.
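Such an assertion could be sketched generically like this (an invented helper in plain Python, not tied to Sage's LazyImport): after one access, the class' __dict__ must hold the very object the attribute returns, not a leftover placeholder.

```python
def assert_import_resolved(cls, attr_name):
    """Toy consistency check: accessing cls.attr_name once must leave
    the real object in cls.__dict__[attr_name]; a stale lazy
    placeholder there would mean every access re-imports."""
    obj = getattr(cls, attr_name)            # trigger any lazy import
    cached = cls.__dict__.get(attr_name)
    assert cached is obj, (
        "%s.%s still holds %r instead of the imported object"
        % (cls.__name__, attr_name, cached))

class Good(object):
    pass

Good.Finite = int        # an already-resolved attribute passes the check
assert_import_resolved(Good, "Finite")
print("ok")
```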
But where could such a test take place? What part of the code knows, e.g., that Groups are known under the name Inverse to Monoids?
comment:358 in reply to: ↑ 357 ; followup: ↓ 359 Changed 4 years ago by
Replying to SimonKing:
I think we should solve the problem now. Currently, I am running tests for the "add as_name in 43 spots" approach. It would be a good idea (for maintainability) to have an assertion that the lazily imported object in fact ends up in the class' dict.

But where could such a test take place? What part of the code knows, e.g., that Groups are known under the name Inverse to Monoids?
A natural spot is where I introduced the trick that forces the replacement of the lazy import by the object itself: in CategoryWithAxiom.__classget__.
But then the trick is no more complicated than the assertion test, and the trick by itself solves the problem now. So honestly I don't see why bother :)
Cheers,
Nicolas
comment:359 in reply to: ↑ 358 ; followup: ↓ 360 Changed 4 years ago by
I HATE TRAC!!!
It always keeps jumping to the top of the page while editing. And right now, after typing for, say, 30 minutes, the text got lost, even though I pressed "submit". So, let's use itsalltext.
Replying to nthiery:
But where could such a test take place? What part of the code knows, e.g., that Groups are known under the name Inverse to Monoids?

A natural spot is where I introduced the trick that forces the replacement of the lazy import by the object itself: in CategoryWithAxiom.__classget__.
But then the trick is no more complicated than the assertion test, and the trick by itself solves the problem now.
Aha! I got distracted by the comments that you've put into the code: I thought you only consider the special case of the axiom 'Finite'.
But you use cls._axiom, and this should give the correct attribute name.

Or does it? I think you'll run into problems, as soon as you have categories C, B1, B2, such that C == B1.Axiom1() == B2.Axiom2().

Indeed, you must decide what value to assign to C._axiom. Say, you decide C._axiom == 'Axiom1'. Then, with your trick, calling B2.Axiom2 would put C into B2.__dict__['Axiom1']. And that's the wrong attribute, it should be 'Axiom2'.
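Simon's scenario can be mimicked with a toy descriptor that caches under the result's single _axiom attribute. All classes and names here are invented stand-ins, not Sage code; the sketch only shows where the caching key goes wrong:

```python
class Link(object):
    """Toy placeholder: on access, caches the result class on the owner
    under the result's recorded _axiom -- the suspect step."""
    def __init__(self, result):
        self.result = result

    def __get__(self, instance, owner):
        setattr(owner, self.result._axiom, self.result)
        return self.result

class C(object):
    _axiom = "Axiom1"        # C records only one axiom

class B2(object):
    Axiom2 = Link(C)         # but C is also reachable via another axiom

B2.Axiom2                                        # access through the *other* axiom
print("Axiom1" in B2.__dict__)                   # True: cached under the wrong key
print(isinstance(B2.__dict__["Axiom2"], Link))   # True: placeholder still in place
```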
More concretely:
    sage: Rings().Commutative().Finite().Division()
    Category of finite fields
    sage: DivisionRings().Finite()
    Category of finite fields
    sage: Rings().Commutative().Finite().Division()._axiom
    'Finite'
Luckily, Rings().Commutative().Finite() is a join category. But assume, in a different example, you would build it not as a join, but using a separate category class CC. Then, you would originally find that CC.Finite() == C() (since C comprises the finiteness axiom). But after doing CC.Division(), the result would be put into CC.__dict__['Finite']. Hence, next time you call CC.Finite(), the result wouldn't be C() but CC.Division().
So honestly I don't see why bother :)
Do you see now? It is sheer luck that you don't run into that kind of problem. So, I'd say we should better be explicit here.
comment:360 in reply to: ↑ 359 ; followup: ↓ 361 Changed 4 years ago by
Replying to SimonKing:
It always keeps jumping to the top of the page while editing. And right now, after typing for, say, 30 minutes, the text got lost, even though I pressed "submit". So, let's use itsalltext.
Yes, pretty annoying! I am also systematically using itsalltext ...
Replying to nthiery: Aha! I got distracted by the comments that you've put into the code: I thought you only consider the special case of the axiom 'Finite'.
Oh, I see! I now understand your comment on trac. Sorry for my misleading comment in the code.
But you use cls._axiom, and this should give the correct attribute name. Or does it? I think you'll run into problems, as soon as you have categories C, B1, B2, such that C == B1.Axiom1() == B2.Axiom2().
Let me write instead C(), B1() and B2() for the categories, and C, B1, and B2 for their respective classes.
You would indeed run into a problem if you had B1.Axiom1 and B2.Axiom2 point to the same class C. But that's forbidden: the classes are to be organized in a tree. In particular, the class of a category with axiom has a unique base_category class and axiom, which is that given in the code by the link "X.Axiom = Y". In principle there should be appropriate barking if this specification is violated.
Since the trick in classget works at the level of the classes, before even the category is created, we are safe.
For the record, at the level of categories, here is what happens. Imagine C=B1.Axiom1. Even if you get also C() as the end result of calling B2.Axiom2(), C() is still constructed internally as B1().Axiom1(), and the base_category_class and axiom are set accordingly:
sage: B1 = Magmas.Associative
sage: B2 = Magmas.Unital
sage: C = B1.Unital
sage: B1()
Category of semigroups
sage: B2()
Category of unital magmas
sage: C()
Category of monoids
sage: B2().Associative() is B1().Unital()
True
sage: B1().Unital()._base_category_class_and_axiom
[sage.categories.semigroups.Semigroups, 'Unital']
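For intuition, the invariant being described can be sketched in plain Python (no Sage needed; AxiomLink and the class names are hypothetical toys, not Sage API): each category-with-axiom class records exactly one blessed (base_category_class, axiom) pair, and registering a second edge to the same class is refused, much like the barking described above.

```python
class AxiomLink:
    """Toy model of the blessed (base_category_class, axiom) link.

    A wrapped class gets exactly one such link; a second edge
    pointing at the same class raises, and class-level access
    checks the base class, mimicking the __classget__ assertion.
    """
    registry = {}  # wrapped class -> (base class, axiom name)

    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __set_name__(self, owner, name):
        prior = AxiomLink.registry.get(self.wrapped)
        if prior is not None and prior != (owner, name):
            raise RuntimeError("second edge to %r" % (self.wrapped,))
        AxiomLink.registry[self.wrapped] = (owner, name)

    def __get__(self, instance, owner):
        base, _axiom = AxiomLink.registry[self.wrapped]
        assert issubclass(owner, base), "base category class mismatch"
        return self.wrapped


class Monoids:
    pass

class Semigroups:
    Unital = AxiomLink(Monoids)   # the one blessed edge

# Access through the blessed base class resolves to the wrapped class:
assert Semigroups.Unital is Monoids

# Registering a second edge to the same class is rejected:
try:
    class UnitalMagmas:
        Associative = AxiomLink(Monoids)
    rejected = False
except RuntimeError:
    rejected = True
assert rejected
```

The point of the sketch is only that the uniqueness of the link is enforced at class-creation time, before any category instance exists.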
Do you see now? It is sheer luck that you don't run into that kind of problems. So, I'd say we should better be explicit here.
No sheer luck. Just a consequence of the specifications :)
Cheers,
Nicolas
comment:361 in reply to: ↑ 360 Changed 4 years ago by
Replying to nthiery:
You would indeed run into a problem if you had B1.Axiom1 and B2.Axiom2 point to the same class C. But that's forbidden:
How? And don't you think it is easier to enforce that people use the as_name argument of LazyAttribute than to have them follow this strange "no-no"?
the classes are to be organized in a tree. In particular, the class of a category with axiom has a unique base_category class and axiom, which is that given in the code by the link "X.Axiom = Y". In principle there should be appropriate barking if this specification is violated.
It could very well be that mathematically you have Z().OtherAxiom() == X().Axiom() == Y(). So, why do you forbid supporting OtherAxiom on the class level? And how would you enforce it?
Since the trick in classget works at the level of the classes, before even the category is created, we are safe.
I was talking about classes, and we are only safe if everybody follows the convention that you just formulated, and that I don't think is clearly documented.
sage: B1 = Magmas.Associative
sage: B2 = Magmas.Unital
sage: C = B1.Unital
sage: B1()
Category of semigroups
sage: B2()
Category of unital magmas
sage: C()
Category of monoids
sage: B2().Associative() is B1().Unital()
True
sage: B1().Unital()._base_category_class_and_axiom
[sage.categories.semigroups.Semigroups, 'Unital']
Mathematically, it should be the case that B2.Associative is the same as B1.Unital. On the instance level, it is. But not on the class level:
sage: B2().Associative()
Category of monoids
sage: B2.Associative
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-179422d552e4> in <module>()
----> 1 B2.Associative
AttributeError: type object 'Magmas.Unital' has no attribute 'Associative'
There are associative unital magmas, in the same way as there are unital associative magmas. Apparently the categories know it, even though the classes of the category don't.
It seems quite likely to me that a mathematician would think: "Well, of course the category of unital magmas should also support the axiom of associativity, so, let's provide it on the level of classes." But then, your trick would result in the problem (putting a class into the wrong attribute of the base class) that I sketched in my previous post.
Do you see now? It is sheer luck that you don't run into that kind of problems. So, I'd say we should better be explicit here.
No sheer luck. Just a consequence of the specifications :)
The more I think of it, the less I like the convention that some mathematical facts can only be implemented for instances of a class, but not for the class itself. And I don't see how you enforce or even just encourage this convention.
comment:362 Changed 4 years ago by
I see. Continuing the example from above:
sage: B2.Associative = sage.misc.lazy_import.LazyImport('sage.categories.monoids', 'Monoids')
sage: B2.Associative()
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-5-2f486cd5dd51> in <module>()
----> 1 B2.Associative()
/home/king/Sage/git/sage/local/lib/python2.7/site-packages/sage/misc/lazy_import.so in sage.misc.lazy_import.LazyImport.__get__ (sage/misc/lazy_import.c:3371)()
/home/king/Sage/git/sage/local/lib/python2.7/site-packages/sage/misc/classcall_metaclass.so in sage.misc.classcall_metaclass.ClasscallMetaclass.__get__ (sage/misc/classcall_metaclass.c:1350)()
/home/king/Sage/git/sage/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.pyc in __classget__(cls, base_category, base_category_class)
    549         else:
    550             assert cls._base_category_class_and_axiom[0] is base_category_class, \
--> 551                 "base category class for %s mismatch; expected %s, got %s"%(cls, cls._base_category_class_and_axiom[0], base_category_class)
    552         if base_category is None:
    553             return cls
AssertionError: base category class for <class 'sage.categories.monoids.Monoids'> mismatch; expected <class 'sage.categories.semigroups.Semigroups'>, got <class 'sage.categories.magmas.Magmas.Unital'>
So, this is how you enforce it.
But then: do you really expect that in the long run all developers contributing to the category framework will be happy to do

    class SubcategoryMethods:
        @cached_method
        def Associative(self):

on some categories, and

    Associative = LazyImport('...', '...')

on exactly one category?
Granted, if one uses a database as a metaclass, this could be automated. Well, once we got this ticket merged, I will start to do experiments with that idea...
comment:363 Changed 4 years ago by
Nicolas, I think more things could potentially go wrong. Assume that someone has a category class CC and two axioms Axiom1 and Axiom2. As it happens, CC().Axiom1() and CC().Axiom2() return two different categories R1 and R2, but unfortunately R1._axiom == 'Axiom1' == R2._axiom. It seems to me that in this situation it would be possible to screw up (I am trying to construct an example now), if you just test whether CC.__dict__[R1._axiom] is a lazy import (and then override it).

Could it not happen that CC.__dict__[R1._axiom] happens to be R2? I think this case should be barked at as well.
I believe the following should happen:

- If CC.__dict__[R1._axiom] is a LazyImport, then presumably the LazyImport had the wrong (or no) as_name. Rather than blindly putting R1 into it, Sage should better raise an error and ask to provide the correct name.
- If CC.__dict__[R1._axiom] is not a LazyImport, then it must be tested whether it coincides with R1. If it doesn't, then certainly there is something wrong, which should give rise to an error, too.
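The two rules could be sketched in plain Python along these lines (toy code; LazyImportStub and check_axiom_binding are hypothetical stand-ins, not the actual Sage API):

```python
class LazyImportStub:
    """Toy stand-in for sage.misc.lazy_import.LazyImport; `as_name`
    is the attribute name the import was declared under (possibly
    wrong or missing)."""
    def __init__(self, module, name, as_name=None):
        self.module, self.name, self.as_name = module, name, as_name


def check_axiom_binding(base_category_class, axiom, result_class):
    """Toy version of the two checks proposed above."""
    entry = base_category_class.__dict__.get(axiom)
    if isinstance(entry, LazyImportStub):
        # Rule 1: a LazyImport with a wrong (or missing) as_name is an
        # error; ask for the correct name instead of patching it up.
        if entry.as_name != axiom:
            raise AssertionError("lazy import under %r needs as_name=%r"
                                 % (entry.as_name, axiom))
    elif entry is not None and entry is not result_class:
        # Rule 2: a non-lazy entry must coincide with the result class.
        raise AssertionError("%s.%s is bound to the wrong class"
                             % (base_category_class.__name__, axiom))


class FiniteSets:
    pass

class Sets:
    Finite = LazyImportStub('finite_sets', 'FiniteSets', as_name='Finite')

check_axiom_binding(Sets, 'Finite', FiniteSets)   # correct as_name: passes

Sets.BadAxiom = LazyImportStub('finite_sets', 'FiniteSets')  # no as_name
try:
    check_axiom_binding(Sets, 'BadAxiom', FiniteSets)
    caught = False
except AssertionError:
    caught = True
assert caught
```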
comment:364 Changed 4 years ago by
 Commit changed from 48dc0c06e567d07a70f1b45018f1e2a02cd434e7 to 8045aa4a4b7ada735b3eb6055382f9b341a39f1e
Branch pushed to git repo; I updated commit sha1. New commits:
8045aa4  Trac 10963: Fix LazyImport as_name, and make stronger consistency tests

comment:365 Changed 4 years ago by
Nicolas, please see if you can agree with my changes. If you can agree, then in the next step I'd like to try and add some more tests.
First of all, I did not succeed in constructing an example in which the problem sketched above really occurs: All the time there was an error telling me that I did something wrong. This is the good news.
Nonetheless, I think it is better to ask developers to lazy-import with the correct name. Hence, I replaced your workaround by an error explaining what needs to be done (and this is what I want to add a doctest for).
In addition, just to be on the safe side, I test whether the thing found in base_category_class.__dict__[cls._axiom] actually is cls. If it isn't, then there is a problem: either it is a lazy import put under the wrong name, or we are in the scenario sketched above. I think we have good reason to be paranoid, because if a class is put into base_category_class.__dict__ with the wrong key, then I guess we would get crashes that are very difficult to debug.
comment:366 Changed 4 years ago by
OK, I think I've pinned down what bits make me think that the current implementation may be a bit unnatural for (what I understand of) what is being modelled. To me it seems there are some differences between the implementation and the model. That is often necessary in practice, so they may be quite justified. But I see some problems popping up here that may be due to those differences.
First: what is modelled? As I understand this is an acyclic digraph, where the vertices are categories and the edges are labelled with axioms. Furthermore, I think the assumption is that this graph is determined at build/startup time. The modelled graph is supposed to be constant throughout runtime (so Simon's desire to dynamically extend the graph would, strictly speaking, not fit in the model).
There is of course information that is carried with this graph. I'm not sure what exactly the information is, but it seems to be mainly "code" (methods etc.), attached to the vertices.
Instantiating the entire graph at startup is apparently too expensive (I can definitely believe so), so although the modelled graph is constant, we are only keeping part of it in memory, gradually extending it as needed.
It seems that each category is modelled by a "class" (for code-centric information, this seems a reasonable choice). However, inheritance isn't used to express edges (it would not be sufficient anyway, since "subclass of" arrows don't carry labels). Instead, "axioms" are indicated by class attributes on the more general category, where the name of the attribute provides the axiom label and the value is the category obtained by adding the axiom.
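Nils's reading of the model can be restated as a tiny data structure (plain Python, purely illustrative; the category names stand in for actual category classes):

```python
# Toy model: categories are vertices, axioms are edge labels;
# edges[(category, axiom)] -> resulting category.
edges = {
    ("Magmas", "Associative"): "Semigroups",
    ("Magmas", "Unital"): "UnitalMagmas",
    ("Semigroups", "Unital"): "Monoids",
    ("UnitalMagmas", "Associative"): "Monoids",  # two edges, one target
}

def with_axiom(category, axiom):
    """Follow the labelled edge, if any."""
    return edges[(category, axiom)]

# The graph is a DAG, not a tree: Monoids is reachable along two paths.
assert with_axiom("Semigroups", "Unital") == "Monoids"
assert with_axiom("UnitalMagmas", "Associative") == "Monoids"
```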
Potential Problem: Implementing a labelled digraph using dictionaries is a very natural choice, but here we have a bit of a mess: the outgoing edges are mixed into a dictionary that has all kinds of other stuff in it too. For instance, getting a list of which axioms are "implemented" for a category involves iterating over the class.__dict__ and filtering on which attributes are axioms and which are other methods. How do we tell the difference? By name? How about name clashes?
Unusual: It is rare that callables are bound to an attribute that doesn't bear their (unqualified) name. It's not unheard of, though.
The "lazy loading" requirement is implemented by choosing for every category a preferred (base category, axiom) pair through which it can be obtained. The class attribute corresponding to that edge is implemented by a LazyImport link rather than putting the actual class there.
Question: At this point, it seems unnecessary to do this only for a preferred edge. Each LazyImport would need to be called once to resolve, but if the import in question has already happened then the result should just be that the LazyImport removes itself from the story (provided the LazyImport has the right references to take itself out of the relevant dictionary).
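The "removes itself from the story" behaviour can be sketched with a plain-Python placeholder that swaps itself out of the owner class's __dict__ on first access (illustrative only; the real LazyImport is considerably more involved, and collections.OrderedDict merely stands in for a lazily imported category class):

```python
import importlib

class SelfResolvingImport:
    """Placeholder that imports its target on first class-level access
    and then replaces itself in the owner class's __dict__."""
    def __init__(self, module, name):
        self.module, self.name = module, name

    def __set_name__(self, owner, attr):
        # Remember which attribute we were bound under.
        self.attr = attr

    def __get__(self, instance, owner):
        resolved = getattr(importlib.import_module(self.module), self.name)
        # The proxy takes itself out of the relevant dictionary:
        setattr(owner, self.attr, resolved)
        return resolved


class Groups:
    # Stand-in for a lazily imported category-with-axiom class:
    Finite = SelfResolvingImport('collections', 'OrderedDict')

from collections import OrderedDict
assert Groups.Finite is OrderedDict              # resolves on first access
assert Groups.__dict__['Finite'] is OrderedDict  # proxy replaced itself
```

After the first access the placeholder is gone, so no preferred edge needs to be singled out for this mechanism to work.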
It also seems to be useful to have a link from a category to the categories from which it can be obtained. Oddly enough, the system insists on having a single main (category, axiom) pair through which it can be obtained, even though the original graph we're modelling doesn't have any such concept.
Question: Why do we need _base_category_and_axiom at all? It seems very unnatural: there are many ways in which a category can be obtained by adding an extra axiom to another category. Indeed, there are extra_super_categories. Why are they separate? Should we just have a set of (base_category, axiom) pairs?
Question: (for lazy loading) given that categories refer to their super categories, is it indeed the case that loading a category will imply loading the entire "inbound component" (all vertices that have a path to this one)? It is not entirely clear to me how they get that info. Is that a combination of a name mangling search and hardwiring the connections?
Question: Is there always just one axiom to add to go from one category to another? Wouldn't there be useless nodes in the graph then (partial axiom combinations that no one is interested in)?
In short, I think the current problems are mainly coming from an imposed asymmetry: the introduction of a "main" (base_category, axiom) pair. Additionally, the lazy loading issue only becomes complicated because the derived categories insist on knowing what their direct supercategories are. If we didn't need that, then just the lazy imports would do the trick.
If we do need categories to know their supercategories, then there is indeed a problem of how to get that information. Name mangling doesn't do the trick, because that only works for a rooted tree. We need to be able to express an (acyclic) digraph (I don't think we get any mileage out of the acyclic bit).
If you really want a single point of truth, I don't see an alternative to Simon's suggestion of a big list of

    ("base category", "axiom", "resulting category")

triples, where the entries are indeed strings, and a nasty (meta?)metaclass on categories that looks up all the entries in which the base category occurs, to put in the appropriate LazyImport bindings, and a lookup of the resulting category entries to put the links back in the super_category attribute.
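Such a single point of truth might look like the following plain-Python sketch (the table contents and helper names are hypothetical; the real design would presumably hang this off a metaclass):

```python
# One flat table of ("base category", "axiom", "resulting category")
# triples, all strings, as the single source of truth:
AXIOM_TABLE = [
    ("Magmas", "Associative", "Semigroups"),
    ("Magmas", "Unital", "UnitalMagmas"),
    ("Semigroups", "Unital", "Monoids"),
    ("UnitalMagmas", "Associative", "Monoids"),
]

def outgoing_axioms(base):
    """All axioms implemented for `base` (replaces scanning __dict__)."""
    return {axiom for b, axiom, _ in AXIOM_TABLE if b == base}

def super_links(result):
    """All (base, axiom) pairs yielding `result` -- no blessed edge."""
    return {(b, a) for b, a, r in AXIOM_TABLE if r == result}

assert outgoing_axioms("Magmas") == {"Associative", "Unital"}
assert super_links("Monoids") == {("Semigroups", "Unital"),
                                  ("UnitalMagmas", "Associative")}
```

Both directions (outgoing axioms and incoming super links) fall out of the same table, with no need for a preferred pair.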
Another point that's a little dirty presently is that the axiom callables are mixed in with all the other class attributes. That could easily cause name clashes. I would almost say that instead of Algebras(GF(5)).FiniteDimensional() it would be preferable to do Algebras(GF(5)).with_axiom("FiniteDimensional"), since that would give the freedom to use a dedicated dictionary.
comment:367 followup: ↓ 368 Changed 4 years ago by
Given the number of questions popping up recently, I decided this morning that I might as well write the developer's documentation for axioms instead of spreading the information in comments on this trac ticket. Beware: so much more work for you to review :)
I hope it will be convincing about the design decisions and, up to some point, the implementation. It will take me some more time (until tomorrow evening maybe?), so don't start hacking around in the mean time!
I can answer Nils's last comment right now though. I believe that C.FiniteDimensional() is *really* better than C.with_axiom("FiniteDimensional").
Rationale:
 This idiom is short, unambiguous and super expressive. Look at the non trivial examples in the primer and see how painful those would be with the _with_axiom idiom. It is backed up by three years of practical usage, and during that period it has appeared, as far as I know, totally natural to all those who played with it.
 It's nice w.r.t. introspection:
 The axioms of C appear in the tab completion of C. In particular you get an easy access to the list of them.
 introspection on C.FiniteDimensional? gives specific documentation on what each axiom is about.
 It's consistent with what we have been using for years for functorial constructions (Cartesian products, Graded, ...)
 Categories have few operations. So the odds of a name clash are low.
 You can *also* use C._with_axiom(...) if you feel inclined to, for example if the name of the axiom you are interested in is stored in a variable. I am happy to rename C._with_axiom to C.with_axiom if you believe it deserves to be public.
 This idiom does not depend on the actual implementation behind the scene. We could still decide later on to store the links in a database or whatever.
Cheers,
Nicolas
comment:368 in reply to: ↑ 367 Changed 4 years ago by
Nicolas:
Replying to nthiery:
Given the number of questions popping up recently, I decided this morning that I might as well write the developer's documentation for axioms instead of spreading the information in comments on this trac ticket.
Thank you!!
comment:369 followup: ↓ 370 Changed 4 years ago by
I have just pushed a first step on u/nthiery/ticket/10963
(extended primer on axioms). Lunch and then documentation of how to
implement axioms ...
PS: do we have a trac role for referencing git branches?
comment:370 in reply to: ↑ 369 Changed 4 years ago by
Replying to nthiery:
I have just pushed a first step on
u/nthiery/ticket/10963
(extended primer on axioms). Lunch and then documentation of how to implement axioms ...
I'm looking forward to it. Can you also link to a readily typeset version of this documentation? I'm mainly interested in reading it, not so much building it.
comment:371 Changed 4 years ago by
Can you also link to a readily typeset version of this documentation? I'm mainly interested in reading it, not so much building it.
There it is:
In progress, ...
comment:372 Changed 4 years ago by
Progress in the documentation:
 implementing new axioms
 handling multiple axioms, and tree structure of the classes
 recovering the class of a category with axiom to add new code
I am not sure I'll be able to make more progress over the weekend.
comment:373 followup: ↓ 376 Changed 4 years ago by
I am reading the documentation of axioms, which begins by saying that one should first be familiar with the doc of axioms in the category primer... which is probably contained in this very patch too, even though I cannot find where.
Well, this just to say that there seems to be something wrong with the first two examples of Sage code in the section entitled "Difference between axioms and regressive covariant functorial constructions" of the following doc : http://sage.math.washington.edu/home/nthiery/sage6.0/src/doc/output/html/en/reference/categories/sage/categories/primer.html#categoryprimeraxioms
Nathann
comment:374 followup: ↓ 377 Changed 4 years ago by
In the same page, there is a broken link, in "Each category should come with a good example, in sage.categories.examples". If you want to avoid that, you can use the warnlinks flag when compiling the doc. Broken links will appear as warnings; it's totally cool.
Besides, it seems very easy to intersect properties (axioms). Isn't there any need to *remove* axioms from time to time ? And is there a way to do it ?
Nathann
comment:375 followup: ↓ 378 Changed 4 years ago by
Hi Nicholas,
Great work writing up the documentation. It's very accessibly written and very friendly in tone. I see you're not done yet. One (in my view major) point I don't see addressed yet:
base_category: While you argue why the code is written like a (spanning) tree of the acyclic digraph, I don't see it argued why it needs to be reflected in further data structures. In particular, I don't see why "base_category" is necessary at all. Surely we need to know what the super categories are, but I don't see why one needs to be preferred.
In fact, as soon as classes get implemented in their own module rather than as a nested class, there isn't a natural tree structure anyway.
Apart from that, some minor comments about coding conventions and class use.
old style classes: If you type class ElementMethods:, you create a nested class that is an old-style class in Python 2. Since you're basically only interested in ElementMethods.__dict__ anyway, this is perhaps not such an issue. Old-style classes have some semantic differences. I don't know if they have any significant performance differences, and probably they don't have any performance differences that matter for your very limited application.
nested classes: are a little strange in Python. The syntax suggests the class would become a kind of closure with the enveloping class as the scope closed over. That is not the case, however, which is probably good for your purposes. It's one reason why nested classes aren't very popular, however: they add an extra indentation level and they aren't any different from a separate class, even though the lexical context might suggest differently. So I'd lean towards not nesting axiom classes. At the cost of one attribute assignment Finite = FiniteGroups (or something like that), you're being spared extra indentation on many lines. It probably also makes the code easier to read for people who don't usually use a code-folding editor.
comment:376 in reply to: ↑ 373 Changed 4 years ago by
Replying to ncohen:
I am reading the documentation of axioms, which begins by saying that one should first be used to the doc of axioms in the category primer... which is probably contained in this very patch too. Even though I cannot find where.
I am confused. Are you looking at http://sage.math.washington.edu/home/nthiery/sage6.0/src/doc/output/html/en/reference/categories/sage/categories/category_with_axiom.html#modulesage.categories.category_with_axiom? Doesn't it start with links to the primer and the appropriate section there?
Well, this just to say that there seems to be something wrong with the first two examples of Sage code in the section entitled "Difference between axioms and regressive covariant functorial constructions" of the following doc : http://sage.math.washington.edu/home/nthiery/sage6.0/src/doc/output/html/en/reference/categories/sage/categories/primer.html#categoryprimeraxiom
You mean that they are not framed as Sage examples? Thanks for spotting this! I had indeed forgotten the ::. Fixed on my machine; it will be in the next commit.
comment:377 in reply to: ↑ 374 ; followup: ↓ 381 Changed 4 years ago by
Replying to ncohen:
In the same page, there is a broken link, in "Each category should come with a good example, in sage.categories.examples".
Thanks for the reminder. I was wondering about this. At this point, the module sage.categories.examples bears no documentation (like most modules corresponding to directories in Sage). In such a case is it better to not put a link, or put one in case someone would later add documentation? Well, ok, or add doc there, but I am not sure what would be useful to say.
If you want to avoid that, you can use the warnlinks flag when compiling the doc. Broken links will appear as warnings; it's totally cool.
Yup; I'll run that. I just don't promise I'll fix here those links that were previously broken.
Besides, it seems very easy to intersect properties (axioms). Isn't there any need to *remove* axioms from time to time ? And is there a way to do it ?
You have the _without_axiom method. It works stupidly by removing all axioms and then reinserting all but the one you mentioned. It's not super robust though, since some combinations of axioms may imply others, and thus you might get back the original category, as in:
sage: F = FiniteFields()
sage: F._without_axiom("Commutative")
Category of finite fields
Since I had only very few use cases for this method, I left it private for now, until we get a clearer idea of the precise semantics we want.
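The non-robustness can be illustrated with a toy closure computation in plain Python (illustrative only; the single deduction rule mirrors "a finite division ring is a field", i.e. Wedderburn's little theorem):

```python
def closure(axioms):
    """Toy deduction: finite + division implies commutative."""
    axioms = set(axioms)
    if {"Finite", "Division"} <= axioms:
        axioms.add("Commutative")
    return frozenset(axioms)

def without_axiom(axioms, axiom):
    """Remove one axiom, then recompute the closure of the rest,
    mimicking what _without_axiom does."""
    return closure(a for a in axioms if a != axiom)

finite_fields = closure({"Finite", "Division", "Commutative"})

# Dropping "Commutative" gives back the very same axiom set, since
# the remaining axioms re-imply it:
assert without_axiom(finite_fields, "Commutative") == finite_fields
```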
Cheers,
Nicolas
comment:378 in reply to: ↑ 375 ; followups: ↓ 379 ↓ 400 Changed 4 years ago by
Replying to nbruin:
Great work writing up the documentation. It's very accessibly written and very friendly in tone.
Thanks :)
I see you're not done yet.
Yup. I'll be back to it tomorrow morning.
One (in my view major) point I don't see addressed yet:
base_category: While you argue why the code is written like a (spanning) tree of the acyclic digraph, I don't see it argued why it needs to be reflected in further data structures. In particular, I don't see why "base_category" is necessary at all. Surely we need to know what the super categories are, but I don't see why one needs to be preferred.
base_category is used in a few spots for computing stuff recursively. This includes _without_axioms (a trivial recursion), and more importantly the calculation of the super categories (a tricky recursion).
I agree that this is mostly for internal use. But it's consistent with functorial constructions and the like to have the .base_category() method.
In fact, as soon as classes get implemented in their own module rather than as a nested class, there isn't a natural tree structure anyway.
The tree structure is still there, given by the links "Finite=...". And it is used extensively by the underlying algorithmic.
old style classes: If you type class ElementMethods:, you create a nested class that is an old-style class in Python 2. Since you're basically only interested in ElementMethods.__dict__ anyway, this is perhaps not such an issue. Old-style classes have some semantic differences. I don't know if they have any significant performance differences, and probably they don't have any performance differences that matter for your very limited application.
Yup. We have been using ElementMethods and friends since 2009 while being aware of this artifact. As you mention, those are just bags of methods, and the semantic difference does not pop up in practice (well, it did once, but it was trivial).
nested classes: are a little strange in Python. The syntax suggests the class would become a kind of closure with the enveloping class as the scope closed over. That is not the case, however, which is probably good for your purposes. It's one reason why nested classes aren't very popular, however: they add an extra indentation level and they aren't any different from a separate class, even though the lexical context might suggest differently. So I'd lean towards not nesting axiom classes. At the cost of one attribute assignment Finite = FiniteGroups (or something like that), you're being spared extra indentation on many lines. It probably also makes the code easier to read for people who don't usually use a code-folding editor.
Well, we have been using nested classes extensively since the beginning of the category code in late 2008. It really helps to see the structure of the code. See magmas.py for an example. Maybe it's time to advertise them more to the Python community :)
I agree that we should not nest too much for the indentation to not become too large; which is why I want the code to support both nesting and links to other files or elsewhere.
I agree that code folding is a killer tool there. And I am blaming emacs every day for not supporting it easily. But I still managed to write quite some category code without it :)
Cheers,
Nicolas
comment:379 in reply to: ↑ 378 Changed 4 years ago by
Replying to nthiery:
The tree structure is still there, given by the links "Finite=...". And it is used extensively by the underlying algorithmic.
No, that's my point. Both Fields.Finite and DivisionRings.Finite can point at the same category (and I think they could both do that via a properly formatted lazy import proxy if required). Then there's no tree structure. I haven't seen a convincing reason yet why you need to bless one (in this case probably Fields.Finite) as the main link. You may well have a convincing reason, in which case you should mention it. If there is no convincing reason, we should start de-emphasizing it. The data that you're modelling doesn't intrinsically imply a spanning tree, so if we can avoid putting one in, we'd probably be better off.
comment:380 followup: ↓ 382 Changed 4 years ago by
So I read the documentation, and it does a pretty good job of explaining what is going on. The goals are very nice and I totally agree with you. I didn't see it spelled out how and what kind of identities between different categories with axioms can be found automatically. It seems that this is about the same problem as normal forms for toric ideals, so there needs to be some decision about monomial/axiom orderings.
But that's not what I really want to bring up. I'm also more and more convinced that the whole "axioms as strings" is an absolutely terrible implementation. The very first example:
sage: class Cs(Category):
....:     def super_categories(self):
....:         return [Sets()]
....:     class Finite(CategoryWithAxiom):
....:         class ParentMethods:
....:             def foo(self):
....:                 print "I am a method on finite C's"
implements
sage: P = Parent(category=Cs().Finite())
sage: P.foo()                # ok, nice
I am a method on finite C's
sage: P.is_finite()          # What is this I don't even
True
From a Python programmer's perspective, the fact that class names get parsed under the hood is just about entirely unexpected. Sure, it's possible to implement, but it is also entirely opposite to Python best practices. I don't even want to bring up the poor guy who'll try this with "Endlich" instead of "Finite" as a class name and be in for a surprise.
Slightly different angle, same problem IMHO:
sage: Cs().Finite().super_categories()   # ok, nice
[Category of finite sets, Category of cs]
sage: Cs().Finite().axioms()             # really, this is the best we can do?
frozenset(['Finite'])
Axioms are, at the end of the day, the analog of mixins in the category framework, and as such are implemented as classes. Just return the classes. This should be obvious.
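Volker's "axioms as classes" alternative might look roughly like this (pure-Python sketch; with_axiom and all class names here are hypothetical illustrations, not a proposal for the actual API):

```python
class Finite:
    """Axiom as a plain mixin class rather than a string label."""
    def is_finite(self):
        return True

class Sets:
    pass

def with_axiom(category_class, axiom_class):
    """Build the category-with-axiom as an honest subclass, so that
    axioms() can return classes and introspection just works."""
    name = axiom_class.__name__ + category_class.__name__
    return type(name, (axiom_class, category_class),
                {"_axioms": frozenset([axiom_class])})

FiniteSets = with_axiom(Sets, Finite)

assert FiniteSets().is_finite() is True
assert FiniteSets._axioms == frozenset([Finite])  # classes, not strings
```

With the axiom available as a class, help(Finite) and standard code introspection answer "what does this axiom give me" directly.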
comment:381 in reply to: ↑ 377 ; followup: ↓ 386 Changed 4 years ago by
Yooooooo !!
You have the _without_axiom method.
Since I had only very little use cases for this method, I left it as private for now until one will get a clearer idea of the precise semantic we want.
Okay, okay. And about your confusion above: I was looking for the code, not for the doc. I wondered where this doc was actually implemented. I didn't even know if it came from this patch or some other place ^^;
Nathann
comment:382 in reply to: ↑ 380 ; followup: ↓ 385 Changed 4 years ago by
Replying to vbraun:
Axioms are at the end of the day the analog of mixins in the category framework, and as such implemented as classes. Just return the classes. This should be obvious.
I'm not entirely sure this is true presently. While Groups.Finite points to a class (category) FiniteGroups (or really just at the class named Groups.Finite if written as a nested class), the class Sets.Finite is also automatically picked up, and if I'm not mistaken that's one of the features of axioms: if they can, they get applied to supercategories as well. The class FiniteSets and the class FiniteGroups are quite probably distinct classes, so there is no single class that symbolizes the Finite axiom, yet it is relevant that FiniteGroups resp. FiniteSets are obtained from Groups resp. Sets by applying the same axiom. The label that signifies this is currently the string "Finite". From an implementation point of view I think it's confusing to abuse the class type __dict__ to document this fact, but Nicholas does illustrate how the abuse leads to easy-to-write classes.
Essentially, Nicholas has selected a bunch of special method names (such as dunder methods __len__, __get__, etc.) and blessed them to be "axioms", which receive special treatment: if a category implements one of those then this axiom can be "applied" to this category. It does mean the documentation should have an exhaustive list of what method names are considered axioms, and introducing new axioms would be subject to name-clash warnings (we wouldn't want to break code out there that already gives a non-axiom meaning to an attribute).
I think that if we're staying with this pattern, we should announce some rule: on categories, any method name starting with a capital is in principle reserved to be used as an axiom or functorial construction. We should probably also include some guidelines on how users should go about implementing their own non-library axioms in a way that is likely to work with future versions, or declare that such a thing is not guaranteed (meaning categories do not implement an "open protocol" but are entirely meant to be internals of Sage).
comment:383 followup: ↓ 388 Changed 4 years ago by
I agree with Nils, but IMHO that is just another way of spelling out the problem. What does adding the Finite axiom actually mean if I want to apply it to a category that I'm using/writing? I can guess that it gives me an is_finite() method returning True, but confusingly that is implemented in some category. Anything else? How can I use code introspection (one of Python's absolute strong points) to find out what is going on? I can't, because all I've got is this string to represent the axiom. Axioms should be code (classes) with some protocol for how they are used to enrich categories.
Slightly related, I don't like the _base_category_class_and_axiom attribute. A heterogeneous list with some convention to treat the first element as special is a terrible data structure. Just split it up into _base_category_class and _axioms.
comment:384 Changed 4 years ago by
I just pushed two new sections on axioms depending on other axioms and on deduction rules.
I'll answer your comments later tonight. Thanks for them, they are making me think about how to best present the material. Thanks Jean-Baptiste for some proofreading too!
Cheers,
Nicolas
comment:385 in reply to: ↑ 382 Changed 4 years ago by
Replying to nbruin:
I'm not entirely sure this is true presently. While Groups.Finite points to a class (category) FiniteGroups (or really just at the class named Groups.Finite if written as a nested class), the class Sets.Finite is also automatically picked up, and if I'm not mistaken that's one of the features of axioms: if they can, they get applied to supercategories as well. The class FiniteSets and the class FiniteGroups are quite probably distinct classes, so there is no single class that symbolizes the Finite axiom, yet it is relevant that FiniteGroups resp. FiniteSets are obtained from Groups resp. Sets by applying the same axiom.
+1
The label that signifies this is currently the string "Finite". From an implementation point of view I think it's confusing to abuse the class type __dict__ to document this fact, but Nicolas does illustrate how the abuse leads to easy-to-write classes.
It's not a label. Cs() inherits from the Sets category a Finite method, and it can complement this method with extra data (here a mixin class) in the form of a class attribute Cs.Finite. The fact that Cs.Finite *complements* the Finite method rather than *overriding* it is not unnatural: that's what sequences of super calls are usually about.
Essentially, Nicolas has selected a bunch of special method names (such as dunder methods __len__, __get__, etc.) and blessed them to be "axioms", which receive special treatment: if a category implements one of those, then this axiom can be "applied" to this category. It does mean the documentation should have an exhaustive list of what method names are considered axioms, and introducing new axioms would be subject to name-clash warnings (we wouldn't want to break code out there that already gives a non-axiom meaning to an attribute). I think that if we're staying with this pattern, we should announce some rule: on categories, any method name starting with a capital is in principle reserved to be used as an axiom or functorial construction. We should probably also include some guidelines on how users should go about implementing their own non-library axioms in a way that is likely to work with future versions, or declare that such a thing is not guaranteed (meaning categories do not implement an "open protocol" but are entirely meant to be internals of Sage).
Not quite, since the definition of an axiom is local to a category and its supercategories. For example, the category of Modules defines an axiom "Graded", and this fixes the semantics of Cs.Graded for every subcategory of Modules, but no more. That's no different from the usual hierarchy of classes: if a class defines (or declares) an attribute or method with a given name, this fixes the semantics of that name for all subclasses.
Therefore I don't think we need to take any more specific steps than if we were implementing usual hierarchies of classes.
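This analogy with ordinary class hierarchies can be made concrete with a toy Python sketch (illustrative names only, not Sage code):

```python
# Toy illustration (not Sage code): a base class fixes the semantics of a
# name for all of its subclasses, just as Sets fixes the meaning of
# "Finite" for every subcategory of Sets.

class Sets(object):
    def Finite(self):
        # The semantics of the name "Finite" is fixed here, once and for
        # all subclasses: return the "finite" variant of this category.
        return "Finite " + type(self).__name__

class Groups(Sets):
    pass  # inherits Finite; reusing the name for anything else would be a bug

print(Groups().Finite())  # Finite Groups
```

Nothing in Groups mentions the name Finite, yet its meaning there is fixed by the superclass, which is the point being made.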
Of course, I agree that it's good for consistency to promote the convention that axioms and functorial constructions should start with a capital. It gives a hint that Cs().A() is going to construct a category.
Cheers,
Nicolas
comment:386 in reply to: ↑ 381 Changed 4 years ago by
Replying to ncohen:
Okay okay. And about your confusion above: I was looking for the code, not for the doc. I wondered where this doc was actually implemented. I didn't even know if it came from this patch or some other place
^^;
:)
comment:387 Changed 4 years ago by
Replying to vbraun:
So I read the documentation and it does a pretty good job of explaining what is going on. The goals are very nice and I totally agree with you.
I am glad you appreciate it :)
I didn't see it spelled out how and what kind of identities between different categories with axioms can be found automatically.
The new section on deduction rules (i.e. mathematical facts encoded into the system) might answer your question. Otherwise, let me know.
It seems that this is about the same problem as normal form for toric ideals so there needs to be some decision about monomials / axiom orderings.
Hmm, I am not sure. It's more about computing recursively a closure upon all available deduction rules to derive as much information as possible from what's available.
But that's not what I really want to bring up. I'm also more and more convinced that the whole "axioms as strings" is an absolutely terrible implementation. The very first example:
    sage: class Cs(Category):
    ....:     def super_categories(self):
    ....:         return [Sets()]
    ....:     class Finite(CategoryWithAxiom):
    ....:         class ParentMethods:
    ....:             def foo(self):
    ....:                 print "I am a method on finite C's"
implements
    sage: P = Parent(category=Cs().Finite())
    sage: P.foo()   # ok, nice
    I am a method on finite C's
    sage: P.is_finite()   # What is this I don't even
    True
From a Python programmer's perspective, the fact that class names get parsed under the hood is just about entirely unexpected. Sure it's possible to implement, but it is also entirely opposite of Python best practices. I don't even want to bring up the poor guy who'll try this with "Endlich" instead of "Finite" as a class name and be in for a surprise.
In the above example, there is no class name parsing. If the name Finite is special, that's because Cs is a subcategory of Sets, and Sets defines the Finite axiom. Name-wise, it's of the same nature as implementing in a class a method declared in some superclass. I just added a note to make this more explicit in the documentation.
Slightly different angle, same problem IMHO:
    sage: Cs().Finite().super_categories()   # ok, nice
    [Category of finite sets, Category of cs]
    sage: Cs().Finite().axioms()   # really, this is the best we can do?
    frozenset(['Finite'])
Axioms are at the end of the day the analog of mixins in the category framework, and as such implemented as classes. Just return the classes. This should be obvious.
As pointed out by Nils, a single axiom (like Finite) corresponds to a bunch of mixin classes (Groups.Finite, Crystals.Finite, Lattices.Finite, ...). Each mixin models a given category with a given axiom. Granted, the mixin in the category defining the axiom (e.g. Sets.Finite) is a bit special; still it is a category, not the axiom itself (mathematically speaking, we make a difference, even just in the naming, between the axiom of associativity and the category of associative magmas), so I would not want to model the axiom with it.
Now there is the question of whether we want to model the axioms at all with objects (or classes), for example when returning the axioms of a category. Honestly, in three years I haven't had a need for it; in particular, I did not need to have operations on the axioms. The only relevant operation I could see so far is asking an axiom in which category it's defined. So for now let's keep it simple.
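If one did want to model axioms as objects, the one operation identified above (asking an axiom in which category it is defined) could be sketched in plain Python as follows (hypothetical names, not part of this ticket):

```python
# Hypothetical sketch: a minimal Axiom object whose only operation is
# reporting the category that defines it.

class Axiom(object):
    def __init__(self, name, defining_category):
        self.name = name                            # e.g. "Finite"
        self.defining_category = defining_category  # e.g. "Sets"

    def __repr__(self):
        return "%s (defined in %s)" % (self.name, self.defining_category)

finite = Axiom("Finite", "Sets")
print(finite)  # Finite (defined in Sets)
```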
Cheers,
Nicolas
comment:388 in reply to: ↑ 383 Changed 4 years ago by
Replying to vbraun:
Slightly related, I don't like the _base_category_class_and_axiom attribute. A heterogeneous list with some convention to treat the first element special is a terrible data structure. Just split it up into _base_category_class and _axioms.
I agree it's not so nice. But there is a technicality preventing us from storing the base category class by itself in an attribute (see the note at the bottom of the lazy class attribute _base_category_class_and_axiom). Well, you could, but you'd need to wrap it somehow, typically in a tuple. Also, handling them together has a small advantage since they are always set simultaneously, and this gives some guarantee of atomicity.
Anyway, it's mostly an internal implementation detail that can be refactored later on if we then decide that there is a better solution; and the piece of it that is exposed (when a category implementer provides the attribute) gives a safe idiom.
comment:389 followups: ↓ 390 ↓ 397 Changed 4 years ago by
You are parsing the "Finite" class name, I don't care if it is explicit using string tools/regexes or implicit looking for classes whose names match certain strings. The usual way is to provide a programmatic interface that sets up stuff in code. You should avoid using strings for program flow, and most certainly not use them for foundational material. I wouldn't care so much if we were talking about some implementation details in the combinat project, but you expect us to go around and teach others to use this. That better have a really good reason for the current interface. Simple would be something that follows usual patterns (even if its a few characters more). I don't see anything simple here, I see a bunch of trickery that is extremely hard to understand by looking at the code.
And you don't need operations on axioms? I see a lot of weird stuff in this ticket where you use strings to do operations that would be much clearer if you had some object to represent the axiom. E.g.:
sage: FiniteFields()._without_axiom("Commutative")
vs.
    sage: FiniteFields().without(Commutative())
    sage: FiniteFields() - Commutative()
It's an absolute no-brainer in Python to model everything with objects. Which file implements the commutativity axiom? If I have an object then I can tell immediately. Where is the documentation for the Commutative axiom? Let's keep it simple and explicit, yes. Shorter but non-discoverable is most certainly not simpler. And explicit is better than implicit, as always.
Also, I don't agree that atomicity in setting _base_category_class_and_axiom buys us anything. It's private by convention, so it is the job of the setter to make changes atomically if necessary (though that's hardly an issue in Python). But in multithreaded Java, say, this would be a bad data structure as well. We all know the old joke: what's the only data structure in Cobol? A 2000-character EBCDIC string...
comment:390 in reply to: ↑ 389 Changed 4 years ago by
Replying to vbraun:
You are parsing the "Finite" class name,
The fact that axioms are represented by a string is a corollary of storing the category that results from applying the axiom in an attribute of the supercategory. Attributes are labelled by (interned) strings.
It seems that axioms by themselves are hardly more than labels (they definitely don't implement something themselves), so I'm not sure something significant is gained by introducing another object to model axioms themselves.
Any operations that explicitly depend on axiom labels being strings is less desirable in my eyes, i.e., no mangling please.
comment:391 followup: ↓ 395 Changed 4 years ago by
I'm not sure what kind of argument you are trying to make with axioms being just labels. Variables are just labels in Python, but we still pass them around. In fact, you could use magic variable names (a.k.a. global variables) for everything, but I think we all agree that this is bad practice.
    def sin():
        arg = globals().get('x')
        return math.sin(arg)
In fact, since types are just stuff that is assigned to a variable in Python, this is precisely what this ticket does. It establishes a naming convention for types to avoid having to pass them around. And IMHO that is bad practice for exactly the same reasons as global variables.
comment:392 Changed 4 years ago by
It really sounds like what you're after, Volker, is something like Java's enum type: labels belonging to some collection (which in this case we can use as flags [ints]). Perhaps we should mimic that with a container class called Axioms which has methods (the ideal world would be immutable attributes which show up in the documentation) for the various axioms (which we could encode as strings or flags). I'd almost advocate going a step further with this in that it also stores what axioms are used. So it would work something like this:
    sage: F = Fields().with_axiom(Axioms.finite()); F
    Finite fields
    sage: F.axioms()
    Axioms: Finite
and internally:
    # In Category
    def with_axiom(axiom):
        new_axioms = copy(self._axioms).with_axioms([axiom])
        return ObjectWithAxioms(new_axioms)

    # The container class
    class Axioms(object):
        def __init__(self, axioms=[]):
            self._axioms = set(axioms)
        def __repr__(self):
            return "Axioms: " + ", ".join(ax for ax in self._axioms)
        def __iadd__(self, axiom):
            return self.add_axioms([axiom])
        def __isub__(self, axiom):
            return self.without_axioms([axiom])
        def with_axioms(self, axioms):
            # After making sure each axiom is valid
            self._axioms = self._axioms.union(axioms)
        def without_axioms(self, axioms):
            self._axioms = self._axioms.difference(axioms)
        @staticmethod
        def finite():
            """
            With this we can document what each axiom means and it
            shows up on tab completion.
            """
            return "Finite"
        @staticmethod
        def commutative():
            return "Commutative"
Although we might want to change the above behavior to act like an immutable object.
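A runnable sketch of such an immutable variant, in plain Python (the names Axioms, with_axioms, and without_axioms are hypothetical, not part of this ticket):

```python
# Hypothetical sketch: an immutable, enum-like container of axiom labels.
# Every operation returns a new container instead of mutating in place.

class Axioms(object):
    def __init__(self, axioms=()):
        self._axioms = frozenset(axioms)

    def __repr__(self):
        return "Axioms: " + ", ".join(sorted(self._axioms))

    def with_axioms(self, axioms):
        return Axioms(self._axioms | frozenset(axioms))

    def without_axioms(self, axioms):
        return Axioms(self._axioms - frozenset(axioms))

    @staticmethod
    def finite():
        """Label for the Finite axiom; documented and tab-completable."""
        return "Finite"

    @staticmethod
    def commutative():
        return "Commutative"

ax = Axioms().with_axioms([Axioms.finite(), Axioms.commutative()])
print(ax)                              # Axioms: Commutative, Finite
print(ax.without_axioms(["Finite"]))   # Axioms: Commutative
```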
comment:393 Changed 4 years ago by
Hmmmmm... And would there be anything wrong with that ? :P
F = Fields() and axioms.finite()
Anyway, I like this axioms.<tab> thing. As much as I love all the thing.<tab> things.
Nathann
comment:394 followup: ↓ 399 Changed 4 years ago by
At least implementing a kind of "axiom enum" type leverages some of Python's code inspection capabilities. But IMHO there is a reason that it took until Python 3.4 and at least one failed PEP for enums to make it into Python: there are few compelling use cases in a completely dynamic language. You can pass anything to a method and store anything as an attribute, so instead of an enum value you can always use the enumerated thing. And I don't buy that there is no code or data that we could possibly attach to axioms. Why not
    class Finite(Axiom):
        class ParentMethods:
            def is_finite(self):
                return True

    class Groups(Category):
        class ParentMethods:
            @requires_axiom(Finite())
            def is_finite_group(self):
                return True
comment:395 in reply to: ↑ 391 ; followup: ↓ 396 Changed 4 years ago by
Replying to vbraun:
I'm not sure what kind of argument you are trying to make with axioms being just labels. Variables are just labels in Python, but we still pass them around.
We don't! We normally pass around the values that are bound to them. Passing around "variables" would necessarily boil down to passing around the strings that can then be looked up in the dictionaries representing the scope bindings to be investigated. THAT is indeed what the category code does (it also mangles strings, and that I don't like).
    def sin():
        arg = globals().get('x')
        return math.sin(arg)
I think this is fundamentally different from what is happening in this code. A direct corollary of storing the subcategory obtained by applying an axiom to a supercategory in an attribute labelled with the axiom name is that at least at some point axioms are represented by a string.
It seems the strongest motivation (and a convincing one to me) for storing subcategories in attributes is that it allows leveraging Python's syntax for writing classes and attributes. Given that implementation, axioms are represented by a string at some point. Do we need another representation as well?
If you take "applying an axiom to a category" literally, then basically the implementation
    def Finite(category):
        return category.Finite()
        # or, to illustrate where the string is living:
        # return getattr(category, "Finite")()
would do the trick. There'd be room for documentation on that, but really there is not much to document. Checking whether a given axiom is applicable boils down to eventually
    def has_axiom(category, axiom):
        return hasattr(category, string_corresponding_to(axiom))
where the implementation of string_corresponding_to is simplest if axiom itself is already given by a string. What do we gain from representing axioms otherwise?
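The two sketches above can be fleshed out into a runnable toy model in plain Python (toy names, not Sage internals), showing that "applying an axiom" is ultimately an attribute lookup by name:

```python
# Toy model (not Sage internals): an axiom is "applied" by looking up an
# attribute on the category, so the axiom's identity is an attribute name.

class FiniteSets(object):
    pass

class Sets(object):
    def Finite(self):
        return FiniteSets()

def apply_axiom(category, axiom_name):
    # "applying an axiom" is calling the attribute of that name
    return getattr(category, axiom_name)()

def has_axiom(category, axiom_name):
    # checking applicability boils down to hasattr
    return hasattr(category, axiom_name)

print(isinstance(apply_axiom(Sets(), "Finite"), FiniteSets))  # True
print(has_axiom(Sets(), "Flasque"))                           # False
```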
comment:396 in reply to: ↑ 395 ; followups: ↓ 398 ↓ 401 ↓ 405 Changed 4 years ago by
Replying to nbruin:
We don't! We normally pass around the values that are bound to them.
That's what I meant by "them", sorry for being unclear. My point is not so much about what exactly is passed, but that there is nothing passed, referenced, imported, or inherited from at all:
    sage: class Cs(Category):
    ....:     def super_categories(self):
    ....:         return [Sets()]
    ....:     class Finite(CategoryWithAxiom):
    ....:         class ParentMethods:
    ....:             def foo(self):
    ....:                 print "I am a method on finite C's"
    sage: P = Parent(category=Cs().Finite())
    sage: P.is_finite()   # What is this I don't even
Nowhere does the source of Sets.Finite refer to Cs.Finite or vice versa. By the normal mental model of Python code (principle of least astonishment), that ought to mean that the implementations are independent. The only thing that ties them together ultimately is that a substring of the type name matches. For example, the following would make me much happier since it makes the dependence visible:
    sage: class Cs(Category):
    ....:     def super_categories(self):
    ....:         return [Sets()]
    ....:     class Finite_or_any_other_name(Sets.Finite):
    ....:         class ParentMethods:
    ....:             def foo(self):
    ....:                 print "I am a method on finite C's"
    sage: P = Parent(category=Cs().Finite_or_any_other_name())
    sage: P.is_finite()   # obviously comes from Sets.Finite
Or, even better, with standalone axiom objects, either using a @require_axiom(Finite) decorator or class syntax if you prefer:
    sage: class Cs(Category):
    ....:     class Finite_or_any_other_name(axioms.Finite):
    ....:         class ParentMethods:
    ....:             def foo(self):
    ....:                 print "I am a method on finite C's"
You could argue that the ParentMethods / ElementMethods are already precedent for magic attribute names that violate the usual Python mental model. That's true, but a) I wasn't asked when they were introduced and b) there are only two magic names, and they are ubiquitous in every category source. So it is still kind of obvious from the source code. But if you end up with precisely two categories that have a Flasque subcategory, say, then it is going to be very confusing.
comment:397 in reply to: ↑ 389 Changed 4 years ago by
Replying to vbraun:
You are parsing the "Finite" class name, I don't care if it is explicit using string tools/regexes or implicit looking for classes whose names match certain strings. The usual way is to provide a programmatic interface that sets up stuff in code. You should avoid using strings for program flow, and most certainly not use them for foundational material. I wouldn't care so much if we were talking about some implementation details in the combinat project, but you expect us to go around and teach others to use this. That better have a really good reason for the current interface. Simple would be something that follows usual patterns (even if its a few characters more). I don't see anything simple here, I see a bunch of trickery that is extremely hard to understand by looking at the code.
And you don't need operations on axioms? I see a lot of weird stuff in this ticket where you use strings to do operations that would be much clearer if you had some object to represent the axiom. E.g.:
    sage: FiniteFields()._without_axiom("Commutative")
vs.
    sage: FiniteFields().without(Commutative())
    sage: FiniteFields() - Commutative()
I will change this example to FiniteFields() + Commutative() so as to speak of an operation which is clearly useful in real life.
The syntax
    FiniteFields().Commutative()
has the following merits:
 No need to import stuff. From a single entry point (e.g. the category of Fields()), you can explore all categories you can get from it by just following the flow of calling methods.
 Want to know what the available axioms are? Well, constructing the subcategory of objects satisfying the Finite axiom is a natural operation on a category; therefore, in a standard OO pattern, you find this operation among the other methods. Just use introspection:
    sage: C = Monoids()
    sage: C.<tab>
(as pointed out by Nils, you could refine this to recover exactly the axioms)
 Tab completion naturally reduces to exactly those axioms that are applicable in the given context.
 Which file implements the commutativity axiom? Use introspection:
    sage: Monoids().Finite.__module__
    'sage.categories.sets_cat'
 Where is the documentation for the Commutative axiom? Use introspection:
    sage: C = Monoids()
    sage: C.Finite?
 Somewhat unrelated, but since you asked elsewhere: how do you know which mixins get inserted when you use a given axiom? Use introspection:
    sage: Groups().Finite().parent_class.mro()
    [sage.categories.finite_groups.FiniteGroups.parent_class,
     sage.categories.finite_monoids.FiniteMonoids.parent_class,
     sage.categories.groups.Groups.parent_class,
     sage.categories.monoids.Monoids.parent_class,
     sage.categories.finite_semigroups.FiniteSemigroups.parent_class,
     sage.categories.semigroups.Semigroups.parent_class,
     sage.categories.magmas.Magmas.Unital.Inverse.parent_class,
     sage.categories.magmas.Magmas.Unital.parent_class,
     sage.categories.magmas.Magmas.parent_class,
     sage.categories.finite_enumerated_sets.FiniteEnumeratedSets.parent_class,
     sage.categories.enumerated_sets.EnumeratedSets.parent_class,
     sage.categories.finite_sets.FiniteSets.parent_class,
     sage.categories.sets_cat.Sets.parent_class,
     sage.categories.sets_with_partial_maps.SetsWithPartialMaps.parent_class,
     sage.categories.objects.Objects.parent_class,
     object]
Besides I would not want to use arithmetic for the operation of adding an axiom, since mathematically we would not write this using arithmetic either, but that's a minor detail.
Yes, there is a bit of complexity under the hood. Implementing a mixin mechanism in a language that does not support it natively means that you have to do some nontrivial stuff. The internals of an interpreter are not simple either. It does not necessarily mean that it's complicated to use in practice.
<grin> Oh but right, in Python, methods and attributes are accessed by looking up strings in the dictionary of the classes/objects. Overriding a method means inserting a string in a dictionary. Yuck! Maybe we should not be using methods and attributes at all in our code? </grin>
It's an absolute no-brainer in Python to model everything with objects. If I have an object then I can tell immediately. Let's keep it simple and explicit, yes. Shorter but non-discoverable is most certainly not simpler. And explicit is better than implicit, as always.
The axiom is basically already modeled by a *method*. That's rather simple and explicit. And introspection works rather naturally.
Maybe we could go a bit further and indeed use objects instead of strings in the _with(...) methods and in the output of .axioms(). But please, go ahead, try it in a nontrivial project and see if it really makes things easier to use in practice. It's not clear.
I am happy leaving a note that this piece of the design is an implementation detail and subject to refactoring.
Also, I don't agree that atomicity in setting _base_category_class_and_axiom buys us anything. It's private by convention, so it is the job of the setter to make changes atomically if necessary (though that's hardly an issue in Python). But in multithreaded Java, say, this would be a bad data structure as well. We all know the old joke: what's the only data structure in Cobol? A 2000-character EBCDIC string...
<getting frustrated> Whatever. That's a minor implementation detail I don't care about. Not happy with it? Go ahead, fix it and get everything right. I spent weeks polishing everything to a state where it works smoothly. This patch has been advertised for quite some time now, and already got positively reviewed feature-wise months ago. Time to move on. </getting frustrated>
Nicolas
comment:398 in reply to: ↑ 396 ; followup: ↓ 402 Changed 4 years ago by
Replying to vbraun:
For example the following would make me much happier since it makes the dependence visible:
    sage: class Cs(Category):
    ....:     def super_categories(self):
    ....:         return [Sets()]
    ....:     class Finite_or_any_other_name(Sets.Finite):
    ....:         class ParentMethods:
    ....:             def foo(self):
    ....:                 print "I am a method on finite C's"
    sage: P = Parent(category=Cs().Finite_or_any_other_name())
    sage: P.is_finite()   # obviously comes from Sets.Finite
Ah, I see your issue. I get the feeling that if one would address that point fully, one would end up with a system so verbose that axioms don't really save coding any more, which is the motivation of the system in the first place.
I also don't think the above suggestion expresses the link properly: P doesn't have the is_finite method because its category is Cs.Finite, but because its category is Cs together with the Finite axiom, Cs is a subcategory of Sets, and Sets can also have the Finite axiom applied to it. This kind of inheritance is fundamentally richer than what normal class inheritance allows for, so trying to express it is doomed to fail (otherwise we could have used the translation!). I am not convinced that we really need this in Sage, but the author does.
In fact, the syntax above is perhaps more misleading: by letting Cs.Finite inherit from Sets.Finite, you might think that attributes like ParentMethods follow the usual inheritance rules as well. But they shouldn't, because Cs.Finite.ParentMethods does not contain is_finite, so this fails to express how P gets its is_finite attribute completely. By not letting Cs.Finite inherit from Sets.Finite, at least we're not suggesting a kind of relation that doesn't apply.
Or, even better, with standalone axiom objects, either using a @require_axiom(Finite) decorator or class syntax if you prefer:
    sage: class Cs(Category):
    ....:     class Finite_or_any_other_name(axioms.Finite):
    ....:         class ParentMethods:
    ....:             def foo(self):
    ....:                 print "I am a method on finite C's"
There might be something to that, but if the or_any_other_name option gets exercised, how would you tell efficiently whether Cs can have axioms.Finite applied to it? See if there are any attributes that are subtypes of axioms.Finite?
There are precedents in Python for "magic" attribute names. For instance, an object gets a length by implementing a __len__ method on it.
I think the bigger issue is how the supercategories of a category are documented, rather than the relations between the axioms on each of them.
You could argue that the ParentMethods / ElementMethods are already precedent for magic attribute names that violate the usual Python mental model.
I think the category framework itself violates the usual Python mental model, but that's being advertised as a feature, the argument being that the usual Python model isn't expressive enough.
comment:399 in reply to: ↑ 394 Changed 4 years ago by
Replying to vbraun:
Why not
    class Finite(Axiom):
        class ParentMethods:
            def is_finite(self):
                return True

    class Groups(Category):
        class ParentMethods:
            @requires_axiom(Finite())
            def is_finite_group(self):
                return True
We had a similar syntax in MuPAD / Axiom. There, axioms were basically predicates on parents which you could test in your code. And they were organized in a module Ax.Finite, Ax., etc. In practice we never found an interesting use case for those (and we tried!).
On the other hand, I believe from practical experience that a mechanism of mixins like that implemented in categories, and further extended by this patch, can help a lot structuring the code based on math knowledge.
Of course, the downside is that we are necessarily deviating at some point from standard Python, since Python does not have native support for mixins. But this is alleviated by the fact that, once the magic for building the hierarchy of classes for parents and elements is finished, we are back to purely standard OO; which means that all standard tools like introspection and the like work.
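The dynamic-mixin idea can be sketched in plain Python (toy names, assuming nothing of Sage's internals): assemble the concrete class once with type(), after which ordinary introspection applies.

```python
# Toy sketch (not Sage internals): compose mixin classes dynamically with
# type(); the resulting class is ordinary Python, so mro() etc. just work.

class SetsParentMethods(object):
    def cardinality_known(self):
        return False

class FiniteSetsParentMethods(object):
    def is_finite(self):
        return True

def build_parent_class(name, mixins):
    # dynamically assemble a class hierarchy from the selected mixins
    return type(name, tuple(mixins), {})

FiniteParent = build_parent_class(
    "FiniteParent", [FiniteSetsParentMethods, SetsParentMethods])

p = FiniteParent()
print(p.is_finite())                              # True
print(FiniteSetsParentMethods in FiniteParent.mro())  # True
```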
Cheers,
Nicolas
comment:400 in reply to: ↑ 378 Changed 4 years ago by
Replying to nthiery:
base_category is used in a few spots for computing stuff recursively. This includes _without_axioms (a trivial recursion),
Is that even a well-defined operation? Given that DivisionRings().Finite().base_category() is not DivisionRings, it's not clear to me that this process is guaranteed to undo the application of "axioms". In this case it works because both DivisionRings() and Fields() are obtained via axioms from a common category.
But that also means that we didn't have to recurse up via a marked "base category". We could have taken any supercategory that is of type CategoryWithAxiom. There's a theorem to be proved that such a greedy approach works, but the commutativity of applying axioms should take care of that.
and more importantly the calculations of the super categories (a tricky recursion).
Yup, this bit:
    base_category = self._base_category
    axiom = self._axiom
    extra = self.extra_super_categories()
    return Category.join((self._base_category,) +
                         tuple(base_category.super_categories()) +
                         tuple(extra),
                         axioms=(axiom,), uniq=False,
                         ignore_axioms=((base_category, axiom),),
                         as_list=True)
This code just needs some supercategories that tie this thing into the category hierarchy. Apparently (base,) + base.super_categories + extra is enough, for some base that is present upon initialization. Why bother keeping the base special if it's not inherently special?
From:
    sage: V = Fields().Finite().super_categories()
    sage: V = flatten([v.super_categories() for v in V])
    sage: V = flatten([v.super_categories() for v in V])
    sage: V
    [Category of principal ideal domains, Category of domains,
     Category of semigroups, Category of unital magmas,
     Category of semigroups, Category of finite enumerated sets]
you can already see that the supercategories as returned now lead to multiple paths to the same thing anyway, so (as always when walking up a tree) you need to keep track of already visited nodes anyway. Limiting recursion to just "base" isn't going to alleviate that.
I agree that this is mostly for internal use. But it's consistent with functorial constructions and the like to have the .base_category() method.
Since it has no meaning here, I don't see why this consistency is desirable. I'd say it's misleading.
comment:401 in reply to: ↑ 396 Changed 4 years ago by
Replying to vbraun:
Nowhere does the source of Sets.Finite refer to Cs.Finite or vice versa. By the normal mental model of Python code (principle of least astonishment), that ought to mean that the implementations are independent. The only thing that ties them together ultimately is that a substring of the type name matches.
May I play the devil's advocate? Consider this example:
    class A:
        def foo(self):
            ...

    class B(A):
        def foo(self):
            ...
Nothing ties B.foo to A.foo. Yet, by the standard OO mental model, we know that B.foo() overrides A.foo() and should thus have the same semantics. Yet, ultimately the only link between the two is that the substring foo of the names A.foo and B.foo matches.
Granted, the "inheritance" relation between Cs and Sets is not as explicit as between B and A; but that's the price we pay for all the flexibility of dynamic mixins.
For example the following would make me much happier since it makes the dependence visible:
    sage: class Cs(Category):
    ....:     def super_categories(self):
    ....:         return [Sets()]
    ....:     class Finite_or_any_other_name(Sets.Finite):
    ....:         class ParentMethods:
    ....:             def foo(self):
    ....:                 print "I am a method on finite C's"
I see your point. We have something similar for functorial constructions where the idiom is:
    sage: class Algebras(Category):
    ....:     class Graded(GradedModulesCategory):
    ....:         class ParentMethods:
    ....:             def foo(self):
    ....:                 print "I am a method on graded algebras"
The reason why I moved away from this idiom, and why I consider refactoring functorial constructions similarly, is that defining new axioms is much more lightweight in practice than defining new constructions. Really, it was getting in my way when writing code. Itching hard. In fact, Jean-Baptiste is complaining that it's still not lightweight enough :) Also, even just implementing an axiom in a category is more lightweight than implementing a functorial construction (no need to fiddle with an extra import and risk merge conflicts there, ...).
There is another annoying issue in being explicit about where the axiom is defined, as in the idiom Finite(Sets.Finite). Namely, if later on one wants to generalize the axiom by moving its definition up the category hierarchy (maybe because in the meantime a larger category has been implemented where the axiom makes sense), then you need to fix accordingly each and every category where the axiom is implemented (the usual price for redundant information). This situation has happened to me in practice more than once!
You could argue that the ParentMethods / ElementMethods are already precedent for magic attribute names that violate the usual Python mental model. That's true, but a) I wasn't asked when they were introduced
But this was reviewed by a bunch of people. Of course not as great as you, master :)
Sorry, I could not resist. I totally understand your being careful before having a framework imposed upon you that may have long-lasting consequences on our code. But it's super frustrating for me and all those who have tons of code depending on that framework.
and b) there are only two magic names that are ubiquitous in every category source. So it is still kind of obvious from the source code. But if you end up with precisely two categories that have a Flasque subcategory, say, then it is going to be very confusing.
Well, you see Flasque(CategoryWithAxiom) in Cs. It tells you that it's implementing an axiom named Flasque. If you are interested in the code of Cs, there is some chance that you know what Cs is about and are aware of this axiom. Otherwise you quickly look up Cs().Flasque? to know what it's about.
Granted, this assumes a minimum of knowledge about categories and axioms. Like any infrastructure, there is a minimum of stuff to learn before you can really benefit from it. Here a 30-minute course should be sufficient to cover the necessary ground.
Cheers,
Nicolas
comment:402 in reply to: ↑ 398 ; followup: ↓ 403 Changed 4 years ago by
Replying to nbruin:
sage: class Cs(Category):
....:     class Finite_or_any_other_name(axioms.Finite):
....:         class ParentMethods:
....:             def foo(self):
....:                 print "I am a method on finite C's"

There might be something to that, but if the
or_any_other_name option gets exercised, how would you tell efficiently whether Cs can have axioms.Finite applied to it? See if there are any attributes that are subtypes of axioms.Finite?
Exactly, that would be the implementation. And that can easily be cached in the unlikely case that it would ever be a speed problem to extract the attributes that are subclasses of the base Axiom class. Also:
 Independent axioms make them easier to implement since you don't need to figure out the most basic category that can carry the axiom. I.e. you don't need a global understanding of the existing categories. You never have to "move it up the category hierarchy" either.
 It gives you additional freedom to only implement methods when certain combinations of axioms are applied (for brevity shown as a decorator; the same argument applies to the contained-class syntax):
@require_axiom(Foo)
def foo(): [...]

@require_axiom(Bar)
def bar(): [...]

@require_axiom(Foo, Bar)
def baz(): [...]
 You don't have to decide who owns a particular adjective. E.g. "Rigid" for differential operators is not the "Rigid" in the category theory sense, at least I don't think so. (Exercise: find your own rarely-used/obscure adjective with conflicting meanings.)
 It's actually a pretty minimal change to the existing code; the only thing that really changes is that we look for attributes that are tagged by being subclasses of the axiom class (i.e. in code) instead of having a blessed adjective as a name.
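As a rough illustration of that last point, here is one way (plain Python, with hypothetical class names such as Axiom; none of this is the actual Sage implementation) that axiom attributes could be discovered by their base class rather than by a blessed attribute name:

```python
class Axiom:
    """Hypothetical base class marking axiom mix-in classes."""

class Finite(Axiom):
    pass

class Commutative(Axiom):
    pass

def axioms_implemented_by(category_class):
    # Scan the class body for nested classes tagged by subclassing Axiom,
    # whatever attribute name they happen to be stored under.
    result = set()
    for name, attr in vars(category_class).items():
        if isinstance(attr, type) and issubclass(attr, Axiom):
            # Record every axiom base the attribute inherits from.
            for base in attr.__mro__:
                if base is not attr and base is not Axiom and issubclass(base, Axiom):
                    result.add(base.__name__)
    return result

class Cs:
    # The attribute name is free; only the base class matters.
    class AnyNameAtAll(Finite):
        class ParentMethods:
            def foo(self):
                return "I am a method on finite C's"

assert axioms_implemented_by(Cs) == {"Finite"}
```

As noted above, the result of such a scan could easily be cached per class if speed ever became an issue.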
There is precedent in Python for "magic" attribute names. For instance, an object gets a length by implementing a __len__ method on it.
True, and it is really unfair that if Guido van Rossum blesses a name with magical properties then that is automatically good Python and people explain it in every book about Python. Whereas if one of us declares a name special then it is bad practice and no Python book in the world gives a hoot about it.
comment:403 in reply to: ↑ 402 ; followups: ↓ 407 ↓ 416 Changed 4 years ago by
Replying to vbraun:
 Independent axioms make them easier to implement since you don't need to figure out the most basic category that can carry the axiom. I.e. you don't need a global understanding of the existing categories. You never have to "move it up the category hierarchy" either.
Really?
A few months ago (see some earlier comments), I suggested to distinguish between categories that provide certain "features" (such as: addition, subtraction, multiplication, division, a total ordering) and categories that provide axioms for these "features" (such as: commutativity, distributivity, compatibility of an order with arithmetic operations). The "features" largely correspond to magical Python methods for parents and their elements; hence, they could be implemented by (abstract) ParentMethods and ElementMethods.
The axioms then require that the parent/element methods satisfy certain specifications, that give rise to tests of the test suite, and also (by mathematical theorems) you may get default implementations of the abstract parent/element methods, and you might automatically get further axioms when applying one axiom (such as: division ring plus finite implies commutative).
This model relies on the observation that you must have an additive and a multiplicative magma among the super categories, if you want to have distributivity: Their join is the most basic category that can carry this axiom.
 It's actually a pretty minimal change to the existing code; the only thing that really changes is that we look for attributes that are tagged by being subclasses of the axiom class (i.e. in code) instead of having a blessed adjective as a name.
Actually, from a practical point of view, I'd prefer to have something that works (i.e., this code) and move on to a different (better?) model later.
For example, I think the approach to use a proper database could also relatively easily be implemented on top of the existing code. Namely: the construction digraph will still be encoded by (1) axioms that are stored as (lazy/nested/...) class attributes of a base category, and (2) by an attribute of a category-with-axiom providing the default construction (_base_category_class_and_axiom). The only difference is that these attributes would be provided by the database, and only the database (as a single point of truth).
But I think it would be a mistake to do this presumably/hopefully "small" change now, i.e. before merging Nicolas' code.
There is precedent in Python for "magic" attribute names. For instance, an object gets a length by implementing a __len__ method on it.

True, and it is really unfair that if Guido van Rossum blesses a name with magical properties then that is automatically good Python and people explain it in every book about Python. Whereas if one of us declares a name special then it is bad practice and no Python book in the world gives a hoot about it.
+1, provided that the Sage documentation explains our magical methods as clearly as the Python books document magical Python methods.
comment:404 Changed 4 years ago by
FTR: I'm taking care of the merge with the latest develop branch right now.
comment:405 in reply to: ↑ 396 ; followup: ↓ 406 Changed 4 years ago by
Replying to vbraun:
Or, even better, with standalone axiom objects, either using a @require_axiom(Finite) decorator or class syntax if you prefer:

sage: class Cs(Category):
....:     class Finite_or_any_other_name(axioms.Finite):
....:         class ParentMethods:
....:             def foo(self):
....:                 print "I am a method on finite C's"
There's a peculiarity in this representation of the concept: with this paradigm it would be possible to implement multiple axiom.Finite subclasses on Cs. I'm not sure that's a desirable property. Although it could express Wedderburn: the Finite and Commutative attributes on DivisionRings could both inherit from both axiom.Finite and axiom.Commutative. I'm not so sure doing this is desirable. I would expect it's better to mandate that every category can implement an axiom at most once. And using fixed attribute names does that naturally, at the expense of forcing the name choice.
I wonder if "name clashes" in axioms are ever a real problem. I would hope that if two categories A and B have conflicting ideas over what the axiom named d must mean, then any common supercategory doesn't implement either (because it can't carry them). I don't think the different meanings will ever clash then.
If there is a common supercategory that implements one of the meanings of the axiom then the terminology is genuinely confusing and then the system rightly points at a naming clash that needs resolving.
comment:406 in reply to: ↑ 405 Changed 4 years ago by
Replying to nbruin:
There's a peculiarity in this representation of the concept: with this paradigm it would be possible to implement multiple axiom.Finite subclasses on Cs.
Yes, giving you additional freedom to only implement methods when certain combinations of axioms are applied:
class Cs(Category):
    class WithFoo(Foo):
        class ParentMethods:
            def foo(self): [...]
    class WithBar(Bar):
        class ParentMethods:
            def bar(self): [...]
    class WithFooAndBar(Foo, Bar):
        class ParentMethods:
            def baz(self): [...]
I wonder if "name clashes" in axioms are ever a real problem.
Funny that you would say that, as Atiyah's category of "Real" vector bundles would be another example of a likely name clash with what you'd commonly use "Real" for.
I would hope that if two categories A and B have conflicting ideas over what the axiom named d must mean, then any common supercategory doesn't implement either
Yes, I'm aware that you could use the same adjective provided that the two uses are not joined by a common supercategory. It seems a bit fragile, as adding new supercategories may then have very nonlocal consequences. Moreover, for differential operators, say, I think it would be possible (if highly unusual) to ask them to form a "Rigid" category. So they can't be separated by not having a common supercategory, at least not in a mathematically satisfying way.
comment:407 in reply to: ↑ 403 Changed 4 years ago by
Replying to SimonKing:
This model relies on the observation that you must have an additive and a multiplicative magma among the super categories, if you want to have distributivity: Their join is the most basic category that can carry this axiom.
Yes, and a standalone axiom class would be the ideal place to implement an is_applicable_to(category) method that could be implemented exactly as you say. And as implementer you don't have to figure out what that join is to find the right place for your code.
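Such a standalone axiom class could look roughly like this (a toy sketch in plain Python; the class layout, the `features` attribute, and `is_applicable_to` are all invented for illustration, following Simon's features/axioms distinction):

```python
class Category:
    # Each toy category advertises the "features" (magical methods)
    # it provides to its parents and elements.
    features = frozenset()

class Axiom:
    # An axiom declares which features it needs; applicability is then
    # a simple containment test, with no need for the implementer to
    # know which join category is the most basic carrier.
    required_features = frozenset()

    @classmethod
    def is_applicable_to(cls, category):
        return cls.required_features <= category.features

class Magmas(Category):
    features = frozenset({"__mul__"})

class AdditiveMagmas(Category):
    features = frozenset({"__add__"})

class Rings(Category):
    # Join of the two magma structures (further features elided).
    features = frozenset({"__mul__", "__add__"})

class Distributive(Axiom):
    required_features = frozenset({"__mul__", "__add__"})

# Distributivity needs both operations, so it applies only to the join.
assert not Distributive.is_applicable_to(Magmas())
assert not Distributive.is_applicable_to(AdditiveMagmas())
assert Distributive.is_applicable_to(Rings())
```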
comment:408 Changed 4 years ago by
 Commit changed from 8045aa4a4b7ada735b3eb6055382f9b341a39f1e to eb7b486c6fecac296052f980788e15e2ad1b59e4
Branch pushed to git repo; I updated commit sha1. New commits:
eb7b486  Merge branch 'develop' into public/ticket/10963

comment:409 followups: ↓ 410 ↓ 411 Changed 4 years ago by
Another important issue that I think still needs discussion is the relations between different categories-with-axioms. It seems to me, though I can't find it spelled out in the docs, that we want to allow arbitrary relations of the type
category1 * axiom2 = category3 * axiom4 * axiom5
I'm writing this as multiplication to stress the commutativity of axioms and the formal analogy with radical toric (or binomial) ideals. As usual in the presence of relations, one can either work with equivalence classes or with normal forms (unique representatives). There is some talk on this ticket about manually specifying some distinguished representative/default construction, but I don't understand why that would be desirable.
To figure out all relations, we clearly need a Groebner basis for the relations. There are some well-known facts about Buchberger's algorithm that ought to be of importance to us:
 It should not be implemented recursively
 The "greedy" approach does not work: Spolynomials involving highdegree terms can and will give rise to lowerdegree generators. In other words, you cannot expect to arrive at the normal form by removing axioms at each step.
 Being explicit about term orders is key to the implementation
There is also a consistency issue about user-supplied axioms: they must not induce further relations for the categories and axioms that Sage ships with.
comment:410 in reply to: ↑ 409 Changed 4 years ago by
Replying to vbraun:
Another important issue that I think still needs discussion is the relations between different categories-with-axioms. It seems to me, though I can't find it spelled out in the docs, that we want to allow arbitrary relations of the type

category1 * axiom2 = category3 * axiom4 * axiom5

I'm writing this as multiplication to stress the commutativity of axioms and the formal analogy with radical toric (or binomial) ideals.
It is clear that this can't work in full mathematical generality (since in principle there is an infinity of potential axioms to consider). But in a CAS, it could actually work. Let's discuss it:
 At any point in time, we have a finite list of axioms (it may grow in future, though).
 At any point in time, we have a finite set of "basic categories". By this, I mean categories that provide the above-mentioned "features": Sets (provides __contains__), Magmas (provides __mul__), AdditiveMagmas (provides __add__), and so on.
 The union of the "basic categories" and the axioms generates a commutative monoid: multiplying categories means forming the join, multiplying with axioms means applying them.
 If I understand correctly, Nicolas has introduced an ordering on the set of categories anyway. In any case, it is clear that we *can* introduce an ordering on the commutative monoid.
 The relations are, as you remark, binomial. Thus, the word problem in our commutative monoid modulo relations can be solved by means of Gröbner bases.
So far, the approach looks good. However, here is a problem: how do we model the fact that the axiom Distributive does not apply to Magmas and does not apply to AdditiveMagmas, but does apply to Magmas*AdditiveMagmas?
Perhaps we actually do not need to model this fact in our commutative monoid. Any categorial construction corresponds to an element of the monoid. Two constructions result in the same category if and only if the normal forms (modulo relations) of the corresponding monoid elements coincide. Hence, each standard monomial is a potential label of a category. However, it should be fine to assume that only a subset of the standard monomials actually occurs as labels: Magmas*Distributive does not occur as the label of a category, but Magmas*AdditiveMagmas*Distributive does.
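Since every generator is idempotent, the word problem can also be sketched without any Gröbner machinery: model a construction as a set of generators (so join and axiom application are both set union) and orient each relation as a saturation rule. This is a toy model with invented rule data, not the proposed implementation:

```python
# A construction is a frozenset of generators (basic categories and
# axioms). Each relation lhs = rhs is oriented as "if lhs is present,
# add rhs"; an equality is encoded by two rules, one per direction.
RULES = [
    (frozenset({"Magmas", "AdditiveMagmas", "Distributive"}), frozenset({"Rings"})),
    (frozenset({"Rings"}), frozenset({"Magmas", "AdditiveMagmas", "Distributive"})),
    # Wedderburn: finite division rings are commutative.
    (frozenset({"Rings", "Division", "Finite"}), frozenset({"Commutative"})),
]

def saturate(generators):
    """Close a set of generators under RULES. Two constructions denote
    the same category iff their saturations coincide."""
    m = set(generators)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs <= m and not rhs <= m:
                m |= rhs
                changed = True
    return frozenset(m)

# A finite division ring and a finite field saturate to the same set.
assert saturate({"Rings", "Division", "Finite"}) == \
       saturate({"Magmas", "AdditiveMagmas", "Distributive",
                 "Division", "Commutative", "Finite"})
```

The saturation is the "largest" representative of the equivalence class; a canonical normal form could then be picked from it by any fixed ordering, which mirrors the role of the term order in the Gröbner basis picture.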
comment:411 in reply to: ↑ 409 Changed 4 years ago by
PS:
Replying to vbraun:
There is also a consistency issue about usersupplied axioms: They must not induce further relations for the categories and axioms that Sage ships with.
Why?
Imagine Wedderburn lived today. We wouldn't know that all finite division rings are commutative. Hence, Rings.Division().Finite() and Fields().Finite() would be distinct categories. Then Wedderburn proves his theorem. We add his theorem as a relation, and as a result we have a new relation between previously distinct categories in Sage. I don't think this would be a problem.
The only requirement: When adding a new axiom or a new basic category, the original commutative monoid must be extended, and the ordering of the enlarged monoid must be compatible with the ordering of the original monoid; The original monoid must be an ordered submonoid of the enlarged monoid. By this requirement, we can keep using the Gröbner basis we had for the relations in the original monoid.
comment:412 Changed 4 years ago by
PPS: Actually the situation is even easier.
If we understand multiplication in the monoid as I have stated above, then all generators are idempotent: C*C is the join of C with itself (C*C=C), and applying the same axiom A twice is the same as applying it once (again, A*A=A).
Hence, we have a tool in Sage that can easily be instrumented to provide descriptions for categorial constructions and also to provide unique identifiers of categories: Polybori!
comment:413 followup: ↓ 414 Changed 4 years ago by
I agree that, as long as you don't try to express relations that combine mismatched categories/axioms, you shouldn't have to worry about that.
The potential problem with additional relations is that you might have already constructed distinct categories Rings + division + finite and Fields + finite. At that point I think it's fine to require that Wedderburn has to restart Sage.
I don't quite understand how Polybori solves this: Z/2Z has only idempotents, but Z/2Z[x] does not. Of course you can add x^2=x as a relation to the ideal. Or just work with squarefree / radical ideals. Either way, that's a bit of a technicality that doesn't quite fit into this analogy.
comment:414 in reply to: ↑ 413 Changed 4 years ago by
Replying to vbraun:
At that point I think its fine to require that Wedderburn has to restart Sage.
Sure, that's what I meant.
I don't quite understand how Polybori solves this: Z/2Z has only idempotents but Z/2Z[x] does not.

x^2=x is intrinsic in polybori. That's why I think polybori is the right tool for implementing the model: it provides an efficient Gröbner basis implementation for rings generated by idempotents.
sage: P.<magma, additive_magma, ring, distributive, finite, division, commutative> = BooleanPolynomialRing()
sage: magma*magma
magma
sage: R = P*[ring*division*finite - ring*division*commutative*finite,
....:        ring - magma*additive_magma*distributive]
sage: R.groebner_basis()
[magma*additive_magma*distributive + ring, magma*ring + ring,
 additive_magma*ring + ring, ring*distributive + ring,
 ring*finite*division*commutative + ring*finite*division]
sage: (magma*additive_magma*distributive*finite*division).reduce(R.groebner_basis())
ring*finite*division
sage: (ring*commutative*finite*division).reduce(R.groebner_basis())
ring*finite*division
I think this is more or less what we want.
comment:415 Changed 4 years ago by
PS: Adding a generator field to our boolean polynomial ring, we get

sage: P.<magma, additive_magma, ring, field, distributive, finite, division, commutative> = BooleanPolynomialRing()
sage: R = P*[field - ring*division*commutative,
....:        ring*division*finite - ring*division*commutative*finite,
....:        ring - magma*additive_magma*distributive]
sage: (magma*additive_magma*distributive*finite*division).reduce(R.groebner_basis())
field*finite
sage: (ring*commutative*finite*division).reduce(R.groebner_basis())
field*finite
comment:416 in reply to: ↑ 403 ; followup: ↓ 418 Changed 4 years ago by
Replying to SimonKing:
Actually, from a practical point of view, I'd prefer to have something that works (i.e., this code) and move on to a different (better?) model later. ... lots of fun design ideas ... But I think it would be a mistake to do this presumably/hopefully "small" change now, i.e. before merging Nicolas' code.
Yes, please!!!
Guys, this is a very interesting discussion and we should pursue it; there is a whole research area to explore. But I believe we should really do that elsewhere. Here, we have a well defined task, namely to devise a plan to finalize this ticket as soon as possible. There is a lot of code that has been waiting for the features way too long, and it's blocking the work of several developers. We also have a long backlog of further important developments around categories (morphisms, ...).
The core question is what absolutely needs to be done *now* ?
 Writing that last section in the documentation of axioms describing the current core algorithm. I still have 2-3 hours to spend on it. Hopefully that will be done tomorrow. Worst case by Monday.
 Reviewing the documentation of axioms. Jean-Baptiste already did some proofreading, but another pass is needed, especially for the latest sections. Who can take care of this?
 Deciding whether we want to keep the latest experimental changes by Simon in the current branch.
 Merging in my branch, with or without Simon's changes
 Rerunning all tests
 What else?
Thanks!
Nicolas
comment:417 Changed 4 years ago by
Just two last comments, and then I'll stop participating in the long-term design discussion here.
 I believe the right conceptual setting for what we are doing is that of lattices (certainly not an original claim; lattice theory has long been used for concept analysis, for the analysis and design of class hierarchies, and so on). That's where we should be looking for data structures, algorithms, and possibly implementations.
 There might be a case for having Sage depend on some nontrivial pieces of external software (Polybori, Singular, ...) for its very programming framework, and in particular for its startup. gcc itself uses polyhedral software (cloog) for loop optimization purposes, so maybe that's not completely crazy. But that would certainly be a hard sell, and with good reasons.
I so far went for a self-contained approach, even if this meant, for example, a bit of duplicated code between the category code and that for Sage lattices.
comment:418 in reply to: ↑ 416 Changed 4 years ago by
Guys, this is a very interesting discussion and we should pursue it; there is a whole research area to explore. But I believe we should really do that elsewhere. Here, we have a well defined task, namely to devise a plan to finalize this ticket as soon as possible. There is a lot of code that has been waiting for the features way too long, and it's blocking the work of several developers. We also have a long backlog of further important developments around categories (morphisms, ...).
Nicolas, the review process of a ticket is *PRECISELY* where all design choices should be discussed. The reviewer's role is not to accept all the design choices you made without discussing them. It is his role to check and understand each piece of your code and think about whether it makes sense or not. The fact that you have a lot of things depending on this ticket is just a result of your own independent development in sage-combinat. THIS is why it takes 3 years to merge a ticket like this one. And now you come and use this argument to say "Come on guys, a lot of things already depend on this ticket, let's merge it quickly and change things later".
This is not fair. You should not think of the review as a bother, preventing you from getting tickets in Sage. The review is THE thing that makes your code not just "your own thing" but a piece of code that several people agree on. Something we think good for the software and want to have inside.
You should use the review as a test, to find out whether your code makes sense. Whether people understand it. If the reviewers do not understand what the code does from the doc, it means that the doc should be rewritten, and rewritten again until it is clear.
How many exchanges do you think it takes to implement a 10-line function like Frederic's polynomials (#15662 for instance)? The function's name is discussed, the efficiency of the algorithm, the correctness. Hell, ten lines of code take several hours, and at the end the patch is good, tested and clear. We did our best, and there is no stuff "left to be done". And it is not very long ago that you began to write the doc explaining how it was to be used!
This is what you read at the beginning of Knuth's books:
"Here is your book, the one your thousands of letters have asked us to publish. It has taken us years to do, checking and rechecking countless recipes to bring you only the best, only the interesting, only the perfect. Now we can say, without a shadow of a doubt, that every single one of them, if you follow the directions to the letter, will work for you exactly as well as it did for us, even if you have never cooked before."
We should build a software like that. We shouldn't write anything in there if we don't think that it is reliable, or that it will have to be rewritten again. Let's write something *GOOD*.
The core question is what absolutely needs to be done *now* ?
The only question is : "what is the best way to do it ?". And that's why every single line of a patch needs to be thought upon.
Please don't try to cut the discussions short.
Nathann
comment:419 followups: ↓ 420 ↓ 422 Changed 4 years ago by
Nicolas, I like you and your contributions, but I don't think you understood what I'm saying. So let me be completely blunt: this ticket is nowhere near ready to be merged, and I'm totally opposed to giving it a positive review at this point. We really should have had this discussion before the first line was written. You failed to seek any external input when drafting it. There is no post to sage-devel about this. We even have a formal RFC process (SEP) for foundational changes, which you did not pursue either. So if this is too late in the whole process for a basic discussion then that is your own fault.
IMHO we have to at least get rid of the open-ended list of blessed adjectives that have special hidden/surprising meanings. This includes all cases where substrings of class names are matched. We can change the implementation details later, but whatever programming interface we fix now will be exceedingly difficult to change once this is merged. It's hard enough to communicate a design paradigm to the wider developer community; it would be entirely confusing to change it in a year.
Anything else, including the implementation (but not the programming interface for specifying relations), could be left for later, I agree. But without having a reasonable idea of what kind of relations we want to support, we can't devise a suitable programming interface. In particular, I think your current interface of specifying a list via extra_super_categories() is fundamentally flawed, for the reasons that I stated.
comment:420 in reply to: ↑ 419 Changed 4 years ago by
Replying to vbraun:
Nicolas, I like you and your contribution but I don't think you understood what I'm saying.
I did. The discussion was honestly going out of topic, and I asked concretely "what else?" had to be done now. Thanks for the second part of your message elaborating on that question.
So let me be completely blunt: This ticket is nowhere near ready to be merged, and I'm totally opposed to giving it positive review at this point. We really should have had this discussion before the first line was written. You failed to seek any external input when drafting it. There is no post to sagedevel about this.
Let's see:
 I have been mentioning this ticket over and over on sage-devel, not counting sage-combinat-devel. Here is a sample among those:
https://groups.google.com/forum/?fromgroups#!searchin/sagedevel/10963/sagedevel/1lZAr60N8w/vHr6nOWrsfcJ
https://groups.google.com/forum/?fromgroups#!searchin/sagedevel/10963/sagedevel/chC1qH455Qs/NDiN3IOPLBgJ
 I have made presentations about it in at least four Sage days.
 Throughout the whole design process, I have had countless long email/trac/oral discussions with some of those that care most about categories, in particular Florent Hivert and Simon King. Simon even came to Orsay twice in good part to discuss about this.
 The code has been publicly available all along, with an easy way to install it, try it, and see nontrivial use cases (sage combinat install).
 The code has been used by quite a few people.
With that, I believed that everybody interested in the category infrastructure was aware that non trivial changes were coming. I would have been happy to expand on the details if anyone had just asked.
Granted: there was no framed official request for comments. Point taken for next time.
We even have a formal RFC process (SEP) for foundational changes which you did not pursue either.
Where is it formalized? How many times has it been used?
I certainly can see the point of formalizing certain processes. Yet, I believe that this also has its limits. The point is that, if I had presented a draft of the current design two or three years ago, the reactions would have been: "that's just all crazy overdesign", "what's the point?", or "it can't be made to work reasonably". And that would have been perfectly fair: I was asking myself the very same question, and there indeed were some nontrivial hurdles to overcome (e.g. the C3 business).
For such an infrastructure to be convincing, I think it has to be seriously battlefield-tested, for otherwise it only leads to never-ending, unsupported-by-facts discussions (I have seen sooo many of those). The main point is how it feels in practice to write code using the infrastructure, and in particular how it scales. Not counting: can Sage start if we actually refactor the internals? Are there performance issues? Can we get all doctests passing? More than that: you need (at least *I* need) several iterations of battlefield testing (three complete rewrites in the case at hand) before converging to a proper design; at least one that convinces me.
That being said, let's move to the interesting part.
IMHO we have to at least get rid of the openended list of blessed adjectives that have special hidden/surprising meaning. This includes all cases where substrings of class names are matched. We can change the implementation details later, but whatever programming interface we fix now will be exceedingly difficult to change once this is merged. Its hard enough to communicate a design paradigm to the wider developer community, it would be entirely confusing to change it in a year.
Anything else, including the implementation (but not the programming interface for specifying) relations could be left for later, I agree. But without having a reasonable idea of what kind of relations we want to support we can't devise a suitable programming interface. In particular, I think your current interface of specifying a list of
extra_super_categories()
is fundamentally flawed for the reasons that I stated.
Very well. I appreciate your suggestions, but so far none of them has convinced me. Well no, that's not right: I found the idiom F(axiom.Finite, axiom.Commutative) very interesting, though it does not buy the rest. In each case, either I see fine points where they are likely to be unimplementable within the desired features, or I believe that they will make category code less nice to write. I'd be happy to be proven wrong, but it does not make any sense for me to implement something I don't believe in a priori. Alternatively, we can spend a couple of days discussing the details step by step.
Now, as you have proven repeatedly, in particular with the git transition, you are a man of action. If you are convinced some change is right and easy, prove me wrong by implementing a convincing prototype, say in a review branch. No need for it to be perfect. I am happy to polish the details.
As for the fundamentally flawed extra_super_categories() interface: it's not about relations in an algebra; it's about a completion computation in a lattice. And in this context I believe it's correct. Shall I write a formal proof of the algorithm? At least I would have the feeling of investing my time toward the day I write a paper on the topic.
Best,
Nicolas
comment:421 followup: ↓ 423 Changed 4 years ago by
Burying a post in a long thread titled "RFC: a good name for the category of algebras that are not necessarily associative nor unital", or a poll on whether some performance impact is acceptable, does not constitute an announcement in my book. For the SEP process at work, see e.g. the git transition http://wiki.sagemath.org/WorkflowSEP (a wiki search will net you more info).
Replying to nthiery:
As for the fundamentally flawed extra_super_categories() interface. It's not about relations in an algebra. It's about a completion computation in a lattice.
I know. Binomial ideals are closely related to lattice ideals. It appears to be commutative algebra, but you actually never form nontrivial polynomials.
Still, my point that you can't expect to arrive at the normal form by removing axioms at each step remains. So just listing supercategories is not a good way of supplying relations, you need a way to get a handle on all relations (without having to instantiate all categories on startup).
Alternatively, we can spend a couple days discussing step by step the details.
Please do, I'm interested in what you think is "likely unimplementable" in my proposal, and in how you are going to go about name conflicts in yours. I hate open-ended discussions at least as much as you. And I'm more than willing to push this forward, but I have to be convinced that I'm not pushing the car into a ditch...
comment:422 in reply to: ↑ 419 Changed 4 years ago by
Replying to vbraun:
IMHO we have to at least get rid of the open-ended list of blessed adjectives that have special hidden/surprising meaning. This includes all cases where substrings of class names are matched. We can change the implementation details later, but whatever programming interface we fix now will be exceedingly difficult to change once this is merged.
As one of the reviewers, I can tell that Nicolas did seek other people's opinion, although this has partially happened in offline discussions.
You are probably aware, but let's make this difference explicit: we have to distinguish the user interface from the programming interface. I believe the user interface is nice and natural: take a category C and type C.Commutative() to create a new category obtained from C by applying an axiom. I think it makes sense to do it this way. It would also make sense to do it like C.add_axiom(Axiom.Commutative). Anyway, the current user interface is sufficiently nice IMHO.
The programming interface is less nice, as we have discussed. When adding a new category-with-axiom, the programmer needs to provide a default construction, which can either be implicit by the choice of a name, or explicit by providing a "magical" class attribute. This is on the new category; on the base category, another class attribute needs to be created, most likely by a lazy import, or a nested class (which is then the class for the new category).
In addition to that, there may be non-default constructions yielding the same category. One (minor) problem is that these additional constructions must not be defined by class attributes (otherwise the assertions happening in the code would complain) but by SubcategoryMethods.
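The nested-class default construction described above can be caricatured in a few lines of plain Python. This is a toy sketch, not Sage's actual machinery: the names Category, Magmas and _with_axiom mirror Sage's conventions, but the lookup and caching here are deliberately simplified stand-ins.

```python
class Category(object):
    _instances = {}  # identity cache, so the same construction is reused

    @classmethod
    def an_instance(cls):
        if cls not in Category._instances:
            Category._instances[cls] = cls()
        return Category._instances[cls]

    def _with_axiom(self, axiom):
        # Default construction: the nested class named after the axiom.
        nested = getattr(type(self), axiom, None)
        if nested is None:
            raise ValueError("axiom %r not defined for %s"
                             % (axiom, type(self).__name__))
        return nested.an_instance()

class Magmas(Category):
    class Commutative(Category):  # the nested class *is* the new category
        pass

C = Magmas.an_instance()
# Applying the axiom twice yields the very same category object.
assert C._with_axiom('Commutative') is C._with_axiom('Commutative')
assert type(C._with_axiom('Commutative')).__name__ == 'Commutative'
```

In this toy, the user-facing C.Commutative() would just be a thin wrapper around _with_axiom('Commutative'), which is essentially the role of SubcategoryMethods in the real design.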
A major problem is: how to justify the choice of default versus non-default constructions? Shouldn't there somehow be global consistency? And should this consistency not be granted in an automatic way (because otherwise it isn't granted)?
Anything else, including the implementation of relations (but not the programming interface for specifying them), could be left for later, I agree.
I am not sure if I agree with this statement: is it really a problem to have a hand-made, non-scalable (because of global consistency) programming interface now, and then replace it by a more automated, scalable programming interface?
Actually I am more concerned about the implementation of the underlying lattice. As in the case of coercion, the lattice structure is given locally, on the nodes. But some kind of global consistency is required (if you concatenate coercions, then the result must be a coercion as well, but there can be different coercion chains from parent A to parent B, and the concatenation results must all coincide). Sometimes I find it rather frustrating that the coercion lattice is encoded in this way, since fixing a global problem locally tends to be difficult.
But Gröbner bases of toric ideals are, I think, a tool to treat global questions locally. What do you think of the following attempt at a compromise?
- The user interface C.Commutative() is nice enough; let's keep it as suggested by Nicolas, for now. In a later stage, if axioms start to get an independent life, the syntax C.add_axiom(axioms.Commutative) might be added.
- For practical considerations, I would accept a temporary solution in the programming interface: as we all know, a lot of patches depend on the "more functorial constructions", and I guess it would be easier to change the way of defining a default construction later in one go rather than in a hundred tiny steps (namely, by breaking all the existing 100 patches that depend on the functorial constructions). But I am not the release manager; perhaps I "misunderestimate" the problems.
- IF we preserve the current programming interface, then we should add a tool that computes what the default construction of a new category should be. As I have demonstrated above, this could be provided by Gröbner basis computations in boolean polynomial rings: input a category construction, output the "normal form" of this construction in the lattice, which should then be taken as the default construction. So, the programmer can seek advice before choosing what to put into _base_category_class_and_axiom, resp. before choosing a name.
- In a later step, this helper tool to achieve consistency could be the foundation of a simpler programming interface. The programmer would state somewhere in the code (or by calling a method of some object and then storing an updated database in the Sage sources) that finite division rings are commutative, and this (perhaps via a database using labels that are standard monomials in a boolean polynomial ring) would ensure that in all future Sage sessions Rings.Finite.Division will coincide with Fields.Finite.
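The normal-form idea can be illustrated without any Gröbner basis machinery. Below is a minimal pure-Python sketch (the rule set, names and representation are made up for illustration, not part of the proposal's actual code): a construction is a set of generators, each theorem is a completion rule, and rewriting to a fixed point yields a canonical form shared by all equivalent constructions.

```python
# A construction is a frozenset of category/axiom names; each theorem
# says "whenever LHS is present, RHS may be added for free".
RULES = [
    # Wedderburn: a finite division ring is commutative (i.e. a field).
    (frozenset({'Rings', 'Finite', 'Division'}), frozenset({'Commutative'})),
]

def normal_form(construction):
    """Complete a construction under RULES until a fixed point is reached."""
    current = frozenset(construction)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs <= current and not rhs <= current:
                current |= rhs
                changed = True
    return current

# Two different spellings of "finite fields" get the same normal form.
assert (normal_form({'Rings', 'Finite', 'Division'})
        == normal_form({'Rings', 'Finite', 'Division', 'Commutative'}))
```

The boolean-ring proposal is the same idea with real algebra behind it: the standard monomial modulo the relation ideal plays the role of the fixed point here.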
comment:423 in reply to: ↑ 421 ; followup: ↓ 426 Changed 4 years ago by
Replying to vbraun:
Still, my point that you can't expect to arrive at the normal form by removing axioms at each step remains.
I agree that the semantics of _without_axioms and _without_axiom is currently not strongly specified (though I actually believe we could find a well-defined, but not necessarily useful, semantics).
In any case, this is a non-issue because those methods are used nowhere in the algorithm. One only recurses by one step on the base_category, and the semantics of that is well defined. Granted, at this point you have to believe me or convince yourself from the code. As I said, if you believe this is really worth it (how many pieces of Sage's infrastructure have been formally proven?), I can go the extra step of formally proving the whole thing, but that will take a bit of time.
_without_axioms is only used for heuristically finding a good _repr_, and it's been doing a good job so far. It's also used for implementing _without_axiom: as I mentioned earlier, the semantics of the latter is not well defined, but it is good enough for the single spot where I have had a need for it (removing the Facade axiom when playing with posets).
So just listing supercategories is not a good way of supplying relations, you need a way to get a handle on all relations (without having to instantiate all categories on startup).
I believe the relations are trivial enough that you can lazily handle them along the closure calculation. But I don't have enough room in the margin to prove it here :)
Please do, I'm interested in what you think is "likely unimplementable" in my proposal or how you are going to go about name conflicts in yours.
The point is "within the desired feature set". For the idiom:
{{{
class C:
    class Anyname(MyAxiom.Category):
        ...
}}}
(1) You need to scan through the entries of C to decide whether C implements MyAxiom, right? In particular, you need to evaluate all those entries to test the inheritance, which means triggering lazy imports.
I think it's an important feature of the current design that you can use a category and some of its axioms while completely ignoring the others. For example, my upcoming tickets will add rather large categories like Semigroups().JTrivial(); those categories have no reason to be loaded upon starting Sage, whereas Semigroups().Commutative() will be constructed.
In general, I believe one should refrain from evaluating all entries of an object, for some of them might be lazy with good reasons.
(2) Either you define MyAxiom in a location of its own; but then you lose some code locality (the code for the axiom is not tied to the category defining it, which I find important). Or, as I mentioned before, you take the risk of non-trivial refactoring in case you later generalize the axiom to a super category.
Altogether the current design just follows, by analogy, standard OO practice: when a class C defines a method or attribute named a, this fixes the semantics of a for all subclasses. I don't see that the names for our axioms would be so particularly prone to clashes that we need to invent a new mechanism and deviate from standard practice. As usual, if a name is potentially ambiguous within its field of application, then it should be made more explicit. That's e.g. what we do with "Associative" w.r.t. "AdditiveAssociative".
For the tuple _base_category_and_axiom: really, if you care, please have a try yourself; it's a small piece of work anyway. I gave it a shot at some point and then reverted my changes because it did not feel any better afterwards. I'd be happy to be proven wrong.
Cheers,
Nicolas
comment:424 Changed 4 years ago by
For the record: I somehow feel tempted to write a function that is able to test whether the choice of default construction is consistent. So, in the best case, we'll soon have a tool to prove Nicolas' model...
comment:425 followup: ↓ 427 Changed 4 years ago by
I have attached consistency.py, which provides routines to check whether Nicolas' local choice of default constructions for categories with axiom is globally consistent.
Idea:
First, we load all available subclasses of Category. Those that are not CategoryWithAxiom are "basic" (or atomic?) categories, and correspond to some generators of a boolean polynomial ring R. The remaining generators of this ring correspond to the available axioms (there is an exhaustive list).
Then, for each category class, it is tested which other category class can be obtained by applying an axiom. Difficulty: only the default construction can be given by a class attribute; all other constructions have to be given on the level of instances. Hence, if C is a category class and A is an axiom, then C.A might not be available. In this case, I try C.an_instance().A().__class__.__base__ to get the class that is used to implement the result of applying axiom A to instances of category C.

Problem: there are a couple of categories that do not provide instances! Hence, I couldn't test them. So, for now, we restrict to those cases where we find a way to apply axiom A to category class C, either via an attribute of the class C, or by using an instance of C.
As we all know, different constructions may yield the same result. This happens 20-ish times. Now, each alternative construction yields a relation in the lattice that is modelled by the above-mentioned boolean polynomial ring R. Hence, the next step is to create the relation ideal Rel of R.
And now we are ready to test consistency of the choice of default constructions: for each category C with axiom, we have the famous _base_category_class_and_axiom attribute. Say C is obtained from category class B by applying axiom A. B corresponds to a standard monomial b_monomial that describes a construction of B. Axiom A corresponds to a generator of the ring R. The condition for consistency is simple: b_monomial*R(A) has to be a standard monomial with respect to Rel.
As it turns out, this is largely the case.
Problematic cases:
- There are 121 category classes that do not support an_instance(). So, we can't really vouch for complete consistency. Note, in particular, that Modules.an_instance() returns the category of rational vector spaces, hence not an instance of the class Modules.
- In two cases, applying an axiom to a category class does not return a category class:
{{{
sage: type(sage.categories.category_with_axiom.SmallTestObjects.Connected)
<type 'int'>
sage: type(sage.categories.category_with_axiom.SmallTestObjects.Commutative)
<type 'classobj'>
}}}
In only two cases, my routine seems to find inconsistent choices.
- According to the routine, Modules.WithBasis should be VectorSpaces.WithBasis. That's clearly an artefact of the above-mentioned problem that Modules.an_instance() does not return a category of modules.
- In one nonsensical example, my routine finds this inconsistency:
{{{
sage: from sage.categories.category_with_axiom import Blahs
sage: Blahs.Unital.Blue._base_category_class_and_axiom
(sage.categories.category_with_axiom.Blahs.Unital, 'Blue')
sage: Blahs().Blue()
Category of unital blahs
sage: Blahs().Blue() is Blahs().Blue().Unital() is Blahs().Unital()
True
}}}
Axioms are supposed to commute. However, we get
{{{
sage: Blahs().Blue().Unital() is Blahs().Unital().Blue()
False
}}}
It is thus no surprise that my routine complains here. I won't check now whether this example is supposed to demonstrate an illegal construction.
Conclusion
- I think the basic principle of the consistency check is sound. However, it is incomplete, since a lot of categories do not support an_instance().
- With the exception of the Blahs().Unital() example, Nicolas did a fine job of building the local data in a globally consistent way.
- If we went the opposite way, we could turn the consistency check into a method to create the default constructions of categories in an automated way (so that Nicolas does not need to choose them manually). Therefore, in the long run, we could have a database of category classes, using identifiers that are standard monomials with respect to some ideal Rel in a boolean polynomial ring R. Adding a new base category or a new axiom means adding more generators to R. Adding a theorem "these two constructions yield the same category" adds a new generator to the ideal Rel. Global consistency is guaranteed by letting the default constructions correspond to standard monomials with respect to Rel. Note that this would also allow dealing with non-default constructions on the level of category classes and polynomial ideals, hence without the problem that an_instance() often does not work.
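The commutation failure the routine detects can be modelled in a few lines of plain Python. This is a made-up caricature (the Blahs/Blue/Unital names follow the test classes discussed above, but the apply functions are illustrations, not Sage code): applying 'Blue' to the bare base category collapses to 'Unital', yet the same collapse is not triggered on larger categories, so the two orders of application disagree.

```python
def buggy_apply(construction, axiom):
    # Caricature of the Blahs example: the Blue -> Unital shortcut only
    # fires when Blue is applied directly to the bare base category.
    if axiom == 'Blue' and set(construction) == {'Blahs'}:
        return frozenset({'Blahs', 'Unital'})
    return frozenset(construction) | {axiom}

def plain_apply(construction, axiom):
    # A well-behaved application, for comparison: plain monotone union.
    return frozenset(construction) | {axiom}

def axioms_commute(apply_fn, construction, a, b):
    # Applying axioms in either order must yield the same category.
    return (apply_fn(apply_fn(construction, a), b)
            == apply_fn(apply_fn(construction, b), a))

base = frozenset({'Blahs'})
assert axioms_commute(plain_apply, base, 'Blue', 'Unital')
assert not axioms_commute(buggy_apply, base, 'Blue', 'Unital')
```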
comment:426 in reply to: ↑ 423 ; followups: ↓ 428 ↓ 440 Changed 4 years ago by
Replying to nthiery:
The point is "within the desired feature set". For the idiom:
{{{
class C:
    class Anyname(MyAxiom.Category):
        ...
}}}
(1) You need to scan through the entries of C to decide whether C implements MyAxiom, right?
Yes, of course.
I also think that the code on this ticket suffers from a lot of abuse of lazy imports. They are a useful tool, but they don't absolve you from thinking about the import order of modules. If you don't, then you'll run into precisely the kind of hard-to-debug errors that we found. In particular, lazy imports that have to be resolved on startup are IMHO a sure sign of code smell.
This is yet another reason why I want to divorce axioms from categories. Axioms clearly ought to be lower than categories in the import priority, and moving them into separate files forces you to treat them as such.
I think it's an important feature of the current design that you can use a category and some of its axioms while completely ignoring the others.
Agreed.
Either you define MyAxiom in a location of its own. But then you lose some code locality (the code for the axiom is not tied to the category defining it, which I find important).
But axioms are not tied to particular categories. They can be added to categories, but there is no need to have a unique "most basic category" for the axiom. And you yourself complained that it's annoying to occasionally move the axiom code up in the category hierarchy.
Or, as I mentioned before, you take the risk of having non trivial refactoring in case you generalize the axiom to a super category later.
No, you just move the import from your old category to your new category.
Altogether the current design just follows, by analogy, standard OO practice: when a class C defines a method or attribute named a, this fixes the semantics of a for all subclasses.
You are deliberately omitting the other half of the story: if you have two unrelated classes C and D, then C.a and D.a are unrelated in Python. And you are breaking that.
comment:427 in reply to: ↑ 425 ; followup: ↓ 429 Changed 4 years ago by
Replying to SimonKing:
I have attached consistency.py, which provides routines to check whether Nicolas' local choice of default constructions for categories with axiom is globally consistent.
Fun :)
In this case, I try C.an_instance().A().__class__.__base__ to get the class that is used to implement the result of applying axiom A to instances of category C.
I believe that's the right approach for this need.
B corresponds to a standard monomial b_monomial that describes a construction of B. Axiom A corresponds to a generator of the ring R. The condition for consistency is simple: b_monomial*R(A) has to be a standard monomial with respect to Rel.
You are taking some term order here, probably based on some order of the axioms, right?
- In one nonsensical example, my routine finds this inconsistency ...
{{{
sage: Blahs().Blue().Unital() is Blahs().Unital().Blue()
False
}}}
It is thus no surprise that my routine complains here. I won't check now whether this example is supposed to demonstrate an illegal construction.
Yup. This class demonstrates a possibly desirable but non-implemented feature. See Blahs.Blue_extra_super_categories.
Conclusion
- I think the basic principle of the consistency check is sound. However, it is incomplete, since a lot of categories do not support an_instance().
Question: what about turning the problem upside down, and writing a C._test_??? method that checks that everything is consistent within the context of the super categories of C (and possibly all the axiom categories you can derive from those, though I guess that's not necessary)? Since every category class is supposed to be TestSuite'd, we should cover all categories this way. Do you think we would get a global enough view?
An inconvenience is that this would be redundant: we would be checking the consistency of the semigroup categories over and over while checking lower categories. This is, or is not, a problem depending on the cost of the check.
- With the exception of the Blahs().Unital() example, Nicolas did a fine job of building the local data in a globally consistent way.
And I believe I deserve no credit for that. I haven't proven it formally yet, but I am pretty much convinced that this holds as long as the two following specifications (which I have now described in detail in the documentation) are satisfied:
- Tree structure on the classes
- If Ds() is a subcategory of Cs() and Ds().A() = Cs().A(), then that axiom should be implemented in Ds.A, and the mathematical theorem stating the above equality should be implemented in Cs.
then the algorithm works.
I also believe we don't even need to impose a term order, but I have to think about that.
- If we went the opposite way, we could turn the consistency check into a method to create the default constructions of categories in an automated way (so that Nicolas does not need to choose them manually). Therefore, in the long run, we could have a database of category classes, using identifiers that are standard monomials with respect to some ideal Rel in a boolean polynomial ring R. Adding a new base category or a new axiom means adding more generators to R. Adding a theorem "these two constructions yield the same category" adds a new generator to the ideal Rel. Global consistency is guaranteed by letting the default constructions correspond to standard monomials with respect to Rel. Note that this would also allow dealing with non-default constructions on the level of category classes and polynomial ideals, hence without the problem that an_instance() often does not work.
I always like consistency checks. And that one has the bonus of being fun for commutative algebraists like us :)
On the other hand, I believe that there is at this point no need to have a tool for deciding where to put the axiom category. So far I have always been putting the axiom category where mathematics told me to put it.
The point is that we are, for now, only implementing well-known facts. And the goal of the infrastructure is only to make it easy to provide minimal information about those facts and have it derive the immediate consequences.
A day may come when we will want to use the category infrastructure to prove *new* facts. But that would be going to a completely different level, and we should first discuss it seriously with the scientific communities that work on computer algebra and proofs.
comment:428 in reply to: ↑ 426 Changed 4 years ago by
Replying to vbraun:
I also think that the code on this ticket suffers from a lot of abuse of lazy imports. They are a useful tool, but they don't absolve you from thinking about import order of modules.
Perhaps I misunderstand, but to me it seems the lazy imports are used in order to avoid importing all category classes at startup time: PermutationGroups.Finite should only be imported when it is actually needed, but not if you only intend to use PermutationGroups. Yet it should show up as an attribute. Do you think that, on top of that purpose, lazy import is used to break import cycles?
In particular, lazy imports that you have to resolve on startup are IMHO a sure sign of code smell.
Sounds right. If something is imported at startup time anyway, then why should one not use a proper import?
Either you define MyAxiom in a location of its own. But then you lose some code locality (the code for the axiom is not tied to the category defining it, which I find important).
But axioms are not tied to particular categories. They can be added to categories, but there is no need to have a unique "most basic category" for the axiom.
As I have mentioned earlier, it is simply a fact that you always have a "most basic category". Namely, it is the category that provides all the notions you need in order to formulate the axiom. E.g., AdditiveMagmas when you want to formulate x+y == y+x.
You are deliberately omitting the other half of the story: if you have two unrelated classes C and D, then C.a and D.a are unrelated in Python. And you are breaking that.
Well, if a is an axiom, C and D are categories, and a can be formulated in both C and D, then they are both subcategories of the largest category B that allows to formulate a. In this sense, C and D are related.
And that's another argument for considering the "most basic category allowing to formulate an axiom": it is natural (when you think of Python classes) to provide a as an attribute of B, so that C and D both inherit B.a. That said, it could very well be that, because of some theorems, axiom a implies properties of C.a() that are not part of what is provided by B.a(). And that's when (in Python) you would override B.a in C.
But actually that's not the end of the story yet. We can provide a with a __get__ method that recognises whether a is bound to B or to C, and so we could make it so that C.a gains additional features that are not in B.a.
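For what it's worth, this __get__ idea is expressible in a few lines of (modern) Python; B, C and a are of course placeholder names, and __set_name__ requires Python 3.6+.

```python
class AxiomAttribute(object):
    """A descriptor whose __get__ can tell which class it is fetched from,
    so that C.a may expose features beyond B.a without a second definition."""

    def __set_name__(self, owner, name):   # Python >= 3.6
        self.name = name
        self.defining_class = owner        # B, the class defining 'a'

    def __get__(self, instance, owner):
        if owner is self.defining_class:
            return "basic features of %s" % self.name
        # owner is a subclass: grant it additional features here
        return "features of %s extended for %s" % (self.name, owner.__name__)

class B(object):
    a = AxiomAttribute()

class C(B):
    pass

assert B.a == "basic features of a"
assert C.a == "features of a extended for C"
```

Sage's actual classes would of course put real behaviour behind the two branches; the point of the sketch is only that the binding class is visible to the descriptor.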
Changed 4 years ago by
A routine to test whether the default choice of axiomatic category constructions is consistent
comment:429 in reply to: ↑ 427 ; followup: ↓ 431 Changed 4 years ago by
Replying to nthiery:
Replying to SimonKing:
I have attached consistency.py,
... and updated it now, because of some previous typos.
which provides routines to check whether Nicolas' local choice of default constructions for categories with axiom is globally consistent.
Fun :)
Agreed, but I didn't tell how to use it. So, here it goes: Start sage and attach the file. Then:
{{{
sage: Bad, I, S, Rel = TestCategoryModel()
Import all category classes
Create boolean polynomial ring with 168 generators
Collect all available axiomatic constructions
=> Found 277 constructions for 256 category classes
8 cases of alternative axiomatic constructions
Testing that default constructions are compatible
Bad: <class 'sage.categories.category_with_axiom.Unital.Blue'> = <class 'sage.categories.category_with_axiom.Blahs.Unital'>.Blue
expected default Blahs*Unital got Blahs*Blue*Unital
Basis is given by Blahs*Unital but Blahs*Blue*Unital is no standard monomial
}}}
What I find very strange: apparently the number of alternative axiomatic constructions is not well-defined! Namely, yesterday I got different figures.
To be investigated...
Anyway, we find:
{{{
sage: Bad
[(sage.categories.category_with_axiom.Unital.Blue, Blahs*Blue*Unital, Blahs*Unital)]
}}}
which means that Blahs.Unital.Blue should coincide with Blahs.Unital, which makes perfect sense since
{{{
sage: Blahs().Blue() is Blahs().Unital()
True
}}}
and thus Blahs().Unital().Blue() should be the same as Blahs().Blue().Blue(), thus the same as Blahs().Blue(), which is Blahs().Unital(). However, we find
{{{
sage: Blahs().Unital().Blue() is Blahs().Blue()
False
}}}
This is a bug, which my test function correctly detects!
Moreover, we have
{{{
sage: len(I)
121
}}}
which means that 121 category classes do not give a reasonable answer when asked to return an instance.
Finally,
{{{
sage: for rel in Rel.groebner_basis():
....:     print rel
....:
Blahs*DistributiveMagmasAndAdditiveMagmas*Flying*Facade*Finite*Commutative*Associative*Division*NoZeroDivisors*AdditiveCommutative*AdditiveAssociative*AdditiveInverse*AdditiveUnital + Blahs*DistributiveMagmasAndAdditiveMagmas*Flying*Finite*Commutative*Associative*Inverse*Division*NoZeroDivisors*AdditiveCommutative*AdditiveAssociative*AdditiveInverse*AdditiveUnital
Blahs*DistributiveMagmasAndAdditiveMagmas*Flying*Finite*Infinite*Commutative*Associative*Division*NoZeroDivisors*AdditiveCommutative*AdditiveAssociative*AdditiveInverse*AdditiveUnital + Blahs*DistributiveMagmasAndAdditiveMagmas*Flying*Finite*Commutative*Associative*Inverse*Division*NoZeroDivisors*AdditiveCommutative*AdditiveAssociative*AdditiveInverse*AdditiveUnital
Blahs*Flying*Unital + Blahs*Flying
Blahs*Blue + Blahs*Unital
TestObjects*Blue + TestObjects*Unital
TestObjectsOverBaseRing*Blue + TestObjectsOverBaseRing*Unital
DistributiveMagmasAndAdditiveMagmas*Facade*Finite*Commutative*Associative*Unital*Division*NoZeroDivisors*AdditiveCommutative*AdditiveAssociative*AdditiveInverse*AdditiveUnital + DistributiveMagmasAndAdditiveMagmas*Finite*Commutative*Associative*Inverse*Unital*Division*NoZeroDivisors*AdditiveCommutative*AdditiveAssociative*AdditiveInverse*AdditiveUnital
DistributiveMagmasAndAdditiveMagmas*Finite*Infinite*Commutative*Associative*Unital*Division*NoZeroDivisors*AdditiveCommutative*AdditiveAssociative*AdditiveInverse*AdditiveUnital + DistributiveMagmasAndAdditiveMagmas*Finite*Commutative*Associative*Inverse*Unital*Division*NoZeroDivisors*AdditiveCommutative*AdditiveAssociative*AdditiveInverse*AdditiveUnital
}}}
As we can see, most of the relations come from Nicolas' examples rather than from real mathematical theorems.
One thing really irritates me, though. Yesterday, I also saw relations Modules*VectorSpaces == VectorSpaces, which makes sense: the join of the module category over a base ring and the vector space category over the same base ring should be the vector space category, simply since in this case the module category coincides with the vector space category.
So, why am I not seeing the same relations today??
In this case, I try C.an_instance().A().__class__.__base__ to get the class that is used to implement the result of applying axiom A to instances of category C.
I believe that's the right approach for this need.
OK, but then we'd like to have 121 more implementations of an_instance()...
You are taking some term order here, probably based on some order of the axioms, right?
Yes. That's another point that I should change. BooleanPolynomialRing apparently uses a lexicographic ordering by default. What I wanted was a degrevlex ordering: a construction involving fewer axioms should be preferred over a construction involving many axioms.
Anyway. The generators of the ring correspond to
- all category classes that are not categories-with-axiom, sorted (I don't know how Python sorts them);
- all axioms, in the same order as given in sage.categories.category_with_axiom.all_axioms.
Does the sorting of category classes perhaps depend on the memory addresses of the classes? This would explain why I get different results on different runs.
It is thus no surprise that my routine complains here. I won't check now whether this example is supposed to demonstrate an illegal construction.
Yup. This class demonstrates a possibly desirable but non-implemented feature. See Blahs.Blue_extra_super_categories.
Well, the reason for the complaint is that the axioms Blue and Unital do not commute. But applying axioms has to commute. Hence, the feature is not "desirable but non-implemented"; it is "mathematically illegal".
Question: what about turning the problem upside down, and writing a C._test_??? method that checks that everything is consistent within the context of the super categories of C (and possibly all axiom categories you can derive from those, though I guess it's not necessary)?
Consistency is a global property. Hence, I think there is no way around considering all category classes at once when testing consistency. It is clear that this must not happen during startup. Hence, as it is, the consistency test is a tool for a developer wanting to add a new category.
However, turning things "upside down" is what I want. But I mean a different thing.
Currently, my functions import all categories, extract from them the local information that defines the category-with-axiom lattice, and then use commutative algebra to assert that the local information gives a globally consistent picture.
Turning this upside down would mean:
 First of all, create a Boolean polynomial ring, whose generators correspond to the list of basic categories and axioms available in Sage (this list may grow). Fix a reasonable monomial ordering, best would be a degree ordering.
 Create an ideal in this ring, whose generators are given by mathematical results such as Wedderburn's. Compute the Gröbner basis. This Gröbner basis could be computed once, stored, and then loaded at Sage startup time. Hence, Sage startup would not suffer from an expensive Gröbner basis computation, it only is loading a fixed result.
- When creating a category class, the Gröbner basis helps to provide this class with the available axiomatic constructions (this is what you currently do with lazy imports and with SubcategoryMethods), and also determines a default construction for this class (this is the _base_category_class_and_axiom attribute). Commutative algebra ensures consistency of this local information. This would be implemented by a metaclass for Category.
This would be my approach to adding more rigour to your category-with-axiom model, replacing hand-made choices of default constructions by something that scales better.
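The "compute once, load at startup" step can be caricatured as follows (labels, keys and the helper name are made up for illustration; the real proposal would store standard monomials of the boolean ring, not strings):

```python
# Canonical labels for constructions, computed offline and shipped as
# plain data; a running session only does dictionary lookups instead of
# Gröbner basis computations.
PRECOMPUTED = {
    ('Division', 'Finite', 'Rings'): 'Fields.Finite',
    ('Commutative', 'Division', 'Finite', 'Rings'): 'Fields.Finite',
}

def default_construction(*generators):
    """Return the canonical label for a set of generators (hypothetical)."""
    return PRECOMPUTED[tuple(sorted(generators))]

# Both spellings of "finite division rings" resolve to the same default.
assert default_construction('Rings', 'Finite', 'Division') == 'Fields.Finite'
assert (default_construction('Rings', 'Finite', 'Division', 'Commutative')
        == 'Fields.Finite')
```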
Since every category class is supposed to be TestSuite'd, we should cover all categories this way. Do you think we would get a global enough view?
No.
And I believe that I have no merit in that. I haven't proven it formally yet, but I am pretty much convinced that as long as the two following specifications (which I have now described in detail in the documentation) are satisfied:
 Tree structure on the classes
Which I totally don't like!
Namely, as a consequence, you need to treat DivisionRings.Finite differently from Fields.Finite: the latter is a (lazily imported) class stored as a class attribute; the former is a cached method obtained from Sets() and will only work on instances, not on classes. This asymmetry is unfortunate, and I think using a metaclass that works with standard monomials corresponding to default axiomatic constructions is a nice way to make the implementation symmetric.
You see, if a developer wants to implement a new mathematical theorem on the equality of two axiomatic constructions, then your model is a bit awkward: (s)he needs to pick one of the constructions to become a cached method on instances, while the other stays a class attribute. But which of them? Making the wrong choice would mean subtle inconsistencies resulting in difficult-to-find bugs.
It seems obvious to me that we want that all axiomatic constructions can be implemented in the same way, and that there is no danger of wrong choices, because all choices will be made automatic by a method that is proven to yield well-defined results (namely: computing normal forms using Gröbner bases).
In other words: There is a tree structure in the set of standard monomials. We shouldn't force the developer to put the same tree structure manually into the class definitions.
I also believe we don't even need to impose a term order, but I still have to think about that.
I think we at least want a degree order, so that applying three axioms will always be preferred over applying four axioms to get the same result.
On the other hand, I believe that there is at this point no need to have a tool for deciding where to put the axiom category. So far I have always been putting the axiom category where mathematics told me to put it.
Well, in the case of Blahs.Blue.Unital, you did wrong. I don't know whether it was on purpose, but your example shows something that should never happen, namely axioms that do not commute.
comment:430 followup: ↓ 432 Changed 4 years ago by
Something else, Nicolas: Did you push your latest commits? I can't see your latest additions to the documentation in the branch.
comment:431 in reply to: ↑ 429 ; followups: ↓ 433 ↓ 435 Changed 4 years ago by
Replying to SimonKing:
OK, but then we'd like to have 121 more implementations of an_instance()...
It would be a good thing to have an_instance for all categories at some point. Hopefully we can do that by providing just a couple generic implementations.
Now, I am actually surprised that you get that many. Can you give a couple examples where this fails? Have you tried the category_sample function?
Well, the reason for the complaint is that the axioms Blue and Unital do not commute. But applying axioms has to commute. Hence, the feature is not "desirable but not implemented"; it is "mathematically illegal".
No. See below.
Consistency is a global property. Hence, I think there is no way around to consider all category classes at once when testing consistency.
I am not sure. If there is some inconsistency, you can always work in a sublattice (with bottom) that contains all the involved categories, rather than the full lattice. And I would not be surprised if it could be argued that one can always choose such a sublattice with a category that is actually implemented in Sage. But anyway, let's not pollute this thread further; we can discuss this later.
 Create an ideal in this ring, whose generators are given by mathematical results such as Wedderburn's. Compute the Gröbner basis. This Gröbner basis could be computed once, stored, and then loaded at Sage startup time. Hence, Sage startup would not suffer from an expensive Gröbner basis computation; it would only load a fixed result.
Right. But Sage's startup would still require being able to manipulate such a Gröbner basis in one form or the other. And one needs to make sure the Gröbner basis is consistent with all the code (that is Sage's compilation might require a Gröbner basis computation). And that it can be extended dynamically if users introduce new categories in their own library.
I am certainly not saying it's not doable. But it introduces some complexity which has to be well motivated.
 When creating a category class, the Gröbner basis helps to provide this class with the available axiomatic constructions (this is what you do with lazy imports and with SubcategoryMethods), and also determines a default construction for this class (this is the _base_category_class_and_axiom attribute). Commutative algebra ensures consistency of this local information. This would be implemented by a metaclass for Category.
This would be my approach to add more rigour to your category-with-axiom model, and to replace handmade choices of default constructions by something that scales better.
Sorry, I can't resist; let me use the very argument that so many people have raised when saying that all that category stuff was just overdesign: «Before introducing nontrivial design to solve a scaling issue, one needs to be sure there is one in practice». So far, I haven't had a single time where I got bothered by that.
Since every category class is supposed to be TestSuite'ed, we should cover all categories this way. Do you think we would get a global enough view?
No.
Can you give me a sketch of scenario where this would fail?
And I believe that I have no merit in that. I haven't proven it formally yet, but I am pretty much convinced that as long as the two following specifications (which I have now described in detail in the documentation) are satisfied:
 Tree structure on the classes
Which I totally don't like!
Namely, as a consequence, you need to treat DivisionRings.Finite differently from Fields.Finite: The latter is a (lazily imported) class stored as a class attribute, the former is a cached method obtained from Set() and will only work on instances, not on classes. This asymmetry is unfortunate, and I think using a metaclass that works with standard monomials corresponding to default axiomatic constructions is a nice way to make the implementation symmetric.
You see, if a developer wants to implement a new mathematical theorem on the equality of two axiomatic constructions, then your model is a bit awkward: (s)he needs to pick one of the constructions to become a cached method on instances, while the other stays a class attribute. But which of them? Making the wrong choice would mean subtle inconsistencies resulting in difficult-to-find bugs.
No, I don't see.
For Fields vs Division rings, the asymmetry is *very* natural. You put the class in Fields.Finite, because it's about stuff valid for finite fields. And you put in DivisionRings the theorem which tells you that in the context of division rings, the "Finite" axiom implies the "Commutative" axiom.
Note also that if you did the converse (put the class in DivisionRings), you would see the error *immediately*: first, you would not know what to put in Fields.Finite_extra_super_categories. And second, DivisionRings().Finite() would not coincide with Fields().Finite(); and this is the first thing you would test. Nothing "subtle".
The same happened in every example of mathematical theorem I implemented. And I believe this is a general feature; each time, the theorem is telling you that, within a given category C, satisfying a given axiom gives you more structure i.e. you land in a subcategory.
So, until I am given a concrete example where it's actually awkward to put the mathematical theorem on one side rather than the other, I'll consider this a non-issue.
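To make the asymmetry concrete, here is a minimal Python sketch of the pattern described above. The names RULES, BASES, and with_axiom are made up for illustration and are not Sage's actual implementation: implementation classes live on the Fields side, while DivisionRings only records the theorem that Finite implies Commutative (Wedderburn); both paths then normalize to the same construction.

```python
# Toy model: a category is a base category name plus a set of axioms.
# Fields is modelled as DivisionRings + Commutative.  (Hypothetical names.)
BASES = {
    "Fields": ("DivisionRings", {"Commutative"}),
    "DivisionRings": ("DivisionRings", set()),
}

# Theorems: in the context of DivisionRings, Finite also yields Commutative.
RULES = {
    ("DivisionRings", "Finite"): {"Commutative"},
}

def with_axiom(category, axiom):
    """Return the (base, axioms) normal form of category.Axiom()."""
    base, axioms = BASES[category]
    axioms = set(axioms) | {axiom}
    # Apply the theorems until a fixed point is reached.
    changed = True
    while changed:
        changed = False
        for (b, a), extra in RULES.items():
            if b == base and a in axioms and not extra <= axioms:
                axioms |= extra
                changed = True
    return base, frozenset(axioms)

# DivisionRings().Finite() and Fields().Finite() normalize identically:
assert with_axiom("DivisionRings", "Finite") == with_axiom("Fields", "Finite")
```

In this toy model the theorem naturally lives on the DivisionRings side (as a rule) while the extra code lives on the Fields side, mirroring the Fields.Finite vs DivisionRings.Finite_extra_super_categories split discussed above.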
It seems obvious to me that we want that all axiomatic constructions can be implemented in the same way, and that there is no danger of wrong choices, because all choices will be made automatic by a method that is proven to yield well-defined results (namely: computing normal forms using Gröbner bases).
I believe, and will work on proving formally, that the current implementation is perfectly well-defined and gives normal forms.
In other words: There is a tree structure in the set of standard monomials. We shouldn't force the developer to put the same tree structure manually into the class definitions.
Maybe not. Or maybe yes. I agree there is a small inconvenience with that tree structure, because you have to make a choice between putting stuff in C.A.B or C.B.A (it's not so bad because I believe you can locally make whatever choice you want). And it forces one to create a few extraneous empty categories (really not that many).
But before ruling out the tree design choice, one needs to make sure there is an alternative design choice that has:
 a syntax at least as good
 a similar performance
 a robustness at least as good
 someone willing to take the time to implement it
Anyway, I read you agree that this is all for a later ticket, right? Can you please move the discussion to a different ticket? This ticket is already way too cluttered for people who will try to read the discussion later.
I think we at least want a degree order, so that applying three axioms will always be preferred over applying four axioms to get the same result.
I'd rather not impose any limitation if there is no good reason for it. We should put the code wherever it's most natural mathematically to put it.
Well, in the case of Blahs.Blue.Unital, you did wrong. I don't know whether it was on purpose, but your example shows something that should never happen, namely axioms that do not commute.
No, I did not do wrong. It does demonstrate, *on purpose*, a *missing feature*: namely that you currently can't use the Blue_extra_super_categories mechanism in the category where the Blue axiom is defined. You can also see it as a demonstration of a *specified* limitation: a category that *defines* an axiom also has to *implement* it.
Cheers,
Nicolas
comment:432 in reply to: ↑ 430 Changed 4 years ago by
Replying to SimonKing:
Something else, Nicolas: Did you push your latest commits? I can't see your latest additions to the documentation in the branch.
Now yes!
comment:433 in reply to: ↑ 431 ; followup: ↓ 438 Changed 4 years ago by
Replying to nthiery:
Now, I am actually surprised that you get that many. Can you give a couple examples where this fails?
The TestCategoryModel() from consistency.py returns (among other things) a list of all category classes for which an_instance() does not return an instance of the class. This list seems to be stable.
Sidenote: The other tests do not seem stable yet: They differ from run to run. Anyway, I'll work on it.
Here are the classes and the errors I am getting:
0 sage.combinat.ncsym.bases.NCSymDualBases AttributeError
1 sage.categories.modules_with_basis.ModulesWithBasis.DualObjects TypeError
2 sage.categories.algebra_functor.AlgebrasCategory AssertionError
3 sage.categories.additive_magmas.AdditiveMagmas.Algebras AssertionError
4 sage.categories.hopf_algebras.HopfAlgebras.DualCategory NotImplementedError
5 sage.categories.additive_semigroups.AdditiveSemigroups.Algebras AssertionError
6 sage.categories.category_types.AbelianCategory NotImplementedError
7 sage.combinat.ncsf_qsym.qsym.QuasiSymmetricFunctions.Bases AttributeError
8 sage.combinat.descent_algebra.DescentAlgebraBases AttributeError
9 sage.categories.quotients.QuotientsCategory TypeError
10 sage.combinat.ncsf_qsym.ncsf.NonCommutativeSymmetricFunctions.Bases AttributeError
11 sage.categories.additive_magmas.AdditiveCommutative.Algebras AssertionError
12 sage.categories.realizations.RealizationsCategory TypeError
13 sage.combinat.ncsym.bases.MultiplicativeNCSymBases AttributeError
14 sage.categories.sets_cat.Sets.CartesianProducts TypeError
15 sage.categories.graded_modules.GradedModulesCategory KeyError
16 sage.categories.sets_cat.Sets.Subobjects TypeError
17 sage.categories.cartesian_product.CartesianProductsCategory TypeError
18 sage.categories.magmas.Magmas.Realizations TypeError
19 sage.categories.hopf_algebras.HopfAlgebras.Realizations TypeError
20 sage.categories.groups.Groups.Algebras AssertionError
21 sage.categories.magmas.Magmas.Algebras AssertionError
22 sage.categories.isomorphic_objects.IsomorphicObjectsCategory TypeError
23 sage.categories.sets_with_partial_maps.SetsWithPartialMaps.HomCategory TypeError
24 sage.combinat.ncsf_qsym.generic_basis_code.GradedModulesWithInternalProduct.Realizations TypeError
25 sage.combinat.sf.sfa.SymmetricFunctionsBases AttributeError
26 sage.categories.category.HomCategory TypeError
27 sage.categories.sets_cat.Sets.Subquotients TypeError
28 sage.categories.algebras_with_basis.AlgebrasWithBasis.CartesianProducts TypeError
29 sage.categories.finite_enumerated_sets.FiniteEnumeratedSets.IsomorphicObjects TypeError
30 sage.categories.sets_cat.Sets.Realizations TypeError
31 sage.categories.realizations.Category_realization_of_parent NotImplementedError
32 sage.categories.category_with_axiom.BrokenTestObjects NotImplementedError
33 sage.categories.category.CategoryWithParameters NotImplementedError
34 sage.categories.magmas.Commutative.Algebras AssertionError
35 sage.categories.vector_spaces.VectorSpaces.DualObjects TypeError
36 sage.categories.hopf_algebras.HopfAlgebras.Morphism NotImplementedError
37 sage.categories.category_types.Category_in_ambient TypeError
38 sage.categories.objects.Objects.HomCategory TypeError
39 sage.combinat.ncsf_qsym.ncsf.NonCommutativeSymmetricFunctions.MultiplicativeBasesOnGroupLikeElements AttributeError
40 sage.categories.schemes.Schemes.HomCategory TypeError
41 sage.combinat.ncsf_qsym.ncsf.NonCommutativeSymmetricFunctions.MultiplicativeBases AttributeError
42 sage.categories.semigroups.Semigroups.Subquotients TypeError
43 sage.categories.sets_cat.Sets.Algebras AssertionError
44 sage.categories.coalgebras.Coalgebras.TensorProducts TypeError
45 sage.categories.hecke_modules.HeckeModules.HomCategory TypeError
46 sage.categories.category_types.Category_module NotImplementedError
47 sage.categories.algebras.Algebras.CartesianProducts TypeError
48 sage.categories.algebras_with_basis.AlgebrasWithBasis.TensorProducts TypeError
49 sage.combinat.ncsym.bases.NCSymOrNCSymDualBases AttributeError
50 sage.categories.subquotients.SubquotientsCategory TypeError
51 sage.categories.algebras.Algebras.TensorProducts TypeError
52 sage.categories.coalgebras.Coalgebras.DualObjects TypeError
53 sage.categories.magmas.Unital.Algebras AssertionError
54 sage.categories.additive_magmas.AdditiveUnital.Algebras AssertionError
55 sage.categories.category_types.Category_ideal NotImplementedError
56 sage.categories.modules_with_basis.ModulesWithBasis.HomCategory TypeError
57 sage.categories.tensor.TensorProductsCategory TypeError
58 sage.categories.algebras.Algebras.DualObjects TypeError
59 sage.categories.semigroups.Semigroups.Quotients TypeError
60 sage.categories.hopf_algebras_with_basis.HopfAlgebrasWithBasis.TensorProducts TypeError
61 sage.categories.finite_sets.FiniteSets.Subquotients TypeError
62 sage.categories.category_types.Category_over_base_ring NotImplementedError
63 sage.categories.category_with_axiom.SmallTestObjects NotImplementedError
64 sage.categories.hopf_algebras.HopfAlgebras.TensorProducts TypeError
65 sage.combinat.ncsf_qsym.ncsf.NonCommutativeSymmetricFunctions.MultiplicativeBasesOnPrimitiveElements AttributeError
66 sage.categories.covariant_functorial_construction.CovariantConstructionCategory TypeError
67 sage.categories.magmas.Magmas.CartesianProducts TypeError
68 sage.categories.semigroups.Semigroups.CartesianProducts TypeError
69 sage.categories.dual.DualObjectsCategory TypeError
70 sage.categories.modules.Modules.EndCategory TypeError
71 sage.categories.finite_sets.FiniteSets.Algebras AssertionError
72 sage.algebras.iwahori_hecke_algebra.IwahoriHeckeAlgebra._BasesCategory AttributeError
73 sage.categories.coalgebras.Coalgebras.WithRealizations TypeError
74 sage.categories.subobjects.SubobjectsCategory TypeError
75 sage.categories.sets_cat.Sets.WithRealizations TypeError
76 sage.categories.additive_magmas.AdditiveUnital.WithRealizations TypeError
77 sage.categories.finite_enumerated_sets.FiniteEnumeratedSets.CartesianProducts TypeError
78 sage.categories.modules_with_basis.ModulesWithBasis.CartesianProducts TypeError
79 sage.categories.semigroups.Semigroups.Algebras AssertionError
80 <class 'sage.categories.modules.Modules'> yields instance of <class 'sage.categories.vector_spaces.VectorSpaces_with_category'>
81 sage.categories.modules.Modules.HomCategory TypeError
82 sage.categories.category_with_axiom.TestObjectsOverBaseRing.Commutative TypeError
83 sage.categories.monoids.Monoids.WithRealizations TypeError
84 sage.categories.covariant_functorial_construction.RegressiveCovariantConstructionCategory TypeError
85 sage.categories.rings.Rings.HomCategory TypeError
86 sage.categories.category_singleton.Category_singleton AssertionError
87 sage.categories.category.Category NotImplementedError
88 sage.categories.monoids.Monoids.Subquotients TypeError
89 sage.combinat.ncsym.bases.NCSymBases AttributeError
90 sage.categories.coalgebras.Coalgebras.Realizations TypeError
91 sage.categories.with_realizations.WithRealizationsCategory TypeError
92 sage.categories.sets_cat.Sets.IsomorphicObjects TypeError
93 sage.categories.sets_cat.Sets.Quotients TypeError
94 sage.categories.monoids.Monoids.CartesianProducts TypeError
95 sage.combinat.ncsf_qsym.generic_basis_code.BasesOfQSymOrNCSF AttributeError
96 sage.categories.modules_with_basis.ModulesWithBasis.TensorProducts TypeError
97 sage.categories.magmas.Magmas.Subquotients TypeError
98 sage.categories.category.JoinCategory TypeError
99 sage.categories.graded_hopf_algebras_with_basis.GradedHopfAlgebrasWithBasis.WithRealizations TypeError
100 sage.algebras.iwahori_hecke_algebra.IwahoriHeckeAlgebra_nonstandard._BasesCategory AttributeError
101 sage.categories.sets_cat.Sets.HomCategory TypeError
102 sage.categories.monoids.Monoids.Algebras AssertionError
103 sage.categories.category_types.Category_over_base NotImplementedError
104 sage.categories.commutative_additive_groups.CommutativeAdditiveGroups.Algebras AssertionError
105 sage.categories.category_with_axiom.BrokenTestObjects.Commutative NotImplementedError
106 <class 'sage.categories.modules_with_basis.ModulesWithBasis'> yields instance of <class 'sage.categories.vector_spaces.VectorSpaces.WithBasis_with_category'>
107 sage.categories.category_with_axiom.TestObjectsOverBaseRing.Unital TypeError
108 sage.categories.category_with_axiom.TestObjectsOverBaseRing.FiniteDimensional TypeError
109 sage.categories.category_with_axiom.BrokenTestObjects.Finite NotImplementedError
110 sage.categories.category_with_axiom.Commutative.Facade TypeError
111 <class 'sage.categories.finite_dimensional_modules_with_basis.FiniteDimensionalModulesWithBasis'> yields instance of <class 'sage.categories.category.JoinCategory_with_category'>
112 sage.categories.category_with_axiom.Commutative.FiniteDimensional TypeError
113 <class 'sage.categories.modules.Modules.FiniteDimensional'> yields instance of <class 'sage.categories.category.JoinCategory_with_category'>
114 sage.categories.category_with_axiom.SmallTestObjects.Finite NotImplementedError
115 sage.categories.category_with_axiom.Commutative.Finite TypeError
116 sage.categories.category_with_axiom.FiniteDimensional.Finite TypeError
117 sage.categories.category_with_axiom.Commutative.Finite NotImplementedError
118 sage.categories.category_with_axiom.FiniteDimensional.Unital TypeError
119 sage.categories.category_with_axiom.Finite.Commutative NotImplementedError
120 sage.categories.category_with_axiom.Unital.Commutative TypeError
In examples 80, 106, 111 and 113, C.an_instance() does return something, but it does not return an instance of C.
And please don't forget:
sage: type(sage.categories.category_with_axiom.SmallTestObjects.Connected)
<type 'int'>
sage: type(sage.categories.category_with_axiom.SmallTestObjects.Commutative)
<type 'classobj'>
which are bugs, too.
Have you tried the category_sample function?
Never heard of it before, hence: No.
I am not sure. If there is some inconsistency, you can always work in a sublattice (with bottom) that contains all the involved categories, rather than the full lattice. And I would not be surprised if it could be argued that one could always choose such a sublattice with a category that is actually implemented in Sage.
Sublattice it is. But how big do you need to choose the sublattice in order to detect the inconsistency? Since we already have Gröbner bases in this thread: generally you'll need to consider elements of rather high degree in order to detect all relations in small degree. Sure, in a boolean polynomial ring the degree is bounded by the number of generators, as they are idempotent.
Right. But Sage's startup would still require being able to manipulate such a Gröbner basis in one form or the other. And one needs to make sure the Gröbner basis is consistent with all the code (that is Sage's compilation might require a Gröbner basis computation). And that it can be extended dynamically if users introduce new categories in their own library.
Exactly. If I had to choose between a Gröbner basis computation in a boolean polynomial ring (which is a relatively moderate task due to idempotency) and the requirement to manually do local choices that are globally consistent, I'd do the former.
I am certainly not saying it's not doable. But it introduces some complexity which has to be well motivated.
Are you talking about mathematical complexity? Then my answer is that the complexity of the underlying local-global problem is there, whether we want it or not. And I'd rather have the complexity dealt with by a mathematical theory (commutative algebra) than by the inspiration of all future developers of categories.
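A minimal sketch of the rewriting machinery under discussion, assuming a toy rule set (the Wedderburn rule below stands in for the real theorems): since axioms generate a commutative idempotent monoid, "monomials" are just finite sets of axioms, theorems are rules of the form "lhs ⊆ m ⇒ m := m ∪ rhs", and idempotency bounds the degree of every normal form by the number of axioms, as remarked above.

```python
from itertools import combinations

# Hypothetical rules: each pair (lhs, rhs) reads "if a monomial contains all
# axioms in lhs, adjoin those in rhs".  Not Sage's actual theorem list.
RULES = [
    (frozenset({"Division", "Finite"}), frozenset({"Commutative"})),  # Wedderburn
]

def normal_form(axioms):
    """Saturate a set of axioms under RULES.  The closure is confluent
    because rules only ever add axioms (it is a monotone closure)."""
    m = set(axioms)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs <= m and not rhs <= m:
                m |= rhs
                changed = True
    return frozenset(m)

def axioms_commute(universe):
    """The kind of global check discussed here: on monomials of a small
    sublattice, applying two axioms in either order must agree."""
    for a, b in combinations(universe, 2):
        for m in [set(), {a}, {b}, set(universe)]:
            left = normal_form(normal_form(m | {a}) | {b})
            right = normal_form(normal_form(m | {b}) | {a})
            if left != right:
                return False
    return True

assert axioms_commute({"Division", "Finite", "Commutative"})
```

Because this closure is monotone, the commutation check can never fail for rules of this shape; the contentious cases in the discussion arise when a rule's effect depends on the base category, as with the Blue_extra_super_categories example.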
Sorry, I can't resist; let me use the very argument that so many people have raised when saying that all that category stuff was just overdesign: «Before introducing nontrivial design to solve a scaling issue, one needs to be sure there is one in practice». So far, I haven't had a single time where I got bothered by that.
With the exception of the Blahs.Unital.Blue example, you mean...
For Fields vs Division rings, the asymmetry is *very* natural. You put the class in Fields.Finite, because it's about stuff valid for finite fields. And you put in DivisionRings the theorem which tells you that in the context of division rings, the "Finite" axiom implies the "Commutative" axiom.
Then consider the axiom a*b==a*c => b==c (I don't know an English word for it, thus call it "kürzbar"). There are infinite "kürzbar" rings that aren't division rings (e.g., the ring of integers). However, for finite rings, being "kürzbar" and "division" are equivalent.
So, you have a perfectly symmetric formulation: Rings.Finite.Kürzbar == Rings.Finite.Division. So, how do you choose a default? In this particular example you could argue that both are equal to Fields.Finite. But generally?
Next, imagine you have several such symmetric statements. You can do a consistent symmetry break, but that's probably equivalent to choosing a monomial order in a polynomial ring.
And second, DivisionRings().Finite() would not coincide with Fields().Finite(); and this is the first thing you would test.
Why not? If I do
DivisionRings.Finite = FiniteFields # lazily imported from sage.categories.finite_fields
and
Fields.Finite = FiniteFields # lazily imported from sage.categories.finite_fields
then of course both are the same!
The only problem is that the current implementation would complain. And that's what I think is a deficiency of the current implementation. Doing the above is natural and easy syntax, and it should be supported.
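Simon's proposed syntax can be illustrated with a tiny Python sketch (toy classes; plain class attributes stand in for Sage's lazy imports): both attributes name the very same class, so the construction is symmetric by definition.

```python
# Toy stand-ins for the real Sage classes.
class FiniteFields:
    pass

class DivisionRings:
    # In Sage this would be a LazyImport of sage.categories.finite_fields.
    Finite = FiniteFields

class Fields:
    # The same target on the other side: neither category is privileged.
    Finite = FiniteFields

# Both attributes resolve to the same class object:
assert DivisionRings.Finite is Fields.Finite
```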
I believe, and will work on proving formally, that the current implementation is perfectly well-defined and gives normal forms.
I am talking about people who want to extend the current implementation: add new axioms, new basic categories, and in particular new mathematical theorems about categorical identities.
No, I did not do wrong. It does demonstrate, *on purpose*, a *missing feature*: namely that you currently can't use the Blue_extra_super_categories mechanism in the category where the Blue axiom is defined.
You talk about an implementation detail (Blue_extra_super_categories). I am talking about the fact that in this (made up) example, mathematical axioms do not commute.
comment:434 Changed 4 years ago by
PS: In the above list of 121 category classes with failing an_instance(), there are of course many classes that probably should not be directly instantiated. So, the "real" list will be much shorter. The above list was found by an automatic procedure that simply imported all available subclasses of Category and tried an_instance() on them.
comment:435 in reply to: ↑ 431 ; followup: ↓ 439 Changed 4 years ago by
Replying to nthiery:
Have you tried the category_sample function?
Now I have, and I get a list of only 97 categories. Blahs() is missing. But anyway, thank you for the pointer.
Right. But Sage's startup would still require being able to manipulate such a Gröbner basis in one form or the other. And one needs to make sure the Gröbner basis is consistent with all the code (that is Sage's compilation might require a Gröbner basis computation).
In the code, you would provide the local data for the plain acyclic digraph structure of categories. The Gröbner basis would be used to extract from it consistent local data of a spanning tree that is specified by the (fixed) choice of a monomial order. This does not happen at compile time, but only when a class is created (by a metaclass).
The only difference to the current implementation: Currently, you need to specify local data for the plain acyclic digraph and at the same time provide local data for a spanning tree (by moving some stuff into cached methods rather than class attributes) that is not explicitly specified (it is implicitly specified by the perceived asymmetry in statements of mathematical theorems).
I indicated before: making the database-metaclass-indexed-by-standard-monomials approach productive clearly is for a different ticket. However, creating a tool that asserts consistency of the current implementation is something that I see happening on this ticket. That's why I don't move the discussion to a different ticket, and that's why I keep working on consistency.py.
Sorry, I can't resist; let me use the very argument that so many people have raised when saying that all that category stuff was just overdesign: «Before introducing nontrivial design to solve a scaling issue, one needs to be sure there is one in practice». So far, I haven't had a single time where I got bothered by that.
Well, in my early Sage days I occasionally complained that the source code of category stuff can hardly be found (thus, I improved sage.misc.sageinspect) and that the category framework is responsible for slowing things down (thus, I made some contributions in that regard). But I did not raise the very argument you are mentioning. So, I am clearly entitled to consider overdesign to solve far-fetched scalability issues ;-)
Since every category class is supposed to be TestSuite'ed, we should cover all categories this way. Do you think we would get a global enough view?
No.
Can you give me a sketch of scenario where this would fail?
I am not saying that it would necessarily fail. However, a local test may fail. And rather than repeating the same local test over and over in the TestSuite of any category, I'd like to have one test (say, a doctest of sage.categories.categories_with_axiom) that takes into account the whole digraph and is thus reliable.
I believe, and will work on proving formally, that the current implementation is perfectly well-defined and gives normal forms.
I believe it, too, and I am working on a formal proof (using commutative algebra), modulo missing instances of some category classes.
In addition, the minimum of what I want is this: provide a tool that asserts consistency of choices, so that future developers are prevented from making wrong choices when they extend the category-with-axiom framework.
It would be used as follows: Developer X implements a new category class C and wants to make it accessible by applying certain axioms to certain base categories. Then, X can call a function that returns the correct choice of a default construction, hence the correct choice of C._base_category_class_and_axiom ("the", because it should be uniquely determined after fixing a monomial order), which also tells X where to use a (lazily imported) class attribute and where to use a cached method in the SubcategoryMethods.
Anyway, I read you agree that this is all for a later ticket, right?
Partially. A consistency checker is something for here. A database-metaclass turning the checker into a productive tool to simplify the implementation of new categories-with-axiom is for later.
Can you please move the discussion to a different ticket?
See reasoning above.
Best regards,
Simon
comment:436 followup: ↓ 437 Changed 4 years ago by
As indicated above, it seems natural to me that we prefer a "short" construction (start with a basic category and apply few axioms) over a "long" construction (involving a long chain of axioms). This means we would prefer a degree order. However, it really matters what happens after comparing degrees.
In all examples, I am using a degneglex order.
- Start with the basic categories reversedly sorted by their name, followed by the axioms in the reversed order given by sage.categories.categories_with_axioms.all_axioms. Then, there is only one complaint, namely: Blahs.Unital.Blue should coincide with Blahs.Unital.
- Start with the basic categories directly sorted by their name, followed by the axioms in the reversed order given by sage.categories.categories_with_axioms.all_axioms. Then, we additionally find: TestObjects.FiniteDimensional.Unital should better be provided by Bars.Unital.FiniteDimensional. Similarly for other educational examples in sage.categories.category_with_axiom.
- Start with the basic categories reversedly sorted by their name, followed by the axioms in the direct order given by sage.categories.categories_with_axioms.all_axioms. Then the problems are similar to the previous case, in the educational examples in sage.categories.category_with_axiom. Such as: TestObjects.FiniteDimensional.Unital should better be provided by TestObjects.Blue.FiniteDimensional.
- Start with the basic categories directly sorted by their name, followed by the axioms in the direct order given by sage.categories.categories_with_axioms.all_axioms. Again, problems with the educational examples in sage.categories.category_with_axiom. Such as: TestObjectsOverBaseRing.Unital should rather be provided as TestObjectsOverBaseRing.Blue.
This result is temporary, as I still seem to miss a couple of category classes. However, what does it tell us?
On the plus side, all "real" examples work consistently.
On the negative side, a consistent choice of local spanning tree data does depend on choosing a monomial order. This order is nowhere explicit, but with some orders the choices made in educational examples fail. Who can guarantee that the same will never happen in future real world examples, unless we make the implicit order explicit?
On the neutral side, I am still not sure whether my consistency test is airtight and waterproof...
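For readers who want to experiment with such orders, here is a hedged Python sketch of the kind of comparison key involved. It only loosely mimics a degree-first ("degneglex"-style) order; the axiom listing and the candidate constructions below are made up for illustration and are not Sage's all_axioms.

```python
# Hypothetical axiom listing, for illustration only (not Sage's all_axioms).
ALL_AXIOMS = ["Flying", "Blue", "Unital", "FiniteDimensional", "Finite", "Commutative"]

def construction_key(construction, axiom_order=ALL_AXIOMS, reverse_axioms=False):
    """Compare constructions degree-first (fewer axioms win), then by base
    category name, then lexicographically by axiom ranks; flipping
    reverse_axioms or resorting the bases changes which default wins,
    which is the phenomenon observed in the four experiments above."""
    base, axioms = construction
    order = list(reversed(axiom_order)) if reverse_axioms else list(axiom_order)
    ranks = {a: i for i, a in enumerate(order)}
    return (len(axioms), base, tuple(sorted(ranks[a] for a in axioms)))

# Two equal-degree candidates for the same category: the tie is broken by
# the base category name under this particular (arbitrary) choice.
candidates = [("TestObjects", ("FiniteDimensional", "Unital")),
              ("Bars", ("Unital", "FiniteDimensional"))]
preferred = min(candidates, key=construction_key)
```

The point of the sketch is only that the "preferred" construction is an artifact of the chosen order, not of the mathematics, which is exactly the concern raised above.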
comment:437 in reply to: ↑ 436 Changed 4 years ago by
Hi Simon!
Short answer for now. I have a longer answer from yesterday which did not go through before I went to bed.
Replying to SimonKing:
As indicated above, it seems natural to me that we prefer a "short" construction (start with a basic category and apply few axioms) over a "long" construction (involving a long chain of axioms). This means we would prefer a degree order. However, it really matters what happens after comparing degrees.
In all examples, I am using a degneglex order.
- Start with the basic categories reversedly sorted by their name, followed by the axioms in the reversed order given by sage.categories.categories_with_axioms.all_axioms. Then, there is only one complaint, namely: Blahs.Unital.Blue should coincide with Blahs.Unital.
- Start with the basic categories directly sorted by their name, followed by the axioms in the reversed order given by sage.categories.categories_with_axioms.all_axioms. Then, we additionally find: TestObjects.FiniteDimensional.Unital should better be provided by Bars.Unital.FiniteDimensional. Similarly for other educational examples in sage.categories.category_with_axiom.
- Start with the basic categories reversedly sorted by their name, followed by the axioms in the direct order given by sage.categories.categories_with_axioms.all_axioms. Then the problems are similar to the previous case, in the educational examples in sage.categories.category_with_axiom. Such as: TestObjects.FiniteDimensional.Unital should better be provided by TestObjects.Blue.FiniteDimensional.
- Start with the basic categories directly sorted by their name, followed by the axioms in the direct order given by sage.categories.categories_with_axioms.all_axioms. Again, problems with the educational examples in sage.categories.category_with_axiom. Such as: TestObjectsOverBaseRing.Unital should rather be provided as TestObjectsOverBaseRing.Blue.
This result is temporary, as I still seem to miss a couple of category classes. However, what does it tell us?
On the plus side, all "real" examples work consistently.
On the negative side, a consistent choice of local spanning tree data does depend on choosing a monomial order. This order is nowhere explicit, but with some orders the choices made in educational examples fail. Who can guarantee that the same will never happen in future real world examples, unless we make the implicit order explicit?
Let me state it in bold: the current algorithm has been designed so that *there is no local-global problem*. The global consistency conditions that you are trying to impose on the code are *not* required by the current specifications. You need a tree, but the tree need not be that of standard monomials for whatever term order. One can for example very well choose to implement an axiom category as Cs.A.B in a category Cs, and as Ds.B.A in a subcategory Ds.
When I claimed the code was correct, I really meant that the infrastructure algorithm was correct. That is, not only the current category code works properly, but every new category/axiom code written that respects the specifications given in the axiom documentation should work properly.
You are of course very well entitled not to believe me, and I am glad that you are checking my claims. Which is why I'll try today to formalize a proof of the infrastructure. One of the points is that the algorithm computes a normal form, but that normal form is given by the lattice structure itself, without need for a term order. The computations occur in a concrete lattice, not in the free lattice modulo relations.
In the meantime, a hint is the fact that the algorithm gives correct results even on examples where there are local choices that do not satisfy your global consistency conditions (I am of course excluding the example which voluntarily breaks the current specifications).
Cheers,
Nicolas
comment:438 in reply to: ↑ 433 Changed 4 years ago by
Replying to SimonKing:
Here are the classes and the errors I am getting:
Thanks!
Let me go through a sample of the failures that represent all the others.
0 sage.combinat.ncsym.bases.NCSymDualBases AttributeError
Ah right, category_sample() is not looking outside of the sage.categories.all module. It should. This is now #15696.
Hmm, there will be a bunch of similar categories (basically in each algebra A with several bases). They take A as first argument. We will need to see how to treat them in #15696.
1 sage.categories.modules_with_basis.ModulesWithBasis.DualObjects TypeError
Interesting: we caught a bug. This should be a category over base ring and it's not. Of course, since this is currently unused, this went unnoticed.
I added this to #15647.
2 sage.categories.algebra_functor.AlgebrasCategory AssertionError
As you mentioned, like a couple of others below, this is not a category, but an abstract category class.
3 sage.categories.additive_magmas.AdditiveMagmas.Algebras AssertionError
Ok, the default an_instance() fails for most functor categories (XXX.Algebras, XXX.Quotients, XXX.CartesianProducts, ...). It's probably possible to fix all of them at once.
I put this in #15696 too.
32 sage.categories.category_with_axiom.BrokenTestObjects NotImplementedError
Ok, meant to be broken :)
63 sage.categories.category_with_axiom.SmallTestObjects NotImplementedError
Oh, right, this old test class is not used anymore! I removed it.
107 sage.categories.category_with_axiom.TestObjectsOverBaseRing.Unital TypeError
Interesting, bug again: they should have been CategoryWithAxiom_over_base_ring:
sage: TestSuite(TestObjectsOverBaseRing(QQ).FiniteDimensional()).run()
Failure in _test_category_with_axiom:
Traceback (most recent call last):
  File "/opt/sage-git/local/lib/python2.7/site-packages/sage/misc/sage_unittest.py", line 282, in run
    test_method(tester = tester)
  File "/opt/sage-git/local/lib/python2.7/site-packages/sage/categories/category_with_axiom.py", line 1337, in _test_category_with_axiom
    tester.assertIsInstance(self, CategoryWithAxiom_over_base_ring)
  File "/opt/sage-git/local/lib/python/unittest/case.py", line 969, in assertIsInstance
    self.fail(self._formatMessage(msg, standardMsg))
  File "/opt/sage-git/local/lib/python/unittest/case.py", line 412, in fail
    raise self.failureException(msg)
AssertionError: Category of finite dimensional test objects over base ring over Rational Field is not an instance of <class 'sage.categories.category_with_axiom.CategoryWithAxiom_over_base_ring'>
This would have been caught with a TestSuite but there was none. I just fixed this and pushed.
111 <class 'sage.categories.finite_dimensional_modules_with_basis.FiniteDimensionalModulesWithBasis'> yields instance of <class 'sage.categories.category.JoinCategory_with_category'>
113 <class 'sage.categories.modules.Modules.FiniteDimensional'> yields instance of <class 'sage.categories.category.JoinCategory_with_category'>
Hmm, same gag as for Modules(QQ) > VectorSpaces(QQ).
And please don't forget:
sage: type(sage.categories.category_with_axiom.SmallTestObjects.Connected)
<type 'int'>
sage: type(sage.categories.category_with_axiom.SmallTestObjects.Commutative)
<type 'classobj'>

which are bugs, too.
Well, they were *voluntary* bugs. I was using those at some point to test the assertion checks in the code. Anyway, gone with the wind.
About global consistency test w.r.t. consistency tests within the lattice of super categories:
Let's not waste more time on this. You are implementing it, you take the final decision :) I just meant to make sure you took into account the advantages and inconveniences of both approaches (including the deviation from TestSuite, that discovering the categories in the code is not immediate, that you need an_instance() to work, and that you can't make use of the tests to know what are the relevant/interesting inputs to feed the category constructor).
I am certainly not saying it's not doable. But it introduces some complexity which has to be well motivated.
Are you talking about mathematical complexity? Then my answer is that the complexity of the underlying local-global problem is there, whether we want it or not. And I'd rather have the complexity dealt with by a mathematical theory (commutative algebra) than by the inspiration of all future developers of categories.
I am speaking of technical complexity, and that for solving what I believe to be a non-issue. As I said, there should be no local-global problem.
So far, I haven't had a single time where I got bothered by that.
With the exception of the Blahs.Unital.Blue example, you mean...
Well, it's not like it did strike me from behind. I made up this example voluntarily to shake the system, see how it would behave when violating the specifications, and document a potential limitation (I never had an actual use case); and it failed as expected.
Then consider the axiom a*b == a*c => b == c (I don't know an English word for it, thus call it "kürzbar"). There are infinite "kürzbar" rings that aren't division rings (e.g., the ring of integers). However, for finite rings, being "kürzbar" and "division" are equivalent. So, you have a perfectly symmetric formulation: Rings.Finite.Kürzbar == Rings.Finite.Division. So, how to choose a default?
(from a quick googling, that's the cancellation property: http://en.wikipedia.org/wiki/Integral_domain).
This is not symmetric, because Rings().Division() is a subcategory of Rings().Kurzbar(). Hence, the system already knows that Rings().Finite().Division() is a subcategory of Rings().Finite().Kürzbar(). The interesting part of the theorem, and the one that we need to teach Sage, is really about the reverse inclusion: namely that, in the context of Rings.Kurzbar, Finite implies Division. Hence, this theorem is naturally implemented in Rings.Kurzbar.Finite_extra_super_categories.
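For what it's worth, the asymmetry described here can be sketched in a few lines of plain Python. This is a toy model, not Sage code: the Category class and the finite_extra_super_categories function below are made-up stand-ins mimicking the mechanism, with the theorem recorded only on the Kurzbar side while the reverse inclusion follows from the ordinary subcategory relation.

```python
class Category:
    """Minimal toy category: supercategories are given explicitly."""
    def __init__(self, name, supers=()):
        self.name = name
        self.supers = set(supers)

    def all_supers(self):
        # reflexive-transitive closure of the supercategory relation
        seen, todo = {self}, list(self.supers)
        while todo:
            c = todo.pop()
            if c not in seen:
                seen.add(c)
                todo.extend(c.supers)
        return seen

    def is_subcategory(self, other):
        return other in self.all_supers()

rings          = Category("Rings")
kurzbar        = Category("Rings.Kurzbar", [rings])
division       = Category("Rings.Division", [kurzbar])   # division implies cancellation
finite_kurzbar = Category("Rings.Finite.Kurzbar", [kurzbar])

# One direction is automatic: Division is a subcategory of Kurzbar,
# hence Finite+Division is a subcategory of Finite+Kurzbar.
assert division.is_subcategory(kurzbar)

# The theorem goes the other way, and is recorded only in the Kurzbar
# context, mimicking Rings.Kurzbar.Finite_extra_super_categories:
def finite_extra_super_categories(context):
    if context is kurzbar:
        return [division]   # in context Kurzbar, Finite implies Division
    return []

finite_kurzbar.supers.update(finite_extra_super_categories(kurzbar))
assert finite_kurzbar.is_subcategory(division)
```

The point of the sketch is that no symmetric choice is ever made: one inclusion is structural, the other is a single locally recorded hook.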
Next, imagine you have several such symmetric statements. You can do a consistent symmetry break, but that's probably equivalent to choosing a monomial order in a polynomial ring.
No: the order is given by the subcategory relation, as above.
And second, DivisionRings().Finite() would not coincide with Fields().Finite(); and this is the first thing you would test.
Why not?
The purpose of my comment was to demonstrate that, in the current implementation, if you make a mistake then this mistake is immediately caught.
Doing the above (DivisionRings = Fields.Finite) is natural and easy syntax, and it should be supported.
I agree it is rather natural. I actually tried (hard) to implement it, but knowingly decided against it because not having the tree structure made the algorithmics really much more convoluted.
And besides, the current syntax is not unnatural either. Granted, the name XXX_extra_super_categories is not great, but having to write a method to model something as important as a theorem is ok to me; if only because it gives a natural spot to document and test the modeling of the theorem. And it's consistent with what we have been doing everywhere else: the mathematical facts which relate the categories together are implemented in the super_categories methods (and their variants like extra_super_categories).
No, I did not do wrong. It does demonstrate *on purpose* a *missing feature*: namely that you currently can't use the Blue_extra_super_categories mechanism in the category where the Blue axiom is defined.
You talk about an implementation detail (Blue_extra_super_categories). I am talking about the fact that in this (made up) example mathematical axioms do not commute.
Well, yes, the specifications are voluntarily violated by this example and the infrastructure gives back wrong results (in the form of non commuting axioms), which is the whole point of the example. Call it whatever you like, but the infrastructure itself is behaving properly here: garbage in, garbage out.
I just added the following to the documentation of Blue_extra_super_categories:
.. TODO::

    Improve the infrastructure to detect and report this violation of the
    specifications, if this is easy. Otherwise, it's not so bad: when defining
    an axiom A in a category ``Cs``, the first thing one is supposed to doctest
    is that ``Cs().A()`` works. So the problem should not go unnoticed.
Cheers,
Nicolas
comment:439 in reply to: ↑ 435 Changed 4 years ago by
Replying to SimonKing:
Well, in my early Sage days I occasionally complained that the source code of category stuff can hardly be found (thus, I improved sage.misc.sageinspect) and that the category framework is responsible for slowing things down (thus, I made some contributions in that regard).
That's certainly right, and I am soo glad that you believed in the design and contributed so much making it not only a reality but a viable reality!
But I did not raise the very argument you are mentioning. So, I am clearly entitled to consider this overdesign to solve far-fetched scalability issues ;).
:)
If this goes beyond "considering", be prepared to defend it though. Besides, we have limited work power and lots of concrete scalability issues (e.g. around morphisms) that we have to work on.
I am not saying that it would necessarily fail. However, a local test may fail. And rather than repeating the same local test over and over in the TestSuite of any category, I'd like to have one test (say, a doctest of sage.categories.category_with_axiom) that takes into account the whole digraph and is thus reliable.
Oh, I forgot one point in my other message. Promised, I am not commenting any more on that afterwards. An advantage of a local test is that a category writer will typically run local TestSuites immediately and global tests only from time to time.
Partially. A consistency checker is something for here. A database/metaclass turning the checker into a productive tool to simplify the implementation of new categories-with-axiom is for later.
Ok. I am yet to be convinced about the very relevance of the checker (since I believe there is no local/global consistency required). However, as a side effect, by working on it you revealed unrelated little bugs. Besides it's a small project and you are the one spending time on it. So if this makes you more comfortable, go ahead.
Cheers,
Nicolas
comment:440 in reply to: ↑ 426 ; followups: ↓ 441 ↓ 449 Changed 4 years ago by
Replying to vbraun:
I also think that the code on this ticket suffers from a lot of abuse of lazy imports. They are a useful tool, but they don't absolve you from thinking about import order of modules.
Quite on the contrary, the code is being thoughtfully specific about import order. It's being explicit that, e.g., the Magmas category can be imported and is fully functional without importing Magmas.Associative (i.e. Semigroups). On the other hand, importing Semigroups really requires importing Magmas beforehand.
If you do, then you'll run into precisely the kind of hard-to-debug errors that we found. In particular, lazy imports that you have to resolve on startup are IMHO a sure sign of code smell.
Oh yes, it smells! But not for the reason you are pointing to. What's bad is that the corresponding categories are constructed on startup; and those are constructed because elsewhere in the Sage code some parents are constructed on startup.
And I precisely want to leave those lazy imports in the code so that it continues to smell and entice people to reduce the number of parents (and therefore categories) constructed on startup.
Each "at_startup=True" that will be removed will be a measure of progress.
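As an aside for readers unfamiliar with the mechanism being debated: Sage's lazy_import is its own implementation, but the general idea can be illustrated with the standard library's importlib.util.LazyLoader (the recipe below follows the importlib documentation; the lazy_import helper name is just for this sketch). The module body is deferred until the first attribute access, which is exactly what forcing resolution at startup defeats.

```python
import importlib.util
import sys

def lazy_import(name):
    """Register a module whose body only runs on first attribute access."""
    spec = importlib.util.find_spec(name)
    spec.loader = importlib.util.LazyLoader(spec.loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)  # sets up laziness; body not executed yet
    return module

json = lazy_import("json")             # cheap: nothing has been executed so far
assert json.dumps([1, 2]) == "[1, 2]"  # first attribute access triggers the real import
```

Note that laziness only moves the cost: any import order bug in the module body still surfaces, just later and at the first use site.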
You are deliberately omitting the other half of the story: If you have two unrelated classes C and D, then C.a and D.a are unrelated in Python. And you are breaking that.
No, I am not!
Ah ah, but now maybe I see the source of the confusion. In order to make sure that confusion is cleared, let me be very pedestrian, at the risk of being pedantic (I apologize in advance if I am).
Mathematically speaking, you agree that an axiom *is* naturally tied to a most basic category, right? That which provides the language necessary to express the semantic of the axiom.
Expressing this tie in the code is very relevant: it makes the developer/reader think about the semantic of the axiom and what structure it is about. And it specifies the context for which the axiom is defined (namely all subcategories).
I believe, from experience, that the category is the right place to express this tie: in particular because looking at the code of the category exposes what its structure gives as new axioms and constructions.
With that in mind, I have introduced the following definition in the documentation: a category *defines* an axiom if it's the most basic category where the axiom makes sense. This is where the axiom and its semantic should be specified. A category *implements* an axiom if it provides a category with axiom that gives additional code for its objects satisfying the axiom. Of course, a category implementing an axiom should be a subcategory of the one defining that axiom.
This is in exact parallel to classes defining (the semantic of) a method, respectively implementing a method. Attaching a semantic to a name in a class/category fixes the semantic of that name for every subclass/subcategory. And it has the exact same name-clash limitations. Two independent categories can very well *define* axioms with the same name but different semantics. But there should be no axiom with that name in their super categories (otherwise the semantic of that name would already be fixed). And there should be no common subcategory.
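The parallel with methods can be made concrete in plain Python (a toy sketch; the class names Magma and AdditiveInts are made up for illustration): the base class *defines* the semantic of a name, and a subclass *implements* it, with the semantic fixed for every subclass.

```python
from abc import ABC, abstractmethod

class Magma(ABC):
    """Defines the method: every subclass must honor this semantic."""
    @abstractmethod
    def op(self, x, y):
        """Binary operation; the meaning of the name 'op' is fixed here."""

class AdditiveInts(Magma):
    """Implements the method defined above."""
    def op(self, x, y):
        return x + y

assert AdditiveInts().op(2, 3) == 5
```

Two unrelated classes could each define their own unrelated `op`, but a class cannot consistently inherit from both if the semantics differ, mirroring the name-clash limitation for axioms.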
In practice in Sage, the category that actually defines an axiom might occasionally be a subcategory of the (mathematically speaking) most basic category C. This can typically be because C is not yet modeled, or for legacy reasons. Or possibly because one later discovers/wants to implement a better formulation of the semantic of the axiom that requires less structure to be expressed. Hence the potential for later having to move up the definition of an axiom. That's just following the usual move-up-the-class-hierarchy refactoring pattern.
Ok, off for lunch!
Cheers,
Nicolas
comment:441 in reply to: ↑ 440 ; followup: ↓ 442 Changed 4 years ago by
Replying to nthiery:
Replying to vbraun:
I also think that the code on this ticket suffers from a lot of abuse of lazy imports. They are a useful tool, but they don't absolve you from thinking about import order of modules.
Quite on the contrary, the code is being thoughtfully specific about import order. It's being explicit that, e.g., the Magmas category can be imported and is fully functional without importing Magmas.Associative (i.e. Semigroups). On the other hand, importing Semigroups really requires importing Magmas beforehand.
+1 (see my reply to Volker in comment:428).
You are deliberately omitting the other half of the story: If you have two unrelated classes C and D, then C.a and D.a are unrelated in Python. And you are breaking that.
No, I am not!
Let's see if you give the same arguments that I gave in comment:428... reading, reading, reading... Yes, you do, so +1 :)
Ok, off for lunch!
Bon appetit !
comment:442 in reply to: ↑ 441 Changed 4 years ago by
Replying to SimonKing:
+1 (see my reply to Volker in comment:428). comment:428... reading, reading, reading... Yes, you do, so +1 :)
Yes, sorry for the redundancy; I had formulated the answers in my head yesterday, and since the wording was slightly different, I decided it would not hurt as a complement.
comment:443 followup: ↓ 447 Changed 4 years ago by
Dear Nicolas,
it seems to me that I still don't see exactly where the theoretical model ends and where the implementation details start. Also, I am not sure if we talk about the same when we both say "consistency". Therefore I try to formulate what I think you claim, asking you to correct where I am wrong, and also I give you an example, asking you to explain to me how to implement it in your model and how your model detects the inconsistency in my example.
First, a plea: Could you please push your latest commits providing the latest documentation?
We agree that we have an acyclic digraph, the nodes being categories, i.e., instances of category classes, the arrows being labelled with axioms that are being applied to the start point of the arrow and result in the end point of the arrow.
We agree that we should single out a spanning tree. It is useful, e.g., for getting a proper inheritance of Python classes (think of parent and element classes).
Questions:
- Do you claim that the choice of a spanning tree doesn't matter at all? Would any spanning tree work?
- Do you claim that all theorems about categorical identities are and will in future be asymmetric, so that they give a natural choice of a spanning tree?
Now let's assume we have chosen a spanning tree, and see how that choice can be/is implemented.
For specifying a spanning tree (or rather: forest) in an acyclic digraph, it is sufficient to choose one incoming arrow for any node that has incoming arrows. Agreed? This is done by the explicit or implicit definition of C._base_category_class_and_axiom.
In addition to that (and this is where I think the implementation deviates from the theoretical model), you say that at each node one should additionally mark the outgoing arrows that belong to the spanning tree: The outgoing arrows belonging to the spanning tree result in class attributes, the other outgoing arrows result in (cached) subcategory methods. Is this a correct description? And somehow there are these <Axiom name>_extra_super_categories methods, which relate with the non-spanning-tree outgoing arrows as well, right?
This gives rise to a couple of questions:
- Of course, if you do specify the spanning tree both on incoming and outgoing arrows, then this specification should be consistent. Is this what you mean when you talk about a "consistent choice"? Then I agree that it is a purely local problem. I don't think that one needs to test it in the TestSuite, because you already test it in __classget__.
- Why do you think that one needs to specify the spanning tree twice (incoming and outgoing)?
Related with the second question: I know that in your current implementation, one can not both define DivisionRings.Finite = FiniteFields and Fields.Finite = FiniteFields, but it seems to me that this is only because you chose to give the same data twice. So, can you explain in the theoretical model why it is illegal to assign DivisionRings.Finite = FiniteFields and why it is needed to use all this Finite_extra_super_categories and cached subcategory methods magic? Or is it just because of the implementation? In this case, please elaborate (or give a pointer to a place where you did elaborate already) why providing the same data twice is important and an implementation is hardly doable without the duplication of information.
Now I come to an example.
Assume we have a category class As, and axioms B, C, D, E, F that can all be called on As(). In principle, any subset of the axioms can be successively applied to As() in any order.
If I was to implement it, I would provide class attributes As.B, As.C, ..., As.F whose values are category-classes-with-axiom. Each of these classes has _base_category_class_and_axiom = (As, 'B'/'C'/.../'F'). Do you agree that this is what one should do?
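For concreteness, the data layout just described can be sketched in plain Python (a toy; in Sage these attributes are lazy and managed by the infrastructure, and the class names here are hypothetical): each category-class-with-axiom records its single incoming spanning-tree arrow in _base_category_class_and_axiom.

```python
class As:
    """Toy base category class."""

class As_B:
    """Category-with-axiom obtained by applying axiom B to As."""
    _base_category_class_and_axiom = (As, 'B')

class As_C:
    """Category-with-axiom obtained by applying axiom C to As."""
    _base_category_class_and_axiom = (As, 'C')

# The outgoing spanning-tree arrows are the class attributes:
As.B = As_B
As.C = As_C

# Walking the recorded incoming arrow recovers the path back to the root:
base, axiom = As.B._base_category_class_and_axiom
assert base is As and axiom == 'B'
```

The sketch shows the duplication under discussion: the arrow As -> As_B is stored both as the attribute As.B (outgoing) and in _base_category_class_and_axiom (incoming).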
Next, there are two axioms that need to hold for our category-with-axiom framework: Applying axioms is commutative, and applying axioms is idempotent. Hence, we need As().B().C() == As().C().B(). In other words, As().B().C().__class__.__base__ has two incoming arrows, and we need to pick one of them (say, the one labelled "C") in order to specify a tree. In addition to that, you say that one has to do something special with As().C().B: It can not be a class attribute but should be a subcategory method or so (I am still not buying why this is needed).
So, where are we? We have some categories, that are essentially labelled by subsets (not ordered!) of axioms "B",...,"F", and by specifying a spanning tree we obtain labels that are ordered subsets of axioms "B",...,"F".
Next, William Stein proves that As().B().C() == As().E().F(). Now, we can of course change the code so that As().B().C becomes a subcategory method returning As().E().F(), and the old class As.B.C is removed.
But the point I want to make: This is not enough. We still can apply axioms "B", "C" and "D" to As().E().F(). But we should have

As().E().F().D().B().C()
  == As().B().C().E().F().D()   # commutativity
  == As().E().F().E().F().D()   # William's theorem
  == As().E().E().F().F().D()   # commutativity
  == As().E().F().D()           # idempotency
  == As().D().E().F()           # I guess you are likely to choose the
                                # spanning tree by applying axioms in
                                # lexicographic order.
So, a fix is needed! This is what I mean when I speak about consistency, and this is what I think is a difficult local-global problem.
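The concern can be made concrete with a toy model in plain Python (a sketch, not Sage code; the axiom names are the ones from the example above): since applying axioms is commutative and idempotent, model a category-with-axioms as a frozenset of axiom names, and William's theorem as the rewrite rule {B, C} -> {E, F}.

```python
def normalize(axioms):
    """Apply William's theorem wherever its left-hand side occurs."""
    axioms = frozenset(axioms)
    # Rewrite until the left-hand side {B, C} no longer occurs.
    while {'B', 'C'} <= axioms:
        axioms = (axioms - {'B', 'C'}) | {'E', 'F'}
    return axioms

# Fixing only the construction As().B().C() is not enough: the same
# left-hand side reappears after further axioms are applied, as in
# As().E().F().D().B().C() above.
assert normalize({'B', 'C'}) == frozenset({'E', 'F'})
assert normalize({'E', 'F', 'D', 'B', 'C'}) == frozenset({'D', 'E', 'F'})
```

In rewriting terms, the local-global question is whether every such rule is applied at every construction site, i.e. whether the implemented system is confluent with the intended normal forms.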
Questions:
- Assume that a developer would simply make As().B().C() return As().E().F(). This would be a bug, by the above reasoning. Would the category-with-axiom framework detect this bug and raise an error on the attempt to create As().E().F().D().B().C()? Can you demonstrate it in the above example, and can you prove that it will always detect the bug?
- What does a developer need to do to fix the code?
- How can a developer determine what needs to be done to fix the code? In the example above, I demonstrated one case that needed to be fixed. Do you really require that the developer draws the whole category digraph on a sheet of paper and traces (similar to the above reasoning) what has to be done with the chosen spanning tree after merging two of its nodes?
- Do you provide any tool that tells the developer what needs to be done?
For the record: I think commutative algebra can be used to detect that there is a bug and could make a suggestion on how to fix it. Ceterum censeo: In the long run (on a different ticket) a database of category classes can use commutative algebra to deal with the above local-global problem so that developers don't need to think about it when coding. The axioms shouldn't be provided by coding nested classes or by direct assignment of class attributes in the code, since this would be the job of the database.
comment:444 followup: ↓ 445 Changed 4 years ago by
First, I just want to state my agreement with others' opinions (e.g. Nils's comment:326, Volker's comment:327) that it would be extremely desirable to avoid using name parsing to deduce mathematical properties from Python names.
Second, I think the new method CartesianProduct.summands() is inaptly named. The things of which a product is composed (also in the categorical sense, in my experience) are normally called factors! Recall that in a category where both sums and products exist, they are usually not the same. For example, in the category of sets, the sum is the disjoint union, and in the category of rings, the sum is the tensor product. It makes sense that the components of which a product is composed should be called factors and the components of which a sum is composed should be called summands.
comment:445 in reply to: ↑ 444 ; followup: ↓ 450 Changed 4 years ago by
Replying to pbruin:
Second, I think the new method CartesianProduct.summands() is inaptly named. The things of which a product is composed (also in the categorical sense, in my experience) are normally called factors! Recall that in a category where both sums and products exist, they are usually not the same. For example, in the category of sets, the sum is the disjoint union, and in the category of rings, the sum is the tensor product. It makes sense that the components of which a product is composed should be called factors and the components of which a sum is composed should be called summands.
Granted, it's not perfect, but it's consistent with the other pre-existing summand_ methods in the context of cartesian products. I'd be happy to change it, but then we should change all of them at once for consistency. IMHO this would be best handled in a follow-up ticket since this one is already way too big. I am happy adding a warning about the probable name change in the documentation though.
Also, I would like something different from factors, since we will also use it in the context of monoids (like for making cartesian products thereof), and factors would be ambiguous. Any suggestions?
Cheers,
Nicolas
comment:446 Changed 4 years ago by
cartesian_factors?
comment:447 in reply to: ↑ 443 ; followup: ↓ 448 Changed 4 years ago by
Replying to SimonKing:
it seems to me that I still don't see exactly where the theoretical model ends and where the implementation details start. Also, I am not sure if we talk about the same when we both say "consistency". Therefore I try to formulate what I think you claim, asking you to correct where I am wrong, and also I give you an example, asking you to explain to me how to implement it in your model and how your model detects the inconsistency in my example.
Great.
First, a plea: Could you please push your latest commits providing the latest documentation?
It's done (in my branch u/nthiery/ticket/10963). Ah, I had not recompiled the doc recently on sagemath.org; done.
We agree that we should single out a spanning tree. It is useful, e.g., for getting a proper inheritance of Python classes (think of parent and element classes).
More importantly, it's useful for the algorithmics.
Questions:
- Do you claim that the choice of a spanning tree doesn't matter at all? Would any spanning tree work?
Yes. Up to one extra constraint: if Cs().A() coincides with Ds().A(), with Ds a subcategory of Cs, then the category with axiom should be in Ds.A.
- Do you claim that all theorems about categorical identities are and will in future be asymmetric, so that they give a natural choice of a spanning tree?
Yes. Well, at least, all the use cases I have met or foreseen so far are of this form.
In addition to that (and this is where I think the implementation deviates from the theoretical model), you say that at each node one should additionally mark the outgoing arrows that belong to the spanning tree: The outgoing arrows belonging to the spanning tree result in class attributes, the other outgoing arrows result in (cached) subcategory methods. Is this a correct description?
Let me refine it a bit: if Cs defines an axiom A, and Ds is a subcategory, then Ds().A always results in the method Cs.SubcategoryMethods.A whose job is to add the axiom. And if Ds further implements A in Ds.A, then the call Ds().A() will use that class in the process.
And somehow there are these <Axiom name>_extra_super_categories methods, which relate with the non-spanning-tree outgoing arrows as well, right?
In general, the extra_super_categories method can be used to provide additional inheritance information that can't be derived automatically by the system, e.g. is not a direct consequence of the commutativity of axioms.
This gives rise to a couple of questions:
- Of course, if you do specify the spanning tree both on incoming and outgoing arrows, then this specification should be consistent. Is this what you mean when you talk about a "consistent choice"? Then I agree that it is a purely local problem. I don't think that one needs to test it in the TestSuite, because you already test it in __classget__.
By consistent I meant that the computed results are what we expect mathematically (including commutativity of axioms, ...).
 Why do you think that one needs to specify the spanning tree twice (incoming and outgoing)?
At the level of the classes and the code, the only algorithmically relevant link is that of the form Sets.Finite = FiniteSets. The reverse link is only there to allow for calling FiniteSets() as syntactic sugar for Sets().Finite(). Or Fields() for DivisionRings().Commutative().
Of course, at the level of the categories, Sets().Finite() needs to have a link to Sets() for the algorithmics to work, but that link can be set up later at initialization.
Related with the second question: I know that in your current implementation, one can not both define DivisionRings.Finite = FiniteFields and Fields.Finite = FiniteFields, but it seems to me that this is only because you chose to give the same data twice. So, can you explain in the theoretical model why it is illegal to assign DivisionRings.Finite = FiniteFields and why it is needed to use all this Finite_extra_super_categories and cached subcategory methods magic? Or is it just because of the implementation? In this case, please elaborate (or give a pointer to a place where you did elaborate already) why providing the same data twice is important and an implementation is hardly doable without the duplication of information.
For the subcategory method magic, the answer is easy: If Cs defines the axiom A, then I want Ds().A() to work for every subcategory Ds, whether Ds implements A or not. Putting A in Cs.SubcategoryMethods models naturally that Cs defines the axiom for all subcategories.
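This "defined once, usable in every subcategory" behavior can be caricatured in plain Python (a toy with hypothetical names; Sage's actual mechanism goes through SubcategoryMethods and __classget__, not this sketch): the method lives on the defining class, so it works on every subclass, while a subclass may still contribute its own refinement.

```python
class Cs:
    """Defines the axiom-like method A for itself and all subclasses."""
    def A(self):
        # A subclass that "implements" the axiom exposes a class
        # attribute A_impl (a made-up hook for this toy).
        impl = getattr(type(self), 'A_impl', None)
        if impl is not None:
            return impl
        return ('generic A of', type(self).__name__)

class Ds(Cs):
    """Implements the axiom: contributes its own refinement."""
    A_impl = 'Ds-specific A'

class Es(Cs):
    """Does not implement the axiom, yet Es().A() still works."""

assert Ds().A() == 'Ds-specific A'
assert Es().A() == ('generic A of', 'Es')
```

The point mirrored here is only the dispatch shape: calling A() never fails on a subcategory, and refinements are picked up when present.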
About not allowing DivisionRings.Finite = Fields.Finite, this is about the algorithm (so not the model), which is highly recursive; without that assumption, one needs to detect the situation to avoid running into a recursion loop, and the detection is tricky. I will elaborate on it in the documentation and let you know when done.
Now I come to an example.
Great.
I am going to play with it now and report. Did you have any specific ideas on which subsets of axioms are actually implemented in your example? Otherwise, I'll pick a couple of typical situations.
Cheers,
Nicolas
comment:448 in reply to: ↑ 447 Changed 4 years ago by
Replying to nthiery:
First, a pledge: Could you please push your latest commits providing the latest documentation?
It's done (in my branch u/nthiery/ticket/10963). Ah, I had not recompiled the doc recently on sagemath.org; done.
This is not the branch associated with this ticket. That's why I couldn't find it.
comment:449 in reply to: ↑ 440 ; followup: ↓ 468 Changed 4 years ago by
Replying to nthiery:
Oh yes, it smells! But not for the reason you are pointing to. What's bad is that the corresponding categories are constructed on startup;
Well a few categories are always going to be constructed on startup, the majority shouldn't.
And I precisely want to leave those lazy imports in the code. Each "at_startup=True" that gets removed will be a measure of progress.
I'm sorry, I overlooked the comment in your code that stated that you want to get rid of those imports on startup. Oh, no comment? In that case I'm sorry for not having any telepathic abilities to read your mind ;)
Mathematically speaking, you agree that an axiom *is* naturally tied to a most basic category, right? Namely, the one that provides the language necessary to express the semantics of the axiom.
I agree that there is, mathematically speaking, always some join category that would be the most basic. But there is no need to require Sage developers to implement a common supercategory by hand. Of course you can, but the whole point of this ticket is to reduce the number of categories that you must construct by hand.
comment:450 in reply to: ↑ 445 ; followup: ↓ 465 Changed 4 years ago by
Replying to nthiery:
Granted, it's not perfect, but it's consistent with the other preexisting summand_* methods in the context of cartesian products. I'd be happy to change it, but then we should change all of them at once for consistency. IMHO this would be best handled in a follow-up ticket, since this one is already way too big. I am happy to add a warning about the probable name change in the documentation, though.
The existing summand_* methods I could find are

    CartesianProduct.summand_projection()
    Sets.CartesianProducts.ParentMethods.summand_projection()
    Sets.CartesianProducts.ElementMethods.summand_projection()
    Sets.CartesianProducts.ElementMethods.summand_split()
    CombinatorialFreeModule_CartesianProduct.summand_embedding()
    CombinatorialFreeModule_CartesianProduct.summand_projection()

Maybe the quickest solution is to insert better-named aliases for these, rename the method summands() introduced here, and later deprecate summand_projection() and summand_split() in a different ticket.

My first reflex would be to rename summand_projection() to projection() and summand_split() to tuple(). If this is too conflict-prone, maybe using the prefix cartesian_ suggested by Simon would be a solution?
For products of modules (the last two methods in the above list), calling the components "summands" is OK if and only if there are only finitely many summands/factors; in that case the product and sum coincide, since modules form an additive category.
Also, I would like something different from factors, since we will also use it in the context of monoids (like for making cartesian products thereof), and factors would be ambiguous.
I'm confused; isn't a Cartesian product of monoids just the Cartesian product of the underlying sets, with the obvious monoid structure? Or do you mean that a generic monoid will have a factors() method that does something unrelated?
comment:451 Changed 4 years ago by
I've started a branch that separates the axioms into independent classes as an alternative. It is posted at #15701 so we can make use of the git/trac integration features.
comment:452 Changed 4 years ago by
Off-trac, Nicolas has shown me what happens in the example sketched in comment:443. I think what he has shown me is worth putting into the docs, and it also indicates that Nicolas' approach is able to deal with nontrivial consequences of "merging" axioms. If Nicolas is not faster, I'll comment more on it later.
comment:453 followup: ↓ 454 Changed 4 years ago by
Here is the announced example. Since Nicolas did not post here, I do. But it is his example.
Recall: I wanted to start with the category As(), with available axioms "B", ..., "F". In the first place, there should be no relations between the axioms. Moreover, I somehow want As().B().C() to be implemented using a dedicated class (just as FiniteFields() uses a dedicated class), and no join category should be needed at this point.
This can be done by a basic category Bases that defines the axioms:
from sage.categories.category_singleton import Category_singleton
from sage.categories.category_with_axiom import axiom, CategoryWithAxiom
import sage.categories.category_with_axiom
sage.categories.category_with_axiom.all_axioms += ("B", "C", "D", "E", "F")

# This is just here so that As is not the category that defines the axioms.
class Bases(Category_singleton):
    def super_categories(self):
        return [Objects()]

    class SubcategoryMethods:
        B = axiom("B")
        C = axiom("C")
        D = axiom("D")
        E = axiom("E")
        F = axiom("F")

    class B(CategoryWithAxiom): pass
    class C(CategoryWithAxiom): pass
    class D(CategoryWithAxiom): pass
    class E(CategoryWithAxiom): pass
    class F(CategoryWithAxiom): pass
and then, As becomes
class As(Category_singleton):
    def super_categories(self):
        return [Bases()]

    class B(CategoryWithAxiom):
        class C(CategoryWithAxiom):
            pass

    class E(CategoryWithAxiom):
        class F(CategoryWithAxiom):
            pass

    class D(CategoryWithAxiom):
        pass
Simple enough! And nicely, the commutativity and idempotency of applying axioms is taken care of by the system:
sage: As().C().B() is As().B().C().B()
True
sage: type(As().B().C())
<class '__main__.B.C_with_category'>
Of course, in the above implementation, one gets a join category if D and another axiom are involved:
sage: type(As().B().D())
<class 'sage.categories.category.JoinCategory_with_category'>
And now, I assume that some theorem says that As().B().C() == As().E().F(). How to modify the above code? And in particular: will the system automatically take care of the implications of the theorem? Again, I want a dedicated class for As().B().C().
Here is one way to modify the code (calling the result As2 rather than As):
class As2(Category_singleton):
    def super_categories(self):
        return [Bases()]

    class B(CategoryWithAxiom):
        class C(CategoryWithAxiom):
            def extra_super_categories(self):
                return [Bases().E(), Bases().F()]

    class E(CategoryWithAxiom):
        def F_extra_super_categories(self):
            return [Bases().B(), Bases().C()]
What does this code tell? Well, it tells that As2().B().C() uses a dedicated class, and that it additionally satisfies the axioms E and F, by providing extra super categories; and that As2().E().F() has the additional axioms B and C. Note the asymmetry in the definition: it tells that As2().E().F() will use the class As2.B.C, and the output of F_extra_super_categories tells how to find this class.
What I don't like is that we have a method F_extra_super_categories, whose name is then mangled.
In the first moment I thought that the asymmetry is not nice. However, on second thought, we must have asymmetry! After all, we want As.B.C to be used for As().E().F() and not the other way around.
I believe that the above solution is simple enough, and it does address my concern: The following works out of the box.
sage: As2().B().C()           # it is recognised that axioms B, C, E and F hold
Category of b c e f as2
sage: type(As2().E().F())     # the dedicated class is used
<class '__main__.B.C_with_category'>
sage: As2().B().F().D().E().C() is As2().B().C().D()
True
The last line means that the system finds nontrivial consequences of the theorem, together with idempotency and commutativity.
I don't think that the changes needed for implementing the theorem are totally obvious: one has to learn rules and naming conventions. However, once one has learnt these rules (which of course must be well documented), implementing the theorem in the code seems to be fairly straightforward.
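The collapse in the last line can be modeled in a few lines of plain Python (a toy sketch, not Sage's actual join algorithm; the rules and names are only illustrative): close a set of axioms under the deduction rules and compare the closures.

```python
# Toy model of deduction: {B, C} implies {E, F} and conversely, as in the
# theorem As2().B().C() == As2().E().F() above. RULES is illustrative only.
RULES = [
    (frozenset({"B", "C"}), frozenset({"E", "F"})),
    (frozenset({"E", "F"}), frozenset({"B", "C"})),
]

def closure(axioms):
    """Smallest superset of `axioms` closed under RULES."""
    closed = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return frozenset(closed)

# Applying the axioms in any order yields the same closed set, which is
# why the two category expressions above are recognised as equal:
assert closure({"B", "F", "D", "E", "C"}) == closure({"B", "C", "D"})
```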
comment:454 in reply to: ↑ 453 ; followup: ↓ 455 Changed 4 years ago by
Replying to SimonKing:
Here is the announced example. Since Nicolas did not post here, I do.
Thanks Simon!
But it is his example.
Or more precisely, my implementation of your example :)
(which of course must be well documented)
There is a (not yet completely finalized) section about this in the documentation of axioms now (see Deduction Rules). Let me know if it's good enough.
Speaking of which: I have just pushed my work of the day. The join algorithm is described and partially formalized; however its companion _with_axiom is yet to be done and they work hand in hand. Yet this should give some hints about why it works.
This gave me the occasion to look back at the code, and I did some small simplifications; mostly clearing out a couple small features of the join method that were needed at some point and not anymore.
Pushed on u/nthiery/ticket/10963; doc recompiled on sagemath.
Cheers,
Nicolas
comment:455 in reply to: ↑ 454 ; followup: ↓ 459 Changed 4 years ago by
Replying to nthiery:
Pushed on u/nthiery/ticket/10963; doc recompiled on sagemath.
How does it relate with the branch that is attached to this ticket? Does the attached branch contain commits that are not in your branch (e.g., my commit to make the lazy imports even safer, by using as_name)?
At some point, the work branch should be attached to the ticket, I suppose.
comment:456 followups: ↓ 457 ↓ 460 Changed 4 years ago by
I agree with Simon's explanations. And it illustrates the point that I'm trying to make: if you show the code to a Python programmer, he'll be quite astonished that it does what it does, since it seemingly consists of nothing but a pile of apparently unrelated inner classes.
The question about unnecessary breaking of symmetry already arises at
class As(Category_singleton):
    class B(CategoryWithAxiom):
        class C(CategoryWithAxiom):
            pass
why As.B.C and not As.C.B? The only difference is the internal representation of the class. In particular, this explicitly specified order is not used in the printing order of the category with axiom.
The asymmetry in specifying the relations comes in addition (and is unrelated) to the print sort order.
The only "error message" if you get the asymmetry wrong will be an infinite recursion.
comment:457 in reply to: ↑ 456 ; followups: ↓ 458 ↓ 461 ↓ 466 Changed 4 years ago by
Replying to vbraun:
I agree with Simon's explanations. And it illustrates the point that I'm trying to make,
Hence, you agree with my explanations, but not with my conclusions (namely that the example demonstrates that only the theorem needs to be implemented, but not its implications, and that implementing the theorem is fairly straightforward after learning the rules).
if you show the code to a Python programmer then he'll be quite astonished that it does what it does since it seemingly consists only of all a pile of apparently unrelated inner classes.
What would the same programmer say about the abc module?
The question about unnecessary breaking of symmetry already arises at
class As(Category_singleton):
    class B(CategoryWithAxiom):
        class C(CategoryWithAxiom):
            pass

why As.B.C and not As.C.B?
I said that it only seemed unnecessary to me at first! And after all, the necessity to choose a spanning tree makes it fairly obvious that one has to make choices at some point.
And a more symmetric solution is possible, too:
from sage.misc.lazy_attribute import lazy_class_attribute

class CBAs(CategoryWithAxiom):
    @lazy_class_attribute
    def _base_category_class_and_axiom(cls):
        return (As.B, "C")

class As(Category_singleton):
    def super_categories(self):
        return [Bases()]

    class B(CategoryWithAxiom):
        C = CBAs

    class C(CategoryWithAxiom): pass
    class E(CategoryWithAxiom): pass
    class F(CategoryWithAxiom): pass
    class D(CategoryWithAxiom): pass
The only asymmetry is the choice of a spanning tree, as defined by _base_category_class_and_axiom, and the corresponding thing to do at the starting point of the arrow that belongs to the spanning tree.
And it still works:
sage: As().B().C() is As().C().B()
True
sage: As().B().C()
Category of c b as
sage: type(_)
<class '__main__.CBAs_with_category'>
So, there is an asymmetry, but it is necessary.
The only "error message" if you get the asymmetry wrong will be an infinite recursion.
Yes, and this is unfortunate.
comment:458 in reply to: ↑ 457 ; followup: ↓ 463 Changed 4 years ago by
Replying to SimonKing:
implementing the theorem is fairly straightforward after learning the rules
Well I agree that it works, but I don't think the implementation of the relation As+B+C = As+E+F is as concise as it could be phrased.
On the plus side, the extra_super_categories mechanism (I agree with the sentiment that there ought to be better names) is more general, in that it allows one to express implications (C1+A1+A2 => C2+A3) in addition to relations.
What would the same programmer say about the abc module?
It precisely requires you to link, in code, your ABC to the virtual subclass:
FooABC.register(Foo)
assert isinstance(Foo(), FooABC)
PEP 3119 could have said that Foo is automatically registered as virtual subclass of FooABC if there is a class of that name. This would have saved a line of code, but was afaik not even considered for the standard.
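For reference, a self-contained, runnable version of the registration idiom quoted above (standard library only):

```python
# abc requires the link between the ABC and its virtual subclass to be
# stated explicitly in code, via register(); nothing is guessed from names.
import abc

class FooABC(abc.ABC):
    pass

class Foo:
    pass

FooABC.register(Foo)  # explicit registration
assert isinstance(Foo(), FooABC)
assert issubclass(Foo, FooABC)
```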
I said that it only seemed unnecessary to me at first! And after all, the necessity to choose a spanning tree makes it fairly obvious that one has to make choices at some point.
Yes, I agree that one must make choices. But some of the choices are inconsequential: why should I care about A.B.C vs A.C.B? Why am I forced to pick? There is already a total order on axioms implemented; can't that be used to break the symmetry?
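What using the total order could look like, as a toy sketch (plain Python, not Sage code; the axiom table is illustrative): normalize any set of axioms to a canonical tuple, so that A.B.C and A.C.B never need to be distinguished by hand.

```python
# A global total order on axioms, as with Sage's all_axioms tuple
# (the contents here are illustrative).
ALL_AXIOMS = ("B", "C", "D", "E", "F")

def canonical_key(axioms):
    """Sort the axioms by their position in the global order."""
    return tuple(sorted(axioms, key=ALL_AXIOMS.index))

# {B, C} and {C, B} normalize to the same key, so a single class could be
# looked up under it, removing the need to pick As.B.C over As.C.B:
assert canonical_key({"C", "B"}) == canonical_key({"B", "C"}) == ("B", "C")
```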
comment:459 in reply to: ↑ 455 Changed 4 years ago by
Replying to SimonKing:
Replying to nthiery:
Pushed on u/nthiery/ticket/10963
How does it relate with the branch that is attached to this ticket? Does the attached branch contain commits that are not in your branch (e.g., my commit to make the lazy imports even safer, by using as_name)?
I branched off right before that commit; actually, that commit is the single reason for my branching off: since it was still to be discussed, I did not want to merge it right away into my changes, but neither did I want it to look like I was discarding it by having the ticket branch not contain that commit.
Cons for the commit: it imposes a bit more redundant information on the developer in many spots, when there is an alternative three-line, localized, robust fix (no risk of forgetting a spot).
Pros for the commit: the developer might get used to not having to specify as_name for similar lazily imported nested classes, and get surprised in a different context.
My personal preference is without the commit.
At some point, the work branch should be attached to the ticket, I suppose.
Definitely!
Cheers,
Nicolas
comment:460 in reply to: ↑ 456 Changed 4 years ago by
Replying to vbraun:
I agree with Simon's explanations. And it illustrates the point that I'm trying to make, if you show the code to a Python programmer then he'll be quite astonished that it does what it does since it seemingly consists only of all a pile of apparently unrelated inner classes.
Well, yes: there is no standard mixin mechanism in Python; so, if we want to have some mixin mechanism (and we agree that we want some, right?), then whatever the syntax for that mechanism is, a Python programmer will need to learn it.
Cheers,
Nicolas
comment:461 in reply to: ↑ 457 ; followup: ↓ 462 Changed 4 years ago by
I said that it only seemed unnecessary to me at first! And after all, the necessity to choose a spanning tree makes it fairly obvious that one has to make choices at some point.
Perhaps I understand nothing of what is happening here, and I am quite prepared to hear it, but in my own pagan way of doing things, and as you seem to be associating functions to sets of axioms, I wondered why you don't associate functions to ... sets of axioms?
It looks like your problem is that the user should "decide" whether the function is a function of A.B or a function of B.A, when what you have in mind is a function of {A,B}. Why don't you have a syntax which takes as information a set of axioms (and a category if needed), and let some code decide automatically where it should be put (pick your spanning tree)?
Something like the fancy stuff you like: a metaclass which creates a class from its SET of axioms, and everything? This class would not appear as a subclass of any category with axiom; it would just stand on its own somewhere, and be copied where it belongs by some code the coder does not have to think about?
Nathann
P.S.: A "spanning tree" in a dag is usually not called a spanning tree but a spanning out-arborescence. We just don't like "trees" to be directed :P
comment:462 in reply to: ↑ 461 ; followups: ↓ 464 ↓ 472 Changed 4 years ago by
Replying to ncohen:
It looks like your problem is that the user should "decide" whether the function is a function of A.B or a function of B.A, when what you have in mind is a function of {A,B}. Why don't you have a syntax which takes as information a set of axioms (and a category if needed), and let some code decide automatically where it should be put (pick your spanning tree)?
How could such a syntax look? Of course, we can have a separate class ABs, and then let both As.B and Bs.A point to it. But even when you write down the name ABs, you already have a choice to make: after all, why didn't you choose BAs instead of ABs?
Something like the fancy stuff you like, a metaclass which creates a class from its SET of axioms, and everything ?
Well, this is my suggestion for the future.
P.S. : A "spanning tree" in a dag is usually not called a spanning tree but a spanning outarborescence. We just don't like "trees" to be directed
:P
Really? I always thought of rooted trees (and that's what we have here) as being directed.
comment:463 in reply to: ↑ 458 Changed 4 years ago by
Replying to vbraun:
Well I agree that it works, but I don't think the implementation of the relation As+B+C = As+E+F is as concise as it could be phrased.
On the plus side, the
extra_super_categories
mechanism is more general in that it allows to express implications (C1+A1+A2 => C2+A3) in addition to relations.
Or even C1+A1+A2 => C3.
Another plus side is that it gives a natural spot (the docstring of the method) to document and test the modeling of the theorem.
(I agree with the sentiment that there ought to be better names)
Definitely! We have been using extra_super_categories since 2009.
If someone has a suggestion for a better name, it should be easy to change it now, or later while maintaining temporary backward compatibility.
It precisely requires you to link, in code, your ABC to the virtual subclass:
FooABC.register(Foo)
assert isinstance(Foo(), FooABC)

PEP 3119 could have said that Foo is automatically registered as a virtual subclass of FooABC if there is a class of that name. This would have saved a line of code, but was AFAIK not even considered for the standard.
And I would have definitely voted against it :)
Now let me recall that the guessing based on the name only occurs to enable the alias FiniteSets() -> Sets().Finite(). If you call Sets().Finite() directly, it's not used at all. So it's just about implementing syntactic sugar, not about semantics.
In fact, I am not even sure we want to have that syntactic sugar at all in the long run. My main motivation for implementing it was backward compatibility. Later on, I definitely want to remove quite a few of the names like GradedAlgebrasWithBasis from the global name space. Probably FiniteSets / FiniteGroups / ... too. And possibly completely deprecate the idiom FiniteSets(). Of course, we definitely want to keep Fields(), Groups(), ..., but those don't use the guessing anyway.
Yes, yes Volker, that intention was not yet spelled out explicitly; I once again relied on your telepathic abilities :) I'll now add a comment in the documentation about the recommended usage of Sets().Finite() and the potential deprecation of FiniteSets().
By the way: I don't want to handle this deprecation phase right now, because there is already enough on the plate of this ticket, and because I believe we need to have people play around with the code before deciding how far we want to take the deprecation.
Altogether, I believe that for implementing syntactic sugar, which furthermore might be temporary or for which we can later seek another solution, a little guesswork is not great but OK.
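The name-based guessing can be sketched in a few lines (a toy model, not Sage's actual resolution code; the axiom table below is illustrative): split the CamelCase class name into a known axiom prefix and the base category name.

```python
# Illustrative axiom table; Sage keeps the real one in all_axioms.
KNOWN_AXIOMS = ("Finite", "Infinite", "Commutative")

def base_category_and_axiom(name):
    """Guess (base category name, axiom) from a name like 'FiniteSets'."""
    for axiom in KNOWN_AXIOMS:
        if name.startswith(axiom):
            return name[len(axiom):], axiom
    raise ValueError("cannot guess a base category for %r" % name)

assert base_category_and_axiom("FiniteSets") == ("Sets", "Finite")
assert base_category_and_axiom("CommutativeRings") == ("Rings", "Commutative")
```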
Cheers,
Nicolas
comment:464 in reply to: ↑ 462 ; followup: ↓ 467 Changed 4 years ago by
Replying to SimonKing:
Replying to ncohen:
It looks like your problem is that the user should "decide" whether the function is a function of A.B or a function of B.A, when what you have in mind is a function of {A,B}. Why don't you have a syntax which takes as information a set of axioms (and a category if needed), and let some code decide automatically where it should be put (pick your spanning tree)?
Yes, I agree that this is a direction to explore (e.g. following Simon's database suggestions)! But Simon and I agree that this is for a later iteration, as it requires non-trivial experimentation and implementation work.
Now I'd like to raise a point to mitigate the discussion. In most practical cases, we want to work incrementally. You'd rather describe FiniteFields as Fields with the axiom Finite than as Magmas and AdditiveMagmas with all of the following axioms (besides Distributive):
sage: Fields().axioms()
frozenset(['Division', 'AdditiveUnital', 'NoZeroDivisors', 'Commutative',
           'AdditiveInverse', 'AdditiveAssociative', 'Unital',
           'AdditiveCommutative', 'Associative'])
Ok, a few of the above axioms are redundant but still, you see my point, right?
In practice, among the 59 categories with axioms that are currently implemented in Sage, almost all admitted a natural choice of base category and single axiom to add. Only in a few cases did I really make a choice that felt mathematically arbitrary. That was essentially in this snippet of DistributiveMagmasAndAdditiveMagmas:
class AdditiveAssociative(CategoryWithAxiom):
    class AdditiveCommutative(CategoryWithAxiom):
        class AdditiveUnital(CategoryWithAxiom):
            class AdditiveInverse(CategoryWithAxiom):
                Associative = LazyImport('sage.categories.rngs', 'Rngs', at_startup=True)
            class Associative(CategoryWithAxiom):
                Unital = LazyImport('sage.categories.semirings', 'Semirings', at_startup=True)
And it's not even much worse than a syntax like

{AdditiveAssociative, AdditiveCommutative, AdditiveUnital} => ...
Really? I always thought of rooted trees (and that's what we have here) as being directed.
True, we should have added "rooted" everywhere in our discussion. But if Nathann believes "outarborescence" is better, we could change to that too.
Cheers,
Nicolas
comment:465 in reply to: ↑ 450 ; followup: ↓ 474 Changed 4 years ago by
Replying to pbruin:
The existing summand_* methods I could find are

    CartesianProduct.summand_projection()
    Sets.CartesianProducts.ParentMethods.summand_projection()
    Sets.CartesianProducts.ElementMethods.summand_projection()
    Sets.CartesianProducts.ElementMethods.summand_split()
    CombinatorialFreeModule_CartesianProduct.summand_embedding()
    CombinatorialFreeModule_CartesianProduct.summand_projection()

Maybe the quickest solution is to insert better-named aliases for these, rename the method summands() introduced here, and later deprecate summand_projection() and summand_split() in a different ticket.

My first reflex would be to rename summand_projection() to projection() and summand_split() to tuple().
If you believe this is urgent enough to belong to #10963, then please go ahead, and I'll review it.
If this is too conflict-prone, maybe using the prefix cartesian_ suggested by Simon would be a solution?
Yes, we definitely want long explicit names to avoid conflicts.
For products of modules (the last two methods in the above list), calling the components "summands" is OK if and only if there are only finitely many summands/factors; in that case the product and sum coincide, since modules form an additive category.
Yup. CartesianProducts only covers finite cartesian products / finite sums, so that's OK. Hmm, this was apparently not spelled out explicitly, but all the code makes this assumption. I just fixed that.
I'm confused; isn't a Cartesian product of monoids just the Cartesian product of the underlying sets, with the obvious monoid structure?
Yes!
Or do you mean that a generic monoid will have a factors() method that does something unrelated?
To avoid any confusion: I mean that if you construct a monoid M as a cartesian product of other monoids, you would get an M.factors() method which would have nothing to do with the concept of factorization in the monoid M.
Cheers,
Nicolas
comment:466 in reply to: ↑ 457 Changed 4 years ago by
Replying to SimonKing:
The only "error message" if you get the asymmetry wrong will be an infinite recursion.
Yes, and this is unfortunate.
Agreed: it could be usefully complemented by some hint at what the source of the problem could be. That being said, the backtrace is pretty useful to explore where the issue comes from (e.g. with post mortem debugging), and we would not want to completely suppress it.
Anyway, that's an implementation detail that most likely can be improved later on by adding more sanity checks.
comment:467 in reply to: ↑ 464 ; followup: ↓ 470 Changed 4 years ago by
Nathann raises a point that confuses me, too. Why is there a single defining axiom, the self._axiom attribute? It would be much more natural to have a list of defining axioms. For much of the work you are only looking at the list of implied axioms() anyway.
Replying to nthiery:
Well, yes: there is no standard mixin mechanism in Python
There is: multiple inheritance.
And I would have definitely voted against it :)
You are of course entitled to your own opinion, but if you want to write a Python library that others can use then "explicit is better than implicit" is not up for vote. That train has long departed...
class AdditiveAssociative(CategoryWithAxiom):
    class AdditiveCommutative(CategoryWithAxiom):
        class AdditiveUnital(CategoryWithAxiom):
            class AdditiveInverse(CategoryWithAxiom):
                Associative = LazyImport('sage.categories.rngs', 'Rngs', at_startup=True)
And I hope we can all agree that fivefold nested classes are an abomination ;)
Replying to SimonKing:
How could such a syntax look? Of course, we can have a separate class ABs, and then let both As.B and Bs.A point to it. But even when you write down the name ABs, you already have a choice to make: after all, why didn't you choose BAs instead of ABs?
This is just a naming choice; it does not change the actual code (unless you go out of your way to make the code depend on the class name). You can even call it Both_A_and_B, or _Implementation if you want to stress the symmetry. It doesn't require you to nest class definitions.
comment:468 in reply to: ↑ 449 ; followup: ↓ 469 Changed 4 years ago by
Replying to vbraun:
Well a few categories are always going to be constructed on startup
In principle, we could imagine not having a single one; but I can live with a couple.
I'm sorry, I overlooked the comment in your code that stated that you want to get rid of those imports on startup. Oh, no comment? In that case I'm sorry for not having any telepathic abilities to read your mind ;)
You really should work on your sightseer skills;
Hi Nicolas,
How far along is this patch? I just saw that the UCF patch depends on this.
I didn't actually figure out how it depends; I just get a trivial rebase, and then an import loop which wasn't easily fixable. The problem was my use of CombinatorialFreeModule...
Thx, Christian