#24261 closed enhancement (fixed)
py3: add py2 and py3 doctest flags
Reported by:  embray  Owned by:  

Priority:  major  Milestone:  sage-8.2 
Component:  python3  Keywords:  
Cc:  chapoton  Merged in:  
Authors:  Erik Bray, Jeroen Demeyer  Reviewers:  Frédéric Chapoton, Erik Bray, Jeroen Demeyer 
Report Upstream:  N/A  Work issues:  
Branch:  12c300a (Commits, GitHub, GitLab)  Commit:  
Dependencies:  Stopgaps: 
Description
This adds two new flags that can be placed next to doctests: # py2 and # py3. If marked # py2, a test will only be run on Python 2, and likewise for # py3.
Generally this should be avoided in favor of making a test work on both in some way or another, but there are some cases where it simply doesn't make sense. For example, this test in sage.doctest.control:
640        sage: DD = DocTestDefaults(sagenb=True)
641        sage: DC = DocTestController(DD, [])
642        sage: DC.add_files()  # py2
643        Doctesting the Sage notebook.
644        sage: DC.files[0][-6:]  # py2
645        'sagenb'
This doesn't work on Python 3 since we don't even install the Sage notebook on Python 3 (at least not currently).
The relevant code being tested there should also be updated to handle Python 3 better, but even so, there's no practical way to make the test make sense on Python 3 until/unless sagenb is supported on Python 3. I suspect other cases like this will come up on occasion, so it's useful to have these flags.
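As a rough illustration of how a doctest runner could honor such version tags, here is a minimal sketch in plain Python. All names here are hypothetical; this is not Sage's actual implementation, just the idea of skipping an example whose tag does not match the running interpreter:

```python
# Hypothetical sketch: scan an example's source line for a "# py2" or
# "# py3" tag and decide whether it should run under this interpreter.
import re
import sys

PY_TAG = re.compile(r'#\s*py([23])\b')  # matches "# py2" or "# py3"

def runs_on_this_python(example_source):
    """Return True if the example should run under the current Python."""
    m = PY_TAG.search(example_source)
    if m is None:
        return True  # untagged examples run everywhere
    return int(m.group(1)) == sys.version_info[0]
```

Under Python 3, a line tagged `# py2` would then be skipped while untagged and `# py3` lines still run.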
Change History (38)
comment:1 Changed 3 years ago by
 Status changed from new to needs_review
comment:2 followups: ↓ 3 ↓ 4 Changed 3 years ago by
Why not # optional - python2 or # optional - python3? That's reusing an existing system for optional tests.
comment:3 in reply to: ↑ 2 Changed 3 years ago by
Replying to jdemeyer:
Why not # optional - python2 or # optional - python3? That's reusing an existing system for optional tests.
And there are already existing instances waiting for that.
comment:4 in reply to: ↑ 2 ; followup: ↓ 14 Changed 3 years ago by
Replying to jdemeyer:
Why not # optional - python2 or # optional - python3? That's reusing an existing system for optional tests.
I considered something like that, but it seems to have different semantics. Currently "# optional - pkgname" implies that a package is installed/available. Here we don't care if python2/3 is available; we care about which Python version is currently running the tests.
Rather than making a special case, it's simpler and clearer, I think, to just have specific tags for this (there's also prior precedent, such as using # py3 in coverage to mark lines that should be ignored in the coverage analysis).
comment:5 Changed 3 years ago by
I think it would be better to have # py2 and # py3 behave like # 32-bit and # 64-bit and be specific to the output. I guess there is also the question of how many of our doctest inputs are going to be specific to Python 2. Do any of you have an impression? I think it might be marginal at worst, in which case we can just build into the doctests an explicit check of Python 2 vs 3 as part of the doctest itself.
comment:6 Changed 3 years ago by
I wouldn't mind changing it to apply to the output but not skip the test. That would actually be better, since if the behavior on Python 3 changes in these cases (e.g. sagenb works, to use my previous example) then we'd want to see that reflected in the test results.
I think a stronger possibility is Python 3-only inputs, such as doing things with annotations and other Python 3-only syntax. So maybe having these tags work on both output and input would be good?
comment:7 Changed 3 years ago by
I would not oppose that, but I don't think we do anything like that elsewhere. So it might require some invasive changes and might cause some slight confusion (but that is debatable). I think we would be better off having a separate marker for Python 3 input.
comment:8 Changed 3 years ago by
Adding tags on output lines would not require invasive changes, but I do think it's slightly more confusing. Whereas the use of comments on input lines is consistent with Python syntax, sticking comments in output lines is a bit ad hoc and confusing. But we also already have precedent for that in the "32-bit/64-bit" case, as you pointed out.
comment:9 Changed 3 years ago by
 Status changed from needs_review to needs_work
I just realized there's a bug in how this is implemented, where tests containing these tags are skipped regardless.
comment:10 Changed 3 years ago by
Implementation bug aside, I've been working on fixing the sageinspect module, and that's increasingly convinced me that having a concise way to run specific test cases on either Python 2 or Python 3 only is useful, especially in that module.
comment:11 Changed 3 years ago by
 Commit changed from 6f5fd8cfb06c4b7c7f9743e2559eca00c06f3a0f to e01e5531b69e5a65642651efab85cf5bb496c8ee
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
e01e553  Add doctest flags for running tests on Python 2 only or Python 3 only.

comment:12 Changed 3 years ago by
 Status changed from needs_work to needs_review
Okay, this definitely works now. My fixes to the sageinspect module (and in particular its tests) rely on this pretty heavily.
comment:13 Changed 3 years ago by
 Commit changed from e01e5531b69e5a65642651efab85cf5bb496c8ee to 31f2008835ab51a4a1fdbe8b14b7ef175494be01
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:
31f2008  Add doctest flags for running tests on Python 2 only or Python 3 only.

comment:14 in reply to: ↑ 4 ; followup: ↓ 15 Changed 3 years ago by
Replying to embray:
I considered something like that, but it seems to have different semantics. Currently "# optional - pkgname" implies that a package is installed/available. Here we don't care if python2/3 is available; we care about which Python version is currently running the tests.
Right, I get that. Still, I think that we already have enough (if not too many) special doctest markers. And even if # optional is typically used to check whether a package is installed, I think that the meaning of # optional - python2 is clear enough.
Moreover, there are already doctests using # optional - python2 and # optional - python3 referring to the current running version of Python.
comment:15 in reply to: ↑ 14 Changed 3 years ago by
Replying to jdemeyer:
Replying to embray:
I considered something like that, but it seems to have different semantics. Currently "# optional - pkgname" implies that a package is installed/available. Here we don't care if python2/3 is available; we care about which Python version is currently running the tests.
Right, I get that. Still, I think that we already have enough (if not too many) special doctest markers.
That's pretty subjective. I think "enough" is however many you need to write reasonable tests.
And even if # optional is typically used to check whether a package is installed, I think that the meaning of # optional - python2 is clear enough.
I don't. It's overly wordy and not quite clear at all, given the double meaning.
Moreover, there are already doctests using # optional - python2 and # optional - python3 referring to the current running version of Python.
Except those explicitly don't work right now. I've already replaced a few examples using these markers. Here's a concrete example that might help motivate:
    sage: import ast, sage.misc.sageinspect as sms  # py3
    sage: visitor = sms.SageArgSpecVisitor()  # py3
    sage: vis = lambda x: visitor.visit_NameConstant(ast.parse(x).body[0].value)  # py3
    sage: [vis(n) for n in ['True', 'False', 'None']]  # py3
    [True, False, None]
    sage: [type(vis(n)) for n in ['True', 'False', 'None']]  # py3
    [<type 'bool'>, <type 'bool'>, <type 'NoneType'>]
This entire sequence of test commands is Python 3-only and does not even make sense on Python 2 (it might be nice to tag a whole range at once, actually, rather than every single line, but that's a separate issue). Putting "# optional - python3" on every one of those lines is just that much more line noise.
comment:16 followup: ↓ 17 Changed 3 years ago by
 Status changed from needs_review to needs_work
OK, fine. I don't really agree, but we have to move on. If you want to go forward with # py2 and # py3, you should at least fix the existing uses of # optional - python2 and # optional - python3. If you do that, I'm willing to set this ticket to positive review.
comment:17 in reply to: ↑ 16 Changed 3 years ago by
Replying to jdemeyer:
OK, fine. I don't really agree, but we have to move on. If you want to go forward with # py2 and # py3, you should at least fix the existing uses of # optional - python2 and # optional - python3. If you do that, I'm willing to set this ticket to positive review.
I'm fixing those as I come to them, but fixing all of them at once would probably only hang this up longer. Better to add the feature first and then switch to using it where it makes sense on a case-by-case basis.
comment:18 Changed 3 years ago by
Come on, there are really few instances of # optional - python[23] currently in Sage. It doesn't take much effort to fix those. And those can also double as test cases that the # py2 and # py3 markers actually work.
comment:19 Changed 3 years ago by
Alright, I have no problem doing it (I'll have to double-check, but I think I've already fixed most of those cases). It just seemed antithetical to your normal philosophy (which I generally agree with...).
comment:20 Changed 3 years ago by
 Commit changed from 31f2008835ab51a4a1fdbe8b14b7ef175494be01 to c09007614f985868bd2a43c0f8e282aeb8de17b7
Branch pushed to git repo; I updated commit sha1. New commits:
c090076  Fix a few more cases where '# optional - python2' wasn't really doing what was intended.

comment:21 Changed 3 years ago by
 Status changed from needs_work to needs_review
Note: I also fixed one case of # optional - python2 in #24286.
comment:22 Changed 3 years ago by
typo: "on optionl features" in src/doc/en/developer/coding_basics.rst
one failing doctest in src/sage/doctest/sources.py
comment:23 Changed 3 years ago by
 Status changed from needs_review to needs_work
comment:24 Changed 3 years ago by
Can somebody explain the origin of this test?
sage -t --long src/sage/doctest/sources.py
**********************************************************************
File "src/sage/doctest/sources.py", line 759, in sage.doctest.sources.FileDocTestSource._test_enough_doctests
Failed example:
    for path, dirs, files in itertools.chain(os.walk('sage'), os.walk('doc')):  # long time
        path = os.path.relpath(path)
        dirs.sort(); files.sort()
        for F in files:
            _, ext = os.path.splitext(F)
            if ext in ('.py', '.pyx', '.pxd', '.pxi', '.sage', '.spyx', '.rst'):
                filename = os.path.join(path, F)
                FDS = FileDocTestSource(filename, DocTestDefaults(long=True, optional=True))
                FDS._test_enough_doctests(verbose=False)
Expected:
    There are 7 tests in sage/combinat/diagram_algebras.py that are not being run
    There are 7 tests in sage/combinat/dyck_word.py that are not being run
    There are 7 tests in sage/combinat/finite_state_machine.py that are not being run
    There are 6 tests in sage/combinat/interval_posets.py that are not being run
    There are 18 tests in sage/combinat/partition.py that are not being run
    There are 15 tests in sage/combinat/permutation.py that are not being run
    There are 14 tests in sage/combinat/skew_partition.py that are not being run
    There are 18 tests in sage/combinat/tableau.py that are not being run
    There are 8 tests in sage/combinat/crystals/tensor_product.py that are not being run
    There are 11 tests in sage/combinat/rigged_configurations/rigged_configurations.py that are not being run
    There are 15 tests in sage/combinat/root_system/cartan_type.py that are not being run
    There are 8 tests in sage/combinat/root_system/type_A.py that are not being run
    There are 8 tests in sage/combinat/root_system/type_G.py that are not being run
    There are 3 unexpected tests being run in sage/doctest/parsing.py
    There are 1 unexpected tests being run in sage/doctest/reporting.py
    There are 15 tests in sage/manifolds/manifold.py that are not being run
    There are 3 tests in sage/rings/invariant_theory.py that are not being run
Got:
    There are 7 tests in sage/combinat/diagram_algebras.py that are not being run
    There are 7 tests in sage/combinat/dyck_word.py that are not being run
    There are 7 tests in sage/combinat/finite_state_machine.py that are not being run
    There are 6 tests in sage/combinat/interval_posets.py that are not being run
    There are 18 tests in sage/combinat/partition.py that are not being run
    There are 15 tests in sage/combinat/permutation.py that are not being run
    There are 14 tests in sage/combinat/skew_partition.py that are not being run
    There are 18 tests in sage/combinat/tableau.py that are not being run
    There are 8 tests in sage/combinat/crystals/tensor_product.py that are not being run
    There are 11 tests in sage/combinat/rigged_configurations/rigged_configurations.py that are not being run
    There are 15 tests in sage/combinat/root_system/cartan_type.py that are not being run
    There are 8 tests in sage/combinat/root_system/type_A.py that are not being run
    There are 8 tests in sage/combinat/root_system/type_G.py that are not being run
    There are 3 unexpected tests being run in sage/doctest/parsing.py
    There are 1 unexpected tests being run in sage/doctest/reporting.py
    There are 15 tests in sage/manifolds/manifold.py that are not being run
    There are 1 tests in sage/misc/sage_eval.py that are not being run
    There are 3 tests in sage/rings/invariant_theory.py that are not being run
    There are 1 tests in sage/rings/finite_rings/integer_mod.pyx that are not being run
**********************************************************************
1 item had failures:
   1 of 9 in sage.doctest.sources.FileDocTestSource._test_enough_doctests
    [367 tests, 1 failure, 244.00 s]
It's awfully annoying to have a test whose output depends on the contents of a variety of specific modules. If this is meant to be some kind of regression test (?), that's not clear, but surely it could be run without such a strong dependency on external data.
comment:25 Changed 3 years ago by
I really don't understand what to do with this test. It's as if it's testing the doctest parser by using a different, simpler doctest parser that doesn't fully work, and comparing the results from that parser to the results from the actual doctest parser. If we want to check that the actual doctest parser works, then there should be (and are) unit tests for the code of the actual doctest parser. As it is, this test is just testing the secondary, flakier doctest parser implemented in the test...
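To make the complaint above concrete, here is a toy sketch of the pattern being described (hypothetical, not the actual code in sources.py): a naive "parser" that just counts `sage:` lines, which disagrees with a real parser precisely when tags like `# py2` cause tests to be skipped.

```python
# Toy illustration of the mismatch discussed above: a naive doctest
# "parser" that counts every "sage:" line, oblivious to skip tags.
def naive_count_doctests(source_text):
    """Count lines that look like doctest inputs, ignoring any tags."""
    return sum(1 for line in source_text.splitlines()
               if line.lstrip().startswith('sage: '))

sample = '''
    EXAMPLES::

        sage: 1 + 1
        2
        sage: 2 * 3  # py2
        6
'''
# The naive count includes the "# py2" line even though a version-aware
# parser may skip it on Python 3 -- the kind of discrepancy reported in
# the failing _test_enough_doctests run.
assert naive_count_doctests(sample) == 2
```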
comment:26 Changed 3 years ago by
 Commit changed from c09007614f985868bd2a43c0f8e282aeb8de17b7 to 96127ebd7f23c774b4c5c6619962f8dcb8bc2ae5
Branch pushed to git repo; I updated commit sha1. New commits:
96127eb  Fix typo

comment:27 Changed 3 years ago by
 Status changed from needs_work to needs_info
comment:28 Changed 3 years ago by
To solve your problem with _test_enough_doctests, you could add some extra attribute to SageDocTestParser to never skip any test (not even the # py2 and # py3 tests). Then you set that attribute when running _test_enough_doctests.
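The suggestion above could be sketched roughly like this (class and attribute names are illustrative, not the actual SageDocTestParser API): a flag on the parser that disables version-based skipping, turned on only for the consistency check.

```python
# Hedged sketch of the idea: a parser flag that disables version-based
# skipping so a consistency check can count every test. Names are
# hypothetical, not Sage's actual implementation.
import sys

class VersionAwareParser:
    def __init__(self, never_skip=False):
        # never_skip=True would be set by _test_enough_doctests-style checks
        self.never_skip = never_skip

    def should_run(self, line):
        """Decide whether a doctest input line runs under this Python."""
        if self.never_skip:
            return True
        if '# py2' in line:
            return sys.version_info[0] == 2
        if '# py3' in line:
            return sys.version_info[0] == 3
        return True

# Normal runs skip mismatched tests; the consistency check counts them all.
assert VersionAwareParser(never_skip=True).should_run('f()  # py2')
```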
comment:29 Changed 3 years ago by
I'd still just as soon remove that test...
comment:30 Changed 3 years ago by
 Branch changed from u/embray/python3/doctest-py2-only to u/jdemeyer/python3/doctest-py2-only
comment:31 Changed 3 years ago by
 Commit changed from 96127ebd7f23c774b4c5c6619962f8dcb8bc2ae5 to 12c300aad31ee72ea569134b35a94baed7efa470
 Status changed from needs_info to needs_review
comment:32 followup: ↓ 33 Changed 3 years ago by
I don't fully understand what this is doing differently, but I'm not opposed as long as it works.
comment:33 in reply to: ↑ 32 Changed 3 years ago by
comment:34 Changed 3 years ago by
This seems to be ready to go, no?
comment:35 Changed 3 years ago by
 Reviewers set to Frédéric Chapoton, Erik Bray, Jeroen Demeyer
 Status changed from needs_review to positive_review
ok, let it be. Erik and Jeroen, I added your names as reviewers. Feel free to undo that.
comment:36 Changed 3 years ago by
Yes, I understand how this is working now. +1
comment:37 Changed 3 years ago by
 Branch changed from u/jdemeyer/python3/doctest-py2-only to 12c300aad31ee72ea569134b35a94baed7efa470
 Resolution set to fixed
 Status changed from positive_review to closed
comment:38 Changed 3 years ago by
 Commit 12c300aad31ee72ea569134b35a94baed7efa470 deleted
followup in #24570