Opened 2 years ago

Last modified 22 months ago

#26774 new enhancement

Support custom doctest parser hooks (perhaps module-specific)

Reported by: embray
Owned by:
Priority: major
Milestone: sage-wishlist
Component: doctest framework
Keywords:
Cc:
Merged in:
Authors:
Reviewers:
Report Upstream: N/A
Work issues:
Branch:
Commit:
Dependencies:
Stopgaps:

Description

It would be nice if individual modules or sub-packages in Sage could register custom doctest result parsers (perhaps still requiring manual enabling within individual tests via an appropriate # <keyword> flag).

This would allow specific code areas to register custom logic for parsing specialized doctest output. For example, this would have been useful for the complex_arb tests in #26360, for parsing complex ball representations.

Rather than building every possibility directly into the doctest framework, it would be better if customized parsers could be registered and enabled as needed. Perhaps they could be enabled once on a per-file basis, so that each file needing a custom parser would have to enable it explicitly; this would avoid clutter in the majority of cases where such special handling isn't needed.
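To sketch the rough shape I have in mind (all names here are hypothetical, not existing Sage API; the stdlib's doctest.OutputChecker is the natural extension point, and Sage's doctester already subclasses it):

{{{
import doctest

# Hypothetical registry of custom output comparators; nothing like this
# exists yet -- it is what this ticket proposes.
_custom_parsers = {}

def register_doctest_parser(name, compare):
    """Register a comparator taking (want, got) and returning a bool."""
    _custom_parsers[name] = compare

class CustomizableOutputChecker(doctest.OutputChecker):
    """Try the standard check first, then any registered comparators."""
    def check_output(self, want, got, optionflags):
        if doctest.OutputChecker.check_output(self, want, got, optionflags):
            return True
        return any(compare(want, got)
                   for compare in _custom_parsers.values())
}}}

A per-file (or per-test) # <keyword> flag could then restrict which registered comparators are actually consulted.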

This would also be useful for third-party libraries that use Sage and want to use the Sage doctester.

Change History (2)

comment:1 Changed 22 months ago by nbruin

I think it is wasted effort to go to great lengths to *parse* output to validate tests. It's a test! It's under our control. Your parser will probably just be undoing the work that the repr method has just performed.

Just write the test so that it prints True if the result matches expectations and False otherwise. If you want to verify arb results, just test that the centre point and radius are where you expect them to be. If you want to illustrate print output without insisting on exact string matching, mark the test # random and then explicitly test the properties of the object you want to validate.
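For instance, something along these lines (I'm assuming the mid()/rad() accessors on complex balls here, and the printed values are only indicative):

{{{
sage: z = CBF(1, 1).exp()
sage: z  # random
[1.468693939915885 +/- ...e-16] + [2.287355287178842 +/- ...e-16]*I
sage: abs(z.mid() - CC(1.468693939915885, 2.287355287178842)) < 1e-12
True
sage: z.rad() < 1e-12
True
}}}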

If you want to allow for a little bit of variation in float results (with IEEE this should not be necessary), then string matching is not the right tool.
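(For the record, Sage's doctester does already support tolerance annotations for numerical output, which cover this case without any custom parsing:)

{{{
sage: pi.n()  # abs tol 1e-10
3.14159265358979
sage: pi.n()  # rel tol 1e-10
3.14159265358979
}}}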

Similarly, for testing sets: either construct the expected set explicitly and test for equality with it, or normalize the output somehow (sort by string rep?) so that string matching gives a reliable test.
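E.g.:

{{{
sage: S = Set([3, 1, 2])
sage: S == Set([1, 2, 3])    # compare with an explicitly constructed set
True
sage: sorted(S)              # ...or normalize the output before matching
[1, 2, 3]
}}}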

comment:2 Changed 22 months ago by embray

I mostly agree, of course; the real problem here is over-reliance on the doctest framework in places where a simple assert want == got style test would do.
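(A hypothetical example of that style, written as an ordinary test function instead of a doctest, assuming mid()/rad() as above:)

{{{
from sage.all import CBF, CC

def test_complex_ball_exp():
    # Plain asserts on the value itself, instead of matching its repr.
    z = CBF(1, 1).exp()
    assert abs(z.mid() - CC(1.468693939915885, 2.287355287178842)) < 1e-12
    assert z.rad() < 1e-12
}}}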

Nevertheless, it is sometimes desirable to have doctests that are actual docs: meaningful, readable examples which are themselves tested. I'm okay with using # random for these in some cases, as long as there's an equivalent test elsewhere that actually checks the value. But in many simple cases I don't think one needs to go to "great lengths" either.

See for example all the workarounds I've added for normalizing output differences between Python 2 and 3.
