Opened 3 years ago

Last modified 2 years ago

#22034 new enhancement

Weakly referenced version of @cached_method

Reported by: saraedum
Owned by:
Priority: minor
Milestone: sage-7.5
Component: misc
Keywords: cached_method, cache, sd87
Cc:
Merged in:
Authors:
Reviewers:
Report Upstream: N/A
Work issues:
Branch:
Commit:
Dependencies:
Stopgaps:

Description (last modified by saraedum)

Suppose that you have a morphism f whose _call_ is expensive to evaluate. To speed things up, you could mark _call_ as @cached_method. However, f then keeps alive every argument that was ever used in a call to f. One could try to use a weak version of @cached_method, but currently there is no such decorator (@weak_cached_function only uses weak references on the values, not on the keys).

It turns out that in many common use patterns, you do not even need the cache to recognize arguments that are == equal; it would be good enough to recognize arguments that are is equal, i.e., identical. Often you call f several times with the same argument x, then that x goes out of scope, gets garbage collected, and you will never see an ==-equal element again. So it makes sense to actually attach the cached value of f(x) to x, i.e., to the argument of f._call_.

Sadly, that is not possible: Element is an extension type which has no __cached_methods field where we could put these caches (see the comments in cachefunc.pyx). The only workaround would be to use something like a MonoDict to implement a weak_cached_method.

A reasonable approach could be to add keywords weakly_reference_key and weakly_reference_values. These parameters, together with do_pickle, would then select the right implementation of the underlying dictionary (ideally, this selection logic should live in only one place).
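
A minimal sketch of what such a decorator could look like, assuming a single positional argument that is cached by identity; the name weak_cached_method is hypothetical, and the id()-keyed dictionary merely stands in for Sage's MonoDict:

import functools
import weakref

def weak_cached_method(f):
    # id(arg) -> (weak reference to arg, cached value); the keys are
    # weak, the values are held strongly, as in the proposal above.
    # For simplicity the cache is keyed on the argument only and is
    # shared across instances; a real implementation would also key
    # on self.
    cache = {}

    @functools.wraps(f)
    def wrapper(self, arg):
        entry = cache.get(id(arg))
        if entry is not None and entry[0]() is arg:
            return entry[1]
        value = f(self, arg)
        # drop the entry as soon as arg is garbage collected
        ref = weakref.ref(arg, lambda _, key=id(arg): cache.pop(key, None))
        cache[id(arg)] = (ref, value)
        return value

    return wrapper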

Change History (12)

comment:1 Changed 3 years ago by saraedum

  • Description modified (diff)

comment:2 Changed 3 years ago by saraedum

  • Description modified (diff)

comment:3 Changed 3 years ago by saraedum

  • Description modified (diff)

comment:4 Changed 3 years ago by saraedum

  • Description modified (diff)

comment:5 Changed 3 years ago by saraedum

  • Description modified (diff)
  • Summary changed from Introduce @cached_in_argument_method to Weakly referenced version of @cached_method

comment:6 Changed 3 years ago by nbruin

Your example of a morphism is a good one, but the caching strategy you describe is much less likely to save memory than you might think. Suppose we have f: A -> B with inverse g: B -> A, and both are "weakly keyed cached morphisms". If we now execute b=f(a); c=g(b); del a,b,c, then the cache of f still holds b strongly (as the value stored under the weak key a), and the cache of g still holds a strongly (as the value stored under the weak key b), so neither weak key can ever be collected: a and b have their lifetimes bounded below by those of f and g.
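
In plain Python, modeling the two identity-keyed weak caches with ordinary dictionaries (all names are illustrative), the cycle looks like this:

import weakref

class Obj(object):  # stand-in for elements of A and B
    pass

f_cache = {}  # id(key) -> (weak reference to key, value); owned by f
g_cache = {}  # id(key) -> (weak reference to key, value); owned by g

a, b = Obj(), Obj()
f_cache[id(a)] = (weakref.ref(a), b)  # cache f(a) = b
g_cache[id(b)] = (weakref.ref(b), a)  # cache g(b) = a
del a, b
# Neither weak reference ever dies: f_cache holds b strongly, whose
# entry in g_cache holds a strongly, whose entry in f_cache holds b
# strongly again -- both objects survive as long as the caches do.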

I suspect this approach will mainly lead to harder-to-find memory leaks, and probably more frequent ones, because it will look like an attractive option.

If you only need a computed image to survive within a scope, wouldn't it be much more direct (and a bit faster) to compute that image once and assign it to a local variable? Then the normal scoping rules of Python will automatically have it fall out of scope and be collected.

comment:7 follow-up: Changed 3 years ago by saraedum

I am not exactly sure what you are proposing. I want repeated calls to f(a) to be fast. Are you saying that in this case both a and the result of f(a) should be stored in a cache with weak references? Then, yes, I can imagine that.

comment:8 in reply to: ↑ 7 Changed 3 years ago by nbruin

Replying to saraedum:

I am not exactly sure what you are proposing.

I was just referring to your example:

It turns out that in many common use patterns, you do not even need the cache to recognize arguments that are == equal; it would be good enough to recognize arguments that are is equal, i.e., identical. Often you call f several times with the same argument x, then that x goes out of scope, gets garbage collected, and you will never see an ==-equal element again. So it makes sense to actually attach the cached value of f(x) to x, i.e., to the argument of f._call_.

which I imagine arises in a loop like:

for i in R:
    L.append(f(x) + i)

which you can rewrite as

fx = f(x)
for i in R:
    L.append(fx + i)

I want repeated calls to f(a) to be fast.

Doing so has a cost. The standard method of using fa = f(a) and working with that instead has less overhead and more predictable memory use, and it applies pretty straightforwardly to "local reuse". If you need global caching, then strong references are the way to go (in that case you are making an explicit trade-off between memory footprint and computation time).

You can take hybrid approaches where the cache is artificially bounded (via an LRU entry-deletion strategy or similar), which would need tuning. I would expect that to behave much better than handing your cache-pruning strategy over to the garbage collector, which is not designed for that purpose and which, I imagine, has far less research behind it than the established cache-management strategies.
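
For illustration, the standard library already provides such a bounded cache; a minimal sketch, where expensive_computation is a hypothetical stand-in and maxsize is the knob that needs tuning:

from functools import lru_cache

def expensive_computation(x):
    return x ** 2  # stand-in for the costly evaluation

@lru_cache(maxsize=1024)  # evicts the least recently used entries first
def f(x):
    # the cache holds strong references to x and to the result until
    # eviction, so the memory footprint stays bounded and predictable
    return expensive_computation(x)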

Are you saying that in this case both a and the result of f(a) should be stored in a cache with weak references?

No, that would just lead to unpredictable performance. In fact, elements ideally do not participate in reference cycles much, so your result would probably vanish pretty quickly; in many scenarios you would then have the overhead of caching and weak-reference management with no benefit at all (because the cache would immediately clear itself up).

I have seen plenty of examples: weak references are hard, they lead to hard-to-replicate, hard-to-understand, and hard-to-diagnose bugs (mainly memory leaks), and they inflict additional costs (accessing them requires an extra level of indirection, and weakly referenced objects are more expensive to delete because of the callbacks). You should use them only as a last resort. That is my main reason for trying to dissuade you from extending their use.

comment:9 follow-up: Changed 3 years ago by saraedum

My use case is this: you have a valuation on a polynomial ring, basically a map v: R[x] -> QQ with some extra methods. Now you want to call a bunch of methods of this valuation for a polynomial f: you would ask v.is_key(f), and if f is not a key, whether v.is_equivalence_irreducible(f), and then maybe compute v.reduce(f). Each of these methods needs to evaluate v(f) as one of the first steps in its implementation, so it would be nice to cache the result of v(f) and bind that cache to f. Now, I could of course change the signature of is_key to is_key(f, vf=None) to share a precomputed value of v(f) in all of these cases. However, all of these methods are defined recursively, and they might also need to compute the valuations of the coefficients of f, which are in turn defined recursively as valuations on another polynomial ring… In other words, explicitly passing all of these caches around makes for very messy code, whereas a few @cached_method decorators would make the code much easier to read.
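
For illustration, assuming the hypothetical weak_cached_method sketched in the ticket description is in scope, the code could shrink to something like this (the method names follow the comment above, and all bodies are placeholders):

class Valuation(object):
    @weak_cached_method  # the hypothetical decorator from the description
    def __call__(self, f):
        return self._evaluate(f)  # placeholder for the expensive evaluation

    def _evaluate(self, f):
        return 0  # placeholder

    def is_key(self, f):
        return self(f) == 0  # placeholder criterion; cache hit after the first call

    def is_equivalence_irreducible(self, f):
        vf = self(f)  # same cached value; no vf parameter threaded through
        return vf >= 0  # placeholder criterion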

Of course, I am aware that weak references lead to very hard-to-debug memory leaks. I would prefer to attach the caches to the elements, but that does not seem to be an option.

comment:10 in reply to: ↑ 9 ; follow-up: Changed 3 years ago by nbruin

It looks to me like you should have an "element of a valuation-equipped polynomial ring" type, where the element would naturally have a slot for storing the valuation. That way you solve the problem in the way that you wanted.

Subclassing is cheap:

sage: f=ZZ['x']([1,2,3])
sage: f
3*x^2 + 2*x + 1
sage: class A(type(f)): pass
sage: g=A(parent(f),f)
sage: g.valuation=10
sage: g.valuation
10

There's a reason why the default polynomial type does not have a dict: to be lightweight. But clearly in your situation, that's TOO lightweight. So a subclass would be entirely justified.

There is of course the issue that arithmetic will not produce your "augmented" polynomials, so you'd have to wrap at judicious times or subclass more extensively.

comment:11 in reply to: ↑ 10 Changed 2 years ago by roed

Replying to nbruin:

It looks to me like you should have an "element of a valuation-equipped polynomial ring" type, where the element would naturally have a slot for storing the valuation. That way you solve the problem in the way that you wanted.

Subclassing is cheap:

sage: f=ZZ['x']([1,2,3])
sage: f
3*x^2 + 2*x + 1
sage: class A(type(f)): pass
sage: g=A(parent(f),f)
sage: g.valuation=10
sage: g.valuation
10

There's a reason why the default polynomial type does not have a dict: to be lightweight. But clearly in your situation, that's TOO lightweight. So a subclass would be entirely justified.

In #23164, we just added a __cached_methods attribute to Polynomial. Do you think this is inappropriate?

There is of course the issue that arithmetic will not produce your "augmented" polynomials, so you'd have to wrap at judicious times or subclass more extensively.

comment:12 Changed 2 years ago by roed

  • Keywords sd87 added