
Coercion discovery fails to be transitive #15303

Open
nbruin opened this issue Oct 17, 2013 · 149 comments

Comments

@nbruin
Contributor

nbruin commented Oct 17, 2013

As found in #14711:comment:134, the following example shows that a combination of register_embedding and register_coercion can lead to a failure in transitivity for coercion discovery; also discussed on sage-devel:

class pA(Parent): pass
class pB(Parent): pass
class pC(Parent): pass

A=pA(); B=pB(); C=pC()

BtoA=Hom(B,A)(lambda x: A(x))
AtoC=Hom(A,C)(lambda x: C(x))
A.register_coercion(BtoA)
A.register_embedding(AtoC)

G=get_coercion_model()
G.discover_coercion(A,C) #finds AtoC
G.discover_coercion(B,A) #finds BtoA
G.discover_coercion(B,C) #does not find the composition of BtoA with AtoC

One workaround is simple: just don't use register_embedding. However, after #14711, there are different lifetime implications between using register_embedding and register_coercion, so this workaround might not be desirable.

Depends on #14711
Depends on #15329
Depends on #15331

CC: @simon-king-jena @robertwb @jpflori

Component: coercion

Work Issues: rebase (11 files conflicting)

Author: Simon King

Branch/Commit: u/SimonKing/ticket/15303 @ 12e8055

Issue created by migration from https://trac.sagemath.org/ticket/15303

@nbruin nbruin added this to the sage-5.13 milestone Oct 17, 2013
@nbruin
Contributor Author

nbruin commented Oct 17, 2013

comment:1

The problem is that in the present implementation, both coercions are stored on A, so if a coercion from B to C is requested, it is hard to realize that A should even be considered.

As far as I understand, the coercion model should behave as a digraph on the parents, where a coercion between parents exists if there is a path from one to the other. In that model, coercion existence should be transitive, so the behaviour described is a bug.
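As a toy illustration of that digraph model (plain Python labels standing in for parents; this is not Sage's actual data structure), coercion discovery amounts to path search over the registered arrows, and path existence is transitive by construction — which is exactly what the reported behaviour violates:

```python
from collections import deque

def discover_path(edges, src, dst):
    """Breadth-first search for a path of registered maps from src to dst."""
    if src == dst:
        return [src]
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        for nxt in edges.get(path[-1], ()):
            if nxt == dst:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Arrows as in the ticket's example: B -> A (coercion) and A -> C (embedding).
edges = {"B": ["A"], "A": ["C"]}
assert discover_path(edges, "A", "C") == ["A", "C"]
assert discover_path(edges, "B", "A") == ["B", "A"]
# In the digraph model the composite B -> A -> C must be found:
assert discover_path(edges, "B", "C") == ["B", "A", "C"]
```

In this model the bug in the ticket corresponds to the last assertion failing: the arrow A -> C is known only to A, so the search starting from maps stored on C never visits A.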

One way to work around it is, at least for coercion discovery, to store the coercion always on the codomain. By #14711 this can now happen without the implication that the codomain will keep the domain alive. We would have to store it in a place where it can be used for coercion discovery, though.

If it is desirable to keep the current lifetime implications for register_embedding (and it probably is) then we should ensure this separately, either by also storing the embedding map on the domain or by just having a direct reference from the domain to the codomain.

@simon-king-jena
Member

comment:2

It has also been discussed on sage-devel that the existence of a coercion from B to C would change over time. Namely:

  1. Create B and C. You will not find a coercion, because A is not known yet.
  2. Create A, registering a coercion from B to A and an embedding of A into C.
  3. Provided that we fixed the problem of transitivity, we would now find a coercion from B to C via A.

I suppose that this phenomenon cannot be avoided: We cannot have a static coercion digraph (otherwise we would restrict ourselves to a fixed finite set of usable parents), and when we have a dynamic coercion graph, then it is, well, dynamic.

However, one additional complication arises with the current implementation: The absence of a coercion is cached. Hence, if we asked for a coercion in 1., then in 3. we would still not get a coerce map, because the absence of a coercion has been cached in 1.

Let phi: A -> B and do A.register_embedding(phi). Currently, B is not aware of the existence of phi. Hence, the backtracking algorithm will currently ignore phi. We don't want to store phi as a strong attribute of B, hence, don't want to put it into B._coerce_from_list. But perhaps we could create a new attribute of type MonoDict and store the embedding there? For example B._coerce_embeddings_from[A] = phi, in addition to A._coerce_embedding = phi. Since it is a MonoDict and thus has weak references to the keys, B would not prevent A from being garbage collected (but of course A would still keep B alive, because we registered the coerce embedding).

Once this is done, we could change the backtracking so that it not only iterates over B._coerce_from_list, but it additionally iterates over B._coerce_embeddings_from.
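A minimal sketch of that idea in plain Python (a stand-in class, not Sage's Parent; weakref.WeakKeyDictionary plays the role of MonoDict's weak-keyed storage, and the toy register_embedding takes the codomain directly instead of a map):

```python
import gc
import weakref

class ToyParent:
    def __init__(self, name):
        self.name = name
        self._coerce_from_list = []  # strong refs: registered coercions keep their domains alive
        # weak keys, as MonoDict would provide: entries vanish when the domain dies
        self._coerce_embeddings_from = weakref.WeakKeyDictionary()

    def register_coercion(self, domain):
        self._coerce_from_list.append(domain)

    def register_embedding(self, codomain):
        self._coerce_embedding = codomain  # the domain keeps its codomain alive
        # also recorded on the codomain, so backtracking can find it there
        codomain._coerce_embeddings_from[self] = "phi"

    def known_domains(self):
        # backtracking iterates over both sources of in-arrows
        return list(self._coerce_from_list) + list(self._coerce_embeddings_from)

B = ToyParent("B")
A = ToyParent("A")
A.register_embedding(B)  # A -> B, recorded weakly on B
assert [p.name for p in B.known_domains()] == ["A"]
del A
gc.collect()
# B did not keep A alive: the weak entry disappeared together with A
assert B.known_domains() == []
```

The point of the sketch is the asymmetry: registered coercions keep their domains alive via the strong list, while the weakly stored embedding is visible to backtracking without extending the domain's lifetime.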

But what about the problem of caching coercions? We should carefully think if the current scenario (A.register_coercion(B) plus A.register_embedding(C)) is the only scenario that could dynamically change the coercion graph. Here, I assume that self.register_embedding() and self.register_coercion() are only called during self.__init__().

How to make it possible to allow a coercion via a newly created parent?

Perhaps the following would be feasible: If we do A.register_embedding(psi), where psi: A -> C, then we clear the coercion cache of C, in the sense that all cache items stating the absence of a coercion to C are deleted.

Note that clearing it in the sense that all cached coercions to C are deleted is likely to be a bad idea, because the cached coercions might have already been used. So, we should restrict ourselves to allowing new coercions by removing the "there is no coercion" flags.

So, would the suggestion make sense to you?

  • Have a new MonoDict that stores all coerce embeddings leading into the parent (hence, we would have a strong reference from domain to codomain, and I suggest adding a weak reference from codomain to domain), that will be used in the backtracking algorithm.
  • When registering an embedding, then remove all "there is no coercion" flags from the coercion cache of the codomain.

Hm. Probably this is not good enough. What if we have had D.coerce_map_from(C), and had cached the absence of a coercion from B to D? After adding A, we would find a coercion from B to D via A and C. Hence, cleaning the coercion cache of C would not be enough---we would need to clean the coercion cache of all parents into which C coerces.

Given this example, it seems to me that we could only solve the problem in a drastic way: Do not cache the absence of a coercion at all. But caching the absence of a coercion is essential for speed. So, the drastic solution would probably be correct, but highly infeasible.

If nobody has a better suggestion, then I think we should restrict ourselves to just fix the transitivity problem; the cache problem (it seems to me) is not solvable without creating a drastic slow-down.

@nbruin
Contributor Author

nbruin commented Oct 18, 2013

comment:3

Replying to @simon-king-jena:

  1. Provided that we fixed the problem of transitivity, we would now find a coercion from B to C via A.

There would be an additional effect, by the way: If the coercion that does get discovered is stored as a composition (on C), then there is now a reference from C._coerce_from_hash to A. This reference lives in a MonoDict keyed by B, so this reference remains as long as B is alive. So we see that as long as B and C are alive, then A will be kept alive as well (thus, with MonoDicts we can have objects whose lifetime depends on two objects BOTH being alive).

Note that if the composition gets computed in a smarter way (for instance, a composition of homomorphisms between finitely generated rings could be explicitly computed by computing the images of the generators and constructing a straight homomorphism from B to C out of that) then the resulting coercion map might not refer to A anymore.

I am not saying that this is avoidable nor that we should consider this a memory leak: we're keeping A in memory for a good reason, even if the reason is not directly supplied by the user.

However, one additional complication arises with the current implementation: The absence of a coercion is cached. Hence, if we asked for a coercion in 1., then in 3. we would still not get a coerce map, because the absence of a coercion has been cached in 1.

There are milder ways of invalidating caches: We could mark cached non-existence with a "coercion graph version number". When a non-existence is retrieved, one could check the current "coercion graph version number" and if the current graph is newer, we'd have to reinvestigate the "None". Frankly, I don't believe that would give acceptable performance either, but we could at least be much more lazy about invalidating "no coercion exists" findings. The reason why I think this would still not be good enough is because I expect that coercion graph mutations occur fairly frequently (basically with any parent creation) and I don't see a way to localize coercion graph versions, so any mutation would invalidate ALL non-existence caches.

For example B._coerce_embeddings_from[A] = phi, in addition to A._coerce_embedding = phi. Since it is a MonoDict and thus has weak references to the keys, B would not prevent A from being garbage collected (but of course A would still keep B alive, because we registered the coerce embedding).

Yes, something along those lines. In fact, since _coerce_embeddings_from and _coerce_from_list would serve the same purpose from the perspective of coercion discoveries, I think the entries in _coerce_from_list should also be added to _coerce_embeddings_from, because it would simplify the code. By then it would make more sense to call the attribute _coerce_from_to_be_used_in_backtracking and the list that is now _coerce_from_hash could be _coerce_from_cached_results.

By then we should probably be storing ALL maps there in weakened form.

The only function of _coerce_from_list now is to keep domains alive, so it would be more economical to replace that by a list _domains_to_keep_alive where we can store strong references to the domains of the corresponding weakened maps that are now in _coerce_from_to_be_used_in_backtracking.

So we would end up with (names up for choice):

  • P._coercion_cache: MonoDict containing discovered (weakened) maps into P.
  • P._coercion_maps: MonoDict containing (weakened) maps into P that get used by backtracking
  • P._domains_to_keep_alive: List of domains that should be kept alive as long as P lives.
  • P._codomains_to_keep_alive: List of codomains that should be kept alive as long as P lives. Normally, such codomains Q would have an entry in Q._coercion_maps[P]. Just storing a map in P._coerce_embedding would also have this effect.

In the present naming system, it wasn't immediately clear to me what purposes _coerce_from_hash and _coerce_from_list served, so changing those names could be beneficial.

If nobody has a better suggestion, then I think we should restrict ourselves to just fix the transitivity problem; the cache problem (it seems to me) is not solvable without creating a drastic slow-down.

Yes, I don't have any suggestions that I seriously believe would lead to acceptable performance.

@simon-king-jena
Member

comment:4

I was thinking about "versioning" of the coercion graph as well. Perhaps it would be worth trying.

The idea would be: We have one global variable, say, cdef unsigned int coerce_graph_version. Whenever we create a node that is neither a sink nor a source in the coerce graph, we increment the variable. This would be cheap, a simple test in .register_coercion() and .register_embedding().

Instead of storing None when a coercion cannot be found, we store the current version. Hence, if we do mor = self._coerce_from_hash[P], a simple isinstance(mor, int) (or a faster version from the Python API that tests whether the type of mor is exactly int) will tell us whether the absence of a coercion was cached. If mor == coerce_graph_version, then we know that the cached absence of a coercion is reliable. Otherwise, we need to test.

Would this really be soooo expensive? I don't think so. Of course, it depends on how often we create non-sink non-source nodes---and I doubt that the combined use of .register_embedding() and .register_coercion() (which is the only way to create non-sink non-source) will happen very often.

So, I'd say we simply try.
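A sketch of that versioning scheme in plain Python (the name coerce_graph_version is taken from the comment above; a plain dict stands in for _coerce_from_hash, and in the real cache a non-int entry would be the coerce map itself, which is what the isinstance test distinguishes):

```python
# global version counter for the coercion digraph
coerce_graph_version = 0

def register_embedding():
    # creating a new arrow that may enable new composite coercions
    global coerce_graph_version
    coerce_graph_version += 1

cache = {}

def cache_no_coercion(key):
    """Record at which graph version the absence of a coercion was observed."""
    cache[key] = coerce_graph_version

def absence_is_reliable(key):
    """A cached 'no coercion' answer is trusted only if the graph is unchanged."""
    entry = cache.get(key)
    return isinstance(entry, int) and entry == coerce_graph_version

cache_no_coercion(("B", "C"))
assert absence_is_reliable(("B", "C"))   # nothing changed: trust the cache
register_embedding()                     # graph mutated: must re-check
assert not absence_is_reliable(("B", "C"))
```

The stale entry is not deleted eagerly; it is simply re-examined (and possibly re-stamped) the next time it is looked up, which keeps invalidation lazy and cheap.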

@nbruin
Contributor Author

nbruin commented Oct 19, 2013

comment:5

Replying to @simon-king-jena:

Would this really be soooo expensive? I don't think so. Of course, it depends on how often we create non-sink non-source nodes---and I doubt that the combined use of .register_embedding() and .register_coercion() (which is the only way to create non-sink non-source) will happen very often.

I don't think you can establish whether something is a non-sink, since _coerce_from_map gives programmatic answers about that. Things are very rarely going to be non-sinks, though, since usually ZZ will coerce into them (but ZZ will never be a problem; perhaps we can leave it out of consideration).

A start might be to only version up on "embedding" creations. That might very well be usable. I expect "embeddings" to be relatively rare, and I think we can declare in our documentation that they are not the preferred way of expressing relations (they are very prone to creating non-commutative diamonds).

@simon-king-jena
Member

comment:6

Replying to @nbruin:

I don't think you can establish whether something is a non-sink, since _coerce_from_map gives programmatic answers about that.

Perhaps I have not been clear. A non-sink non-source is something that has both in-arrows (i.e., is codomain of a registered coercion or coerce embedding) and out-arrows (i.e., is domain of a registered coercion or coerce embedding). It is easy to keep track of this property while registering a coercion or coerce embedding.

But I actually think that your idea is better anyway:

A start might be to only version up on "embedding" creations.

It seems to me that this versioning would be complete under reasonable assumptions, based on the following lemma.

Lemma

Assume for all parents P, Q, P.register_coercion(Q) will be done in P.__init__(), but not later. Assume that parents R and S exist at time t_0, but there is no path from R to S in the coerce digraph at time t_0. Assume that between time t_0 and time t_1, .register_embedding() has never been called. Then, there is no path from R to S in the coerce digraph at time t_1.

Proof

Proof by contradiction. We assume that there is a path from R to S at time t_1. Hence, this path contains an arrow a that has been added to the coerce digraph after t_0. Since no register_embedding() has occurred between t_0 and t_1, all arrows created between t_0 and t_1 have been added by register_coercion(). But by hypothesis, register_coercion() is only called during __init__ of the codomain.

Hence, all arrows created between t_0 and t_1 end at parents created after t_0. Therefore, a path in the coerce digraph at time t_1 that contains a will necessarily end at a parent that has been created after t_0. This is a contradiction, since S already existed at time t_0.

q.e.d.
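The lemma can be illustrated on a toy digraph (plain Python labels standing in for parents; arrows point from domain to codomain, so T.register_coercion(R) adds the arrow R -> T):

```python
from collections import deque

def reachable(edges, src, dst):
    """Breadth-first reachability check in the toy coerce digraph."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Time t_0: parents R and S exist, no path from R to S.
edges = {"R": [], "S": []}
assert not reachable(edges, "R", "S")

# Between t_0 and t_1 only register_coercion is used, i.e. every new
# arrow ends at a parent created after t_0 (here: the new parent T).
edges["R"] = ["T"]   # T.register_coercion(R): arrow R -> T
edges["S"] = ["T"]   # T.register_coercion(S): arrow S -> T

# As the lemma predicts, there is still no path from R to S:
assert not reachable(edges, "R", "S")
```

Every arrow added after t_0 ends at T, so any path using such an arrow dead-ends at a post-t_0 parent and can never reach the pre-existing S.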

That might very well be usable. I expect "embeddings" to be relatively rare, and I think we can declare in our documentation that they are not the preferred way of expressing relations (they are very prone to creating non-commutative diamonds).

We could express in the documentation that one should call register_coercion only in the initialisation. Since register_embedding can only be called once for each parent, I don't think we need to say that other methods of establishing a coercion are preferred.

@simon-king-jena
Member

Dependencies: #14711

@simon-king-jena
Member

comment:7

I hope you agree that we should start on top of #14711, because you suggest to change a couple of attribute names that are touched in #14711.

@simon-king-jena
Member

comment:8

Replying to @nbruin:

For example B._coerce_embeddings_from[A] = phi, in addition to A._coerce_embedding = phi. Since it is a MonoDict and thus has weak references to the keys, B would not prevent A from being garbage collected (but of course A would still keep B alive, because we registered the coerce embedding).

Yes, something along those lines. In fact, since _coerce_embeddings_from and _coerce_from_list would serve the same purpose from the perspective of coercion discoveries, I think the entries in _coerce_from_list should also be added to _coerce_embeddings_from, because it would simplify the code. By then it would make more sense to call the attribute _coerce_from_to_be_used_in_backtracking and the list that is now _coerce_from_hash could be _coerce_from_cached_results.

Or shorter: _coerce_from_backtracking and _coerce_from_cache.

By then we should probably be storing ALL maps there in weakened form.

Probably.

The only function of _coerce_from_list now is to keep domains alive, so it would be more economical to replace that by a list _domains_to_keep_alive where we can store strong references to the domains of the corresponding weakened maps that are now in _coerce_from_to_be_used_in_backtracking.

I have introduced Parent._registered_domains for exactly this purpose, at #14711. So, we should extend its use.

  • P._codomains_to_keep_alive: List of codomains that should be kept alive as long as P lives. Normally, such codomains Q would have an entry in Q._coercion_maps[P]. Just storing a map in P._coerce_embedding would also have this effect.

Yes, and that's why I don't think we need P._codomains_to_keep_alive (I'd prefer the name P._registered_codomains). Even in weakened maps, we have a strong reference to the codomain.

@simon-king-jena
Member

comment:9

PS: We also have convert_from_list, and I guess we should proceed similarly for coerce_from_list.

@simon-king-jena
Member

comment:10

With the branch that I have just attached, one gets

sage: class pA(Parent): pass
sage: class pB(Parent): pass
sage: class pC(Parent): pass
sage: 
sage: A=pA(); B=pB(); C=pC()
sage: 
sage: BtoA=Hom(B,A)(lambda x: A(x))
sage: AtoC=Hom(A,C)(lambda x: C(x))
sage: A.register_coercion(BtoA)
sage: A.register_embedding(AtoC)
sage: C.coerce_map_from(B)
Composite map:
  From: <class '__main__.pB'>
  To:   <class '__main__.pC'>

        WARNING: This map has apparently been used internally
        in the coercion system. It may become defunct in the next
        garbage collection. Please use a copy.

What has been done in the patch:

  • renaming of attributes (_coerce_from_backtracking etc)
  • do not use a list, but use a MonoDict for all morphisms that are supposed to be fundamental for backtracking. There is a list _registered_domains, so that domains of registered coercions are kept alive by the codomain, but the domain of a registered embedding is not kept alive by the codomain.

Missing: Tests and versioning.

@simon-king-jena
Member

Branch: u/SimonKing/ticket/15303

@nbruin
Contributor Author

nbruin commented Oct 20, 2013

Commit: dcd8d68

@nbruin
Contributor Author

nbruin commented Oct 20, 2013

comment:12

Replying to @simon-king-jena:

PS: We also have convert_from_list, and I guess we should proceed similarly for coerce_from_list.

Yes, conversion probably needs some attention too. However, conversions exist between a lot more pairs of parents. Do we use backtracking to discover them? There are almost certainly loops: trivially, because ZZ->Z/nZ and Z/nZ->ZZ are valid conversions.

If we want a memory leak-free implementation I suspect having conversion without lifetime implications in either direction is required. Definitely material for another ticket.


Last 10 new commits:

[changeset:dcd8d68]Use registered embeddings for backtracking in the coercion model
[changeset:85cf7e8]Merge branch 'ticket/14711' into ticket/15303
[changeset:364b985]Add warning to string repr of weakened maps. Implement copying for *all* kinds of maps.
[changeset:5168cfd]Generic copy method for maps, using _update_slots Use a cdef _codomain, since the codomain is strongly refed anyway Add doctests
[changeset:452d216]Add docs to SchemeMorphism
[changeset:05fb569]Change SchemeMorphism back (to cope with a Cython bug), copying the new code from sage.categories.map.Map
[changeset:8fd09d5]Copying of PolynomialBaseringInjection and FormalCompositeMap
[changeset:be37145]Let SchemeMorphism inherit from Morphism, not from Element
[changeset:0f38a2c]Keep strong reference to codomain of weakened coerce maps Keep strong reference to domains of *registered* coercions
[changeset:a53261d]Keep a strong reference to the codomain of PrecomposedAction

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Oct 20, 2013

Changed commit from dcd8d68 to f837cbe

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Oct 20, 2013

Branch pushed to git repo; I updated commit sha1. New commits:

[changeset:f837cbe]Store a version number for the coercion cache, depending on the registered embeddings

@simon-king-jena
Member

comment:14

I have pushed a new commit that implements version numbers for the coercion cache, and I added a doctest along the following lines:

sage: class pA(Parent): pass
sage: class pB(Parent): pass
sage: class pC(Parent): pass
sage: A=pA(); B=pB(); C=pC()
sage: BtoA=Hom(B,A)(lambda x: A(x))
sage: AtoC=Hom(A,C)(lambda x: C(x))
sage: A.register_coercion(BtoA)
sage: C.coerce_map_from(B)
sage: A.register_embedding(AtoC)
sage: C.coerce_map_from(B)
Composite map:
  From: <class '__main__.pB'>
  To:   <class '__main__.pC'>

        WARNING: This map has apparently been used internally
        in the coercion system. It may become defunct in the next
        garbage collection. Please use a copy.

Hence, I think that the current commit solves our problem. In spite of the following "todo" list, I put it to "needs review" (but I am sure that further commits will be pushed soon).

TODO:

  • Add documentation, stating that registering of coercions should only be done during initialisation, and that registering of embeddings shouldn't be done too often.
  • Run all doctests---since playing with the coercion digraph is asking for trouble, I wouldn't be surprised about bad surprises :-P
  • Important: Get timings for examples that should be most sensitive against slow-downs in coercion.

@simon-king-jena
Member

Author: Simon King

@simon-king-jena
Member

comment:15

I already found that some test fails when s = SymmetricFunctions(QQbar).s() is done. "Of course", the error only occurs in sage -t src/sage/combinat/sf/sfa.py, but not if one just does s = SymmetricFunctions(QQbar).s() in an interactive session...

@nbruin
Contributor Author

nbruin commented Oct 20, 2013

comment:16
  • Important: Get timings for examples that should be most sensitive against slow-downs in coercion.

Looking at what happens for number fields, I have the impression that QQ._populate_coercion_lists_ is most frequently used to supply embeddings (I'm not getting many hits on direct calls to register_embedding, see below).

The only two classes I have been able to identify that actually install embeddings are number fields (and not all of them) and AbelianGroupWithValues_class.

So, assuming that the version check itself is lightning-fast (and why shouldn't it be?) I expect that negatively affected examples would have to do a lot of number field or AbelianGroupWithValues creations interleaved with complicated coercion discovery that benefits a lot from knowing certain coercions do NOT exist.

There's AdditiveAbelianGroupWrapper, which does an _unset_coercions_used together with a register_embedding (registering an "embedding" of the wrapper into the wrapped).

There's also UniversalCyclotomicField which registers an embedding into QQbar upon init, so that shouldn't really affect versioning.

@simon-king-jena
Member

comment:17

The following runs into an infinite loop:

sage: CF = CyclotomicField(5)
sage: UCF.<E> = UniversalCyclotomicField()
sage: CF = CyclotomicField(5,embedding=CC(exp(4*pi*i/5)))
sage: x = E(5)
sage: CC(x)

@simon-king-jena
Member

Work Issues: analyse recursion error

@simon-king-jena
Member

comment:18

Shorter:

sage: UCF.<E> = UniversalCyclotomicField()
sage: phi = copy(QQbar.coerce_map_from(UCF))
sage: x = E(5)
sage: phi(x)

Hence, we do have a map, but applying it will run into a loop.

@simon-king-jena
Member

comment:113

Replying to @mezzarobba:

Sorry, I meant to write

git checkout trac/u/mmezzarobba/ticket/15303 -m mybranch
git checkout trac/u/mmezzarobba/ticket/15303 -m ticket/15303/mezzarobba
error: pathspec 'trac/u/mmezzarobba/ticket/15303' did not match any file(s) known to git.
error: pathspec 'ticket/15303/mezzarobba' did not match any file(s) known to git.

So, that's not it.

However, as I said above: After the fetch command, I did git merge --ff-only FETCH_HEAD (being on my branch for this ticket), and I hope that this was one way to incorporate your merge resolution.

However, how can I see your merge resolution?


@mezzarobba
Member

comment:115

Replying to @simon-king-jena:

How can I see what you did? I tried git diff HEAD^2, but it doesn't really look like what I expected from resolving a merge conflict, since it touches stuff that did not conflict.

git diff HEAD^2 gives you all the differences between one of the parents (in our case, master) and HEAD, without taking into account the other parent at all. To see the effect of a merge, you can try viewing the "combined diff" of HEAD and both parents (git show HEAD). But this will omit all files that agree entirely with either of the parents. Also, the output will include the parts of the merge that were automatically resolved.

All I did manually was to choose the version of morphism.pyx that looked more recent, and to merge the imports of morphism.py as follows:

 -from sage.categories.homset   import Homset
 +from sage.categories.homset   import Homset, Hom
- from sage.rings.all           import is_RingHomomorphism, is_CommutativeRing, 
+ from sage.rings.all           import Integer
+ from sage.rings.commutative_ring import is_CommutativeRing
+ from sage.rings.morphism import is_RingHomomorphism

@simon-king-jena
Member

comment:116

Replying to @mezzarobba:

To see the effect of a merge, you can try viewing the "combined diff" of HEAD and both parents (git show HEAD). But this will omit all files that agree entirely with either of the parents. Also, the output will include the parts of the merge that were automatically resolved.

I see. So, you took the version of src/sage/misc/weak_dictionary.pyx from master, and ...

All I did manually was to choose the version of morphism.pyx that looked more recent,

I have not even been aware that this has changed. So, someone has made morphisms that are defined by generator images hashable? Nice!

and to merge the imports of morphism.py as follows:

 -from sage.categories.homset   import Homset
 +from sage.categories.homset   import Homset, Hom
- from sage.rings.all           import is_RingHomomorphism, is_CommutativeRing, 
+ from sage.rings.all           import Integer
+ from sage.rings.commutative_ring import is_CommutativeRing
+ from sage.rings.morphism import is_RingHomomorphism

Didn't you also do something with src/sage/combinat/ncsf_qsym/ncsf.py?

Anyway, the output of git show HEAD looks good to me. So, I will try make ptest now. Is "crash in pergroup.py" still an issue? We will see...

@mezzarobba
Copy link
Member

comment:117

Replying to @simon-king-jena:

I see. So, you took the version of src/sage/misc/weak_dictionary.pyx from master, and ...

All I did manually was to choose the version of morphism.pyx that looked more recent,

Of weak_dictionary.pyx, sorry (I guess I need to catch up on sleep...). The merge of morphism.pyx was automatic.

Didn't you also do something with src/sage/combinat/ncsf_qsym/ncsf.py?

No, there was no conflict there either.

@simon-king-jena
Member

comment:118

Replying to @mezzarobba:

Didn't you also do something with src/sage/combinat/ncsf_qsym/ncsf.py?

No, there was no conflict there either.

Then I show you what git show HEAD shows me:

commit 2712c53cb68d2668b47ccc923b5a77421ff04bbd
Merge: 528a035 0c6fcdf
Author: Marc Mezzarobba <[email protected]>
Date:   Sat Nov 30 14:10:11 2013 +0100

    Merge 'trac/master' into ticket/15303
    
    Conflicts:
        src/sage/misc/weak_dict.pyx
        src/sage/schemes/generic/morphism.py

diff --cc src/sage/categories/morphism.pyx
index 74f600f,47aa188..a9f44c8
--- a/src/sage/categories/morphism.pyx
+++ b/src/sage/categories/morphism.pyx
@@@ -249,8 -214,58 +290,58 @@@ garbage collection. Please use a copy."
              sage: ZZ(x)
              -1
          """
 -        self.codomain().register_conversion(self)
 +        self._codomain.register_conversion(self)
  
+     # You *must* override this method in all cython classes
+     # deriving from this class.
+     # If you are happy with this implementation (typically
+     # is your domain has generators), simply write:
+     # def __hash__(self):
+     #     return Morphism.__hash__(self)
+     def __hash__(self):
+         """
+         Return a hash of this morphism.
+ 
+         It is the hash of the triple (domain, codomain, definition)
+         where ``definition`` is:
+ 
+         - a tuple consisting of the images of the generators
+           of the domain if domain has generators
+ 
+         - the string representation of this morphism otherwise
+ 
+         AUTHOR:
+ 
+         - Xavier Caruso (2012-07-09)
+         """
+         domain = self.domain()
+         codomain = self.codomain()
+         try:
+             gens = domain.gens()
+             definition = tuple([self(x) for x in gens])
+         except (AttributeError, NotImplementedError):
+             definition = self.__repr__()
+         return hash((domain, codomain, definition))
+ 
+     def __richcmp__(left, right, int op):
+         return (<Element>left)._richcmp(right, op)
+ 
+     cdef int _cmp_c_impl(left, Element right) except -2:
+         if left is right: return 0
+         domain = left.domain()
+         c = cmp(domain, right.domain())
+         if c: return c
+         c = cmp(left.codomain(), right.codomain())
+         if c: return c
+         try:
+             gens = domain.gens()
+             for x in gens:
+                 c = cmp(left(x), right(x))
+                 if c: return c
+         except (AttributeError, NotImplementedError):
+             raise NotImplementedError
+ 
+ 
  cdef class FormalCoercionMorphism(Morphism):
      def __init__(self, parent):
          Morphism.__init__(self, parent)
diff --cc src/sage/combinat/ncsf_qsym/ncsf.py
index 6478d46,7d84562..febf995
--- a/src/sage/combinat/ncsf_qsym/ncsf.py
+++ b/src/sage/combinat/ncsf_qsym/ncsf.py
@@@ -238,12 -249,11 +249,12 @@@ class NonCommutativeSymmetricFunctions(
          sage: elementary(ribbon[2,1,2,1])
          L[1, 3, 2] - L[1, 5] - L[4, 2] + L[6]
  
-     TODO: explain the other changes of bases!
+     .. TODO:: explain the other changes of bases!
  
 -    Here is how to fetch the conversion morphisms::
 +    Here is how to fetch the coerce morphisms. Note that by :trac:`15303`, we
 +    should use a copy of the maps being used in the coercion system::
  
 -        sage: f = complete.coerce_map_from(elementary); f
 +        sage: f = copy(complete.coerce_map_from(elementary)); f
          Generic morphism:
            From: NCSF in the Elementary basis
            To:   NCSF in the Complete basis
diff --cc src/sage/schemes/generic/morphism.py
index 3a6c351,09c6bc3..ab59866
--- a/src/sage/schemes/generic/morphism.py
+++ b/src/sage/schemes/generic/morphism.py
@@@ -79,10 -75,12 +79,12 @@@ AUTHORS
  #*****************************************************************************
  
  
 -from sage.structure.element   import AdditiveGroupElement, RingElement, Element, generic_power
 +from sage.structure.element   import AdditiveGroupElement, RingElement, Element, generic_power, parent
  from sage.structure.sequence  import Sequence
 -from sage.categories.homset   import Homset
 +from sage.categories.homset   import Homset, Hom
- from sage.rings.all           import is_RingHomomorphism, is_CommutativeRing, Integer
+ from sage.rings.all           import Integer
+ from sage.rings.commutative_ring import is_CommutativeRing
+ from sage.rings.morphism import is_RingHomomorphism
  from point                    import is_SchemeTopologicalPoint
  from sage.rings.infinity      import infinity
  import scheme
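
The `__hash__` scheme in the first hunk above — hash a morphism by the triple (domain, codomain, images of the domain's generators), with a repr fallback — can be imitated in plain Python. Note that `TinyParent` and `SimpleMap` below are made-up stand-ins for illustration, not Sage classes:

```python
class TinyParent:
    """Stand-in for a parent with generators (illustrative only)."""

    def __init__(self, name, gens):
        self._name = name
        self._gens = tuple(gens)

    def gens(self):
        return self._gens

    def __repr__(self):
        return self._name


class SimpleMap:
    """Toy morphism hashed like in the hunk above: by its domain,
    its codomain and the images of the domain's generators."""

    def __init__(self, domain, codomain, func):
        self._domain = domain      # assumed hashable
        self._codomain = codomain  # assumed hashable
        self._func = func

    def __call__(self, x):
        return self._func(x)

    def __hash__(self):
        try:
            gens = self._domain.gens()
            definition = tuple(self(x) for x in gens)
        except (AttributeError, NotImplementedError):
            # No generators: fall back to the string representation.
            definition = repr(self)
        return hash((self._domain, self._codomain, definition))


A = TinyParent("A", [1, 2])
B = TinyParent("B", [])
f = SimpleMap(A, B, lambda x: 2 * x)
g = SimpleMap(A, B, lambda x: x + x)
# Maps that agree on the generators get equal hashes:
assert hash(f) == hash(g)
```

This matches the intent of the patch: morphisms that are equal (as determined by their values on the generators) hash equally.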

git blame shows me that the change in ncsf.py seems to be authored by me. But why is git show HEAD showing it?

@mezzarobba
Member

comment:119

Replying to @simon-king-jena:

git blame shows me that the change in ncsf.py seems to be authored by me. But why is git show HEAD showing it?

Because ncsf.py contains both lines that come from master (i.e., changes introduced by the merge from the point of view of your branch) and lines that come from your branch (i.e. changes w.r.t. master). The two columns of plus/minus/space correspond to these two sets of changes.

@simon-king-jena
Member

comment:120

Good and bad news. The good news: I did not see a crash in permgroup.py, even though it used to always occur with make ptest (not with other methods of testing). The bad news:

sage -t src/sage/combinat/ncsf_qsym/qsym.py  # 2 doctests failed
sage -t src/sage/schemes/projective/projective_morphism.py  # 1 doctest failed
sage -t src/sage/schemes/projective/projective_point.py  # 1 doctest failed
sage -t src/sage/rings/finite_rings/finite_field_base.pyx  # 1 doctest failed
sage -t src/sage/rings/finite_rings/hom_prime_finite_field.pyx  # 3 doctests failed

The number of failing tests indicates that it could be mostly harmless.

I'll soon push the branch (even though it actually is your branch, but I guess that is ok).

@simon-king-jena
Member

Changed work issues from Crash in permgroup.py to none

@darijgr
Contributor

darijgr commented Nov 30, 2013

comment:121

What is broken in qsym.py? I'm asking because I'm editing the file currently.

@simon-king-jena
Member

comment:122

Oooops, what is that?

sage -t src/sage/schemes/projective/projective_morphism.py
**********************************************************************
File "src/sage/schemes/projective/projective_morphism.py", line 1326, in sage.schemes.projective.projective_morphism.SchemeMorphism_polynomial_projective_space.canonical_height
Failed example:
    f.canonical_height(Q,badprimes=[2])
Expected:
    0.0013538030870311431824555314882
Got:
    verbose 0 (3533: multi_polynomial_ideal.py, groebner_basis) Warning: falling back to very slow toy implementation.
    verbose 0 (3533: multi_polynomial_ideal.py, groebner_basis) Warning: falling back to very slow toy implementation.
    0.0013538030870311431824555314882
**********************************************************************
1 item had failures:
   1 of  16 in sage.schemes.projective.projective_morphism.SchemeMorphism_polynomial_projective_space.canonical_height
    [481 tests, 1 failure, 8.40 s]
----------------------------------------------------------------------
sage -t src/sage/schemes/projective/projective_morphism.py  # 1 doctest failed
----------------------------------------------------------------------
Total time for all tests: 8.6 seconds
    cpu time: 7.7 seconds
    cumulative wall time: 8.4 seconds

So, why is the slow toy implementation being used when coercion is changed?

@simon-king-jena
Member

comment:123

Replying to @darijgr:

What is broken in qsym.py? I'm asking because I'm editing the file currently.

Here is the diff I made to fix the failure:

diff --git a/src/sage/combinat/ncsf_qsym/qsym.py b/src/sage/combinat/ncsf_qsym/qsym.py
index 583ca87..f127c19 100644
--- a/src/sage/combinat/ncsf_qsym/qsym.py
+++ b/src/sage/combinat/ncsf_qsym/qsym.py
@@ -2232,23 +2232,25 @@ class QuasiSymmetricFunctions(UniqueRepresentation, Parent):
         def __init_extra__(self):
             """
             Set up caches for the transition maps to and from the monomial
-            basis, and register them as coercions.
+            basis, and register them as coercions. By :trac:`15303`, we need
+            to copy coerce maps before exposing them outside of the coercion
+            system.
 
             TESTS::
 
                 sage: HWL = QuasiSymmetricFunctions(QQ).HazewinkelLambda()
                 sage: M = QuasiSymmetricFunctions(QQ).Monomial()
-                sage: HWL.coerce_map_from(M)
+                sage: M2HWL = copy(HWL.coerce_map_from(M)); M2HWL
                 Generic morphism:
                   From: Quasisymmetric functions over the Rational Field in the Monomial basis
                   To:   Quasisymmetric functions over the Rational Field in the HazewinkelLambda basis
-                sage: M.coerce_map_from(HWL)
+                sage: HWL2M = copy(M.coerce_map_from(HWL)); HWL2M
                 Generic morphism:
                   From: Quasisymmetric functions over the Rational Field in the HazewinkelLambda basis
                   To:   Quasisymmetric functions over the Rational Field in the Monomial basis
-                sage: M.coerce_map_from(HWL)(HWL[2])
+                sage: HWL2M(HWL[2])
                 M[1, 1]
-                sage: HWL.coerce_map_from(M)(M[2])
+                sage: M2HWL(M[2])
                 HWL[1, 1] - 2*HWL[2]
             """
             M = self.realization_of().M()
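
The copy-before-use pattern in this diff exists because, after #14711, maps stored inside the coercion system only hold a weak reference to their domain, so exposing them directly is unsafe. Here is a minimal sketch of the idea in plain Python; the `WeakenedMap` class and `P` parent are made up for illustration and Sage's real classes differ:

```python
import gc
import weakref
from copy import copy


class WeakenedMap:
    """Toy model of a weakened coerce map: only a weak reference to
    the domain is kept, so the coercion system does not keep parents
    alive forever. (Illustrative only; not Sage's actual class.)"""

    def __init__(self, domain, codomain, func):
        self._domain_ref = weakref.ref(domain)
        self._codomain = codomain
        self._func = func

    def domain(self):
        dom = self._domain_ref()
        if dom is None:
            raise RuntimeError("domain has been garbage collected")
        return dom

    def __call__(self, x):
        return self._func(x)

    def __copy__(self):
        # The copy closes over a strong reference to the domain,
        # which makes it safe to use outside the coercion system.
        dom = self.domain()
        strong = WeakenedMap.__new__(WeakenedMap)
        strong._domain_ref = lambda: dom
        strong._codomain = self._codomain
        strong._func = self._func
        return strong


class P:
    """Stand-in for a parent."""


# A map whose domain has no other strong reference dies quickly
# (CPython's refcounting collects the temporary parent immediately):
frail = WeakenedMap(P(), P(), lambda x: x)
gc.collect()
try:
    frail.domain()
    ok = False
except RuntimeError:
    ok = True  # the weakly referenced domain is gone

# Copying first keeps the domain alive, even after all other
# references to it disappear:
D = P()
safe = copy(WeakenedMap(D, P(), lambda x: x + 1))
del D
gc.collect()
assert ok
assert safe(1) == 2
assert isinstance(safe.domain(), P)
```

This is why the doctests now wrap `coerce_map_from` in `copy(...)` before using the map.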

@simon-king-jena
Member

comment:124

Replying to @simon-king-jena:

Oooops, what is that?

...
So, why is the slow toy implementation being used when coercion is changed?

Same problem in sage -t src/sage/schemes/projective/projective_point.py.

And I guess the other errors should rather be fixed at #14711, because they are about "weakened coerce maps".

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Dec 1, 2013

Changed commit from 528a035 to 12e8055

@sagetrac-git
Mannequin

sagetrac-git mannequin commented Dec 1, 2013

Branch pushed to git repo; I updated commit sha1. New commits:

[12e8055](https://github.com/sagemath/sagetrac-mirror/commit/12e8055) Merge branch 'ticket/14711' into ticket/15303
[ee30c20](https://github.com/sagemath/sagetrac-mirror/commit/ee30c20) Address the "check" keyword of scheme morphisms by name, not position
[d68c5df](https://github.com/sagemath/sagetrac-mirror/commit/d68c5df) Minor fixes, that became needed since #14711 was not merged quickly enough
[c42b539](https://github.com/sagemath/sagetrac-mirror/commit/c42b539) Merge branch 'master' into ticket/14711
[2712c53](https://github.com/sagemath/sagetrac-mirror/commit/2712c53) Merge 'trac/master' into ticket/15303
[23f18f2](https://github.com/sagemath/sagetrac-mirror/commit/23f18f2) Merge branch 'master' into ticket/14711

@simon-king-jena
Member

comment:126

Since #14711 has changed, I have merged the new commits from there.

And I have included Marc's master merge.

Doctests pass for me.

@sagetrac-vbraun-spam sagetrac-vbraun-spam mannequin modified the milestones: sage-6.0, sage-6.1 Dec 17, 2013
@mezzarobba
Member

comment:128

This branch (more precisely, commit 1db6444c33c5b41bf600b6446badd92ddbe3c018) conflicts with that from #10963 (more precisely, with 9d9cae3).

@sagetrac-vbraun-spam sagetrac-vbraun-spam mannequin modified the milestones: sage-6.1, sage-6.2 Jan 30, 2014
@rwst
Contributor

rwst commented Apr 5, 2014

Work Issues: rebase (11 files conflicting)

@sagetrac-vbraun-spam sagetrac-vbraun-spam mannequin modified the milestones: sage-6.2, sage-6.3 May 6, 2014
@sagetrac-vbraun-spam sagetrac-vbraun-spam mannequin modified the milestones: sage-6.3, sage-6.4 Aug 10, 2014
@mkoeppe mkoeppe removed this from the sage-6.4 milestone Dec 29, 2022