Introduce managedBy field #440
Comments
We added it in the previous release, see #379
Oh, you are referring to a field. Yeah, that sounds good; consistency with Job is good.
Should we revert that PR then?
+1 for replacing it with a field. I suppose the alpha annotation already made it into the release. I would suggest keeping both mechanisms for at least one release, to facilitate upgrades to Kueue+JobSet.
That makes sense.
I tried a quick revert of the PR, but that failed, so reverting may be a bit of a manual process.
+1 for using a field, for ecosystem consistency, but also so that we fail fast when someone tries to use it on an old version (the same argument as for the k8s Job).
/assign
@trasc have you already started this? I was hoping to have a team member of mine work on this as a good first issue to ramp up on JobSet; we just hadn't assigned it to him yet, so I'd like to assign it to him if that's okay.
@danielvegamyhre Not yet. /unassign
Thanks! /assign @jedwins1998
@danielvegamyhre: GitHub didn't allow me to assign the following users: jedwins1998. Note that only kubernetes-sigs members with read permissions, repo collaborators, and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign
What is our deprecation policy for the managed-by label?
I just realized this change was included in the v0.3.0 release. Given that, I think we should support both the label and the field for a period before removing the label. Maybe for a couple of releases? And we can emit a warning at JobSet creation time that the label is being deprecated. What do you think?
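The "support both for a period" suggestion could be sketched roughly as follows. This is a hedged illustration, not the actual JobSet implementation: the type and function names (`effectiveManager`, the trimmed `JobSet` struct) are hypothetical, and the real controller works against the full API types.

```go
package main

import "fmt"

// legacyManagedByLabel is the alpha label this issue proposes replacing.
const legacyManagedByLabel = "alpha.jobset.sigs.k8s.io/managed-by"

// JobSetSpec is a trimmed, hypothetical stand-in for the real spec type.
type JobSetSpec struct {
	ManagedBy *string
}

// JobSet is a trimmed, hypothetical stand-in for the real API type.
type JobSet struct {
	Labels map[string]string
	Spec   JobSetSpec
}

// effectiveManager prefers the new spec field and falls back to the
// deprecated label; warn reports whether the caller should emit a
// deprecation warning for label usage.
func effectiveManager(js JobSet) (manager string, warn bool) {
	if js.Spec.ManagedBy != nil {
		return *js.Spec.ManagedBy, false
	}
	if v, ok := js.Labels[legacyManagedByLabel]; ok {
		return v, true
	}
	return "", false
}

func main() {
	ctrl := "kueue.x-k8s.io/multikueue"

	// Field set: field wins, no warning.
	m, warn := effectiveManager(JobSet{Spec: JobSetSpec{ManagedBy: &ctrl}})
	fmt.Println(m, warn) // kueue.x-k8s.io/multikueue false

	// Only the legacy label set: honored, but flagged for a warning.
	m, warn = effectiveManager(JobSet{Labels: map[string]string{legacyManagedByLabel: ctrl}})
	fmt.Println(m, warn) // kueue.x-k8s.io/multikueue true
}
```

Preferring the field over the label means users who migrate see consistent behavior even if a stale label lingers on existing objects.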
I brought this up in kubernetes-sigs/kueue#1870 to figure out what we want to do.
Update: we can safely drop the label entirely, per the conversation here: kubernetes-sigs/kueue#1870 (comment)
Some additional notes for implementation:
@jedwins1998 you can generalize this into an improved contributor guide, like you were talking about, as well.
Created #487 as an initial implementation.
PR has been merged. Will this issue auto-close, or is there something I need to do?
It will auto-close if you add "Fixes #issueNumber" in the PR description. We can close this one manually. Thanks for your work on this!
What would you like to be added:
Remove the alpha.jobset.sigs.k8s.io/managed-by label and introduce a .spec.managedBy field.
@mimowo @ahg-g @alculquicondor WDYT?
/kind feature
Why is this needed:
In #407, we introduced the new managed-by label based on the batch/job KEP: kubernetes/enhancements#4370. But we later decided to introduce a managedBy field for batch/job here: kubernetes/enhancements#4530. So I think we should use a managedBy field instead of the dedicated label, matching batch/job.
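As a rough illustration of the proposal, the field could appear in a JobSet manifest like this. This is a sketch only: the field name comes from this issue, while the apiVersion and the example controller value are assumptions, not confirmed here.

```yaml
apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: example
spec:
  # Replaces the alpha.jobset.sigs.k8s.io/managed-by label: the named
  # controller, rather than the built-in one, reconciles this JobSet.
  managedBy: kueue.x-k8s.io/multikueue
  replicatedJobs: []
```

An unrecognized field would be rejected by old API servers, giving the fail-fast behavior discussed above, whereas an unrecognized label is silently accepted.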