hotfix on classification metrics #2878
Conversation
Co-authored-by: Jirka Borovec <[email protected]>
good catch! I wonder if other metrics have this issue too.
I think this is all we need to fix. The major changes were all in the Though if there is anything more I missed or got wrong, please tell me. 😄
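For readers following along, the bug class being discussed (a function that accidentally edits its caller's data in place) can be illustrated with a plain-Python stand-in. This is a hypothetical sketch, not the actual PR code; the function names `clip_labels_inplace` and `clip_labels` are invented for illustration:

```python
def clip_labels_inplace(labels, num_classes):
    # Buggy pattern: writes into the caller's list, so the input
    # is silently changed as a side effect of computing the metric.
    for i, v in enumerate(labels):
        if v >= num_classes:
            labels[i] = num_classes - 1
    return labels

def clip_labels(labels, num_classes):
    # Fixed pattern: builds a new list, so the input stays untouched.
    return [min(v, num_classes - 1) for v in labels]
```

Both return the same values; the difference only shows up when the caller reuses the input afterwards, which is exactly why this kind of bug is easy to miss.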
Codecov Report

@@           Coverage Diff           @@
##           master   #2878   +/-   ##
======================================
  Coverage      90%     90%
======================================
  Files          79      79
  Lines        7259    7226   -33
======================================
- Hits         6524    6519    -5
+ Misses        735     707   -28
can we add a test to show that master fails and this fixes it?
Again, we were waiting for tests to be added...
I would usually also like to see a test. For correctness we would need such a test for every metric, not just this one, but then how do we generalize this test for every metric? They are all different.
TL;DR: I added the tests for my changes, and I'm not sure whether I should add more. If you want more tests, you might want to add tests for

I agree that we need more tests on this part of the code. In fact, I did add some tests for

If you want to verify my changes, you might want to test the

However, I think tests cannot encompass every possible case and expectation check. This in-place bug would need a lot of test cases and checks, for each function. Though that's doable, I don't think we can verify such cases (not only things like in-place ops) effectively. Catching such bugs before merging is ideal and great, but I think some of them can just get through sometimes, and we can catch them later on when we have a visible issue.

Moreover, as I'm not a core contributor of PL, I'm not sure about the expected behavior of the edge cases. For example, justusschock mentioned the expected results for unmatching

Even so, I'm happy to discuss this part of the project in depth. If you may, please open up such a conversation and mention me! =)
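On the question of generalizing a "does not mutate its inputs" test across all metrics: one option (a sketch only; the helper `assert_no_input_mutation` and the toy metrics below are hypothetical, not part of this PR) is a reusable check that snapshots the inputs, runs the metric, and then compares:

```python
import copy

def assert_no_input_mutation(metric_fn, *inputs):
    """Hypothetical helper: fail if metric_fn mutates any of its inputs."""
    snapshots = [copy.deepcopy(x) for x in inputs]
    metric_fn(*inputs)
    for before, after in zip(snapshots, inputs):
        assert before == after, f"{metric_fn.__name__} mutated an input"

# Toy metric (hypothetical) with the buggy in-place fix-up:
def accuracy_buggy(pred, target):
    for i, p in enumerate(pred):
        if p < 0:
            pred[i] = 0  # mutates the caller's data
    return sum(p == t for p, t in zip(pred, target)) / len(target)

# Out-of-place version of the same toy metric:
def accuracy_fixed(pred, target):
    pred = [max(p, 0) for p in pred]  # works on a copy
    return sum(p == t for p, t in zip(pred, target)) / len(target)
```

Because the check only needs a deep copy and an equality comparison, the same helper could in principle be applied to every metric in one parametrized test, even though the metrics themselves all differ.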
What does this PR do?
Fixes #2862
Changes stat_scores_multiple_classes so that it no longer modifies the input tensors, by removing in-place operations.

Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃