Use epsilon() for tolerances #142
Conversation
The original tolerances look arbitrary:

```fortran
real(sp), parameter :: sptol = 1.2e-06_sp
real(dp), parameter :: dptol = 2.2e-15_dp
```

but when written using `epsilon()` they are almost exactly 10x the machine epsilon for the particular precision kind:

```fortran
real(sp), parameter :: sptol = 10.06633 * epsilon(1._sp)
real(dp), parameter :: dptol = 9.907919 * epsilon(1._dp)
```

so I just rounded it to 10:

```fortran
real(sp), parameter :: sptol = 10 * epsilon(1._sp)
real(dp), parameter :: dptol = 10 * epsilon(1._dp)
```

which now shows the intent more clearly: the tolerance assumes we lose one digit of accuracy.
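As a quick way to see where those multipliers come from, the following minimal sketch (not code from this PR; mapping `sp`/`dp` to `real32`/`real64` from `iso_fortran_env` is an assumption) prints the ratio of each old hard-coded tolerance to the corresponding machine epsilon:

```fortran
program tolerance_ratio
  ! Sketch only: ratio of the old hard-coded tolerances to epsilon()
  ! for each kind; sp/dp are assumed to be real32/real64 here.
  use, intrinsic :: iso_fortran_env, only: sp => real32, dp => real64
  implicit none
  print *, 1.2e-06_sp / epsilon(1._sp)   ! ~10.06633
  print *, 2.2e-15_dp / epsilon(1._dp)   ! ~9.907919
end program tolerance_ratio
```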
Thanks @jvdp1. @aradi, in that comment you gave
Yes, I agree, your solution shows the intention much more clearly, so let's go with that.
I feel like a drive-by sniper by commenting. But with a carefully chosen data set (e.g., integral values, a power-of-2 number of values) the calculations will be done without any round-off, and the tests can check for equality (bit-for-bit). One can dispense entirely with any notion of an appropriate epsilon, and the data can easily be hard-coded in the test itself without the extra complexity of reading from a file. The test would be far simpler. What exactly is being gained by using data that uses all the significant digits?
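As an illustration of this idea (a hypothetical sketch, not code from the PR): with integral values and a power-of-2 count, both the sum and the division are exact in IEEE arithmetic, so the comparison can be bit-for-bit:

```fortran
program exact_mean_test
  ! Sketch: 8 integral values; the sum (36) and the division by 8 are
  ! both exact in IEEE arithmetic, so == is a valid comparison here.
  use, intrinsic :: iso_fortran_env, only: dp => real64
  implicit none
  real(dp), parameter :: x(8) = real([1, 2, 3, 4, 5, 6, 7, 8], dp)
  real(dp) :: m
  m = sum(x) / size(x)            ! 36 / 8 = 4.5, both steps exact
  if (m /= 4.5_dp) error stop "mean is not bit-for-bit exact"
end program exact_mean_test
```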
I like this idea. But do you think it is possible for functions more complicated than `mean`?
@nncarlson Your comments are more than welcome! We do the PRs in public in order to get critical comments from skilled developers and so improve the quality of stdlib. As for the bit-by-bit comparison: I agree, one could maybe use special numbers and check by bitwise comparison. However, we should keep in mind that
Especially the latter is important IMO, as the test suite should definitely include tests for the typical use cases (using all significant bits). However, adding the specially designed input values with a bitwise check of the results could make for very nice additional tests.
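For the typical-use-case style of test argued for here, a minimal sketch (hypothetical; the input data and the hard-coded reference value 248/693 are my own, not from the PR) using the `10 * epsilon()` tolerance might look like:

```fortran
program tolerance_mean_test
  ! Sketch: data using all significant bits, checked against a
  ! hard-coded reference within the 10*epsilon tolerance from this PR.
  use, intrinsic :: iso_fortran_env, only: dp => real64
  implicit none
  real(dp), parameter :: dptol = 10 * epsilon(1._dp)
  real(dp), parameter :: x(3) = [1._dp/3._dp, 2._dp/7._dp, 5._dp/11._dp]
  real(dp), parameter :: expected = 0.357864357864357864_dp  ! = 248/693
  if (abs(sum(x)/size(x) - expected) > dptol) error stop "outside tolerance"
end program tolerance_mean_test
```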
Unfortunately not. Only a subset of those things where something rational is being computed. But in cases where it is possible, it makes the tests much more straightforward, and I think that is preferable.
It's certainly not the typical use case, but what exactly is being tested? I think it is that the correct elements from the input arrays are being summed and normalized by the correct value; i.e., the code of the mean function. Carefully chosen integral values (in this case) are sufficient to test that. I think the only thing that using general values (requiring all significant bits) adds is a test that the hardware does addition and division correctly, and that is something this and other tests can assume.
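To make the element-selection point concrete (again a hypothetical sketch, not code from the PR): integral entries can be chosen so that the means along each dimension are exact and mutually distinct, so summing the wrong elements cannot accidentally produce the expected result:

```fortran
program exact_dim_test
  ! Sketch: means along dim=1 and dim=2 are exact and all different,
  ! so wrong element selection cannot reproduce the expected values.
  use, intrinsic :: iso_fortran_env, only: dp => real64
  implicit none
  real(dp) :: a(2,2)
  a = reshape(real([1, 3, 5, 11], dp), [2, 2])
  ! column means: [2, 8]; row means: [3, 7]; all exactly representable
  if (any(sum(a, dim=1)/2 /= [2._dp, 8._dp])) error stop "dim=1 selection wrong"
  if (any(sum(a, dim=2)/2 /= [3._dp, 7._dp])) error stop "dim=2 selection wrong"
end program exact_dim_test
```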
We can further improve this to test things exactly, but I am also fine with keeping this as is. This PR is effectively just a refactoring of what is in master, so I merged it.