RFC add matrix types, add sqrtm benchmarks #61
Conversation
src/linalg/LinAlgBenchmarks.jl
Outdated
    end
    return A
end
linalgmat{T}(::Type{T}, s, identity) = linalgmat(T, s)
I don't think this will do what you want; you need ::typeof(identity).
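A minimal sketch of the dispatch point (f is a toy stand-in, not the PR's function): with a plain argument name, identity is just a local binding and the method matches any value, whereas annotating with ::typeof(identity) restricts the method to calls that actually pass the identity function.

f(s, identity) = "matches any second argument"          # `identity` is only a local name here
f(s, ::typeof(identity)) = "matches only Base.identity" # dispatches on the identity function itself

f(3, sin)       # -> "matches any second argument"
f(3, identity)  # -> "matches only Base.identity"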
src/linalg/LinAlgBenchmarks.jl
Outdated
# Non-positive-definite upper-triangular matrix
type NPDUpperTriangular
end
linalgmat(::Type{NPDUpperTriangular}, s) = linalgmat(UpperTriangular, s, x -> randn()*x) |
Usually the benchmarks should use random numbers with a fixed seed, so that they are the same between benchmark runs. See the functions in RandUtils.jl.
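A minimal sketch of the seeding idea, in the Julia 0.5/0.6 style used here (the real helpers in RandUtils.jl should be preferred; the scaling closure and the 4x4 size are made up for illustration):

srand(1)                      # fix the global RNG seed once, when the suite is built
randscale = x -> randn() * x  # now yields the same values on every run
A = map(randscale, ones(4, 4))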
src/linalg/LinAlgBenchmarks.jl
Outdated
end
linalgmat{T}(::Type{T}, s, identity) = linalgmat(T, s)

linalgmat(::Type{UnitUpperTriangular}, s) = linalgmat(UpperTriangular, s, x -> 1) |
This doesn't actually return a UnitUpperTriangular matrix type?
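For illustration, a small sketch of the distinction (not code from this PR; n is arbitrary). UnitUpperTriangular is a separate wrapper type, so the benchmark needs to construct that wrapper rather than an UpperTriangular whose diagonal merely happens to be 1:

n = 4
A = UnitUpperTriangular(randn(n, n))  # isa UnitUpperTriangular
B = UpperTriangular(ones(n, n))       # still isa UpperTriangular, despite the unit diagonal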
@stevengj Thanks a lot for your comments -- they are really appreciated!
Thanks for the contribution, and sorry for the delayed review!
src/linalg/LinAlgBenchmarks.jl
Outdated
# operations #
##############

g = addgroup!(SUITE, "operations") |
Can we change "operations" to a more descriptive group name? The word "operations" could apply to basically any benchmark.
Absolutely, any suggestions? I struggled with this.
I would just put it in the "arithmetic" group.
src/linalg/LinAlgBenchmarks.jl
Outdated
end

for b in values(g)
    b.params.time_tolerance = 0.45
Do you have any reason to believe that the benchmarks will be this noisy? It's probably better to stick with the default, and then we raise the tolerance threshold later if we need to.
None at all; I didn't know what I was doing and just copied them from above. Is it OK to simply remove these lines?
Yes. The time_tolerance parameter basically tells BenchmarkTools.judge the percent threshold at which something is classified as a "regression" or "improvement".
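For context, a minimal illustration of how that threshold feeds into judge (the summing benchmark is a placeholder, not one from this suite):

using BenchmarkTools
old = minimum(run(@benchmarkable sum(rand(1000))))
new = minimum(run(@benchmarkable sum(rand(1000))))
# Classifies the change as a regression, improvement, or invariant depending on
# whether the relative difference in time exceeds the tolerance:
judge(new, old; time_tolerance = 0.45)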
Done. As a consequence of the change operations -> arithmetic, is the same time_tolerance = 0.45 applied again to these operations?
Oops, yeah, it does. The real solution, I guess, is to add a subgroup under "arithmetic" for the binary arithmetic benchmarks (*, +, -, etc.), which are what that time tolerance setting was meant for. I can do that in a future PR, though; let's just get this merged.
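A rough sketch of what that subgroup could look like (not from this PR; the operands and keys are placeholders), so the looser tolerance only applies to the binary benchmarks:

using BenchmarkTools
A, B = randn(4, 4), randn(4, 4)        # placeholder operands
g = BenchmarkGroup()                   # stands in for SUITE["arithmetic"]
bin = g["binary"] = BenchmarkGroup()
bin["*"] = @benchmarkable $A * $B
bin["+"] = @benchmarkable $A + $B
for b in values(bin)
    b.params.time_tolerance = 0.45     # only the binary subgroup is loosened
end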
Sweet. Travis is good for 0.4 and 0.5 but fails on 0.6, seemingly due to
ERROR: LoadError: LoadError: UndefVarError: readbytes not defined
Is it from this PR, or could it be a 0.6-nightly change in the last week?
It's not from this PR; the error is occurring in FileIO.jl. BaseBenchmarks and its dependencies haven't been runnable on Julia master for a couple of days now; there have just been too many changes that broke the stack (plus all the type inference bugs).
- Start benchmarking sqrtm, cf. JuliaLang/julia#20214
- Add UnitUpperTriangular and NPDUpperTriangular (NPD = non-positive-definite) matrices to the testing
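As a rough sketch (not the PR's actual entries), a sqrtm benchmark for one of these matrix types might follow the same BenchmarkGroup/@benchmarkable pattern used elsewhere in the suite; the matrix below is a placeholder:

using BenchmarkTools
A = UpperTriangular(randn(8, 8) + 8 * eye(8))   # well-conditioned placeholder matrix
g = BenchmarkGroup()
g["sqrtm", "UpperTriangular", 8] = @benchmarkable sqrtm($A)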