
RFC add matrix types, add sqrtm benchmarks #61

Merged: 4 commits, merged Feb 3, 2017

Conversation

@felixrehren felixrehren (Contributor) commented Jan 25, 2017

Start benchmarking sqrtm, cf. JuliaLang/julia#20214
Add UnitUpperTriangular and NPDUpperTriangular (NPD = non-positive-definite) matrices to the testing

end
return A
end
linalgmat{T}(::Type{T}, s, identity) = linalgmat(T, s)
Contributor:

I don't think this will do what you want; you need ::typeof(identity)
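For context, a minimal sketch of the dispatch issue the reviewer is pointing out, using the PR's Julia 0.5-era syntax (the surrounding method names are from the diff; the comments are editorial):

```julia
# Here `identity` is just an untyped argument name: it shadows
# Base.identity and matches ANY third argument, so this method would
# intercept every three-argument call.
linalgmat{T}(::Type{T}, s, identity) = linalgmat(T, s)

# To dispatch only when the identity function itself is passed,
# constrain the argument's type instead of naming it:
linalgmat{T}(::Type{T}, s, ::typeof(identity)) = linalgmat(T, s)
```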

# Non-positive-definite upper-triangular matrix
type NPDUpperTriangular
end
linalgmat(::Type{NPDUpperTriangular}, s) = linalgmat(UpperTriangular, s, x -> randn()*x)
Contributor:

Usually the benchmarks should use random numbers with a fixed seed, so that they are the same between benchmark runs. See the functions in RandUtils.jl.
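The fixed-seed pattern the reviewer describes can be sketched like this (illustrative names only; `BENCH_RNG` and `seeded_randn` are not the actual RandUtils.jl API):

```julia
# Keep one RNG with a fixed seed so every benchmark run sees the
# same "random" data, making timings comparable across runs.
const BENCH_RNG = MersenneTwister(1)

seeded_randn(dims...) = randn(BENCH_RNG, dims...)

# e.g. building the non-positive-definite matrix from deterministic draws:
# linalgmat(UpperTriangular, s, x -> seeded_randn() * x)
```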

end
linalgmat{T}(::Type{T}, s, identity) = linalgmat(T, s)

linalgmat(::Type{UnitUpperTriangular}, s) = linalgmat(UpperTriangular, s, x -> 1)
Contributor:

This doesn't actually return a UnitUpperTriangular matrix type?

@felixrehren felixrehren (Contributor, Author) commented Jan 26, 2017

@stevengj Thanks a lot for your comments -- they are really appreciated!
I eliminated some of the complexity I introduced. The issue is that sqrtm behaves differently on positive-definite versus non-positive-definite upper-triangular matrices, so a new non-positive-definite test matrix is essential. I'm not sure my implementation is good; suggestions welcome.
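To illustrate the behavioral difference motivating the new matrix type (an editorial sketch, not code from the PR; assumes Julia 0.5-era sqrtm):

```julia
# A positive-definite upper-triangular matrix has a real square root,
# so sqrtm can stay on a fast real code path:
A = UpperTriangular([4.0 1.0; 0.0 9.0])
sqrtm(A)   # real-valued result

# A negative diagonal entry makes the matrix non-positive-definite;
# its square root is complex, forcing a different code path:
B = UpperTriangular([-4.0 1.0; 0.0 9.0])
sqrtm(B)   # complex-valued result
```

Benchmarking only the positive-definite case would therefore miss one of the two code paths entirely.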

@jrevels jrevels (Member) left a comment:

Thanks for the contribution, and sorry for the delayed review!

# operations #
##############

g = addgroup!(SUITE, "operations")
Member:

Can we change "operations" to a more descriptive group name? The word "operations" could apply to basically any benchmark.

Contributor (Author):

Absolutely, any suggestions? I struggled with this.

Member:

I would just put it in the "arithmetic" group.

end

for b in values(g)
b.params.time_tolerance = 0.45
Member:

Do you have any reason to believe that the benchmarks will be this noisy? It's probably better to stick with the default, and then we raise the tolerance threshold later if we need to.

Contributor (Author):

None at all, I didn't know what I was doing -- I copied them from above, but is it ok just to remove these lines?

Member:

Yes. The time_tolerance parameter basically tells BenchmarkTools.judge the percent threshold at which something is classified as a "regression" or "improvement".
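A minimal sketch of how that threshold plays out in BenchmarkTools (the benchmark expression is illustrative, not from this PR):

```julia
using BenchmarkTools

b = @benchmarkable sum(rand(1000))
tune!(b)
baseline = run(b)
target   = run(b)

# judge compares two trial estimates; a timing change smaller than
# time_tolerance is classified as "invariant" rather than a
# regression or improvement. The default tolerance is much tighter
# than 0.45, so 0.45 only flags swings larger than 45%.
judge(minimum(target), minimum(baseline); time_tolerance = 0.45)
```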

@felixrehren felixrehren (Contributor, Author) commented Feb 3, 2017:

Done. As a consequence of the change from "operations" to "arithmetic", is the same time_tolerance = 0.45 now applied to these benchmarks again?

Member:

Oops, yeah, it does. The real solution, I guess, is to add a subgroup under "arithmetic" for the binary arithmetic benchmarks (*, +, -, etc.), which are what that time tolerance setting was meant for. I can do that in a future PR, though; let's just get this merged.

@felixrehren felixrehren (Contributor, Author) commented Feb 3, 2017:

Sweet. Travis is good on 0.4 and 0.5 but fails on 0.6, seemingly due to

ERROR: LoadError: LoadError: UndefVarError: readbytes not defined

Is it from this PR, or could it be a 0.6-nightly change in the last week?

@jrevels jrevels (Member) commented Feb 3, 2017:

It's not from this PR, the error is occurring in FileIO.jl.

BaseBenchmarks and its dependencies haven't been runnable on Julia master in a couple days now, there have just been too many changes that broke the stack (plus all the type inference bugs).

@jrevels jrevels merged commit bd09196 into JuliaCI:master Feb 3, 2017
Keno pushed a commit that referenced this pull request Feb 4, 2022