Reduce latency #210
Conversation
```julia
julia> using Cthulhu

julia> tstart = time(); descend(gcd, (Int, Int)); time() - tstart
```

and hit 'q' while you are waiting. Here's the timing:

- master: 6.57s
- this branch with just precompilation: 5.86s
- this branch with compile=min optimize=1: 4.68s
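The two techniques timed above can be combined in a package like this. A minimal sketch (the module and function names here are hypothetical, not this PR's actual code): the module lowers its compiler options and then executes a representative workload at precompile time so inference results are cached.

```julia
# Hypothetical demo module, not Cthulhu's real code.
module LatencyDemo

# Reduce per-method compilation cost; suitable for interactive,
# non-performance-critical code (requires Julia >= 1.5).
Base.Experimental.@compiler_options compile=min optimize=1

# A stand-in for the package's real functionality.
summarize(x) = string("value: ", x)

# Execute a representative workload during precompilation so the
# inference results are stored in the package image.
summarize(42)

end # module
```

The tradeoff is the one discussed in this thread: `compile=min optimize=1` sacrifices runtime optimization for lower latency, so it only makes sense in modules that are not performance-sensitive.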
Codecov Report

```
@@            Coverage Diff             @@
##           master     #210      +/-   ##
===========================================
- Coverage   83.47%   47.77%   -35.71%
===========================================
  Files           7        7
  Lines         938      875       -63
===========================================
- Hits          783      418      -365
- Misses        155      457      +302
```

Continue to review full report at Codecov.
Looks great.

It's a bit at odds with a couple of annotations we have. The only purpose of `Base.@aggressive_constant_prop` is to narrow the return type of `lookup` from `Tuple{Union{Nothing,CodeInfo,IRCode},...}` to `Tuple{Union{CodeInfo,IRCode}}` when not in test, by making sure inference propagates constant information about the `allow_no_codeinf` argument. It may come with an inference cost (we can't say anything solid without benchmarking, because inference performance isn't really monotonic w.r.t. inference accuracy), but since we're precompiling the inference cache in this PR, it shouldn't hurt latency?
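To illustrate the narrowing described above, here is a hypothetical, self-contained example (not Cthulhu's actual `lookup`) of how propagating a constant `Bool` argument lets inference drop the `Nothing` branch from a `Union` return type:

```julia
# Hypothetical function: may return `nothing` only when `allow_nothing`
# is true, so its general return type is Union{Nothing, Int}.
function maybe_code(x::Int, allow_nothing::Bool)
    allow_nothing && x < 0 && return nothing
    return x + 1
end

# When the call site passes a literal `false`, constant propagation can
# prove the `nothing` branch unreachable, letting inference narrow the
# return type from Union{Nothing, Int} toward Int.
inferred = Base.return_types(x -> maybe_code(x, false), (Int,))
```

Whether the narrowing actually happens for a given function depends on the compiler's constant-propagation heuristics, which is exactly why annotations like `Base.@aggressive_constant_prop` exist.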
That's correct, because this package "owns"

I don't know what's up with the coverage report; it appeared shortly after submission, before all the workers had finished, and hasn't updated. I don't think it's a permanent state of affairs, but feel free to close and re-open if you want to run it again.
Hmmm, very weird, the coverage is still very low.
Sure, we can fix the report later (if the breakage is real, which it may not be).
Julia bug. It turns out that the combination of precompilation and limiting the compiler options seems to break coverage analysis. AFAICT the reported coverage is essentially limited to the calls that were made at the time the package was built, i.e., the workload executed during precompilation.
Ah, it's JuliaLang/julia#37059 |
I haven't run this extensively, so I can't promise there is no hit to runtime performance, but from my analysis in #209 it looks like this module is not performance-sensitive. It's a bit at odds with a couple of annotations we have (Cthulhu.jl/src/Cthulhu.jl, line 179 at 60d8b54),
so some discussion may be merited.