@expect() hint to optimizer #489
Comments
I do think that something to guide branch prediction can be pretty neat, but might be clearer with a builtin like @expect(expression, value) or @likely(condition). This would be less surprising to someone who's familiar with llvm.expect and gcc __builtin_expect. A builtin might also be better suited as it communicates a hint to the compiler as opposed to actual program logic.
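For illustration, a purely hypothetical sketch of how those two forms might read (neither builtin exists as written here; x and err_code are placeholder names):
if (@likely(x > 0)) {
    // hot path, hinted as the branch taken most of the time
} else {
    // cold path
}
// llvm.expect-style form: returns err_code unchanged and hints that it is usually 0.
if (@expect(err_code, 0) != 0) {
    // unlikely error-handling path
}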
|
I use … but this wastes one precious line. |
Sure, it wastes a line, but it makes it abundantly clear that the first branch is the likely one. It is important to note that if you choose the wrong branch, performance will be worse, so it is actually good for the hint to be easy to find. More information on performance: http://blog.man7.org/2012/10/how-much-do-builtinexpect-likely-and.html Something more explicit and consistent with the language, perhaps:
|
It's going to be |
Compare
…
with
#define likely(x) (__builtin_expect(!!(x), 1))
#define unlikely(x) (__builtin_expect(!!(x), 0))
if unlikely(x < 0)
    ...
The former is too chatty. If you are going to leave two sets of parentheses, please consider adding |
How would I expect no error from something?
if (errorable()) |payload| {
    // this should be likely
} else |err| {
    // this should be unlikely
}
The proposed … Would there be any way to expect or not expect certain branches of a …? Glancing at the LLVM docs, it looks like we're pretty limited in what we can do. Looks like …
I think it's ok to be a little verbose with this feature, since it's a bit advanced and not recommended unless you understand the drawbacks. My concern is lack of generality. |
Zig would always expect no error. That's #84 |
Would this make sense on other conditional Zig operators? |
Yes, and for that reason likely/unlikely is probably better. |
@0joshuaolson1: it probably doesn't make sense to expand the feature. It has to be used often to have an impact. The Nim language has support for a fine-tuned switch, using |
Why does it have to be used often to have an impact? I'd say you'll only have to use it in hot loops and stuff like that. It really doesn't matter most of the time. |
@BarabasGitHub: the hint allows the compiler to reorder instructions so that the likely flow proceeds without jumping (and thus without flushing the pipeline). This can save a few cycles. To have a measurable impact, it should be applied a lot. I value it more as documentation. VC++ does not support an implementation of likely/unlikely like GCC does, but I use it anyway, as an empty macro. |
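A rough Zig analogue of that "empty macro" trick, as a documentation-only sketch (likely and queue_len are made-up names; the wrapper adds no optimizer hint, it only keeps the intent visible in the source):
inline fn likely(cond: bool) bool {
    return cond;
}
// usage: reads like a hint, but compiles to a plain condition
if (likely(queue_len > 0)) {
    // hot path
}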
I know how it works. I also know that most of the code you write isn't in the hot path and thus has a negligible impact on performance. Plus, you have the branch predictor, which mostly negates these kinds of optimizations. Not saying you shouldn't use it, because it can definitely help. However, I don't think it should be used all over the place just because you think it improves performance.
|
In theory, with a very advanced benchmarking tool (it could be based on #1010#issuecomment-389227431), the validity of these hints could be checked and clear violations reported. This may help the programmer discover wrong assumptions about runtime behaviour. If the major reason for if+/if- is self-documentation, then it makes sense to use it often. |
That sounds like #237 |
Both …
Returns …
These are all solved with #5177, which is also accepted. |
I want to add that, apart from the benefits already mentioned (readability and static hints for branch prediction), it also reduces the pressure on the L1 instruction cache by typically moving cold instructions to the end of functions. I believe this to be the greatest benefit of using a likely/unlikely construct. For the longest time I was not a believer, but I've seen the performance benefits of using this in a real product that was optimized a lot over its lifetime. Eventually we resorted to peppering the hot code paths with likely and unlikely. Ugly, but it can be effective. |
switch (@expect(123, x, 1)) {
    123 => {
        // likely
    },
    124 => {
        // unlikely
    },
    else => {
        // also unlikely
    },
}
Here it's still impossible to order branches by likeliness. This could be handled with an if/else chain, I guess. |
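As a rough sketch of that workaround (hypothetical only: the two-argument @expect used here is the llvm.expect-style builtin floated earlier in this thread, not an existing Zig builtin), an if/else chain lets the branches be listed in decreasing order of likeliness:
if (@expect(x, 123) == 123) {
    // most likely case first
} else if (x == 124) {
    // next most likely
} else {
    // least likely
}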
Also, regarding the natural language point made earlier in this issue: perhaps |
I realize that this accepted definition of … As a programmer, I care less about the expected value and more about the expected branch: I want to go to the part of the source code that contains the code path that I think is important and annotate that. Also, raw probabilities are problematic to keep consistent while editing code; what I really want are weights for each branch, with a default of 1 for unannotated branches. Note that this doesn't prevent using weights that add up to some number like 100 and that translate directly to probabilities.
For self-hosted backends, assuming no higher-level optimizations, the best you can do for an … One optimization that can be done for switches on x86_64 is to make the most likely prong a fake "fallthrough" from the indirect branch (which the x86_64 branch predictor assumes is the likely target). This is also compatible with the accepted definition of …
For automated tooling, a profiler is going to have branch counts for each branch, so it would be trivial to just edit the source code with an annotation for the branch count of each branch, which is just a weight as described above.
For a syntax proposal, in the interest of avoiding extra grammar complexity, I think there can just be a @branchWeight builtin:
switch (x) {
    1 => { // likely
        @branchWeight(100);
    },
    2 => {}, // unlikely, default weight of 1
    // it seems like this branch should have a lower weight than the default
    // maybe a weight of 0, or float weights < 1, could mean "cold" or "never optimize for this branch being taken"
    // in the future, branches that always trigger safety, panic, or error could be detected and treated like that
    else => unreachable,
}
while (true) {
    if (normal_term_cond) {
        @branchWeight(10); // slightly unlikely termination condition
        break;
    } else {
        @branchWeight(1_000); // very likely to keep looping
    }
    if (special_case) {
        @branchWeight(1); // very unlikely termination condition
        break;
    } else {
        @branchWeight(1_000); // very likely to keep looping
    }
}
|
Adding to jacobly's points, I also wonder whether |
Unaccepted to consider #20642 as an alternative. |
Rejected in favor of #21148. |
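#21148 describes a @branchHint builtin; a minimal sketch of that style of hint, assuming the builtin takes a hint such as .likely, .unlikely, or .cold and must be the first statement in its block:
fn divide(a: u32, b: u32) !u32 {
    if (b == 0) {
        @branchHint(.unlikely); // the error path is expected to be rare
        return error.DivisionByZero;
    } else {
        @branchHint(.likely); // the happy path is the common case
        return a / b;
    }
}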
This is a proposal for a small feature, local and isolated. It could improve code readability.
The Linux kernel uses the macros likely and unlikely: these macros may improve performance a little bit, and they serve as handy documentation.
The Nim language supports this functionality too, through its standard library (https://nim-lang.org/docs/system.html#likely.t,bool).
My proposal:
Add if+ and if- into the language. if+ would signal that the path is rather likely; if- would suggest the flow will not go this way. (See the sketch at the end of this proposal.)
What is this good for:
How it could be implemented: via llvm.expect. This is how clang implements __builtin_expect, which is called by the likely/unlikely macros. (GCC also provides __builtin_expect; MSVC has nothing similar.)
Performance effects of __builtin_expect are hotly disputed on the internet. Many people claim programmers are invariably bad at such predictions and that profile-guided optimization will do much, much better. However, they never support their claim with benchmarks.
Even if performance is unaffected, the documentation value remains. When one is stepping through the code in a debugger and the flow goes against the hint, one may become more cautious and catch a bug.
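A minimal sketch of the proposed if+/if- syntax (hypothetical, not valid Zig; the condition names are placeholders):
if+ (fast_path_ready) {
    // marked as the likely branch; could lower to llvm.expect with expected value true
} else {
    // implicitly the unlikely branch
}
if- (allocation_failed) {
    // marked as the unlikely branch; could lower to llvm.expect with expected value false
}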