Change all durations and timestamps to times.nim
#15
Windows' GetQueuedCompletionStatus(), which is used to poll completion events and to pause, supports only millisecond resolution, so how do you propose to use microsecond and nanosecond durations in such a case? I understand the problem with a monotonic clock, but currently the library uses the most performant primitives, which are about 5x faster than their monotonic alternatives.
The point is to allow the user to specify the time in an unambiguous way - the precision you get is whatever the platform offers you, at that point. By fixing asyncdispatch2, we can also offer the same usability advantage in downstream libraries.

When you use these functions, you necessarily have to take into account that these are estimates. For example, if you just blindly repeat a call with a 100ms delay, you'll get fewer than 10 calls per second regardless of the precision of the underlying clock - for example, because other events are running at the same time.

I'm curious about that 5x claim too, and how much of the total time of calling, for example, `poll()` it actually represents.
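A minimal sketch of the conversion being argued for, assuming a hypothetical `waitQueue` wrapper (not the library's actual API): the caller expresses the timeout as an unambiguous `times.Duration`, and the implementation clamps it to the millisecond resolution that GetQueuedCompletionStatus() offers.

```nim
import std/times

# Hypothetical wrapper, for illustration only.
proc waitQueue(timeout: Duration) =
  # Round up so that e.g. 1500 us waits at least 2 ms instead of truncating
  # to 1 ms: the caller states intent precisely, the platform clamps precision.
  let ms = (timeout.inMicroseconds + 999) div 1000
  echo "GetQueuedCompletionStatus would receive dwMilliseconds = ", ms

waitQueue(initDuration(microseconds = 1500))   # -> dwMilliseconds = 2
```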
If you check the source code of `poll()`: the first call is used to handle already expired callbacks and to calculate the timeout for the system queue waiter, while the second call is used to process callbacks that expired after the system queue waiter. The second call is also required because you don't know how much time was spent in the system queue waiter (it can be up to the timeout value, but if no timeout was set, it will wait indefinitely for FD events).
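A rough sketch of that double clock read (illustrative names, not the actual library code):

```nim
import std/[monotimes, sequtils]

var deadlines: seq[MonoTime]   # pending timer deadlines

proc runExpired(now: MonoTime) =
  # stand-in for "run every callback whose deadline has passed, then drop it"
  deadlines.keepItIf(it > now)

proc pollOnce() =
  runExpired(getMonoTime())   # first read: fire late timers, derive the OS timeout
  # ... wait on the system queue here; the wait may return early, or block
  # indefinitely when no timeout was set, so the time spent is unknown ...
  runExpired(getMonoTime())   # second read: fire timers that expired while waiting
```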
I understand that the function is called - I'm asking two things: how large the clock cost is in absolute terms, and what fraction of a whole `poll()` call it represents.

once you have the answers to those questions, you have to weigh that against the massive improvement in ergonomics.

on my crappy laptop, I see more variation due to thermal throttling than between the two calls...
@arnetheduck first of all, I'm not making asyncdispatch for Linux only, so if Linux has an advantage in some behavior, it doesn't matter for the library, because it must produce equal behavior on all OSes.
These are my old benchmarks, but the data can still be used to understand the timer's impact on performance.
The benefit is in the hardware, not the operating system, and the point is that you need to use the correct clock. On Windows, that's QueryPerformanceCounter.

However, the benchmarks I see in that post for timers look mostly irrelevant - they measure timing without optimizations enabled.

The second thing is that the incoming value in the API can be converted to whatever underlying clock you want - in fact, it should be. The key is to have a clear, powerful and unambiguous API, followed up by an efficient implementation that makes the best use of the hardware available. Using milliseconds is strange (not an SI unit for time) and suboptimal any way you look at it.
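For what it's worth, Nim's std/monotimes wraps the per-platform monotonic source (QueryPerformanceCounter on Windows, clock_gettime with CLOCK_MONOTONIC on Linux), so a sketch of this "convert at the boundary" idea could look as follows; `deadlineFrom` is an illustrative name, not an existing API:

```nim
import std/[monotimes, times]

# Illustrative helper: unambiguous unit in, platform-native monotonic value out.
proc deadlineFrom(timeout: Duration): MonoTime =
  getMonoTime() + timeout

let d = deadlineFrom(initDuration(milliseconds = 250))
echo d.ticks   # raw monotonic ticks, whatever the OS clock provides
```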
The actual benefit is not in the hardware; I can't use
I'm not saying we should use it.

I'm also claiming that we could use just a normal slow clock and probably wouldn't notice the difference, because the benchmark is kind of irrelevant - when there's no load, it doesn't really matter how many loops per second you can do, because you're sleeping most of the time, and when there is load, the timing part is usually dwarfed by actual work (i.e. a beacon node packet arriving, in our case)... thus I'd focus on correctness and the use of good data structures (I see for example that we're using a heap for the timers, which seems much saner than a seq or a linked list in general) before worrying too much about micro-benchmarks
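A hedged sketch of such a heap-backed timer queue (illustrative type and field names), using Nim's std/heapqueue: O(log n) insertion and O(1) peek at the nearest deadline.

```nim
import std/[heapqueue, monotimes, times]

type Timer = object
  finishAt: MonoTime

proc `<`(a, b: Timer): bool = a.finishAt < b.finishAt   # heap orders by deadline

var timers = initHeapQueue[Timer]()
timers.push(Timer(finishAt: getMonoTime() + initDuration(seconds = 1)))
timers.push(Timer(finishAt: getMonoTime() + initDuration(milliseconds = 100)))

# the nearest deadline sits at the root, regardless of insertion order
echo timers[0].finishAt - getMonoTime()   # ~100 ms
```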
Now I will show my measurements; for the benchmark I have used this code: https://gist.github.com/cheatfate/d184c9f11fd49e9d0c75d166bd2d2b05. The Linux benchmark (compiled with optimizations enabled) is:
The Windows benchmark is:
As you can see, on Windows using
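The gist's numbers aren't reproduced above; as a stand-in, here is a minimal benchmark of the same shape (an editorial sketch, not the linked code), timing repeated reads of the wall clock against the monotonic clock:

```nim
import std/[times, monotimes]

const N = 1_000_000
var sink: int64   # accumulator that keeps the loops from being optimized away

proc bench(name: string, body: proc ()) =
  let start = getMonoTime()
  body()
  echo name, ": ", (getMonoTime() - start).inMilliseconds, " ms"

proc realtimeLoop() =
  for _ in 1 .. N: sink += int64(epochTime())           # wall-clock read

proc monotonicLoop() =
  for _ in 1 .. N: sink = sink xor getMonoTime().ticks  # monotonic read

bench("epochTime (realtime)", realtimeLoop)
bench("getMonoTime (monotonic)", monotonicLoop)
```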
Now about the Linux results; this is a quote from the clock_gettime(2) man page:
As you can see, CLOCK_MONOTONIC is affected by the incremental adjustments performed by adjtime(3) and NTP.
So usage of CLOCK_MONOTONIC does not free you from NTP adjustments either.
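For concreteness, a self-contained sketch reading the two POSIX clocks under discussion (CLOCK_REALTIME can step, e.g. on settimeofday; CLOCK_MONOTONIC is slewed by adjtime/NTP but never steps backwards):

```nim
when defined(posix):
  import std/posix

  proc readNs(clock: ClockId): int64 =
    var ts: Timespec
    doAssert clock_gettime(clock, ts) == 0
    ts.tv_sec.int64 * 1_000_000_000 + ts.tv_nsec.int64

  echo "realtime:  ", readNs(CLOCK_REALTIME), " ns (can jump)"
  echo "monotonic: ", readNs(CLOCK_MONOTONIC), " ns (slewed, never steps back)"
```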
so, from adjtime:

the two work together to do (mostly) the right thing: specifically, they don't jump back in time, and don't cause large disruptions in timing beyond what would normally happen anyway with non-realtime OSes. for windows, you can try QueryPerformanceCounter.

all that said, this is not an argument about performance, primarily. you will be calling the clock function only when it doesn't matter, relatively speaking. it's possible to construct benchmarks that focus on the timer, but these will be far removed from reality.
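And the Windows counterpart mentioned above, declaring the kernel32 call by hand (std/monotimes wraps the same primitive):

```nim
when defined(windows):
  proc QueryPerformanceCounter(res: var int64): int32
    {.importc, stdcall, dynlib: "kernel32".}
  proc QueryPerformanceFrequency(res: var int64): int32
    {.importc, stdcall, dynlib: "kernel32".}

  var ticks, freq: int64
  discard QueryPerformanceFrequency(freq)   # ticks per second, fixed at boot
  discard QueryPerformanceCounter(ticks)    # monotonic tick count
  echo "monotonic seconds elapsed: ", ticks.float / freq.float
```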
Fixed in #24
There are several reasons to do this:

- use `times.Duration` (and friends) to integrate better with other libraries
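As a closing illustration of that ergonomic point (an assumed API shape, not necessarily what #24 implemented): accepting `times.Duration` gives callers unambiguous units, with conversion happening at the boundary.

```nim
import std/times

# Hypothetical signature for illustration; the unit is carried by the type.
proc sleepFor(d: Duration) =
  echo "sleeping for ", d.inMilliseconds, " ms"   # convert where the OS needs it

sleepFor(initDuration(seconds = 1, milliseconds = 500))   # 1500 ms, no guessing
```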