STORM-3791 update metric documentation #3409
Conversation
Overall looks good. Thanks for updating the docs
docs/Metrics.md (outdated diff):
- A disruptor queue has a set maximum number of entries. If the regular queue fills up an overflow queue takes over. The number of tuple batches stored in this overflow section are represented by the `overflow` metric. Storm also does some micro batching of tuples for performance/efficiency reasons so you may see the overflow with a very small number in it even if the queue is not full.
+ The queue has a set maximum number of entries. If the regular queue fills up an overflow queue takes over. The number of tuple batches stored in this overflow section are represented by the `overflow` metric. Storm also does some micro batching of tuples for performance/efficiency reasons so you may see the overflow with a very small number in it even if the queue is not full.
I am not very sure about this sentence: "Storm also does some micro batching of tuples for performance/efficiency reasons so you may see the overflow with a very small number in it even if the queue is not full." Is this still valid?
Not aware of anything. Removed it.
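For anyone following along who wants to actually see the `overflow` metric discussed above, here is a minimal sketch, assuming the built-in `LoggingMetricsConsumer` that ships with Storm: registering a metrics consumer in the topology config causes the per-executor built-in metrics, including the receive queue metrics, to be written to the workers' metrics log. The wrapper class name below is only illustrative.

```java
import org.apache.storm.Config;
import org.apache.storm.metric.LoggingMetricsConsumer;

// Illustrative wrapper, not part of Storm itself.
public class QueueMetricsConfig {
    public static Config build() {
        Config conf = new Config();
        // Register the built-in logging consumer with a parallelism hint of 1;
        // built-in metrics such as the receive queue's `overflow`, `population`
        // and `sojourn_time_ms` then show up in each worker's metrics log.
        conf.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);
        return conf;
    }
}
```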
docs/Metrics.md (outdated diff):
@@ -235,7 +235,7 @@ These queues report the following metrics:
  `arrival_rate_secs` is an estimation of the number of tuples that are inserted into the queue in one second, although it is actually the dequeue rate.
  The `sojourn_time_ms` is calculated from the arrival rate and is an estimate of how many milliseconds each tuple sits in the queue before it is processed.

- The queue has a set maximum number of entries. If the regular queue fills up an overflow queue takes over. The number of tuple batches stored in this overflow section are represented by the `overflow` metric. Storm also does some micro batching of tuples for performance/efficiency reasons so you may see the overflow with a very small number in it even if the queue is not full.
+ The queue has a set maximum number of entries. If the regular queue fills up an overflow queue takes over. The number of tuple batches stored in this overflow section are represented by the `overflow` metric.
Remove `batches` in `tuple batches` too? If I understand correctly, tuples are inserted into the queue one by one.
Maybe add the following: "Note that an overflow queue is only used for executors to receive tuples from remote workers. It doesn't apply to intra-worker tuple transfer."
updated
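For context on the `arrival_rate_secs` and `sojourn_time_ms` lines quoted in the hunk above, the estimate the docs describe is essentially Little's law: time in queue is roughly the queued population divided by the arrival rate. The helper below is purely illustrative; the method and parameter names are made up for this sketch and are not Storm internals.

```java
public final class SojournEstimate {
    /**
     * Rough back-of-the-envelope version of the estimate described in the docs:
     * sojourn time ~= queued tuples / arrival rate, converted to milliseconds.
     */
    static double estimateSojournTimeMs(double population, double arrivalRateSecs) {
        if (arrivalRateSecs <= 0.0) {
            return 0.0; // nothing arriving; the estimate degenerates
        }
        return (population / arrivalRateSecs) * 1000.0;
    }
}
```

For example, a queue holding 500 tuples with an arrival rate of 1000 tuples per second gives an estimated sojourn time of roughly 500 ms.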