Support multiple memcache daemons #3
The original implementation supported multiple memcache servers. For my usage, running multiple memcached daemons per machine scales better than one, and I was using that feature.
Late to the commit party, sorry....

Comments
This goes against the best practices in Prometheus, as it adds several additional failure domains (a failure to retrieve stats from one node will result in no metrics for any server, a dependency on the network, etc.). It actually scales worse than having one exporter per node. The Prometheus server is engineered to scrape many targets; the exporters usually are not. With Kubernetes (pods!), Marathon, or any kind of configuration management, it is very easy to always scale the two processes together. On the bright side, you can now get …
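As a rough sketch of the sidecar pattern mentioned above, the pod spec below pairs one memcached daemon with its own memcached_exporter container so the two processes always scale together. The image tags, the metrics port 9150, and the `--memcached.address` flag are assumptions about the current exporter, not details taken from this issue.

```yaml
# Hypothetical sidecar layout: one memcached daemon plus one
# memcached_exporter per pod, scraped over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: memcached
  labels:
    app: memcached
spec:
  containers:
    - name: memcached
      image: memcached:1.6                   # assumed image tag
      ports:
        - containerPort: 11211
    - name: exporter
      image: prom/memcached-exporter:latest  # assumed image name
      args:
        - "--memcached.address=localhost:11211"  # flag name assumed; may differ in older versions
      ports:
        - containerPort: 9150                # assumed default metrics port
```

Because the exporter lives next to its daemon, a scrape failure for this pod only affects this one instance.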
All the memcached daemons are on the same machine. We have a tier of machines that run three memcached daemons each. I do understand the point; I just got everything working with the old implementation, unaware that this change was coming. I have Issue #2 to deal with too.
Well, you can of course continue to use the old version*. I expected some pushback around that removal, which was part of the reason I kind of forked it into a memcached_exporter. Scraping 3 instances on the same node is a bit of an edge case; at least there is no network involved. Still, the point remains that a failure to connect to any of the 3 memcached instances will result in a metrics failure for all of them.

* Just be aware that the metrics are both incomplete (no metrics for …
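To make the failure-domain point concrete, here is a sketch of a Prometheus scrape configuration with one exporter per memcached daemon on the same host; the hostname mc-01 and the exporter ports 9151-9153 are made up for illustration. With this layout, losing one daemon only turns that single target's `up` series to 0 instead of wiping out metrics for all three.

```yaml
# Hypothetical layout: three memcached_exporter processes on host mc-01,
# each pointed at one local daemon and listening on its own port.
scrape_configs:
  - job_name: memcached
    static_configs:
      - targets:
          - mc-01:9151   # exporter for the daemon on 11211
          - mc-01:9152   # exporter for the daemon on 11212
          - mc-01:9153   # exporter for the daemon on 11213
```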
https://github.com/DavidLeeUX/memcached_exporter |
Since memcached does not support multiple databases, it would still be really useful to have this implemented.