Add support for dynamically updating instance config #1045
Just to summarize our conversation elsewhere: right now the closest thing that matches this is … Like the other examples you've noted, Go allows users to close a …
Would you mind elaborating on this? Do you mean that you'd start the proxy with a pointer to the configuration (a GCS object, a Secret Manager secret, etc.) and the proxy would set itself up entirely from that configuration? Something like:
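A minimal Go sketch of that idea, assuming a hypothetical secret (`projects/my-project/secrets/proxy-instances`) that stores one instance connection name per line; the Secret Manager client (`cloud.google.com/go/secretmanager/apiv1`) is the real library, everything else here is a placeholder:

```go
package main

import (
	"context"
	"log"
	"strings"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	"cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
)

func main() {
	ctx := context.Background()
	client, err := secretmanager.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Hypothetical secret holding one instance connection name per line,
	// e.g. "my-project:us-central1:my-db".
	resp, err := client.AccessSecretVersion(ctx, &secretmanagerpb.AccessSecretVersionRequest{
		Name: "projects/my-project/secrets/proxy-instances/versions/latest",
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, instance := range strings.Split(strings.TrimSpace(string(resp.Payload.Data)), "\n") {
		// In a real proxy, this is where a local listener/dialer would be
		// created for each configured instance.
		log.Printf("would start a local listener for %s", instance)
	}
}
```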
How would this be different from the example above?
Adding some additional information here from @davidcollom:
Currently, we're thinking about making it possible to dynamically configure instance connection names using:
The first two are the priority at the moment.
Hi, is there any update on this feature?
Not yet. It remains pretty high up on the list and will likely get some attention later this year.
In the meantime, we have an example that shows how to connect the proxy to Secret Manager, such that it restarts when a new instance connection name shows up. See https://github.com/GoogleCloudPlatform/cloud-sql-proxy/tree/main/examples/disaster-recovery.
This might be a good solution for people who want to automatically cut over to newly promoted read replicas.
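As a rough sketch of that wrapper pattern (not the linked example itself, which uses a shell script): a small Go supervisor that polls a hypothetical Secret Manager secret holding the active instance connection name and restarts the proxy when it changes, assuming the v2 `cloud-sql-proxy` CLI that takes the connection name as an argument:

```go
package main

import (
	"context"
	"log"
	"os"
	"os/exec"
	"syscall"
	"time"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	"cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
)

// readSecret returns the current secret value, assumed to be a single
// instance connection name such as "my-project:region:instance".
func readSecret(ctx context.Context, c *secretmanager.Client, name string) (string, error) {
	resp, err := c.AccessSecretVersion(ctx, &secretmanagerpb.AccessSecretVersionRequest{Name: name})
	if err != nil {
		return "", err
	}
	return string(resp.Payload.Data), nil
}

func startProxy(instance string) *exec.Cmd {
	cmd := exec.Command("cloud-sql-proxy", instance)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatalf("starting proxy: %v", err)
	}
	return cmd
}

func main() {
	// Hypothetical secret name and polling interval.
	const secret = "projects/my-project/secrets/active-instance/versions/latest"
	ctx := context.Background()
	client, err := secretmanager.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	current, err := readSecret(ctx, client, secret)
	if err != nil {
		log.Fatal(err)
	}
	proxy := startProxy(current)

	for range time.Tick(30 * time.Second) {
		next, err := readSecret(ctx, client, secret)
		if err != nil || next == current {
			continue
		}
		log.Printf("instance changed %s -> %s; restarting proxy", current, next)
		proxy.Process.Signal(syscall.SIGTERM) // let in-flight connections drain
		proxy.Wait()
		current = next
		proxy = startProxy(current)
	}
}
```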
Hello. While many of our applications are container based with fixed topologies, we have several use cases that would greatly benefit from dynamic updates. We spoke with @jonpjenkins from the Google team and agreed to file a feature request.
One such use case is an auditing system that needs to scan all of our databases for PII. This system will need to connect to many systems, and the list will need to be updated as new databases are added or switched. The scanners run in GKE, but for one bewildering reason or another, they are persistent and do not run on a 1:1 scanner-to-job model.
The most basic and effective model that we see is a signal-driven hot restart. Many popular servers, including TCP proxies, support this: on receiving a signal, they go through the following steps (see the Go sketch after this list):
(a) running a lint check on the new configuration
(b) terminating the listener; existing connections persist with the old configuration and state, but the old process no longer accepts new connections
(c) starting a new listener and worker pool that will receive new connections with the new configuration.
Apache, Sendmail, Squid, Envoy, and other daemons use this basic reload mechanism for non-disruptive updates.
Most of these examples provide easy-to-replicate hot restarts. I can point you at the Envoy hot restarter code if you're looking for an example of that. Their documentation is horrendous, but we're running it as a MySQL proxy on-prem and we non-disruptively update the entire configuration this way out of the box!
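To make the flow concrete, here is a minimal single-process Go sketch of steps (a)–(c), assuming a hypothetical `proxy.conf` containing a single "host:port" line: on SIGHUP it lints the new config, closes the old listener, and starts a new accept loop, while connections accepted before the close keep running with the old config:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"os"
	"os/signal"
	"strings"
	"syscall"
)

// loadConfig reads a single "host:port" line from the (hypothetical) config
// file and rejects it if it does not parse -- step (a), the lint check.
func loadConfig(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	addr := strings.TrimSpace(string(b))
	if _, _, err := net.SplitHostPort(addr); err != nil {
		return "", fmt.Errorf("invalid listen address %q: %w", addr, err)
	}
	return addr, nil
}

// serve accepts connections until the listener is closed; connections
// accepted before the close keep running with the old config.
func serve(ln net.Listener, cfg string) {
	for {
		conn, err := ln.Accept()
		if err != nil {
			return // listener closed by the reload path
		}
		go func(c net.Conn) {
			defer c.Close()
			fmt.Fprintf(c, "served with config %q\n", cfg) // stand-in for proxying
		}(conn)
	}
}

func main() {
	const configPath = "proxy.conf" // hypothetical
	cfg, err := loadConfig(configPath)
	if err != nil {
		log.Fatal(err)
	}
	ln, err := net.Listen("tcp", cfg)
	if err != nil {
		log.Fatal(err)
	}
	go serve(ln, cfg)

	hup := make(chan os.Signal, 1)
	signal.Notify(hup, syscall.SIGHUP)
	for range hup { // e.g. `kill -HUP <pid>` after editing proxy.conf
		next, err := loadConfig(configPath)
		if err != nil {
			log.Printf("reload aborted, keeping old config: %v", err) // failed (a)
			continue
		}
		ln.Close() // (b) stop accepting; existing connections persist
		if ln, err = net.Listen("tcp", next); err != nil {
			log.Fatalf("binding new listener: %v", err)
		}
		cfg = next
		go serve(ln, cfg) // (c) new accept loop with the new configuration
		log.Printf("reloaded, now listening on %s", cfg)
	}
}
```

Note that this in-process version briefly stops accepting between Close and Listen; the Envoy-style multi-process hot restart avoids that gap by handing the listening sockets to the new process (or using SO_REUSEPORT).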
Oh, one more thing:
For containers, it would be great if the configuration checker could kick off the hot restart. Same for GCE, but if you could also allow a user to update the local config and manually kick off a hot restart, that would be extra great.
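A sketch of that configuration checker, complementing the reload loop above: it polls the (hypothetical) config file's mtime and sends this process a SIGHUP when it changes, so a manual `kill -HUP <pid>` keeps working for the GCE case:

```go
package main

import (
	"log"
	"os"
	"syscall"
	"time"
)

// watchConfig polls the config file's modification time and sends SIGHUP to
// this process whenever it changes, triggering the reload loop sketched above.
func watchConfig(path string, interval time.Duration) {
	self, _ := os.FindProcess(os.Getpid()) // never fails on Unix
	var last time.Time
	for range time.Tick(interval) {
		fi, err := os.Stat(path)
		if err != nil {
			log.Printf("stat %s: %v", path, err)
			continue
		}
		if mt := fi.ModTime(); mt.After(last) {
			if !last.IsZero() {
				self.Signal(syscall.SIGHUP) // a manual `kill -HUP <pid>` works too
			}
			last = mt
		}
	}
}

func main() {
	watchConfig("proxy.conf", 10*time.Second) // hypothetical path and interval
}
```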