[spike] pluggable datastores for CDP #5526
Comments
Wanted to post an idea for consideration: a pluggable datastore as a separate O/S process, leveraging inter-process RPC. A new Pluggable Datastore service could be implemented in any programming language that can be compiled to an O/S binary, as long as the implementation follows the contract for pluggability. [edit, a couple of days later]
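For illustration, a minimal sketch of that idea using Go's stdlib net/rpc; the service name, method, and argument shapes are hypothetical, not an agreed contract (a real design would more likely pick gRPC or similar so that non-Go implementations can serve the same wire contract):

```go
// A hypothetical standalone datastore process serving one RPC method.
package main

import (
	"log"
	"net"
	"net/rpc"
)

// PutFileArgs is a hypothetical request shape for writing an object.
type PutFileArgs struct {
	Path string
	Data []byte
}

// Datastore is the RPC-exposed service; any language that can serve the
// same wire contract could stand in for this process.
type Datastore struct{}

// PutFile follows net/rpc's required method signature:
// func (t *T) Method(args A, reply *R) error.
func (d *Datastore) PutFile(args PutFileArgs, ok *bool) error {
	// A real implementation would write to S3/R2/Mongo/etc. here.
	log.Printf("put %s (%d bytes)", args.Path, len(args.Data))
	*ok = true
	return nil
}

func main() {
	if err := rpc.Register(new(Datastore)); err != nil {
		log.Fatal(err)
	}
	l, err := net.Listen("tcp", "127.0.0.1:7777") // address is illustrative
	if err != nil {
		log.Fatal(err)
	}
	rpc.Accept(l) // serve connections from the parent/client process
}
```

The client side (e.g. Galexie) would dial the process with rpc.Dial("tcp", ...) and invoke client.Call("Datastore.PutFile", ...).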
We should speak to the stellar-core maintainers (@stellar/core-committers) about their experiences of using separate processes for data integrations. stellar-core runs commands when interacting with archives, typically configured as get/put command templates in its config file. Another thing we should keep in the back of our minds is that running separate processes isn't trivial, and in containers it has its own challenges, e.g. child process management, child process reaping, zombie processes, etc. This isn't a blocker; I just feel the urge to mention it early due to past scars.
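To make the reaping concern concrete, a hedged Go sketch of spawning an archive-style command the way a parent process would need to; the command itself is a placeholder:

```go
// Minimal child-process handling sketch, in the spirit of stellar-core
// shelling out to archive commands.
package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func runArchiveCommand(ctx context.Context) error {
	// Bound the child's lifetime so a hung process can't wedge the parent.
	ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()

	// Placeholder command; a real one would come from configuration.
	cmd := exec.CommandContext(ctx, "cp", "/tmp/src", "/tmp/dst")

	// Run = Start + Wait; the Wait is what reaps the child and prevents a
	// zombie. If this process runs as PID 1 in a container, orphaned
	// grandchildren still need an init such as tini to reap them.
	return cmd.Run()
}

func main() {
	if err := runArchiveCommand(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```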
An idea that was shared a couple of years ago for the RPC was to make it contain a pluggable functionality interface as well. Back then, one of the options we discussed was using Wasm, and I think that might be a viable option here too, albeit with similar performance concerns as above that would need evaluating. Go as of 1.21 has support for WASI, and as of 1.24 supports reactor WASI apps that make it easier to build pluggable interfaces across a Wasm API boundary.
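Assuming the plugin side is also written in Go, a minimal sketch of what a Go 1.24 WASI reactor export could look like; the exported name and signature are illustrative only:

```go
//go:build wasip1

// A sketch of a WASI reactor plugin in Go 1.24+, built with:
//   GOOS=wasip1 GOARCH=wasm go build -buildmode=c-shared -o datastore.wasm
// Exported functions become callable across the Wasm boundary by a host.
package main

//go:wasmexport datastore_exists
func datastoreExists(keyPtr, keyLen uint32) uint32 {
	// A real plugin would read the key out of linear memory at
	// keyPtr/keyLen, consult its backing store, and return 0 or 1.
	_, _ = keyPtr, keyLen
	return 0
}

// main is required by buildmode=c-shared but is never run by the host;
// a reactor module exports _initialize rather than _start.
func main() {}
```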
Yes, the design assumed the O/S process for the remote/pluggable datastore takes on some type of continuous, longer-running lifetime: aligned to the client app's process if the client spawned it as a child process, or always up if it's a standalone micro-service.
Cool suggestion. Just to confirm this approach at a high level: it would enable an existing compiled Go process (such as Galexie) to load, at runtime, a WASM file compiled to the WASI standard format from any programming language with WASM/WASI compilation tooling, and then verify that it supports a supplied app interface spec?
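A host-side sketch of that flow, assuming the wazero runtime (github.com/tetratelabs/wazero) and the illustrative datastore_exists export from the reactor sketch above:

```go
// Load a plugin .wasm at runtime and verify it exposes the functions the
// interface spec requires. File name and required exports are illustrative.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

func main() {
	ctx := context.Background()

	wasmBytes, err := os.ReadFile("datastore.wasm")
	if err != nil {
		log.Fatal(err)
	}

	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)
	wasi_snapshot_preview1.MustInstantiate(ctx, r) // satisfy WASI imports

	mod, err := r.Instantiate(ctx, wasmBytes)
	if err != nil {
		log.Fatal(err)
	}

	// "Supplied app interface spec": here, just a list of required exports.
	for _, name := range []string{"datastore_exists"} {
		if mod.ExportedFunction(name) == nil {
			log.Fatalf("plugin does not implement %q", name)
		}
	}
	fmt.Println("plugin satisfies the interface spec")
}
```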
What problem does your feature solve?
Ultimately, we want to make it as easy as possible for external contributors to create data lakes using different technologies (S3, R2, Mongo, MQTT, etc.). With how the code is currently structured, if someone were to create a new data storage option, they would have to contribute it back to this repository (the Go monorepo). That makes the maintainers of this repo (SDF) technically responsible for the quality and future maintenance of all data stores. We don't want to slow things down by putting ourselves in the middle here, and we don't want to be the arbiters of what people can build and how they build it.
What would you like to see?
A spike or design proposal that outlines how we could restructure our code or repositories so that Galexie can accept pluggable datastores. This could mean that the interface for creating a datastore is public and maintained in a separate SDF-owned repo, while the implementations live in different, disparate repos; a minimal sketch of such an interface follows below. Also keep in mind that we already want to pull the consumption components of CDP (#5525) out into their own repo.
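A minimal sketch of such a public interface, with illustrative method names rather than the repo's actual datastore contract:

```go
// Package datastore would live in its own small SDF-owned repo; backend
// implementations (S3, R2, Mongo, MQTT, ...) depend on it from their own
// repos. Names and signatures here are illustrative only.
package datastore

import (
	"context"
	"io"
)

// DataStore is the contract a pluggable backend would implement.
type DataStore interface {
	GetFile(ctx context.Context, path string) (io.ReadCloser, error)
	PutFile(ctx context.Context, path string, in io.Reader) error
	Exists(ctx context.Context, path string) (bool, error)
	Close() error
}
```

Galexie would then depend only on this interface module, and each implementation repo would import it and provide a concrete backend.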
The "dream" dev journey could look something like: