Support for server-sent events #234
Reference from a Node.js implementation I did a while back: Refs

Headers:

```js
'Content-Type': 'text/event-stream',
'X-Accel-Buffering': 'no',
'Cache-Control': 'no-cache'
```

Heartbeat:

```js
var interval = setInterval(function () {
  if (res.finished) return // prevent writes after stream has closed
  res.write(`id:${id++}\ndata:{ "type": "heartbeat" }\n\n`)
}, 4000)
```

Data, in particular making sure to increment the `id`:

```js
var msg = JSON.stringify({ type: 'scripts' })
res.write(`id:${id++}\ndata:${msg}\n\n`)
```

Screenshot: what it looks like to run this.
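For comparison, the same frame format can be sketched in Rust. This is a minimal illustration of the `id:`/`data:`/blank-line wire format used in the Node snippet; `encode_frame` is a hypothetical helper, not part of any Tide API:

```rust
/// Encode a single SSE frame: an `id:` field, a `data:` field, and a
/// blank line terminating the event, per the EventSource wire format.
fn encode_frame(id: u64, data: &str) -> String {
    format!("id:{}\ndata:{}\n\n", id, data)
}

fn main() {
    // Mirrors the heartbeat frame from the Node snippet above.
    let frame = encode_frame(0, "{ \"type\": \"heartbeat\" }");
    assert_eq!(frame, "id:0\ndata:{ \"type\": \"heartbeat\" }\n\n");
    println!("{}", frame);
}
```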
I believe people can just swap … In some cases, it may be better to return JSON over plain text; I wonder if … By the way, I believe that this should be marked as …
@pickfire the output stream should probably take an … In general we should probably move the …
API

I've sketched out an API we could use for Tide:

```rust
use async_std::task;

fn main() -> Result<(), std::io::Error> {
    task::block_on(async {
        let mut app = tide::new();
        app.at("/sse").get(tide::sse()); // shorthand function, we might also want to provide a builder
        app.at("/", async |req| {
            let mut sse = req.sse();
            println!("next message id: {}", sse.id()); // u64
            println!("sse status is: {}", sse.status()); // enum { Connecting, Open, Closed }
            println!("sse is open?: {}", sse.is_open()); // bool
            sse.send("message", b"hello world").await; // send(&self, event: &str, data: &[u8])
            "hello world" // return some response
        });
        app.listen("127.0.0.1:8080").await?;
        Ok(())
    })
}
```

A single connection would be established per peer (when they make an … request). The connection would be stored inside the … We would probably also want to provide extra metadata on the sse object on …

Security

We can't base the host purely on the incoming TCP request, since we could be behind a proxy. Instead we should be aware of …

References

This model is inspired by Phoenix's channels, which have a similar model. The main difference being that we don't take care of the multiplexing aspects, but instead leave it up to the application author to decide how they want to use the connection. Phoenix provides a matching client library to handle the "channel", and I think we probably shouldn't do that.
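On the security point about proxies: the client address can't be taken from the TCP peer alone. A sketch of reading it from the de facto `X-Forwarded-For` header instead, assuming the left-most entry is the original client and the proxy chain is trusted (`client_addr` is a hypothetical helper, not a proposed API):

```rust
/// Return the left-most non-empty address in an `X-Forwarded-For`
/// value, which by convention is the original client. Only trust
/// this when the proxies that appended to it are themselves trusted.
fn client_addr(forwarded_for: &str) -> Option<&str> {
    forwarded_for
        .split(',')
        .map(str::trim)
        .find(|entry| !entry.is_empty())
}

fn main() {
    // Two proxies appended their upstream addresses.
    assert_eq!(client_addr("203.0.113.4, 198.51.100.2"), Some("203.0.113.4"));
    // An empty header yields no address.
    assert_eq!(client_addr(""), None);
}
```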
cc/ @goto-bus-stop I'm curious what you think of this API.
```rust
app.at("/sse").get(tide::sse()); // shorthand function, we might also want to provide a builder
```

Looking at that, it seems like only one server-sent events endpoint per server is expected. I believe SSE is usually one per server, but could there be multiple endpoints for SSE? How about:

```rust
app.at("/", async |req| {
    // assuming connection does not close in the block (not sure what happens if it does)
    // enum { Connecting, Open(SSEConnection), Closed }
    if let Open(mut sse) = req.sse() {
        println!("next message id: {}", sse.id()); // u64
        sse.send("message", b"hello world").await; // send(&self, event: &str, data: &[u8])
    }
    "hello world" // return some response
});
```
@pickfire the … I like your idea of wrapping the SSE object inside an option; that might perhaps be the best way to go about it.
Should `send` return a result? There is no guarantee that the SSE connection won't be dropped while inside the block.

```rust
app.at("/", async |req| {
    // assuming connection does not close in the block (not sure what happens if it does)
    // enum { Connecting, Open(SSEConnection), Closed }
    if let Open(mut sse) = req.sse() {
        println!("next message id: {}", sse.id()); // u64
        // <- sse connection dropped here
        sse.send("message", b"hello world").await?; // result?
    }
    "hello world" // return some response
});
```

I wonder if we should even have that block in the first place; there is no telling when the connection will be dropped. By the way, Happy Chinese New Year! ^^
@pickfire Happy new year to you too! I'm thinking we should also add a …

I've been thinking about what you said earlier by the way, and actually I think it might be susceptible to race conditions. What happens if we need to send a message over SSE, but the connection might not have been established yet? In fact I think we should support several different scenarios, and probably return an …

Examples

```rust
// Wait until we can send the message.
app.at("/", async |req| {
    req.sse().send("message", b"hello world").await;
    Response::new(200)
});

// Wait until we can send the message, but time out after 5 secs.
app.at("/", async |req| {
    req.sse()
        .send("message", b"hello world")
        .timeout(Duration::from_secs(5))
        .await;
    Response::new(200)
});

// Error if we can't send the message immediately. This includes
// a full queue.
app.at("/", async |req| {
    req.sse().try_send("message", b"hello world")?.await;
    Response::new(200)
});

// Only send a message if we're already connected.
app.at("/", async |req| {
    let sse = req.sse();
    if sse.is_connected() {
        req.sse().send("message", b"hello world").await;
    }
    Response::new(200)
});
```
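The "full queue" case behind `try_send` can be illustrated with a bounded channel from the Rust standard library: a non-blocking send on a full queue errors immediately instead of waiting. This only models the queueing semantics, not the actual proposed Tide API:

```rust
use std::sync::mpsc::sync_channel;

/// Attempt two non-blocking sends on a bounded queue of capacity 1;
/// return whether each one succeeded.
fn two_sends() -> (bool, bool) {
    // The bounded queue stands in for the SSE connection's send queue.
    let (tx, _rx) = sync_channel::<&'static str>(1);
    let first = tx.try_send("hello world").is_ok(); // fits in the queue
    let second = tx.try_send("second").is_ok();     // queue is full: fails
    (first, second)
}

fn main() {
    // The second try_send errors immediately rather than blocking,
    // matching the "error if we can't send immediately" semantics.
    assert_eq!(two_sends(), (true, false));
}
```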
The other examples seem fair enough, but look at the fourth example:

```rust
// Only send a message if we're already connected.
app.at("/", async |req| {
    let sse = req.sse();
    if sse.is_connected() {
        req.sse().send("message", b"hello world").await;
    }
    Response::new(200)
});
```

It may be possible that …
@pickfire Accurate; it could be a temporary hiccup and the client could reconnect. Or perhaps it's an actual timeout and we should time out the whole request. We can't distinguish between these cases. Either way, waiting seems like the right behavior.
Or is it possible that we could return a response first and then wait to send the SSE?
@pickfire That's a possibility, though I see there might be difficulty in finding the right peer -- so it would need to take some context from …

I spent some time this week writing about the APIs here by the way: https://blog.yoshuawuyts.com/tide-channels/.
That's an interesting API direction -- not something I had thought of! I'll write down some of my thoughts, based mostly on an actual realtime app I have that's currently using WebSockets, but that could just as well use EventSource. I'm not sure that this is a typical app, so some of the features I'd like may not be that important in general.

e; i wrote most of this a couple days ago so i hadn't seen the post yet 🙃 it looks like that addresses a lot of the same things, which is prob a good sign!

I think the most common case is having one SSE endpoint, but it can make sense to have multiple for different types of resources. Or something like a …

One thing is kinda unclear to me in the proposed API (e; from the OP, but addressed in the blog post). If user A and B are connected to the SSE endpoint, and user B sends a request to …

There's a couple of scenarios where you might need to send something over an SSE connection. Sometimes you need to broadcast something to all clients, or to a subset, or to just one. Sometimes this may be directly in response to an HTTP request (then …). I think @fitzgen's proposed API is a good starting point regardless of what a higher level API would look like. If/when tide settles on a higher level API, having an escape hatch for the edge cases would still be valuable.

I think a higher level API could/should also attempt to make Last-Event-ID "just work", which would involve keeping some messages queued after a connection is lost. I don't know if this fits in very well with Tide, because you might want to store your messages elsewhere (I store them in Redis in my app to be sorta resilient to server crashes and restarts).

Without giving much thought to implementation, I think abstraction layers could look something like: …
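On making `Last-Event-ID` "just work": the message queueing described above could be sketched as a bounded in-memory replay buffer keyed by event id, from which a reconnecting client catches up (a real app might keep this in Redis instead, as noted). The `ReplayBuffer` name and shape are hypothetical:

```rust
use std::collections::VecDeque;

/// Keep the last `cap` messages so a reconnecting client can catch up
/// from the Last-Event-ID header it sends on reconnect.
struct ReplayBuffer {
    cap: usize,
    messages: VecDeque<(u64, String)>, // (event id, payload)
}

impl ReplayBuffer {
    fn new(cap: usize) -> Self {
        ReplayBuffer { cap, messages: VecDeque::new() }
    }

    /// Record a sent message, evicting the oldest when at capacity.
    fn push(&mut self, id: u64, msg: &str) {
        if self.messages.len() == self.cap {
            self.messages.pop_front();
        }
        self.messages.push_back((id, msg.to_string()));
    }

    /// Messages the client has not yet seen, given its Last-Event-ID.
    fn since(&self, last_event_id: u64) -> Vec<&str> {
        self.messages
            .iter()
            .filter(|(id, _)| *id > last_event_id)
            .map(|(_, msg)| msg.as_str())
            .collect()
    }
}

fn main() {
    let mut buf = ReplayBuffer::new(2);
    buf.push(1, "a");
    buf.push(2, "b");
    buf.push(3, "c"); // "a" is evicted: only the last 2 are kept
    assert_eq!(buf.since(1), vec!["b", "c"]);
    assert_eq!(buf.since(3), Vec::<&str>::new());
}
```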
I was thinking it'd be "broadcast to both" by default, but we'd need to have some (as of yet unspecified) escape hatch to more accurately target clients. Overall I'd love to read more on what you're thinking. I feel you're speaking from more experience than I am, and on the surface level I'm agreeing with much of what you're saying. It sounds like there are four layers to any SSE API then: …

And this should be able to be matched with ways to persist message ID sequences and queued messages on backing stores. This adds quite a few requirements, but they all seem legit. I'm also wondering how the …
I thought Server-Sent Events automatically reconnect when they disconnect? How about an event on disconnection? Maybe …

Should we make only JSON or … One more thing I noticed to be missing is how we should send server-sent events after a response has been sent to the client? So far the examples only go in the direction of sending server-sent events when a request is sent by a client. Maybe there are two more?

Regarding the grouping and stuff, from what I have seen in Node.js or socket.io-like frameworks, there is an id linked to each channel, probably something like … It would be cool to explore prior art on how other languages and different frameworks implement this.
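The per-channel grouping mentioned here could be modeled as a map from a channel id to its subscribers' send handles. A minimal synchronous sketch using std channels; the `Broker` type, its method names, and the `"room:lobby"` id are all hypothetical, not from any API discussed above:

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

/// Maps a channel id like "room:lobby" to its subscribers' senders.
struct Broker {
    channels: HashMap<String, Vec<Sender<String>>>,
}

impl Broker {
    fn new() -> Self {
        Broker { channels: HashMap::new() }
    }

    /// Register a new subscriber on `id`, returning its receiving end.
    fn subscribe(&mut self, id: &str) -> Receiver<String> {
        let (tx, rx) = channel();
        self.channels.entry(id.to_string()).or_default().push(tx);
        rx
    }

    /// Send `msg` to every subscriber of `id`; returns how many received it.
    fn broadcast(&mut self, id: &str, msg: &str) -> usize {
        let subs = match self.channels.get(id) {
            Some(subs) => subs,
            None => return 0, // nobody subscribed to this channel
        };
        subs.iter()
            .filter(|tx| tx.send(msg.to_string()).is_ok())
            .count()
    }
}

fn main() {
    let mut broker = Broker::new();
    let a = broker.subscribe("room:lobby");
    let b = broker.subscribe("room:lobby");
    assert_eq!(broker.broadcast("room:lobby", "hi"), 2);
    assert_eq!(a.recv().unwrap(), "hi");
    assert_eq!(b.recv().unwrap(), "hi");
    assert_eq!(broker.broadcast("room:other", "hi"), 0);
}
```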
Authored a lib for this today, primarily based on @goto-bus-stop's work: https://docs.rs/async-sse. The latest (2.1.0) version includes a channel-based encoder that should work nicely with the APIs that have been drafted so far.
Feature Request
Support for server-sent events in Tide apps.
Detailed Description
Support for creating server-sent event streams, easily, with Tide.
Some more general server-sent events info:
Context
I would like to use server-sent events. I suppose other folks might want to as well.
Possible Implementation
Ideally, I would be able to have some method on `Response`, or some sort of type that implements `IntoResponse`, that sets the `Content-Type: text/event-stream` header for me, and takes a `Stream<Item = ServerSentEvent>`, where `ServerSentEvent` is a struct with an optional event type string and a data string.

Maybe it would automatically send heartbeats too, to keep the connection alive?
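A minimal sketch of what such a `ServerSentEvent` struct and its wire encoding might look like, assuming the optional-event-type-plus-data shape described above (the `encode` method is illustrative, not a settled API; multi-line data becomes repeated `data:` fields per the EventSource format):

```rust
/// Matches the shape suggested above: an optional event type plus data.
struct ServerSentEvent {
    event: Option<String>,
    data: String,
}

impl ServerSentEvent {
    /// Encode in the text/event-stream wire format: an optional
    /// `event:` field, one `data:` field per line of data, and a
    /// blank line terminating the event.
    fn encode(&self) -> String {
        let mut out = String::new();
        if let Some(event) = &self.event {
            out.push_str("event:");
            out.push_str(event);
            out.push('\n');
        }
        for line in self.data.lines() {
            out.push_str("data:");
            out.push_str(line);
            out.push('\n');
        }
        out.push('\n');
        out
    }
}

fn main() {
    let ev = ServerSentEvent {
        event: Some("scripts".to_string()),
        data: "a\nb".to_string(), // multi-line data
    };
    assert_eq!(ev.encode(), "event:scripts\ndata:a\ndata:b\n\n");
}
```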