Server-Sent Events (SSE) Are Underrated
(igorstechnoclub.com)
273 points by Igor_Wiwi 19 hours ago | 118 comments
Dren95 15 hours ago | root | parent | next |
Cool, didn’t know this. I used a similar solution called Centrifugo for a while. It allows you to choose which transport to use (ws, sse, others).
apitman 17 hours ago | root | parent | prev | next |
The site mentions battery efficiency specifically. I'm curious what features Mercure provides in that direction?
kdunglas 14 hours ago | root | parent | next |
SSE/Mercure (like WebSockets) is much more battery-efficient than polling (push vs poll, less bandwidth used).
Additionally, on controlled environments, SSE can use a "push proxy" to wake up the device only when necessary: https://html.spec.whatwg.org/multipage/server-sent-events.ht...
pests 13 hours ago | root | parent | prev |
It comes down to all the extra bytes sent and processed (local and remote, and in flight) by long polling. SSE events are small while other methods might require multiple packets and all the needless headers throughout the stack, for example.
tonyhart7 11 hours ago | root | parent | prev |
It's cool, but it's in Go. Do you know of other implementations in Rust?
dugmartin 18 hours ago | prev | next |
It doesn’t mention the big drawback of SSE as spelled out in the MDN docs:
“Warning: When not used over HTTP/2, SSE suffers from a limitation to the maximum number of open connections, which can be especially painful when opening multiple tabs, as the limit is per browser and is set to a very low number (6).”
atombender 16 hours ago | root | parent | next |
One of my company's APIs uses SSE, and it's been a big support headache for us, because many people are behind corporate firewalls that don't do HTTP/2 or HTTP/3, and people often open many tabs at the same time. It's unfortunately not possible to detect client-side whether the limit has been reached.
Another drawback of SSE is lack of authorization header support. There are a few polyfills (like this one [1]) that simulate SSE over fetch/XHR, but it would be nice to not need to add the bloat.
fitsumbelay 8 hours ago | root | parent | next |
I hate to suggest a solution before testing it myself so apologies in advance but I have a hunch that Broadcast Channel API can help you detect browser tab opens on client side. New tabs won't connect to event source and instead listen for localStorage updates that the first loaded tab makes.
https://www.google.com/search?q=can+I+use+BroadcastChannel+A...
The problem in this case is how to handle the first tab closing and re-assign which tab then becomes the new "first" tab that connects to the event source, but it should be a manageable LOE to solve.
Again, apologies for suggesting unproven solutions, but at the same time I'm interested in the feedback this gets, to see if it's near the right track.
leni536 5 hours ago | root | parent | prev | next |
Supposedly websockets (the protocol) support authorization headers, but often there are no APIs for that in websocket libraries, so people just abuse the subprotocols header in the handshake.
apitman 28 minutes ago | root | parent |
I don't think the problem is libraries. Browsers don't support this.
robocat 16 hours ago | root | parent | prev | next |
Presumably you try SSE, and on failure fallback to something else like WebSockets?
Push seems to require supporting multiple communication protocols to avoid failure modes specific to one protocol - and libraries are complex because of that.
mardifoufs 15 hours ago | root | parent |
But then why not just use websockets?
virtue3 14 hours ago | root | parent |
From what I understand websockets are great until you have to load balance them. And then you learn why they aren’t so great.
hamandcheese an hour ago | root | parent | next |
My understanding is the hard part about scaling WebSockets is that they are stateful, long-lived connections. That is also true of SSE. Is there some other aspect of WebSockets that makes them harder to scale than SSE?
I guess with WebSockets, if you choose to send messages from the client to the server, then you have some additional work that you wouldn't have with SSE.
com2kid 10 hours ago | root | parent | prev |
I've scaled websockets before, it isn't that hard.
You need to scale up before your servers become overloaded, so that new connections get routed to the newly brought-up server. It is a different mentality than scaling stateless services but it isn't super duper hard.
hooli_gan 5 hours ago | root | parent |
Can you suggest some resources to learn more about Websocket scaling? Seems like an interesting topic
com2kid 12 minutes ago | root | parent |
Honestly I just flipped the right bits in the AWS load balancer (maintain persistent connections, just the first thing you are told to do when googling AWS load balancers and websockets) and set up the instance scaler to trigger based upon "# open connections / num servers > threshold".
Ideally it is based on the rate of incoming connections, but so long as you leave enough headroom when doing the stupid simple scaling rule you should be fine. Just ensure new instances don't take too long to start up.
nchmy 15 hours ago | root | parent | prev |
FYI, the dev of that library created a new, better Event Source client
atombender 15 hours ago | root | parent |
Yes, I know. We both work at Sanity, actually! The reason I didn't mention it was that the newer library isn't a straight polyfill; it offers a completely different interface with async support and so on.
jesprenj 16 hours ago | root | parent | prev | next |
You can easily multiplex data over one connection/event stream. You can design your app so that it only uses one eventstream for all events it needs to receive.
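A sketch of that multiplexing approach (the event names, `/events` URL, and `createDispatcher` helper are illustrative, not from the thread): named SSE events are routed through a tiny dispatcher, so the whole app shares one EventSource. The dispatcher is plain JavaScript; the browser-only wiring is shown as comments.

```javascript
// Route many logical event types over one SSE connection.
function createDispatcher() {
  const handlers = new Map();
  return {
    on(event, fn) {
      if (!handlers.has(event)) handlers.set(event, []);
      handlers.get(event).push(fn);
    },
    dispatch(event, data) {
      // Unknown event names are a no-op rather than an error.
      for (const fn of handlers.get(event) ?? []) fn(data);
    },
  };
}

// In the browser, the wiring would look roughly like:
// const es = new EventSource('/events');
// const bus = createDispatcher();
// for (const name of ['chat', 'presence']) {
//   es.addEventListener(name, (e) => bus.dispatch(name, JSON.parse(e.data)));
// }
```

On the wire this relies on the server tagging each message with an `event:` field, so each subsystem only sees the events it registered for.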
raggi 16 hours ago | root | parent |
This, it works well in a service worker for example.
tomsmeding 4 hours ago | root | parent | next |
The caniuse link in the OP, under Known Issues, notes that Firefox currently does not support EventSource in a service worker. https://caniuse.com/?search=EventSource
nikeee 15 hours ago | root | parent | prev |
How does this work with a service worker? I've only managed to do this via SharedWorker (which is not available on Chrome on Android).
raggi 6 hours ago | root | parent |
You can just open a stream in the service worker and push events via postMessage and friends.
Another nice thing to do is to wire up a simple filesystem monitor for all your cached assets that pushes path & timestamp events to the service worker whenever they change, then the service worker can refresh affected clients too (with only a little work this is endgame livereload if you’re not constrained by your environment)
RadiozRadioz 18 hours ago | root | parent | prev | next |
That is a very low number. I can think of many reasons why one would end up with more. Does anyone know why it is so low?
raggi 16 hours ago | root | parent | next |
The number was set while Apache was dominant and common deployments would get completely tanked by a decent number of clients opening more conns than this. c10k was a thing once; these days c10m is relatively trivial.
apitman 17 hours ago | root | parent | prev | next |
Historical reasons. The HTTP/1.1 spec actually recommends limiting clients to 2 connections per domain. That said, I'm not sure why it's still so low. I would guess mostly to avoid unintended side effects of changing it.
gsnedders 8 hours ago | root | parent | next |
> The HTTP/1.1 spec actually recommends limiting clients to 2 connections per domain.
This is no longer true.
From RFC 9112 § 9.4 (https://httpwg.org/specs/rfc9112.html#rfc.section.9.4):
> Previous revisions of HTTP gave a specific number of connections as a ceiling, but this was found to be impractical for many applications. As a result, this specification does not mandate a particular maximum number of connections but, instead, encourages clients to be conservative when opening multiple connections.
apitman 23 minutes ago | root | parent |
If this was a MUST would it have required a bump from 1.1?
dontchooseanick 8 hours ago | root | parent | prev |
Because you're supposed to use a single connection with HTTP pipelining for all your resources [1]
When index.html loads 4 CSS and 5 JS files: 10 resources in HTTP/1.0 needed 10 connections, with 10 TLS negotiations (unless one resource loaded fast and you could reuse its released connection).
With HTTP/1.1 pipelining you open only one connection, including a single TLS negotiation, and ask for 10 resources.
Why not only 1 per domain, then? IIRC it's because the 1st resource, index.html, may take a lot of time to complete, and race conditions suggest you use another connection than the 'main thread', more or less. So basically 2 are sufficient.
immibis 38 minutes ago | root | parent |
HTTP pipelining isn't used by clients.
foota 16 hours ago | root | parent | prev | next |
Probably because without http/2 each would require a TCP connection, which could get expensive.
giantrobot 15 hours ago | root | parent | prev |
Because 30 years ago server processes often (enough) used inetd or served a request with a forked process. A browser hitting a server with a bunch of connections, especially over slow network links where the connection would be long lived, could swamp a server. Process launches were expensive and could use a lot of memory.
While server capacity in every dimension has increased the low connection count for browsers has remained. But even today it's still a bit of a courtesy to not spam a server with a hundred simultaneous connections. If the server implicitly supports tons of connects with HTTP/2 support that's one thing but it's not polite to abuse HTTP/1.1 servers.
SahAssar 17 hours ago | root | parent | prev | next |
There is little reason to not use HTTP/2 these days unless you are not doing TLS. I can understand not doing HTTP/3 and QUIC, but HTTP/2?
jiggawatts 16 hours ago | root | parent |
Corporate proxy servers often downgrade connections to HTTP 1.1 because inertia and lazy vendors.
SahAssar 16 hours ago | root | parent |
To do that they need to MITM and tamper with the inner protocol.
In my experience this is quite rare. Some MITM proxies analyze the traffic, restrict which ciphers can be used, block non-dns udp (and therefore HTTP/3), but they don't usually downgrade the protocol from HTTP/2 to HTTP/1.
geoffeg 14 hours ago | root | parent | next |
That hasn't been my experience at large corporations. They usually have a corporate proxy which only speaks HTTP 1.1, intercepts all HTTPS, and doesn't support websockets (unless you ask for an exception) and other more modern HTTP features.
arccy 15 hours ago | root | parent | prev | next |
"tamper" sounds much more involved than what they (their implementation) probably do: the proxy decodes the http request, potentially modifies it, and uses the decoded form to send a new request using their client, which only speaks http/1
dilyevsky 13 hours ago | root | parent | prev |
That’s exactly what they’re doing and it’s still very common in private networks
nhumrich 11 hours ago | root | parent | prev | next |
HTTP/2 is controllable by you, since it's supported in every browser. So, the way to fix this limitation is to use HTTP/2.
lolinder 3 hours ago | root | parent | next |
This was already suggested and someone pointed out that some corporate networks MITM everything without HTTP/2 support:
jillesvangurp 6 hours ago | root | parent | prev |
Yes, use a proper load balancer that can do that. And use Http3 which is also supported by all relevant browsers at this point. There's no good reason to build new things on top of old things.
k__ 18 hours ago | root | parent | prev |
And over HTTP/2 and 3 they are efficient?
apitman 18 hours ago | root | parent |
HTTP/2+ only uses a single transport connection (TCP or QUIC) per server, and multiplexes over that. So there's essentially no practical limit.
toomim 15 hours ago | root | parent |
Except that browsers add a limit of ~100 connections even with HTTP/2, for no apparently good reason.
piccirello 18 hours ago | prev | next |
I utilized SSE when building automatic restart functionality[0] into Doppler's CLI. Our api server would send down an event whenever an application's secrets changed. The CLI would then fetch the latest secrets to inject into the application process. (I opted not to directly send the changed secrets via SSE as that would necessitate rechecking the access token that was used to establish the connection, lest we send changed secrets to a recently deauthorized client.)

I chose SSE over websockets because the latter required pulling in additional dependencies into our Golang application, and we truly only needed server->client communication.

One issue we ran into that hasn't been discussed is HTTP timeouts. Some load balancers close an HTTP connection after a certain timeout (e.g. 1 hour) to prevent connection exhaustion. You can usually extend this timeout, but it has to be explicitly configured. We also found that our server had to send intermittent "ping" events to prevent either Cloudflare or Google Cloud Load Balancing from closing the connection, though I don't remember how frequently these were sent. Otherwise, SSE worked great for our use case.
apitman 17 hours ago | root | parent | next |
Generally you're going to want to send ping events pretty regularly (I'd default to every 15-30 seconds depending on application) whether you're using SSE, WebSockets, or something else. Otherwise if the server crashes the client might not know the connection is no longer live.
robocat 16 hours ago | root | parent | next |
What do you do for mobile phones: using data/radio for pings would kill the battery?
After locking the phone, how is the ping restarted when the phone is unlocked? Or backgrounding the browser/app?
erinaceousjones 7 hours ago | root | parent |
The way I've implemented SSE is to make use of the fact it can also act like HTTP long-polling when the GET request is initially opened. The SSE events can be given timestamps or UUIDs and then subsequent requests can include the last received ID or the time of the last received event, and request the SSE endpoint replay events up until the current time.
You could also add a ping with a client-requestable interval, e.g. 30 seconds (for foreground app) and 5 minutes or never (for backgrounded app), so the TCP connection is less frequently going to cause wake events when the device is idle. As client, you can close and reopen your connection when you choose, if you think the TCP connection is dead on the other side or you want to reopen it with a new ping interval.
Tradeoff of `?lastEventId=` - your SSE serving thing needs to keep a bit of state, like having a circular buffer of up to X hours worth of events. Depending on what you're doing, that may scale badly - like if your SSE endpoint is multiple processes behind a round-robin load balancer... But that's a problem outside of whether you're choosing to use SSE, websockets or something else.
To be honest, if you're worrying about mobile drain, the most battery efficient thing I think anyone can do is admit defeat and use one of the vendor locked-in things like firebase (GCM?) or apple's equivalent notification things: they are using protocols which are more lightweight than HTTP (last I checked they use XMPP same as whatsapp?), can punch through firewalls fairly reliably, batch notifications from many apps together so as to not wake devices too regularly, etc etc...
Having every app keep their own individual connections open to receive live events from their own APIs sucks battery in general, regardless of SSE or websockets being used.
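The `?lastEventId=` replay idea described above might look roughly like this (a minimal sketch; the class name, capacity, and field names are all hypothetical): keep a bounded buffer of recent events so a reconnecting client can catch up without the server holding unbounded state.

```javascript
// Bounded event history for Last-Event-ID style replay on reconnect.
class EventBuffer {
  constructor(capacity = 1000) {
    this.capacity = capacity;
    this.events = [];   // [{ id, data }], oldest first
    this.nextId = 1;
  }
  push(data) {
    const ev = { id: this.nextId++, data };
    this.events.push(ev);
    if (this.events.length > this.capacity) this.events.shift(); // evict oldest
    return ev;
  }
  // Events newer than lastEventId; empty array if the client is caught up.
  since(lastEventId) {
    return this.events.filter((ev) => ev.id > lastEventId);
  }
}
```

A client that missed events beyond the buffer's horizon would need a full resync, which is the scaling caveat the comment above mentions.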
sabareesh 17 hours ago | root | parent | prev |
Yeah, with Cloudflare you need to do it every 30 seconds, as the timeout is 60 seconds.
loloquwowndueo 16 hours ago | root | parent |
Then why not do it every 59 seconds :)
virtue3 14 hours ago | root | parent |
You’d probably want to do it every 29 seconds in case a ping fails to send/deliver.
Xenoamorphous 17 hours ago | root | parent | prev |
I also used SSE 6 or so years ago, and had the same issue with our load balancer; a bit hacky but what I did was to set a timer that would send a single colon character (which is the comment delimiter IIRC) periodically to the client. Is that what you meant by “ping”?
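For reference, a line starting with a colon is indeed an SSE comment, which clients ignore, making it a cheap keepalive. A small sketch of both frame types (the helper names are made up, not a standard API):

```javascript
// SSE wire-format helpers. A ":" line is a comment the client ignores,
// which makes it a cheap keepalive ping.
function sseComment(text = 'ping') {
  return `: ${text}\n\n`;
}

function sseEvent(data, { event, id } = {}) {
  let frame = '';
  if (id !== undefined) frame += `id: ${id}\n`;
  if (event !== undefined) frame += `event: ${event}\n`;
  // Each line of the payload needs its own "data:" field.
  for (const line of String(data).split('\n')) frame += `data: ${line}\n`;
  return frame + '\n'; // blank line terminates the event
}
```

Writing `sseComment()` to the response on a timer is the "ping" being discussed; it keeps intermediaries from idling out the connection without triggering any `message` event on the client.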
RevEng an hour ago | prev | next |
OpenAI's own documentation makes note of how difficult it is to work with SSE and recommends just using their library instead. My team wrote our own parser for these streaming events from an OpenAI-compatible LLM server. The streaming format is awful. The double-newline block separator also shows up in a bunch of our text, making parsing a nightmare. The "data:" signifier is slightly better, but when working with scientific software, it still occurs too often. Instead we've had to rely on the totally-not-reliable fact that the server returns each event as a separate packet and the receiving end can be set up to return each packet in the stream.
The suggestions I've found online for how to deal with the newline issue are to fold together consecutive newlines, but this loses formatting of some documents and otherwise means there is no way to transmit data verbatim. That might be fine for HTML or other text formats where newlines are pretty much optional, but it sucks for other data types.
I'm happy to have something like SSE but the protocol needs more time to cook.
apitman 18 hours ago | prev | next |
> Perceived Limitations: The unidirectional nature might seem restrictive, though it's often sufficient for many use cases
For my use cases the main limitations of SSE are:
1. Text-only, so if you want to do binary you need to do something like base64
2. Browser connection limits for HTTP/1.1, ie you can only have ~6 connections per domain[0]
Connection limits aren't a problem as long as you use HTTP/2+.
Even so, I don't think I would reach for SSE these days. For less latency-sensitive and data-use sensitive applications, I would just use long polling.
For things that are more performance-sensitive, I would probably use fetch with ReadableStream body responses. On the server side I would prefix each message with a 32bit integer (or maybe a variable length int of some sort) that gives the size of the message. This is far more flexible (by allowing binary data), and has less overhead compared to SSE, which requires 7 bytes ("data:" + "\n\n") of overhead for each message.
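The length-prefixed framing described above might be sketched like this (Node `Buffer` is used for brevity; in a browser you'd do the same with `DataView`/`Uint8Array`). The decoder is incremental, so it tolerates arbitrary chunk boundaries coming off a `ReadableStream`:

```javascript
// Length-prefixed framing: each message is a 32-bit big-endian length
// followed by that many payload bytes.
function encodeFrame(payload) {
  const header = Buffer.alloc(4);
  header.writeUInt32BE(payload.length, 0);
  return Buffer.concat([header, payload]);
}

// Returns a stateful feed function: pass it stream chunks, get back an
// array of complete messages (possibly empty) each call.
function makeDecoder() {
  let pending = Buffer.alloc(0);
  return function feed(chunk) {
    pending = Buffer.concat([pending, chunk]);
    const messages = [];
    while (pending.length >= 4) {
      const len = pending.readUInt32BE(0);
      if (pending.length < 4 + len) break; // wait for more bytes
      messages.push(pending.subarray(4, 4 + len));
      pending = pending.subarray(4 + len);
    }
    return messages;
  };
}
```

Unlike SSE's text framing, the payload here can be arbitrary binary with zero escaping, at a fixed 4-byte-per-message cost.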
nhumrich 11 hours ago | root | parent | next |
ReadableStream appears to be SSE without any defined standards for chunk separation. In practice, how is it any different from using SSE? It appears to use the same concept.
tomsmeding 4 hours ago | root | parent |
Presumably, ReadableStream does not auto-reconnect.
nchmy 15 hours ago | root | parent | prev |
You can do fetch and readable stream with SSE - here's an excellent client library for that
Tiberium 17 hours ago | prev | next |
One thing I dislike with regard to SSE, which is not its fault but probably a side effect of the perceived simplicity: lots of developers do not actually use proper implementations and instead just parse the data chunks with regex, or something of the sort! This is bad because SSE, for example, supports comments (": text") in streams, which most of those hand-rolled implementations don't support.
For example, my friend used an LLM proxy that sends keepalive/queue data as SSE comments (just for debugging mainly), but it didn't work for Gemini, because someone at Google decided to parse SSE with a regex: https://github.com/google-gemini/generative-ai-js/blob/main/... (and yes, if the regex doesn't match the complete line, the library will just throw an error)
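For comparison, a parser that follows the spec doesn't need much more code than the regex approach. This sketch (the function name is made up) handles comment lines, multi-line `data:` fields, and named events; feed it one decoded line at a time:

```javascript
// Line-oriented SSE parser: handles ":" comments, multi-line "data:"
// fields, and "event:"/"id:" fields, per the WHATWG event stream format.
function makeSSEParser(onEvent) {
  let data = [];
  let event;
  let id;
  return function line(raw) {
    if (raw === '') {                     // blank line: dispatch the event
      if (data.length > 0) {
        onEvent({ event: event ?? 'message', id, data: data.join('\n') });
      }
      data = []; event = undefined;       // last event ID persists per spec
      return;
    }
    if (raw.startsWith(':')) return;      // comment line: ignore
    const colon = raw.indexOf(':');
    const field = colon === -1 ? raw : raw.slice(0, colon);
    let value = colon === -1 ? '' : raw.slice(colon + 1);
    if (value.startsWith(' ')) value = value.slice(1); // strip one leading space
    if (field === 'data') data.push(value);
    else if (field === 'event') event = value;
    else if (field === 'id') id = value;
  };
}
```

A parser like this is unbothered by keepalive comments, which is exactly the case the Gemini regex above chokes on.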
recursivedoubts 18 hours ago | prev | next |
https://data-star.dev is a hypermedia-oriented front end library built entirely around the idea of streaming hypermedia responses via SSE.
It was developed using Go & NATS as backend technologies, but works with any SSE implementation.
Worth checking out if you want to explore SSE and what can be achieved w/it more deeply. Here is an interview with the author:
andersmurphy an hour ago | root | parent | next |
+1 for recommending data-star. The combination of idiomorph (thank you), SSE and signals is fantastic for making push based and/or multiplayer hypermedia apps.
sudodevnull an hour ago | root | parent | prev |
Datastar author here, happy to answer any questions!
anshumankmr 6 hours ago | prev | next |
SSE is not underrated. In fact it's being used by OpenAI for streaming completions. It's just not always needed, unlike the very obvious use cases for normal REST APIs and WebSockets.
It was a pain to figure out how to get it to work in a ReactJS codebase I was working on; from what I remember, Axios didn't support it at the time, so I had to use native fetch to get it to work.
hamandcheese 16 hours ago | prev | next |
I tried implementing SSE in a web project of mine recently, and was very surprised when my website totally stopped working when I had more than 6 tabs open.
It turns out, Firefox counts SSE connections against the 6 host max connections limit, and gives absolutely no useful feedback that it's blocking the subsequent requests due to this limit (I don't remember the precise error code and message anymore, but it left me very clueless for a while). It was only when I stared at the lack of corresponding server side logs that it clicked.
I don't know if this same problem happens with websockets or not.
uncomplexity_ 16 hours ago | root | parent | next |
wait let's check this
https://news.ycombinator.com/item?id=42511562
at https://developer.mozilla.org/en-US/docs/Web/API/Server-sent... it says
"Warning: When not used over HTTP/2, SSE suffers from a limitation to the maximum number of open connections, which can be especially painful when opening multiple tabs, as the limit is per browser and is set to a very low number (6). The issue has been marked as "Won't fix" in Chrome and Firefox. This limit is per browser + domain, which means that you can open 6 SSE connections across all of the tabs to www.example1.com and another 6 SSE connections to www.example2.com (per Stack Overflow). When using HTTP/2, the maximum number of simultaneous HTTP streams is negotiated between the server and the client (defaults to 100)."
so the fix is just use http/2 on server-side?
remram 16 hours ago | root | parent | next |
Or a SharedWorker that creates a single SSE connection for all your tabs.
SharedWorker is not very complicated but it's another component to add. It would be cool if this was built into SSE instead.
uncomplexity_ 16 hours ago | root | parent | next |
okay wtf this is amazing, seems usable with websockets too.
usage of Shared Web Workers https://dev.to/ayushgp/scaling-websocket-connections-using-s...
caniuse Shared Web Workers 45% https://caniuse.com/sharedworkers
caniuse BroadcastChannel 96% https://caniuse.com/broadcastchannel
nchmy 15 hours ago | root | parent |
Yeah, the issue with SharedWorkers is that Android Chromium doesn't support it yet. https://issues.chromium.org/issues/40290702
But you can also use the Web Locks API (https://developer.mozilla.org/en-US/docs/Web/API/Web_Locks_A...) rather than Broadcast Channel.
This library (https://github.com/pubkey/broadcast-channel/blob/master/src/...) from the fantastic RxDB javascript DB library uses Web Locks with a fallback to Broadcast Channel. But Web Locks are supported on 96% of browsers, so it's probably safe to just use them exclusively now.
hamandcheese 15 hours ago | root | parent | prev |
Ultimately this is what I did. But if you need or want per-tab connection state it will get complicated in a hurry.
ksec 11 hours ago | root | parent | prev |
Even if they don't change the default of 6 open connections, they could have at least made it per tab rather than per site. [1] [2] And I don't understand why this hasn't been done in the past 10 years.
What am I missing?
mikojan 16 hours ago | root | parent | prev |
That's only if not used over HTTP/2 and it says so in the docs too[0]
[0]: https://developer.mozilla.org/en-US/docs/Web/API/EventSource
hamandcheese 16 hours ago | root | parent |
AFAIK browsers require https with http2. This is a locally running server/app which will probably never have https. Maybe there is an exception for localhost, I'm not sure.
ramon156 18 hours ago | prev | next |
They're underrated when they work™
Currently at work I'm having issues because:
- Auth between an embedded app and javascript's EventSource is not working, so I have to resort to a Microsoft package which doesn't always work.
- Not every tunnel is fond of keep-alive (Cloudflare), so I had to switch to ngrok (until I found out they have a limit of 20k requests).
I know this isn't the protocol's fault, and I'm sure there's something I'm missing, but my god is it frustrating.
nchmy 15 hours ago | root | parent |
Try this sse client https://github.com/rexxars/eventsource-client
aniketchopade 4 hours ago | prev | next |
I had some trouble implementing SSE when the server has to wait for another endpoint (a webhook) to feed output to the browser. During request processing (thread 1), I had to store the SSE context in a native Java object to be retrieved later when the webhook (thread 2) is called. But then with multiple service instances you wouldn't know which instance had stored it, so the webhook had to publish something that the others had to subscribe to.
_caw 12 hours ago | prev | next |
> SSE works seamlessly with existing HTTP infrastructure
This is false. SSE is not supported on many proxies, and isn't even supported on some common local proxy tooling.
schmichael 17 hours ago | prev | next |
I’ve never understood the use of SSE over ndjson. Builtin browser support for SSE might be nice, but it seems fairly easy to handle ndjson? For non-browser consumers ndjson is almost assuredly easier to handle. ndjson works over any transport from HTTP/0.9 to HTTP/3 to raw TCP or unix sockets or any reliable transport protocol.
apitman 17 hours ago | root | parent |
Manually streaming an XHR and parsing the messages is significantly more work, and you lose the built-in browser API. But if you use a fetch ReadableStream with TLV messages, I'm sold.
nchmy 15 hours ago | root | parent |
Here's SSE with fetch and streams https://github.com/rexxars/eventsource-client
benterix 4 hours ago | prev | next |
The topic is interesting but the ChatGPT style of presenting information as bullet points is tiring.
upghost 16 hours ago | prev | next |
Does anyone have a good trick for figuring out when the client side connection is closed? I just kill the connection on the server every N minutes and force the client to reconnect, but it's not exactly graceful.
Secondly, on iOS mobile, I've noticed that the EventSource seems to fall asleep at some point and not wake up when you switch back to the PWA. Does anyone know what's up with that?
nhumrich 11 hours ago | root | parent | next |
The socket closes. Most languages bubble this back up to you with a connection closed exception. In python async world, it would be a cancelled error.
jesprenj 16 hours ago | root | parent | prev |
Send a dummy event and see if you get an ACK in response. Depends on the library you're using.
upghost 16 hours ago | root | parent |
There's no ack on a raw SSE stream, unfortunately -- unless you mean send an event and expect the client to issue an HTTP request to the server like a keepalive?
jauco 8 hours ago | root | parent |
There should be an ACK on the tcp packet (IIRC it’s not a lateral ACK but something like it) and the server should handle a timeout on that as the connection being “closed” which can be returned to the connection opener.
You might want to look into timeouts or error callbacks on your connection library/framework.
upghost 4 hours ago | root | parent |
Interesting, hadn't checked at the TCP level. Will need to look into that.
jauco 4 hours ago | root | parent |
I remembered wrong. In most circumstances a tcp connection will be gracefully terminated by sending a FIN message. The timeout I talked about is on an ACK for a keepalive message. So after x time of not receiving a keepalive message the connection is closed. This handles cases where a connection is ungracefully dropped.
All this is done at the kernel level, so at the application level you should be able to just verify if the connection is open by trying a read from the socket.
upghost an hour ago | root | parent |
Thanks for clarifying, that would've sent me on a long wild goose chase. Most libraries only provide some sort of channel to send messages to. They generally do not indicate any FIN or ACK received at the TCP level.
If anyone knows any library or framework in any language that solves this problem, I'd love to hear about it.
est 10 hours ago | prev | next |
I built several internal tools to tail logs using SSE with Flask/FastAPI. Easy to implement and maintain.
For FastAPI if you want some hooks when client disconnects aka nginx 499 errors, follow this simple tip
https://github.com/encode/starlette/discussions/1776#discuss...
fitsumbelay 8 hours ago | prev | next |
Finding use cases for SSE and reading about others doing the same brings me great joy. Very easy to set up -- you just set 2 or 3 response headers and off you go.
I have a hard time imagining the tech's limits outside of testing scenarios so some of the examples brought up here are interesting
ksajadi 12 hours ago | prev | next |
I'm curious as to how everyone deals with HTTP/2 requirements between the backend servers and the load balancer. By default, HTTP/2 requires TLS, which means either no SSL termination at the load balancer or an SSL cert generated per server with a different one for the front-end load balancer. This all seems very inefficient.
kcb 12 hours ago | root | parent | next |
Not sure how widespread this is but AWS load balancers don't validate the backend cert in any way. So I just generate some random self signed cert and use it everywhere.
nhumrich 11 hours ago | root | parent | prev |
You don't need http2 on the actual backend. All limitations for SSE/http1 are browser level. Just downgrade to http1 from the LB to backend, even without SSL. As long as LB to browser is http2 you should be fine.
ksajadi 11 hours ago | root | parent |
Isn't that going to affect the whole multiplexing / multiple connection of SSEs?
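As a concrete sketch of the downgrade-at-the-LB setup (names, ports, and timeouts are placeholders, not from the thread): nginx terminates TLS and HTTP/2 toward browsers and speaks plain HTTP/1.1 to the SSE backend, so multiplexing only needs to exist on the browser-facing side and the backend never needs TLS or HTTP/2.

```nginx
# HTTP/2 to browsers, HTTP/1.1 to the SSE backend.
server {
    listen 443 ssl http2;

    location /events {
        proxy_pass http://127.0.0.1:8080;   # backend speaks plain HTTP/1.1
        proxy_http_version 1.1;
        proxy_set_header Connection "";     # keep the upstream connection open
        proxy_buffering off;                # stream events immediately
        proxy_read_timeout 1h;              # allow long-lived streams
    }
}
```

Each browser stream maps to its own upstream HTTP/1.1 connection, so the 6-connection limit doesn't reappear on the LB-to-backend leg.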
yu3zhou4 18 hours ago | prev | next |
I had no idea they existed until I began to use APIs serving LLM outputs. They work pretty well for this purpose in my experience. An alternative to SSE for this purpose is websockets, I suppose.
lakomen 4 hours ago | prev | next |
No they're not. They're limited to 6 connections per browser per domain on HTTP/1.1. Which is important because nginx can't reverse proxy HTTP/2 or higher to upstreams, so you end up with very weird functionality; essentially you can't use nginx with SSE.
https://developer.mozilla.org/en-US/docs/Web/API/Server-sent...
Edit: I see someone already posted about that
sbergjohansen 17 hours ago | prev | next |
Previous related discussion (2022):
https://news.ycombinator.com/item?id=30403438 (100 comments)
programmarchy 18 hours ago | prev | next |
Great post. I discovered SSE when building a chatbot and found out it’s what OpenAI used rather than WebSockets. The batteries-included automatic reconnection is huge, and the format is surprisingly human readable.
crowdyriver 7 hours ago | prev | next |
http streaming is even more underrated.
whatever1 17 hours ago | prev | next |
Can Django with vanilla gunicorn do this ?
mfalcao 7 hours ago | root | parent |
Yes, I’ve done it using StreamingHttpResponse. You’ll want to use an asynchronous worker type though.
Tiberium 17 hours ago | prev | next |
Also, another day, another mostly AI-written article on HN's top page :)
emmanueloga_ 16 hours ago | root | parent | next |
It’s funny how HN has a mix of people who think AGI is just around the corner, people trying to build/sell stuff that uses LLMs, and others who can’t stand LLM-generated content. Makes me wonder how much overlap there is between these groups.
remram 16 hours ago | root | parent | next |
Those are not incompatible positions at all. You can think great AI is around the corner and still dislike today's not-great AI writing.
Tiberium 16 hours ago | root | parent | prev |
I don't have anything against LLMs, I use them daily myself, but publishing content that's largely AI-generated without a disclaimer just feels dishonest to me. It also bothers me when people don't spend at least some effort making the style more natural, e.g. those bullet-point lists in the article that Claude loves so much.
slow_typist 8 hours ago | root | parent | prev |
What makes you think the article is AI-written?
henning 16 hours ago | prev | next |
They are handy for implementing simple ad-hoc hot reloading systems as well. E.g. you can have whatever file watcher you are using call an API when a file of interest changes, and that API sends an event to listening clients on the frontend. If you make an API change, you can also trigger the event at boot time after the backend restarts. Then you just add a dev-only snippet to your base template that reloads the page or whatever. It's better than nothing if your stack doesn't support hot reloading out of the box, doesn't take very much code, and doesn't require adding any project dependencies. It's not as sophisticated as React environments that reload only the component that changed and do a full page refresh only when needed, but it still gives a nice, responsive feeling when paired with tools that recompile your backend when it changes.
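A toy sketch of the watcher side of such a setup (a real one would use watchdog/inotify; the `broadcast_sse` call in the comment is hypothetical):

```python
import os

def snapshot(paths):
    """Record current modification times for the watched files."""
    return {p: os.path.getmtime(p) for p in paths}

def changed(paths, seen):
    """Return the files whose mtime differs from the snapshot."""
    return [p for p in paths if os.path.getmtime(p) != seen[p]]

# Dev-server loop (sketch): on each change, broadcast a dev-only
# "reload" SSE event; the page's EventSource handler then calls
# location.reload().
#
#   while True:
#       for path in changed(watched, seen):
#           seen[path] = os.path.getmtime(path)
#           broadcast_sse(f"event: reload\ndata: {path}\n\n")
#       time.sleep(0.5)
```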
condiment 18 hours ago | prev |
So it’s websockets, only instead of the Web server needing to handle the protocol upgrade, you just piggyback on HTTP with an in-band protocol.
I’m not sure this makes sense in 2024. Pretty much every web server supports websockets at this point, and so do all of the browsers. You can easily impose the constraint on your code that communication through a websocket is mono-directional. And the capability to broadcast a message to all subscribers is going to be deceptively complex, no matter how you broadcast it.
realPubkey 18 hours ago | root | parent | next |
Yes, most servers support WebSockets. But unfortunately many proxies and firewalls do not, especially in big company networks. Suggesting that my users use SSE for my database replication stream solved most of their problems. Also, setting up an SSE endpoint is like 5 lines of code. WebSockets require much more, and you also have to do things like pings to ensure it automatically reconnects. SSE with the JavaScript EventSource API has all you need built in:
https://rxdb.info/articles/websockets-sse-polling-webrtc-web...
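On the built-in reconnection: the server can emit `id:` and `retry:` fields, and on automatic reconnect the browser resends the last seen ID in a `Last-Event-ID` header, so the server can resume the stream. A sketch of serializing such an event (the helper name is my own):

```python
def sse_event(data, event_id=None, retry_ms=None):
    """Serialize one SSE event.

    Setting `id:` makes EventSource resend it as Last-Event-ID on
    reconnect; `retry:` tunes the client's reconnection delay.
    """
    parts = []
    if retry_ms is not None:
        parts.append(f"retry: {retry_ms}")
    if event_id is not None:
        parts.append(f"id: {event_id}")
    for line in data.splitlines() or [""]:
        parts.append(f"data: {line}")
    return "\n".join(parts) + "\n\n"

print(sse_event("hello", event_id=7, retry_ms=3000))
```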
the_mitsuhiko 18 hours ago | root | parent |
SSE also works well on HTTP/3 whereas web sockets still don’t.
apitman 18 hours ago | root | parent |
I don't see much point in WebSockets for HTTP/3. WebTransport will cover everything you would need it for, and more.
the_mitsuhiko 17 hours ago | root | parent |
That might very well be but the future is not today.
apitman 17 hours ago | root | parent |
But why add it to HTTP/3 at all? HTTP/1.1 hijacking is a pretty simple process; I suspect HTTP/3 would be significantly more complicated. I'm not sure the effort is worth it when WebTransport will make it obsolete.
leni536 4 hours ago | root | parent | next |
To have multiple independent websocket streams, without ordering requirements between streams.
the_mitsuhiko 9 hours ago | root | parent | prev |
It was added to HTTP/2 as well and there is an RFC. (Though a lot of servers don’t support it even on HTTP/2)
My point is mostly that SSE works well and is supported, and that has a meaningful benefit today.
kdunglas 18 hours ago | next |
A while ago I created Mercure: an open pub-sub protocol built on top of SSE that is a replacement for WebSockets-based solutions such as Pusher. Mercure is now used by hundreds of apps in production.
At the core of Mercure is the hub. It is a standalone component that maintains persistent SSE (HTTP) connections to the clients, and it exposes a very simple HTTP API that server apps and clients can use to publish. POSTed updates are broadcast to all connected clients using SSE. This makes SSE usable even with technologies that cannot maintain persistent connections, such as PHP and many serverless providers.
Mercure also adds nice features to SSE such as a JWT-based authorization mechanism, the ability to subscribe to several topics over a single connection, an event history, and automatic state reconciliation in case of network issues.
I maintain an open-source hub written in Go (technically, a module for the Caddy web server) and a SaaS version is also available.
Docs and code are available on https://mercure.rocks
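For illustration, publishing through a hub like Mercure boils down to a single authenticated form POST. This sketch follows the publicly documented protocol shape (form-encoded `topic` and `data`, JWT bearer auth); the hub URL and JWT are placeholders, and the request is built but not sent:

```python
from urllib import parse, request

def build_publish_request(hub_url, topic, data, jwt):
    """Build the POST a server app would send to a Mercure hub to
    publish an update. Sketch only; not executed against a real hub."""
    body = parse.urlencode({"topic": topic, "data": data}).encode()
    req = request.Request(hub_url, data=body, method="POST")
    req.add_header("Authorization", f"Bearer {jwt}")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    return req

req = build_publish_request(
    "https://example.com/.well-known/mercure",  # placeholder hub URL
    "https://example.com/books/1",
    '{"status": "available"}',
    "<publisher JWT>",  # placeholder token
)
# request.urlopen(req) would broadcast the update to all subscribers
```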