Auto re-download incomplete resources? (Stop "refreshing" broken pages?)

Sometimes Discourse users will get a broken page, sometimes subtly broken, sometimes obviously… I assume this is because one or more requested resources (JavaScript or images) didn’t finish downloading.

When building the site for production, would it be feasible to include a hash or filesize requirement for each resource file, and if it doesn’t match, automatically try to grab it once or twice?
For JSON requests we couldn’t preload any size or hash, but maybe a token at the end of every response could serve the same purpose?
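
Just to make it concrete, here’s a rough sketch of the kind of check I mean, using only standard fetch and Web Crypto. The function, URL handling, and retry count are all hypothetical, not anything Discourse does today:

```ts
// Sketch only: fetch a resource, compare it to a SHA-256 hash recorded at
// build time, and retry a couple of times on mismatch.
async function fetchWithHashCheck(
  url: string,
  expectedSha256Hex: string,
  retries = 2
): Promise<ArrayBuffer> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    // Bypass the local cache on retries, since the cached copy may be the corrupt one.
    const response = await fetch(url, { cache: attempt === 0 ? "default" : "reload" });
    const body = await response.arrayBuffer();

    const digest = await crypto.subtle.digest("SHA-256", body);
    const actualHex = Array.from(new Uint8Array(digest))
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");

    if (actualHex === expectedSha256Hex) {
      return body; // download arrived intact
    }
    // Mismatch: the transfer was truncated or altered, so loop and try again.
  }
  throw new Error(`Still corrupt after ${retries + 1} attempts: ${url}`);
}
```

Browsers can already do the verification half of this via the `integrity` attribute (Subresource Integrity) on `<script>` and `<link>` tags, but on a mismatch they just fail the load rather than retrying.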

We have a few users that don’t quite know when to “refresh”…

Or maybe I’m getting this wrong and the cause is the browser rather than the network?


Errrr what? This is not remotely normal.

You’re saying that it’s not remotely “normal” that some small percentage of requests result in a broken page? I’m not saying it’s more than 1%…

I’ve not heard of this across any of the sites I work with. It would be terrible to patch in something to deal with a symptom of an underlying network or server issue.

In the cases where you’ve observed it do you have access to logs? How widespread are user reports? Even if it’s one in 10k we should be able to pinpoint how it’s arising.

Definitely one of those “tell us more about the problem you think you’re trying to solve, rather than how you think it should be addressed” situations.


I’m exactly talking about networking issues. Sometimes we’re on a great network, sometimes we’re on a horrible network.

MOSH exists as an alternative to SSH for precisely this purpose.

If you can increase reliability for users when they’re on really bad networks, why not?

You’re really dumbing down what the guys built there to try and make a point, doing them a real disservice in the process. MOSH exists to handle latency and IP roaming, which means it can gracefully handle disconnects. They built a whole new UDP protocol to achieve it, which requires a new server agent and a different client.

Unless you’re suggesting a new browser engine we’re still constrained by the unique personalities of browsers and good old HTTP.

How many of your users are trying to use Discourse across terrible connections? How significant are the occurrences you reference above? You’re still not talking about the specifics of the problem, just making assumptions and oversimplifying potential solutions.


The headline on mosh.org:

Mosh is a replacement for interactive SSH terminals. It’s more robust and responsive, especially over Wi-Fi, cellular, and long-distance links.

Yet I simply asked:

If you can increase reliability for users when they’re on really bad networks, why not?

And you choose to be offended by a question.

You’re assuming I’m taking offense, which is a shame. That’s just not true.

Twice I’ve asked you to tell us more about the problem, rather than prematurely elaborate on how you think it needs to be solved:

But instead your response has been to tout the mosh project, which effectively replaces both client and server; it’s not relevant here.

Aside from the obvious engineering effort of implementing in-browser hashing of assets (a cost that could be absorbed to some degree on desktop), what about mobile, where battery life and connectivity are likely worse? What do you do when those images arrive from a CDN where the objects have been optimised (hashes changed), or pass through a proxy and are changed in-flight?

Is Discourse the only product which doesn’t work for your affected users? Just how poor or unreliable are these connections?

My misunderstanding. Guess we could use more emoji…

OK… I thought I was clear: the problem is simply that “refreshing” fixes broken pages.

Instead of forcing users to refresh, could the JS app be more resilient?

I’m specifically talking about resources that didn’t download properly or completely.

Maybe some future version of HTTP will finally take care of it and we’ll be able to remove the “refresh” button on the nav bar…

In the meantime, single-page JS apps already direct all of their own downloads and could wrap those requests with validation.
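
For the JSON case, something along these lines is what I have in mind. It’s a sketch only; the function name, headers, and retry count are illustrative and not something Discourse currently implements:

```ts
// Sketch only: retry a JSON request when the response is truncated or unparseable.
async function fetchJsonWithRetry<T>(url: string, retries = 2): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const response = await fetch(url, { headers: { Accept: "application/json" } });
      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }
      // A truncated body fails JSON.parse, which is the "incomplete download"
      // case; retrying here is the automated version of the user hitting refresh.
      return JSON.parse(await response.text()) as T;
    } catch (error) {
      lastError = error;
    }
  }
  throw lastError;
}
```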

That’s the problem though, isn’t it?

Let’s rule out dialup and consider our modern worst-case scenario: a mobile device with little to no connectivity, where battery life is paramount and retransmission is exactly what you want to avoid. You don’t want to hash in the first place, because even if you can do it in-browser it’s costly to checksum every object, and you can’t assume a 1:1 match once the data has been lifted off a CDN, massaged by a mobile proxy, and tampered with by a mobile carrier.

HTTP/2 does more to reduce the amount of back-and-forth, but it’s far from flawless.

@Stephen,

Thanks for your feedback on this. For mobile, I think you’re 100% correct that the resource trade-off is unreasonable.

I was thinking primarily in terms of desktop access for people with bad connections. Having lived in China for many years, I can say connections to anywhere else are just horrible… oftentimes lossy connections even corrupt your cached files… extremely frustrating stuff. I’ve seen similarly terrible connections in many other parts of Asia (the Philippines, Vietnam, Indonesia). Of course the same thing happens to travellers in the west when you find particularly bad or overloaded WiFi. Most of the sites I use still aren’t single-page JS apps, so there’s nothing to even discuss. In the case of Discourse (in desktop mode), it finally seems possible to automatically fix those issues.

That’s a really good point, and it could still be a risk even on desktop…

The odds of this being a problem in the first place are minuscule for a JavaScript app like Discourse. Once you have downloaded and executed the JS app bundle, all subsequent “page” requests are tiny: only the minimal data needed to render the UI, in JSON format, is sent down the wire. The actual rendering is done by JavaScript code you’ve already fully loaded.

Feel free to verify this behavior yourself using the network tab in the F12 console of your browser.
