It is recommended to use HTTrack to take a dump of the static HTML and host that as a static archived website. However, the crawler layout is not very pretty when hosted as a static site, so I will be working on improving the layout and adding the necessary data to the static website. You can see the current crawler layout at https://meta.discourse.org/?escaped_fragment, which I will try to improve.
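For anyone curious what the crawler layout is, here is a minimal sketch of the underlying idea (not the exact setup from the pull requests): Discourse serves a simplified, JavaScript-free layout to crawler user agents, and an archiving tool saves that HTML as static files. The user-agent string and output filename below are illustrative placeholders.

```python
# Minimal sketch: fetch the crawler-layout version of a Discourse page
# and save it as a static HTML file. The user-agent string and the
# output filename are placeholders, not the exact archiving setup.
import requests

URL = "https://meta.discourse.org/"

# Discourse serves the simplified crawler layout to known crawler
# user agents (the same layout the ?escaped_fragment view shows).
headers = {"User-Agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"}

response = requests.get(URL, headers=headers, timeout=30)
response.raise_for_status()

with open("archived_index.html", "w", encoding="utf-8") as f:
    f.write(response.text)

print(f"Saved {len(response.text)} bytes of crawler-layout HTML")
```

HTTrack effectively does this recursively for the whole site, following links and rewriting them to local paths, which is why the quality of the crawler layout matters so much for how the final archive looks.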
This is just a placeholder topic to link to the changes I make, so that anyone reviewing them can get more context.
Let me know if you have any suggestions on this topic.
Sorry in advance for my question, since I’m not very familiar with HTTrack. Why do we need to use HTTrack to take a dump of the static HTML and host that as a static archived website?
All three pull requests have been merged. I’m adding screenshots of the new static archive look below. Let me know if you have any suggestions on things to improve.