Disable or bypass feature detect for Googlebot (while serving JS app to crawlers)

I’m starting to think my logic was flawed from the beginning. That would explain why no one responded: perhaps nothing is actually wrong.

Here’s a fresh article explaining that it’s normal for Google to show a white page in the screenshot.

I can see the “crawled” HTML for the home page now. This is the indexed version, not the one from “Live test”, and it shows the full page. Keep in mind, Google figured this out while we were serving it the full JS app.

What’s interesting is that they indexed down to about the 27th post on the home page, so endless scroll is something Google understands.
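
As an aside, one common way to keep an endless-scroll page crawlable (not claiming this is what Discourse does internally) is to leave a real pagination link in the DOM and let JS intercept it, so a crawler that follows links can still reach older posts. A minimal sketch in TypeScript; the `.load-more` link, `#post-list` container, and `loadMorePosts` are hypothetical names for illustration:

```typescript
// Progressive-enhancement infinite scroll (sketch).
// A plain <a class="load-more" href="/?page=2">Next</a> stays in the markup,
// so crawlers without JS still see an ordinary link to the next page.
async function loadMorePosts(url: string): Promise<void> {
  const res = await fetch(url, { headers: { Accept: "text/html" } });
  const html = await res.text();
  const doc = new DOMParser().parseFromString(html, "text/html");
  const morePosts = doc.querySelector("#post-list");
  if (morePosts) {
    // Snapshot the children first, since appending mutates the collection.
    document.querySelector("#post-list")?.append(...Array.from(morePosts.children));
  }
}

document.querySelector<HTMLAnchorElement>("a.load-more")?.addEventListener(
  "click",
  (ev) => {
    ev.preventDefault(); // JS users get in-place loading
    void loadMorePosts((ev.currentTarget as HTMLAnchorElement).href);
  },
);
```

The point is that the fallback `href` survives for crawlers even though the click is intercepted for regular users.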

Not sure if it helped, but I unchecked the AJAX setting in the admin settings. With it checked, Google was finding URLs like the one below (and being served the crawler version). Now that it’s unchecked, that URL shows the JS version:

https://discuss.flynumber.com/t/japan-phone-numbers-disconnect-notice/2351?_escaped_fragment_=
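
For context, that query parameter comes from Google’s old AJAX crawling scheme (deprecated since 2015): a crawler that sees a `#!` URL, or a page opting in via `<meta name="fragment" content="!">`, re-requests it with `?_escaped_fragment_=` and expects a pre-rendered HTML snapshot instead of the JS app. A minimal sketch of what that server-side branch looks like, in TypeScript with Express; `renderSnapshot` and `serveJsApp` are hypothetical placeholders, not Discourse internals:

```typescript
import express from "express";

const app = express();

// Deprecated AJAX crawling scheme (sketch): when the crawler rewrites a
// #! URL to ?_escaped_fragment_=..., serve a static HTML snapshot;
// otherwise serve the normal JS app.
app.get("/t/:slug/:id", (req, res) => {
  if (req.query._escaped_fragment_ !== undefined) {
    res.send(renderSnapshot(req.path)); // crawler version
  } else {
    res.send(serveJsApp()); // full JS app for regular browsers
  }
});

// Hypothetical stand-ins so the sketch is self-contained.
function renderSnapshot(path: string): string {
  return `<html><body><!-- pre-rendered HTML for ${path} --></body></html>`;
}
function serveJsApp(): string {
  return `<html><body><script src="/app.js"></script></body></html>`;
}

app.listen(3000);
```

Since the scheme is deprecated, unchecking the setting and letting Googlebot render the JS app directly is consistent with Google’s current guidance.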

Now all I need to figure out is how to clean up the extra canonical URLs Discourse creates for user pages.
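
If it comes down to patching templates, the usual approach is to emit one normalized `<link rel="canonical">` per user page. A hedged sketch of the normalization step in TypeScript; the rules here (strip the query string and fragment) are my assumption for illustration, not Discourse’s actual logic:

```typescript
// Sketch: collapse user-page URL variants to one canonical form by
// dropping query parameters and fragments. These normalization rules are
// an assumption for illustration, not Discourse's actual behavior.
function canonicalFor(rawUrl: string): string {
  const url = new URL(rawUrl);
  url.search = ""; // drop ?period=weekly, ?_escaped_fragment_=, etc.
  url.hash = "";
  return url.toString();
}

// e.g. canonicalFor("https://discuss.flynumber.com/u/alice/summary?period=weekly")
//   -> "https://discuss.flynumber.com/u/alice/summary"
console.log(canonicalFor("https://discuss.flynumber.com/u/alice/summary?period=weekly"));
```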