This is the fundamental issue… we have no "correct" canonical page if we are displaying content from 2 different pages on the screen. The only way to correct this is to make pages served for "crawling" purposes work differently, and that enters other worlds of pain.
For my blog what I do is just keep the whole chunk of comments with the blog post, eg:
https://www.google.com.au/search?q=“One+commonly+overlooked+impedance+to+development+flow+is+typos”
But the issue described here is far more fundamental: we give web crawlers a bunch of content splayed across 2 pages and then just pick the canonical for one of the posts in the set.
One way I can think of to resolve this is to tell Google not to index "post" links. For example,
https://meta.discourse.org/t/google-indexed-link-not-pointing-to-the-correct-post/61443/9
is a post link; using meta tags may force its hand to crawl the canonical and index that instead. It may work, I don't know. Very tricky problem.
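For what it is worth, a quick way to see what we currently serve to crawlers on a post link is to fetch it and pull out the robots meta tag and the canonical link. Rough Python sketch, not part of Discourse itself; the URL is just the example above and the attribute order in the regexes is an assumption about the markup:

```python
import re
import urllib.request

# Example post permalink from this topic; swap in any post link to inspect.
url = "https://meta.discourse.org/t/google-indexed-link-not-pointing-to-the-correct-post/61443/9"

req = urllib.request.Request(url, headers={"User-Agent": "canonical-check"})
with urllib.request.urlopen(req) as resp:
    html = resp.read().decode("utf-8", errors="replace")

# Pull out whatever robots/canonical hints the page hands to crawlers.
robots = re.search(r'<meta[^>]*name="robots"[^>]*content="([^"]*)"', html)
canonical = re.search(r'<link[^>]*rel="canonical"[^>]*href="([^"]*)"', html)

print("robots meta:", robots.group(1) if robots else "(none)")
print("canonical:  ", canonical.group(1) if canonical else "(none)")
```

If we ever add a noindex hint to post links, a script like this is an easy way to confirm it actually shows up in the served HTML.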
Interestingly, there is a far more severe issue I am noticing when I search:
google indexing site:meta.discourse.org
I find these 2 broken links, and we need to figure out how this even happened:
This on the second page:
https://meta.discourse.org/t/google-complaining-indexed-though-blocked-by-robots-txt/96408?page=2
This on the third page:
https://meta.discourse.org/t/canonical-tag-generated-with-page-2/32842?page=4
It does not really make sense how this sneaked in. My first port of call here would be to check the sitemap plugin to confirm it does not include these bad links AND then to confirm there is no logic where we are presenting Google with content on these pages instead of an error page.
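Rough sketch of the check I mean, assuming the sitemap plugin exposes the usual /sitemap.xml path on meta (adjust if it actually publishes an index of sub-sitemaps): it looks for any ?page= URLs in the sitemap, then reports what status and canonical the two bad links actually return.

```python
import re
import urllib.request
from urllib.error import HTTPError

SITE = "https://meta.discourse.org"
# The two bad links from the search results above.
BAD_LINKS = [
    "/t/google-complaining-indexed-though-blocked-by-robots-txt/96408?page=2",
    "/t/canonical-tag-generated-with-page-2/32842?page=4",
]

def fetch(url):
    """Return (status, body) even for 4xx/5xx responses."""
    req = urllib.request.Request(url, headers={"User-Agent": "sitemap-check"})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, resp.read().decode("utf-8", errors="replace")
    except HTTPError as err:
        return err.code, err.read().decode("utf-8", errors="replace")

# 1. Does the sitemap list any paginated topic URLs at all?
status, sitemap = fetch(SITE + "/sitemap.xml")
paginated = re.findall(r"<loc>([^<]*\?page=\d+)</loc>", sitemap)
print("sitemap status:", status)
print("paginated URLs in sitemap:", paginated or "none")

# 2. What do the two bad links serve: real content (200) or an error page?
for path in BAD_LINKS:
    status, html = fetch(SITE + path)
    canonical = re.search(r'rel="canonical" href="([^"]*)"', html)
    print(status, path, "-> canonical:", canonical.group(1) if canonical else "(none)")
```

If the sitemap is clean and those URLs return an error or a sane canonical, then Google picked them up some other way and we need to look further upstream.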