We have every single page of /latest in the index. The content is like quicksand, and nothing on the homepage is “site specific” rather than quicksand, which is a big problem:
We absolutely do not want people landing on page 2, 3, and so on… page 1, maybe, but the content on page 1 keeps changing.
This URL for example https://meta.discourse.org/latest?no_definitions=true&no_subcategories=false&page=2 is stored in the Google index.
I am reluctant to change anything, though, because I do not know how Google will deal with us adding “don’t store in index” directives here. Also, people rarely land on these pages anyway, because Google automatically detects they are rubbish and does not send people there.
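For reference, the standard ways to express a “don’t store in index” directive are a robots meta tag in the page’s head or an `X-Robots-Tag` HTTP response header; these are generic crawler directives, not anything Discourse ships today, and the path below is just the paginated example from this thread:

```html
<!-- On paginated listing pages such as /latest?page=2 -->
<!-- "noindex, follow" asks crawlers not to index the page itself
     but still follow its links through to the topics -->
<meta name="robots" content="noindex, follow">
```

The same directive can be sent as a response header (`X-Robots-Tag: noindex`) when editing the HTML is impractical.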
If there is anything super positive we could do here, it would be having a wonderful “HTML off” homepage with content useful enough that search engines would send people to it.
For example, it would be super nice if searches for Discourse community discussions ranked meta.discourse.org first because we had a nice front page.
A simple fix we can make here, one that can give us lots of mileage, is a nicer expansion of pinned posts:
In fact, we can expand it even further for crawler views. We could also list all the categories on the homepage in the crawler view… there is a bunch of stuff we can do.
I read the tutorials above, but I still cannot answer the question “Need to edit robots.txt file - where is it?”. Looking forward to help from the community.
This is the content I want to update:
# See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
#
User-agent: *
Disallow: /auth/cas
Disallow: /auth/facebook/callback
Disallow: /auth/twitter/callback
Disallow: /auth/google/callback
Disallow: /auth/yahoo/callback
Disallow: /auth/github/callback
Disallow: /auth/cas/callback
Disallow: /assets/browser-update*.js
Disallow: /users/
Disallow: /u/
Disallow: /badges/
Disallow: /search
Disallow: /search/
Disallow: /tags
Disallow: /tags/
You really need to read some of the dev topics; they explain all of that and more. The plugin should be trivial, to be honest. Or you can post something in the marketplace with a budget to see if someone will build it for you.
If that is added, could it be made an overridable setting? I clicked this link in the newsletter because getting user pages indexed is also something we need. We’re hoping to add additional information to them and eventually redirect the old (indexed) user pages to the Discourse ones.
I was just noticing this problem on one of my Discourse sites. The way to block those dynamic URLs from bots while still allowing search engines to crawl /latest is this:
Disallow: /latest?
That blocks only the dynamic URLs, not /latest itself, so search engines can still see the latest content. I tested the rule in Google’s Webmaster Tools and it works.
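To make the matching explicit: a Disallow rule is a prefix match against the URL’s path plus query string, so a trailing `?` only matches URLs that actually carry a query. A minimal sketch in Python (the `blocked` helper is hypothetical, written just to illustrate the prefix rule; real crawlers also support wildcards such as `*` and `$`):

```python
def blocked(path_and_query: str, rule: str) -> bool:
    """A robots.txt Disallow rule is a prefix match against the
    URL's path plus query string (Google-style matching)."""
    return path_and_query.startswith(rule)

# The trailing "?" in the rule means only URLs with a query string match:
print(blocked("/latest?no_definitions=true&page=2", "/latest?"))  # True
print(blocked("/latest", "/latest?"))                             # False
```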
Here’s an example of some of the dynamic URLs that are getting crawled on my site:
Is it possible to add that one line to robots.txt?
(Edit: I looked more closely at the file, and I wouldn’t use noindex there, at least on that dynamic rule. I’m pretty sure Google has recommended against using noindex in robots.txt, though that recommendation was several years ago.)