Need to edit robots.txt file - where is it?

Correct me if I am wrong, but Latest is the default display, not the default link, right? This has to do with the actual /latest link.

We have every single page of /latest in the index. The content is like quicksand, and there is nothing on the homepage that is “site specific” and not quicksand, which is a big problem:

We absolutely do not want people landing on page 2, 3, etc… page 1, maybe, but the content on page 1 keeps changing.

This URL for example https://meta.discourse.org/latest?no_definitions=true&no_subcategories=false&page=2 is stored in the Google index.

I am reluctant to change stuff, though, because I do not know how the big Google will deal with us adding “don’t store in index” directives here. Also, people never land on these pages anyway, because Google automatically detects they are rubbish and does not send people there.

If there is anything super positive here, I guess it would be having a wonderful “HTML off” homepage with useful enough content that search engines would send people to it.

For example, it would be super nice if a search for discourse community discussions ranked meta.discourse.org first because we had a nice front page.

A simple fix we can make here that would give us lots of mileage is a nicer expansion of pinned posts:

They are stable content, so we can expand that:

In fact, we can even expand it a bit further for crawler views. Additionally, we could list all the categories on the home page in the crawler view… there is a bunch of stuff we can do.

3 Likes

Hello!
This is my file:

# See http://www.robotstxt.org/robotstxt.html for documentation on how to use the robots.txt file
#
User-agent: *
Disallow: /auth/cas
Disallow: /auth/facebook/callback
Disallow: /auth/twitter/callback
Disallow: /auth/google/callback
Disallow: /auth/yahoo/callback
Disallow: /auth/github/callback
Disallow: /auth/cas/callback
Disallow: /assets/browser-update*.js
Disallow: /users/
Disallow: /u/
Disallow: /my/
Disallow: /badges/
Disallow: /search
Disallow: /search/
Disallow: /tags
Disallow: /tags/
Disallow: /email/
Disallow: /session
Disallow: /session/
Disallow: /admin
Disallow: /admin/
Disallow: /user-api-key
Disallow: /user-api-key/
Disallow: /*?api_key*
Disallow: /*?*api_key*
Disallow: /groups
Disallow: /groups/
Disallow: /t/*/*.rss
Disallow: /tags/*.rss
Disallow: /c/*.rss


User-agent: mauibot
Disallow: /


User-agent: bingbot
Crawl-delay: 60
Disallow: /auth/cas
Disallow: /auth/facebook/callback
Disallow: /auth/twitter/callback
Disallow: /auth/google/callback
Disallow: /auth/yahoo/callback
Disallow: /auth/github/callback
Disallow: /auth/cas/callback
Disallow: /assets/browser-update*.js
Disallow: /users/
Disallow: /u/
Disallow: /my/
Disallow: /badges/
Disallow: /search
Disallow: /search/
Disallow: /tags
Disallow: /tags/
Disallow: /email/
Disallow: /session
Disallow: /session/
Disallow: /admin
Disallow: /admin/
Disallow: /user-api-key
Disallow: /user-api-key/
Disallow: /*?api_key*
Disallow: /*?*api_key*
Disallow: /groups
Disallow: /groups/
Disallow: /t/*/*.rss
Disallow: /tags/*.rss
Disallow: /c/*.rss

I read the tutorials above, but I still do not understand how to resolve the question “Need to edit robots.txt file - where is it?”. Looking forward to receiving help from the community.

This is the content I want to update it to:

# See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
#
User-agent: *
Disallow: /auth/cas
Disallow: /auth/facebook/callback
Disallow: /auth/twitter/callback
Disallow: /auth/google/callback
Disallow: /auth/yahoo/callback
Disallow: /auth/github/callback
Disallow: /auth/cas/callback
Disallow: /assets/browser-update*.js
Disallow: /users/
Disallow: /u/
Disallow: /badges/
Disallow: /search
Disallow: /search/
Disallow: /tags
Disallow: /tags/

Thanks all

1 Like

I think you can override the file in your own plugin.

1 Like

This is my archive directory:

[screenshot of the directory listing]

How do I override the file in my own plugin?

Thanks

You will want to read the plugin development topics and then read this
https://meta.discourse.org/t/how-to-block-all-crawlers-but-googles/62431/4?u=cpradio

I really do not want to block the Google search engine; I just want to change the content of the robots.txt file.

Why does my website not have such a directory as /discourse/app/views?

There is no robots.txt text file per se. It is a Ruby controller:
https://github.com/discourse/discourse/blob/master/app/controllers/robots_txt_controller.rb
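For illustration, serving robots.txt dynamically boils down to a controller action along these lines (a minimal sketch with made-up rules, not the actual Discourse code linked above):

class RobotsTxtController < ApplicationController
  # Serve robots.txt as plain text, building the body at request
  # time so the rules can depend on runtime configuration.
  def index
    render plain: build_rules, content_type: 'text/plain'
  end

  private

  # Toy example: assemble a wildcard block from a fixed list of
  # paths (the real controller, linked above, is more involved).
  def build_rules
    disallowed = %w[/admin/ /search /u/]
    (['User-agent: *'] + disallowed.map { |path| "Disallow: #{path}" }).join("\n")
  end
end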

2 Likes

You really need to read some of the dev topics, it explains all of that and more. The plugin should be trivial, to be honest. Or you can post something in marketplace with a budget to see if someone will build it for you.

6 Likes

If that is added, could it be made into an overridable setting? I clicked on this link in the newsletter, because getting user pages indexed is also something we need. We’re hoping to add additional information to them and eventually redirect the old (indexed) user pages to the Discourse ones.

I was just noticing this problem on one of my Discourse sites. The way to block those dynamic URLs from bots while still allowing search engines to crawl /latest is this:

Disallow: /latest?

That will only block the dynamic ones, but not /latest, so search engines would still be able to see the latest content. I tested the rule in Google’s Webmaster Tools and it works.

Here’s an example of some of the dynamic URLs that are getting crawled on my site:

https://gist.githubusercontent.com/j127/d329c15dab45369b03321cad40448734/raw/300aa579b1386087b903da6aa52c52ff5d95828c/latest.txt

Is it possible to add that one line to robots.txt?

(Edit: I looked more closely at the file, and I wouldn’t use noindex there, at least on that dynamic rule. I’m pretty sure Google has recommended against using noindex in robots.txt, though that was several years ago.)

2 Likes

You can now ban or limit abusive webcrawlers via site settings, which indirectly edit robots.txt, but we still don’t provide arbitrary editing ability.
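For example, blocking a crawler that way ends up emitting the same kind of block shown earlier in this topic:

User-agent: mauibot
Disallow: /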

I think we should though … @eviltrout can you scope this for 2.4? It answers a lot of requests, many of which we don’t agree with, but my attitude on this is “it’s your funeral, so go for it if you feel you must :skull_and_crossbones:”

7 Likes

Can we at least mark editing the robots.txt as totally outside the scope for community support?

2 Likes

FTR, anyone can easily add additional rules through a simple plugin using the “robots_txt_index” connector template. For example: app/views/connectors/robots_txt_index/sitemap.html.erb
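As a sketch of what such a template might contain (the Sitemap line is just an illustrative example, and the sitemap URL is an assumption, not something the connector requires):

<%# app/views/connectors/robots_txt_index/sitemap.html.erb %>
<%# Whatever this template renders is injected into /robots.txt. %>
Sitemap: <%= Discourse.base_url %>/sitemap.xml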

9 Likes

Here’s how I think it should work:

  • Add a new URL to the admin section which is not linked directly. For example /admin/customize/robots

    • Show a <textarea> with the current robots.txt content.

    • If they’ve not edited it before, pre-fill it with the contents based on the white/blacklist.

    • When the admin mashes Save Changes, the content should be saved to the database and replace the existing contents of robots.txt for that forum (a rough sketch of this flow follows).
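A rough sketch of how that save/serve flow could look (every name below is hypothetical; this illustrates the proposal, it is not a spec for the eventual implementation):

class Admin::RobotsTxtController < Admin::AdminController
  # Hypothetical storage key; a real implementation might use a
  # hidden site setting or a dedicated table instead.
  STORE_KEY = 'overridden_robots_txt'

  # Return the current contents for the <textarea>: the stored
  # override if present, otherwise the generated default.
  def show
    override = PluginStore.get('robots', STORE_KEY)
    render json: { robots_txt: override.presence || default_robots_txt }
  end

  # Persist the admin's edits; /robots.txt would then serve this
  # text verbatim instead of the generated version.
  def update
    PluginStore.set('robots', STORE_KEY, params[:robots_txt])
    render json: success_json
  end

  private

  def default_robots_txt
    render_to_string 'robots_txt/index', layout: false
  end
end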

6 Likes

I am strongly opposed to this, because it gives an obscure and dangerous option top billing in the UI.

I think the route to customize robots.txt should be custom and hand-entered for now. If users want it, they need to search Google or meta and find the path.

That’s why I hid it behind “Advanced Edit”, but if we are obscuring the interface I can simplify it further (will edit that post.)

2 Likes

I’ve created a PR for this:

https://github.com/discourse/discourse/pull/7884

Screenshots:

17 Likes

Looks good! Make sure the revert button uses the correct glyph, the same one we use for revert in site settings. Also, we just use the word “reset”, so you can repurpose that copy rather than creating a new translation.


Also, we need some warnings about the handful of site settings that modify robots.txt, which will be overridden if you manually edit the file, etc.

9 Likes

PR was just merged: :tada:

https://github.com/discourse/discourse/commit/6515ff19e5c8e62ba3aaecb5947eaccdcbbaf0dd

If you update to latest tests-passed, you’ll be able to customize robots.txt at /admin/customize/robots. The page is not linked from anywhere in the UI; you’ll have to copy and paste the URL manually into your browser.

Note: if you override the file, any later changes to the site settings that modify robots.txt (e.g. whitelisted crawler user agents etc.) won’t apply to the file (the settings will save correctly, but the changes won’t be reflected in robots.txt). You can restore the default version and the site settings will apply to the file again.

If there are overrides AND an admin views the file at /robots.txt, they’ll see a comment at the top that says there are overrides, with links to where they can modify the file or reset it to the default version.
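Conceptually, the serving side now behaves something like this (a sketch with assumed names, paraphrasing the behaviour described above rather than quoting the merged code):

# Inside the robots.txt controller action, roughly:
def index
  override = PluginStore.get('robots', 'overridden_robots_txt')
  if override.present?
    body = override
    # Admins see a hint at the top that an override is in effect,
    # pointing to where it can be edited or reset.
    body = "# Overridden at /admin/customize/robots\n" + body if current_user&.admin?
    render plain: body
  else
    # No override: build the default from site settings as before.
    render :index, content_type: 'text/plain'
  end
end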

21 Likes