Why are there so many Disallow rules in robots.txt?

Why are you posting screenshots of text? Guess what is also very bad for searchability? Pictures of text…

So your point seems to be this – which I had to TYPE IN FROM YOUR SCREENSHOT instead of just following a regular link:

You can prevent a page from appearing in Google Search by including a noindex meta tag in the page’s HTML code, or by returning a ‘noindex’ header in the HTTP response. When Googlebot next crawls that page and sees the tag or header, Googlebot will drop that page entirely from Google Search results, regardless of whether other sites link to it.

:warning: Important! For the noindex directive to be effective, the page must not be blocked by a robots.txt file. If the page is blocked by a robots.txt file, the crawler will never see the noindex directive, and the page can still appear in search results, for example if other pages link to it.

Which means user pages are still partially present in Google’s index (their URLs can be picked up from inbound links), though they would never appear as hits for any actual search terms.
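The interaction the quoted docs describe can be condensed into a small truth function (my own sketch of the stated rules, not Google’s actual logic):

```python
def page_can_appear_in_results(blocked_by_robots: bool,
                               has_noindex: bool,
                               has_inbound_links: bool) -> bool:
    """Toy model of the robots.txt / noindex interaction."""
    if blocked_by_robots:
        # The crawler never fetches the page, so any noindex directive
        # goes unseen; the bare URL can still be indexed from links.
        return has_inbound_links
    if has_noindex:
        # Crawled and noindex seen: dropped from results entirely.
        return False
    return True

# A robots.txt-blocked page with noindex AND inbound links can still appear:
print(page_can_appear_in_results(True, True, True))
```

So Disallow plus noindex is the one combination that backfires: the block stops the crawler from ever seeing the directive.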

How is this a problem? Give me a valid search term with actual search keywords that produces a user page.
