Saturday, May 1, 2021

Practical usage of Robots.txt

I understand that robots.txt should be used to block access to low-quality content pages, and that the only time I should really want to block pages with it is when my website is eating up its crawl budget: for example, if I notice that Google is re-crawling and indexing relatively unimportant pages (e.g., individual product pages) at the expense of core pages.
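(For context, if someone did decide to keep crawlers out of those unimportant sections, the mechanism is just Disallow rules in robots.txt. A minimal sketch, with hypothetical paths and a hypothetical sitemap URL, not a recommendation:)

```
# Hypothetical sketch only: stop crawling of sections the site owner
# considers unimportant so that crawl budget goes to core pages instead.
User-agent: *
Disallow: /products/
Disallow: /search/

Sitemap: https://www.example.com/sitemap.xml
```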

But what is a practical usage example of this? Do you guys disallow pages such as privacy policy and disclaimer pages that will not change once they are already indexed? Or do you only use robots.txt on very large websites? Or is this more for large ecommerce sites?
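(If you want to sanity-check rules like the sketch above, Python's standard-library urllib.robotparser can parse a robots.txt and report whether a given user agent may fetch a URL. A minimal sketch, with hypothetical paths and URLs:)

```python
import urllib.robotparser

# Hypothetical robots.txt content, mirroring the sketch above.
robots_txt = """
User-agent: *
Disallow: /privacy-policy
Disallow: /disclaimer
Disallow: /search/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check which URLs Googlebot would be allowed to fetch under these rules.
for url in ["https://www.example.com/privacy-policy",
            "https://www.example.com/products/blue-widget",
            "https://www.example.com/search/?q=widgets"]:
    print(url, "->", "allowed" if parser.can_fetch("Googlebot", url) else "blocked")
```

(One caveat worth keeping in mind: Disallow only stops crawling, not indexing, so a blocked URL can remain in the index if other pages link to it; that is usually why a noindex tag, rather than robots.txt, is suggested for pages like a privacy policy.)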

submitted by /u/Rhavasher

from Search Engine Optimization: The Latest SEO News https://www.reddit.com/r/SEO/comments/n2ssgp/practical_usage_of_robotstxt/
