Monday, July 26, 2021

Can I use my robots.txt to block absolute URLs?

I want to prevent crawlers from accessing dev.mywebsite.com. I can see from my main property in GSC that all the pages on that subdomain show as "Crawled - currently not indexed," and I just want to stop wasting my crawl budget.

Would the following robots.txt work?

User-agent: *
Disallow: https://dev.mywebsite.com
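For context: per the robots.txt standard (RFC 9309), Disallow values are URL paths relative to the host serving the file, not absolute URLs, and a robots.txt only governs the host it is fetched from. So a rule like the one above in www.mywebsite.com/robots.txt would not block the dev subdomain. A sketch of what would typically go at dev.mywebsite.com/robots.txt instead:

    User-agent: *
    Disallow: /

Note that Disallow only stops crawling; pages already known to Google can still appear in the index, so password-protecting the dev site or adding noindex is often recommended alongside it.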

submitted by /u/Piilaria

from Search Engine Optimization: The Latest SEO News https://www.reddit.com/r/SEO/comments/orxfnu/can_i_use_my_robotstxt_to_block_absolute_urls/
