Friday, August 28, 2020

Robots.txt - prevent crawling

I have added the directives "user-agent: * Disallow: /" to my robots.txt file to prevent bots from crawling my website.

However, when I test it using some free tools, I get the result "crawling of the url is allowed".

Can anyone tell me what went wrong?
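One likely culprit, if the directives were pasted exactly as quoted above: the robots.txt format requires each directive on its own line ("User-agent: *" on one line, "Disallow: /" on the next). Collapsed onto a single line, most parsers cannot read the Disallow rule and fall back to allowing everything. A minimal sketch of this behavior using Python's standard-library `urllib.robotparser` (the URL `https://example.com/page` is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

# Correctly formatted robots.txt: one directive per line.
rp = RobotFileParser()
rp.parse("User-agent: *\nDisallow: /".splitlines())
print(rp.can_fetch("*", "https://example.com/page"))  # False: crawling is blocked

# The same directives collapsed onto one line are not valid robots.txt;
# the parser finds no usable Disallow rule and allows the URL.
rp2 = RobotFileParser()
rp2.parse(["user-agent: * Disallow: /"])
print(rp2.can_fetch("*", "https://example.com/page"))  # True: nothing is blocked
```

If the file really does have the two directives on separate lines, it may also be worth checking that the testing tool is fetching the live robots.txt rather than a cached copy.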

submitted by /u/resurrect002

from Search Engine Optimization: The Latest SEO News https://www.reddit.com/r/SEO/comments/ii2r7x/robotstxt_prevent_crawling/
