Search result pages have been indexed by Google. I can see in GSC that Google is trying to crawl these pages:
/?s=something
/search/something/
/search/something/page/2/
What is the best approach in this situation?
Should I block them via robots.txt?
User-agent: *
Allow: /
Disallow: /?s=
Disallow: /page/*/?s=
Disallow: /search/
Or should I put
<meta name="robots" content="noindex,follow" />
on those pages?
My goal is to stop wasting crawl budget on those pages.
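For what it's worth, the proposed Disallow rules can be sanity-checked locally with Python's standard-library robots.txt parser. This is only a sketch: `example.com` is a placeholder domain, and the rule set is simplified because the stdlib parser differs from Googlebot (it uses first-match semantics, so a leading `Allow: /` would mask every later Disallow, and it does not support `*` wildcards in paths), so those two lines are omitted here:

```python
from urllib.robotparser import RobotFileParser

# Simplified rules: no "Allow: /" (stdlib first-match would let it win)
# and no wildcard rule (stdlib parser treats "*" literally).
rules = """\
User-agent: *
Disallow: /?s=
Disallow: /search/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# example.com is a placeholder for the actual site.
for url in ("https://example.com/?s=something",
            "https://example.com/search/something/",
            "https://example.com/search/something/page/2/"):
    print(url, "allowed:", rp.can_fetch("*", url))
```

All three URL patterns from the question come back disallowed under these rules; note that `/search/something/page/2/` is already covered by `Disallow: /search/`, so the wildcard rule is redundant for that pattern even under Google's matching.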
from Search Engine Optimization: The Latest SEO News https://www.reddit.com/r/SEO/comments/g6jxdq/already_indexes_wordpress_search_result_pages/