Google’s Search Relations team recently addressed questions about how Googlebot crawls and indexes webpages in the latest episode of the ‘Search Off The Record’ podcast. In this episode, Google’s John Mueller and Gary Illyes discussed how to keep Googlebot away from certain parts of a webpage and how to stop it from visiting a website altogether.
Preventing Googlebot from Crawling Specific Web Page Sections
When asked how to prevent Googlebot from crawling specific web page sections, such as “also bought” areas on product pages, Mueller explained that it’s impossible to block crawling of a specific section on an HTML page. However, he offered two potential strategies:
- Using the data-nosnippet HTML attribute, which keeps the text out of search snippets but does not stop Googlebot from crawling it.
- Loading the section via an iframe or JavaScript whose source URL is disallowed in robots.txt (both approaches are sketched below).
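As a rough illustration, here is a minimal sketch of both workarounds on a hypothetical product page; the section names and the /widgets/ path are examples, not anything from the podcast:

```html
<!-- Hypothetical product page: element names and paths are illustrative only -->

<!-- Option 1: data-nosnippet keeps this text out of Google's search snippet,
     but Googlebot still crawls the page as a whole -->
<section data-nosnippet>
  <h2>Customers also bought</h2>
  <ul>
    <li>Related product A</li>
    <li>Related product B</li>
  </ul>
</section>

<!-- Option 2: load the section from a URL that robots.txt disallows,
     so Googlebot never fetches its contents -->
<iframe src="/widgets/also-bought.html" title="Customers also bought"></iframe>
```

For option 2, the iframe’s source path would then be blocked in robots.txt:

```
# robots.txt for option 2 (the /widgets/ path is an illustrative example)
User-agent: Googlebot
Disallow: /widgets/
```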
While these solutions are not ideal, Mueller reminded listeners that there’s no need to block Googlebot from seeing duplicated content that appears across multiple pages.
Preventing Googlebot from Accessing a Website
When it comes to preventing Googlebot from accessing any part of a site, Illyes offered a simple solution: add a Disallow: / rule for the Googlebot user-agent in the site’s robots.txt file.
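As a minimal sketch, the robots.txt at the site root would look like this:

```
# Blocks Googlebot from crawling any URL on the site
User-agent: Googlebot
Disallow: /
```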
For those seeking a more robust solution, Illyes recommends creating firewall rules that deny access from Googlebot’s IP ranges; Google’s official documentation publishes the current list of those ranges.
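The podcast doesn’t prescribe a specific firewall setup, but as one possible sketch, the deny rules could live at the web server. The CIDR below is only an example historic Googlebot range; pull the current ranges from Google’s published list before relying on anything like this:

```nginx
# Illustrative nginx access rules (ngx_http_access_module); place inside a server block.
location / {
    deny  66.249.64.0/19;   # example Googlebot range only; verify against Google's list
    allow all;              # everything else is served normally
}
```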
Conclusion
Although it’s impossible to stop Googlebot from crawling a specific section of an HTML page, the data-nosnippet attribute and robots.txt-blocked iframes offer some control. To keep Googlebot off a site entirely, a simple disallow rule in robots.txt gets the job done, while firewall rules that deny Googlebot’s published IP ranges provide a stronger guarantee when robots.txt alone isn’t enough.