Enter an IPv4 or IPv6 address to check if it's a valid crawler.


Try these examples.

  • (Googlebot)
  • (Bingbot)
  • (Twitter)
  • (Facebook)

Why do you need to validate search engine IP addresses?

Your web server logs normally include requests from search engine crawlers, which send a user agent to identify themselves. Anyone can spoof a search engine's user agent, but the IP address can be used to verify whether a request really came from that search engine. Validating crawler IP addresses lets you:

  • Identify and block requests from bots falsely claiming to be search engines
  • Filter fake bot activity out of your logs so you can see the genuine search engine crawler activity

How can you validate requests are from genuine crawlers?

Some crawlers, such as DuckDuckGo, publish a fixed list of IP addresses; Facebook and Twitter publish lists of IP ranges. Most search engines instead recommend validating their IP addresses with a method called reverse DNS lookup.
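For crawlers that publish IP lists or ranges, validation is a simple membership check. Here is a minimal sketch using Python's standard `ipaddress` module; the CIDR blocks below are placeholders from the documentation ranges, not real crawler ranges, so in practice you would load the list the crawler operator publishes.

```python
import ipaddress

# Placeholder CIDRs for illustration only -- substitute the ranges
# published by the crawler operator (e.g. Facebook or Twitter).
PUBLISHED_RANGES = [
    ipaddress.ip_network(cidr)
    for cidr in ("192.0.2.0/24", "2001:db8::/32")
]

def ip_in_published_ranges(ip: str) -> bool:
    """Return True if the IP falls inside any published crawler range."""
    addr = ipaddress.ip_address(ip)
    # Membership across mismatched IP versions is simply False,
    # so IPv4 and IPv6 ranges can live in one list.
    return any(addr in net for net in PUBLISHED_RANGES)
```

Because the check is pure set membership, it works the same for IPv4 and IPv6 addresses and needs no network access at request time, which makes it cheap enough to run on every log line.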

First, run a reverse DNS lookup on the IP address in your logs to get its hostname, and check that the hostname belongs to a domain the search engine publishes (for example, Googlebot hostnames end in googlebot.com or google.com). Then run a forward DNS lookup on that hostname and confirm it resolves back to the original IP address.
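The three steps above can be sketched with Python's standard `socket` module. This is a simplified illustration, not a production implementation (it ignores DNS caching and retries, and the suffix tuple must come from the search engine's own documentation):

```python
import socket

def is_genuine_crawler(ip: str, allowed_suffixes: tuple) -> bool:
    """Validate a crawler IP with a reverse then forward DNS lookup."""
    try:
        # Step 1: reverse DNS lookup -- IP address to hostname.
        hostname, _, _ = socket.gethostbyaddr(ip)
    except (socket.herror, socket.gaierror):
        return False
    # Step 2: the hostname must be in a domain the engine publishes
    # (e.g. Googlebot hostnames end in googlebot.com or google.com).
    if not hostname.endswith(allowed_suffixes):
        return False
    try:
        # Step 3: forward DNS lookup -- hostname back to IP addresses.
        _, _, addresses = socket.gethostbyname_ex(hostname)
    except (socket.gaierror, socket.herror):
        return False
    # Genuine only if the original IP is among the resolved addresses.
    return ip in addresses
```

A genuine Googlebot request would pass this check with `allowed_suffixes=(".googlebot.com", ".google.com")`. The forward-lookup step matters: an attacker can set reverse DNS for their own IP to any hostname, but cannot make the search engine's forward DNS point back at it.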

The full list of crawlers that can be validated is as follows.