Does anyone know how I can detect spiders (particularly Googlebot)?

I have a web site that is only viewable in certain browsers, and I accomplish this with some C# code in the Session_OnStart() method of the global.asax file. Using the HttpBrowserCapabilities object, I determine whether a browser meets certain requirements and, if not, redirect to an alternate page.

The problem is that spiders like Yahoo and Google are not being allowed into the site, so it never gets indexed by them, and I'm not getting very good search results from those engines.

I tried adding a conditional statement:

    if (Request.Browser.Crawler)
        return; // Let the request through since it's a crawler.

But based on my web stats reports, it looks like Google and Yahoo (and a few others) are still not getting through.

Any suggestions?
-Steve
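
P.S. In case it helps, here's a trimmed-down sketch of what I have in global.asax. The browser test and the page name below are simplified placeholders, not my exact code:

    void Session_OnStart(object sender, EventArgs e)
    {
        HttpBrowserCapabilities browser = Request.Browser;

        // Known crawlers skip the capability check so the site can be indexed.
        if (browser.Crawler)
            return;

        // Simplified stand-in for my actual browser requirements.
        if (browser.Browser == "IE" && browser.MajorVersion >= 5)
            return;

        // Everything else gets sent to the alternate page.
        Response.Redirect("UnsupportedBrowser.aspx"); // placeholder name
    }

Even with the browser.Crawler branch in place, the stats suggest the big spiders aren't being recognized as crawlers.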