Why do I see references to robots.txt in my web statistics programs?

Many search engines look for a robots.txt file before crawling your site's content, and those requests are recorded in your logs and web statistics. The robots.txt file contains rules that a robot/spider should abide by when capturing data.
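As an illustration (this example is not part of the original answer), a minimal robots.txt placed in a site's document root might look like the following, assuming a hypothetical /private/ directory you want crawlers to skip:

    # Apply these rules to all robots/spiders
    User-agent: *
    # Ask crawlers not to fetch anything under /private/
    Disallow: /private/

Because well-behaved crawlers request this file before anything else, even a site that has no robots.txt will see hits (often 404s) for it in its statistics.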

Related Articles

What is the location of curl?

Curl is located at /usr/bin/curl.
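As a quick check (this sketch is not part of the original answer), you can confirm the path from a shell:

    # Print the path the shell resolves for curl
    which curl
    # Expected output on this setup: /usr/bin/curl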

.html or .htm parsed as .shtml not working

To parse .html or .htm files as .shtml add the following to your .htaccess file in the...
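The exact directives are truncated above; a commonly used form, offered as an assumption based on standard Apache SSI configuration rather than the article's own snippet, is:

    # Assumption: enable server-side includes and parse .html/.htm as SSI
    Options +Includes
    AddHandler server-parsed .html .htm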

Redirect HTTP to HTTPS and www

To forward a website to use both www. and https://, use the following in an .htaccess file:...
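The article's rules are truncated above; a typical mod_rewrite block that achieves this, offered as an assumption about standard Apache configuration rather than the article's own snippet, is:

    # Assumption: redirect any non-HTTPS or non-www request to https://www. in one 301
    RewriteEngine On
    RewriteCond %{HTTPS} off [OR]
    RewriteCond %{HTTP_HOST} !^www\. [NC]
    RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$ [NC]
    RewriteRule ^ https://www.%1%{REQUEST_URI} [L,R=301]

The final condition captures the hostname without any leading www., so both http://example.com and http://www.example.com end up at https://www.example.com with a single redirect.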

How can I turn off directory indexing?

For the directory in which you wish to turn off directory indexing, you can do it under Index Manager...
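If you would rather edit .htaccess directly than use Index Manager, the standard Apache directive for this (an assumption, since the article's steps are truncated) is:

    # Disable automatic directory listings for this directory and its subdirectories
    Options -Indexes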

Error 404 - File Not Found

File names on the server are case-sensitive. Make sure you are typing the address correctly. Verify that the...