Many search engines will look for a robots.txt file before spidering your site's content. The robots.txt file contains specific rules that a robot/spider should abide by when capturing data from your site.
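
As a minimal sketch, a robots.txt placed in your site's document root might look like the following. The directory names shown here are only placeholders; substitute the paths you actually want to keep spiders out of:

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/

The "User-agent: *" line applies the rules to all spiders, and each "Disallow" line tells them not to crawl that path. Leaving "Disallow:" empty allows everything, while "Disallow: /" asks spiders to skip the entire site.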
