Many search engines look for a robots.txt file before spidering (crawling) your site's content. The robots.txt file contains specific rules that a robot/spider should abide by when capturing data.
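As an illustration, a simple robots.txt placed in your site's document root uses the standard User-agent and Disallow directives; the directory paths below are only examples and should be replaced with the areas of your own site you want to keep out of search results:

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/

The asterisk after User-agent applies the rules to all robots, while each Disallow line names a path that compliant spiders will skip.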
