What is the robots.txt file?
The robots.txt file can be used to control which files on your website so-called crawlers (search engine bots) may access.
The robots.txt file is always located in the document root of a website, i.e. the directory that contains all of the website's files. For the website www.your-own-domain.ch, the path to the robots.txt file would therefore be: www.your-own-domain.ch/robots.txt.
The robots.txt file is a plain text file without formatting that follows the “Robots Exclusion Standard” (for more on this: https://en.wikipedia.org/wiki/Robots_exclusion_standard). Rules are defined in this file that allow or block specific crawlers from accessing certain files or directories of the domain or subdomain. If nothing is specified in the robots.txt file, crawlers can access all files of the website.
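For illustration, a minimal robots.txt could look like this (the crawler name “ExampleBot” and the directory /internal/ are purely hypothetical placeholders):

User-agent: *
Disallow: /internal/

User-agent: ExampleBot
Disallow: /

The first rule asks all crawlers to stay out of the /internal/ directory, while the second blocks the crawler that identifies itself as “ExampleBot” from the entire website.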
What comes with the standard version of this file?
If you have not defined your own version of the robots.txt file for your website, the Hostpoint standard version is delivered (default version). This defines a crawl delay (interval between requests in seconds) so that bots cannot send their requests immediately one after the other, but have to wait between each request. The Hostpoint standard also allows access to all files on a website.
The standard version of the file therefore contains the following parameters:
User-agent: *
Crawl-delay: 3
Please note that not all bots pay attention to the crawl delay.
How can I overwrite the standard version of the file?
If you want to create your own robots.txt file, you can place it in your website's document root and define as many rules in it as you wish. You can find more information on robots.txt files at the following link: https://moz.com/learn/seo/robotstxt
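As a sketch, a custom robots.txt that keeps the crawl delay from the Hostpoint default but additionally blocks a directory could look like this (the /admin/ path is just an example placeholder):

User-agent: *
Crawl-delay: 3
Disallow: /admin/

As soon as such a file exists in the document root, it is delivered instead of the Hostpoint standard version.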