

Free Online Robots.txt Generator

Our robots.txt file generator is one of the most advanced robots.txt creator tools online. You can select many options, such as the crawl delay and the names of the search engines that the robots.txt rules should apply to. You can save the created robots.txt file directly, or you can simply copy the generated text and paste it into your robots.txt file manually. Furthermore, all of these options come 100% free of charge. Besides the free online robots.txt generator, we have more free SEO tools that may help you.
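For example, a generated file that sets a 10-second crawl delay for all crawlers and points to a sitemap (the example.com addresses below are placeholders) could look like this:

User-agent: *
Crawl-delay: 10
Sitemap: https://www.example.com/sitemap.xml

Note that Crawl-delay and Sitemap are extensions to the original robots exclusion standard, and not every crawler honors them; Googlebot, for instance, ignores the Crawl-delay directive.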

What is a robots.txt file and how is it used?

Robots.txt is a small text file located in the website's root directory that tells search engine crawlers and spiders, such as Google, Yahoo, and Bing, which pages and files on your site you do or do not want them to visit. Usually, webmasters and bloggers make a huge effort to get noticed by search engines, but there are cases when this isn't desirable for certain parts of the web properties they own: for example, when you store private information or some sort of sensitive content, or when you simply need to save bandwidth by keeping heavy, image-laden pages out of the index. Whenever a search engine crawler or any other spider accesses a website, it requests a file called '/robots.txt' first. If such a file is available, the crawler checks it for indexing instructions for the website or blog. This way, it accesses and indexes only the pages that the webmaster or blogger wants and ignores the others.
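As an illustration of this convention, Python's standard library includes a robots.txt parser; the sketch below (the example.com addresses are placeholders) fetches a site's robots.txt file and asks whether a given page may be crawled:

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # crawlers request this path first
rp.read()  # fetch and parse the file

# True if the parsed rules allow this user agent to fetch the page
print(rp.can_fetch("*", "https://www.example.com/private/page.html"))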

Examples of usage
Here is a list of useful examples of robots.txt usage: 

Block all web crawlers from indexing the whole site:
User-agent: * 
Disallow: / 


Allow all web crawlers to index the whole site:
User-agent: * 
Disallow: 


Block certain directories from being indexed:
User-agent: * 
Disallow: /private/ 


Block a specific web crawler from indexing the site:
User-agent: BadBot 
Disallow: /
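These records can also be combined in a single robots.txt file. A crawler obeys the record whose User-agent line matches it most specifically, so in the sketch below (BadBot and /private/ are placeholders) BadBot is blocked entirely while all other crawlers are only kept out of one directory; lines starting with '#' are comments:

# All crawlers except those matched by a more specific record below
User-agent: *
Disallow: /private/

# This crawler is blocked from the whole site
User-agent: BadBot
Disallow: /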

Useful sources about robots.txt

https://support.google.com/webmasters/answer/6062608?hl=en - Learn about robots.txt files

https://en.wikipedia.org/wiki/Robots_exclusion_standard - Robots.txt information on Wikipedia