By default, a Rails app ships with a public/robots.txt that puts no restrictions on search engines accessing your site. That's generally fine for production, but definitely not what you want for development and staging. By generating the file with Rails instead, you can easily serve a different version for each environment without having to move files back and forth.

My solution was inspired by this StackOverflow answer, which is out of date for Rails 3, so I updated it and simplified it a bit.

Here’s how I solved this:

  • Copy public/robots.txt to config/robots.development.txt, config/robots.production.txt, and any other environments you might have created, then edit each file to hold the right values for that environment. For example, for most sites, every environment except production should have the User-Agent: * and Disallow: / lines uncommented (see the example just after this list).
  • Delete the default public/robots.txt (otherwise it will be served directly by your web server and Rails will never even see the request).
  • In one of your controllers, add an action to handle incoming requests for robots.txt. Here's mine, which lives in my PagesController:
def robots
  # Read the robots file that matches the current Rails environment
  robots = File.read(Rails.root.join("config", "robots.#{Rails.env}.txt"))
  render :text => robots, :layout => false, :content_type => "text/plain"
end
  • In config/routes.rb, add: get '/robots.txt' => 'pages#robots' (adjust pages if you put the robots action in a different controller).
  • To check that everything is working, visit http://mysite/robots.txt and make sure there are no errors. I also used this robots.txt verifier to make sure my files were formatted properly and served with the text/plain content type.
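
For reference, here's roughly what the non-production file from the first step might look like (a minimal sketch; your production copy would typically leave these lines commented out so crawlers are allowed in):

# config/robots.development.txt (and robots.staging.txt, etc.)
# Block all crawlers outside of production
User-Agent: *
Disallow: /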
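
If you'd rather verify from the command line, something like the following (assuming the app is running locally on port 3000) shows the body and the headers in one go:

curl -i http://localhost:3000/robots.txt

You should see Content-Type: text/plain in the response headers and the rules for the current environment in the body.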