Robots Are Our Friends

[[Image:Friendly-Robut.png|right]]
For a variety of reasons, cultural heritage organizations often have [http://www.robotstxt.org/ robots.txt] files that restrict what web crawlers (aka robots) can see on a website. This is a bad thing because it means that the content that libraries, archives, and museums put online becomes virtually invisible to search engines like Google, Bing, and Yahoo, is less likely to be shared on social media sites like Facebook, Twitter, Flickr, and Pinterest, and stands less of a chance of being used in educational sites like Wikipedia. The Robots Are Our Friends campaign aims to promote an understanding of the role that robots.txt plays in determining the footprint our cultural heritage collections have on the Web.

== Typical Reasons for not allowing Robots ==
* While indexing a dynamic site, robots can put extra strain on the server, causing slow responses or, in some cases, pegging the CPU at 100%.
* Some content is intentionally shielded from search engines to help shape how a website's resources are presented in search results. For example, an organization that has put a lot of PDFs online may not want those to turn up in search results (see the sketch after this list).
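Both concerns can often be addressed without hiding an entire site. The snippet below is a minimal sketch of a more targeted robots.txt, assuming a hypothetical dynamic /search/ endpoint and a /pdfs/ directory; the paths are illustrative only and not taken from any particular site.

<pre>
# A minimal sketch of a targeted robots.txt: crawling is allowed in
# general, but robots are kept away from an expensive dynamic endpoint
# and a directory of PDFs. Both paths are hypothetical examples.
User-agent: *
Disallow: /search/
Disallow: /pdfs/
</pre>

Compare this with the blanket rule <code>Disallow: /</code>, which asks every crawler to stay away from the whole site and is what makes so many collections invisible on the Web.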
== Throttling ==