So, a woman puts a “notice” on her website saying basically that anyone who looks at her site is entering into a contract and cannot copy the content.
The Internet Archive’s spider hits her site and makes an archived copy of it available.
According to Eric Goldman, the court properly did not dismiss the breach of contract claim.
Now, I’m just a law student, but I’m also a former software engineer. I don’t know precisely which law applies here, but what sticks out to me is that enforcing a contract in this case would run totally against common practice / usage of trade.
Every reasonably informed website operator knows 1) that spiders will come to your site and 2) that they’re going to take copies of your pages and provide them to the public in some limited form. See Google’s cache for the most prominent example. Also, every reasonably informed webmaster knows about the ‘robots.txt’ file, which tells search spiders what NOT to access. There are also a number of HTML META tags for specifying what the search engines can and cannot do.
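To make the point concrete, here’s a sketch of what that looks like in practice (the specific directives are illustrative, though `User-agent` and `Disallow` are the standard robots.txt fields, and as I recall the Internet Archive’s crawler has identified itself as `ia_archiver`). A site that wants no crawling at all can serve a two-line robots.txt at its root:

```
# robots.txt -- asks all well-behaved spiders to stay away entirely
User-agent: *
Disallow: /
```

And on a per-page basis, a META tag can tell engines not to index or cache the page:

```html
<!-- ask search engines not to index this page or serve a cached copy -->
<meta name="robots" content="noindex, noarchive">
```

Either mechanism takes about a minute to set up, which is exactly why silence reads as acquiescence to crawling.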
This is not rocket science; it’s commonly known how to prevent spiders from crawling your site. Enforcing a contract here, regardless of what the text of the site says or how it’s programmed, would just fly in the face of common practice.