Control how search engines access and index your site


The Official Google Blog has a post on robots.txt, the file on your server that tells search engines which parts of your site they may crawl and what to leave out of their indexes. From the OGB:


However, you may have a few pages on your site you don’t want in Google’s index. For example, you might have a directory that contains internal logs, or you may have news articles that require payment to access. You can exclude pages from Google’s crawler by creating a text file called robots.txt and placing it in the root directory. The robots.txt file contains a list of the pages that search engines shouldn’t access. Creating a robots.txt is straightforward and it allows you a sophisticated level of control over how search engines can access your web site.
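To make that concrete, here is a minimal robots.txt along the lines the OGB describes. The directory names are hypothetical, standing in for the internal-logs and paid-articles examples above; the file itself must sit at the root of your domain (e.g. example.com/robots.txt):

    # Ask all crawlers to skip these directories
    User-agent: *
    Disallow: /logs/
    Disallow: /premium/

The "User-agent: *" line applies the rules to every crawler; to target only Google's crawler, replace the asterisk with "Googlebot". Keep in mind that robots.txt is a request, not access control: well-behaved crawlers honor it, but it does nothing to actually secure the files it lists.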

The OGB post is the start of a detailed instructional guide to robots.txt, with more installments to come. [via Lifehacker]

