Solutions Used to Prevent Google Indexing

Have you ever needed to prevent Google from indexing a specific URL on your website and displaying it in their search engine results pages (SERPs)? If you manage websites long enough, a day will probably come when you need to know how to do this.

The three methods most commonly used to prevent the indexing of a URL by Google are as follows:

Using the rel=”nofollow” attribute on all anchor elements used to link to the page to prevent the links from being followed by the crawler.
Using a disallow directive in the site’s robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content=”noindex” attribute to prevent the page from being indexed.
While the differences in the three methods appear subtle at first glance, their effectiveness can vary greatly depending on which method you choose.

Using rel=”nofollow” to prevent Google indexing

Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel=”nofollow” attribute on HTML anchor elements. They add the attribute to every anchor element on their site used to link to that URL.
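
For example, if the page you wanted to keep out of the index lived at a hypothetical path such as /private-page.html, every internal link to it would be written along these lines (the path is only an illustration; the rel=”nofollow” attribute is the relevant part):

    <!-- /private-page.html is a hypothetical URL used for illustration -->
    <a href="/private-page.html" rel="nofollow">Private page</a>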

Including a rel=”nofollow” attribute on a link prevents Google’s crawler from following the link which, in turn, prevents them from discovering, crawling, and indexing the target page. While this method may work as a short-term solution, it is not a viable long-term solution.

The flaw with this method is that it assumes all inbound links to the URL will include a rel=”nofollow” attribute. The webmaster, however, has no way to prevent other websites from linking to the URL with a followed link. So the chances that the URL will eventually get crawled and indexed using this method are quite high.

Using robots.txt to prevent Google indexing

Another common method used to prevent the indexing of a URL by Google is to use the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google’s crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
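
As a rough sketch, again assuming the hypothetical /private-page.html, the robots.txt file at the root of the site would contain something like:

    # /private-page.html is a hypothetical path used for illustration
    User-agent: *
    Disallow: /private-page.html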

Occasionally Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough websites link to the URL, Google can often infer the topic of the page from the link text of those inbound links, and as a result they will show the URL in the SERPs for related searches. So while using a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.

Using the meta robots tag to prevent Google indexing

If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, then the most effective approach is to use a meta robots tag with a content=”noindex” attribute within the head element of the web page. Of course, for Google to actually see this meta robots tag they must first be able to find and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it will never be shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
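
As a minimal sketch, once more using the hypothetical /private-page.html, the tag would be placed inside that page’s head element:

    <head>
      <!-- instructs crawlers not to index this (hypothetical) page -->
      <meta name="robots" content="noindex">
    </head>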