Everything You Need To Know About The X-Robots-Tag HTTP Header


SEO, in its most basic sense, relies on one thing above all others: search engine spiders crawling and indexing your site.

But almost every website has pages you don’t want included in this exploration.

For example, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case, they could divert traffic from more important pages.

Thankfully, Google allows webmasters to tell search engine bots which pages and content to crawl and what to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.

We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it’s a plain text file that lives in your website’s root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain directions for specific pages.

Some meta robots tags you might use include: index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
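For reference, these directives live in a regular meta tag in a page’s `<head>`. A typical example might look like:

```html
<!-- Tell crawlers not to index this page, but still follow its links -->
<meta name="robots" content="noindex, follow">
```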

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”

While you can apply indexing directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:

  • You want to control how your non-HTML files are crawled and indexed.
  • You want to serve directives site-wide instead of at a page level.

For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple directives within a single HTTP response, using a comma-separated list.

Perhaps you don’t want a certain page to be cached and also want it to be unavailable after a certain date. You can use a combination of the “noarchive” and “unavailable_after” directives to instruct search engine bots to follow these instructions.
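As a sketch, such a combined header set in an Apache configuration might look like the following (the expiry date here is an arbitrary placeholder, not from any real site):

```apacheconf
# Hypothetical example: don't cache this response, and treat the page
# as gone from search results after the given (placeholder) date.
Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2025 15:00:00 PST"
```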

Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using the X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as apply directives on a larger, global level.

To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?

Here’s a helpful cheat sheet to explain:

Crawler Directives:

  • Robots.txt: uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl.

Indexer Directives:

  • Meta robots tag: allows you to specify and prevent search engines from showing particular pages on a site in search results.
  • Nofollow: allows you to specify links that should not pass on authority or PageRank.
  • X-Robots-Tag: allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.

The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via the .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let’s take a look.

Let’s say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
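The Nginx equivalent, assuming the same goal and the same set of image extensions, would be along these lines:

```nginx
# Match common image extensions (case-insensitive) and mark them noindex
location ~* \.(png|jpe?g|gif)$ {
  add_header X-Robots-Tag "noindex";
}
```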

Please keep in mind that understanding how these directives work, and the impact they have on one another, is vital.

For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?

If that URL is blocked from crawling in robots.txt, then indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
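To make that interplay concrete: if robots.txt disallows a path, the crawler never fetches those URLs, so any noindex in an X-Robots-Tag (or meta robots tag) on them is never seen. For instance, with a robots.txt like this (the /private/ path is purely illustrative):

```
User-agent: *
Disallow: /private/
```

an `X-Robots-Tag: noindex` header served on a file under /private/ would never be read, because the response itself is never requested, and the URL could still end up indexed via links from elsewhere.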

Check For An X-Robots-Tag

There are a few different methods you can use to check for an X-Robots-Tag on a site.

The easiest way to check is to install a browser extension that will show you X-Robots-Tag information for the URL.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
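You can also check response headers programmatically. Below is a minimal, standard-library-only Python sketch; the throwaway local server is just a stand-in for whichever real URL you would actually query:

```python
# Minimal sketch: fetch a URL and report its X-Robots-Tag header, if any.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def get_x_robots_tag(url):
    """Return the value of the X-Robots-Tag response header, or None."""
    with urllib.request.urlopen(url) as resp:
        return resp.headers.get("X-Robots-Tag")


# --- Demo only: a throwaway local server that sends the header ---
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass


server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

tag = get_x_robots_tag(f"http://127.0.0.1:{server.server_port}/doc.pdf")
print(tag)  # noindex, nofollow
server.shutdown()
```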

Another method that can be used at scale, in order to pinpoint issues on websites with millions of pages, is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Website

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer