Search engine optimization, in its most basic sense, relies on one thing above all others: search engine spiders crawling and indexing your site.
But nearly every website has pages that you don’t want included in this exploration.
In the best case, these pages are doing nothing to actively drive traffic to your site, and in the worst, they could be diverting traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.
We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it’s a plain text file that lives in your site’s root directory and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain instructions for specific pages.
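For example, a minimal robots.txt might look something like this (the paths and URL here are purely illustrative):

User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml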
Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
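As a quick illustration, a meta robots tag that keeps a page out of the index while still letting its links be followed would sit in the page’s <head> and look like this:

<meta name="robots" content="noindex, follow">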
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your web pages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
Which, of course, raises the question:
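In other words, rather than appearing in the page’s HTML, the directive arrives with the server’s response. A trimmed response might look something like this (the status line and date are illustrative):

HTTP/1.1 200 OK
Date: Tue, 25 Oct 2022 21:42:43 GMT
X-Robots-Tag: noindex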
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”
While you can set indexing directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of on a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response approach makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple directives within a single HTTP response, either as separate headers or as a comma-separated list.
Perhaps you don’t want a certain page to be cached and also want it to be unavailable after a certain date. You can use a combination of the “noarchive” and “unavailable_after” directives to instruct search engine bots to follow these instructions.
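In an HTTP response, those directives can be sent as two headers, matching the pattern shown in Google’s documentation (the date here is purely an example):

X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2023 15:00:00 GMT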
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply directives on a larger, global level.
To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here’s a handy cheat sheet:
|Crawler Directives|Indexer Directives|
|---|---|
|Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on a site search engine bots are allowed to crawl and where they are not.|Meta robots tag – allows you to specify which pages of a site search engines should and should not show in search results.|
||Nofollow – allows you to specify links that should not pass on authority or PageRank.|
||X-Robots-Tag – allows you to control how specified file types are indexed.|
Where Do You Put The X-Robots-Tag?
Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let’s take a look.
Let’s say we want search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
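A typical snippet (assuming Apache’s mod_headers module is enabled) would be:

<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>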
In Nginx, it would look like this:
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
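Again assuming Apache with mod_headers enabled, one way to write it would be:

<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>

The regular expression here matches the .png, .jpg/.jpeg, and .gif extensions, which is exactly the kind of pattern-matching flexibility described above.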
Please note that understanding how these directives work and the impact they have on one another is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are in place when crawler bots discover a URL?
If that URL is blocked in robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.
If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information for the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
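If you’d rather check from the command line, you can also fetch a URL’s headers with a tool like curl (the URL below is just a placeholder):

curl -I https://www.example.com/sample.pdf

The -I flag retrieves only the response headers, so you can scan the output for an X-Robots-Tag line.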
Another method that can be used at scale, to pinpoint issues on websites with millions of pages, is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog report, X-Robots-Tag, December 2022
Using X-Robots-Tags On Your Website
Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do exactly that.
Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.
That said, if you’re reading this piece, you’re probably not an SEO beginner.
As long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.