URL parameters are the parts of a website's address (the URL) that appear after a question mark (?). They are also often referred to as 'query strings'.

Many sites use query string parameters as a core component of their site architecture, for tracking session IDs, languages, and much more. Query string parameters can often result in numerous variations of the same URL, all serving the exact same content.

For example, the following URLs would, in principle, all point to the same content: a collection of bikes. The only difference is that some of these pages may be sorted or filtered slightly differently:

  • http://www.example.com/products/bikes.aspx
  • http://www.example.com/products/bikes.aspx?category=mountain&color=blue
  • http://www.example.com/products/bikes.aspx?category=mountain&type=womens&color=blue
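
The shared path and differing query strings can be seen by parsing these URLs with Python's standard library:

```python
from urllib.parse import urlparse, parse_qs

# The three example bike-category URLs from above
urls = [
    "http://www.example.com/products/bikes.aspx",
    "http://www.example.com/products/bikes.aspx?category=mountain&color=blue",
    "http://www.example.com/products/bikes.aspx?category=mountain&type=womens&color=blue",
]

for url in urls:
    parts = urlparse(url)
    # Every URL resolves to the same path; only the query string varies
    print(parts.path, parse_qs(parts.query))
```

From a search engine's point of view these are three distinct URLs, even though they all resolve to the same underlying page.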

URL parameters like those noted above are common on ecommerce sites that use faceted navigation systems, which let users browse products using a selection of pre-defined filters.

Using URL parameters to serve content based on filters applied via faceted navigation menus typically results in URLs like those above. While these can be helpful to users trying to drill down to specific product groups within ecommerce sites, they can give search engines a headache if not managed correctly.

Common Problems with URL Parameters

Duplicate Content

The generation of URL parameters based on website filters can create serious on-page issues that affect the rankings of ecommerce category pages. We have already discussed how filters can be used to sort or narrow the content of a page, thereby creating additional URLs that add no real value. If your site allows users to sort content by price or feature (colour, size, etc.), and these options aren't actually changing the content of the page but simply narrowing the results, then this may hinder your site's performance.

One way to check whether this is the case for your site is to try some of the filters available on one of your product category pages and see whether the content changes significantly after the products have been filtered. For instance, imagine that an original category page on a cycling website contained a paragraph or two of copy to promote a particular type of bike. Then, when filters are applied via the faceted navigation to select women's bikes, the URL of the page changes to include a query string (example.com/bikes?type=female). If most of the page content stays the same, these pages can be classed as duplicate content by Google if their relationship isn't made clear by the site owner.
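
A rough way to sanity-check this is to compare the visible copy of the unfiltered and filtered pages. The sketch below uses invented example copy (in practice you would extract the text of each rendered page):

```python
from difflib import SequenceMatcher

# Invented example copy: the category page before and after applying
# a "women's bikes" filter via the faceted navigation
unfiltered = ("Discover our range of road and mountain bikes, "
              "hand-built for riders of every level.")
filtered = ("Discover our range of road and mountain bikes, "
            "hand-built for riders of every level.")

ratio = SequenceMatcher(None, unfiltered, filtered).ratio()
print(f"Text similarity: {ratio:.0%}")
if ratio > 0.9:
    print("Near-duplicate copy - make the relationship between these URLs clear.")
```

A high similarity ratio is the warning sign: the filter changed the URL but not the content.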

Keyword Cannibalisation

Keyword cannibalisation happens when multiple pages on a site target the same or similar keywords. This commonly causes search engines problems when determining which page is the most appropriate one to rank for a particular search query, which can then result in the "wrong" or "unwanted" page ranking for that term.

A prime example of this can be seen on the accommodation website www.booking.com. Booking.com uses URL parameters to present unique hotel results for specific locations around the world. To illustrate how they do this, let's take a look at the results for 'Booking Hotels in Dubai'.

For this search, Booking.com appears twice in the search results.

The first result from booking.com that appears for that search query is this page: http://www.booking.com/city/ae/dubai.en-gb.html

This page uses a search engine friendly URL structure, and offers a customised landing page that serves as a search portal for all the hotels they list in Dubai.

However, Google has also indexed a second page from Booking.com's website for that term: http://www.booking.com/searchresults.en-gb.html?city=-782831

This page is considerably different to the first page we looked at, and has been set up so that the heading is generated from the name of the city being searched for, along with the number of properties returned for that search. The city attribute has also been used to populate the page's meta title, to ensure it isn't seen as a duplicate of the first page.

Now, while these pages won't be classed as exact duplicates of one another, this doesn't mean they won't cause issues. The fact that Booking.com has two pages featuring the phrase 'Hotels in Dubai' in their page titles, headings, and copy may hold back the performance of both pages. This is a classic example of keyword cannibalisation, where a site is simply confusing search engines by having two pages covering the same topic, meaning search engines will be unsure which page to rank.

Avoiding Pitfalls Associated with URL Parameters

There are a variety of options to ensure URL parameters don't cause SEO issues for your site. Before you rush in and implement any of the solutions below, you need to check whether URL parameters could be causing your site problems by asking the following questions:

  • When using a search filter on your website (see faceted navigation), does the URL change while the copy remains the same as the copy on the original URL?
  • When using a search filter on your website, does the URL change while the page title and meta description stay the same, or contain the same target keyword?

If you answered yes to one or both of the above, URL parameters may be holding back the performance of your site in organic search, and it may be time to take some action.

URL Parameter Tools

Use the URL Parameters tool in Google Search Console to give Google details about how to handle URLs containing certain parameters. Bing provides a similar tool for ignoring URL parameters in Bing Webmaster Tools.

Robots.txt – Disallowing Query Strings

The robots.txt file can help you remedy a duplicate content situation by blocking query string URLs from being crawled by search engines. However, before you go ahead and block all query strings, I would recommend making sure that everything you're disallowing is in fact something you don't want indexed. In most cases, you can tell search engines to ignore any parameter-based pages simply by adding the following line to your site's robots.txt file:

Disallow: /*?*

This will disallow any URLs that feature a question mark. It certainly works for ensuring URL parameters are blocked from being crawled by search engines, but you first have to make sure there aren't any other areas of your site using parameters within their URL structure.
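
In context, a minimal robots.txt applying this rule to every crawler would look like this (assuming, as above, that no legitimate pages on your site rely on parameters):

```
User-agent: *
Disallow: /*?*
```

Note that the * wildcard in the path is an extension supported by major search engines such as Google and Bing, rather than part of the original robots.txt standard.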

To do this, I'd suggest carrying out a crawl of your entire site using a tool like Screaming Frog's SEO Spider, exporting the list of your site's URLs into a spreadsheet, and performing a search within the spreadsheet for any URLs containing question marks (?).
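
That spreadsheet step can also be scripted. The sketch below assumes a CSV export with an 'Address' column (as Screaming Frog's internal report produces); the URLs themselves are invented for illustration:

```python
import csv
import io
from urllib.parse import urlparse

# Stand-in for a real crawl export; Screaming Frog's "Internal" report
# uses an "Address" column for each crawled URL
crawl_export = io.StringIO(
    "Address\n"
    "http://www.example.com/products/bikes.aspx\n"
    "http://www.example.com/products/bikes.aspx?category=mountain&color=blue\n"
    "http://www.example.com/about-us\n"
    "http://www.example.com/index?lang=fr\n"
)

# Keep only the URLs that carry a query string
parameter_urls = [
    row["Address"]
    for row in csv.DictReader(crawl_export)
    if urlparse(row["Address"]).query
]

for url in parameter_urls:
    print(url)
```

Anything this prints is a URL that the `Disallow: /*?*` rule would block, so review the list before committing the change.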

A common thing to watch out for here is the use of URL parameters to serve different language versions of a page, which in itself is a bad idea. If this is the case, you don't want to block search engines from crawling those versions through robots.txt. You should instead look into implementing a suitable URL structure to target multiple countries.

If you've worked through the list of URLs and confirmed that the only pages using URL parameters are those causing duplicate content problems, I'd recommend adding the above command to your website's robots.txt file.

Canonical Tags

Canonical tags are used to indicate to search engines that certain pages should be treated as copies of a specified URL, and that any rankings should in fact be attributed to that canonical URL.

Web developers can set the canonical version of a piece of content to the category page URL before a filter is applied. This is a simple solution to help direct search engine robots toward the content you actually want crawled, while keeping filters on the site to help customers find products closely related to their needs. For more information on implementing rel='canonical' tags, see SEOmoz's Canonicalisation guide.
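
For example, each filtered variant could point back at the unfiltered category page (the URLs here follow the earlier bike example):

```html
<!-- In the <head> of a filtered page such as
     /products/bikes.aspx?category=mountain&color=blue -->
<link rel="canonical" href="http://www.example.com/products/bikes.aspx" />
```

Unlike the robots.txt approach, this still allows the filtered pages to be crawled, but consolidates their ranking signals onto the category page.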


Using faceted navigation can prove extremely useful for customers looking for specific products on your site, but you need to ensure that any URLs generated as a result of filters being applied don't hold back the performance of your original category pages in organic search results.

While I've outlined three of the most common solutions for URL parameters, every website platform is a little different, so you should take the time to analyse each situation on a case-by-case basis before jumping in and implementing any of the solutions I've described.