A Google bug? Or why would Google bring (old) 301-redirected URLs back to the index?

As most of you probably know, we rebranded from Ads2people to Peak Ace a couple of months ago, on the 25th of August to be more specific. A day after the announcement we obviously also switched domains: the old domain www.ads2people.de – from that day onwards – started 301 redirecting to www.peakace.de. Needless to say, we implemented 1:1 redirects (every sub-page on the old domain pointing to the exact same page on the new domain). Plus, to be super safe, we also used Google Webmaster Tools to notify Google about our domain change (and even got a confirmation in the GWT message center).
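If you are doing a similar migration, a 1:1 mapping like this is easy to verify in bulk. Here is a minimal sketch (not our actual tooling) in plain Python using the requests library; the sample paths are made-up placeholders, only the two hostnames are ours:

```python
# Minimal sketch: verify that every old URL 301s to the exact same
# path on the new domain. Requires the "requests" library; the
# sample paths below are hypothetical placeholders.
import requests

OLD_HOST = "http://www.ads2people.de"
NEW_HOST = "http://www.peakace.de"

# In practice this list would come from a crawl or the old sitemap.
paths = ["/", "/blog/", "/kontakt/"]

for path in paths:
    r = requests.get(OLD_HOST + path, allow_redirects=False, timeout=10)
    target = r.headers.get("Location", "")
    ok = r.status_code == 301 and target == NEW_HOST + path
    print(f"{path}: {r.status_code} -> {target} {'OK' if ok else 'MISMATCH'}")
```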

A couple of side notes before we continue:

  • The new domain is actually new: it has never been used before, and should therefore not have been flagged by Google in any (negative) way.
  • We didn’t really change anything on the new domain (yeah, I know, but we couldn’t get the redesign done in time) except for the logo and some CSS tweaks (fonts, font sizes) – other than that it is all the same.

Usually one would assume that this is absolutely good enough and that rankings should start to transfer to the new domain. Here is a quick look into Searchmetrics:

So far, so good – right? Well… from a ranking perspective this is all fine; but index-wise, not really. And this is where it starts getting interesting. Have a look at the results for this query:

Google is keeping our old URLs in the index (for months now; the “count” is moving up and down, but only slightly) – even though these URLs are all redirecting properly. And here is where it gets even more interesting:

The info: query returns the proper URL:

So it seems they’re actually seeing and understanding the redirect, but are still keeping the source indexed.

The cached version of ads2people.de actually returns (fresh) content from peakace.de:

See here for yourself: Cache copy of peakace.de (last updated on the 9th of December 2014).

To be honest, I was pretty surprised by this one… however, it made me think: what about other redirects, like those from services such as bit.ly, t.co or even Google’s very own shortener, goo.gl:

(Screenshots: site: query results for bit.ly, t.co and goo.gl.)

Yeah… right, I thought the same. Another funny fact: they’re even “stealing” the rich snippets from the respective redirect destination URLs; for example, look at the 4th result for the bit.ly site: query. The breadcrumb they’re showing only exists on the destination page (and has been marked up over there using schema.org):
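This is easy to check for yourself, by the way. Here is a minimal sketch (again, not our actual tooling) that fetches a destination page and looks for schema.org breadcrumb markup; the URL is a placeholder, not the actual page from the screenshot:

```python
# Minimal sketch: check whether a redirect destination carries
# schema.org breadcrumb markup. The URL below is a placeholder.
import re
import requests

url = "http://example.com/some-destination-page/"
html = requests.get(url, timeout=10).text

# Matches both microdata (itemtype="...schema.org/Breadcrumb...")
# and JSON-LD ("@type": "BreadcrumbList") style markup.
pattern = re.compile(r'schema\.org/Breadcrumb|"BreadcrumbList"', re.IGNORECASE)
print("breadcrumb markup found" if pattern.search(html) else "no breadcrumb markup")
```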

But honestly: why the hell would they put these into the index at all – and more importantly, keep them there? The way it used to work made much more sense in my opinion: hit the redirect a couple of times, remove the source, show the destination. All good. But this? It seems like very strange behavior without any (obvious) reason.

Some more opinions and thoughts on that:

As this all seemed very strange, I reached out to a couple of industry friends and peers to get some more opinions on this one – so here we go:

  • Philipp suggested a newly added redemption period, which would actually have made a lot of sense and would also have prevented a lot of short-term URL-switching tactics. However, this doesn’t seem to be the case: if you dig further, you will also find a lot of very old URLs coming back (see the next comment).
  • Bert actually confirmed this behavior for their own domains as well. Plus, he added that in their case even super old URLs, for example from 2007, all of a sudden started appearing in SERPs again (and they certainly didn’t change anything in terms of how they do redirects). They also experimented with an X-Robots-Tag “noindex” header, which didn’t seem to make any difference either (see the sketch after this list).
  • Sebastian compares this to how Google currently handles canonical tags: “It feels like Google now treats 301s on weak pages to some extent as it treats canonicals on weak pages: it takes forever to process them.”
  • Jeffrey mentioned that he had noticed this with some of their sites as well. One thing that helped was changing inbound links to point to the new destination – something we did as well; however, it doesn’t seem to have made a difference just yet (admittedly, we didn’t change all of them).
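For completeness, here is a minimal sketch of how one could inspect what an old URL actually returns to a crawler, covering both points from above: the redirect itself and any X-Robots-Tag header. Again plain Python with requests; the expected values in the comments are assumptions based on our setup:

```python
# Minimal sketch: inspect what an old URL returns without following
# the redirect, including any X-Robots-Tag header (as per Bert's test).
import requests

r = requests.get("http://www.ads2people.de/", allow_redirects=False, timeout=10)

print("Status:      ", r.status_code)                 # expected: 301
print("Location:    ", r.headers.get("Location"))     # expected: http://www.peakace.de/
print("X-Robots-Tag:", r.headers.get("X-Robots-Tag")) # e.g. "noindex", if set
```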

So what the hell is this!?

We are also very, very certain that (in our case) the target pages were available at all times (whenever Googlebot visited, no GWT errors, etc.), which leaves me with only one conclusion: something is very, very broken right now… it really looks like a bug to me. But maybe I’ve missed something? What do you guys think?

Author bio:

Bastian Grimm, as “VP Organic Search”, is responsible for search engine optimization and looks back on almost ten years of experience in performance marketing and SEO. With a passion for software development and technology, he loves sparring with IT and marketing departments in order to … for our clients...
