Thursday 1 November 2012

5 Google Algorithm Changes I’d Like to See

Recently, Google killed off (or at least substantially reduced) the SEO benefit associated with building websites on exact match domains (EMDs). And while I think that’s a great step forward, it’s clear that Google still has a ways to go when it comes to wiping low-quality listings out of the SERPs.

And since you all know how much I love algorithm changes, here are a few additional steps that I hope Google will consider as it rolls out future changes to its search results:

Change #1: More Accurate Measurement of On-page Quality

It’s not exactly a secret that Google struggles to measure on-page quality objectively. The whole reason we have search engine algorithms in the first place is that the Googlebot can’t just go out, read a couple of pages, and decide which one is qualitatively the best.

And because of that, I realize that saying, “Google should do a better job of measuring page quality” is stating the obvious. This is clearly a priority for the Web’s largest search engine, and it’s clearly something that Google is constantly addressing.

That said, is anyone else disappointed by the pace at which these changes are occurring? Perhaps one of the engine’s most obvious missteps came immediately after the rollout of the Penguin algorithm update (a change implemented explicitly to weed Web spam out of the results), when a Blogger site with no content earned a temporary first-page ranking for the competitive keyword phrase “make money online.”

Mistakes like these limit the public’s confidence in Google’s ability to provide a good search experience, not to mention the impact they have on SEOs whose sites are either demoted in favor of unworthy results like these or caught in the crossfire of updates whose effects on the SERPs weren’t fully anticipated.

Given this diminished confidence, I believe it will be important for Google to speed up the pace of algorithm updates designed to produce high-quality SERP listings, while also minimizing the impact these future changes have on sites that are unintentionally penalized. Doing so will be critical to the engine’s ability to maintain public trust and its competitive advantage.

Change #2: Penalization for Spun Content

One type of low-quality content I want to call out in particular is “spun content”: pages in which words, phrases, and sentences have been automatically replaced using spinning tools in order to generate “unique” content. Sure, the content’s unique, but that doesn’t mean it’s fit for human consumption!

Although Google has made it its mission to weed low-value results out of the SERPs, I still encounter plenty of spun content while I’m making my rounds on the Internet. While I recognize that the text produced by this process may look unique to the search engine bots, surely Google’s language-processing algorithms make it possible to uncover the phrasing patterns that indicate content has been spun (at least in the most egregious cases)?
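To make that intuition concrete, here’s a minimal sketch (emphatically not Google’s actual method, just my own guess at how such a check could work): spinning tools replace words one at a time, producing word pairs that rarely occur in natural writing, so you can score text by how many of its bigrams never appear in a trusted reference corpus. The tiny corpus and sample phrases below are toy stand-ins of my own invention.

    # Toy spun-content detector: score text by the fraction of its bigrams
    # (adjacent word pairs) that never appear in human-written reference text.
    # A real system would use a massive corpus and a proper language model.
    from collections import Counter

    def bigrams(text):
        tokens = text.lower().split()
        return list(zip(tokens, tokens[1:]))

    def build_reference(corpus_texts):
        # Count bigram frequencies across trusted, human-written text.
        counts = Counter()
        for text in corpus_texts:
            counts.update(bigrams(text))
        return counts

    def spin_score(text, reference):
        # Fraction of bigrams unseen in the reference corpus. Spun text
        # scores high because its synonym-swapped collocations (think
        # "generate currency online") are improbable in natural writing.
        grams = bigrams(text)
        if not grams:
            return 0.0
        unseen = sum(1 for gram in grams if reference[gram] == 0)
        return unseen / len(grams)

    reference = build_reference([
        "make money online with these simple tips",
        "ten simple tips to make money online fast",
    ])
    print(spin_score("make money online with these simple tips", reference))            # 0.0
    print(spin_score("generate currency on the web with effortless hints", reference))  # 1.0

Even a crude frequency check like this separates natural phrasing from the word salad that spinners produce, which is why I suspect the egregious cases are very catchable.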

Really, I’m sure that this is something that Google is working on, as it does the company a disservice to have this type of junk content appear alongside legitimate results in the natural SERPs. I’m just hoping that they’re able to speed the process up and get this update rolled out ASAP!

Change #3: Better Integration of Facebook and Twitter Data

Sure, we all know that Google has a social integration problem, given that Bing has the contracts with Facebook and Twitter, and Google is left to rely primarily on social data generated by the fledgling Google+ network.

However, I, for one, would love to see the engine take one for the team and make the compromises necessary to bring this data set to the Google results (wishful thinking, I know…). Or hell, if Google even bothered to use the public data made available by these two primary social networks, I bet it’d be able to seriously improve the quality of its existing personalized results.

Here’s the deal: If Google wants to claim that its results are the best, it simply can’t do that without the social access granted to Bing by Facebook and Twitter. In this case, relying on information from the Google+ network is like a coach claiming that his minor league baseball team is the best, even compared to heavy hitters like the Cardinals or the Nationals. You’ve got to know something’s wrong when the Google SERPs initially list Mark Zuckerberg’s unused Google+ profile over his own Facebook page…

Do I think this is going to happen? No, probably not. But since this is my wish list, I get to say that I believe the Google SERPs would be stronger with either the integration of readily available Facebook and Twitter public data or with the types of contracts these networks currently enjoy with Bing.

Change #4: An End to the Benefit of Profile Links

Honestly, this one’s bugged me for a long time. And despite all that Google has done to diminish the benefit earned by low-value linking schemes, it seems that the value given to profile links is still alive and well.

In theory, assigning value to the links that users add to their profiles on other websites makes sense. In many cases, these accounts are tied to participation on prestigious, high-PageRank sites, so it’s natural to assume that users who add their own website links to their profiles are the ones actively contributing to those communities.

Unfortunately, once word got out that these profile links conferred an SEO benefit, the digital marketing community saw a surge in automated tools that would create profiles on as many of these high-ranking websites as possible. The result wasn’t ranking value awarded to active, contributing site participants. It was a link-building scheme that wasted bandwidth and storage by pumping otherwise well-intentioned sites full of spam profiles that gave nothing back to the community.
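For what it’s worth, the distinction between contributors and drive-by link droppers isn’t hard to encode. Here’s a purely hypothetical sketch of the kind of heuristic a site (or a search engine evaluating that site’s outbound links) might apply; the field names and the threshold are my own invention, not anything Google has documented.

    # Hypothetical heuristic: only count a profile's outbound links when the
    # account has actually participated in the community. Field names and
    # the threshold are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Profile:
        username: str
        posts: int           # contributions: forum posts, comments, edits
        outbound_links: int  # website links added to the profile page

    def links_should_count(profile, min_posts=5):
        # A profile link is plausibly legitimate only when the account has
        # a history of contribution; link-only accounts look like spam.
        return profile.outbound_links > 0 and profile.posts >= min_posts

    print(links_should_count(Profile("active_member", posts=42, outbound_links=1)))    # True
    print(links_should_count(Profile("drive_by_spammer", posts=0, outbound_links=3)))  # False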

I have to assume that Google’s on top of this one, as it’s a clear violation of the engine’s Webmaster Guidelines, which prohibit link schemes that intentionally manipulate the search engines without providing any value in return. If it isn’t, though, consider this my impassioned plea that this loophole be closed as quickly as possible!

Change #5: Less Reliance on About.com, Wikihow.com, and Wikipedia

One last thing I want to mention: I’m sick to death of typing in a search query and seeing results from About.com, Wikihow.com, and Wikipedia fill up the top spots. As far as I’m concerned, with the possible exception of Wikipedia (in some cases), the articles from these sites tend to be barely higher in quality than the articles found in the mass content directories that Google slapped down last year with the Panda update.

Now, if I were an SEO conspiracy theorist, I might suggest that Google preferentially lists these websites over pages that offer better information but carry no Google monetization (as in the case of industry experts who write exceptional blog posts in order to sell their own products).

But whether or not that’s the reality doesn’t actually matter. What matters, from a search engine’s perspective, is that the people using it find its results as useful as possible. I don’t think I’m alone in growing increasingly frustrated with being served up poorly written, error-riddled About.com articles, so I’m crossing my fingers that future algorithm changes bring greater diversity in voices and sources of information.
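A straightforward way to encode that preference is a cap on how many results any single domain can hold near the top of the page (sometimes called host crowding). The sketch below is purely illustrative; the URLs and the cap are my own invention, and I’m not claiming this is how Google ranks anything.

    # Illustrative "host crowding" cap: re-rank a result list so no single
    # domain occupies more than max_per_domain of the top slots; overflow
    # results get pushed toward the bottom of the list.
    from collections import Counter
    from urllib.parse import urlparse

    def diversify(result_urls, max_per_domain=2):
        counts, kept, overflow = Counter(), [], []
        for url in result_urls:
            domain = urlparse(url).netloc
            if counts[domain] < max_per_domain:
                counts[domain] += 1
                kept.append(url)
            else:
                overflow.append(url)
        return kept + overflow

    results = [
        "http://www.about.com/article-a",
        "http://www.about.com/article-b",
        "http://www.about.com/article-c",
        "http://expertblog.example.com/great-post",
    ]
    print(diversify(results))
    # about.com keeps two slots; its third result drops below the expert's post.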
