Most SEO Studies are Basically Pointless

For some years now, the industry has had an unspoken problem with most SEO studies published.

Sure, they pretty much always reveal the same factors (as long as the data is interpreted correctly): you need Content, Links and structure, pretty much in that order.

There are two main categories: those backed by proprietary index data (like those from SEMrush or SearchMetrics), and those compiled by surveying industry experts (Moz have done this for years, and Northcutt have a great approach).

If you’re early in your path as an SEO, you should consume every single word of the roundups. Then read them again.

Why are they Pointless?

As long as your expectation is general information on the fundamentals of ranking, they are a great resource – but they all suffer from a crucial flaw: people read them assuming that the same factors are required to rank across Google, regardless of the query.

This simply isn’t the case, and starting from this broad assumption will likely make you fixate on factors that may have nothing to do with ranking for the keywords you actually want to rank for.

SearchMetrics have recognized this crucial problem and have recently moved to industry-specific ranking correlation studies, which are a much better solution (here’s the travel one, which I strongly recommend you download).

The core problem still remains, though: you’re looking at ranking factors for a handful of keywords, and interpreting specific data points within that restricted set of keywords.

That’s not “unlocking the knowledge of SEO”, it’s analyzing a small fraction of what should be a tiny piece of your overall strategy.

What Ranking Factors Matter – 2017 Edition

We know that traditional SEO metrics – content, PageRank, site speed and so on – all still matter.

That hasn’t changed, but the mix of what you need is increasingly shifting, in part due to the inclusion of more user experience metrics.

Using the Travel SEO niche to illustrate my point: the features of a page that ranks well for “New York Hotels” likely won’t be the same as those of a page that ranks well for “Flights to New York from Los Angeles”.

And they’re certainly not the same characteristics as the pages returned for a query like “New York Jets”, despite how semantically close that query may seem.

I’m not trying to say that Google expects you to have product features like a map module, a calendar, a price graph, or any other arbitrary page feature in order to rank – but your customers do have expectations about which features a page should offer.

Simply measuring the ‘return to SERP’ rate of each page in the results would give search engines great visibility into whether those expectations are being met, and over time they can adjust rankings accordingly.
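This ‘return to SERP’ measurement can be sketched as a simple aggregation. To be clear, this is a hypothetical illustration – the log format, field names and URLs are all invented, and how Google actually measures this is not public:

```python
from collections import defaultdict

def return_to_serp_rate(click_log):
    """Estimate the 'return to SERP' rate per result URL from a
    simplified click log. Each entry is (url, returned_to_serp),
    where returned_to_serp is True if the searcher came back to
    the results page and clicked a different result."""
    clicks = defaultdict(int)
    returns = defaultdict(int)
    for url, returned in click_log:
        clicks[url] += 1
        if returned:
            returns[url] += 1
    return {url: returns[url] / clicks[url] for url in clicks}

# Invented sample log for one query:
log = [
    ("example.com/hotels-new-york", False),
    ("example.com/hotels-new-york", True),
    ("thin-page.com/ny-hotels", True),
    ("thin-page.com/ny-hotels", True),
]
rates = return_to_serp_rate(log)
# rates["thin-page.com/ny-hotels"] == 1.0 – every visitor bounced back
```

Under this model, a page whose rate sits well above its neighbours for the same query would be a natural candidate for gradual demotion.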

If those expectations aren’t met, searchers will likely spend less time on your site. To use industry buzzwords, this angers the evil RankBrain.

That’s not to say that bounce rate by itself is a cornerstone of Google’s AI work, but it’s safe to say that bounce and usage metrics are core to Quality Score in AdWords, and that RankBrain itself derives from the Quality Score metric.

Given this, it’s not too much of a stretch to consider that people spending more time on your pages than your competitors, for similar queries, is a good thing for algorithmic reinforcement.

That’s not the same thing as a penalty for not including certain features; it’s more like a de-prioritization for not providing a good enough user experience.

Some Examples:

If the query is a hotel name, then the ranking pages will likely be full of rich information about the hotel: photos, reviews, map data and so on.

[Image: Typical Hotel UX]

Pages that don’t contain all or most of these elements might not fulfill the searcher’s expectations, resulting in a higher-than-average bounce rate for that query as searchers return to the SERP and click through on another result.

Searches for “flights from CITY-A to CITY-B”, on the other hand, return pages with entirely different types of content, such as price grids or tables of data. Here the challenge is creating enough pages that are sticky enough to stay in the index, while distributing enough PageRank into them to keep them competitive.

So in 2017, the challenge is no longer “just” the SEO metrics we had to be concerned about before, but also ensuring that the experience you provide is relevant to the searcher’s intent.

[Image: Typical Flight UX]

If your pages don’t meet the needs of the people landing on them, they will lose out, despite perhaps having superior content or link equity.

In short, your challenge here is not just traditional SEO metrics of Content, PageRank, Speed and so on, but also ensuring that you’re providing a satisfactory experience to the people that land on your site.

Studying YOUR Ranking Factors

The good news is, you can build your own ranking factor study using fairly cheap, readily available tools and very little technical ability, and see exactly which metrics appear to be relevant in your vertical.

We can scrape a lot of the data that Google’s algorithms depend on, be it link metrics or page-specific data. We can’t see how RankBrain reclassifies sites based on user metrics, but we can see broader representations of the other characteristics, and can infer RankBrain’s impact by improving specific factors on certain pages and measuring the outcomes.

All you need are the right ingredients in place, starting with all the SERP data you can muster (from my post yesterday). Tomorrow’s post will cover scraping the additional data you need to start conducting deep investigations into your own specific ranking factors, down to a keyword-category or page-type level.
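To make those investigations concrete: once you have, per keyword, the rank position and a scraped metric for each result, a rank correlation tells you whether that metric tracks position in your niche. Here’s a minimal sketch using Spearman correlation – the metric name (“referring domains”) and all the numbers are invented for illustration, not real data:

```python
def rank(values):
    """Assign 1-based average ranks to values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j across a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman correlation = Pearson correlation of the ranks.
    No guard against zero variance (all-tied input) - sketch only."""
    rx, ry = rank(xs), rank(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Top 5 results for one hypothetical keyword:
positions = [1, 2, 3, 4, 5]
referring_domains = [50, 40, 30, 20, 10]
rho = spearman(positions, referring_domains)
# rho == -1.0: more referring domains <-> better (lower) position
```

Run this per keyword category and compare the correlations across metrics: a factor that correlates strongly for “hotel name” queries but not for “flights from X to Y” queries is exactly the kind of situational knowledge the generic studies can’t give you.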

The main tools we’ll be using are Rank Tracker from Link Assistant, Screaming Frog, URL Profiler and a couple of Excel plugins – so if you don’t already have all of those, now might be a good time to get them installed!

 

As always – follow me on Twitter to get the latest updates, and check back tomorrow for a how-to post (and hopefully a screencast of the whole thing as well, time permitting!).

 

Am I way off? Are User Metrics Irrelevant? Does “Evil RankBrain” Even Exist?
HAVE YOUR SAY BELOW!

Martin MacDonald
Previously: Head of SEO, Omnicom. Inbound Marketing Director, Expedia. Head of Content & SEO, Orbitz. Currently: Marketing Consultant to Fortune 500s and high-growth startups in Silicon Valley. Retired BlackHat & Current Tech SEO Geek.
8 Comments


  1. Jimboot 3 months ago

    The ranking factors change based on user intent. Ecommerce & blogging sites are completely different for ranking. “Forget about Google” is my new favourite saying. Focus on your users: speed, ease of use, relevance. Build your brand and audience. When a brand has dominance in search VOLUME for a kw category, it CAN dominate that kw category. If you’re going to Pubcon, would love to chat over a beer 🙂 Good article.

    • Author
      Martin MacDonald 3 months ago

      Absolutely agree on all points, and hope to make it out to Pubcon this year – I’m not talking but aim to checkout a couple of things 🙂

      No doubt if I’m there, it’ll be fairly obvious by my twitter feed 😉

      Thanks for leaving the comment, appreciated!

  2. Eric Van Buskirk 3 months ago

    You know your stuff very well, that’s clear. Where I would disagree is that drawing conclusions for the “whole” vs. niche is not useful. Many people are not doing industry-specific SEO. I oversaw the 1MM search result study for Brian Dean 2 years ago (and other large rank studies the following year) and would argue the results are still super useful. I totally agree, though, that it’s much more important now to do rank studies understanding that types of content and sites are treated differently by G.

    • Author
      Martin MacDonald 3 months ago

      Hey Eric!

      Thanks for the comment – much appreciated 🙂

      Counterpoint: drawing conclusions for the “whole” is mainly relevant, in the situation that you’re grounding yourself in the fundamentals of SEO.

      That’s not to say they aren’t inherently useful for that purpose, I can’t think of a better way of teaching people what matters generally in SEO and what doesn’t.

      Where I do have a problem though, is in the practical application of the learnings from these studies. Let me give you an example – specific page types at Expedia were underperforming, and a decision had to be taken as to how to correct the situation. Using knowledge learned from these studies, it might have signaled that the required action was to improve xxx (whatever that may have been, links, content etc.), whereas the reality of the situation was that none of these factors were required, and something that may otherwise appear trivial resolved the performance.

      In order to truly understand the practical implementation of strategy, you need to tailor your data gathering to the things you actually want to rank for, not an “average” of what any site needs to rank.

      That’s not to diminish your work, or that of anyone else conducting these studies, but audiences should also be aware that situational knowledge is crucial.

      thanks again 🙂
      Martin

      (PS: everyone does industry-specific SEO, nobody does “general SEO” – even if they work across many industries at a time, there are specifics in play.)

      (PPS: You may well have read it already, but I highly recommend this book in the context I’m talking about: https://www.amazon.com/End-Average-Succeed-Values-Sameness/dp/0062358367 )

  3. Dixon 3 months ago

    Yo! Macdonald!

    Your point is well made. Probably a reason why Majestic has really not tried to do something to compete with the SearchMetrics or Moz studies.

    Oddly enough – since we have our own web index (and search engine) you can put in any competitive keyword and we show WHY we (not Google) rank one site over another. So we have an answer in context to any given phrase.

    But here’s the question… how can we scale that bloody brilliant insight into a study that the SEO world WOULD like to see?

    I remember doing this 3 years ago: blog.majestic.com/development/comparing-the-comparators/ which I thought might be a nice template to send people on… but it hardly shook the industry.

    So my question to you – what WOULD light your fire (that Majestic could deliver)? Maybe we should just get our search engine a bit better and remind the SEO world that we have one?

    • Author
      Martin MacDonald 3 months ago

      Yo! Jones!

      That’s one hell of a question, what would we like to see as an industry, from what’s probably the single biggest source of search data outside of Google…. 🙂

      I’m going to percolate my thoughts, but in the meantime I’m opening this up to everyone else as well 😉

      thanks for the offer, hope you’re not going to regret it 😀

      cheers!
      M

      • Dixon 3 months ago

        >>hope you’re not going to regret it<< Me too! But hey – I can just say no 🙂 But seriously, you make a VERY valid point. Apart from your point about context, there’s causation vs. correlation – most studies are observational at best. I have causal relationships. They aren’t GOOGLE’S relationships, but I know the logic on which they are founded, so there HAS to be a use for the SEO world in buckets. Guide us, people….

  4. Fion 2 months ago

    Agree 100%… about time someone spoke about this.


©2017 WebMarketingSchool.com is a product of MOG Media Inc
