For some years now, the industry has had an unspoken problem with most of the SEO ranking studies that get published.
Sure, they pretty much always reveal the same factors (as long as the data is interpreted correctly): you need Content, Links and structure, pretty much in that order.
There are two main categories: those backed by proprietary index data (like those by SEMrush or SearchMetrics), and those compiled by surveying industry experts (Moz have done this for years, and Northcutt have a great approach).
If you’re early in your path as an SEO, you should consume every single word of the roundups. Then read them again.
Why Are They Pointless?
As long as your expectation is general information on the fundamentals of ranking, they are a great resource – but they all suffer from a crucial flaw: people read them with the assumption that the same factors are required to rank across Google, regardless of the query.
This simply isn’t the case, and starting from this broad assumption will likely make you fixate on factors that may have nothing to do with ranking for the keywords you actually want to rank for.
SearchMetrics have recognized this crucial problem and have recently moved to industry-specific ranking correlation studies, which are a much better solution (here’s the travel one, which I strongly recommend you download).
The basic problem still remains, though: you’re looking at ranking factors for a handful of keywords, and at an interpretation of specific data points within that restricted set of keywords.
That’s not “unlocking the knowledge of SEO”, it’s analyzing a small fraction of what should be a tiny piece of your overall strategy.
What Ranking Factors Matter – 2017 Edition
We know that the traditional SEO metrics (content, PageRank, site speed and so on) all still matter.
That hasn’t changed, but the mix of what you need is shifting, and that’s partly down to the inclusion of more user experience metrics.
Using the Travel SEO niche to illustrate my point: the features of a page that ranks well for “New York Hotels” are unlikely to be the same as those of a page that ranks well for “Flights to New York from Los Angeles”.
And they’re certainly not the same characteristics as those of pages returned for queries like “New York Jets”, despite how semantically close that query may seem.
I’m not trying to say that Google expects you to have product features like a map module, a calendar, a price graph, or whatever other arbitrary page feature in order to rank – but your customers do expect certain features to be there.
Simply measuring the ‘return to SERP’ rate of each page in the results would give search engines great visibility into whether those expectations are being met, and over time they could adjust rankings accordingly.
If those expectations aren’t met, visitors will likely spend less time on your site. To use industry buzzwords, this angers the evil RankBrain.
That’s not to say that bounce rate by itself is a cornerstone of Google’s AI work, but it’s safe to say that bounce and usage metrics are core to Quality Score in AdWords, and that RankBrain itself derives from the Quality Score metric.
Given this, it’s not too much of a stretch to consider that people spending more time on your pages than on your competitors’, for similar queries, is a good thing for algorithmic reinforcement.
That’s not the same thing as a penalty for not including certain features; it’s more like a de-prioritization for not providing a good enough user experience.
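To make the ‘return to SERP’ idea concrete, here’s a minimal sketch of how such a rate could be calculated from a click log. Everything in it is hypothetical: the field names and toy data are invented purely for illustration, and whatever Google actually measures is obviously far richer than this.

```python
from collections import defaultdict

def return_to_serp_rate(click_log):
    # For each result URL: what share of clicks bounced back to the
    # search results within 30 seconds? Log format is invented for
    # this example only.
    clicks, returns = defaultdict(int), defaultdict(int)
    for event in click_log:
        clicks[event["url"]] += 1
        if event["returned_to_serp"] and event["dwell_seconds"] < 30:
            returns[event["url"]] += 1
    return {url: returns[url] / clicks[url] for url in clicks}

# Toy data: one result that satisfies the query intent, one that doesn't.
log = [
    {"url": "example.com/new-york-hotels", "dwell_seconds": 95,  "returned_to_serp": False},
    {"url": "example.com/new-york-hotels", "dwell_seconds": 180, "returned_to_serp": False},
    {"url": "thin-page.com/hotels-ny",     "dwell_seconds": 8,   "returned_to_serp": True},
    {"url": "thin-page.com/hotels-ny",     "dwell_seconds": 12,  "returned_to_serp": True},
]
print(return_to_serp_rate(log))
# {'example.com/new-york-hotels': 0.0, 'thin-page.com/hotels-ny': 1.0}
```

A page with a rate like the second one, query after query, is exactly the kind of result you’d expect to get quietly de-prioritized.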
Some Examples:
If the query is a hotel name, then the top-ranking pages will likely be full of rich information about the hotel: photos, reviews, map data and so on.
Pages that don’t contain all or most of these elements might not fulfill the searcher’s expectations, and would see a higher-than-average bounce rate for that query, with visitors returning to search and clicking through to another result.
Searches for “flights from CITY-A to CITY-B”, on the other hand, return pages with an entirely different kind of content, such as price grids or tables of data, where the challenge is creating enough pages that are sticky enough to stay in the index, while distributing enough PageRank to them to keep them competitive.
So in 2017, the challenge is no longer “just” the SEO metrics we had to be concerned about before, but also ensuring that the experience you provide is relevant to the searcher’s intent.
If your pages don’t meet the needs of the average person landing on them, they will lose out despite perhaps having superior content or link equity.
In short, your challenge is not just the traditional SEO metrics of Content, PageRank, speed and so on, but also ensuring that you’re providing a satisfactory experience to the people who land on your site.
Studying YOUR Ranking Factors
The good news is that you can build your own ranking factor study with fairly cheap, readily available tools and very little technical ability, and see just which metrics appear to be relevant in your vertical.
We can scrape a lot of the data that Google’s algorithms depend on, be it link metrics or page-specific data. We can’t see how RankBrain reclassifies sites based on user metrics, but we can measure broader representations of those characteristics, and we can infer RankBrain’s impact by improving specific factors on certain pages and measuring the outcomes.
All you need are the right ingredients, starting with all the SERP data you can muster (from my post yesterday). Tomorrow’s post will cover scraping the additional data you need to start conducting deep investigations into your own specific ranking factors, down to a keyword-category or page-type level.
The main tools we’ll be using will be Rank Tracker from Link Assistant, Screaming Frog, URL Profiler and a couple of Excel plugins, so if you don’t already have all of those, it might be a good time to get them installed!
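As a preview of where this is heading, here’s a rough sketch of the kind of correlation analysis a DIY ranking factor study boils down to. It assumes you’ve already exported your keyword, position and page-metric data into a CSV; the file name and column names below are placeholders, not output from any of the tools above.

```python
import pandas as pd
from scipy.stats import spearmanr

# Placeholder CSV: one row per keyword/URL pair, with the ranking position
# plus whatever page and link metrics you've scraped for that URL.
df = pd.read_csv("serp_data.csv")
# columns assumed: keyword_category, position, word_count, linking_domains, load_time

factors = ["word_count", "linking_domains", "load_time"]

# Spearman rank correlation between each factor and ranking position,
# calculated per keyword category so hotel pages aren't mixed in with flight pages.
for category, group in df.groupby("keyword_category"):
    print(f"\n{category}:")
    for factor in factors:
        rho, p = spearmanr(group["position"], group[factor])
        print(f"  {factor}: rho = {rho:.2f} (p = {p:.3f})")
```

Remember that position 1 is the “best” value, so a negative rho means higher values of that factor coincide with better rankings – and, just as with the published studies, correlation still isn’t causation.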
As always – follow me on Twitter to get the latest updates, and check back tomorrow for a how-to post (and hopefully a screencast of the whole thing as well, time permitting!).