One of the most controversial SEO myths is keyword difficulty. I reverse engineered every important keyword difficulty metric on the web, all 15 of them.
What I discovered is that many of them base their calculations solely on link data. What does that mean?
What this means is that the methodology being taught to the market is to look at the search results and analyze the sites that appear in them.
So this is literally playing results.
It’s like playing results in game theory: you make really bad judgments, after the fact, about how good your earlier decisions were. For example, you’re playing cards and, after you see the final result of the hand, you think to yourself, “Oh man, I should’ve stayed in on that.”
But of course, you couldn’t know the results of your decision at the time you made it.
What search engine optimization platforms have done is apply what they can do, such as high-level correlations using link data. There’s no doubt that links are extremely effective. But they aren’t the entire story for keyword difficulty.
Where Keyword Difficulty Gets It Wrong (Examples)
Let’s see how this plays out using a couple of examples.
Google AdWords Keyword Planner shows pay-per-click competition, which has nothing to do with organic search. That’s the easiest one, right?
Semrush has since slightly improved its keyword difficulty metric. But its old one simply stack-ranked sites by traffic: it ranked the top 10 million sites by traffic and gave you the average position of the sites appearing in the search results.
What does that mean?
So basically, higher-traffic sites were treated as harder to rank against. This was done independently of anything related to the keywords or topics being targeted, the search results themselves, or SERP features like the right rail and ads.
All those other factors were simply thrown out the window. Instead, the metric looked only at the traffic going to the sites in the search results. That is tremendously error prone. Definitely not correlated. Huge errors.
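To make the flaw concrete, here is a minimal sketch of that old traffic-stack-rank approach. The domain names and rank numbers are hypothetical illustrations, not real Semrush data.

```python
# A sketch of difficulty as "average global traffic rank of the SERP".
# All domains and ranks below are hypothetical.

def traffic_rank_difficulty(serp_domains, traffic_ranks):
    """Average the global traffic rank of the domains in a search result.

    traffic_ranks maps a domain to its position in a stack ranking of
    the top 10 million sites by traffic (1 = most traffic).
    """
    ranks = [traffic_ranks[d] for d in serp_domains if d in traffic_ranks]
    return sum(ranks) / len(ranks)

traffic_ranks = {"bigsite.com": 100, "midsite.com": 50_000, "niche.com": 900_000}

# Two very different queries score identically whenever the same domains
# appear, because the metric never looks at the keyword, topic, or SERP
# features at all.
serp_a = ["bigsite.com", "midsite.com"]  # e.g. a tech query
serp_b = ["bigsite.com", "midsite.com"]  # e.g. an unrelated finance query
print(traffic_rank_difficulty(serp_a, traffic_ranks))  # 25050.0
print(traffic_rank_difficulty(serp_b, traffic_ranks))  # 25050.0, different topic
```

Notice that the keyword itself never enters the calculation, which is exactly the problem.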
Ahrefs publicly confirms that their difficulty metric is only a link metric.
So what does that say, and what problems does it create for teams? For starters, it ignores the content on your site.
We’re all content people, yet this approach to keyword difficulty ignores:
- Who you are
- Who your business is
- What you write about
- Where your strengths are
- Where your authority is
- Where your historical momentum is on successful topics
- Where you’ve had failures with content
Instead, it looks at the search result for a term, looks at that cohort’s link volume, and makes an assumption from it. For example: my score is 50 and the search result has an average score of 50, so I should be able to rank.
That’s basically the process being taught to the market, based on links alone.
Doing it this way is tremendously error prone. I’ve seen teams waste massive amounts of money believing that this is how it works, or believing that they can’t rank for something.
So they don’t even try on a topic. That’s the most common way it manifests.
What does this mean?
In the case of a brand new site, they’re not even going to try to write the best article ever written on the topic. Or they’ve written a cluster of content that ranks for some easy search phrases and, according to the data, they’ve maxed out their potential on the topic, so they stop trying.
That’s such a waste of potential.
A Better Approach to Keyword Difficulty
A better approach is to take link data as one aspect of a difficulty calculation and build composites against:
- How much content you’ve built on a topic
- Your breadth of coverage
- Your depth of coverage
- The quality of your coverage
- Your historical success rate
When you meld that with link data it can become very predictable.
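The composite described above can be sketched in code. The factor names, weights, and normalization here are illustrative assumptions on my part, not MarketMuse’s actual formula:

```python
# A hedged sketch of a composite "personalized difficulty" score.
# Weights and the 50-article normalization cap are hypothetical.

def personalized_difficulty(link_score, topic_content_count, breadth,
                            depth, quality, historical_success_rate):
    """Blend link data with site-specific topical signals.

    All inputs except topic_content_count are normalized to 0..1,
    where higher means stronger. Returns a 0..100 difficulty:
    lower means easier *for this site*.
    """
    strength = (
        0.20 * min(topic_content_count / 50, 1.0)  # volume of content on the topic
        + 0.15 * breadth                           # breadth of coverage
        + 0.15 * depth                             # depth of coverage
        + 0.15 * quality                           # quality of coverage
        + 0.15 * historical_success_rate           # past wins on similar topics
        + 0.20 * link_score                        # link data, one factor among many
    )
    return round((1.0 - strength) * 100, 1)

# A site with deep topical coverage but middling links can still see a
# lower (easier) difficulty than link data alone would suggest.
print(personalized_difficulty(0.4, 40, 0.8, 0.7, 0.9, 0.6))  # 31.0
```

The point of the sketch is the shape, not the numbers: link data is one weighted input among several site-specific ones, rather than the whole score.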
However, the market has trained us to only look at link data. So many mistakes can be made here and I’ll give you one example to illustrate it.
If we were to write the best article ever written on the brand new iPhone and post it on the MarketMuse blog, it would not perform well for the phrase “iPhone review.” Sorry.
However, take that exact same article, throw it up on CNET, and it’s going to do really well.
Why? It isn’t just about links.
It will rank well because they:
- Have history of writing great reviews
- Offer an enormous breadth of coverage
- Possess a great depth of coverage
- Write about technology
- Write about phones
- Write about the iPhone specifically
- Have historical authority on those topic and site-section combinations
Authority for the topic and site-section combination is as important for assessing difficulty as the quality of the page itself and link data. So if your practice looks only at pay-per-click competition data or only at link data for your competition, you need to get personalized.
Here’s what I mean.
You’ve got to figure out more about you and who you are:
- What topics do you cover?
- What do you have success on?
- What does your link profile actually say?
That last one is an important point, because not all links are created equal. Let’s say you’re in the middle of the pack when it comes to links, and all of your links are about horse racing. Looking at the top search results, you see you score a 50 too. But all the content you write is about audio equipment.
That does not mean you can go write an article about horse racing and rank. It’s absolutely untrue, but that’s the process that has been taught to the market: that you’re allowed to just jump into the pool. Very few sites can actually do that without building infrastructure.
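One way to picture “not all links are created equal” is to discount a raw link score by how well both the links and the site’s own content fit the target topic. The 0-to-1 match scores below are hypothetical stand-ins for a real topic model, not an actual metric:

```python
# A minimal sketch: a raw link score only counts to the extent that the
# link topics AND the site's content match the target topic.
# The match values are hypothetical illustrations.

def effective_link_score(raw_link_score, link_topic_match, content_topic_match):
    """Scale a raw link score by topical fit of links and on-site content."""
    return raw_link_score * link_topic_match * content_topic_match

# The horse-racing example: mid-pack links (raw score 50), all about
# horse racing (perfect link match), but a site that only writes about
# audio equipment (almost no content match for horse racing).
print(effective_link_score(50, 1.0, 0.05))  # 2.5, far from the raw 50
```

Under this lens, a site that looks like “a 50” on raw links can be nowhere near a 50 for a topic it has never covered.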
What Are Personalized Metrics?
When I say personalized, I mean that the metrics you use need to be tied to the content on your site, specifically. Unfortunately, virtually all third-party metrics like domain authority, page authority, etc. are topic agnostic.
If I use one of them to rationalize creating an article on big screen TVs and posting that on the MarketMuse blog, I have no chance of success because:
- I have no authority on reviews
- I have no authority on TVs
- I have no infrastructure
If I want to grow there, I have to build that infrastructure. I’m going to have to publish hundreds of articles about TVs and write dozens of reviews in order to bridge into having that authority.
Keep that in mind.
It is even tougher if you’re in a regulated market. Don’t even try to jump in and write MarketMuse’s thoughts on vaccine distribution. Good luck! It’s not going to work! Our blog is not even close to a Your Money or Your Life (YMYL) sector classification.
There’s so much more to it than link data. Yet I’ve seen very large teams set their watch by link data for difficulty. And it is so inefficient.
It creates tremendous mistakes at the content strategy level. Specifically:
- Investing in content that has virtually no chance of becoming successful
- A missed opportunity cost by not investing in content that has a solid chance of becoming successful
A great example of this comes from Josh Spilker, a friend and MarketMuse customer, who runs content at Friday.app.
Using our platform, which offers a personalized difficulty metric, he wrote a top-five ranking page for the word “planner.” Yet every other platform would have told him not to create this article that’s performing so well for him.
That’s the advantage you gain when using personalized metrics.