Below is an excerpt from our inaugural AMA (ask me anything) with MarketMuse Co-founder and Chief Product Officer, Jeff Coyle. This event was held on our newly launched Slack community, the Content Strategy Collective. Many more AMAs are coming your way, featuring Garrett Moon, Co-Founder and CEO of CoSchedule; Hamlet Batista, CEO of RankSense; Mike Sweeney, CEO of Right Source Marketing; and others.
Join the Content Strategy Collective here.
Many industries will become dramatically tougher to compete in, and Google (and other large entities) will learn and make major advertising-rule changes as a result, directly impacting areas where high-quality content is already required, like YMYL (Health, Financial Planning, Legal, etc.).
The search trends that matter in the near-to-mid term will matter differently to different groups. If you are in e-commerce, the purchase patterns will vary dramatically. There are already published lists of purchasing growth and reduction patterns for various products; I will find and post them in this thread shortly.
For B2B or seemingly unrelated topics, this becomes much more nuanced. Monitoring changes in the SERPs and new intent profiles will surface whether anything will change long term on your core target topics. In a response below, I'll review some ways to get at this information!
Great question. I predict that we'll see the bar raised on all content contributors very quickly when three key things become commonplace, just as we did with the rise of editing tools (like Grammarly and similar) and their influence on content-machine efficiency. Teams with those technologies in place are able to shorten publish cycles by many days, and feedback loops have tightened. In those scenarios, editors are able to focus more on creativity and value.
The same will be true of the short-to-mid-term implications of Natural Language Generation technologies, with a few additional areas of augmentation possible. Low-quality content generation jobs will become nearly obsolete. Everyone aiming to make money writing will need to ensure they are providing unique and distinct value.
Editors will be able to be more strategic and demand more creativity and value from their resources. Large-scale content strategies (like covering EVERY election with unique value, or covering all sporting events) are already happening with these technologies, and the publishers using them have doubled down on content! These real examples illustrate that when automated content succeeds, it drives these entities to invest in high-quality writing even more.
Additional focus areas like repurposing and "packaging" content for various audiences and learner types become even more valuable too. Production value, ABM, personalization, and targeting will become highly important to staying ahead of the pack.
Net-net, the bar will be raised: any business taking advantage of these technologies will WANT more resources to publish 10-20x+ the content, and all of it will be higher quality. There will be abusers who try to use it to publish poor quality at scale, but they will be sorted out quickly. (Heliograf at The Washington Post is a great example!)
First, I struggle with Google Trends. It's manual, as you mentioned, and may speak to demand, but it doesn't cover term-pool multipliers for highly fractured intent terms.
Google Search Console can give you a lens on the explicit search queries generating impressions. When the term pool generating impressions for a page fluctuates while the page maintains a head term or a collection of medium-to-high-volume rankings, that's a signal.
Google AdWords is also a secret weapon here if you can spend, or can build it into your agreements. Going hard with paid campaigns across a pool of broad- and phrase-match lists can generate massive intelligence not available anywhere else. Monitoring this in parallel with GSC can make trend data and commodity keyword data from SEMrush/Ahrefs superpowered and higher fidelity.
Always monitor term-pool multipliers on pages: how many entrances a page gets from keywords with recurring volume you can track, versus how many entrances it gets in total. It's similar to using GSC to monitor impression generators with various variants and intents, but when you collect these across many similar pages, they can show you major external changes in demand. SERP flux and, in some cases, snippet and SERP-feature flux monitoring can also show heat for differing reasons.
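The term-pool multiplier described here boils down to a simple ratio. A minimal sketch, with hypothetical GSC-style entrance data and a hypothetical tracked-keyword list standing in for your real exports:

```python
def term_pool_multiplier(entrances_by_query, tracked_keywords):
    """Ratio of a page's total search entrances to entrances from
    keywords whose volume you track. A rising ratio suggests more
    traffic arriving from long-tail / untracked variants."""
    tracked = sum(v for q, v in entrances_by_query.items() if q in tracked_keywords)
    total = sum(entrances_by_query.values())
    if tracked == 0:
        return None  # no tracked-keyword entrances; multiplier undefined
    return total / tracked

# Hypothetical per-query entrances for one page
entrances = {
    "content strategy": 120,
    "what is content strategy": 45,
    "content strategy template": 15,
    "b2b content plan": 20,
}
tracked = {"content strategy", "what is content strategy"}

print(term_pool_multiplier(entrances, tracked))  # 200 / 165, about 1.21
```

Collected across many similar pages over time, a shift in this ratio flags the demand and SERP-flux changes mentioned above.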
Ahrefs can be rigged up quickly to do a SERP-feature audit and show where you have immediate demand changes. Track this over time for your key terms, providing a quick feedback loop.
Any data-driven strategy you incorporate (where I'd place NLG-driven content) now needs to produce even higher-quality output to be sensible. In the current environment with COVID, empathy and accuracy are must-haves.
If the content is misleading, inaccurate, or, frankly, tone-deaf, it can sink a brand. Whether you are posting medical data or just thought-leadership content, if it "doesn't smell right," the slope of the slide to a really bad place is much steeper.
Checks and balances need to be in even greater force. Editors need to be more particular and understand how users (and Google) are thinking about intent and quality. Imagine if your brand posted mad-libs-style content on thousands of pages today and passed it off as helpful. Imagine if that were a lawyer? A doctor? Disastrous implications.
I mention a few strategies and the AdWords hacks above, but to add to that, look at related keyword/cluster information. Obviously, I'm biased, but MarketMuse's Research and Inventory applications are extremely quick here at assessing related concepts and intent-focused variants, plus where you (or your client) have a chance to win or a gap in a cluster.
Beyond those areas, search demand expressed as search volume isn't anything more than a directional value. Within the clusters you develop, there will be extremely important concepts that you HAVE to cover to exhibit subject-matter expertise. You may start with the leading topic, but stop there as far as using search volume as the North Star.
As a competitive strategy here, I often train teams on inverting this to predict a competitor's content strategy. Within a space, you'll often have content teams writing and publishing against a term list pulled from Google AdWords Keyword Planner and sorted descending by volume. You can exploit this and bury anyone employing that technique.
Find the topics that are blind spots but that exhibit subject-matter expertise on the topics you are looking to cover. It's the quickest path to success in a highly variable, trending, competitive zone.
Deep breath. Here's the workflow:
- Grab all your seed terms with their Google Trends information and, if you have it, search volumes.
- Grab all of the competing pages in the SERPs for these, grab all the words these pages rank for.
- Search against those sites for pages that are very similar or in those clusters. Grab all those words.
- Add up the volumes across the universe accumulated.
- Cross-reference with your own ranking data.
- Take a break.
Try it with one word and you'll see exactly how you can use this to support cluster research.
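The accumulation and cross-referencing steps above can be sketched in a few lines. This is a toy illustration, with hypothetical competitor-page, volume, and ranking data standing in for what your keyword tools would export:

```python
# Hypothetical inputs: terms each competing page ranks for, commodity
# search volumes, and your own ranking positions.
competitor_page_terms = {
    "example.com/guide": ["content strategy", "content audit"],
    "other.com/post": ["content audit", "topic cluster"],
}
volumes = {"content strategy": 5400, "content audit": 1300, "topic cluster": 880}
our_rankings = {"content strategy": 7}  # term -> our position

# Accumulate the term universe and add up volumes across it.
universe = {t for terms in competitor_page_terms.values() for t in terms}
total_demand = sum(volumes.get(t, 0) for t in universe)

# Cross-reference with our rankings: terms in the cluster we don't rank for yet.
gaps = sorted(t for t in universe if t not in our_rankings)

print(total_demand)  # 7580
print(gaps)          # ['content audit', 'topic cluster']
```

The summed universe volume gives the directional demand for the cluster, and the gap list is where the blind-spot opportunities sit.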
Also, the flux on these queries with coronavirus and COVID is off the charts. If you can find a low-difficulty variant, give it a shot, but consistent traffic is truly hard to guess right now. Check out the SERP flux on the areas you are focused on!
Great question! The most common issue with on-page internal linking is not having a plan at all! Internal links should act as a means to support user behavior and provide insights to the value of pages on the site.
Tactically, linking to pages you have that are related to the current page is a major must. Also tactically, the typical situation is that publishers link back to older content when publishing but don't go through their existing inventory to find pages that should link TO the new pages. This creates link graphs that are seriously unnatural and lengthens ramp time for new content.
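The "link TO the new pages" audit described here is easy to mechanize. A minimal sketch, assuming you have a crawl of internal links (e.g. from a tool like Screaming Frog) and a topic label per page from your content inventory; all URLs and labels below are hypothetical:

```python
# Hypothetical site data: outbound internal links per URL, and a topic
# label per page from the content inventory.
links = {
    "/blog/old-cluster-post": ["/blog/pillar"],
    "/blog/pillar": [],
    "/blog/new-cluster-post": ["/blog/pillar", "/blog/old-cluster-post"],
}
topics = {
    "/blog/old-cluster-post": "topic clusters",
    "/blog/pillar": "topic clusters",
    "/blog/new-cluster-post": "topic clusters",
}

def missing_inbound_links(new_page, links, topics):
    """Existing pages on the same topic that don't yet link to the new page."""
    return sorted(
        page for page, topic in topics.items()
        if page != new_page
        and topic == topics[new_page]
        and new_page not in links.get(page, [])
    )

print(missing_inbound_links("/blog/new-cluster-post", links, topics))
# ['/blog/old-cluster-post', '/blog/pillar']
```

Running this at publish time surfaces the inventory pages that should be updated to point at the new content, keeping the link graph natural in both directions.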
Periodic review of overall site structure and internal link graphs (with auditing solutions like Screaming Frog, Sitebulb, Botify, or MarketMuse for page level) has to be part of any content strategy. You cannot build clusters without this on medium-to-large sites.
On-page external links are the most misunderstood element of SEO and content strategy. The most common errors are:
- Linking to stronger content on the same target topics from within your content.
- NOT linking externally to sources, adjacent topics, natural user paths.
- Inappropriately structured, paid, or affiliate linking.
The first diminishes the credibility of your page; the second is a path to a hand edit by Google if it is your editorial policy; the third is a "must-do-perfectly" concept where you had better trust your resource. Disastrous external-linking mistakes happen all the time.
If you are using keyword-variant lists to build clusters without thinking about common user behaviors, covering all stages of the buy cycle, and intent fracture (specificity, keyword granularity), your days are numbered. The other case I commonly see is "unsupported arms" of power on a site, where a site has one powerful page on a topic but hasn't supported it with additional content. The two observations connect because you have to look at that older content and determine whether it is at risk or not.
If you build up a support foundation around an unsupported page that used to do well, it becomes less risky to change it and make it more akin to current intent and user goals. Changing it without any “net” is often a scary proposition! The same concept appears frequently when a site dominates one stage of the buy cycle but has nothing on the other phases. They may do so on one topic or many topics.
When those begin to falter, they often ALL do, because the site hadn't focused on the entire user journey: awareness, consideration, purchase, post-purchase affinity, troubleshooting, etc. Those gaps lead to degradations that can't be easily diagnosed. It's a lot harder to be a long-term winner for a "What is X" query without content at all stages of the buy cycle today than it was before these changes went into production.
You must build across the funnel every time you target an important topic, or you put your business at risk of major flux and competitive threats.
Having worked at multiple firms focused on lead generation, lead qualification, lead nurturing, and more, I've been very opposed to generic lead profiling, like BANT or firmographic/demographic ICP only. I can't review all of the product evolution that encompasses KnowledgeStorm and TechTarget, but the net-net is that you need to correlate intent, activity, and customer fit to produce significant value.
For MarketMuse, we tried a lot of approaches and hypotheses, and learned a lot from trial and error; frankly, we sold to what we knew (B2B SaaS, publishers, eCommerce, agencies) in the early days. As we've evolved, we've identified key data points that correlate with intent and the likelihood of having pain points and product knowledge, using Pardot activity, and we parlay that with MadKudu for customer fit.
Those two metrics, mixed with an understanding of content creation and the historical success of content, are heavily aligned with customers that will be able to dramatically influence success for their business or clients (in the case of agencies). We're still on the journey, but our efficiency and success rates continue to grow, and the combination of intent signals and customer fit gives us the ability to prioritize (whether inbound, channel, or personal handoff) and to predict for outbound.
Connect with your peers, ask questions, and share your experience through our new Slack community. Join the Content Strategy Collective here!