How Google Universal Search Ranking Works – Darwinism In Search
The secrets behind the Google algorithm and its ranking factors have been the subject of heated debate for almost a quarter of a century.
For most of that time, the results were just 10 blue links, and the debate focused on inbound links and keyword density.
From the vantage point of 2022, and with hindsight, that now seems like a simple debate from more innocent times.
With the introduction of universal search in 2007, Google started including other elements. Marissa Mayer said at the time:
“We’re attempting to break down the walls that traditionally separated our various search properties and integrate the vast amounts of information available into one simple set of search results.”
Google have been true to their word.
Since 2007, the SERPs have become increasingly rich and now incorporate a large number of verticals – images, videos, news, jobs, People Also Ask, maps, to name but a few.
One question we should perhaps have been asking since 2007 is, “How does Google decide which of these elements get a place on the SERP?”.
I started to think about that in 2013 when I started working on my personal brand SERP (the result for a search on my name).
I quickly mastered the 10 blue links, but struggled to trigger a knowledge panel, video boxes, Twitter boxes, and other rich elements.
So, the question was: What triggers them? What are the algorithmic triggers I need to “tickle” to get these SERP features to appear?
Then in 2019, Gary Illyes explained the mechanics of universal search to me and a roomful of SEOs in Sydney, Australia.
Over a 20-minute explanation, all the pieces fell into place.
Importantly, Bing have confirmed that their universal search functions in much the same way (with some additional insights from Nathan Chalmers at Bing that boggle my mind), and Gary Illyes also confirmed that:
“It’s not Google-specific. Other engines do it as well, and because most search engines rank results in much the same way… this is probably applicable to every search engine…”
How Universal Search Ranking Works In Google Search
What Are The Ranking Factors?
There are a massive number of factors that affect ranking.
Years ago, there was an idea that there were 200. But today, because the algorithms are machine-learning driven, things are significantly more complex and intricate.
John Mueller has pointed out that Google has moved way beyond "200 ranking factors."
Search Engine Journal published this helpful guide that breaks down the whole complex topic into 88 chapters.
Google do tell us that they group them: Topicality, Quality, Page Speed, RankBrain, Entities, Structured Data, Freshness… and others.
A couple of things to point out here:
- Those seven are real ranking factors we can count on (in no particular order).
- Each ranking factor includes multiple signals. For example, Quality is mostly PageRank but also includes other signals, and Structured Data includes not only Schema.org but also tables, lists, semantic HTML5, and certainly a few others.
Google calculates a score for a piece of content for each of the ranking factors.
Remember that throughout this article, all these numbers are completely hypothetical.
How Ranking Factors Contribute To The Bid
Google takes the individual ranking factor scores and combines them to calculate the total score (the term ‘bid’ is used, which makes super good sense to me).
Importantly, the total bid is calculated by multiplying these scores.
The total score has an upper limit of two to the power of 64. (I'm not 100% sure, but I think that is what Illyes said, so perhaps it is a reference to the Wheat and Chessboard problem, where the numbers on the second half of the chessboard are so phenomenally off the scale that the limit is effectively a fail-safe buffer.)
That means these individual scores could be single, double, triple, or even quadruple digits and the total would never hit that upper limit.
That very high ceiling also means that Google can continue to throw in more factors and never have a need to “dampen” the existing scores to make space for the new one.
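To make the mechanics concrete, here is a minimal Python sketch of a multiplicative bid with that ceiling. The factor names, the scores, and the `total_bid` function are all my invention for illustration, consistent with the hypothetical numbers used throughout this article; Google's real signals are not public.

```python
# Hypothetical sketch of a multiplicative bid calculation.
# Factor names and scores are invented; they are not Google's real signals.

CEILING = 2 ** 64  # the upper limit Illyes mentioned (as I understood it)

def total_bid(scores: dict[str, float]) -> float:
    """Multiply the individual ranking-factor scores into one bid."""
    bid = 1.0
    for factor, score in scores.items():
        bid *= score
    return min(bid, CEILING)  # the ceiling is so high it rarely matters

page = {
    "topicality": 8.0,
    "quality": 12.0,
    "page_speed": 1.5,
    "freshness": 2.0,
}
print(total_bid(page))  # 8 * 12 * 1.5 * 2 = 288.0
```

Note how roomy the ceiling is: even four-digit scores across many factors stay far below 2^64, which is why new factors can be added without dampening the existing ones.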
Just up to there, my mind was already swirling, but it gets better.
Watch Out – One Single Low Score Can Kill A Bid
And the fact that the total is calculated by multiplication is a phenomenal insight. Why? Because any single score under one will seriously handicap that bid, whatever the other scores are.
Look at how the score tanks as just one factor drops slightly below one. That is enough to put this page out of contention.
Dropping further below one will generally kill it off. It is possible to overcome a sub-1 ranking factor, but the other factors would need to be phenomenally strong.
Looking at the numbers below, one gets an idea of just how strong. Ignoring a weak factor is not a good strategy. Working to get that factor above one is a great strategy.
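A quick worked example (with invented numbers, like everything else here) shows why a single sub-1 factor is so punishing in a multiplicative system:

```python
from math import prod

# Hypothetical scores: two identical pages except for one ranking factor.
balanced = [8.0, 12.0, 1.5, 2.0]  # every factor above 1
one_weak = [8.0, 12.0, 0.5, 2.0]  # one factor dips below 1

print(prod(balanced))  # 288.0
print(prod(one_weak))  # 96.0 — the single 0.5 cuts the whole bid to a third

# To compensate without fixing the weak factor, some other factor would
# have to triple — which is why raising the sub-1 factor back above 1 is
# the far better strategy.
```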
My bet here is that the super impressive ‘up and to the right SEO wins’ examples we (often) see in the SEO industry are examples of when a site “simply” corrects a sub-1 ranking factor.
This system rewards pages that have good scores across the board. Pages that perform well on some factors, but badly on others will always struggle. A balanced approach wins.
Credit to Brent D. Payne for making this great analogy during Gary’s explanation: “Better to be a straight C student than 3 As and an F.”
What A Bid-Based Ranking Looks Like
Refining The Bids For A Final Ranking
The top results (let’s say 10) are sent to a second algorithm that is designed to refine the ranking and remove any unacceptable results that slipped through the net.
The factors taken into account here are different and appear to be aimed at specific cases.
This recalculation can raise or lower a bid (or conceivably leave it the same).
So, we are looking at a final set of bids that might look something like this.
Note that in this example, one result gets a zero score and is therefore completely eliminated from consideration (remember: because we are multiplying, any individual zero score guarantees that the overall score is also zero).
And that is seriously radical – a very significant fact, however you look at it.
Such a zero can be generated algorithmically.
My guess is that a zero could additionally serve as a way to implement some manual actions (this is a pretty big jump from what I was told, and is my conclusion and has in no way been confirmed by anyone at Google).
What is sure is that the order changes, and we have a final list of results for the web/”10 blue links.”
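Here is a sketch of that second pass as I understood it. The adjustment factors, the function, and the numbers are all hypothetical; only the mechanics (re-scoring the top bids, with any zero eliminating a result outright) come from the explanation above.

```python
# Hypothetical second-pass refinement: the top bids are re-scored by
# additional factors (invented here), and a zero removes a result entirely
# because everything is multiplied.

def refine(results: list[tuple[str, float]], adjustments: dict[str, float]):
    refined = []
    for url, bid in results:
        new_bid = bid * adjustments.get(url, 1.0)  # default: bid unchanged
        if new_bid > 0:                            # a zero kills the result
            refined.append((url, new_bid))
    return sorted(refined, key=lambda r: r[1], reverse=True)

top10 = [("a.com", 288.0), ("b.com", 240.0), ("c.com", 150.0)]
adjustments = {"b.com": 0.0, "c.com": 2.5}  # b.com zeroed, c.com boosted
print(refine(top10, adjustments))
# [('c.com', 375.0), ('a.com', 288.0)] — the order changes, b.com is gone
```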
If that weren’t enough for one day, now it gets really interesting.
Rich Elements Are ‘Candidate Result Sets’ (My Term, Not Google’s)
Candidate Result Sets Compete For A Place On Page One
Each type of result/rich element is effectively competing for a place on page one.
News, images, videos, featured snippets, carousels, maps, GBP, etc. – each one provides a list of candidates for page one with their bids.
There is already quite a variety competing to appear on page one, and that list keeps on growing.
Candidate Result Ranking Factors
The terms ‘Candidate Result’ and ‘Candidate Result Set’ are from me, not from Google.
The combination of factors that affect ranking in these candidate result sets is necessarily specific to each since some factors will be unique to an individual candidate result set and some will not apply.
An example would be alt tags that apply to the Images candidate result set, but not to others, or a news sitemap that would be necessary for the News candidate result set, but have no place in a calculation for the others.
Candidate Result Set Ranking Factor Weightings
The relative weighting of each factor will also necessarily be different for each candidate result set since each one provides a specific type of information in a specific format.
And the aim is to provide the most appropriate elements to the user in terms of:
- The content itself.
- The media format.
- The place on the page.
For example, freshness is going to be a heavily weighted factor in News, and RankBrain and MUM for Featured Snippets.
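As a purely invented sketch of how such per-set weightings might look: using the weight as an exponent in a multiplicative system is my assumption, and every name and number below is made up to illustrate the idea that each candidate result set applies its own mix of factors.

```python
# Invented per-vertical weightings: freshness dominates News, while a
# Featured Snippet leans on language-understanding signals. None of these
# names or numbers are Google's.

WEIGHTS = {
    "news":             {"freshness": 4.0, "topicality": 2.0, "quality": 1.0},
    "featured_snippet": {"freshness": 1.0, "topicality": 2.0, "quality": 1.0,
                         "language_understanding": 4.0},
}

def candidate_bid(vertical: str, scores: dict[str, float]) -> float:
    # One plausible way to weight a multiplicative score: use the weight as
    # an exponent, so heavily weighted factors move the bid much more.
    bid = 1.0
    for factor, weight in WEIGHTS[vertical].items():
        bid *= scores.get(factor, 1.0) ** weight
    return bid

article = {"freshness": 2.0, "topicality": 3.0, "quality": 1.5}
print(candidate_bid("news", article))  # 2^4 * 3^2 * 1.5 = 216.0
```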
Candidate Result Set Bid Calculations
The bids provided by each candidate result set are calculated in the same way as the first web/blue links example (by multiplication and, I assume, with the second refinement algorithm).
Google then has multiple candidates bidding for a place (or several places, depending on the type).
Pulling It All Together For Page One
Candidate Result Sets Bid Against Each Other
Google is simply looking for any rich result that will provide a “better” solution for the user.
It wants to provide the SERP that will lead its user to the best solution to their problem, or the answer to their question as efficiently as possible (an approach that was confirmed by Meenaz Merchant at Bing in 2020).
When it does identify a “better” candidate result, that result is given a place (at the expense of one or more classic blue links).
The Final Choice Of Rich Elements On Page One
Each candidate result set is subject to specific limitations – and all are subservient to that traditional web result/classic blue links.
- One result, one possible position (Featured Snippet, knowledge panel, Google Business Profile, for example)
- Multiple results, multiple possible positions (images, videos, Twitter boxes, for example)
- Multiple results, one possible position (news, entity carousel, for example)
And the winners in my example are (remember that the rules I used to make these choices are fictional, and not how Google really does this)…
- News: Failed to outbid the #1 web result, so it is not sufficiently relevant and does not win a place.
- Images: We have one winner. The allotted block holds five, so the other four get a free ride.
- Video: Two are outbidding the top web result so they both get a place.
- Featured Snippet: We have several winners, but only one is used because this is “the” answer.
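The selection rules above are my own simplistic invention (as already noted), and here they are sketched in Python. The qualifying rule (a rich element must outbid the top web result), the slot limits, and all the bids are fictional illustrations, not Google's actual logic.

```python
# My own invented assembly rules, not Google's: a rich element earns a place
# only if its bid beats the top web result, and each element type has its
# own slot rules (one winning image block brings four neighbours for free).

TOP_WEB_BID = 288.0
MAX_SLOTS = {"featured_snippet": 1, "video": 2, "news": 1, "images": 5}

def winners(candidates: dict[str, list[float]]) -> dict[str, int]:
    placed = {}
    for element, bids in candidates.items():
        qualifying = [b for b in bids if b > TOP_WEB_BID]
        if element == "images" and qualifying:
            placed[element] = MAX_SLOTS["images"]  # the block fills anyway
        elif qualifying:
            placed[element] = min(len(qualifying), MAX_SLOTS[element])
    return placed

bids = {
    "news": [250.0],                     # fails to outbid the top web result
    "images": [300.0, 120.0],            # one winner fills the image block
    "video": [350.0, 310.0, 90.0],       # two videos outbid it
    "featured_snippet": [400.0, 390.0],  # several qualify, only one is shown
}
print(winners(bids))
# {'images': 5, 'video': 2, 'featured_snippet': 1}
```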
As places are given to rich elements, the lower positioned web results drop onto page two.
As more rich elements are added to an SERP, they tend to dominate visually, and so the blue links gradually lose their importance.
Frédéric Dubut from Bing confirmed that blue links are not going away anytime soon, but their visibility on the SERP is increasingly a losing game.
I reiterate: I have no information about how positions are attributed to the videos or images – I attributed positions to them with my own invented simplistic system, not Google’s. 🙂
In Conclusion – SEO Needs To Evolve
Data from Kalicube Pro shows that the number of blue links on the average SERP is fairly stable, but the number of universal features is increasing.
Here is a snapshot view that shows just how much – in one year, the average number of rich elements (SERP features) on brand SERPs has grown from 1.5 to 2.5.
Universal search increasingly dominates the SERP and should be a much bigger focus for us as SEOs.
Universal results now dominate most SERPs visually, and the traditional blue links are getting fewer clicks. That is a worry for traditional SEO strategies, so we need to adapt and look at the wider picture.
Universal search relies on non-textual elements – images, videos, maps, questions, social channels… – so we need to develop those formats and integrate them into our strategy to gain better visibility on Google (and Bing) SERPs.
More than that, because Twitter, YouTube, and other third-party platforms tend to dominate universal results on the SERP, we need to look at integrating them more closely into our SEO strategies.
Off-site SEO has never been more important or more powerful.
Featured Image: KatePilko/Shutterstock
In-Post images created by Véronique Barnard, May 2019