In the first post in this series, I talked about how relatively few URLs on the web are currently clearing the double-hurdle required for a maximum CWV (Core Web Vitals) ranking boost:
Passing the threshold for all three CWV metrics
Actually having CrUX data available so Google knows you’ve passed said thresholds
For Google’s original rollout timeline in May, we would have had 9% of URLs clearing this bar. By August 2021, this had hit 14%.
This alone may have been enough for Google to delay, downplay, and dilute their own update. But there’s another critical issue that I believe may have undermined Google’s ability to introduce Page Experience as a major ranking factor: flimsy metrics.
It’s a challenging brief to capture the frustrations of millions of disparate users’ experiences with a handful of simple metrics. Perhaps an impossible one. In any case, Google’s choices are certainly not without their quirks. My principal charge is that many frustrating website behaviors are not only left unnoticed by the three new metrics, but actively incentivized.
To be clear, I’m sure experience as measured by CWV is broadly correlated with good page experience. But the more room for maneuver there is, and the fuzzier the data is, the less weight Google can apply to page experience as a ranking factor. If I can be accused of holding Google up to an unrealistic standard here, then I’d view that as a bed of their own making.
Largest Contentful Paint (LCP)
This perhaps feels the safest of the three new metrics, being essentially a proxy for page loading speed. Specifically, though, it measures the time taken for the largest element to finish loading. That “largest element” is the bit that raises all manner of issues.
Take a look at the Moz Blog homepage, for example. Here’s a screenshot from a day close to the original, planned CWV launch:
What would you say is the largest element here? The hero images perhaps? The blog post titles, or blurbs?
For real-world data in the CrUX dataset, of course, the largest element may vary by device type. But for a standard smartphone user agent (Moz Pro uses a Moto G4 as its mobile user agent), it’s the passage at the top (“The industry’s top wizards, doctors, and other experts…”). On desktop, it’s sometimes the page titles — depending on the length of the two most recent titles. Of course, that’s part of the catch here: you have to remember to take a look with the right device. But even if you do, it’s not exactly obvious.
(If you don’t believe me, you can set up a campaign for Moz.com in Moz Pro, and check for yourself in the Performance Metrics feature within the Site Crawl tool.)
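Alternatively, you can ask the browser directly. Chrome exposes LCP candidates through the `PerformanceObserver` API with the `largest-contentful-paint` entry type, and each entry reports the element and its painted size. Here’s a sketch — the observer part is browser-only, and `pickLargest` is a simplified stand-in for the “biggest reported size wins” comparison, not Chrome’s exact internal logic:

```javascript
// Simplified stand-in for how LCP chooses among candidates:
// the entry with the largest reported paint size wins.
function pickLargest(entries) {
  return entries.reduce((max, e) => (e.size > max.size ? e : max));
}

// In a real page (browser console only), buffered LCP entries can be
// inspected like this — the element logged is what Chrome currently
// considers the largest contentful paint:
//
// new PerformanceObserver((list) => {
//   const last = pickLargest(list.getEntries());
//   console.log('LCP element:', last.element, 'at', last.startTime, 'ms');
// }).observe({ type: 'largest-contentful-paint', buffered: true });
```

Running the commented snippet on the Moz homepage with a mobile device emulated, versus a desktop viewport, is a quick way to see the “largest element” flip between the intro passage and the post titles.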
There are two reasons this ends up being a particularly unhelpful comparison metric.
1. Pages have very different structures
The importance of the “largest element” varies hugely from one page to another. Sometimes, it’s an insignificant text block, like with Moz above. Sometimes it’s the actual main feature of the page. Sometimes it’s a cookie overlay, like this example from Ebuyer: