Googlebot Crawls & Indexes Only First 15 MB Of HTML Content
In an update to Googlebot’s help document, Google quietly announced that Googlebot will crawl only the first 15 MB of a webpage. Anything after this cutoff is not included in ranking calculations.
According to the help document, Googlebot crawls the first 15 MB of content in an HTML file or supported text-based file, while resources referenced in the HTML, such as images and videos, are fetched separately. This left some in the SEO community wondering whether Googlebot would completely disregard text that fell below images at the cutoff in HTML files.
“It’s specific to the HTML file itself, like it’s written,” John Mueller, Google Search Advocate, clarified via Twitter. “Embedded resources/content pulled in with IMG tags is not a part of the HTML file.”
What This Means For SEO
To ensure it is weighted by Googlebot, important content must now be included near the top of webpages. This means code must be structured in a way that puts the SEO-relevant information within the first 15 MB of an HTML or supported text-based file.
It also means images and videos should be compressed and referenced as separate files rather than encoded directly into the HTML, whenever possible.
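To illustrate why inline encoding matters for the 15 MB limit, the sketch below compares a base64 data URI (which puts the image bytes inside the HTML file itself) with an ordinary linked image. The payload here is a made-up 1 MB blob, not a real image:

```python
import base64

# Hypothetical 1 MB binary image payload (illustrative, not a real file).
image_bytes = bytes(1024 * 1024)

# Inline embedding: base64 inflates the payload by ~33%, and every one of
# those bytes counts toward the HTML file's 15 MB crawl budget.
data_uri = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")
inline_tag = f'<img src="{data_uri}">'

# External reference: only the tag itself adds to the HTML size;
# Googlebot fetches the image as a separate resource.
linked_tag = '<img src="/images/hero.png">'

print(len(inline_tag))  # ~1.4 million bytes of HTML for one 1 MB image
print(len(linked_tag))  # a few dozen bytes
```

A handful of images embedded this way can consume a meaningful share of the crawl budget that a linked `<img>` tag would leave untouched.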
SEO best practices currently recommend keeping HTML pages to 100 KB or less, so many sites will be unaffected by this change. Page size can be checked with a variety of tools, including Google PageSpeed Insights.
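For a quick check without a third-party tool, the raw HTML size can be measured directly. The helper below is a minimal sketch; the function name and thresholds are illustrative, using the ~100 KB best-practice target and the 15 MB cutoff described above:

```python
import urllib.request

SOFT_LIMIT = 100 * 1024              # ~100 KB SEO best-practice target
GOOGLEBOT_CUTOFF = 15 * 1024 * 1024  # Googlebot's 15 MB crawl cutoff

def html_size_report(html: bytes) -> str:
    """Classify raw HTML size against the two thresholds."""
    size = len(html)
    if size > GOOGLEBOT_CUTOFF:
        return f"{size} bytes: exceeds Googlebot's 15 MB cutoff"
    if size > SOFT_LIMIT:
        return f"{size} bytes: over the ~100 KB best-practice target"
    return f"{size} bytes: within best-practice limits"

# Fetching only the HTML mirrors how the limit is applied: linked images,
# CSS, and JavaScript are separate resources and are not downloaded here.
# html = urllib.request.urlopen("https://example.com").read()
# print(html_size_report(html))
```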
In theory, it may sound worrisome that you could potentially have content on a page that doesn’t get used for indexing. In practice, however, 15 MB is a considerably large amount of HTML.
As Google states, resources such as images and videos are fetched separately. Based on Google’s wording, it sounds like this 15 MB cutoff applies to HTML only.
It would be difficult to go over that limit with HTML unless you were publishing entire books’ worth of text on a single page.
Should you have pages that exceed 15 MB of HTML, it’s likely you have underlying issues that need to be fixed anyway.
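The “entire books’ worth of text” claim holds up to rough arithmetic. Assuming plain HTML text at about one byte per character and an illustrative figure of 500,000 characters for a typical novel:

```python
# Rough scale check (assumptions: ~1 byte per character of HTML text,
# ~500,000 characters in a typical novel -- an illustrative figure).
cutoff_bytes = 15 * 1024 * 1024
chars_per_novel = 500_000

novels_per_page = cutoff_bytes / chars_per_novel
print(round(novels_per_page, 1))  # → 31.5, i.e. roughly 30 novels on one page
```

Under those assumptions, a single page would need to carry around 30 novels of text before the cutoff came into play.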
Source: Google Search Central
Featured Image: SNEHIT PHOTO/Shutterstock