Google Search Console 4. Coverage

Robert Crowther Sep 2022
Last Modified: Jan 2023


How Googlebots discover links

Now, this should be easy. A Googlebot visits the site/URL (the ‘property’), then follows the links through the site, gathering pages for Google’s index. It takes Google some time to index your site. They say a few days to a week; my experience is about four days.
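For a sense of what ‘following the links’ amounts to, here is a minimal Python sketch that fetches one page and lists its anchors, roughly the first step of a crawl. The start URL is a placeholder, not one of my properties.

```python
# A minimal sketch of link discovery: fetch a page and collect its anchors,
# roughly what a crawler does before queueing more URLs to visit.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class AnchorCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

start_url = "https://example.com/"   # placeholder property
with urlopen(start_url) as response:
    html = response.read().decode("utf-8", errors="replace")

collector = AnchorCollector()
collector.feed(html)

# Resolve relative links against the page URL, as a crawler would.
for href in collector.links:
    print(urljoin(start_url, href))
```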

I have sites for comparison. For example, a basic fifty‐page HTML site was automatically discovered (after a few months), then fully indexed. As advertised. I’m sure if you use a Content Management System set up for web‐search crawling, you’ll find you are ok.

Ways coverage can fail

But it doesn’t always work. Sometimes discovery is slow. For one classically linked site I own, Google took a year and a half to discover the links (no, I wasn’t too bothered about this, but you may be). And sometimes site coverage fails. No, I’d go further: it often fails. Some thoughts…

Small sites

Google say, if you have a small site, meaning fewer than 500 articles, they’ll find everything. I do not disagree, but that advice assumes sites perfect for search engines. In practice I have built small sites which Google failed to crawl. I wasn’t trying to cheat Google using link‐gathering or similar SEO trickery; the sites measured well. Not only that, but Google then sent me messages suggesting a sitemap would improve site indexing. Do not assume a small site will get good coverage.

Custom navigation

Beyond that, you may have a site with custom navigation. I don’t know what your custom navigation is; perhaps you have links in JavaScript, or no hierarchical structure? Anyway, in this case link coverage can be very bad. I had one site with over seventy potential links, all pages of the same structure. But only ten were found, and only five indexed. The site had no reported problems with page material. Indeed, if tools from Google and others were pointed at the site, they reported outstanding performance. Later, I constructed a theory as to why the site was covered poorly. I don’t think the theory is worth publishing, it’s too particular, but I do want to show this can happen. If a website has anything but classic HTML anchors and structure, it’s worth checking coverage.
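One quick test for custom navigation, as a sketch: fetch the raw HTML of a navigation page and see which plain anchors appear without any JavaScript running, then compare against the pages you expect to be discoverable. The URLs and the expected list below are placeholders.

```python
# Sketch: compare the URLs you expect Google to find against the plain
# <a href> anchors present in the raw HTML of a navigation page.
# No JavaScript runs here, so script-generated links will be reported missing.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class AnchorSet(HTMLParser):
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.found = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.found.add(urljoin(self.base, href))

nav_url = "https://example.com/"          # placeholder navigation page
expected = {                              # placeholder list of pages
    "https://example.com/articles/one/",
    "https://example.com/articles/two/",
}

with urlopen(nav_url) as response:
    parser = AnchorSet(nav_url)
    parser.feed(response.read().decode("utf-8", errors="replace"))

for url in sorted(expected - parser.found):
    print("not linked by a plain anchor:", url)
```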

Fixing poor coverage

Right…

First step

The first question: is the site ok? Can a visitor reach the links from the web? This may seem obvious, but the obvious is easily skipped. Go check.
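A quick way to run that check, sketched below: request each URL from outside and confirm it answers with a success status. The URLs are placeholders.

```python
# Sketch: confirm each page answers when requested from outside,
# before blaming Google for poor coverage.
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

urls = [
    "https://example.com/",
    "https://example.com/articles/one/",
]

for url in urls:
    try:
        # HEAD keeps the check light; some servers only accept GET.
        with urlopen(Request(url, method="HEAD")) as response:
            print(response.status, url)
    except HTTPError as err:
        print(err.code, url)
    except URLError as err:
        print("unreachable:", url, err.reason)
```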

Improving Coverage

Ok, the links can be visited from the web. For links found/discovered but not indexed, request indexing through the URL Inspection tool. For pages not found by Google, use a sitemap.
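For the sitemap route, the format itself is small. Here is a minimal sketch that writes a sitemap.xml following the Sitemaps protocol; the page URLs are placeholders, and the finished file is what you submit under Sitemaps in Search Console.

```python
# Sketch: write a minimal sitemap.xml per the Sitemaps protocol
# (https://www.sitemaps.org/protocol.html). Page URLs are placeholders.
import xml.etree.ElementTree as ET

pages = [
    "https://example.com/",
    "https://example.com/articles/one/",
    "https://example.com/articles/two/",
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```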

Deprecating coverage ‐ no method to remove URLs

From Google Help,

…there is no way to tell Googlebot to permanently forget a URL, although it will crawl it less and less often.

For instance, I once put a batch of malformed URLs into a sitemap. Before I could change the URLs, Google crawled the sitemap. Those URLs are now fixed in the console as 404 errors. I’m not too bothered, but I must remember that if I visit the console for that site, there will be nearly twenty 404 errors, and that those errors are not important. I could drop the site from the console, then register it again… I’ve not tried that.
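The lesson I take from that: check a sitemap’s URLs resolve before submitting it. A small sketch along those lines, assuming the sitemap sits at the conventional sitemap.xml path:

```python
# Sketch: check every <loc> in a sitemap resolves before Google crawls it,
# to avoid baking 404 errors into the coverage report.
import xml.etree.ElementTree as ET
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
tree = ET.parse("sitemap.xml")   # placeholder path to the local sitemap file

for loc in tree.findall(".//sm:url/sm:loc", NS):
    url = (loc.text or "").strip()
    try:
        with urlopen(Request(url, method="HEAD")) as response:
            status = response.status
    except HTTPError as err:
        status = err.code
    except URLError:
        status = "unreachable"
    print(status, url)
```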

Next

Link Rendering