Here are some interesting aspects of Googlebot's crawling of news websites that are useful to know when you want to optimise crawl efficiency.
Question: what if, in Search Console, we see for the same URL a "Crawl request: JSON" with a 200 response but a "Page indexing" report of 404? Let's say it's some lazy-loaded element. Should crawling of those URLs be blocked through robots.txt?
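For illustration only: if you did decide to disallow such endpoints, a minimal robots.txt sketch could look like the following, assuming the lazy-loaded JSON lives under a hypothetical /api/articles/ path (adjust to wherever your endpoints actually sit). Bear in mind that blocking resources Googlebot needs to render the page can itself cause content to drop out of the index, so it is worth verifying with the URL Inspection tool before committing to a rule like this.

    # Hypothetical example - /api/articles/ stands in for your real JSON endpoint path
    User-agent: Googlebot
    Disallow: /api/articles/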