The Good: New Developments Ease Compatibility:
Bing announced that it is adopting the new Microsoft Edge as the engine Bingbot uses to render pages.
Bingbot will now render all web pages using the same underlying web platform technology already used by Googlebot, Google Chrome, and other Chromium-based browsers.
Both leading search engines also announced that they will make their solution evergreen, committing to regularly update their web page rendering engine to the most recent stable version of their browser.
These regular updates will ensure support for the latest features, a significant leap from the previous versions.
Search Engines Are Simplifying SEO by Leveraging the Same Rendering Technology:
These developments from Google and Bing make it easier for web developers to ensure their websites and web content management systems work for both search engines without having to spend time investigating each rendering solution in depth.
With the exception of files that are robots.txt disallowed, the content developers see and experience in the new Microsoft Edge browser or in Google Chrome is what search engines will also see and experience.
For SEOs and developers, this saves time and money.
For example, there is:
- No longer a need to keep Google Chrome 41 around to test Googlebot.
- No longer a need to escalate rendering issues to Bing.
And the list goes on and on.
When a search engine downloads a web document and starts analyzing it, the first thing it does is determine the document type.
For HTML files, if the search engine has enough resources, it will attempt to render the document using its optimized browser rendering solution.
Search engines must download a JavaScript file before they can read and execute it.
If the file is robots.txt disallowed, they won't be able to.
Even when it is allowed, search engines must still succeed in downloading the file, which is subject to per-site crawl quotas and site availability issues.
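As a quick illustration (the path is hypothetical), a robots.txt rule like the one below would block the scripts a page needs, so the rendered content would never be seen:

```
# robots.txt (illustrative only)
User-agent: *
# Blocking the scripts directory prevents search engines from downloading
# and executing the JavaScript the page depends on for its content.
Disallow: /assets/js/
```

Conversely, leaving script and resource paths crawlable keeps the rendered content reachable.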
Search engines generally don't perform complex actions such as clicking a button, so it is best to reference the file with a basic HTML <script> tag, like the example shown earlier.
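As a minimal sketch of that advice (the file name is hypothetical), reference the script directly in the HTML rather than loading it only after a user action:

```html
<!-- Crawlable: the file is referenced directly in the HTML, so search
     engines can discover, download, and execute it while rendering. -->
<script src="/assets/js/product-details.js"></script>

<!-- Risky: the script is only injected after a click, an action
     search engines generally won't perform. -->
<button onclick="
  var s = document.createElement('script');
  s.src = '/assets/js/product-details.js';
  document.head.appendChild(s);
">Show details</button>
```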
When content isn't compatible with the rendering engine, the search engine isn't going to be able to read it, and if it can't read it, it isn't going to remember it.
With search engines now using the same rendering technology and committing to keep their browsers up to date, this should become easier to deal with in the future. Still, some limitations remain:
- Search engines normalize URLs containing a #, dropping all parameters after the # (except for the legacy #! standard); see the sketch after this list.
- Search engines generally don't click buttons or perform other complex actions.
- Search engines don't wait long periods of time for pages to render.
- Search engines don't output complex interactive web pages.
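As a rough illustration of the first two points (the routes and the loadCategory function are hypothetical), content reachable only through a # fragment or a click handler is easy for crawlers to miss, while a plain link to a real URL is not:

```html
<!-- Dropped: everything after the # is ignored during URL normalization,
     so /catalog#/shoes and /catalog look like the same page. -->
<a href="/catalog#/shoes">Shoes</a>

<!-- Not followed: search engines generally don't click buttons. -->
<button onclick="loadCategory('shoes')">Shoes</button>

<!-- Crawlable: a real URL the crawler can request directly. -->
<a href="/catalog/shoes">Shoes</a>
```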
The Uncertainty: For Optimal SEO, Use JS Practically, Sparingly, or Ideally Not at All:
Where possible, allow crawlers to access the HTML and text you want indexed with a single HTTP request.
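A minimal sketch of the difference (the markup and API endpoint are hypothetical): the indexable text either ships in the first HTML response or only appears after extra requests made by JavaScript:

```html
<!-- One request: the text to be indexed ships in the initial HTML. -->
<article>
  <h1>Blue trail running shoes</h1>
  <p>Lightweight shoes with a breathable mesh upper.</p>
</article>

<!-- Extra requests: the HTML is an empty shell, and the text only exists
     after the crawler downloads, executes, and waits for this script. -->
<article id="product"></article>
<script>
  fetch('/api/products/123')
    .then(function (res) { return res.json(); })
    .then(function (p) {
      document.getElementById('product').innerHTML =
        '<h1>' + p.name + '</h1><p>' + p.description + '</p>';
    });
</script>
```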
The good news is that Google and Bing both suggest there is no need to worry as long as you output nearly the same text and content as what your human visitors see.
- Google says:
- Bing says:
“When it comes to rendering content specifically for search engine crawlers, we inevitably get asked whether this is considered cloaking… and there is nothing scarier for the SEO community than getting penalized for cloaking … The good news is that as long as you make a good faith effort to return the same content to all visitors, with the only difference being the content is rendered on the server for bots and on the client for real users, this is acceptable and not considered cloaking.”
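As a rough sketch of the pattern that quote describes, and only a sketch: a hypothetical Node/Express handler that returns server-rendered HTML to known bots and the regular client-side app to everyone else, with the same underlying content in both cases.

```javascript
// Hypothetical dynamic-rendering sketch, not a drop-in implementation.
const express = require('express');
const path = require('path');
const app = express();

// Very simplified bot detection; real setups rely on maintained lists.
const BOT_PATTERN = /bingbot|googlebot/i;

// Stand-in for a real prerenderer (for example, a headless-browser service).
async function prerenderPage(url) {
  return '<html><body><h1>Server-rendered content for ' + url + '</h1></body></html>';
}

app.get('*', async (req, res) => {
  const userAgent = req.get('User-Agent') || '';
  if (BOT_PATTERN.test(userAgent)) {
    // Bots receive the content already rendered on the server.
    res.send(await prerenderPage(req.originalUrl));
  } else {
    // Real users receive the normal client-side application shell.
    res.sendFile(path.join(__dirname, 'public', 'index.html'));
  }
});

app.listen(3000);
```

The key point in both engines' guidance is that the bot-facing and user-facing content must match in good faith.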
Do or Don’t?
Be sure you understand the technical implications so that your documents can be properly indexed, or consult a technical SEO expert.
Search engines are incentivized to index your content to satisfy their customers.
If you come across issues, investigate them using the search engines' online webmaster tools or contact the search engines directly.