When a group of brilliant minds from Merkle set out to test Google's ability to crawl and index JavaScript functions, little did they know what surprises awaited them! A series of tests run by the team proved that Google is fully capable of executing and indexing JS functions. Google is also well equipped to render whole pages and read the DOM in order to index dynamically generated content.

For those of you still swooning from the news, let's break this down step by step.

We always knew (at least since 2008) that Google was capable of crawling JavaScript. Back in the day, that ability may have been limited. Today, Google can not only crawl and index the same kind of JS, but it can also render entire web pages, and it has been doing this for at least the last 12 months.

But does that mean you need to overhaul your SEO game completely? In most cases, no: the SEO strategies remain the same, since Googlebot respects SEO signals in the DOM. Any type of content inserted into the DOM can be fully indexed.

Are you getting a little overwhelmed? Then it's time to understand a bit of jargon!

What is the DOM?

Very simplistically speaking, the DOM is an application programming interface (API) for markup languages like HTML and XML. It preserves the logical structure of a document and defines the ways in which the document can be accessed and manipulated. The DOM is not tied to any one particular language (it is language agnostic), which is why it works for JS and dynamic content.

Another way to understand the importance of the DOM is to imagine a bridge that connects web pages to programming languages such as JavaScript. The content of the web page, as exposed across that bridge or interface, is the DOM.
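To make the idea concrete, here is a minimal sketch in plain JavaScript, with simple objects standing in for real DOM nodes (the tag names and text are illustrative, not from the tests):

```javascript
// The browser parses HTML into a tree of node objects: the DOM.
// Scripts read and manipulate that tree rather than the raw markup.
// Plain objects stand in for real DOM nodes so this runs anywhere.
const dom = {
  tagName: 'HTML',
  children: [
    { tagName: 'HEAD', children: [{ tagName: 'TITLE', text: 'Original title' }] },
    { tagName: 'BODY', children: [{ tagName: 'H1', text: 'Hello' }] },
  ],
};

// What a script (or Googlebot's renderer) "sees" is the tree, so a
// change here changes the page without touching the HTML source.
dom.children[0].children[0].text = 'Updated title';

console.log(dom.children[0].children[0].text);
```

In a real browser, the same idea is expressed through standard DOM calls such as `document.getElementById(...)` and `element.textContent`.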

Googlebot and the DOM: a newfound love

The team was pleasantly surprised to find that Google treats the DOM as a distinct entity. Googlebot is capable of reading the DOM and interpreting signals from dynamically inserted content. Elements of a web page such as title tags, header tags, meta tags and meta descriptions can all be part of the DOM that Googlebot accesses when crawling.

So the next time you are beating yourself up about ranking low in the SERPs in spite of having the right keyword density and URL backlinks, you may want to check the backend code of your website.

Testing Google’s appetite for JavaScript functions

After becoming certain of Googlebot's crawling powers, the team devised a number of tests to see how JS functions would be crawled and indexed. Although the tests were long and detailed enough to provide a sustained cure for insomnia, here are the five segments of the results that offer interesting insight into Google's crawling and indexing abilities:

i. JavaScript redirects – in the first phase, JavaScript redirects were tested while varying how the URLs were represented.
Result – the redirects were quickly picked up by Googlebot and treated as 301s. This essentially means the redirected URLs were swapped for the end-state URLs in Google's index.
JavaScript redirect, phase 2 – an authoritative page was taken and a new JavaScript redirect was implemented, directing users (and Googlebot) to a new page on the site with identical content.
Special mention – the original page ranked on page 1 of Google's SERPs.
Result – the original page was dropped from the index, and the new URL was indexed and ranked in the same position as the original. It would seem that JavaScript redirects can behave as permanent redirects from Google's vantage point.
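The redirect pattern being described can be sketched as follows; the URLs are placeholders, and the small `window` stub simply lets the snippet run outside a browser, where the real global `window.location` would be used:

```javascript
// Sketch of a JavaScript redirect of the kind tested (URLs are
// placeholders). The stub object stands in for the browser's global
// `window` so the snippet runs anywhere.
const window = { location: { href: 'https://example.com/old-page' } };

// Googlebot executes this assignment and, per the test results, treats
// the old URL much like a 301 to the new one.
window.location.href = 'https://example.com/new-page';

console.log(window.location.href);
```

The end-state URL is what remains in the index, replacing the redirecting one.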

ii. Testing JS links – several kinds of JS links were tested, including drop-down menus. A few standard JS links were also tested, most of which SEOs usually recommend converting to plain text. For example:

 Functions inside the href AVP (attribute-value pair)
 Functions outside of the href AVP
 Functions outside of the a tag but called within the href AVP
Result – Google did something that had rarely been seen before. Googlebot successfully crawled the links and followed them too, without any discrimination.
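One plausible reading of the three link variants above looks roughly like this (the URLs and the `goTo` function name are hypothetical stand-ins for whatever the test pages actually used):

```javascript
// Illustrative markup for the three tested link patterns. In each, the
// destination only exists behind JavaScript, yet per the results Google
// crawled and followed all of them.
const jsLinks = [
  // code placed directly inside the href attribute-value pair
  `<a href="javascript:document.location='/page-a'">Page A</a>`,
  // function wired up outside of the href AVP, via an event handler
  `<a href="#" onclick="document.location='/page-b'; return false;">Page B</a>`,
  // function defined outside the <a> element but called within the href AVP
  `<a href="javascript:goTo('/page-c')">Page C</a>`,
];

console.log(jsLinks.length);
```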

iii. Google’s reaction to dynamically inserted content – this includes content such as text, images and links on a web page. The tests were designed to check the bot's response in two different circumstances:

 The search engine’s ability to access and crawl dynamic content within the HTML source of a page
 The search engine’s ability to crawl dynamic content when the text is outside the HTML source of the page
Result – in both cases the dynamic content was crawled and indexed remarkably well, and the pages were then ranked for that content.
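A rough sketch of what "dynamically inserted content" means in practice; `body` is a tiny stand-in for a real DOM node, and the text and URL are placeholders:

```javascript
// Dynamically inserted content: nodes added by script after the HTML is
// parsed. In a browser you would call document.body.appendChild instead
// of this stand-in object.
const body = {
  children: [],
  appendChild(node) { this.children.push(node); },
};

// Neither node appears in the HTML source; they exist only in the DOM.
body.appendChild({ tagName: 'P', text: 'Paragraph inserted by JavaScript.' });
body.appendChild({ tagName: 'A', href: '/dynamic-page', text: 'Inserted link' });

console.log(body.children.length);
```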

iv. Dynamically inserted DOM elements – these can include title tags, meta tags, meta descriptions, canonical tags and meta robots tags. The main aim of this test was to check Googlebot's ability to successfully crawl and index dynamically inserted metadata and page elements.
Result – the tags were successfully crawled, respected and indexed, exactly as if they were HTML elements in the source code.
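The kind of dynamically inserted metadata described above can be sketched like this (all values are placeholders; `page` is a stand-in for the browser's `document`):

```javascript
// Sketch of metadata set via JavaScript rather than in the HTML source.
// In a real page you would set document.title and append <meta>/<link>
// nodes to document.head.
const page = { title: 'Fallback title from the HTML source', head: [] };

page.title = 'Title set by JavaScript';
page.head.push({ tag: 'meta', name: 'description', content: 'Description inserted via JS.' });
page.head.push({ tag: 'link', rel: 'canonical', href: 'https://example.com/canonical-url' });

console.log(page.title);
```

Per the test results, tags inserted this way were treated just like tags present in the source.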

However, further tests revealed that Google may disregard a tag in the HTML source code in favor of its counterpart in the DOM.

v. What about rel=”nofollow”?

In this case, the test's aim was to reveal Google's reaction to "nofollow" attributes. In one case a link-level "nofollow" attribute was inserted in the source code, in another it was inserted into the DOM, and a positive control was created with source code that had no "nofollow" attribute at all.
Result – the "nofollow" attribute in the source code prevented Google from following the link, but the "nofollow" attribute in the DOM was not effective: Google followed the link and the page was indexed.

The most plausible explanation for this exception is the order of access: Google had already crawled the link and queued the URL for indexing before the bot got to the JS function that adds the rel="nofollow" attribute.
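The order-of-access explanation can be sketched as a simple timeline (the link object and URL are illustrative):

```javascript
// The crawler reads the link before the script runs, so a nofollow
// added to the DOM afterwards arrives too late.
const link = { href: '/test-page', rel: '' };

// 1. Googlebot parses the source; rel is still empty, so the URL is queued.
const queuedForIndexing = link.rel !== 'nofollow';

// 2. Only later does the JS function add the attribute in the DOM.
link.rel = 'nofollow';

console.log(queuedForIndexing);
```

By the time the DOM carries the "nofollow", the URL is already in the indexing pipeline, which matches the observed result.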

Until now, SEO was mainly about website content and keywords. With Googlebot accessing and crawling HTML source code, JS links and AJAX, SEO has now been extended to dynamically inserted content as well.

Author Bio: Charlie Brown is the team director of a leading US SEO company. If you have any questions about your company's SEO, his team will have the answer for you. Over the years he has exemplified perfection in the fields of medical SEO, dental SEO and nursing home SEO, which has earned him the title of SEO guru among his contemporaries.