What is JavaScript?
JavaScript is a programming language that should not be confused with Java, which is a separate language. JavaScript is one of the fastest-growing languages in the world.
JavaScript (JS) was originally created to make static HTML web pages interactive in the browser.
JS has since been used mostly to add animations, effects, and dynamic behavior that make pages feel alive. This is inherently different from HTML and CSS, which define the basic structure of a page and how it should look.
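As a minimal sketch of the kind of effect JavaScript enables that static HTML and CSS cannot, the snippet below computes the opacity steps for a fade-in animation; the element id "banner" and the step count are illustrative assumptions, not part of any specific page.

```javascript
// Compute evenly spaced opacity values from 0 to 1 for a fade-in effect.
function fadeSteps(count) {
  const steps = [];
  for (let i = 1; i <= count; i++) steps.push(i / count);
  return steps;
}

// In a browser, the steps would drive a DOM update loop, e.g.:
// const el = document.getElementById('banner'); // "banner" is hypothetical
// fadeSteps(10).forEach((opacity, i) =>
//   setTimeout(() => { el.style.opacity = opacity; }, i * 50));
```

The pure helper keeps the timing logic separate from the DOM, which also makes this kind of effect easy to test outside a browser.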
This comprehensive JavaScript SEO guide aims to equip developers and SEO professionals with the knowledge to use JavaScript effectively.
Advantages and Disadvantages of JavaScript
Advantages of JavaScript
- Supported in all modern browsers.
- Allows for user-friendly, highly interactive websites.
- Relatively easy to debug and test because it is interpreted line by line.
- Usable by both front-end and back-end developers.
Disadvantages of JavaScript
- Google may be unable to render and index landing pages that depend on JavaScript.
- Requiring JavaScript to be rendered on a page can negatively impact two key areas:
- Site speed
- Search engine crawling and indexing
However, depending on the rendering method you use, you can minimize the impact on page load speed and ensure that the content is accessible to search engines for crawling and indexing.
JavaScript offers a rich interface and is relatively easy to implement; however, because the page changes fluidly with user interaction, it can be difficult for search engines to understand the page and assign a value to its content.
Search engines have their own limitations when processing web pages containing JavaScript content. Google performs an initial crawl of the page and indexes what it finds. Only once rendering resources become available do its bots return to render the JS on those pages. This means that content and links that rely on JavaScript run the risk of not being seen by search engines, potentially damaging the site's SEO.
Rendering JavaScript
Rendering means fetching the data that populates a page along with its visual layout templates and components, then combining them into HTML that a web browser can display. Here we need to introduce two basic concepts: server-side rendering and client-side rendering. It is crucial for any SEO who manages JavaScript websites to understand the difference between the two.
The traditional approach is server-side rendering: whoever requests the page, whether a browser or a search engine bot (crawler), receives HTML that fully describes the page. The browser or bot then only needs to download the attached assets (CSS, images, etc.) to display the page as designed. Because this is the traditional approach, search engines generally have no problem with server-side rendered content. Websites that operate this way are typically programmed in PHP, ASP, or Ruby and may use popular content management systems such as Kentico, WordPress, or Magento.
However, the more modern client-side rendering approach is very different, and many search engine bots struggle with it: on initial load, the server returns a nearly blank HTML page with little content. The included JavaScript code then sends a request to the server and uses the data it gets back to render the page.
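The client-side flow can be sketched as follows; the `/api/product` endpoint, the `app` element id, and the placeholder template syntax are all illustrative assumptions.

```javascript
// Minimal client-side rendering sketch: the server ships an almost empty
// shell, and this template function fills it in after the data arrives.
function renderInto(template, data) {
  // Replace {{key}} placeholders with values from the fetched data.
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => data[key] ?? '');
}

// In a browser, this would run after the blank shell loads:
// fetch('/api/product')                      // hypothetical JSON endpoint
//   .then((res) => res.json())
//   .then((data) => {
//     document.getElementById('app').innerHTML =
//       renderInto('<h1>{{name}}</h1><p>{{description}}</p>', data);
//   });
```

A bot that snapshots the page before the fetch completes sees only the empty shell, which is exactly the SEO risk described above.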
Rendering JavaScript with the DOM
JavaScript rendering takes place once the page's Document Object Model (DOM) has sufficiently loaded. The DOM is the basis of the dynamically created page: standard HTML pages are static and do not change, while dynamic pages have the ability to change and can be created on the fly.
As mentioned earlier, the initial HTML and its resources provide the foundation on which JavaScript later executes. JavaScript then makes changes to the DOM to produce the final HTML code of the web page. A search engine bot typically waits about three seconds before taking a snapshot of the generated HTML code.
How JavaScript Rendering Works with Googlebot
Googlebot processes JavaScript in three main stages, these are:
- Crawling
- Rendering
- Indexing
As shown in Google's diagram, Googlebot places pages in a queue for crawling and rendering. Googlebot takes a URL from the crawl queue and reads the robots.txt file to check whether crawling that URL is allowed.
Googlebot then parses the HTML response for other URLs and adds them to the crawl queue. When Googlebot's resources allow, a headless Chromium instance renders the page and executes the JavaScript. The rendered HTML is then used to index the page.
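The three stages above can be sketched as a simple queue model. This is an illustrative simplification, not Google's internal implementation: the `state` object, its `allowed` robots check, and the in-memory queues are all assumptions made for the example.

```javascript
// Rough sketch of the crawl/render/index pipeline described above.
function extractLinks(html) {
  // Only links present in the raw HTML are discovered before rendering;
  // links injected later by JavaScript would be missed at this stage.
  return [...html.matchAll(/href="([^"]+)"/g)].map((m) => m[1]);
}

function processUrl(url, state) {
  // Stage 1: crawl, but only if robots.txt allows the URL.
  if (!state.allowed(url)) return;
  const html = state.pages[url] || '';
  // Links in the raw HTML response join the crawl queue immediately...
  extractLinks(html).forEach((u) => state.crawlQueue.push(u));
  // ...while rendering is deferred until resources allow (the second wave).
  state.renderQueue.push(url);
}
```

The separation between `crawlQueue` and `renderQueue` is what creates the "two waves" discussed next: raw-HTML links are seen first, JS-generated content only later.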
Because Google runs two separate waves of indexing, some details can be missed along the way. For example, if important title tags and meta descriptions are not generated server-side, Google might miss them in the second wave, with negative effects on your organic visibility in the SERPs.
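To illustrate the risk above: metadata set only in client-side code exists only after the render wave, while metadata emitted in the server HTML is visible in the first wave. Both snippets below are illustrative assumptions, not a prescribed implementation.

```javascript
// Risky pattern: the title exists only after JavaScript runs in the browser.
// document.title = 'Blue Widget – Example Store'; // hypothetical page

// Safer pattern: build the tags into the server response so the first,
// HTML-only indexing wave already sees them.
function metaTags(page) {
  return [
    `<title>${page.title}</title>`,
    `<meta name="description" content="${page.description}">`,
  ].join('\n');
}
```

A server-rendered template would interpolate `metaTags(...)` into the `<head>` before the response is sent.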
What's the Difference Between Crawling and Indexing?
Crawling and indexing are two different things that are often confused in the SEO industry. Crawling is the process in which a search engine bot like Googlebot discovers and analyzes all the content or code on a web page. Indexing, on the other hand, means the page has been stored in the search engine's index and is eligible to appear on the Search Engine Results Pages (SERPs).
Although bots have improved at crawling and indexing, JavaScript makes the process much less efficient and more expensive. Content and links generated with JavaScript require significant effort to render into complete web pages. Search engines will crawl and index JavaScript-generated pages, but this will likely take longer than for a static page because of the back-and-forth between the crawler and the renderer. Compared to simply letting Googlebot index a page by downloading HTML and CSS files and extracting links, JavaScript adds an extra step, and the rendering process as a whole is much more complex.