How To get a Single Page Application Discovered/Indexed

Technical SEO

The Challenge

Relying heavily on JavaScript to render page content and provide navigation brings well-known risks in terms of technical SEO: indexing and linkability challenges. Here, I had to come up with a strategy to at least get the pages indexed before attempting any other SEO strategy.

The Solution

"Single-Page Applications (SPAs) are Web apps that load a single HTML page and dynamically update that page as the user interacts with the app." I kept in mind that SPAs are dynamically generated. The first thing I had to do was to let Google know about each URL. This is where I started a very in-depth XML Sitemap strategy, which we dubbed Ungoliant. After 3 months of intense XML optimization, log file analysis, and monitoring Google Search Console, we created our Title & Description Tag and Alt Text script, called Shelob.

The XML Sitemaps are all .xml.gz compressed and capped at 50 MB (uncompressed) or 50,000 URLs per file. A basic XML Sitemap structure for the client:

<!-- Sitemap location in robots.txt -->

Sitemap: https://domain/sitemap_index_tag_products.xml

<!-- Multiple sitemap management (compressed) -->

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc></loc>
    <lastmod>2004-10-01T18:23:17+00:00</lastmod>
  </sitemap>
  <sitemap>
    <loc></loc>
    <lastmod>2005-01-01</lastmod>
  </sitemap>
</sitemapindex>

<!-- Sitemap with a single URL & image information -->

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://domain/profile/danielv/</loc>
    <image:image>
      <image:loc>https://domain/services/rest/profiles/DanielV/photo/1494320904.76.jpg</image:loc>
    </image:image>
  </url>
  <url>
    <loc>https://domain/profile/aalberts/</loc>
    <image:image>
      <image:loc>https://domain/services/rest/profiles/aalberts/photo/1485806668.92.jpg</image:loc>
    </image:image>
  </url>
</urlset>
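The splitting and compression described above can be sketched roughly as follows. This is a minimal Python sketch under my own assumptions, not the actual Ungoliant script: the `build_sitemaps` helper, the `https://domain` base, and the file names are all placeholders.

```python
import gzip
from datetime import datetime, timezone

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
MAX_URLS = 50_000  # per-file cap from the sitemap protocol (alongside 50 MB uncompressed)

def build_sitemaps(urls, base="https://domain"):
    """Split `urls` into chunks of at most MAX_URLS and return
    [(filename, xml)] parts plus the sitemap-index XML that lists them."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S+00:00")
    parts = []
    for start in range(0, len(urls), MAX_URLS):
        chunk = urls[start:start + MAX_URLS]
        body = "".join(f"<url><loc>{u}</loc><lastmod>{now}</lastmod></url>"
                       for u in chunk)
        xml = (f'<?xml version="1.0" encoding="UTF-8"?>'
               f'<urlset xmlns="{SITEMAP_NS}">{body}</urlset>')
        parts.append((f"sitemap_{start // MAX_URLS + 1}.xml.gz", xml))
    entries = "".join(
        f"<sitemap><loc>{base}/{name}</loc><lastmod>{now}</lastmod></sitemap>"
        for name, _ in parts)
    index = (f'<?xml version="1.0" encoding="UTF-8"?>'
             f'<sitemapindex xmlns="{SITEMAP_NS}">{entries}</sitemapindex>')
    return parts, index

def write_gz(name, xml):
    # Each part is stored as .xml.gz, as in our setup.
    with gzip.open(name, "wt", encoding="utf-8") as f:
        f.write(xml)
```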

We do not use any Priority in our XML Sitemaps.

We rely on lastmod.

Having to mark some URLs as more important than others just didn't make sense, so we left it at lastmod.
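Formatting lastmod correctly matters, since it is the one signal we rely on. A small sketch of emitting it in the W3C datetime format the sitemap protocol expects (the `lastmod_tag` helper name is mine):

```python
from datetime import datetime, timezone

def lastmod_tag(modified_ts: float) -> str:
    """Format a Unix timestamp as a W3C-datetime <lastmod> element,
    the field we rely on instead of <priority>."""
    dt = datetime.fromtimestamp(modified_ts, tz=timezone.utc)
    return f"<lastmod>{dt.strftime('%Y-%m-%dT%H:%M:%S+00:00')}</lastmod>"
```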

Our next challenge was deciding how often we would generate the XML Sitemaps. Due to the large scale, we didn’t want to generate on a daily basis. If the platform expanded 10x, we might start running into problems.

After each XML Sitemap update, we submitted the XML Sitemap to Search Console and monitored our Log Files.
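At the time, Google also exposed a sitemap "ping" endpoint as an alternative to resubmitting through the Search Console UI (it has since been deprecated). A rough sketch; the helper names are mine:

```python
import urllib.request
from urllib.parse import urlencode

PING_ENDPOINT = "https://www.google.com/ping"

def sitemap_ping_url(sitemap_url: str) -> str:
    """Build the ping URL that tells Google a sitemap changed."""
    return f"{PING_ENDPOINT}?{urlencode({'sitemap': sitemap_url})}"

def ping(sitemap_url: str) -> int:
    # Fires the actual HTTP request; returns the response status code.
    with urllib.request.urlopen(sitemap_ping_url(sitemap_url)) as resp:
        return resp.status
```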

For the first month, Google crawled regularly with each update. After 2 months, this happened less often. (I guess they could start to form a pattern and had enough information to know when to recrawl certain URLs.)

We went from updating twice a week to updating once a week. Every time we updated the XML Sitemaps, we saw the Google crawler requesting our index file, but not requesting all of the child XML Sitemaps. By then we knew once a week was where we would keep it, and we stopped submitting manually to Search Console.
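Spotting which sitemap files Googlebot actually requested came from the log file analysis. A minimal sketch of that kind of check, assuming combined-format access logs (the regex and `googlebot_sitemap_hits` helper are illustrative, not our production tooling):

```python
import re
from collections import Counter

# Combined log format: IP - - [date] "METHOD path HTTP/x" status size "referer" "user-agent"
LOG_RE = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

def googlebot_sitemap_hits(lines):
    """Count Googlebot requests per sitemap path, to see whether the
    crawler fetched only the index file or every child sitemap."""
    hits = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m and "Googlebot" in m.group(2) and "sitemap" in m.group(1):
            hits[m.group(1)] += 1
    return hits
```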

Fixing some speed errors improved our crawl budget. Another test I did was on 4 new URLs: two were left untouched, and for the other two, I created backlinks from Pinterest. The two that received backlinks were recrawled and indexed by Google before the other two.

Increasing Total Clicks, Total Impressions & Average Position

Keep in mind, this client came to me to get indexed; strategies to rank higher will be put in place next.
We are facing a couple of problems with Structured Data. Because this is a Single Page Application, images, data, etc. keep moving around, and no URL is guaranteed to serve the same content every time. What I did was use the Data Highlighter in Search Console. We are currently busy writing a script to inject Structured Data, and we are also looking into a rel=canonical strategy.
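One possible shape for such an injection script is to build a schema.org JSON-LD payload server-side and drop it into the page as a script tag. A hedged sketch, not the script we are writing: the `product_jsonld` helper and its fields are placeholders chosen for illustration.

```python
import json

def product_jsonld(name: str, url: str, image: str) -> str:
    """Build a schema.org Product JSON-LD payload, wrapped in the
    <script type="application/ld+json"> tag the page would carry.
    Field names follow schema.org; the values are placeholders."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "url": url,
        "image": image,
    }
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'
```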

Project Details

Year: 2017
Services: SEO Consulting