How I sped up my site using Google's PageSpeed module

Having a beautiful website is important, but nobody's going to stick around to appreciate it if the content is too slow to load. Even seemingly simple static content can bring rendering to a crawl when performance is overlooked. With this in mind, I set out to deliver my site's content as fast as possible and learn the best practices to make it happen.

What is PageSpeed?

The PageSpeed module is a web server plugin, offered for both Apache and nginx. It does all the heavy lifting for you by optimizing resources automatically. This gives you the freedom to fine-tune your site in a way that works best for your specific use case, making for both a happy user and a happy developer.

Why PageSpeed?

There are many possible ways to optimize content delivery, but the PageSpeed module proved to be the simplest for me. Your definition of "simple" may differ; I had to recompile nginx from source to use this module, which can be tricky if you've never done it before.

Getting Started

Browse to the PageSpeed documentation for specific installation instructions for your web server. Since I used nginx, I'll outline the steps I took below. If you're using Apache, feel free to skip ahead to the Configuring PageSpeed section below.

I had to recompile nginx from source and remember to include my existing modules such as OpenSSL. I highly recommend you don't shut down your web server before attempting this in case you mess up your configuration and are left with a broken production environment while you figure out what you did wrong. Go ahead, ask me how I know.

First, get your preferred version of nginx. This may be a good time to update to a later version if you so choose.

$ wget https://nginx.org/download/nginx-<version>.tar.gz

Extract the archive and browse to the resulting directory.

$ tar zxvf nginx-<version>.tar.gz
$ cd nginx-<version>

Next, get a list of the modules your server is currently using. Your list will probably be quite long.

$ nginx -V
nginx version: nginx/<version>
built by gcc <version> 20150623 (Red Hat <version>) (GCC)
built with OpenSSL <version> 25 May 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx ... <stuff> ...

Take the configure arguments output from this command and pass it into the new configuration using the configure script provided with the nginx download.

$ ./configure --add-module=$HOME/ngx_pagespeed-<version>-stable ... <configure arguments go here>
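If you'd rather not copy the arguments by hand, a one-liner like this can pull them out of the `nginx -V` output (a sketch, shown here against a captured sample line so the effect is visible; note that nginx prints its build info on stderr):

```shell
# nginx -V writes its build info to stderr; piping it through sed strips
# the "configure arguments: " prefix, leaving just the flags to reuse.
sample='configure arguments: --prefix=/etc/nginx --with-http_ssl_module'
printf '%s\n' "$sample" | sed -n 's/^configure arguments: //p'
# against a live server you would run:
#   nginx -V 2>&1 | sed -n 's/^configure arguments: //p'
```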

Once configured, install nginx. Go have a duel or something.

$ make install

Before you touch anything else, make sure the install was successful. If it was, you're probably safe to restart nginx and load the new configuration.

$ nginx -t
$ sudo systemctl restart nginx

Configuring PageSpeed

With the installation out of the way, you can move onto the fun stuff. PageSpeed offers a plethora of configuration options which can be overwhelming at first, but initial setup can be simplified immensely by using one of a few base filter levels:

- PassThrough
- CoreFilters
- OptimizeForBandwidth

PassThrough disables all filters and allows you to set them individually. CoreFilters is the default level and the one I went with. Although OptimizeForBandwidth supposedly provides a stronger guarantee of safety, I found that CoreFilters performed better in my case according to the PageSpeed Insights tool.

In order to apply a filter, you must add a line to your web server configuration (in the server block of nginx.conf, for example). The order in which you list filters does not matter, since they are evaluated in the order given in the documentation.
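For example, picking a rewrite level explicitly looks something like this in an nginx server block (the cache path here is illustrative; choose one that suits your layout):

```nginx
pagespeed on;
# PageSpeed needs a writable directory for its rewritten-resource cache
pagespeed FileCachePath /var/ngx_pagespeed_cache;
# CoreFilters is the default, but stating it explicitly doesn't hurt
pagespeed RewriteLevel CoreFilters;
```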

Don't forget to restart your web server to apply the changes. Once added, it will take a few minutes for PageSpeed to apply your optimizations, so if you are using the PageSpeed Insights tool you may not see results immediately.

Other filters

There are a few other filters not included in the CoreFilters configuration that I found to be helpful for performance.

Since I am serving content via HTTPS exclusively, I added the MapOriginDomain and LoadFromFile directives, the latter of which is nginx-specific. Please note that LoadFromFile is intended to be used for static sites only, as mentioned in the risks section.

pagespeed MapOriginDomain "http://localhost" "";
pagespeed LoadFromFile "" <path on disk to content>;
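To make those placeholders concrete, here's what the pair might look like for a hypothetical site served at https://example.com with its files under /var/www/example (both values are made up for illustration):

```nginx
# Fetch resources for the HTTPS domain from the local HTTP origin,
# so PageSpeed doesn't have to loop back over TLS
pagespeed MapOriginDomain "http://localhost" "https://example.com";
# Read static files straight from disk instead of over HTTP
pagespeed LoadFromFile "https://example.com" "/var/www/example";
```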

I found that the remove_comments and collapse_whitespace filters did very little, but included them anyway so I don't send unnecessary bytes over the network.

pagespeed EnableFilters remove_comments;
pagespeed EnableFilters collapse_whitespace;
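For a rough idea of what these two filters do, here's a toy approximation in shell (not PageSpeed's actual implementation, which parses the HTML properly and can, for example, preserve IE conditional comments):

```shell
# Strip single-line HTML comments, then squeeze the whitespace
# between tags -- a crude imitation of the two filters above
printf '<!-- header -->\n<div>\n    <p>Hello</p>\n</div>\n' \
  | sed 's/<!--[^>]*-->//g' \
  | tr -d '\n' \
  | sed 's/>[[:space:]]*</></g'
# prints: <div><p>Hello</p></div>
```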

Lastly, PageSpeed Insights warned me about blocking CSS that was preventing above-the-fold content from rendering quickly. The prioritize_critical_css filter helped mitigate this. I noticed it refactoring my CSS to load certain styles first, and I was happy with the path it chose, but this filter is not for everyone and carries a moderate risk. See the Risks section for more info.

pagespeed EnableFilters prioritize_critical_css;

I'd say the results speak for themselves!

It's worth noting that this tool isn't perfect. Even with a high score I'm not guaranteed a fast page render, and not all pages will score the same. My front page doesn't score perfectly because of the big picture at the top, though it still scores in the 90s on both desktop and mobile. I've also noticed the scores can fluctuate, presumably due to the module trying different optimizations behind the scenes. Perhaps it requires a certain number of hits before content can be optimized, which would explain why my site scores in the 70s or 80s when PageSpeed Insights is first run after a deployment, but the numbers quickly improve on subsequent runs.

In any case, there are clear benefits to using this module versus going it alone. This simple optimization tool spared me from running around the internet trying to find a bunch of separate tools that yield the same end result.

Now that I've added Google Analytics, I can no longer get a perfect score because the cache time on analytics.js is only 2 hours. Ironically, Google refuses to exempt its own script from the audit and has said it won't fix the issue. Sad!

Was all this really necessary?

If I'm honest... no it wasn't. I have a relatively small and simple site that was already fast due to its minimalism, and few (if any) of my site's visitors would notice the best practices I'd missed. Regardless, I want my site to be as good as it can be and I value the learning process that got me to this point. Throughout that process I learned much about web development and I'm a more competent developer as a result. Next time I find one of our pages is slow at work, I'll know where to start looking.

Update (April 21, 2018): I'm no longer using this module because ~~I've done a thorough cost-benefit analysis and determined the benefits are negligible for my use case~~ I'm tired of recompiling nginx every time I do an update and haven't bothered to automate it like I should.