See the source code for this post.

Lots of service worker posts (like this one I wrote for the David Walsh blog) show you enough of a service worker to get started. Usually, you’re caching files. This is a great start! It improves your app’s performance, and with 20% of Americans experiencing smartphone dependence, it’s a great way to make sure users can access your app – regardless of their connection or network speed.

But what about API requests, and specifically GET requests? A service worker, along with the Cache Storage API, can also cache your GET requests to avoid unnecessary trips over the network. Let’s look at how we would do that.

Note: This post assumes a basic understanding of service workers. If you need a service worker explainer, check out this blog post.

Let’s take a look at our onfetch function in our service worker. It currently looks like this:

self.onfetch = function(event) {
    event.respondWith(
        (async function() {
            var cache = await caches.open(cacheName);
            var cachedFiles = await cache.match(event.request);
            if(cachedFiles) {
                return cachedFiles;
            } else {
                return fetch(event.request);
            }
        }())
    );
}

Briefly, this function intercepts a fetch request and searches for a match in the cache. If a match isn’t found, the request proceeds as usual. The problem here is that there’s nothing to go ahead and cache new data as it’s received. Let’s change that.

Our new else block looks like this:

/* ... */
else {
    var response = await fetch(event.request);
}

If this looks similar to what you’re doing in your client code, that’s because it is! Essentially, we recreate the original fetch, but this time within the service worker. Now we’ll go ahead and cache the response.

/* ... */
else {
    var response = await fetch(event.request);
    await cache.put(event.request, response.clone());
}

There are a couple of new things in here. First, we’re using cache.put instead of cache.add. cache.put lets us pass in a key-value pair, which matches the request to the appropriate response. You might also notice the response.clone(). The body of a response object can only be read once. This means that if you cache the response object itself, you’ll be able to return it, but your client won’t be able to access the body of the response. To keep that data accessible, we’ll go ahead and make a clone of the response and cache that instead, returning the original to the client.
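For contrast, here’s a minimal sketch of the two methods side by side (the /api/messages URL is just a stand-in): cache.add performs its own fetch and stores whatever comes back, while cache.put stores a response we already have on hand, which is why it fits our fetch handler.

// cache.add fetches the request itself and stores the result
await cache.add('/api/messages');

// cache.put stores a response we've already fetched;
// we cache a clone so the original body can still be returned to the client
var response = await fetch('/api/messages');
await cache.put('/api/messages', response.clone());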

Lastly, you return the response. So the full onfetch function looks like this:

self.onfetch = function(event) {
    event.respondWith(
        (async function() {
            var cache = await caches.open(cacheName);
            var cachedFiles = await cache.match(event.request);
            if(cachedFiles) {
                // serve the cached response if we have one
                return cachedFiles;
            } else {
                try {
                    // otherwise fetch it, cache a clone, and return the original
                    var response = await fetch(event.request);
                    await cache.put(event.request, response.clone());
                    return response;
                } catch(e) { /* ... */ }
            }
        }())
    );
}

There you have it! Now you’re ready to start dynamically caching API responses.

If you want to learn more about service workers, I hope you’ll head over to serviceworkerbook.com and sign up for my mailing list, and follow me on Twitter! You’ll be the first to know when my book, ‘Let’s Take This Offline’ is out!


Note: this blog post assumes a working knowledge of service workers. If you need a refresher, I recommend looking here.

There might be times you want to send back a custom response from your service worker, rather than going out over the network. For example, you might not have a certain asset cached while the user’s internet connection is down. In that case, you can’t go over the network to fetch the asset, so you might want to send a custom response back to the client.

Let’s take a look at how we might implement a custom response.

This project demonstrates how you would return a custom response from a service worker. The basic idea is to make a fetch request to an API (in this case FayePI, which you should definitely check out). The service worker, however, sits between the client making requests and the API receiving them.

In this case, the service worker intercepts the fetch request and sends back a custom response – rather than going across the network.

Let’s look at the service worker to see how that works.

A service worker’s fetch event listener usually looks like this:

self.onfetch = function(event) {
    event.respondWith(
        caches.match(event.request)
        .then(function(cachedFiles) {
            if(cachedFiles) {
                return cachedFiles;
            } else {
                // go get the files/assets over the network
                // probably something like this: `fetch(event.request)`
            }
        })
    )
}

Let’s change up what’s happening in the else-block of this code to return a custom response.

/* ... */
else {
    if(!event.request.url.includes(location.origin)) {
        var init = { "status" : 200 , "statusText" : "I am a custom service worker response!" };
        return new Response(null, init);
    }
}

You might be wondering why the if-check is included there. We only want this to affect outgoing requests to other origins, not requests for the app itself, which are likely requests for HTML files and other static assets. This quick check makes sure that the requested URL doesn’t match the URL of the app, so all of the static pages will still load properly.

Within that if-check, we are newing up a Response object. Here we go ahead and set the status code to 200 since it “worked” (that is, it reached our service worker and our service worker returned our custom response). We’ll also go ahead and change the status text, so we know that the response is actually coming from the service worker.
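Putting it all together, the full onfetch function might look something like this. This is just a sketch: the cache lookup comes from the earlier example, and falling back to fetch(event.request) for same-origin requests is one reasonable choice, not the only one.

self.onfetch = function(event) {
    event.respondWith(
        caches.match(event.request)
        .then(function(cachedFiles) {
            if(cachedFiles) {
                return cachedFiles;
            } else {
                if(!event.request.url.includes(location.origin)) {
                    // outgoing API request we can't fulfill, so send back our custom response
                    var init = { "status" : 200 , "statusText" : "I am a custom service worker response!" };
                    return new Response(null, init);
                }
                // same-origin request for the app's own assets, so try the network as usual
                return fetch(event.request);
            }
        })
    )
}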

And that’s about it! Now you know how to return a custom response from your service worker! Be sure to check back soon to learn about how to cache more than just static files!

If you want to learn more about service workers, I hope you’ll head over to serviceworkerbook.com and sign up for my mailing list, and follow me on Twitter! You’ll be the first to know when my book, ‘Let’s Take This Offline’ is out!


A lot of service worker examples show an install example that looks something like this:


self.oninstall = function(event) {
    caches.open('hard-coded-value')
    .then(function(cache) {
        cache.addAll([ /* ... */ ])
        .catch( /* ... */ )
    })
    .catch( /* ... */ )
}

Let’s do a quick overview of the above code. When a browser detects a service worker, a few events are fired, the first of which is an install event. That’s where our code comes in.

Our function creates a cache called hard-coded-value and then stores some files in it. There’s one small problem with this … our cache name is hard-coded!

There’s an issue with that. A browser will check in with your service worker (at least once every 24 hours) and re-initiate the install process, but only if the service worker file itself has changed. You might change your app’s CSS or JavaScript, but without a change to the service worker, the browser will never go and update your service worker. And if the service worker never gets updated, the changed files will never make it to your user’s browser!

Fortunately, there’s a pretty simple fix – we’ll version our cache. We could hard code a version number in the service worker, but our app actually already has one. So handy!

We’ll use our app’s version number from the package.json file to help. This method also requires (pun intended) us to be using webpack.

In our service worker, we’ll require our package.json file. We’ll grab the version number from the package.json and concatenate that to our cache name.


self.oninstall = function(event) {
    var version = require('./package.json').version;
    caches.open('hard-coded-valuev' + version)
    .then(function(cache) {
        cache.addAll([ /* ... */ ])
        .catch( /* ... */ )
    })
    .catch( /* ... */ )
}

Turns out, there’s actually an even better way to do this using some of webpack’s built-in tools. A problem with the code above is that your package.json file will get bundled into your service worker. That’s pretty unnecessary, and it’s going to increase the size of your bundle.

We’ll use DefinePlugin to make this even cleaner.

Let’s add a property to our DefinePlugin function in our webpack file. We’ll call it process.env.PACKAGEVERSION.

It might look like this:


var version = require('./package.json').version;
new webpack.DefinePlugin({
  'process.env.PACKAGEVERSION': JSON.stringify(version)
});

Source: webpack DefinePlugin
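For context, here’s roughly where that sits in a webpack config. This is only a sketch; the entry and output values are made up, and yours will differ.

// webpack.config.js (sketch)
var webpack = require('webpack');
var version = require('./package.json').version;

module.exports = {
  entry: './src/service-worker.js',        // hypothetical entry point
  output: { filename: 'service-worker.js' },
  plugins: [
    new webpack.DefinePlugin({
      'process.env.PACKAGEVERSION': JSON.stringify(version)
    })
  ]
};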

And then in our service worker instead of referencing version directly, we’ll use process.env.PACKAGEVERSION. It’ll look like this:


self.oninstall = function(event) {
    caches.open('hard-coded-valuev' + process.env.PACKAGEVERSION)
    .then(function(cache) {
        cache.addAll([ /* ... */ ])
        .catch( /* ... */ )
    })
    .catch( /* ... */ )
}

webpack will work behind the scenes for you and swap out process.env.PACKAGEVERSION for the actual version string. This solves the problem of the service worker never updating, and it handles it in a clean, simple way. Plus it will help us out when we need to clean up former caches. I’ll write about that next, so stay tuned!
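To make that substitution concrete, assuming a version of "1.2.3" in package.json, the bundled service worker effectively ends up containing something like this:

// after DefinePlugin runs, the bundled code contains the version string inline
caches.open('hard-coded-valuev' + "1.2.3")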

If you want to learn more about service workers, I hope you’ll head over to serviceworkerbook.com and sign up for my mailing list, and follow me on Twitter! You’ll be the first to know when my book, ‘Let’s Take This Offline’ is out!


Late last year I had the opportunity to appear on the CodeNewbies podcast! Saron and I talked about Offline First, and I have to say, she is so cool. She’s a great interviewer and the CodeNewbies community is absolutely fantastic.

If you haven’t heard the podcast, you can give it a listen here: link.

Offline First is the idea that apps should work in an offline capacity. It doesn’t have to mean that 100% of the app’s features work offline. Instead, it can just mean that an app uses a service worker and IndexedDB to store a user’s data in the event of connection loss.

The important takeaway from Offline First is that you consider, from the beginning: what will happen when my app loses connection? It’s important to frame this as when your app loses connection, because it will definitely happen to everyone.

Following along with the CodeNewbie chat was incredibly interesting. You can see the story here.

We know from studies that internet access correlates with income, but it was really interesting to see that play out in the chat:


It was really interesting seeing how many people were affected by a lack of internet access, and also their ingenuity in staying connected. Some people mentioned having to use libraries, coffee shops, or hot spots to learn to code:


Toward the end, the conversation shifted to how we have been affected by the internet. Most people said they thought internet access had been positive for them:


People mentioned being able to communicate in ways we’ve never been able to before.


Others mentioned how the internet has changed their lives for the better.


It was really interesting, though, to see people talking about how the internet has become a bit of a “double-edged sword”.


I’m so glad I had the opportunity to go on CodeNewbies and help shed some light on this issue. This conversation just puts real faces on the issue of inconsistent internet access. We know what the studies say, but seeing how many people throughout the conversation dealt with not having good access or a slow connection really drove the point home.

Thanks for having me, CodeNewbies!


Last week I was at KCDC and it was absolutely amazing. I was lucky enough to not only attend, but also to speak about something near and dear to my heart – Offline First techniques. This time I was talking about using IndexedDB.

I’ve written about the importance of offline availability here. Offline First offers a kind way to handle a user’s loss of connection – something over which they have little to no control. And even more than that – implementing Offline First techniques can help people, which I’ve written about here.

Before we dive into any code, let’s talk about IndexedDB’s structure. I think IndexedDB is more similar to a NoSQL database than to a traditional relational SQL database. So let’s delve into how IndexedDB data is stored.

Each web app can have multiple IndexedDB databases, which are made up of Object Stores. An Object Store is kind of like a table, but it doesn’t use columns. It’s a place to store Objects, which will probably look similar to a server response or ajax request. Later on, we’ll look at some examples.

Within an Object Store, there are Indexes. An Index is a defined property within an Object Store. It’s a little bit like a column, except that it doesn’t have, or require, a defined type.

Indexes have various properties that can be applied to them. A few common ones include autoIncrement, unique, and keyPath. When we apply autoIncrement to an Index, the Index will create something similar to an auto-incrementing ID in SQL. I mentioned before that Indexes don’t have type requirements, but every rule needs an exception: when we use autoIncrement, our Index will always be a number. However, we don’t need to specify a type for this to happen, nor do we need to pass in a number, as IndexedDB is going to handle that for us.

Another handy property we can apply is unique. unique allows us to specify whether or not an Index value can be duplicated. So for instance, if we wanted a list of email addresses, we might say that the email Index would be unique. This would keep any duplicates from being added. In the event that someone tries to add an email that is already in our Object Store, the email (and associated record) won’t be added. This helps protect the integrity of the data.
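As a quick sketch of what autoIncrement and unique look like in code (the database and store names here are made up for illustration), both are set up in the onupgradeneeded handler when the database is opened:

var openRequest = indexedDB.open('contacts-db', 1);    // hypothetical database name

openRequest.onupgradeneeded = function(event) {
    var db = event.target.result;

    // autoIncrement gives each record an auto-generated, numeric key
    var store = db.createObjectStore('contacts', { autoIncrement: true });

    // unique keeps duplicate email addresses from being added
    store.createIndex('email', 'email', { unique: true });
};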

The last property we’ll discuss here is keyPath. The keyPath property allows us to define a sort of “master” Index within our Object Store. It’s a little similar to a primary key in a SQL table, but like most Indexes, does not require a specified type. Defining an Index as a keyPath allows us to look up records directly by that property.

Using our email example from before, if we specified the email address Index as our Object Store’s keyPath, we could look up “ceo@bigdealcompany.com” directly and get that record. If the keyPath for a record were a name instead, we would need to look up “Patricia CEO” to find the email address. In other words, we can’t directly look up records via an Index that hasn’t been defined as a keyPath.
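Here’s a rough sketch of that direct lookup, assuming an Object Store that was created with email as its keyPath (again, the store name is made up for illustration):

// during onupgradeneeded: email is the keyPath for this store
db.createObjectStore('emails', { keyPath: 'email' });

// later, look up a record directly by its keyPath value
var transaction = db.transaction('emails', 'readonly');
var store = transaction.objectStore('emails');
var getRequest = store.get('ceo@bigdealcompany.com');

getRequest.onsuccess = function(event) {
    console.log(event.target.result);   // { email: 'ceo@bigdealcompany.com', name: 'Patricia CEO', ... }
};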

Now that we have a good idea of IndexedDB’s structure, we’ll look at some code in the next post.

If you want to learn more about offline tech, I hope you’ll head over to serviceworkerbook.com and sign up for my mailing list, and follow me on Twitter! You’ll be the first to know when my book, ‘Let’s Take This Offline’ is out!