Cache Storage: What's the difference between cache.add and cache.put?
If you want to learn everything you need to know about service workers, head over to serviceworkerbook.com and order my book, “Let’s Take This Offline”! Be sure to use `cache-add` for 10% off!
What is cache.add?
If you’ve ever looked at a service worker, you’re probably familiar with this little bit of code from an `oninstall` function:
cache.addAll([ /* ... */ ]);
Or perhaps you’ve seen:
cache.add(/* ... */);
Fortunately, the syntax makes it pretty clear what these two functions do: they add items to a cache. In one case they add multiple files, and in the other, only a single file or request.
What is cache.put?
Another function for adding items to a cache is `cache.put`. This function takes two parameters: the request and the response.
cache.put(request, response);
So besides the obvious difference, that `addAll` and `add` only accept a single parameter (though of varying types), what is the difference between `add` and `put`?
The difference between add and put
It’s a trick question! As it turns out, `add` and `addAll` use `put` under the hood.
It works like this:
1. For each request in the array (or a single request in the case of `add`), do a fetch
2. Add the response to a key-value pair list, with the request as the identifying key
3. Cache the request _and_ response together (by using put)
Source: Service Worker Spec
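The steps above can be sketched in plain JavaScript. This `FakeCache` is a hypothetical in-memory stand-in for the real Cache Storage API (and `stubFetch` stands in for `fetch`), just to illustrate that `add` and `addAll` are "fetch, then `put`":

```javascript
// A minimal in-memory sketch of the spec's algorithm. FakeCache and
// stubFetch are illustrative stand-ins, NOT the real Cache Storage API.
const stubFetch = (url) => Promise.resolve('response for ' + url);

class FakeCache {
    constructor() {
        this.store = new Map(); // request -> response key-value pairs
    }
    put(request, response) {
        // step 3: cache the request and response together
        this.store.set(request, response);
        return Promise.resolve();
    }
    add(request) {
        // steps 1-3 for a single request: fetch, then delegate to put
        return stubFetch(request).then((response) => this.put(request, response));
    }
    addAll(requests) {
        // the same thing, once per request in the array
        return Promise.all(requests.map((request) => this.add(request)));
    }
    match(request) {
        return Promise.resolve(this.store.get(request));
    }
}
```

The real `Cache#add` behaves the same way from the caller's perspective: it resolves with `undefined` once the fetched response has been stored via `put`.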
When to use add and when to use put
Usually in a service worker install function, you’ll see this:
// code: https://github.com/carmalou/cache-response-with-sw/blob/master/serviceworker.js
self.oninstall = function() {
    caches.open(cacheName).then(function(cache) {
        cache.addAll([ /* ... */ ])
            .then(/* ... */)
            .catch(/* ... */);
    });
};
It makes a lot of sense to use `addAll` over `put` in this case, because the user has never visited our site before. `addAll` is going to go fetch those responses and add them to your cache automatically, so we can use it during our install event and no longer need to go over the network to get our static assets like HTML files. Additionally, the syntax is very clean, so it’s very clear to future-us what we’re doing.
So, if we should use `addAll` in an install event, where should we use `put`?
Before we get into the `add` vs `put` debate, let’s look at our fetch event.
self.onfetch = function(event) {
    event.respondWith(
        caches.match(event.request)
            .then(function(cachedFiles) {
                if(cachedFiles) {
                    return cachedFiles;
                } else {
                    // should I use cache.put or cache.add??
                }
            })
    );
};
In the above code, we have a fetch event sitting between the client making the request and the network which will process the request. Our service worker will intercept our request and check for a match within our cache. If a match is found, the service worker will return the response. Otherwise, we’ll fall into the else.
Let’s first look at using `cache.add`:
/* ... */
if(cachedFiles) {
    return cachedFiles;
} else {
    return caches.open(cacheName)
        .then(function(cache) {
            return cache.add(event.request)
                .then(function() {
                    return fetch(event.request);
                });
        });
}
/* ... */
Briefly, should no match be found, we’ll fall into the else. Within the else, we’ll go ahead and open up the cache we want to store our data in, and then use `cache.add` to store off the response. At the end, we’ll go ahead and do a fetch over the network so the client can access the data.
But there’s a problem with this: we’ll end up needing to do two fetches! Because `cache.add` doesn’t actually return the response to the request it runs, we still need to do an additional fetch to get the response back to the client. Doing two fetches for the same data is redundant, and fortunately `put` makes it unnecessary.
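We can make that redundancy concrete with a stub fetch that counts calls. Everything here (the stub cache, the stub fetch, the URLs) is a hypothetical in-memory stand-in for the real APIs, just to show the two strategies side by side:

```javascript
// Counting "network" trips for the add-based vs put-based strategies.
// stubFetch and cache are illustrative stand-ins, NOT the real APIs.
let fetchCount = 0;
const stubFetch = (url) => {
    fetchCount += 1;
    return Promise.resolve('response for ' + url);
};

const store = new Map();
const cache = {
    // add() fetches internally but never hands the response back to the caller
    add: (url) => stubFetch(url).then((res) => { store.set(url, res); }),
    put: (url, res) => { store.set(url, res); return Promise.resolve(); },
};

// add-based strategy: cache.add fetches once, then we fetch AGAIN for the client
async function addStrategy(url) {
    await cache.add(url);
    return stubFetch(url); // the redundant second trip
}

// put-based strategy: one fetch, then we put the response in the cache ourselves
async function putStrategy(url) {
    const response = await stubFetch(url);
    await cache.put(url, response);
    return response;
}
```

Running `addStrategy` bumps the counter twice per request; `putStrategy` bumps it once.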
Let’s take a look at how we might rewrite this with `put`:
// code: https://github.com/carmalou/cache-response-with-sw/blob/master/serviceworker.js
/* ... */
if(cachedFiles) {
    return cachedFiles;
} else {
    return fetch(event.request)
        .then(function(response) {
            return caches.open(cacheName)
                .then(function(cache) {
                    return cache.put(event.request, response.clone());
                })
                .then(function() {
                    return response;
                });
        });
}
/* ... */
So we’ve sort of flipped the order of what we were doing before with `cache.add`. Instead of doing the fetch last, we go ahead and do it first. After that’s completed, we have a reference to the response. Next we can go ahead and get a reference to our cache with `caches.open` and use `put` to cache both the request and the response.
Note: Keep your eyes peeled for a blog post about why we are cloning the response!
Once we’ve finished caching the data, we can go ahead and return the response back to the client! And then next time the client makes this request, a trip over the network will no longer be necessary!
So, in conclusion, you should use `cache.add` or `cache.addAll` in situations where you don’t need that data to get back to the client, such as your install event. When you’re caching new requests, such as API requests, it’s better to use `cache.put`, because it allows you to cache the data and send it back to the client with a single network request.
Thanks so much for reading! If you liked this post, you should head on over to serviceworkerbook.com and buy my book! Be sure to use `cache-add` for 10% off!
How to cache API requests with a service worker
If you want to learn everything you need to know about service workers, head over to serviceworkerbook.com and order my book, “Let’s Take This Offline”! Be sure to use `cache-api-requests` for 10% off!
See the source code for this post.
Lots of service worker posts (like this one I wrote for the David Walsh blog) show you enough of a service worker to get started. Usually, you’re caching files. This is a great start! It improves your app’s performance, and with 20% of Americans experiencing smartphone dependence, it’s a great way to make sure users can access your app – regardless of their connection or network speed.
But what about requests, and specifically `GET` requests? A service worker, along with the Cache Storage API, can also cache your `GET` requests to avoid unnecessary trips over the network. Let’s look at how we would do that.
Note: This post assumes a basic understanding of service workers. If you need a service worker explainer, check out this blog post.
Let’s take a look at our `onfetch` function in our service worker. It currently looks like this:
self.onfetch = function(event) {
    event.respondWith(
        (async function() {
            var cache = await caches.open(cacheName);
            var cachedFiles = await cache.match(event.request);
            if(cachedFiles) {
                return cachedFiles;
            } else {
                return fetch(event.request);
            }
        }())
    );
};
Briefly, this function intercepts a fetch request and searches for a match in the cache. If a match isn’t found, the request proceeds as usual. The problem is that nothing ever caches the new data as it’s received. Let’s change that.
Our new function looks like this:
/* ... */
else {
    var response = await fetch(event.request);
}
If this looks similar to what you’re doing in your client code, that’s because it is! Essentially, we recreate the original fetch, but this time within the service worker. Now we’ll go ahead and cache the response.
/* ... */
else {
    var response = await fetch(event.request);
    await cache.put(event.request, response.clone());
}
There are a couple of new things in here. First, we are using `cache.put` over `cache.add`. `cache.put` allows a key-value pair to be passed in, which matches the request to the appropriate response. You might also notice the `response.clone()`. The body of a response object can only be used once. This means that if you cache the response object itself, you’ll be able to return it, but your client won’t be able to access the body of the response. To keep the data accessible, we make a clone of the response and cache that instead.
Lastly, you return the response. So the full `onfetch` function looks like this:
self.onfetch = function(event) {
    event.respondWith(
        (async function() {
            var cache = await caches.open(cacheName);
            var cachedFiles = await cache.match(event.request);
            if(cachedFiles) {
                return cachedFiles;
            } else {
                try {
                    var response = await fetch(event.request);
                    await cache.put(event.request, response.clone());
                    return response;
                } catch(e) { /* ... */ }
            }
        }())
    );
};
There you have it! Now you’re ready to start dynamically caching API responses.
Thanks so much for reading! If you liked this post, you should head on over to serviceworkerbook.com and buy my book! Be sure to use `cache-api-requests` for 10% off!
How to return a custom response from a service worker
If you want to learn everything you need to know about service workers, head over to serviceworkerbook.com and order my book, “Let’s Take This Offline”! Be sure to use `custom-response` for 10% off!
Note: this blog post assumes a working knowledge of service workers. If you need a refresher, I recommend looking here.
There might be times you want to send back a custom response from your service worker, rather than going out over the network. For example, you might not have a certain asset cached, while a user’s internet connection is simultaneously down. In this case, you can’t go over the network to fetch the asset, so you might want to send a custom response back to the client.
Let’s take a look at how we might implement a custom response.
This project demonstrates how you would return a custom response from a service worker. The basic idea is to make a fetch request to an API (in this case FayePI, which you should definitely check out). The service worker, however, sits between the client making requests and the API receiving them.
In this case, the service worker intercepts the fetch request and sends back a custom response – rather than going across the network.
Let’s look at the service worker to see how that works.
A service worker’s fetch event listener usually looks like this:
self.onfetch = function(event) {
    event.respondWith(
        caches.match(event.request)
            .then(function(cachedFiles) {
                if(cachedFiles) {
                    return cachedFiles;
                } else {
                    // go get the files/assets over the network
                    // probably something like this: `fetch(event.request)`
                }
            })
    );
};
Let’s change up what’s happening in the else-block of this code to return a custom response.
/* ... */
else {
    if(!event.request.url.includes(location.origin)) {
        var init = { "status": 200, "statusText": "I am a custom service worker response!" };
        return new Response(null, init);
    }
}
You might be wondering why the if-check is included there. We only want this to affect outgoing requests, rather than requests coming to the app, which are likely requests for HTML files and other assets. This quick check makes sure that the requested URL doesn’t match the origin of the app, so all of the static pages will still load properly.
Within that if-check, we are newing up a `Response` object. We go ahead and set the status code to 200, since it “worked” (that is, it reached our service worker and our service worker returned our custom response). We’ll also go ahead and change the status text, so we know that the response is actually coming from the service worker.
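You can poke at the `Response` constructor on its own outside a service worker, since Node 18+ also ships the same WHATWG class. A minimal sketch using the same init object as above:

```javascript
// Newing up a custom Response directly -- runs on its own in Node 18+,
// which provides the WHATWG Response class outside the browser.
const init = { "status": 200, "statusText": "I am a custom service worker response!" };
const customResponse = new Response(null, init);

// customResponse.status is 200, and customResponse.statusText carries the
// custom message, which is how you can tell this response came from code
// rather than from the network.
```

Passing `null` as the first argument gives the response an empty body; you could instead pass a string (for example `JSON.stringify({ offline: true })`) if the client expects data.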
And that’s about it! Now you know how to return a custom response from your service worker! Be sure to check back soon to learn about how to cache more than just static files!
Thanks so much for reading! If you liked this post, you should head on over to serviceworkerbook.com and buy my book! Be sure to use `custom-response` for 10% off!
How to version your service worker cache
If you want to learn everything you need to know about service workers, head over to serviceworkerbook.com and order my book, “Let’s Take This Offline”! Be sure to use `version-cache` for 10% off!
A lot of service worker examples show an install example that looks something like this:
self.oninstall = function(event) {
    caches.open('hard-coded-value')
        .then(function(cache) {
            cache.addAll([ /* ... */ ])
                .catch(/* ... */);
        })
        .catch(/* ... */);
};
Let’s do a quick overview of the above code. When a browser detects a service worker, a few events are fired, the first of which is an install event. That’s where our code comes in.
Our function creates a cache called `hard-coded-value` and then stores some files in it. There’s one small problem with this … our cache name is hard-coded!
There’s an issue with that. A browser will check in on the service worker file (at least every 24 hours) and re-run the install process, but only if the service worker itself has changed. You might change your app’s CSS or JavaScript, but without a change to the service worker, the browser will never go and update your service worker. And if the service worker never gets updated, the changed files will never make it to your user’s browser!
Fortunately, there’s a pretty simple fix – we’ll version our cache. We could hard code a version number in the service worker, but our app actually already has one. So handy!
We’ll use our app’s version number from the `package.json` file to help. This method also requires (pun intended) us to be using webpack.
In our service worker, we’ll require our `package.json` file. We’ll grab the version number from the `package.json` and concatenate it onto our cache name.
self.oninstall = function(event) {
    var version = require('./package.json').version;
    caches.open('hard-coded-value-v' + version)
        .then(function(cache) {
            cache.addAll([ /* ... */ ])
                .catch(/* ... */);
        })
        .catch(/* ... */);
};
Turns out, there’s actually an even better way, using some of webpack’s built-in tools. A problem with the above code is that your whole `package.json` file will get bundled into your service worker. That’s pretty unnecessary, and it’s going to increase the size of your bundle.
We’ll use `DefinePlugin` to make this even cleaner.
Let’s add a property to our `DefinePlugin` call in our webpack config. We’ll call it `process.env.PACKAGEVERSION`.
It might look like this:
var version = require('./package.json').version;

new webpack.DefinePlugin({
    'process.env.PACKAGEVERSION': JSON.stringify(version)
});
Source: webpack DefinePlugin
And then in our service worker, instead of referencing `version` directly, we’ll use `process.env.PACKAGEVERSION`. It’ll look like this:
self.oninstall = function(event) {
    caches.open('hard-coded-value-v' + process.env.PACKAGEVERSION)
        .then(function(cache) {
            cache.addAll([ /* ... */ ])
                .catch(/* ... */);
        })
        .catch(/* ... */);
};
webpack will work behind the scenes for you and swap out `process.env.PACKAGEVERSION` for the actual version string. This solves the problem of needing to update our service worker, and it handles it in a clean, simple way. Plus, it will help us out when we need to clean up former caches. I’ll write about that next, so stay tuned!
Thanks so much for reading! If you liked this post, you should head on over to serviceworkerbook.com and buy my book! Be sure to use `version-cache` for 10% off!
CodeNewbies: Offline First
Late last year I had the opportunity to appear on the CodeNewbies podcast! Saron and I talked about Offline First, and I have to say, she is so cool. She’s a great interviewer and the CodeNewbies community is absolutely fantastic.
If you haven’t heard the podcast, you can give it a listen here: link.
Offline First is the idea that apps should work in an offline capacity. It doesn’t have to mean that 100% of the app’s features work offline. Instead, it can just mean that an app uses a service worker and IndexedDB to store off someone’s data in the event of internet loss.
The important takeaway from Offline First is that you consider, at the beginning: what will happen when my app loses connection? It’s important to frame this as when your app loses connection, because it will definitely happen to everyone.
Following along with the CodeNewbie chat was incredibly interesting. You can see the story here.
We know from studies that internet access is based on income, but it was really interesting to see that play out in the chat:
A1: Very accessible if you can pay the company’s monopoly price. If not, well no. #CodeNewbie https://t.co/k8kQveqzTs
— SEArndt (@SEArndt) February 14, 2019
A1: I live in a small rural town in ne Minnesota. There are not a lot of options for internet here. It’s better in town and degrades quickly as you get farther out. Our local library has att hot spots folks can check out. And there’s an initiative to increase broad band...1/2
— T Z Drift (@tzottoladrift) February 14, 2019
It was really interesting seeing how many people were affected by a lack of internet, and also their ingenuity for staying connected. Some people mentioned having to use libraries, coffeeshops, or hot spots to learn to code:
A1: Very lucky to have fast Internet now, but had no Internet at home when I was first learning to code -- had to connect via mobile hotspot or mooch office wifi after work. 🙄 #CodeNewbie
— Chrissy Hunt (@whereischrissy) February 14, 2019
At one of my old careers I didn’t have fast/reliable internet at work so I would download conference talks and watch them during down time. #CodeNewbie
— Andrew Cook (@codingwcookie) February 14, 2019
We only got 3g at my house for awhile. most of the time it didn't work so I used my phone to learn to code initially. I snuck to the library whenever my mom would watch my kids.
— Christina Gorton (@coffeecraftcode) February 14, 2019
I sat in my car to interview for my first tech job so I'd have phone service 😅 #CodeNewbie
Toward the end, the conversation shifted to how we have been affected by the internet. Most people said they thought internet access had been positive for them:
A3) Internet is a double edged sword. Internet causes distractions, but it also helps to answer questions and learn new things, so I have to say it absolutely helps a lot!#codenewbie
— Matthew Jacobs (@codingmatty) February 14, 2019
People mentioned being able to communicate in ways we’ve never been able to before.
A3: Sped it up by a LOT - google and stackoverflow are invaluable resources, and being able to jump on a quick video chat with a coworker at any time means we can WFH and still remain productive and collaborative #CodeNewbie
— 👏 Andy 🍁 George 🙏 (@RealAndyGeorge) February 14, 2019
Others mentioned how the internet has changed their lives for the better.
A3: Without access to the internet I would not have learned any computer code whatsoever. The impact of the internet is so immense it's impossible to put into words. #CodeNewbie
— Eric Tillberg (@Thrillberg) February 14, 2019
It was really interesting, though, to see people talking about how internet has become a bit of a “double-edged sword”.
A4: I often wonder if I'd be just as good without it. Some positive things but also many negative. The time suck effect in particular. Taking away from other things that may be more valuable in the longer haul. #CodeNewbie
— Dennis Keim (@denniskeim) February 14, 2019
Q3: When connections are slow, my ability to access resources (SO, blogs, Slack) and productivity are impacted and I become very frustrated.
— El Otro Alondo (@abrewing) February 14, 2019
In terms of my life, the internet has been a source of distraction and major temptations but from a work and money making stand point the internet has given rise to opportunity after opportunity to make extra money. For my coding journey, the internet is vital. #codenewbie
— warrick bowman jr (+PST+) (@WarrickB_Qaatil) February 14, 2019
A3: Everything speeds up with the speed of the internet as we become more and more dependent on it! But it also stops abruptly the moment the internet goes out! #codeNewbie
— Aditya Bharadwaj (@Adibharadwaj26) February 14, 2019
I’m so glad I had the opportunity to go on CodeNewbies and help shed some light on this issue. This conversation just puts real faces on the issue of inconsistent internet access. We know what the studies say, but seeing how many people throughout the conversation dealt with not having good access or a slow connection really drove the point home.
Thanks for having me, CodeNewbies!