Introducing a library into a front-end project is a common task, and nearly every project depends on others. There are several ways to do it, and while it seems like an easy task, it can easily end up in a mess if not done with caution.
In this series, we’ll look into different ways to manage front-end dependencies. In this installment, we’ll start with the classical, tool-free approaches. We’ll look into the benefits as well as the downsides, and the potential remedies.
This is the written version of a screencast I’ve done for SitePoint. If you have a subscription, be sure to check that out too.
Download and unzip like in the ’00s
The most obvious method is to simply download, unzip, and reference the library. This is what most people just starting out with web development do. You don’t need any tools: just Google the name and click Download. Most projects still publish new versions this way.
Usually, the only things you need to decide on are the directory structure and which files to include. The simplest approach is to unzip the contents into a directory.
It’s easy and it just works. You don’t need to transpile anything, you don’t need to know how npm, Yarn, or any other package manager works, and you can be 100% sure that it will work the same way on other people’s machines.
Another advantage is that the libraries are hosted on the same host as your main files. If you are using HTTPS, only one handshake is needed, and it makes HTTP/2 server push possible.
Most of the disadvantages stem from the lack of self-documentation. If you just drop everything into a folder, it’s hard to see which version of each dependency you are using. Some libraries include the version number in the directory name, some ship a version file, but there is no unified convention.
Also, depending on the directory structure, it might be hard to see which dependencies you are using at all. If you don’t have a separate directory for the dependencies, they can easily intermingle with each other and with your application code; it can become a mess.
For these two problems, a strict directory structure like dependency_name/version might help. This way you can always see both the dependencies and their versions.
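For instance, a vendor directory following that scheme might look like this (the library names and versions here are purely illustrative):

```
vendor/
├── jquery/
│   └── 3.2.1/
│       └── jquery.min.js
└── lodash/
    └── 4.17.4/
        └── lodash.min.js
```

A glance at the tree tells you what you depend on and at which version, without any extra documentation.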
A more insidious problem is that you cannot tell whether any of the files have been modified. If someone modified a dependency, you can’t simply update to a new version. Such changes easily go unnoticed and undocumented, resulting in very hard-to-find errors.
Also, because of the lack of self-documentation, it’s hard to see where the files came from. Maybe the fellow developer who downloaded the library forked it first, patched something, and used that version. If you then update it from upstream, those changes are lost, and you are left with a broken app.
And lastly, you need to check these files into version control. I’m sure everybody has seen those giant commits where someone pushed hundreds of kilobytes of library code, which then lives in the history forever. It not only pollutes the statistics but also makes it hard to see the actual changes in the commit.
Using a CDN is pretty straightforward; just find the library and reference it in your HTML. This way you can offload some of the traffic to third-party servers, and if many sites are using the same files then the visitors don’t need to download them every time.
There are many CDNs to choose from. You can use Google Hosted Libraries if you find the library you need there. Only a handful of projects are hosted, but it’s still worth a look.
For a greater selection, you can search cdnjs, which offers most of the popular libraries. You can even request the inclusion of a lib once it has gained some popularity. Chances are you’ll find what you are looking for there.
Special purpose CDNs
Along with the general-purpose CDNs, there are some with a narrower scope. If you want to reference a file from GitHub, you can’t simply click Raw and use that URL, because it is served with the text/plain content type. This is where RawGit can help. Simply paste the GitHub URL into the top input box and use the appropriate resulting URL. Be sure not to use the production URL with a branch reference, because once the resource is cached, you won’t get newer versions. To reference a specific commit, which cannot change, go to the commit summary page and click Browse files. If you use the file URLs from there, you can safely use the production URL from RawGit.
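To illustrate (the user, repo, and file names below are placeholders), the three URL forms look roughly like this:

```
https://github.com/user/repo/blob/master/script.js          (GitHub page; Raw serves text/plain)
https://rawgit.com/user/repo/master/script.js               (RawGit development URL)
https://cdn.rawgit.com/user/repo/<commit-hash>/script.js    (RawGit production URL, safe to cache)
```

Because a commit hash can never change, the production URL pinned to it can be cached indefinitely without serving stale content by surprise.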
If you are not using npm but would like to reference a file from its registry, you can use unpkg (formerly npmcdn). To find the URL, open https://unpkg.com/&lt;package&gt;, select the version at the top right corner, and browse the files. Since it’s tied to npm, you can find every package and version there.
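For example, a versioned file URL on unpkg has this shape (the package and version here are just an illustration):

```
https://unpkg.com/jquery@3.2.1/dist/jquery.min.js
```

Pinning the version in the URL, as above, is the safer choice for production: the file it points to will never change underneath you.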
Including scripts from third-party sources brings some risk. The visitors’ browsers trust these contents as if they came from your site. But since you don’t control the responses from the CDN, using one introduces a security risk.
This is where Subresource Integrity hashes (SRI hashes for short) come in handy. Instead of just referencing the resource in a script tag like this:
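For instance (the URL here is a placeholder, not a real CDN resource):

```html
<script src="https://some-cdn.example.com/library/1.0.0/library.min.js"></script>
```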
you can also specify a hash that identifies the content:
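A sketch of what that looks like (both the URL and the integrity value below are placeholders; the real value is the base64-encoded sha384 digest of the exact file you reference):

```html
<!-- crossorigin="anonymous" is required for integrity checks
     on cross-origin resources. -->
<script src="https://some-cdn.example.com/library/1.0.0/library.min.js"
        integrity="sha384-PLACEHOLDER+Base64EncodedSha384DigestOfTheFileGoesHere"
        crossorigin="anonymous"></script>
```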
If the CDN is compromised and starts sending malicious content, the browser simply won’t load it. Your site might be down, but its security is not compromised.
As with any new web technology, browser compatibility is a crucial point. Currently, Firefox, Chrome, and Opera have full support, while Safari and Edge lag behind.
The good news is that it is fully backwards-compatible. If a visitor’s browser does not support the check, it will simply accept all content, just as if you hadn’t specified the hash. Given that, you should always add it.
Generating SRI hashes
The easiest way is to go to srihash.org, enter the script URL, click Hash!, and use the resulting script tag. If you don’t want to use a third-party website, or your content is not publicly available, you can use OpenSSL to generate the hash yourself, with the command taken from that site:
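A minimal sketch of that command. The file name is a placeholder; here a dummy file is created first so the example is self-contained, but in practice you would hash the actual library file you reference:

```shell
# Stand-in for the downloaded library file.
printf 'console.log("demo");' > library.min.js

# sha384-hash the file, then base64-encode the raw (binary) digest.
hash=$(openssl dgst -sha384 -binary library.min.js | openssl base64 -A)
echo "sha384-$hash"
```

The printed `sha384-…` value is what goes into the script tag’s `integrity` attribute.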
The obvious upside of simply referencing content is that it’s the easiest solution of all. You don’t need to download or host anything, and you can easily confine simple projects to one or two files. Even with SRI hashes, you can just find what you need, paste the URL, and insert the tag. And with the special-purpose CDNs, chances are you’ll easily find what you are looking for.
Using a different infrastructure to host the bigger files saves bandwidth. You don’t need to serve these files yourself, as you can offload that for free. Also, the content may already be cached: if many sites use the same CDNs, the visitors don’t need to download the files every time.
The script tag is also self-documenting. It depends on the CDN, but generally it’s quite easy to tell which projects and which versions are in use, and where to grab the code. And the code cannot be modified locally, preventing the problems discussed above.
Apart from the upsides, CDNs have a serious downside, and it’s the reason many people advise against them. If you reference critical resources from a CDN, your site’s uptime is tied to that third party. Simply put: if the CDN is down, your site is down too. And if you use multiple providers, your reliability degrades even further, since an outage at any one of them affects you.
This happened to me once: a CDN went down just before a presentation, right when we wanted to demo the app.
You can mitigate this to some degree by introducing a fallback:
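A common pattern, using jQuery as an example (the version and the local path below are placeholders):

```html
<script src="https://code.jquery.com/jquery-3.2.1.min.js"></script>
<script>
  // If the CDN request failed, window.jQuery is undefined;
  // fall back to a copy served from our own host.
  window.jQuery || document.write(
    '<script src="/js/vendor/jquery-3.2.1.min.js"><\/script>'
  );
</script>
```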
The mechanics are simple: if jQuery did not load, the script adds a local fallback. This way, in the unlikely event that the CDN is down, your site serves the library itself and the visitors won’t even notice.
There are two problems with this approach. First, if the CDN hangs instead of returning an error, the browser blocks while waiting for the script, and your site freezes. That’s a major flaw, since you can’t predict how the third-party server will behave, and there is no way to handle this scenario at the moment. Second, there is no universal way to tell whether a library has loaded successfully. In the case of jQuery, you can check the existence of a global variable; but for many libraries it’s harder, and the check may even change between versions.
Another downside of CDNs is that you can’t push content from them. If the visitor doesn’t have the library cached, they need additional round trips to a different host.
In this post, we discussed the classical ways to introduce front-end dependencies. While you should use neither in a critical system, both downloaded libs and CDNs are ubiquitous even today.
This part of the series focuses on methods not relying on any tools. In the second installment, we’ll focus on more modern approaches to dependency management.