Let’s Encrypt provides an easy and automatable way to get valid, trusted SSL certificates for your webserver. Traditionally, getting a certificate meant registering with a trusted provider, validating that you really own the domain, installing the cert on your server, and then repeating the whole process every 1-3 years. This proved to be a fragile and insecure practice.
In contrast, with Let’s Encrypt you configure your server so that it can request and install the certificate without manual intervention. As the validity is capped at 90 days, renewal happens automatically every few months.
In effect, Let’s Encrypt makes certificate issuance and management more of a configuration problem than a business one. And as such, you need to know the technical details to successfully configure certbot so that the services you operate can get, renew, and use certificates. Gone are the days when you just emailed the CTO to get a new cert and he sent you one.
In this and the following posts, you’ll learn the basics of Let’s Encrypt verification, and how to handle more challenging scenarios.
1. The basic cases
Let’s Encrypt’s certbot works with Apache and Nginx out of the box for most configurations. If you use either of them, use the appropriate authenticator, and you are practically done setting up HTTPS.
It works by reading the config file and making the necessary changes automatically. The only manual task is to add a crontab entry to run the renewal script every day.
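For example, a crontab entry like the following runs the renewal check twice a day (the certbot path and the schedule are placeholders; adjust them to your system):

```shell
# Run certbot's renewal check twice a day. Certbot only renews
# certificates that are close to expiry, so frequent runs are safe.
0 3,15 * * * /usr/bin/certbot renew --quiet
```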
But this approach won’t work in some cases. If you use a custom backend, like Node.js, to handle HTTP, then certbot has no way of configuring it automatically. Or the Apache/Nginx config may be too complex for certbot to figure out the necessary changes.
Or you don’t want some third-party programs to mess with your configs.
These are all common cases when Let’s Encrypt won’t work out-of-the-box, and you need just a bit more planning to make it work.
2. Terminate SSL before the server
The easiest, and in most cases the best, approach is to terminate SSL before the server. If you use some kind of load balancer, chances are it supports this.
In a nutshell, add a server (or a service) that runs a web server, configure Let’s Encrypt on that, and let the connection to your real front-end servers go unencrypted. This is a simple solution, as all the burden of encryption and certificate management is moved out of your app, and in some cases you can use the autoconfigure plugins of certbot.
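As a sketch, a terminating Nginx proxy can look like this (the domain, certificate paths, and backend address are placeholders, and this assumes certbot runs on the proxy machine):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Certificates obtained by certbot on this machine
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # Forward decrypted traffic to the real backend over the
        # internal network
        proxy_pass http://10.0.0.5:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```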
In the case of having a cluster of servers behind a load balancer (for example, ELB with AutoScaling on AWS), handling certificates at the single ingress point simplifies things greatly.
The obvious downside is that the connection is not encrypted from end-to-end. If all your servers are in a closed ecosystem, like AWS or Google Cloud, this might not be a problem. But routing the unencrypted traffic through the public internet is something you need to avoid.
As a rule of thumb, if you have a cluster within a single ecosystem, terminate SSL at the load balancer. But if you have a single server, it brings more problems than it solves.
3. Configure your server for http auth
Let’s Encrypt uses the ACME protocol (Automatic Certificate Management Environment; no connection to the Coyote and the Road Runner cartoons).
The HTTP auth works like this:
- Certbot places a file in a well-known directory on your server
- Then Let’s Encrypt’s validation server tries to fetch that file over plain HTTP
- If the fetch is successful, you have proved ownership of the domain and get the certificate
In effect, all you need to do is serve that file on that URL.
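The URL in question has a fixed shape; the domain and token below are placeholders:

```shell
DOMAIN="example.com"   # placeholder: your domain
TOKEN="some-token"     # placeholder: issued by Let's Encrypt per request
# During validation, Let's Encrypt's server fetches this URL; if your
# server returns the challenge file's contents here, the HTTP challenge
# can succeed.
echo "http://${DOMAIN}/.well-known/acme-challenge/${TOKEN}"
```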
That’s why it works with simple Apache and Nginx configs. Certbot can read where the docroot is and place the file there, so it will be available to the validation server.
For a custom server, you need to tell certbot where a publicly served path is, and, after getting a certificate, how to restart the server.
The easiest way is to use the webroot plugin, pointing it at a directory your server already serves publicly.
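A minimal sketch of the webroot approach; the domain and the webroot directory are placeholders:

```shell
# Ask certbot for a certificate using the webroot plugin.
# -w points at a directory the running server already serves publicly;
# certbot drops the challenge file under .well-known/acme-challenge/
# inside it.
certbot certonly --webroot \
  -w /var/www/example \
  -d example.com -d www.example.com
```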
And to renew:
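A hedged example of the renewal command; `certbot renew` reuses the options saved at issuance, and the service name in the deploy hook is a placeholder for however you restart your own server:

```shell
# Renew every certificate that is close to expiry, then restart the
# backend so it picks up the new certificate files.
certbot renew --deploy-hook "systemctl restart my-node-app"
```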
If you don’t have a public path and you can’t easily configure one, then things get interesting, and we’ll cover those scenarios in later posts.
4. Use DNS challenge
Let’s Encrypt’s DNS challenge is a convenient way of authorization that does not require config changes in the servers. It works by putting a TXT record on your domain’s DNS.
But modifying DNS is a nontrivial task. Some services, for example AWS Route 53, offer an API that you can use in hook scripts. But if your DNS provider does not, you are out of luck automating it.
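With the official Route 53 plugin (certbot-dns-route53, installed separately), for example, issuance can be fully automated; the domain is a placeholder, and AWS credentials must be available to the instance:

```shell
# The plugin creates the _acme-challenge TXT record via the AWS API,
# waits for it to propagate, and removes it after validation. The DNS
# challenge also allows wildcard certificates, which the HTTP challenge
# does not.
certbot certonly --dns-route53 -d example.com -d '*.example.com'
```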
Another downside is that even if the process is automated via an API, you need to supply API keys to the instance itself. In contrast, the HTTP challenge does not require access to any secrets (except for the certificate itself, but if an attacker gains control of the instance, he can request one anyway). And AFAIK, DNS APIs usually can’t be locked down to allow only reconfiguring TXT records. This means that if an attacker gains access to the API keys, he can reconfigure the whole domain, opening up a whole lot of attack vectors.
Tip: use this only if the HTTP auth is difficult for some reason.
5. (Disabled now) TLS-SNI challenge
This was a way to cope with difficult HTTP redirection scenarios, but it is disabled now. If you google around, you’ll find articles and answers describing and recommending it, but know that they are no longer relevant.