
Using letsencrypt with Zentyal

In this article, I will describe the steps I took to obtain and manage TLS certificates for a Zentyal server running in my homelab.


On my home network, I run a single virtualized instance of Zentyal Developers Edition to act as an internal DNS server. Let me make a few things clear about this server up front:

  • I am in no sense fully utilizing Zentyal. Zentyal can provide a multitude of services to a LAN, including acting as a Domain Controller, DHCP server, Mail server (with optional filtering and a Web Mail interface), NTP service, RADIUS, VPN services and more. I’m using it for DNS, which arguably isn’t even scratching the surface of Zentyal’s capabilities.
  • There is a dearth of Open Source DNS server offerings available for small networks that would fit my ideal requirements. Those would include:
    • Providing an API service (hopefully compatible with the API for something like AWS Route 53) that would support automation.
    • Providing a web user interface for maintaining DNS.
    • Including the ability to manage zones across providers. For example, I want to manage and serve an internal zone (say, “home.example.com”) within my home network, while the parent zone (“example.com”) resides in an external DNS server or provider.

One of the things I’ve been trying to do within my home networking projects is to use TLS certificates from an actual recognized “official” issuer. This is possible because I own the domain I’m using for internal names, and free TLS providers like LetsEncrypt and ZeroSSL support the use of DNS TXT records for issuer challenges. What this means is that rather than authenticating a certificate request by issuing an HTTP request to a webserver on the target domain, the issuer

  • generates a text secret that it returns to the requester,
  • sleeps for a bit to allow the requester to store the secret as a TXT record in DNS and then
  • issues a DNS lookup for a specific TXT record and ensures that the value returned matches.
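
The value the issuer looks up is defined by the ACME specification (RFC 8555): the base64url-encoded SHA-256 digest of the "key authorization", which is the challenge token joined to the account key's thumbprint with a dot. A minimal sketch in Python (the function name is mine, not part of any library):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """Compute the TXT record value for an ACME DNS-01 challenge:
    base64url(SHA-256(token "." account-key-thumbprint)), without
    padding, per RFC 8555."""
    key_authorization = f"{token}.{account_thumbprint}".encode("ascii")
    digest = hashlib.sha256(key_authorization).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

The client publishes this value as a TXT record at _acme-challenge.&lt;domain&gt; before the issuer performs its lookup.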

ACME is a protocol for requesting, obtaining and renewing TLS certificates. The protocol is used by many certificate authorities and services, and has been implemented by many certificate management system vendors within their products. There are a number of client tools that can be used to interact with a service that supports the ACME protocol; the most well-known of these is certbot from the Electronic Frontier Foundation. While certbot is a fine tool for the job, it has a potential drawback in that it is written in Python, and thus requires a Python interpreter to work. An alternative client, acme.sh, implemented entirely as a bash shell script, offers similar capabilities with a simpler deployment process.

Zentyal’s own documentation for using LetsEncrypt TLS for its web administration interface is (sadly, like most of their documentation) terse and a bit incomplete. I was able to find a snippet for using acme.sh with GoDaddy on a Zentyal server, and adapt it to use AWS Route 53, my DNS provider.


Log into the Zentyal server using ssh. Use an account with administrator privileges (i.e., one that can run sudo).

You can either run the quick install as outlined in the acme.sh README, or clone the project source and run the installer. It shouldn’t make a difference which you use; if you choose to clone the project, you should make sure that git and socat are installed on the server first:

apt install -y git socat
cd /tmp
git clone https://github.com/acmesh-official/acme.sh.git
cd acme.sh
./acme.sh --install --accountemail <YOUR EMAIL>

After the installation, you will need to restart your root shell session to pick up changes that the install process makes to the root .bashrc. The simplest way to do this is to exit the shell and restart it with sudo again.

If everything went well, you will now have a ~/.acme.sh/ directory containing the installed script, and a shell alias (acme.sh) that executes the script.

AWS Credentials

There are various tutorials for setting up an IAM user with access keys and policy rights to interact with Route 53. The bottom line is that before you can request certificates, you need to set these environment variables:

export AWS_ACCESS_KEY_ID="<your access key ID>"
export AWS_SECRET_ACCESS_KEY="<your secret access key>"
export HOST_FQDN="<your fully qualified domain name>"
export ACCOUNT_EMAIL="<your email address>"
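
Since acme.sh will fail in confusing ways if any of these are empty, it’s worth checking them up front. A small sketch (the function name is my own; the variable list matches the exports above):

```shell
require_env() {
  # Return non-zero if any named environment variable is unset or empty.
  for v in "$@"; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "error: environment variable $v is not set" >&2
      return 1
    fi
  done
}

# Example: require_env AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY HOST_FQDN ACCOUNT_EMAIL
```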

Choosing a CA

acme.sh is configured to use ZeroSSL by default. If you want to use that issuer, you should consult the acme.sh documentation, which covers the initial steps needed to register your ZeroSSL credentials with the script.

I decided to use LetsEncrypt, and to make it the default issuer. To do this, run:

acme.sh --set-default-ca --server letsencrypt

Obtaining a certificate

Assuming you set the environment variables above, you can obtain a certificate by running:

acme.sh --issue -d ${HOST_FQDN} --dns dns_aws --ocsp-must-staple \
   --keylength 4096 --force

The script will process for a while, and the result should be a brand new certificate.
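
You can sanity-check the new certificate with openssl. The helper below is my own; the commented path assumes acme.sh’s usual certificate home under ~/.acme.sh/, so adjust it to your layout:

```shell
cert_info() {
  # Print the subject and validity window of a PEM certificate.
  openssl x509 -in "$1" -noout -subject -dates
}

# Example: cert_info ~/.acme.sh/${HOST_FQDN}/${HOST_FQDN}.cer
```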

Installing the certificate in Zentyal

This is the tricky part. acme.sh has automation that will keep your certificates up to date, but you need to use the script to do the installation. Fortunately, there’s a way to do this:

acme.sh --install-cert -d ${HOST_FQDN}  \
  --reloadcmd "cat /root/${HOST_FQDN}/${HOST_FQDN}.cer /root/${HOST_FQDN}/${HOST_FQDN}.key > /var/lib/zentyal/conf/ssl/ssl.pem && systemctl restart zentyal.webadmin-nginx.service"

What all this is doing is creating a .pem file out of the certificate and key created when the certificate was issued, putting it into the place where Zentyal expects it, and then restarting the web admin interface to pick up the new cert.
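
The concatenation step inside the reloadcmd can be sketched in isolation (the function name and file paths here are illustrative, not part of acme.sh or Zentyal):

```python
from pathlib import Path

def combine_pem(cert_path: str, key_path: str, out_path: str) -> None:
    """Concatenate a PEM certificate and private key into one bundle,
    the single-file format Zentyal's webadmin nginx reads as ssl.pem."""
    cert = Path(cert_path).read_text()
    key = Path(key_path).read_text()
    # Make sure the certificate block ends with a newline before
    # appending the key, so the PEM markers stay on their own lines.
    if not cert.endswith("\n"):
        cert += "\n"
    Path(out_path).write_text(cert + key)
```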

If everything goes well, you should now have an “official” certificate for your server.

Project Status update

This Raspberry Pi cluster, with 3 control plane nodes and 7 workers, has been running for 111 days now. Here are some observations:

  • /var/log just grows and grows. My master nodes especially seem to be writing to /k3s-service.log a lot, and there’s no log rollover happening. I can ssh into the nodes and remove the files occasionally while I look for a more automated solution. Ideally, /var/log would be a tmpfs mount.
  • System upgrades seem to be working (all my nodes currently show version v1.20.11+k3s1). However, occasionally a node will remain cordoned, which affects pods that use Longhorn storage. When this happens, the “apply” job will get a “Job was active longer than the expected deadline” event posted. Typically, I can resolve these by removing the cordon on the node (kubectl uncordon).
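
For the log-growth problem, one automated option would be a logrotate drop-in. This is a sketch under my own assumptions (the file path and rotation settings are guesses, not k3os defaults):

```
/var/log/k3s-service.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate avoids having to signal the k3s service to reopen its log file, at the cost of possibly losing a few lines written during the truncation.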

In my next post, I’ll describe how I created the k3os images for the cluster, how I brought the cluster up, and how things have been installed on it.

What happened?

I’ve been taking a break. I’ve found that with the restrictions on activities due to (or rather, rationalized by) COVID19, I’m having work-life balance issues all over the place.

I’m still working on the cluster, and I have some posts in the pipeline, but it’s going to be a while before I’m able to put in the kind of time required to maintain the pace I’ve had since March. That is, if I ever get back there, which is questionable because it’s manifestly unhealthy.

I hope everyone reading this is safe and healthy. These are trying times. Take breaks if you need them.


Last night, I tore down the entire stack and rebuilt it. This was to accommodate two things:

  • The release of Ubuntu 20.04, which required an upgrade, and
  • A hardware change involving using USB flash drives for the root file system on all nodes, rather than just a micro SD card.

As a result of this, I’ve come up with some changes to node provisioning – the steps required to go from bare hardware to an operating node. I’m planning a step-by-step guide for building out the cluster based on this. Stay tuned.