In this article, I want to delve into some additional measures, beyond the ones I covered in my first article, that add a fairly high level of control and protection to the systems in our home. I will leave it to the reader to decide whether to apply them, depending on their needs.
Frequently, when we run a varied ecosystem of applications on our home network, often as part of personal projects, we underestimate internal security in favor of simple solutions such as a VPN.
One of the things that is often forgotten is creating SSL certificates for internal services. Remember that the usefulness of an SSL certificate is not only letting browsers show that a trusted certificate authority vouches for the site; independently of that, certificates enable encrypted communication between services.
Some of the options we have for managing internal SSL certificates:
- Creating an SSL certificate from the command line: This is the option that requires the least initial work. Basically, we create an SSL certificate for any domain, using OpenSSL for example. We can even generate a certificate that expires in 100 years. Of course, we will have to install that certificate in our internal services, add exceptions in our browsers so that warnings do not appear, and accept that we lose the “trust” part usually associated with the HTTPS protocol.
- Implementing certificate-generation automation: One such tool is certbot, which can automate the generation, and sometimes the installation, of SSL certificates (for example, those of Let’s Encrypt). It can even be run as a service in Docker.
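As a sketch of the first option, the following OpenSSL command generates a long-lived self-signed certificate. The domain `internal.home.lan` and the 100-year validity are placeholder choices, and the `-addext` flag assumes OpenSSL 1.1.1 or newer:

```shell
# Generate a private key and a self-signed certificate valid for
# roughly 100 years (36500 days). The domain name is a placeholder.
openssl req -x509 -newkey rsa:4096 -sha256 -nodes \
  -keyout internal.key -out internal.crt -days 36500 \
  -subj "/CN=internal.home.lan" \
  -addext "subjectAltName=DNS:internal.home.lan"
```

The resulting `internal.crt` is what we would then install in each internal service and import into our browsers as an exception.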
We all know that being able to connect to a server requires some concessions that enlarge its attack surface. One example is the opening of ports: to connect to a server remotely, we typically run an SSH server on it, listening on an open port. Since we never know in advance when we will want to connect, we must leave that port open at all times.
Could we, by prior agreement between client and server, reduce this attack surface? This is the question the concept of port knocking tries to answer.
We have all seen the idea in movies about mobsters and illegal clubs: a closed, unmarked door; to enter, you must knock a certain number of times, otherwise no one opens. In the case of port knocking, all the server’s ports are closed, and only after a certain sequence of “knocks”, in this case packets sent to specific ports, does the server open a port (usually the SSH port).
And how does the server find out that someone is “knocking” at its door? Simply by logging the packets that arrive at closed ports and analyzing that log.
To configure port knocking, assuming an Ubuntu server, we run:
sudo apt install -y knockd
Next, we will open the configuration file:
sudo nano /etc/knockd.conf
[options]
    UseSyslog

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn

[closeSSH]
    sequence    = 9000,8000,7000
    seq_timeout = 5
    command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn
Here we can observe:
- sequence: the ports, in order, that will have to be called to activate this rule
- seq_timeout: the total seconds during which the entire sequence will have to be executed
- command: the command to run when the sequence matches (here, opening or closing port 22 via iptables)
- openSSH, closeSSH: rule identification names (you can name them whatever you like)
We would usually change the list of ports (a protocol can even be specified per port, for example, 2222:udp,5736:tcp), and, if we anticipate latency problems when connecting remotely, increase the timeout.
Once this is configured, we have several options for making the “call”. There are mobile port-knocking apps, and, out of the box, the knockd package installed on another machine includes the knock client, which can be run like this:
knock -v server_ip 7000 8000 9000
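If the knock client is not available, the sequence can also be sent from plain bash, since even a failed connection attempt emits the SYN packet that knockd watches for. This is a sketch; the server address and the ports are assumptions and must match the sequence in your knockd.conf:

```shell
# Send the knock sequence using bash's built-in /dev/tcp.
# SERVER is a placeholder; ports must match the server's sequence.
SERVER=${SERVER:-127.0.0.1}
for port in 7000 8000 9000; do
  # Even against a closed port, the connection attempt sends the SYN
  # that knockd is listening for; errors are ignored.
  timeout 1 bash -c "exec 3<>/dev/tcp/$SERVER/$port" 2>/dev/null || true
  sleep 0.2
done
echo "knock sequence sent to $SERVER"
```

After sending the sequence, port 22 should accept connections from our IP, and we can ssh in as usual.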
DNS and QName minimization
Many times, decisions about security are related to two other concepts: privacy and control.
As we well know, the Internet depends on machines called DNS servers, which translate the web addresses we type into a browser, or through which we access various APIs, into IP addresses.
There is a type of attack called DNS spoofing, or DNS cache poisoning, which consists of controlling or tricking a DNS server into serving the wrong IP address for a web address, usually one pointing to a malicious copy of the original service, in order to hijack the data the user enters.
How could we protect ourselves against this attack? One of the possible answers is control.
If you remember my previous article, we set up a DNS filtering system called Pi-hole, which blocks potential threats at the web-address level. But under the hood, with its standard configuration, Pi-hole forwards queries to third-party upstream DNS servers, such as Cloudflare’s.
We can exercise a little more control by installing our own recursive DNS resolver, in this case Unbound (for example, on the same machine as Pi-hole, or in a separate container). I will not go into the installation itself, as there is an official guide.
Once Unbound is configured, we have our own DNS resolver, but there is an option that gives us even more control (and, in this case, also adds privacy): QName minimization.
To explain this, let’s use an example. Our resolver builds its answers by querying a chain of name servers, starting with the root and TLD (Top-Level Domain) servers. Let’s imagine we request the address foo.bar.com; to retrieve its IP, the resolver has to consult:
- the root servers, which know where to find the .com servers
- the .com TLD servers, which know where to find the bar.com servers
- the bar.com name servers, which know foo.bar.com
In normal usage, the resolver sends every one of these servers the full address, foo.bar.com. But if we enable QName minimization, it sends each server only the fragment it needs in order to answer:
- root servers: “com”
- .com servers: “bar.com”
- bar.com servers: “foo.bar.com”
This way, we avoid sending unnecessary information, and we prevent every server in the chain from learning the full address being looked up.
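In Unbound, enabling this is a one-line setting; a minimal sketch (the drop-in file name below is my own choice — any file included from unbound.conf works, and recent Unbound releases already enable the option by default):

```
# /etc/unbound/unbound.conf.d/qname-minimisation.conf
server:
    # Send only the minimum required part of the name to each
    # upstream name server (RFC 7816).
    qname-minimisation: yes
```

We can validate the file with `unbound-checkconf` and then restart the service (`sudo systemctl restart unbound`) for the change to take effect.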
In this article, I wanted to go into some advanced points of protection and control over our local infrastructure. As I have already mentioned, this level of security and control is not necessary in most cases (although it can be a good exercise to deepen your knowledge). For those cases where you do think it is necessary, I hope it helps you.