Tag: ubuntu

  • LAMP Stack App on AWS with Docker, RDS and phpMyAdmin

    Intro

    I was recently tasked with migrating an old LAMP-stack app (PHPv5) running on a CentOS 6 server to PHPv7 on a CentOS 8 machine, and ensuring that the code didn’t break in the php upgrade. I figured the best way to do that would be to use Docker on my laptop to simulate the PHP 7 environment of a CentOS 8 machine.

    However, the plan changed and instead of deploying the new app on a CentOS 8 machine, it was decided that we would deploy the app to its own EC2 instance. Since I was already using Docker, and since I no longer had to plan for a CentOS 8 deployment, I decided to use Ubuntu 20.04 for the EC2 instance. I installed docker and docker-compose, and adapted the code to use proper PHP-Apache and phpMyAdmin Docker images. I also decided to use AWS RDS MySQL, and to use the EC2 instance to implement logical backups of the MySQL DB to AWS S3.

    The rest of this article consists of more detailed notes on how I went about all of this:

    • Dockerizing a LAMP-stack Application
      • php-apache docker image
      • creating dev and prod versions
      • updating code from PHPv5 to PHPv7
      • handling env variables
      • Adding a phpMyAdmin interface
    • AWS RDS MySQL Setup
      • rdsadmin overview
      • creating additional RDS users
      • connecting from a server
    • AWS EC2 Deployment
      • virtual machine setup
      • deploying prod version with:
        • Apache proxy with SSL Certification
        • OS daemonization
      • MySQL logical backups to AWS S3

    Dockerizing a LAMP-stack Application

    php-apache docker image

    I’ll assume the reader is somewhat familiar with Docker. I was given a code base in a dir called DatasetTracker developed several years ago with PHPv5. The first thing to do was to set up a git repo for the sake of development efficiency, which you can find here.

    Next, I had to try and get something working. The key with Docker is to find the official image and RTFM. In this case, you want the latest php-apache image, which leads to the first line in your docker file being: FROM php:7.4-apache. When you start up this container, you get an apache instance that will interpret php code within the dir /var/www/html and listen on port 80.

    creating dev and prod versions

    I decided to set up two deployment tiers: dev and prod. The dev tier is chiefly for local development, wherein changes to the code do not require you to restart the docker container. Also, you want to have php settings that allow you to debug the code. The only hiccup I experienced in getting this to work was understanding how php extensions are activated within a docker context. It turns out that the php-apache image comes with two command-line tools: pecl and docker-php-ext-install. In my case, I needed three extensions for the dev version of the code: xdebug, mysqli, and bcmath. Through trial and error I found that you could activate those extensions with the pecl and docker-php-ext-install lines in the docker file (see below).

    You can also set the configurations of your php to ‘development’ by copying the php.ini-development file. In summary, the essence of a php-apache docker file for development is as follows:

    FROM php:7.4-apache
    
    # Install the php extensions needed by the app: xdebug via pecl,
    # mysqli and bcmath via the image's docker-php-ext-install helper
    RUN pecl install xdebug
    # (pecl-installed extensions normally also need: RUN docker-php-ext-enable xdebug)
    RUN docker-php-ext-install mysqli
    RUN docker-php-ext-install bcmath
    
    # Use the development php.ini shipped with the image (verbose errors etc.)
    RUN cp /usr/local/etc/php/php.ini-development /usr/local/etc/php/php.ini

    When you run a container based on this image, you just need to volume-mount the dir with your php code to /var/www/html to get instant updates, and to map container port 80 to an arbitrary host port for local development.
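
    For example, building and running the dev container might look like this (the image name, host port, and source dir are placeholders of my own):
    
    # build the dev image from the Dockerfile above
    docker build -t datasettracker-dev .
    
    # mount the code for live edits and expose apache on an arbitrary host port
    docker run -d \
      -p 8080:80 \
      -v "$PWD/DatasetTracker":/var/www/html \
      --name datasettracker-dev \
      datasettracker-dev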

    Next, we need to write a docker-compose file in order to run this image as a container alongside a phpMyAdmin application, and to coordinate the environment variables used to connect to the remote AWS RDS MySQL instance.

    An aspect of the set up that required a bit of thought was how to log into phpMyAdmin. The docker-image info was a bit confusing. In the end though, I determined that you really only need one env variable — PMA_HOST — passed to the phpMyAdmin container through the docker-compose file. This env variable just needs to point to your remote AWS RDS instance. phpMyAdmin is really just an interface to your mysql instance, so you then log in through the interface with your mysql credentials. (See .env-template in the repo.)

    (NOTE: you might first need to also pass env variables for PMA_USER and PMA_PASSWORD to get it to work once, and then you can remove these; I am not sure why this seems to be needed.)
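
    Putting these pieces together, the essence of such a docker-compose file is a sketch along these lines (the service names, host ports, and the DB_HOST variable name are placeholders of my own, not the exact file in the repo):
    
    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      app:
        build: .
        ports:
          - "8080:80"
        volumes:
          - ./DatasetTracker:/var/www/html
        env_file: .env              # DB credentials read by the php code
      phpmyadmin:
        image: phpmyadmin/phpmyadmin
        ports:
          - "8081:80"
        environment:
          - PMA_HOST=${DB_HOST}     # the AWS RDS endpoint, taken from .env
    EOF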

    updating code from PHPv5 to PHPv7

    Once I had an application running through docker-compose, I was able to edit the code to make it compatible with PHPv7. This included, amongst other things, replacing mysql_connect with mysqli_connect, and replacing hard-coded mysql credentials with code for grabbing such values from env variables. A big help was using the VSCode extension intelephense, which readily flags mistakes and code that is deprecated in PHPv7.

    AWS RDS MySQL Setup

    rdsadmin overview

    Note: discussions about ‘databases’ can be ambiguous. Here, I shall use ‘DB’ or ‘DB instance’ to refer to the mysql host/server, and ‘db’ to refer to the internal mysql collection of tables that you select with the syntax `use [db name];`. As such, a mysql DB instance can have multiple dbs within it.

    In order to migrate the mysql database from our old CentOS 6 servers to an RDS instance, I first used the AWS RDS interface to create a MySQL DB instance.

    When I created the mysql DB instance via the AWS RDS interface, I assumed that the user I created was the root user with all privileges. But this is not the case! Behind the scenes, RDS creates a user called rdsadmin, and this user holds all the cards.

    To see the privileges of a given user, you need to use SHOW GRANTS FOR 'user'@'host'. Note: you need to provide the exact host associated with the user you are interested in; if you are not sure what the host is for the user, you first need to run:

    SELECT user, host FROM mysql.user WHERE user='user';

    In the case of an RDS DB instance, rdsadmin is created so as to only be able to log into the DB instance from the same host machine of the instance, so you need to issue the following command to view the permissions of the rdsadmin user:

    SHOW GRANTS for 'rdsadmin'@'localhost';

    I’ll call the user that you initially create via the AWS console the ‘admin’ user. You can view the admin’s privileges by running SHOW GRANTS; which yields the following result:

    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, 
    DROP, RELOAD, PROCESS, REFERENCES, INDEX, 
    ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, 
    LOCK TABLES, EXECUTE, REPLICATION SLAVE, 
    REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, 
    CREATE ROUTINE, ALTER ROUTINE, CREATE USER, 
    EVENT, TRIGGER ON *.* TO `admin`@`%` 
    WITH GRANT OPTION

    The final part — WITH GRANT OPTION — is mysql for “you can give all of these permissions to another user”. So this user will let you create another user for each db you create.

    If you compare these privileges with those for rdsadmin, you’ll see that rdsadmin has the following extra privileges:

    SHUTDOWN, FILE, SUPER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE, SERVICE_CONNECTION_ADMIN, SET_USER_ID, SYSTEM_USER

    Several of these operations, such as shutting down the instance, can instead be carried out via the AWS console. In summary, rdsadmin is created in such a way that you can never use it directly, and you will never need to. The admin user has plenty of permissions, and one needs to consider best practices as to whether to use the admin user when connecting from one’s application.

    I personally think that it is good general practice to have a separate db for each deployment tier of an application. So if you are developing an app with, say, a ‘development’, ‘stage’, and ‘production’ deployment tier, then it’s wise to create a separate db for each tier. Alternatively, you might want to have the non-production tiers share a single db. The one thing that I believe is certain though is that you need a dedicated db for production, that it needs to have logical backups (i.e. mysqldump to file) carried out regularly, and that you ideally never edit the prod db directly (or, if you do, that you do so with much fear and trembling).

    Is it a good practice to have multiple dbs on a single DB instance? This totally depends on the nature of the applications and their expected load on the DB instance. Assuming that you do have multiple applications using dbs on the same DB instance, you might want to consider creating a specialized user for each application, so that the compromise of one user does not compromise ALL of your applications. In that case, the role of the admin is ONLY to create users whose credentials will be used to connect an application to its db. The next section shows how to accomplish that.

    creating additional RDS users

    So let’s assume that you want to create a user whose sole purpose is to enable an application deployed on some host HA (application host) to connect to the host on which the DB instance is running, Hdb (DB host). Log into the RDS DB instance with your admin user credentials and run:

    CREATE USER 'newuser'@'%' IDENTIFIED BY 'newuser_password';
    GRANT ALL PRIVILEGES ON db_name.* TO 'newuser'@'%';
    FLUSH PRIVILEGES;

    This will create user ‘newuser’ with all privileges on the db db_name. The ‘user’@’%’ syntax means “this user connecting from any host”.

    Of course, if you want to be extra secure, you can specify that the user may only connect from specific hosts by running these commands once per host, replacing the wildcard ‘%’ with a hostname or IP address, as sketched below.
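
    For example, to restrict the user to connections from a single application server (the RDS endpoint and IP address below are placeholders):
    
    mysql -h mydbinstance.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p <<'SQL'
    CREATE USER 'newuser'@'203.0.113.10' IDENTIFIED BY 'newuser_password';
    GRANT ALL PRIVILEGES ON db_name.* TO 'newuser'@'203.0.113.10';
    FLUSH PRIVILEGES;
    SQL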

    As an aside, if you want to know the name of the host you are currently connecting from, then run:

    mysql> SELECT USER() ;
    +-------------------------------------------+
    | USER()                                    |
    +-------------------------------------------+
    | admin@c-XX-XX-XXX-XXX.hsd1.sc.comcast.net |
    +-------------------------------------------+
    1 row in set (0.07 sec)

    In this case, the host ‘c-XX-XX-XXX-XXX.hsd1.sc.comcast.net’ has been determined as pointing to my home’s public IP address (assigned by my ISP). (I assume that under the hood mysql has done a reverse DNS lookup, something like nslookup MYPUBLIC_IPADDRESS, and prefers that hostname to my present IP address, which is presumably less permanent.)

    enabling user to change password

    As of Nov 2022, there seems to be an issue with phpmyadmin whereby a user created in this way cannot change his/her own password through the phpmyadmin interface. Presumably the underlying sql command for changing the user’s password requires certain global privileges (and this user has none). A temporary workaround is to connect to the DB instance with your admin user and run:

    GRANT CREATE USER ON *.* TO USERNAME WITH GRANT OPTION; 

    connecting from a server

    One thing that threw me for a while was the need to explicitly white-list IP addresses to access the DB instance. When I created the instance, I selected the option to be able to connect to the database from a public IP address. I assumed that this meant that, by default, all IP addresses were permitted. However, this is not the case! Rather, when you create the DB instance, RDS will determine the public IP address of your machine (in my case – my laptop at my home public IP address), and apply that to the inbound rule of the AWS security group attached to the DB instance.

    In order to connect from our application running on a remote server, you need to go to that security group in the AWS console and add another inbound rule for MySQL/Aurora allowing connections from the IP address of your server.
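
    If you prefer the command line, the same inbound rule can be added with the AWS CLI along these lines (the security-group ID and the server’s IP address are placeholders):
    
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 3306 \
      --cidr 203.0.113.10/32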

    AWS EC2 Deployment

    virtual machine setup

    I chose Ubuntu server 20.04 for my OS with a single core and 20GB of storage. (The data will be stored in the external DB and S3 resources, so not much storage is needed.) I added 4GB of swap space and installed docker and docker-compose.
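
    For reference, the swap and docker setup boils down to something like this on Ubuntu 20.04 (the swap size and package choices are simply what I would typically use):
    
    # create and enable a 4GB swap file
    sudo fallocate -l 4G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
    
    # docker and docker-compose from the Ubuntu repos
    sudo apt install docker.io docker-compose
    sudo usermod -aG docker $USER   # optional: run docker without sudo (re-login required)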

    apache proxy with SSL Certification

    I used AWS Route 53 to create two endpoints pointing to the public IP address of the EC2 instance. To expose the two docker applications to the outside world, I installed apache on the EC2 instance and proxied these two endpoints to ports 6050 and 6051. I also used certbot to set up SSL certification. The apache config looks like this:

    <IfModule mod_ssl.c>
    <Macro SSLStuff>
        ServerAdmin webmaster@localhost
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Include /etc/letsencrypt/options-ssl-apache.conf
        SSLCertificateFile /etc/letsencrypt/live/xxx/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/xxx/privkey.pem
    </Macro>
    
    <VirtualHost _default_:443>
        Use SSLStuff
        DocumentRoot /var/www/html
    </VirtualHost>
    
    <VirtualHost *:443>
        Use SSLStuff
        ServerName dataset-tracker.astro-prod-it.aws.umd.edu
        ProxyPass / http://127.0.0.1:6050/
        ProxyPassReverse / http://127.0.0.1:6050/
    </VirtualHost>
    
    <VirtualHost *:443>
        Use SSLStuff
        ServerName dataset-tracker-phpmyadmin.astro-prod-it.aws.umd.edu
        ProxyPass / http://127.0.0.1:6051/
        ProxyPassReverse / http://127.0.0.1:6051/
        RequestHeader set X-Forwarded-Proto "https"
        RequestHeader set X-Forwarded-Port "443"
    </VirtualHost>
    </IfModule>
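
    For this config to work, the relevant apache modules need to be enabled and apache reloaded; on Ubuntu that looks something like:
    
    sudo a2enmod ssl proxy proxy_http headers macro
    sudo systemctl reload apache2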

    OS daemonization

    Once you clone the code for the applications to the EC2 instance, you can start it in production mode with:

    docker-compose -f docker-compose.prod.yml up -d

    … where the flag ‘-d’ means to start it in the background (‘daemonized’).

    One of the nice things about using docker is that it becomes super easy to set up your application as a system service by simply adding restart: always to your docker-compose file. This setting tells docker to restart the container if it exits with an error, or if the docker service is itself restarted. This means that if the EC2 instance crashes or is otherwise restarted, then docker (which, being a system service, will itself restart automatically) will automatically restart the application.
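
    If you want to confirm that the policy is in place on a running container, docker can report it (the container name here is hypothetical):
    
    docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' datasettracker_app_1
    # expected output: always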

    MySQL logical backups to AWS S3

    Finally, we need to plan for disaster recovery. If the EC2 instance gets messed up, or the AWS RDS instance gets messed up, then we need to be able to restore the application as easily as possible.

    The application code is safe, thanks to github, and so we just need to make sure that we never lose our data. RDS performs regular disk backups, but I personally prefer to create logical backups because, in the event that the disk becomes corrupted, I feel wary about trying to find a past ‘uncorrupted’ state of the disk. Logical backups to file do not rely on the integrity of the entire disk, and thereby arguably provide a simpler and therefore less error-prone means to preserve data.

    (This is in accordance with my general philosophy of preferring to back up files rather than disk images. If something serious goes wrong at the level of e.g. disk corruption, I generally prefer to ‘start afresh’ with a clean OS and copy over files as needed, rather than to try and restore a previous snapshot of a disk. This approach also helps maintain disk cleanliness, since disks tend to accumulate garbage over time.)

    To achieve these backups, create an S3 bucket on AWS and call it e.g. ‘mysql-backups’. Then install s3fs, an open-source tool for mounting S3 buckets onto a linux file system, with sudo apt install s3fs.

    Next, add the following line to /etc/fstab:

    mysql-backups /path/to/dataset-tracker-mysql-backups fuse.s3fs allow_other,passwd_file=/home/user/.passwd-s3fs 0 0

    Next, you need to create an AWS IAM user with permissions for full programmatic access to your S3 bucket. Obtain the Access key ID and Secret access key for that user and place them into a file /home/user/.passwd-s3fs in the format:

    [Access key ID]:[Secret access key]
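
    For example (the keys below are obviously placeholders; s3fs expects this file to be readable only by its owner):
    
    echo 'AKIAXXXXXXXXXXXXXXXX:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' > /home/user/.passwd-s3fs
    chmod 600 /home/user/.passwd-s3fs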

    Now you can mount the S3 bucket by running sudo mount -a (which will read the /etc/fstab file).

    Check that the dir has successfully mounted by running df -h and/or by creating a test file within the dir /path/to/dataset-tracker-mysql-backups and checking in the AWS S3 console that that file has been placed in the bucket.

    Finally, we need to write a script, run by a daily cronjob, that performs a mysqldump of your db to a file in this S3-mounted dir and maintains a history of backups by removing old/obsolete backup files. You can see the script used in this project here, which was adapted from this article. Add this as a daily cronjob, and it will place a .sql file in your S3 dir and remove obsolete versions.
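
    As a rough sketch of what such a script can look like (the paths, backup user, and 30-day retention window are assumptions of mine, not the exact script used in this project):
    
    #!/bin/bash
    # daily-mysql-backup.sh: dump the db into the S3-mounted dir and prune old dumps
    # DB_PASSWORD is expected to be set in the environment (e.g. at the top of the crontab)
    set -euo pipefail
    
    BACKUP_DIR=/path/to/dataset-tracker-mysql-backups
    DB_HOST=mydbinstance.xxxxxxxx.us-east-1.rds.amazonaws.com
    DB_NAME=db_name
    DB_USER=backup_user
    STAMP=$(date +%Y-%m-%d)
    
    # logical backup of a single db; --single-transaction avoids locking InnoDB tables
    mysqldump -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASSWORD" \
      --single-transaction "$DB_NAME" > "$BACKUP_DIR/$DB_NAME-$STAMP.sql"
    
    # keep roughly a month of history
    find "$BACKUP_DIR" -name "$DB_NAME-*.sql" -mtime +30 -delete
    
    A crontab entry such as 0 3 * * * /home/user/daily-mysql-backup.sh then runs it every night.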

  • Raspberry Pi Cluster V: Deploying a NextJs App on Ubuntu Server 20.04

    Intro

    In the last part I opened up my primary node to the Internet. We’re now in a position to make public-facing applications that will eventually connect up with microservices that harness the distributed computing power of the 4 RPi nodes in our cluster.

    Before we can make such parallel applications, we need to be able to deploy a web-facing interface to the primary node in our cluster. Not only is that a generally important thing to be able to do, but it allows us to separate web-specific code/processes from what we might call “computing” or “business-logic” code/processes (i.e. a microservices architecture).

    So in this post (and the next few), I am going to go through the necessary steps to get a MERN stack up and running on our primary node. This is not a cluster-specific task; it is something every full-stack developer needs to know how to do on a linux server.

    Tech Stack Overview

    In the last part, we used AWS Route 53 to set up a domain pointing to our primary node. I mentioned that you need to have a web server like Apache running to check that everything is working, namely the port forwarding and dynamic DNS cronjob.

    We are going to continue on here by creating a customer-facing application with the following features:

    • Full set up of Apache operating as our gateway/proxy web server
    • SSL Certification with certbot
    • NextJs as our application server (providing the “Express”, “React” and “Node” parts of our MERN stack)
    • User signup and authentication with:
      • AWS Cognito as our Authentication Server
      • MongoDB as our general/business-logic DB

    Apache & Certbot Setup

    Apache 101

    This section is aimed at beginners setting up Apache. Apache is a web server. Its job is to receive an http/https request and return a response. That response is usually one of three things:

    1. A copy of a file on the filesystem
    2. HTML listing the contents of a directory on the filesystem, with links to download individual files (a ‘file browser’)
    3. A response that Apache gets back from another server that Apache “proxied” your original request to.

    In my opinion, absolutely every web developer needs to know how to set up Apache and/or Nginx with SSL certification to be able to accomplish these three things. I tend to use Apache because I am just more used to it than Nginx.

    An important concept in Apache is that of a “virtual host”. The server that Apache runs on can host multiple applications. You might want to serve some files in a folder to the internet at one subdomain (e.g. myfiles.mydomain.com), a react app at another subdomain (e.g. react.mydomain.com), and an API with JSON responses at yet another subdomain (e.g. api.mydomain.com).

    In all three of these examples, you set up the DNS server to point the subdomain to the same IP Address (in this project’s case, the public IP Address of your home). So if requests are all coming into the same apache server listening on port 443, we need to configure Apache to separate these requests and have them processed by the appropriate process running on our machine. The main way to configure Apache to separate requests is by the target subdomain. This is done by creating a “virtual host” within the Apache configuration, as demonstrated below.

    Installing Apache and Certbot on Ubuntu 20.04

    Installing apache and certbot on Ubuntu 20.04 is quite straightforward.

    sudo apt install apache2
    sudo snap install core
    sudo snap refresh core
    sudo apt remove certbot
    sudo snap install --classic certbot
    sudo ln -s /snap/bin/certbot /usr/bin/certbot

    We also need to enable the following apache modules with the a2enmod tool (“Apache2-Enable-Module”) that gets installed along with the apache service:

    sudo a2enmod proxy_http proxy macro

    Make sure you have a dynamic domain name pointing to your public IP address, and run the certbot wizard with automatic apache configuration:

    sudo certbot --apache

    If this is the first time running, it will prompt you for an email, domain names, and whether to set up automatic redirects from http to https. (I recommend you do.) It will then modify your configuration files in /etc/apache2/sites-available. The file /etc/apache2/sites-available/000-default-le-ssl.conf looks something like this:

    <IfModule mod_ssl.c>
    <VirtualHost *:443>
            # ...        
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html
            # ...
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            # ...
            ServerName www.yourdomain.com
            SSLCertificateFile /etc/letsencrypt/live/www.yourdomain.com/fullchain.pem
            SSLCertificateKeyFile /etc/letsencrypt/live/www.yourdomain.com/privkey.pem
            Include /etc/letsencrypt/options-ssl-apache.conf
    </VirtualHost>
    </IfModule>

    There is quite a lot of boilerplate stuff going on in this single virtual host. It basically says “create a virtual host so that requests received on port 443 with a target URL at the subdomain www.yourdomain.com will get served a file from the directory /var/www/html; decrypt using the information within these SSL files; if errors occur, log them in the default location; etc.”.

    Since we might want to have lots of virtual hosts set up on this machine, each with certbot SSL certification, we will want to avoid having to repeat all of this boilerplate.

    To do this, let’s first disable this configuration with the tool sudo a2dissite 000-default-le-ssl.conf.

    Now let’s create a fresh configuration file with sudo touch /etc/apache2/sites-available/mysites.conf (remembering to enable it afterwards with sudo a2ensite mysites.conf), and add the following text:

    <IfModule mod_ssl.c>
    <Macro SSLStuff>
        ServerAdmin webmaster@localhost
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Include /etc/letsencrypt/options-ssl-apache.conf
        SSLCertificateFile /etc/letsencrypt/live/www.yourdomain.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/www.yourdomain.com/privkey.pem
    </Macro>
    
    <VirtualHost _default_:443>
        Use SSLStuff
        DocumentRoot /var/www/notfound
    </VirtualHost>
    <VirtualHost *:443>
        Use SSLStuff
        ServerName www.yourdomain.com
        ProxyPass / http://127.0.0.1:5050/
        ProxyPassReverse / http://127.0.0.1:5050/
    </VirtualHost>
    </IfModule>

    Here we are making use of the apache “macro” module we enabled earlier to define the boilerplate configurations that we want all of our virtual hosts to have. By including the line Use SSLStuff in a virtual host, we thereby include everything we defined in the SSLStuff block.

    This configuration has two virtual hosts. The first one is a default; if a request is received without a recognized domain, then serve files from /var/www/notfound. (You of course need to create such a dir, and, at minimum, have an index.html file therein with a “Not found” message.)

    The second virtual host tells Apache to take any request sent to www.yourdomain.com and forward it on to localhost port 5050, where, presumably, a separate server process will be listening for http requests. This port is arbitrary, and is where we will be setting up our nextJs app.

    Whenever you change apache configurations, you of course need to restart apache with sudo systemctl restart apache2. To quickly test that the proxied route is working, install node (I always recommend doing so with nvm), install a simple static server with npm i -g http-server, create a test index.html file somewhere on your filesystem, and run http-server -p 5050.
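
    In other words, something like this is enough for a quick smoke test (the directory name is arbitrary):
    
    npm i -g http-server
    mkdir -p ~/proxytest
    echo '<h1>Hello via the apache proxy</h1>' > ~/proxytest/index.html
    http-server ~/proxytest -p 5050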

    Now visit the proxied domain and confirm that you are receiving the content of the index.html file you just created. The great thing about this set up is that Apache is acting as a single encryption gateway on port 443 for all of your apps, so you don’t need to worry about SSL configuration on your individual application servers; all of our inner microservices are safe!

    Expanding Virtual Hosts

    There will inevitably come a time when you want to add more virtual hosts for new applications on the same server. Say that I want to have a folder for serving miscellaneous files to the world.

    First, you need to go back to your DNS interface (AWS Route 53 in my case), and add a new subdomain pointing to your public IP Address.

    Next, in my case, where I am using a script to dynamically update the IP Address that my AWS-controlled domain points to (as described in the last part of this cluster series), I need to open up crontab -e and add a line for this new domain.

    Next, I need to change the apache configuration by adding another virtual host and restarting apache:

    <VirtualHost *:443>
        Use SSLStuff
        DocumentRoot /var/www/miscweb
        ServerName misc.yourdomain.com
    </VirtualHost>

    Next, we need to create a dir at /var/www/miscweb (with /var/www being the conventional location for dirs served by apache). Since /var/www has strict read/write permissions requiring sudo, and since I don’t want to have to remember to use sudo every time I edit a file therein, I tend to create the real folder in my home dir and link it there with, in this case:

    sudo ln -fs /home/myhome/miscweb /var/www/miscweb

    Next, I need to rerun certbot with a command to expand the domains listed in my active certificate. This is done with the following command:

    sudo certbot certonly --apache --cert-name www.yourdomain.com --expand -d \
    www.yourdomain.com,\
    misc.yourdomain.com

    Notice that when you run this expansion command you have to specify ALL of the domains to be included in the updated certificate including those that had been listed therein previously; it’s not enough to specify just the ones you want to add. Since it can be hard to keep up with all of your domains, I recommend that you keep track of this command with all of your active domains in a text file somewhere on your server. When you want to add another domain, first edit this file with one domain on each line and then copy that new command to the terminal to perform the update.

    If you want to prevent the user from browsing the files within ~/miscweb, then you need to place an index.html file in there. Add a simple message like “Welcome to my file browser for misc sharing” and check that it works by restarting apache and visiting the domain with https.

    Quick Deploy of NextJs

    We’ll talk more about nextJs in the next part. For now, we’ll do a very quick deployment of nextJs just to get the ball rolling.

    Normally, I’d develop my nextJs app on my laptop, push changes to github or gitlab, pull those changes down on the server, and restart it. However, since node is already installed on the RPi primary node, we can just give it a quick start by doing the following:

    • Install pm2 with npm i -g pm2
    • Create a fresh nextJs app in your home directory with cd ~; npx create-next-app --typescript
    • Move into the dir of the project you just created and edit the start script to include the port you will proxy to: "start": "next start -p 5050"
    • To run the app temporarily, run npm run start and visit the corresponding domain in the browser to see your nextJs boilerplate app in service
    • To run the app indefinitely (even after you log out of the ssh shell, etc.), you can use pm2 to run it as a managed background process like so: pm2 start npm --name "NextJsDemo" -- start $PWD -p 5050
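
    If you also want the app (and pm2 itself) to come back automatically after a reboot, pm2 can register itself as a systemd service and remember its process list:
    
    pm2 startup   # prints a sudo command that registers pm2 as a systemd service; run it
    pm2 save      # remember the currently running processes (e.g. NextJsDemo)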

    NextJs has several great features. First, it will pre-render all of your react pages for fast loading and strong SEO. Second, it comes with express-like API functionality built-in. Go to /api/hello at your domain to see the built-in demo route in action, and the corresponding code in pages/api/hello.ts.

    More on NextJs in the next part!

  • Raspberry Pi Cluster Part II: Network Setup

    Introduction

    In the last post we got the hardware in order and made each of our 4 RPi nodes production ready with Ubuntu Server 20.04. We also established wifi connections between each node and the home router.

    In this post, I’m going to describe how to set up the “network topology” that will enable the cluster to become easily transportable. The primary RPi4 node will act as the gateway/router to the cluster. It will communicate with the home router on behalf of the whole network. If I move in the future, then I’ll only have to re-establish a wifi connection with this single node in order to restore total network access to each node. I also only need to focus on securing this node in order to expose the whole cluster to the internet. Here’s the schematic again:

    In my experience, it’s tough to learn hardware and networking concepts because the field is thick with jargon. I am therefore going to write as though to my younger self keenly interested in becoming self-reliant in the field of computer networking.

    Networking Fundamentals

    If you’re not confident with your network fundamentals, then I suggest you review the following topics by watching the linked explainer videos. (All these videos are made by the YouTube channel “Power Cert Animated Videos” and are terrific.)

    Before we get into the details of our cluster, let’s quickly review the three main things we need to think about when setting up a network: IP-address assignment, domain-name resolution, and routing.

    IP-Address Assignment

    At its core, networking is about getting fixed-length “packets” of 1s and 0s from one program running on a computer to another program running on any connected computer (including programs running on the same computer). For that to happen, each computer needs to have an address – an IP Address – assigned to it. As explained in the above video, the usual way in which that happens is by interacting with a DHCP server. (However, most computers nowadays run a process in the background that will attempt to negotiate an IP Address automatically in the event that no machine on its network identifies itself as a DHCP server.) In short, we’ll need to make sure that we have a DHCP server on our primary node in order to assign IP addresses to the other nodes.

    Domain-Name Resolution

    Humans do not like to write instructions as 1s and 0s, so we need each node in our network to be generally capable of translating a human-readable address (e.g. ‘www.google.com’, ‘rpi3’) into a binary IP address. This is where domain-name servers (DNS) and related concepts come in.

    The word “resolve” is used to describe the process of converting a human-readable address into an IP address. In general, an application that needs to resolve an IP address will interact with a whole bunch of other programs, networks and servers to obtain its target IP address. The term “resolver” is sometimes used to refer to this entire system of programs, networks and servers. The term resolver is also sometimes used to refer to a single element within such a system. (Context usually makes it clear.) From hereon, we’ll use “resolver” to refer to a single element within a system of programs, networks and servers whose job is to convert strings of letters to an IP Address, and “resolver system” to refer to the whole system.

    Three types of resolver to understand here are “stub resolvers”, “recursive resolvers”, and “authoritative resolvers”. A stub resolver is a program that basically acts as a cache within the resolver system. If it has recently received a request to return an IP address in exchange for a domain name (and therefore has it in its cache), then it will return that IP address. Otherwise, it will pass the request on to another resolver (which might itself be a stub resolver that just has to pass the buck on).

    A recursive resolver will also act as a cache and if it does not have all of the information needed to return a complete result, then it will pass on a request for information to another resolver. Unlike a stub resolver though, it might not receive back a final answer to its question but, rather, an address to another resolver that might have the final answer. The recursive resolver will keep following any such lead until it gets its final answer.

    An “authoritative” resolver is a server that does not pass the buck on. It’s the final link in the chain, and if it does not have the answer or suggestions for another server to consult, then the resolution will fail, and all of these resolvers will send back a failure message.

    In summary, domain-name resolution is all about finding a simple lookup table that associates a string (domain name) with a number (the IP Address). This entry in the table is called an “A Record” (A for Address).

    Routing

    Once a program has an IP Address to send data to, it needs to know where first to send the packet in order to get it relayed. For this to happen, each network interface needs to have a router (default gateway) address applied to it when configured. You can see the route table on a linux machine with route -n (from net-tools) or ip route. In a home setup, this router will be the address of the wifi/modem box. Once the router address is determined, the application can just send packets there and the magic of Internet routing will take over.

    Ubuntu Server Networking Fundamentals

    Overview

    Ubuntu Server 20.04, which we’re using here, comes with several key services/tools that are installed/enabled by default or by common practice: systemd-resolved, systemd-networkd, NetworkManager and netplan.

    systemd-resolved

    You can learn the basics about it by running:

    man systemd-resolved

    This service is a stub resolver making it possible for applications running on the system to resolve hostnames. Applications running on the system can interact with it by issuing some low-level kernel jazz via their underlying C libraries, or by pinging the internal (“loopback”) network address 127.0.0.53. To see it in use as a stub server, you can run dig @127.0.0.53 www.google.com.

    You can check what DNS servers it is set up to consult by running resolvectl status. (resolvectl is a pre-installed tool that lets you interact with the running systemd-resolved service; see resolvectl --help to get a sense of what you can do with it.)

    Now we need to ask: how does systemd-resolved resolve hostnames? It does so by communicating over a network with a DNS server. But how do you configure it so it knows which DNS servers to consult, and in what order of priority?

    systemd-networkd

    systemd-networkd is a pre-installed and pre-enabled service on Ubuntu that acts as a DHCP client (listening on port 68 for signals from a DHCP server). So when you switch on your machine and this service starts up, it will negotiate the assignment of an IP Address on the network based upon DHCP broadcast signals. In the absence of a DHCP server on the network, it will negotiate with any other device. I believe it is also involved in the configuration of interfaces.

    NetworkManager

    This is an older service that does much the same as networkd. It is NOT enabled by default, but is so prominent that I thought it would be worth mentioning in this discussion. (Also, during my research to try and get the cluster configured the way I want it, I installed NetworkManager and messed with it only to ultimately conclude that this was unnecessary and confusing.)

    Netplan

    Netplan is a pre-installed tool (not a service) that, in theory, makes it easier to configure systemd-resolved and either networkd or NetworkManager. The idea is that you declare your desired network end state in a YAML file (/etc/netplan/50-cloud-init.yaml) so that after start up (or after running netplan apply), it will do whatever needs to be done under the hood with the relevant services to get the network into your desired state.

    Other Useful Tools

    In general, when doing networking on linux machines, it’s useful to install a couple more packages:

    sudo apt install net-tools traceroute

    The net-tools package gives us a bunch of classic command-line utilities, such as netstat. I often use it (in an alias) to check what ports are in use on my machine: sudo netstat -tulpn.

    traceroute is useful in making sense of how your network is presently set up. Right off the bat, running traceroute google.com will show you how you reach google.

    Research References

    For my own reference, the research I am presenting here is derived in large part from the following articles:

    • This is the main article I consulted that shows someone using dnsmasq to set up a cluster very similar to this one, but using Raspbian instead of Ubuntu.
    • This article and this article on getting dnsmasq and system-resolved to handle single-word domain names.
    • Overview of netplan, NetworkManager, etc.
    • https://unix.stackexchange.com/questions/612416/why-does-etc-resolv-conf-point-at-127-0-0-53
    • This explains why you get the message “ignoring nameserver 127.0.0.1” when starting up dnsmasq.
    • Nice general intro to key concepts with linux
    • This aids understanding of systemd-resolved’s priorities when multiple DNS’s are configured on same system
    • https://opensource.com/business/16/8/introduction-linux-network-routing
    • https://www.grandmetric.com/2018/03/08/how-does-switch-work-2/
    • https://www.cloudsavvyit.com/3103/how-to-roll-your-own-dynamic-dns-with-aws-route-53/

    Setting the Primary Node

    OK, enough preliminaries, let’s get down to setting up our cluster.

    A chief goal is to try to set up the network so that as much of the configuration as possible is on the primary node. For example, if we want to be able to ssh from rpi2 to rpi3, then we do NOT want to have to go to each node and explicitly state where each hostname is to be found.

    So we want our RPi4 to operate as the single source of truth for domain-name resolution and IP-address assignment. We do this by running dnsmasq – a simple service that turns our node into a DNS and DHCP server:

    sudo apt install dnsmasq
    sudo systemctl status dnsmasq

    We configure dnsmasq with /etc/dnsmasq.conf. On this fresh install, this conf file will be full of fairly detailed notes. Still, it takes some time to get the hang of how it all fits together. This is the file I ended up with:

    # Choose the device interface to configure
    interface=eth0
    
    # Also listen on the loopback address for local queries
    # Note: this might be redundant
    listen-address=127.0.0.1
    
    # Enable addresses in range 10.0.0.1-128 to be leased out for 12 hours
    dhcp-range=10.0.0.1,10.0.0.128,12h
    
    # Assign static IPs to cluster members
    # Format = MAC:hostname:IP
    dhcp-host=ZZ:YY:XX:WW:VV:UU,rpi1,10.0.0.1
    dhcp-host=ZZ:YY:XX:WW:VV:UU,rpi2,10.0.0.2
    dhcp-host=ZZ:YY:XX:WW:VV:UU,rpi3,10.0.0.3
    dhcp-host=ZZ:YY:XX:WW:VV:UU,rpi4,10.0.0.4
    
    # Broadcast the router, DNS and netmask to this LAN
    dhcp-option=option:router,10.0.0.1
    dhcp-option=option:dns-server,10.0.0.1
    dhcp-option=option:netmask,255.255.255.0
    
    # Broadcast host-IP relations defined in /etc/hosts
    # And enable single-name domains
    # See here for more details
    expand-hosts
    domain=mydomain.net
    local=/mydomain.net/
    
    # Declare upstream DNS's; we'll just use Google's
    server=8.8.8.8
    server=8.8.4.4
    
    # Useful for debugging issues
    # Run 'journalctl -u dnsmasq' for resultant logs
    log-queries
    log-dhcp
    
    # These two are recommended default settings
    # though the exact scenarios they guard against 
    # are not entirely clear to me; see man for further details
    domain-needed
    bogus-priv

    Hopefully these comments are sufficient to convey what is going on here. Next, we make sure that the /etc/hosts file associates the primary node’s IP address with its hostname, rpi1. It’s not clear to me why this is needed. The block of dhcp-host definitions above does succeed in enabling dnsmasq to resolve rpi2, rpi3, and rpi4, but the line for rpi1 does not work. I assume that this is because dnsmasq is not assigning the IP address of rpi1 itself, and this type of setting only works for hosts whose IP Address it assigns. (Why that is the case seems odd to me.)

    # /etc/hosts
    10.0.0.1 rpi1

    Finally, we need to configure the file /etc/netplan/50-cloud-init.yaml on the primary node in order to declare this node with a static IP Address on both the wifi and ethernet networks.

    network:
        version: 2
        ethernets:
            eth0:
                dhcp4: no
                addresses: [10.0.0.1/24]
        wifis:
            wlan0:
                optional: true
                access-points:
                    "MY-WIFI-NAME":
                        password: "MY-PASSWORD"
                dhcp4: no
                addresses: [192.168.0.51/24]
                gateway4: 192.168.0.1
                nameservers:
                    addresses: [8.8.8.8,8.8.4.4]
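
    Incidentally, netplan changes can also be applied without a full reboot:
    
    sudo netplan try     # applies the config and rolls back unless you confirm
    sudo netplan apply   # applies the config immediately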

    Once these configurations are set up and rpi1 is rebooted, you can expect to find that ifconfig will show ip addresses assigned to eth0 and wlan0 as expected, and that resolvectl dns will read something like:

    Global: 127.0.0.1
    Link 3 (wlan0): 8.8.8.8 8.8.4.4 2001:558:feed::1 2001:558:feed::2
    Link 2 (eth0): 10.0.0.1
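
    At this point you can also sanity-check dnsmasq directly on rpi1, for example:
    
    # query dnsmasq itself (not the systemd-resolved stub) for a local and an external name
    dig @127.0.0.1 rpi1 +short
    dig @127.0.0.1 www.google.com +short
    
    # tail the query/DHCP logs enabled by log-queries and log-dhcp
    journalctl -u dnsmasq -f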

    Setting up the Non-Primary Nodes

    Next we jump into the rpi2 node and edit its /etc/netplan/50-cloud-init.yaml to:

    network:
        version: 2
        ethernets:
            eth0:
                dhcp4: true
                optional: true
                gateway4: 10.0.0.1
                nameservers:
                    addresses: [10.0.0.1]
        wifis:
            wlan0:
                optional: true
                access-points:
                    "MY-WIFI-NAME":
                        password: "MY-PASSWORD"
                dhcp4: no
                addresses: [192.168.0.52/24]
                gateway4: 192.168.0.1
                nameservers:
                    addresses: [8.8.8.8,8.8.4.4]

    This tells netplan to set up systemd-networkd to get its IP Address from a DHCP server on the ethernet network (which will be found to be on rpi1 when the broadcast event happens), and to route traffic and submit DNS queries to 10.0.0.1.

    To reiterate, the wifi config isn’t part of the network topology; it is optionally added because it makes life easier, when setting up the network, to be able to ssh straight into a node. In my current setup, I am assigning all the nodes static IP Addresses on the wifi network, 192.168.0.51-54.

    Next, as described here, in order for our network to be able to resolve single-word domain names, we need to alter the behavior of systemd-resolved by linking these two files together:

    sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

    This causes /etc/resolv.conf to contain the DNS servers and search domains that systemd-resolved has learned dynamically (in our case, whatever dnsmasq on rpi1 broadcasts over DHCP), rather than pointing at the local stub resolver.

    After rebooting, and doing the same configuration on rpi3 and rpi4, we can run dig rpi1, dig rpi2, etc. on any of the non-primary nodes and expect to get the single-word hostnames resolved as we intend.

    If we go to rpi1 and check the ip-address leases:

    cat /var/lib/misc/dnsmasq.leases

    … then we can expect to see that dnsmasq has successfully acted as a DHCP server. You can also check that dnsmasq has been receiving DNS queries by examining the system logs: journalctl -u dnsmasq.

    Routing All Ethernet Traffic Through the Primary Node

    Finally, we want all nodes to be able to connect to the internet by routing through the primary node. This is achieved by first uncommenting the line net.ipv4.ip_forward=1 in the file /etc/sysctl.conf and then running the following commands:

    sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
    sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT

    These lines mean something like the following:

    1. When doing network-address translation (-t nat), just before a packet is about to go out via the wifi interface (-A POSTROUTING = “append a postrouting rule”), replace its source ip address with the ip address of this machine on the outbound network
    2. forward packets coming in from wifi out through ethernet, but only for connections already established from the ethernet side
    3. forward packets coming in from ethernet out through wifi
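
    Incidentally, the ip_forward change can be applied without a reboot:
    
    sudo sysctl -p               # re-read /etc/sysctl.conf
    sysctl net.ipv4.ip_forward   # should now report 1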

    For these rules to survive across reboots you need to install:

    sudo apt install iptables-persistent

    and agree to storing the rules in /etc/iptables/rules.v4. Reboot, and you can now expect to be able to access the internet from any node, even when the wifi interface is down (sudo ifconfig wlan0 down).

    Summary

    So there we have it – an easily portable network. If you move location then you only need to adjust the wifi-connection details in the primary node, and the whole network will be connected to the Internet.

    In the next part, we’ll open the cluster up to the internet through our home router and discuss security and backups.