Tag: apache

  • LAMP Stack App on AWS with Docker, RDS and phpMyAdmin

    Intro

    I was recently tasked with migrating an old LAMP-stack app (PHPv5) running on a CentOS 6 server to PHPv7 on a CentOS 8 machine, and with ensuring that the code didn’t break in the PHP upgrade. I figured the best way to do that would be to use Docker on my laptop to simulate a CentOS 8 machine running PHP 7.

    However, the plan changed: instead of deploying the new app on a CentOS 8 machine, it was decided that we would deploy it to its own EC2 instance. Since I was already using Docker, and since I no longer had to plan for a CentOS 8 deployment, I chose Ubuntu 20.04 for the EC2 instance. I installed docker and docker-compose, and adapted the code to use proper php-apache and phpMyAdmin Docker images. I also decided to use AWS RDS MySQL, and to use the EC2 instance to implement logical backups of the MySQL DB to AWS S3.

    The rest of this article consists of more detailed notes on how I went about all of this:

    • Dockerizing a LAMP-stack Application
      • php-apache docker image
      • creating dev and prod versions
      • updating code from PHPv5 to PHPv7
      • handling env variables
      • Adding a phpMyAdmin interface
    • AWS RDS MySQL Setup
      • rdsadmin overview
      • creating additional RDS users
      • connecting from a server
    • AWS EC2 Deployment
      • virtual machine setup
      • deploying prod version with:
        • Apache proxy with SSL Certification
        • OS daemonization
      • MySQL logical backups to AWS S3

    Dockerizing a LAMP-stack Application

    php-apache docker image

    I’ll assume the reader is somewhat familiar with Docker. I was given a code base, developed several years ago with PHPv5, in a dir called DatasetTracker. The first thing to do was to set up a git repo for the sake of development efficiency, which you can find here.

    Next, I had to try and get something working. The key with Docker is to find the official image and RTFM. In this case, you want the latest php-apache image, which leads to the first line in your Dockerfile being: FROM php:7.4-apache. When you start up this container, you get an Apache instance that interprets PHP code within the dir /var/www/html and listens on port 80.

    creating dev and prod versions

    I decided to set up two deployment tiers: dev and prod. The dev tier is chiefly for local development, where changes to the code do not require you to restart the Docker container. You also want PHP settings that allow you to debug the code. The only hiccup I experienced in getting this to work was understanding how PHP extensions are activated within a Docker context. It turns out that the php-apache image comes with two command-line tools: pecl and docker-php-ext-install. In my case, I needed three extensions for the dev version of the code: xdebug, mysqli, and bcmath. Through trial and error I found that you can activate those extensions with the RUN lines in the Dockerfile below.

    You can also switch PHP to its ‘development’ settings by copying the bundled php.ini-development file into place. In summary, the essence of a php-apache Dockerfile for development is as follows:

    FROM php:7.4-apache
    
    RUN pecl install xdebug
    # docker-php-ext-enable writes the ini entry so PHP actually loads the pecl-built extension
    RUN docker-php-ext-enable xdebug
    RUN docker-php-ext-install mysqli
    RUN docker-php-ext-install bcmath
    
    RUN cp /usr/local/etc/php/php.ini-development /usr/local/etc/php/php.ini

    When you run a container based on this image, you just need to volume-mount the dir containing your php code to /var/www/html to get instant updates, and map container port 80 to some free local port for development.
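    For reference, a minimal sketch of that workflow (the image tag, local port, and code path are placeholders, not taken from the project):
    
    # build the dev image from the Dockerfile above (tag name is arbitrary)
    docker build -t dataset-tracker-dev .
    
    # run it, mapping Apache's port 80 to a free local port and
    # volume-mounting the php code so edits show up instantly
    docker run -d -p 8080:80 \
        -v "$PWD/DatasetTracker:/var/www/html" \
        dataset-tracker-dev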

    Next, we need to write a docker-compose file so that this image runs as a container alongside a phpMyAdmin application, and so that both share the environment variables needed to connect to the remote AWS RDS MySQL instance.

    An aspect of the setup that required a bit of thought was how to log into phpMyAdmin. The documentation for the phpMyAdmin Docker image was a bit confusing. In the end, though, I determined that you really only need one env variable, PMA_HOST, passed to the phpMyAdmin container through the docker-compose file. This env variable just needs to point to your remote AWS RDS instance. phpMyAdmin is really just an interface to your MySQL instance, so you then log in through the interface with your MySQL credentials. (See .env-template in the repo.)

    (NOTE: you might first need to also pass env variables for PMA_USER and PMA_PASSWORD to get it to work once, and then you can remove these; I am not sure why this seems to be needed.)
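    For orientation, here is a minimal sketch of what such a docker-compose file might look like (the service names, ports, volume path, and .env wiring are illustrative, not the project’s actual file; PMA_HOST is the one essential variable discussed above):
    
    version: "3"
    services:
      app:
        build: .                      # the php-apache Dockerfile shown above
        ports:
          - "8080:80"                 # map Apache's port 80 to a local port
        volumes:
          - ./src:/var/www/html       # live-edit php code in dev
        env_file:
          - .env                      # DB host/user/password picked up by the php code
      phpmyadmin:
        image: phpmyadmin/phpmyadmin
        ports:
          - "8081:80"
        environment:
          PMA_HOST: ${PMA_HOST}       # points at the remote AWS RDS instance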

    updating code from PHPv5 to PHPv7

    Once I had the application running through docker-compose, I was able to edit the code to make it compatible with PHPv7. Amongst other things, this meant replacing mysql_connect with mysqli_connect, and replacing hard-coded MySQL credentials with code that grabs those values from env variables. A big help was the VSCode extension intelephense, which readily flags mistakes and code that is deprecated in PHPv7.
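    To give a flavour of the kind of change involved (a hedged sketch; the env-variable names are placeholders, not the project’s actual ones):
    
    <?php
    // Old PHPv5-era code (the mysql_* functions were removed in PHPv7):
    // $link = mysql_connect('dbhost', 'user', 'hardcoded-password');
    
    // PHPv7 replacement: mysqli, with credentials pulled from env variables
    $link = mysqli_connect(
        getenv('DB_HOST'),
        getenv('DB_USER'),
        getenv('DB_PASSWORD'),
        getenv('DB_NAME')
    );
    
    if (!$link) {
        die('Connection failed: ' . mysqli_connect_error());
    }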

    AWS RDS MySQL Setup

    rdsadmin overview

    Note: discussions about ‘databases’ can be ambiguous. Here, I shall use ‘DB’ or ‘DB instance’ to refer to the mysql host/server, and ‘db’ to refer to the internal mysql collection of tables that you select with the syntax `use [db name];`. As such, a mysql DB instance can have multiple dbs within it.

    In order to migrate the MySQL database from our old CentOS 6 server to RDS, I first used the AWS RDS interface to create a MySQL DB instance.

    When I created the mysql DB instance via the AWS RDS interface, I assumed that the user I created was the root user with all privileges. But this is not the case! Behind the scenes, RDS creates a user called rdsadmin, and this user holds all the cards.

    To see the privileges of a given user, you need to use SHOW GRANTS FOR 'user'@'host'. Note: you need to provide the exact host associated with the user you are interested in; if you are not sure what the host is for the user, you first need to run:

    SELECT user, host FROM mysql.user WHERE user='user';

    In the case of an RDS DB instance, rdsadmin is created such that it can only log into the DB instance from the instance’s own host machine, so you need to issue the following command to view its permissions:

    SHOW GRANTS FOR 'rdsadmin'@'localhost';

    I’ll call the user that you initially create via the AWS console the ‘admin’ user. You can view the admin’s privileges by running SHOW GRANTS; which yields the following result:

    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, 
    DROP, RELOAD, PROCESS, REFERENCES, INDEX, 
    ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, 
    LOCK TABLES, EXECUTE, REPLICATION SLAVE, 
    REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, 
    CREATE ROUTINE, ALTER ROUTINE, CREATE USER, 
    EVENT, TRIGGER ON *.* TO `admin`@`%` 
    WITH GRANT OPTION

    The final part — WITH GRANT OPTION — is mysql for “you can give all of these permissions to another user”. So this user will let you create another user for each db you create.

    If you compare these privileges with those for rdsadmin, you’ll see that rdsadmin has the following extra privileges:

    SHUTDOWN, FILE, SUPER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE, SERVICE_CONNECTION_ADMIN, SET_USER_ID, SYSTEM_USER

    Several of these privileges (such as SHUTDOWN) correspond to operations that can instead be performed via the AWS console. In summary, rdsadmin is set up in such a way that you can never use it directly, and you will never need to. The admin user has plenty of permissions, and it is worth considering, as a matter of best practice, whether to use the admin user when connecting from your application.

    I personally think that it is good general practice to have a separate db for each deployment tier of an application. So if you are developing an app with, say, a ‘development’, ‘stage’, and ‘production’ deployment tier, then it’s wise to create a separate db for each tier. Alternatively, you might want to have the non-production tiers share a single db. The one thing that I believe is certain though is that you need a dedicated db for production, that it needs to have logical backups (i.e. mysqldump to file) carried out regularly, and that you ideally never edit the prod db directly (or, if you do, that you do so with much fear and trembling).

    Is it a good practice to have multiple dbs on a single DB instance? This depends entirely on the nature of the applications and their expected load on the DB instance. Assuming that you do have multiple applications using dbs on the same DB instance, you should consider creating a dedicated user for each application, so that the compromise of one user does not compromise ALL of your applications. In that case, the role of the admin user is ONLY to create the users whose credentials will be used to connect each application to its db. The next section shows how to accomplish that.

    creating additional RDS users

    So let’s assume that you want to create a user whose sole purpose is to enable an application deployed on some host HA (the application host) to connect to the host on which the DB instance is running, Hdb (the DB host). Log into the RDS DB instance with your admin user credentials and run:

    CREATE USER 'newuser'@'%' IDENTIFIED BY 'newuser_password';
    GRANT ALL PRIVILEGES ON db_name.* TO 'newuser'@'%';
    FLUSH PRIVILEGES;

    This will create the user ‘newuser’ with full privileges on the db db_name (but none of the admin user’s global privileges). The ‘newuser’@’%’ syntax means “this user connecting from any host”.

    Of course, if you want to be extra secure, you can restrict the user to specific hosts by replacing the wildcard ‘%’ with a concrete host, running the commands once per allowed host.
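    For example, to allow connections only from a single (hypothetical) server IP:
    
    CREATE USER 'newuser'@'203.0.113.10' IDENTIFIED BY 'newuser_password';
    GRANT ALL PRIVILEGES ON db_name.* TO 'newuser'@'203.0.113.10';
    FLUSH PRIVILEGES;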

    As an aside, if you want to know the name of the host you are currently connecting from, then run:

    mysql> SELECT USER() ;
    +-------------------------------------------+
    | USER()                                    |
    +-------------------------------------------+
    | admin@c-XX-XX-XXX-XXX.hsd1.sc.comcast.net |
    +-------------------------------------------+
    1 row in set (0.07 sec)

    In this case, the host ‘c-XX-XX-XXX-XXX.hsd1.sc.comcast.net’ resolves to my home’s public IP address (assigned by my ISP). (I assume that under the hood MySQL has done a reverse DNS lookup on my public IP address, i.e. something like nslookup MY_PUBLIC_IP_ADDRESS, and reports the hostname rather than the IP address itself, presumably because the IP is considered less permanent.)

    enabling user to change password

    As of Nov 2022, there seems to be an issue with phpMyAdmin whereby a user created this way cannot change his/her own password through the phpMyAdmin interface. Presumably the SQL command issued under the hood to change the password requires certain global privileges (which this user does not have). A temporary solution is to connect to the DB instance with your admin user and run:

    GRANT CREATE USER ON *.* TO 'newuser'@'%' WITH GRANT OPTION;

    connecting from a server

    One thing that threw me for a while was the need to explicitly whitelist the IP addresses that may access the DB instance. When I created the instance, I selected the option to allow connections to the database from a public IP address. I assumed that this meant that, by default, all IP addresses were permitted. However, this is not the case! Rather, when you create the DB instance, RDS determines the public IP address of the machine you are working from (in my case, my laptop behind my home’s public IP address) and adds only that address to the inbound rules of the AWS security group attached to the DB instance.

    In order to connect the application running on a remote server, you need to go to that security group in the AWS console and add another inbound rule for MySQL/Aurora allowing connections from the IP address of your server.
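    If you prefer the command line to the console, the same inbound rule can be added with the AWS CLI (the security-group ID and server IP below are placeholders):
    
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 3306 \
        --cidr 203.0.113.20/32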

    AWS EC2 Deployment

    virtual machine setup

    I chose Ubuntu server 20.04 for my OS with a single core and 20GB of storage. (The data will be stored in the external DB and S3 resources, so not much storage is needed.) I added 4GB of swap space and installed docker and docker-compose.
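    For reference, the swap space can be added with something like the following (the standard Ubuntu procedure; the swap file location is arbitrary):
    
    sudo fallocate -l 4G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # make the swap file persistent across reboots
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab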

    apache proxy with SSL Certification

    I used AWS Route 53 to create two endpoints pointing to the public IP address of the EC2 instance. To expose the two docker applications to the outside world, I installed Apache on the EC2 instance and proxied these two endpoints to ports 6050 and 6051 (the app and phpMyAdmin respectively). I also used certbot to establish SSL certification. The Apache config looks like this:

    <IfModule mod_ssl.c>
    <Macro SSLStuff>
        ServerAdmin webmaster@localhost
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Include /etc/letsencrypt/options-ssl-apache.conf
        SSLCertificateFile /etc/letsencrypt/live/xxx/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/xxx/privkey.pem
    </Macro>
    
    <VirtualHost _default_:443>
        Use SSLStuff
        DocumentRoot /var/www/html
    </VirtualHost>
    
    <VirtualHost *:443>
        Use SSLStuff
        ServerName dataset-tracker.astro-prod-it.aws.umd.edu
        ProxyPass / http://127.0.0.1:6050/
        ProxyPassReverse / http://127.0.0.1:6050/
    </VirtualHost>
    
    <VirtualHost *:443>
        Use SSLStuff
        ServerName dataset-tracker-phpmyadmin.astro-prod-it.aws.umd.edu
        ProxyPass / http://127.0.0.1:6051/
        ProxyPassReverse / http://127.0.0.1:6051/
        RequestHeader set X-Forwarded-Proto "https"
        RequestHeader set X-Forwarded-Port "443"
    </VirtualHost>
    </IfModule>

    OS daemonization

    Once you have cloned the code for the applications to the EC2 instance, you can start everything in production mode with:

    docker-compose -f docker-compose.prod.yml up -d

    … where the ‘-d’ (detached) flag starts the containers in the background (‘daemonized’).

    One of the nice things about using Docker is that it becomes super easy to set up your application as a system service: simply add restart: always to each service in your docker-compose file. This setting tells Docker to restart the container whenever it stops (e.g. after an internal error) and whenever the Docker service itself restarts. This means that if the EC2 instance crashes or is otherwise restarted, then Docker (which, being a system service, will itself restart automatically) will automatically bring the application back up.
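    In docker-compose terms, that is just a one-line addition per service (a minimal fragment; the service name is illustrative):
    
    services:
      app:
        build: .
        restart: always   # restart after container errors and after the docker daemon restarts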

    MySQL logical backups to AWS S3

    Finally, we need to plan for disaster recovery. If the EC2 instance gets messed up, or the AWS RDS instance gets messed up, then we need to be able to restore the application as easily as possible.

    The application code is safe thanks to GitHub, so we just need to make sure that we never lose our data. RDS performs regular disk-level backups, but I personally prefer to create logical backups as well because, in the event that the disk becomes corrupted, I feel wary about trying to find a past ‘uncorrupted’ state of the disk. Logical backups to file do not rely on the integrity of the entire disk, and thereby arguably provide a simpler and therefore less error-prone means of preserving the data.

    (This is in accordance with my general philosophy of preferring to back up files rather than disk images. If something serious goes wrong at the level of, e.g., disk corruption, I generally prefer to ‘start afresh’ with a clean OS and copy files over as needed, rather than try to restore a previous snapshot of the disk. This approach also helps keep the disk clean, since disks tend to accumulate garbage over time.)

    To achieve these backups, create an S3 bucket on AWS and call it e.g. ‘mysql-backups’. Then install s3fs, an open-source tool for mounting S3 buckets onto a Linux filesystem, with sudo apt install s3fs.

    Next, add the following line to /etc/fstab:

    mysql-backups /path/to/dataset-tracker-mysql-backups fuse.s3fs allow_other,passwd_file=/home/user/.passwd-s3fs 0 0

    Next, you need to create an AWS IAM user with permissions for full programmatic access to your S3 bucket. Obtain the Access key ID and Secret access key for that user and place them in a file /home/user/.passwd-s3fs in the format:

    [Access key ID]:[Secret access key]
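    Note that s3fs will refuse to use a credentials file that is readable by other users, so restrict its permissions:
    
    chmod 600 /home/user/.passwd-s3fs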

    Now you can mount the S3 bucket by running sudo mount -a (which will read the /etc/fstab file).

    Check that the dir has successfully mounted by running df -h and/or by creating a test file within the dir /path/to/dataset-tracker-mysql-backups and checking in the AWS S3 console that that file has been placed in the bucket.

    Finally, we need a script, run by a daily cronjob, that performs a mysqldump of the db to a file in this S3-mounted dir and maintains a history of backups by removing old/obsolete backup files. You can see the script used in this project here, which was adapted from this article.
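    The gist of such a script is roughly the following (the RDS endpoint, paths, option file, and retention period are illustrative placeholders; the linked script differs in its details):
    
    #!/bin/bash
    # Daily logical backup of the db to the S3-mounted dir, keeping 30 days of history.
    
    DB_HOST=your-db-instance.xxxxxxxx.us-east-1.rds.amazonaws.com   # placeholder RDS endpoint
    BACKUP_DIR=/path/to/dataset-tracker-mysql-backups
    DATE=$(date +%F)
    
    # credentials are read from a mysql option file rather than the command line
    mysqldump --defaults-extra-file=/home/user/.my.cnf -h "$DB_HOST" db_name \
        > "$BACKUP_DIR/db_name-$DATE.sql"
    
    # prune backups older than 30 days
    find "$BACKUP_DIR" -name '*.sql' -mtime +30 -delete
    
    Add this as a daily cronjob (e.g. via crontab -e), and it will place a fresh .sql file in your S3 dir each day and remove the obsolete versions.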

  • Raspberry Pi Cluster V: Deploying a NextJs App on Ubuntu Server 20.04

    Intro

    In the last part I opened up my primary node to the Internet. We’re now in a position to make public-facing applications that will eventually connect up with microservices that harness the distributed computing power of the 4 RPi nodes in our cluster.

    Before we can make such parallel applications, we need to be able to deploy a web-facing interface to the primary node in our cluster. Not only is that a generally important thing to be able to do, but it allows us to separate web-specific code/processes from what we might call “computing” or “business-logic” code/processes (i.e. a microservices architecture).

    So in this post (and the next few), I am going to go through the necessary steps to get a MERN stack up and running on our primary node. This is not a cluster-specific task; it is something every full-stack developer needs to know how to do on a linux server.

    Tech Stack Overview

    In the last part, we used AWS Route 53 to set up a domain pointing to our primary node. I mentioned that you need to have a web server like Apache running to check that everything is working, namely the port forwarding and dynamic DNS cronjob.

    We are going to continue on here by creating a customer-facing application with the following features:

    • Full set up of Apache operating as our gateway/proxy web server
    • SSL Certification with certbot
    • NextJs as our application server (providing the “Express”, “React” and “Node” parts of our MERN stack)
    • User signup and authentication with:
      • AWS Cognito as our Authentication Server
      • MongoDB as our general/business-logic DB

    Apache & Certbot Setup

    Apache 101

    This section is aimed at beginners setting up Apache. Apache is a web server. Its job is to receive an http/https request and return a response. That response is usually one of three things:

    1. A copy of a file on the filesystem
    2. HTML listing the contents of a directory on the filesystem, with links to download the individual files (a ‘file browser’)
    3. A response that Apache gets back from another server that Apache “proxied” your original request to.

    In my opinion, absolutely every web developer needs to know how to set up Apache and/or Nginx with SSL certification to be able to accomplish these three things. I tend to use Apache because I am just more used to it than Nginx.

    An important concept in Apache is that of a “virtual host”. The server that Apache runs on can host multiple applications. You might want to serve the files in some folder at one subdomain (e.g. myfiles.mydomain.com), a react app at another subdomain (e.g. react.mydomain.com), and an API with JSON responses at yet another (e.g. api.mydomain.com).

    In all three of these example subdomains, the DNS server points the subdomain to the same IP address (the public IP address of your home, in this project’s case). So requests for all of them arrive at the same Apache server listening on port 443, and we need to configure Apache to separate these requests and hand each one off to the appropriate process running on our machine. The main way to separate requests in Apache is by the target subdomain, which is done by creating a “virtual host” within the Apache configuration, as demonstrated below.

    Installing Apache and Certbot on Ubuntu 20.04

    Installing apache and certbot on Ubuntu 20.04 is quite straightforward.

    sudo apt install apache2
    sudo snap install core
    sudo snap refresh core
    sudo apt remove certbot
    sudo snap install --classic certbot
    sudo ln -s /snap/bin/certbot /usr/bin/certbot

    We also need to enable the following apache modules with the a2enmod tool (“Apache2-Enable-Module”) that gets installed along with the apache service:

    sudo a2enmod proxy_http proxy macro

    Make sure you have a dynamic domain name pointing to your public IP address, and run the certbot wizard with automatic apache configuration:

    sudo certbot --apache

    If this is the first time running it, it will prompt you for an email address, domain names, and whether to set up automatic redirects from http to https. (I recommend you do.) It will then modify your configuration files in /etc/apache2/sites-available. The file /etc/apache2/sites-available/000-default-le-ssl.conf looks something like this:

    <IfModule mod_ssl.c>
    <VirtualHost *:443>
            # ...        
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html
            # ...
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            # ...
            ServerName www.yourdomain.com
            SSLCertificateFile /etc/letsencrypt/live/www.yourdomain.com/fullchain.pem
            SSLCertificateKeyFile /etc/letsencrypt/live/www.yourdomain.com/privkey.pem
            Include /etc/letsencrypt/options-ssl-apache.conf
    </VirtualHost>
    </IfModule>

    There is quite a lot of boilerplate going on in this single virtual host. It basically says: “create a virtual host so that requests received on port 443 whose target URL has the subdomain www.yourdomain.com will get served a file from the directory /var/www/html; decrypt using the information within these SSL files; if errors occur, log them in the default location; etc.”

    Since we might want to have lots of virtual hosts set up on this machine, each with certbot SSL certification, we will want to avoid having to repeat all of this boilerplate.

    To do this, let’s first disable this configuration with sudo a2dissite 000-default-le-ssl.conf.

    Now let’s create a fresh configuration file with sudo touch /etc/apache2/sites-available/mysites.conf, enable it with sudo a2ensite mysites.conf, and add the following text:

    <IfModule mod_ssl.c>
    <Macro SSLStuff>
        ServerAdmin webmaster@localhost
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Include /etc/letsencrypt/options-ssl-apache.conf
        SSLCertificateFile /etc/letsencrypt/live/www.yourdomain.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/www.yourdomain.com/privkey.pem
    </Macro>
    
    <VirtualHost _default_:443>
        Use SSLStuff
        DocumentRoot /var/www/notfound
    </VirtualHost>
    <VirtualHost *:443>
        Use SSLStuff
        ServerName www.yourdomain.com
        ProxyPass / http://127.0.0.1:5050/
        ProxyPassReverse / http://127.0.0.1:5050/
    </VirtualHost>
    </IfModule>

    Here we are making use of the apache “macro” module we enabled earlier to define the boilerplate configurations that we want all of our virtual hosts to have. By including the line Use SSLStuff in a virtual host, we thereby include everything we defined in the SSLStuff block.

    This configuration has two virtual hosts. The first one is a default; if a request is received without a recognized domain, then serve files from /var/www/notfound. (You of course need to create such a dir, and, at minimum, have an index.html file therein with a “Not found” message.)

    The second virtual host tells Apache to take any request sent to www.yourdomain.com and forward it to localhost on port 5050, where, presumably, a separate server process will be listening for http requests. This port is arbitrary; it is where we will be setting up our NextJs app.

    Whenever you change Apache configurations, you of course need to restart Apache with sudo systemctl restart apache2. To quickly test that the proxied route is working, install node (I always recommend doing so with nvm), install a simple server with npm i -g http-server, create a test index.html file somewhere on your filesystem, and run http-server -p 5050.
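    Spelled out, that quick test looks something like this (the test directory is arbitrary):
    
    # one-off static file server standing in for the real app on the proxied port
    npm i -g http-server
    mkdir -p ~/proxytest
    echo '<h1>Proxy works</h1>' > ~/proxytest/index.html
    http-server ~/proxytest -p 5050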

    Now visit the proxied domain and confirm that you are receiving the content of the index.html file you just created. The great thing about this set up is that Apache is acting as a single encryption gateway on port 443 for all of your apps, so you don’t need to worry about SSL configuration on your individual application servers; all of our inner microservices are safe!

    Expanding Virtual Hosts

    There will inevitably come a time when you want to add more virtual hosts for new applications on the same server. Say that I want to have a folder for serving miscellaneous files to the world.

    First, you need to go back to your DNS interface (AWS Route 53 in my case), and add a new subdomain pointing to your public IP Address.

    Next, in my case, since I am using a script to dynamically update the IP address that my AWS-controlled domains point to (as described in the last part of this cluster series), I need to open crontab -e and add a line for this new domain.

    Next, I need to change the apache configuration by adding another virtual host and restarting apache:

    <VirtualHost *:443>
        Use SSLStuff
        DocumentRoot /var/www/miscweb
        ServerName misc.yourdomain.com
    </VirtualHost>

    Next, we need to create a dir at /var/www/miscweb (with /var/www being the conventional location for dirs served by Apache). Since /var/www has strict read/write permissions requiring sudo, and since I don’t want to have to remember to use sudo every time I edit a file therein, I tend to create the real folder in my home dir and link it there; in this case:

    sudo ln -fs /home/myhome/miscweb /var/www/miscweb

    Next, I need to rerun certbot to expand the list of domains covered by my active certificate. This is done with the following command:

    sudo certbot certonly --apache --cert-name www.yourdomain.com --expand -d \
    www.yourdomain.com,\
    misc.yourdomain.com

    Notice that when you run this expansion command you have to specify ALL of the domains to be included in the updated certificate, including those that were already listed; it’s not enough to specify just the ones you want to add. Since it can be hard to keep track of all of your domains, I recommend keeping this command, with all of your active domains, in a text file somewhere on your server. When you want to add another domain, first edit this file (one domain per line) and then paste the new command into the terminal to perform the update.

    If you want to prevent visitors from browsing the files within ~/miscweb directly, place an index.html file in there (Apache will serve it instead of the directory listing). Add a simple message like “Welcome to my file browser for misc sharing” and check that it works by restarting Apache and visiting the domain over https.

    Quick Deploy of NextJs

    We’ll talk more about nextJs in the next part. For now, we’ll do a very quick deployment of nextJs just to get the ball rolling.

    Normally, I’d develop my nextJs app on my laptop, push changes to github or gitlab, pull those changes down on the server, and restart it. However, since node is already installed on the RPi primary node, we can just give it a quick start by doing the following:

    • Install pm2 with npm i -g pm2
    • Create a fresh nextJs app in your home directory with cd ~; npx create-next-app --typescript
    • Move into the dir of the project you just created and edit the start script to include the port you will proxy to: "start": "next start -p 5050"
    • To run the app temporarily, run npm run start and visit the corresponding domain in the browser to see your nextJs boilerplate app in service
    • To run the app indefinitely (even after you log out of the ssh shell, etc.), you can use pm2 to run it as a managed background process like so: pm2 start npm --name "NextJsDemo" -- start $PWD -p 5050

    NextJs has several great features. First, it will pre-render all of your react pages for fast loading and strong SEO. Second, it comes with express-like API functionality built-in. Go to /api/hello at your domain to see the built-in demo route in action, and the corresponding code in pages/api/hello.ts.

    More on NextJs in the next part!