Tag: node

  • Developing a Simple Angular Library

    Intro

    If you want to easily develop, test, and publish an angular library to npm, then follow these steps. Here I’ll show what I did to get my library ‘ngx-js9’ published to npm.

    Library Development

    On your local machine, create a regular angular application with a name of the form “ngx-XXX-library”:

    ng new ngx-js9-library
    cd ngx-js9-library

    I’ll refer to this regular app we just created as the “wrapper (app)”. Within this wrapper app, we will now generate a library with a name of the form “ngx-XXX”:

    ng generate library ngx-js9

    The code for the library will be in projects/ngx-XXX, and the code for the wrapper will be in the usual src dir. We now compile the library with:

    ng build ngx-js9

    This command outputs to dist/ngx-XXX. Within the wrapper app we can import this library by going to app.module.ts and importing the library module as follows:

    ...
    import { NgxJs9Module } from 'ngx-js9';
    
    @NgModule({
      declarations: [AppComponent],
      imports: [BrowserModule, AppRoutingModule, NgxJs9Module],
      providers: [],
      bootstrap: [AppComponent],
    })
    export class AppModule {}

    Now go to app.component.html and replace its content with the following:

    <h1>Lib Test</h1>
    <lib-ngx-js9></lib-ngx-js9>

    … and run the app as per usual with ng serve, and you’ll see the content of the library component embedded within the wrapper app. Now for hot-reloading development of your library component, you can also build the library with the --watch option:

    ng build ngx-js9 --watch

    … along with ng serve in order to get instant updates. Awesome: angular has made it very straightforward to set up and develop a basic library!

    Publishing to npm

    If you are signed into npm, then all that’s involved is to run ng build ngx-XXX, then go into the generated dir dist/ngx-XXX and run npm publish. It’s that simple!
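
    For example, with the ‘ngx-js9’ library from above, the whole publishing sequence is just:

    ng build ngx-js9
    cd dist/ngx-js9
    npm publish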

  • Raspberry Pi Cluster V: Deploying a NextJs App on Ubuntu Server 20.04

    Intro

    In the last part I opened up my primary node to the Internet. We’re now in a position to make public-facing applications that will eventually connect up with microservices that harness the distributed computing power of the 4 RPi nodes in our cluster.

    Before we can make such parallel applications, we need to be able to deploy a web-facing interface to the primary node in our cluster. Not only is that a generally important thing to be able to do, but it allows us to separate web-specific code/processes from what we might call “computing” or “business-logic” code/processes (i.e. a microservices architecture).

    So in this post (and the next few), I am going to go through the necessary steps to get a MERN stack up and running on our primary node. This is not a cluster-specific task; it is something every full-stack developer needs to know how to do on a linux server.

    Tech Stack Overview

    In the last part, we used AWS Route 53 to set up a domain pointing to our primary node. I mentioned that you need to have a web server like Apache running to check that everything is working, namely the port forwarding and dynamic DNS cronjob.

    We are going to continue on here by creating a customer-facing application with the following features:

    • Full set up of Apache operating as our gateway/proxy web server
    • SSL Certification with certbot
    • NextJs as our application server (providing the “Express”, “React” and “Node” parts of our MERN stack)
    • User signup and authentication with:
      • AWS Cognito as our Authentication Server
      • MongoDB as our general/business-logic DB

    Apache & Certbot Setup

    Apache 101

    This section is aimed at beginners setting up Apache. Apache is a web server. Its job is to receive an http/https request and return a response. That response is usually one of three things:

    1. A copy of a file on the filesystem
    2. HTML listing the contents of a directory on the filesystem, with links to download individual files (a ‘file browser’)
    3. A response that Apache gets back from another server that Apache “proxied” your original request to.

    In my opinion, absolutely every web developer needs to know how to set up Apache and/or Nginx with SSL certification to be able to accomplish these three things. I tend to use Apache because I am just more used to it than Nginx.

    An important concept in Apache is that of a “virtual host”. The server that Apache runs on can host multiple applications. You might want to serve some files in a folder to the internet at one subdomain (e.g. myfiles.mydomain.com), a react app at another subdomain (e.g. react.mydomain.com), and an API with JSON responses at yet another subdomain (e.g. api.mydomain.com).

    For all three of these example subdomains, you set up the DNS server to point the subdomain to the same IP Address (in this project’s case, the public IP Address of your home). Since requests for all of them arrive at the same Apache server listening on port 443, we need to configure Apache to separate these requests and route each one to the appropriate process running on our machine. The main way Apache separates requests is by the target subdomain. This is done by creating a “virtual host” within the Apache configuration, as demonstrated below.

    Installing Apache and Certbot on Ubuntu 20.04

    Installing apache and certbot on Ubuntu 20.04 is quite straightforward.

    sudo apt install apache2
    sudo snap install core
    sudo snap refresh core
    sudo apt remove certbot
    sudo snap install --classic certbot
    sudo ln -s /snap/bin/certbot /usr/bin/certbot

    We also need to enable the following apache modules with the a2enmod tool (“Apache2-Enable-Module”) that gets installed along with the apache service:

    sudo a2enmod proxy_http proxy macro

    Make sure you have a dynamic domain name pointing to your public IP address, and run the certbot wizard with automatic apache configuration:

    sudo certbot --apache

    If this is the first time running, it will prompt you for an email, domain names, and whether to set up automatic redirects from http to https. (I recommend you do.) It will then modify your configuration files in /etc/apache2/sites-available. The file /etc/apache2/sites-available/000-default-le-ssl.conf looks something like this:

    <IfModule mod_ssl.c>
    <VirtualHost *:443>
            # ...        
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html
            # ...
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            # ...
            ServerName www.yourdomain.com
            SSLCertificateFile /etc/letsencrypt/live/www.yourdomain.com/fullchain.pem
            SSLCertificateKeyFile /etc/letsencrypt/live/www.yourdomain.com/privkey.pem
            Include /etc/letsencrypt/options-ssl-apache.conf
    </VirtualHost>
    </IfModule>

    There is quite a lot of boilerplate stuff going on in this single virtual host. It basically says “create a virtual host so that requests received on port 443 whose target URL has the subdomain www.yourdomain.com will get served a file from the directory /var/www/html; decrypt using the information within these SSL files; if errors occur, log them in the default location, etc.”.

    Since we might want to have lots of virtual hosts set up on this machine, each with certbot SSL certification, we will want to avoid having to repeat all of this boilerplate.

    To do this, let’s first disable this configuration with sudo a2dissite 000-default-le-ssl.conf.

    Now let’s create a fresh configuration file with sudo touch /etc/apache2/sites-available/mysites.conf, enable it with sudo a2ensite mysites.conf, and add the following text:

    <IfModule mod_ssl.c>
    <Macro SSLStuff>
        ServerAdmin webmaster@localhost
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        Include /etc/letsencrypt/options-ssl-apache.conf
        SSLCertificateFile /etc/letsencrypt/live/www.yourdomain.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/www.yourdomain.com/privkey.pem
    </Macro>
    
    <VirtualHost _default_:443>
        Use SSLStuff
        DocumentRoot /var/www/notfound
    </VirtualHost>
    <VirtualHost *:443>
        Use SSLStuff
        ServerName www.yourdomain.com
        ProxyPass / http://127.0.0.1:5050/
        ProxyPassReverse / http://127.0.0.1:5050/
    </VirtualHost>
    </IfModule>

    Here we are making use of the apache “macro” module we enabled earlier to define the boilerplate configurations that we want all of our virtual hosts to have. By including the line Use SSLStuff in a virtual host, we thereby include everything we defined in the SSLStuff block.

    This configuration has two virtual hosts. The first one is a default; if a request is received without a recognized domain, then serve files from /var/www/notfound. (You of course need to create such a dir, and, at minimum, have an index.html file therein with a “Not found” message.)
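
    Something minimal like this will do (sudo is needed since /var/www is root-owned):

    sudo mkdir -p /var/www/notfound
    echo '<h1>Not found</h1>' | sudo tee /var/www/notfound/index.html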

    The second virtual host tells Apache to take any request sent to www.yourdomain.com and forward it on to localhost on port 5050, where, presumably, a separate server process will be listening for http requests. This port is arbitrary; it is where we will be setting up our nextJs app.

    Whenever you change apache configurations, you of course need to restart apache with sudo systemctl restart apache2. To quickly test that the proxied route is working, install node (I always recommend doing this with nvm), install a simple server by running npm i -g http-server, create a test index.html file somewhere on your filesystem, and run http-server -p 5050 from that directory.
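
    For example, a quick test could look something like this (the directory name is arbitrary):

    npm i -g http-server
    mkdir ~/proxy-test && cd ~/proxy-test
    echo '<h1>Hello from port 5050</h1>' > index.html
    http-server -p 5050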

    Now visit the proxied domain and confirm that you are receiving the content of the index.html file you just created. The great thing about this setup is that Apache is acting as a single encryption gateway on port 443 for all of your apps, so you don’t need to worry about SSL configuration on your individual application servers; all of our inner microservices are safe!

    Expanding Virtual Hosts

    There will inevitably come a time when you want to add more virtual hosts for new applications on the same server. Say that I want to have a folder for serving miscellaneous files to the world.

    First, you need to go back to your DNS interface (AWS Route 53 in my case), and add a new subdomain pointing to your public IP Address.

    Next, in my case, since I am using a cronjob script to dynamically update the IP Address that my AWS-controlled domains point to (as I described in the last part of this cluster series), I need to open up crontab -e and add a line for this new subdomain, as sketched below.
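
    Purely for illustration (the actual script name and arguments come from the dynamic DNS setup in the previous post, so treat these as placeholders), the new crontab line might look something like:

    # hypothetical Route 53 update script from the previous post, run every 15 minutes
    */15 * * * * /home/myhome/update-route53.sh misc.yourdomain.com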

    Next, I need to change the apache configuration by adding another virtual host and restarting apache:

    <VirtualHost *:443>
        Use SSLStuff
        DocumentRoot /var/www/miscweb
        ServerName misc.yourdomain.com
    </VirtualHost>

    Next, we need to create a dir at /var/www/miscweb (with /var/www being the conventional location for all dirs served by apache). Since /var/www has strict read/write permissions requiring sudo, and since I don’t want to have to remember to use sudo every time I want to edit a file therein, I tend to create the real folder in my home dir and link it there with, in this case:

    sudo ln -fs /home/myhome/miscweb /var/www/miscweb

    Next, I need to rerun certbot with a command to expand the domains listed in my active certificate. This is done with the following command:

    sudo certbot certonly --apache --cert-name www.yourdomain.com --expand -d \
    www.yourdomain.com,\
    misc.yourdomain.com

    Notice that when you run this expansion command you have to specify ALL of the domains to be included in the updated certificate, including those that were already listed; it’s not enough to specify just the ones you want to add. Since it can be hard to keep track of all of your domains, I recommend that you keep this command, with all of your active domains, in a text file somewhere on your server. When you want to add another domain, first edit this file (one domain on each line) and then copy the new command to the terminal to perform the update.

    If you want to prevent the user from browsing the files within ~/miscweb, then you need to place an index.html file in there. Add a simple message like “Welcome to my file browser for misc sharing” and check that it works by restarting apache and visiting the domain with https.
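
    A one-liner is enough here, for example:

    echo '<h1>Welcome to my file browser for misc sharing</h1>' > ~/miscweb/index.html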

    Quick Deploy of NextJs

    We’ll talk more about nextJs in the next part. For now, we’ll do a very quick deployment of nextJs just to get the ball rolling.

    Normally, I’d develop my nextJs app on my laptop, push changes to github or gitlab, pull those changes down on the server, and restart it. However, since node is already installed on the RPi primary node, we can just give it a quick start by doing the following:

    • Install pm2 with npm i -g pm2
    • Create a fresh nextJs app in your home directory with cd ~; npx create-next-app --typescript
    • Move into the dir of the project you just created and edit the start script to include the port you will proxy to (see the snippet after this list): "start": "next start -p 5050"
    • To run the app temporarily, run npm run start and visit the corresponding domain in the browser to see your nextJs boilerplate app in service
    • To run the app indefinitely (even after you log out of the ssh shell, etc.), you can use pm2 to run it as a managed background process like so: pm2 start npm --name "NextJsDemo" -- start $PWD -p 5050
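
    After that edit, the scripts section of package.json should look roughly like this (the exact set of scripts may differ slightly between create-next-app versions):

    "scripts": {
      "dev": "next dev",
      "build": "next build",
      "start": "next start -p 5050",
      "lint": "next lint"
    }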

    NextJs has several great features. First, it will pre-render all of your React pages for fast loading and strong SEO. Second, it comes with Express-like API functionality built in. Go to /api/hello at your domain to see the built-in demo route in action, and the corresponding code in pages/api/hello.ts.
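
    For reference, the generated pages/api/hello.ts (the code behind /api/hello) looks roughly like this; the exact boilerplate may vary a little between create-next-app versions:

    // pages/api/hello.ts: a minimal nextJs API route handler
    import type { NextApiRequest, NextApiResponse } from 'next'

    type Data = {
      name: string
    }

    export default function handler(
      req: NextApiRequest,
      res: NextApiResponse<Data>
    ) {
      // Respond with a small JSON payload and a 200 status
      res.status(200).json({ name: 'John Doe' })
    }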

    More on NextJs in the next part!