AWS Production Static Site Setup

Intro

I’ve fumbled through the process of setting up a production static site on AWS a few times now. These are my notes so that next time I can get through the process faster.

Overview

We want to be able to run a local script that wraps around the AWS CLI to upload assets to an AWS S3 bucket (using user credentials with limited permissions). The S3 bucket is set up for serving a static site and serves as the origin of a CloudFront instance, which is itself aliased to a Route 53 hosted-zone record, all glued together with an ACM certificate.

Finally, we need a script that copies the contents of the static site’s directory to S3, compressing all image files along the way. In summary:

  • S3 Bucket
  • Cloudfront Instance
  • Certificate Manager Instance
  • Route 53 Configuration
  • AWS User for CLI uploads

S3 Bucket

Setting up an S3 bucket is quite straightforward in the AWS GUI Console. When asked about “Block all public access”, just uncheck it, and leave all of the sub-options unchecked as well. (Every guide I’ve seen seems to ignore these convoluted sub-options without explanation.)

Under permissions, you need to create a bucket policy that will allow anyone to access objects in the bucket. So copy the ARN for the bucket (e.g. “arn:aws:s3:::rnddotcom-site-s3-bucket”) and use the “Policy Generator” interface to generate some JSON text as depicted below.

Note: under the “Actions” option you need to select just the “GetObject” option. Click “Add Statement” and “Generate Policy” to get the JSON. Copy/paste it into the bucket’s policy text field and save. The following JSON is confirmed to work.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::rnddotcom-site-s3-bucket/*"
        }
    ]
}
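If you prefer the CLI to the Policy Generator, the same policy can be written to a file and applied with `aws s3api put-bucket-policy`. A minimal sketch, reusing the example bucket name from above:

```shell
set -eu

# Write the public-read policy to a local file (bucket name as in the example above)
BUCKET=rnddotcom-site-s3-bucket
cat > policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::${BUCKET}/*"
        }
    ]
}
EOF

# Apply it (requires credentials with s3:PutBucketPolicy on this bucket):
#   aws s3api put-bucket-policy --bucket "$BUCKET" --policy file://policy.json
```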

Next, when you enable “Static website hosting”, you must specify the “Index document” since the S3 servers will not default to index.html.

Upload Static Files (with gzip compression)

When developing, I always want to be able to re-upload/re-deploy my software with a script. For that, I use a bash script that wraps around the AWS CLI, which you can install on a Mac with Homebrew (brew install awscli).

For an example of such a script, see my terraform-aws-modules repo. For this to work, you need AWS credentials for a user with access to the bucket.

A good practice is to create a user with just enough permissions for the resources you need to access. So go to the AWS IAM console, and create a user with “Programmatic Access”.

In the permissions step, click on “Attach existing policies directly” and select — in this example — the “AmazonS3FullAccess” policy and click on “Next: Tags”.

Skip through Tags, create the user, and copy the “Access key ID” and “Secret access key” to somewhere safe. If you are using the script I shared above, you can add them directly to your local .env file. By sourcing the .env file, you give these credentials priority over those stored in ~/.aws/credentials (which is handy if you manage multiple AWS accounts).

export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."

Now you can run the above bash script that wraps around the AWS CLI to upload the contents of a local directory. The script also includes logic to pick out image files and compress them before uploading.
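As a minimal sketch of the core idea behind such a script (stage the files, gzip the images in place, then sync), assuming a hypothetical build directory and the example bucket name, not my actual script:

```shell
#!/usr/bin/env bash
# Sketch of the upload script's core idea; paths and bucket name are assumptions.
set -euo pipefail

compress_images() {
  # gzip each image in place, keeping its original file name
  local dir=$1
  find "$dir" -type f \( -name '*.png' -o -name '*.jpg' -o -name '*.svg' \) |
    while read -r f; do
      gzip -9 "$f" && mv "$f.gz" "$f"
    done
}

# Demonstrate on a throwaway staging directory (no AWS calls needed):
stage=$(mktemp -d)
printf 'fake image bytes' > "$stage/logo.png"
printf '<h1>hello</h1>'   > "$stage/index.html"
compress_images "$stage"

# A real deploy would then sync in two passes, e.g.:
#   aws s3 sync "$stage" s3://rnddotcom-site-s3-bucket --content-encoding gzip \
#     --exclude '*' --include '*.png' --include '*.jpg' --include '*.svg'
#   aws s3 sync "$stage" s3://rnddotcom-site-s3-bucket \
#     --exclude '*.png' --exclude '*.jpg' --exclude '*.svg'
```

Serving the gzipped images only works because the first sync sets the Content-Encoding: gzip header, so browsers decompress them transparently.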

You now have a complete, if basic, HTTP static site, great for development.

Cloudfront I

If you need a production site, then you need SSL encryption (at a minimum, to look professional), CDN distribution, and a proper domain.

So next, go to CloudFront in the AWS GUI Console and create a new “Distribution”. There are a lot of options here (CDNs are complicated things, after all), and you just have to go through each one and give it some thought. In most cases, you can just leave the defaults. A few notes are worth making:

  • “Grant Read Permissions on Bucket”: No; we already set these up
  • “Compress Objects Automatically”: Select yes; CloudFront maintains a list of file types that it will compress automatically
  • “Alternate Domain Names (CNAMEs)”: Leave this blank for now and sort it out after creating the distribution
  • “Default Root Object”: Make sure to set this to index.html
  • “Viewer Protocol Policy”: Set this to “Redirect HTTP to HTTPS” (as is my custom)

SSL Certification

Now we need to point the host name to the CloudFront distribution. Surprisingly, it seems you NEED to have SSL, and to have it set up first, for this to happen. So go to ACM and click on “Request a Certificate”. Select “Request a public certificate” and continue.

Add your host names and click continue. Assuming you have access to the DNS servers, select “DNS Validation” and click “Next”. Skip over tags and click on “Confirm and Request”.

The next step will be to prove to AWS ACM that you do indeed control the DNS for the selected hosts you wish to certify. To do this, the AWS console will provide details to create DNS records whose sole purpose will be for ACM to ping in order to validate said control.


You can either go to your DNS server console and add CNAME records manually, or, if you’re using Route 53, just click on “Create record in Route 53”, and it will basically do it automatically for you. Soon thereafter, you can expect the ACM entry to turn from “Pending validation” to “Success”.

Cloudfront II

Now go back and edit your CloudFront distribution. Add the hostname under “Alternate Domain Names (CNAMEs)”, choose “Custom SSL Certificate (example.com)”, select the certificate that you just requested, and save these changes.

Route 53

Finally, go to the hosted zone for your domain in Route 53 and click on “Create Record”. Leave the record type as “A” and toggle the “Alias” switch. This transforms the “Value” field into a drop-down menu labelled “Route traffic to”; choose “Alias to CloudFront distribution”, then a region, and then in the final drop-down you should be able to select the default URL of the CloudFront instance (something like “docasfafsads.cloudfront.net”).

Hit “Create records” and, in theory, you have a working production site.
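The same alias record can be created from the CLI with `aws route53 change-resource-record-sets`. Note that alias records pointing at CloudFront always use the fixed hosted zone ID `Z2FDTNDATAQYW2`; the domain name and distribution URL below are placeholders, not real values:

```shell
set -eu

# Write the change batch (the domain and distribution URL are placeholders)
cat > alias-record.json <<'EOF'
{
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "docasfafsads.cloudfront.net",
                    "EvaluateTargetHealth": false
                }
            }
        }
    ]
}
EOF

# Apply against your own hosted zone:
#   aws route53 change-resource-record-sets \
#     --hosted-zone-id YOUR_ZONE_ID --change-batch file://alias-record.json
```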

Next.js Routing

If you are using Next.js to generate your static files, then you will not be able to navigate straight to a page URL because, I have discovered, the Next.js router will not pass you on to the correct page when you fall back to index.html, as it would if you were using e.g. React Router. There are two solutions to this problem:

  • Add a trailing slash to all routes — a simple but ugly solution, IMO
  • (Preferred) Create a copy of each .html file without the extension whenever you want to re-upload your site; requires extra logic in your bash script
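The preferred option can be sketched like this (the build directory name is an assumption, and the upload would use `aws s3 cp` with an explicit `--content-type`, since the extensionless copies won’t be detected as HTML):

```shell
#!/usr/bin/env bash
# Sketch of the extensionless-copy step; directory and bucket names are assumptions.
set -euo pipefail

make_extensionless_copies() {
  local dir=$1
  find "$dir" -type f -name '*.html' ! -name 'index.html' |
    while read -r f; do
      cp "$f" "${f%.html}"   # e.g. /about.html -> /about
    done
}

# Demonstrate on a throwaway build directory:
out=$(mktemp -d)
printf 'about page' > "$out/about.html"
printf 'home page'  > "$out/index.html"
make_extensionless_copies "$out"

# When uploading, give the extensionless copies an explicit content type, e.g.:
#   aws s3 cp "$out/about" s3://rnddotcom-site-s3-bucket/about --content-type text/html
```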

Troubleshooting

  • The terraform-launched S3 bucket comes with the “List” setting in the ACL section of the permissions tab; it’s not clear to me what difference this makes.
  • I was getting a lot of 504 errors at one point that had me befuddled. I noticed that they would go away if I first tried to access the site with http and then with https. I was saved by this post, and the notes it prompted me to find, which brought my attention to a setting you cannot access in the AWS GUI Console called “Origin Protocol Policy”. Because I originally created the CloudFront distribution with terraform, which can set this setting, and it set it to “match-viewer”, the CloudFront servers were trying to communicate with S3 using the same protocol I was using. So when I tried to view the site with https and got a cache miss on a file, CloudFront would try to access the origin with https; but the S3 website endpoint doesn’t handle https, so it would fail. When I tried using http, CloudFront would successfully get the file from S3, so that the next time I tried with https I would get a cache hit. Since I don’t like using http in general, and in fact switched to redirecting all http requests to https, I was stuck until I modified my terraform module to change the value of Origin Protocol Policy to “http-only”. I do not know what the default value of Origin Protocol Policy is when you create a CloudFront distribution through the Console — this might be a reason to always start off with terraform.
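For reference, the terraform fix amounts to setting origin_protocol_policy inside the origin’s custom_origin_config block of an aws_cloudfront_distribution resource. A sketch (the origin ID and domain name are placeholders, not my actual module):

```hcl
origin {
  origin_id   = "s3-website-origin"
  domain_name = "rnddotcom-site-s3-bucket.s3-website-us-east-1.amazonaws.com"

  custom_origin_config {
    origin_protocol_policy = "http-only"  # the fix: never talk https to the S3 website endpoint
    http_port              = 80
    https_port             = 443
    origin_ssl_protocols   = ["TLSv1.2"]
  }
}
```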
