Intro
I’ve heard good things about LiteSpeed, so I decided to try setting it up on AWS and to perform some comparisons with Apache. I also decided to try AWS RDS and S3 for data persistence.
This article assumes general knowledge of setting up and administering an EC2 instance, and focuses on OpenLiteSpeed (OLS) setup.
EC2 Setup
I set up an EC2 instance with Ubuntu 20.04 LTS through the AWS Console. I chose 15GB of EBS storage, which I expect will be more than enough so long as this instance remains dedicated to one WordPress instance with data and media files stored externally (~5GB for OS, ~4GB for swap space, leaving ~5-6GB to spare). I’ve started off with 1GB RAM (i.e. the free-tier-eligible option).
Then you need to ssh into your EC2 instance and do the usual setup (add swap space, add configurations for vim, zsh, tmux, etc.). If you plan for this to be a production WordPress site, you’ll also want to set up backups using the Data Lifecycle Manager.
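For reference, here’s a minimal sketch of adding the ~4GB of swap space mentioned above (the size and path are my assumptions; adjust to taste):
sudo fallocate -l 4G /swapfile        # reserve a 4GB file for swap
sudo chmod 600 /swapfile              # restrict access to root
sudo mkswap /swapfile                 # format it as swap
sudo swapon /swapfile                 # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # persist across reboots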
Installing OpenLiteSpeed
Once your EC2 instance is configured for general usage, we need to install OpenLiteSpeed. Although it claims to be “compatible” with Apache, there are a lot of differences in setting it up and operating it. I used the official guide as well as this resource I found through Google.
NOTE: this section describes how to install OLS manually; see below for the option of installing OLS, PHP, WordPress, MySQL, and LSCache through a convenient “one-click” script.
First, to install “manually”, you need to add the relevant repository:
wget -O - http://rpms.litespeedtech.com/debian/enable_lst_debian_repo.sh | sudo bash
Then you can install the package:
sudo apt install openlitespeed
This will install to /usr/local/lsws (lsws = “LiteSpeed Web Server”) and includes a service allowing you to run:
sudo systemctl start|stop|restart|status|enable|disable lsws
OLS will be running and enabled upon installation. Installing OLS also installs a bunch of other packages, including a default version of lsphp73, so OLS is ready to work with PHP out of the box.
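You can quickly sanity-check the service and the bundled PHP (paths assume the default install locations):
sudo systemctl status lsws            # should report active (running)
ls /usr/local/lsws | grep lsphp       # shows the bundled lsphp73 directory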
Managing OpenLiteSpeed
Unlike Apache, where all configuration is performed by editing files within /etc/apache2, OLS comes with an admin interface allowing you to configure most things graphically. This interface runs on port 7080 by default so, to access it on your EC2 instance, you need to open up port 7080 in your security group and then navigate to your_ec2_ip_address:7080, where you will see a login form.
Now, the first time you do all of this, you will not yet have set up an SSL certificate, so any such login will be technically insecure. My approach is to:
Kick off with a weak, throwaway password to get things up initially, set up SSL, then switch to a proper, strong, long-term password, and hope that, during those few minutes, your ISP or some government power does not packet-sniff your credentials and take over your OLS instance.
To set up the initial credentials, run this CLI wizard:
sudo /usr/local/lsws/admin/misc/admpass.sh
Then use those credentials to log in to the interface. Modifying settings here updates the config files within /usr/local/lsws, so in theory you never need to alter these files directly.
By default, OLS runs the actual server on port 8088. We want to change that to 80. So go to “Listeners”, click “View” on the Default listener, and edit the port to 80. Save and restart the server. Now you can go to your_ec2_ip_address in the browser to view the default site provided by OLS.
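If you prefer the terminal, a HEAD request from the instance itself should now return a LiteSpeed response on port 80:
curl -I http://localhost/            # expect an HTTP 200 with a LiteSpeed Server header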
This default code is provided in /usr/local/lsws/Example/html. Let’s create the file /usr/local/lsws/Example/html/temp.php with the contents:
<?php
echo "Is this working?";
phpinfo();
?>
And then go to your_ec2_ip_address/temp.php to confirm things are working. If you’ve followed these instructions precisely, then you’d expect to see something like this:

A note on LiteSpeed & PHP
The first time I tried installing OLS, I was rather confused about what one needed to do to get PHP working with it. The instructions I had come across told me to run the following commands after installing OLS:
sudo apt-get install lsphp74
sudo ln -sf /usr/local/lsws/lsphp74/bin/lsphp /usr/local/lsws/fcgi-bin/lsphp5
This might have been necessary back on Ubuntu 18.04, but with Ubuntu 20.04 it is not: installing OLS already brings in the lsphp73 package, so you only need to install lsphp74 if you care about having PHP 7.4 over 7.3.
It was also frustrating to be told to create the soft link given above without any explanation of what it does or why it is needed. As far as I can discern, you need this soft link if and only if you want to specify the PHP interpreter to be used with fast-CGI scripts. But since I never deal with CGI stuff, I am pretty sure one can skip this.
Furthermore, the instructions I read were incomplete. To get OLS to recognize the lsphp74 interpreter, you need to perform the additional step of setting the path in the admin console. To do that, go to “Server Configuration” and then the “External App” tab. There you need to edit the settings for the “LiteSpeed SAPI App” entry, and change the command field from “lsphp73/bin/lsphp” to “lsphp74/bin/lsphp”. Save, restart OLS, and check that the PHP version coming through in the temp.php page set up earlier is 7.4.
SSL Setup
I followed the instructions here, though they’re slightly out of date for Ubuntu 20.04.
Point a subdomain towards your EC2 instance; in this example, I’ll be using temp.rndsmartsolutions.com.
Run sudo apt install certbot to install certbot.
Run sudo certbot certonly to kick off the certbot certificate wizard. When asked “How would you like to authenticate with the ACME CA?”, choose “Place files in webroot directory (webroot)”.

Add your domain name(s) when prompted, then, when it asks you to “Input the webroot for [your domain name]”, enter “/usr/local/lsws/Example/html”. This is the default dir that OLS comes with, and certbot will then know to add a temporary file there in order to coordinate with the CA server to verify that you control the server to which the specified domain name points.
If successful, certbot will output the certificate files onto your server. You now have to use the OLS console to add those files to your server’s configuration. Go to the “Listeners” section and, under the “General” tab, change the “Port” field to 443 and the “Secure” field to Yes. In the SSL tab, set the “Private Key File” field to (in this example) /etc/letsencrypt/live/temp.rndsmartsolutions.com/privkey.pem and the “Certificate File” field to /etc/letsencrypt/live/temp.rndsmartsolutions.com/fullchain.pem. Now restart the server, try navigating to the domain you just set up over https, and you can expect it to work.
If SSL is working for the default Example virtual host on port 443, then you can use those same certificates for the WebAdmin server listening on port 7080. To do so, go to the “WebAdmin Settings > Listeners” section and view the “adminListener” entry. Under the SSL tab, set the “Private Key File” and “Certificate File” fields to the same values as above (i.e. pointing to our certbot-created certificates), then save and restart the server. Now you can expect to be able to access the WebAdmin interface securely by visiting, in this example, temp.rndsmartsolutions.com:7080.
Now that we can access the WebAdmin interface without the threat of packet sniffing, we can set a strong, long-term password: go to “WebAdmin Settings > General” and, under the “Users” tab, view the entry for your username to find a form for updating the password.
Creating Further Virtual Hosts
In general, we want to be able to add further web applications to our EC2 instance that funnel through the OLS server in one way or another. For that, we need to be able to set up multiple virtual hosts. Let’s start off with a super basic html one, and then explore the addition of more sophisticated apps.
First, I’ll go to my DNS control panel (AWS Route 53) and add another record pointing the subdomain temp2.rndsmartsolutions.com to my EC2 instance.
Now, in the WebAdmin interface, go to the “Virtual Hosts” section and click “+” to add a new Virtual Host. Set the “Virtual Host Name” field; this can be the text of the subdomain, in this case “temp2”. In the “Virtual Host Root” field, set the path to the directory that you plan to use for your content. You need to create this dir on your EC2 instance; I tend to put them in my home folder, so in this case I am using “/home/ubuntu/temp2”. While you’re there, create a dir called “html” inside it and place a test index.html with some hello-world text. (The exact dir name holding the root content is determined by the “Document Root” field under the “General” tab, which defaults to “$VH_ROOT/html/”; in this case, we have set “$VH_ROOT” to “/home/ubuntu/temp2”.)
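For reference, the corresponding shell commands on the EC2 instance (using the paths assumed above):
mkdir -p /home/ubuntu/temp2/html                                  # $VH_ROOT plus the html Document Root
echo '<h1>Hello world from temp2</h1>' > /home/ubuntu/temp2/html/index.html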
Under the table titled “Security” within the “Basic” tab of the “Virtual Hosts” section, you probably also want to set things as depicted in the image below.

Having created the Virtual Host entry for our new app, go to the “Listeners” section. Since we earlier changed the Default listener to have a “Secure” value of “Yes” and a “Port” value of “443”, we need to create a separate listener for port 80 that does not need to be secure. (At minimum, we need the listener on port 80 in order for our next certbot call to be able to perform its verification steps.) So create such a listener and give it two “Virtual Host Mappings” to our two existing virtual hosts, as depicted below.

We now have two listeners, one for 80 and one for 443, and both virtual hosts can be reached at their respective domains. Going now to temp2.rndsmartsolutions.com should show the hello-world index.html file created earlier.
In order for SSL to work with our new virtual host, we need to expand the domains within our certificate files. So go to your EC2 SSH terminal and re-run certbot certonly as follows:
sudo certbot certonly --cert-name temp.rndsmartsolutions.com --expand -d \
temp.rndsmartsolutions.com,\
temp2.rndsmartsolutions.com
If all goes well, the certificates will get updated and once you restart the server you will be able to access your new virtual host at, in this example, https://temp2.rndsmartsolutions.com.
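To confirm which domains the certificate now covers, certbot can list its managed certificates:
sudo certbot certificates            # lists each cert, its domains, and its expiry date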
(Note: you can add the SSL configurations to the virtual hosts rather than the listeners, but I prefer the latter.)
Finally, we want to be able to tell OLS to redirect all traffic for a given virtual host from http to https. We’ll exemplify this here with the temp2 virtual host. Go to the “Virtual Hosts” section and view the temp2 virtual host. Under the “Rewrite” tab, set the “Enable Rewrite” field to Yes in the “Rewrite Control” table. Then add the following Apache-style rewrite rules to the “Rewrite Rules” field:
rewriteCond %{HTTPS} !on
rewriteCond %{HTTP:X-Forwarded-Proto} !https
rewriteRule ^(.*)$ https://%{SERVER_NAME}%{REQUEST_URI} [R,L]
Restart the OLS server and you can now expect to be redirected to https next time you visit http://temp2.rndsmartsolutions.com.
Certbot Renewal Fix
This section was inserted several months later to fix a problem with the setup described in the last section. What I discovered was that adding these Apache rewrite rules disrupted the way that certbot renews the certificate automatically. Certbot checks that you control the domain by adding temporary documents and then accessing them over http. However, these rewrite rules redirect it to https, which certbot doesn’t like (presumably because it doesn’t want to assume you can use https).
The fix I came up with is to disable the rewrite rules. Then, if I want an application that can only be accessed over https, I create an additional virtual host listening on port 80 for the same domain, and make that application a single index.php with the following redirection logic (taken from here):
<?php
// Redirect any plain-http request to its https equivalent.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === "off") {
    $location = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: ' . $location);
    exit;
}
So now, if you accidentally go to this domain over http, you will be redirected to the https listener, which routes you to the actual application intended for users of this domain. (And if someone accidentally goes to a non-root path on this domain, then OLS will issue a “not found” error.)
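With the rewrite rules disabled, you can check that automatic renewal will now succeed by doing a dry run:
sudo certbot renew --dry-run         # simulates renewal against the staging CA without touching your certs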
Installing WordPress with One-Click Install
I discovered (somewhat after the fact) that you can actually install OLS, PHP, MySQL, WordPress, and LSCache with a convenient script found here. To use it, download and execute with:
curl https://raw.githubusercontent.com/litespeedtech/ols1clk/master/ols1clk.sh -o ols1clk.sh
sudo bash ols1clk.sh -w
… where the -w flag indicates that you also want a WordPress installation. The script will prompt you with all of the items and credentials it plans to create, including a MySQL installation. If you have already installed OLS, it will wipe out your existing admin user and password in favor of the newly declared ones, will likely disrupt your Example settings, and will create confusion by adding virtual hosts and/or listeners that conflict with what you’ve already set up. In short, do NOT use this script if you have already set up OLS; just install WordPress manually. Digital Ocean provides thorough guides for accomplishing this; e.g. see here.
Once you have cleanly run the ols1clk.sh script, a virtual host will be ready for you, so go to the domain/IP address for this instance and you will encounter the WordPress setup wizard. (Obviously, ideally, you first go through the relevant SSL setup steps as laid out already before going through the wizard.)
However, before you do anything in the WP setup wizard, you need to change the MySQL DB configuration, unless you want to host the database on your EC2 instance itself; we’ll outline that process in the next section.
Installing WordPress Manually
Having installed OLS and lsphp manually already, and not wanting to disrupt my setup with the ols1clk.sh script, I installed WP following the instructions here. I deviated from these instructions slightly since they are for Apache and I am using OLS.
One difference in particular is permissions … [TBC]
AWS RDS Mysql Setup
AWS uses some confusing terminology. I normally think of a MySQL DB as being the thing that you create with the SQL command CREATE DATABASE name_of_db;, but when you click on “Create database” in the AWS console, what you are really setting up is the server (or servers) hosting what is, in general, a distributed MySQL service. Contrary to AWS’s confusing nomenclature, I shall refer to these managed servers as the MySQL or RDS “instance”, and the entities created therein via CREATE DATABASE as the “DBs”.
Anyhow, click on “Create database” and go through the wizard to create a MySQL instance. I am using the free-tier instance for now, with MySQL Community 8.0.23.
You need to create a master user and password. You do not need a super strong password since we will also be configuring connectivity such that the instance can only be accessed from AWS resources sharing the same (default) VPC. Since I only intend to connect to this instance from EC2 instances in the same VPC that are themselves very secure (SSH access only via key pairs), we do not need another layer of “full” security to have to note down somewhere. (Obviously, if you want to connect from outside AWS, then you need super strong credentials.) We also choose the default “Subnet group” and “VPC security group”.

In the “Additional Configuration” section we have the choice to create an initial DB, but we will not, since we will do that manually later under a different, non-admin username.

After the console is done creating the instance, it will make available an endpoint of the form instance-name.ABCD.us-east-1.rds.amazonaws.com that we can use to test connecting from our EC2 instance.
First, we want to ensure that we control precisely which EC2 instances can connect. In the RDS console, select the instance you just created, select the “Connectivity & security” tab, and scroll down to the “Security group rules” table. This shows all of the rules for inbound/outbound traffic determined by the settings within the security group assigned to the instance upon creation. You’ll want it to look like the following image:

Click on that security group link to edit the associated inbound/outbound rules. For the sake of security, it’s sufficient to just restrict inbound traffic to the MySQL instance and leave all outbound traffic unrestricted. Here, I’ve limited the traffic to be inbound only from AWS resources using the specific security groups shown in the above image; these are associated with two separate EC2 instances that I set up.
Back in the EC2 instance hosting my OLS server, install the mysql client with sudo apt install mysql-client. Then run:
mysql --host END_POINT -u admin -p
… where END_POINT is given to you in the RDS console for your mysql instance. You’ll be prompted for the admin password you created for the instance, and you can then expect to connect to the instance.
We set up the MySQL instance without an initial database, so let’s now create one explicitly for our WordPress site. We also want to create a username-password combo for specific use with this WordPress instance, with permissions only to read/write that DB. Run these SQL commands:
CREATE DATABASE dbname;
CREATE USER 'newuser'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON dbname.* TO 'newuser'@'%';
FLUSH PRIVILEGES;
… replacing dbname, newuser, and password with values of your choice. (Note: the syntax 'newuser'@'%' means “a user with this username connecting from any host”.) You can now exit this MySQL session and try logging in again as the user you just created, to make sure that user is able to connect to the RDS instance remotely.
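For example, using the placeholders from above (you’ll be prompted for the new user’s password):
mysql --host END_POINT -u newuser -p dbname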
Next, go to the dir in which you downloaded WordPress, and open the file wordpress/wp-config.php. Go down to the section for “MySQL settings” and enter the details for the user we just created, as well as the endpoint for the RDS instance in the DB_HOST slot. It also makes your WP install a bit more secure to change the $table_prefix variable to e.g. 'wp_sth_'.
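For illustration, here is a minimal sketch of that edit from the shell, assuming wp-config.php still contains the stock define() lines (dbname, newuser, password, and the endpoint are the placeholders from above):
cd /path/to/wp_dir
sudo sed -i "s/define( 'DB_NAME', '.*' );/define( 'DB_NAME', 'dbname' );/" wp-config.php
sudo sed -i "s/define( 'DB_USER', '.*' );/define( 'DB_USER', 'newuser' );/" wp-config.php
sudo sed -i "s/define( 'DB_PASSWORD', '.*' );/define( 'DB_PASSWORD', 'password' );/" wp-config.php
sudo sed -i "s/define( 'DB_HOST', '.*' );/define( 'DB_HOST', 'instance-name.ABCD.us-east-1.rds.amazonaws.com' );/" wp-config.php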
WordPress Troubleshooting
If you have difficulty getting your WordPress site to work, try adding the line:
define( 'WP_DEBUG', true );
… to wp-config.php to get an error stack. In my case, I had trouble getting PHP to recognize mysqli_connect; I got it working by running sudo apt install lsphp74-mysql and restarting the OLS server. I also had some trouble getting PHP to recognize the php-curl module; I eventually got it working after running all sorts of installs (sudo apt install php-curl, etc.) and restarts, though I am not sure what the eventual solution was exactly (all I know is that I did not need to edit any OLS config files directly). After playing with OLS for several days now, I am tempted to say that you never want to edit an OLS config file directly; there is always a way to do things through the interface, or your WP/.htaccess files.
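One quick way to confirm that PHP now sees these extensions is to reuse the temp.php (phpinfo) page created earlier (adjust the host to wherever temp.php lives):
curl -s http://your_ec2_ip_address/temp.php | grep -oiE 'mysqli|curl' | sort -u    # expect both names to appear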
Setting Up WordPress
Once you have your WP interface working (i.e. you can log in through a browser), you need to perform some essential setup.
First, go to Settings > Permalinks and select “Post name”. Make sure the REST API is working by going to /wp-json/wp/v2/. If it is not working, try /index.php/wp-json/wp/v2/. If that works, then you need to get OLS to perform rewrites that skip the index.php part. OLS does not read the .htaccess files that WP supplies by default, so to get OLS to recognize those files, go to your OLS admin, go to the “Server Configuration” section, and in the “Rewrite Control” table set the “Auto Load from .htaccess” field to “Yes” and restart. If your REST API is still not working then, well, you’ve got some investigating to do.
(I think it’s the case that because we are loading .htaccess files at the server level, OLS will read these files into memory upon first encounter, so subsequent use will be cached; if you set this setting at the virtual host level then OLS will consult the file system on each request.)
I have heard it said that it’s a good idea to prevent users from accessing the wp-config.php file. I think the idea here is that, by default, all it takes is an accidental deletion of the first line <?php for the file to be treated as plain text and, therefore, for its contents (including your DB credentials) to be printed to screen. The usual precaution against this on an Apache server is to add the following to the root .htaccess file:
<Files wp-config.php>
    <IfModule mod_authz_core.c>
        Require all denied
    </IfModule>
    <IfModule !mod_authz_core.c>
        Order deny,allow
        Deny from all
    </IfModule>
</Files>
However, this will not work with OLS because it “only supports .htaccess for rewrite rules, and not for directives”. We therefore need to add the following instead:
RewriteEngine on
RewriteRule ^wp-config\.php$ - [F,L]
… and then confirm that you get a 403 Forbidden error upon visiting /wp-config.php.
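For example (your_wp_domain is a placeholder for wherever your WP site lives):
curl -I https://your_wp_domain/wp-config.php     # expect a 403 Forbidden status line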
Next, we want to install some vital plugins. I try to keep these plugins to as few as possible since I am wary of conflicts and bloat; our goal is to keep WP operating in a super-lean manner, but we still need some essentials.
First, we want LSCache, since this is the big motivation for using OLS. The plugin claims to be an all-round site accelerator, caching everything at all levels. When you install it, it comes with default settings that put all site-acceleration features in place.
Next, we want WordFence (WF) to provide site protection against attacks, as well as to provide free Two-Factor Authentication (2FA) logins. Install WF and enable auto-updates. WF will go into learning mode for a week or so.
In order to set up 2FA you need a smartphone with a client app. I will describe how to set up 2FA using the free “Duo Mobile” app on an iPhone. In the WF menu, go to “Login Security” and use your iPhone’s camera to scan the code; it will give you the option to open it within Duo Mobile. Then, back in the WF interface, input the latest code from Duo Mobile for your WP site to activate it. Also download the recovery codes and keep them safe. Under the WF “Settings” tab for “Login Security”, I also make 2FA required for all user types (though I only plan to use the Administrator and maybe Editor roles for this headless CMS). You can also require reCAPTCHA via WF, but this is overkill for my purposes.
WF will also want you to enable “Extended Protection” mode. If you agree to this, then it will prompt you to download your old version of the site’s .htaccess file (presumably in case WF screws it up if/when you uninstall it later). I am a bit skeptical about this feature since it sounds like it would incur quite a performance hit. However, since the overall architecture I am building here aims to put all of the serious site load onto AWS Cloudfront — with WP just functioning as the headless CMS for the convenience of the client — I have opted for now to add this extra layer of security.
For this feature to be enabled on LiteSpeed, you need to follow these instructions: go to the “Virtual Hosts” section and, in the entry for your WP site, go to the “General” tab and add the following to the “php.ini Override” field:
php_value auto_prepend_file /path/to/wp/wordfence-waf.php
You may also need to go to LSCache and purge all caches.
Offloading Media to AWS S3
We want to offload the serving of our media files to AWS S3 with Cloudfront. This will also ensure that we can scale to any amount of media storage. We also want to avoid having duplicates on our EC2 disk.
At first I assumed the best way to go about this would be through a WP plugin, and I tried out W3 Total Cache. However, these free plugins always seem to have a downside. In this case, W3 Total Cache would not automatically delete the file on disk after uploading to S3.
I therefore decided to pursue a different strategy using s3fs. This is an open-source, apt-installable package that lets you mount an S3 bucket onto your disk. Writing to such a mounted volume therefore has the effect of uploading directly to S3, leaving no footprint on your EC2 storage. You also don’t need any WP plugins.
To set up S3fs on Ubuntu 20.04, first install it along with the AWS CLI:
sudo apt install s3fs awscli
In the AWS console (or via terraform if you prefer), create a new S3 bucket, Cloudfront distribution, and ACM SSL certificate. You can see this post for guidance on those steps, but note that, in this case, we are going to create a user with only the permissions needed to edit this particular S3 bucket.
To create that user, go to the IAM interface in the AWS Console and click “Add user”. Give the user “Programmatic access”, then, under the “Set permissions” step, select “Attach existing policies directly” and then click on “Create policy”. Paste the following into the “JSON” tab:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:ListBucket"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": "cloudfront:ListDistributions",
            "Resource": "*"
        }
    ]
}
[Note to self: will need to update this policy when it comes time to enable user to invalidate Cloudfront distributions]
Skip tags and save the policy with a name like “my-wps3-editor-policy”. Now, back in the user-creation wizard, search for and select the policy you just created. Skip tags and create the user. You will then be able to access the access key ID and secret access key for programmatic use of this user.
Back in your EC2 terminal, run the following to register this user as the one who will mount the S3 bucket (replacing the keys):
touch ${HOME}/.passwd-s3fs
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
We will be mounting the S3 bucket to wp-content/uploads within your WP dir. Before mounting, we need to enable other users to read/write the mounted dir so that WP can properly sync the files we upload. To enable that, edit the file /etc/fuse.conf and simply uncomment the line user_allow_other.
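A one-liner that should do this on Ubuntu 20.04 (it just strips the leading # from the stock fuse.conf):
sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf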
Now we can mount the dir. BUT, before you do that, check whether you already have content in the uploads dir. If you do, move that content to /tmp, make sure the uploads dir is empty, and then run the following:
s3fs BUCKET_NAME /path/to/wp_dir/wp-content/uploads -o allow_other -o passwd_file=${HOME}/.passwd-s3fs
Now you can copy back any contents that you moved, and/or upload something new, and expect it to appear within the corresponding S3 bucket.
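For example, assuming you had stashed prior uploads in a hypothetical /tmp/uploads-backup dir (the aws command requires the credentials set up via aws configure, as noted below):
cp -r /tmp/uploads-backup/. /path/to/wp_dir/wp-content/uploads/
aws s3 ls s3://BUCKET_NAME --recursive | head    # the copied files should be listed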
(Optionally, you can also run `aws configure`, and enter the credentials for this user, if you want to interact with the S3 bucket from the command line.)
Finally, we want this dir to be re-mounted to S3 upon EC2 instance reboots, so add this to the /etc/fstab file:
BUCKET_NAME /path/to/wp_dir/wp-content/uploads fuse.s3fs allow_other,passwd_file=/path/to/home/.passwd-s3fs 0 0
To test that it works, unmount uploads and then run sudo mount -a, as sketched below. If it looks like it works, you can then try actually rebooting, but be careful, since a messed-up fstab file can brick your OS.
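Concretely (using the same placeholder path as above):
sudo umount /path/to/wp_dir/wp-content/uploads   # detach the manual mount
sudo mount -a                                    # re-mount everything declared in fstab
mountpoint /path/to/wp_dir/wp-content/uploads    # expect: "... is a mountpoint"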
Here are some final notes on using S3/S3fs:
- You can set up a local image cache to speed up the serving of files from WP, but performance doesn’t matter here since this will only be used by the CMS admin.
- s3fs does not sync the bucket between different clients connecting to it; so if you want to create a distributed WP-site then you might want to consider e.g. yas3fs.
- Depending on the amount of content on your site, you might benefit from creating a lifecycle rule on your S3 bucket to move objects from “Standard” storage to “Standard-Infrequent Access” (Standard-IA) storage; see the sketch after this list. However, Standard-IA will charge you for each file smaller than 128KB as though it were 128KB; since WP tends to make multiple copies of images at different sizes, which are often smaller than 128KB, this might offset your savings.
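If you do want such a rule, here is a sketch using the AWS CLI (the rule ID and the 30-day threshold are assumptions; adjust to your needs):
# Transition all objects to Standard-IA 30 days after creation.
aws s3api put-bucket-lifecycle-configuration \
  --bucket YOUR_BUCKET_NAME \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "uploads-to-standard-ia",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}]
    }]
  }'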
URL Rewrites
The last thing to consider is getting the end-user interface to serve up content from the Cloudfront URL instead of the WP URL. If you use a WP Plugin to sync with S3, then it will do the rewrites for you.
In my case, though, I am going to avoid having to work with PHP and do all the rewrites within my Next.js frontend app. See the next part for setting that up.
Next Part
The next part in the series is on practicing backup restoration.