Tag: hardware

  • Disk Partitions, Tables, Labels and File Systems

    Intro

    I just got a new disk, installed it physically into my Ubuntu Server, and realized that I’ve forgotten everything to do with disk formatting, partitioning, etc. Here I remind myself as to what exactly all of these concepts are and how to work with them.

    Identifying Disks

    When you’ve got a disk installed for the first time, run lsblk -o name,fstype,size,mountpoint,partuuid. If your disk is in a blank state, then you will probably only get this sort of info:

    ├─sdb1  vfat       512M /boot/efi     79033c3d-...
    └─sdb2  ext4       931G /             5c6b1ad5-...
    ...
    nvme0n1            3.6T

    In this case, I can see my disk has been detected and given the handle /dev/nvme0n1, and that it has a size of 3.6T, but that is it.

    Partition Tables

    Every disk needs to have some space dedicated to a partition table, that is, a space that enables interfacing systems to determine how the disk is partitioned. There are different standards (i.e. conventions) for how such tables are to be organized and read.

    A widely-supported type of partition table is the “GUID Partition Table” (GPT), where GUID stands for “globally unique identifier”.

    To see what partition table is used by a given disk, you can use ‘print’ within the wizard launched by parted in the following manner:

    ❯ sudo parted /dev/nvme0n1
    GNU Parted 3.4
    Using /dev/nvme0n1
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) print
    Error: /dev/nvme0n1: unrecognised disk label
    Model: CT4000P3PSSD8 (nvme)
    Disk /dev/nvme0n1: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: unknown
    Disk Flags:

    In this case, we are told that we have an “unrecognised disk label”, and that the Partition Table is “unknown”. These two issues are one and the same: the ‘disk label’ is just another name for the partition table, and we don’t have one.

    Typing “help” will show the actions that we can take; in particular, it tells us we can create our partition table using either mklabel TYPE or mktable TYPE. Type gpt is ‘the’ solid and universally supported type, so I just went with that: mklabel gpt. Now when I type print I do not get the earlier error messages.
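
    For reference, that step and its effect on print look something like this within the same parted session (the model and size lines will match whatever your own print reported):

    (parted) mklabel gpt
    (parted) print
    Model: CT4000P3PSSD8 (nvme)
    Disk /dev/nvme0n1: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags: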

    Note: if you want to have a table of type ‘Master Boot Record’ (MBR), then you enter mklabel msdos. I have read that “GPT is a more modern standard, while MBR is more widely supported among older operating systems. For a typical cloud server, GPT is a better option.” Also, if you erase a disk on a Mac with Disk Utility, it gives you the option of an ‘Apple Partition Map’ (APM) for its ‘Scheme’ (as Mac calls the partition table). I did some light googling on this Scheme, and concluded that it is outdated — used for PowerPC Macs — and that the most recent Apple disks use GPT. In short, whenever you have to decide what Scheme to use, use GUID.

    Creating Partitions

    Now that we have a partition table, we can insert information into it to establish partitions. Still using parted, we can run mkpart as a wizard of sorts. This will prompt you for information about the partition you want to create.

    The first thing it asked me for was a “Partition name? []”. This was confusing, because the arguments for mkpart given by ‘help’ are: mkpart PART-TYPE [FS-TYPE] START END.

    So the argument is calling it a ‘type’, and the wizard is asking for a ‘name’. What’s going on? The confusion stems from different conventions used by different partition-table types. For partition tables of type msdos or dvh, you specify a ‘type’ from one of three options: ‘primary’, ‘extended’ or ‘logical’. The wizard for parted mkpart is geared towards setting up partitions governed by msdos partition tables, which is why its documentation calls this argument a ‘type’. However, these categories do not apply to GUID partition tables. Instead, GPTs have a category that msdos does not have — a ‘name’ — which you have to specify (though it can just be an empty string). Hence, when the wizard detects that you are filling in a GUID table, it prompts you for a ‘name’ instead of a ‘type’.

    What is the GPT ‘name’ used for? It can be used to boot the drive from /etc/fstab (see below). It does not determine the name of the handle in /dev/* for the partition (which, from what I can tell, is determined solely by the OS).

    Next, it asks for a “File system type? [ext2]”. The default is ext2. Note that parted does not go ahead and create a filesystem on this partition; this is just a ‘hint’ being recorded in the table as to the intended use of the partition, and certain programs might use this hint when e.g. looking to auto-mount the partition. Here I chose ‘ext4’ — the standard file system for Linux.

    Next it asks for the ‘Start’ and then ‘End’ of the partition. Because I am only making one partition, and I want it to take up all the available space, I could just put 0% and 100% respectively. If you have to work around other partitions, then you need to note their start and end positions first using parted print, etc.
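
    Putting all of that together, my single-partition wizard session looked roughly like this (‘data’ is just the name I happened to pick; it is what PARTLABEL= will refer to later):

    (parted) mkpart
    Partition name?  []? data
    File system type?  [ext2]? ext4
    Start? 0%
    End? 100%
    (parted) print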

    To summarize so far, every disk needs to have its first few blocks set aside for a partition table. Partition tables follow strict conventions that let any interfacing system determine what sort of partition table it is dealing with (I’m not sure how exactly, but I’d guess it’s something like the very first bytes identifying the type from a set of options standardized by the comp sci community). The partition table then in turn records where each partition begins, ends, its type, its name, etc. The only role of the program parted is to set/adjust/erase the contents of the partition table. (As far as I can tell, it does not read/write to any part of the disk outside of the partition table.)

    Note: if you delete a partition with parted, I expect that all it does is remove the corresponding entries from the partition table, without erasing the information within the partition itself. I therefore expect — but can’t be bothered to confirm — that if you were to then recreate a partition with the exact same bounds, you would be able to re-access the file system and its contents.

    File Systems

    Now that the partition table has been created, our disk has a well-defined range of addresses to which we can read/write data. In order to store data in the partition in an organized and efficient manner, we need another set of conventions by which programs acting on the data in the partition can operate. Such conventions are called the ‘file system’. Just as the partition table sits at the very start of the disk, modern ‘hierarchical’ file systems (i.e. ones that allow for folders within folders) work by reserving the very first section of the partition as a space that describes the contents and properties of the root directory, which in turn points to the location of data files and other directory files within it. Those directory files in turn point to the location of files and directories within them, and so on. For an excellent, more-detailed overview of these concepts, see this video.

    Now, having quit parted, we can create a file system within our partition with:

    sudo mkfs.ext4 /dev/nvme0n1p1

    In practice, there can only be one file system per partition, so you don’t need to think about how much space the file system takes up — it is designed to work within the boundaries of the partition it finds itself in.
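
    At this point, re-running the lsblk command from the start of this post should show the new file system sitting on the partition, something like the following sketch (your PARTUUID will obviously differ):

    ❯ lsblk -o name,fstype,size,mountpoint,partuuid /dev/nvme0n1
    NAME          FSTYPE  SIZE MOUNTPOINT PARTUUID
    nvme0n1               3.6T
    └─nvme0n1p1   ext4    3.6T            1b2c3d4e-...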

    Mounting

    To mount the partition temporarily, you use sudo mount /dev/nvme0n1p1 /mnt/temp where the dir /mnt/temp already exists. To have the partition mounted automatically on boot, add the following line to /etc/fstab:

    PARTLABEL=NameYouChose /mnt/data ext4 defaults 0 2

    …where:

    • PARTLABEL=NameYouChose is the criterion by which the OS will select from all detected partitions
    • /mnt/data is the path where the partition is being mounted
    • ext4 signals to the OS what sort of file system to expect to find on the partition
    • defaults means that this partition should be mounted with the default options, such as read-write support
    • 0 2 signifies two things: the 0 excludes this filesystem from legacy dump backups, and the 2 means the filesystem should be checked for errors at boot, but as a 2nd priority, after your root volume

    To put this line into effect without rebooting, run sudo mount -a.
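
    For example, assuming the mount point used in the fstab line above, the whole sequence to check that it works is roughly:

    sudo mkdir -p /mnt/data   # the mount point must exist before mounting
    sudo mount -a             # mount everything in /etc/fstab that isn't already mounted
    df -h /mnt/data           # confirm the partition is mounted with the expected size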

    Note: just to make things a little more confusing, you can also use mkfs.ext4 with the -L flag to set yet another kind of ‘label’ — this one stored within the file system itself rather than in the partition table. If you use this label, then you can mount the partition in the /etc/fstab file using LABEL=LabelYouChose (instead of the PARTLABEL=NameYouChose set with parted above).
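
    For example, a quick sketch of setting such a label and then inspecting both kinds of label (‘mydata’ is just a placeholder):

    sudo mkfs.ext4 -L mydata /dev/nvme0n1p1   # -L sets the file-system label
    sudo blkid /dev/nvme0n1p1                 # shows LABEL="mydata" alongside PARTLABEL, UUID, etc.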

    Volumes, Containers, etc.

    As a final note, sometimes the term ‘Volume’ is used in the context of disks and partitions. The important thing to note is that the term is used differently on different platforms. In Appleland, APFS has ‘Containers’, ‘Volumes’ and ‘Partitions’ as distinct constructs. From what I can tell, a Volume in Appleland is synonymous with a Logical Volume in other lands. An example of a Logical Volume is a RAID1 setup where you have two disks and your data is mirrored across both of them, but you interact with that data as though it is in one place (i.e. the fact that the data has been spread across two disks has been abstracted away and hidden from you). In general, an LV can be spread across multiple physical disks, but is presented to you as though you were dealing with one old-school physical disk.

    It’s not clear to me at this time what a Mac ‘container’ really is.

  • Raspberry Pi Cluster Part I: Goals, Hardware, Choosing OS

    A while back I built a Raspberry Pi cluster with 1 x RPi4 and 2 x RPi3b devices. This was shortly after the release of the RPi4 and, due to the many fixes that it required, I didn’t get far beyond hooking them up through a network switch.

    Now that RPi4 has had some time to mature, I decided to start again from scratch and to document my journey in some detail.

    Goals

    Being able to get computers to coordinate together over a network to achieve various tasks is a valuable skill set that has been made affordable to acquire thanks to the RPi Foundation.

    My goals are to build a cluster in order to figure out and/or practice the following technical competencies:

    • Hardware Skills: acquiring, organizing, and monitoring the cluster hardware
    • Networking Skills: setting up a network switch, DHCP server, DNS server, network-mounting drives, etc.
    • Dev Ops: installing, updating, managing the software, and backing everything up in a scalable manner
    • Web Server Skills: installing apache and/or nginx, with load balancing across the cluster nodes; also, being able to distribute python and node processes over the cluster nodes
    • Distributed-Computing Skills: e.g. being able to distribute CPU-intensive tasks across the nodes in the cluster
    • Database Skills: being able to create shards and/or replica nodes for Mysql, Postgres, and Mongo
    • Kubernetes Skills: implement a cluster across my nodes

    Those are the goals; I hope to make this a multi-part series with a lot of documentation that will help others learn from my research.

    Hardware

    RPi Devices

    This is a 4-node cluster with the following nodes:

    • 1 x RPi4b (8GB RAM)
    • 3 x RPi3b

    I’ll drop the ‘b’s from hereon.

    The RPi4 will serve as the master/entry node. If you’re building from scratch then you may well want to go with 4 x RPi4. I chose to use 3 x RPi3 because I already had three from previous projects, and I liked the thought of having less power-hungry devices running 24×7. (Since their role is entirely one of cluster pedagogy/experimentation, it doesn’t bother me that their IO speed is less than that of the RPi4. Also, the RPi4 really needs some sort of active cooling solution, while the RPi3b arguably does not, so my cluster will only have one fan running 24/7 instead of four.)

    Equipment Organization

    I know from my previous attempt that it’s really hard to keep your hardware neat, tidy and portable. It is important to me to be able to transport the cluster with minimal disassembly, so I sought to house everything on a single tray, with a single power cable to operate it. That means that my cluster’s primary connection to the Internet would be over wifi but, importantly, I insisted that the nodes communicate with each other over ethernet through a switch. The network schematic therefore looks something like this:

    RPi Cluster Network Schematic

    The RPi4 will thus need to act as a router so that the other nodes can access the internet. Since each node has built-in wifi, I’m also going to establish direct links between each node and my home wifi router, but these shall only be used for initial setup and debugging purposes (if/when the network switch fails).

    To keep the RPi nodes arranged neatly, I got a cluster case for $20-$30. Unfortunately, the RPi4 has a different physical layout which spoils the symmetry of the build, but it also makes it easy to identify it. I also invested in a strip plug with USB-power connectors, so that I would only need a single plug to connect the cluster to the outside world. I was keen to power the RPi3s through the USB connectors on the strip plug in order to avoid having 5 power supplies,​*​ which gets bulky and ugly IMO.

    Finally, I had to decide what sort of storage drives I would use on my RPi3s. For the RPi4, there was no question that I would need an external SSD drive to make the most of its performance.

    BEWARE when purchasing an SSD for your RPi4! Not all drives work on the RPi4 and I lost a ton of time/money with Sabrent. This time round I went with this 1TB drive made by Netac. So far, so good. If $130 is too pricey then just get a 120/240GB version in the $20-40 range. (I only got 1TB because I have plans to use my cluster to do some serious picture-file backups and serving.)

    For the RPi3s, which I expected to use a lot less in general, there is not nearly as much to be gained from an external SSD. Also, I wanted to limit the cost of the setup as well as the number of cables floating around the cluster, so I decided to start off with SD cards for the RPi3s, though I am wary of this decision (and deem it likely that I will regret it as soon as one of the cards fails). I’m using 3 x 64GB Samsung Evo Plus (U3 speed). I’ll be sure to benchmark their performance once set up (a quick-and-dirty way to do so is sketched below).
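
    For reference, assuming you just want rough numbers, something like this run on the node itself would do (the file name and sizes are arbitrary):

    dd if=/dev/zero of=~/ddtest bs=1M count=512 oflag=direct   # rough write speed, bypassing the page cache
    dd if=~/ddtest of=/dev/null bs=1M iflag=direct             # rough read speed
    rm ~/ddtest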

    I also got a 2TB HDD drive to provide the RPi3s with some more durable read-write space, and on which I’ll be able to backup everything on the cluster.

    I got a simple network switch, some short micro-USB cables, and some short flexible ethernet cables. Be careful with your ethernet cables; you want them short to keep your cluster tidy, but make sure they are not too rigid as a result. In my previous attempt I got cables that were short but so rigid that they created a lot of torque between the switch and node connectors, and made the whole cluster look/feel contorted.

    I also got a high quality power supply for my RPi4 since it, being the primary node that will undergo the most work, and having two external storage drives to power, needs a reliable voltage.

    Finally, I also got a bunch of USB-A and USB-C Volt/Amp-Meters for a few bucks from China, because I like to know the state of the power going through the nodes.

    So, in total, I calculate that the equipment will have cost ~$500. It’s added up, but that’s not bad for a computing cluster.

    4-Node RPi Cluster Hardware

    And, yes, I need a tray upgrade.

    Choosing an OS

    When it came to choosing an OS, the only two I considered viable candidates were Raspbian OS (64 bit beta), or Ubuntu 20.04 LTS server (64 bit).

    I went with Ubuntu in the end because my project is primarily pedagogic in nature and so, by choosing Ubuntu, I figured I’d be deepening my knowledge of a “real world” OS. I also just generally like Ubuntu, and it has long been my OS of choice on cloud servers.

    For the RPi3s, I used the Raspberry Pi Imager application to select the Ubuntu server 20.04 and burned that image onto each SD card.

    Raspberry Pi Imager

    For the RPi4, though, I wanted to boot from an external SSD drive, and this isn’t trivial yet with the official Ubuntu image. I therefore opted to use an image posted here that someone had built using the official image but with a few tweaks to enable booting from an external USB device. (It requires you to first update the RPi4’s EEPROM, which I had already done; a rough sketch of that step is below.)
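
    For reference, on an RPi4 booted from Raspberry Pi OS, updating the EEPROM is roughly the following (a sketch from memory; check the official Raspberry Pi documentation if unsure):

    sudo apt update && sudo apt install rpi-eeprom   # bootloader update tooling
    sudo rpi-eeprom-update                           # report whether an update is available
    sudo rpi-eeprom-update -a                        # stage the latest stable bootloader
    sudo reboot                                      # the update is applied during the reboot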

    Initial Setup

    Once the cluster hardware had been assembled and wired up, I powered everything on and then had to go through each fresh install of ubuntu and perform the following:

    • Login with ‘ubuntu’, ‘ubuntu’ credentials
    • Connect to wifi following this advice; note: you need to reboot after calling sudo netplan apply before it will work! (My netplan conf is included in the next part of this series.)
    • Update with sudo apt update; sudo apt upgrade -y
    • Set the timezone with sudo timedatectl set-timezone America/New_York; if you want to use a different timezone then list the ones available with timedatectl list-timezones.
    • Add a new user ‘dwd’ (sudo adduser dwd) and assign him the groups belonging to the original ubuntu user (sudo usermod -aG $(groups | sed "s/ /,/g") dwd)
    • Switch to dwd and disable ubuntu (sudo passwd -l ubuntu)
    • Install myconfig and use it to further install super versions of vim, tmux, etc. See this post for more details.
    • Install oh-my-zsh, powerlevel10k, and zsh-autosuggestions.
    • Install iTerm2 shell integration
    • Install nvm
    • Create a ~/.ssh/authorized_keys file enabling public-key ssh-ing
    • Change the value of /etc/hostname in order to call our nodes rpi0, rpi1, rpi2, rpi3 (the last two steps are sketched just after this list)
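
    A sketch of those last two steps on the first node (the public key is a placeholder for your own; repeat on the other nodes with rpi1, rpi2, rpi3):

    sudo hostnamectl set-hostname rpi0   # or edit /etc/hostname directly and reboot
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    echo "ssh-ed25519 AAAA... you@laptop" >> ~/.ssh/authorized_keys   # placeholder public key
    chmod 600 ~/.ssh/authorized_keys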

    This workflow allowed me to get my four nodes into a productive state in a reasonably short amount of time. I also set up an iTerm2 profile so that my cluster nodes have a groovy Raspberry Pi background, making it quick and easy to distinguish where I am.

    RPi4 node at the ready with tmux, vim, oh-my-zsh, powerlevel10k

    Finally, we also want to allocate memory “swap space” on any device not using an SD card. (Swap space is space you allocate on your storage disk that gets used if you run out of RAM. Most Linux distros nowadays will not allocate swap space on your boot drive by default, so it has to be done manually.)

    Since only the RPi4 has an external drive, that’s all we’ll set up for now. (Later, once we have a single network-mounted HDD available to each node, we’ll allocate swap space there.) Use the following to add 8GB of swap:​†​

    sudo fallocate -l 8G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

    Finally, add the following line to /etc/fstab to make this change permanent:

    /swapfile swap swap defaults 0 0
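
    To confirm the swap is active after the commands above:

    swapon --show   # should list /swapfile at 8G
    free -h         # the Swap row should now report ~8Gi total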

    Summary

    That’s it for part I. In the next part, we’re going to set up our ethernet connections between the RPi nodes using our network switch.


    1. ​*​
      4 x RPi + 1 x Network Switch
    2. ​†​
      According to lore it’s best practice to only add ~1/2 your RAM size as swap. However, I’ve never encountered issues by going up to x2.
  • Installing Windows 10 when all you have is a Mac (Legacy/Non-UEFI Version)

    Important! This advice has been deprecated and is only applicable if you’re trying to install Windows 10 on a machine without UEFI support. If your machine does have UEFI support, then start here.

    Background

    Skip this section if you just want to jump to the “how I did it” content.

    I haven’t used Windows in a serious way for about 15 years now. I enthusiastically converted to Apple in ~2005 and never expected to want to own a PC again. From about 2005 to 2015, I held Microsoft in contempt and couldn’t imagine why anyone would use their products. IMO, Steve Jobs raised the bar incredibly high as to what a modern personal-computer company can be.

    It turns out, though, that Microsoft eventually took notice (circa Ballmer’s exit?) and began raising its own bar. It started getting my attention about five years ago when it began developing some fantastic open source products (TypeScript and Visual Studio Code in particular).

    I’d also started hearing rumors about Linux being integrated into Windows 10. Curiosity to check it out, and to be able to better interact with students and interns who had Windows machines, led me to finally commit to getting a machine on which I could give Windows a fresh shot.

    I’d also been wanting to try out some photogrammetry software for a while that requires an Nvidia GPU. That, and other motivations, led me to seek out a server/workstation onto which I could install Windows 10. I found a great deal on Newegg.com for a Dell Precision T5810. For $290 (+ $15 tax) I got 64 GB DDR4 RAM, an E5-2620 v3 2.4GHz 6-Core (x2 hyper-thread) Intel Xeon CPU, and a simple GPU (Quadro NVS 295). The only catch was that it had no drives and, therefore, no starting OS.

    I had some drives sitting around, and figured it would be easy to just install Windows 10 myself. However, I was surprised to learn upon googling “Install Windows from USB” that almost all solutions assumed you already had access to a Windows 10 machine. Getting round this chicken-and-egg problem with just a Mac took some research that I hope others can benefit from.

    Installing Windows 10 without Windows 10

    Requirements

    • Mac with recent OS and Homebrew installed
    • At least ~25GB free hard drive space
    • An external SSD drive (see below; a recent USB 3.0 flash drive is supposed to work but didn’t seem to in my case)

    Basic Approach

    After some digging, I concluded that there are two basic ways to get a Windows 10 installer onto a USB drive starting with just a Mac:

    1. Run a modified “Boot Camp Assistant” from your Mac’s /Applications/Utilities/Boot Camp Assistant.
    2. Emulate Windows 10 on your Mac.

    If you want to go with the first approach then you can consult this gist, which seems to be thoroughly researched. However, it had an aura of hackiness about it, so I opted to go with an emulation solution.

    To be clear, I really dislike emulating whole operating systems, so the goal here is to create-use-delete a VM Windows 10 instance as quickly and simply as possible so that we can get Windows 10 properly installed on a separate dedicated machine.

    Install Windows 10 via Virtualbox

    First, you’ll need to obtain a copy of Windows 10 as an ISO file. You can get that direct from Microsoft. I went for Windows 10, 64-bit, with English language. The downloaded ISO file is ~5GB. Save this to your Downloads folder.

    While that’s downloading, install virtualbox and its extension pack on your Mac with:

    brew cask install virtualbox
    brew cask install virtualbox-extension-pack

    Open virtualbox from /Applications and click on the blue-spiky-ball “New” icon to create a new virtual machine.

    Virtualbox Main UI

    You’ll be guided through a few steps. Give the VM a name, choose Windows (64-bit) as your OS, and decide how much RAM you want the VM to be able to use. (I wasn’t sure what to pick here since I didn’t know how RAM-hungry Windows 10 is or the mechanics of virtualbox in these regards. My machine has 16GB, so I figured I’d allot ~6GB. I monitored virtualbox during the heavy parts and it pretty much maxed out all my Mac’s available memory at times, but I’m still not sure how this setting is handled in that regard.)

    When you’ve chosen your memory size, select “Create a virtual hard disk” and then click “Create”.

    Virtualbox wizard to allot RAM.

    You’ll then be asked how/where you want the hard disk for the VM to be set up. The first time I did this, I gave a ballpark guess of ~20GB, but later found that this wasn’t enough. Windows 10 actually requires a minimum of 22,525MB, hence why I recommended earlier that you allot at least 25GB. However, if you can spare it, then go for even more, like 40GB — this is the amount I chose (on my second attempt!), and it worked fine for me.

    Leave the other default settings and click “Create”.

    Virtualbox wizard to allot hard-disk space and file format.

    Once the VM is created, we need to enable the VM to access the USB device.

    Now, as I mentioned earlier, a USB flash drive is supposed to work, but I tried 3 different flash drives and found that they would all fail late in the process (and get extremely hot). Luckily for me, I had an external SSD drive lying around, so I tried that and it worked no problem. I can only speculate that writing to a USB drive passed through the Mac OS to/from the Windows 10 instance is too intense for relatively slow flash drives, and thus requires an SSD. For reference, the SSD drive I used can be found here on Amazon. (If you want to give a fast USB thumb drive a go then good luck to you, but I’ll assume from hereon that you’re using an SSD.)

    Anyhow, once you have your SSD inserted into your Mac, select the Windows 10 item in the left column of the main virtualbox view so that its various settings and properties can be viewed on the right of that view. Go down to the USB section and click on the word “USB”.

    The main view of the virtualbox interface.

    This will open a menu enabling you to add the SSD drive that we will want to make available to our virtual Windows 10 instance.

    Virtualbox menu to pass USB device through to virtual Windows 10 instance.

    You must also make sure to select “USB 3.0 (xHCI) Controller”, or your Windows 10 instance won’t detect the USB. (Note: these USB 2.0 and 3.0 options are only available because you installed the virtualbox-extension-pack earlier.)

    One last thing before we try to start our Windows 10 instance: go to virtualbox preferences, and select the “Display” tab. Change the “Maximum Guest Screen Size” to “none” and then click “OK”. This will prevent the simulated screen from showing up in a smallish box that will make it rather tedious to interact with the Windows 10 simulation.

    Virtualbox preferences; enable the view of instance to scale.

    Now we’re ready to start our Windows 10 instance, so click “Start” at the main interface. The first time you run the Windows 10 instance, virtualbox will prompt you to select a “disk” with which to boot the new virtual machine. You need to go through the drop-down selector to add the ~5GB Windows 10 ISO file that you saved to your Downloads folder earlier.


    Virtualbox wizard to select Windows 10 ISO file.

    Once the disk has been selected from your Downloads folder, click “Start” to launch the VM simulating a first installation of Windows 10. Along the way, you’ll have to answer standard setup questions, accept terms, etc. Keep things simple: don’t sign into a Microsoft account (go with “Offline Account” and “Limited Experience”). When prompted, do not try to use an activation key (just press “I don’t have a key”). When it came to selecting an edition, I went with Windows 10 Professional (though things might have been more streamlined if I’d gone with the simpler Home Edition). On the screen “Choose privacy settings for your device”, I switched basically every service off. Decline all of the personalization, Cortana-spy-on-you functionality, etc.

    Switch off all of the invasive-data options in the Windows 10 setup.

    This process took me about 10 minutes to get through.

    An important step is to choose “Custom: Install Windows Only”, since we are not upgrading from a previous installation of Windows.

    Eventually, you’ll end up with a working Windows 10 interface.

    Creating a Windows 10 installer on a USB device from within the virtual Windows 10 machine

    Once in a working Windows 10 instance, we need to install the program that will turn our USB device into a portable Windows 10 installer. Open the Edge browser (the icon on the Desktop is easiest), go to:

    www.microsoft.com/en-us/software-download/windows10

    … and click on “Download tool now” under “Create Windows 10 installation media”.

    Opt to save this download, and then double click on that exe in the Downloads folder. This will launch the “Windows 10 Setup” wizard. Select “Create installation media (USB flash drive, DVD, ISO file) for another PC” when prompted.

    You’ll then need to select/confirm your Windows version, language and target architecture.

    Finally, you’ll be asked to choose your USB device. If you set up the pass-through options correctly earlier, then it will show up as the sole option.

    Click next to start the installation onto the USB disk. This will take an external SSD drive about 10-15 minutes to complete. As I mentioned, I also tried three USB flash drives, and they each started getting sluggish after about 10 minutes (slowing to a halt at ~50%), and then reported an obscure error after about 20 minutes (having got very hot!). If you don’t have an external SSD handy, then I’ve heard good things about the Sandisk Extreme Pro.

    Once the process completes, you can power down the VM instance, right click on the item in the main virtualbox interface, and remove it. It will then give you an option to remove all related files. This will free up your disk space.

    Using the SSD as a boot drive on a Dell T5810

    It’s beyond the scope of this article to go over the general details of installing an OS from such a USB device, but I’ll quickly mention the smooth ride I had from thereon with my refurbished Dell T5810.

    In my case, I had to add a main drive to the T5810 (a 1TB SSD Samsung), and then I powered on the machine with the external SSD plugged into one of the USB 3.0 slots. The first boot took a while (~1 min as I recall) to show anything, but then the Dell logo showed up and I pressed F2 in order to enter System Setup. There I was able to select Legacy Boot and ensure that the external SSD would be used early on in a legacy-BIOS boot. Exiting that menu caused the system to reboot from the external SSD and entered me into a Windows 10 installation wizard as expected.

    The only hiccup I encountered concerned an error when trying to select the internal SSD drive (I was told Windows 10 could not be installed there), but that problem was quickly solved by this absolutely fantastic ~1 min video on Youtube.

    I now have Windows 10 working great on the T5810, and I’m so impressed with it that I’ll have to write another article soon on that subject!