August 22

How to Add a Hard Disk to an ESXi Virtual Machine

VMware ESXi Hard Drive

In an effort to consolidate my homelab, I recently deployed an ESXi host to my main server. There’s been a bit of a learning curve but overall it’s been a pleasant experience. In this tutorial, I will show you how to add a new hard disk to an ESXi virtual machine. (If you’re looking for hard drives, I usually use WD Reds that I shuck from a WD Easystore or Elements drive. They’re high capacity, well-priced, and reliable.)

Now, theoretically, this should be an easy task: just edit the VM, click Add hard disk > New standard hard disk, specify the hard disk location, and be done. Unfortunately, there's a bug in the ESXi host web client, so instead this will be a tutorial on adding an additional hard disk to an ESXi guest via the command line (esxcli and vmkfstools).

Starting with the bug in the ESXi host web client, if you add a new hard disk to a VM, and the primary datastore where the VM is located is full, you’ll be greeted with the following message and the save button will gray out:

The disk size specified is greater than the amount available in the datastore. You cannot overcommit disk space when using a thick provisioned disk. You must increase the datastore capacity, or reduce the disk size before proceeding.

ESXi error message

ESXi host web client bug: the Save button stays disabled even after changing the location of the hard disk, and it remains disabled even after removing the offending “New Hard disk”.

Now, that’s all fine and good, but the problem is that the “Save” button stays that way even when you choose a datastore that does have space. Heck, the save button even remains disabled when you remove the new hard disk! It appears that the web client only evaluates the settings once, cripples the save button, and never re-evaluates them again.

Another thing I’ve learned along the way is that vSphere != ESXi host web client. Unfortunately, almost all of VMware’s documentation refers explicitly to vSphere, not the ESXi web client, so in this tutorial I’ll be showing you how to use the command line if you don’t have vSphere set up.

Creating the Virtual Disk via the Command Line (esxcli and vmkfstools):

1. Enable SSH on the ESXi host by right-clicking on the host in the web client > Services > Enable Secure Shell:

Enabling SSH service in ESXi

2. SSH into the ESXi host. If you’re using Ubuntu, open a terminal and type the following:

ssh root@<insert IP address of ESXi host here>

3. Find the location of your datastore with the following command:

esxcli storage filesystem list

Note the Mount Point (i.e. the location of the volume).
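For reference, the output looks something like this (the volume name and sizes below are illustrative; the UUID is the one from my datastore, which you’ll see again later):

Mount Point                                        Volume Name  UUID                                 Mounted  Type    Size           Free
-------------------------------------------------  -----------  -----------------------------------  -------  ------  -------------  ------------
/vmfs/volumes/5d5d560e-962037c1-16f8-fcaa142fc77d  datastore1   5d5d560e-962037c1-16f8-fcaa142fc77d  true     VMFS-6  1999844147200  755914244096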

4. Change directory into that location with the following command:

cd <copy Mount Point from above here>

5. Create your hard disk with the following command:

vmkfstools --createvirtualdisk 256G --diskformat thin <nameYourDriveHere>.vmdk

Note that the above creates a 256 GB virtual drive. If you want a different size, just change it.

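As an aside, the short flags -c and -d are equivalent to --createvirtualdisk and --diskformat, so the following creates the same disk (here using the file name I’ll attach to my Windows VM later in this post):

vmkfstools -c 256G -d thin win10aux.vmdk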

Adding the New Virtual Disk to Your VM:

At this point, we’ve created our virtual disk, but it currently only exists in isolation in the datastore. It isn’t connected to any VM so we need to add it to a VM.

6. Add the hard disk to your VM:

Here is the “choose your own adventure” part of the guide. I originally assumed that the majority of readers would want to add the virtual disk to their VM using the ESXi host’s GUI. That’s fine since this part of the ESXi Embedded Host Client isn’t broken. If that’s you, use the steps outlined in 6a.

However, based on reader feedback, I have learned that some readers are interested in also adding the new hard disk to the VM via the command line, so that both disk creation and hot-adding the disk to the VM are 100% command line. If that’s you, refer to 6b.

6a. Add the hard disk to your VM via the ESXi Embedded Host Client (i.e. the ESXi web client GUI):

We’re done with the command line now. Go back to your ESXi web client and edit your VM. Now click “Add hard disk”, but this time, instead of a new hard disk, we’re going to select “Existing hard disk” and navigate to the .vmdk you created in step 5:

Add existing (new) hard disk in ESXi web client

Defaults are fine. Don’t panic if your new hard disk shows 0 GB. The virtual disk just hasn’t been formatted yet. Save and boot up your VM. As you can see I’m using a Windows VM here.

6b. Hot Adding the Disk to the VM via the Command Line:

To hot-add our new virtual disk to the VM via the command line, we first need to find the ID of our VM. To do so, still SSH’d into the ESXi host, we issue the following command:

vim-cmd vmsvc/getallvms
Result of vim-cmd vmsvc/getallvms showing the Vmid of each VM

I want to add this to my Windows VM, so I want Vmid 1. Now to hot-add our new virtual disk, we simply issue the following command:

vim-cmd vmsvc/device.diskaddexisting <vmid> </vmfs/volumes/pathToDisk.vmdk> 0 1

Replace <vmid> and </vmfs/volumes/pathToDisk.vmdk> with the Vmid and the full path to your new virtual disk, respectively (i.e. the path to the disk you created in steps 3/4/5 above). The 0 refers to your SCSI controller number (typically 0) and the 1 refers to your SCSI target number (the 0 target is occupied by the primary virtual disk your VM is using, so 1 is just the next available target; if you already have a second hard disk on your VM, you would increment this and use 2 instead. You get the idea). In my case, the command looks something like this:

vim-cmd vmsvc/device.diskaddexisting 1 /vmfs/volumes/5d5d560e-962037c1-16f8-fcaa142fc77d/Windows10Storage/win10aux.vmdk 0 1
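To confirm the disk actually attached, you can list the VM’s devices with vim-cmd (using my Vmid of 1 here) and look for your new .vmdk in the output:

vim-cmd vmsvc/device.getdevices 1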

7. Add the Hard Drive to the OS:

When you boot up your VM and navigate to “This PC” in File Explorer, you still won’t see the hard drive. Again, don’t panic, this is just because we haven’t added the drive in Windows yet.

Open up Disk Management (you can search for it in Cortana) and you should see your new drive, albeit unformatted. Right-click on the hard drive and select “New Simple Volume”:

Disk Management – adding the unallocated disk in Windows

Follow the wizard and when you go back to “This PC”, voila, your new hard disk appears!

New disk installed in VM
August 14

How To Set Up VLANs On An L3 Switch (HP 1910) With pfSense

This site is getting more traffic than I had ever anticipated and, in order to support this self-hosted site and my homelab, I have upgraded to a 1 gigabit internet connection. In doing so, the bottleneck in my homelab network has shifted from the internet connection to the router itself.

As a reminder, my previous homelab consisted of a DMZ interface on my pfSense firewall with a second DMZ router sitting directly behind it (i.e. the WAN port of the DMZ router connected to the pfSense DMZ interface port). This obviously led to a double NAT situation that I had to handle, but aside from that, this network topology had worked well for me. With the upgraded internet package, though, I noticed that I was only getting half (~500 Mbps) of my theoretical download speed. I was unsure if the bottleneck was my pfSense router or my DMZ router (or if my ISP was lying to me).

I had thought my pfSense router might be the bottleneck, but during my download speed tests, none of my CPU cores pegged to 100% (in a pfSense router with a dedicated quad-port NIC, the bottleneck under heavy traffic is most likely the CPU as the firewall rules are evaluated), so I reasoned that the pfSense firewall was unlikely to be the bottleneck. As a quick, field-expedient test, I set up another interface on a free port on pfSense. Speedtesting directly on that interface gave me my full download speed, so I knew the bottleneck was downstream of pfSense, on my DMZ router (not surprising, since that was a cheap router running DD-WRT).

As my homelab has grown considerably since I first started out, I decided this was the perfect opportunity to replace my DMZ router and upgrade to an L3 switch.

VLANs and Switching

First, let’s start off with what a switch is. You are probably most familiar with a cheap unmanaged switch that allows you to plug it into another ethernet port and effectively split it into an additional 4-8 ports. This is known as an L2 switch – basically all the devices connected to the switch are on the same network and can communicate with each other. The L2 switch accomplishes this by keeping a MAC address table to keep track of which device is on which port. It then switches frames based on that MAC address/port mapping.

One of the greatest features of a managed switch is that it allows you to create what’s called a VLAN (a virtual LAN). A VLAN lets you divide up (subnet) your physical network into separate logical ones.

Why would you want to do this? Well, my biggest motivation is security. By separating out the network into VLANs, you can cordon off the individual VLANs from each other. As an example, let’s say you had a business and you offered guest wifi. You obviously wouldn’t want your guests to be able to connect to your network and access all of your business PCs. With VLANs, you could create two separate VLANs: one for your guest wifi, and another for your business network. With VLANs on a simple L2 switch, devices on those two networks can’t talk to each other, since the devices on your guest wifi effectively live on a separate network from your business network and there’s no routing between them.

What’s an L3 Switch?

An L3 switch takes this functionality of a managed L2 switch a step further. In all but the simplest setups (like the guest wifi/business one above), there will be cases where you want those VLANs to be able to communicate with each other. An L3 switch takes the routing functionality of a router and offloads it to the switch itself, allowing the switch to route traffic between VLANs. So an L3 switch gives you security through the principle of compartmentalization (via VLANs), while also allowing you to route traffic between those VLANs on the switch itself.

I headed off to eBay and picked up the following HP 1910-24G for a whopping $30:

HP 1910-24G L3 Switch

Even though this is the lowest price I’ve ever seen for this switch, it turns out I still overpaid (we’ll get to that later).

While you’re there, be sure to pick up a rollover (console) cable. Trust me, you’ll be glad you did, especially when the seller fails to factory reset the switch before shipping and won’t give you the password. I selected this particular cable because it had the best reviews and the most compatibility with other vendors, not just HP. This particular console cable worked great with my HP 1910.

1. Set Up Our Switch

So now we have our switch. Power it up and connect an ethernet cable between your PC and the switch. You’ll need to assign your computer a static IP since there’s no DHCP server connected to the switch. The exact IP address you need to assign is based on the default IP listed on the back of the switch. For example, if the default IP on the switch is 192.168.254.12, assign your local PC an address like 192.168.254.20. The default subnet mask is 255.255.0.0. Then navigate to the switch’s default IP address in a web browser.

Alternatively, you can connect the switch directly to your pfSense firewall/router and the DHCP server running on pfSense will assign it an IP address. You can then just navigate to that assigned IP address (which can be found on the pfSense GUI or by running nmap over your local network).

Once you navigate to the switch IP address, you’ll be presented with a login page. The default username is admin with a blank password:

HP login page at the switch’s IP address

2. Set Up Static Routing

The first thing we’ll do is set the switch up so that our VLANs can reach the internet. You see, by using the switch as an L3 switch, all routing is handled locally by the switch. The problem is, our switch’s router doesn’t know how to get to the internet: it isn’t directly connected to the internet, it’s never seen the internet before, it doesn’t know what the internet is. That’s what our pfSense router is for, so we need to tell our switch about the pfSense router. We accomplish this by creating a static route. To do so, navigate to Network > IPV4 Routing > Create:

Static routing settings – routes IPv4 traffic (that isn’t local to the switch’s VLANs) out to the pfSense router. Note that this IP address is the IP address of the pfSense router on the interface the HP switch is plugged into.

Enter a destination IP address of 0.0.0.0 with a mask of 0.0.0.0. The next hop should be the IP address of your pfSense router (in my case, it is 192.168.2.1).

So what have we done here? We’ve told our switch that to reach any IP address (0.0.0.0 with a mask of 0.0.0.0 matches everything), it should send the traffic through our next hop at 192.168.2.1 (i.e. through our pfSense router). Preference is inversely related to how strongly the switch favors a route, so by giving this route a high preference value, the switch will continue to use the other routes it already knows about whenever possible (i.e. VLAN-to-VLAN traffic will continue to be routed on the switch itself).

Keeping along this same train of thought, the pfSense router has no idea how to get response traffic from the internet back to the VLANs, since it has no idea of the VLANs’ existence. Therefore, we also need to define static routes on the pfSense router for the return traffic.

To do so, we need to first define a gateway in pfSense by going to System > Routing > Gateways. When you click add, you will be prompted for an interface and a gateway IP address. But what interface and gateway IP address should we use? Thinking about the return flow of traffic, pfSense doesn’t know how to get to our VLANs. It just knows it has traffic for an IP address it’s never heard of. But there is one device that does know where the hosts behind these IP addresses live: the switch. And where does that switch live from pfSense’s perspective? It lives in the DMZ. So we use the DMZ as our interface and we use the L3 switch’s DMZ IP address as the gateway:

Defining our HP 1910 gateway in pfSense.

So now that we have a gateway specified, we can define the rest of our static route. Go to the “Static Routes” tab. You’ll then specify the destination network, the interface, and the gateway (that we just created):

Static routes for return traffic from pfSense to L3 Switch

Let’s summarize what we’ve done. The above static routes tell pfSense that for traffic destined for these networks, send the traffic through the DMZ interface, and use the L3 switch that’s located on that DMZ interface as the “next hop” to route the traffic.

Test it out by changing the gateway (in your PC’s IPv4 settings) to the IP address of your switch. You should now be able to browse the web and ping outside servers. N.B. You may also need to update your firewall rules (see the end of this article).
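For a quick sanity check from that PC (the pfSense address here is mine; substitute your own):

ping 192.168.2.1     # can we reach pfSense through the switch?
ping 8.8.8.8         # can we reach the internet?
traceroute 8.8.8.8   # (tracert on Windows) the first hop should be the switch, the second pfSense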

3. Create Our VLANs

Now we’re finally ready to create our VLANs. Go to Network > VLAN > Create:

Creating a VLAN on the HP 1910; here we are simply telling the switch that a VLAN with this ID exists.

4. Assign Ports to Your VLANs:

We now need to tell the ports which VLANs they apply to. To do so, go to Network > VLAN > Modify Port:

Network > VLAN > Modify Port; assign a port as untagged and enter the applicable VLAN ID.

Select a port. Since the devices we’re going to connect to our switch aren’t “VLAN aware”, we are going to use an untagged port (meaning no VLAN tag is added to outbound frames on that port). In the VLAN ID box, specify the VLAN the device connected to that port should be on.

5. Create VLAN Interface:

If we were going to use this switch as a simple L2 switch with our VLANs routed by our pfSense router (known as a “router on a stick” configuration), we wouldn’t need to complete this step. Creating the VLAN interface is what tells our HP 1910 switch to enable L3 routing. To create our VLAN interface, go to Network > VLAN Interface > Create:

Creating a VLAN Interface on the switch. This tells the switch to route VLAN traffic, and by assigning the switch an IP address, we are also defining the gateway for that VLAN.

Input the VLAN ID you created in the previous step and select Manual for the IPv4 address (unless you have an external DHCP server you want to use). The IPv4 address you specify here will be the switch’s IP on this VLAN. Since we’re using it as an L3 switch, this address will also be the gateway address for any devices you connect to this VLAN.

Don’t forget to click “Apply” and save often.

That’s it!

Don’t forget to assign the correct IP addresses and gateway for the devices you have connected to the switch. As a reminder, they should be in the same subnet as the VLAN specified on that port, and their gateway should be the IP address of the switch itself on that VLAN (the one we specified in step 5). Example: if the VLAN interface is defined as 10.0.20.0/24 (worded another way, 10.0.20.0 with a subnet mask of 255.255.255.0), then your IP addresses should fall within the 10.0.20.1-10.0.20.254 range.
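For instance, on a Linux client plugged into a port on the 10.0.20.0/24 VLAN from the example above, a minimal static configuration might look like this (the interface name and host address are illustrative):

sudo ip addr add 10.0.20.10/24 dev eth0
sudo ip route add default via 10.0.20.1   # the switch's VLAN interface IP from step 5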

You may also have to change your pfSense firewall rules on the interface that the L3 switch is plugged into. For example, you likely have a rule that says allow traffic originating from LAN net to anywhere. For our traffic to reach the internet, you would need to reconfigure this rule to allow traffic originating from anywhere (or at least from your new VLAN subnets) to anywhere, since our VLANs lie in separate subnets that “LAN net” doesn’t cover.

June 19

Preparing Your Server for Vacation – The Unattended Server Access Checklist

Unattended server checklist: 1) check VPN connection, 2) check reverse SSH tunnel backup, 3) VNC available on backup devices, 4) test automatic server shutdown after loss of utility power, 5) test server WOL from one of the backup devices.

I am about to embark on a much-needed vacation this Friday. I travel relatively frequently for work, and something that has always been a stressor in the back of my mind was leaving my homelab unattended. With my luck, as soon as I lock my door and get in the car, the server will wait for that exact minute to go down. The way I have managed this problem is by creating a checklist to ensure that, no matter where I am, I can access my network. Stuff goes wrong; that’s the nature of any real work, but I can fix anything so long as I have access. Best of all, this checklist can save you from having to call your wife and tell her to push a button on your computer!

In this post, I give you the checklist I use for unattended server access. In future posts, I will create how-to’s for implementing these features.

1) Check VPN Connection:

VPNing into my DMZ is my primary method of access for resources on my network when I am not at home. The advantages of being able to connect to my DMZ via VPN are enormous, since it allows me to act just like another client on the network.

2) Check Reverse SSH Tunnel Backup (x2):

Let’s say my VPN server has gone down (in either a controlled or uncontrolled fashion); in that situation, my primary means of unattended access has gone with it: I’d be locked out of my own network. The Army has a phrase for this: “One is none, two is one, three is better.” Since we have only discussed one way of accessing the network remotely, applying the military’s logic, I effectively have no way into my network to fix it; therefore, I need a backup. This is where a reverse SSH tunnel comes to the rescue. You can either create one manually or use a 3rd party service.

I use Remote.it’s connectd. With it, I can SSH into another device on my network (keeping with the idea of redundancy for critical systems, this device should not be the same one running your VPN server) and do what I need to do from there. (See WOL below).
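If you’d rather roll the tunnel yourself, here is a minimal sketch using a VPS you control as the middleman (the hostname, usernames, and port are placeholders):

# On a device inside your network: publish our local SSH server on the VPS's port 2222
ssh -N -R 2222:localhost:22 user@my-vps.example.com
# From anywhere: log into the VPS, then hop back through the tunnel
ssh -p 2222 homeuser@localhost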

3) VNC Server Available on Backup Devices:

My router uses a GUI, so being able to spin up a VNC server on demand from the SSH connection above is necessary.
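On a Linux backup device with a desktop session, one way to do this on demand (assuming x11vnc is installed; ports and hostnames below are illustrative) is:

x11vnc -display :0 -localhost -once
# From your remote machine, tunnel the VNC port over SSH rather than exposing it:
ssh -L 5900:localhost:5900 user@backup-device
# Then point a VNC client at localhost:5900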

4) Test Automatic Server Shutdown When Running on Backup Power (UPS):

I have an APC UPS directly connected to my server over USB. In the event of a power failure, the UPS tells my server that we are no longer on utility power and allows it to gracefully shut down. Testing is essential here since it not only checks that this feature is still functional, but also serves as a check on the UPS’s battery life.
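I won’t cover the full setup here, but if your server runs Linux, apcupsd is a common way to wire this up; a minimal sketch of the relevant /etc/apcupsd/apcupsd.conf settings for a USB-connected APC unit (the thresholds are illustrative, not prescriptive):

UPSCABLE usb
UPSTYPE usb
DEVICE
BATTERYLEVEL 10   # begin shutdown when 10% battery remains
MINUTES 5         # or when an estimated 5 minutes of runtime remain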

5) Test Server Wake-on-LAN (WOL) from Backup Device:

Unlike my other network devices (router, hardware firewall, the above backup devices, etc.), my server is set up NOT to automatically restart upon the restoration of utility power. Since power outages often have brief periods of power restoration, I don’t want it continuously starting up only to lose power again. Therefore, after a graceful shutdown, I want the server to stay down until I bring it back up. I accomplish this via a WOL message (a “magic packet”) sent from one of the backup devices to the server.
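For reference, sending the magic packet from a Linux backup device is a one-liner with either of the common tools (replace the placeholder MAC with your server’s NIC):

wakeonlan AA:BB:CC:DD:EE:FF
# or, with etherwake (requires root):
sudo etherwake AA:BB:CC:DD:EE:FF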

April 11

A Short Introduction to Troubleshooting Docker Networks

Docker network diagram depicting the major interfaces involved.

I recently built an unRAID rig which has now been deployed to my DMZ. It’s great, but something I have been struggling with is that my FileZilla Docker container will periodically be unable to connect to my FTP server, erroring out with “ENETUNREACH – Network unreachable”. It seems to be exacerbated by large file moves (like when I’m moving whole directories).

ENETUNREACH – Network unreachable error in FileZilla

I started by researching the error message, but there isn’t a whole lot out there on the formal definition of “ENETUNREACH – Network unreachable”, presumably because “Network unreachable” is self-explanatory. There are a lot of forum posts about FileZilla being blocked by an antivirus suite’s software firewall, however, so I feel comfortable with the error description given. There are no subtleties to the message: it simply means that FileZilla can’t connect to the network (i.e. it can’t dial out).

Docker Networking Overview:

Let’s begin with a brief introduction to Docker networking. At a high level, Docker containers are very similar to virtual machines (VMs), but without the overhead of having to run a duplicate OS. The diagram below shows our Docker containers, which run inside of our Host:

Basic Docker network diagram showing three interfaces: the interface between the container and the host, the interface between the host and the router, and the interface between the router and the internet.

The above diagram shows a basic schematic of a typical Docker network. For my containers, I have the network configuration set to bridge mode, which means that, at least for networking purposes, the Host acts like a glorified router to the Container. Put another way, when I want to communicate with the Container over the network, I just put in the IP address of my Host and a port specific to that Container. The Host then takes that traffic and forwards it to the internal IP address of the Container at whatever port it’s listening on. The same is true for outbound traffic. In essence, bridge mode on Docker is nothing more than network address translation (NAT), something you’re undoubtedly familiar with if you have a homelab or are self-hosted like I am.
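You can see this NAT in the port mapping itself; a quick sketch (the image name and ports here are placeholders, not my actual FileZilla container):

# Map host port 8021 to container port 21; traffic to <Host IP>:8021 is NAT'd to the Container
docker run -d --network bridge -p 8021:21 some/ftp-image
# Inspect the default bridge to see the Container's internal IP address
docker network inspect bridge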

Three network interfaces in Docker networks

Back to our problem. We know FileZilla can’t dial out. Referring back to the diagram, we see that there are three checkpoints where traffic could be failing: (1) the interface between the Docker container and Host, (2) the interface between the Host and the router, and (3) the interface between the router and the WAN/internet.

Troubleshooting the Connection:

Since we know our problem is in the outbound direction, let’s focus in that direction. What’s the simplest way to check for a connection? Ping it!

Testing each network interface we identified above in turn:

1) Ping the Host from the Container:

We can accomplish this by executing a command in the Container. In unRAID, this is done by clicking on the Container and selecting Console. If you aren’t running unRAID, this is the same as running:

docker exec -it [container name here; FileZilla for me] sh

Now you can simply ping the Host:

ping [insert IP address of Host on LAN here]

Pinging the Host in my case greeted me with a response, so we know that connection isn’t the culprit:

Pinging the Host from the Container. We have a response, so no problem here!

Note: You can also ping an address on the internet. When I did that here, I did not receive a response, confirming that the outbound connection is indeed broken somewhere along the chain:

Pinging a website from the Container. No response, so we know something is broken along the chain.

2) Ping the Router from the Host:

This is straightforward enough. In unRAID you just open your console/terminal on the Host (“Tower”) and ping the router’s IP address. I also received a response here, so we know that we have a connection to the router. (We probably already knew this, since we could connect to the Host over the network in the first place.)

Pinging an address on the internet also received no response, meaning my server was not able to access the internet. This further confirmed that the chain was broken still further upstream.

3) Ping an address on the WAN/internet from the Router:

This can be a little bit trickier. Thankfully I run dd-wrt on my routers, so I can easily initiate a terminal session on the router (over telnet or SSH at the router’s local IP address). Pinging an address on the internet resulted in no response here as well! Well, we know this guy is the last interface in the chain, so we know the problem is with him.

[There’s a little bit more to know here, which is specific to my network’s “unique” architecture. My server resides in a physically separate part of my home, away from my core network, and my home isn’t physically wired for ethernet; therefore, to adapt and overcome, I have dd-wrt set up to create a client bridge with this router (the router mentioned above is actually the client router in the client bridge). For those of you who don’t know what this is, it means that all clients connected to my secondary client router behave like they’re physically connected to the primary router. At least that’s the way it’s supposed to work in theory, and it most often does; unfortunately, client bridge mode is notoriously unreliable, as is the case here. Had this been the DMZ’s main router at fault, as would be the case on your home network, a big clue would’ve been that I couldn’t access the internet from my computer.]

Note: I also want to point out another potential cause here. If, at any point when pinging an external site on the internet, you found that you didn’t get a response, try pinging an IP address (as opposed to the website address) you know is good. If that works, that suggests a problem with your DNS lookup and you should begin your investigation there.
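For example (using well-known public addresses):

ping 8.8.8.8        # a raw IP address; no DNS lookup required
ping google.com     # requires a DNS lookup
# If the first succeeds but the second fails, start your investigation with DNS.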

Rebooting the router fixed the issue and allowed FileZilla to proceed without error. Honestly, the root cause here is that I am using client bridge mode but getting rid of it really isn’t an option for me at this point. I could try upgrading the router to better hardware but I need to let my wallet recover from this server build first. 🙂

Anyway, I thought this made for an interesting case study and demonstrates how breaking a system down into its simple parts allows for effective troubleshooting. With a methodical process and understanding, every problem can be overcome.

Improvise. Adapt. Overcome.

April 10

How to Share Part of Your Plex Libraries without Giving Users Complete Access to Your Full Library: Symbolic Links with Plex and Docker Containers

By now you’ve discovered some of the many shortcomings of Plex. One of them is that you can’t share individual videos with your friends or family: you have to give them access to the entire library! It’s even worse if you have kids and all of your movie content is stored under a single “Movies” library: “Saving Private Ryan” is going to be right next to “Tangled”.

Well, thankfully, there’s a way around that. Enter “symbolic links”. I’ll do another tutorial on what a symbolic link actually is in the future, but for now, think of a symlink as Unix’s version of a shortcut: it merely points to the filename of another file located somewhere else. This means that to share specific content with users, all we have to do is create a library folder specific to that user and fill it with symbolic links referencing the content in the libraries we already have.

So yes, we do still have to create a new library to share with users, but the good news is that we can do all of this without making a duplicate of the file and taking up valuable storage space!

Creating a Symbolic Link:

If your Plex Media Server is a normal install (i.e. not running in a Docker container), creating a symbolic link is pretty straightforward. Just create your new library directory and navigate to it in a terminal. Then just execute the following command:

ln -s [insert file source location here] ./

The source location above can be either a directory or an individual video file. Just create a symbolic link for each directory or video you want to share. That’s it!
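As a concrete (hypothetical) example, say you want a kid-friendly library; the paths below are made up, so substitute your own:

mkdir -p "/mnt/user/Media/Kids Movies"
cd "/mnt/user/Media/Kids Movies"
ln -s "/mnt/user/Media/Movies/Tangled (2010)" ./

Then point a new Plex library at the Kids Movies folder and share only that library with the kids’ account.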

Symbolic Links with Docker Containers:

Normally symbolic links (“symlinks”) are quite transparent to applications in Linux (that’s the whole point), but in the case of running Plex on an unRAID server, we have an additional challenge: Plex is running in a Docker container. Plex Docker containers have a volume mapping configured to map the media path in the Plex container to the user share on the unRAID host. The chances are that the two file paths specified for the container and the host don’t have the exact same parent directory (if they did that would partially defeat the point of the abstraction that Docker containers give you).

Docker Container Volume Mapping: Host Path != Container Path

The above configuration shows what I mean a little better. As you can see, the Host path does not equal the Container path. The consequence of this is that the Docker Host and the Docker Container have two different systems of reference. The problem we run into when we run the command “ln -s” above is that the symbolic link created follows an absolute path, so when the Plex Docker container tries to follow a path specified in terms of the Host’s directory tree, it fails. I think an example will help illustrate:

Normal symlink with an absolute file path.

The Fix: Relative Symlinks

Thankfully this is easily fixed with the addition of the relative (-r) argument to the ln command which will instead give us a relative symbolic link:

ln -rs [insert file source location here] ./

The example below demonstrates the difference compared to a “regular” symlink:

Relative symlink giving us a relative file path that will work in a Docker container.

This allows the Docker container to translate the file path specified in the symbolic link into its own internal file structure. That’s it!
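If you want to verify which kind of link you’ve made, ls -l shows the target; with the hypothetical paths from the earlier example, the two variants would look roughly like this:

ls -l
# Absolute symlink (breaks inside the container):
#   Tangled (2010) -> /mnt/user/Media/Movies/Tangled (2010)
# Relative symlink (resolves inside the container as well):
#   Tangled (2010) -> ../Movies/Tangled (2010)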

In a future tutorial I will explain more of the technical details behind symlinks and you’ll see equivalent alternatives to the command above.

January 16

Set Up an NGINX Reverse Proxy on a Raspberry Pi (or any other Debian OS)

If you’re running a web server out of your homelab (and you should), you really should consider running your servers behind an NGINX reverse proxy. Honestly, this should be the first thing you build in your homelab. It doesn’t take a lot to set up: NGINX is so efficient it can even run on something as simple as a Raspberry Pi, and it pays dividends once you’ve got it up and running.

What Does a Reverse Proxy Do?

A reverse proxy serves as a sort of dispatcher by acting as a central contact point for clients. Based on the information requested by the client, it then routes the request to the appropriate backend server and makes sure the backend server’s response makes it back to the appropriate client.

Prototypical NGINX reverse proxy diagram

What are these dividends you speak of?

A reverse proxy can give you additional flexibility, security, and even a performance bump. It can also greatly simplify your deployment:

  1. Flexibility: An NGINX reverse proxy can allow you to host multiple sites/domains with only one IP address. It accomplishes this by listening on a port (usually port 80 for HTTP traffic) and parsing the HTTP request’s Host header. Based on the host specified in the header, NGINX can route a request to the proper backend server (in a reverse proxy, this is also known as an upstream server); see the sketch after this list.
  2. Security: By standing between the client and the backend server, the reverse proxy provides a degree of separation.
  3. Improved performance: NGINX can be used to cache static content, which means that not only is content returned to the client faster, but, since the upstream server often doesn’t even need to be contacted, it can take a lot of the load off your backend servers.
  4. Simplified deployment: If you’re hosting multiple sites, an NGINX reverse proxy can greatly simplify your implementation by giving you a single point to manage your traffic. This means you only have to set up port forwarding once, and whenever you create a new site, all you have to do is add an additional configuration to NGINX. When you implement HTTPS (and you should), instead of having to implement it on every individual web server you have set up, you can handle it all on your NGINX reverse proxy.
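To make the flexibility point concrete, here’s a minimal sketch of name-based routing with two upstream servers (the domains and internal IPs are placeholders):

server {
	listen 80;
	server_name blog.example.com;
	location / {
		proxy_pass http://192.168.1.10;
	}
}

server {
	listen 80;
	server_name wiki.example.com;
	location / {
		proxy_pass http://192.168.1.11;
	}
}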

Installing NGINX:

Now that I’ve hopefully convinced you to implement an NGINX reverse proxy, let’s get started. I recommend using a dedicated device for this (again, it need not be expensive; even a Raspberry Pi will do) as it helps keep everything clean and compartmentalized. With a clean install of Ubuntu (if using an x86-64/AMD64 device) or Raspbian (if on the RPi), do the following:

1. Update your package list and make sure your device is updated:

sudo apt-get update
sudo apt-get upgrade

2. Depending on what Linux distro you’ve picked, it might have Apache already installed. We don’t want that so uninstall it with:

sudo apt-get remove apache2

3. Install NGINX:

 sudo apt-get install nginx

4. NGINX should start automatically, but just in case, you can start it manually with:

sudo systemctl start nginx

5. Confirm that NGINX is up and running by opening your browser and visiting your device’s IP address. You should see the default NGINX page when visiting the loopback address (127.0.0.1) from the device itself, or the actual IP assigned to your device (available by running the command ‘hostname -I’ in the terminal).

6. Good! Now we have NGINX up and running! If you don’t have a backend web server running yet, then we’re done since you don’t have anywhere for us to send traffic. Come back to this point when you do. But if you do have a web server for us to proxy traffic to/from, continue on!

Configuring the Reverse Proxy:

So you’ve made it this far and you now have an NGINX server running. Let’s set up the reverse proxy part to make this an NGINX reverse proxy and not just a simple NGINX web server:

1. Go to our NGINX sites-available directory:

cd /etc/nginx/sites-available/

2. Create the configuration file. You’ll eventually accumulate a lot of these, so I recommend naming it based on the site that you’re reverse proxying so you can easily find it again:

sudo nano example.com.conf

3. In nano, add the following:

server {
	listen 80;
	server_name example.com;
	location / {
		proxy_pass http://192.x.x.2;
	}
}

server_name is going to contain the domain name of the website clients are going to be requesting. proxy_pass is going to be the local (internal) IP address of the web server that you’re forwarding traffic to. You can also specify a particular port if your web server is running on a non-standard port (example: proxy_pass http://192.x.x.2:8230).

4. For NGINX to actually serve your site with your new configuration, you need to link it to /sites-enabled/ with:

sudo ln -s /etc/nginx/sites-available/example.com.conf /etc/nginx/sites-enabled/example.com.conf

5. Test your configuration to make sure you aren’t getting any errors:

sudo nginx -t

6. Reload NGINX to tell it that the configuration has been updated:

sudo systemctl reload nginx

That’s all there is to it! In future posts, I’ll cover how to set up that upstream web server, how to configure the DNS for your domain, and how to set up port forwarding so that you can access your sites from the internet.

As always, feel free to comment if you run into any problems or need help!
