August 22

How to Add a Hard Disk to an ESXi Virtual Machine

VMware ESXi Hard Drive

In an effort to consolidate my homelab, I recently deployed an ESXi host to my main server. There’s been a bit of a learning curve but overall it’s been a pleasant experience. In this tutorial, I will show you how to add a new hard disk to an ESXi virtual machine. (If you’re looking for hard drives, I usually use WD Reds that I shuck from a WD Easystore or Elements drive. They’re high capacity, well-priced, and reliable.)

Now, theoretically, this should be an easy task: just edit the VM, click Add hard disk > New standard hard disk, specify the hard disk's location, and be done. Unfortunately, there's a bug in the ESXi host web client, so instead this will be a tutorial on adding an additional hard disk to an ESXi guest via the command line (esxcli).

Starting with the bug in the ESXi host web client, if you add a new hard disk to a VM, and the primary datastore where the VM is located is full, you’ll be greeted with the following message and the save button will gray out:

The disk size specified is greater than the amount available in the datastore. You cannot overcommit disk space when using a thick provisioned disk. You must increase the datastore capacity, or reduce the disk size before proceeding.
ESXi error message

ESXi host web client bug: the Save button is disabled even after changing the location of the hard disk, and it stays disabled even after removing the offending "New Hard disk".

Now, that's all fine and good, but the problem is that the "Save" button stays that way even when you choose a datastore that does have space. Heck, the save button even remains disabled when you remove the new hard disk! It appears that the web client evaluates the settings only once, cripples the save button, and never re-evaluates them again.

Another thing I've learned along the way is that vSphere != ESXi host web client, and unfortunately almost all of VMware's documentation refers explicitly to vSphere, not the ESXi web client. So in this tutorial I'll be showing you how to use the command line if you don't have vSphere set up.

Creating the Virtual Disk via the Command Line (esxcli and vmkfstools):

1. Enable SSH on the ESXi host by right-clicking on the host in the web client > Services > Enable Secure Shell:

Enabling SSH service in ESXi.

2. SSH into ESXi host. If you’re using Ubuntu, open a terminal and type the following:

ssh root@<insert IP address of ESXi host here>

3. Find the location of your datastore with the following command:

esxcli storage filesystem list
Note the Mount Point (i.e. the location of the volume)

4. Change directory into that location with the following command:

cd <copy Mount Point from above here>
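
For example, if your datastore's Volume Name were datastore1 (a hypothetical name; substitute the Mount Point you noted above):

cd /vmfs/volumes/datastore1

Note that /vmfs/volumes/ contains friendly-name symlinks pointing at the UUID-named mount points, so either form of the path works.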

5. Create your hard disk with the following command:

vmkfstools --createvirtualdisk 256G --diskformat thin <nameYourDriveHere>.vmdk

Note that the above creates a 256 GB virtual drive. If you want a different size, just change it.
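
If you prefer short flags, -c and -d are equivalent to --createvirtualdisk and --diskformat. For example, creating a hypothetical 512 GB thin disk named storage01.vmdk:

vmkfstools -c 512G -d thin storage01.vmdk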

Adding the New Virtual Disk to Your VM:

At this point, we’ve created our virtual disk, but it currently only exists in isolation in the datastore. It isn’t connected to any VM so we need to add it to a VM.

6. Add the hard disk to your VM:

Here is the "choose your own adventure" part of the guide. I originally assumed that the majority of readers would want to add the virtual disk to their VM using the ESXi host's GUI. That's fine, since this part of the ESXi Embedded Host Client isn't broken. If that's you, use the steps outlined in 6a.

However, based on reader feedback, I have learned that some readers are interested in also adding the new hard disk to the VM via the command line, so that both disk creation and hot-adding the disk to the VM are 100% command line. If that’s you, refer to 6b.

6a. Add the hard disk to your VM via the ESXi Embedded Host Client (i.e. the ESXi web client GUI):

We're done with the command line now. Go back to your ESXi web client and edit your VM. Click "Add hard disk" again, but this time, instead of a new hard disk, we're going to select "Existing hard disk" and navigate to the .vmdk you created in step 5:

Add existing (new) hard disk in ESXi web client

Defaults are fine. Don’t panic if your new hard disk shows 0 GB. The virtual disk just hasn’t been formatted yet. Save and boot up your VM. As you can see I’m using a Windows VM here.

6b. Hot Adding the Disk to the VM via the Command Line:

To hot-add our new virtual disk to the VM via the command line, we first need to find the ID of our VM. To do so, still SSH’d into the ESXi host, we issue the following command:

vim-cmd vmsvc/getallvms
Result of vim-cmd vmsvc/getallvms showing Vmid of VMs

I want to add this to my Windows VM, so I want Vmid 1. Now to hot-add our new virtual disk, we simply issue the following command:

vim-cmd vmsvc/device.diskaddexisting <vmid> </vmfs/volumes/pathToDisk.vmdk> 0 1

Replace <vmid> and </vmfs/volumes/pathToDisk.vmdk> with the Vmid and path to your new virtual disk, respectively. The path has to be the full path to the disk you created in steps 3/4/5 above. The 0 at the end refers to your SCSI controller (typically 0) and the 1 refers to your SCSI target number: the 0 slot is occupied by the primary virtual disk your VM is using, so 1 is just the next available target. If you already have a second hard disk on your VM, you would increment this number and use 2 instead; you get the idea. In my case, the command would look something like this:

vim-cmd vmsvc/device.diskaddexisting 1 /vmfs/volumes/5d5d560e-962037c1-16f8-fcaa142fc77d/Windows10Storage/win10aux.vmdk 0 1
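
If you want to sanity-check that the disk attached, vim-cmd can also dump the VM's device list; the output is verbose, but the path of your new .vmdk should appear in it (the 1 here is the Vmid from my example above):

vim-cmd vmsvc/device.getdevices 1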

7. Add the Hard Drive to the OS:

When you boot up your VM and navigate to "This PC" in File Explorer, you still won't see the hard drive. Again, don't panic; this is just because we haven't added the drive in Windows yet.

Open up Disk Management (you can search for it in Cortana) and you should see your new drive, albeit unformatted. Right-click on the hard drive and select “New Simple Volume”:

Disk Management – Adding unallocated disk to Windows

Follow the wizard and when you go back to “This PC”, voila, your new hard disk appears!

New disk installed in VM
May 4

How to Install Fritzing and Fix Missing Dependency Error Messages Using Symlinks

In our previous notebook entry, we completed our exploration into the I2C protocol and implemented an external EEPROM for the Arduino. In that post, I have a wiring diagram that was created using an app called Fritzing. In this tutorial, I will explain how to install Fritzing on Ubuntu as well as how to resolve the following missing dependency errors that I was greeted with when I first installed it:

  • /usr/share/fritzing-0.9.3b.linux.AMD64/lib/Fritzing: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory
  • /usr/share/fritzing-0.9.3b.linux.AMD64/lib/Fritzing: error while loading shared libraries: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory

1. Download Fritzing:

Begin by downloading Fritzing, available here: http://fritzing.org/download/

2. Unpack the .tar to a convenient directory:

Follow the directions for the install on that same page. I extracted the .tar to my /usr/share/ directory. You may have to run as sudo to do this.

3. Navigate to the directory where you extracted your Fritzing tar and try to launch Fritzing:

Fritzing extracted to /usr/share/. Launching Fritzing using ./Fritzing

If it launches, great! But it probably won't, and will instead fail with the following error:

/usr/share/fritzing-0.9.3b.linux.AMD64/lib/Fritzing: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

4. Fix the “error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory” error:

This message tells us that Fritzing is missing a dependency: specifically, the libssl.so.1.0.0 library. Now, this is a very common library, so it's highly probable you already have it. Let's find it with the Linux locate command:

locate libssl.so.1.0.0

Running this command should give you a list of locations that have this library. As you can see, I have quite a few duplicates of it:

locate command with locations of libssl.so.1.0.0
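
One caveat: locate reads from a prebuilt file database, so if it comes up empty for a library you're fairly sure you have, refresh the database and search again:

sudo updatedb
locate libssl.so.1.0.0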

Now that I know that I have the libssl.so.1.0.0 library and I know where it's located, we can use a powerful trick available to us because we're on Linux: the symlink. We've discussed symlinks (or "symbolic links") before, when I showed how to create a separate Plex library that allows you to selectively share content. In short, symlinks are Unix's equivalent of a "shortcut". You can create a symbolic link to another file (or directory) and Linux will treat that shortcut just like it's really there.

That’s exactly what we’re going to do here. So let’s create a symbolic link by picking one of the paths from our locate command above (it doesn’t really matter which one).

First, make sure you're in the lib directory of your Fritzing directory:

cd ./lib

How do I know this is where we want to be? Well, it was in the first part of the error message:

/usr/share/fritzing-0.9.3b.linux.AMD64/lib/Fritzing: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

Now, create your symbolic link:

ln -s [path to the library file you found above with your locate command] ./

In my case, I used:

ln -s /snap/core18/941/usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 ./
Creating a symbolic link ("symlink") to fix the dependency error message.

Now, try to launch Fritzing again. In my case, I was greeted by a new error:

/usr/share/fritzing-0.9.3b.linux.AMD64/lib/Fritzing: error while loading shared libraries: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory

I am always excited to see a new error. It means I actually fixed something and now I get to move on to something else that’s broken!

5. Fix the “error while loading shared libraries: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory” error:

Again, we’re going to start by finding where the missing dependency exists:

locate libcrypto.so.1.0.0

Once we have a viable library location, we’re going to create that symlink to point to it:

ln -s /snap/core18/941/usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 ./lib/

Note that I originally messed this up in my screenshot: I ran the command from the main Fritzing directory, which is why the destination is ./lib/ as shown above. If you're already in the ./lib/ directory, you can just use ./ as the destination, as we did for the first symlink.

locate command with locations of libcrypto.so.1.0.0; followed with fix by creation of symlink

6. Rinse and repeat.

In my case, running ./Fritzing finally launched Fritzing, but you may have other dependencies that need addressing. Now that you know how to fix these missing dependencies, this shouldn't be too much of a problem. Enjoy!
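
The recipe for any further missing library is the same. For a hypothetical missing libfoo.so.1, it would look like this:

locate libfoo.so.1
cd /usr/share/fritzing-0.9.3b.linux.AMD64/lib
ln -s [a path from the locate output above] ./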

As always, feel free to ask me any questions about any problems you run into.

April 11

A Short Introduction to Troubleshooting Docker Networks

Docker network diagram depicting the major interfaces involved.

I recently built an unRAID rig, which has now been deployed to my DMZ. It's great, but something I have been struggling with is that periodically my FileZilla Docker container will be unable to connect to my FTP server, erroring out with "ENETUNREACH – Network unreachable". It seems to be exacerbated by large file moves (like when I'm moving whole directories).

ENETUNREACH – Network unreachable error in FileZilla

I started by researching the error message, but there isn't a whole lot on the formal definition of "ENETUNREACH – Network unreachable", presumably because "Network unreachable" is self-explanatory. There are a lot of forum posts about FileZilla being blocked by an antivirus suite's software firewall, however, so I feel comfortable with the error description given. There are no subtleties to the message: it simply means that FileZilla can't connect to the network (i.e. it can't dial out).

Docker Networking Overview:

Let's begin with a brief introduction to Docker networking. At a high level, Docker containers are very similar to virtual machines (VMs), but without the overhead of having to run a duplicate OS. The diagram below shows our Docker containers, which run inside our Host:

Basic Docker network diagram showing three interfaces: container to host, host to router, and router to internet.

The above diagram shows a basic schematic of a typical Docker network. For my containers, I have the network configuration set to bridge mode, which means that, at least for networking purposes, the Host acts like a glorified router to the Container. Put another way, when I want to communicate with the Container over the network, I just put in the IP address of my Host and a port specific to that Container. The Host then takes that traffic and forwards it to the internal IP address of the Container at whatever port it's listening on. This is also true for outbound traffic. In essence, bridge mode on Docker is nothing more than network address translation (NAT), something you're undoubtedly familiar with if you have a homelab or are self-hosted like I am.
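
As a minimal sketch of that NAT behavior (using the stock nginx image as a neutral example), publishing a port on a bridge-mode container looks like this:

docker run -d --name web -p 8080:80 nginx

Traffic hitting port 8080 on the Host is now forwarded to port 80 inside the container, exactly like a port-forward rule on a home router.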

Three Network Interfaces in Docker Networks:

Back to our problem. We know FileZilla can’t dial out. Referring back to the diagram, we see that there are three checkpoints where traffic could be failing: (1) the interface between the Docker container and Host, (2) the interface between the Host and the router, and (3) the interface between the router and the WAN/internet.

Troubleshooting the Connection:

Since we know our problem is in the outbound direction, let’s focus in that direction. What’s the simplest way to check for a connection? Ping it!

Testing each network interface we identified above in turn:

1) Ping the Host from the Container:

We can accomplish this by executing a command in the Container. In unRAID, click on the Container and select Console. If you aren't running unRAID, this is the same as running:

docker exec -it [container name here, FileZilla for me] sh

Now you can simply ping the Host:

ping [insert IP address of Host on LAN here]

Pinging the Host in my case greeted me with a response, so we know that connection isn't the culprit:

Pinging the host from the container. We have a response so no problem here!
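
For reference, here are the two commands together, using my FileZilla container name and a hypothetical host LAN address of 192.168.1.100:

docker exec -it FileZilla sh
ping 192.168.1.100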

Note: You can also ping an address on the internet. When I did that here, I did not receive a response, confirming that the outbound connection is indeed broken somewhere along the chain:

Pinging a website from the container. No response, so we know something is broken along the chain.

2) Ping the Router from the Host:

This is straightforward enough. In unRAID, you just open your console/terminal on the Host ("Tower") and ping the router's IP address. I also received a response here, so we know that we have a connection to the router, though we probably already knew this since we could connect to the Host over the network in the first place.

Pinging an address on the internet also returned no response, meaning my server was not able to access the internet. This further confirmed that the chain is broken somewhere further upstream.

3) Ping an address on the WAN/internet from the Router:

This can be a little bit trickier. Thankfully I run dd-wrt on my routers, so I can easily initiate a terminal session on the router (terminal [insert router local IP address here]). Pinging an address on the internet resulted in no response here as well! This is the last interface in the chain, so we know the problem is with this router.

[There's a little bit more to know here, which is specific to my network's "unique" architecture. My server resides in a physically separate part of my home, away from my core network, and my home isn't physically wired for Ethernet. To adapt and overcome, I have dd-wrt set up to create a client bridge with this router (the router mentioned above is actually the client router in the client bridge). For those of you who don't know what this is, it means that all clients connected to my secondary client router behave like they're physically connected to the primary router. At least, that's the way it's supposed to work in theory, and it most often does; unfortunately, client bridge mode is notoriously unreliable, as was the case here. Had the DMZ's main router been at fault, as would be the case on your home network, a big clue would've been that I couldn't access the internet from my computer.]

Note: I also want to point out another potential cause here. If, at any point when pinging an external site on the internet, you found that you didn’t get a response, try pinging an IP address (as opposed to the website address) you know is good. If that works, that suggests a problem with your DNS lookup and you should begin your investigation there.
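
Google's public DNS server at 8.8.8.8 makes a convenient known-good IP for this test:

ping 8.8.8.8
ping google.com

If the first gets replies but the second doesn't, your network path is fine and your DNS lookup is the problem.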

Rebooting the router fixed the issue and allowed FileZilla to proceed without error. Honestly, the root cause here is that I am using client bridge mode but getting rid of it really isn’t an option for me at this point. I could try upgrading the router to better hardware but I need to let my wallet recover from this server build first. 🙂

Anyway, I thought this made for an interesting case study and demonstrates how breaking a system down into its simple parts allows for effective troubleshooting. With a methodical process and understanding, every problem can be overcome.

Improvise. Adapt. Overcome.

April 10

How to Share Part of Your Plex Libraries without Giving Users Complete Access to Your Full Library: Symbolic Links with Plex and Docker Containers

By now you've discovered some of the many shortcomings of Plex, one of which is that you can't share individual videos with your friends or family: you have to give them access to the entire library! It's even worse if you have kids and all of your movie content is stored under a single "Movies" library: "Saving Private Ryan" is going to be right next to "Tangled".

Well, thankfully, there's a way around that. Enter "symbolic links". I'll do another tutorial on what a symbolic link actually is in the future, but for now, think of them as Unix's version of a shortcut: a symlink merely points to the filename of another file located somewhere else. This means that to share specific content with users, all we have to do is create a library folder specific to that user and fill it with symbolic links referencing the content in the libraries we already have.

So yes, we do still have to create a new library to share with users, but the good news is that we can do all of this without duplicating files and taking up valuable storage space!

Creating a Symbolic Link:

If your Plex Media Server is a normal install (i.e. not running in a Docker container), creating a symbolic link is pretty straightforward. Just create your new library directory and navigate to it in a terminal. Then just execute the following command:

ln -s [insert file source location here] ./

The path in [insert file source location here] can be either a directory or a video file itself. Just create a symbolic link for each individual directory or video you want to share. That's it!
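
For instance, here's how a hypothetical "Kids Movies" library could be built out of an existing "Movies" library (the paths are made up; adjust them to your setup):

mkdir /media/kids-movies
cd /media/kids-movies
ln -s "/media/movies/Tangled (2010)" ./
ln -s "/media/movies/Frozen (2013)" ./

Then just point a new Plex library at /media/kids-movies and share only that library with the kids' account.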

Symbolic Links with Docker Containers:

Normally symbolic links (“symlinks”) are quite transparent to applications in Linux (that’s the whole point), but in the case of running Plex on an unRAID server, we have an additional challenge: Plex is running in a Docker container. Plex Docker containers have a volume mapping configured to map the media path in the Plex container to the user share on the unRAID host. The chances are that the two file paths specified for the container and the host don’t have the exact same parent directory (if they did that would partially defeat the point of the abstraction that Docker containers give you).

Docker Container Volume Mapping: Host Path != Container Path

The above configuration shows what I mean a little better. As you can see, the Host path does not equal the Container path. The consequence of this is that the Docker Host and the Docker Container have two different frames of reference. The problem we run into when we run the "ln -s" command above is that the symbolic link created stores an absolute path, so when the Plex Docker container tries to follow that path, which is specified in terms of the Host's directory tree, it fails. I think an example will help illustrate:

Normal symlink with absolute file path.

The Fix: Relative Symlinks

Thankfully this is easily fixed with the addition of the relative (-r) argument to the ln command which will instead give us a relative symbolic link:

ln -rs [insert file source location here] ./

The example below demonstrates the difference compared to a “regular” symlink:

Relative symlink giving us a relative file path that will work in a Docker container.

This allows the Docker container to translate the file path specified in the symbolic link into its own internal file structure. That’s it!
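
To make the difference concrete, here's a sketch using a hypothetical unRAID share layout; readlink shows what each link actually stores:

cd /mnt/user/Media/KidsMovies
ln -s /mnt/user/Media/Movies/Tangled ./
readlink Tangled    # prints /mnt/user/Media/Movies/Tangled (absolute: breaks inside the container)
rm Tangled
ln -rs /mnt/user/Media/Movies/Tangled ./
readlink Tangled    # prints ../Movies/Tangled (relative: resolves inside the container too)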

In a future tutorial I will explain more of the technical details behind symlinks and you’ll see equivalent alternatives to the command above.

March 28

How to Upgrade to Ubuntu 19.04 (Disco Dingo) Beta (or any other Ubuntu Beta Version)

Ubuntu 19.04 (Disco Dingo) Beta is available today! I do this so infrequently that I always have to look up how to do it, so for future reference I’m tossing my notes up here. Here is how to upgrade to an Ubuntu beta version.

1. Make sure your current version of Ubuntu is up to date.

This can be accomplished by opening a terminal (Ctrl + Alt + T) and running the following command:

sudo apt-get update
sudo apt-get upgrade

2. Launch update manager

Next, launch update manager with the code below in the command box (Alt + F2). The -d option tells it to also offer the development (pre-release) version of the distribution:

update-manager -d
Prompt to upgrade to a new version of Ubuntu

Just click the “Upgrade” button and you’re done!

FYI, this can also be accomplished in the terminal (as opposed to the command box) with the following:

sudo update-manager -d

That's it. Don't forget to re-enable third-party sources after you've upgraded.

February 2

How To Update/Install FileZilla on Ubuntu

FileZilla is an incredibly useful FTP client for transferring files between your workstation and servers. In this tutorial, I will walk you through updating/installing FileZilla on Ubuntu without using the repository. In general, if you want the latest and greatest features, try to avoid repositories: the apps in repositories are often outdated. I also feel like a repository is a crutch, in that it obfuscates how your software is actually installed on your Linux system.

I can already hear the outrage now; I’m not saying that repositories are worthless. They greatly reduce maintenance when it comes to keeping your system (relatively) up-to-date. Downloading, extracting, and compiling every application from source would be hugely impractical. I use repositories for things that either I don’t use very often or that I don’t care about having the latest version of. For apps that I use often, where I care about having the latest, I handle those manually. Now, on to the tutorial.

1. Obtain your update files.

If you already have FileZilla installed, it automatically checks for updates at launch and downloads them to your home Downloads folder. If you don't have FileZilla already, download it here.

2. Navigate to your downloads folder.

Navigate to your Downloads folder and find your FileZilla tar file. I'll admit I use the Files GUI app that comes with Ubuntu most of the time: right-click and select "Open in Terminal" (or just open a terminal with Ctrl + Alt + T and type cd ~/Downloads/).

3. Extract your tar file.

Extract your tar file using the following command (x extracts, j filters the archive through bzip2, v lists the files as they're extracted, and f specifies the archive):

tar -vxjf FileZilla_3.40.0_x86_64-linux-gnu.tar.bz2

This will extract the file to a directory in your Downloads directory called FileZilla3. You should now have the following:

Extracted FileZilla3 files in ~/Downloads/ directory.

Notice this extraction contains a bin directory, implying that it’s ready to run (no compilation necessary).

4. Move your extracted files to their final location.

Let’s move this folder to our /opt/ directory with:

sudo mv ./FileZilla3/ /opt/

But wait! If you've already installed FileZilla this way before, you'll get the following error:

mv: cannot move './FileZilla3/' to '/opt/FileZilla3': Directory not empty

Even with sudo, mv will refuse to merge a directory. It’s a nice guardrail. In our case though, we do want to merge. For that, we’ll use rsync:

sudo rsync -a ./FileZilla3/ /opt/FileZilla3/

Warning: DO NOT FORGET to include the /FileZilla3/ directory on the destination as shown above. If you used just /opt/ as the destination, rsync would dump the contents of FileZilla3 directly into /opt/ itself, burying everything else in your /opt/ folder under FileZilla's files.
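
Assuming the tarball's usual layout (that bin directory we noticed in step 3), the launcher now lives under /opt/FileZilla3/bin and you can start FileZilla with:

/opt/FileZilla3/bin/filezilla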

And with that, we’re done!
