August 22

How to Add a Hard Disk to an ESXi Virtual Machine

VMware ESXi Hard Drive

In an effort to consolidate my homelab, I recently deployed an ESXi host to my main server. There’s been a bit of a learning curve but overall it’s been a pleasant experience. In this tutorial, I will show you how to add a new hard disk to an ESXi virtual machine. (If you’re looking for hard drives, I usually use WD Reds that I shuck from a WD Easystore or Elements drive. They’re high capacity, well-priced, and reliable.)

Now, theoretically, this should be an easy task: edit the VM, click Add hard disk > New standard hard disk, specify the hard disk’s location, and be done. Unfortunately, there’s a bug in the ESXi host web client, so instead this will be a tutorial on adding an additional hard disk to an ESXi guest via the command line (esxcli and vmkfstools).

Starting with the bug in the ESXi host web client, if you add a new hard disk to a VM, and the primary datastore where the VM is located is full, you’ll be greeted with the following message and the save button will gray out:

The disk size specified is greater than the amount available in the datastore. You cannot overcommit disk space when using a thick provisioned disk. You must increase the datastore capacity, or reduce the disk size before proceeding.
ESXi error message


ESXi host web client bug: the Save button remains disabled even after changing the location of the hard disk, and stays disabled even after removing the offending “New Hard disk”.

Now, that’s all fine and good, but the problem is, the “Save” button stays that way even when you choose a datastore that does have space. Heck, the save button even remains disabled when you remove the new hard disk! It appears that the web client only evaluates once, cripples the save button, and never re-evaluates the settings ever again.

Another thing I’ve learned along the way is that vSphere != ESXi host web client and unfortunately almost all of VMware’s documentation refers explicitly to vSphere, not the ESXi web client, so in this tutorial I’ll be showing you how to use the command line if you don’t have vSphere set up.

Creating the Virtual Disk via the Command Line (esxcli and vmkfstools):

1. Enable SSH on the ESXi host by right-clicking on the host in the web client > Services > Enable Secure Shell:

Enabling SSH service in ESXi.

2. SSH into ESXi host. If you’re using Ubuntu, open a terminal and type the following:

ssh root@<insert IP address of ESXi host here>

3. Find the location of your datastore with the following command:

esxcli storage filesystem list
Note the Mount Point (i.e. the location of the volume)
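If you’d rather grab the Mount Point programmatically instead of copying it by hand, something like the following works. The sample output below is invented (same shape as real `esxcli` output, made-up values); on a real host you would pipe the live command into awk instead.

```shell
# Sample imitating `esxcli storage filesystem list` output (made-up values).
sample='Mount Point                                        Volume Name  UUID                                 Mounted  Type    Size        Free
-------------------------------------------------  -----------  -----------------------------------  -------  ------  ----------  ----------
/vmfs/volumes/5d5d560e-962037c1-16f8-fcaa142fc77d  datastore1   5d5d560e-962037c1-16f8-fcaa142fc77d  true     VMFS-6  1000000000  500000000'

# VMFS rows have the datastore type in column 5; print their mount point (column 1).
mount_point=$(printf '%s\n' "$sample" | awk '$5 ~ /^VMFS/ {print $1}')
echo "$mount_point"
```

On a real host, the equivalent one-liner would be `esxcli storage filesystem list | awk '$5 ~ /^VMFS/ {print $1}'` (column positions assume the default output format).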

4. Change directory into that location with the following command:

cd <copy Mount Point from above here>

5. Create your hard disk with the following command:

vmkfstools --createvirtualdisk 256G --diskformat thin <nameYourDriveHere>.vmdk

Note that the above creates a 256 GB virtual drive. If you want a different size, just change it.


Adding the New Virtual Disk to Your VM:

At this point, we’ve created our virtual disk, but it currently only exists in isolation in the datastore. It isn’t connected to any VM so we need to add it to a VM.

6. Add the hard disk to your VM:

Here is the “choose your own adventure” part of the guide. I originally assumed that the majority of readers would want to add the virtual disk to their VM using the ESXi host’s GUI. That’s fine, since this part of the ESXi Embedded Host Client isn’t broken. If that’s you, use the steps outlined in 6a.

However, based on reader feedback, I have learned that some readers are interested in also adding the new hard disk to the VM via the command line, so that both disk creation and hot-adding the disk to the VM are 100% command line. If that’s you, refer to 6b.

6a. Add the hard disk to your VM via the ESXi Embedded Host Client (i.e. the ESXi web client GUI):

We’re done with the command line now. Go back to your ESXi web client and edit your VM. Now click “Add a hard disk” but this time instead of a new hard disk, we’re going to select “Existing hard disk” and navigate to the .vmdk you created in step 5:

Add existing (new) hard disk in ESXi web client

Defaults are fine. Don’t panic if your new hard disk shows 0 GB; the virtual disk just hasn’t been formatted yet. Save and boot up your VM. As you can see, I’m using a Windows VM here.

6b. Hot Adding the Disk to the VM via the Command Line:

To hot-add our new virtual disk to the VM via the command line, we first need to find the ID of our VM. To do so, still SSH’d into the ESXi host, we issue the following command:

vim-cmd vmsvc/getallvms
Result of vim-cmd vmsvc/getallvms showing the Vmid of each VM

I want to add this to my Windows VM, so I want Vmid 1. Now to hot-add our new virtual disk, we simply issue the following command:

vim-cmd vmsvc/device.diskaddexisting <vmid> </vmfs/volumes/pathToDisk.vmdk> 0 1

Replace <vmid> and </vmfs/volumes/pathToDisk.vmdk> with the Vmid and the full path to your new virtual disk, respectively (i.e. the path to the disk you created in steps 3-5 above). The 0 at the end refers to your SCSI controller (typically 0), and the 1 refers to your SCSI target number: the 0 slot is occupied by the primary virtual disk your VM is using, so 1 is just the next available target. If you already have a second hard disk on your VM, you would use 2 instead, and so on. In my case, the command looks like this:

vim-cmd vmsvc/device.diskaddexisting 1 /vmfs/volumes/5d5d560e-962037c1-16f8-fcaa142fc77d/Windows10Storage/win10aux.vmdk 0 1
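For clarity, here is the same command assembled from its parts, using the example values above, so each argument is explicit:

```shell
# Sketch: build the hot-add command from named variables (values from the example above).
vmid=1                                   # Vmid from `vim-cmd vmsvc/getallvms`
disk="/vmfs/volumes/5d5d560e-962037c1-16f8-fcaa142fc77d/Windows10Storage/win10aux.vmdk"
controller=0                             # SCSI controller number (typically 0)
target=1                                 # next free SCSI target (0 = primary disk)
cmd="vim-cmd vmsvc/device.diskaddexisting $vmid $disk $controller $target"
echo "$cmd"
```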

7. Add the Hard Drive to the OS:

When you boot up your VM and navigate to “This PC” in File Explorer, you still won’t see the hard drive. Again, don’t panic, this is just because we haven’t added the drive in Windows yet.

Open up Disk Management (you can search for it in Cortana) and you should see your new drive, albeit unformatted. Right-click on the hard drive and select “New Simple Volume”:

Disk Management - Adding unallocated disk to Windows

Follow the wizard and when you go back to “This PC”, voila, your new hard disk appears!

New disk installed in the VM
August 14

How To Set Up VLANs On An L3 Switch (HP 1910) With pfSense

This site is getting more traffic than I had ever anticipated and, in order to support this self-hosted site and my homelab, I have upgraded to a 1 gigabit internet connection. In doing so, the bottleneck in my homelab network has shifted from the internet connection to the router itself.

As a reminder, my previous homelab consisted of a DMZ interface on my pfSense firewall with a second DMZ router sitting directly behind it (i.e. the WAN of the DMZ router connected to the DMZ interface port on pfSense). This obviously led to a double NAT situation that I had to handle, but aside from that, this network topology had worked well for me.

With the upgraded internet package, though, I noticed that I was only getting half (~500 Mbps) of my theoretical download speed. I was unsure if the bottleneck was in my pfSense router or my DMZ router (or if my ISP was lying to me). I had thought my pfSense router might be the bottleneck, but when I ran my download speed tests, none of my CPU cores pegged to 100% (in a pfSense router with a dedicated quad-port NIC, the bottleneck is most likely to occur in the CPU under heavy traffic as the firewall rules are evaluated), so I reasoned that the pfSense firewall being the bottleneck was unlikely.

As a quick, field-expedient test, I set up another interface on a free port on pfSense. Speedtesting directly on that interface gave me my full download speed, so I knew the bottleneck was downstream of that connection: my DMZ router (not surprising, since it was a cheap router running DD-WRT).

As my homelab has grown considerably since I first started out, I decided this was the perfect opportunity to replace my DMZ router and upgrade to an L3 switch.

VLANs and Switching

First, let’s start off with what a switch is. You are probably most familiar with a cheap unmanaged switch that lets you plug into another ethernet port and effectively split it into an additional 4-8 ports. This is known as an L2 switch – basically, all the devices connected to the switch are on the same network and can communicate with each other. The L2 switch accomplishes this by keeping a MAC address table to track which device is on which port. It then switches frames based on their destination MAC address and its associated port.
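To illustrate the MAC-table idea with a toy model (this is obviously not real switch firmware, just a sketch with made-up MACs):

```shell
# Toy model of L2 switching: learn which port a MAC lives on, forward by lookup.
declare -A mac_table                    # MAC address -> port

learn() { mac_table["$1"]="$2"; }       # a frame arriving on port $2 teaches us MAC $1

forward() {                             # where do we send a frame destined for MAC $1?
  echo "${mac_table["$1"]:-flood}"      # unknown destination: flood out all ports
}

learn "aa:aa:aa:aa:aa:aa" 1             # device on port 1
learn "bb:bb:bb:bb:bb:bb" 2             # device on port 2
```

A frame for a MAC the switch has never seen gets flooded out every port, which is exactly why the table exists: once a reply comes back, the switch learns the port and stops flooding.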

One of the greatest features of a managed switch is that it allows you to create what’s called a VLAN (a virtual LAN). A VLAN lets you divide up (subnet) your physical network into separate logical ones.

Why would you want to do this? Well, my biggest motivation is security. By separating out the network into VLANs, you can cordon off the individual VLANs from each other. As an example, let’s say you had a business and you offered guest wifi. You obviously wouldn’t want your guests to be able to connect to your network and access all of your business PCs. With VLANs, you could create two separate VLANs: one for your guest wifi, and another for your business network. With VLANs on a simple L2 switch, devices on those two networks can’t talk to each other, since the devices on your guest wifi effectively live on a separate network from your business network and there’s no routing between them.

What’s an L3 Switch?

An L3 switch takes the functionality of a managed L2 switch a step further. In all but the most simple setups (like the guest wifi/business one above), there are scenarios where you want those VLANs to be able to communicate with each other. An L3 switch takes the routing functionality of a router and offloads it to the switch itself, allowing the switch to route traffic between VLANs. So an L3 switch gives you security through the principle of compartmentalization (via VLANs), while also allowing you to route traffic between those VLANs on the switch itself.

I headed off to Ebay and picked up the following HP 1910-24G for a whopping $30:

HP 1910-24G L3 Switch
HP 1910-24G

Even though this is the lowest price I’ve ever seen for this switch, it turns out I still overpaid (we’ll get to that later).

While you’re there, be sure to pick up a rollover (console) cable. Trust me, you’ll be glad you did, especially when the seller fails to factory reset the switch before shipping and won’t give you the password. I selected this particular cable because it had the best reviews and the most compatibility with other vendors, not just HP. This particular console cable worked great with my HP 1910.

1. Set Up Our Switch

So now we have our switch. Power it up and connect an ethernet cable between your PC and the switch. You’ll need to assign your computer a static IP, since no DHCP server is connected to the switch. The exact IP address you need to assign is based on the default IP listed on the back of the switch: give your PC an address in the same subnet as that default IP, using the switch’s default subnet mask. Then navigate to the switch’s default IP address in a web browser.

Alternatively, you can connect the switch directly to your pfSense firewall/router and the DHCP server running on pfSense will assign it an IP address. You can then just navigate to that assigned IP address (which can be found on the pfSense GUI or by running nmap over your local network).

Once you navigate to the switch IP address, you’ll be presented with a login page. The default username is admin with a blank password:

HP Login Page at Switch's IP Address

2. Set Up Static Routing

The first thing we’ll do is set the switch up so that our VLANs can reach the internet. You see, by using the switch as an L3 switch, all routing is handled locally by the switch. The problem is, our switch’s router doesn’t know how to get to the internet- it isn’t directly connected to the internet, it’s never seen the internet before, it doesn’t know what the internet is. That’s what our pfSense router does, so we need to tell our switch about the pfSense router. We accomplish this by creating a static route. To do so, navigate to Network > IPV4 Routing > Create:

Static routing settings - routes IPv4 traffic (that isn't local to switch's VLANs) out to the pfSense router. Note that this IP address is the IP address of the pfSense router on the interface the HP switch is plugged into.

Enter a destination IP address of with a mask of (i.e. a route matching all destinations). The next hop should be the IP address of your pfSense router.

So what have we done here? What we’ve said to our switch is: if we’re trying to reach any IP address (destination with a mask of matches everything), get there by sending the traffic to our next hop (i.e. get there by going through our pfSense router). Preference is inversely related to how strongly we should use that route, so by giving it a high preference value, the switch will continue to use the other, more specific routes it has picked up whenever possible (i.e. VLAN-to-VLAN traffic will continue to be routed through the switch itself).

Keeping along this same train of thought, the pfSense router has no idea how to get response traffic from the internet back to the VLANs, since it has no idea of the VLANs’ existence. Therefore, we also need to define static routes on the pfSense router for the return traffic.

To do so, we need to first define a gateway in pfSense by going to System > Routing > Gateways. When you click add you will be prompted for an interface and a gateway IP address. But what interface and gateway IP address should we use? Thinking about the return flow of traffic, pfSense doesn’t know how to get to our VLANs. It just knows it has traffic for an IP address it’s never heard of. But there is one device that does know where the hosts that belong to these IP addresses live- the switch. And where does that switch live from pfSense’s perspective? It lives in the DMZ. So we use the DMZ as our interface and we use the L3 switch’s DMZ IP address as the gateway:

Defining our HP 1910 gateway in pfSense

So now that we have a gateway specified, we can define the rest of our static route. Go to the “Static Routes” tab. You’ll then specify the destination network, the interface, and the gateway (that we just created):

Static routes for return traffic from pfSense to L3 Switch

Let’s summarize what we’ve done. The above static routes tell pfSense that for traffic destined for these networks, send the traffic through the DMZ interface, and use the L3 switch that’s located on that DMZ interface as the “next hop” to route the traffic.

Test it out by changing the gateway (of your IPV4 settings) on your PC to the IP address of your switch. You should now be able to navigate to the web and ping outside servers. N.B. You may also need to update your firewall rules (see end of article).

3. Create Our VLANs

Now we’re finally ready to create our VLANs. Go to Network > VLAN > Create:

Creating a VLAN on the HP 1910; here we are simply telling the switch that a VLAN with this ID exists.

4. Assign Ports to Your VLANs:

We now need to tell the ports which VLANs they apply to. To do so, go to Network > VLAN > Modify Port:

Network > VLAN > Modify Port; Assign a port as untagged and enter the applicable VLAN ID.

Select a port. Since the devices we’re going to connect to our switch aren’t “VLAN aware”, we are going to use an untagged port (meaning no VLAN tag is added outbound on the port). In the VLAN ID box, specify the VLAN that the device connected to that port should be on.

5. Create VLAN Interface:

If we were going to use this switch as a simple L2 switch with our VLANs routed by our pfSense router (known as a “router on a stick” configuration), we wouldn’t need to complete this step. Creating the VLAN interface is what tells our HP 1910 switch to enable L3 routing. To create our VLAN interface, go to Network > VLAN Interface > Create:

Creating a VLAN Interface on the Switch. This tells the switch to route VLAN traffic and by assigning the switch an IP address, we are also defining the gateway for that VLAN.

Input the VLAN ID you created in the previous step and select Manual for the IPv4 address (unless you have an external DHCP server you want to use). The IPv4 address you specify here will be the IP for the switch on this VLAN. Since we’re using it as an L3 switch, this address will also be the gateway address for any devices you connect to this VLAN.

Don’t forget to click “Apply” and save often.

That’s it!

Don’t forget to assign the correct IP addresses and gateway for the devices you have connected to the switch. As a reminder, they should be in the same subnet as the VLAN specified on that port, and their gateway should be the IP address of the switch itself on that VLAN (the one we specified in step 5). Example: if the VLAN interface is defined as, say, (worded another way,, with a subnet mask of, then your devices’ IP addresses should fall within the 192.168.20.x range.

You may also have to change your pfSense firewall rules on the interface that the L3 switch is plugged into. For example, you likely have a rule that allows traffic originating from LAN net to anywhere. For our traffic to reach the internet, you would need to reconfigure this rule to allow traffic originating from anywhere to anywhere, since our VLANs lie within a separate subnet.

July 4

Weather Station Project: ESP32 with LoRa Telemetry

Heltec ESP32 with LoRa radio

Monsoon season is rapidly approaching. Last year my neighbors were heavily affected by the weather when we received over a foot of rainfall in just a few hours. Flooding was well over 8 feet in some low-lying areas. My neighbors down the road park under their apartment complex, and over 30 of their vehicles were lost during the storm.

In an effort to help my neighbors, I am working on an early warning storm detection system. Ultimately, I have two separate projects in mind: a weather station and a storm warning sentinel. As alluded to in my previous post on project management, I have split the ultimate goal of the early warning detection system up into the two separate projects. The prerequisite steps learned in the weather station project will enable the storm warning sentinel project.

Today marks an exciting day for The Engineer’s Workshop. Today, I am kicking off a new feature of this blog- a recurring series of projects. Each project will start with a design overview followed by a timeline of the intermediate steps devised to build the project. Subsequent posts for the project will then be dedicated to each intermediate step. Let’s get started on building our weather station!

The first step you should take on any engineering project should be researching the design. Time spent designing your project up front will pay dividends later. When I design an engineering solution, I start off with clearly defining the goal. After that, I define the requirements/criteria. With microcontroller projects such as this, I also explicitly list the inputs and outputs involved so that I don’t forget them in my design. In the case of this weather station project, I came up with the following design overview:

Outdoor Weather Station:

Goal: Create an outdoor weather station to detect ambient weather conditions and transmit them to a listening station. The listening station will record the data to a SQL table, which will then be made visible via an internet dashboard.

Criteria:

- Weatherproof
- Self-powered
- Wireless radio transmission

Inputs:

1) Temperature
2) Barometric pressure
3) Humidity

Outputs:

- Transmit data to listening station over LoRa radio.
Weather station design document

I am a visual thinker so I also like to sketch out a rough high-level design:

Weather monitoring station transmits to listening station over LoRa which then saves off the information to my Web/SQL server.

The design I’ve come up with is that I will have an outdoor weather monitoring station which will transmit (over LoRa) to a listening station I have indoors. The listening station will be connected to a web/SQL server via wifi. Any information it receives from the weather monitoring station, it will turn over to the SQL server for storage.

With the above design for my weather station, the project checkpoints easily reveal themselves:

Project plan for weather station

Breaking down the project, we see the following steps are necessary:

  1. Create a web dashboard to display the data
    • For this, I have chosen to use Django
  2. Interface basic sensors with ESP32 for the Weather Monitoring Station
  3. Interface listening Station with SQL Server to log data
  4. Connect Weather Monitoring Station to Listening Station via LoRa
  5. Polish Django dashboard

Stay tuned! In subsequent posts, we will build out each objective in turn.

July 3

Project Management and Buzz Aldrin’s Race Into Space

Buzz Aldrin's Race Into Space

As we approach the 50th anniversary of the lunar landing, I wanted to reflect on this landmark event and what we can learn from it. Back when I was about 8 years old, I used to play a strategy game called Buzz Aldrin’s Race Into Space (“BARIS”). As you may be able to guess from the name, you are placed in the role of the Administrator of NASA or the Soviet space program, and the goal is to beat the other side to the moon. It was a great game and it’s what first gave me my passion for engineering (that and my grandfather, who was a NASA engineer). To this very day, I look back on the game with fond memories. (If you’re interested in playing the game yourself, it’s available for free on Windows, Mac OS X, and Linux, and it’s even been ported to Android.)

I didn’t realize it at the time, but it turns out that while I was having fun playing this game, it was also teaching an 8-year-old me about project management. As the Administrator of NASA, in order to successfully put a man on the moon, you had to come up with a strategy for doing so. From an early age, I learned that the key first step was to clearly identify the end goal. In BARIS, this was obviously to achieve a manned lunar landing (and, more importantly, return the crew alive- the game heavily penalizes you for failures). Beginning with the end goal in mind, I learned that the best way to achieve this goal was to split the project up into smaller, more manageable steps. Most importantly, each of these steps (or milestones) had to be real objectives- i.e. not subjective; they had to be tangible. Actionable. Subjective goals are too easily hand-waved; with subjective goals, it’s easy to trick yourself into thinking you’ve accomplished something when in reality you haven’t. There’s a reason space missions call these milestones objectives.

In BARIS, this project management strategy plays out something like this:

End goal: Achieve a manned lunar landing.

Break down this goal into smaller objectives:

  1. Get into space in the first place by first launching a satellite which tests out your rocket
  2. Once you have your rocket, make sure you can keep men alive in space. Develop a module program and perform a manned suborbital followed by a manned orbital.
  3. Learn how to keep men alive outside of the spacecraft. Perform an EVA during one of these orbital missions.
  4. Now that you have a rocket and can keep men alive in space, you need a spacecraft that can land on the moon. Begin development of a lunar lander.
  5. You now need a way to connect your lunar lander to your command module. Start testing manned docking missions.
  6. Perform a lunar flyby.
  7. Finally, you’re ready to land. Complete the goal and attempt a lunar landing mission.

In addition to making the project manageable, heck achievable at all, each objective incrementally builds on the next and in doing so also helps to test and improve the reliability of those previous objectives. It gives you confidence in your product.

I use this same project management approach I learned from playing BARIS all those years ago whenever I take on a new project, whether it’s for work or one of my hobby projects. Over the next few weeks, I will be launching a new project. We will start with an overview, breaking the project down into objectives, and then we will build out each objective together in subsequent posts. My goal in doing so won’t be just to teach you the technical details for each stage of the project, but, most importantly, to teach you how to think about engineering projects so that you can design your own without having to follow the cookie-cutter recipes that so many other projects online rely on.

I have attached the project management template board I use on Trello:

Trello engineering project template
Trello board for breaking down project into manageable pieces

Here’s a rough overview of how I use this board. I first identify all the intermediate stages of the project and place them under the “Stages” list. If one of those stages involves something I don’t yet know how to fully do, I move it to the “Research” list. Once I start active development on a stage, I move it to “In Progress”. When I think I have a working prototype, I move it to the “Testing” phase. Finally, once it’s completed, I move it to the “Complete” list.

Sometimes you run into something you weren’t expecting- maybe you found out one of your intermediate stages is more complex than you initially thought. In that case, I will break down the intermediate stage into simpler stages and add them to the list. Other times, you may run into problems that force you to put an objective on hold (for example, receiving the wrong part), for that, I have the “On Hold” list.

Feel free to copy the board and use it for your own projects. I promise it will help make your projects much easier. By forcing you to methodically plan out your design, you’ll find that your projects will go much smoother.

June 19

Preparing Your Server for Vacation – The Unattended Server Access Checklist

Unattended Server Checklist showing steps of 1) Check VPN connection 2) Reverse SSH tunnel backup 3) VNC available on backup devices 4) Test automatic server shutdown after loss of utility power 5) Test server WOL from one of the backup devices.

I am about to embark on a much-needed vacation this Friday. I travel relatively frequently for work and something that has always been a stressor in the back of my mind was leaving my homelab unattended. With my luck, as soon as I lock my door and get in the car, the server will wait for that exact minute to go down. The way I have managed this problem is by creating a checklist to ensure that, no matter where I am, I can access my network. Stuff goes wrong, that’s the nature of any real work, but I can fix anything so long as I have access. Best of all, this checklist can save you from having to call your wife and tell her to push a button on your computer!

In this post, I give you the checklist I use for unattended server access. In future posts, I will create how-to’s for implementing these features.

1) Check VPN Connection:

VPNing into my DMZ is my primary method of access for resources on my network when I am not at home. The advantage of being able to connect to my DMZ via VPN is enormous, since it allows me to act just like another client on the network.

2) Check Reverse SSH Tunnel Backup (x2):

Let’s say my VPN server has gone down (in either a controlled or uncontrolled fashion); in that situation, my primary means of unattended access has gone with it- I’d be locked out of my own network. The Army has a phrase for this: “One is none, two is one, three is better.” Since we have currently only discussed one way of accessing the network remotely, applying the military’s logic, I have no way into my network to fix it; therefore, I need a backup. This is where a reverse SSH tunnel comes to the rescue. You can either create one manually or use a 3rd party service.
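For the manual route, the tunnel boils down to a single ssh invocation (the hostnames, usernames, and port below are placeholders): a box inside the network keeps a connection open to an outside VPS, which then forwards a port back in.

```shell
# Manual reverse tunnel sketch (placeholder names). Run from a machine inside
# the network: -N opens the tunnel without running a remote command, and
# -R forwards the VPS's port 2222 back to this machine's sshd on port 22.
tunnel_cmd='ssh -N -R 2222:localhost:22 tunneluser@vps.example.com'
# Then, from anywhere on the internet, you get back in with:
reach_cmd='ssh -p 2222 homelab-user@vps.example.com'
echo "$tunnel_cmd"
```

In practice you’d wrap the tunnel command in something like autossh or a systemd unit so it reconnects on its own.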

I use’s connectd. With it, I can SSH into another device on my network (keeping with the idea of redundancy for critical systems, this device should not be the same one running your VPN server) and do what I need to do from there. (See WOL below).

3) VNC Server Available on Backup Devices:

My router uses a GUI, so being able to spin up a VNC server on demand from the SSH connection above is necessary.

4) Test Automatic Server Shutdown When Running on Backup Power (UPS):

I have an APC UPS directly connected to my server over USB. In the event of a power failure, the UPS tells my server that we are no longer on utility power and allows it to shut down gracefully. Testing is essential here since it not only checks that this feature is still functional, but also serves as a check on the UPS’s battery life.
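As a sketch, if you wire this up with apcupsd (just one common way to do it; the thresholds below are illustrative, not my actual settings), the relevant pieces of /etc/apcupsd/apcupsd.conf look something like:

```
# Talk to the APC UPS over USB
UPSCABLE usb
UPSTYPE usb

# Begin a graceful shutdown when battery charge drops below 5%,
# or when an estimated 3 minutes of runtime remain, whichever comes first
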

5) Test Server Wake-on-LAN (WOL) from Backup Device:

Unlike my other network devices (router, hardware firewall, the above backup devices, etc.), my server is set up NOT to automatically restart upon the restoration of utility power. Since power outages can often include brief periods of power restoration, I don’t want the server to continuously start up only to lose power again. Therefore, after a graceful shutdown, I want the server to stay down until I bring it back up. I accomplish this via a WOL message (“magic packet”) sent from one of the backup devices to the server.
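For a sense of what that magic packet actually is: 6 bytes of 0xff followed by the target’s MAC address repeated 16 times, 102 bytes in all. The MAC below is a placeholder; in practice a tool such as etherwake or wakeonlan builds and broadcasts this for you.

```shell
# Sketch of a WOL "magic packet" payload as a hex string (placeholder MAC).
mac="aa:bb:cc:dd:ee:ff"
hex=${mac//:/}                      # strip colons -> 12 hex chars
payload="ffffffffffff"              # 6 x 0xff synchronization stream
for i in $(seq 1 16); do            # then the MAC, 16 times over
  payload+="$hex"
done
echo "${#payload}"                  # 204 hex chars = 102 bytes
```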

June 11

Django Error Silver Bullets

I’ve recently been teaching myself Django. I was working on creating a new web app to handle the backend for some IoT projects I’ve been working on and recently came across two errors. In case anyone else comes across them, I will post the silver bullets below to hopefully save you some time.

1. no such table: main.auth_user__old

This was an unhandled exception that occurred whenever I tried to POST data from the Django admin:

Request Method: POST
Request URL:
Django Version: 2.1.1
Exception Type: OperationalError
Exception Value: no such table: main.auth_user__old
Exception Location: /home/engineer/anaconda3/envs/djangoEnv/lib/python3.5/site-packages/django/db/backends/sqlite3/ in execute, line 296
Python Executable: /home/engineer/anaconda3/envs/djangoEnv/bin/python
Python Version: 3.5.6
Python Path: ['/home/engineer/Projects/WEATHERSTATION/weathersite', '/home/engineer/anaconda3/envs/djangoEnv/lib/', '/home/engineer/anaconda3/envs/djangoEnv/lib/python3.5', '/home/engineer/anaconda3/envs/djangoEnv/lib/python3.5/plat-linux', '/home/engineer/anaconda3/envs/djangoEnv/lib/python3.5/lib-dynload', '/home/engineer/anaconda3/envs/djangoEnv/lib/python3.5/site-packages']

Solution: This problem is described in Issue #21982. It has since been fixed. Since I use Anaconda (“conda”) as my package manager, the simple fix for me was to just update my Django environment to the latest version.

2. ModuleNotFoundError: No module named ‘sqlparse’

I thought I was home free at this point now that I had updated my Django environment to the latest version in conda, but alas that was not the case. This time, I received the following error:

Traceback (most recent call last):
  File "", line 15, in <module>
  File "/home/engineer/anaconda3/envs/djangoEnvNew/lib/python3.7/site-packages/django/core/management/", line 381


File "/home/engineer/anaconda3/envs/djangoEnvNew/lib/python3.7/site-packages/django/db/backends/sqlite3/", line 28,
 in <module>
    from .introspection import DatabaseIntrospection            # isort:skip
  File "/home/engineer/anaconda3/envs/djangoEnvNew/lib/python3.7/site-packages/django/db/backends/sqlite3/",
 line 4, in <module>
    import sqlparse
ModuleNotFoundError: No module named 'sqlparse'

Thankfully, the traceback is helpful here: the most recent call fails with “ModuleNotFoundError: No module named ‘sqlparse'”, which is self-explanatory. To resolve this error, you simply need to install the missing module with the following command (while in your virtual environment):

conda install sqlparse

That’s it! Hopefully this helps someone else. As always, let me know if you have any questions!

June 3

Building Blocks of Programming Languages

  • Four basic building blocks of programming languages:
  1. Expressions
  2. Statements
  3. Statement Blocks
  4. Function Blocks

1. Expressions

  • Expressions in computer programming have the same definition as expressions in math: they are a combination of an operator and its operand(s). In keeping with the mathematical definition of an expression, they are well-defined, meaning that an expression must ultimately resolve to a value.
    • An operator tells the computer to perform some kind of mathematical or logical manipulation and is performed on one or more operands
      • Examples:
        • a + b
          • + is the operator; a and b are operands
        • x – 2
          • – is the operator; x and 2 are the operands
        • a < b
          • < is the operator; a and b are the operands
      • With the above examples, two operands are in play, which is why you’ll hear them referred to as binary expressions
        • As a result, binary expressions use binary operators; binary operators operate on two operands
      • Operator Classification:
        • Operators can be classified based on the number of operands they perform their operation on:
          • Unary Operators
            • Take one operand
            • Example: & (address-of operator); see Pointer tutorial
          • Binary Operators
            • Operate on two operands and are by far the most common
            • Examples: +, -, <, =, etc.
          • Ternary Operators
            • Operate on three operands
            • Example: ?: (C’s conditional operator)
        • Operators can also be classified based on the kind of function they perform:
          • Arithmetic (math) Operators:
            • i.e. Operators that perform math
            • Examples: +, -, /
          • Relational Operators:
            • Compare the values of two operands
            • Examples: >, <, ==
            • Return/resolve to a boolean: true (1) or false (0)
          • Logical Operators:
            • Combine logical expressions
            • Examples: && (AND), || (OR)
            • Return/resolve to a boolean: true (1) or false (0)

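To make the vocabulary concrete, here is a small C sketch (the function and variable names are mine) showing each operator class resolving to a value, as the definitions above require:

```c
/* Each expression below resolves to a value: arithmetic operators yield
 * numbers, while relational and logical operators yield 1 (true) or
 * 0 (false). */
int expression_demo(void) {
    int a = 4, b = 7;
    int sum       = a + b;                /* arithmetic: binary + on operands a and b */
    int isSmaller = a < b;                /* relational: resolves to 1 or 0           */
    int inRange   = (a > 0) && (b < 10);  /* logical: combines two relational expressions */
    return sum + isSmaller + inRange;     /* 11 + 1 + 1 = 13 */
}
```
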
2. Statements:

  • Statements are syntactically complete instructions
    • In C, syntax dictates that all statements end with a semicolon (this semicolon is known as a statement terminator)
  • Example:
    • Variable assignment:
      • a = 4;
        • This is a syntactically complete instruction; note how it is simply an expression (a = 4) consisting of the assignment operator (=) with the operands a and 4, and terminating with a semicolon as required by C. By syntactically correct, we simply mean that the instruction complies with the rules of the language (i.e. syntax).

3. Statement Blocks:

  • Statement blocks group statements together so they act like a single statement (i.e. the statements act together as a block)
  • In C, statement blocks start with { and end with }. In Python, statement blocks are controlled by indentation. This is why whitespace matters in Python but not in C.
    • The code inside the statement block is known as the statement block body
  • Example:
Example of statement block
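
As a minimal C sketch of the same idea (the function and variable names are just for illustration):

```c
/* The braces bound a statement block that acts as the single body of the
 * if. In Python, indentation would do the grouping instead of braces. */
int clamp(int value, int limit) {
    if (value > limit) {  /* statement block starts at { */
        value = limit;    /* statement block body        */
    }                     /* ...and ends at }            */
    return value;
}
```
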

4. Function Blocks:

  • Function blocks are blocks of code that accomplish a single task
  • Functions allow you to reuse code so that if you need to do the same thing multiple times, you simply call the function by name wherever you need it; you separately define the function (with its function block) elsewhere in your code
    • This makes it easier to maintain your code since, if you need to update your function, you only have to update the code once in the function itself and not multiple times wherever the function was called
  • Functions also help when working on big projects with multiple developers by acting as a black box
    • Good functions act like a black box in the sense that you don’t have to waste your time, brainpower, or memory knowing exactly how the code inside the function (its function block) works; you just have to know that for a given input, you get a given output
    • This is also the basis of the idea behind libraries: you don’t have to know exactly how something is done. You just call the function (written by someone else) from the library; this forms the very basis of abstraction, which allows for collaboration
  • Functions also help with code readability; instead of having to mentally parse multiple lines of code, you can look at the function name (like verifyPhoneNumber() ) and know that it verifies the phone number.
    • Example:
      • int addNum(int a, int b) { /* function block here */ }
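
Expanding that one-liner into a complete function (the names are illustrative):

```c
/* addNum's function block performs one task: adding two ints. Callers
 * reuse it by name instead of repeating the addition everywhere. */
int addNum(int a, int b) {
    int total = a + b; /* the function block body */
    return total;
}
```

Anywhere you need a sum, you call addNum(2, 3) rather than rewriting the logic, and if the logic ever changes, you only update it in one place.
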
May 4

How to Install Fritzing and Fix Missing Dependency Error Messages Using Symlinks:

In our previous notebook entry, we completed our exploration into the I2C protocol and implemented an external EEPROM for the Arduino. In that post, I have a wiring diagram that was created using an app called Fritzing. In this tutorial, I will explain how to install Fritzing on Ubuntu as well as how to resolve the following missing dependency errors that I was greeted with when I first installed it:

  • /usr/share/fritzing-0.9.3b.linux.AMD64/lib/Fritzing: error while loading shared libraries: cannot open shared object file: No such file or directory
  • /usr/share/fritzing-0.9.3b.linux.AMD64/lib/Fritzing: error while loading shared libraries: cannot open shared object file: No such file or directory

1. Download Fritzing:

Begin by downloading Fritzing available here:

2. Unpack the .tar to a convenient directory:

Follow the directions for the install on that same page. I extracted the .tar to my /usr/share/ directory. You may have to run as sudo to do this.

3. Navigate to the directory where you extracted your Fritzing tar and try to launch Fritzing:

Fritzing extracted to /usr/share/. Launching Fritzing using ./Fritzing

If it launches, great! But it probably won’t, and will instead fail with the following error:

/usr/share/fritzing-0.9.3b.linux.AMD64/lib/Fritzing: error while loading shared libraries: cannot open shared object file: No such file or directory

4. Fix the “error while loading shared libraries: cannot open shared object file: No such file or directory” error:

This message tells us that Fritzing is missing a dependency, specifically the shared library named in the error. Now, this is a very common library, so it’s highly probable you already have it somewhere. Let’s find it with the Linux locate command:


Running this command should give you a list of locations that have this library. As you can see, I have quite a few duplicates of it:

locate command with locations of

Now that I know that I have the library and I know where it’s located, we can use a powerful trick available to us because we’re on Linux- the symlink. We’ve discussed symlinks (or “symbolic links”) before when I discussed how to create a separate Plex library that allows you to selectively share content. In short, symlinks are Unix’s equivalent of a “shortcut”. You can create a symbolic link to another file (or directory) and Linux will treat that shortcut just like it’s really there.

That’s exactly what we’re going to do here. So let’s create a symbolic link by picking one of the paths from our locate command above (it doesn’t really matter which one).

First, make sure we’re in the lib directory of your Fritzing directory:

cd ./lib

How do I know this is where we want to be? Well, it was in the first part of the error message:

/usr/share/fritzing-0.9.3b.linux.AMD64/lib/Fritzing: error while loading shared libraries: cannot open shared object file: No such file or directory

Now, create your symbolic link:

ln -s [path to directory you found above with your locate command] ./

In my case, I used:

ln -s /snap/core18/941/usr/lib/x86_64-linux-gnu/ ./
Creating symbolic link (“symlink”) to fix dependency error message.

Now, try to launch Fritzing again. In my case, I was greeted by a new error:

/usr/share/fritzing-0.9.3b.linux.AMD64/lib/Fritzing: error while loading shared libraries: cannot open shared object file: No such file or directory

I am always excited to see a new error. It means I actually fixed something and now I get to move on to something else that’s broken!

5. Fix the “error while loading shared libraries: cannot open shared object file: No such file or directory” error:

Again, we’re going to start by finding where the missing dependency exists:


Once we have a viable library location, we’re going to create that symlink to point to it:

ln -s /snap/core18/941/usr/lib/x86_64-linux-gnu/ ./lib/

Note that I made a mistake when I originally did this in my screenshot: I ran the above command from the main Fritzing directory, which is why the output path should have been ./lib/ as I show above. If you’re already in the ./lib/ directory, you can just use ./ as the output path, as we did for the first symlink.

locate command with locations of; followed with fix by creation of symlink

6. Rinse and repeat.

In my case, running ./Fritzing finally launched Fritzing, but you may have other dependencies that need addressing. Now that you know how to fix these missing dependencies, this shouldn’t be too much of a problem. Enjoy!

As always, feel free to ask me any questions about any problems you run into.

Category: Linux
April 28

Expand Your Arduino’s Storage with an External EEPROM Part II: Reading from the AT24C256 – A Tutorial in How to Use the I2C Protocol Continued

Wiring Diagram Showing Connections between AT24C256 and Arduino

We first began our journey into learning the I2C protocol three weeks ago. In that post, we learned to write to an external EEPROM over the I2C protocol using nothing more than a datasheet and the Arduino’s built-in Wire library. Before learning to read from that EEPROM, which we will do today, we needed to gain the prerequisite knowledge of how data is stored in memory and how pointers work. From there, we learned how the data stored in these variables is passed along through to functions and what an array really is.

It’s been a daunting few weeks, but we’re finally there. Let’s read the data we wrote to our EEPROM armed with nothing more than the datasheet and the I2C protocol.

1. Review the Datasheet

Using the same strategy as before, we look for the command we’re interested in on the datasheet. Since we last wrote to the EEPROM using a page write, it should be pretty easy to guess that to read that same data back, we probably want a page read (also known as a sequential read).

Going to the Sequential Read section of the datasheet, we’re given the following description:

SEQUENTIAL READ: Sequential reads are initiated by either a current address read or a random address read. After the microcontroller receives a data word, it responds with an acknowledge. As long as the EEPROM receives an acknowledge, it will continue to increment the data word address and serially clock out sequential data words. When the memory address limit is reached, the data word address will “roll over” and the sequential read will continue. The sequential read operation is terminated when the microcontroller does not respond with a zero but does generate a following stop condition (see Figure 12 on page 12).


From here, the code practically writes itself; we just need to follow the directions Atmel has given us.

2. Write the Preamble:

Per the datasheet: “Sequential reads are initiated by either a current address read or a random address read.” Well, we wrote our data to a specific address and want to read from that same address, so we’ll initiate the sequential read using the random read. Referring to the Random Read section of the datasheet:

RANDOM READ: A random read requires a “dummy” byte write sequence to load in the data word address. Once the device address word and data word address are clocked in and acknowledged by the EEPROM, the microcontroller must generate another start condition. The microcontroller now initiates a current address read by sending a device address with the read/write select bit high. The EEPROM acknowledges the device address and serially clocks out the data word. The microcontroller does not respond with a zero but does generate a following stop condition (see Figure 11 on page 12).

Random Read Preamble: dummy byte write with “the device address word and data word address are clocked in and acknowledged by the EEPROM”

The above-boxed section represents the “device address word and data word address are clocked in and acknowledged by the EEPROM” portion. We’ll start by coding this section. Thankfully, we’ve already done it in the Page Write post- it’s everything in section 2:

Wire.write(0b0000000); // 7 bits of 0s; this method takes a byte though so it will still transmit a byte's worth of 0s.

Boom. Header done.
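
One detail worth spelling out: the 256Kbit AT24C256 takes a two-byte data word address, clocked in high byte first, so the dummy write consists of two Wire.write() calls. The byte arithmetic behind those calls can be sketched in plain C (the helper names and the example address 0x0123 are mine):

```c
/* Split a 16-bit EEPROM word address into the two bytes clocked in
 * during the dummy write: high byte first, then low byte. */
unsigned char word_addr_high(unsigned int wordAddr) {
    return (unsigned char)((wordAddr >> 8) & 0xFF);
}

unsigned char word_addr_low(unsigned int wordAddr) {
    return (unsigned char)(wordAddr & 0xFF);
}
```

On the Arduino side this would look something like Wire.write(word_addr_high(addr)); followed by Wire.write(word_addr_low(addr));.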

3. Read the Data:

RANDOM READ: A random read requires a “dummy” byte write sequence to load in the data word address. Once the device address word and data word address are clocked in and acknowledged by the EEPROM, the microcontroller must generate another start condition. The microcontroller now initiates a current address read by sending a device address with the read/write select bit high. The EEPROM acknowledges the device address and serially clocks out the data word. The microcontroller does not respond with a zero but does generate a following stop condition (see Figure 11 on page 12).


The start condition (with sending a device address) is common to the I2C protocol and is thankfully handled by the Wire library with the simple Wire.requestFrom method, where the first argument is the device address and the second argument is the number of bytes to request from the EEPROM:


Since I only wrote two bytes “Hi” to the address, I’m only requesting two bytes back.

Now to actually read the data, we’ll use a simple while loop:

while(Wire.available()) {
	Serial.print((char)Wire.read()); // read each received byte and print it as a character
}

Putting together the full code in its entirety from the past two tutorials:

C source code for Arduino implementing the I2C protocol with an EEPROM.
Source code for both writePage and readPage.

Here is what she looks like on the Serial Monitor:

Serial output showing EEPROM read and write.

In the future, we’ll eventually revise this code to make it more versatile by putting it in a library.

April 27

Pointers, Arrays, and Functions in Arduino C

Array Memory Diagram

Now that we’ve completed our introduction to pointers, I had really wanted to move on and wrap up our section on using an EEPROM with the I2C protocol today. However, I feel like I would be doing a disservice to you without elaborating further on why we would even want to use pointers in the first place.

Just to recap, let’s look at some simple code to demo the syntax of using a pointer:

int myVar = 10;
int *myPointer;
myPointer = &myVar;
*myPointer = 20;

If you were to compile this code and run it, you would see that at the end myVar’s value is 20, even though we never set myVar itself to 20. We accomplished this by pointing our pointer, myPointer, at the memory address of myVar using the reference operator (&). We then dereferenced myPointer with the dereference operator (*) and set the value at that address to 20.
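
Wrapped up as a complete function you can compile and check (the function name is mine):

```c
/* The same four lines, packaged so the result can be verified: myVar
 * ends up as 20 even though it is only ever assigned through the
 * pointer. */
int modify_through_pointer(void) {
    int myVar = 10;
    int *myPointer;
    myPointer = &myVar; /* reference: store myVar's address      */
    *myPointer = 20;    /* dereference: write through the pointer */
    return myVar;       /* returns 20 */
}
```
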

Now, the obvious question you probably have is, “Why in the heck would I want to do that?”

The example above is more of a toy, obviously contrived, but there are very real reasons why you would want to do this, especially when you’re running a microcontroller like the Arduino and you have to handle a lot more low-level operations. To see the value in pointers, you’ll first need to know something about how functions work in C.

I want to keep this explanation of functions at a high-level to keep the concepts easy to understand. For now, just know there are two ways to call a function: by value and by reference.

Function Call By Value:

Pass by Value lvalue-rvalue diagram
Pass By Value- Note how the rvalues are copied

We’ll start with the easiest one- easiest because it’s the one you’re most familiar with; you’ll see that a function call by reference isn’t particularly difficult either.

int sum(int x, int y) {
	int sumZ = x + y;
	return sumZ;
}

void main() {
	int a = 2;
	int b = 3;
	int sumAB = sum(a, b);
}

Start by reading this kind of code like a computer would: with the main function. We set a = 2 and b = 3; we then get sumAB by calling the function sum(a,b). When we call that function, we replace a with the value of a (i.e. 2) and b with the value of b (i.e. 3), so we could just as easily write int sumAB = sum(2, 3);

In the sum function we created, we set x = 2 and y = 3 inside the function due to the above arguments that have been passed to it. This is called passing by value because we’ve merely passed the values of the variables. Inside the function those values passed to it (the values of a and b in this case) are copied to its variables (x and y in our example).

You’ll see this can actually be a problem if we wanted the function to actually do something with those original variables (like change them). The function that has had its arguments passed by value can’t actually change the values of the arguments themselves because all it has access to is the function’s own copy of those values (i.e. x and y have a different lvalue from a and b even though their rvalues are the same).

So yeah, it can change the values of x and y, but it doesn’t affect the values of a and b because they reside at a different location in memory. Additionally, the values of x and y cease to exist as soon as we exit the stack (i.e. right after we return sumZ).

Thankfully, there’s a way around this: enter call by reference.

Function Call by Reference:

Pass by reference lvalue-rvalue diagram
Pass by reference: Note how the lvalue (memory address) is copied

So what if we want to actually change the parameters that we are passing to a function? Or what if we simply want to return more than one value? How can we escape the box that is the function’s call stack? That’s where call by reference (pointers) comes to the rescue.

void addOne(int *numA) {
	*numA = *numA + 1;
}

void main() {
	int varA = 15;
	addOne(&varA);        // pass the memory address of varA
	Serial.println(varA); // prints 16
}

Let’s again think about what’s happening here. We are taking a variable, varA, and extracting its memory address (its lvalue) with the reference operator, &. We are then passing that memory address in for the parameter numA in the function addOne. You can think of this as being the equivalent of declaring and initializing the pointer: int *numA = &varA. Inside the function, we are given direct access to the value stored in varA by dereferencing our numA pointer. The result of this program is that the console prints 16, the value now stored in varA.

In a much more general (and I dare say enlightened sense), another way you can think of this is that in a way, this really is the same as call by value where we simply pass the rvalue of our parameter off to the function’s internal variables. The key difference is that for a pointer, its rvalue is simply the memory address of what it points to, therefore a memory address is what gets copied to the function!

Advanced Topic: This is the perfect opportunity to introduce this. In programming, particularly the C family of languages, there are two distinct categories of variables: value type variables and reference type variables. Now that you’re an expert on function call by value, function call by reference, and pointers, you can appreciate where the terms come from. Value type variables are variables where the rvalue stores an actual value (like an int storing the value 10). Value type variables tend to be associated with the “primitives”- primitive variables like int, char, byte, etc. Reference type variables store a memory address in their rvalue.

Pointers and Arrays:


You’re undoubtedly familiar with the usual way of looping over an array, where we simply increment the index:

Standard for loop over an array’s index

But what if I told you, there was another way? Take a look at the following code:

Looping over an array using pointers
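
Off the Arduino, the two loops can be sketched and compared in plain C (the array contents and function name here are illustrative):

```c
/* Visit the same array twice: once with the usual index syntax and once
 * with pointer arithmetic. Both read the same elements, so the two sums
 * agree. */
int loops_agree(void) {
    int numArray[4] = {3, 5, 7, 9};
    int sumIndex = 0, sumPointer = 0;
    for (int i = 0; i < 4; i++) {
        sumIndex   += numArray[i];     /* standard index loop     */
        sumPointer += *(numArray + i); /* pointer-arithmetic loop */
    }
    return sumIndex == sumPointer;     /* 1 when the sums match */
}
```
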

They have the exact same output!

Serial output showing memory address and value of an array.

But why?! How?!

Look closely at the line where we print our value: Serial.println(*(numArray + i)); That looks like a pointer, doesn’t it? Well, that’s because it is. Let’s dissect this a little more and look inside the parentheses. We know i is an int, so as this loop progresses we’re adding +0, +1, +2, etc. What does that tell us about numArray then? Well, that means it has to be a number of some sort for the addition operation to make any sense. But what kind of number? Well, we know this is a pointer, so it must be what? That’s right, numArray is an address!

Now that you understand how pointers work, you now understand the implications of what this means. It means that when you define an array, what you’re actually doing is defining a pointer. The name of an array itself, such as numArray, is actually a memory address! If you read that advanced topic blurb above, the implication is that arrays are actually reference type variables and are therefore inherently passed by reference when used in functions.

Before we close this page of the notebook, I want to highlight a “gotcha”. Let’s say numArray had a memory address of 2288, as it apparently does from my screenshot above. If i = 1, why is the address on the second iteration of the loop 2290 and not 2289? The reason is how the compiler handles pointers. You see, when you define the array initially, the compiler is smart enough to allocate the memory based on the size of the data type used. In our case, we used ints, which, in Arduino C, are two bytes long. When you increment a pointer, the compiler is smart enough to multiply the increment by the size of the data type to get the next memory address. Therefore we start at 2288, and the next memory address for our next item in the array is 2290, followed by 2292, 2294, and so on:

Array memory diagram showing memory addresses.
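
You can check this scaling behavior directly in C. On the Arduino an int is two bytes, so the step is 2; on a desktop it is usually 4, which is why the sketch below (function name mine) measures against sizeof(int) rather than hard-coding a number:

```c
#include <stddef.h>

/* Measure, in raw bytes, how far numArray + 1 is from numArray. The
 * compiler scales pointer arithmetic by sizeof(int), so the distance is
 * one whole element, not one byte. */
ptrdiff_t pointer_step_in_bytes(void) {
    int numArray[3] = {1, 2, 3};
    return (char *)(numArray + 1) - (char *)numArray;
}
```
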