Sorry man, for some reason it got installed in my datacenter. 😜
This server is running TestMy.net now and I definitely feel the difference.
Had a few networking issues over the weekend. My pfSense server bugged out on me. It was my oldest VM; maybe it had some kind of legacy setting somewhere causing problems. The backups were jacked as well, so I just deleted it and put up a new one...
Not really a big deal, though; that pfSense router only serves my internal, non-production network, and none of TestMy.net's services run through it. So not a big deal, right? WRONG! Side effects are real.
What I didn't realize: I had originally set up pfSense specifically without IPv6. The new one, with its default install, ran normally... until I got to my datacenter and reset my switches. I do this sometimes when I visit, just to get a fresh boot on them. Most things came back up normally, but TMN was down. Everything indicated it should be up; I was getting pings all over the place, as expected. So why wasn't it working?
Very stressful situation. This should have been a routine visit to quickly rack the new server. But now, nothing is working right! To make matters worse, I'm on my laptop, where I'm already not as comfortable working. I'm connected to the hotspot on my phone when I realize I only have a USB-C cable for the phone and no USB-C port on the laptop. My battery's at like 30%. So now it feels like Mission Impossible.
I was running in circles at this point, so I took my hands off the keys and thought for 5 minutes.
... then came up with...
"Default Route."
I look closer at the networking on the host and client. Then I see it: an IPv6 address assigned to an adapter that should only have an internal IPv4 address. That adapter is connected to a VLAN, so traffic was trying to route out the adapter connected to the VLAN.
The gotcha: the replacement pfSense, running a default install with IPv6 enabled, handed an IPv6 address to an adapter whose IPv4 address takes an isolated route, and that IPv6 address brought a new default route with it. Another fix is to tell the adapter "never use as default route".
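If you want to spot this kind of surprise quickly, here's a minimal Linux-only sketch (my own, not anything from pfSense) that lists IPv6 addresses per interface by reading `/proc/net/if_inet6`. A link-local `fe80::` address is normal; a global-scope address on an adapter that's supposed to be IPv4-only is the red flag.

```python
# Linux-only sketch: list IPv6 addresses per interface so a global
# address on a supposedly IPv4-only adapter stands out.
def ipv6_addresses():
    addrs = {}
    try:
        with open("/proc/net/if_inet6") as f:
            for line in f:
                # fields: address, ifindex, prefixlen, scope, flags, name
                hex_addr, _idx, _plen, scope, _flags, ifname = line.split()
                addrs.setdefault(ifname, []).append((hex_addr, scope))
    except FileNotFoundError:
        pass  # IPv6 disabled entirely, or not Linux
    return addrs

if __name__ == "__main__":
    for ifname, entries in ipv6_addresses().items():
        for hex_addr, scope in entries:
            # scope "00" = global (the suspicious case), "20" = link-local
            tag = "GLOBAL" if scope == "00" else "scope " + scope
            print(f"{ifname}: {hex_addr} ({tag})")
```

Anything tagged GLOBAL on an adapter you never gave IPv6 means something on that segment is handing out addresses.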
A different VM had a spare network adapter that was never configured; it had been sitting like that for years. When this happened, that adapter did DHCP, got an IP, and the VM started default routing out of it. In that case it was an IPv4 address. Same kind of issue, a little different.
So I worked out a few kinks. When it came down to it, lambo joined right into the cluster, replicated my VMs and migrated them without issues.
The lesson for me: when you're flustered... take a step back and think. The answers often surface.