Where I'm at in my own home server escapades -- and why

I tweak my home lab a lot and try to make it more and more robust, flexible, and automated. I feel like I've done several huge upgrades over the last few weeks, so I'll lay them out right now.

10 Gigabit Networking

This is one of the things I've been pining after for ages. Faster networking seems silly when you consider that my Internet plan is nowhere near 10 gigabit speeds, so why would someone do this? Quite frankly, it's for the sake of devices interacting on my own LAN. One of the best parts of 10 gigabit networking compared to, say, 1 gig is the ability to move files around way faster. When it comes to video editing, gaming, and working with virtual machines, the speed boost is fucking huge. I use it at least every week to edit videos in DaVinci Resolve straight off the server. Speeds are around 600-700 MiB/s, around the speed you'd get from a normal SATA SSD. This lets me scrub through my gobs of massive ProRes/DNxHR footage like nothing and not have to stress about space or resilience (what happens if the NVMe drive in my PC dies?).

It was surprisingly easy to migrate. The hardest part was finding a switch that fit my budget and needs, and I'm still not super satisfied with what I got. The QNAP QSW-308S is a dumb switch with eight 1 gig RJ45 ports and three (not four, not two, THREE?) SFP+ ports that support 10 gig. It's quiet and fast and fits my needs. I have a fancy VLAN-enabled switch downstairs connected to my router, and I don't need all that functionality here. I just need to connect my home server devices and my computer. It went for around $150 at the time of purchase. It's not a BAD switch, but it's terribly ugly and does not rack mount cleanly. I needed to give up an extra "U" of space just to place it on a rack shelf. If you're looking for something cheap to get into the 10 gig space, it's not the worst option, I guess. A lot of the alternatives would end up costing more, like that MikroTik 10G switch everyone seems to love.

I'm using direct-attach copper cables as opposed to fiber, because it's far, far cheaper and easier to set up. I would have loved to use fiber, but none of the patch cables I saw were anywhere close to the right size to work with my switch. I just kind of sling the DAC cables around in an unobtrusive way and it gets the job done.

I got these Intel X520-esque 10G cards to install. They're PCIe x8 and run a little hot. However, the driver they use seems to be incredibly popular and widely supported, so I can't complain. I popped one of them into my gaming PC/workstation build and the other into my main home server. I was disappointed to see that Windows support for enterprise gear is still years behind, and that I had to go out of my way to install the corresponding driver on Windows 11 (yes, for the time being I'm using Windows). The built-in Intel i225-V chipset didn't work out of the box either, so clearly there's something strange on Microsoft's end. Linux handles them totally fine, so I'm confused.
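On the Linux side, sanity-checking the card takes all of two commands; something like the below, with the interface name being a made-up example:

    # Which kernel driver claimed the card
    lspci -k | grep -A 3 -i ethernet

    # Confirm the link actually negotiated at 10 gig
    ethtool enp1s0 | grep -i speed    # expect "Speed: 10000Mb/s"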

I attached some 40mm x 10mm Noctua FLX fans onto the heatsinks, and hopefully that keeps them relatively cool. I hate the whiny noise of small fans, so I use a low-noise adapter. Good idea? Not sure. Either way, they're reasonably quiet and let me focus on work without distractions.

Overall, I think the upgrade was worth it for my needs. However, I'm not a typical user and I can't say it's always a wise choice to upgrade your networking like this. Most people are fine with gigabit everything... hell, some people can make do with 100 Mbps.

Flash storage only!

One of the challenges with high-speed networking is that normal hard disk drives struggle to hit much more than 200 MB/s on a good day. This means they can't take advantage of the increased speed of your LAN, and you're basically leaving performance on the table.

This is why I... side-graded my mirrored-vdev HDD setup to a RAIDz1 SSD array. I know RAIDz1/RAID5 has its issues, but I have backups and know what I'm doing. I have plenty of space to move everything around with this arrangement, and the array hits around 1 GiB/s. That's not insanely fast and a normal NVMe drive dwarfs it, but conveniently it's just about perfect for 10 gig networking.
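I built the pool through the TrueNAS UI, but for reference, the CLI equivalent is roughly this (a sketch only; the pool name and the FreeBSD-style device names are placeholders, not my actual layout):

    # Four SSDs in a single RAIDz1 vdev -- one drive's worth of capacity
    # goes to parity, so usable space is roughly 3/4 of raw
    zpool create tank raidz1 da0 da1 da2 da3

For context, 10GbE tops out around 1.25 GB/s (about 1.16 GiB/s) before protocol overhead, so an array doing ~1 GiB/s lands right in the sweet spot.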

The drives I went with are Samsung PM883s. They're surprisingly cheap, considering their roots in enterprise vs. the pro/consumer market. I'm happy I jumped on the bandwagon and got four of these.

I wanted to do storage "right" this time and so I went with TrueNAS Core and an HBA.

LSI 9211-8i

Jesus CHRIST these are a bitch to work with. I didn't realize how much fiddling you have to do to get these kinds of HBAs to act as simple storage cards so ZFS can see the raw drives. I don't want its RAID functionality, I don't want the BIOS shit, I just want boring old SATA ports! The 9211-8i isn't the most power-efficient card and it's only SAS2, but my array isn't even getting close to the card's performance ceiling. I'll probably upgrade if that ever changes.

The migration process involved frantically browsing Unraid, TrueNAS, and Reddit forums for instructions on how to flash the correct "IT mode" firmware to the card. I landed on some solid instructions and the process was real simple after that. Broadcom makes it incredibly difficult to find this stuff and I hope that company explodes in a violent fire. I hate how obtuse they make it to obtain all the crap they've made. Looking at you, VMware...
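For anyone else doing this, the rough shape of the flash (from a UEFI shell, with the P20 IT firmware package for the card's SAS2008 chip) looked something like the below. Exact filenames and the sas2flash binary vary by package and boot environment, so treat this as a sketch rather than gospel:

    # Note the SAS address the card reports -- you may need it later
    sas2flash.efi -listall

    # Wipe the existing IR/RAID firmware. Do NOT reboot or power off
    # between this step and the next one.
    sas2flash.efi -o -e 6

    # Flash the IT-mode firmware; the boot ROM (-b) is optional if you
    # never boot from drives behind this card
    sas2flash.efi -o -f 2118it.bin -b mptsas2.rom

    # Confirm it now reports IT firmware
    sas2flash.efi -listall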

The HBA seems to be far more reliable and trustworthy than the Chinesium I bought on Amazon a while back. That thing is a PCIe x1 card using the ASM1166 chipset. I'm realizing I want that thing nowhere near my data. This HBA has its limitations (heat, size, setup), but it's great for peace of mind. And yes, TRIM and other stuff does work correctly with my SSDs.
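If you want to double-check that TRIM is actually happening on a ZFS pool, it boils down to a couple of commands (the pool name here is a placeholder):

    # Let ZFS trim freed blocks automatically...
    zpool set autotrim=on tank

    # ...or run a one-off TRIM and watch it from the status output
    zpool trim tank
    zpool status -t tank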

What I've done within my VM hypervisor (Proxmox) is pass through the entire HBA so TrueNAS Core can get at SMART data and all sorts of other goodies. Before, I wasn't able to do this and PCI passthrough was difficult to get working. It turns out all I needed to do was set up ACS overrides (kids, don't try that at home).

I used to pass through both the 10G NIC and my HBA to TrueNAS Core, but I'm realizing it's better to just virtualize a bridge on top of that network card. The advantage of virtualized networking is that it allows far higher sustained speeds between my VMs. Before, transferring files from the TrueNAS Core VM to, say, a Debian installation capped out at a little under 940Mbps. Now? I'm getting 13-14Gbps casually and everything is lightning fast. I highly recommend you look into a setup like this if you're considering passing through your NICs. I can't think of any downside to this approach, and brief testing showed no difference in out-of-hypervisor networking speeds. Same 10 gigabit.
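Concretely, the ACS override, the HBA passthrough, and the bridged 10G NIC come down to a few lines on the Proxmox host. This is a sketch from memory with made-up identifiers (VM ID, PCI address, interface name), not a dump of my config, and remember the ACS override genuinely weakens IOMMU isolation between devices, hence the "don't try that at home":

    # /etc/default/grub -- appended to GRUB_CMDLINE_LINUX_DEFAULT, then run update-grub
    intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction

    # /etc/modules
    vfio
    vfio_iommu_type1
    vfio_pci

    # Hand the whole HBA to the TrueNAS VM (VM ID and PCI address are examples)
    qm set 100 -hostpci0 0000:01:00.0

    # /etc/network/interfaces -- the 10G NIC slaved to a bridge;
    # the VMs get VirtIO NICs attached to vmbr1 instead of the card itself
    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

    # Quick sanity check between two VMs on the bridge
    iperf3 -s                         # on the TrueNAS Core VM
    iperf3 -c <truenas-vm-ip> -t 30   # on the Debian VM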

I give Core about 32GB of the 128GB of RAM installed in that system, and it flies. Highly recommend.

I also attached yet another Noctua fan to the heatsink here. This one seems to run hotter than the NIC. Whatever.

/foss/ /networking/ /bsd/ /freebsd/ /linux/ /tech/