
Why I Ditched My VM NAS and Went Bare-Metal (And You Should Too)

From Fragile VM to Bulletproof Bare-Metal NAS

My Jellyfin server used to forget movies. Random ones wouldn’t show up, or new ones would vanish into the ether. Reboot Proxmox, and poof, they’re back. The culprit? A race condition nightmare from running my NAS in a Proxmox-hosted VM.

I tried everything: automount, systemd, UID hacks, ritual sacrifices to the filesystem gods. Best I ever got was “mostly works.” And that’s not good enough when Jellyfin’s your nightly unwind ritual.

So I ditched the VM, went bare-metal with XFS and MergerFS, and finally, finally, built a NAS that boots clean and mounts right. Every. Single. Time.

💭 TL;DR
Running your NAS inside a VM? That's fine until you're fighting race conditions you didn't sign up for. Go bare-metal with XFS and MergerFS for simplicity, speed, and rock-solid reliability. Just know you're trading Proxmox's creature comforts for predictable behavior that actually works.
Fractal Design Define 7 XL

The Define 7 XL can accommodate up to 18 HDDs/SSDs plus five additional SSDs in the Storage Layout, with flexible configurations using included multi-brackets and HDD/SSD trays.

Contains affiliate links. I may earn a commission at no cost to you.

Why VM NAS Setups Sound Sexy (But Aren’t)

Sure, it looks efficient on paper. One box. Multiple VMs. Snapshots. Live migrations. The homelab porn writes itself. But reality hits a bit different:

  • Race conditions from hell: Proxmox boots, VMs start spinning up, but your containers beat NFS to the punch. Result? Jellyfin loads with a library that looks like Swiss cheese.

  • Mounting nightmares: Getting the Proxmox host to mount NFS shares after the NAS VM boots is like herding cats. I tried automounts and failed spectacularly. Switched to systemd mounts; same story. Finally built UID-mapped folders to sidestep Proxmox’s 100000 offset nonsense (a sketch of that mapping follows this list). It worked 97% of the time, but “mostly working” storage is like being “mostly pregnant.”

  • Death by a thousand cuts: Every virtualization layer (Proxmox → QEMU → ext4/XFS → NFS → LXC) adds latency. You bleed throughput. You sacrifice reliability.

  • The debugging tax: When things break (and they will), you’re troubleshooting across multiple abstraction layers. Is it the VM? The host? The container? The mount? Good luck figuring that out on a Sunday afternoon while the family riots because they can’t watch their favorite movie or the latest episode of whatever they’re bingeing.
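
For context on that 100000 offset: unprivileged Proxmox containers shift every UID and GID by 100000, so a file written by uid 1000 inside a container shows up as uid 101000 on the host, and your NFS exports and bind mounts stop agreeing about ownership. The usual workaround is to punch a hole in the mapping. A minimal sketch, with a made-up container ID and uid:

```
# /etc/pve/lxc/101.conf (container ID and uid 1000 are illustrative)
# Default unprivileged map: container 0-65535 -> host 100000-165535.
# These lines pass uid/gid 1000 through unchanged and map everything else as usual.
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

# /etc/subuid and /etc/subgid on the host must also allow root to map 1000:
# root:1000:1
```

Multiply that by every container that touches the share and you can see why “mostly works” was the ceiling.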

“There is not much difference in performance if any at all.” – Some Reddit user who clearly never spent a weekend troubleshooting missing folders and/or files.

Try saying that after your fourth reboot hoping your media will magically reappear.

Why Bare-Metal XFS + MergerFS Actually Wins

  • No more race conditions: NFS shares mount early via proper systemd ordering (see the unit sketch after this list). Containers see their media, first boot, every boot, forever.

  • Direct I/O that doesn’t suck: XFS is battle-tested and fast. MergerFS pools drives seamlessly without the virtualization overhead tax. Your drives work at their actual speed.

  • Predictable boots: No more crossing your fingers hoping your storage VM came up in time. No more UID hacks. Just clean systemd dependencies that do what they say on the tin.
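
Here’s roughly what that ordering looks like on the container host. This is a minimal sketch, assuming a hypothetical export at 192.168.1.50:/srv/pool/media, a local mount point of /mnt/media, and Docker as the container runtime; adjust addresses, paths, and service names to your setup (on a Proxmox host you’d target pve-container@<id>.service instead of docker.service):

```
# /etc/systemd/system/mnt-media.mount (the unit name must match the mount path)
[Unit]
Description=Media share from the bare-metal NAS
Wants=network-online.target
After=network-online.target

[Mount]
What=192.168.1.50:/srv/pool/media
Where=/mnt/media
Type=nfs
Options=vers=4.2,noatime,_netdev

[Install]
WantedBy=remote-fs.target
```

```
# /etc/systemd/system/docker.service.d/wait-for-media.conf
# Containers don't start until the share is actually mounted
[Unit]
RequiresMountsFor=/mnt/media
```

Enable the mount unit (systemctl enable --now mnt-media.mount), reload systemd after adding the drop-in, and the library-full-of-holes problem goes away at the dependency level instead of with sleep hacks.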

Bonus wins:

  • Full 10GbE bandwidth (not strangled by virtio drivers)
  • Simpler storage stack = fewer things to break
  • Cleaner disaster recovery: rsync, backups, and mounts you actually understand (a sample nightly job follows this list)
  • Sleep peacefully knowing your storage isn’t playing startup roulette
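
For the disaster-recovery bullet above, “cleaner” really does mean boring. A minimal sketch, assuming the pool lives at /srv/pool and a second box is reachable over SSH as backupbox (both names are placeholders):

```
# /etc/cron.d/nas-backup (hypothetical host and paths, adjust to taste)
# Mirror the pool to the backup box every night at 03:00
0 3 * * * root rsync -aH --delete --partial /srv/pool/ backupbox:/backups/nas/
```

No hypervisor snapshots, but also nothing you can’t restore with plain cp.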

Bare-Metal vs. VM: The Real Scorecard

| Setup | Pros | Cons |
| --- | --- | --- |
| VM NAS | ✔️ Snapshots ✔️ Easy Proxmox backups ✔️ Service consolidation ✔️ Looks good in /r/homelab | ❌ NFS race conditions ❌ I/O performance tax ❌ Complex UID/GID mapping ❌ Multi-layer debugging hell |
| Bare-Metal NAS | ✔️ Predictable boots ✔️ Zero virtualization overhead ✔️ Simple, direct mounts ✔️ Full hardware performance | ⚠️ Manual backup strategy ⚠️ Extra box to power & manage ⚠️ No VM convenience features |

Power tradeoff? Absolutely. That’s why I went with a G3220. It sips power like a gentleman but handles the workload without breaking a sweat. The 10Gb NIC and HBA get to stretch their legs properly.

Build Blueprint: What Actually Works

Hardware (stuff I had lying around):

  • Intel G3220
  • Gigabyte GA-Z87X-D3H
  • 16GB DDR3
  • LSI 9300-8i HBA in IT mode
  • 10Gb NIC

Software Stack:

  • Debian Trixie
  • XFS on each drive
  • MergerFS for seamless pooling
  • NFS for rock-solid container access
  • systemd for proper mount ordering

Why this combo works: Dead simple architecture. Nothing fancy. It just boots and works, every time.
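
For the curious, the whole stack boils down to a few fstab lines and an export. A trimmed sketch with placeholder UUIDs, paths, and network ranges, not my exact config:

```
# /etc/fstab (UUIDs and mount points are placeholders)
UUID=aaaa-1111  /mnt/disk1  xfs  defaults,noatime  0 2
UUID=bbbb-2222  /mnt/disk2  xfs  defaults,noatime  0 2
UUID=cccc-3333  /mnt/disk3  xfs  defaults,noatime  0 2

# Pool every /mnt/disk* branch into one tree with MergerFS
/mnt/disk*  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs,minfreespace=50G,fsname=pool  0 0
```

```
# /etc/exports (give the container hosts access to the media tree)
/srv/pool/media  192.168.1.0/24(rw,sync,no_subtree_check)
```

Depending on your MergerFS and NFS versions you may also want an fsid= on the export and the pool options the MergerFS docs recommend for NFS; the point is that every piece is a plain text file you can read in ten seconds.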

For details on MergerFS and HBAs:

LSI 9300-8i

Already flashed to IT mode.

Contains affiliate links. I may earn a commission at no cost to you.

The Cold, Hard Truth About My Experience

I burned two entire weekends trying to make VM-NAS race conditions disappear. They laughed at my attempts.

I deployed automounts like a hopeful fool. I crafted systemd units with the precision of a Swiss watchmaker. I even hacked together UID workarounds that would make a kernel developer weep. And still, some folders only materialized after rebooting the entire Proxmox host.

Bare-metal isn’t perfect. You lose snapshots. You lose centralized VM backups. You lose the satisfaction of running everything on one box.

But you gain something precious: predictability. When you power on your NAS, it works. When containers start, they see their media. When users browse your library, the files are actually there.

One box, one job, one stack to debug. That clarity is worth the extra 20 watts, especially when your CPU barely registers on the power meter and your Jellyfin setup never misses a beat.

The Bottom Line

A VM NAS can work if you enjoy weekend troubleshooting sessions and the thrill of uncertainty. But a bare-metal NAS does work, every single time, without drama or surprise downtime.

If you’re tired of startup-order roulette, phantom mount points, and explaining to the family why half the movie collection disappeared again: go physical. Go simple. Go fast.

Your sanity will thank you. Your users will thank you. Your Saturday mornings will thank you.

Ready to Build It Right?

Ditch the VM complexity. Build your NAS properly with bare-metal XFS and MergerFS. Stop trusting virtualization layers to mount your media collection in the correct order.

I’ve got fstab configs, systemd unit files, and plenty of battle scars if you need guidance.

Just ask.

Seagate Barracuda 24TB Internal Hard Drive

Contains affiliate links. I may earn a commission at no cost to you.
