Build your own NAS

During the past two years, my homelab underwent many changes.
At some point, I had a crazy setup with four ESXi hosts and three TrueNAS-based storage servers.
The electricity bill was a shocker.

My next step was to swap the TrueNAS boxes with three Qnap arrays.
Those guys make a lot of noise, so I returned them.

In early 2022 I downsized a lot and ran a single but powerful TrueNAS SCALE box in hyper-converged mode.
I liked the roadmap, but development didn’t work out as planned. That’s not a big deal, and working for a software company myself, I understand it.
The other challenge was that I kept hitting system limitations: iXsystems blocks apt access, and the VM management needs improvement.

So, I was asking myself, why not build your own NAS?

The Hardware

Around 2000, I worked in a small computer shop, building, fixing, and selling machines to end customers or small businesses. I started building computers even before that and consider myself somewhat experienced.
In each of my own builds since the 90s, I used the same brands, as I never really ran into any trouble with them: Asus boards, WD disks, Micron memory, and a few others.
But lately, Asus went a bit crazy with their product diversification and, more importantly, their pricing.

I built the system around an ASRock X570 board with a 5600G APU and 128 GB of RAM.
I don’t need Quick Sync, so I could save some money on the CPU, and I went with standard memory instead of ECC.

The board has an onboard NIC that I’m going to use for the setup, and there’s an additional 10 Gbit NIC plugged in.
This card is a fascinating one, as it provides 10 Gbit over copper plus two additional NVMe slots.
It comes with a fan on top of a copper heatsink, but after running it for quite a while now, I noticed that with enough air movement in the case, the extra fan isn’t required, so I just unplugged it.

For storage, I’m currently running an LSI 9305-16i with 10× 8 TB WD Red Plus drives attached, and four NVMe drives overall.
The case is a Fractal Design Define 7 XL – no window. Just big, black, and solid. I added a few additional HDD tray kits and a lot of Noctua fans.
Airflow is vital with that many spinning drives.

This configuration should give me enough power and space for a while. And yes, the case is enormous, and that’s two 20cm fans on top:


The Software

I’m following basic Linux setup steps and chose Debian as the distribution.

nano /etc/ssh/sshd_config

Uncomment “PermitRootLogin” and set it to yes.
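If you prefer a one-liner over editing the file by hand, a sed sketch does the same thing (assuming the stock Debian sshd_config):

```shell
# Uncomment PermitRootLogin (commented or not) and force it to "yes":
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart ssh
```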
While I’m at it, I install and set up SNMP to monitor the box later:

apt install -y snmpd snmp
rm /etc/snmp/snmpd.conf && nano /etc/snmp/snmpd.conf
	agentAddress udp:161,udp6:[::1]:161
	view   all  included   .1
	rocommunity public  default    -V all
	rocommunity6 public  default   -V all
	sysLocation    Berlin
	sysContact     [email protected]
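After restarting snmpd, a quick walk over the system subtree confirms the read-only community works (a sketch; snmpwalk comes from the snmp package installed above):

```shell
systemctl restart snmpd
# Query the system subtree with the community string configured above:
snmpwalk -v2c -c public localhost system
```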

Pointing the box at my NTP server finished the default Linux setup.

As I’m installing on physical gear and not in a VM, I need a few firmware packages that aren’t included in the default Debian installation.
Here is an explanation, but basically, it comes down to this:

apt install isenkram && isenkram-autoinstall-firmware -y

Run updates, and reboot.
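In Debian terms, that’s:

```shell
apt update && apt full-upgrade -y
reboot
```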

Build your own NAS – Installing Cockpit

While I would be okay dealing with NAS features over the CLI, I think I want something more user-friendly for dealing with VMs.

There are a few options you can pick from based on your needs.
A simple one would be running virt-manager in a Debian desktop VM.

If boredom strikes hard, another option would be to add an OpenStack layer and use the built-in Horizon dashboard.

I remembered using Cockpit a few years back and discovered that it’s an excellent fit for my project, as third-party plugins are available.

Let’s install Cockpit first.

The documentation suggests using backports to get the latest release.

. /etc/os-release
echo "deb http://deb.debian.org/debian bullseye-backports main" > \
    /etc/apt/sources.list.d/backports.list
apt update
apt install -t bullseye-backports cockpit -y
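Cockpit is socket-activated, so making sure it listens and survives reboots is one command (TCP 9090 is Cockpit’s default port):

```shell
systemctl enable --now cockpit.socket
# Cockpit's web UI listens on TCP 9090 by default:
ss -tlnp | grep 9090
```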

Once that is finished, I install a few requirements for the plugins:

apt install git curl sudo wget software-properties-common -y
apt-add-repository contrib
apt install linux-headers-$(uname -r) linux-image-amd64 spl kmod zfsutils-linux zfs-dkms zfs-zed -y

Ignore the license warning regarding ZFS on Linux.

Install the plugins for ZFS, KVM, and file sharing:

apt install -t bullseye-backports cockpit-machines -y
git clone https://github.com/optimans/cockpit-zfs-manager.git
cp -r cockpit-zfs-manager/zfs /usr/share/cockpit
curl -LO https://github.com/45Drives/cockpit-file-sharing/releases/download/v3.2.0/cockpit-file-sharing_3.2.0-1focal_all.deb
apt install ./cockpit-file-sharing_3.2.0-1focal_all.deb -y

Our last step in the CLI is creating the shares:

apt install samba smbclient cifs-utils
nano /etc/samba/smb.conf
    [share]
    comment = share
    path = /pool01/share/
    writable = yes
    guest ok = no
    valid users = @smbshare
    force create mode = 770
    force directory mode = 770
    inherit permissions = yes

    [downloads]
    comment = downloads
    path = /pool02/downloads
    writable = yes
    guest ok = no
    valid users = @smbshare
    force create mode = 770
    force directory mode = 770
    inherit permissions = yes
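The shares reference an smbshare group that doesn’t exist yet, so the group, a Samba user, and matching filesystem permissions still need to be created. A sketch (the user name is a placeholder):

```shell
# Group referenced by "valid users = @smbshare" in smb.conf:
groupadd smbshare
# A Samba-only account (user name is a placeholder) with no login shell:
useradd -M -s /usr/sbin/nologin -G smbshare shareuser
smbpasswd -a shareuser
# Match the 770 create/directory modes from the share definitions:
chown -R root:smbshare /pool01/share /pool02/downloads
chmod -R 2770 /pool01/share /pool02/downloads
systemctl restart smbd
```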

Perfect, that’s a milestone reached!

Let’s jump into Cockpit.

Continuing setup in the GUI

Linux Cockpit Dashboard

Let’s deal with the network configuration so it’s out of the way.

As we’re going to run VMs, and the machine has two NICs, we have two options:
either use both and dedicate one to the VMs, or use one with a bridge.
I chose the latter and created a bridge on my 10 Gbit NIC.
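Cockpit drives NetworkManager under the hood, so the same bridge could be sketched with nmcli (the interface name enp5s0 is an assumption — substitute your 10 Gbit NIC):

```shell
# Create a bridge and enslave the 10 Gbit NIC to it (interface name is an assumption):
nmcli con add type bridge ifname br0 con-name br0
nmcli con add type bridge-slave ifname enp5s0 master br0
nmcli con modify br0 ipv4.method auto
nmcli con up br0
```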

Reboot, and don’t forget to plug the cable into the other NIC.

Now we deal with Storage.
As I ran TrueNAS before, I can import my existing ZFS pools. The process in Cockpit is straightforward, but creating a new pool would be just as simple.
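On the CLI, importing pools that came from another system looks roughly like this (-f forces the import when the previous host never exported them cleanly):

```shell
zpool import            # list pools that are available for import
zpool import -f pool01
zpool import -f pool02
zpool status            # verify both pools show ONLINE
```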

To explain what we see here: Pool01 contains my media folders and uses 8×8 TB in a RAID-Z2 configuration.
There are numerous discussions all over the web concerning RAID-Z2 vs. RAID 10, and you’re invited to read them all. Both configurations have pros and cons, and your scenario may differ slightly from mine.

And while I’m at it: I run daily backups to a small four-bay Qnap with a 2.5 Gbit link. It sits in a different room, automatically turns itself on at 10:00 in the morning, runs four backup jobs over rsync, and shuts down again at noon. Maybe I should also write an article about that config, but not now!


Pool02 is a simple two-disk stripe that contains my Downloads folder and leaves enough space for experiments. I don’t want to use my media pool for tinkering.
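For reference, creating such a stripe from scratch is a single command (the device names are assumptions — and note a stripe has no redundancy, so one failed disk loses the pool):

```shell
# Two-disk stripe, no redundancy (device names are assumptions):
zpool create pool02 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
```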

Debian sits on the small 250 GB NVMe, and I will use the other three for dedicated VM storage.

I will start from scratch with them:

Creating a VM

Nothing to see yet:

As I will use the three NVMe drives for VM storage, I will create three storage pools:

While I’m here, I create a storage volume for my first VM:
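The Cockpit dialogs map to plain virsh commands; here’s a sketch of one directory-backed pool plus the first volume (the paths, names, and size are assumptions):

```shell
# Directory-backed storage pool on one of the NVMe drives (path/name are assumptions):
virsh pool-define-as nvme1 dir --target /mnt/nvme1
virsh pool-build nvme1
virsh pool-start nvme1
virsh pool-autostart nvme1
# A qcow2 volume for the first VM (name and size are assumptions):
virsh vol-create-as nvme1 plex.qcow2 800G --format qcow2
```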

Plenty of space for Plex! How many years will it last? Well, I’m sure I’ll burn everything down and start from scratch long before I run out of space. But hey, why not?
Now let’s create the VM itself, pointing to the storage volume:

I’m generous with six cores and 12 GB of memory, but that’s to speed up library scanning. I’m going to reduce it in two days when everything is sorted.
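From the CLI, the same VM could be sketched with virt-install (the ISO path, names, and bridge are assumptions matching the examples above):

```shell
virt-install --name plex \
  --vcpus 6 --memory 12288 \
  --disk vol=nvme1/plex.qcow2,bus=virtio \
  --cdrom /var/lib/libvirt/images/debian-11.iso \
  --os-variant debian11 \
  --network bridge=br0,model=virtio
```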

Oh, look, that’s it for now.

We not only created a NAS from scratch with Linux, but we also went above and beyond to look at a hyper-converged white-label box.
Okay, it’s a black box, but that’s fine, too.

The vanilla Debian install gives us complete control, and we’re not limited by middleware or proprietary changes like those introduced by most NAS software. Yes, the setup required a few more steps, but as you read through it, I’m sure you’ll agree it’s no rocket science.
That’s pretty cool.

One VM is running; more to come.

Update: Want backups?
