You're building a server, so use server-grade hardware. The “gamer” motherboard you bought may work (it may even work well), but it isn't going to be ideal: it probably won't have enough SATA ports, memory capacity or CPU I/O channels, it probably won't support ECC memory, it may have a lower-quality Realtek NIC[1], and it will probably include built-in hardware that you don't need and can't use (a sound card, Bluetooth, WiFi etc.). Laptops are generally entirely unsuitable.
If you can afford to buy a commercial NAS box (from a reputable manufacturer), even second-hand, then save yourself time, effort and heartache by doing so. Avoid older hardware that doesn't support the minimum memory requirement of 8GB. If you can afford it, the iXsystems hardware is pretty good. Alternatively, second-hand commercial servers (from manufacturers like Dell or HP) can be bought relatively cheaply too.
For a home / family business environment doing network storage, media streaming and a few smallish applications, the CPU, memory and SSD requirements are much, much smaller than you might think, so older (and therefore cheaper) equipment can be perfectly suitable.
[1] Old-time TrueNAS community user @jgreco’s article entitled “Is my Realtek ethernet really that bad?” explains why Realtek NICs can be fine for workstations and small home / family business TrueNAS servers but may not be up to the multi-user demands of bigger environments.
A multicore 64-bit Intel / AMD processor is the minimum requirement.
Low-power (in both CPU performance and electrical wattage) 2-core CPUs (which may be embedded, i.e. soldered onto the motherboard rather than installed in a socket) may well be adequate for home / family business servers doing network storage, media streaming and a few smallish apps. Obviously a 4-core processor will be better and may not cost much more. 12-core i9 or Ryzen processors will likely be a waste of money unless you are running multiple heavy apps / VMs.
If you follow the advice on ECC memory below then the processor and motherboard must also support ECC memory (not all do).
If you want to use disk or network encryption, select a processor that supports AES-NI on-chip; without hardware support, the performance hit is too great to make encryption worthwhile. In fact, unless you have a specific legal requirement for full-disk encryption, don't use it - the risk of data loss (e.g. through lost keys) is too great.
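If you are unsure whether a candidate CPU supports AES-NI, you can check from any Linux live environment. This is a minimal sketch assuming a Linux system that exposes /proc/cpuinfo:

```shell
# Look for the "aes" flag in the CPU capability list.
# On a system without /proc/cpuinfo this simply reports "not found".
if grep -qw aes /proc/cpuinfo 2>/dev/null; then
    echo "AES-NI: supported"
else
    echo "AES-NI: not found"
fi
```

The same flag can also be checked with `lscpu` on most distributions.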
Most modern (i.e., Sandy Bridge or newer) Intel CPUs will have adequate performance for any sort of file sharing over a gigabit network.
If possible, use Error Correcting (ECC) memory. It is a bit more expensive, but with non-ECC memory you are introducing a possible source of corruption of your data before it is written to disk or whilst it is held in cache. That said, servers and workstations have run for decades with non-ECC memory, so if your system is for home or family business use you may be perfectly happy without it - but the more memory you have, and the longer your server runs between reboots, the greater the likelihood of a random cosmic ray flipping a memory bit somewhere, and ECC fixes these without you noticing. More seriously, if your RAM actually goes bad (as opposed to suffering random cosmic-ray flips), non-ECC memory can silently introduce serious amounts of permanent data corruption as data is written - so if the data you are writing is critical, don't economise: build an ECC system.[2][3]
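If you want to confirm whether ECC is actually active on an existing system, the SMBIOS tables report the memory error-correction type. This sketch assumes a Linux system with the dmidecode tool available (it needs root to read the tables):

```shell
# Report the error-correction type of installed memory via SMBIOS.
# Falls back to a message if dmidecode is missing or unreadable.
if command -v dmidecode >/dev/null 2>&1; then
    dmidecode -t memory 2>/dev/null | grep -i 'Error Correction Type' \
        || echo "Could not read SMBIOS data (try running as root)"
else
    echo "dmidecode is not installed"
fi
```

On an ECC system the output typically reads "Single-bit ECC" or "Multi-bit ECC"; "None" means ECC is not in use even if the modules themselves are ECC-capable.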
The absolute minimum amount of RAM for a TrueNAS system is 8GB, and for home / family business use you can often see cache hit ratios of 99%+ with only 8GB of memory (with the TrueNAS O/S and services using c. 4GB, leaving c. 4GB for ARC). Obviously more memory is better, and if your server supports more then 16GB should be considered a more realistic minimum. If you find that your ARC cache hit ratio is below 99%, adding RAM is the easiest and cheapest way to improve your network response times - as soon as you add it, ZFS will use it for caching.
[2] If you want fine detail on this, old-time TrueNAS community user @Cyberjock’s article entitled “ECC vs non-ECC RAM and ZFS” goes into detail about how cosmic rays and permanent bad bits can cause data corruption.
[3] For a cautionary tale of bad memory, the Reddit article “FreeNAS All Disks Suddenly Degraded” explains the potential real-life consequences.
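To check the ARC hit ratio mentioned above, you can read the raw counters yourself. On TrueNAS SCALE (Linux) they live in /proc/spl/kstat/zfs/arcstats; the awk one-liner below is fed a sample of that file's format so the arithmetic is visible:

```shell
# Hit ratio = hits / (hits + misses). On a live system, point awk at
# /proc/spl/kstat/zfs/arcstats instead of the sample here-doc below.
awk '$1 == "hits"   { hits = $3 }
     $1 == "misses" { miss = $3 }
     END { printf "ARC hit ratio: %.2f%%\n", 100 * hits / (hits + miss) }' <<'EOF'
hits                            4    9900000
misses                          4    100000
EOF
```

The `arc_summary` tool shipped with OpenZFS reports the same figure (and much more) in a readable form.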
Server motherboards are more expensive but are the best way to go.
If you follow the advice on ECC memory then the motherboard must support it (not all do).
It’s a good idea to use a CPU, memory and NVMe cards from the motherboard manufacturer’s Qualified Vendors List (QVL) where possible. The QVL lists hardware that the motherboard manufacturer has tested for compatibility.
Make sure you match the memory and the processor to the motherboard. Memory comes in different types (DDR 3/4/5), speeds, capacities, physical formats (DIMM, SODIMM) etc. Processors are designed for specific sockets, have different speeds, etc. The motherboard must support your choices.
If you plan to support your NAS remotely (or just want an easier time when you install TrueNAS), get a motherboard that features the Intelligent Platform Management Interface (IPMI) if you can - this allows you to change BIOS settings and perform other console operations over the network.
Motherboards with Intel NICs are recommended; try to avoid Realtek NICs if at all possible (see above).
Avoid SMR (Shingled Magnetic Recording) HDDs at all costs[4] (e.g. some models of WD Red drives) - they are completely unsuitable for use with ZFS. Check the detailed specifications to confirm that the drives you are planning to use are CMR technology and not SMR, because drives are often marketed without stating which they are.
If possible use HDDs specifically rated for NAS usage e.g. WD Red Plus/Pro, Seagate IronWolf/IronWolf Pro etc.
Because ZFS provides error recovery using redundancy and checksums, a controllable error-recovery timeout[5] is a useful HDD feature to have: it prevents an I/O error on one disk from holding up the successful I/Os on the other drives (which are still providing all the data needed for a successful read/write).
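On drives that support it, this timeout is exposed as SCT Error Recovery Control and can be inspected with smartmontools. A sketch, assuming a hypothetical device at /dev/sda (requires root):

```shell
# Query (and optionally set) the SCT Error Recovery Control timeout;
# values are in tenths of a second. /dev/sda is a placeholder.
DEV=/dev/sda
if command -v smartctl >/dev/null 2>&1; then
    smartctl -l scterc "$DEV" || true    # show current read/write ERC timeouts
    # smartctl -l scterc,70,70 "$DEV"    # set both timeouts to 7.0 seconds,
                                         # typical for NAS-rated drives
else
    echo "smartmontools is not installed"
fi
```

NAS-rated drives usually ship with ERC enabled; many desktop drives either refuse the command or forget the setting on power cycle.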
Because ZFS combines multiple drives into a single storage pool, and because (without redundancy) the loss of a single drive means the loss of all the data across the whole pool, redundancy should be considered essential on every non-boot pool containing more than one drive. So accept the need and the cost implications.
The more drives you have in a pool, and the larger those drives are, the greater the level of redundancy you need. Plan for 20%-33% of your drives to be redundancy in RAIDZ pools, and (obviously) 100% redundancy in mirror pools. Use RAIDZ2 (or RAIDZ3) on all RAIDZ vDevs with 6 or more drives, and only use RAIDZ1 on vDevs of 5 or fewer drives of 8TB or less - and even then only if you absolutely have to.
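As a worked example of where the 20%-33% figure comes from: the fraction of raw capacity consumed by redundancy in a RAIDZ vDev is simply the number of parity drives divided by the vDev width:

```shell
# Parity overhead for some common vdev layouts: parity / width.
for layout in "6 1" "6 2" "10 2" "11 3"; do
    set -- $layout
    awk -v w="$1" -v p="$2" \
        'BEGIN { printf "RAIDZ%d, %d-wide: %.0f%% of raw capacity is parity\n", p, w, 100 * p / w }'
done
```

So a 6-wide RAIDZ2 spends 33% of its raw capacity on parity, while a 10-wide RAIDZ2 spends only 20% - but spreads the same two-drive failure tolerance across more drives.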
Use HBA cards and not RAID cards. If you can, use an LSI-based HBA card flashed into IT-mode (HBA rather than RAID). If you use a RAID card it must be flashed into IT-mode/JBOD-mode.[6]
The HBA or RAID (IT-mode) firmware version must match the driver version in TrueNAS; TrueNAS will warn you if it doesn't. This is important, as a mismatch can lead to data corruption.
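You can compare the two versions yourself on a running system. This sketch assumes a Linux (TrueNAS SCALE) box with an LSI/Broadcom HBA using the mpt3sas driver; the firmware version appears in the kernel log at boot:

```shell
# Compare the loaded mpt3sas driver version with the firmware version
# the HBA reported at boot. Both greps are best-effort.
if modinfo mpt3sas >/dev/null 2>&1; then
    modinfo mpt3sas | grep -i '^version'
    dmesg 2>/dev/null | grep -i 'FWVersion' \
        || echo "FWVersion not found in dmesg (may need root)"
else
    echo "mpt3sas driver not present on this system"
fi
```

If the two disagree, re-flash the HBA firmware to the version matching the driver rather than the other way around.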
Using a backplane can seriously reduce the amount of cabling in your server.
Do not use SATA port multipliers.
[6] RAID cards work by hiding the physical drives from the operating system and presenting multiple physical drives as a single large drive, whereas ZFS absolutely needs to be able to see the physical drives - so RAID cards are incompatible with, and unsuitable for, ZFS. Many RAID cards do not support a true JBOD mode, in which each disk is presented directly to the operating system, which is why HBAs are preferred over RAID cards.
ZFS has several special SSD devices that can be used to improve performance (Metadata, L2ARC, SLOG, Deduplication).
Most home / family-business servers will NOT need any of these special ZFS devices. Never include them by default because you think “well, why not” - if you have a genuine need for one or more of them, and you configure them correctly, they are sound features; but in many cases they will make your ZFS configuration (and any recovery if things go wrong) substantially more complex whilst providing no benefit. If you think you have a need, ask in the TrueNAS Forums first, because some of these devices cannot be removed once they have been added.
The TrueNAS OS must reside on a separate drive; it cannot be installed on the HDDs you will use for data storage.
A Solid State Disk (SSD) is recommended for your boot drive.
A minimum of 16GB capacity for the TrueNAS boot device is recommended; anything larger than 32GB is wasteful.
Using the first 16GB-32GB of a much larger drive as a boot partition whilst using the rest of the boot drive for other uses (such as an SSD apps pool) is neither supported nor recommended, but it is easily achievable and works just fine, and it saves you using a separate SSD or NVMe drive for non-boot SSD needs.
It is neither supported nor recommended to attach your boot SSD using USB rather than SATA or NVMe, but if you don't have M.2 slots and your SATA ports are limited, it is possible to do this. USB has a reputation for having occasional disconnects (especially USB3 ports) and if it does disconnect, your TrueNAS server will hang.
If you are going to use a USB boot drive it must be an SSD and not a flash drive. TrueNAS writes frequently to the boot drive (e.g. syslog) and a flash drive's limited amount of Total Bytes Written will quickly be exhausted and the drive will stop working. A proper USB connected SSD will, however, work (with the above caveats) - you can either use a USB→SATA connector, or buy a specialised USB SSD (e.g. by manufacturer SKK).
Don’t buy low quality or low-power PSUs. Hard drives use much less power than a modern high-end GPU, but the power draw of significant numbers of HDDs still adds up.
When selecting a PSU for a quiet server build, choose one that will operate at around 50%–60% of its rated maximum wattage. So if your server draws 300W, select a PSU rated at around 600W.
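The 50%–60% rule is easy to turn into a target range: divide your estimated draw by 0.6 and 0.5. A trivial sketch, using the 300W example above:

```shell
# PSU sizing: pick a rated wattage such that the estimated draw sits
# at 50-60% of the rating. 300W is the example draw from the text.
draw=300
awk -v d="$draw" 'BEGIN { printf "Target PSU rating: %.0fW to %.0fW\n", d / 0.6, d / 0.5 }'
```

Remember that HDD spin-up draw at boot is several times the idle draw, so estimate against the worst case, not the steady state.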
If you have time, have a look at this article by @jgreco. It is excellent and should help.
ZFS is highly resilient to power failures - you may lose data that has not yet been written to disk, but ZFS is designed so that the disk is always in a consistent state (so CHKDSK / FSCK scans are a thing of the past - which is just as well when the file system spans multiple large disks and a pool wide CHKDSK/FSCK could take hours or even days to complete).
Nevertheless, power outages are often very short, and an Uninterruptible Power Supply (UPS) can tide you over these short interruptions and signal for an orderly shutdown if the outage is a longer one.
If your budget allows, invest in an Uninterruptible Power Supply (UPS). If your budget doesn't allow, adjust your budget until it does. It may not be the most seductive bit of kit (incidentally, the most seductive bit of kit I ever bought was a leather thong, but that’s another story) and it’s reasonably expensive (the UPS, not the thong!), but it is good practice to use one.
Some server PSUs will not work with a UPS that uses a simulated sine-wave output; others don’t seem to mind. The reasons are beyond the scope of this guide. I have seen the output of some simulated sine-wave UPSs on an oscilloscope, and some of them are shockingly bad - they do not even come close to a sine wave. If you don’t want to take a chance, get a UPS that provides a proper sine wave at its output. Unfortunately, this will cost you more.
Whatever UPS you choose, make sure it is supported by Network UPS Tools (NUT) - see the NUT hardware compatibility list. NUT connects the server to the UPS via a USB, serial or network cable, allowing the server to monitor the UPS; if the mains power fails, it can wait for a period to see if the power comes back and, if not, shut the server down in an orderly manner.
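TrueNAS configures NUT for you through its UPS service settings, but it helps to know what the underlying driver entry looks like. This is a hypothetical ups.conf stanza for a USB-connected UPS (the name "myups" and the description are placeholders):

```
# /etc/nut/ups.conf - hypothetical entry; TrueNAS generates the
# equivalent from its UPS service settings.
[myups]
    driver = usbhid-ups    # generic driver for most USB HID UPSes
    port = auto
    desc = "Server UPS"
```

The `usbhid-ups` driver covers most consumer APC, CyberPower and Eaton units; check the NUT compatibility list for the driver your model needs before buying.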
Don't forget to (somehow) fix the power cable into the back of your server so that e.g. a passing cat doesn't pull it out!!
You do not need to be a ZFS expert to build a TrueNAS system. However, it is easy to make a bad decision when designing your ZFS pools and it can be VERY hard (or impossible) to fix this at a later date - so before finalising your design, take advice from more experienced users by asking in the TrueNAS Forums.
If you get any sort of pool issue, ask for advice in the TrueNAS Forums before taking any action: attempting incorrect recovery actions can easily turn a recoverable pool (where your data stays online or can be brought back online) into an irrecoverable one (where your data is offline and permanently lost).
Try (hard) to use your empty-but-configured new TrueNAS system to simulate a disk failure, while any mistakes cannot lose your data (because there is no data on it yet). Shut down, remove a RAIDZ or mirror drive, boot again and see what happens. Add a file to the pool, then do the same actions to reinsert the drive. Now try to recover the pool using the UI.
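A software-only variant of the same exercise can be run from the shell with standard ZFS commands. This sketch assumes a hypothetical pool called "tank" with a member drive "sdb" (both placeholders - use the names `zpool status` shows on your system):

```shell
# Simulate and recover from a single-drive failure on pool "tank".
# zpool only exists on a ZFS-capable system, so guard for it.
if command -v zpool >/dev/null 2>&1; then
    zpool status tank        # confirm the pool is healthy (ONLINE)
    zpool offline tank sdb   # simulate losing one mirror/RAIDZ member
    zpool status tank        # state should now read DEGRADED
    zpool online tank sdb    # "reinsert" the drive; a resilver starts
    zpool status tank        # wait for the state to return to ONLINE
else
    echo "zpool not available on this machine"
fi
```

Doing it both ways - physically pulling a drive and via `zpool offline` - teaches you what a real failure looks like in the UI and on the command line before it matters.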
Other than this wiki (which is not intended to be an in-depth ZFS tutorial anyway), there are several resources you can read to learn about ZFS basics:
No matter how well you train them, don’t use ferrets to build your server (that’s 3 years of my life I’m not getting back).
Same goes for pigeons (but they can be trained to make great cocktails).
Make sure that you use all the recommended safety features - S.M.A.R.T. monitoring and tests, scrubs, snapshots and (sometimes) replications - and make sure that you get Lurch to include cleaning the server's dust filters in his regular spring-cleaning routine.
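TrueNAS schedules all of these for you in the UI, but the manual equivalents are worth knowing for ad-hoc checks. Device and pool names below are placeholders:

```shell
# Manual equivalents of the scheduled safety tasks (placeholders:
# /dev/sda and "tank"). Both tools are guarded in case they're absent.
DEV=/dev/sda
POOL=tank
if command -v smartctl >/dev/null 2>&1; then
    smartctl -t short "$DEV" || true   # kick off a short SMART self-test
else
    echo "smartmontools is not installed"
fi
if command -v zpool >/dev/null 2>&1; then
    zpool scrub "$POOL" || true        # start a scrub; progress via 'zpool status'
else
    echo "zpool not available"
fi
```

Results of the SMART self-test appear later in `smartctl -a`, and scrub progress in `zpool status` - neither command waits for completion.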