This section of the Wiki will help you to translate the disk space calculations you did in section 2.1.3 Data Volumes into the types and numbers of disks you need to procure and, even more importantly, the number of SATA ports you need on your server.
If you already have your server hardware and you know how many SATA ports you have, then the decision is straightforward. If you have yet to choose your hardware, you may need to come up with 2 or 3 disk designs with varying numbers of disks so that you can compare the matching combinations of server and disks.
The Addams family experience is that SATA ports are very often the constraining factor - so bear that in mind when reading the remainder of this page.
As previously discussed, the disk requirements can pretty much be split into 4 main areas, for each of which you need to make decisions on redundancy:

- The system / boot pool
- An SSD pool for apps, virtual machines and any performance-sensitive network data
- An HDD pool for the bulk of your network data
- Any optional special vDevs (SLOG, L2ARC, Metadata)
You can have more pools if you need them, but this is probably the simplest set. In general terms, unless you have a VERY specific reason to have separate pools, you should only create a new pool if the performance characteristics will be different (i.e. NVMe vs. SSD vs. HDD, RAIDZ vs. mirror). Each pool has its own separate free space, and the whole point of ZFS is to safely group similar disk configurations together so that there is a single pot of free space, letting ZFS spread the data around to achieve the best performance rather than you having to manage this manually, as was common with LVM or with separate disks.
If you decide you have a really exceptional situation and genuinely need special devices such as a SLOG, L2ARC or Metadata vDev - which most systems don't really need - then you will need additional SSD or NVMe disks for these.
When thinking about your disk layout, you should also think about future expansion in case you run out of disk space. Hopefully your space planning will have allowed you to calculate and buy an appropriate amount of disk space to last for the anticipated lifetime of your NAS; however, if for cost reasons you need to start smaller, then expansion is possible.
Pools that consist of mirrored data vDevs can easily be expanded by adding another mirrored pair when needed (assuming that you have physical disk slots and controller ports available).
As a rule of thumb, unless your performance needs are especially critical, it is probably better to start with a single mirrored pair of large drives and leave room for expansion later, rather than filling your NAS slots with smaller mirrored drives from the outset. The arithmetic is sketched below.
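To make the trade-off concrete, here is a minimal sketch of the capacity arithmetic for a pool of mirrored pairs - the drive sizes are hypothetical examples, and real usable space will be a little lower once ZFS overheads are taken into account:

```python
# Illustrative only: usable capacity of a pool built from 2-way mirror pairs.
# Drive sizes below are hypothetical examples; ZFS overheads (slop space,
# metadata) mean real usable space is a little lower than this raw figure.

def mirror_pool_capacity_tb(pair_sizes_tb):
    """Each 2-way mirror vDev contributes the size of ONE of its drives."""
    return sum(pair_sizes_tb)

def sata_ports_needed(pair_sizes_tb):
    """Each 2-way mirror pair occupies two SATA ports."""
    return 2 * len(pair_sizes_tb)

# Year 1: start with a single pair of large drives...
pairs = [16]  # one 2x 16TB mirror
print(mirror_pool_capacity_tb(pairs), "TB usable,", sata_ports_needed(pairs), "SATA ports")

# Year 3: space runs low, so add a second pair into the same pool.
pairs.append(16)
print(mirror_pool_capacity_tb(pairs), "TB usable,", sata_ports_needed(pairs), "SATA ports")
```

Note the cost of this flexibility: with mirrors, half of the raw capacity you buy is always spent on redundancy.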
Hurrah - the Addams family are partying in true gothic style (let your imagination run riot when thinking about their party clothes - they are genuinely weird people) because in the latest version of TrueNAS SCALE - Electric Eel - it is now possible to add a disk to an existing RAIDZ vDev.
In all previous TrueNAS versions, if you needed to add disk space to a RAIDZ pool, you had to add an entirely new RAIDZ vDev consisting of several disks - you could not just add a single disk to increase your vDev size. So you either needed to keep a substantial number of slots free for expansion, or to populate your entire NAS from the start - it did not make sense to do anything in between.
However, thanks to iXsystems, who sponsored the development of this ZFS functionality, in Electric Eel it is now possible to use "RAIDZ Expansion" to add a drive to an existing RAIDZ vDev, and this enables more flexible strategies for future growth. For example: you might start with a 4-wide RAIDZ1 vDev and add a fifth (and later a sixth) drive as your data grows, rather than buying all of your drives up front.
A few technical notes: the expansion runs online while the pool stays in use; and data written before the expansion keeps its original data:parity ratio until it is rewritten, so the immediate space gain is a little smaller than you might expect. The basic capacity arithmetic is sketched below.
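As a rough illustration of that arithmetic (a minimal sketch: the drive size is a hypothetical example, the naive formula ignores ZFS overheads, and, as noted above, pre-expansion data keeps its old parity ratio until rewritten):

```python
# Illustrative only: approximate usable capacity of a RAIDZ vDev as it is
# expanded one drive at a time (Electric Eel / OpenZFS "RAIDZ expansion").
# Real figures are lower: ZFS overheads apply, and data written BEFORE an
# expansion keeps its original data:parity ratio until it is rewritten.

def raidz_usable_tb(n_drives, drive_tb, parity):
    """Naive approximation: (drives - parity) x drive size."""
    return (n_drives - parity) * drive_tb

PARITY = 1    # RAIDZ1
DRIVE_TB = 8  # hypothetical 8TB drives

for n in range(4, 8):  # start 4-wide, add one drive at a time
    usable = raidz_usable_tb(n, DRIVE_TB, PARITY)
    print(f"{n}-wide RAIDZ1 of {DRIVE_TB}TB drives = approx. {usable}TB usable")
```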
The requirements for a system / boot pool are very small - a minimum of 16GB. The cost of a 16GB SSD can be close to (or even exceed) the cost of a larger SSD, so consider buying a larger drive instead.
It is absolutely recommended that you use an SSD for the boot pool and not an HDD - an HDD will work, but it will be slow to boot and your electricity costs will be higher. Wednesday has asked me to tell you “NO! JUST DON'T DO IT!!”.
The only data held on the boot pool is the system configuration file, and this can be backed up on a daily basis to one of the other pools that has redundancy - so the purpose of having redundancy (i.e. a mirror) on the boot pool is only to avoid down-time if the boot device breaks. Opinion is divided about having a mirrored boot pool, for two reasons: a mirror consumes a second, often scarce, SATA port that could otherwise hold a data drive; and because the configuration file is easily backed up and restored, a failed boot device costs you a reinstall and some down-time rather than any real data loss.
Another option is to purchase a spare (identical?) boot drive and keep it in a drawer. Occasionally (e.g. after every version upgrade) run a script to copy your boot pool in a bootable form to a USB flash drive, and don't forget to test that this copy is indeed bootable. If your boot drive fails: install the spare, boot from the USB stick, copy the boot pool to the new boot drive, reboot, and restore the configuration file from the (at least daily) backup you have made on another pool. (If anyone creates such a script then please edit this wiki page and add a link here.)
If, having read the above, you still want boot-pool redundancy, then you will need to dedicate 2 SATA ports (or 2 NVMe/M.2 slots if you have them) to the boot pool. If you believe that you can live with down-time if the system disk fails, then you will need a single SATA port (or, if you are prepared to go outside iXsystems' recommendations, you can use a USB-attached SSD - but not a USB flash drive, which will almost certainly fail within weeks because of its limited write endurance).
iXsystems also recommends that you dedicate the disk(s) solely to the boot pool and do not put any other data on the same disk. If you follow this recommendation, then you cannot use the spare space on the boot drive(s), and you will likely need additional SSDs and SATA ports for applications and possibly some data.
If you run Apps, then you will need to store the apps themselves and the data they create/use on disk somewhere. There is no reason that this cannot be an HDD-based pool; however, you probably want your apps to load with the operating system, their start-up time is probably as important as the system start-up time, and Apps tend to read and write their data more frequently than network data is accessed.
It is not as strong a recommendation as for the boot pool, but it is recommended that the pool for apps and their data is also on an SSD.
VMs tend to have similar characteristics for their virtual boot disks and apps, so you probably want their boot areas and executables to be on mirrored SSDs too.
You may also have network data that for performance reasons you want to sit on an SSD.
Next you need to make a similar decision about whether your Apps/VM/Network-data SSD pool needs redundancy. If you need to ensure that data written a minute ago is not lost if the drive fails, then you need redundancy. If you are happy that (say) nightly copies of the SSD data to HDD will suffice in the event that the SSD fails (reconfiguring temporarily so that your apps run from the HDD backup), then you can probably do without redundancy on this pool. Lurch advises that it is probably NOT a good idea to use non-redundant pools for network data, because you won't know in advance how vital the data you save at some future point will be, nor whether you can accept losing it.
As an alternative to redundancy for the apps pool, depending on your recovery requirements, you can consider using TrueNAS/ZFS “replication” from the apps pool to the HDD pool on (say) a nightly basis. In the event of the apps pool drive(s) failing, you can then create a temporary soft-link from the backup location to the normal mount point of the apps pool so that things keep running whilst you procure a new drive, and restore from the replica to the new apps-pool SSD once it is installed.
Finally, for this pool, you need to decide whether you want to follow the iXsystems recommendation and keep it separate from the boot disk(s), using another one or two SATA ports, or whether you are willing to ignore this recommendation and use the same disk(s) for the boot & apps pools. It is pretty likely that a low-cost, relatively small SSD will still be large enough for both the boot pool and apps, and since you are buying a boot SSD anyway, sharing the hardware can save you a reasonable amount of $$ and a SATA port or two.
This is often the simplest pool to select hardware for. Large amounts of storage are still really only available / economic on spinning hard drives, though this is likely to change in the future.
Unless you have special requirements, a general recommendation would be to have a single pool made up of one or more vDevs, each of which has up to 12 disks including one or two disks of redundancy (2 or 3 disks of redundancy at the upper end of that range, or for large e.g. 12TB+ drives). The issue with a wide vDev or large drives - and the reason to have 2 or 3 redundant disks - is the extended time it can take to restore full redundancy after a disk replacement (technically called "resilvering"), and the stress that resilvering places on the remaining drives whilst it happens, which can cause another of them to fail mid-resilver.
There are almost certainly several ways that you can fulfil the disk space requirement you calculated using different numbers of disks. If your requirement was for (say) 14TB of data, then some illustrative options are:

- 2x 14TB drives as a mirror (2 SATA ports, 1 drive of redundancy)
- 3x 7TB drives in RAIDZ1 (3 SATA ports, 1 drive of redundancy)
- 8x 2TB drives in RAIDZ1 (8 SATA ports, 1 drive of redundancy)
- 9x 2TB drives in RAIDZ2 (9 SATA ports, 2 drives of redundancy)
Of course, this is a simplified explanation.
So by widening the RAIDZ vDev and using smaller disks, you can reduce the cost of the redundancy disks, but it is likely that you will pay more for NAS hardware that has more drive bays - so it's a trade-off, as the sketch below illustrates.
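To make the comparison concrete, here is a minimal sketch that tabulates the illustrative layouts above - the drive counts and sizes are the hypothetical examples from this page, and the naive usable-space formula ignores ZFS overheads:

```python
# Illustrative only: compare hypothetical layouts that each deliver roughly
# 14TB of usable space, showing the SATA-ports vs redundancy-cost trade-off.

layouts = [
    # (description, total drives, drive size TB, parity drives)
    ("2x 14TB mirror", 2, 14, 1),
    ("3x 7TB RAIDZ1",  3,  7, 1),
    ("8x 2TB RAIDZ1",  8,  2, 1),
    ("9x 2TB RAIDZ2",  9,  2, 2),
]

for name, drives, size_tb, parity in layouts:
    usable = (drives - parity) * size_tb  # naive approximation
    redundant = parity * size_tb          # raw TB "spent" on redundancy
    print(f"{name:16} usable~{usable}TB  SATA ports={drives}  redundancy={redundant}TB")
```

Notice how the wider layouts spend fewer raw terabytes on redundancy but consume far more SATA ports - exactly the trade-off described above.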
A discussion of the performance characteristics of each of the above options for random reads, sequential reads (e.g. streaming) and writes is a bit too complex to cover here, but if you think that your workload needs above-average performance, and you want to include performance as part of your decision process, then iXsystems have an excellent ZFS Storage Pool Layout White Paper which can provide you with a lot more technical flim-flam than even Gomez could achieve in his heyday.
ZFS has several types of special devices (vDevs) which can be used to improve performance - usually these will be SSDs or NVMe drives used to provide performance improvements for HDD pools (because you will get far less, and possibly zero, performance improvement using an SSD special device to speed up an SSD pool).
The types of special ZFS devices you can use are as follows…
A SLOG is a device that can potentially speed up synchronous writes to HDD. A synchronous write is one where the network client, app or virtual machine expects to wait until the data has been written to disk and TrueNAS has confirmed this before it continues with other work. By comparison, an asynchronous write is one where TrueNAS immediately stores the data in normal (volatile) RAM, confirms it to the client, and then writes it to disk as soon as it can - and the risk is that the client thinks that the data has been written when in fact it is still in memory and will be lost if there is an operating system crash or power outage. Transactional databases (e.g. for bank transactions) need to be certain that data has been written and is consistent, so their writes need to be synchronous - but for the vast majority of file-sharing data, and possibly for many non-mission-critical apps, asynchronous writes are perfectly normal and safe.
Note: ZFS itself is extremely robust - whilst the most recent asynchronous data may be lost in the event of an O/S crash or a power failure, the file system itself is pretty much guaranteed to remain consistent - the Windows SCANDISK or Linux FSCK scans are never needed with ZFS.
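To make the distinction concrete, here is a minimal, generic sketch in Python (deliberately not ZFS-specific) of what "synchronous" means at the programming level:

```python
# Generic illustration (not ZFS-specific): the difference between an
# asynchronous write (buffered, fast, lost on a crash or power failure)
# and a synchronous write (flushed to stable storage before returning).
import os

with open("demo.dat", "wb") as f:
    f.write(b"this data sits in buffers until flushed")
    # Up to this point the write is effectively asynchronous: a crash here
    # could lose the data even though write() already returned "success".
    f.flush()             # push Python's own buffer down to the OS
    os.fsync(f.fileno())  # force the OS to put the data on stable storage
# Only after fsync() returns can we safely tell a client "it is written" -
# this wait is exactly what a SLOG exists to shorten for an HDD pool.
```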
Some guidelines:

- A SLOG only helps synchronous writes - it is not a general write cache, and it does nothing for asynchronous traffic.
- Use an SSD/NVMe device with power-loss protection, since its whole purpose is to survive a power failure.
- It only needs to hold a few seconds of incoming writes, so even a small device is ample - see the worked example below.
- If the SLOG device fails during normal operation, ZFS simply falls back to the on-pool ZIL; you only lose data if the SLOG fails at the same moment as a crash or power loss, which is why some people mirror it.
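As a worked example of the sizing guideline above (a sketch only - the 10Gbit link and the roughly 5-second transaction-group flush interval are assumptions; check the figures for your own system):

```python
# Rough SLOG sizing, assuming the common rule of thumb that the SLOG only
# needs to hold the writes arriving between transaction-group flushes
# (ZFS flushes roughly every 5 seconds by default; allow ~3 groups in flight).
GBIT_LINK = 10                        # hypothetical 10Gbit network link
bytes_per_sec = GBIT_LINK / 8 * 1e9   # ~1.25 GB/s maximum ingest rate
seconds_buffered = 15                 # generous: ~3 transaction groups

slog_bytes = bytes_per_sec * seconds_buffered
print(f"SLOG needs approx. {slog_bytes / 1e9:.0f} GB")  # ~19 GB: a small SSD is plenty
```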
All pools store metadata about their ZFS datasets: the directories and files, the attributes of those directories and files, the location of files and free space, etc. Accessing this metadata from HDDs can contribute significantly to the perceived performance of your NAS, particularly soon after boot when there is no metadata in the RAM-based cache (ARC) - however, as metadata is read and used it will be cached in RAM and reused on later accesses.
A Metadata special device allows you to store the metadata of an HDD pool on an SSD or (even better/faster) an NVMe drive - and for performance critical applications this can speed up both metadata-reads (for metadata not in cache) and metadata-writes.
However, because the Metadata special device holds the primary / only copy of the metadata, if your Metadata special device fails you will lose all the data in your pool - so it is essential that a Metadata special device is mirrored (either a 2x or 3x mirror). This therefore requires significant hardware resources to do safely.
Some guidelines:

- Always mirror it, with redundancy at least as good as the pool it supports (e.g. a 3-way mirror alongside a RAIDZ2 pool).
- Once added to a RAIDZ pool, a special vDev cannot be removed again, so plan carefully before adding one.
- Sizing is workload-dependent; a rough starting estimate is sketched below, but err generously on the large side.
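As a starting point for sizing (a sketch only - the 0.3% figure is an often-quoted rule of thumb rather than a guarantee, and real metadata usage varies a lot with record sizes and file counts):

```python
# Very rough Metadata special-device sizing, assuming the often-quoted rule
# of thumb that metadata runs in the region of 0.3% of pool data. Measure
# your own pool if you can - small-file workloads need considerably more.
POOL_TB = 48               # hypothetical HDD pool size
METADATA_FRACTION = 0.003  # assumed ~0.3%; treat as a starting guess only
HEADROOM = 4               # buy several times the estimate to be safe

metadata_gb = POOL_TB * 1000 * METADATA_FRACTION
print(f"estimate approx. {metadata_gb:.0f} GB; "
      f"buy at least {metadata_gb * HEADROOM:.0f} GB, mirrored")
```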
If you are primarily concerned with speeding up metadata reads from HDD, then you might want to consider a Metadata-only L2ARC device instead.
ZFS provides a very efficient cache as standard, using all the otherwise-unused RAM in the system (the ARC); this caches both metadata and actual data on the basis of the most recently used and most frequently used data. Metadata tends to be used more frequently than actual data, and so metadata tends to remain in the ARC in preference to most data. ZFS also does read-ahead into the ARC for e.g. media streaming, which increases the ARC hit rate and reduces the need for an L2ARC.
However, it is possible to use an SSD or NVMe drive to cache metadata and / or actual data read from HDD, in addition to the ARC. If metadata or data is not in the RAM-based ARC but is on the SSD/NVMe L2ARC, it can be read from there much more quickly than from HDD.
Because it is a cache copy rather than a primary source, the failure of an L2ARC device does not cause the pool to be lost, although the performance improvements from the L2ARC will no longer be available.
Some guidelines:

- Maximise your RAM before adding an L2ARC - every block cached in the L2ARC needs a small header held in RAM, so an L2ARC on a RAM-starved system can actually make things worse (see the estimate below).
- A common rule of thumb is not to bother with an L2ARC below c. 64GB of RAM, and to keep the L2ARC to no more than roughly 5-10x the size of your ARC.
- An L2ARC does not need redundancy - it is only a cache, so a failure loses performance, not data.
- You can restrict it to metadata only (secondarycache=metadata on the dataset) if metadata reads are your main concern.
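To illustrate the RAM-overhead guideline (a sketch only - the ~70 bytes per cached block is approximate and varies between OpenZFS versions, and the average block size is an assumption):

```python
# Approximate RAM cost of L2ARC headers, assuming roughly 70 bytes of ARC
# memory per cached block (the exact figure is version-dependent) and an
# average cached-block size equal to the default 128K recordsize.
L2ARC_GB = 1000       # hypothetical 1TB L2ARC SSD
AVG_BLOCK_KB = 128    # assumed: default 128K recordsize
HEADER_BYTES = 70     # approximate per-block header held in RAM

blocks = (L2ARC_GB * 1024 * 1024) / AVG_BLOCK_KB
ram_mb = blocks * HEADER_BYTES / (1024 * 1024)
print(f"approx. {ram_mb:.0f} MB of RAM just to index the L2ARC")
```

Note that with small blocks (e.g. a 16K zvol) the same L2ARC needs roughly eight times as much RAM to index.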
The ZFS file system that TrueNAS uses has some specific requirements, which can be briefly summarised as follows:

- ZFS wants direct access to the individual drives, so use plain SATA ports or an HBA flashed to IT mode, and avoid hardware RAID controllers.
- It is RAM-hungry: 8GB is a sensible minimum, more RAM directly improves read caching, and ECC RAM is recommended but not essential.
- Avoid SMR hard drives - their write behaviour interacts very badly with ZFS, especially during resilvering.
How to configure the physical disks into storage areas (called Pools) is also important, and some generic guidelines are:

- Redundancy is defined per vDev, and losing any single vDev loses the whole pool - so every vDev in a pool should have redundancy.
- Keep the vDevs within a pool the same type and ideally the same size; the smallest disk in a vDev limits the capacity of every disk in it.
- Try to keep pools below roughly 80% full, as ZFS performance degrades when free space runs short.
- As discussed throughout this page, separate pools by performance class (NVMe / SSD / HDD) rather than by category of data.
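Finally, as a way of pulling the whole section together, here is a minimal sketch that tallies the SATA ports for one hypothetical design (all the numbers are examples, not recommendations):

```python
# Illustrative only: tally the SATA ports for one hypothetical design,
# ready to compare against candidate server hardware.
design = {
    "boot pool (single SSD)":       1,
    "apps/VM pool (2x SSD mirror)": 2,
    "data pool (8x HDD RAIDZ2)":    8,
    "free slots for expansion":     2,
}

for part, ports in design.items():
    print(f"{ports:2} ports  {part}")
print(f"{sum(design.values()):2} ports needed in total")
```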
Hopefully by now you will be able to work out the various drive options and the SATA ports they need, ready for considering what server hardware to use.