Choosing the Right Hard Drive for Your Lab

At Teknophiles, we run a fairly large number of hard drives in our lab servers. These drives fulfill several different duties, but typically fall into three primary categories. Over the years, we’ve tried a bunch of combinations, been through several iterations, and found some setups that worked well and others that didn’t. We’ll walk through our criteria for each type of drive, and hopefully help you choose the right drive for the application at hand.

Performance Drives

First, let’s talk about performance drives. We typically use 2.5″ enterprise SATA solid state drives, designed for high IOPS and long service lives. Similar in performance to high-end consumer drives, these typically have additional features such as power-loss protection, higher write endurance, a greater number of spare blocks, and better error correction. These drives are great for arrays housing virtual machines, databases, or other high-workload operations. What you get in performance, however, is offset by a much higher price per GB compared to other types of drives. You’re likely not going to put your media collection on these drives, unless of course you’ve got really deep pockets! For these drives, look for used, low-hour enterprise SSDs. These can typically be found with much of their useful service life left after retirement from a data center.
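When evaluating a used enterprise SSD, a quick sanity check is to compare the vendor's rated write endurance (TBW, from the datasheet) against how much the drive has actually written. A sketch of that arithmetic is below; the SMART attribute name and sector size vary by vendor, and the figures here are illustrative, not from any specific drive.

```python
# Rough endurance check for a used enterprise SSD (illustrative values).
# Rated TBW comes from the vendor datasheet; total LBAs written can often
# be read from a SMART attribute (e.g. 241, Total_LBAs_Written) via
# smartctl, though attribute names and units vary by vendor.

def endurance_remaining(rated_tbw, lbas_written, sector_size=512):
    """Return the percentage of rated write endurance still available."""
    tb_written = lbas_written * sector_size / 1e12
    return max(0.0, 100.0 * (1 - tb_written / rated_tbw))

# Hypothetical example: a drive rated for 3,500 TBW that has logged
# 1.2e12 LBAs (~614 TB written) still has most of its life ahead of it.
pct = endurance_remaining(rated_tbw=3500, lbas_written=1.2e12)
print(f"{pct:.1f}% of rated endurance remaining")  # 82.4%
```

A drive showing well over half its rated endurance remaining is usually a safe bet for lab duty.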

Storage Drives

[Image: The 4TB Seagate STDR4000100 is an excellent candidate for shucking]

Second, we have storage drives. These drives comprise the SATA arrays that contain mostly static data, and typically fall into the write once/read many (WORM) category of service. With these drives, we’re not so much concerned with raw performance or ultra-high reliability. In a lab environment, we’re looking for three primary attributes in our storage drives: 1) low price per GB, 2) storage density (TB per rack unit), and 3) low watts per TB. Since it takes a significant number of drives to assemble a 30, 50, or 100 TB array, meeting these criteria keeps overall drive costs down, takes up less space in the rack, and lowers the energy cost of operation. Individually, these drives may be quite slow – even 5400 RPM spindles will suffice – but in the proper configuration they can still saturate 1 Gbps or even 10 Gbps links. And since we’ll be employing a number of these drives in a single array, we’ll be taking a “strength in numbers” approach, both from a performance and a reliability standpoint.

A popular, low-cost strategy for sourcing 2.5″ or 3.5″ HDDs is shucking external USB drives from several different vendors. A bit of research will reveal which drive is housed in each external model. But be careful! Not all external USB drives use a standard SATA connector internally, and you’re also sacrificing your warranty by doing this. It’s best to thoroughly check the drive for errors before disassembling the USB enclosure, and make a warranty claim if necessary. However, because you can save tens of dollars per drive with this strategy, you can essentially “self-warranty” the drives by using the savings to keep a spare on hand, with the added benefit of limiting downtime in the event of a failure.
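The three storage-drive criteria above are easy to put side by side. The sketch below computes price per TB, density, and watts per TB for a candidate drive; the prices, power draw, and drives-per-rack-unit figure are made-up placeholders for illustration, not quotes for any real product.

```python
# Score a candidate storage drive on the three lab criteria:
# price per TB, storage density per rack unit, and watts per TB.
# All input figures below are hypothetical placeholders.

def drive_metrics(price_usd, capacity_tb, watts, drives_per_ru):
    return {
        "usd_per_tb": price_usd / capacity_tb,
        "tb_per_ru": capacity_tb * drives_per_ru,
        "watts_per_tb": watts / capacity_tb,
    }

# e.g. a shucked 4 TB drive at $80, drawing ~5 W, 12 drives per rack unit
shucked_4tb = drive_metrics(price_usd=80, capacity_tb=4, watts=5, drives_per_ru=12)
print(shucked_4tb)  # {'usd_per_tb': 20.0, 'tb_per_ru': 48, 'watts_per_tb': 1.25}
```

Running the same function over a handful of candidate models makes the trade-offs obvious at a glance, especially when building toward a target array size.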

Archive & Backup Drives

A third category of drive is the archive or backup drive. These drives are typically not configured into RAID arrays, though they can be if one so chooses. In our lab environment, we choose to use individual backup disks grouped into a large storage pool. This gives us the benefit of a single, large backup target, but without the added cost and complexity of RAID groups. We have redundancy in our primary storage arrays, so if a single backup disk fails, the next backup job will simply copy that data back to the pool. Like storage drives, the backup drives are large, inexpensive, relatively slow disks. We typically use 4TB-6TB or larger 3.5″ disks for this purpose. Again like storage drives, many people choose to shuck drives like the WD My Book external drives and adopt a self-warranty strategy.

OS Drives

The final category we’d like to mention is server OS drives. Why do we consider OS drives to fall into their own category? Simple – efficient use of disk space. With many drives, whether SSDs or spinning HDDs, you’ll likely find that after installing your OS to a RAID1 array, you have far more space than you’ll ever need. Unless you’re tasking servers with multiple duties, you’ll find that most Windows Server OSes use less than 20 GB of disk space, and even applications like WSUS, which employ the Windows Internal Database (WID), will use less than 40 GB of space for the C: drive. Thus, it makes little sense to use drives that are terabytes, or even only a few hundred gigabytes, since the majority of that space will just be wasted. And though not paramount, some reasonable amount of performance is desirable for these drives, as it speeds reboot times and increases the overall responsiveness of the server. To that end, small, consumer SSDs fit the bill perfectly. They’re inexpensive (sometimes under $40 each), reasonably fast, mostly reliable, and we don’t have to worry much about write cycles, since a typical OS workload is primarily read (75-80% in our tests in Enterprise environments). While there aren’t a huge number of drives that fit these criteria, there are still a reasonable number of 50-60 GB drives available, and plenty of affordable 120 GB options out there as well.

One note if you choose to use a small SSD for your OS drives. In most cases it won’t be an issue, but do exercise caution if you use a relatively small drive for a hypervisor with a significant amount of RAM. Since a Windows-managed page file can grow quite large (as much as 3x RAM!), you can see how the page file could easily fill a 60 GB drive. Consider a system with 64 GB of RAM. When performing a complete system crash dump, a full 1x RAM is required to write out the dump to either the page file or a dedicated dump file. This would quickly overwhelm the drive, even though a page file likely wouldn’t be needed at all during normal operation, assuming the system’s memory was well managed. Given this potential issue, some admins choose to manually set the page file to a specific value to prevent the drive from filling. This comes with the tradeoff of not being able to perform the full system dump, however. See Microsoft’s documentation on calculating page file sizes for more information.
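The arithmetic behind that caution is worth spelling out. Using the 64 GB example above, a system-managed page file's worst case (3x RAM) and a complete memory dump's requirement (1x RAM) both exceed a 60 GB OS drive on their own:

```python
# Why a 64 GB RAM hypervisor can overwhelm a 60 GB OS SSD:
# a system-managed page file may grow up to 3x RAM, and a complete
# memory dump needs at least 1x RAM of space on the system drive.

ram_gb = 64
os_drive_gb = 60

max_pagefile_gb = 3 * ram_gb   # worst-case system-managed page file
crash_dump_gb = ram_gb         # complete dump writes out all of RAM

print(max_pagefile_gb > os_drive_gb)  # True: 192 GB can't fit on 60 GB
print(crash_dump_gb > os_drive_gb)    # True: even 1x RAM exceeds the drive
```

This is why manually capping the page file (and accepting the loss of full crash dumps) is the usual compromise on small OS drives.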


If you have multiple servers, try to stick with the same drive, or at least a drive of the same capacity. This way, you can stockpile one extra drive to serve as a cold spare for all of your servers.

Other Drives

You might have noticed that we neglected to mention enterprise SAS and ultra-high-end SSDs. These drives certainly have value in specific applications, but a home lab environment is probably not the best use case. SAS drives can be expensive, power-hungry, require SAS controllers, and are finicky about mixed use with SATA drives. And while you might have one in your high-end gaming rig, it’s not likely you’re going to fill your home lab with a dozen or more PCIe or NVMe drives, due to cost alone. We find it’s best to keep things relatively simple when it comes to your home lab, and we hope these tips will help you select a storage strategy that will serve you well into the future.
