Samsung PM853T SSD Review

Here at Teknophiles, we don’t believe in a one-size-fits-all approach to selecting hard drives for our lab servers. We prefer to adhere to the rule of specificity, where drives have a defined purpose and drive selection is based on several criteria that suit that purpose. In no particular order, we evaluate capacity, cost, reliability, performance and form factor when selecting a drive for a particular role.

Looking at this list of attributes, it’s easy to reach the conclusion that simply selecting the fastest drive would be a no-brainer for all applications. But fast drives come at an expense – both literal expense, as well as capacity expense. And, frankly, there are times where you just don’t need the capacity or even the raw performance that some drives offer. One example, as detailed in our Silicon Power S60 60GB SSD Review, is server OS drives. On nearly every server we build, any serious workload is going to be performed on a dedicated array or SAN LUN, where IOPS and throughput are known quantities that are appropriately sized. As such, dedicated operating system drives typically experience low I/O and are approximately 75-80% read operations. You just won’t see much benefit by spending extra cash on a blazing fast SSD for your OS. And when you further consider that we nearly always configure our OS drives in RAID-1, even relatively “slow” SSDs will yield perfectly usable read speeds for a typical operating system. Heck, reliability even takes somewhat of a back seat when using cheaper drives in a RAID-1 configuration – simply by keeping an extra $30 drive or two on hand as spares, you’ll still come out ahead financially, with little downtime waiting on a new drive to arrive.

Have Your Cake and Eat It Too

But what about those times where you do need speed, capacity, and reliability? It’s sorta like that old muscle car adage: cheap, fast, reliable – pick any two. The same premise generally holds true for computer components, including hard drives. Simply put, if you want fast and cheap, it likely won’t be reliable. Reliable and cheap? It’s not gonna be fast. You want all three? Unfortunately, you’re going to have to pay for it.

Or are you?

Perhaps there’s a happy medium – as long as you understand what it is you’re looking for. You see, performance is relative. There are certainly applications that require an abundance of IOPS. Others require significant write endurance. Others yet are heavily read biased. For each of these use cases, there are drives that fit the specific profile. For us, we needed some reliable, reasonably fast drives that will be 80% read-biased, but not break the bank. We think we’ve found the sweet spot with the Samsung PM853T.

The Samsung PM853T

The Samsung PM853T series drives were mass produced around 2014-2016, so all the drives floating around out there are data center pulls, some with low hours, and in some cases, even New Old-Stock (NOS). Still, these are a great deal and can be had for as low as $0.10 per GB. Keep in mind that many of these drives are OEM drives that were sold bundled with servers, and thus will carry no warranty from the drive manufacturer, even if they have a recent manufacture date. At this price point, however, having a cold spare on hand is certainly achievable, and is highly recommended.

Samsung considers these to be mixed workload drives with high sustained performance, which is perfect for our purposes. Note that the PM853T is a TLC SATA III 6 Gb/s drive, so like other SATA SSDs, it’s limited to a theoretical 600 MB/s. In our case, this is mostly irrelevant, however, as we’ll be using these in RAID-1/0 arrays as the disk subsystem for Hyper-V clusters. Given a minimum of 4 disks in an array (and possibly many more), this configuration can easily saturate the 2000 MB/s maximum throughput of a single 4-lane SFF-808x connector on an older SAS2 HBA like the LSI-Avago SAS 9210-8i – with four drives each capable of roughly 530 MB/s in sequential reads, the array can push on the order of 2100 MB/s in aggregate.

Attributes

Samsung offered the PM853T in 240 / 480 / 960 GB sizes, and the drive offers many features not found on Samsung’s consumer drives.

Samsung PM853T – Specifications
Form factor: 2.5-inch
Capacity: 240 / 480 / 960 GB
Host interface: SATA III – 6 Gb/s
Encryption: AES 256-bit hardware encryption
Mean time between failures (MTBF): 2.0 million hours
Uncorrectable bit error rate (UBER): 1 in 10^17
Power consumption: Active read/write: 2.7 W / 3.8 W; Idle: 1.2 W
Endurance (TBW): 240 GB – 150 TBW; 480 GB – 300 TBW; 960 GB – 600 TBW
Cache power protection: Supported
Sequential read/write: Up to 530 / 420 MB/s
Random read/write: Up to 90,000 / 14,000 IOPS
Physical dimensions: 100 mm x 70 mm x 7 mm
Weight: 63 g

Among the features, Samsung lists the following:

    Consistent high-quality performance. Delivers consistent performance under diverse workloads to meet various data center demands.
    Advanced Error-Correcting Code (ECC) engine. Corrects read failures to greatly improve the reliability of the data stored in the memory for higher error correction and endurance than the BCH code can deliver alone.
    End-to-end protection. Extends error detection to cover the entire path, from the host interface to the NAND flash memory in the SSD for superior data integrity.
    Power-loss protection. Ensures no data loss during unexpected power failures by using tantalum capacitors to provide enough time to store all cached data to flash memory.
    SMART technology. Anticipates failures and warns users of impending drive failure, enabling time to replace the ailing drive and avoid data loss and system malfunctions.
    Thermal throttling. Automatically regulates the temperature of the hardware components, managing performance levels to protect against overheating and prevent data loss.

Performance

So how does it perform? Samsung provides an enormous amount of data in their product brief on the PM853T, but here are some highlights of tests conducted in Samsung’s data lab using a PM853T 480 GB drive against a competitor’s product. Samsung uses the following tools to generate this data: Fio 2.1.3, Jetstress, and IOMeter.

Sustained Performance Tests

In this test, Samsung pitted the PM853T against a competitor’s MLC SSD during an 11 hour workload. The results indicate that the Samsung drive shows much lower latency with less standard deviation (more consistency). The Samsung drive also had higher average IOPS overall.

Read/Write Tests

Additionally, the Samsung drive outperformed its competitor in both sustained random and sequential read/write tests, achieving nearly 160,000 IOPS at 100% random read in RAID-5 configurations, and 30,000 IOPS at 100% random write in RAID-1 configurations.

In the sequential tests, read throughput reached approximately 1500 MB/s in RAID-1 and write throughput exceeded 1200 MB/s in RAID-5, outperforming the competitor by as much as 29%, depending on RAID configuration and queue depth.

In mixed workloads it’s a similar story – the PM853T outperforms its competitor at all queue depths, in both non-RAID and RAID configurations, achieving more than 60,000 IOPS in RAID-1 at an R/W ratio of 75:25, which is similar to typical virtual environment workloads.

Latency

In terms of average and maximum latency, the PM853T again performs admirably against a competitor.

Application Workloads

Finally, in both virtual environments using multiple VMs, as well as in various real-world application workloads, the PM853T again outperforms its competitor across the board.

In our own much simpler tests, we used a Samsung PM853T 960 GB drive. This drive was a server pull that, as you can see, had very low hours.

We saw read/write performance very much in line with Samsung’s official claims, consistently measuring sequential reads over 550 MB/s and sequential writes over 420 MB/s.

Limitations

All this said, these drives do have certain limitations that should be at least touched upon:

    Form Factor. These drives are 2.5″, so they may not fit in your existing NAS, at least not without an adapter. At a 7 mm z-height, though, they will easily fit in any 2.5″ drive bay.
    SATA III. The PM853T is SATA III, not SAS or PCI-E, so if you need the raw performance of PCI-E or the expanded feature set of SAS such as multiple initiators, full duplex speeds or multipath I/O, then these drives are not for you.
    Write Speed. Being a read-biased drive, one would expect write performance to take a bit of a hit. These drives certainly do not display write speeds as fast as modern PCI-E/NVMe based drives. That said, they’re no slouch either, especially in RAID arrays. And at $0.10 per GB, you can actually afford to build an array with them.
    Endurance. Again, being a read-centric TLC SSD, the PM853T is only rated at 0.3 drive writes per day (DWPD) – SLC drives can typically handle as many as 10x the number of write cycles that MLC or TLC drives can. Even so, on the 960 GB model that rating translates to nearly 300 GB in drive writes daily for 5 years. Unless you have some atypical use case, these drives should last a very long time in a typical 80%/20% R/W virtualization scenario.

Conclusion

So how exactly are we using these drives at Teknophiles? We’re currently running nearly 30 virtual machines on a single 1.8TB RAID-1/0 volume (4 x 960GB Samsung PM853T) and these drives don’t break a sweat. Even when hammering the environment with Windows Updates, mass live migrations or boot storms, these drives hold up well. The PM853T’s random IO performance and low latency make it quite suitable for the demands of the mixed workloads that virtual machines place on the disk subsystem. Additionally, with numerous PM853T drives currently in play (4 x 960 GB and 2 x 480 GB), we’ve not had a single failure in more than 10k hours of use – these seem to be quite reliable drives. Simply put, for a home Hyper-V or ESX lab, it’s hard to imagine a better drive for the money. Factor in the excellent IO and throughput per watt of power consumption these drives deliver, and you have a clear winner with the PM853T.

Resizing the Linux Root Partition in a Gen2 Hyper-V VM

Without a doubt, modern virtualization has changed the landscape of enterprise computing forever. Since virtual machines are abstracted away from the physical hardware, changes in compute, memory, and storage resources become mere clicks of a mouse. And, as hypervisors mature, many operations that were once thought of as out-of-band tasks, such as adding storage or even memory can now be done with little, or even zero downtime.

Hyper-V SCSI Disks and Linux

In many cases, hypervisors are backed by large storage area networks (SANs). This provides shared storage for hypervisor nodes that supports failover clustering and high availability. Additionally, it gives administrators the ability to scale the virtual environment, including the ability to easily add or expand storage on existing virtual servers. Microsoft’s Hyper-V in Windows Server 2012 R2 introduced Generation 2 VMs, which extends this functionality. Among the many benefits of Gen2 VMs was the ability to boot from a SCSI disk rather than IDE. This requires UEFI rather than a legacy BIOS, so it’s only supported by newer operating systems. Many admins I talk to think this is limited to Windows Server 2012 and newer, probably because of the sub-optimal phrasing in the Hyper-V VM creation UI that altogether fails to mention Linux operating systems.

The fact is, however, that many newer Linux OSes also support this ability, as shown in these tables from Microsoft.

More Disk, Please

Once you’ve built a modern Linux VM and you’re booting from synthetic SCSI disks rather than emulated IDE drives, you gain numerous advantages, not the least of which is the ability to resize the OS virtual hard disk (VHDX) on the fly. This is really handy functionality – after all, what sysadmin hasn’t had an OS drive run low on disk space at some point in their career? This is simply done from the virtual machine settings in Hyper-V Manager or Failover Cluster Manager by editing the VHDX.

Now, if you’re a Microsoft gal or guy, you already know that what comes next is pretty straightforward. Open the Disk Management MMC, rescan the disks, extend the file system, and voilà, you now automagically have a bigger C:\ drive. But what about Linux VMs? Though it might be a little less intuitive, we can still accomplish the same goal of expanding the primary OS disk with zero downtime in Linux.

On-the-Fly Resizing

To demonstrate this, let’s start with a vanilla, Hyper-V Generation 2, CentOS 7.6 VM with a 10GB VHDX attached to a SCSI controller in our VM. Let’s also assume we’re using the default LVM partitioning scheme during the CentOS install. Looking at the block devices in Linux, we can see that we have a 10GB disk called sda which has three partitions – sda1, sda2 and sda3. We’re interested in sda3, since that contains our root partition, which is currently 7.8GB, as demonstrated here by the lsblk command.
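If you’re following along at a shell, the same layout can be seen with a quick lsblk (a sketch – the device names assume the default CentOS 7 LVM install described above, and yours may differ):

    # List block devices; sda3 holds the LVM physical volume
    # that backs the centos-root logical volume
    lsblk /dev/sda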

Now let’s take a look at df. Here we can see an XFS filesystem on our 7.8GB logical volume, /dev/mapper/centos-root, which is mounted at the root (/).
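From the command line, the same information comes from df with human-readable sizes:

    # Show the root filesystem and its size; look for
    # /dev/mapper/centos-root mounted at /
    df -h /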

Finally, let’s have a look at our LVM summary:
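A quick way to pull that summary is with the standard LVM reporting commands – pvs, vgs and lvs list the physical volumes, volume groups and logical volumes respectively:

    # LVM summary: physical volumes, volume groups, logical volumes
    pvs
    vgs
    lvs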

From this information we can see that there’s currently no room to expand our physical volume or logical volume, as the entirety of /dev/sda is consumed. In the past, with a Gen1 Hyper-V virtual machine, we would have had to shut the VM down and edit the disk, since it used an emulated IDE controller. Now that we have a Gen2 CentOS VM with a SCSI controller, however, we can simply edit the disk on the fly, expanding it to 20GB.

Once the correct virtual disk is located, select the “Expand” option.

Next, provide the size of the new disk. We’ll bump this one to 20GB.

Finally, click “Finish” to resize the disk. This process should be instant for dynamic virtual hard disks, but may take a few seconds to several minutes for fixed virtual hard disks, depending on the size of the expansion and the speed of your storage subsystem. You can then verify the new disk size by inspecting the disk.

OK, so we’ve expanded the VHDX in Hyper-V, but we haven’t done anything to make our VM’s operating system aware of the new space. As seen here with lsblk, the OS is indifferent to the expanded drive.

Taking a look at parted, we again see that our /dev/sda disk is still showing 10.7GB. We need to make the CentOS operating system aware of the new space. A reboot would certainly do this, but we want to perform this entire operation with no downtime.
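For reference, this check is simply parted’s print command (parted reports sizes in decimal gigabytes, which is why a 10GB VHDX shows up as 10.7GB):

    # Print the current partition table for /dev/sda
    parted /dev/sda print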

Issue the following command to rescan the relevant disk – sda in our case. This tells the system to rescan the SCSI bus for changes, and will report the new space to the kernel without a restart.
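One common way to do this is through sysfs (run as root, and substitute your device name if it isn’t sda):

    # Tell the kernel to rescan the SCSI device backing sda
    echo 1 > /sys/class/block/sda/device/rescan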

Now, when we look at parted again, we’re prompted to move the GPT table to the back of the disk, since the secondary table is no longer in the proper location after the VHDX expansion. Type “Fix” to correct this, and then once again to edit the GPT to use all the available disk space. Once this is complete, we can see that /dev/sda is now recognized as 20GB, but our sda3 partition is still only 10GB.
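In an interactive parted session that exchange looks something like this (a sketch – the exact prompt wording varies by parted version):

    parted /dev/sda
    (parted) print
    # answer "Fix" to relocate the backup GPT to the end of the disk,
    # then "Fix" again to let the GPT use all of the new space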

Next, from the parted CLI, use the resizepart command to grow the partition to the end of the disk.
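Still in the same parted session, a sketch of that step – here partition 3 is the LVM partition we want to grow:

    (parted) resizepart 3 100%
    (parted) quit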

Our sda3 partition is now using the maximum space available, 20.2GB. The lsblk command also now correctly reports our disk as 20GB.

But what about our LVM volumes? As suspected, our physical volumes, volume groups and logical volumes all remain unchanged.

We need to first tell our pv to expand into the available disk space on the partition. Do this with the pvresize command as follows:
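Assuming the physical volume sits on /dev/sda3, as in this example:

    # Expand the LVM physical volume to fill the resized partition
    pvresize /dev/sda3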

Sure enough, our pv is now 18.8GB with 10.00GB free. Now we need to extend the logical volume and its associated filesystem into the free pv space. We can do this with a single command:
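With the default CentOS volume naming used here, that single command is lvextend with its resize-filesystem flag, which grows the XFS filesystem in the same pass (shown as a sketch):

    # Extend the root LV into all remaining free space in the volume group,
    # then resize the XFS filesystem to match (-r / --resizefs)
    lvextend -r -l +100%FREE /dev/mapper/centos-root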

Looking at our logical volumes confirms that our root lv is now 17.80GB of the 18.80GB total, or exactly 10.0GB larger than we started with, as one would expect to see.

A final confirmation with the df command illustrates that our XFS root filesystem was also resized.

Conclusion

So there you have it. Despite some hearsay to the contrary, modern Linux OSes run just fine as Gen2 VMs on Hyper-V. Coupled with a SCSI disk controller for the OS VHDX, this yields the advantage of zero-downtime root partition resizing in Linux, though it’s admittedly a few more steps than a Windows server requires. And though Linux on Hyper-V might not seem like the most intuitive choice to some sysadmins, Hyper-V has matured significantly over the past several releases and is quite a powerful and stable platform for both Linux and Windows. And one last thing – when you run critically low on disk space on Linux, don’t forget to check those reserved blocks for a quick fix!