Samsung PM853T SSD Review

Here at Teknophiles, we don’t believe in a one-size-fits-all approach to selecting hard drives for our lab servers. We prefer to adhere to the rule of specificity, where drives have a defined purpose and drive selection is based on several criteria that suit that purpose. In no particular order, we evaluate capacity, cost, reliability, performance and form factor when selecting a drive for a particular role.

Looking at this list of attributes, it’s easy to reach the conclusion that simply selecting the fastest drive would be a no-brainer for all applications. But fast drives come at an expense – both literal expense, as well as capacity expense. And, frankly, there are times where you just don’t need the capacity or even the raw performance that some drives offer. One example, as detailed in our Silicon Power S60 60GB SSD Review, is server OS drives. On nearly every server we build, any serious workload is going to be performed on a dedicated array or SAN LUN, where IOPS and throughput are known quantities that are appropriately sized. As such, dedicated operating system drives typically experience low I/O and are approximately 75-80% read operations. You just won’t see much benefit by spending extra cash on a blazing fast SSD for your OS. And when you further consider that we nearly always configure our OS drives in RAID-1, even relatively “slow” SSDs will yield perfectly usable read speeds for a typical operating system. Heck, reliability even takes somewhat of a back seat when using cheaper drives in a RAID-1 configuration – simply by keeping an extra $30 drive or two on hand as spares, you’ll still come out ahead financially, with little to no downtime waiting on a new drive to arrive.

Have Your Cake and Eat It Too

But what about those times where you do need speed, capacity, and reliability? It’s sorta like that old muscle car adage: cheap, fast, reliable – pick any two. The same premise generally holds true for computer components, including hard drives. Simply put, if you want fast and cheap, it likely won’t be reliable. Reliable and cheap? It’s not gonna be fast. You want all three? Unfortunately, you’re going to have to pay for it.

Or are you?

Perhaps there’s a happy medium – as long as you understand what it is you’re looking for. You see, performance is relative. There are certainly applications that require an abundance of IOPS. Others require significant write endurance. Still others are heavily read-biased. For each of these use cases, there are drives that fit the specific profile. For us, we needed some reliable, reasonably fast drives that will be 80% read-biased, but won’t break the bank. We think we’ve found the sweet spot with the Samsung PM853T.

The Samsung PM853T

The Samsung PM853T series drives were mass produced around 2014-2016, so all the drives floating around out there are data center pulls, some with low hours, and in some cases, even New Old-Stock (NOS). Still, these are a great deal and can be had for as low as $0.10 per GB. Keep in mind that many of these drives are OEM drives that were sold bundled with servers, and thus will carry no warranty from the drive manufacturer even if they had a recent manufacture date. At this price point, however, having a cold-spare on hand is certainly achievable, and is highly recommended.

Samsung considers these drives to be mixed-workload drives with high sustained performance, which is perfect for our purposes. Note that the PM853T is a TLC SATA III 6 Gb/s drive, so like other SATA SSDs, it’s limited to a theoretical 600 MB/s. In our case, this is mostly irrelevant, however, as we’ll be using these in RAID-1/0 arrays as the disk subsystem for Hyper-V clusters. Given a minimum of 4 disks in an array (and possibly many more), this configuration can easily saturate the 2000 MB/s maximum throughput of a single 4-lane SFF-808x connector on an older SAS2 HBA like the LSI-Avago SAS 9210-8i – four drives at roughly 530 MB/s each already works out to about 2,100 MB/s of aggregate sequential read.

Attributes

Samsung offered the PM853T in 240 / 480 / 960 GB capacities, and the drive includes many features not found on Samsung’s consumer drives.

Samsung PM853T – Specifications
Form factor: 2.5-inch
Capacity: 240 / 480 / 960 GB
Host interface: SATA III – 6 Gb/s
Encryption: AES 256-bit hardware encryption
Mean time between failures (MTBF): 2.0 million hours
Uncorrectable bit error rate (UBER): 1 in 10^17
Power consumption: Active read/write: 2.7 W / 3.8 W; Idle: 1.2 W
Endurance (TBW): 240 GB: 150 TBW / 480 GB: 300 TBW / 960 GB: 600 TBW
Cache power protection: Supported
Sequential read/write: Up to 530 / 420 MB/s
Random read/write (IOPS): Up to 90,000 / 14,000
Physical dimensions: 100 mm x 70 mm x 7 mm
Weight: 63 g

Among the features, Samsung lists the following:

    Consistent high-quality performance. Delivers consistent performance under diverse workloads to meet various data center demands.
    Advanced Error-Correcting Code (ECC) engine. Corrects read failures to greatly improve the reliability of the data stored in the memory for higher error correction and endurance than the BCH code can deliver alone.
    End-to-end protection. Extends error detection to cover the entire path, from the host interface to the NAND flash memory in the SSD for superior data integrity.
    Power-loss protection. Ensures no data loss during unexpected power failures by using the power supply of tantalum capacitors to borrow enough time to store all cached data to flash memory.
    SMART technology. Anticipates failures and warns users of impending drive failure, allowing time to replace the ailing drive and avoid data loss and system failures.
    Thermal throttling. Automatically regulates the temperature of the hardware components, managing the drive’s performance level to protect against overheating and prevent data loss.

Performance

So how does it perform? Samsung provides an enormous amount of data in their product brief on the PM853T, but here are some highlights of tests conducted in Samsung’s data lab using a PM853T 480 GB drive against a competitor’s product. Samsung uses the following tools to generate this data: Fio 2.1.3, Jetstress, and IOMeter.

Sustained Performance Tests

In this test, Samsung pitted the PM853T against a competitor’s MLC SSD during an 11-hour workload. The results indicate that the Samsung drive shows much lower latency with less standard deviation (more consistency). The Samsung drive also had higher average IOPS overall.

Read/Write Tests

Additionally, the Samsung drive outperformed its competitor in both sustained random and sequential read/write tests, achieving nearly 160,000 IOPS at 100% random read in RAID-5 configurations, and 30,000 IOPS at 100% random write in RAID-1 configurations.

In the sequential tests, throughput reached approximately 1500 MB/s read in RAID-1 and over 1200 MB/s write in RAID-5, outperforming the competitor by as much as 29%, depending on RAID configuration and queue depth.

In mixed workloads it’s a similar story – the PM853T outperforms its competitor at all queue depths, in both non-RAID and RAID configurations, achieving more than 60,000 IOPS in RAID-1 at a read/write ratio of 75:25, which is similar to typical virtual environment workloads.
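
If you want to approximate this kind of mixed workload on your own hardware, a quick fio job along these lines will do it. This is only a sketch – the device path is a placeholder, and the run is destructive to any data on that device, so point it at a blank test disk:

    # 4K random I/O, 75% reads / 25% writes, queue depth 32 - roughly the mix described above
    fio --name=mixed-7525 --filename=/dev/sdX \
        --rw=randrw --rwmixread=75 --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=32 \
        --numjobs=4 --group_reporting --time_based --runtime=600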

Latency

In terms of average and maximum latency, the PM853T again performs admirably against a competitor.

Application Workloads

Finally, in both virtual environments using multiple VMs, as well as in various real-world application workloads, the PM853T again outperforms its competitor across the board.

In our own much simpler tests, we used a Samsung PM853T 960 GB drive. This drive was a server pull that, as you can see, had very low hours.

We saw read/write performance very much in line with Samsung’s official claims, consistently seeing sequential reads over 550 MB/s and sequential writes over 420 MB/s.
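
For anyone wanting to run a similar quick check, a pair of simple fio runs (sequential read, then sequential write) is enough – again, the device path is a placeholder and the write pass will destroy data on the target:

    # Sequential read, 128K blocks, queue depth 32
    fio --name=seq-read --filename=/dev/sdX --rw=read --bs=128k \
        --ioengine=libaio --direct=1 --iodepth=32 --time_based --runtime=60
    # Sequential write, same parameters
    fio --name=seq-write --filename=/dev/sdX --rw=write --bs=128k \
        --ioengine=libaio --direct=1 --iodepth=32 --time_based --runtime=60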

Limitations

All this said, these drives do have certain limitations that should be at least touched upon:

    Form Factor. These drives are 2.5″, so they may not fit in your existing NAS, at least not without an adapter. Though at 7 mm z-height these will easily fit in all 2.5″ drive locations.
    SATA III. The PM853T is SATA III, not SAS or PCI-E, so if you need the raw performance of PCI-E or the expanded feature set of SAS such as multiple initiators, full duplex speeds or multipath I/O, then these drives are not for you.
    Write Speed. Being a read-biased drive, one would expect write performance to take a bit of a hit. These drives certainly do not display write speeds as fast as modern PCI-E/NVMe based drives. That said, they’re no slouch either, especially in RAID arrays. And at $0.10 per GB, you can actually afford to build an array with them.
    Endurance. Again, being a read-centric TLC SSD, the PM853T is rated at only 0.3 drive writes per day (DWPD) – SLC drives can typically handle as many as 10x the write cycles of MLC or TLC drives. Still, for the 960 GB model that rating translates to nearly 300 GB in drive writes daily for 5 years. Unless you have some atypical use case, these drives should last a very long time in a typical 80%/20% R/W virtualization scenario.

Conclusion

So how exactly are we using these drives at Teknophiles? We’re currently running nearly 30 virtual machines on a single 1.8TB RAID-1/0 volume (4 x 960GB Samsung PM853T) and these drives don’t break a sweat. Even when hammering the environment with Windows Updates, mass live migrations or boot storms, these drives hold up well. The PM853T’s random IO performance and low latency make it quite suitable for the mixed workloads that virtual machines place on the disk subsystem. Additionally, with numerous PM853T drives currently in play (4 x 960 GB and 2 x 480 GB), we’ve not had a single failure in more than 10k hours of use – these seem to be quite reliable drives. Simply put, for a home Hyper-V or ESX lab, it’s hard to imagine a better drive for the money. Factor in the excellent IO and throughput per watt of power consumption these drives produce, and you have a clear winner in the PM853T.

Resizing the Linux Root Partition in a Gen2 Hyper-V VM

Without a doubt, modern virtualization has changed the landscape of enterprise computing forever. Since virtual machines are abstracted away from the physical hardware, changes in compute, memory, and storage resources become mere clicks of a mouse. And, as hypervisors mature, many operations that were once thought of as out-of-band tasks, such as adding storage or even memory can now be done with little, or even zero downtime.

Hyper-V SCSI Disks and Linux

In many cases, hypervisors are backed by large storage area networks (SANs). This provides shared storage for hypervisor nodes that supports failover clustering and high availability. Additionally, it gives administrators the ability to scale the virtual environment, including the ability to easily add or expand storage on existing virtual servers. Microsoft’s Hyper-V in Server 2012 R2 introduced Generation 2 VMs, which extend this functionality. Among the many benefits of Gen2 VMs was the ability to boot from a SCSI disk rather than IDE. This requires UEFI rather than a legacy BIOS, so it’s only supported on newer operating systems. Many admins I talk to think this is limited to Microsoft Server 2012 and newer, probably because of the sub-optimal phrasing in the Hyper-V VM creation UI that altogether fails to mention Linux operating systems.

The fact is, however, that many newer Linux OSes also support this ability, as shown in these tables from Microsoft.

More Disk, Please

Once you’ve built a modern Linux VM and you’re booting from synthetic SCSI disks rather than emulated IDE drives, you gain numerous advantages, not the least of which is the ability to resize the OS virtual hard disk (VHDX) on the fly. This is really handy functionality – after all, what sysadmin hasn’t had an OS drive run low on disk space at some point in their career? This is simply done from the virtual machine settings in Hyper-V Manager or Failover Cluster Manager by editing the VHDX.

Now, if you’re a Microsoft gal or guy, you already know that what comes next is pretty straightforward. Open the Disk Management MMC, rescan the disks, extend the file system, and voilà, you now automagically have a bigger C:\ drive. But what about Linux VMs? Though it might be a little less intuitive, we can still accomplish the same goal of expanding the primary OS disk with zero down time in Linux.

On-the-Fly Resizing

To demonstrate this, let’s start with a vanilla, Hyper-V Generation 2, CentOS 7.6 VM with a 10GB VHDX attached to a SCSI controller in our VM. Let’s also assume we’re using the default LVM partitioning scheme during the CentOS install. Looking at the block devices in Linux, we can see that we have a 10GB disk called sda which has three partitions – sda1, sda2 and sda3. We’re interested in sda3, since that contains our root partition, which is currently 7.8GB, as demonstrated here by the lsblk command.
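
The output looks something like this – illustrative only, based on the default CentOS 7 LVM layout, so your exact sizes may differ slightly:

    lsblk
    NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda               8:0    0   10G  0 disk
    ├─sda1            8:1    0  200M  0 part /boot/efi
    ├─sda2            8:2    0    1G  0 part /boot
    └─sda3            8:3    0  8.8G  0 part
      ├─centos-root 253:0    0  7.8G  0 lvm  /
      └─centos-swap 253:1    0    1G  0 lvm  [SWAP]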

Now let’s take a look at df. Here we can see an XFS filesystem on our 7.8GB logical volume, /dev/mapper/centos-root, which is mounted at the root (/) of the filesystem.
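
Roughly what you’d see (the Used/Avail numbers here are just illustrative):

    df -h /
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  7.8G  1.3G  6.5G  17% /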

Finally, let’s have a look at our LVM summary:
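
The summary comes from the standard LVM reporting commands; the comments note what you should see with our example layout:

    # Physical volumes, volume groups, and logical volumes
    pvs    # /dev/sda3 in VG "centos", PFree 0 - no free extents
    vgs    # VG "centos", roughly 8.8G total, 0 free
    lvs    # LVs "root" (7.8G) and "swap" (1G)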

From this information we can see that there’s currently no room to expand our physical volume or logical volume, as the entirety of /dev/sda is consumed. In the past, with a Gen1 Hyper-V virtual machine, we would have had to shut the VM down and edit the disk, since it used an emulated IDE controller. Now that we have a Gen2 CentOS VM with a SCSI controller, however, we can simply edit the disk on the fly, expanding it to 20GB.

Once the correct virtual disk is located, select the “Expand” option.

Next, provide the size of the new disk. We’ll bump this one to 20GB.

Finally, click “Finish” to resize the disk. This process should be instant for dynamic virtual hard disks, but may take a few seconds to several minutes for fixed virtual hard disks, depending on the size of the expansion and the speed of your storage subsystem. You can then verify the new disk size by inspecting the disk.
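
If you’d rather skip the wizard, the same expansion can be done from PowerShell on the Hyper-V host. A minimal sketch, with the VHDX path as a placeholder:

    # Grow the virtual disk to 20 GB; works online when the VHDX is attached to a SCSI controller
    Resize-VHD -Path 'D:\VMs\CentOS76\centos76.vhdx' -SizeBytes 20GB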

OK, so we’ve expanded the VHDX in Hyper-V, but we haven’t done anything to make our VM’s operating system aware of the new space. As seen here with lsblk, the OS is indifferent to the expanded drive.

Taking a look at parted, we again see that our /dev/sda disk is still showing 10.7GB. We need to make the CentOS operating system aware of the new space. A reboot would certainly do this, but we want to perform this entire operation with no downtime.



Issue the following command to rescan the relevant disk – sda in our case. This tells the kernel to rescan that SCSI device for changes and pick up the new capacity without a restart.
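
Run as root, the rescan looks like this (substitute your own device name if the disk isn’t sda):

    # Ask the kernel to re-read the capacity of this SCSI device
    echo 1 > /sys/class/block/sda/device/rescan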

Now, when we look at parted again, we’re prompted to move the GPT table to the back of the disk, since the secondary table is no longer in the proper location after the VHDX expansion. Type “Fix” to correct this, and then once again to edit the GPT to use all the available disk space. Once this is complete, we can see that /dev/sda is now recognized as 20GB, but our sda3 partition is still only 10GB.

Next, from the parted CLI, use the resizepart command to grow the partition to the end of the disk.
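
The interactive session looks roughly like this – partition number 3 matches our layout, and “100%” tells parted to extend the partition to the end of the disk:

    # Grow partition 3 (our LVM partition) to the end of the disk
    parted /dev/sda
    (parted) resizepart 3 100%
    (parted) quit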

Our sda3 partition is now using the maximum space available, 20.2GB. The lsblk command also now correctly reports our disk as 20GB.

But what about our LVM volumes? As suspected, our physical volumes, volume groups and logical volumes all remain unchanged.

We need to first tell our pv to expand into the available disk space on the partition. Do this with the pvresize command as follows:
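
With our layout, that’s a single command against sda3:

    # Expand the LVM physical volume to fill the now-larger sda3 partition
    pvresize /dev/sda3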

Sure enough, our pv is now 18.8GB with 10.00GB free. Now we need to extend the logical volume and its associated filesystem into the free pv space. We can do this with a single command:
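
A common form of that command is lvextend with the -r (resize filesystem) flag, which grows the XFS filesystem in the same step – this assumes the default CentOS volume naming:

    # Extend the root LV into all remaining free space and grow the filesystem with it
    lvextend -r -l +100%FREE /dev/centos/root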

Looking at our logical volumes confirms that our root lv is now 17.80GB of the 18.80GB total, or exactly 10.0GB larger than we started with, as one would expect to see.

A final confirmation with the df command illustrates that our XFS root filesystem was also resized.
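
Illustrative output – the actual usage numbers will differ:

    df -h /
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root   18G  1.3G   17G   8% /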

Conclusion

So there you have it. Despite some hearsay to the contrary, modern Linux OSes run just fine as Gen2 VMs on Hyper-V. Coupled with a SCSI disk controller for the OS VHDX, this yields the advantage of zero-downtime root partition resizing in Linux, though it’s admittedly a few more steps than a Windows server requires. And though Linux on Hyper-V might not seem like the most intuitive choice to some sysadmins, Hyper-V has matured significantly over the past several releases and is quite a powerful and stable platform for both Linux and Windows. And one last thing – when you run critically low on disk space on Linux, don’t forget to check those reserved blocks for a quick fix!

Pass-through Disks vs. VHDX and the VhdxTool

When evaluating storage for a Microsoft Hyper-V guest machine, there are several options available these days. Solutions like iSCSI and Fibre Channel present block storage directly to virtual machines via Virtual Switches and Virtual SANs. While offering physical server-like performance, these solutions require significant hardware, infrastructure and the skill-sets to manage them. Two popular options that don’t require extravagant disk subsystems, however, are pass-through disks and VHD/VHDX. Both offer the ability to attach the disk to the guest in the virtual machine settings, so there’s little configuration in the virtual machine itself. Let’s take a quick look at these two options.

Hyper-V Pass-Through Disks

For those not familiar, pass-through disks are disks present on the Hyper-V server that can either be local to the hypervisor or LUNs mapped to the hypervisor via iSCSI or Fibre Channel. In this configuration, the disks are reserved on the hypervisor to enable exclusive access to the disk by the VM. This is done by initializing the pass-through disk in Disk Management on the hypervisor, and then placing the disk in an Offline state.

You can see what this looks like in both diskpart and disk manager.

Virtual machine configuration for pass-through disks is straightforward as well. Once the disk is offlined on the hypervisor, simply open the settings for the virtual machine, click on the storage controller and add a hard disk. Select the radio button for “Physical hard disk:” and choose the appropriate disk from the drop-down. Again, this MUST be an initialized disk that has been placed Offline in Disk Management on the hypervisor.
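
If you prefer to script this on the host, a rough PowerShell equivalent looks like the following – the disk number and VM name are placeholders for your own environment:

    # Take the physical disk offline on the host so the guest gets exclusive access
    Set-Disk -Number 3 -IsOffline $true

    # Attach the offline physical disk to the VM's SCSI controller as a pass-through disk
    Add-VMHardDiskDrive -VMName 'FILESRV01' -ControllerType SCSI -DiskNumber 3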

Now that you’ve seen a bit about pass-through disks, let’s talk about the pros and cons of this type of storage.

Pros:

  1. Performance.  This is the most oft cited reason for using pass-through disks.  Proponents like to talk about the advantage of not virtualizing the disk, and the near-physical performance of pass-through disks.

And, that’s about it.  Performance is really the only reason you’ll hear for using pass-through disks.  And while performance is a great reason to select a particular configuration, many experts would argue that the advantage of pass-through disks over VHDs even in older versions of Hyper-V (2008 and 2008R2) was small.  Typically, numbers such as 15%-20% are thrown around.  With the improvements in Hyper-V 2012 and 2012R2 and the VHDX format, this advantage shrinks.  Use fixed rather than dynamic VHDXs, and the advantage shrinks further, to “virtually” nothing (pun intended).

Cons:

  1. Not portable.  Pass-through disks are not easily moved.  Rather than being able to copy or migrate a virtual disk to new storage, a pass-through disk must be physically moved.  Given the myriad of server and storage controller configurations, this is not typically an easy affair.
  2. Uses the entire disk.  Since the whole disk is reserved for the virtual machine, no other virtual machines can use the disk.
  3. Not recommended for OS installations.  OS installations can be problematic on pass-through disks since the VM configuration files must be located on another disk.
  4. No host-level backups.  Backups must occur at the guest level rather than the host level, since the VM has exclusive access to the disk.  As a result, backup and recovery becomes significantly more cumbersome.
  5. Difficult live-migrations.  Live migrations require storage attached to a virtual machine to migrate along with the VM.  Hyper-V clusters can be configured with pass-through disks, but it requires special considerations, and it’s not an optimal or recommended configuration.
  6. Cannot take snapshots.  Snapshots are a super-handy tool and an important advantage of using virtual machines over physical servers.  Losing this ability is a huge con.
  7. Cannot be dynamically expanded. Although dynamic disks are generally not recommended for production scenarios, they do have their use cases. Pass-through disks do not offer this functionality.



VHD/VHDX

Clearly, there are numerous drawbacks to using pass-through disks.  Now let’s take a look at the alternative in this discussion – VHD/VHDX.  This is Microsoft’s implementation of virtual disks and is now the preferred method for storage in Hyper-V.  Generally, there’s no reason to use VHDs over VHDXs with modern hypervisors and VMs (there are a few environment-specific reasons beyond the scope of this article to use VHD).  VHDX supports much larger disks (64TB vs. 2TB) and is considerably more resilient to corruption, especially after a crash or power loss.  VHDX also offers online resizing, allowing you to grow or shrink a virtual disk while the VM is running.

Looking at the pros and cons of VHDX, it’s basically the reverse of pass-through disks. Like pass-through disks, VHDXs can be stored on local disks on the hypervisor or SAN LUNs attached to the hypervisor. And given there’s at most a few percent advantage in performance of a pass-through disk over a fixed-size VHDX, it’s no wonder that Microsoft pushes VHDX as the preferred storage method for VM storage.

Creating VHDXs is also a straightforward affair. From Hyper-V simply select New > Hard Disk from the Action Pane in Hyper-V Manager.

Next, select VHDX unless you have a specific reason to use the VHD format.

Choose the type of virtual hard disk you’d like to create.

Select the name and location for the new VHDX.

Now choose the size of the VHDX.

Finally, click “Finish” to create the VHDX.

Attaching the VHDX to the VM is much like with pass-through disks. In the VM settings, simply select the virtual hard disk radio button and provide the path to the VHDX.
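
The whole create-and-attach sequence can also be scripted. Here’s a minimal PowerShell sketch, with the path, size and VM name as placeholders:

    # Create a fixed-size VHDX (this is the step that zeroes the file and can take a while)
    New-VHD -Path 'D:\VHDs\data01.vhdx' -SizeBytes 100GB -Fixed

    # Attach it to the VM's SCSI controller
    Add-VMHardDiskDrive -VMName 'FILESRV01' -ControllerType SCSI -Path 'D:\VHDs\data01.vhdx'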

So far so good. Now this is where one tiny wrinkle rears its head. If you’re following along and creating a VHDX while reading this, you’re likely still waiting for the previous step to complete, especially if the virtual disk is a large one and you’re not using enterprise SSD storage. If you’re like us, creating VHDXs on a small mirror or on large but relatively slow RAID arrays (i.e. SATA RAID-5 or RAID-6), this process can take a while. For something like a 4TB VHDX on slower disks, the wait can be substantial.

So we have to ask, “what’s happening during the VHDX creation that takes so dang long?”  Well, as it turns out, during this process the entire space required to store the VHDX is zeroed out on the disk.  This is a conscious decision by Microsoft, as there are security implications in not doing so.  As Hyper-V Program Manager Ben Armstrong explains here, VHDX creation could be nearly instantaneous.  If the zeroing is not done, however, data may be recovered from the underlying disk(s).  This would be a huge security no-no, of course, so Microsoft has no choice but to opt for the safe route.

But what about for new disks on which data has never been stored?  Clearly, there’s no security risk there.  Many IT Pros would prefer the ability to decide for themselves whether or not a quick VHDX creation is appropriate.  After all, we make decisions that have major security implications every day in our jobs.  Microsoft has provided a tool to do just this for VHDs in the past.  Unfortunately, Microsoft did not release such a tool for VHDX files.

What about an option within Hyper-V?  Maybe a feature request for the next version?  Not likely.  As Mr. Armstrong notes, “the problem is that we would be providing a “do this in an insecure fashion if you know what you are doing checkbox” which would need a heck of a lot of text to try and explain to people why you do not want to do it – and then most people would not read the text anyway.”

VhdxTool

Enter VhdxTool, from the good folks over at Systola.  They’ve picked up where Microsoft has left off and provided a tool to create and resize VHDXs nearly instantaneously, even with multi-terabyte disks.  They explicitly state this software is to be used at your own risk – it should only be used on new disks that contain no data, and not on disks that may contain data, especially when that VHDX may be accessed by end-users.

So how fast is it?  We tested VhdxTool on four 4TB drives containing 4TB VHDXs, with pretty astounding results.

So in a little over one second, a 4TB VHDX was created to attach to our VM. Not bad. There are also a number of command line options to accommodate any scenario. These are well documented on Systola’s site, and allow you to create, extend, convert, upgrade or view VHDXs.

Conclusion

Due to its many advantages, using VHDXs in place of pass-through disks is clearly the way forward in Hyper-V.  If you’ve ever had reservations about using large VHDX files due to long creation times, Systola has provided an indispensable tool that gives IT Pros the option to fast-create VHDXs.  Just remember, use good discretion when doing so and be sure to keep your data safe.