Installing Zabbix 4.0 LTS on CentOS 7


When it comes to enterprise monitoring solutions, there are a myriad of options to choose from, both commercial and free and open-source software (FOSS). Here at Teknophiles, we’ve used just about all of them, and we can tell you that many are good, some are even great, but none are perfect. Even at the higher price points, we usually find something we don’t care for. Generally speaking, paying good money for monitoring software gets you support, and perhaps some ease of configuration that open-source solutions don’t offer.

That said, there are numerous free and open-source monitoring options that get the job done quite well. After using Nagios for many years, we’ve recently become quite enamored with Zabbix. Though not trivial to configure, especially in large environments (what monitoring solution is?), Zabbix is stable, polished and quite extensible. In this article – the first of several in a series on installing and configuring Zabbix – we’ll walk through a typical Zabbix install on Linux, using the latest long-term release from Zabbix, so you can take it for a test drive.

Zabbix Server Sizing

To get started, we’ll first assume you have a running CentOS 7 box, either a virtual or physical machine. With respect to hardware requirements, lab/test/home environments shouldn’t require more than 1-2 CPU cores and 2-4 GB of RAM. We’re currently monitoring over 3600 items, with over 2000 triggers on a virtual server with 1 CPU core and 2GB of RAM, and the server rarely shows any significant resource utilization.

Though disk usage depends on many factors, the Zabbix database will be the primary offender when it comes to disk consumption. Once Zabbix is installed and configured, the application files change very little in size, so whether the database is local or remote will greatly impact disk sizing for the Zabbix server. Regardless of the database location, make sure you have enough space for the database to grow as you accumulate historical data. Growth is influenced not only by the number of hosts and items monitored, but also by the historical and trend storage periods. As a general starting point for a lab server with a local database, a root partition of 15-20 GB should suffice. You can see here that with just over 3600 items, our lab database is chewing up approximately 600 MB per month. This will likely stabilize once all data retention periods are reached, but it should be a consideration when sizing the disks.

Do some due diligence and size your Zabbix environment appropriately up front, however. Though sizing depends on many factors (whether your database is local or remote, MySQL InnoDB vs. PostgreSQL, the number of hosts you wish to monitor, and whether you choose to use Zabbix proxies), it’s much easier to provision resources early in the process than later, especially on physical hardware. Also keep in mind that in medium to large installations (500-1000 hosts), disk I/O will start to become an important factor, along with CPU and memory considerations. Plan the disk subsystem accordingly. Some general sizing recommendations from the folks at Zabbix can be found here.


For the purposes of this article, we’re also going to assume you’re a responsible Linux sysadmin and will be installing Zabbix with SELinux in enforcing mode. We strongly recommend that you leave SELinux this way. We’re aware that SELinux can be challenging and the knee-jerk reaction by many is to disable SELinux – we’ve been tempted ourselves at times. SELinux naturally adds a few steps to the process, but it’s completely manageable with the tools available, and will ultimately leave you with a better security stance. You can check the current status of SELinux as follows:
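For example, with the sestatus utility (installed by default on CentOS 7):

```shell
# Display the current SELinux mode; we want "enforcing"
sestatus
```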


The Zabbix server installation requires several prerequisite components to function – an Apache web server, PHP, and a database server. First, we’ll install the standard Apache httpd server with mod_ssl.
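Both packages come from the standard CentOS base repos:

```shell
# Install Apache and the SSL module
yum install -y httpd mod_ssl
```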

Now start the Apache server and ensure the service persists through reboots.
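With systemd on CentOS 7:

```shell
systemctl start httpd
systemctl enable httpd
```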

Next, we need to install PHP along with the necessary PHP modules. First, install yum-utils and enable the Remi PHP repo. In this instance, we’re opting to use PHP 7.1.
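A representative sequence is shown below. The Remi release RPM URL and the PHP module list are typical; check the Zabbix 4.0 documentation for the exact modules your setup requires.

```shell
# Install yum-utils and enable the Remi repo for PHP 7.1
yum install -y yum-utils
yum install -y http://rpms.remirepo.net/enterprise/remi-release-7.rpm
yum-config-manager --enable remi-php71

# Install PHP and the modules Zabbix commonly needs
yum install -y php php-mysql php-gd php-bcmath php-mbstring php-xml php-ldap
```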

Lastly, we’ll install the database. We’re using MariaDB, an open-source and actively developed fork of MySQL. Install MariaDB as follows:
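```shell
yum install -y mariadb-server mariadb
```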

Now start the MariaDB server and ensure the service persists through reboots.
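```shell
systemctl start mariadb
systemctl enable mariadb
```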

Next, complete the secure installation for MariaDB.
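Run the bundled hardening script and answer the prompts (set a root password, remove anonymous users, disable remote root login, and drop the test database):

```shell
mysql_secure_installation
```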

Finally, create the Zabbix Database with the following mysql commands:
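Something like the following should do the trick. Note that 'your_password' is a placeholder; substitute a strong password of your own.

```shell
mysql -uroot -p <<'EOF'
CREATE DATABASE zabbix CHARACTER SET utf8 COLLATE utf8_bin;
GRANT ALL PRIVILEGES ON zabbix.* TO 'zabbix'@'localhost' IDENTIFIED BY 'your_password';
FLUSH PRIVILEGES;
EOF
```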

Installing Zabbix

Now that the prerequisites are satisfied, we can proceed with installing the Zabbix application from packages. First, add the Zabbix repo. Make sure you have the URL for the correct version – we want Zabbix 4.0, as shown below.
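At the time of writing, the 4.0 release package for RHEL/CentOS 7 installs like so; verify the exact RPM name against repo.zabbix.com before running, as it may have been revved.

```shell
rpm -Uvh https://repo.zabbix.com/zabbix/4.0/rhel/7/x86_64/zabbix-release-4.0-1.el7.noarch.rpm
yum clean all
```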

Now install the Zabbix server and web frontend.
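```shell
yum install -y zabbix-server-mysql zabbix-web-mysql
```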

Next, import the initial Zabbix schema into the database using the zabbix database user and password previously created.
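The schema ships compressed with the zabbix-server-mysql package:

```shell
zcat /usr/share/doc/zabbix-server-mysql*/create.sql.gz | mysql -uzabbix -p zabbix
```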


The ‘zabbix’ parameter after the -p is NOT the password. This is a common misconception; if it were the password, there would be no space after the -p option. Here, the ‘zabbix’ parameter specifies the database for the mysql connection to use. You will be prompted for the zabbix database user’s password after you enter the command.

Configure the server to connect to the database as shown here. Some of these parameters may already be set correctly in the config, while others may be commented out by default.
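In /etc/zabbix/zabbix_server.conf, the database parameters should end up looking like this (DBPassword is whatever you set when creating the zabbix database user):

```
DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=your_password
```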

Finally, modify the Apache config for Zabbix as follows. Comment out the section for mod_php5.c, replacing it with a stanza for PHP 7, using the parameters below. Restart Apache after saving the config.
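The stanza in /etc/httpd/conf.d/zabbix.conf should look something like the following. The php_value settings mirror the defaults shipped in the package's mod_php5.c stanza; set date.timezone to your own timezone — the value below is only an example.

```
<IfModule mod_php7.c>
    php_value max_execution_time 300
    php_value memory_limit 128M
    php_value post_max_size 16M
    php_value upload_max_filesize 2M
    php_value max_input_time 300
    php_value always_populate_raw_post_data -1
    php_value date.timezone America/Chicago
</IfModule>
```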

Starting the Zabbix Server

We’re now finally ready to start the Zabbix server for the first time.
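```shell
systemctl start zabbix-server
systemctl status zabbix-server
```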

After attempting to start the server, however, we can see that the service failed to start.

SELinux Configuration

So what’s going on here? We suspect SELinux is interfering with something out-of-the-box, so let’s do some investigation to confirm. First, install the SELinux troubleshooting tools.
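```shell
yum install -y setroubleshoot-server
```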

Now run the setroubleshoot cli tool to search the audit log for SELinux alerts.
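```shell
sealert -a /var/log/audit/audit.log
```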

So we can see here just how useful the SELinux troubleshooting tools are. Not only does the utility tell us exactly what is being blocked (Zabbix Server attempting create access on the zabbix_server_alerter.sock sock_file), but it also gives us the exact command we need to resolve the issue. Not so bad, eh? Simply execute the suggested commands to allow the proper access to the zabbix_server_alerter.sock file, as shown here:
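The suggested fix takes the general form below; the my-zabbixserver module name is whatever sealert proposes on your system.

```shell
# Build a local policy module from the logged denials and load it
ausearch -c 'zabbix_server' --raw | audit2allow -M my-zabbixserver
semodule -i my-zabbixserver.pp
```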

Now let’s attempt to restart Zabbix and run the setroubleshoot tool again.

Now we see a similar error as before, except this time Zabbix needs access to the zabbix_server_preprocessing.sock. Again, we can allow this access with the suggested commands.
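As before, generate and load a policy module from the new denial (again, use the module name sealert suggests):

```shell
ausearch -c 'zabbix_server' --raw | audit2allow -M my-zabbixserver2
semodule -i my-zabbixserver2.pp
```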

And again, restart the Zabbix server.

Now things seem much happier.

Let’s run the setroubleshoot tool once more to ensure there are no more errors.

Now that the Zabbix server appears to be happy, be sure to set the server to start automatically.
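```shell
systemctl enable zabbix-server
```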

So now that the Zabbix server is running, we still need to finish the web frontend install. Before we can connect to the UI, however, we’ll need to open the necessary ports in the Linux firewalld daemon. If we look at the ports currently open, we can see that neither http nor https is allowed.
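```shell
firewall-cmd --list-all
```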

Simply add the following service rules with firewall-cmd to allow http/https.
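```shell
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
```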

While we’re at it, let’s go ahead and add the firewall rules to allow the active and passive Zabbix agent checks:
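Agents listen on 10050 (passive checks) and the server listens on 10051 (active checks):

```shell
firewall-cmd --permanent --add-port=10050/tcp
firewall-cmd --permanent --add-port=10051/tcp
```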

Reload the firewalld daemon to apply the new rules.
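```shell
firewall-cmd --reload
```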

We can now view the new rules in the allowed services and ports.

And, finally, restart Apache.

Zabbix Frontend Configuration

In a browser, connect to the frontend configuration UI at http://server/zabbix where “server” is the IP address of your Zabbix server. On the welcome screen, click “Next step” to continue.

Once all the prerequisites have been satisfactorily met, click “Next step” to continue.

Provide the database connection information, using the zabbix database user and password from above, and click “Next step” to continue.

Next, configure the Zabbix server details.

Review the installation summary for accuracy and click “Next step” to finalize the install.

The Zabbix frontend installation should now be complete. Click “Finish” to exit.

Log into the Zabbix Admin Console with the following credentials:

Username: Admin
Password: zabbix

Upon logging in, you’ll likely see an error indicating that Zabbix is still not properly running.

Let’s head back to see our old friend, the setroubleshoot tool, to see if SELinux is again the culprit.

Sure enough. As we can see from the log, now that things are up and running, httpd is being prevented from communicating on the Zabbix port 10051. It then clearly gives us some guidance on what we need to do to allow this behavior. Run the suggested commands as follows:
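Depending on your selinux-policy version, there may be a purpose-built boolean for this; otherwise fall back to a generated policy module. Both forms are shown below — run whichever the tool suggests on your system.

```shell
# If your policy version provides the boolean:
setsebool -P httpd_can_connect_zabbix on

# Otherwise, build a local module from the audit log:
ausearch -c 'httpd' --raw | audit2allow -M my-httpd
semodule -i my-httpd.pp
```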

Now restart Apache and Zabbix.

After refreshing the Zabbix console, we can see that our error is now gone and the server appears to be functioning properly.

SSL Configuration

As a final step before we begin to configure our new Zabbix server, let’s generate a certificate and enable SSL. Of course, if your organization has its own PKI, or you purchase named or wildcard certificates for your domain, you’ll want to follow those processes rather than the one shown here.

The process detailed here will generate a self-signed certificate. Replace the information below with the relevant location, server name, domain, and IP information, as it relates to your organization. First, generate a CNF file from which to generate the certificate info.
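A minimal CNF might look like the following. Every value here — country, state, city, org, hostname, and IP — is a placeholder to replace with your own details.

```shell
cat > /etc/pki/tls/zabbix.cnf <<'EOF'
[req]
default_bits       = 2048
prompt             = no
default_md         = sha256
distinguished_name = dn
x509_extensions    = v3_req

[dn]
C  = US
ST = YourState
L  = YourCity
O  = YourOrg
CN = zabbix.example.com

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = zabbix.example.com
IP.1  = 192.168.1.50
EOF
```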

Next, generate the certificate with OpenSSL.
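Using the CNF file created above, the following produces a 10-year self-signed cert and key (adjust -days and the output paths to taste):

```shell
openssl req -new -x509 -days 3650 -nodes \
  -config /etc/pki/tls/zabbix.cnf \
  -keyout /etc/pki/tls/private/zabbix.key \
  -out /etc/pki/tls/certs/zabbix.crt

# Lock down the private key
chmod 600 /etc/pki/tls/private/zabbix.key
```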

Edit the Apache SSL configuration to use the newly created certificate and private key. Locate the section headed “<VirtualHost _default_:443>” and edit as follows:
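In /etc/httpd/conf.d/ssl.conf, point the certificate directives at your own cert and key files; the paths below are examples:

```
<VirtualHost _default_:443>
    ...
    SSLCertificateFile /etc/pki/tls/certs/zabbix.crt
    SSLCertificateKeyFile /etc/pki/tls/private/zabbix.key
    ...
</VirtualHost>
```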

Restart Apache after the SSL configuration has been saved.

You should now be able to reach your Zabbix server and login via SSL.


That sums up the Zabbix 4.0 LTS install on CentOS 7. You should now have a working Zabbix server and be ready to configure, add hosts, roll out agents and explore the various and sundry items that Zabbix can monitor.

Silicon Power Bolt B10 USB SSD Review


It’s no secret that the advent of solid state drives (SSDs) has irrevocably changed the technology landscape. Quite simply, SSDs eliminated what has been the largest bottleneck in computing performance for quite some time – the spinning disk. With SSDs, performance metrics on workstations such as latency, throughput and input/output operations per second (IOPS) are nearly irrelevant, as other system components such as the SATA interface become the limiting factor.

There are other attributes of SSDs that aren’t often emphasized, however, such as size, weight and portability. This is likely owing in large part to the fact that drives typically conform to a standard 2.5″ or 3.5″ form factor to maintain backwards compatibility. However, crack open a typical 2.5″ SSD and it becomes pretty obvious that SSDs could be significantly smaller than the standard dictates. And though hard disk drives (HDDs) have also packed more and more data onto their 2.5″ and 3.5″ platters, it is stated that SSD storage density surpassed that of HDDs in 2016, and is expected to continue to out-pace advances in HDD recording technologies.

Portable Drives

When I travel, I always like to have some sort of portable storage with me. And, to be frank, portable is a term that’s thrown around pretty loosely these days. My mobile phone is “portable,” yet it barely fits in my front jeans pocket, and I certainly can’t climb into my buddy’s Tundra with it in there. This, it seems, has been the unfortunate trend in portable devices these days – as performance demand increases, so does the size. Often times, I simply grab a USB thumb drive or two when I’m on the road. But when it comes to thumb drives, even the best performing of the bunch leave a little to be desired. I could carry a 2.5″ external drive in my laptop bag, I suppose, but the added bulk becomes cumbersome, especially when competing for space with a mouse, extra batteries, an assortment of cables, and the laptop’s AC adapter. What I really want is something small, light, and fast. I want something easy to carry, but not limited with respect to capacity or performance – you know, like an SSD.

Silicon Power Bolt B10

Having been a longtime fan of Silicon Power’s wallet-friendly line of SSDs for our lab server OS drives, I was delighted when Silicon Power sent me a Bolt B10 portable SSD to review. This is the first portable SSD I’ve intimately used and, spoiler alert, I won’t be going back to thumb drives when I travel any time soon. Now let’s dig into the details of this little gem.

Form Factor

Conceptually, one of the things I like most about portable SSDs is that we’re finally capable of breaking existing form factor molds. Before SSDs, 2.5″ and 3.5″ hard drives defined the size and shape of portable mass storage. This meant relatively large, clunky, and generally unattractive boxes. Since internal SSDs only conform to these form factors to remain relevant in server, desktop, and laptop applications, design can take a front row with portable SSDs. Portable SSDs tend to possess sleeker lines, smaller packages, and generally more attractive aesthetics. They don’t even have to be rectangular – some manufacturers like Silicon Power even offer round flavors of portable SSDs.

The Bolt B10 conforms to a traditional rectangular shape, though by no means is this a bad thing. The drive is understated yet attractive, and its credit-card-sized form fits comfortably in the palm of your hand. The drive is featherweight at 25 g, almost too light, though not cheap-feeling, and the smooth plastic just feels right. It’s the kind of combination that makes you want to fidget with it, like the satisfying click of a well-made ballpoint pen. You can see here that the drive has a smaller footprint than a Logitech M325 wireless mouse.

Inside the drive you’ll find the PCB that occupies about half of the actual drive enclosure volume, which is to say it’s quite small. Directing the show, you’ll find the Phison PS3111-S11 controller, which gets generally favorable reviews. We’ve had nothing but luck with Phison-controlled SSDs, so we’ve got no complaints here. You can also see the 4 x 32GB NAND chips as well as a micro-USB 3.1 Gen1 connector soldered to the PCB.

Power, Heat & Noise

One of the numerous benefits of small form-factor 2.5″ drives is that their enclosures can be driven solely from USB power, and the Bolt B10 is no exception. It’s a USB 3.1 Gen 1 device, but it’s also backwards-compatible with USB 2.0, so its power draw cannot exceed the USB 2.0 specification of 5 power units of 100 mA, or 500 mA total. At 5 V this equates to a maximum power consumption of 2.5 W, though I suspect the B10 draws about half of that, even when continuously reading or writing.

In fact, one of the more interesting use cases (for me) of portable hard drives is slinging lossless music around when I’m on the go. Specifically, I like to be able to plug a drive into an aftermarket stereo head unit with front-panel USB. Unfortunately, front-panel USB just doesn’t deliver enough power to spin up most standard 2.5″ USB drives. The B10 works flawlessly in this application, giving you up to 512 GB of easily transportable lossless music for your commute.

Additionally, solid state drives generate less heat and noise than their spinning counterparts, as one might expect. The SP Bolt B10 makes no discernible noise during operation and the tiny case feels cool to the touch even after long continuous writes.

Specifications & Features

Included in the box is the Bolt B10 and a Micro USB 3.0 cable as shown here.

Now let’s take a look at the manufacturer specifications for the B10:

Power supply DC 5V
Cable Micro-B (B10) to Type-A (PC/NB)
Capacity 128GB, 256GB, 512GB
Dimensions 80.0 x 49.5 x 9.4mm
Weight 25g
Material Plastic
Color Black
Interface USB 3.1 Gen1 / USB 3.0, USB 2.0 compatible
Performance Read(max.)400MB/s
Performance Write(max.)400MB/s
Supported OS Windows 10/8.1/8/7/Vista/XP, Mac OS 10.5.x or later, Linux 2.6.31 or later
Operating Temperature 0℃~ 70℃
Storage Temperature -40℃~ 85℃
Certification CE/FCC/BSMI/Green dot/WEEE/RoHS/EAC/KCC/RCM
Warranty 3 years

Performance and Features

Ultimately, performance is probably what people care about most in a portable SSD. The USB interface has a long history of offering underwhelming performance. USB 2.0 offered pretty measly transfer rates of 480 Mbps or 60 MB/s. Due to bus limitations, real-world speeds were closer to 35 MB/s, however. Even in 2000 when USB 2.0 was introduced, an average spinning drive could easily saturate the USB link. It wasn’t until the advent of USB 3.0, nearly 10 years later, that the USB interface was no longer the bottleneck. With transfer speeds of 5 Gbps (625 MB/s), USB 3.0 suddenly made spinning drives feel slow, and the thought of portable SSDs began to make a lot of sense.

In this case, the Bolt B10 tested was a 128 GB model, and testing was performed on a modest Dell laptop: a Latitude E6430, Core i7-3720QM CPU @ 2.60 GHz, 8 GB RAM, Silicon Power S60 120 GB SSD. Given that the Bolt B10 has a theoretical maximum throughput of 400 MB/s, we should not be bottlenecked by the USB 3.0 interface.

With the queue depth set to 4, ATTO Benchmarks showed write speeds very near the claimed 400 MB/s, peaking at nearly 360 MB/s, while read speeds exceeded the listed specifications, reaching speeds of approximately 430 MB/s.

CrystalDiskMark’s numbers weren’t quite as glowing, but were still quite good overall.

Some real-world file copies yielded satisfactory results. Large file copies were generally characterized by peaking at over 280MB/s then leveling out to ~130-150MB/s for the duration of the copy.

Small file copies can be quite taxing for any storage media. The results here were also on par with other similar drives we’ve tested. Here you see the copy of the WinSXS Folder – 5.96 GB (6,405,198,876 bytes) containing 76,348 Files, 23,163 Folders.

Finally, Silicon Power lists the drive’s features as follows:

  • Ultra-compact and lightweight for great portability
  • Clean and smooth exterior design
  • Large storage capacity of up to 512GB
  • Superfast transfer rates of up to 400MB/s read & write speed*
    *The transmission speed will vary depending on system performance, such as hardware, software, file system, usage and product capacity. Speeds are tested by Silicon Power with FAT32 (for cross-platform sharing) or NTFS (for single file over 4GB) file formats using CDM or ATTO tests.
  • Supports LDPC error correction and Wear Leveling
  • Free download of SP Widget software for data backup & restore, AES 256-bit encryption, and cloud storage

One thing to note is that out of the box the Bolt B10 was formatted FAT32, which is an interesting choice. As such, I could not initially copy files larger than 4 GB to the drive. To someone who’s been in IT for 20 years this isn’t a big deal, and a quick reformat to NTFS resolved the issue. However, I can easily see how this might confuse someone a bit less technology-savvy. Additionally, one of my pet peeves about many external hard drives is the horde of autorun software that comes pre-loaded. Most people simply want to drag and drop files to their USB drives, so this software is ordinarily just a nuisance. On a positive note, the SP Bolt B10 ships with very little on the drive out of the box. In fact, the only files present were there to remind you to register your product with Silicon Power.


It should be no surprise that SSDs are now the logical choice when it comes to no-compromise portable storage. And though you’re certainly not going to tote around 4 TB SSDs anytime soon (unless you have really deep pockets), affordable, portable SSDs are now large enough to meet most users’ needs. Silicon Power offers just such a drive in the Bolt B10. Are there faster portable SSDs out there? Sure, at least on paper. But considering that you’ll be tossing this in your bag and possibly leaving it on the table at the coffee shop, I’m not sure I can justify the extra cash for a few arbitrary MB/s. Additionally, it seems that many manufacturers rate their products in the lab, under conditions that are hard to replicate in the real world. It’s been my experience, however, that Silicon Power’s products usually meet or exceed claimed specifications. Frankly, realistic product specifications are a breath of fresh air, and make you feel like you’re getting your money’s worth, all while patronizing a company that clearly wants to earn your trust.

Disk Pooling in Linux with mergerFS


Remember the old days when we used to marvel at disk drives that measured in the hundreds of megabytes? In retrospect it seems funny now, but it wasn’t uncommon to hear someone mutter, “Man, we’ll never fill that thing up.”

If you don’t remember that, you probably don’t recall life before iPhones either, but we old timers can assure you that, once upon a time, hard drives were a mere fraction of the size they are today. Oddly enough, though, you’ll still hear the same old tripe, even from fellow IT folks. The difference now, however, is that they’re holding a helium-filled 10TB drive in their hands. But just like yesteryear, we’ll fill it up, and it’ll happen faster than you think.

In the quest to build a more scalable home IT lab, we grew tired of the old paradigm of building bigger and bigger file servers and migrating hordes of data as we outgrew the drive arrays. Rather than labor with year-over-year upgrades, we ultimately arrived at a storage solution that we feel is the best compromise of scalability, performance, and cost, while delivering SAN-NAS hybrid flexibility.

We’ll detail the full storage build in another article, but for now we want to focus specifically on the Linux filesystem that we chose for the NAS component of our storage server: mergerFS.

What is mergerFS?

Before we get into the specifics regarding mergerFS, let’s provide some relevant definitions:

Filesystem – Simply put, a filesystem is a system of organization used to store and retrieve data in a computer system. The filesystem manages the space used, filenames, directories, and file metadata such as allocated blocks, timestamps, ACLs, etc.

Union Mount – A union mount is a method for joining multiple directories into a single directory. To a user interfacing with the union, the directory would appear to contain the aggregate contents of the directories that have been combined.

FUSE – Filesystem in Userspace. FUSE is software that allows non-privileged users to create and mount their own filesystems, which run as an unprivileged process or daemon. Userspace filesystems have a number of advantages. Unlike kernel-space filesystems, which are rigorously tested and reviewed prior to acceptance into the Linux kernel, userspace filesystems, since they are abstracted away from the kernel, can be developed much more nimbly. It’s also unlikely that any filesystem that did not address mainstream requirements would be accepted into the kernel, so userspace filesystems are able to address more niche needs. We could go on about FUSE, but suffice it to say that userspace filesystems provide unique opportunities for developers to do some cool things with filesystems that were not easily attainable before.

So what is mergerFS? MergerFS is a union filesystem, similar to mhddfs, UnionFS, and aufs. MergerFS enables the joining of multiple directories that appear to the user as a single directory. This merged directory will contain all of the files and directories present in each of the joined directories. Furthermore, the merged directory will be mounted on a single point in the filesystem, greatly simplifying access and management of files and subdirectories. And, when each of the merged directories themselves are mount points representing individual disks or disk arrays, mergerFS effectively serves as a disk pooling utility, allowing us to group disparate hard disks, arrays or any combination of the two. At Teknophiles, we’ve used several union filesystems in the past, and mergerFS stands out to us for two reasons: 1) It’s extremely easy to install, configure, and use, and 2) It just plain works.

An example of two merged directories, /mnt/mergerFS01 and /mnt/mergerFS02, and the resultant union.

mergerFS vs. LVM

You might be asking yourself, “why not just use LVM, instead of mergerFS?” Though the Logical Volume Manager can address some of the same challenges that mergerFS does, such as creating a single large logical volume from numerous physical devices (disks and/or arrays), they’re quite different animals. LVM sits just above the physical disks and partitions, while mergerFS sits on top of the filesystems of those partitions. For our application, however, LVM has one significant drawback: When multiple physical volumes make up a single volume group and a logical volume is carved out from that volume group, it’s possible, and even probable, that a file will be comprised of physical extents from multiple devices. Consider the diagram below.

The practical concern here is that should an underlying physical volume in LVM fail catastrophically, be it a single disk or even a whole RAID array, any data on, or that spans, the failed volume will be impacted. Of course, the same is true of mergerFS, but since the data does not span physical volumes, it’s much easier to determine which files are impacted. There’s unfortunately no easy way that we’ve found to determine which files are located on which physical volumes in LVM.

Flexibility & Scalability

As we’ve alluded to a few times, mergerFS doesn’t care whether the underlying physical volumes are single disks, RAID arrays, or LVM volumes. Since the two technologies operate at different levels with respect to the underlying disks, nothing prevents the use of LVM with mergerFS. In fact, we commonly use the following formula to create large mergerFS disk pools: multiple mdadm 4-disk RAID10 arrays > LVM > mergerFS. In our case, the LVM is usually just used for management; we typically do not span multiple physical volumes with any volume group, though you easily could.

This gives you incredible flexibility and scalability. Want to add an individual 5400 RPM 4 TB disk to an existing 12 TB RAID6 array comprised of 6+2 7200 RPM 2 TB drives, for 16 TB total? No problem. Want to later add an LVM logical volume that spans two 8 TB 2+2 RAID10 arrays for another 16 TB? MergerFS is fine with that, too. In fact, mergerFS is completely agnostic to disk controller, drive size, speed, form factor, etc. With mergerFS, you can grow your storage server as you see fit.

mergerFS & Your Data

One interesting note about mergerFS is that since it is just a proxy for your data, it does not manipulate the data in any way. Prior to being part of a mergerFS union, each of your component disks, arrays, and logical volumes will already have a filesystem. This makes data recovery quite simple: should a mergerFS pool completely crash (though unlikely), just remove the component storage devices, drop them into a compatible system, mount them as usual, and access your data.

What’s more, you can just as easily add a disk to mergerFS that already has data on it. This allows you to decide at some later point whether you wish to add an in-use disk to the mergerFS pool (try that with LVM). The existing data will simply show up in the mergerFS filesystem, along with whatever data is on the other volumes. It just doesn’t get any more straightforward!

mergerFS & Samba

As we stated earlier, we selected mergerFS for the NAS component of our Teknophiles Ultimate Home IT Lab storage solution. Since this is a NAS that is expected to serve files to users in a Windows domain, we also run Samba to facilitate Windows file sharing. Apparently there are rumblings regarding issues with mergerFS and Samba; however, according to the author of mergerFS, these are likely due to improper Samba configuration.

Here at Teknophiles, we can unequivocally say that in server configurations based on our, “Linux File Servers in a Windows Domain,” article, Samba is perfectly stable with mergerFS. In fact, in one mergerFS pool, we’re serving up over 20TB of data spread over multiple mdadm RAID arrays. The server in question is currently sitting at 400 days of uptime, without so much as a hiccup from Samba or mergerFS.

Installing mergerFS

OK, so that’s enough background, now let’s get to the fun part. To install mergerFS, first download the latest release for your platform. We’re installing this on an Ubuntu 14.04 LTS server, so we’ll download the Trusty 64-bit .deb file. Once downloaded, install via the Debian package manager.
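Something like the following, where the &lt;version&gt; placeholder stands in for the current release number on the mergerfs GitHub releases page:

```shell
wget https://github.com/trapexit/mergerfs/releases/download/<version>/mergerfs_<version>.ubuntu-trusty_amd64.deb
sudo dpkg -i mergerfs_<version>.ubuntu-trusty_amd64.deb
```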

Creating mergerFS volumes

Now we’re going to add a couple of volumes to a mergerFS pool. You can see here that we have a virtual machine with two 5GB virtual disks, /dev/sdb and /dev/sdc. You can also see that each disk has an ext4 partition.

Next, create the mount points for these two disks and mount the disks in their respective directories.

Now we need to create a mount point for the union filesystem, which we’ll call ‘virt’ for our virtual directory.
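Using the /dev/sdb1 and /dev/sdc1 partitions from above, the whole sequence looks like this:

```shell
# Mount points for the component disks
sudo mkdir -p /mnt/mergerFS01 /mnt/mergerFS02
sudo mount /dev/sdb1 /mnt/mergerFS01
sudo mount /dev/sdc1 /mnt/mergerFS02

# Mount point for the union itself
sudo mkdir -p /mnt/virt
```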

And finally, we can mount the union filesystem. The command follows the syntax below, where <srcmounts> is a colon delimited list of directories you wish to merge.

mergerfs -o<options> <srcmounts> <mountpoint>

Additionally, you can also use globbing for the source paths, but you must escape the wildcard character.
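For example, using a typical option set:

```shell
# Explicit colon-delimited source list
sudo mergerfs -o defaults,allow_other,use_ino,fsname=mergerFS /mnt/mergerFS01:/mnt/mergerFS02 /mnt/virt

# Equivalent, using a glob (note the escaped wildcard)
sudo mergerfs -o defaults,allow_other,use_ino,fsname=mergerFS /mnt/mergerFS\* /mnt/virt
```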

There are numerous additional options that are available for mergerFS, but the above command will work well in most scenarios. From the mergerFS man page, here’s what the above options do:

defaults: a shortcut for FUSE’s atomic_o_trunc, auto_cache, big_writes, default_permissions, splice_move, splice_read, and splice_write. These options seem to provide the best performance.

allow_other: a libfuse option which allows users besides the one which ran mergerfs to see the filesystem. This is required for most use-cases.

use_ino: causes mergerfs to supply file/directory inodes rather than libfuse. While not a default it is generally recommended it be enabled so that hard linked files share the same inode value.

fsname=name: sets the name of the filesystem as seen in mount, df, etc. Defaults to a list of the source paths concatenated together with the longest common prefix removed.

You can now see that we have a new volume called, “mergerFS,” which is the aggregate 10GB, mounted on /mnt/virt. This new mount point can be written to, used in local applications, or served up via Samba just as any other mount point.

Other Options

Although getting a little into the weeds, it’s worth touching on an additional option in mergerFS that is both interesting and quite useful. The FUSE function policies determine how a number of different commands behave when acting upon the data in the mergerFS disk pool.

func.<func>=<policy>: sets the specific FUSE function’s policy. See below for the list of value types. Example: func.getattr=newest

Below we can see the FUSE functions and their category classifications, as well as the default policies.

Category   FUSE Functions
action     chmod, chown, link, removexattr, rename, rmdir, setxattr, truncate, unlink, utimens
create     create, mkdir, mknod, symlink
search     access, getattr, getxattr, ioctl, listxattr, open, readlink
N/A        fallocate, fgetattr, fsync, ftruncate, ioctl, read, readdir, release, statfs, write

Category   Default Policy
action     all
create     epmfs
search     ff

To illustrate this a bit better, let’s look at an example. First, consider file or directory creation, which falls under the “create” category. Looking at the default policy for the create category, we see that it is called “epmfs.” From the man pages, the epmfs policy is defined as follows:

epmfs (existing path, most free space)
Of all the drives on which the relative path exists choose the drive with the most free space. For create category functions it will exclude readonly drives and those with free space less than min-freespace. Falls back to mfs.

Breaking this down further, we can see that epmfs is a “path-preserving policy,” meaning that only drives that have the existing path will be considered. This gives you a bit of control over where certain files are placed. Consider, for instance, that you have four drives in your mergerFS pool, but only two of the drives contain a directory called /pictures. When using the epmfs policy, only the two drives with the pictures directory will be considered when you copy new images to the mergerFS pool.

Additionally, the epmfs policy will also serve to fill the drive with the most free space first. Once drives reach equal or near-equal capacities, epmfs will effectively round-robin the drives, as long as they also meet the existing path requirement.

There are a number of equally interesting policies, including ones that do not preserve paths (create directories as needed), fill the drive with the least free space (i.e. fill drive1, then drive2, etc.), or simply use the first drive found. Though the defaults will generally suffice, it’s a good idea to become familiar with these policies to ensure that your mergerFS configuration best suits your needs.
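As a sketch, overriding the default create policy at mount time might look like the following (paths reuse this article’s examples; the mfs choice is just for illustration — check the man page for the full policy list):

```shell
# Use "most free space" for every create-category call instead of the
# path-preserving epmfs default:
mergerfs -o defaults,allow_other,use_ino,category.create=mfs \
    /mnt/mergerFS01:/mnt/mergerFS02 /mnt/virt

# Or override just a single FUSE function, e.g.:
#   -o func.create=mfs
```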

Adding Volumes

Similar to creating a mergerFS pool, adding disks is also quite simple. Let’s say, for instance, you want to also use the space on your existing operating system disk for the mergerFS pool. We can simply create another directory in the root filesystem to use in the union. We created ours in /mnt for demonstration purposes, but your home directory might equally suit.

Next, we unmount our existing mergerFS pool and remount it including the new directory.
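The original commands were shown as screenshots; a hedged reconstruction (the /mnt/mergerFS00 name for the new root-filesystem directory is our assumption, chosen to match the listings later in this article) might be:

```shell
# Unmount the pool, create a directory on the OS disk, and remount
# with the new branch included:
fusermount -u /mnt/virt
mkdir -p /mnt/mergerFS00
mergerfs -o defaults,allow_other,use_ino,fsname=mergerFS \
    /mnt/mergerFS00:/mnt/mergerFS01:/mnt/mergerFS02 /mnt/virt
```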

Notice now our total mergerFS volume space is 19GB – the original 10GB from our two 5GB disks, plus the 9GB from /dev/sda2. And since we now have truly disparate volumes, let’s test the default epmfs policy. Start by creating three files in our mergerFS pool mount:

Based on our expectations of how epmfs works, we should see these files in the /mnt/mergerFS00 folder, since /dev/sda2 has the most free space.

Sure enough, this appears to work as we anticipated. Now let’s create a folder on one disk only.

Replicating our previous experiment, we’ll create a few more files, but this time in the pics directory in our mergerFS pool.

Since the epmfs policy should preserve the path pics/, and that path only exists on /mnt/mergerFS01, this is where we expect to see those files.
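The experiment above was shown as screenshots in the original post; reconstructed (file names and branch paths follow the article’s examples, so treat them as illustrative), it runs roughly like this:

```shell
# Create three files through the pool; epmfs should place them on the
# branch with the most free space (/mnt/mergerFS00 on /dev/sda2):
touch /mnt/virt/file1 /mnt/virt/file2 /mnt/virt/file3
ls /mnt/mergerFS00

# Create the pics directory on one branch only, then write through
# the pool; epmfs preserves the existing path:
mkdir /mnt/mergerFS01/pics
touch /mnt/virt/pics/pic1.jpg /mnt/virt/pics/pic2.jpg
ls /mnt/mergerFS01/pics
```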

Removing Volumes

Removing a volume from the mergerFS pool follows the same procedure as adding a drive. Simply remount the pool without the path you wish to remove.
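Sketched with this article’s example paths (per the article, the branch being removed is the OS-disk directory on /dev/sda2):

```shell
# Unmount, then remount listing only the branches you want to keep:
fusermount -u /mnt/virt
mergerfs -o defaults,allow_other,use_ino,fsname=mergerFS \
    /mnt/mergerFS01:/mnt/mergerFS02 /mnt/virt
```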

Notice now that file1, file2, and file3 are no longer present, since they were located on /dev/sda2, which has been removed. Additionally, our total space is back to its previous size.

mergerFS & fstab

Typically, you’ll want your mergerFS pool to persist across reboots. To do this we can simply leverage fstab, as we would for any mount point. Using our example above, the fstab entry should follow this format:
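The original fstab line appeared as a screenshot; a hedged reconstruction using this article’s mount point (branch paths illustrative) would be:

```shell
# /etc/fstab -- one line; the fuse.mergerfs type lets mount(8) find
# the mergerfs helper at boot:
/mnt/mergerFS01:/mnt/mergerFS02  /mnt/virt  fuse.mergerfs  defaults,allow_other,use_ino,fsname=mergerFS  0  0
```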

Performance Considerations

We would be remiss to end this article without discussing the possible performance implications of mergerFS. As with any disk pooling utility, a possible weakness of this type of configuration is the lack of striping across drives. In RAID or LVM configurations, striping may be used to take advantage of simultaneous I/O and the throughput of the available spindles. RAID level, array size, and LVM configuration play dramatically into exactly what this looks like, but even a 6+2 RAID6 array with commodity drives can achieve read speeds that will saturate a 10Gbps network. Use LVM to stripe across multiple arrays, and you can achieve stunning performance. If you’re only using single disks in your mergerFS pool, however, you’ll always be limited to the performance of a single drive. And maybe that’s OK, depending on your storage goals. Of course, careful planning of the disk subsystem and using disk arrays in your mergerFS pool can give you the best of both worlds – excellent performance and the flexibility and scalability of disk pooling.

Lastly, it’s worth noting that mergerFS is yet another layer on top of your filesystems, and FUSE filesystems by their very nature add some overhead. This overhead is generally negligible, however, especially in low to moderate I/O environments. You might not want to put your busy MySQL database in userspace, but you’ll likely not notice the difference in storing your family picture albums there.

Reclaim Linux Filesystem Reserved Space


As IT Pros, we have a myriad of tools available to configure, tweak, and tune the systems we manage. So much so that there are often everyday tools right under our noses with applications we may not immediately realize. In a Linux environment, tune2fs is an indispensable tool, used to tune parameters on ext2/ext3/ext4 filesystems. Most Linux sysadmins who have used mdadm software RAID will recognize this utility if they’ve ever had to manipulate the stride size or stripe width of an array.


First, let’s take a look at the disks on an Ubuntu file server so we can see what this tool does.

Now, we can use tune2fs with the -l option to list the existing parameters of the filesystem superblock on /dev/sdb1.
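A safe way to follow along without a spare disk is to build a small ext4 filesystem inside a regular file and inspect it exactly as you would a real partition such as /dev/sdb1 (the /tmp/demo.img path is our example):

```shell
# Create a 64MB file-backed ext4 filesystem -- no root required:
dd if=/dev/zero of=/tmp/demo.img bs=1M count=64 status=none
mkfs.ext4 -q -F /tmp/demo.img

# List the superblock parameters, narrowed to the one discussed below:
tune2fs -l /tmp/demo.img | grep -i 'reserved block count'
```

The reported reserved block count will be five percent of the filesystem’s total blocks, for the reasons described below.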

Reserved Blocks?

As you can see, there are a number of parameters from the filesystem that we can view, including a number that can be tuned with tune2fs. In this article however, we’re going to focus on a rather simple and somewhat innocuous parameter – reserved block count. Let’s take a look at that parameter again:

At first glance, it isn’t obvious what this parameter means. In fact, I’ve worked with Linux sysadmins with years of experience who weren’t aware of this little gem. To understand this parameter, we probably have to put its origins in a bit of context. Once upon a time, SSDs didn’t exist, and no one knew what a terabyte was. In fact, I remember shelling out well north of $100 for my first 20GB drive. To date myself even further, I remember the first 486-DX PC I built with my father in the early ’90s, and its drive was measured in megabytes. Crazy, I know. Since drive space wasn’t always so plentiful, and the consequences of running out of disk space on the root partition in a Linux system are numerous, early filesystem developers did something smart – they reserved a percentage of filesystem blocks for privileged processes. This ensured that even if disk space ran precariously low, the root user could still log in, and the system could still execute critical processes.

That magic number? Five percent.

And while five percent of that 20GB drive back in 1998 wasn’t very much space, imagine the new 4-disk RAID1/0 array you just created with 10TB WD Red Pros. Five percent of 20TB of usable space is a full terabyte. You see, though this was likely intended for the root filesystem, by default this setting applies to every filesystem created. Now, I don’t know about you, but at $450 for a 10TB WD Red Pro, that’s not exactly space I’d want to throw away.

We Don’t Need No Stinking Reserved Blocks!

The good news, however, is that space isn’t lost forever. If you forget to initially set this parameter when you create the filesystem, tune2fs allows you to retroactively reclaim that space with the -m option.
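Demonstrated again on a loopback image so it’s safe to try anywhere (on real hardware, substitute your data partition, e.g. /dev/sdb1 – and never do this to the root filesystem):

```shell
# Build a throwaway file-backed ext4 filesystem:
dd if=/dev/zero of=/tmp/demo2.img bs=1M count=64 status=none
mkfs.ext4 -q -F /tmp/demo2.img

# Retroactively set the reserved percentage to zero:
tune2fs -m 0 /tmp/demo2.img
```

tune2fs confirms the change, reporting the new reserved percentage and block count.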

Here you can see we’ve set the reserved blocks on /dev/sdb1 to 0%. Again, this isn’t something you’d want to do on a root filesystem, but for our “multimedia” drive, this is fine – more on that later. Now, let’s look at our filesystem parameters once again.

Notice now that our reserved block count is set to zero. Finally, let’s have a look at our free disk space to see the real-world impact. Initially, we had 50GB of 382GB free. Now we can see that, although neither the size of the disk nor the amount of used space has changed, we have 69GB free, reclaiming 19GB of space.

Defrag Implications

Lastly, I’d be remiss if I didn’t mention that these reserved blocks serve one other function. As always, there’s no such thing as a free lunch (or free space, in this case). The filesystem reserved blocks also provide the system with free blocks with which to defragment the filesystem. Clearly, zeroing them out isn’t something you’d want to do on a filesystem that contains a database, or in any other situation with a large number of writes and deletions. However, if, as in our case, you’re dealing with mostly static data in a write once, read many (WORM) type configuration, this shouldn’t have a noticeable impact. In fact, the primary developer of tune2fs, Google’s Theodore Ts’o, can be seen here confirming this supposition.

So there you have it. You may be missing out on some valuable space, especially in those multi-terabyte arrays out there. And though it’s no longer 1998, and terabytes do come fairly cheap these days, it’s still nice to know you’re getting all that you paid for.

Silicon Power S60 60GB SSD Review


As we’ve mentioned in a previous article, choosing the correct hard drive for each application is critical to the performance and longevity of the drive. Different types of drives may be well-suited for some applications, but much less so for others.

Server OS Drives

One area that’s becoming somewhat challenging is finding drives to serve as operating system drives in our lab servers. On the surface, the requirements for such a drive do not seem too difficult to fulfill – we need drives that work well in a RAID1 array, display good read characteristics, are durable, and are inexpensive. Incidentally, modern consumer drives fit this bill quite well – with one caveat. You see, finding good consumer SSDs is an easy task. There are excellent options from Samsung, Crucial/Micron, Western Digital, SanDisk, Silicon Power, Toshiba/OCZ and many others. In fact, if we were looking for a 500GB SSD, the hardest part might be choosing between what seems to be an endless number of similarly performing drives.

If we were looking for a 500GB SSD, that is. In selecting a server OS drive, a 500GB drive would mostly be wasted space. A Windows Server OS might use 20GB – 30GB, unless you run a large database locally. Domain Controllers typically stay under the 15GB mark. Linux servers are considerably smaller yet – a storage server with a few applications might consume 10GB – 15GB of disk space, including swap. And it would be pretty egregious if a mail server used beyond 6GB – 7GB. Obviously, this is where virtualization and shared resources become so advantageous. And while we firmly believe virtualization is a key component of any good home or work IT lab (which we’ll discuss in great detail later), you may not yet be at the point where you need to virtualize, or you may not have the resources to do so. Furthermore, even if you do virtualize, there are still several use cases where physical boxes are desirable or even necessary. And physical boxes need OS drives.

The Silicon Power S60

While on the hunt for small, inexpensive consumer SSDs, we’ve run across a few models that have worked well. Initially, we employed several of the Mushkin ECO2 60GB SSDs, as they were inexpensive, sized right, and, though they never got dreamy reviews, seemed pretty solid. In the end, however, these drives appear to have been phased out by the manufacturer, and we have admittedly seen about a 20% failure rate over a few years. So, in an effort to find a suitable replacement for the now-defunct Mushkin ECO2s, we stumbled upon the Silicon Power line of SSDs. Like many SSD manufacturers, Silicon Power offers a number of SSD model ranges, and it can sometimes be hard to discern significant differences among them. From entry-level consumer “laptop upgrade” S55 and S60 models, to “prosumer” gaming models like the S85 with a five-year warranty, as well as the TLC 3D NAND-based A55, Silicon Power has an offering for most applications. This ultimately led us to the Silicon Power S60. The S60 is a “consumer plus grade” SSD, which means it’s designed to fit the inexpensive laptop-upgrade niche. And while it’s quite suitable for such duties, we like the fact that it’s available in a 60GB model – perfect for operating system drives. Best yet, it can regularly be found for under $35 per drive, making it a relative bargain given its small capacity. Consumer reviews of the S60 are solid as well, with nearly 200 reviews averaging 4.3 stars on Amazon at the time of this post. Since we run a double-digit number of these drives, we always keep a couple of cold spares on hand, but it’s still nice to know a replacement can be at our door in two days if necessary (shameless Amazon Prime plug).

From the Silicon Power site, the S60’s specifications are listed as follows:

Capacity 32GB, 60GB, 120GB, 240GB, 480GB
Dimensions 100.0 X 69.9 X 7.0mm
Weight 63g (max.)
Interface SATA III
Performance Read (max.)
ATTO: 480GB: 560MB/s; 240GB: 550MB/s; 120GB: 520MB/s
CDM: 480GB: 520MB/s; 240GB: 500MB/s; 120GB: 420MB/s
Performance Write (max.)
ATTO: 480GB, 240GB: 500MB/s; 120GB: 490MB/s
CDM: 480GB: 460MB/s; 240GB: 300MB/s; 120GB: 170MB/s
MTBF 1,500,000 hours
Operation Voltage 5V
Vibration Resistance Test 20G
Shock Resistance Test 1500G Max
Warranty 3 years
Note Performance result may vary, depending on system platform, software, interface and capacity.

Performance and Features

Though Silicon Power doesn’t list the performance specifications of the 60GB model, we tested the drive performance with both the manufacturer’s software, SP Toolbox, as well as ATTO Disk Benchmark and Crystal Disk Mark. These tests were performed in Windows Server 2012 R2 on a Supermicro A1SRI-2558F’s SoC SATA3 (6Gbps) ports.

As you can see, we found that this drive performs quite well for a smaller drive; our read performance in CDM actually exceeds the manufacturer’s mark for the 120GB drive by more than ten percent, at 472MB/s. ATTO read numbers fell in line with what we’d expect – just shy of the numbers cited for the 120GB model, at around 475MB/s. Likewise, the ATTO write performance fell in line with expectations, somewhat lower than the 120GB model at around 265MB/s, while we exceeded the listed CDM speed for the 120GB model by over 50%, at 262MB/s!

Additionally, one might notice that the SSD controller is conspicuously absent from the drive specifications. There has previously been some concern that drive manufacturers may use multiple controllers in budget drives such as the S60. It would seem that this is indeed the case with the S60, which has been known to contain either a SandForce or a Phison controller. It doesn’t seem that Silicon Power is trying to hide this fact, however, as some manufacturers have been accused of doing, nor do they claim that it is one controller vs. the other. In fact, one can easily see that they reference both controllers in the SP SSD Firmware Update User Manual, indicating that your SSD may contain one or the other. To be truthful, we’ve had success with both controllers, though we ran the Firmware Update just to satisfy our curiosity. Unfortunately, we were not able to get the SSD Firmware Update to recognize the S60 in multiple systems, and were therefore unable to confirm whether the drive we tested contained the SandForce or the Phison controller. Again, the drive performs as expected in both RAID and single-drive configurations, so this is not a major concern to us.

Finally, Silicon Power lists the drive’s features as follows:

  • Adopts MLC NAND flash and “SLC Cache Technology” to improve overall performance
  • 15 x faster than a standard 5400 HDD*
    *Based on “out-of-box performance” using a SATA Revision 3.0 motherboard. Performance result may vary, depending on system platform, software, interface, and capacity.
  • 7mm slim design suitable for ultrabooks and ultra-slim laptops
  • Supports TRIM command and garbage collection technology
  • NCQ and RAID ready
  • ECC (error correction code) technology to guarantee reliable data transmission
  • S.M.A.R.T. monitoring system
  • Low power consumption, shock and vibration-proof, noiseless and low latency
  • Free SP ToolBox software download for disk information such as self-monitoring analysis report, extent of consumption, and SSD diagnostics

It is heartening to note that RAID is specifically listed as a feature. All of our S60s are in RAID1 arrays, so while we cannot comment on the stability or performance of the S60 in parity arrays, we have successfully tested them with numerous disk controllers, such as the HP P410i, LSI/Avago SAS 92xx HBAs in IT/IR modes, Adaptec SATA HostRAID, and software RAID via numerous onboard SATA controllers. All seem to work flawlessly with the S60, though controllers like the HP P410 will not report SSD Wear Status, which is not unexpected from a consumer drive without HP firmware. In a few cases, we even replaced failed Mushkin ECO2 60GB drives with an SP S60, and now have mixed RAID1 arrays with one 60GB Mushkin and one 60GB S60. Though the S60 reports a slightly larger capacity than the Mushkin, rebuilds proceeded quickly and without issue.


All in all, we’re quite pleased with the Silicon Power S60. And though we’re probably not using it in a capacity that Silicon Power ever intended, it fills this niche nicely. Heck, we’ve even used a few of the larger S60 models for their intended purpose: breathing new life into old laptops to throw around in the garage as dataloggers (we’re occasional drag racers) or machines for the kids to beat on. Regardless of the application, the S60 has thus far dutifully served its purpose. And though it may not be as fast as the latest generation of drives out there, it’s not hard to make the case that the S60 is one heck of a value.

Pass-through Disks vs. VHDX and the VhdxTool


When evaluating storage for a Microsoft Hyper-V guest machine, there are several options available these days. Solutions like iSCSI and Fibre Channel present block storage directly to virtual machines via Virtual Switches and Virtual SANs. While offering physical-server-like performance, these solutions require significant hardware, infrastructure, and the skill sets to manage them. Two popular options that don’t require extravagant disk subsystems, however, are pass-through disks and VHD/VHDX. Both offer the ability to attach the disk to the guest in the virtual machine settings, so there’s little configuration in the virtual machine itself. Let’s take a quick look at these two options.

Hyper-V Pass-Through Disks

For those not familiar, pass-through disks are disks present on the Hyper-V server that could either be local to the hypervisor or they could be LUNS mapped to the hypervisor via iSCSI or Fibre Channel. In this configuration, the disks are reserved on the hypervisor to enable exclusive access to the disk by the VM. This is done by initializing the pass-through disk in Disk Manager on the hypervisor, and then placing the disk in an Offline state.

You can see what this looks like in both diskpart and disk manager.


Virtual machine configuration for pass-through disks is straightforward as well. Once the disk is offlined on the hypervisor, simply open the settings for the virtual machine, click on the storage controller, and add a hard disk. Select the radio button for “Physical hard disk:” and choose the appropriate disk from the drop-down. Again, this MUST be an initialized disk that has been placed Offline in Disk Management on the hypervisor.


Now that you’ve seen a bit about pass-through disks, let’s talk about the pros and cons of this type of storage.


Pros:

  1. Performance.  This is the most oft-cited reason for using pass-through disks.  Proponents like to talk about the advantage of not virtualizing the disk, and the near-physical performance of pass-through disks.

And, that’s about it.  Performance is really the only reason you’ll hear for using pass-through disks.  And while performance is a great reason to select a particular configuration, many experts would argue that the advantage of pass-through disks over VHDs even in older versions of Hyper-V (2008 and 2008R2) was small.  Typically, numbers such as 15%-20% are thrown around.  With the improvements in Hyper-V 2012 and 2012R2 and the VHDX format, this advantage shrinks.  Use fixed rather than dynamic VHDXs, and the advantage shrinks further, to “virtually” nothing (pun intended).


Cons:

  1. Not portable.  Pass-through disks are not easily moved.  Rather than being able to copy or migrate a virtual disk to new storage, a pass-through disk must be physically moved.  Given the myriad of server and storage controller configurations, this is not typically an easy affair.
  2. Uses the entire disk.  Since the whole disk is reserved for the virtual machine, no other virtual machines can use the disk.
  3. Not recommended for OS installations.  OS installations can be problematic on pass-through disks since the VM configuration files must be located on another disk.
  4. No host-level backups.  Backups must occur at the guest level rather than the host level, since the VM has exclusive access to the disk.  As a result, backup and recovery becomes significantly more cumbersome.
  5. Difficult live-migrations.  Live migrations require storage attached to a virtual machine to migrate along with the VM.  Hyper-V clusters can be configured with pass-through disks, but it requires special considerations, and it’s not an optimal or recommended configuration.
  6. Cannot take snapshots.  Snapshots are a super-handy tool and an important advantage of using virtual machines over physical servers.  Losing this ability is a huge con.
  7. Cannot be dynamically expanded. Although dynamic disks are generally not recommended for production scenarios, they do have their use cases. Pass-through disks do not offer this functionality.


Clearly, there are numerous drawbacks to using pass-through disks.  Now let’s take a look at the alternative in this discussion – VHD/VHDX.  This is Microsoft’s implementation of virtual disks and is now the preferred method for storage in Hyper-V.  Generally, there’s no reason to use VHDs over VHDXs with modern hypervisors and VMs (there are a few environment-specific reasons beyond the scope of this article to use VHD).  VHDX supports much larger disks (64TB vs. 2TB) and is considerably more resilient to corruption, especially after a crash or power loss.  VHDX also offers online resizing, allowing you to grow or shrink a virtual disk while the VM is running.

Looking at the pros and cons of VHDX, it’s basically the reverse of pass-through disks. Like pass-through disks, VHDXs can be stored on local disks on the hypervisor or SAN LUNs attached to the hypervisor. And given there’s at most a few percent advantage in performance of a pass-through disk over a fixed-size VHDX, it’s no wonder that Microsoft pushes VHDX as the preferred storage method for VM storage.

Creating VHDXs is also a straightforward affair. From Hyper-V simply select New > Hard Disk from the Action Pane in Hyper-V Manager.

Next, select VHDX unless you have a specific reason to use the VHD format.

Choose the type of virtual hard disk you’d like to create.

Select the name and location for the new VHDX.

Now choose the size of the VHDX.

Finally, click, “Finish,” to create the VHDX.

Attaching the VHDX to the VM is much like with pass-through disks. In the VM settings, simply select the virtual hard disk radio button and provide the path to the VHDX.

So far so good. Now this is where one tiny wrinkle rears its head. If you’re following along and creating a VHDX while reading this, you’re likely still waiting for the previous step to complete, especially if the virtual disk is large and you’re not using enterprise SSD storage. If you’re like us, creating VHDXs on a small mirror or on large but relatively slow RAID arrays (i.e. SATA RAID-5 or RAID-6), this process can take a while – and for something like a 4TB VHDX on slower disks, quite a while indeed.

So we have to ask, “what’s happening during the VHDX creation that takes so dang long?”  Well, as it turns out, during this process, the entire space required to store the VHDX is zeroed out on the disk.  This is a conscious decision by Microsoft, as there are security implications in not doing so.  As Hyper-V Program Manager Ben Armstrong explains here, VHDX creation could be nearly instantaneous.  If the zeroing is not done, however, data may be recovered from the underlying disk(s).  This would be a huge security no-no, of course, so Microsoft has no choice but to opt for the safe route.

But what about for new disks on which data has never been stored?  Clearly, there’s no security risk there.  Many IT Pros would prefer the ability to decide for themselves whether or not a quick VHDX creation is appropriate.  After all, we make decisions that have major security implications every day in our jobs.  Microsoft has provided a tool to do just this for VHDs in the past.  Unfortunately, Microsoft did not release such a tool for VHDX files.

What about an option within Hyper-V?  Maybe a feature request for the next version?  Not likely.  As Mr. Armstrong notes, “the problem is that we would be providing a “do this in an insecure fashion if you know what you are doing checkbox” which would need a heck of a lot of text to try and explain to people why you do not want to do it – and then most people would not read the text anyway.”


Enter VhdxTool, from the good folks over at Systola.  They’ve picked up where Microsoft has left off and provided a tool to create and resize VHDXs nearly instantaneously, even with multi-terabyte disks.  They explicitly state this software is to be used at your own risk – it should only be used on new disks that contain no data, and not on disks that may contain data, especially when that VHDX may be accessed by end-users.

So how fast is it?  We tested VhdxTool on four 4TB drives containing 4TB VHDXs, with pretty astounding results.

So in a little over one second, a 4TB VHDX was created to attach to our VM. Not bad. Additionally, there are also a number of command line options to accommodate any scenario. These are well documented on Systola’s site, but allow you to create, extend, convert, upgrade or view VHDXs.
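As a sketch only – we’re writing the flag spelling from memory, so treat the exact syntax as an assumption and verify it against Systola’s documentation – fast-creating a large fixed VHDX might look like:

```shell
:: Run from an elevated prompt on the Hyper-V host. The path is an
:: example, and the -f/-s flag syntax is an assumption to verify:
vhdxtool create -f D:\VHDs\data01.vhdx -s 4t
```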


Due to its many advantages, using VHDXs in place of pass-through disks is clearly the way forward in Hyper-V.  If you’ve ever had reservations about using large VHDX files due to long creation times, Systola has provided an indispensable tool that gives IT Pros the option to fast-create VHDXs.  Just remember, use good discretion when doing so and be sure to keep your data safe.

Taming the HP DL180 G5



The HP ProLiant DL180 G5 can be an excellent, inexpensive storage platform for the home or lab.  It uses tried and true HP hardware, accommodates dual Xeon CPUs plus 12 x 3.5″ drives in a 2RU chassis, and both server and backplane play nicely with the ubiquitous LSI HBAs in IT mode, should you choose to use Linux software RAID.  With well-equipped models going for under $300 shipped on ebay, it’s a solid foundation on which to build a file server or shared storage platform.  But there is one problem when using the DL180 outside of a datacenter environment – it’s loud.  Not quite 747-taking-off-loud once it idles down, but it’s still loud enough that my wife wants to know why she can hear one of my “wind machines” through the master bathroom floor vents.  Of course, here at Teknophiles, we’re not going to let a little thing like fan noise stop us from using what is otherwise a compelling platform, are we?  Nope.  With a few simple modifications, we’ll show you how to domesticate this unruly beast.

DL180 Cooling

This thing blows!

The DL180 G5 employs four 60mm x 38mm Delta PFC0612DE-7Q1F PWM fans that serve double-duty as chassis and processor cooling fans.  Very capable fans, these Deltas churn out air at a rate of 68 CFM at a whopping 12000 RPM.  They also consume as much as 16.8 W (each!), and produce an unfriendly 65.5 dBA.  Not exactly ideal for a basement or lab situation in which you must share the work space.  Even at 50%, these fans spin at nearly 6000 RPM and produce a good bit of noise.

There’s some good news, however.  Unlike many servers, the HP DL180 uses standard 5-pin fan connectors to connect the PWM fans rather than a proprietary connector or a fan module.  This simply means that, since we only plan on running a single Xeon L5420 processor, we should be able to find a suitable replacement fan that’s lower RPM, and therefore much quieter, yet still offers decent enough airflow to keep the drives and CPU from overheating.  Conceptually, the server should ramp fan RPM based on CPU and various board temperatures, so even a slower fan should be able to provide airflow above the Delta fan’s idle flow rate, should additional cooling be needed.

These theories are all well and good, but the engineers at HP are some pretty smart folks.  Most enterprise chassis are quite good at monitoring hardware, including the cooling components.  Some servers might have a minimum fan speed threshold, below which the server may go into a “limp mode” (i.e. full fan power), or even fail to POST altogether.  So, before we shell out money for new fans, we need a proof of concept to demonstrate that the server will not balk at a slower RPM fan.

Fan Test

Like many dutiful IT guys, we have piles of abandoned computer crap in our basements.  This may make our wives nuts, but occasionally we get to smugly smile when some old part fits the bill perfectly for a “critical” test we’re performing.  Such was the case here, when an old AMD PWM processor fan was called into service for our DL180 G5 proof of concept.

Spinning is good.


With the test fan in place, the server was booted into the BIOS to check fan speeds.  As fate would have it, the HP DL180 G5 doesn’t appear to be fazed that one of its fans is spinning at a mere fraction of its normal speed.  Here you can see the AMD CPU fan rotating at a modest 1776 RPM, while the stock fans spin at 5800 RPM.  You may also note that the lower critical (LC) value for fan RPM is 0.00.  As long as a fan is detected, the server should boot and operate as normal.  It’s on us, then, to monitor temps and make sure we don’t melt anything!

One of these is not like the others.


New Fan Installation

Now, on to the fun stuff.  After sampling a few different fans in the 60mm x 38mm form factor, we settled on a MagLev Sunon PWM fan.  Known for their quiet bearing operation and long life, the MagLev Sunon fans are some of our favorite fans for numerous applications.  We selected the PSD1206PMV3-A, which can oftentimes be found under analogous Dell part numbers.  This fan flows just over 34 CFM but consumes only 3.4 W, generating significant power savings when multiplied across four fans.  Best yet, its max rotational speed is around 8000 RPM, which should yield much lower sound levels at idle speeds.

It was evident upon receiving the new Sunons that this wasn’t going to be a plug-and-play affair, however.  As you can clearly see, the new fans use a non-standard 4-pin connector, and also have a quite short pigtail.  To make these fans work in the DL180, we’re going to have to break out the soldering iron and the crimp tool.

De-pinning the connector

After cutting off the old 4-pin connector, appropriately colored wires were soldered onto the pigtail to extend the wires to the proper length.  This will allow us to use the DL180 chassis's cable management to keep things neatly tucked away inside the case.  One thing we do have to be careful of is the order of the pins in the connector.  Unfortunately, you cannot always count on fan manufacturers to use a standard color pattern to denote the purpose of each wire.  As you can see in the table below, there's quite a bit of variance in color coding – even within the same manufacturer.  Improperly wiring your fan may prevent the fan from spinning, or could even cause damage to the fan or motherboard.  Make sure you're comfortable with a multimeter and test the motherboard pinout for voltage.  Don't assume!

Fan Wiring Color Codes

Once the proper wiring pattern was established, new fan connector pins were crimped onto the wires and then inserted into a standard 4- or 5-pin fan connector.  The reason either connector will work in this case is that HP uses a 5-position fan header on the motherboard, but one of the pins is unused, and both connectors are keyed such that they can only be inserted one way.  Alternatively, you could sacrifice another fan with a 4- or 5-pin connector, such as the aforementioned AMD CPU fan, and simply solder its connector and wires onto the new fan in the proper order – extending the wiring without having to crimp on new pins.  After the soldering and pinning is complete, be sure to tidy up your wires with some loom or heat shrink to ensure airflow remains optimal in the server chassis.

Ready for action!

Once all four fans were rewired and inserted into their retention brackets as shown above, the fans were inserted back into their respective slots and plugged into the fan headers.  After the initial POST sequence the fans quieted down to a much more manageable level.  A peek in the BIOS shows that the new fans are idling between 3500 and 4000 RPM.

Much quieter now


After several days of monitoring, drive and CPU temps appear to be holding steady within a safe range with our new fans.  With room for 12 x 2TB drives, reliable hardware, and a compact 2U form factor, the HP DL180 G5 makes a great budget storage platform.  Ours is humming happily along with 24TB of raw storage and shows that, with a little ingenuity, a server once suited only for a data center can be right at home in a basement or lab environment.

Clear the Disk Read-Only Flag in Windows

Clear the Disk Read-Only Flag in Windows

While recently adding a new disk to one of our backup servers, one of the existing disks changed device letters in Linux. Ordinarily this is not a big deal, but since this particular disk was an iblock device in an LIO backstore, and was defined by its /dev/sd[x] name, it was no longer listed correctly. Oddly, the disk was still listed in the Disk Manager on the hypervisors, but any attempt at I/O would result in errors. The disk was ultimately removed from the LIO configuration, which then caused the LUN to drop from the hypervisor nodes.

After adding the disk back to LIO using a slicker method as detailed here, the disk reappeared on the hypervisors, and we reconnected the disk to the VM in Hyper-V. However, after adding the storage back, we noticed the LUN from LIO was marked as read-only in the virtual server, and would not permit any writes. Should you run into a similar situation, the fix is usually pretty simple, as noted below.
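One way to guard against this happening again is to define the backstore against a persistent device path rather than the mutable /dev/sd[x] name, since the by-id links survive re-enumeration. A quick sketch with targetcli – note that the backstore name and by-id path below are placeholders for illustration, not our actual configuration:

```
# Find the disk's persistent name (stable across reboots and device re-enumeration)
ls -l /dev/disk/by-id/

# Create the block backstore against the by-id path instead of /dev/sdX
targetcli /backstores/block create name=backup01 dev=/dev/disk/by-id/wwn-0x5000c500a1b2c3d4
```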

First, start the diskpart utility from a Windows CLI and list the available disks:
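From an elevated command prompt, that looks like this (your disk list will, of course, differ):

```
C:\> diskpart

DISKPART> list disk
```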


Next, select the disk in question, in this case Disk 6. Notice that when we look at the disk details in diskpart, this disk is definitely listed as read-only:
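The relevant commands are as follows – substitute your own disk number:

```
DISKPART> select disk 6
DISKPART> detail disk
```

Among the attributes reported by detail disk, look for the "Read-only : Yes" line.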


With the disk still selected, clear the readonly attribute for the disk with the following command:
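```
DISKPART> attributes disk clear readonly
```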


The disk should now be listed as “Read-Only: No,” and available for writing. You can verify its status with the detail command as before.

We’re still not quite sure what caused this little issue, as we’ve removed and added several disks back in LIO without this cropping up. Perhaps it was the less-than-graceful removal of the disk from the hypervisor while it was attempting I/O. Whatever the case, though an old utility, diskpart can still prove to be a useful tool when the need arises.