Configuring Fibre in LIO

Here’s a quick walk-through to get you up and running with Fibre Channel in LIO.

1) Install LIO if not already installed
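On Ubuntu/Debian systems, installing the targetcli shell pulls in the LIO userspace tooling. The exact package name varies by distribution and release (it may be `targetcli` or `targetcli-fb`), so treat this as a sketch:

```shell
# Install the LIO management shell; package name may be
# "targetcli-fb" on newer releases.
sudo apt-get update
sudo apt-get install targetcli
```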

2) Create the qla2xxx.conf file to configure the Fibre Channel HBA for target mode

(the x’s are literally part of the file name, not placeholders for the actual model number)
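The file lives in the modprobe configuration directory; create it with your editor of choice:

```shell
# Create the modprobe config file for the QLogic driver.
# The name is literally "qla2xxx.conf".
sudo nano /etc/modprobe.d/qla2xxx.conf
```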

3) Add the following line to the qla2xxx.conf file
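The line in question disables initiator mode on the QLogic HBA so it can operate as a target:

```shell
# /etc/modprobe.d/qla2xxx.conf
# Run the QLogic HBA in target mode instead of initiator mode.
options qla2xxx qlini_mode="disabled"
```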

4) Save the file and exit

5) Now we must update initramfs with the new changes
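On Ubuntu this is a single command (RHEL-family systems would use `dracut -f` instead):

```shell
# Rebuild the initramfs so the qla2xxx option takes effect at boot.
sudo update-initramfs -u
```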

6) Restart the server to apply changes

7) Launch targetcli to configure LIO
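targetcli needs root privileges, so launch it with sudo; the remaining steps are run from inside its interactive shell:

```shell
sudo targetcli
```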

8) Add the LVM LUNs to LIO (LVM LUNs are added under the iblock option in /backstores)
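From inside targetcli, each logical volume becomes an iblock backstore. The volume group and LV names below are hypothetical placeholders for your own LVM layout:

```shell
# Inside targetcli -- names "lun0" and "/dev/vg_san/lv_lun0" are examples.
/backstores/iblock create name=lun0 dev=/dev/vg_san/lv_lun0
```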

9) Now find the WWN(s) for the Fibre Channel card(s) in the server
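The port WWNs are exposed through sysfs, so this is run from a normal shell rather than targetcli (the value shown is hypothetical):

```shell
# One line of output per FC port.
cat /sys/class/fc_host/host*/port_name
# 0x21000024ff123456   <- example value only
```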

10) Change to the qla2xxx directory

11) Now we need to add the WWN(s) from step 9 into targetcli (Do this step for each WWN that you wish to use)

(fill out the WWN from step 9)
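Back inside targetcli, steps 10 and 11 look roughly like the following; the WWN is a hypothetical stand-in for the value read from sysfs, reformatted with colons:

```shell
# Inside targetcli -- create a target for each local HBA port WWN.
cd /qla2xxx
create 21:00:00:24:ff:12:34:56
```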

12) Change to the WWN that you wish to add storage to

13) Add the storage from step 8 to the WWN that you just changed to.
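Steps 12 and 13 together, again with a hypothetical WWN and the backstore name from step 8:

```shell
# Inside targetcli -- map the iblock backstore to this target port.
cd /qla2xxx/21:00:00:24:ff:12:34:56/luns
create /backstores/iblock/lun0
```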

14) Repeat steps 12 and 13 for each WWN and storage that you wish to use

15) Now change to the acls directory so we can add an ACL allowing the host to talk to the uSAN

16) Now create the ACL for the WWN of the host you are trying to present the LUN to

(This is the WWN on the host)
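Steps 15 and 16 together; note the WWN here is the *initiator* (host-side) WWN, not the target’s, and the value shown is hypothetical:

```shell
# Inside targetcli, from within the target's directory.
cd acls
create 21:00:00:1b:32:aa:bb:cc
```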

17) Review all the configuration changes that were just made
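targetcli can print the whole configuration tree from its root:

```shell
# Inside targetcli -- show the entire configuration tree.
cd /
ls
```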

18) Once all configuration changes have been verified, save the configuration
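Saving persists the configuration so it survives a reboot:

```shell
# Inside targetcli.
saveconfig
exit
```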


A Better Block Device in LIO

If you’ve read our previous articles on LIO, you’ve probably gathered that LIO is one of our favorite Linux utilities. We love the ability to use inexpensive hardware and FC or iSCSI cards to create a rock-solid Linux-based SAN to provide back-end storage for Hyper-V cluster shared volumes, highly-available shared VHDXs, or LUNs for Windows File Servers. We also love the flexibility that Linux MDADM/LVM offers to seamlessly add or expand storage arrays or add new LUNs. It really gives the IT Pro the ability to use many Enterprise features in a home lab that you’d otherwise only be able to replicate with expensive, impractical hardware.

In the end, all this flexibility means we will inevitably tinker with configurations, add and remove hardware, and just generally screw around with things until we break them, then fix them, then break them again. That’s what we do. And, as it so often goes in IT, with any luck we’ll learn a thing or two along the way.

This was exactly the case when we recently expanded one of our backup Ubuntu SANs by adding a new disk. After the new volume was added, it became apparent that the previous method of using the typical Linux device notation for hard disks (/dev/sda, /dev/sdb, etc.) was not an optimal configuration.

Consider the following LIO backstores configuration:
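The original listing didn’t survive, but a representative sketch of an iblock backstore tree built on device letters (names and devices here are illustrative only) looks something like:

```shell
# Inside targetcli -- illustrative output, not the actual system.
/backstores/iblock> ls
o- iblock .................................... [2 Storage Objects]
  o- lun0 ......................... [/dev/sdf (2.0TiB) activated]
  o- lun1 ......................... [/dev/sdg (2.0TiB) activated]
```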

This configuration has worked fine for months. However, after adding the new disk, we quickly realized one of the volumes being presented to a two-node Hyper-V cluster was now listed as “Offline – Not initialized,” and any attempts to bring it online failed with I/O errors.

Looking at the backup uSAN, the disk that was formerly /dev/sdg was now /dev/sdh, and LIO’s ACLs were no longer correct. Though quick and dirty, using the /dev/sdx notation is clearly not the best way to add a single disk to the LIO backstores, since these values are subject to change. Looking in /dev/disk, we see a few different options that may be helpful:
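Listing the directory shows the persistent-naming schemes udev maintains:

```shell
ls /dev/disk
# by-id  by-label  by-partlabel  by-partuuid  by-path  by-uuid
```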

Typically /dev/disk/by-uuid is a good option – we’ve used it in the past for other operations. However, we’re specifically exporting block devices, and by-uuid only shows disks with partitions. /dev/disk/by-label, /dev/disk/by-partlabel, and /dev/disk/by-partuuid are much the same way; not all disks will have a label or partitions to view. /dev/disk/by-path is promising, but only if all the relevant disks are hanging off a SAS controller. Since many lab environments, such as ours, may make use of both on-board SATA headers and PCIe SAS controllers, that only leaves /dev/disk/by-id. Listing disks by /dev/disk/by-id appears a bit messy at first, but if you look carefully you’ll see a neat and tidy way of referencing disks.

Specifically, let’s look at this system’s disk in question, /dev/sdh.
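Filtering the by-id listing down to the device letter shows the symlink; the model and serial number below are hypothetical stand-ins:

```shell
# Find the persistent ID that points at /dev/sdh.
ls -l /dev/disk/by-id/ | grep sdh
# ... ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567 -> ../../sdh  (example)
```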

Very nice — we see a disk ID that not only tells us the make and model of the drive, but also appends the drive’s serial number to the end. This is quite handy in a system with 10 or 20 drives, many of which may be the same model. Now, let’s go back to LIO’s targetcli and try to add the block device using this new identification, rather than the device letter.
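Using the same hypothetical disk ID as above, the backstore creation looks like this:

```shell
# Inside targetcli -- reference the disk by its persistent ID.
/backstores/iblock create name=lun2 \
    dev=/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567
```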

OK. Looks like that works just fine. Now, let’s create the associated LUN.
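Mapping the new backstore to the target follows the same pattern as before (target WWN hypothetical):

```shell
# Inside targetcli -- export the new backstore as a LUN.
cd /qla2xxx/21:00:00:24:ff:12:34:56/luns
create /backstores/iblock/lun2
```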

Again, all appears well. To summarize the final steps, we then added the storage in Failover Cluster Manager, created the Cluster Shared Volume from the new disk, created a VHDX to fill the LUN, and attached the new virtual disk to our virtual machine without issue. Now, when we add or swap drives, change disk controllers, or even completely move disks to a new motherboard/chassis, we no longer have to worry about device letters, as this new (and better) method removes any ambiguity as to which disk is which.