Resizing the Linux Root Partition in a Gen2 Hyper-V VM

Without a doubt, modern virtualization has changed the landscape of enterprise computing forever. Since virtual machines are abstracted away from the physical hardware, changes in compute, memory, and storage resources become mere clicks of a mouse. And, as hypervisors mature, many operations that were once thought of as out-of-band tasks, such as adding storage or even memory, can now be done with little or even zero downtime.

Hyper-V SCSI Disks and Linux

In many cases, hypervisors are backed by large storage area networks (SANs). This provides shared storage for hypervisor nodes that supports failover clustering and high availability. Additionally, it gives administrators the ability to scale the virtual environment, including the ability to easily add or expand storage on existing virtual servers. Microsoft's Hyper-V in Windows Server 2012 R2 introduced Generation 2 VMs, which extended this functionality. Among the many benefits of Gen2 VMs was the ability to boot from a SCSI disk rather than IDE. This requires UEFI rather than a legacy BIOS, so it's only supported among newer operating systems. Many admins I talk to think this is limited to Windows Server 2012 and newer, probably because of the sub-optimal phrasing in the Hyper-V VM creation UI that altogether fails to mention Linux operating systems.

The fact is, however, that many newer Linux OSes also support this ability, as shown in these tables from Microsoft.

More Disk, Please

Once you’ve built a modern Linux VM and you’re booting from synthetic SCSI disks rather than emulated IDE drives, you gain numerous advantages, not the least of which is the ability to resize the OS virtual hard disk (VHDX) on the fly. This is really handy functionality – after all, what sysadmin hasn’t had an OS drive run low on disk space at some point in their career? This is simply done from the virtual machine settings in Hyper-V Manager or Failover Cluster Manager by editing the VHDX.

Now, if you're a Microsoft gal or guy, you already know that what comes next is pretty straightforward. Open the Disk Management MMC, rescan the disks, extend the volume, and voilà, you now automagically have a bigger C:\ drive. But what about for Linux VMs? Though it might be a little less intuitive, we can still accomplish the same goal of expanding the primary OS disk with zero downtime in Linux.

On-the-Fly Resizing

To demonstrate this, let’s start with a vanilla, Hyper-V Generation 2, CentOS 7.6 VM with a 10GB VHDX attached to a SCSI controller in our VM. Let’s also assume we’re using the default LVM partitioning scheme during the CentOS install. Looking at the block devices in Linux, we can see that we have a 10GB disk called sda which has three partitions – sda1, sda2 and sda3. We’re interested in sda3, since that contains our root partition, which is currently 7.8GB, as demonstrated here by the lsblk command.
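
With the default CentOS 7 layout described above, a representative lsblk listing looks roughly like this (sizes are approximate):

  lsblk
  NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  sda               8:0    0   10G  0 disk
  ├─sda1            8:1    0  200M  0 part /boot/efi
  ├─sda2            8:2    0    1G  0 part /boot
  └─sda3            8:3    0  8.8G  0 part
    ├─centos-root 253:0    0  7.8G  0 lvm  /
    └─centos-swap 253:1    0    1G  0 lvm  [SWAP]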

Now let’s take a look at df. Here we can see an XFS filesystem on our 7.8GB partition, /dev/mapper/centos-root which is mounted on root.
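
Something along these lines (your used and available figures will differ):

  df -h /
  Filesystem               Size  Used Avail Use% Mounted on
  /dev/mapper/centos-root  7.8G  1.3G  6.6G  17% /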

Finally, let’s have a look at our LVM summary:
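
The pvs, vgs and lvs commands give a quick summary; with the default install you should see a single physical volume on /dev/sda3, one volume group (centos), and root and swap logical volumes with no free extents:

  pvs    # physical volumes
  vgs    # volume groups
  lvs    # logical volumes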

From this information we can see that there’s currently no room to expand our physical volume or logical volume, as the entirety of /dev/sda is consumed. In the past, with a Gen1 Hyper-V virtual machine, we would have had to shut the VM down and edit the disk, since it used an emulated IDE controller. Now that we have a Gen2 CentOS VM with a SCSI controller, however, we can simply edit the disk on the fly, expanding it to 20GB.

Once the correct virtual disk is located, select the “Expand” option.

Next, provide the size of the new disk. We’ll bump this one to 20GB.

Finally, click "Finish" to resize the disk. This process should be instant for dynamic virtual hard disks, but may take a few seconds to several minutes for fixed virtual hard disks, depending on the size of the expansion and the speed of your storage subsystem. You can then verify the new disk size by inspecting the disk.

OK, so we’ve expanded the VHDX in Hyper-V, but we haven’t done anything to make our VM’s operating system aware of the new space. As seen here with lsblk, the OS is indifferent to the expanded drive.

Taking a look at parted, we again see that our /dev/sda disk is still showing 10.7GB. We need to make the CentOS operating system aware of the new space. A reboot would certainly do this, but we want to perform this entire operation with no downtime.



Issue the following command to rescan the relevant disk – sda in our case. This tells the system to rescan the SCSI bus for changes, and will report the new space to the kernel without a restart.
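
As root, the rescan looks like this (adjust the device name if your disk isn't sda):

  echo 1 > /sys/class/block/sda/device/rescan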

Now, when we look at parted again, we’re prompted to move the GPT table to the back of the disk, since the secondary table is no longer in the proper location after the VHDX expansion. Type “Fix” to correct this, and then once again to edit the GPT to use all the available disk space. Once this is complete, we can see that /dev/sda is now recognized as 20GB, but our sda3 partition is still only 10GB.
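
A sketch of that interaction; the exact wording of the prompts varies by parted version:

  parted /dev/sda print free
  # answer "Fix" when asked to move the backup GPT table to the end of the disk,
  # and "Fix" again to use all of the newly available space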

Next, from the parted CLI, use the resizepart command to grow the partition to the end of the disk.
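
From an interactive parted session, that's something like:

  parted /dev/sda
  (parted) resizepart 3 100%
  (parted) quit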

Our sda3 partition is now using the maximum space available, 20.2GB. The lsblk command also now correctly reports our disk as 20GB.

But what about our LVM volumes? As suspected, our physical volumes, volume groups and logical volumes all remain unchanged.

We need to first tell our pv to expand into the available disk space on the partition. Do this with the pvresize command as follows:
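
Assuming the partition we just grew is /dev/sda3:

  pvresize /dev/sda3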

Sure enough, our pv is now 18.8GB with 10.00GB free. Now we need to extend the logical volume and its associated filesystem into the free pv space. We can do this with a single command:
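
With the default centos volume group name, lvextend's -r flag grows both the logical volume and the XFS filesystem in one shot:

  lvextend -r -l +100%FREE /dev/centos/root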

Looking at our logical volumes confirms that our root lv is now 17.80GB of the 18.80GB total, or exactly 10.0GB larger than we started with, as one would expect to see.

A final confirmation with the df command illustrates that our XFS root filesystem was also resized.

Conclusion

So there you have it. Despite some hearsay to the contrary, modern Linux OSes run just fine as Gen2 VMs on Hyper-V. Coupled with a SCSI disk controller for the OS VHDX, this yields the advantage of zero-downtime root partition resizing in Linux, though it’s admittedly a few more steps than a Windows server requires. And though Linux on Hyper-V might not seem like the most intuitive choice to some sysadmins, Hyper-V has matured significantly over the past several releases and is quite a powerful and stable platform for both Linux and Windows. And one last thing – when you run critically low on disk space on Linux, don’t forget to check those reserved blocks for a quick fix!

Joining a Windows Domain with Centrify Express

As you’ve seen us mention in our Linux File Servers in a Windows Domain article, Linux systems have become an omnipresent fixture of the IT landscape, even in companies that are heavily invested in Windows infrastructure. But one challenge to managing such infrastructure diversity is maintaining standardization among disparate systems. Ask any IT Admin who’s managed Linux systems over the last 20 years and you’ll likely hear numerous stories about rogue servers, shared accounts, a lack of password complexity enforcement, and a plain lack of standardization. Things have changed a lot over the past decade, however. With a general shift to virtualization, as well as provisioning tools from Ubuntu and Red Hat, and third-party tools like Ansible, system standardization on Linux has become easier than ever.

Yet another complication of a diverse environment is identity management. Especially with the ever-looming threat of virtual machine sprawl, managing disparate systems and their respective logins can be tiresome at best, and a downright security risk at worst. Thus, when considering systems and applications, a good engineer should always ask questions about identity management and integration with Active Directory, LDAP, or other directory service. Along the way Samba has offered some ability to integrate Linux systems into Active Directory, but it wasn’t always easy to implement, and identity management wasn’t Samba’s primary focus. Today, however, there are now dedicated tools to manage identities across numerous systems, including Linux, UNIX, Macs, and beyond.

One such tool that simplifies and unifies identity management across multiple platforms is Centrify. Centrify was founded in 2004 and offers software designed to thwart the number one point of entry in a data breach – compromised credentials. To say it’s a trusted platform would be an understatement – Centrify claims over half of the Fortune 100 trusts some form of their identity and access management to Centrify.

Here at Teknophiles, we look for a couple of things in a software package for use in our lab:

  1. Does the software offer Enterprise-level performance and functionality?
  2. Does the vendor provide IT professionals with an inexpensive (or free) means to test or use the software, at least in some limited capacity?

We're not big fans of 30-day trials at Teknophiles, because as any IT Pro knows, it's really tough to dig into a software package and learn the ins and outs in such a limited time frame, especially with a day job. And in a lab environment, it's almost mandatory to be able to leave a piece of hardware or software in place for an extended length of time, so that it can be tested in future configurations and scenarios (Take note, Microsoft, Re: TechNet!). We also understand that software companies are in business to make money, which is why we like the "Free, but limited" model. In the "Free, but limited" approach, software may be limited to a certain scope of install, number of nodes, or reduced feature set. There are lots of great examples that follow this model – Nagios Core, Thycotic Secret Server, Sophos UTM, among others, and Centrify is no exception. To get the full feature set, one must upgrade to the licensed version. Typically, there's an easy upgrade path to the licensed version, which gives IT Admins the added confidence that they can stand up a piece of software and, if they and their superiors decide it's a good fit, simply upgrade without standing up a completely new system.

I happen to use Centrify daily at my full-time job. It's become instrumental in managing and standardizing access to the numerous Oracle Linux and RHEL systems we deploy. Given my experience with Centrify in the Enterprise environment, I was delighted to learn that they offer a limited package, called Centrify Express, that allows for installation on up to 200 servers, albeit with a reduced feature set. Centrify publishes a brief feature comparison between Centrify Infrastructure Services and Centrify Express ("Reasons to Upgrade").

Before You Start

In this article, we’ll briefly cover the installation of the Centrify Express Agent on CentOS 7. Before we dig in, however, there are a couple of things you need to make sure you have in order beforehand.

First, it probably goes without saying, but you need a working Active Directory environment with at least one functioning Domain Controller. You also need the credentials for a user who has the ability to add systems to the domain. It’s probably easiest if you use an account that is a Domain Admin or has similarly delegated permissions within Active Directory.

Second, as we’ve mentioned previously, DNS is a critical component of a functional Active Directory implementation. Be sure you have good FQDN (server.domain.com) resolution between the Domain Controllers and the system you wish to become a domain member, as well as name resolution from the Linux member server to the Domain Controllers (DC01.domain.com, DC02.domain.com, etc.)

Finally, give yourself at least one local Linux account with sudo access that is NOT the same as any AD account. Should you at some point find yourself unable to log into the server via Active Directory, you can use this account as a fallback.

WARNING

If the local account has the same name as an AD account, the system will assume you are trying to use the AD account and you may be unable to log in. If this is the only local account with sudo access, you could find yourself without the ability to administer the server!

With that out of the way, let’s move on to the fun parts.



Staging the Centrify Express Installation Files

Begin by downloading Centrify Express for the proper flavor of Linux here. You’ll have to fill out a short form to gain access to the installers, but it’s free and there’s no commitment. In fact, unlike some other software vendors, I haven’t been plagued by sales calls, either.

Once you have the proper installer downloaded, simply SFTP the file to the server you wish to become a domain member. On the server, create a folder to extract the installation files.

Next, extract the installation files as follows.
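
Assuming the bundle landed in your home directory, the staging steps look something like this (the tarball name will vary by Centrify version and Linux distro):

  mkdir ~/centrify && cd ~/centrify
  tar -xzf ~/centrify-express-*-linux64.tgz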

Now, simply begin the installation by executing the install.sh script.
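
If you extracted into the directory above, that's simply:

  sudo ./install.sh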

Select X for the Centrify Express installation.

Select Y to continue to install in Express mode.

Select Y to verify the AD environment. This is a good idea, as it will flag any major issues prior to attempting the domain join.

Enter the domain you wish to join.

Confirm you want to join an Active Directory Domain by selecting Y.

Again, enter the name of the domain you wish to join.

Next, enter the username and password of an admin with permission to join systems to the domain. It doesn't have to be a Domain Admin, but using a Domain Admin account may simplify permissions troubleshooting:

Verify the computer name and the container DN within Active Directory. In most cases the Computers container will suffice, as you can always move the machine later for organizational purposes. If you choose, you can also specify a different container, as shown here.

Now, enter the name of the Domain Controller you wish to use to join the domain. Typically this can be left as auto detect, but you can also specify a DC.

Choose whether you want the system to reboot upon completion of the install and domain join. Though this is not required, other services may need to be restarted for full integration.

Finally, confirm the options you provided and select Y to proceed.

View the output of the installation summary and resolve any issues. In this case you can see we had one warning regarding SSH configuration. Any issues may need to be addressed for complete integration.

Before logging out as root, be sure to add the Domain Admins group, or another AD group of Linux administrators, to the sudoers file by adding the following line.
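
For example, run visudo and add a line along these lines (note the escaped space in the group name; adjust the group to suit your environment):

  %domain\ admins ALL=(ALL) ALL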

You should now be able to log in via a domain account. Use only the user’s shortname (not FQDN).

You can quickly confirm that your AD account is working properly by viewing your user’s domain group memberships.
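
The id command is a quick check; the output should list the user's AD groups alongside the mapped UID and GID (the username here is hypothetical):

  id jsmith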

That's it for the install! We hope to provide additional Centrify walk-throughs in a future article (Centrify Samba/PuTTY), but this should quickly get you up and running with single sign-on for your domain-integrated Linux servers. As you can see, Centrify provides a neat and tidy package to manage identities across multiple server platforms in a Windows Domain. It's rock-solid, and right at home in a small home lab or large Enterprise. We've been using it for years now without issue in both environments, and hope you'll find it as useful as we have.

Linux File Servers in a Windows Domain

Let’s face it: Though Linux is experiencing a bit of a renaissance lately, it’s still a Windows world out there. This seems to be especially true in the Enterprise. Users love their Windows. They love their Start Menu, their Task Bar, their Internet Explorer, their My Documents folder and all the problems and pain that go along with it. Don’t get me wrong, I like Windows, too.  And in fact, when it comes to my own use, I’m not much different than the users I’ve supported for the last 20 years. Sure, I try to strike a balance between Linux and Windows for my daily-use machines, and I’m a huge Linux advocate. But ultimately, I find myself gravitating back to Windows for most of my daily tasks, even for email and general web browsing, and certainly for music and entertainment.

But in many Enterprises, Windows’ roots run even deeper. With Exchange Server, SharePoint, Lync (now Skype for Business), MSSQL, System Center and a myriad of other offerings, Microsoft’s server solutions just make sense for many IT shops which have hundreds, even thousands, of desktops and laptops running Windows. Perhaps most ubiquitous of all, however, is the Windows Domain Controller. Microsoft’s Active Directory seems to be the go-to product for authentication and policy management in the Enterprise. In fact, it’s routinely stated that 95% of Fortune 500 companies use Active Directory.  And though the push to, “the cloud,” has some folks decrying the inflexibility of Active Directory as more and more services are moved off-premise, it seems pretty safe to say we won’t see the demise of AD any time soon.

With all this said, according to Red Hat, by 2013 over 90% of Fortune 500 companies relied on Linux in some capacity.  It seems pretty clear that it’s a good bet to have both Windows and Linux skill sets in today’s technology landscape. It also shouldn’t come as much surprise that integrating non-Windows systems into a Windows Domain is big business these days. In this article we hope to demonstrate an example of just that – how to integrate a Linux server into a Windows domain as a file server for Windows clients.

Jump To:

Samba
Why Linux?
Prerequisites
Webmin
IP Address Configuration
DNS Configuration
Installing Samba Components
Samba and Kerberos Configuration
Samba Winbind Options
Joining the Windows Domain
Domain Users & Groups
Extended ACLs
Creating Samba File Shares
Accessing the Share
Configuring AD User Shells & Home Dirs
Conclusion

Enter Samba

For nearly 25 years, Samba has been providing interoperability between Linux/Unix and Windows. Samba allows Linux or Unix-like systems to become member servers in a Windows domain. And though it's beyond the scope of this article, newer versions of Samba will even allow a Linux/Unix server to act as a domain controller.  In turn, Samba facilitates communication between Windows systems and a Linux/Unix server over the Server Message Block (SMB)/Common Internet File System (CIFS) protocol. In essence, your Windows machine will talk to the Samba server just as though it's a Windows file/print server.

Why Linux?

So if we're essentially emulating a Windows server with Linux, why not just use a Windows operating system?  Well, there are a few scenarios where this configuration may make sense:

  1. Application Compatibility.  You may have an application that runs on Linux or Unix only, but you still need the connectivity that Samba provides.  In this case, you can still run your native Linux application, but allow your Windows clients to access file shares on the server.
  2. Licensing.  Licensing costs for Windows Server may be another factor. For some SMBs, an additional Windows license for a file/print server may be prohibitively expensive.  Many flavors of Linux, on the other hand, are free.
  3. Hardware.  Windows generally requires beefier hardware than Linux.  Even an old desktop in the basement can make a fine home or lab file server.
  4. Software RAID. For others, Linux offers the unique ability to inexpensively provide something that many folks do not trust Windows to do – software RAID. Linux software RAID (aka MDADM) doesn’t have the strict requirements of hardware RAID controllers, and many times can be done less expensively. It also is mostly hardware agnostic. You can typically lift a Linux MDADM RAID array from one box and drop it in another, assemble the RAID array, and find the data intact.  Since hardware RAID controllers are often tied to their disks via specific metadata, this is critical for the hobbyist or home lab, where you may not have an endless supply of identical controllers should one fail.  We’ve even done this with name-brand SOHO NAS appliances which had kernel failures and rescued terabytes of data for a client!

Prerequisites

There are a few things you need to have working before this exercise, however.  Make sure you are at least somewhat familiar with the technologies mentioned.

  1. Domain Controller/Active Directory.  You need to have a working domain controller running Active Directory.  We've only tested this configuration on a Windows domain controller, but a Samba 4.0 or newer domain controller emulates this functionality as well.  We've done this on everything from Windows Server 2008 through Windows Server 2012 R2, so any recent Windows Server should work just fine.
  2. DNS Server.  You also need a properly functioning DNS server, preferably Active Directory integrated.  DNS is critical to the domain join process, so make sure your DNS server(s) are working properly – more on this later.
  3. Linux Server.  This is the member server that is to join the domain.  It doesn’t need anything fancy for this exercise, but must be able to communicate with your domain controller/DNS server.  It can even be a virtual machine for proof of concept, although most home/lab file servers will likely be physical machines.
  4. Windows Domain Member.  This machine can be a desktop, laptop, or virtual machine, as long as it’s joined to the domain and can reach the three servers listed above over the network.

Webmin

Now, we like to perform most operations from the command-line in Linux, as many GUIs typically aren’t very mature in Linux or don’t offer the same functionality as the CLI.  In addition, the CLI gives you an intimate view of applications and configurations you just don’t get from a GUI.  On the other hand, when it makes sense to use a GUI to do something, we won’t shy away from it just to impress our friends.  Webmin is a perfect example of this.  Webmin offers a web-based interface for completing many Linux/Unix administration tasks.  Available for most distributions, it simplifies many operations, eliminating the need to manually edit configuration files.   Joining a Linux server to a Windows domain is one area we like to use Webmin, so our first task will be to install Webmin on our Linux server.  We’ll use Webmin for much of this walk-through, but also show the configuration changes in the file system when possible, so you can become familiar with the underlying files that are affected.

Our Linux server in this case is Ubuntu 14.04, so you’ll see some specificity to Ubuntu, such as using aptitude for package installation.  Most other distros should work much the same as what’s shown in this guide, but obviously some commands and steps will have to be altered.

The simplest way to install Webmin is to download the bits and use the Debian package manager to perform the installation.  First, install any necessary dependencies for Webmin:
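
On Ubuntu, the dependencies called out in the Webmin documentation can be installed in one shot; a sketch:

  sudo aptitude install perl libnet-ssleay-perl openssl libauthen-pam-perl \
       libpam-runtime libio-pty-perl apt-show-versions python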

Once any dependency issues are resolved, find the latest version of Webmin here and download it using wget.  We want the Debian version that is offered:
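
Substitute the current version number for the one shown here:

  wget http://prdownloads.sourceforge.net/webadmin/webmin_1.850_all.deb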

Finally, install Webmin using the Debian package manager:
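
Again, substitute the version you actually downloaded:

  sudo dpkg -i webmin_1.850_all.deb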

Once installed, you should find Webmin listening on port 10000.  Keep in mind that distros which have a firewall like iptables enabled by default may need firewall rule modifications to allow access to Webmin.
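
If you happen to be running ufw, for instance, opening the port is a one-liner (iptables users would add an equivalent ACCEPT rule):

  sudo ufw allow 10000/tcp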

From your Windows workstation, you should now be able to log into Webmin by browsing to https://servername:10000, where servername is the host name of your new Linux server.  If you cannot resolve it by hostname, IP Address should work as well.  Keep in mind, however, that name resolution of your Linux server will need to work at some point.

IP Address Configuration

It's best to assign your server a static IP, so you won't run into any stale DNS record issues.  DHCP may work, at least in the short term, but future problems could arise.  Although IP Address changes can be made via Webmin, this is one area where it's usually best to be in front of the console.  I also find this configuration change to be easier and quicker from the command line.  Log into the server and edit the interface configuration as follows, substituting the IP and domain information for your network.
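
On Ubuntu 14.04 that means /etc/network/interfaces; a sketch with placeholder addresses (adjust the interface name, IP, gateway, DNS servers and search domain for your network):

  sudo nano /etc/network/interfaces

  auto eth0
  iface eth0 inet static
      address 192.168.1.50
      netmask 255.255.255.0
      gateway 192.168.1.1
      dns-nameservers 192.168.1.10 192.168.1.11
      dns-search domain.com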

Once you’ve saved the configuration file, bounce the network adapter.
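
Assuming the eth0 interface from the sketch above:

  sudo ifdown eth0 && sudo ifup eth0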

 


Now log into Webmin on your Linux server and verify the changes that were just made.  From the left-hand navigation menu, expand, “Networking,” and click on, “Network Configuration.”  The Network Configuration module contains all the settings related to interface configuration, routing and gateways and DNS and hostnames.

 


Select the Network Interfaces link.  Here we should see both the active and at-boot configurations for our Ethernet adapter with the static IP Address we just assigned.  You can also verify the default gateway and DNS server settings for your server here.

 

 

DNS Configuration

As mentioned before, DNS is critical for the domain-join process.  Your Linux server relies on name resolution to locate the domain controller and begin authentication.  We set the preferred DNS servers in the previous step, so the Linux server should be able to resolve the domain controller(s) on your network now.  Test name resolution by running a simple ping test from the Linux server to the domain controller.
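
Using the example names from earlier in this article:

  ping -c 4 dc01.domain.com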

 


Now we’ll add a hosts file entry for the loopback adapter on the Linux server.  In the Networking > Network Configuration module, select the Hosts Addresses configuration.  Click on the entry for 127.0.0.1 that lists the server’s hostname.  Add the FQDN of the Linux server as the first entry in the list as shown.
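
The resulting /etc/hosts entry should end up looking something like this, with the server's FQDN listed first (the hostname here is illustrative):

  127.0.0.1    fileserver.domain.com fileserver localhost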

 

Again, test the settings by running a simple ping test from the Linux server.  You should see replies containing the FQDN.

We also need to make sure we have name resolution in the other direction.  Since we set a static IP address, we will likely need to create an A-record on the DNS server for the Linux box.  Once the A-record is created, ensure that you have name resolution to the Linux server from both your domain controller and your Windows client.



Installing Samba, Winbind & Kerberos for Authentication

Next, use aptitude to install samba and winbind.  These components will allow you to communicate with the domain controller and use Windows-based accounts in a Linux or Unix environment.
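
On our Ubuntu server that's simply:

  sudo aptitude install samba winbind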

Next, install Kerberos.  Kerberos was developed at the Massachusetts Institute of Technology as a means of providing mutual authentication.  All versions of Windows since Windows 2000 use Kerberos as their default authentication mechanism, and it is thus necessary for our Linux server to provide authentication in a Windows domain.
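
The client packages on Ubuntu are typically the following (package names may differ slightly on other distros):

  sudo aptitude install krb5-user krb5-config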

Samba and Kerberos Configuration

Back in Webmin, refresh the modules to display the newly installed applications.  You should now see Samba listed under Servers, and Kerberos5 listed under Networking.  First, click on Kerberos5.  Here, provide the following information based on your environment.

  • Realm:  Your domain name – IN ALL CAPS
  • Domain name:  Your domain name – in all lowercase
  • Default domain name:  Your domain name – in all lowercase
  • Use DNS to lookup KDC: Select Yes
  • KDC: FQDN of your domain controller – use port 88 unless you know otherwise
  • Admin server: FQDN of your domain controller – use port 88 unless you know otherwise
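
Under the hood these values land in /etc/krb5.conf; a minimal sketch for a hypothetical domain.com realm looks roughly like this:

  [libdefaults]
      default_realm = DOMAIN.COM
      dns_lookup_kdc = true

  [realms]
      DOMAIN.COM = {
          kdc = dc01.domain.com:88
          admin_server = dc01.domain.com:88
      }

  [domain_realm]
      .domain.com = DOMAIN.COM
      domain.com = DOMAIN.COM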

 

Next, click on the Samba Windows File Sharing under Servers and click on the Windows Networking icon.  Again, provide the following information based on your environment.

  • Workgroup: Pre-Windows 2000 (short) domain name – in all lowercase
  • WINS mode:  Use Server – IP of your domain controller
  • Server description: %h server (something descriptive)
  • Master browser priority: 20
  • Highest protocol: default
  • Master browser: Yes
  • Security: Active Directory
  • Password server: FQDN of your domain controller

 

Samba Winbind Options


Finally, click on Winbind Options in the Samba module.  Select the options here, and click save.  Frustratingly, the options here seem to have inconsistent results in the configuration file, so we'll need to verify them in the config.  On the server, back up and then edit the smb.conf file as follows.  You'll notice a number of the other changes we've made have been stored here.
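
Taking a backup first:

  sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.bak
  sudo nano /etc/samba/smb.conf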

 

 

Locate the [global] section and edit as follows.  Comment out the following two lines if present:

Now add these lines to the end of the global section if they do not exist:
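
The exact lines will depend on your environment, but a typical winbind block for this kind of setup looks something like the following (the 10000 starting range matches the domain IDs we'll see shortly, and the template homedir matches the per-domain home directories we create later):

  winbind enum users = yes
  winbind enum groups = yes
  winbind use default domain = yes
  idmap config * : backend = tdb
  idmap config * : range = 10000-99999
  template shell = /bin/bash
  template homedir = /home/%D/%U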

Interestingly, you can see the options reflected in the Winbind Options in the Webmin Samba module.  If you look closely in the Webmin UI, the setting, "Disallow listing of users/groups?" is clearly set to "Yes."  However, we've just set the winbind enumeration to "yes" in the smb.conf file. These settings appear to be contradictory, and you can have strange results if you make changes in the GUI after effecting the changes in the config.  Once things are working, it's best not to make any additional changes to the Samba Winbind options in Webmin.

Joining the Windows Domain

We’re finally ready to join the Windows domain now.  Issue the following command, where the user, “username” is a domain user that has the permissions necessary to join computers to the domain. It’s always best to use an account with the least amount of privileges to perform an action, but if you are in doubt or if you encounter errors, use a Domain Admin account to rule out permissions issues.  If you’ve carefully applied the settings, however, and DNS is working properly, you should achieve success here and see the new computer account in Active Directory.
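
Reconstructed from the breakdown that follows, the join command looks something like this (the OU path, OS name and version strings are illustrative):

  sudo net ads join -S dc01.domain.com -U username -k \
       createcomputer="Servers/Linux" osName="Ubuntu" osVer="14.04"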

Before we move on, let’s break this command down a bit.  The net commands are useful tools for managing Samba/CIFS on your domain-joined Linux server.  In this instance, the ‘net ads join’ command tells Samba that we’re working with the AD command set, hence the ‘ads’ component, while the ‘join’ directive tells Samba that we want to join an Active Directory domain.  The next three options are not specific to Active Directory, but modify the ‘net’ portion of the command.  The -S option specifies the target server (Domain Controller) and the -U specifies the username of the user to use for the domain join.  As mentioned above, this user must have the necessary rights to create objects in AD.  The -k option states that we wish to use Kerberos as the authentication mechanism.


The final options, createcomputer, osName and osVer are not required, although they do add some useful features.  First, ‘createcomputer’ creates the new computer account in a specific OU within AD.  This can be handy if you want to keep your Windows and Linux servers separated for policy or organization purposes.  The ‘osName’ and ‘osVer’ options are pretty self-explanatory, but if you like things neatly documented, this will prepopulate the Name and Version fields for the new computer object in AD.

In addition to joining a domain, you can leave a domain, view logon server info, query domain users and groups, and even dynamically update Active Directory integrated DNS records.  The full list of net ads commands can be viewed by simply typing 'net ads.'

Domain Users & Groups

Next, we need to configure our Linux server to look to the domain controller for users and group authentication.  To do this, we need to simply edit the nsswitch.conf file.  For the passwd and group directives, simply add “winbind” after the compat parameter on each line.  After saving the file, restart all relevant daemons.
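
The relevant nsswitch.conf lines end up like this, followed by a bounce of the affected services:

  # /etc/nsswitch.conf
  passwd:    compat winbind
  group:     compat winbind

  sudo service winbind restart
  sudo service smbd restart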

We can now verify the configuration as follows.  The wbinfo command lets us know that Winbind is successfully working and we're able to connect to the DC to enumerate users and groups.
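
A few quick wbinfo checks:

  wbinfo -u    # list domain users
  wbinfo -g    # list domain groups
  wbinfo -t    # verify the trust secret with the domain controller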

And now check to verify that the passwd and group databases on the Linux server are populated with the domain users and groups.  The output has been abbreviated a bit, but notice that after the usual passwd file entries, we see our domain accounts beginning with the id 10000.
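
getent queries both the local files and winbind, so the domain entries appear after the local ones:

  getent passwd
  getent group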

Extended ACLs

One thing that’s important to keep in mind when we’re talking about Windows file shares, is that permissions, or access control lists (ACLs) are a crucial component to ensure users can see the files they should, but are restricted from those they shouldn’t.  In the Windows world permissions are further divided into two components – share permissions and file system permissions.  Without both properly set, users may experience issues with access.

First, let's discuss file system permissions in our Linux-in-a-Windows-domain environment that we've created.  Traditional Unix permissions aren't much good to us if we want our new Linux file server to work like Windows, as we would be limited to a single user and group on each directory or file.  You certainly may have a situation where you'd want both Accounting and Finance to have read-write access to a directory, but perhaps HR to only have read access to that same directory.  Enter Linux Extended ACLs.  We dig into the Extended ACL package in detail here, but suffice to say that Extended ACLs are the icing on the metaphorical Linux file server cake.  Extended ACLs give us more Windows NTFS-like permissions; without them much of the power of Linux domain integration is lost.  To see this in action we need to install the acl package with the following command.
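
On Ubuntu:

  sudo aptitude install acl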

We’ll also need a directory to share out so let’s assume we have an empty 5 GB partition to work with.  First, we need to create an EXT4 file system on the partition as shown below.
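
Assuming the new partition is /dev/sdb1 (the device we reference below when editing fstab):

  sudo mkfs.ext4 /dev/sdb1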

Now we need to mount our partition on the server.  First, create the directory to hold our shares, and a subdirectory in which we want to mount our partition.
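
The path here is just an example; we'll use /shares/files throughout the rest of this walk-through:

  sudo mkdir -p /shares/files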

Next, edit the fstab file to auto mount our new partition to ensure it persists after a reboot.  I prefer to do this by using the disk UUID rather than the device letter and partition number (i.e. sda1, sda2, sdb1, etc.), as device letters may change if disks are swapped around on a SATA or SAS controller, a new controller or disk enclosure is added, or if disks are moved to a different system.  Disk UUIDs are easily determined by listing the devices as shown.  Locate the disk UUID for /dev/sdb1 and use the unique identifier in the fstab file.  Note also that our disk is to be mounted with the acl option.  This enables us to use the extended ACLs package we just installed.
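
blkid lists the UUIDs; the fstab line then looks something like this (the UUID shown is illustrative):

  sudo blkid

  # /etc/fstab
  UUID=2f6a1b3c-8d4e-4c19-9a7b-5e2d1c3b4a5f  /shares/files  ext4  defaults,acl  0  2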

Finally, mount the partition from fstab.  We can then easily verify the newly available space by taking a quick peek at the disk file systems with the df command.
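
Mounting everything listed in fstab and checking the result:

  sudo mount -a
  df -h /shares/files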

Now that we have a place to share our files, let’s modify the traditional Unix permission set on the shares directory, but leverage the domain groups we now have available.
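
With the hypothetical /shares/files path and a short domain name of DOMAIN, the three commands described below look something like this (the exact mode you choose is up to you):

  sudo chown DOMAIN\\shareadmin /shares/files
  sudo chgrp "DOMAIN\\share admins" /shares/files
  sudo chmod 770 /shares/files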

Note the double backslash when setting permissions.  To make the Windows users and groups work, we must escape the backslash that typically separates the domain\user and domain\group since it’s a special character in Linux.  The first command sets the owner to a domain user called shareadmin; the second command sets the group to a domain group called share admins.  Finally, the last command sets the traditional POSIX rw- permissions.  So, there’s not much new here, but we can start to see the additional flexibility our AD integrated server offers.

Next, let’s consider the same directory called files, but suppose we want further granularity than just the owner and group permissions.  This is where the extended ACL commands become quite powerful.  To first take a look at any ACLs that exist on this directory, we’ll use the getfacl command.  Getfacl will not only show us the traditional UNIX permissions, but also any additional ACLs applied to the file or directory.  Again, not much to see here yet, but this will start to take shape soon.
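
A quick look at the directory as it stands:

  getfacl /shares/files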

The setfacl command allows us to set ACLs on a file or directory, separate from the traditional UNIX permissions set above. The setfacl -m parameter specifies that we want to modify the ACL, and the u: or g: parameter indicates whether we're modifying a user or group permission.  Additionally, the -d parameter, along with the 'chmod g+s' command, gives us the ability to set default ACLs on the directory, so that new files and subdirectories inherit the parent ACL.
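
A sketch of the defaults described below, again using a hypothetical DOMAIN short name and our /shares/files directory:

  sudo chmod g+s /shares/files
  sudo setfacl -d -m u::rwx /shares/files
  sudo setfacl -d -m g::rwx /shares/files
  sudo setfacl -d -m 'g:DOMAIN\domain admins:rwx' /shares/files
  sudo setfacl -d -m 'g:DOMAIN\share admins:rwx' /shares/files
  sudo setfacl -d -m 'g:DOMAIN\backup admins:rx' /shares/files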

So in the above example, we’re turning on inheritance, setting the default permissions for the default user and group, and we’re also assigning three separate group default permissions to this directory.  The first two groups, Domain Admins and Share Admins both have read/write/execute, while the third group, Backup Admins, has read and execute only.  Now taking a look at getfacl again on this directory, we can see a clear difference from our vanilla directory:

Finally, we want to grant explicit ACLs on the parent folder – remember the previous ACLs we assigned were only defaults.  These commands look similar to the default ACLs, less the -d parameter.
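
The same three groups, this time as explicit entries:

  sudo setfacl -m 'g:DOMAIN\domain admins:rwx' /shares/files
  sudo setfacl -m 'g:DOMAIN\share admins:rwx' /shares/files
  sudo setfacl -m 'g:DOMAIN\backup admins:rx' /shares/files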

Now we have a full set of permissions, and any new subdirectories will inherit these permissions as well.



Creating Samba File Shares

Now for the part we all came for – creating the file shares.  Again, this is one of those operations that's just plain easier to manage in Webmin.  In the Samba module, click on the "Create a new file share" link.  Here, provide the basic share information.

  • Share name:  Something logical but succinct, such as Music or Pictures
  • Directory to share:  The directory on the Linux server that contains the files we want to share out
  • Available:  Yes
  • Browseable:  Yes (No, if you want the share to be hidden)
  • Comment:  Not required, but can be a longer description of the share contents

Once done, click the Create button to commit the settings.  You should now see the share in the Samba share list.  Click on the new share name in the list and click the ‘Security and Access Control’ link. Recall before we said that file server permissions were comprised of two components – file system and share permissions.  We’ve configured the file system permissions with Linux Extended ACLs, but here we’ll set the share permissions.

On the Edit Security page, provide the information for share permissions.  We will use the same groups we discussed in the setfacl examples.

  • Writable:  Yes
  • Guest Access:  None
  • Limit to possible list?  No
  • Hosts to allow:  All (unless you choose to restrict access by host)
  • Hosts to deny:  None (unless you choose to restrict access by host)
  • Revalidate users?  No
  • Valid groups:  “domain\share admins” “domain\domain admins” “domain\backup admins”
  • Read only groups:  “domain\backup admins”
  • Read/write groups:  “domain\share admins” “domain\domain admins”

Click the Save button when complete.  Regarding the group information, be sure to provide this information as shown here – each entry should be enclosed in quotes, with a single slash between domain and group, and the list should be delimited by a single space.

Here’s what this new share looks like in the smb.conf file:
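
A rough sketch of the stanza Webmin writes for a share named Files on our hypothetical path (the exact parameters Webmin emits may differ slightly):

  [Files]
      comment = General file share
      path = /shares/files
      available = yes
      browseable = yes
      writable = yes
      valid users = "@DOMAIN\share admins" "@DOMAIN\domain admins" "@DOMAIN\backup admins"
      read list = "@DOMAIN\backup admins"
      write list = "@DOMAIN\share admins" "@DOMAIN\domain admins"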

Finally, restart the samba daemons to fully implement the share.
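
On Ubuntu 14.04:

  sudo service smbd restart
  sudo service nmbd restart
  sudo service winbind restart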

Accessing the Share


Now, from our Windows client, we should be able to access our new share.  First, ensure you’re logged into Windows as a user that is in one of the groups we assigned to the share.  Then, from the run line, simply type \\servername.

You should now see a familiar Windows Explorer window and you should see the new file share.  You should also be able to create, copy or move files and folders to the new share.  Try this by creating a folder called ‘Dir1’.  If we then take a look at Dir1 with getfacl, we should see a pattern similar to our previous examples.  Note that the only exception is that the owner is the user who created the file, in this case user1.


The beauty of this configuration is that we can now manage files and subdirectories from Windows, using the familiar right-click > Properties context menu.   As a final test, look at the properties for Dir1 from your Windows client.  On the security tab, click the ‘Edit’ button to change permissions.  Highlight Backup Admins in the list of group or user names and check the box for Write permissions under the Allow column.  Click, ‘OK’ and ‘OK’ again to close the dialogue boxes.

Now let’s look at Dir1 again with getfacl.  Note that the Backup Admins group now has rwx permissions.

Configuring AD User Shells & Home Dirs

As a final exercise, you can also configure your domain-joined Linux server to leverage Samba for single sign-on, so Active Directory users may log into the Linux file server, using Kerberos authentication.  First, to automatically have home directories created for domain users upon login, create the following directory.  This folder will house the home folders for domain users, keeping them separate from any Unix users, and avoiding any naming collisions.
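
Assuming the /home/%D/%U template homedir sketched earlier, create the per-domain parent directory (substitute your short domain name):

  sudo mkdir /home/DOMAIN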

Add the following line to the PAM common-session file.
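
The standard pam_mkhomedir line does the trick; add it to /etc/pam.d/common-session:

  session required pam_mkhomedir.so skel=/etc/skel/ umask=0022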

Now add the Domain Admins group to the sudoers file so that any Domain Admins will have sudo capabilities upon login.  Additionally, set the group_source to dynamic in the sudo.conf file.  This will allow any member of the Domain Admins group to also manage Webmin.
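
A sketch of both changes (edit sudoers with visudo, and adjust the group name to your environment):

  # /etc/sudoers
  %domain\ admins ALL=(ALL) ALL

  # /etc/sudo.conf
  Set group_source dynamic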

Finally restart the samba, winbind, and webmin daemons to enable these settings.
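
On our Ubuntu server:

  sudo service smbd restart
  sudo service winbind restart
  sudo service webmin restart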

Conclusion

Though not without a few quirks, a Windows domain-integrated Linux file server is a great alternative for those environments in which running a Windows file server doesn’t quite fit the bill.  Linux file servers are flexible, can be relatively inexpensive, and can give you excellent performance and reliability when properly configured. This walk-through hopefully gives you the necessary information to make Linux work nearly seamlessly for you and your users in your Windows domain.

Linux Permissions and Extended ACLs

At Teknophiles, we love the speed and flexibility of Linux.  We also love Linux servers’ streamlined and utilitarian nature.  When coming from a Windows background, however, you may find some areas where you miss the ease of management the Windows operating system offers.  One aspect that we typically find somewhat lacking in Linux is the out-of-the-box permission management in most file systems.

Traditional Unix Permissions

While simple and straightforward, traditional Unix permissions leave a bit to be desired at times.  In this model, each file or directory’s permissions are broken into three broad categories – user (owner), group, and other (world).  The owner is assigned one set of permissions, while the group is assigned a different set of permissions.  All users that are not either the owner or a member of the specified group are covered by the “other” permissions.  Permissions in their most basic form are defined as read (r), write (w), execute (x), or any combination of these three.  For example, if the owner of a file has rwx permissions, they are said to have read, write, and execute permissions.  If the group has r-x permissions they are said to have read and execute only.

A file or directory’s permissions can be neatly displayed by listing the contents of a directory in Linux using the ls command.
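
For the files/ directory used in the examples that follow, the listing looks something like this:

  ls -l
  drwxrwx--- 2 user1 group1 4096 Oct 17 09:45 files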

This output can be divided into 8 columns as shown here:


Each column represents the following:

  1. Directory (d) or regular file (-) notation
  2. The Unix permission set
  3. The number of links (hard links for files, sub-directories for directories)
  4. The user (owner)
  5. The group
  6. The file or directory size in bytes
  7. The date/timestamp
  8. The file or directory name

The Unix permissions (2) can be further broken down:

  • Positions 1-3 – User (Owner) Permissions
  • Positions 4-6 – Group Permissions
  • Positions 7-9 – Other (World) Permissions

Using our example above, we can see that files/ is a directory, and the owner of the directory is user1 and the group is group1.  The directory (not its contents) is 4KB in size and was last modified on October 17th of this year.  Looking at the Unix permissions, we can see that the owner has read, write and execute permissions.  The group also has read, write and execute permissions, while all other users have no access to the directory.

Changing permissions on a file is primarily performed by executing three commands – chown, chgrp, and chmod.  The chown command changes the file's owner and takes the basic form: chown [OWNER] [FILE] as shown here:
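
For example, to make user1 the owner of the files/ directory:

  chown user1 files/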

Similarly, the chgrp command changes the file's group:
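
For example:

  chgrp group1 files/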

These two commands can be combined and simplified as follows:
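
Setting the owner and group in a single step:

  chown user1:group1 files/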

Changing a file's permissions is a bit more complicated, and there are two methods for doing so, symbolic mode and numeric mode.  Symbolic mode is a bit more intuitive, but not quite as succinct as numeric mode.  The chmod command in symbolic mode takes the basic form, chmod [ugoa][-+=][rwx] [FILE], where:

u = user
g = group
o = other
a = all

and

– removes the permission
+ adds the permission
= assigns the permission

For instance, if we wanted to give all users read, write and execute permissions, we could use this command:
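
That looks like:

  chmod a+rwx files/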

Now to remove the write permission from other:
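
For example:

  chmod o-w files/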

To change a file’s permissions in numeric mode, you must use the chmod command to specify a three-digit number, with each digit assigned a value of 0-7.  The first digit in the three-digit number specifies the owner permissions, the second digit specifies the group permissions, while the third digit specifies the other permissions.  Subsequently, each digit is calculated by adding the values of each bit in the rwx designation.  The values are assigned to each permission as follows:

r  w  x  –
4  2  1  0

Thus, rwx corresponds to a value of 7 (4+2+1), while r-x corresponds to a value of 5 (4+0+1) and rw- a value 6 (4+2+0).

This may be hard to visualize, so let’s repeat the previous exercise and give all users read, write and execute permissions as before.  To do this using the chmod command in numeric mode, it would look like this:
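
In numeric mode that's:

  chmod 777 files/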

The first seven grants rwx (4+2+1) to the owner, the second seven grants the group rwx (4+2+1) permissions and the third seven grants rwx (4+2+1) to everyone else.  And to remove all permissions for other, we simply reissue the command with different chmod values:
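
For instance:

  chmod 770 files/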

The first two values work just as before.  The final zero is the symbolic equivalent of – – – or no access to this file.

Extended ACLs

But now let’s consider a scenario in which we want to implement more complex permissions.  This is where traditional Unix permissions tend to fall short.  With traditional Unix permissions, you cannot for instance, add multiple groups or users to a file or directory.  This becomes increasingly troublesome if you intend to use your Linux box as a file server in a Windows environment.  Fortunately, Linux offers an extended ACLs package that solves this problem, called acl.  Extended ACLs offer a full set of permissions that allows us to apply permissions and even inheritance with nearly the same ease we’re used to on a Windows file server.

Install the extended ACLs as follows:
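
On a Debian or Ubuntu system, for example:

  sudo apt-get install acl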

The package is comprised of two main components – getfacl and setfacl.  The getfacl command retrieves the ACLs on a folder or file, while setfacl adds, modifies or removes ACLs.  Before we modify any permissions, however, let’s take a look at the same directory called files with getfacl:
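
The command and a representative result (matching the 770 permissions described below):

  getfacl files/
  # file: files
  # owner: user1
  # group: user1
  user::rwx
  group::rwx
  other::---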

As we can see from this example, the output is quite different than the ls command, though we get much of the same information as before – the owner is user1, the group is user1, and the Unix permissions are 770, or rwxrwx – – – .

Extended ACLs offers a much greater range of permissions than what we see here, however.  Using the setfacl command, we can add multiple users or groups, and even set inheritance on the file.  Using setfacl with the -m parameter specifies that we want to modify an ACL on a directory, while -x specifies that we want to remove an ACL entry from a directory.  Additionally,  the u: or g: notation is used to determine whether the ACL entry applies to the owner (u:) or group (g:).  Furthermore, the -d parameter gives us the ability to set a default ACL on the directory, so that new files and subdirectories will inherit the parent ACL.

Let’s start by taking a look at an example where we want to set defaults on a parent directory:
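
A sketch of the five commands discussed below:

  setfacl -d -m u::rwx files/
  setfacl -d -m g::rwx files/
  setfacl -d -m g:group1:rwx files/
  setfacl -d -m g:group2:rwx files/
  setfacl -d -m g:group3:rx files/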

Now let’s take a look at what these commands are doing.  In the first two lines, we’re setting the permissions for the default user and group.  In the last three lines, we’re setting a default ACL for not one, but three separate groups’ permissions on this directory.  The first two groups, group1 and group2, both have read, write and execute, while group3 has read and execute only.  Taking a look at getfacl again on this directory, we can start to see the permissions taking shape:
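
A representative getfacl listing at this point:

  getfacl files/
  # file: files
  # owner: user1
  # group: user1
  user::rwx
  group::rwx
  other::---
  default:user::rwx
  default:group::rwx
  default:group:group1:rwx
  default:group:group2:rwx
  default:group:group3:r-x
  default:mask::rwx
  default:other::---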

Next, let’s grant explicit ACLs on the parent folder.  These commands look similar to the default ACLs, less the -d parameter.
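
The same entries, this time without -d:

  setfacl -m u::rwx files/
  setfacl -m g::rwx files/
  setfacl -m g:group1:rwx files/
  setfacl -m g:group2:rwx files/
  setfacl -m g:group3:rx files/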

Now we have a full set of permissions on this folder.  We’ve set defaults for the owner and the group, as well as defaults for three specific groups.  We then added permissions for the owner, group and three groups to the folder explicitly, giving us much greater granularity than with traditional Unix permissions alone.

There’s one more item we might want to consider, however.  Note what happens when we create a new sub-directory within the files directory while logged in as user1.
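
For example:

  mkdir files/subdir1
  ls -ld files/subdir1    # the group column shows user1, not group1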

We see that since the folder was created as user1, whose primary group is also called user1, the group for this new directory also becomes user1.  Though not technically part of the extended acl set, we can use an additional chmod command to offset this behavior, called the setgid flag.  Now, it’s important to note that the setgid flag means two very different things when applied to directories versus files.  When applied to a file, the setgid flag allows users to execute a file as if they were a member of the file’s group; the command actually means, “set group id upon execution.”  In our case, this isn’t the behavior we desire.  When applied to a directory, however, setgid takes on a different meaning.  In the context of a directory, setgid forces new files and folders created within the parent to take on the parent’s group, regardless of the group to which the user that created the file or folder belongs.

Let’s now set the setgid flag on the parent folder as follows:
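
That's simply:

  chmod g+s files/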

Looking at the permissions we see a new attribute, called flags, with the set id flag in the group position.  Finally, let’s recreate the new sub-directory as user1.
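
A sketch of that check and the re-created directory:

  getfacl files/ | grep flags
  # flags: -s-
  rmdir files/subdir1 && mkdir files/subdir1
  ls -ld files/subdir1    # the group column now shows group1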

Note that now, although the new directory was created by user1, the group attribute has retained the value of the parent, group1.  It’s also important to note that our new sub-folder has inherited the defaults of the parent folder as well as the specific user and group permissions from the parent.

File Masks

One final note regarding the mask attribute before we conclude.  Simply put, the mask implies the maximum permissions that may be applied to a file or directory. In all of our examples you’ll note that the mask is set to rwx, so rwx is the maximum permission allowed.  Were we to implement a mask of r – – , however, read-only permissions would be the maximum permissions allowed, regardless of the explicit permissions for a named user or group.  Put another way, if a mask is r – – but a user has rwx, the effective permission is the overlap between the mask and the explicit permission.  In this case, the permission becomes r – – since the read permission is the only position that overlaps between the two.

This becomes a bit clearer when we see it in action.  Let’s take the parent folder from before, but assign a default and explicit mask of r – – .
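
Something like this, setting both the default and the explicit mask:

  setfacl -d -m m::r files/
  setfacl -m m::r files/
  getfacl files/
  # getfacl now flags the reduced rights, e.g.:  group:group1:rwx   #effective:r--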

Notice now that when we view the permissions on this folder, we see that although group1 and group2 explicitly have rwx permissions, they only overlap the mask on the read-only value, thus read-only becomes the effective permission.  From this we can see why it’s important to keep an eye on the mask attribute, as it can “mask” the permissions we intend to set for a user or group.  As a rule of thumb when using extended ACLs, it’s sometimes more straightforward to simply leave the mask as rwx, control access by thoroughly applying group permissions, and ensuring no other users have access to the files.  This prevents any unexpected behavior for explicitly defined user or group ACLs.

Conclusion

While this is by no means an exhaustive review of Linux ACLs or even traditional Unix permissions, this article has hopefully given you a good overview of the additional flexibility that ACLs bring to the Linux file system.  Coupled with traditional Unix permissions, extended ACLs give you near Windows-like file and folder manageability.  This becomes even more evident when using Linux to serve files or FTP, especially in a Windows environment.

List Directories and Files with Tree

On one of our backup servers, we run StableBit’s DrivePool with great success. As we’ve mentioned, this is a great program that allows you to pool disparate hard drives on a Windows Desktop or Server and has some great features and options. We use it to simply pool a number of drives to provide a large (20+ TB) backup target for our uSANs. After all, it’s backup, and in a home lab, you may not want to spend extra money on parity drives in your backup server when you already have parity and redundancy at other levels. And though it’s been working without fail for some time now, there’s one nagging thought that always lurks in the shadows for me.

As with any virtual file system layered on top of a drive pool, not knowing exactly where your files are is just how things work. After all, that’s what it’s designed to do – obfuscate the disk subsystem to provide a single large file system to place your files. Copy your stuff to the pool and let the software do the rest. To the user, all your files transparently appear in one neat and tidy place.

Perhaps it's my OCD, but I still like to know where everything is. In a pinch, say if a backup drive fails, I like knowing exactly what's gone. It's like the old saying, "you don't know what you don't know." "But Bill," you say, "if a disk fails, simply rerun your backup scripts and let the system do its thing." I know, and you're exactly right, but you still can't convince my OCD of that.



So, without further ado, here's a simple command-line tool in Windows that will output a list of your files for reference should you need it — tree. Tree is included with nearly all versions of Windows and it's quite easy to use. In its simplest form, tree simply outputs a list of directories, beginning with the current directory, and does so in a visual tree form that shows the directory structure. In system32 for instance, it looks like this:
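
Representative (and heavily truncated) output from C:\Windows\System32:

  C:\Windows\System32>tree
  C:.
  ├───0409
  ├───AdvancedInstallers
  ├───am-et
  ├───AppLocker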

Tree only has a couple of command line switches, but both can be useful.  Running tree with /F also displays the names of the files in each folder. As you can imagine, the output could get quite lengthy for a folder like system32, but sending the output to a logfile allows you to review or search the output as needed. Using /A outputs the results using ASCII characters instead of extended characters.  This is important when sending the output to a plain-text file, in which extended characters may not appear properly.

The simple command just looks like this:
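
The paths here are just an example; point the output anywhere that's convenient:

  tree D:\Pool /F /A > C:\Logs\pool_tree.txt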

Output is neatly sent to a plain-text file, which documents the file and folder layout.

For our backup pool, we simply send the output tree to a log file as part of a daily scheduled task.  Should a drive in the pool fail, we can simply reference the log file for that day to determine exactly which files were lost.

Clear the Disk Read-Only Flag in Windows

While we were recently adding a new disk to one of our backup servers, one of the existing disks changed device letters in Linux. Ordinarily this is not a big deal, but since this particular disk was an iblock device in an LIO backstore, and was defined by the /dev/sd[x] notation, it was no longer listed correctly. Oddly, the disk was still listed in the Disk Manager on the hypervisors, but any attempt at I/O would result in errors. The disk was ultimately removed from the LIO configuration, which then caused the LUN to drop from the hypervisor nodes.

After adding the disk back to LIO using a slicker method as detailed here, the disk reappeared on the hypervisors, and we reconnected the disk to the VM in Hyper-V. However, after adding the storage back, we noticed the LUN from LIO was marked as read-only in the virtual server, and would not permit any writes. Should you run into a similar situation, the fix is usually pretty simple, as noted below.

First, start the diskpart utility from a Windows CLI and list the available disks:
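
For example:

  C:\> diskpart
  DISKPART> list disk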

 

Next, select the disk in question, in this case Disk 6. Notice that when we look at the disk details in diskpart, this disk is definitely listed as read-only:
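
The detail output includes a "Read-only" line, which in this case shows Yes:

  DISKPART> select disk 6
  DISKPART> detail disk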

 

With the disk still selected, clear the readonly attribute for the disk with the following command:
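
The attributes command handles this:

  DISKPART> attributes disk clear readonly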

 

The disk should now be listed as “Read-Only: No,” and available for writing. You can verify its status with the detail command as before.

We’re still not quite sure what caused this little issue, as we’ve removed and added several disks back in LIO without this cropping up. Perhaps it was the less than graceful removal of the disk from the hypervisor while it was attempting IO. Whatever the case, though an old utility, diskpart can still prove to be a useful tool when the need arises.