Understanding ZFS Pool – Part 2 of 2 – Tutorial

Oracle Solaris

This is the second part of the Understanding ZFS Pool article series, where I try to simplify things and cover the most commonly used ZFS file system features.
In the first part, we saw how to use physical disks to create a zfs pool and how to troubleshoot and replace failed devices.
In this part, we will see how to use those zpools and create ZFS file systems (datasets) on top of them.

Here is the actual configuration of our zfs pool, which is a two-way mirror with one spare disk:

root@sol01:~# zpool list mypool
NAME    SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
mypool  187M  50.5M  137M  27%  1.00x  ONLINE  -
root@sol01:~# zpool status  mypool
  pool: mypool
 state: ONLINE
  scan: resilvered 50.2M in 0h0m with 0 errors on Tue Feb  9 22:20:19 2016
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
        spares
          c1t4d0    AVAIL

errors: No known data errors

Creating a ZFS File System

Let’s create two zfs datasets

root@sol01:~# zfs create mypool/data
root@sol01:~# zfs create mypool/logs
root@sol01:~# zfs list -r mypool
NAME          USED  AVAIL  REFER  MOUNTPOINT
mypool       50.3M   105M  50.0M  /mypool
mypool/data    31K   105M    31K  /mypool/data
mypool/logs    31K   105M    31K  /mypool/logs

I have used the -r option to recursively list all datasets under the pool. For a more specific output, we can use the -t option to specify the type of dataset to display. Three aliases are accepted with -t: fs (filesystem), snap (snapshot), and vol (volume).

For example, to list only the snapshots under the zfs pool “rpool”, we can use the following command:

root@sol01:~# zfs list -r -t snap rpool
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris@install      112M      -  3.96G  -
rpool/ROOT/solaris/var@install  152M      -   389M  -
rpool/newhome/david@snap1        20K      -  5.04M  -
rpool/newhome/david@snap2        20K      -  10.0M  -
rpool/project1@monday             1K      -    33K  -
rpool/project1@mondayIncr         1K      -    33K  -
rpool/project1@tuesdayIncr         0      -    33K  -

Mounting a ZFS File System

The beauty of zfs is that it takes care of everything required to mount a newly created file system: no /etc/vfstab entry is needed, and the file system is mounted automatically at boot time by the SMF service svc:/system/filesystem/local:default.
We can use zfs mount and zfs unmount respectively to mount and unmount a file system. By default, ZFS derives the mount point from the file system name.
We can specify a different mount point at file system creation, or change it afterwards; in the latter case zfs remounts the file system at the new mount point.
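For example, the mount point can also be assigned in a single step at creation time with the -o option (mypool/archive and /archive are hypothetical names used only for illustration):

```shell
# Create a dataset and assign its mount point in one step
# (mypool/archive is a hypothetical dataset name)
root@sol01:~# zfs create -o mountpoint=/archive mypool/archive
```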

Let’s see how we can change the mount point of our dataset mypool/data from /mypool/data to /data:

root@sol01:~# zfs list -r -t all mypool
NAME          USED  AVAIL  REFER  MOUNTPOINT
mypool       50.3M   105M  50.0M  /mypool
mypool/data    31K   105M    31K  /mypool/data
mypool/logs    31K   105M    31K  /mypool/logs

root@sol01:~# zfs set mountpoint=/data mypool/data
root@sol01:~# zfs list -r -t all mypool
NAME          USED  AVAIL  REFER  MOUNTPOINT
mypool       50.6M   104M  50.0M  /mypool
mypool/data    31K   104M    31K  /data
mypool/logs    31K   104M    31K  /mypool/logs

Easy, clean and simple! This is in fact how zfs handles mounting a file system. The mountpoint property can also be set to none, which prevents the file system from being mounted.
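As a quick illustration, this is how mounting of a dataset such as mypool/logs could be disabled, and how the change can be verified afterwards:

```shell
# Prevent the dataset from being mounted
root@sol01:~# zfs set mountpoint=none mypool/logs
# Confirm the new value of the mountpoint property
root@sol01:~# zfs get mountpoint mypool/logs
```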
The other way to mount a zfs file system is legacy mode: in this case we use the traditional mount command to handle the mounting operation, and we update /etc/vfstab ourselves to reflect the change.
This can be accomplished using the steps below:
1- Create a zfs pool.
2- Set the mountpoint property to legacy

root@sol01:~# zfs set mountpoint=legacy mypool

3- Create the directory where to mount the file system

root@sol01:~# mkdir /data

4- Mount the FS the traditional way

root@sol01:~# mount -F zfs mypool /data

5- Update the /etc/vfstab to automatically mount the file system at boot time

root@sol01:~# echo "mypool - /data zfs - yes -" >> /etc/vfstab

From here, all the management tasks must be done using the legacy mount command.
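Should we want to hand mounting back to ZFS later, setting a regular mount point again is enough (the matching /etc/vfstab entry must also be removed by hand):

```shell
# Switch back from legacy to ZFS-managed mounting;
# the file system is remounted at /mypool automatically
root@sol01:~# zfs set mountpoint=/mypool mypool
```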

Setting Quota and Reservation

Suppose that we want to allocate 75% of our pool to mypool/data, leaving the rest for the mypool/logs dataset.
We can use a reservation to guarantee that this amount of storage is set aside from the pool.
Our zfs file system is 154MB in size, so we will allocate ~75% of it to mypool/data:

root@sol01:~# zfs set reservation=116M mypool/data

And then we can use a quota to make sure we won't exceed the allocated storage space:
root@sol01:~# zfs set quota=116M mypool/data

ZFS space usage listing after setting the quota and reservation:
(screenshot: zfs list output showing the new space allocation)

Only ~25% of the pool is now available to the mypool/logs dataset.
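Both settings can be checked with zfs get, and removed at any time by setting the properties back to none:

```shell
# Display the current reservation and quota
root@sol01:~# zfs get reservation,quota mypool/data
# Remove them if they are no longer needed
root@sol01:~# zfs set reservation=none mypool/data
root@sol01:~# zfs set quota=none mypool/data
```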

Sharing ZFS using NFS or SMB

Starting with Oracle Solaris 11.1, two zfs file system properties can be used to share a file system: share.nfs for sharing over the NFS protocol, and share.smb for SMB sharing.
SMF will handle all the necessary tasks to share the file system during system boot.
Let’s see how we can share our pool using NFS.

root@sol01:~# zfs set share.nfs=on mypool/data

We can view all shared file systems using the share command.

root@sol01:~# share
mypool_data     /data   nfs     sec=sys,rw

We can also set more nfs sharing properties if needed, for example disabling root squashing (the mapping of the root account to nobody) to give root on another host, named sol02 for example, read/write access:

root@sol01:~# zfs set share.nfs.sec.default.root=sol02 mypool/data
root@sol01:~# share
mypool_data     /data   nfs     sec=default,root=sol02
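From the client side, the shared file system can then be mounted the traditional way (assuming the hostname sol01 resolves from sol02; /mnt/data is an arbitrary mount point chosen for this example):

```shell
# On the client sol02: mount the NFS share exported by sol01
root@sol02:~# mkdir /mnt/data
root@sol02:~# mount -F nfs sol01:/data /mnt/data
```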

To list all the properties that we can change, we can use the following command:

root@sol01:~# zfs help -l properties | grep share.nfs
share.nfs                            YES      YES  on | off
share.nfs.aclok                      YES      YES  on | off
share.nfs.anon                       YES      YES  
share.nfs.charset.euc-cn             YES      YES  
share.nfs.charset.euc-jp             YES      YES  
share.nfs.charset.euc-jpms           YES      YES  
share.nfs.charset.euc-kr             YES      YES  
share.nfs.charset.euc-tw             YES      YES  
share.nfs.charset.iso8859-1          YES      YES  
share.nfs.charset.iso8859-13         YES      YES  
share.nfs.charset.iso8859-15         YES      YES  
share.nfs.charset.iso8859-2          YES      YES  
share.nfs.charset.iso8859-5          YES      YES  
share.nfs.charset.iso8859-6          YES      YES  
share.nfs.charset.iso8859-7          YES      YES  
share.nfs.charset.iso8859-8          YES      YES  
share.nfs.charset.iso8859-9          YES      YES  
share.nfs.charset.koi8-r             YES      YES  
share.nfs.cksum                      YES      YES  
share.nfs.index                      YES      YES  
share.nfs.log                        YES      YES  
share.nfs.noaclfab                   YES      YES  on | off
share.nfs.nosub                      YES      YES  on | off
share.nfs.nosuid                     YES      YES  on | off
share.nfs.public                     YES       NO  on | off
share.nfs.sec                        YES      YES  
share.nfs.sec.default.none           YES      YES  
share.nfs.sec.default.ro             YES      YES  
share.nfs.sec.default.root           YES      YES  
share.nfs.sec.default.root_mapping   YES      YES  
share.nfs.sec.default.rw             YES      YES  
share.nfs.sec.dh.none                YES      YES  
share.nfs.sec.dh.ro                  YES      YES  
....

ZFS pool and file system administration is really simple; with some practice, you will quickly get familiar with it, especially if you come from a Linux background.

