ZFS¶
ZFS is a combined file system and logical volume manager designed by Sun Microsystems. ZFS is scalable and includes extensive protection against data corruption, support for high storage capacities, efficient data compression, integration of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking with automatic repair, RAID-Z, and native NFSv4 ACLs; it can also be configured very precisely. The two main implementations, by Oracle and by the OpenZFS project, are extremely similar, making ZFS widely available within Unix-like systems.
Installation¶
The ZFS filesystem is available for Ubuntu as either a FUSE module or a native kernel module. The kernel module is provided by default. To install the user-level tools, run:
sudo apt install zfsutils-linux
This applies to all Ubuntu releases from 16.04 onward. To additionally be able to have ZFS on the root filesystem, install:
sudo apt install zfs-initramfs
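To confirm that both the user-space tools and the kernel module are available, a quick check (a minimal sketch, no pool needs to exist yet) is:
# kernel module present?
modinfo zfs | head -n 3
# user-space tools working? (prints "no pools available" on a fresh install)
zpool status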
Datapool management¶
Datapools are managed with the CLI tool zpool. The following example creates a pool out of two mirrored pairs of disks:
zpool create -o ashift=12 datapool \
  mirror /dev/disk/by-id/scsi-3600605b00b7ea6801f8e50c04c7651d8 /dev/disk/by-id/scsi-3600605b00b7ea6801f8e50c94d0213ed \
  mirror /dev/disk/by-id/scsi-3600605b00b7ea6801f8e50ce4d4a867b /dev/disk/by-id/scsi-3600605b00b7ea6801f8e50d34d929f1f
Real pools should be created with the option -o ashift=12 and with the disks referenced by their stable /dev/disk/by-id paths. Newer hard disks use 4096-byte sectors; ashift=12 aligns ZFS block sizes to that sector size (see wiki.ubuntuusers.de).
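If you only want to experiment, a throwaway pool can be backed by sparse files instead of real disks (a sketch; the file paths and sizes are arbitrary):
# create two 1 GiB sparse files to act as vdevs
truncate -s 1G /tmp/zfs-disk1 /tmp/zfs-disk2
# build a mirrored test pool on them and inspect it
sudo zpool create -o ashift=12 testpool mirror /tmp/zfs-disk1 /tmp/zfs-disk2
zpool status testpool
# clean up
sudo zpool destroy testpool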
You can list the state of the pool by running zpool list, which will produce output like
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
datapool 3,25T 417G 2,84T - 13% 12% 1.00x ONLINE -
You can destroy a pool by running zpool destroy datapool.
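Destroying a pool is irreversible. If the goal is only to detach the disks and use them elsewhere, exporting is the non-destructive alternative:
sudo zpool export datapool   # cleanly detach the pool from this system
sudo zpool import datapool   # re-attach it later (also works on another machine)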
Further information on pool management can be found here.
Pool management¶
zpool status
zpool status datapool
zpool status -v datapool
zpool scrub datapool
A scrub reads all data in the pool and verifies its checksums, i.e. it checks the pool's integrity.
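While a scrub runs (and after it finishes), its progress and result show up in the scan: line of zpool status; the exact wording varies between ZFS versions:
sudo zpool scrub datapool
zpool status datapool | grep -A 2 scan: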
You can monitor the pool status with monit, e.g. by generating daily reports via cron:
0 0 * * * /sbin/zpool status datapool > /var/zfsdaily
Further, you can scrub the datapool automatically:
0 0 1 * * /sbin/zpool scrub datapool
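A lighter-weight variant is zpool status -x, which only prints details for pools with problems; the schedule and output path below are arbitrary choices:
0 8 * * * /sbin/zpool status -x > /var/zfs-health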
Dataset management¶
Once a datapool (e.g. /datapool) is created, individual datasets (ZFS filesystems) within it can be managed with the CLI tool zfs.
zfs create datapool/nameofthedataset
zfs list
zfs destroy datapool/nameofthedataset
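Datasets can also carry per-dataset properties such as compression or a quota; the values below are examples, not recommendations:
sudo zfs set compression=lz4 datapool/nameofthedataset
sudo zfs set quota=100G datapool/nameofthedataset
zfs get compression,quota,mountpoint datapool/nameofthedataset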
Snapshots¶
Snapshots of datasets can be created, listed, rolled back to, and deleted:
zfs snapshot tank/home/ahrens@friday
zfs list -t snapshot
zfs list -r -t snapshot -o name,creation tank/home
zfs list -o space
zfs destroy tank/home/ahrens@friday
zfs rollback tank/home/ahrens@tuesday
Use -r to force deletion of all snapshots that are more recent than the one you roll back to:
zfs rollback -r tank/home/ahrens@tuesday
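Before rolling back, it can be useful to see what has changed since a snapshot; zfs diff compares a snapshot against the live dataset (reusing the example names from above):
sudo zfs diff tank/home/ahrens@friday tank/home/ahrens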
Tools like zfs-auto-snapshot facilitate snapshot management and allow for rotation, as shown in this Stack Overflow entry.
vim /etc/cron.d/zfs-auto-snapshot
PATH="/usr/bin:/bin:/usr/sbin:/sbin"
*/5 * * * * root /usr/local/sbin/zfs-auto-snapshot -q -g --label=frequent --keep=24 poolname
00 * * * * root /usr/local/sbin/zfs-auto-snapshot -q -g --label=hourly --keep=24 poolname
59 23 * * * root /usr/local/sbin/zfs-auto-snapshot -q -g --label=daily --keep=14 poolname
59 23 * * 0 root /usr/local/sbin/zfs-auto-snapshot -q -g --label=weekly --keep=4 poolname
00 00 1 * * root /usr/local/sbin/zfs-auto-snapshot -q -g --label=monthly --keep=4 poolname
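zfs-auto-snapshot names its snapshots after the configured labels, so you can check that the rotation is working by listing them (the grep pattern assumes the default zfs-auto-snap prefix):
zfs list -t snapshot -o name,creation | grep zfs-auto-snap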
Restoring from Snapshots¶
Each dataset (and the pool itself) has a hidden .zfs directory at its mountpoint that contains all of its snapshots.
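A single file can be restored by copying it out of the corresponding snapshot directory; the paths below reuse the tank/home/ahrens@friday example and are placeholders:
# snapshots are exposed read-only under <mountpoint>/.zfs/snapshot/<snapshotname>
ls /tank/home/ahrens/.zfs/snapshot/friday/
cp /tank/home/ahrens/.zfs/snapshot/friday/somefile /tank/home/ahrens/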
🚨 The internal file ownerships of restored LXC containers will be messed up. Restoring LXC containers from ZFS snapshots should be a last resort.