Testing ZFS's snapshot feature
When the new Ubuntu 22.04 was released at the end of April, curiosity led me to pick the ZFS file system; I had heard about ZFS for a long time. Later, when I installed an updated Linux kernel .deb package myself, the system would no longer boot: the ZFS file system could not be recognized. That was a scary moment, and all I could do was search the web. It turns out that, because of license issues (ZFS is under the CDDL, which is generally considered incompatible with the kernel's GPLv2), the kernel does not bundle a ZFS module the way it does for ext4; if it did, Oracle's lawyers might come knocking with a cease-and-desist letter.
Digressions aside: since Ubuntu officially offers ZFS as an install option, and the BSDs have adopted it as well, it must have its merits. For ordinary users like me, ZFS's snapshot feature alone is well worth it. My test went as follows.
1. First, look at the basic status
➜ mephisto.cc git:(main) zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool  1.88G   244M  1.64G        -         -     1%    12%  1.00x    ONLINE  -
rpool   472G  23.1G   449G        -         -     3%     4%  1.00x    ONLINE  -
➜ mephisto.cc git:(main) zpool status
  pool: bpool
 state: ONLINE
config:

        NAME                                    STATE     READ WRITE CKSUM
        bpool                                   ONLINE       0     0     0
          5caca429-191c-f046-b1dc-3167ba2d43fa  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
config:

        NAME                                    STATE     READ WRITE CKSUM
        rpool                                   ONLINE       0     0     0
          1c85b674-1367-5d40-8553-2ba3439be776  ONLINE       0     0     0

errors: No known data errors
➜ mephisto.cc git:(main) zfs --version
zfs-2.1.2-1ubuntu3
zfs-kmod-2.1.2-1ubuntu3
➜ mephisto.cc git:(main) zfs list
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
bpool                                              244M  1.51G    96K  /boot
bpool/BOOT                                         243M  1.51G    96K  none
bpool/BOOT/ubuntu_o54gp2                           243M  1.51G   243M  /boot
rpool                                             23.1G   434G    96K  /
rpool/ROOT                                        7.42G   434G    96K  none
rpool/ROOT/ubuntu_o54gp2                          7.42G   434G  5.74G  /
rpool/ROOT/ubuntu_o54gp2/srv                        96K   434G    96K  /srv
rpool/ROOT/ubuntu_o54gp2/usr                       140M   434G    96K  /usr
rpool/ROOT/ubuntu_o54gp2/usr/local                 139M   434G   139M  /usr/local
rpool/ROOT/ubuntu_o54gp2/var                      1.55G   434G    96K  /var
rpool/ROOT/ubuntu_o54gp2/var/games                  96K   434G    96K  /var/games
rpool/ROOT/ubuntu_o54gp2/var/lib                  1.51G   434G  1.38G  /var/lib
rpool/ROOT/ubuntu_o54gp2/var/lib/AccountsService   112K   434G   112K  /var/lib/AccountsService
rpool/ROOT/ubuntu_o54gp2/var/lib/NetworkManager    144K   434G   144K  /var/lib/NetworkManager
rpool/ROOT/ubuntu_o54gp2/var/lib/apt              84.4M   434G  84.4M  /var/lib/apt
rpool/ROOT/ubuntu_o54gp2/var/lib/dpkg             46.3M   434G  46.3M  /var/lib/dpkg
rpool/ROOT/ubuntu_o54gp2/var/log                  41.3M   434G  41.3M  /var/log
rpool/ROOT/ubuntu_o54gp2/var/mail                   96K   434G    96K  /var/mail
rpool/ROOT/ubuntu_o54gp2/var/snap                    1M   434G     1M  /var/snap
rpool/ROOT/ubuntu_o54gp2/var/spool                 116K   434G   116K  /var/spool
rpool/ROOT/ubuntu_o54gp2/var/www                    96K   434G    96K  /var/www
rpool/USERDATA                                    15.7G   434G    96K  /
rpool/USERDATA/mephisto_fmrvh5                    15.7G   434G  15.7G  /home/mephisto
rpool/USERDATA/root_fmrvh5                        1.03M   434G  1.03M  /root
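The status check above lends itself to scripting. A minimal sketch: the `check_pools` function is my own helper, and the piped-in lines are hard-coded from the output above so the snippet runs anywhere (on a live system you would feed it `zpool list -H -o name,health` instead):

```shell
# Warn about any pool whose health is not ONLINE.
# Expects "name health" pairs on stdin, one pool per line, as produced by:
#   zpool list -H -o name,health
check_pools() {
  while read -r name health; do
    if [ "$health" = "ONLINE" ]; then
      echo "ok: $name"
    else
      echo "WARNING: $name is $health"
    fi
  done
}

# Demo on the two pools shown above:
printf 'bpool ONLINE\nrpool ONLINE\n' | check_pools
```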
2. Confirm that the test file 404.jpg exists
➜ mephisto.cc git:(main) ls -al ~/Pictures/iPhone
total 47
drwxrwxrwx 2 mephisto mephisto 3 May 29 12:53 .
drwxrwxrwx 4 mephisto mephisto 7 May 26 14:49 ..
-rw-rw-r-- 1 mephisto mephisto 43408 May 29 12:53 404.jpg
3. Create a snapshot, then delete the file. Creating a snapshot is very fast, done in about a second
➜ mephisto.cc git:(main) sudo zfs snapshot rpool/USERDATA/mephisto_fmrvh5@9527
[sudo] password for mephisto:
➜ mephisto.cc git:(main) rm ~/Pictures/iPhone/404.jpg
➜ mephisto.cc git:(main) ls ~/Pictures/iPhone/404.jpg
ls: cannot access '/home/mephisto/Pictures/iPhone/404.jpg': No such file or directory
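A name like 9527 is arbitrary; anything after the `@` is legal. For regular use, timestamped names sort naturally. A small sketch, where `snapname` is my own helper rather than a zfs subcommand:

```shell
# Build a snapshot name of the form dataset@YYYYmmdd-HHMMSS.
snapname() {
  echo "$1@$(date +%Y%m%d-%H%M%S)"
}

# On a live system the result would go straight into zfs snapshot:
#   sudo zfs snapshot "$(snapname rpool/USERDATA/mephisto_fmrvh5)"
snapname rpool/USERDATA/mephisto_fmrvh5
```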
4. Roll back to the snapshot to rescue the deleted 404.jpg. In my test the rollback was also very fast, since little had changed
➜ mephisto.cc git:(main) sudo zfs rollback rpool/USERDATA/mephisto_fmrvh5@9527
➜ mephisto.cc git:(main) ls ~/Pictures/iPhone/404.jpg
/home/mephisto/Pictures/iPhone/404.jpg
➜ mephisto.cc git:(main) echo "nubility"
nubility
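One caveat worth knowing: `zfs rollback` discards everything written to the dataset after the snapshot, so it is a blunt tool when only one file is gone. Each mounted dataset also exposes a read-only, hidden `.zfs/snapshot` directory at its mountpoint, from which a single file can be copied back with plain `cp`. The `snapshot_path` helper below is my own sketch for building such paths, not a zfs command:

```shell
# Build the path of a file inside a snapshot's read-only view.
# usage: snapshot_path <dataset-mountpoint> <snapshot-name> <relative-path>
snapshot_path() {
  echo "$1/.zfs/snapshot/$2/$3"
}

# For the deleted 404.jpg above, the rescue copy would be:
#   cp "$(snapshot_path /home/mephisto 9527 Pictures/iPhone/404.jpg)" \
#      /home/mephisto/Pictures/iPhone/
snapshot_path /home/mephisto 9527 Pictures/iPhone/404.jpg
```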
5. List the current snapshots; 9527 is the one just created
➜ mephisto.cc git:(main) zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
rpool/USERDATA/mephisto_fmrvh5@9527 3.17M - 15.7G -
6. Create another snapshot, 9528. I won't repeat the deletion test; the file system is quite dependable
➜ mephisto.cc git:(main) sudo zfs snapshot rpool/USERDATA/mephisto_fmrvh5@9528
[sudo] password for mephisto:
➜ mephisto.cc git:(main) ✗ zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
rpool/USERDATA/mephisto_fmrvh5@9527 3.98M - 15.7G -
rpool/USERDATA/mephisto_fmrvh5@9528 0B - 15.7G -
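Snapshots accumulate, and each keeps referencing blocks (that is what the USED column counts), so old ones eventually deserve pruning. A keep-the-last-N sketch: `prune_list` is my own helper that only computes which names would go, and the destructive `zfs destroy` loop is left as a comment. Note that `head -n -N` is a GNU coreutils extension:

```shell
# Given snapshot names on stdin, oldest first (as produced by
#   zfs list -t snapshot -o name -s creation -H <dataset>
# ), print every name except the newest $1.
prune_list() {
  head -n -"$1"
}

# Demo on the two snapshots above plus a hypothetical third one, 9529:
printf '%s\n' \
  rpool/USERDATA/mephisto_fmrvh5@9527 \
  rpool/USERDATA/mephisto_fmrvh5@9528 \
  rpool/USERDATA/mephisto_fmrvh5@9529 \
  | prune_list 2
# On a live system the surviving list would feed a destroy loop:
#   ... | prune_list 2 | while read -r s; do sudo zfs destroy "$s"; done
```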
Beyond snapshots, ZFS storage pools (zpools) are also pleasant to use: no extra volume manager is needed to span more than one device, which comes in handy when building a personal NAS. I had no spare disks to test with; readers who do can consult the OpenZFS or Oracle documentation.
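For anyone in the same no-spare-disks situation, zpool layouts can still be practiced against file-backed vdevs. A sketch, assuming a writable /tmp and "tank" as an arbitrary pool name of my choosing; the `zpool create` itself needs root and the zfs tools installed, so it is only printed here rather than run:

```shell
# Two sparse 256M backing files; ZFS requires vdevs of at least 64M.
truncate -s 256M /tmp/vdev0.img /tmp/vdev1.img

# A two-way mirror built from them (printed rather than executed):
echo "sudo zpool create tank mirror /tmp/vdev0.img /tmp/vdev1.img"
# A throwaway practice pool is removed afterwards with:
#   sudo zpool destroy tank
```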