I'm back. Busy week at work. Worked until late last night, then had to work on my truck to get it ready for work this week.
I was thinking about this in the back of my head while I was working...
First, take snapshots, recursively, using the "-r" flag. The datasets you want to snapshot are bpool/BOOT, rpool/ROOT, and rpool/USERDATA. Then send/receive, replicating the whole dataset trees using the "-R" flag, to an external disk. That way you can snapshot and restore without carrying over any of the set pool options that are causing this problem.
I mount my backup disk as a pool called "backups", mounted at a directory called /backups...
Code:
sudo -i
zfs snapshot -r bpool/BOOT@migrate
zfs snapshot -r rpool/ROOT@migrate
zfs snapshot -r rpool/USERDATA@migrate
zfs send -R bpool/BOOT@migrate > /backups/bpool-boot.migrate
zfs send -R rpool/ROOT@migrate > /backups/rpool-root.migrate
zfs send -R rpool/USERDATA@migrate > /backups/rpool-userdata.migrate
This will be the easiest way to migrate to a newer, larger disk without carrying over the problem...
First, do this to find the name of the disk:
Code:
lsblk -e7 -o name,size,model
For example's sake, let's just say it turned up as "sdb". We then use that value to find the unique disk ID:
Code:
ls -l /dev/disk/by-id | grep -e 'sdb$'
Let's say it came back like this:
Code:
lrwxrwxrwx 1 root root 9 Feb 18 10:26 ata-Samsung_SSD_870_QVO_2TB_S6R4NJ0R624350W -> ../../sdb
We then use that value, to create a reusable variable, so we don't have to retype it each time we need to use it
Code:
DISK=/dev/disk/by-id/ata-Samsung_SSD_870_QVO_2TB_S6R4NJ0R624350W
Then we create new partitions on that disk. This is what the stock Ubuntu install does by default:
Code:
NAME     LABEL   SIZE             FSTYPE      MOUNTPOINT
sda              <whatever size>
+-sda1           512M             vfat        /boot/efi
+-sda2           2G               swap        [SWAP]
+-sda3   bpool   2G               zfs_member
+-sda4   rpool   <balance>        zfs_member
I use different sizes for things, to correct sizing issues for what I do:
Code:
I set EFI at between 750M to 1G, depending on whether I am going to add any EFI apps.
I set swap at 1.5 to 2 times the RAM.
I set bpool at 3G to allow room for snapshots.
I give the balance to rpool, or whatever I decide is appropriate (I usually hold back 10%-20% as a reserve for emergencies).
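If you want, those rules of thumb are easy to turn into numbers in the shell before you partition (the RAM figure below is just an example; substitute your own):

```shell
# Sketch: work out the swap size from installed RAM.
# ram_gib is an example value, not read from the system.
ram_gib=16
swap_gib=$(( ram_gib * 2 ))   # rule of thumb is 1.5x-2x RAM; using 2x here
echo "swap: ${swap_gib}G"
echo "bpool: 3G"
```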
That would translate to something like this, adjusting for your planned needs:
Code:
sudo -i
## Add partitions
# For ESP EFI
sgdisk -n1:1M:1G -c1:EFI -t1:EF00 $DISK
# For Linux Swap
sgdisk -n2:0:+32G -c2:swap -t2:8200 $DISK
# For bpool boot
sgdisk -n3:0:+3G -c3:bpool -t3:BE00 $DISK
# For rpool root
sgdisk -n4:0:0 -c4:rpool -t4:BF00 $DISK
Then we are going to do a few things...
copy over the EFI from old to new:
Code:
sudo dd if=/dev/sda1 of=${DISK}-part1 bs=1M status=progress
The easiest way to migrate rpool is to attach ${DISK}-part4 to rpool (which will create it as a mirror of the old). After it finishes resilvering, "detach" the old partition, then remove it, then set "autoexpand=on"...
There are actually three ways to do that: add/remove, replace, or attach/detach. I find that attach/detach has a better success rate with people, and you can verify the new copy before the old one is removed...
The commands are going to look something like this
Code:
# syntax: zpool attach <pool> <existing-device> <new-device>
zpool attach rpool 5d133589-303b-1940-80fc-568124015599 ${DISK}-part4
zpool status -v rpool
Keep checking until it is done resilvering; then it will look similar to this:
Code:
  pool: rpool
 state: ONLINE
config:

        NAME                                             STATE     READ WRITE CKSUM
        rpool                                            ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            5d133589-303b-1940-80fc-568124015599         ONLINE       0     0     0
            ata-Samsung_SSD_870_QVO_2TB_S6R4NJ0R624350W  ONLINE       0     0     0

errors: No known data errors
So then you detach the old partition and let the pool grow into the new, larger one (the expansion takes effect the next time the device is onlined or the pool is imported):
Code:
zpool detach rpool 5d133589-303b-1940-80fc-568124015599
zpool set autoexpand=on rpool
For bpool, it is going to be a bit different... First, we capture the UUID that was used during its original install...
Code:
UUID=$(zfs list | awk -F "_" '/bpool\/BOOT\/ubuntu_/ {print $2}' | awk '{print $1}')
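If you want to sanity-check what that pipeline pulls out, feed it a fake line of `zfs list` output (the uk6f4s UUID below is made up for illustration):

```shell
# Fake one line of `zfs list` output and run it through the same extraction.
# "uk6f4s" is a made-up UUID, not from a real install.
printf 'bpool/BOOT/ubuntu_uk6f4s   123M   1.2G   96K   /boot\n' \
  | awk -F "_" '/bpool\/BOOT\/ubuntu_/ {print $2}' | awk '{print $1}'
# → uk6f4s
```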
Then rename the old bpool so it doesn't conflict with the new one, and we still have access to both:
Code:
umount /boot/efi
zpool export bpool
zpool import bpool bpool-bak
zfs set mountpoint=/mnt bpool-bak
Then create the new bpool, with the new options:
Code:
zpool create \
-o ashift=12 \
-o autotrim=on \
-o cachefile=/etc/zfs/zpool.cache \
-o feature@async_destroy=enabled \
-o feature@empty_bpobj=active \
-o feature@lz4_compress=active \
-o feature@multi_vdev_crash_dump=disabled \
-o feature@spacemap_histogram=active \
-o feature@enabled_txg=active \
-o feature@hole_birth=active \
-o feature@extensible_dataset=disabled \
-o feature@embedded_data=active \
-o feature@bookmarks=disabled \
-o feature@filesystem_limits=disabled \
-o feature@large_blocks=disabled \
-o feature@large_dnode=disabled \
-o feature@sha512=disabled \
-o feature@skein=disabled \
-o feature@edonr=disabled \
-o feature@userobj_accounting=disabled \
-o feature@encryption=disabled \
-o feature@project_quota=disabled \
-o feature@device_removal=disabled \
-o feature@obsolete_counts=disabled \
-o feature@zpool_checkpoint=disabled \
-o feature@spacemap_v2=disabled \
-o feature@allocation_classes=disabled \
-o feature@resilver_defer=disabled \
-o feature@bookmark_v2=disabled \
-o feature@redaction_bookmarks=disabled \
-o feature@redacted_datasets=disabled \
-o feature@bookmark_written=disabled \
-o feature@log_spacemap=disabled \
-o feature@livelist=disabled \
-o feature@device_rebuild=disabled \
-o feature@zstd_compress=disabled \
-o feature@draid=disabled \
-o feature@zilsaxattr=disabled \
-o feature@head_errlog=disabled \
-o feature@blake3=disabled \
-o feature@block_cloning=disabled \
-o feature@vdev_zaps_v2=disabled \
-o compatibility=grub2,ubuntu-22.04 \
-O devices=off \
-O acltype=posixacl \
-O xattr=sa \
-O compression=lz4 \
-O normalization=formD \
-O relatime=on \
-O canmount=off -O mountpoint=/boot -R /mnt \
bpool ${DISK}-part3
zfs create -o canmount=off -o mountpoint=none bpool/BOOT
zfs create -o mountpoint=/boot bpool/BOOT/ubuntu_$UUID
ls /mnt # To verify it is there...
Then copy (rsync) the filesystem over from old to new, or restore via send/receive from the dataset snapshot backup on your recovery pool.
rsync would look like this
Code:
rsync -a /boot/ /mnt/boot/
send/receive would look like this... (Note that the dataset cannot already exist when you restore from a full stream.)
Code:
zfs destroy bpool/BOOT/ubuntu_$UUID
zfs destroy bpool/BOOT
zfs receive -F -d bpool < /backups/bpool-boot.migrate