Fascinating. I had just finished a scrub on June 6; then a drive started dying. Now I've attempted to replace it, but an adjacent drive also had errors, so I'm in a bit of a pickle. This snapshot had the error:

tank/VMs/l_4548-f30m64r@0000-installed:/lf541-f30-64-sda.img

I deleted the snapshot, did a zpool clear tank, and the resilver automatically restarted.
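For anyone following along, the cleanup amounted to roughly these two commands (snapshot name taken from the error listing above; zfs destroy drops the snapshot that referenced the bad block, and zpool clear resets the pool's error counters):

# zfs destroy tank/VMs/l_4548-f30m64r@0000-installed
# zpool clear tank

Here is where the pool stands now: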
  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Jul  4 09:31:08 2020
        3.61T scanned out of 5.02T at 474M/s, 0h51m to go
        464G resilvered, 71.96% done
config:

        NAME                                                      STATE     READ WRITE CKSUM
        tank                                                      DEGRADED     0     0     0
          mirror-0                                                DEGRADED     0     0     0
            wwn-0x5000c500536ffa79                                ONLINE       0     0     0
            replacing-1                                           DEGRADED     0     0     0
              8130638507939855275                                 UNAVAIL      0     0     0  was /dev/disk/by-id/wwn-0x5000c5006e3d9d3f-part1
              wwn-0x5000cca24cd7f690                              ONLINE       0     0     0
          mirror-1                                                ONLINE       0     0     0
            wwn-0x5000c5005226b37b                                ONLINE       0     0     0
            wwn-0x5000c500522766b5                                ONLINE       0     0     0
          mirror-3                                                ONLINE       0     0     0
            wwn-0x5000cca223ca0714                                ONLINE       0     0     0
            wwn-0x5000cca224cb6a3b                                ONLINE       0     0     0
        logs
          mirror-2                                                ONLINE       0     0     0
            nvme-SAMSUNG_MZVPW128HEGM-00000_S347NY0HB06043-part1  ONLINE       0     0     0
            nvme-SAMSUNG_MZVPW128HEGM-00000_S347NY0HB06165-part1  ONLINE       0     0     0
        cache
          nvme-eui.002538cb61020e02-part2                         ONLINE       0     0     0
          nvme-eui.002538cb61020e7c-part2                         ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        <0xb081>:<0x2>
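Side note: the replacing-1 entry under mirror-0 is what an in-flight zpool replace looks like; ZFS keeps the dead disk's GUID around until the resilver onto the new disk finishes. The command that started it would have been something like:

# zpool replace tank wwn-0x5000c5006e3d9d3f wwn-0x5000cca24cd7f690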
So at the bottom it has an unnamed file reference, and it's probably going to stick at 71.x% for the next two hours.
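If anyone wants to chase that reference down: the two hex values in <0xb081>:<0x2> are a dataset ID and an object ID, and zdb can usually map them back to names. A rough sketch (0xb081 is 45185 in decimal; substitute whatever dataset name zdb reports):

# zdb -d tank | grep 'ID 45185'    # find which dataset has ID 0xb081
# zdb -dddd tank/SOMEDATASET 2     # then inspect object 0x2 in that dataset

If the bad block only lived in the snapshot I just destroyed, the entry should age out on its own after a couple of clean scrubs.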
