Complete

Details
Assignee: Waqar
Reporter: Wolfgang Demeter
Labels:
Impact: High
Time remaining: 0m
Components:
Fix versions:
Priority:
Katalon Platform
Created August 31, 2021 at 7:06 AM
Updated May 9, 2023 at 8:26 PM
Resolved May 9, 2023 at 6:49 PM
Regarding this thread: https://www.truenas.com/community/threads/cannot-replace-disks-in-pool.91490/
I ran into an issue where the GUI could not replace a faulty disk, failing with the error "already in replacing/spare config; wait for completion or use 'zpool detach'". Sorry, I don't have the exact additional error message from the GUI anymore.
There was no other replace operation in progress, and the pool no longer has a spare disk (there was one until a year or so ago).
The pool itself dates back to FreeNAS 9 and was expanded with one additional VDEV (raidz2-4) under TrueNAS 12.0. Currently running TrueNAS 12.0-U2.
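As a sanity check, filtering the status output for any replacing or spare vdevs should confirm that nothing is actually in flight; the grep below is just a convenience of mine, not something the GUI or documentation prescribes:

zpool status -v ggmtank01 | grep -E 'replacing|spare'

This should come back empty when, as described above, no replace operation is running and no spare is attached. The full status output follows.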
zpool status -v ggmtank01
  pool: ggmtank01
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 8K in 07:32:07 with 0 errors on Sun Aug 29 07:32:37 2021
config:

        NAME                                              STATE     READ WRITE CKSUM
        ggmtank01                                         DEGRADED     0     0     0
          raidz2-0                                        ONLINE       0     0     0
            gptid/071d138c-9644-11e8-8380-000743400660    ONLINE       0     0     0
            gptid/07d35682-9644-11e8-8380-000743400660    ONLINE       0     0     0
            gptid/ef627048-743e-11eb-8d93-e4434bb19fe0    ONLINE       0     0     0
            gptid/167a10f2-7aa8-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
            gptid/e82432e5-8585-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
          raidz2-1                                        ONLINE       0     0     0
            gptid/73340a80-5449-11e9-b326-000743400660    ONLINE       0     0     0
            gptid/49297ee3-00c5-11ec-bdc3-e4434bb19fe0    ONLINE       0     0     0
            gptid/625036be-8586-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
            gptid/44286cae-7aa9-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
            gptid/3ffd3cdc-7440-11eb-8d93-e4434bb19fe0    ONLINE       0     0     0
          raidz2-2                                        ONLINE       0     0     0
            gptid/c071d681-743c-11eb-8d93-e4434bb19fe0    ONLINE       0     0     0
            gptid/0f702ce5-9644-11e8-8380-000743400660    ONLINE       0     0     0
            gptid/d1bdee26-78df-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
            gptid/f3bcbd88-7aa9-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
            gptid/0826e283-8587-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
          raidz2-4                                        DEGRADED     0     0     0
            gptid/8aaceda5-9ea7-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
            gptid/8b2fc90b-9ea7-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
            gptid/8c2ee1c1-9ea7-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
            gptid/8bf70e18-9ea7-11eb-bdc3-e4434bb19fe0    ONLINE       0     0     0
            gptid/8c8ddaa8-9ea7-11eb-bdc3-e4434bb19fe0    OFFLINE      0     0     0
        logs
          mirror-3                                        ONLINE       0     0     0
            gptid/123e1981-9644-11e8-8380-000743400660    ONLINE       0     0     0
            gptid/12b0bdb1-9644-11e8-8380-000743400660    ONLINE       0     0     0
        cache
          gptid/f4918c31-ff0f-11e9-b449-000743400660      ONLINE       0     0     0

errors: No known data errors
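To map the OFFLINE gptid back to a physical device before pulling it, glabel can be used on TrueNAS CORE / FreeBSD:

glabel status | grep 8c8ddaa8

The Components column of the matching line (something like da20p2; that device name is only a placeholder, not my actual disk) identifies the partition, and therefore the disk, behind that gptid.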
zpool reports the following ashift for the pool:
zpool get ashift ggmtank01
NAME       PROPERTY  VALUE  SOURCE
ggmtank01  ashift    0      default
But zdb shows the following ashift values for the pool, and the value differs for the newly added VDEV (children[4])! I am pretty sure I didn't change any default settings when adding the new VDEV, but that was a couple of months back.
zdb -C -U /data/zfs/zpool.cache
(truncated)
ggmtank01:
    vdev_tree:
        children[0]:
            type: 'raidz'
            nparity: 2
            ashift: 12
        children[1]:
            type: 'raidz'
            nparity: 2
            ashift: 12
        children[2]:
            type: 'raidz'
            nparity: 2
            ashift: 12
        children[3]:
            type: 'mirror'
            ashift: 12
        children[4]:
            type: 'raidz'
            nparity: 2
            ashift: 9
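For anyone checking their own pool, the per-vdev values can be pulled out of the zdb dump without scrolling through the whole configuration; this is the same zdb invocation as above, just with the pool name and a grep filter added:

zdb -C -U /data/zfs/zpool.cache ggmtank01 | grep -E 'type:|ashift:'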
Nonetheless, thanks to the thread mentioned above, I was able to replace the faulted disk with the following command, and the pool is currently resilvering.
zpool replace -o ashift=9 ggmtank01 gptid/<faulty-rawuuid> gptid/<new-rawuuid>
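Once the resilver finishes, the same zdb query from above should show the replacement leaf still sitting under the ashift=9 raidz2-4 vdev. Since ashift=9 assumes 512-byte sectors, it is probably also worth double-checking the replacement disk's reported sector size (the /dev/da20 device name below is only a placeholder):

zdb -C -U /data/zfs/zpool.cache ggmtank01 | grep ashift:
diskinfo -v /dev/da20 | grep sectorsize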
I am not sure if this is a bug or simply a bad GUI error message.