Exporting pool fails if two pools are imported
Description
Problem/Justification
None
Impact
None
Activity
Marco January 9, 2021 at 2:28 PM
That error looks different from the one I get. I've set up a test VM. I'll shoot you the creds.
Waqar January 9, 2021 at 1:43 AM
@Marco these are the steps I tried on a fresh VM (the pools were created before these steps, and the export command at the end is basically the same one the UI uses):
root@truenas[~]# iocage list
Setting up zpool [pool1] for iocage usage
If you wish to change please use "iocage activate"
Creating pool1/iocage
Creating pool1/iocage/download
Creating pool1/iocage/images
Creating pool1/iocage/jails
Creating pool1/iocage/log
Creating pool1/iocage/releases
Creating pool1/iocage/templates
Default configuration missing, creating one
+-----+------+-------+---------+-----+
| JID | NAME | STATE | RELEASE | IP4 |
+=====+======+=======+=========+=====+
+-----+------+-------+---------+-----+
root@truenas[~]#
root@truenas[~]# iocage fetch -r 12.2-RELEASE
^C
Aborted!
root@truenas[~]# iocage fetch -r 12.2-RELEASE -NU
Fetching: 12.2-RELEASE
Downloading: MANIFEST [####################] 100%
Downloading: base.txz [####################] 100%
Downloading: lib32.txz [####################] 100%
Downloading: src.txz [####################] 100%
Extracting: base.txz...
Extracting: lib32.txz...
Extracting: src.txz...
root@truenas[~]#
root@truenas[~]# zfs set readonly=yes pool1
cannot set property for 'pool1': 'readonly' must be one of 'on | off'
root@truenas[~]# zfs set readonly=on pool1
root@truenas[~]# zfs snapshot -r pool1@snap #zfs send -R pool1@snap | zfs receive -Fd pool2
usage:
snapshot [-r] [-o property=value] ... <filesystem|volume>@<snap> ...
For the property list, run: zfs set|get
For the delegated permission list, run: zfs allow|unallow
cannot receive: failed to read from stream
root@truenas[~]# zfs snapshot -r pool1@snap
root@truenas[~]# zfs send -R pool1@snap | zfs receive -Fd pool2
root@truenas[~]# midclt call jail.query
pool1
pool2
You have 2 pools marked active for iocage usage.
Run "iocage activate ZPOOL" on the preferred pool.
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 137, in call_method
result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1202, in _call
return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1106, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/jail_freebsd.py", line 700, in query
self.check_dataset_existence()
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/jail_freebsd.py", line 1008, in check_dataset_existence
IOCCheck(migrate=True, reset_cache=True)
File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_check.py", line 50, in __init__
self.pool = iocage_lib.ioc_json.IOCJson(
File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_json.py", line 1372, in __init__
super().__init__(location, checking_datasets, silent, callback)
File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_json.py", line 429, in __init__
self.pool, self.iocroot = self.get_pool_and_iocroot()
File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_json.py", line 553, in get_pool_and_iocroot
pool = get_pool()
File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_json.py", line 476, in get_pool
raise RuntimeError(f'{pools}\nYou have {len(matches)} pools'
RuntimeError: pool1
pool2
You have 2 pools marked active for iocage usage.
Run "iocage activate ZPOOL" on the preferred pool.
root@truenas[~]# midclt call -job pool.export 1
Status: Reconfiguring system dataset
Total Progress: [################________________________] 40.00%
[EFAULT] [Errno 30] Read-only file system: '/var/db/system/cores'
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 361, in run
await self.future
File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 397, in __run_body
rv = await self.method(*([self] + args))
File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 1515, in export
raise CallError(sysds_job.error)
middlewared.service_exception.CallError: [EFAULT] [Errno 30] Read-only file system: '/var/db/system/cores'
root@truenas[~]#
Please let me know if I got anything wrong. Otherwise, SSH access would be very nice when you have time to configure it, so I can move around and examine the state of the middleware.
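The RuntimeError in the transcript above is iocage refusing to pick a pool when more than one is flagged as active. A minimal Python sketch of that selection logic (a simplified, hypothetical stand-in for iocage_lib.ioc_json.get_pool; the real code reads ZFS user properties, and the org.freebsd.ioc:active property name is an assumption to verify against the iocage sources — the key point is that zfs send -R replicates such properties along with the datasets):

```python
def get_active_pool(pool_props):
    """Pick the single pool marked active for iocage.

    `pool_props` maps pool name -> dict of ZFS user properties
    (a hypothetical stand-in for `zfs get` output). Mirrors the
    check that raises RuntimeError in iocage_lib/ioc_json.py.
    """
    matches = [name for name, props in sorted(pool_props.items())
               if props.get('org.freebsd.ioc:active') == 'yes']
    if len(matches) > 1:
        raise RuntimeError(
            '\n'.join(matches) +
            f'\nYou have {len(matches)} pools marked active for iocage '
            'usage.\nRun "iocage activate ZPOOL" on the preferred pool.'
        )
    return matches[0] if matches else None


# After `zfs send -R pool1@snap | zfs receive -Fd pool2`, the replica
# carries the same active flag, so the check trips:
pools = {
    'pool1': {'org.freebsd.ioc:active': 'yes'},
    'pool2': {'org.freebsd.ioc:active': 'yes'},  # flag copied by replication
}
```

Under this reading, clearing the flag on the replica (or simply re-running iocage activate on the preferred pool, as the error message suggests) would leave a single match.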
Waqar January 8, 2021 at 10:42 PM
@Marco ^^^
Waqar January 8, 2021 at 10:34 PM
I think a TeamViewer session would be quicker; my email address is "waqar@ixsystems.com". Please email me the credentials, thank you.
Complete
Created December 20, 2020 at 2:37 PM
Updated July 1, 2022 at 4:59 PM
Resolved January 14, 2021 at 1:42 PM
The following error occurs when exporting one of two pools:
Error exporting/disconnecting pool.
cannot create 'tank/iocage': dataset already exists
Details:
Error: Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_check.py", line 97, in __check_datasets__
raise ZFSException(-1, 'Dataset does not exist')
iocage_lib.zfs.ZFSException: Dataset does not exist
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/iocage_lib/zfs.py", line 20, in run
cp.check_returncode()
File "/usr/local/lib/python3.8/subprocess.py", line 444, in check_returncode
raise CalledProcessError(self.returncode, self.args, self.stdout,
subprocess.CalledProcessError: Command '['zfs', 'create', '-o', 'compression=lz4', '-o', 'aclmode=passthrough', '-o', 'aclinherit=passthrough', 'tank/iocage']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 137, in call_method
result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
return await methodobj(*prepared_call.args)
File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 1634, in attachments
return await self.middleware.call('pool.dataset.attachments_with_path', pool['path'])
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
return await self._call(
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
return await methodobj(*prepared_call.args)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 3714, in attachments_with_path
for attachment in await delegate.query(path, True):
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/jail_freebsd.py", line 1687, in query
for j in await self.middleware.call('jail.query', [['OR', [('state', '=', 'up'), ('boot', '=', 1)]]]):
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
return await self._call(
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1206, in _call
return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
File "/usr/local/lib/python3.8/site-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/jail_freebsd.py", line 711, in query
self.check_dataset_existence()
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/jail_freebsd.py", line 1019, in check_dataset_existence
IOCCheck(migrate=True, reset_cache=True)
File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_check.py", line 58, in __init__
self.__check_datasets__()
File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_check.py", line 127, in __check_datasets__
ds.create({'properties': dataset_options})
File "/usr/local/lib/python3.8/site-packages/iocage_lib/dataset.py", line 42, in create
return create_dataset({'name': self.resource_name, **data})
File "/usr/local/lib/python3.8/site-packages/iocage_lib/zfs.py", line 155, in create_dataset
return run([
File "/usr/local/lib/python3.8/site-packages/iocage_lib/zfs.py", line 22, in run
raise ZFSException(cp.returncode, cp.stderr)
iocage_lib.zfs.ZFSException: cannot create 'tank/iocage': dataset already exists
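The exception chain above suggests a stale cache: IOCCheck first fails to find the iocage dataset (the initial ZFSException "Dataset does not exist"), falls back to creating it, and the zfs create then fails because tank/iocage really does exist on the replicated pool. A hypothetical Python sketch of that failure mode (the cached/live split and the function name are illustrative assumptions, not the actual iocage_lib internals):

```python
def ensure_dataset(cached, live, name):
    """Sketch of the suspected failure: the existence check consults a
    cached dataset list, misses `name`, and the fallback creation then
    collides with the dataset that really exists on disk — the same
    'dataset already exists' error as in the traceback."""
    if name in cached:
        return 'exists'          # normal path: the cache knows the dataset
    if name in live:
        # stale cache: `zfs create` sees the on-disk dataset and fails
        raise RuntimeError(f"cannot create '{name}': dataset already exists")
    live.add(name)
    return 'created'
```

With an empty cache and a pool already holding tank/iocage (as after zfs receive -Fd), the call raises exactly this error; the reset_cache=True argument visible in the traceback presumably exists to refresh that cached view.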
To reproduce:
1) Set the first pool to readonly
2) Create a recursive snapshot on that pool
3) Create a second pool
4) zfs send -R firstpool@snap | zfs receive -Fd secondpool
Then go to Storage → Pools → firstpool: Pool Operations → Export/Disconnect and the following error shows up:
Error exporting/disconnecting pool.
cannot create 'tank/iocage': dataset already exists
Exporting the second, newly created pool does not trigger the error. After the second pool is exported, the first one can be exported without error.
I have attached the debug output from the console, because “Save Debug” fails (https://ixsystems.atlassian.net/browse/NAS-108746#icft=NAS-108746) in this case.