SCALE - ZFS recordsize artificially capped at 1M; should be tunable to 16M

Description

The ZFS maximum record size can be set as high as 16M:

https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-max-recordsize

I'm not sure why it's capped at (the tunable's default value of) 1M in SCALE, but I assert that it ought not be.
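For reference, raising the cap is a two-step process on a Linux OpenZFS system: adjust the module parameter, then set the dataset property. This is a hedged sketch; the dataset name `tank/video` is hypothetical, and the sysfs path is the standard Linux one.

```shell
# Raise the module-level cap to 16M (value is in bytes).
# Assumes the zfs kernel module is loaded.
echo 16777216 > /sys/module/zfs/parameters/zfs_max_recordsize

# A dataset can then opt in to a larger recordsize
# ('tank/video' is a hypothetical dataset name):
zfs set recordsize=8M tank/video

# Verify the property took effect:
zfs get recordsize tank/video
```

Note that `recordsize` only affects newly written blocks; existing data keeps the block size it was written with.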

Compression on some datasets (large video files, 3D models) benefits greatly from an increased recordsize.

If you've adjusted the tunable and set the recordsize higher than 1M on a dataset, TrueNAS SCALE's shares UI isn't happy (it reports the value as out of the range it expects).

Problem/Justification

None

Impact

None

Activity


Bug Clerk 
April 20, 2022 at 2:38 PM

Bug Clerk 
April 8, 2022 at 11:59 AM

Bug Clerk 
April 6, 2022 at 11:05 PM

Alexander Motin 
April 6, 2022 at 3:11 PM

I don't know who influenced whom, but this topic just arose upstream: https://github.com/openzfs/zfs/pull/13302 , so there is a chance for the default to be increased.

Wolf Noble 
April 6, 2022 at 3:59 AM

This is not a LIMITATION, but rather a DEFAULT that is a generally good-enough value.

Yes, I compared compression ratios and read speeds on datasets with many multi-gigabyte files; that's where a recordsize of 8M was beneficial.

No, I don't believe the default value should be set higher for $everyone.

That being said, I don't agree that because the upstream default is 1M, TrueNAS should hard-cap it at 1M either.

I think that if you want to get tricky, have a max-recordsize tunable someplace to adjust the kernel module value, and use that tunable to inform the UI of the maximum. That feels like the most flexible and DRY way of facilitating it, IMO (but what do I know).

Complete

Details

Assignee

Reporter

Labels

Time remaining

0m

Components

Priority


Created April 4, 2022 at 3:56 PM
Updated July 1, 2022 at 5:57 PM
Resolved April 26, 2022 at 3:24 PM