.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
.\" Copyright (c) 2023, Klara Inc.
.\"
.Dd April 18, 2023
.Dt ZPOOLPROPS 7
.Os
.
.Sh NAME
.Nm zpoolprops
.Nd properties of ZFS storage pools
.
.Sh DESCRIPTION
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
User properties have no effect on ZFS behavior.
Use them to annotate pools in a way that is meaningful in your environment.
For more information about user properties, see the
.Sx User Properties
section.
.Pp
The following are read-only properties:
.Bl -tag -width "unsupported@guid"
.It Sy allocated
Amount of storage used within the pool.
See
.Sy fragmentation
and
.Sy free
for more information.
.It Sy bcloneratio
The ratio of the total amount of storage that would be required to store all
the cloned blocks without cloning to the actual storage used.
The
.Sy bcloneratio
property is calculated as:
.Pp
.Sy ( ( bclonesaved + bcloneused ) * 100 ) / bcloneused
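As an illustration of the formula (the counter values below are hypothetical,
not taken from any real pool), the same calculation can be reproduced with
shell arithmetic:

```shell
# Hypothetical pool counters: 300 units saved by cloning, 100 in use.
bclonesaved=300
bcloneused=100
# ((300 + 100) * 100) / 100 = 400, i.e. a 4.00x cloning ratio.
ratio=$(( (bclonesaved + bcloneused) * 100 / bcloneused ))
echo "$ratio"
```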
.It Sy bclonesaved
The amount of additional storage that would be required if block cloning
was not used.
.It Sy bcloneused
The amount of storage used by cloned blocks.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
On whole-disk vdevs, this is the space beyond the end of the GPT –
typically occurring when a LUN is dynamically expanded
or a disk replaced with a larger one.
On partition vdevs, this is the space appended to the partition after it was
added to the pool – most likely by resizing it in-place.
The space can be claimed for the pool by bringing it online with
.Sy autoexpand=on
or using
.Nm zpool Cm online Fl e .
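For example, claiming that space might look like the following sketch
(the pool name tank and device name sda are placeholders; adjust them
for your system):

```shell
# Either enable automatic expansion for the whole pool ...
zpool set autoexpand=on tank
# ... or expand a single already-grown device in place:
zpool online -e tank sda
```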
.It Sy fragmentation
The amount of fragmentation in the pool.
As the amount of space
.Sy allocated
increases, it becomes more difficult to locate
.Sy free
space.
This may result in lower write performance compared to pools with more
unfragmented free space.
.It Sy free
The amount of free space available in the pool.
By contrast, the
.Xr zfs 8
.Sy available
property describes how much new data can be written to ZFS filesystems/volumes.
The zpool
.Sy free
property is not generally useful for this purpose, and can be substantially more
than the zfs
.Sy available
space.
This discrepancy is due to several factors, including raidz parity;
zfs reservation, quota, refreservation, and refquota properties; and space set
aside by
.Sy spa_slop_shift
(see
.Xr zfs 4
for more information).
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy guid
A unique identifier for the pool.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy leaked
Space not released while
.Sy freeing
due to corruption, now permanently leaked into the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time we load the pool (i.e. does
not persist across imports/exports) and never changes while the pool is loaded
(even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 7
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
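A typical use might look like the following sketch (the pool name tank
and the path /mnt are placeholders):

```shell
# Examine a foreign pool without trusting its recorded mount points;
# -R sets altroot and implies cachefile=none for this import.
zpool import -R /mnt tank
```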
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
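For instance (the pool name tank is a placeholder):

```shell
# Import a pool read-only, e.g. for inspection of a damaged pool:
zpool import -o readonly=on tank
```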
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Ar ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift ) .
Values from 9 to 16, inclusive, are valid; also, the
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space/performance trade-off.
For optimal performance, the pool sector size should be greater than
or equal to the sector size of the underlying disks.
The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift Ns = Ns Sy 12
(which is
.Sy 1<<12 No = Sy 4096 ) .
When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace).
Changing this value will not modify any existing
vdev, not even on disk replacement; however, it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the
same time could prevent loss of data.
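The power-of-two relationship between ashift and sector size can be checked
directly; the pool and device names in the comment are placeholders:

```shell
# ashift is a power-of-two exponent: 9 -> 512 B, 12 -> 4096 B, 13 -> 8192 B
echo $(( 1 << 12 ))
# A typical creation-time setting for 4 KiB-sector disks would be:
#   zpool create -o ashift=12 tank mirror sda sdb
```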
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the
.Pa /dev/disk/by-vdev
paths set up by
.Pa vdev_id.conf .
See the
.Xr vdev_id 8
manual page for more details.
Autoreplace and autoonline require the ZFS Event Daemon to be configured and
running.
See the
.Xr zed 8
manual page for more details.
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on ,
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed.
This allows block device vdevs which support
BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
supports hole-punching, to reclaim unused blocks.
The default value for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay allowing smaller ranges to be aggregated
into a few larger ones.
These can then be issued more efficiently to the storage.
TRIM on L2ARC devices is enabled by setting
.Sy l2arc_trim_ahead > 0 .
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
For lower-end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
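An on-demand TRIM cycle, as suggested above, might be run as follows
(a sketch; the pool name tank is a placeholder):

```shell
# Run a manual TRIM; -w waits for it to complete.
zpool trim -w tank
# TRIM progress and status can be inspected with:
zpool status -t tank
```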
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns Op / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the value
.Sy none
creates a temporary pool that is never cached, and the
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
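A clustering setup might use an alternate cache file, for example
(the pool name and path are illustrative):

```shell
# Cache this pool's configuration outside the default location:
zpool set cachefile=/var/cluster/zpool.cache tank
# Later, import every pool recorded in that file:
zpool import -c /var/cluster/zpool.cache -a
```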
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
Specifies that the pool maintain compatibility with specific feature sets.
When set to
.Sy off
(or unset) compatibility is disabled (all features may be enabled); when set to
.Sy legacy ,
no features may be enabled.
When set to a comma-separated list of filenames
(each filename may either be an absolute path, or relative to
.Pa /etc/zfs/compatibility.d
or
.Pa /usr/share/zfs/compatibility.d )
the lists of requested features are read from those files, separated by
whitespace and/or commas.
Only features present in all files may be enabled.
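The "present in all files" rule is a set intersection; the logic can be
sketched as follows (the file names and feature lists are made up for
illustration and are not real compatibility files):

```shell
# Two hypothetical compatibility files; features are separated by
# whitespace and/or commas:
printf 'async_destroy,lz4_compress\nempty_bpobj\n' > setA
printf 'lz4_compress empty_bpobj\n' > setB
# Normalize separators to newlines, sort, then intersect:
norm() { tr ' ,' '\n\n' < "$1" | sed '/^$/d' | sort -u; }
norm setA > setA.norm
norm setB > setB.norm
# Features that may be enabled are those common to both lists:
comm -12 setA.norm setB.norm
```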
.Pp
See
.Xr zpool-features 7 ,
.Xr zpool-create 8
and
.Xr zpool-upgrade 8
for more information on the operation of compatibility feature sets.
.It Sy dedupditto Ns = Ns Ar number
This property is deprecated and no longer has any effect.
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared with
.Nm zpool Cm clear .
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 7
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Nm zpool Cm create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs 4
manual page.
In order to enable this property each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl 4
for additional details.
The default value is
.Sy off .
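Enabling the check might look like the following sketch (the pool name
tank is a placeholder; run the first step once on each host that can see
the shared storage):

```shell
# Generate and persist a unique hostid for this host:
zgenhostid
# Then, on the host that currently owns the pool:
zpool set multihost=on tank
```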
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.
.Ss User Properties
In addition to the standard native properties, ZFS supports arbitrary user
properties.
User properties have no effect on ZFS behavior, but applications or
administrators can use them to annotate pools.
.Pp
User property names must contain a colon
.Pq Qq Sy \&:
character to distinguish them from native properties.
They may contain lowercase letters, numbers, and the following punctuation
characters: colon
.Pq Qq Sy \&: ,
dash
.Pq Qq Sy - ,
period
.Pq Qq Sy \&. ,
and underscore
.Pq Qq Sy _ .
The expected convention is that the property name is divided into two portions
such as
.Ar module : Ns Ar property ,
but this namespace is not enforced by ZFS.
User property names can be at most 256 characters, and cannot begin with a dash
.Pq Qq Sy - .
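Those naming rules can be expressed as a simple pattern check; the sketch
below is an approximation of the rules above, not the check ZFS itself
performs, and the property names are examples:

```shell
# Rough validity check: allowed characters only, no leading dash,
# must contain a colon, at most 256 characters.
valid_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9:._][a-z0-9:._-]*$' || return 1
  case $1 in *:*) [ "${#1}" -le 256 ] ;; *) return 1 ;; esac
}
valid_name 'com.example:owner' && echo valid
valid_name '-bad:name' || echo invalid
```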
.Pp
When making programmatic use of user properties, it is strongly suggested to use
a reversed DNS domain name for the
.Ar module
component of property names to reduce the chance that two
independently-developed packages use the same property name for different
purposes.
.Pp
The values of user properties are arbitrary strings and
are never validated.
All of the commands that operate on properties
.Po Nm zpool Cm list ,
.Nm zpool Cm get ,
.Nm zpool Cm set ,
and so forth
.Pc
can be used to manipulate both native properties and user properties.
Use
.Nm zpool Cm set Ar name Ns =
to clear a user property.
Property values are limited to 8192 bytes.
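A typical round trip might look like this (the pool name tank and the
property name are illustrative):

```shell
# Set, inspect, and then clear a user property:
zpool set com.example:location='rack 12' tank
zpool get com.example:location tank
zpool set com.example:location= tank
```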
487