.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd March 16, 2022
.Dt ZPOOL 8
.Os
.
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?V
.Nm
.Cm version
.Nm
.Cm subcommand
.Op Ar arguments
.
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Pp
For an overview of creating and managing ZFS storage pools see the
.Xr zpoolconcepts 7
manual page.
.
.Sh SUBCOMMANDS
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?\&
.Xc
Displays a help message.
.It Xo
.Nm
.Fl V , -version
.Xc
.It Xo
.Nm
.Cm version
.Xc
Displays the software version of the
.Nm
userland utility and the ZFS kernel module.
.El
.
.Ss Creation
.Bl -tag -width Ds
.It Xr zpool-create 8
Creates a new storage pool containing the virtual devices specified on the
command line.
.It Xr zpool-initialize 8
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
.El
.
.Ss Destruction
.Bl -tag -width Ds
.It Xr zpool-destroy 8
Destroys the given pool, freeing up any devices for other use.
.It Xr zpool-labelclear 8
Removes ZFS label information from the specified
.Ar device .
.El
.
.Ss Virtual Devices
.Bl -tag -width Ds
.It Xo
.Xr zpool-attach 8 Ns / Ns Xr zpool-detach 8
.Xc
Increases or decreases redundancy by
.Cm attach Ns ing or
.Cm detach Ns ing a device on an existing vdev (virtual device).
.It Xo
.Xr zpool-add 8 Ns / Ns Xr zpool-remove 8
.Xc
Adds the specified virtual devices to the given pool,
or removes the specified device from the pool.
.It Xr zpool-replace 8
Replaces an existing device (which may be faulted) with a new one.
.It Xr zpool-split 8
Creates a new pool by splitting all mirrors in an existing pool (which decreases its redundancy).
.El
.
.Ss Properties
Available pool properties are listed in the
.Xr zpoolprops 7
manual page.
.Bl -tag -width Ds
.It Xr zpool-list 8
Lists the given pools along with a health status and space usage.
.It Xo
.Xr zpool-get 8 Ns / Ns Xr zpool-set 8
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
.El
.
.Ss Monitoring
.Bl -tag -width Ds
.It Xr zpool-status 8
Displays the detailed health status for the given pools.
.It Xr zpool-iostat 8
Displays logical I/O statistics for the given pools/vdevs.
Physical I/O operations may be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare.
That manual page also describes the subclasses and event payloads
that can be generated.
.It Xr zpool-history 8
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.El
.
.Ss Maintenance
.Bl -tag -width Ds
.It Xr zpool-scrub 8
Begins a scrub or resumes a paused scrub.
.It Xr zpool-checkpoint 8
Checkpoints the current state of
.Ar pool ,
which can be later restored by
.Nm zpool Cm import Fl -rewind-to-checkpoint .
.It Xr zpool-trim 8
Initiates an immediate on-demand TRIM operation for all of the free space in a pool.
This operation informs the underlying storage devices of all blocks
in the pool which are no longer allocated and allows thinly provisioned
devices to reclaim the space.
.It Xr zpool-sync 8
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xr zpool-upgrade 8
Manage the on-disk format version of storage pools.
.It Xr zpool-wait 8
Waits until all background activity of the given types has ceased in the given
pool.
.El
.
.Ss Fault Resolution
.Bl -tag -width Ds
.It Xo
.Xr zpool-offline 8 Ns / Ns Xr zpool-online 8
.Xc
Takes the specified physical device offline or brings it online.
.It Xr zpool-resilver 8
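.Pp
For example, to review the command history of the pool
.Ar tank :
.Dl # Nm zpool Cm history Ar tank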
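.Pp
For example, to take a checkpoint of the pool
.Ar tank :
.Dl # Nm zpool Cm checkpoint Ar tank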
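.Pp
For example, to begin trimming all free space in the pool
.Ar tank :
.Dl # Nm zpool Cm trim Ar tank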
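.Pp
For example, to sync only the pool
.Ar tank :
.Dl # Nm zpool Cm sync Ar tank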
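.Pp
For example, to block until an in-progress scrub of the pool
.Ar tank
has finished:
.Dl # Nm zpool Cm wait Fl t Ar scrub tank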
Starts a resilver.
If a resilver is already running, it will be restarted from the beginning.
.It Xr zpool-reopen 8
Reopen all the vdevs associated with the pool.
.It Xr zpool-clear 8
Clears device errors in a pool.
.El
.
.Ss Import & Export
.Bl -tag -width Ds
.It Xr zpool-import 8
Make disks containing ZFS storage pools available for use on the system.
.It Xr zpool-export 8
Exports the given pools from the system.
.It Xr zpool-reguid 8
Generates a new unique identifier for the pool.
.El
.
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -compact -offset 4n -width "a"
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.
.Sh EXAMPLES
.\" Examples 1, 2, 3, 4, 11, 12 are shared with zpool-create.8.
.\" Examples 5, 13 are shared with zpool-add.8.
.\" Examples 6, 15 are shared with zpool-list.8.
.\" Examples 7 are shared with zpool-destroy.8.
.\" Examples 8 are shared with zpool-export.8.
.\" Examples 9 are shared with zpool-import.8.
.\" Examples 10 are shared with zpool-upgrade.8.
.\" Examples 14 are shared with zpool-remove.8.
.\" Examples 16 are shared with zpool-status.8.
.\" Examples 13, 16 are also shared with zpool-iostat.8.
.\" Make sure to update them omnidirectionally
.Ss Example 1 : No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks:
.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
.
.Ss Example 2 : No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
.
.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
The following command creates a non-redundant pool using two disk partitions:
.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
.
.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
The following command creates a non-redundant pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
.
.Ss Example 5 : No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Ar tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb
.
.Ss Example 6 : No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Ar zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.
.Ss Example 7 : No Destroying a ZFS Storage Pool
The following command destroys the pool
.Ar tank
and any datasets contained within:
.Dl # Nm zpool Cm destroy Fl f Ar tank
.
.Ss Example 8 : No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Ar tank
so that they can be relocated or later imported:
.Dl # Nm zpool Cm export Ar tank
.
.Ss Example 9 : No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Ar tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

.No # Nm zpool Cm import Ar tank
.Ed
.
.Ss Example 10 : No Upgrading All ZFS Storage Pools to the Current Version
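.Pp
For example, to clear the error counts for every device in the pool
.Ar tank :
.Dl # Nm zpool Cm clear Ar tank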
The following command upgrades all ZFS storage pools to the current version of
the software:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm upgrade Fl a
This system is currently running ZFS version 2.
.Ed
.
.Ss Example 11 : No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Dl # Nm zpool Cm replace Ar tank Pa sda sdd
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Dl # Nm zpool Cm remove Ar tank Pa sdc
.
.Ss Example 12 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way
mirrors and mirrored log devices:
.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
.
.Ss Example 13 : No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
.Dl # Nm zpool Cm iostat Fl v Ar pool 5
.
.Ss Example 14 : No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal -compact -offset Ds
  pool: tank
 state: ONLINE
 scrub: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             sda     ONLINE       0     0     0
             sdb     ONLINE       0     0     0
           mirror-1  ONLINE       0     0     0
             sdc     ONLINE       0     0     0
             sdd     ONLINE       0     0     0
         logs
           mirror-2  ONLINE       0     0     0
             sde     ONLINE       0     0     0
             sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Ar mirror-2 No is:
.Dl # Nm zpool Cm remove Ar tank mirror-2
.Pp
The command to remove the mirrored data
.Ar mirror-1 No is:
.Dl # Nm zpool Cm remove Ar tank mirror-1
.
.Ss Example 15 : No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Ar data .
This pool consists of a single raidz vdev where one of its devices
increased its capacity by 10 GiB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list Fl v Ar data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.
.Ss Example 16 : No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

.No # Nm zpool Cm iostat Fl vc Pa size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
.Ed
.
.Sh ENVIRONMENT VARIABLES
.Bl -tag -compact -width "ZPOOL_IMPORT_UDEV_TIMEOUT_MS"
.It Sy ZFS_ABORT
Cause
.Nm
to dump core on exit for the purposes of running
.Sy ::findleaks .
.It Sy ZFS_COLOR
Use ANSI color in
.Nm zpool Cm status
output.
.It Sy ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool import .
.It Sy ZPOOL_IMPORT_UDEV_TIMEOUT_MS
The maximum time in milliseconds that
.Nm zpool import
will wait for an expected device to be available.
.It Sy ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
If set, suppress warning about non-native vdev ashift in
.Nm zpool Cm status .
The value is not used, only the presence or absence of the variable matters.
.It Sy ZPOOL_VDEV_NAME_GUID
Cause
.Nm
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool Cm status Fl g
command line option.
.It Sy ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool Cm status Fl L
command line option.
.It Sy ZPOOL_VDEV_NAME_PATH
Cause
.Nm
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool Cm status Fl P
command line option.
.It Sy ZFS_VDEV_DEVID_OPT_OUT
Older OpenZFS implementations had issues when attempting to display pool
config vdev names if a
.Sy devid
NVP value is present in the pool's config.
.Pp
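.Pp
For example, to restrict the import search to stable device names:
.Dl # Sy ZPOOL_IMPORT_PATH Ns = Ns Pa /dev/disk/by-id Nm zpool Cm import Ar tank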
For example, a pool that originated on the illumos platform would have a
.Sy devid
value in the config and
.Nm zpool Cm status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding
them on
.Nm zpool Cm create
or
.Nm zpool Cm add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.Pp
.It Sy ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
Normally, only unprivileged users are allowed to run
.Fl c .
.It Sy ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.It Sy ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
.\" Shared with zfs.8
.It Sy ZFS_MODULE_TIMEOUT
Time, in seconds, to wait for
.Pa /dev/zfs
to appear.
Defaults to
.Sy 10 ,
max
.Sy 600 Pq 10 minutes .
If
.Pf < Sy 0 ,
wait forever; if
.Sy 0 ,
don't wait.
.El
.
.Sh INTERFACE STABILITY
.Sy Evolving
.
.Sh SEE ALSO
.Xr zfs 4 ,
.Xr zpool-features 7 ,
.Xr zpoolconcepts 7 ,
.Xr zpoolprops 7 ,
.Xr zed 8 ,
.Xr zfs 8 ,
.Xr zpool-add 8 ,
.Xr zpool-attach 8 ,
.Xr zpool-checkpoint 8 ,
.Xr zpool-clear 8 ,
.Xr zpool-create 8 ,
.Xr zpool-destroy 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-events 8 ,
.Xr zpool-export 8 ,
.Xr zpool-get 8 ,
.Xr zpool-history 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-iostat 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-list 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-online 8 ,
.Xr zpool-reguid 8 ,
.Xr zpool-remove 8 ,
.Xr zpool-reopen 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-scrub 8 ,
.Xr zpool-set 8 ,
.Xr zpool-split 8 ,
.Xr zpool-status 8 ,
.Xr zpool-sync 8 ,
.Xr zpool-trim 8 ,
.Xr zpool-upgrade 8 ,
.Xr zpool-wait 8