.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd March 16, 2022
.Dt ZPOOL 8
.Os
.
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?V
.Nm
.Cm version
.Nm
.Cm subcommand
.Op Ar arguments
.
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Pp
For an overview of creating and managing ZFS storage pools see the
.Xr zpoolconcepts 7
manual page.
.
.Sh SUBCOMMANDS
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?\&
.Xc
Displays a help message.
.It Xo
.Nm
.Fl V , -version
.Xc
.It Xo
.Nm
.Cm version
.Xc
Displays the software version of the
.Nm
userland utility and the ZFS kernel module.
.El
.
.Ss Creation
.Bl -tag -width Ds
.It Xr zpool-create 8
Creates a new storage pool containing the virtual devices specified on the
command line.
.It Xr zpool-initialize 8
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
.El
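.Pp
For example, a new pool can be created and its unallocated space initialized
in one sequence
.Pq the pool and device names here are illustrative :
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb
.No # Nm zpool Cm initialize Ar tank
.Ed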
.
.Ss Destruction
.Bl -tag -width Ds
.It Xr zpool-destroy 8
Destroys the given pool, freeing up any devices for other use.
.It Xr zpool-labelclear 8
Removes ZFS label information from the specified
.Ar device .
.El
.
.Ss Virtual Devices
.Bl -tag -width Ds
.It Xo
.Xr zpool-attach 8 Ns / Ns Xr zpool-detach 8
.Xc
Increases or decreases redundancy by
.Cm attach Ns ing or
.Cm detach Ns ing a device on an existing vdev (virtual device).
.It Xo
.Xr zpool-add 8 Ns / Ns Xr zpool-remove 8
.Xc
Adds the specified virtual devices to the given pool,
or removes the specified device from the pool.
.It Xr zpool-replace 8
Replaces an existing device (which may be faulted) with a new one.
.It Xr zpool-split 8
Creates a new pool by splitting all mirrors in an existing pool (which decreases
its redundancy).
.El
.
.Ss Properties
Available pool properties are listed in the
.Xr zpoolprops 7
manual page.
.Bl -tag -width Ds
.It Xr zpool-list 8
Lists the given pools along with a health status and space usage.
.It Xo
.Xr zpool-get 8 Ns / Ns Xr zpool-set 8
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
.El
.
.Ss Monitoring
.Bl -tag -width Ds
.It Xr zpool-status 8
Displays the detailed health status for the given pools.
.It Xr zpool-iostat 8
Displays logical I/O statistics for the given pools/vdevs.
Physical I/O operations may be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
daemon and used to automate administrative tasks such as replacing a failed
device with a hot spare.
That manual page also describes the subclasses and event payloads
that can be generated.
.It Xr zpool-history 8
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.El
.
.Ss Maintenance
.Bl -tag -width Ds
.It Xr zpool-scrub 8
Begins a scrub or resumes a paused scrub.
.It Xr zpool-checkpoint 8
Checkpoints the current state of
.Ar pool ,
which can be later restored by
.Nm zpool Cm import Fl -rewind-to-checkpoint .
.It Xr zpool-trim 8
Initiates an immediate on-demand TRIM operation for all of the free space in a
pool.
This operation informs the underlying storage devices of all blocks
in the pool which are no longer allocated and allows thinly provisioned
devices to reclaim the space.
.It Xr zpool-sync 8
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xr zpool-upgrade 8
Manages the on-disk format version of storage pools.
.It Xr zpool-wait 8
Waits until all background activity of the given types has ceased in the given
pool.
.El
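.Pp
As a sketch, a checkpoint taken before a risky change can later be rolled back
to by re-importing the pool with the rewind option
.Pq the pool name is illustrative :
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm checkpoint Ar tank
.No # Nm zpool Cm export Ar tank
.No # Nm zpool Cm import Fl -rewind-to-checkpoint Ar tank
.Ed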
.
.Ss Fault Resolution
.Bl -tag -width Ds
.It Xo
.Xr zpool-offline 8 Ns / Ns Xr zpool-online 8
.Xc
Takes the specified physical device offline or brings it online.
.It Xr zpool-resilver 8
Starts a resilver.
If an existing resilver is already running it will be restarted from the
beginning.
.It Xr zpool-reopen 8
Reopens all the vdevs associated with the pool.
.It Xr zpool-clear 8
Clears device errors in a pool.
.El
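.Pp
For instance, after a transient failure a device can be cycled offline and
back online, and the pool's error counters then reset
.Pq pool and device names are illustrative :
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm offline Ar tank Pa sda
.No # Nm zpool Cm online Ar tank Pa sda
.No # Nm zpool Cm clear Ar tank
.Ed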
.
.Ss Import & Export
.Bl -tag -width Ds
.It Xr zpool-import 8
Makes disks containing ZFS storage pools available for use on the system.
.It Xr zpool-export 8
Exports the given pools from the system.
.It Xr zpool-reguid 8
Generates a new unique identifier for the pool.
.El
.
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -compact -offset 4n -width "a"
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.
.Sh EXAMPLES
.\" Examples 1, 2, 3, 4, 11, 12 are shared with zpool-create.8.
.\" Examples 5, 13 are shared with zpool-add.8.
.\" Examples 6, 15 are shared with zpool-list.8.
.\" Examples 7 are shared with zpool-destroy.8.
.\" Examples 8 are shared with zpool-export.8.
.\" Examples 9 are shared with zpool-import.8.
.\" Examples 10 are shared with zpool-upgrade.8.
.\" Examples 14 are shared with zpool-remove.8.
.\" Examples 16 are shared with zpool-status.8.
.\" Examples 13, 16 are also shared with zpool-iostat.8.
.\" Make sure to update them omnidirectionally
.Ss Example 1 : No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks:
.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
.
.Ss Example 2 : No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
.
.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
The following command creates a non-redundant pool using two disk partitions:
.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
.
.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
The following command creates a non-redundant pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
.
.Ss Example 5 : No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Ar tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb
.
.Ss Example 6 : No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Ar zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.
.Ss Example 7 : No Destroying a ZFS Storage Pool
The following command destroys the pool
.Ar tank
and any datasets contained within:
.Dl # Nm zpool Cm destroy Fl f Ar tank
.
.Ss Example 8 : No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Ar tank
so that they can be relocated or later imported:
.Dl # Nm zpool Cm export Ar tank
.
.Ss Example 9 : No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Ar tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

.No # Nm zpool Cm import Ar tank
.Ed
.
.Ss Example 10 : No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm upgrade Fl a
This system is currently running ZFS version 2.
.Ed
.
.Ss Example 11 : No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Dl # Nm zpool Cm replace Ar tank Pa sda sdd
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Dl # Nm zpool Cm remove Ar tank Pa sdc
.
.Ss Example 12 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
.
.Ss Example 13 : No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
.Dl # Nm zpool Cm iostat Fl v Ar pool 5
.
.Ss Example 14 : No Removing a Mirrored Top-Level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal -compact -offset Ds
  pool: tank
 state: ONLINE
 scrub: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             sda     ONLINE       0     0     0
             sdb     ONLINE       0     0     0
           mirror-1  ONLINE       0     0     0
             sdc     ONLINE       0     0     0
             sdd     ONLINE       0     0     0
         logs
           mirror-2  ONLINE       0     0     0
             sde     ONLINE       0     0     0
             sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Ar mirror-2 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-2
.Pp
The command to remove the mirrored data
.Ar mirror-1 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-1
.
.Ss Example 15 : No Displaying Expanded Space on a Device
The following command displays the detailed information for the pool
.Ar data .
This pool is composed of a single raidz vdev in which one of the devices
has increased its capacity by 10 GiB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list Fl v Ar data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.
.Ss Example 16 : No Adding Output Columns
Additional columns can be added to the
.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

.No # Nm zpool Cm iostat Fl vc Pa size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
.Ed
.
.Sh ENVIRONMENT VARIABLES
.Bl -tag -compact -width "ZPOOL_IMPORT_UDEV_TIMEOUT_MS"
.It Sy ZFS_ABORT
Cause
.Nm
to dump core on exit for the purposes of running
.Sy ::findleaks .
.It Sy ZFS_COLOR
Use ANSI color in
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output.
.It Sy ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool Cm import .
.It Sy ZPOOL_IMPORT_UDEV_TIMEOUT_MS
The maximum time in milliseconds that
.Nm zpool Cm import
will wait for an expected device to be available.
.It Sy ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
If set, suppress warning about non-native vdev ashift in
.Nm zpool Cm status .
The value is not used, only the presence or absence of the variable matters.
.It Sy ZPOOL_VDEV_NAME_GUID
Cause
.Nm
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool Cm status Fl g
command line option.
.It Sy ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool Cm status Fl L
command line option.
.It Sy ZPOOL_VDEV_NAME_PATH
Cause
.Nm
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool Cm status Fl P
command line option.
.It Sy ZFS_VDEV_DEVID_OPT_OUT
Older OpenZFS implementations had issues when attempting to display pool
config vdev names if a
.Sy devid
NVP value is present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a
.Sy devid
value in the config and
.Nm zpool Cm status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding
them on
.Nm zpool Cm create
or
.Nm zpool Cm add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.It Sy ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
Normally, only unprivileged users are allowed to run
.Fl c .
.It Sy ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.It Sy ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
.\" Shared with zfs.8
.It Sy ZFS_MODULE_TIMEOUT
Time, in seconds, to wait for
.Pa /dev/zfs
to appear.
Defaults to
.Sy 10 ,
max
.Sy 600 Pq 10 minutes .
If
.Pf < Sy 0 ,
wait forever; if
.Sy 0 ,
don't wait.
.El
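.Pp
As an illustration, these variables can be set per-invocation from the shell;
the pool name below is hypothetical:
.Bd -literal -compact -offset Ds
.No # Sy ZPOOL_VDEV_NAME_PATH Ns =1 Nm zpool Cm status Ar tank
.No # Sy ZFS_COLOR Ns =1 Nm zpool Cm iostat Fl v Ar tank 5
.Ed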
.
.Sh INTERFACE STABILITY
.Sy Evolving
.
.Sh SEE ALSO
.Xr zfs 4 ,
.Xr zpool-features 7 ,
.Xr zpoolconcepts 7 ,
.Xr zpoolprops 7 ,
.Xr zed 8 ,
.Xr zfs 8 ,
.Xr zpool-add 8 ,
.Xr zpool-attach 8 ,
.Xr zpool-checkpoint 8 ,
.Xr zpool-clear 8 ,
.Xr zpool-create 8 ,
.Xr zpool-destroy 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-events 8 ,
.Xr zpool-export 8 ,
.Xr zpool-get 8 ,
.Xr zpool-history 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-iostat 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-list 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-online 8 ,
.Xr zpool-reguid 8 ,
.Xr zpool-remove 8 ,
.Xr zpool-reopen 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-scrub 8 ,
.Xr zpool-set 8 ,
.Xr zpool-split 8 ,
.Xr zpool-status 8 ,
.Xr zpool-sync 8 ,
.Xr zpool-trim 8 ,
.Xr zpool-upgrade 8 ,
.Xr zpool-wait 8