1.\" Copyright (c) 2015 The DragonFly Project.  All rights reserved.
2.\"
3.\" This code is derived from software contributed to The DragonFly Project
4.\" by Matthew Dillon <dillon@backplane.com>
5.\"
6.\" Redistribution and use in source and binary forms, with or without
7.\" modification, are permitted provided that the following conditions
8.\" are met:
9.\"
10.\" 1. Redistributions of source code must retain the above copyright
11.\"    notice, this list of conditions and the following disclaimer.
12.\" 2. Redistributions in binary form must reproduce the above copyright
13.\"    notice, this list of conditions and the following disclaimer in
14.\"    the documentation and/or other materials provided with the
15.\"    distribution.
16.\" 3. Neither the name of The DragonFly Project nor the names of its
17.\"    contributors may be used to endorse or promote products derived
18.\"    from this software without specific, prior written permission.
19.\"
20.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
21.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
22.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
23.\" FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE
24.\" COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
25.\" INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING,
26.\" BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
27.\" LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
28.\" AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
29.\" OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
30.\" OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
31.\" SUCH DAMAGE.
32.\"
.Dd March 26, 2015
.Dt HAMMER2 8
.Os
.Sh NAME
.Nm hammer2
.Nd hammer2 file system utility
.Sh SYNOPSIS
.Nm
.Fl h
.Nm
.Op Fl s Ar path
.Op Fl t Ar type
.Op Fl u Ar uuid
.Ar command
.Op Ar argument ...
.Sh DESCRIPTION
The
.Nm
utility provides miscellaneous support functions for a
HAMMER2 file system.
.Pp
The options are as follows:
.Bl -tag -width indent
.It Fl s Ar path
Specify the path to a mounted HAMMER2 filesystem.
At least one PFS on a HAMMER2 filesystem must be mounted for the system
to act on all PFSs managed by it.
Every HAMMER2 filesystem typically has a PFS called "LOCAL" for this purpose.
.It Fl t Ar type
Specify the type when creating, upgrading, or downgrading a PFS.
Supported types are MASTER, SLAVE, SOFT_MASTER, SOFT_SLAVE, CACHE, and DUMMY.
If not specified, the pfs-create directive will default to MASTER if no
uuid is specified, and to SLAVE if a uuid is specified.
.It Fl u Ar uuid
Specify the cluster uuid when creating a PFS.  If not specified, a unique,
random uuid will be generated.
Note that every PFS also has a unique pfs_id which is always generated
and cannot be overridden with an option.
The { pfs_clid, pfs_fsid } tuple uniquely identifies a component of a cluster.
.El
.Pp
.Nm
directives are as shown below.
Note that most directives require you to either be CD'd into a hammer2
filesystem, specify a path to a mounted hammer2 filesystem via the
.Fl s
option, or specify a path after the directive, depending on the directive.
All HAMMER2 filesystems have a PFS called "LOCAL" which is typically mounted
locally on the host in order to be able to issue commands for other PFSs
on the filesystem.
The mount also enables PFS configuration scanning for that filesystem.
.Bl -tag -width indent
.\" ==== connect ====
.It Cm connect Ar target
Add a cluster link entry to the volume header.
The volume header can support up to 255 link entries.
This feature is not currently used.
.\" ==== destroy ====
.It Cm destroy Ar path
Destroy the specified directory entry in a hammer2 filesystem.  This bypasses
all normal checks and will unconditionally destroy the directory entry.
The underlying inode is not checked and, if it does exist, its nlinks count
is not decremented.
This directive should only be used to destroy a corrupted directory entry
which no longer has a working inode.
.Pp
Note that this command may desynchronize the system namecache for the
specified entry.  If this happens, you may have to unmount and remount the
filesystem.
.\" ==== disconnect ====
.It Cm disconnect Ar target
Delete a cluster link entry from the volume header.
This feature is not currently used.
.\" ==== info ====
.It Cm info Op devpath
Access and print the status and super-root entries for all HAMMER2
partitions found in /dev/serno or the specified device path(s).
The partitions do not have to be mounted.
Note that only mounted partitions will be under active management.
This is accomplished by mounting at least one PFS within the partition.
Typically at least the @LOCAL PFS is mounted.
.\" ==== mountall ====
.It Cm mountall Op devpath
This directive mounts the @LOCAL PFS on all HAMMER2 partitions found
in /dev/serno, or the specified device path(s).
The partitions are mounted as /var/hammer2/LOCAL.<id>.
Mounts are executed in the background and this command will wait a
limited amount of time for the mounts to complete before returning.
.\" ==== status ====
.It Cm status Ar path...
Dump a list of all cluster link entries configured in the volume header.
.\" ==== hash ====
.It Cm hash Ar filename...
Compute and print the directory hash for any number of filenames.
.\" ==== pfs-list ====
.It Cm pfs-list Op path...
List all local PFSs available on a mounted HAMMER2 filesystem, their type,
and their current status.
You must mount at least one PFS in order to be able to access the whole list.
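.Pp
For example, assuming a hypothetical mount point of
.Pa /mnt
for one of the filesystem's PFSs:
.Bd -literal -offset indent
hammer2 -s /mnt pfs-list
.Ed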
133.\" ==== pfs-clid ====
134.It Cm pfs-clid Ar label
135Print the cluster id for a PFS specified by name.
136.\" ==== pfs-fsid ====
137.It Cm pfs-fsid Ar label
138Print the unique filesystem id for a PFS specified by name.
139.\" ==== pfs-create ====
140.It Cm pfs-create Ar label
141Create a local PFS on a mounted HAMMER2 filesystem.
142If no uuid is specified the pfs-type defaults to MASTER.
143If a uuid is specified via the
144.Fl u
145option the pfs-type defaults to SLAVE.
146Other types can be specified with the
147.Fl t
148option.
149.Pp
150If you wish to add a MASTER to an existing cluster, you must first add it as
151a SLAVE and then upgrade it to MASTER to properly synchronize it.
152.Pp
153The DUMMY pfs-type is used to tie network-accessible clusters into the local
154machine when no local storage is desired.
155This type should be used on minimal H2 partitions or entirely in ram for
156netboot-centric systems to provide a tie-in point for the mount command,
157or on more complex systems where you need to also access network-centric
158clusters.
159.Pp
160The CACHE or SLAVE pfs-type is typically used when the main store is on
161the network but local storage is desired to improve performance.
162SLAVE is also used when a backup is desired.
163.Pp
164Generally speaking, you can mount any PFS element of a cluster in order to
165access the cluster via the full cluster protocol.
166There are two exceptions.
167If you mount a SOFT_SLAVE or a SOFT_MASTER then soft quorum semantics are
168employed... the soft slave or soft master's current state will always be used
169and the quorum protocol will not be used.  The soft PFS will still be
170synchronized to masters in the background when available.
171Also, you can use
172.Sq mount -o local
173to mount ONLY a local HAMMER2 PFS and
174not run any network or quorum protocols for the mount.
175All such mounts except for a SOFT_MASTER mount will be read-only.
176Other than that, you will be mounting the whole cluster when you mount any
177PFS within the cluster.
178.Pp
179DUMMY - Create a PFS skeleton intended to be the mount point for a
180more complex cluster, probably one that is entirely network based.
181No data will be synchronized to this PFS so it is suitable for use
182in a network boot image or memory filesystem.
183This allows you to create placeholders for mount points on your local
184disk, SSD, or memory disk.
185.Pp
186CACHE - Create a PFS for caching portions of the cluster piecemeal.
187This is similar to a SLAVE but does not synchronize the entire contents of
188the cluster to the PFS.
189Elements found in the CACHE PFS which are validated against the cluster
190will be read, presumably a faster access than having to go to the cluster.
191Only local CACHEs will be updated.
192Network-accessible CACHE PFSs might be read but will not be written to.
193If you have a large hard-drive-based cluster you can set up localized
194SSD CACHE PFSs to improve performance.
195.Pp
196SLAVE - Create a PFS which maintains synchronization with and provides a
197read-only copy of the cluster.
198HAMMER2 will prioritize local SLAVEs for data retrieval after validating
199their transaction id against the cluster.
200The difference between a CACHE and a SLAVE is that the SLAVE is synchronized
201to a full copy of the cluster and thus can serve as a backup or be staged
202for use as a MASTER later on.
203.Pp
204SOFT_SLAVE - Create a PFS which maintains synchronization with and provides
205a read-only copy of the cluster.
206This is one of the special mount cases.  A SOFT_SLAVE will synchronize with
207the cluster when the cluster is available, but can still be accessed when
208the cluster is not available.
209.Pp
210MASTER - Create a PFS which will hold a master copy of the cluster.
211If you create several MASTER PFSs with the same cluster id you are
212effectively creating a multi-master cluster and causing a quorum and
213cache coherency protocol to be used to validate operations.
The total number of masters is stored in each PFS making up the cluster.
Filesystem operations will stall for normal mounts if a quorum cannot be
obtained to validate the operation.
MASTER nodes which go offline and return later will synchronize in the
background.
Note that when adding a MASTER to an existing cluster you must add the
new PFS as a SLAVE and then upgrade it to a MASTER.
.Pp
SOFT_MASTER - Create a PFS which maintains synchronization with and provides
a read-write copy of the cluster.
This is one of the special mount cases.  A SOFT_MASTER will synchronize with
the cluster when the cluster is available, but can still be read AND written
to even when the cluster is not available.
Modifications made to a SOFT_MASTER will be automatically flushed to the
cluster when it becomes accessible again, and vice-versa.
Manual intervention may be required if a conflict occurs during
synchronization.
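.Pp
As a sketch, assuming a hypothetical mount point of
.Pa /mnt
and hypothetical labels:
.Bd -literal -offset indent
# create a standalone MASTER with a new cluster uuid
hammer2 -s /mnt pfs-create MYDATA

# create a SLAVE joined to an existing cluster
# (obtain the cluster uuid with the pfs-clid directive)
hammer2 -s /mnt -u <cluster-uuid> pfs-create MYDATA.slave
.Ed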
231.\" ==== pfs-delete ====
232.It Cm pfs-delete Ar label
233Delete a local PFS on a mounted HAMMER2 filesystem.
234Deleting a PFS of type MASTER requires first downgrading it to a SLAVE (XXX).
235.\" ==== snapshot ====
236.It Cm snapshot Ar path Op label
237Create a snapshot of a directory.
This can only be used on a local PFS, and is only really useful if the PFS
contains a complete copy of what you desire to snapshot, which typically
means a local MASTER, SOFT_MASTER, SLAVE, or SOFT_SLAVE must be present.
Snapshots are created simply by flushing a PFS mount to disk and then copying
the directory inode to the PFS.
The topology is snapshotted without having to be copied or scanned.
Snapshots are effectively separate from the cluster they came from
and can be used as a starting point for a new cluster.
So unless you build a new cluster from the snapshot, it will stay local
to the machine it was made on.
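.Pp
For example, assuming a local MASTER is mounted at a hypothetical
.Pa /mnt ,
with the snapshot label being optional:
.Bd -literal -offset indent
hammer2 snapshot /mnt
hammer2 snapshot /mnt mydata.20150326
.Ed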
248.\" ==== service ====
249.It Cm service
250Start the
251.Nm
252service daemon.
253This daemon is also automatically started when you run
254.Xr mount_hammer2 8 .
255The hammer2 service daemon handles incoming TCP connections and maintains
256outgoing TCP connections.  It will interconnect available services on the
257machine (e.g. hammer2 mounts and xdisks) to the network.
258.\" ==== stat ====
259.It Cm stat Op path...
260Print the inode statistics, compression, and other meta-data associated
261with a list of paths.
262.\" ==== leaf ====
263.It Cm leaf
264XXX
265.\" ==== shell ====
266.It Cm shell
267Start a debug shell to the local hammer2 service daemon via the DMSG protocol.
268.\" ==== debugspan ====
269.It Cm debugspan
270(do not use)
271.\" ==== rsainit ====
272.It Cm rsainit
273Create the
274.Pa /etc/hammer2
275directory and initialize a public/private keypair in that directory for
276use by the network cluster protocols.
277.\" ==== show ====
278.It Cm show Ar devpath
279Dump the radix tree for the HAMMER2 filesystem by scanning a
280block device directly.  No mount is required.
281.\" ==== freemap ====
282Dump the freemap tree for the HAMMER2 filesystem by scanning a
283block device directly.  No mount is required.
284.It Cm freemap Ar devpath
285.\" ==== setcomp ====
286.It Cm setcomp Ar mode[:level] Op path...
287Set the compression mode as specified for any newly created elements at or
288under the path if not overridden by deeper elements.
289Available modes are none, autozero, lz4, or zlib.
290When zlib is used the compression level can be set.
The default is 6, which is a good trade-off between compression ratio and
CPU time.
.Pp
newfs_hammer2 will set the default compression to lz4, which prioritizes
speed over compression ratio.
Also note that HAMMER2 contains a heuristic and will not attempt to
compress every block if it detects a sufficient amount of incompressible
data.
.Pp
HAMMER2 compression is only effective when it can reduce the size of a dataset
(typically a 64KB block) by one or more powers of 2.  A 64K block which
only compresses to 40K will not yield any storage improvement.
.Pp
Generally speaking you do not want to set the compression mode to
.Sq none ,
as this will cause blocks of all-zeros to be written as all-zero blocks,
instead of holes.  The
.Sq autozero
compression mode detects blocks of all-zeros
and writes them as holes.  However, HAMMER2 will rewrite data in-place if
the compression mode is set to
.Sq none
and the check code is set to
.Sq disabled .
Formal snapshots will still snapshot such files.  However,
de-duplication will no longer function on the data blocks.
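.Pp
A brief sketch, using hypothetical paths under a mounted PFS:
.Bd -literal -offset indent
# fast general-purpose compression
hammer2 setcomp lz4 /mnt/data

# stronger compression for rarely-written archives
hammer2 setcomp zlib:9 /mnt/archives

# only detect all-zero blocks and write them as holes
hammer2 setcomp autozero /mnt/images
.Ed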
317.\" ==== setcheck ====
318.It Cm setcheck Ar check Op path...
319Set the check code as specified for any newly created elements at or under
320the path if not overridden by deeper elements.
321Available codes are default, disabled, crc32, xxhash64, or sha192.
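.Pp
For example, to select a stronger check code for a hypothetical directory tree:
.Bd -literal -offset indent
hammer2 setcheck sha192 /mnt/important
.Ed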
322.\" ==== clrcheck ====
323.It Cm clrcheck Op path...
324Clear the check code override for the specified paths.
325Overrides may still be present in deeper elements.
326.\" ==== setcrc32 ====
327.It Cm setcrc32 Op path...
328Set the check code to the ISCSI 32-bit CRC for any newly created elements
329at or under the path if not overridden by deeper elements.
330.\" ==== setxxhash64 ====
331.It Cm setxxhash64 Op path...
Set the check code to XXHASH64, a fast 64-bit hash, for any newly created
elements at or under the path if not overridden by deeper elements.
333.\" ==== setsha192 ====
334.It Cm setsha192 Op path...
335Set the check code to SHA192 for any newly created elements at or under
336the path if not overridden by deeper elements.
337.\" ==== bulkfree ====
338.It Cm bulkfree Op path...
339Run a bulkfree pass on a HAMMER2 mount.
You can specify any PFS for the mount; the bulkfree pass is run on the
entire partition.
Note that it takes two passes to actually free space.
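.Pp
For example, assuming any PFS of the filesystem is mounted at a hypothetical
.Pa /mnt :
.Bd -literal -offset indent
hammer2 bulkfree /mnt
.Ed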
.El
.Sh SYSCTLS
.Bl -tag -width indent
.It Va vfs.hammer2.dedup_enable (default on)
Enables live de-duplication.  Any recently read data that is on-media
(already synchronized to media) is tested against pending writes for
compatibility.  If a match is found, the write will reference the
existing on-media data instead of writing new data.
.It Va vfs.hammer2.always_compress (default off)
This disables the H2 compression heuristic and forces H2 to always
try to compress data blocks, even if they look incompressible.
Enabling this option reduces performance but has higher de-duplication
repeatability.
.It Va vfs.hammer2.cluster_data_read (default 4)
.It Va vfs.hammer2.cluster_meta_read (default 1)
Set the amount of read-ahead clustering to perform on data and meta-data
blocks.
.It Va vfs.hammer2.cluster_write (default 4)
Set the amount of write-behind clustering to perform in buffers.  Each
buffer represents 64KB.  The default is 4 and higher values typically do
not improve performance.  A value of 0 disables clustered writes.
This variable applies to the underlying media device, not to logical
file writes, so it should not interfere with temporary file optimization.
Generally speaking you want this enabled to generate smoothly pipelined
writes to the media.
.It Va vfs.hammer2.bulkfree_tps (default 5000)
Set bulkfree's maximum scan rate.  This is primarily intended to limit
I/O utilization on SSDs and cpu utilization when the meta-data is mostly
cached in memory.
.El
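.Pp
These variables can be inspected and adjusted at runtime with
.Xr sysctl 8 ;
the value shown below is illustrative only:
.Bd -literal -offset indent
sysctl vfs.hammer2.bulkfree_tps
sysctl vfs.hammer2.bulkfree_tps=2000
.Ed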
.Sh SETTING UP /etc/hammer2
The
.Sq rsainit
directive will create the
.Pa /etc/hammer2
directory with appropriate permissions and also generate a public key
pair in this directory for the machine.  These files will be
.Pa rsa.pub
and
.Pa rsa.prv
and needless to say, the private key shouldn't leave the host.
.Pp
The service daemon will also scan the
.Pa /etc/hammer2/autoconn
file which contains a list of hosts which it will automatically maintain
connections to in order to form your cluster.
The service daemon will automatically reconnect on any failure and will
also monitor the file for changes.
.Pp
When the service daemon receives a connection it expects to find a
public key for that connection in a file in
.Pa /etc/hammer2/remote/
called
.Pa <IPADDR>.pub .
You normally copy the
.Pa rsa.pub
key from the host in question to this file.
The IP address must match exactly or the connection will not be allowed.
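.Pp
A minimal sketch, using a hypothetical peer address of 10.0.0.25:
.Bd -literal -offset indent
hammer2 rsainit
scp root@10.0.0.25:/etc/hammer2/rsa.pub /etc/hammer2/remote/10.0.0.25.pub
.Ed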
.Pp
If you want to use an unencrypted connection you can create empty,
dummy files in the remote directory in the form
.Pa <IPADDR>.none .
We do not recommend using unencrypted connections.
.Sh CLUSTER SERVICES
Currently there are two services which use the cluster network infrastructure,
HAMMER2 mounts and XDISK.
Any HAMMER2 mount will make all PFSs for that filesystem available to the
cluster.
And if the XDISK kernel module is loaded, the hammer2 service daemon will make
your machine's block devices available to the cluster (you must load the
xdisk.ko kernel module before starting the hammer2 service).
They will show up as
.Pa /dev/xa*
and
.Pa /dev/serno/*
devices on the remote machines making up the cluster.
Remote block devices are just what they appear to be... direct access to a
block device on a remote machine.  If the link goes down remote accesses
will stall until it comes back up again, then automatically requeue any
pending I/O and resume as if nothing happened.
However, if the server hosting the physical disks crashes or is rebooted,
any remote opens to its devices will see a permanent I/O failure requiring a
close and open sequence to re-establish.
The latter is necessary because the server's drives might not have committed
the data before the crash, but had already acknowledged the transfer.
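.Pp
As a sketch, exporting local block devices requires the xdisk module to be
loaded before the service daemon is started:
.Bd -literal -offset indent
kldload xdisk
hammer2 service
.Ed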
.Pp
Data commits work exactly the same as they do for real block devices.
The originator must issue a BUF_CMD_FLUSH.
.Sh ADDING A NEW MASTER TO A CLUSTER
When you
.Xr newfs_hammer2 8
a HAMMER2 filesystem or use the
.Sq pfs-create
directive on one already mounted
to create a new PFS, with no special options, you wind up with a PFS
typed as a MASTER and a unique cluster uuid, but because there is only one
PFS for that cluster (for each PFS you create via pfs-create), it will
act just like a normal filesystem would act and does not require any special
protocols to operate.
.Pp
If you use the
.Sq pfs-create
directive along with the
.Fl u
option to specify a cluster uuid that already exists in the cluster,
you are adding a PFS to an existing cluster and this can trigger a whole
series of events in the background.
When you specify the
.Fl u
option in a
.Sq pfs-create ,
.Nm
will by default create a SLAVE PFS.
In fact, this is what must be created first even if you want to add a new
MASTER to your cluster.
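.Pp
A sketch of that first step, using a hypothetical mount point and labels:
.Bd -literal -offset indent
# find the cluster uuid of the existing cluster via any of its PFSs
hammer2 -s /mnt pfs-clid MYDATA

# create the new node as a SLAVE using that cluster uuid
hammer2 -s /mnt -u <cluster-uuid> pfs-create MYDATA.new
.Ed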
.Pp
The most common action a system admin will want to take is to upgrade or
downgrade a PFS.
A new MASTER can be added to the cluster by upgrading an existing SLAVE
to MASTER.
A MASTER can be removed from the cluster by downgrading it to a SLAVE.
Upgrades and downgrades will put nodes in the cluster in a transition state
until the operation is complete.
For downgrades the transition state is fleeting unless one or more other
masters have not acknowledged the change.
For upgrades a background synchronization process must complete before the
transition can be said to be complete, and the node remains (really) a SLAVE
until that transition is complete.
.Sh USE CASES FOR A SOFT_MASTER
The SOFT_MASTER PFS type is a special type which must be specifically
mounted by a machine.
It is a R/W mount which does not use the quorum protocol and is not
cache coherent with the cluster, but which synchronizes from the cluster
and allows modifying operations which will synchronize to the cluster.
The most common case is to use a SOFT_MASTER PFS on a laptop, allowing you
to work on your laptop when you are on the road and not connected to
your main servers, and for the laptop to synchronize when a connection is
available.
.Sh USE CASES FOR A SOFT_SLAVE
A SOFT_SLAVE PFS type is a special type which must be specifically mounted
by a machine.
It is a RO mount which does not use the quorum protocol and is not
cache coherent with the cluster.  It will receive synchronization from
the cluster when network connectivity is available but will not stall if
network connectivity is lost.
.Sh FSYNC FLUSH MODES
TODO.
.Sh RESTORING FROM A SNAPSHOT BACKUP
TODO.
.Sh PERFORMANCE TUNING
Because HAMMER2 implements compression, decompression, and dedup natively,
it always double-buffers file data.  This means that the file data is
cached via the device vnode (in compressed / dedupped form) and the same
data is also cached by the file vnode (in decompressed / non-dedupped form).
.Pp
While HAMMER2 will try to age the logical file buffers on its own, some
additional performance tuning may be necessary for optimal operation
whether swapcache is used or not.  Our recommendation is to reduce the
number of vnodes (and thus also the logical buffer cache behind the
vnodes) that the system caches via the
.Va kern.maxvnodes
sysctl.
.Pp
Too large a value will result in excessive double-caching and can cause
unnecessary disk read I/O.
We recommend a number between 25000 and 250000 vnodes, depending on your
use case.
Keep in mind that even though the vnode cache is smaller, this will make
room for a great deal more device-level buffer caching which can encompass
far more data and meta-data than the vnode-level caching.
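.Pp
For example, to cap the vnode cache at an illustrative value within the
recommended range:
.Bd -literal -offset indent
sysctl kern.maxvnodes=100000
.Ed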
.Sh ENVIRONMENT
TODO.
.Sh FILES
.Bl -tag -width ".It Pa <fs>/abc/defghi/<name>" -compact
.It Pa /etc/hammer2/
.It Pa /etc/hammer2/rsa.pub
.It Pa /etc/hammer2/rsa.prv
.It Pa /etc/hammer2/autoconn
.It Pa /etc/hammer2/remote/<IP>.pub
.It Pa /etc/hammer2/remote/<IP>.none
.El
.Sh EXIT STATUS
.Ex -std
.Sh SEE ALSO
.Xr mount_hammer2 8 ,
.Xr mount_null 8 ,
.Xr newfs_hammer2 8 ,
.Xr swapcache 8 ,
.Xr sysctl 8
.Sh HISTORY
The
.Nm
utility first appeared in
.Dx 4.1 .
.Sh AUTHORS
.An Matthew Dillon Aq Mt dillon@backplane.com