1.\" Copyright (c) 2015 The DragonFly Project. All rights reserved. 2.\" 3.\" This code is derived from software contributed to The DragonFly Project 4.\" by Matthew Dillon <dillon@backplane.com> 5.\" 6.\" Redistribution and use in source and binary forms, with or without 7.\" modification, are permitted provided that the following conditions 8.\" are met: 9.\" 10.\" 1. Redistributions of source code must retain the above copyright 11.\" notice, this list of conditions and the following disclaimer. 12.\" 2. Redistributions in binary form must reproduce the above copyright 13.\" notice, this list of conditions and the following disclaimer in 14.\" the documentation and/or other materials provided with the 15.\" distribution. 16.\" 3. Neither the name of The DragonFly Project nor the names of its 17.\" contributors may be used to endorse or promote products derived 18.\" from this software without specific, prior written permission. 19.\" 20.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 21.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 22.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS 23.\" FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE 24.\" COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, 25.\" INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING, 26.\" BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 27.\" LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED 28.\" AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 29.\" OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT 30.\" OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 31.\" SUCH DAMAGE. 32.\" 33.Dd March 26, 2015 34.Dt HAMMER2 8 35.Os 36.Sh NAME 37.Nm hammer2 38.Nd hammer2 file system utility 39.Sh SYNOPSIS 40.Nm 41.Fl h 42.Nm 43.Op Fl s Ar path 44.Op Fl t Ar type 45.Op Fl u Ar uuid 46.Ar command 47.Op Ar argument ... 48.Sh DESCRIPTION 49The 50.Nm 51utility provides miscellaneous support functions for a 52HAMMER2 file system. 53.Pp 54The options are as follows: 55.Bl -tag -width indent 56.It Fl s Ar path 57Specify the path to a mounted HAMMER2 filesystem. 58At least one PFS on a HAMMER2 filesystem must be mounted for the system 59to act on all PFSs managed by it. 60Every HAMMER2 filesystem typically has a PFS called "LOCAL" for this purpose. 61.It Fl t Ar type 62Specify the type when creating, upgrading, or downgrading a PFS. 63Supported types are MASTER, SLAVE, SOFT_MASTER, SOFT_SLAVE, CACHE, and DUMMY. 64If not specified the pfs-create directive will default to MASTER if no 65uuid is specified, and SLAVE if a uuid is specified. 66.It Fl u Ar uuid 67Specify the cluster uuid when creating a PFS. If not specified, a unique, 68random uuid will be generated. 69Note that every PFS also has a unique pfs_id which is always generated 70and cannot be overridden with an option. 71The { pfs_clid, pfs_fsid } tuple uniquely identifies a component of a cluster. 72.El 73.Pp 74.Nm 75directives are as shown below. 76Note that most directives require you to either be CD'd into a hammer2 77filesystem, specify a path to a mounted hammer2 filesystem via the 78.Fl s 79option, or specify a path after the directive. 80It depends on the directive. 81All hammer2 filesystem have a PFS called "LOCAL" which is typically mounted 82locally on the host in order to be able to issue commands for other PFSs 83on the filesystem. 
.Bl -tag -width indent
.\" ==== connect ====
.It Cm connect Ar target
Add a cluster link entry to the volume header.
The volume header can support up to 255 link entries.
This feature is not currently used.
.\" ==== disconnect ====
.It Cm disconnect Ar target
Delete a cluster link entry from the volume header.
This feature is not currently used.
.\" ==== info ====
.It Cm info Op devpath
Access and print the status and super-root entries for all HAMMER2
partitions found in /dev/serno or the specified device path(s).
The partitions do not have to be mounted.
Note that only mounted partitions will be under active management.
This is accomplished by mounting at least one PFS within the partition.
Typically at least the @LOCAL PFS is mounted.
.\" ==== mountall ====
.It Cm mountall Op devpath
This directive mounts the @LOCAL PFS on all HAMMER2 partitions found
in /dev/serno, or the specified device path(s).
The partitions are mounted as /var/hammer2/LOCAL.<id>.
Mounts are executed in the background and this command will wait a
limited amount of time for the mounts to complete before returning.
.\" ==== status ====
.It Cm status Ar path...
Dump a list of all cluster link entries configured in the volume header.
.\" ==== hash ====
.It Cm hash Ar filename...
Compute and print the directory hash for any number of filenames.
.\" ==== pfs-list ====
.It Cm pfs-list Op path...
List all local PFSs available on a mounted HAMMER2 filesystem, their type,
and their current status.
You must mount at least one PFS in order to be able to access the whole list.
.\" ==== pfs-clid ====
.It Cm pfs-clid Ar label
Print the cluster id for a PFS specified by name.
.\" ==== pfs-fsid ====
.It Cm pfs-fsid Ar label
Print the unique filesystem id for a PFS specified by name.
.\" ==== pfs-create ====
.It Cm pfs-create Ar label
Create a local PFS on a mounted HAMMER2 filesystem.
If no uuid is specified the pfs-type defaults to MASTER.
If a uuid is specified via the
.Fl u
option the pfs-type defaults to SLAVE.
Other types can be specified with the
.Fl t
option.
.Pp
If you wish to add a MASTER to an existing cluster, you must first add it as
a SLAVE and then upgrade it to MASTER to properly synchronize it.
.Pp
The DUMMY pfs-type is used to tie network-accessible clusters into the local
machine when no local storage is desired.
This type should be used on minimal H2 partitions or entirely in ram for
netboot-centric systems to provide a tie-in point for the mount command,
or on more complex systems where you need to also access network-centric
clusters.
.Pp
The CACHE or SLAVE pfs-type is typically used when the main store is on
the network but local storage is desired to improve performance.
SLAVE is also used when a backup is desired.
.Pp
Generally speaking, you can mount any PFS element of a cluster in order to
access the cluster via the full cluster protocol.
There are two exceptions.
If you mount a SOFT_SLAVE or a SOFT_MASTER then soft quorum semantics are
employed... the soft slave or soft master's current state will always be used
and the quorum protocol will not be used.
The soft PFS will still be synchronized to masters in the background when
available.
Also, you can use 'mount -o local' to mount ONLY a local HAMMER2 PFS and
not run any network or quorum protocols for the mount.
All such mounts except for a SOFT_MASTER mount will be read-only.
Other than that, you will be mounting the whole cluster when you mount any
PFS within the cluster.
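.Pp
As a concrete sketch, joining an existing cluster as a SLAVE might look
like the following (the mount path, label, and uuid are placeholders):
.Bd -literal -offset indent
# create a SLAVE PFS bound to an existing cluster uuid
hammer2 -s /var/hammer2/LOCAL -u <cluster-uuid> pfs-create DATA
.Ed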
.Pp
DUMMY - Create a PFS skeleton intended to be the mount point for a
more complex cluster, probably one that is entirely network based.
No data will be synchronized to this PFS so it is suitable for use
in a network boot image or memory filesystem.
This allows you to create placeholders for mount points on your local
disk, SSD, or memory disk.
.Pp
CACHE - Create a PFS for caching portions of the cluster piecemeal.
This is similar to a SLAVE but does not synchronize the entire contents of
the cluster to the PFS.
Elements found in the CACHE PFS which are validated against the cluster
will be read, presumably a faster access than having to go to the cluster.
Only local CACHEs will be updated.
Network-accessible CACHE PFSs might be read but will not be written to.
If you have a large hard-drive-based cluster you can set up localized
SSD CACHE PFSs to improve performance.
.Pp
SLAVE - Create a PFS which maintains synchronization with and provides a
read-only copy of the cluster.
HAMMER2 will prioritize local SLAVEs for data retrieval after validating
their transaction id against the cluster.
The difference between a CACHE and a SLAVE is that the SLAVE is synchronized
to a full copy of the cluster and thus can serve as a backup or be staged
for use as a MASTER later on.
.Pp
SOFT_SLAVE - Create a PFS which maintains synchronization with and provides
a read-only copy of the cluster.
This is one of the special mount cases.
A SOFT_SLAVE will synchronize with the cluster when the cluster is
available, but can still be accessed when the cluster is not available.
.Pp
MASTER - Create a PFS which will hold a master copy of the cluster.
If you create several MASTER PFSs with the same cluster id you are
effectively creating a multi-master cluster and causing a quorum and
cache coherency protocol to be used to validate operations.
The total number of masters is stored in each PFS making up the cluster.
Filesystem operations will stall for normal mounts if a quorum cannot be
obtained to validate the operation.
MASTER nodes which go offline and return later will synchronize in the
background.
Note that when adding a MASTER to an existing cluster you must add the
new PFS as a SLAVE and then upgrade it to a MASTER.
.Pp
SOFT_MASTER - Create a PFS which maintains synchronization with and provides
a read-write copy of the cluster.
This is one of the special mount cases.
A SOFT_MASTER will synchronize with the cluster when the cluster is
available, but can still be read AND written to even when the cluster is
not available.
Modifications made to a SOFT_MASTER will be automatically flushed to the
cluster when it becomes accessible again, and vice versa.
Manual intervention may be required if a conflict occurs during
synchronization.
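.Pp
For example, bootstrapping a two-node multi-master cluster might look like
the following sketch (labels, mount paths, and the uuid are placeholders;
the subsequent SLAVE-to-MASTER upgrade is performed separately as described
above):
.Bd -literal -offset indent
# on the first machine: create the initial MASTER and print its cluster id
hammer2 -s /var/hammer2/LOCAL pfs-create DATA
hammer2 -s /var/hammer2/LOCAL pfs-clid DATA

# on the second machine: join the cluster as a SLAVE first
hammer2 -s /var/hammer2/LOCAL -u <cluster-uuid> pfs-create DATA
.Ed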
.\" ==== pfs-delete ====
.It Cm pfs-delete Ar label
Delete a local PFS on a mounted HAMMER2 filesystem.
Deleting a PFS of type MASTER requires first downgrading it to a SLAVE (XXX).
.\" ==== snapshot ====
.It Cm snapshot Ar path Op label
Create a snapshot of a directory.
This can only be used on a local PFS, and is only really useful if the PFS
contains a complete copy of what you desire to snapshot, so that typically
means a local MASTER, SOFT_MASTER, SLAVE, or SOFT_SLAVE must be present.
Snapshots are created simply by flushing a PFS mount to disk and then copying
the directory inode to the PFS.
The topology is snapshotted without having to be copied or scanned.
Snapshots are effectively separate from the cluster they came from
and can be used as a starting point for a new cluster.
So unless you build a new cluster from the snapshot, it will stay local
to the machine it was made on.
.\" ==== service ====
.It Cm service
Start the
.Nm
service daemon.
This daemon is also automatically started when you run
.Xr mount_hammer2 8 .
The hammer2 service daemon handles incoming TCP connections and maintains
outgoing TCP connections.
It will interconnect available services on the machine (e.g. hammer2 mounts
and xdisks) to the network.
.\" ==== stat ====
.It Cm stat Op path...
Print the inode statistics, compression, and other meta-data associated
with a list of paths.
.\" ==== leaf ====
.It Cm leaf
XXX
.\" ==== shell ====
.It Cm shell
Start a debug shell to the local hammer2 service daemon via the DMSG protocol.
.\" ==== debugspan ====
.It Cm debugspan
(do not use)
.\" ==== rsainit ====
.It Cm rsainit
Create the
.Pa /etc/hammer2
directory and initialize a public/private keypair in that directory for
use by the network cluster protocols.
.\" ==== show ====
.It Cm show Ar devpath
Dump the radix tree for the HAMMER2 filesystem by scanning a
block device directly.
No mount is required.
.\" ==== freemap ====
.It Cm freemap Ar devpath
Dump the freemap tree for the HAMMER2 filesystem by scanning a
block device directly.
No mount is required.
.\" ==== setcomp ====
.It Cm setcomp Ar mode[:level] Op path...
Set the compression mode as specified for any newly created elements at or
under the path if not overridden by deeper elements.
Available modes are none, autozero, lz4, or zlib.
When zlib is used the compression level can be set.
The default will be 6, which is the best trade-off between compression
ratio and performance.
.Pp
newfs_hammer2 will set the default compression to lz4, which prioritizes
speed over compression ratio.
Also note that HAMMER2 contains a heuristic and will not attempt to
compress every block if it detects a sufficient amount of incompressible
data.
.Pp
HAMMER2 compression is only effective when it can reduce the size of a
dataset (typically a 64KB block) by one or more powers of 2.
A 64K block which only compresses to 40K will not yield any storage
improvement.
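.Pp
For example (the paths below are placeholders):
.Bd -literal -offset indent
# compress newly created files under a directory with zlib level 6
hammer2 setcomp zlib:6 /var/hammer2/LOCAL/www

# disable compression for a directory of already-compressed media
hammer2 setcomp none /var/hammer2/LOCAL/media
.Ed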
305.\" ==== setsha192 ==== 306.It Cm setsha192 Op path... 307Set the check code to SHA192 for any newly created elements at or under 308the path if not overridden by deeper elements. 309.\" ==== bulkfree ==== 310.It Cm bulkfree Op path... 311Run a bulkfree pass on a HAMMER2 mount. 312You can specify any PFS for the mount, the bulkfree pass is run on the 313entire partition. 314.El 315.Sh SETTING UP /etc/hammer2 316The 'rsainit' directive will create the 317.Pa /etc/hammer2 318directory with appropriate permissions and also generate a public key 319pair in this directory for the machine. These files will be 320.Pa rsa.pub 321and 322.Pa rsa.prv 323and needless to say, the private key shouldn't leave the host. 324.Pp 325The service daemon will also scan the 326.Pa /etc/hammer2/autoconn 327file which contains a list of hosts which it will automatically maintain 328connections to to form your cluster. 329The service daemon will automatically reconnect on any failure and will 330also monitor the file for changes. 331.Pp 332When the service daemon receives a connection it expects to find a 333public key for that connection in a file in 334.Pa /etc/hammer2/remote/ 335called 336.Pa <IPADDR>.pub . 337You normally copy the 338.Pa rsa.pub 339key from the host in question to this file. 340The IP address must match exactly or the connection will not be allowed. 341.Pp 342If you want to use an unencrypted connection you can create empty, 343dummy files in the remote directory in the form 344.Pa <IPADDR>.none . 345We do not recommend using unencrypted connections. 346.Sh CLUSTER SERVICES 347Currently there are two services which use the cluster network infrastructure, 348HAMMER2 mounts and XDISK. 349Any HAMMER2 mount will make all PFSs for that filesystem available to the 350cluster. 351And if the XDISK kernel module is loaded, the hammer2 service daemon will make 352your machine's block devices available to the cluster (you must load the 353xdisk.ko kernel module before starting the hammer2 service). 354They will show up as 355.Pa /dev/xa* 356and 357.Pa /dev/serno/* 358devices on the remote machines making up the cluster. 359Remote block devices are just what they appear to be... direct access to a 360block device on a remote machine. If the link goes down remote accesses 361will stall until it comes back up again, then automatically requeue any 362pending I/O and resume as if nothing happened. 363However, if the server hosting the physical disks crashes or is rebooted, 364any remote opens to its devices will see a permanent I/O failure requiring a 365close and open sequence to re-establish. 366The latter is necessary because the server's drives might not have committed 367the data before the crash, but had already acknowledged the transfer. 368.Pp 369Data commits work exactly the same as they do for real block devices. 370The originater must issue a BUF_CMD_FLUSH. 371.Sh ADDING A NEW MASTER TO A CLUSTER 372When you 373.Xr newfs_hammer2 8 374a HAMMER2 filesystem or use the 'pfs-create' directive on one already mounted 375to create a new PFS, with no special options, you wind up with a PFS 376typed as a MASTER and a unique cluster uuid, but because there is only one 377PFS for that cluster (for each PFS you create via pfs-create), it will 378act just like a normal filesystem would act and does not require any special 379protocols to operate. 
.Sh CLUSTER SERVICES
Currently there are two services which use the cluster network infrastructure,
HAMMER2 mounts and XDISK.
Any HAMMER2 mount will make all PFSs for that filesystem available to the
cluster.
And if the XDISK kernel module is loaded, the hammer2 service daemon will make
your machine's block devices available to the cluster (you must load the
xdisk.ko kernel module before starting the hammer2 service).
They will show up as
.Pa /dev/xa*
and
.Pa /dev/serno/*
devices on the remote machines making up the cluster.
Remote block devices are just what they appear to be... direct access to a
block device on a remote machine.
If the link goes down remote accesses will stall until it comes back up
again, then automatically requeue any pending I/O and resume as if nothing
happened.
However, if the server hosting the physical disks crashes or is rebooted,
any remote opens to its devices will see a permanent I/O failure requiring a
close and open sequence to re-establish.
The latter is necessary because the server's drives might not have committed
the data before the crash, but had already acknowledged the transfer.
.Pp
Data commits work exactly the same as they do for real block devices.
The originator must issue a BUF_CMD_FLUSH.
.Sh ADDING A NEW MASTER TO A CLUSTER
When you
.Xr newfs_hammer2 8
a HAMMER2 filesystem or use the 'pfs-create' directive on one already mounted
to create a new PFS, with no special options, you wind up with a PFS
typed as a MASTER and a unique cluster uuid, but because there is only one
PFS for that cluster (for each PFS you create via pfs-create), it will
act just like a normal filesystem would act and does not require any special
protocols to operate.
.Pp
If you use the 'pfs-create' directive along with the
.Fl u
option to specify a cluster uuid that already exists in the cluster,
you are adding a PFS to an existing cluster and this can trigger a whole
series of events in the background.
When you specify the
.Fl u
option in a 'pfs-create',
.Nm
will by default create a SLAVE PFS.
In fact, this is what must be created first even if you want to add a new
MASTER to your cluster.
.Pp
The most common action a system admin will want to take is to upgrade or
downgrade a PFS.
A new MASTER can be added to the cluster by upgrading an existing SLAVE
to MASTER.
A MASTER can be removed from the cluster by downgrading it to a SLAVE.
Upgrades and downgrades will put nodes in the cluster in a transition state
until the operation is complete.
For downgrades the transition state is fleeting unless one or more other
masters have not acknowledged the change.
For upgrades a background synchronization process must complete before the
transition can be said to be complete, and the node remains (really) a SLAVE
until that transition is complete.
.Sh USE CASES FOR A SOFT_MASTER
The SOFT_MASTER PFS type is a special type which must be specifically
mounted by a machine.
It is a R/W mount which does not use the quorum protocol and is not
cache coherent with the cluster, but which synchronizes from the cluster
and allows modifying operations which will synchronize to the cluster.
The most common case is to use a SOFT_MASTER PFS in a laptop, allowing you
to work on your laptop when you are on the road and not connected to
your main servers, and for the laptop to synchronize when a connection is
available.
.Sh USE CASES FOR A SOFT_SLAVE
A SOFT_SLAVE PFS type is a special type which must be specifically mounted
by a machine.
It is a RO mount which does not use the quorum protocol and is not
cache coherent with the cluster.
It will receive synchronization from the cluster when network connectivity
is available but will not stall if network connectivity is lost.
.Sh FSYNC FLUSH MODES
TODO.
.Sh RESTORING FROM A SNAPSHOT BACKUP
TODO.
.Sh PERFORMANCE TUNING
Because HAMMER2 implements compression, decompression, and dedup natively,
it always double-buffers file data.
This means that the file data is cached via the device vnode (in
compressed / dedupped form) and the same data is also cached by the file
vnode (in decompressed / non-dedupped form).
.Pp
While HAMMER2 will try to age the logical file buffers on its own, some
additional performance tuning may be necessary for optimal operation
whether swapcache is used or not.
Our recommendation is to reduce the number of vnodes (and thus also the
logical buffer cache behind the vnodes) that the system caches via the
.Va kern.maxvnodes
sysctl.
.Pp
Too large a value will result in excessive double-caching and can cause
unnecessary read disk I/O.
We recommend a number between 25000 and 250000 vnodes, depending on your
use case.
Keep in mind that even though the vnode cache is smaller, this will make
room for a great deal more device-level buffer caching, which can encompass
far more data and meta-data than the vnode-level caching.
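.Pp
For example, to cap the vnode cache within the recommended range (the
value shown is only an illustration; tune it for your workload):
.Bd -literal -offset indent
sysctl kern.maxvnodes=100000
.Ed
.Pp
Add the setting to
.Pa /etc/sysctl.conf
to make it persistent across reboots.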
.Sh ENVIRONMENT
TODO.
.Sh FILES
.Bl -tag -width ".It Pa <fs>/abc/defghi/<name>" -compact
.It Pa /etc/hammer2/
.It Pa /etc/hammer2/rsa.pub
.It Pa /etc/hammer2/rsa.prv
.It Pa /etc/hammer2/autoconn
.It Pa /etc/hammer2/remote/<IP>.pub
.It Pa /etc/hammer2/remote/<IP>.none
.El
.Sh EXIT STATUS
.Ex -std
.Sh SEE ALSO
.Xr mount_hammer2 8 ,
.Xr mount_null 8 ,
.Xr newfs_hammer2 8 ,
.Xr swapcache 8 ,
.Xr sysctl 8
.Sh HISTORY
The
.Nm
utility first appeared in
.Dx 4.1 .
.Sh AUTHORS
.An Matthew Dillon Aq Mt dillon@backplane.com