1.\" Copyright (c) 2015-2019 The DragonFly Project. All rights reserved. 2.\" 3.\" This code is derived from software contributed to The DragonFly Project 4.\" by Matthew Dillon <dillon@backplane.com> 5.\" 6.\" Redistribution and use in source and binary forms, with or without 7.\" modification, are permitted provided that the following conditions 8.\" are met: 9.\" 10.\" 1. Redistributions of source code must retain the above copyright 11.\" notice, this list of conditions and the following disclaimer. 12.\" 2. Redistributions in binary form must reproduce the above copyright 13.\" notice, this list of conditions and the following disclaimer in 14.\" the documentation and/or other materials provided with the 15.\" distribution. 16.\" 3. Neither the name of The DragonFly Project nor the names of its 17.\" contributors may be used to endorse or promote products derived 18.\" from this software without specific, prior written permission. 19.\" 20.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 21.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 22.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS 23.\" FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE 24.\" COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, 25.\" INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING, 26.\" BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 27.\" LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED 28.\" AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 29.\" OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT 30.\" OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 31.\" SUCH DAMAGE. 32.\" 33.Dd September 29, 2019 34.Dt HAMMER2 8 35.Os 36.Sh NAME 37.Nm hammer2 38.Nd hammer2 file system utility 39.Sh SYNOPSIS 40.Nm 41.Fl h 42.Nm 43.Op Fl s Ar path 44.Op Fl t Ar type 45.Op Fl u Ar uuid 46.Op Fl m Ar mem 47.Ar command 48.Op Ar argument ... 49.Sh DESCRIPTION 50The 51.Nm 52utility provides miscellaneous support functions for a 53HAMMER2 file system. 54.Pp 55The options are as follows: 56.Bl -tag -width indent 57.It Fl s Ar path 58Specify the path to a mounted HAMMER2 filesystem. 59At least one PFS on a HAMMER2 filesystem must be mounted for the system 60to act on all PFSs managed by it. 61Every HAMMER2 filesystem typically has a PFS called "LOCAL" for this purpose. 62.It Fl t Ar type 63Specify the type when creating, upgrading, or downgrading a PFS. 64Supported types are MASTER, SLAVE, SOFT_MASTER, SOFT_SLAVE, CACHE, and DUMMY. 65If not specified the pfs-create directive will default to MASTER if no 66UUID is specified, and SLAVE if a UUID is specified. 67.It Fl u Ar uuid 68Specify the cluster UUID when creating a PFS. 69If not specified, a unique, random UUID will be generated. 70Note that every PFS also has a unique pfs_id which is always generated 71and cannot be overridden with an option. 72The { pfs_clid, pfs_fsid } tuple uniquely identifies a component of a cluster. 73.It Fl m Ar mem 74Specify how much tracking memory to use for certain directives. 75At the moment, this option is only applicable to the 76.Cm bulkfree 77directive, allowing it to operate in fewer passes when given more memory. 78A nominal value for a 4TB drive with a ton of stuff on it would be around 79a gigabyte '-m 1g'. 80.El 81.Pp 82.Nm 83directives are as shown below. 
.Pp
.Nm
directives are as shown below.
Note that most directives require you to either be CD'd into a hammer2
filesystem, specify a path to a mounted hammer2 filesystem via the
.Fl s
option, or specify a path after the directive.
It depends on the directive.
All hammer2 filesystems have a PFS called "LOCAL" which is typically mounted
locally on the host in order to be able to issue commands for other PFSs
on the filesystem.
The mount also enables PFS configuration scanning for that filesystem.
.Bl -tag -width indent
.\" ==== cleanup ====
.It Cm cleanup Op path
Perform manual cleanup passes on paths or all mounted partitions.
.\" ==== connect ====
.It Cm connect Ar target
Add a cluster link entry to the volume header.
The volume header can support up to 255 link entries.
This feature is not currently used.
.\" ==== destroy ====
.It Cm destroy Ar path...
Destroy the specified directory entry in a hammer2 filesystem.
This bypasses
all normal checks and will unconditionally destroy the directory entry.
The underlying inode is not checked and, if it does exist, its nlinks count
is not decremented.
This directive should only be used to destroy a corrupted directory entry
which no longer has a working inode.
.Pp
Note that this command may desynchronize the system namecache for the
specified entry.
If this happens, you may have to unmount and remount the filesystem.
.\" ==== destroy-inum ====
.It Cm destroy-inum Ar path...
Destroy the specified inode in a hammer2 filesystem.
.\" ==== disconnect ====
.It Cm disconnect Ar target
Delete a cluster link entry from the volume header.
This feature is not currently used.
.\" ==== emergency-mode-enable ====
.It Cm emergency-mode-enable Ar target
Flag emergency operations mode in the filesystem.
This mode may be used
as a last resort to delete files and directories from a full filesystem.
Inode creation, file writes, and certain meta-data cleanups are disallowed
while emergency mode is active.
File and directory removal and mode/attr setting are still allowed.
This mode is extremely dangerous and should only be used as a last resort.
.Pp
This mode allows the filesystem to modify blocks in-place when it is unable
to allocate a copy.
Thus it is possible to chflags and remove files and
directories even when the filesystem is completely full.
However, there is a price.
This mode of operation WILL LIKELY CORRUPT ANY SNAPSHOTS related
to this filesystem.
The filesystem will report this condition if it encounters it,
but if you are forced to use this mode to fix a filesystem-full condition
your snapshots can get a bit dicey.
It is usually safest to delete any related snapshots when using this mode.
.Pp
You can detect whether related snapshots have been corrupted by running
a bulkfree pass and checking the console output for reported CRC errors.
If no errors are reported, your snapshots are fine.
If errors are reported,
you should delete related snapshots until bulkfree reports no further errors.
.Pp
The emergency mode will also make meta-data updates unsafe due to the lack of
copy-on-write, causing potential harm if the system unexpectedly panics or
loses power.
GREAT CARE MUST BE TAKEN WHILE THIS MODE IS ACTIVE.
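.Pp
As a rough sketch, an emergency recovery session might look like the
following (the mount point and file tree are hypothetical):
.Bd -literal -offset indent
hammer2 emergency-mode-enable /mnt
sync
rm -rf /mnt/some/large/tree
hammer2 bulkfree /mnt
hammer2 emergency-mode-disable /mnt
.Ed
.Pp
The recommended procedure, in full, is as follows: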
.Bl -enum
.It
Determine that you are unable to recover space with normal file and directory
removal commands due to
.Er ENOSPC
errors being returned by 'rm', or through the
removal of snapshots (if any).
The 'bulkfree' directive must be issued to
scan the filesystem and free up the actual space, then check with 'df'.
Continue if you still have insufficient space and are unable to remove items
normally.
.It
If you need any related snapshots, this is a good time to copy them elsewhere.
.It
Idle or kill any processes trying to use the filesystem.
.It
Issue the emergency-mode-enable directive on the filesystem.
Once enabled, run 'sync' to update any inodes which are still dirty
because earlier flushes could not complete.
Please remember that this
directive is a LAST RESORT, is dangerous, and will likely corrupt any
other snapshots you have based on the filesystem you are removing files
from.
.It
Remove file trees as necessary with 'rm -rf' to free space, being cognizant
of any warnings issued by the kernel on the console (via 'dmesg') while
doing so.
.It
Issue the 'bulkfree' directive to actually free the space and check that
sufficient space has been freed with 'df'.
.It
If bulkfree reports CHECK errors, or if you have snapshots and insufficient
space has been freed, you will need to delete snapshots.
Re-run bulkfree and delete snapshots until no errors are reported.
.It
Issue the emergency-mode-disable directive when done.
It might also be a
good idea to reboot after using this mode, but theoretically you should not
have to.
.It
Restore services using the filesystem.
.El
.\" ==== emergency-mode-disable ====
.It Cm emergency-mode-disable Ar target
Turn off the emergency operations mode on a filesystem, restoring normal
operation.
.\" ==== info ====
.It Cm info Op devpath...
Access and print the status and super-root entries for all HAMMER2
partitions found in /dev/serno or the specified device path(s).
The partitions do not have to be mounted.
Note that only mounted partitions will be under active management.
This is accomplished by mounting at least one PFS within the partition.
Typically at least the @LOCAL PFS is mounted.
.\" ==== mountall ====
.It Cm mountall Op devpath...
This directive mounts the @LOCAL PFS on all HAMMER2 partitions found
in /dev/serno, or the specified device path(s).
The partitions are mounted as /var/hammer2/LOCAL.<id>.
Mounts are executed in the background and this command will wait a
limited amount of time for the mounts to complete before returning.
.\" ==== status ====
.It Cm status Op path...
Dump a list of all cluster link entries configured in the volume header.
.\" ==== hash ====
.It Cm hash Op filename...
Compute and print the directory hash for any number of filenames.
.\" ==== dhash ====
.It Cm dhash Op filename...
Compute and print the data hash for the long directory entry for any number
of filenames.
.\" ==== pfs-list ====
.It Cm pfs-list Op path...
List all local PFSs available on a mounted HAMMER2 filesystem, their type,
and their current status.
You must mount at least one PFS in order to be able to access the whole list.
.\" ==== pfs-clid ====
.It Cm pfs-clid Ar label
Print the cluster id for a PFS specified by name.
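.Pp
For example (the PFS label and mount point are hypothetical):
.Bd -literal -offset indent
hammer2 -s /mnt pfs-clid BACKUP
.Ed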
231.\" ==== pfs-fsid ==== 232.It Cm pfs-fsid Ar label 233Print the unique filesystem id for a PFS specified by name. 234.\" ==== pfs-create ==== 235.It Cm pfs-create Ar label 236Create a local PFS on a mounted HAMMER2 filesystem. 237If no UUID is specified the pfs-type defaults to MASTER. 238If a UUID is specified via the 239.Fl u 240option the pfs-type defaults to SLAVE. 241Other types can be specified with the 242.Fl t 243option. 244.Pp 245If you wish to add a MASTER to an existing cluster, you must first add it as 246a SLAVE and then upgrade it to MASTER to properly synchronize it. 247.Pp 248The DUMMY pfs-type is used to tie network-accessible clusters into the local 249machine when no local storage is desired. 250This type should be used on minimal H2 partitions or entirely in ram for 251netboot-centric systems to provide a tie-in point for the mount command, 252or on more complex systems where you need to also access network-centric 253clusters. 254.Pp 255The CACHE or SLAVE pfs-type is typically used when the main store is on 256the network but local storage is desired to improve performance. 257SLAVE is also used when a backup is desired. 258.Pp 259Generally speaking, you can mount any PFS element of a cluster in order to 260access the cluster via the full cluster protocol. 261There are two exceptions. 262If you mount a SOFT_SLAVE or a SOFT_MASTER then soft quorum semantics are 263employed... the soft slave or soft master's current state will always be used 264and the quorum protocol will not be used. 265The soft PFS will still be 266synchronized to masters in the background when available. 267Also, you can use 268.Sq mount -o local 269to mount ONLY a local HAMMER2 PFS and 270not run any network or quorum protocols for the mount. 271All such mounts except for a SOFT_MASTER mount will be read-only. 272Other than that, you will be mounting the whole cluster when you mount any 273PFS within the cluster. 274.Pp 275DUMMY - Create a PFS skeleton intended to be the mount point for a 276more complex cluster, probably one that is entirely network based. 277No data will be synchronized to this PFS so it is suitable for use 278in a network boot image or memory filesystem. 279This allows you to create placeholders for mount points on your local 280disk, SSD, or memory disk. 281.Pp 282CACHE - Create a PFS for caching portions of the cluster piecemeal. 283This is similar to a SLAVE but does not synchronize the entire contents of 284the cluster to the PFS. 285Elements found in the CACHE PFS which are validated against the cluster 286will be read, presumably a faster access than having to go to the cluster. 287Only local CACHEs will be updated. 288Network-accessible CACHE PFSs might be read but will not be written to. 289If you have a large hard-drive-based cluster you can set up localized 290SSD CACHE PFSs to improve performance. 291.Pp 292SLAVE - Create a PFS which maintains synchronization with and provides a 293read-only copy of the cluster. 294HAMMER2 will prioritize local SLAVEs for data retrieval after validating 295their transaction id against the cluster. 296The difference between a CACHE and a SLAVE is that the SLAVE is synchronized 297to a full copy of the cluster and thus can serve as a backup or be staged 298for use as a MASTER later on. 299.Pp 300SOFT_SLAVE - Create a PFS which maintains synchronization with and provides 301a read-only copy of the cluster. 302This is one of the special mount cases. 
.Pp
SOFT_SLAVE - Create a PFS which maintains synchronization with and provides
a read-only copy of the cluster.
This is one of the special mount cases.
A SOFT_SLAVE will synchronize with
the cluster when the cluster is available, but can still be accessed when
the cluster is not available.
.Pp
MASTER - Create a PFS which will hold a master copy of the cluster.
If you create several MASTER PFSs with the same cluster id you are
effectively creating a multi-master cluster and causing a quorum and
cache coherency protocol to be used to validate operations.
The total number of masters is stored in each of the PFSs making up the
cluster.
Filesystem operations will stall for normal mounts if a quorum cannot be
obtained to validate the operation.
MASTER nodes which go offline and return later will synchronize in the
background.
Note that when adding a MASTER to an existing cluster you must add the
new PFS as a SLAVE and then upgrade it to a MASTER.
.Pp
SOFT_MASTER - Create a PFS which maintains synchronization with and provides
a read-write copy of the cluster.
This is one of the special mount cases.
A SOFT_MASTER will synchronize with
the cluster when the cluster is available, but can still be read AND written
to even when the cluster is not available.
Modifications made to a SOFT_MASTER will be automatically flushed to the
cluster when it becomes accessible again, and vice versa.
Manual intervention may be required if a conflict occurs during
synchronization.
.\" ==== pfs-delete ====
.It Cm pfs-delete Ar label
Delete a local PFS on a mounted HAMMER2 filesystem.
Deleting a PFS of type MASTER requires first downgrading it to a SLAVE (XXX).
.\" ==== snapshot ====
.It Cm snapshot Ar path Op label
Create a snapshot of a directory.
This can only be used on a local PFS, and is only really useful if the PFS
contains a complete copy of what you desire to snapshot, which typically
means a local MASTER, SOFT_MASTER, SLAVE, or SOFT_SLAVE must be present.
Snapshots are created simply by flushing a PFS mount to disk and then copying
the directory inode to the PFS.
The topology is snapshotted without having to be copied or scanned.
Snapshots are effectively separate from the cluster they came from
and can be used as a starting point for a new cluster.
So unless you build a new cluster from the snapshot, it will stay local
to the machine it was made on.
.\" ==== snapshot-debug ====
.It Cm snapshot-debug Ar path Op label
Snapshot without filesystem sync.
.\" ==== service ====
.It Cm service
Start the
.Nm
service daemon.
This daemon is also automatically started when you run
.Xr mount_hammer2 8 .
The hammer2 service daemon handles incoming TCP connections and maintains
outgoing TCP connections.
It will interconnect available services on the
machine (e.g. hammer2 mounts and xdisks) to the network.
.\" ==== stat ====
.It Cm stat Op path...
Print the inode statistics, compression, and other meta-data associated
with a list of paths.
.\" ==== leaf ====
.It Cm leaf
XXX
.\" ==== shell ====
.It Cm shell Op host
Start a debug shell to the local hammer2 service daemon via the DMSG protocol.
.\" ==== debugspan ====
.It Cm debugspan Ar target
(do not use)
.\" ==== rsainit ====
.It Cm rsainit Op path
Create the
.Pa /etc/hammer2
directory and initialize a public/private keypair in that directory for
use by the network cluster protocols.
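.Pp
For example, running the directive with no path argument:
.Bd -literal -offset indent
hammer2 rsainit
.Ed
.Pp
This creates
.Pa /etc/hammer2/rsa.pub
and
.Pa /etc/hammer2/rsa.prv ;
see the
.Sx SETTING UP /etc/hammer2
section below.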
379.\" ==== show ==== 380.It Cm show Ar devpath 381Dump the radix tree for the HAMMER2 filesystem by scanning a 382block device directly. 383No mount is required. 384.\" ==== freemap ==== 385.It Cm freemap Ar devpath 386Dump the freemap tree for the HAMMER2 filesystem by scanning a 387block device directly. 388No mount is required. 389.\" ==== volhdr ==== 390.It Cm volhdr Ar devpath 391Dump the volume header for the HAMMER2 filesystem by scanning a 392block device directly. 393No mount is required. 394.\" ==== setcomp ==== 395.It Cm setcomp Ar mode[:level] Ar path... 396Set the compression mode as specified for any newly created elements at or 397under the path if not overridden by deeper elements. 398Available modes are none, autozero, lz4, or zlib. 399When zlib is used the compression level can be set. 400The default will be 6 which is the best trade-off between performance and 401time. 402.Pp 403newfs_hammer2 will set the default compression to lz4 which prioritizes 404speed over performance. 405Also note that HAMMER2 contains a heuristic and will not attempt to 406compress every block if it detects a sufficient amount of uncompressable 407data. 408.Pp 409Hammer2 compression is only effective when it can reduce the size of dataset 410(typically a 64KB block) by one or more powers of 2. A 64K block which 411only compresses to 40K will not yield any storage improvement. 412.Pp 413Generally speaking you do not want to set the compression mode to 414.Sq none , 415as this will cause blocks of all-zeros to be written as all-zero blocks, 416instead of holes. 417The 418.Sq autozero 419compression mode detects blocks of all-zeros 420and writes them as holes. 421However, HAMMER2 will rewrite data in-place if the compression mode is set to 422.Sq none 423and the check code is set to 424.Sq disabled . 425Formal snapshots will still snapshot such files. 426However, de-duplication will no longer function on the data blocks. 427.\" ==== setcheck ==== 428.It Cm setcheck Ar check Ar path... 429Set the check code as specified for any newly created elements at or under 430the path if not overridden by deeper elements. 431Available codes are default, disabled, crc32, xxhash64, or sha192. 432.\" ==== clrcheck ==== 433.It Cm clrcheck Op path... 434Clear the check code override for the specified paths. 435Overrides may still be present in deeper elements. 436.\" ==== setcrc32 ==== 437.It Cm setcrc32 Op path... 438Set the check code to the ISCSI 32-bit CRC for any newly created elements 439at or under the path if not overridden by deeper elements. 440.\" ==== setxxhash64 ==== 441.It Cm setxxhash64 Op path... 442Set the check code to XXHASH64, a fast 64-bit hash 443.\" ==== setsha192 ==== 444.It Cm setsha192 Op path... 445Set the check code to SHA192 for any newly created elements at or under 446the path if not overridden by deeper elements. 447.\" ==== bulkfree ==== 448.It Cm bulkfree Ar path 449Run a bulkfree pass on a HAMMER2 mount. 450You can specify any PFS for the mount, the bulkfree pass is run on the 451entire partition. 452Note that it takes two passes to actually free space. 453By default this directive will use up to 1/16 physical memory to track 454the freemap. 455The amount of memory used may be overridden with the 456.Op Fl m Ar mem 457option. 458.\" ==== printinode ==== 459.It Cm printinode Ar path 460Dump inode. 461.\" ==== dumpchain ==== 462.It Cm dumpchain Op path Op chnflags 463Dump in-memory chain topology. 
.Sh SYSCTLS
.Bl -tag -width indent
.It Va vfs.hammer2.dedup_enable (default on)
Enables live de-duplication.
Any recently read data that is on-media
(already synchronized to media) is tested against pending writes for
compatibility.
If a match is found, the write will reference the
existing on-media data instead of writing new data.
.It Va vfs.hammer2.always_compress (default off)
This disables the H2 compression heuristic and forces H2 to always
try to compress data blocks, even if they look uncompressible.
Enabling this option reduces performance but has higher de-duplication
repeatability.
.It Va vfs.hammer2.cluster_data_read (default 4)
.It Va vfs.hammer2.cluster_meta_read (default 1)
Set the amount of read-ahead clustering to perform on data and meta-data
blocks.
.It Va vfs.hammer2.cluster_write (default 4)
Set the amount of write-behind clustering to perform in buffers.
Each buffer represents 64KB.
The default is 4 and higher values typically do not improve performance.
A value of 0 disables clustered writes.
This variable applies to the underlying media device, not to logical
file writes, so it should not interfere with temporary file optimization.
Generally speaking you want this enabled to generate smoothly pipelined
writes to the media.
.It Va vfs.hammer2.bulkfree_tps (default 5000)
Set bulkfree's maximum scan rate.
This is primarily intended to limit
I/O utilization on SSDs and CPU utilization when the meta-data is mostly
cached in memory.
.El
.Sh SETTING UP /etc/hammer2
The
.Sq rsainit
directive will create the
.Pa /etc/hammer2
directory with appropriate permissions and also generate a public key
pair in this directory for the machine.
These files will be
.Pa rsa.pub
and
.Pa rsa.prv
and needless to say, the private key shouldn't leave the host.
.Pp
The service daemon will also scan the
.Pa /etc/hammer2/autoconn
file, which contains a list of hosts to which the daemon will automatically
maintain connections in order to form your cluster.
The service daemon will automatically reconnect on any failure and will
also monitor the file for changes.
.Pp
When the service daemon receives a connection it expects to find a
public key for that connection in a file in
.Pa /etc/hammer2/remote/
called
.Pa <IPADDR>.pub .
You normally copy the
.Pa rsa.pub
key from the host in question to this file.
The IP address must match exactly or the connection will not be allowed.
.Pp
If you want to use an unencrypted connection you can create empty,
dummy files in the remote directory in the form
.Pa <IPADDR>.none .
We do not recommend using unencrypted connections.
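.Pp
For example, to authorize a hypothetical peer at 10.0.0.2 (the address is
illustrative), copy its public key into place:
.Bd -literal -offset indent
scp root@10.0.0.2:/etc/hammer2/rsa.pub /etc/hammer2/remote/10.0.0.2.pub
.Ed
.Pp
Alternatively, to permit an unencrypted connection from it:
.Bd -literal -offset indent
touch /etc/hammer2/remote/10.0.0.2.none
.Ed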
.Sh CLUSTER SERVICES
Currently there are two services which use the cluster network infrastructure,
HAMMER2 mounts and XDISK.
Any HAMMER2 mount will make all PFSs for that filesystem available to the
cluster.
And if the XDISK kernel module is loaded, the hammer2 service daemon will make
your machine's block devices available to the cluster (you must load the
xdisk.ko kernel module before starting the hammer2 service).
They will show up as
.Pa /dev/xa*
and
.Pa /dev/serno/*
devices on the remote machines making up the cluster.
Remote block devices are just what they appear to be... direct access to a
block device on a remote machine.
If the link goes down remote accesses
will stall until it comes back up again, then automatically requeue any
pending I/O and resume as if nothing happened.
However, if the server hosting the physical disks crashes or is rebooted,
any remote opens to its devices will see a permanent I/O failure requiring a
close and open sequence to re-establish.
The latter is necessary because the server's drives might not have committed
the data before the crash, but had already acknowledged the transfer.
.Pp
Data commits work exactly the same as they do for real block devices.
The originator must issue a BUF_CMD_FLUSH.
.Sh ADDING A NEW MASTER TO A CLUSTER
When you
.Xr newfs_hammer2 8
a HAMMER2 filesystem or use the
.Sq pfs-create
directive on one already mounted
to create a new PFS, with no special options, you wind up with a PFS
typed as a MASTER and a unique cluster UUID, but because there is only one
PFS for that cluster (for each PFS you create via pfs-create), it will
act just like a normal filesystem would act and does not require any special
protocols to operate.
.Pp
If you use the
.Sq pfs-create
directive along with the
.Fl u
option to specify a cluster UUID that already exists in the cluster,
you are adding a PFS to an existing cluster and this can trigger a whole
series of events in the background.
When you specify the
.Fl u
option in a
.Sq pfs-create ,
.Nm
will by default create a SLAVE PFS.
In fact, this is what must be created first even if you want to add a new
MASTER to your cluster.
.Pp
The most common action a system admin will want to take is to upgrade or
downgrade a PFS.
A new MASTER can be added to the cluster by upgrading an existing SLAVE
to MASTER.
A MASTER can be removed from the cluster by downgrading it to a SLAVE.
Upgrades and downgrades will put nodes in the cluster in a transition state
until the operation is complete.
For downgrades the transition state is fleeting unless one or more other
masters have not acknowledged the change.
For upgrades a background synchronization process must complete before the
transition can be said to be complete, and the node remains (really) a SLAVE
until that transition is complete.
.Sh USE CASES FOR A SOFT_MASTER
The SOFT_MASTER PFS type is a special type which must be specifically
mounted by a machine.
It is a R/W mount which does not use the quorum protocol and is not
cache coherent with the cluster, but which synchronizes from the cluster
and allows modifying operations which will synchronize to the cluster.
The most common case is to use a SOFT_MASTER PFS on a laptop, allowing you
to work on your laptop when you are on the road and not connected to
your main servers, and for the laptop to synchronize when a connection is
available.
.Sh USE CASES FOR A SOFT_SLAVE
A SOFT_SLAVE PFS type is a special type which must be specifically mounted
by a machine.
It is a RO mount which does not use the quorum protocol and is not
cache coherent with the cluster.
It will receive synchronization from
the cluster when network connectivity is available but will not stall if
network connectivity is lost.
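.Pp
As a sketch, a SOFT_SLAVE like the one described above might be mounted
roughly as follows (the device path, label, and mount point are
hypothetical; see
.Xr mount_hammer2 8
for the exact syntax):
.Bd -literal -offset indent
mount_hammer2 /dev/serno/<SERIAL>.s1a@SOFTSLAVE /mnt/soft
.Ed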
.Sh FSYNC FLUSH MODES
TODO.
.Sh RESTORING FROM A SNAPSHOT BACKUP
TODO.
.Sh PERFORMANCE TUNING
Because HAMMER2 implements compression, decompression, and dedup natively,
it always double-buffers file data.
This means that the file data is
cached via the device vnode (in compressed / dedupped form) and the same
data is also cached by the file vnode (in decompressed / non-dedupped form).
.Pp
While HAMMER2 will try to age the logical file buffers on its own, some
additional performance tuning may be necessary for optimal operation
whether swapcache is used or not.
Our recommendation is to reduce the
number of vnodes (and thus also the logical buffer cache behind the
vnodes) that the system caches via the
.Va kern.maxvnodes
sysctl.
.Pp
Too large a value will result in excessive double-caching and can cause
unnecessary read disk I/O.
We recommend a number between 25000 and 250000 vnodes, depending on your
use case.
Keep in mind that even though the vnode cache is smaller, this will make
room for a great deal more device-level buffer caching which can encompass
far more data and meta-data than the vnode-level caching.
.Sh ENVIRONMENT
TODO.
.Sh FILES
.Bl -tag -width ".It Pa <fs>/abc/defghi/<name>" -compact
.It Pa /etc/hammer2/
.It Pa /etc/hammer2/rsa.pub
.It Pa /etc/hammer2/rsa.prv
.It Pa /etc/hammer2/autoconn
.It Pa /etc/hammer2/remote/<IP>.pub
.It Pa /etc/hammer2/remote/<IP>.none
.El
.Sh EXIT STATUS
.Ex -std
.Sh SEE ALSO
.Xr mount_hammer2 8 ,
.Xr mount_null 8 ,
.Xr newfs_hammer2 8 ,
.Xr swapcache 8 ,
.Xr sysctl 8
.Sh HISTORY
The
.Nm
utility first appeared in
.Dx 4.1 .
.Sh AUTHORS
.An Matthew Dillon Aq Mt dillon@backplane.com