1.\" Copyright (c) 2001 Matthew Dillon.  Terms and conditions are those of
2.\" the BSD Copyright as specified in the file "/usr/src/COPYRIGHT" in
3.\" the source tree.
4.\"
5.Dd August 24, 2018
6.Dt TUNING 7
7.Os
8.Sh NAME
9.Nm tuning
10.Nd performance tuning under DragonFly
11.Sh SYSTEM SETUP
Modern
.Dx
systems typically have just three partitions on the main drive.
In order, a UFS
.Pa /boot ,
.Pa swap ,
and a HAMMER or HAMMER2
.Pa root .
In prior years the installer created separate PFSs for half a dozen
directories, but now we just put (almost) everything in the root.
The installer will separate stuff that doesn't need to be backed up into
a /build subdirectory and create null-mounts for things like /usr/obj, but it
no longer creates separate PFSs for these.
If desired, you can make /build its own mount to separate out the
components of the filesystem which do not need to be persistent.
.Pp
Generally speaking the
.Pa /boot
partition should be 1GB in size.  This is the minimum recommended
size, giving you room for backup kernels and alternative boot schemes.
.Dx
always installs debug-enabled kernels and modules and these can take
up quite a bit of disk space (but will not take up any extra ram).
.Pp
In the old days we recommended that swap be sized to at least 2x main
memory.  These days swap is often used for other activities, including
.Xr tmpfs 5
and
.Xr swapcache 8 .
We recommend that swap be sized to the larger of 2x main memory or
1GB if you have a fairly small disk and 16GB or more if you have a
modestly endowed system.
If you have a modest SSD + large HDD combination, we recommend
a large dedicated swap partition on the SSD.  For example, if
you have a 128GB SSD and 2TB or more of HDD storage, dedicating
upwards of 64GB of the SSD to swap and using
.Xr swapcache 8
will significantly improve your HDD's performance.
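.Pp
As a sketch, a dedicated swap partition is simply listed as swap space in
.Xr fstab 5 ;
the device name below is illustrative only and must be replaced with the
partition actually holding your SSD swap:
.Bd -literal -offset indent
# /etc/fstab (example entry only)
# Device	Mountpoint	FStype	Options	Dump	Pass#
/dev/da1s1b	none		swap	sw	0	0
.Ed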
.Pp
In an all-SSD or mostly-SSD system,
.Xr swapcache 8
is not normally used and should be left disabled (the default), but you
may still want to have a large swap partition to support
.Xr tmpfs 5
use.
Our synth/poudriere build machines run with at least 200GB of
swap and use tmpfs for all the builder jails.  50-100 GB
is swapped out at the peak of the build.  As a result, actual
system storage bandwidth is minimized and performance increased.
.Pp
If you are on a minimally configured machine you may, of course,
configure far less swap or no swap at all but we recommend at least
some swap.
The kernel's VM paging algorithms are tuned to perform best when there is
swap space configured.
Configuring too little swap can lead to inefficiencies in the VM
page scanning code as well as create issues later on if you add
more memory to your machine, so don't be shy about it.
Swap is a good idea even if you don't think you will ever need it as it
allows the
machine to page out completely unused data and idle programs (like getty),
maximizing the ram available for your activities.
.Pp
If you intend to use the
.Xr swapcache 8
facility with a SSD + HDD combination we recommend configuring as much
swap space as you can on the SSD.
However, keep in mind that each 1GByte of swapcache requires around
1MByte of ram, so don't scale your swap beyond the equivalent ram
that you reasonably want to eat to support it.
.Pp
Finally, on larger systems with multiple drives, if the use
of SSD swap is not in the cards or if it is and you need higher-than-normal
swapcache bandwidth, you can configure swap on up to four drives and
the kernel will interleave the storage.
The swap partitions on the drives should be approximately the same size.
The kernel can handle arbitrary sizes but
internal data structures scale to 4 times the largest swap partition.
Keeping
the swap partitions near the same size will allow the kernel to optimally
stripe swap space across the N disks.
Do not worry about overdoing it a
little, swap space is the saving grace of
.Ux
and even if you do not normally use much swap, having some allows the system
to move idle program data out of ram and allows the machine to more easily
handle abnormal runaway programs.
However, keep in mind that any sort of swap space failure can lock the
system up.
Most machines are configured with only one or two swap partitions.
.Pp
Most
.Dx
systems have a single HAMMER or HAMMER2 root.
PFSs can be used to administratively separate domains for backup purposes
but tend to be a hassle otherwise so if you don't need the administrative
separation you don't really need to use multiple PFSs.
All the PFSs share the same allocation layer so there is no longer a need
to size each individual mount.
Instead you should review the
.Xr hammer 8
manual page and use the 'hammer viconfig' facility to adjust snapshot
retention and other parameters.
By default
HAMMER1 keeps 60 days worth of snapshots, and HAMMER2 keeps none.
By convention
.Pa /build
is not backed up and contains only directory trees that do not need
to be backed up or snapshotted.
.Pp
If a very large work area is desired it is often beneficial to
configure it as its own filesystem in a completely independent partition
so allocation blowouts (if they occur) do not affect the main system.
By convention a large work area is named
.Pa /build .
Similarly if a machine is going to have a large number of users
you might want to separate your
.Pa /home
out as well.
.Pp
A number of run-time
.Xr mount 8
options exist that can help you tune the system.
The most obvious and most dangerous one is
.Cm async .
Do not ever use it; it is far too dangerous.
A less dangerous and more
useful
.Xr mount 8
option is called
.Cm noatime .
.Ux
filesystems normally update the last-accessed time of a file or
directory whenever it is accessed.
However, neither HAMMER nor HAMMER2 implements atime, so there is usually
no need to mess with this option.
The lack of atime updates can create issues with certain programs
such as when detecting whether unread mail is present, but
applications for the most part no longer depend on it.
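.Pp
If you do want
.Cm noatime
on a filesystem that implements atime, such as a UFS
.Pa /boot ,
it can be added to the options field in
.Xr fstab 5 .
The entry below is illustrative only:
.Bd -literal -offset indent
# /etc/fstab (example entry only)
/dev/da0s1a	/boot	ufs	rw,noatime	1	1
.Ed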
.Sh SSD SWAP
The single most important thing you can do to improve performance is to
have at least one solid-state drive in your system, and to configure your
swap space on that drive.
If you are using a combination of a smaller SSD and a very large HDD,
you can use
.Xr swapcache 8
to automatically cache data from your HDD.
But even if you do not, having swap space configured on your SSD will
significantly improve performance under even modest paging loads.
It is particularly useful to configure a significant amount of swap
on a workstation, 32GB or more is not uncommon, to handle bloated
leaky applications such as browsers.
.Sh SYSCTL TUNING
.Xr sysctl 8
variables permit system behavior to be monitored and controlled at
run-time.
Some sysctls simply report on the behavior of the system; others allow
the system behavior to be modified;
some may be set at boot time using
.Xr rc.conf 5 ,
but most will be set via
.Xr sysctl.conf 5 .
There are several hundred sysctls in the system, including many that appear
to be candidates for tuning but actually are not.
In this document we will only cover the ones that have the greatest effect
on the system.
.Pp
The
.Va kern.gettimeofday_quick
sysctl defaults to 0 (off).  Setting this sysctl to 1 causes
.Fn gettimeofday
calls in libc to use a tick-granular time from the kpmap instead of making
a system call.  Setting this feature can be useful when running benchmarks
which make large numbers of
.Fn gettimeofday
calls, such as postgres.
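.Pp
As a minimal sketch, the feature can be enabled persistently with a
.Xr sysctl.conf 5
entry such as the following:
.Bd -literal -offset indent
# /etc/sysctl.conf
# use the tick-granular kpmap time instead of a system call
kern.gettimeofday_quick=1
.Ed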
.Pp
The
.Va kern.ipc.shm_use_phys
sysctl defaults to 1 (on) and may be set to 0 (off) or 1 (on).
Setting
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
number of processes.
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
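.Pp
If you do need to change it, a
.Xr sysctl.conf 5
entry along these lines may be used (the default is already 1):
.Bd -literal -offset indent
# /etc/sysctl.conf
# 1 = wire System V shared memory (default), 0 = allow it to be paged
kern.ipc.shm_use_phys=1
.Ed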
.Pp
The
.Va vfs.write_behind
sysctl defaults to 1 (on).  This tells the filesystem to issue media
writes as full clusters are collected, which typically occurs when writing
large sequential files.  The idea is to avoid saturating the buffer
cache with dirty buffers when it would not benefit I/O performance.  However,
this may stall processes and under certain circumstances you may wish to turn
it off.
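.Pp
For example, a workload that is hurt by write-behind stalls could disable
the feature with a
.Xr sysctl.conf 5
entry such as:
.Bd -literal -offset indent
# /etc/sysctl.conf
# do not force media writes as clusters are collected
vfs.write_behind=0
.Ed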
.Pp
The
.Va vfs.lorunningspace
and
.Va vfs.hirunningspace
sysctls determine how much outstanding write I/O may be queued to
disk controllers system wide at any given moment.  The default is
usually sufficient, particularly when SSDs are part of the mix.
Note that setting too high a value can lead to extremely poor
clustering performance.  Do not set this value arbitrarily high!  Also,
higher write queueing values may add latency to reads occurring at the same
time.
The
.Va vfs.bufcache_bw
sysctl controls data cycling within the buffer cache.  I/O bandwidth less than
this specification (per second) will cycle into the much larger general
VM page cache while I/O bandwidth in excess of this specification will
be recycled within the buffer cache, reducing the load on the rest of
the VM system at the cost of bypassing normal VM caching mechanisms.
The default value is 200 megabytes/s (209715200), which means that the
system will try harder to cache data coming off a slower hard drive
and not as hard to cache data coming off a fast SSD.
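.Pp
Before changing anything, inspect the current values; the last line below is
only a sketch of how a different cycling threshold (in bytes per second)
would be specified:
.Bd -literal -offset indent
# show the current settings
sysctl vfs.lorunningspace vfs.hirunningspace vfs.bufcache_bw
# illustrative only: raise the cycling threshold to ~500 MBytes/sec
sysctl vfs.bufcache_bw=524288000
.Ed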
.Pp
This parameter is particularly important if you have NVMe drives in
your system as these storage devices are capable of transferring
well over 2GBytes/sec into the system and can blow normal VM paging
and caching algorithms to bits.
.Pp
There are various other buffer-cache and VM page cache related sysctls.
We do not recommend modifying their values.
.Pp
The
.Va net.inet.tcp.sendspace
and
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
applications.
They control the amount of send and receive buffer space
allowed for any given TCP connection.
However,
.Dx
now auto-tunes these parameters using a number of other related
sysctls (run 'sysctl net.inet.tcp' to get a list), and these usually
no longer need to be tuned manually.
We do not recommend
increasing or decreasing the defaults if you are managing a very large
number of connections.
Note that the routing table (see
.Xr route 8 )
can be used to introduce route-specific send and receive buffer size
defaults.
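.Pp
To see what the auto-tuning is doing, list the related sysctls; manual
overrides are normally unnecessary:
.Bd -literal -offset indent
# list the TCP sysctls involved in auto-tuning
sysctl net.inet.tcp
# read the current default buffer sizes
sysctl net.inet.tcp.sendspace net.inet.tcp.recvspace
.Ed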
.Pp
As an additional management tool you can use pipes in your
firewall rules (see
.Xr ipfw 8 )
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
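.Pp
As a rough sketch only (rule number and bandwidth are illustrative), a
dummynet pipe limiting outbound web traffic to roughly 70% of a T1 might
look like:
.Bd -literal -offset indent
ipfw pipe 1 config bw 1080Kbit/s
ipfw add 1000 pipe 1 tcp from any 80 to any out
.Ed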
.Pp
Setting the send or receive TCP buffer to values larger than 65535 will result
in only a marginal performance improvement unless both hosts support the window
scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
sysctl.
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65536 in order to obtain good performance from
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
RFC 1323 support is enabled by default.
.Pp
The
.Va net.inet.tcp.always_keepalive
sysctl determines whether or not the TCP implementation should attempt
to detect dead TCP connections by intermittently delivering
.Dq keepalives
on the connection.
By default, this is now enabled for all applications.
We do not recommend turning it off.
The extra network bandwidth is minimal and this feature will clean-up
stalled and long-dead connections that might not otherwise be cleaned
up.
In the past people using dialup connections often did not want to
use this feature in order to be able to retain connections across
long disconnections, but in modern day the only default that makes
sense is for the feature to be turned on.
.Pp
The
.Va net.inet.tcp.delayed_ack
TCP feature is largely misunderstood.  Historically speaking this feature
was designed to allow the acknowledgement to transmitted data to be returned
along with the response.  For example, when you type over a remote shell
the acknowledgement to the character you send can be returned along with the
data representing the echo of the character.  With delayed acks turned off
the acknowledgement may be sent in its own packet before the remote service
has a chance to echo the data it just received.  This same concept also
applies to any interactive protocol (e.g. SMTP, WWW, POP3) and can cut the
number of tiny packets flowing across the network in half.  The
.Dx
delayed-ack implementation also follows the TCP protocol rule that
at least every other packet be acknowledged even if the standard 100ms
timeout has not yet passed.  Normally the worst a delayed ack can do is
slightly delay the teardown of a connection, or slightly delay the ramp-up
of a slow-start TCP connection.  While we are not sure, we believe that
the several FAQs related to packages such as SAMBA and SQUID which advise
turning off delayed acks are probably referring to the slow-start issue.
.Pp
The
.Va net.inet.tcp.inflight_enable
sysctl turns on bandwidth delay product limiting for all TCP connections.
This feature is now turned on by default and we recommend that it be
left on.
It will slightly reduce the maximum bandwidth of a connection but the
benefits of the feature in reducing packet backlogs at router constriction
points are enormous.
These benefits make it a whole lot easier for router algorithms to manage
QOS for multiple connections.
The limiting feature reduces the amount of data built up in intermediate
router and switch packet queues as well as reduces the amount of data built
up in the local host's interface queue.  With fewer packets queued up,
interactive connections, especially over slow modems, will also be able
to operate with lower round trip times.  However, note that this feature
only affects data transmission (uploading / server-side).  It does not
affect data reception (downloading).
.Pp
The system will attempt to calculate the bandwidth delay product for each
connection and limit the amount of data queued to the network to just the
amount required to maintain optimum throughput.  This feature is useful
if you are serving data over modems, GigE, or high speed WAN links (or
any other link with a high bandwidth*delay product), especially if you are
also using window scaling or have configured a large send window.
.Pp
For production use setting
.Va net.inet.tcp.inflight_min
to at least 6144 may be beneficial.  Note, however, that setting high
minimums may effectively disable bandwidth limiting depending on the link.
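.Pp
A minimal
.Xr sysctl.conf 5
sketch reflecting the recommendations above:
.Bd -literal -offset indent
# /etc/sysctl.conf
# bandwidth delay product limiting (already on by default)
net.inet.tcp.inflight_enable=1
# suggested production minimum
net.inet.tcp.inflight_min=6144
.Ed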
.Pp
Adjusting
.Va net.inet.tcp.inflight_stab
is not recommended.
This parameter defaults to 50, representing +5% fudge when calculating the
bwnd from the bw.  This fudge is on top of an additional fixed +2*maxseg
added to bwnd.  The fudge factor is required to stabilize the algorithm
at very high speeds while the fixed 2*maxseg stabilizes the algorithm at
low speeds.  If you increase this value, excessive packet buffering may occur.
.Pp
The
.Va net.inet.ip.portrange.*
sysctls control the port number ranges automatically bound to TCP and UDP
sockets.  There are three ranges:  A low range, a default range, and a
high range, selectable via an IP_PORTRANGE
.Fn setsockopt
call.
Most network programs use the default range which is controlled by
.Va net.inet.ip.portrange.first
and
.Va net.inet.ip.portrange.last ,
which default to 1024 and 5000, respectively.  Bound port ranges are
used for outgoing connections and it is possible to run the system out
of ports under certain circumstances.  This most commonly occurs when you are
running a heavily loaded web proxy.  The port range is not an issue
when running servers which handle mainly incoming connections, such as a
normal web server, or which have a limited number of outgoing connections, such
as a mail relay.  For situations where you may run yourself out of
ports we recommend increasing
.Va net.inet.ip.portrange.last
modestly.  A value of 10000 or 20000 or 30000 may be reasonable.  You should
also consider firewall effects when changing the port range.  Some firewalls
may block large ranges of ports (usually low-numbered ports) and expect systems
to use higher ranges of ports for outgoing connections.  For this reason
we do not recommend that
.Va net.inet.ip.portrange.first
be lowered.
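.Pp
For example, a busy proxy could raise the top of the default range with a
.Xr sysctl.conf 5
entry such as:
.Bd -literal -offset indent
# /etc/sysctl.conf
# more outgoing ports for a heavily loaded proxy
net.inet.ip.portrange.last=20000
.Ed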
.Pp
The
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
The service daemon
may itself limit the listen queue size (e.g.\&
.Xr sendmail 8 ,
apache) but will
often have a directive in its configuration file to adjust the queue size up.
Larger listen queues also do a better job of fending off denial of service
attacks.
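.Pp
For a heavily loaded web server the corresponding
.Xr sysctl.conf 5
entry might be:
.Bd -literal -offset indent
# /etc/sysctl.conf
# deeper listen queue for busy services
kern.ipc.somaxconn=1024
.Ed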
.Pp
The
.Va kern.maxvnodes
sysctl specifies how many vnodes and related file structures the kernel will
cache.
The kernel uses a modestly generous default for this parameter based on
available physical memory.
You generally do not want to mess with this parameter as it directly
affects how well the kernel can cache not only file structures but also
the underlying file data.
.Pp
However, situations may crop up where you wish to cache less filesystem
data in order to make more memory available for programs.  Not only will
this reduce kernel memory use for vnodes and inodes, it will also have a
tendency to reduce the impact of the buffer cache on main memory because
recycling a vnode also frees any underlying data that has been cached for
that vnode.
.Pp
It is, in fact, possible for the system to have more files open than the
value of this tunable, but as files are closed the system will try to
reduce the actual number of cached vnodes to match this value.
The read-only
.Va kern.openfiles
sysctl may be interrogated to determine how many files are currently open
on the system.
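.Pp
For example, to compare the current number of open files against the vnode
cache target:
.Bd -literal -offset indent
sysctl kern.openfiles kern.maxvnodes
.Ed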
.Pp
The
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
Such systems
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
seconds) via
.Va vm.swap_idle_threshold1
and
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than the normal pageout algorithm.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
In a small system
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
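.Pp
If you do decide to enable it, first look at the default hysteresis values
(in idle seconds) and then turn the feature on, for example:
.Bd -literal -offset indent
# show the swapout hysteresis defaults
sysctl vm.swap_idle_threshold1 vm.swap_idle_threshold2
# enable idle-process swapout (large, busy systems only)
sysctl vm.swap_idle_enabled=1
.Ed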
.Sh LOADER TUNABLES
Some aspects of the system behavior may not be tunable at runtime because
memory allocations they perform must occur early in the boot process.
To change loader tunables, you must set their values in
.Xr loader.conf 5
and reboot the system.
.Pp
.Va kern.maxusers
is automatically sized at boot based on the amount of memory available in
the system.  The value can be read (but not written) via sysctl.
.Pp
You can change this value as a loader tunable if the default resource
limits are not sufficient.
This tunable works primarily by adjusting
.Va kern.maxproc ,
so you can opt to override that instead.
It is generally easier to formulate an adjustment to
.Va kern.maxproc
instead of
.Va kern.maxusers .
.Pp
.Va kern.maxproc
controls most kernel auto-scaling components.  If kernel resource limits
are not scaled high enough, setting this tunable to a higher value is
usually sufficient.
Generally speaking you will want to set this tunable to the upper limit
for the number of process threads you want the kernel to be able to handle.
The kernel may still decide to cap maxproc at a lower value if there is
insufficient ram to scale resources as desired.
.Pp
Only set this tunable if the defaults are not sufficient.
Do not use this tunable to try to trim kernel resource limits; you will
not actually save much memory by doing so and you will leave the system
more vulnerable to DOS attacks and runaway processes.
.Pp
Setting this tunable will scale the maximum number of processes, pipes and
sockets, total open files the system can support, and increase mbuf
and mbuf-cluster limits.  These other elements can also be separately
overridden to fine-tune the setup.  We recommend setting this tunable
first to create a baseline.
.Pp
Setting a high value presumes that you have enough physical memory to
support the resource utilization.  For example, your system would need
approximately 128GB of ram to reasonably support a maxproc value of
4 million (4000000).  The default maxproc given that much ram will
typically be in the 250000 range.
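.Pp
As a sketch, a very large machine might carry something like the following
in
.Xr loader.conf 5
(the value is the example from above and presumes roughly 128GB of ram):
.Bd -literal -offset indent
# /boot/loader.conf
kern.maxproc="4000000"
.Ed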
.Pp
Note that the PID is currently limited to 6 digits, so a system cannot
have more than a million processes operating anyway (though the aggregate
number of threads can be far greater).
And yes, there is in fact no reason why a very well-endowed system
couldn't have that many processes.
.Pp
.Va kern.nbuf
sets how many filesystem buffers the kernel should cache.
Filesystem buffers can be up to 128KB each.
UFS typically uses an 8KB blocksize while HAMMER and HAMMER2 typically
use 64KB.  The system defaults usually suffice for this parameter.
Cached buffers represent wired physical memory so specifying a value
that is too large can result in excessive kernel memory use, and is also
not entirely necessary since the pages backing the buffers are also
cached by the VM page cache (which does not use wired memory).
The buffer cache significantly improves the hot path for cached file
accesses and dirty data.
.Pp
The kernel reserves (128KB * nbuf) bytes of KVM.  The actual physical
memory use depends on the filesystem buffer size.
It is generally more flexible to manage the filesystem cache via
.Va kern.maxfiles
than via
.Va kern.nbuf ,
but situations do arise where you might want to increase or decrease
the latter.
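.Pp
For example, the KVM reservation follows directly from the current value:
.Bd -literal -offset indent
# read the current value
sysctl kern.nbuf
# e.g. kern.nbuf=8192 reserves 8192 * 128KB = 1GB of KVM
.Ed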
.Pp
The
.Va kern.dfldsiz
and
.Va kern.dflssiz
tunables set the default soft limits for process data and stack size
respectively.
Processes may increase these up to the hard limits by calling
.Xr setrlimit 2 .
The
.Va kern.maxdsiz ,
.Va kern.maxssiz ,
and
.Va kern.maxtsiz
tunables set the hard limits for process data, stack, and text size
respectively; processes may not exceed these limits.
The
.Va kern.sgrowsiz
tunable controls how much the stack segment will grow when a process
needs to allocate more stack.
.Pp
.Va kern.ipc.nmbclusters
and
.Va kern.ipc.nmbjclusters
may be adjusted to increase the number of network mbufs the system is
willing to allocate.
Each normal cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
buffers.
Each 'j' cluster is typically 4KB, so a value of 1024 represents 4M of
kernel memory.
You can do a simple calculation to figure out how many you need but
keep in mind that tcp buffer sizing is now more dynamic than it used to
be.
.Pp
The defaults usually suffice but you may want to bump it up on service-heavy
machines.
Modern machines often need a large number of mbufs to operate services
efficiently, values of 65536, even upwards of 262144 or more are common.
If you are running a server, it is better to be generous than to be frugal.
Remember the memory calculation though.
.Pp
Under no circumstances
should you specify an arbitrarily high value for this parameter; it could
lead to a boot-time crash.
The
.Fl m
option to
.Xr netstat 1
may be used to observe network cluster use.
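.Pp
As a sketch, a service-heavy machine might reserve 65536 normal clusters
(about 128MB) in
.Xr loader.conf 5 :
.Bd -literal -offset indent
# /boot/loader.conf
kern.ipc.nmbclusters="65536"
.Ed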
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large-scale system.
In order to change these options you need to be
able to compile a new kernel from source.
The
.Xr build 7
manual page and the handbook are good starting points for learning how to
do this.
Generally speaking, removing options to trim the size of the kernel
is not going to save very much memory on a modern system.
In the grand scheme of things, saving a megabyte or two is in the noise
on a system that likely has multiple gigabytes of memory.
.Pp
If your motherboard is AHCI-capable then we strongly recommend turning
on AHCI mode in the BIOS if it is not already the default.
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU or moving to
an SMP motherboard (multiple CPU's), or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
If your system
is paging to swap a lot you need to consider adding more memory.
If your
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
.Xr systat 1
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
.Pp
Finally, you might run out of network suds.
Optimize the network path
as much as possible.
If you are operating a machine as a router you may need to
set up a
.Xr pf 4
firewall (also see
.Xr firewall 7 ) .
.Dx
has a very good fair-share queueing algorithm for QOS in
.Xr pf 4 .
.Sh BULK BUILDING MACHINE SETUP
Generally speaking memory is at a premium when doing bulk compiles.
Machines dedicated to bulk building usually reduce
.Va kern.maxvnodes
to 1000000 (1 million) vnodes or lower.  Don't get too cocky here; this
parameter should never be reduced below around 100000 on reasonably well
endowed machines.
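.Pp
For example, a dedicated bulk-build machine might use a
.Xr sysctl.conf 5
entry such as:
.Bd -literal -offset indent
# /etc/sysctl.conf
# smaller vnode cache leaves more memory for compiles
kern.maxvnodes=1000000
.Ed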
.Pp
Bulk build setups also often benefit from a relatively large amount
of SSD swap, allowing the system to 'burst' high-memory-usage situations
while still maintaining optimal concurrency for other periods during the
build which do not use as much run-time memory and prefer more parallelism.
.Sh SOURCE OF KERNEL MEMORY USAGE
The primary sources of kernel memory usage are:
.Bl -tag -width ".Va kern.maxvnodes"
.It Va kern.maxvnodes
The maximum number of cached vnodes in the system.
These can eat quite a bit of kernel memory, primarily due to auxiliary
structures tracked by the HAMMER filesystem.
It is relatively easy to configure a smaller value, but we do not
recommend reducing this parameter below 100000.
Smaller values directly impact the number of discrete files the
kernel can cache data for at once.
.It Va kern.ipc.nmbclusters , Va kern.ipc.nmbjclusters
Calculate approximately 2KB per normal cluster and 4KB per jumbo
cluster.
Do not make these values too low or you risk deadlocking the network
stack.
.It Va kern.nbuf
The number of filesystem buffers managed by the kernel.
The kernel wires the underlying cached VM pages, typically 8KB (UFS) or
64KB (HAMMER) per buffer.
.It swap/swapcache
Swap memory requires approximately 1MB of physical ram for each 1GB
of swap space.
When swapcache is used, additional memory may be required to keep
VM objects around longer (only really reducible by reducing the
value of
.Va kern.maxvnodes
which you can do post-boot if you desire).
.It tmpfs
Tmpfs is very useful but keep in mind that while the file data itself
is backed by swap, the meta-data (the directory topology) requires
wired kernel memory.
.It mmu page tables
Even though the underlying data pages themselves can be paged to swap,
the page tables are usually wired into memory.
This can create problems when a large number of processes are
.Fn mmap Ns ing
very large files.
Sometimes turning on
.Va machdep.pmap_mmu_optimize
suffices to reduce overhead.
Page table kernel memory use can be observed by using 'vmstat -z'.
.It Va kern.ipc.shm_use_phys
It is sometimes necessary to force shared memory to use physical memory
when running a large database which uses shared memory to implement its
own data caching.
The use of sysv shared memory in this regard allows the database to
distinguish between data which it knows it can access instantly (i.e.
without even having to page-in from swap) versus data which it might require
an I/O to fetch.
.Pp
If you use this feature be very careful with regards to the database's
shared memory configuration as you will be wiring the memory.
.El
.Sh SEE ALSO
.Xr netstat 1 ,
.Xr systat 1 ,
.Xr dm 4 ,
.Xr dummynet 4 ,
.Xr nata 4 ,
.Xr pf 4 ,
.Xr login.conf 5 ,
.Xr pf.conf 5 ,
.Xr rc.conf 5 ,
.Xr sysctl.conf 5 ,
.Xr build 7 ,
.Xr firewall 7 ,
.Xr hier 7 ,
.Xr boot 8 ,
.Xr ccdconfig 8 ,
.Xr disklabel 8 ,
.Xr fsck 8 ,
.Xr ifconfig 8 ,
.Xr ipfw 8 ,
.Xr loader 8 ,
.Xr mount 8 ,
.Xr newfs 8 ,
.Xr route 8 ,
.Xr sysctl 8 ,
.Xr tunefs 8
.Sh HISTORY
The
.Nm
manual page was inherited from
.Fx
and first appeared in
.Fx 4.3 ,
May 2001.
.Sh AUTHORS
The
.Nm
manual page was originally written by
.An Matthew Dillon .