.\" Copyright (c) 2001 Matthew Dillon.  Terms and conditions are those of
.\" the BSD Copyright as specified in the file "/usr/src/COPYRIGHT" in
.\" the source tree.
.\"
.Dd August 24, 2018
.Dt TUNING 7
.Os
.Sh NAME
.Nm tuning
.Nd performance tuning under DragonFly
.Sh SYSTEM SETUP
Modern
.Dx
systems typically have just three partitions on the main drive.
In order, a UFS
.Pa /boot ,
.Pa swap ,
and a HAMMER or HAMMER2
.Pa root .
In prior years the installer created separate PFSs for half a dozen
directories, but now we just put (almost) everything in the root.
The installer will separate stuff that doesn't need to be backed up into
a /build subdirectory and create null-mounts for things like /usr/obj, but it
no longer creates separate PFSs for these.
If desired, you can make /build its own mount to separate-out the
components of the filesystem which do not need to be persistent.
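.Pp
For illustration only, a single-drive layout along these lines might produce
an
.Pa /etc/fstab
similar to the following (the device names are hypothetical):
.Bd -literal -offset indent
# Device	Mountpoint	FStype	Options	Dump	Pass#
/dev/da0s1a	/boot		ufs	rw	1	1
/dev/da0s1b	none		swap	sw	0	0
/dev/da0s1d	/		hammer2	rw	1	1
/build/usr.obj	/usr/obj	null	rw	0	0
.Ed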
.Pp
Generally speaking the
.Pa /boot
partition should be 1GB in size.  This is the minimum recommended
size, giving you room for backup kernels and alternative boot schemes.
.Dx
always installs debug-enabled kernels and modules and these can take
up quite a bit of disk space (but will not take up any extra ram).
.Pp
In the old days we recommended that swap be sized to at least 2x main
memory.  These days swap is often used for other activities, including
.Xr tmpfs 5
and
.Xr swapcache 8 .
We recommend that swap be sized to the larger of 2x main memory or
1GB if you have a fairly small disk, and 16GB or more if you have a
modestly endowed system.
If you have a modest SSD + large HDD combination, we recommend
a large dedicated swap partition on the SSD.  For example, if
you have a 128GB SSD and 2TB or more of HDD storage, dedicating
upwards of 64GB of the SSD to swap and using
.Xr swapcache 8
will significantly improve your HDD's performance.
.Pp
In an all-SSD or mostly-SSD system,
.Xr swapcache 8
is not normally used and should be left disabled (the default), but you
may still want to have a large swap partition to support
.Xr tmpfs 5
use.
Our synth/poudriere build machines run with at least 200GB of
swap and use tmpfs for all the builder jails.  50-100 GB
is swapped out at the peak of the build.  As a result, actual
system storage bandwidth is minimized and performance increased.
.Pp
If you are on a minimally configured machine you may, of course,
configure far less swap or no swap at all but we recommend at least
some swap.
The kernel's VM paging algorithms are tuned to perform best when there is
swap space configured.
Configuring too little swap can lead to inefficiencies in the VM
page scanning code as well as create issues later on if you add
more memory to your machine, so don't be shy about it.
Swap is a good idea even if you don't think you will ever need it as it
allows the
machine to page out completely unused data and idle programs (like getty),
maximizing the ram available for your activities.
.Pp
If you intend to use the
.Xr swapcache 8
facility with a SSD + HDD combination we recommend configuring as much
swap space as you can on the SSD.
However, keep in mind that each 1GByte of swapcache requires around
1MByte of ram, so don't scale your swap beyond the equivalent ram
that you reasonably want to eat to support it.
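As a quick sketch of that calculation:
.Bd -literal -offset indent
64GB of swap dedicated to swapcache
    -> roughly 64MB of wired ram for swap metadata (1MB per 1GB)
.Ed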
.Pp
Finally, on larger systems with multiple drives, if the use
of SSD swap is not in the cards or if it is and you need higher-than-normal
swapcache bandwidth, you can configure swap on up to four drives and
the kernel will interleave the storage.
The swap partitions on the drives should be approximately the same size.
The kernel can handle arbitrary sizes but
internal data structures scale to 4 times the largest swap partition.
Keeping
the swap partitions near the same size will allow the kernel to optimally
stripe swap space across the N disks.
Do not worry about overdoing it a
little, swap space is the saving grace of
.Ux
and even if you do not normally use much swap, having some allows the system
to move idle program data out of ram and allows the machine to more easily
handle abnormal runaway programs.
However, keep in mind that any sort of swap space failure can lock the
system up.
Most machines are configured with only one or two swap partitions.
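.Pp
For example, interleaving swap across two drives is just a matter of listing
a similarly-sized swap partition from each drive in
.Pa /etc/fstab
(device names hypothetical):
.Bd -literal -offset indent
/dev/da0s1b	none	swap	sw	0	0
/dev/da1s1b	none	swap	sw	0	0
.Ed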
.Pp
Most
.Dx
systems have a single HAMMER or HAMMER2 root.
PFSs can be used to administratively separate domains for backup purposes
but tend to be a hassle otherwise so if you don't need the administrative
separation you don't really need to use multiple PFSs.
All the PFSs share the same allocation layer so there is no longer a need
to size each individual mount.
Instead you should review the
.Xr hammer 8
manual page and use the 'hammer viconfig' facility to adjust snapshot
retention and other parameters.
By default
HAMMER1 keeps 60 days worth of snapshots, and HAMMER2 keeps none.
By convention
.Pa /build
is not backed up and contains only directory trees that do not need
to be backed-up or snapshotted.
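.Pp
For example, snapshot retention on a HAMMER1 PFS can be reviewed and edited
with a command along the lines of:
.Bd -literal -offset indent
hammer viconfig /home
.Ed
.Pp
which opens a per-PFS configuration whose
.Cm snapshots
line (e.g. 'snapshots 1d 60d' for one snapshot per day, retained for 60 days)
controls the retention described above.
The keywords are documented in
.Xr hammer 8 ;
treat the specific values here as illustrative only.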
.Pp
If a very large work area is desired it is often beneficial to
configure it as its own filesystem in a completely independent partition
so allocation blowouts (if they occur) do not affect the main system.
By convention a large work area is named
.Pa /build .
Similarly if a machine is going to have a large number of users
you might want to separate your
.Pa /home
out as well.
.Pp
A number of run-time
.Xr mount 8
options exist that can help you tune the system.
The most obvious and most dangerous one is
.Cm async .
Do not ever use it; it is far too dangerous.
A less dangerous and more
useful
.Xr mount 8
option is called
.Cm noatime .
.Ux
filesystems normally update the last-accessed time of a file or
directory whenever it is accessed.
However, neither HAMMER nor HAMMER2 implement atime so there is usually
no need to mess with this option.
The lack of atime updates can create issues with certain programs,
such as those that detect whether unread mail is present, but
applications for the most part no longer depend on it.
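.Pp
If you do have a UFS filesystem where atime updates are unwanted, the option
can simply be added to the relevant
.Pa /etc/fstab
entry, e.g. (device name hypothetical):
.Bd -literal -offset indent
/dev/da0s1a	/boot	ufs	rw,noatime	1	1
.Ed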
.Sh SSD SWAP
The single most important thing you can do to improve performance is to
have at least one solid-state drive in your system, and to configure your
swap space on that drive.
If you are using a combination of a smaller SSD and a very large HDD,
you can use
.Xr swapcache 8
to automatically cache data from your HDD.
But even if you do not, having swap space configured on your SSD will
significantly improve performance under even modest paging loads.
It is particularly useful to configure a significant amount of swap
on a workstation, 32GB or more is not uncommon, to handle bloated
leaky applications such as browsers.
.Sh SYSCTL TUNING
.Xr sysctl 8
variables permit system behavior to be monitored and controlled at
run-time.
Some sysctls simply report on the behavior of the system; others allow
the system behavior to be modified;
some may be set at boot time using
.Xr rc.conf 5 ,
but most will be set via
.Xr sysctl.conf 5 .
There are several hundred sysctls in the system, including many that appear
to be candidates for tuning but actually are not.
In this document we will only cover the ones that have the greatest effect
on the system.
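.Pp
As a reminder of the mechanics, a sysctl can be inspected and changed
at run-time with
.Xr sysctl 8
and made persistent across reboots via
.Xr sysctl.conf 5 ,
for example:
.Bd -literal -offset indent
# inspect the current value
sysctl vfs.write_behind
# change it on the running system
sysctl vfs.write_behind=0
# or set it at boot time with a line in /etc/sysctl.conf
vfs.write_behind=0
.Ed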
.Pp
The
.Va kern.ipc.shm_use_phys
sysctl defaults to 1 (on) and may be set to 0 (off) or 1 (on).
Setting
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
number of processes.
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
.Pp
The
.Va vfs.write_behind
sysctl defaults to 1 (on).  This tells the filesystem to issue media
writes as full clusters are collected, which typically occurs when writing
large sequential files.  The idea is to avoid saturating the buffer
cache with dirty buffers when it would not benefit I/O performance.  However,
this may stall processes and under certain circumstances you may wish to turn
it off.
.Pp
The
.Va vfs.lorunningspace
and
.Va vfs.hirunningspace
sysctls determine how much outstanding write I/O may be queued to
disk controllers system wide at any given moment.  The default is
usually sufficient, particularly when SSDs are part of the mix.
Note that setting too high a value can lead to extremely poor
clustering performance.  Do not set this value arbitrarily high!  Also,
higher write queueing values may add latency to reads occurring at the same
time.
.Pp
The
.Va vfs.bufcache_bw
sysctl controls data cycling within the buffer cache.  I/O bandwidth less than
this specification (per second) will cycle into the much larger general
VM page cache while I/O bandwidth in excess of this specification will
be recycled within the buffer cache, reducing the load on the rest of
the VM system at the cost of bypassing normal VM caching mechanisms.
The default value is 200 megabytes/s (209715200), which means that the
system will try harder to cache data coming off a slower hard drive
and will not try as hard to cache data coming off a fast SSD.
.Pp
This parameter is particularly important if you have NVMe drives in
your system as these storage devices are capable of transferring
well over 2GBytes/sec into the system and can blow normal VM paging
and caching algorithms to bits.
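.Pp
If you do decide to raise the threshold on such a system, it is a single
sysctl; the value below is only an illustration:
.Bd -literal -offset indent
# /etc/sysctl.conf: cycle up to ~400MB/sec within the buffer cache
vfs.bufcache_bw=419430400
.Ed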
.Pp
There are various other buffer-cache and VM page cache related sysctls.
We do not recommend modifying their values.
.Pp
The
.Va net.inet.tcp.sendspace
and
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
applications.
They control the amount of send and receive buffer space
allowed for any given TCP connection.
However,
.Dx
now auto-tunes these parameters using a number of other related
sysctls (run 'sysctl net.inet.tcp' to get a list), so they usually
no longer need to be tuned manually.
We do not recommend
increasing or decreasing the defaults if you are managing a very large
number of connections.
Note that the routing table (see
.Xr route 8 )
can be used to introduce route-specific send and receive buffer size
defaults.
.Pp
As an additional management tool you can use pipes in your
firewall rules (see
.Xr ipfw 8 )
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
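.Pp
A minimal sketch of such a limit using
.Xr ipfw 8
with
.Xr dummynet 4
pipes might look like the following; the numbers and rule placement are
purely illustrative:
.Bd -literal -offset indent
# a pipe limited to ~70% of a 1.544Mbit/s T1
ipfw pipe 1 config bw 1100Kbit/s
# push outgoing web traffic through the pipe
ipfw add 1000 pipe 1 tcp from any 80 to any out
.Ed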
.Pp
Setting the send or receive TCP buffer to values larger than 65535 will result
in only a marginal performance improvement unless both hosts support the window
scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
sysctl.
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65536 in order to obtain good performance from
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
RFC 1323 support is enabled by default.
.Pp
The
.Va net.inet.tcp.always_keepalive
sysctl determines whether or not the TCP implementation should attempt
to detect dead TCP connections by intermittently delivering
.Dq keepalives
on the connection.
By default, this is now enabled for all applications.
We do not recommend turning it off.
The extra network bandwidth is minimal and this feature will clean-up
stalled and long-dead connections that might not otherwise be cleaned
up.
In the past people using dialup connections often did not want to
use this feature in order to be able to retain connections across
long disconnections, but in modern day the only default that makes
sense is for the feature to be turned on.
.Pp
The
.Va net.inet.tcp.delayed_ack
TCP feature is largely misunderstood.  Historically speaking this feature
was designed to allow the acknowledgement to transmitted data to be returned
along with the response.  For example, when you type over a remote shell
the acknowledgement to the character you send can be returned along with the
data representing the echo of the character.  With delayed acks turned off
the acknowledgement may be sent in its own packet before the remote service
has a chance to echo the data it just received.  This same concept also
applies to any interactive protocol (e.g. SMTP, WWW, POP3) and can cut the
number of tiny packets flowing across the network in half.  The
.Dx
delayed-ack implementation also follows the TCP protocol rule that
at least every other packet be acknowledged even if the standard 100ms
timeout has not yet passed.  Normally the worst a delayed ack can do is
slightly delay the teardown of a connection, or slightly delay the ramp-up
of a slow-start TCP connection.  While we are not sure, we believe that
the several FAQs related to packages such as SAMBA and SQUID which advise
turning off delayed acks may be referring to the slow-start issue.
.Pp
The
.Va net.inet.tcp.inflight_enable
sysctl turns on bandwidth delay product limiting for all TCP connections.
This feature is now turned on by default and we recommend that it be
left on.
It will slightly reduce the maximum bandwidth of a connection but the
benefits of the feature in reducing packet backlogs at router constriction
points are enormous.
These benefits make it a whole lot easier for router algorithms to manage
QOS for multiple connections.
The limiting feature reduces the amount of data built up in intermediate
router and switch packet queues as well as reduces the amount of data built
up in the local host's interface queue.  With fewer packets queued up,
interactive connections, especially over slow modems, will also be able
to operate with lower round trip times.  However, note that this feature
only affects data transmission (uploading / server-side).  It does not
affect data reception (downloading).
.Pp
The system will attempt to calculate the bandwidth delay product for each
connection and limit the amount of data queued to the network to just the
amount required to maintain optimum throughput.  This feature is useful
if you are serving data over modems, GigE, or high speed WAN links (or
any other link with a high bandwidth*delay product), especially if you are
also using window scaling or have configured a large send window.
.Pp
For production use setting
.Va net.inet.tcp.inflight_min
to at least 6144 may be beneficial.  Note, however, that setting high
minimums may effectively disable bandwidth limiting depending on the link.
.Pp
Adjusting
.Va net.inet.tcp.inflight_stab
is not recommended.
This parameter defaults to 50, representing +5% fudge when calculating the
bwnd from the bw.  This fudge is on top of an additional fixed +2*maxseg
added to bwnd.  The fudge factor is required to stabilize the algorithm
at very high speeds while the fixed 2*maxseg stabilizes the algorithm at
low speeds.  If you increase this value, excessive packet buffering may occur.
.Pp
The
.Va net.inet.ip.portrange.*
sysctls control the port number ranges automatically bound to TCP and UDP
sockets.  There are three ranges:  A low range, a default range, and a
high range, selectable via an IP_PORTRANGE
.Fn setsockopt
call.
Most network programs use the default range which is controlled by
.Va net.inet.ip.portrange.first
and
.Va net.inet.ip.portrange.last ,
which default to 1024 and 5000 respectively.  Bound port ranges are
used for outgoing connections and it is possible to run the system out
of ports under certain circumstances.  This most commonly occurs when you are
running a heavily loaded web proxy.  The port range is not an issue
when running servers which handle mainly incoming connections, such as a
normal web server, or which have a limited number of outgoing connections,
such as a mail relay.  For situations where you may run yourself out of
ports we recommend increasing
.Va net.inet.ip.portrange.last
modestly.  A value of 10000 or 20000 or 30000 may be reasonable.  You should
also consider firewall effects when changing the port range.  Some firewalls
may block large ranges of ports (usually low-numbered ports) and expect systems
to use higher ranges of ports for outgoing connections.  For this reason
we do not recommend that
.Va net.inet.ip.portrange.first
be lowered.
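.Pp
For example, raising the top of the default range on a busy proxy is a
one-line change in
.Xr sysctl.conf 5
(value illustrative):
.Bd -literal -offset indent
net.inet.ip.portrange.last=20000
.Ed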
.Pp
The
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
The service daemon
may itself limit the listen queue size (e.g.\&
.Xr sendmail 8 ,
apache) but will
often have a directive in its configuration file to adjust the queue size up.
Larger listen queues also do a better job of fending off denial of service
attacks.
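.Pp
For a heavily loaded web server the corresponding
.Xr sysctl.conf 5
entry might simply be:
.Bd -literal -offset indent
kern.ipc.somaxconn=1024
.Ed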
.Pp
The
.Va kern.maxvnodes
sysctl specifies how many vnodes and related file structures the kernel will
cache.
The kernel uses a modestly generous default for this parameter based on
available physical memory.
You generally do not want to mess with this parameter as it directly
affects how well the kernel can cache not only file structures but also
the underlying file data.
.Pp
However, situations may crop up where you wish to cache less filesystem
data in order to make more memory available for programs.  Not only will
this reduce kernel memory use for vnodes and inodes, it will also have a
tendency to reduce the impact of the buffer cache on main memory because
recycling a vnode also frees any underlying data that has been cached for
that vnode.
.Pp
It is, in fact, possible for the system to have more files open than the
value of this tunable, but as files are closed the system will try to
reduce the actual number of cached vnodes to match this value.
The read-only
.Va kern.openfiles
sysctl may be interrogated to determine how many files are currently open
on the system.
.Pp
The
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
Such systems
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
seconds) via
.Va vm.swap_idle_threshold1
and
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than the normal pageout algorithm.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
In a small system
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
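.Pp
A sketch of what enabling this looks like in
.Xr sysctl.conf 5 ;
the hysteresis values are only an illustration and should be tuned to
your workload:
.Bd -literal -offset indent
vm.swap_idle_enabled=1
# swapout hysteresis, in idle seconds
vm.swap_idle_threshold1=2
vm.swap_idle_threshold2=10
.Ed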
.Sh LOADER TUNABLES
Some aspects of the system behavior may not be tunable at runtime because
memory allocations they perform must occur early in the boot process.
To change loader tunables, you must set their values in
.Xr loader.conf 5
and reboot the system.
.Pp
.Va kern.maxusers
is automatically sized at boot based on the amount of memory available in
the system.  The value can be read (but not written) via sysctl.
.Pp
You can change this value as a loader tunable if the default resource
limits are not sufficient.
This tunable works primarily by adjusting
.Va kern.maxproc ,
so you can opt to override that instead.
It is generally easier to formulate an adjustment to
.Va kern.maxproc
instead of
.Va kern.maxusers .
.Pp
.Va kern.maxproc
controls most kernel auto-scaling components.  If kernel resource limits
are not scaled high enough, setting this tunable to a higher value is
usually sufficient.
Generally speaking you will want to set this tunable to the upper limit
for the number of process threads you want the kernel to be able to handle.
The kernel may still decide to cap maxproc at a lower value if there is
insufficient ram to scale resources as desired.
.Pp
Only set this tunable if the defaults are not sufficient.
Do not use this tunable to try to trim kernel resource limits; you will
not actually save much memory by doing so and you will leave the system
more vulnerable to DOS attacks and runaway processes.
.Pp
Setting this tunable will scale the maximum number of processes, pipes and
sockets, total open files the system can support, and increase mbuf
and mbuf-cluster limits.  These other elements can also be separately
overridden to fine-tune the setup.  We recommend setting this tunable
first to create a baseline.
.Pp
Setting a high value presumes that you have enough physical memory to
support the resource utilization.  For example, your system would need
approximately 128GB of ram to reasonably support a maxproc value of
4 million (4000000).  The default maxproc given that much ram will
typically be in the 250000 range.
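.Pp
For example, to raise the limit on a machine with plenty of ram you would
add a line such as the following to
.Xr loader.conf 5
(the value is illustrative):
.Bd -literal -offset indent
kern.maxproc="500000"
.Ed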
.Pp
Note that the PID is currently limited to 6 digits, so a system cannot
have more than a million processes operating anyway (though the aggregate
number of threads can be far greater).
And yes, there is in fact no reason why a very well-endowed system
couldn't have that many processes.
.Pp
.Va kern.nbuf
sets how many filesystem buffers the kernel should cache.
Filesystem buffers can be up to 128KB each.
UFS typically uses an 8KB blocksize while HAMMER and HAMMER2 typically
use 64KB.  The system defaults usually suffice for this parameter.
Cached buffers represent wired physical memory so specifying a value
that is too large can result in excessive kernel memory use, and is also
not entirely necessary since the pages backing the buffers are also
cached by the VM page cache (which does not use wired memory).
The buffer cache significantly improves the hot path for cached file
accesses and dirty data.
.Pp
The kernel reserves (128KB * nbuf) bytes of KVM.  The actual physical
memory use depends on the filesystem buffer size.
It is generally more flexible to manage the filesystem cache via
.Va kern.maxfiles
than via
.Va kern.nbuf ,
but situations do arise where you might want to increase or decrease
the latter.
.Pp
The
.Va kern.dfldsiz
and
.Va kern.dflssiz
tunables set the default soft limits for process data and stack size
respectively.
Processes may increase these up to the hard limits by calling
.Xr setrlimit 2 .
The
.Va kern.maxdsiz ,
.Va kern.maxssiz ,
and
.Va kern.maxtsiz
tunables set the hard limits for process data, stack, and text size
respectively; processes may not exceed these limits.
The
.Va kern.sgrowsiz
tunable controls how much the stack segment will grow when a process
needs to allocate more stack.
.Pp
.Va kern.ipc.nmbclusters
and
.Va kern.ipc.nmbjclusters
may be adjusted to increase the number of network mbufs the system is
willing to allocate.
Each normal cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
buffers.
Each 'j' cluster is typically 4KB, so a value of 1024 represents 4M of
kernel memory.
You can do a simple calculation to figure out how many you need but
keep in mind that TCP buffer sizing is now more dynamic than it used to
be.
.Pp
The defaults usually suffice but you may want to bump them up on service-heavy
machines.
Modern machines often need a large number of mbufs to operate services
efficiently; values of 65536, even upwards of 262144 or more, are common.
If you are running a server, it is better to be generous than to be frugal.
Remember the memory calculation though.
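.Pp
As a sketch of the calculation and the corresponding
.Xr loader.conf 5
entries (values illustrative):
.Bd -literal -offset indent
# 65536 x 2KB = ~128MB of kernel memory for normal clusters
kern.ipc.nmbclusters="65536"
# 16384 x 4KB = ~64MB of kernel memory for jumbo clusters
kern.ipc.nmbjclusters="16384"
.Ed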
.Pp
Under no circumstances
should you specify an arbitrarily high value for these parameters, as it could
lead to a boot-time crash.
The
.Fl m
option to
.Xr netstat 1
may be used to observe network cluster use.
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large-scale system.
In order to change these options you need to be
able to compile a new kernel from source.
The
.Xr config 8
manual page and the handbook are good starting points for learning how to
do this.
Generally speaking, removing options to trim the size of the kernel
is not going to save very much memory on a modern system.
In the grand scheme of things, saving a megabyte or two is in the noise
on a system that likely has multiple gigabytes of memory.
.Pp
If your motherboard is AHCI-capable then we strongly recommend turning
on AHCI mode in the BIOS if it is not already the default.
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU or moving to
an SMP motherboard (multiple CPU's), or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
If your system
is paging to swap a lot you need to consider adding more memory.
If your
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
.Xr systat 1
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
.Pp
Finally, you might run out of network suds.
Optimize the network path
as much as possible.
If you are operating a machine as a router you may need to
set up a
.Xr pf 4
firewall (also see
.Xr firewall 7 ) .
.Dx
has a very good fair-share queueing algorithm for QOS in
.Xr pf 4 .
.Sh BULK BUILDING MACHINE SETUP
Generally speaking memory is at a premium when doing bulk compiles.
Machines dedicated to bulk building usually reduce
.Va kern.maxvnodes
to 1000000 (1 million) vnodes or lower.  Don't get too cocky here; this
parameter should never be reduced below around 100000 on reasonably well
endowed machines.
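.Pp
On such a machine the adjustment is typically just a
.Xr sysctl.conf 5
entry, per the guidance above:
.Bd -literal -offset indent
kern.maxvnodes=1000000
.Ed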
.Pp
Bulk build setups also often benefit from a relatively large amount
of SSD swap, allowing the system to 'burst' high-memory-usage situations
while still maintaining optimal concurrency for other periods during the
build which do not use as much run-time memory and prefer more parallelism.
.Sh SOURCE OF KERNEL MEMORY USAGE
The primary sources of kernel memory usage are:
.Bl -tag -width ".Va kern.maxvnodes"
.It Va kern.maxvnodes
The maximum number of cached vnodes in the system.
These can eat quite a bit of kernel memory, primarily due to auxiliary
structures tracked by the HAMMER filesystem.
It is relatively easy to configure a smaller value, but we do not
recommend reducing this parameter below 100000.
Smaller values directly impact the number of discrete files the
kernel can cache data for at once.
.It Va kern.ipc.nmbclusters , Va kern.ipc.nmbjclusters
Calculate approximately 2KB per normal cluster and 4KB per jumbo
cluster.
Do not make these values too low or you risk deadlocking the network
stack.
.It Va kern.nbuf
The number of filesystem buffers managed by the kernel.
The kernel wires the underlying cached VM pages, typically 8KB (UFS) or
64KB (HAMMER) per buffer.
.It swap/swapcache
Swap memory requires approximately 1MB of physical ram for each 1GB
of swap space.
When swapcache is used, additional memory may be required to keep
VM objects around longer (only really reducible by reducing the
value of
.Va kern.maxvnodes ,
which you can do post-boot if you desire).
.It tmpfs
Tmpfs is very useful but keep in mind that while the file data itself
is backed by swap, the meta-data (the directory topology) requires
wired kernel memory.
.It mmu page tables
Even though the underlying data pages themselves can be paged to swap,
the page tables are usually wired into memory.
This can create problems when a large number of processes are mmap()ing
very large files.
Sometimes turning on
.Va machdep.pmap_mmu_optimize
suffices to reduce overhead.
Page table kernel memory use can be observed by using 'vmstat -z'.
.It Va kern.ipc.shm_use_phys
It is sometimes necessary to force shared memory to use physical memory
when running a large database which uses shared memory to implement its
own data caching.
The use of sysv shared memory in this regard allows the database to
distinguish between data which it knows it can access instantly (i.e.
without even having to page-in from swap) versus data which it might require
an I/O to fetch.
.Pp
If you use this feature be very careful with regards to the database's
shared memory configuration as you will be wiring the memory.
.El
.Sh SEE ALSO
.Xr netstat 1 ,
.Xr systat 1 ,
.Xr dm 4 ,
.Xr dummynet 4 ,
.Xr nata 4 ,
.Xr pf 4 ,
.Xr login.conf 5 ,
.Xr pf.conf 5 ,
.Xr rc.conf 5 ,
.Xr sysctl.conf 5 ,
.Xr firewall 7 ,
.Xr hier 7 ,
.Xr boot 8 ,
.Xr ccdconfig 8 ,
.Xr config 8 ,
.Xr disklabel 8 ,
.Xr fsck 8 ,
.Xr ifconfig 8 ,
.Xr ipfw 8 ,
.Xr loader 8 ,
.Xr mount 8 ,
.Xr newfs 8 ,
.Xr route 8 ,
.Xr sysctl 8 ,
.Xr tunefs 8
.Sh HISTORY
The
.Nm
manual page was inherited from
.Fx
and first appeared in
.Fx 4.3 ,
May 2001.
.Sh AUTHORS
The
.Nm
manual page was originally written by
.An Matthew Dillon .