1.\" Copyright (c) 2001 Matthew Dillon.  Terms and conditions are those of
2.\" the BSD Copyright as specified in the file "/usr/src/COPYRIGHT" in
3.\" the source tree.
4.\"
5.Dd August 13, 2017
6.Dt TUNING 7
7.Os
8.Sh NAME
9.Nm tuning
10.Nd performance tuning under DragonFly
11.Sh SYSTEM SETUP
Modern
.Dx
systems typically have just three partitions on the main drive.
In order, a UFS
.Pa /boot ,
.Pa swap ,
and a HAMMER
.Pa root .
The installer used to create separate PFSs for half a dozen directories,
but now it just puts (almost) everything in the root.
It will separate stuff that doesn't need to be backed up into a /build
subdirectory and create null-mounts for things like /usr/obj, but it
no longer creates separate PFSs for these.
If desired, you can make /build its own mount to separate out the
components of the filesystem which do not need to be persistent.
.Pp
Generally speaking the
.Pa /boot
partition should be 1GB in size.  This is the minimum recommended
size, giving you room for backup kernels and alternative boot schemes.
.Dx
always installs debug-enabled kernels and modules and these can take
up quite a bit of disk space (but will not take up any extra ram).
.Pp
In the old days we recommended that swap be sized to at least 2x main
memory.  These days swap is often used for other activities, including
.Xr tmpfs 5
and
.Xr swapcache 8 .
We recommend that swap be sized to the larger of 2x main memory or
1GB if you have a fairly small disk, and to 16GB or more if you have a
modestly endowed system.
If you have a modest SSD + large HDD combination, we recommend
a large dedicated swap partition on the SSD.  For example, if
you have a 128GB SSD and 2TB or more of HDD storage, dedicating
upwards of 64GB of the SSD to swap and using
.Xr swapcache 8
and
.Xr tmpfs 5
will significantly improve your HDD's performance.
.Pp
In an all-SSD or mostly-SSD system,
.Xr swapcache 8
is not normally used but you may still want to have a large swap
partition to support
.Xr tmpfs 5
use.
Our synth/poudriere build machines run with a 200GB
swap partition and use tmpfs for all the builder jails.  50-100 GB
is swapped out at the peak of the build.  As a result, actual
system storage bandwidth is minimized and performance increased.
.Pp
If you are on a minimally configured machine you may, of course,
configure far less swap or no swap at all but we recommend at least
some swap.
The kernel's VM paging algorithms are tuned to perform best when there is
swap space configured.
Configuring too little swap can lead to inefficiencies in the VM
page scanning code as well as create issues later on if you add
more memory to your machine, so don't be shy about it.
Swap is a good idea even if you don't think you will ever need it as it
allows the
machine to page out completely unused data and idle programs (like getty),
maximizing the ram available for your activities.
.Pp
If you intend to use the
.Xr swapcache 8
facility with a SSD + HDD combination we recommend configuring as much
swap space as you can on the SSD.
However, keep in mind that each 1GByte of swapcache requires around
1MByte of ram, so don't scale your swap beyond the equivalent ram
that you reasonably want to eat to support it.
.Pp
Finally, on larger systems with multiple drives, if the use
of SSD swap is not in the cards or if it is and you need higher-than-normal
swapcache bandwidth, you can configure swap on up to four drives and
the kernel will interleave the storage.
The swap partitions on the drives should be approximately the same size.
The kernel can handle arbitrary sizes but
internal data structures scale to 4 times the largest swap partition.
Keeping
the swap partitions near the same size will allow the kernel to optimally
stripe swap space across the N disks.
Do not worry about overdoing it a
little; swap space is the saving grace of
.Ux
and even if you do not normally use much swap, it can give you more time to
recover from a runaway program before being forced to reboot.
However, keep in mind that any sort of swap space failure can lock the
system up.
Most machines are set up with only one or two swap partitions.
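.Pp
For example, interleaving swap across two drives is just a matter of
listing both swap partitions in
.Xr fstab 5 ;
the device names below are only placeholders for whatever your drives
are actually named:
.Bd -literal -offset indent
# Device	Mountpoint	FStype	Options	Dump	Pass#
/dev/da0s1b	none		swap	sw	0	0
/dev/da1s1b	none		swap	sw	0	0
.Ed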
.Pp
Most
.Dx
systems have a single HAMMER root.
PFSs can be used to administratively separate domains for backup purposes
but tend to be a hassle otherwise so if you don't need the administrative
separation you don't really need to use multiple HAMMER PFSs.
All the PFSs share the same allocation layer so there is no longer a need
to size each individual mount.
Instead you should review the
.Xr hammer 8
manual page and use the 'hammer viconfig' facility to adjust snapshot
retention and other parameters.
By default
HAMMER keeps 60 days worth of snapshots.
Usually snapshots are not desired on PFSs such as
.Pa /usr/obj
or
.Pa /tmp
since data on these partitions cycles a lot.
.Pp
If a very large work area is desired it is often beneficial to
configure it as a separate HAMMER mount.  If it is integrated into
the root mount it should at least be its own HAMMER PFS.
We recommend naming the large work area
.Pa /build .
Similarly if a machine is going to have a large number of users
you might want to separate your
.Pa /home
out as well.
.Pp
A number of run-time
.Xr mount 8
options exist that can help you tune the system.
The most obvious and most dangerous one is
.Cm async .
Do not ever use it; it is far too dangerous.
A less dangerous and more
useful
.Xr mount 8
option is called
.Cm noatime .
.Ux
filesystems normally update the last-accessed time of a file or
directory whenever it is accessed.
However, this creates a massive burden on copy-on-write filesystems like
HAMMER, particularly when scanning the filesystem.
.Dx
currently defaults to disabling atime updates on HAMMER mounts.
It can be enabled by setting the
.Va vfs.hammer.noatime
tunable to 0 in
.Xr loader.conf 5
but we recommend leaving it disabled.
The lack of atime updates can create issues with certain programs,
such as those that detect whether unread mail is present, but
applications for the most part no longer depend on it.
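.Pp
As a sketch, re-enabling atime updates on HAMMER amounts to the
following line in
.Xr loader.conf 5
(though, as noted above, we recommend leaving them disabled):
.Bd -literal -offset indent
vfs.hammer.noatime="0"
.Ed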
.Sh SSD SWAP
The single most important thing you can do is have at least one
solid-state drive in your system, and configure your swap space
on that drive.
If you are using a combination of a smaller SSD and a much larger HDD,
you can use
.Xr swapcache 8
to automatically cache data from your HDD.
But even if you do not, having swap space configured on your SSD will
significantly improve performance under even modest paging loads.
It is particularly useful to configure a significant amount of swap
on a workstation (32GB or more is not uncommon) to handle bloated
leaky applications such as browsers.
.Sh SYSCTL TUNING
.Xr sysctl 8
variables permit system behavior to be monitored and controlled at
run-time.
Some sysctls simply report on the behavior of the system; others allow
the system behavior to be modified;
some may be set at boot time using
.Xr rc.conf 5 ,
but most will be set via
.Xr sysctl.conf 5 .
There are several hundred sysctls in the system, including many that appear
to be candidates for tuning but actually are not.
In this document we will only cover the ones that have the greatest effect
on the system.
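.Pp
As a quick illustration using a sysctl discussed below, the current value
can be read and changed at run-time with
.Xr sysctl 8 :
.Bd -literal -offset indent
sysctl vfs.write_behind
sysctl vfs.write_behind=0
.Ed
.Pp
The same
.Dq name=value
assignment placed in
.Xr sysctl.conf 5
is applied at every boot.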
.Pp
The
.Va kern.ipc.shm_use_phys
sysctl defaults to 1 (on) and may be set to 0 (off) or 1 (on).
Setting
this parameter to 1 will cause all System V shared memory segments to be
mapped to unpageable physical RAM.
This feature only has an effect if you
are either (A) mapping small amounts of shared memory across many (hundreds)
of processes, or (B) mapping large amounts of shared memory across any
number of processes.
This feature allows the kernel to remove a great deal
of internal memory management page-tracking overhead at the cost of wiring
the shared memory into core, making it unswappable.
.Pp
The
.Va vfs.write_behind
sysctl defaults to 1 (on).  This tells the filesystem to issue media
writes as full clusters are collected, which typically occurs when writing
large sequential files.  The idea is to avoid saturating the buffer
cache with dirty buffers when it would not benefit I/O performance.  However,
this may stall processes and under certain circumstances you may wish to turn
it off.
.Pp
The
.Va vfs.hirunningspace
sysctl determines how much outstanding write I/O may be queued to
disk controllers system wide at any given instant.  The default is
usually sufficient but on machines with lots of disks you may want to bump
it up to four or five megabytes.  Note that setting too high a value
(exceeding the buffer cache's write threshold) can lead to extremely
bad clustering performance.  Do not set this value arbitrarily high!  Also,
higher write queueing values may add latency to reads occurring at the same
time.
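.Pp
As an illustrative sketch only, and assuming the value is specified in
bytes, bumping the limit to roughly five megabytes via
.Xr sysctl.conf 5
would look like:
.Bd -literal -offset indent
vfs.hirunningspace=5242880
.Ed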
.Pp
The
.Va vfs.bufcache_bw
sysctl controls data cycling within the buffer cache.  I/O bandwidth less than
this specification (per second) will cycle into the much larger general
VM page cache while I/O bandwidth in excess of this specification will
be recycled within the buffer cache, reducing the load on the rest of
the VM system.
The default value is 200 megabytes (209715200), which means that the
system will try harder to cache data coming off a slower hard drive
and less hard to cache data coming off a fast SSD.
This parameter is particularly important if you have NVMe drives in
your system as these storage devices are capable of transferring
well over 2GBytes/sec into the system.
.Pp
There are various other buffer-cache and VM page cache related sysctls.
We do not recommend modifying their values.
.Pp
The
.Va net.inet.tcp.sendspace
and
.Va net.inet.tcp.recvspace
sysctls are of particular interest if you are running network intensive
applications.
They control the amount of send and receive buffer space
allowed for any given TCP connection.
However,
.Dx
now auto-tunes these parameters using a number of other related
sysctls (run 'sysctl net.inet.tcp' to get a list) and they usually
no longer need to be tuned manually.
We do not recommend
increasing or decreasing the defaults if you are managing a very large
number of connections.
Note that the routing table (see
.Xr route 8 )
can be used to introduce route-specific send and receive buffer size
defaults.
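.Pp
A purely illustrative example, assuming the traditional BSD
.Fl sendpipe
and
.Fl recvpipe
route metrics, which attaches larger buffer defaults to the existing
default route:
.Bd -literal -offset indent
route change default -sendpipe 131072 -recvpipe 131072
.Ed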
.Pp
As an additional management tool you can use pipes in your
firewall rules (see
.Xr ipfw 8 )
to limit the bandwidth going to or from particular IP blocks or ports.
For example, if you have a T1 you might want to limit your web traffic
to 70% of the T1's bandwidth in order to leave the remainder available
for mail and interactive use.
Normally a heavily loaded web server
will not introduce significant latencies into other services even if
the network link is maxed out, but enforcing a limit can smooth things
out and lead to longer term stability.
Many people also enforce artificial
bandwidth limitations in order to ensure that they are not charged for
using too much bandwidth.
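.Pp
A minimal sketch of the T1 example above; the rule number and exact
bandwidth are chosen purely for illustration:
.Bd -literal -offset indent
# create a pipe limited to roughly 70% of a T1 (1.544 Mbit/s)
ipfw pipe 1 config bw 1080Kbit/s
# push outgoing web traffic through the pipe
ipfw add 1000 pipe 1 tcp from any 80 to any out
.Ed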
.Pp
Setting the send or receive TCP buffer to values larger than 65535 will result
in only a marginal performance improvement unless both hosts support the
window scaling extension of the TCP protocol, which is controlled by the
.Va net.inet.tcp.rfc1323
sysctl.
These extensions should be enabled and the TCP buffer size should be set
to a value larger than 65536 in order to obtain good performance from
certain types of network links; specifically, gigabit WAN links and
high-latency satellite links.
RFC 1323 support is enabled by default.
.Pp
The
.Va net.inet.tcp.always_keepalive
sysctl determines whether or not the TCP implementation should attempt
to detect dead TCP connections by intermittently delivering
.Dq keepalives
on the connection.
By default, this is now enabled for all applications.
We do not recommend turning it off.
The extra network bandwidth is minimal and this feature will clean up
stalled and long-dead connections that might not otherwise be cleaned
up.
In the past people using dialup connections often did not want to
use this feature in order to be able to retain connections across
long disconnections, but these days the only default that makes
sense is for the feature to be turned on.
.Pp
The
.Va net.inet.tcp.delayed_ack
TCP feature is largely misunderstood.  Historically speaking this feature
was designed to allow the acknowledgement of transmitted data to be returned
along with the response.  For example, when you type over a remote shell
the acknowledgement of the character you send can be returned along with the
data representing the echo of the character.   With delayed acks turned off
the acknowledgement may be sent in its own packet before the remote service
has a chance to echo the data it just received.  This same concept also
applies to any interactive protocol (e.g. SMTP, WWW, POP3) and can cut the
number of tiny packets flowing across the network in half.   The
.Dx
delayed-ack implementation also follows the TCP protocol rule that
at least every other packet be acknowledged even if the standard 100ms
timeout has not yet passed.  Normally the worst a delayed ack can do is
slightly delay the teardown of a connection, or slightly delay the ramp-up
of a slow-start TCP connection.  While we are not certain, we believe that
the several FAQs related to packages such as SAMBA and SQUID which advise
turning off delayed acks are referring to the slow-start issue.
.Pp
The
.Va net.inet.tcp.inflight_enable
sysctl turns on bandwidth delay product limiting for all TCP connections.
This feature is now turned on by default and we recommend that it be
left on.
It will slightly reduce the maximum bandwidth of a connection but the
benefits of the feature in reducing packet backlogs at router constriction
points are enormous.
These benefits make it a whole lot easier for router algorithms to manage
QOS for multiple connections.
The limiting feature reduces the amount of data built up in intermediate
router and switch packet queues as well as reduces the amount of data built
up in the local host's interface queue.  With fewer packets queued up,
interactive connections, especially over slow modems, will also be able
to operate with lower round trip times.  However, note that this feature
only affects data transmission (uploading / server-side).  It does not
affect data reception (downloading).
.Pp
The system will attempt to calculate the bandwidth delay product for each
connection and limit the amount of data queued to the network to just the
amount required to maintain optimum throughput.  This feature is useful
if you are serving data over modems, GigE, or high speed WAN links (or
any other link with a high bandwidth*delay product), especially if you are
also using window scaling or have configured a large send window.
.Pp
For production use setting
.Va net.inet.tcp.inflight_min
to at least 6144 may be beneficial.  Note, however, that setting high
minimums may effectively disable bandwidth limiting depending on the link.
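.Pp
A sketch of that suggestion as a
.Xr sysctl.conf 5
entry:
.Bd -literal -offset indent
net.inet.tcp.inflight_min=6144
.Ed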
.Pp
Adjusting
.Va net.inet.tcp.inflight_stab
is not recommended.
This parameter defaults to 50, representing +5% fudge when calculating the
bwnd from the bw.  This fudge is on top of an additional fixed +2*maxseg
added to bwnd.  The fudge factor is required to stabilize the algorithm
at very high speeds while the fixed 2*maxseg stabilizes the algorithm at
low speeds.  If you increase this value, excessive packet buffering may occur.
.Pp
The
.Va net.inet.ip.portrange.*
sysctls control the port number ranges automatically bound to TCP and UDP
sockets.  There are three ranges: a low range, a default range, and a
high range, selectable via an IP_PORTRANGE
.Fn setsockopt
call.
Most network programs use the default range which is controlled by
.Va net.inet.ip.portrange.first
and
.Va net.inet.ip.portrange.last ,
which default to 1024 and 5000 respectively.  Bound port ranges are
used for outgoing connections and it is possible to run the system out
of ports under certain circumstances.  This most commonly occurs when you are
running a heavily loaded web proxy.  The port range is not an issue
when running servers which handle mainly incoming connections, such as a
normal web server, or which have a limited number of outgoing connections,
such as a mail relay.  For situations where you may run yourself out of
ports we recommend increasing
.Va net.inet.ip.portrange.last
modestly.  A value of 10000 or 20000 or 30000 may be reasonable.  You should
also consider firewall effects when changing the port range.  Some firewalls
may block large ranges of ports (usually low-numbered ports) and expect systems
to use higher ranges of ports for outgoing connections.  For this reason
we do not recommend that
.Va net.inet.ip.portrange.first
be lowered.
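.Pp
For example, a modest bump as described above could be made persistent via
.Xr sysctl.conf 5
(the exact value is a judgment call):
.Bd -literal -offset indent
net.inet.ip.portrange.last=20000
.Ed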
.Pp
The
.Va kern.ipc.somaxconn
sysctl limits the size of the listen queue for accepting new TCP connections.
The default value of 128 is typically too low for robust handling of new
connections in a heavily loaded web server environment.
For such environments,
we recommend increasing this value to 1024 or higher.
The service daemon
may itself limit the listen queue size (e.g.\&
.Xr sendmail 8 ,
apache) but will
often have a directive in its configuration file to adjust the queue size up.
Larger listen queues also do a better job of fending off denial of service
attacks.
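.Pp
Following the recommendation above, a
.Xr sysctl.conf 5
entry might read:
.Bd -literal -offset indent
kern.ipc.somaxconn=1024
.Ed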
.Pp
The
.Va kern.maxvnodes
specifies how many vnodes and related file structures the kernel will
cache.
The kernel uses a very generous default for this parameter based on
available physical memory.
You generally do not want to mess with this parameter as it directly
affects how well the kernel can cache not only file structures but also
the underlying file data.
.Pp
However, situations may crop up where caching too many vnodes can wind
up eating too much kernel memory due to filesystem resources that are
also associated with the vnodes.
You can lower this value if kernel memory use is higher than you would like.
It is, in fact, possible for the system to have more files open than the
value of this tunable, but as files are closed the system will try to
reduce the actual number of cached vnodes to match this value.
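.Pp
As a purely illustrative sketch, lowering the vnode cache might look like
this in
.Xr sysctl.conf 5
(do not go below 100000, as discussed later in this page):
.Bd -literal -offset indent
kern.maxvnodes=200000
.Ed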
.Pp
The
.Va kern.maxfiles
sysctl determines how many open files the system supports.
The default is
typically based on available physical memory but you may need to bump
it up if you are running databases or large descriptor-heavy daemons.
The read-only
.Va kern.openfiles
sysctl may be interrogated to determine the current number of open files
on the system.
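.Pp
For example, you might compare the current count against the limit and,
if needed, raise the limit with a value that is purely illustrative here:
.Bd -literal -offset indent
# compare current usage against the limit
sysctl kern.openfiles kern.maxfiles
# illustrative sysctl.conf entry to raise the limit
kern.maxfiles=262144
.Ed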
.Pp
The
.Va vm.swap_idle_enabled
sysctl is useful in large multi-user systems where you have lots of users
entering and leaving the system and lots of idle processes.
Such systems
tend to generate a great deal of continuous pressure on free memory reserves.
Turning this feature on and adjusting the swapout hysteresis (in idle
seconds) via
.Va vm.swap_idle_threshold1
and
.Va vm.swap_idle_threshold2
allows you to depress the priority of pages associated with idle processes
more quickly than the normal pageout algorithm.
This gives a helping hand
to the pageout daemon.
Do not turn this option on unless you need it,
because the tradeoff you are making is to essentially pre-page memory sooner
rather than later, eating more swap and disk bandwidth.
In a small system
this option will have a detrimental effect but in a large system that is
already doing moderate paging this option allows the VM system to stage
whole processes into and out of memory more easily.
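.Pp
A sketch of what enabling this might look like in
.Xr sysctl.conf 5 ;
the thresholds are idle seconds and the numbers here are only placeholders:
.Bd -literal -offset indent
vm.swap_idle_enabled=1
vm.swap_idle_threshold1=2
vm.swap_idle_threshold2=10
.Ed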
.Sh LOADER TUNABLES
Some aspects of the system behavior may not be tunable at runtime because
memory allocations they perform must occur early in the boot process.
To change loader tunables, you must set their values in
.Xr loader.conf 5
and reboot the system.
.Pp
.Va kern.maxusers
is automatically sized at boot based on the amount of memory available in
the system.  The value can be read (but not written) via sysctl.
.Pp
You can change this value as a loader tunable if the default resource
limits are not sufficient.
This tunable works primarily by adjusting
.Va kern.maxproc ,
so you can opt to override that instead.
It is generally easier to formulate an adjustment to
.Va kern.maxproc
instead of
.Va kern.maxusers .
.Pp
.Va kern.maxproc
controls most kernel auto-scaling components.  If kernel resource limits
are not scaled high enough, setting this tunable to a higher value is
usually sufficient.
Generally speaking you will want to set this tunable to the upper limit
for the number of process threads you want the kernel to be able to handle.
The kernel may still decide to cap maxproc at a lower value if there is
insufficient ram to scale resources as desired.
.Pp
Only set this tunable if the defaults are not sufficient.
Do not use this tunable to try to trim kernel resource limits; you will
not actually save much memory by doing so and you will leave the system
more vulnerable to DOS attacks and runaway processes.
.Pp
Setting this tunable will scale the maximum number of processes, pipes and
sockets, total open files the system can support, and increase mbuf
and mbuf-cluster limits.  These other elements can also be separately
overridden to fine-tune the setup.  We recommend setting this tunable
first to create a baseline.
.Pp
Setting a high value presumes that you have enough physical memory to
support the resource utilization.  For example, your system would need
approximately 128GB of ram to reasonably support a maxproc value of
4 million (4000000).  The default maxproc given that much ram will
typically be in the 250000 range.
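.Pp
A hypothetical
.Xr loader.conf 5
override, with the value chosen purely for illustration, would look like:
.Bd -literal -offset indent
kern.maxproc="500000"
.Ed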
.Pp
Note that the PID is currently limited to 6 digits, so a system cannot
have more than a million processes operating anyway (though the aggregate
number of threads can be far greater).
And yes, there is in fact no reason why a very well-endowed system
couldn't have that many processes.
.Pp
.Va kern.nbuf
sets how many filesystem buffers the kernel should cache.
Filesystem buffers can be up to 128KB each.
UFS typically uses an 8KB blocksize while HAMMER typically uses 64KB.
The defaults usually suffice.
The cached buffers represent wired physical memory so specifying a value
that is too large can result in excessive kernel memory use, and is also
not entirely necessary since the pages backing the buffers are also
cached by the VM page cache (which does not use wired memory).
The buffer cache significantly improves the hot path for cached file
accesses and dirty data.
.Pp
The kernel reserves (128KB * nbuf) bytes of KVM.  The actual physical
memory use depends on the filesystem buffer size.
.Pp
The
.Va kern.dfldsiz
and
.Va kern.dflssiz
tunables set the default soft limits for process data and stack size
respectively.
Processes may increase these up to the hard limits by calling
.Xr setrlimit 2 .
The
.Va kern.maxdsiz ,
.Va kern.maxssiz ,
and
.Va kern.maxtsiz
tunables set the hard limits for process data, stack, and text size
respectively; processes may not exceed these limits.
The
.Va kern.sgrowsiz
tunable controls how much the stack segment will grow when a process
needs to allocate more stack.
.Pp
.Va kern.ipc.nmbclusters
and
.Va kern.ipc.nmbjclusters
may be adjusted to increase the number of network mbufs the system is
willing to allocate.
Each normal cluster represents approximately 2K of memory,
so a value of 1024 represents 2M of kernel memory reserved for network
buffers.
Each 'j' cluster is typically 4KB, so a value of 1024 represents 4M of
kernel memory.
You can do a simple calculation to figure out how many you need but
keep in mind that TCP buffer sizing is now more dynamic than it used to
be.
.Pp
The defaults usually suffice but you may want to bump them up on
service-heavy machines.
Modern machines often need a large number of mbufs to operate services
efficiently; values of 65536, and even 262144 or more, are common.
If you are running a server, it is better to be generous than to be frugal.
Remember the memory calculation though.
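.Pp
A sketch of a more generous configuration in
.Xr loader.conf 5 ,
using the larger figures mentioned above (the split between the two
values is illustrative; adjust it to your own memory calculation):
.Bd -literal -offset indent
kern.ipc.nmbclusters="262144"
kern.ipc.nmbjclusters="65536"
.Ed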
.Pp
Under no circumstances
should you specify an arbitrarily high value for these parameters, as doing
so could lead to a boot-time crash.
The
.Fl m
option to
.Xr netstat 1
may be used to observe network cluster use.
.Sh KERNEL CONFIG TUNING
There are a number of kernel options that you may have to fiddle with in
a large-scale system.
In order to change these options you need to be
able to compile a new kernel from source.
The
.Xr config 8
manual page and the handbook are good starting points for learning how to
do this.
Generally speaking, removing options to trim the size of the kernel
is not going to save very much memory on a modern system.
In the grand scheme of things, saving a megabyte or two is in the noise
on a system that likely has multiple gigabytes of memory.
.Pp
If your motherboard is AHCI-capable then we strongly recommend turning
on AHCI mode in the BIOS if it is not already the default.
.Sh CPU, MEMORY, DISK, NETWORK
The type of tuning you do depends heavily on where your system begins to
bottleneck as load increases.
If your system runs out of CPU (idle times
are perpetually 0%) then you need to consider upgrading the CPU or moving to
an SMP motherboard (multiple CPUs), or perhaps you need to revisit the
programs that are causing the load and try to optimize them.
If your system
is paging to swap a lot you need to consider adding more memory.
If your
system is saturating the disk you typically see high CPU idle times and
total disk saturation.
.Xr systat 1
can be used to monitor this.
There are many solutions to saturated disks:
increasing memory for caching, mirroring disks, distributing operations across
several machines, and so forth.
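.Pp
For instance, one convenient way to watch paging and disk activity at a
one-second interval is:
.Bd -literal -offset indent
systat -vmstat 1
.Ed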
.Pp
Finally, you might run out of network suds.
Optimize the network path
as much as possible.
If you are operating a machine as a router you may need to
set up a
.Xr pf 4
firewall (also see
.Xr firewall 7 ) .
.Dx
has a very good fair-share queueing algorithm for QOS in
.Xr pf 4 .
.Sh SOURCE OF KERNEL MEMORY USAGE
The primary sources of kernel memory usage are:
.Bl -tag -width ".Va kern.maxvnodes"
.It Va kern.maxvnodes
The maximum number of cached vnodes in the system.
These can eat quite a bit of kernel memory, primarily due to auxiliary
structures tracked by the HAMMER filesystem.
It is relatively easy to configure a smaller value, but we do not
recommend reducing this parameter below 100000.
Smaller values directly impact the number of discrete files the
kernel can cache data for at once.
.It Va kern.ipc.nmbclusters , Va kern.ipc.nmbjclusters
Calculate approximately 2KB per normal cluster and 4KB per jumbo
cluster.
Do not make these values too low or you risk deadlocking the network
stack.
.It Va kern.nbuf
The number of filesystem buffers managed by the kernel.
The kernel wires the underlying cached VM pages, typically 8KB (UFS) or
64KB (HAMMER) per buffer.
.It swap/swapcache
Swap memory requires approximately 1MB of physical ram for each 1GB
of swap space.
When swapcache is used, additional memory may be required to keep
VM objects around longer (only really reducible by reducing the
value of
.Va kern.maxvnodes
which you can do post-boot if you desire).
.It tmpfs
Tmpfs is very useful but keep in mind that while the file data itself
is backed by swap, the meta-data (the directory topology) requires
wired kernel memory.
.It mmu page tables
Even though the underlying data pages themselves can be paged to swap,
the page tables are usually wired into memory.
This can create problems when a large number of processes are mmap()ing
very large files.
Sometimes turning on
.Va machdep.pmap_mmu_optimize
suffices to reduce overhead.
Page table kernel memory use can be observed by using 'vmstat -z'.
.It Va kern.ipc.shm_use_phys
It is sometimes necessary to force shared memory to use physical memory
when running a large database which uses shared memory to implement its
own data caching.
The use of sysv shared memory in this regard allows the database to
distinguish between data which it knows it can access instantly (i.e.
without even having to page in from swap) versus data which it might
require an I/O to fetch.
.Pp
If you use this feature be very careful with regard to the database's
shared memory configuration as you will be wiring the memory.
.El
.Sh SEE ALSO
.Xr netstat 1 ,
.Xr systat 1 ,
.Xr dm 4 ,
.Xr dummynet 4 ,
.Xr nata 4 ,
.Xr pf 4 ,
.Xr login.conf 5 ,
.Xr pf.conf 5 ,
.Xr rc.conf 5 ,
.Xr sysctl.conf 5 ,
.Xr firewall 7 ,
.Xr hier 7 ,
.Xr boot 8 ,
.Xr ccdconfig 8 ,
.Xr config 8 ,
.Xr disklabel 8 ,
.Xr fsck 8 ,
.Xr ifconfig 8 ,
.Xr ipfw 8 ,
.Xr loader 8 ,
.Xr mount 8 ,
.Xr newfs 8 ,
.Xr route 8 ,
.Xr sysctl 8 ,
.Xr tunefs 8
.Sh HISTORY
The
.Nm
manual page was inherited from
.Fx
and first appeared in
.Fx 4.3 ,
May 2001.
.Sh AUTHORS
The
.Nm
manual page was originally written by
.An Matthew Dillon .