1.\"	$OpenBSD: pmap.9,v 1.14 2014/11/16 12:31:01 deraadt Exp $
2.\"
3.\" Copyright (c) 2001, 2002, 2003 CubeSoft Communications, Inc.
4.\" <http://www.csoft.org>
5.\"
6.\" Redistribution and use in source and binary forms, with or without
7.\" modification, are permitted provided that the following conditions
8.\" are met:
9.\" 1. Redistributions of source code must retain the above copyright
10.\"    notice, this list of conditions and the following disclaimer.
11.\" 2. Redistributions in binary form must reproduce the above copyright
12.\"    notice, this list of conditions and the following disclaimer in the
13.\"    documentation and/or other materials provided with the distribution.
14.\"
15.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
16.\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
17.\" WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
18.\" ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,
19.\" INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
20.\" (INCLUDING BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
21.\" SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
22.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
23.\" STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
24.\" IN ANY WAY OUT OF THE USE OF THIS SOFTWARE EVEN IF ADVISED OF THE
25.\" POSSIBILITY OF SUCH DAMAGE.
26.\"
.Dd $Mdocdate: November 16 2014 $
.Dt PMAP 9
.Os
.Sh NAME
.Nm pmap
.Nd machine dependent interface to the MMU
.Sh SYNOPSIS
.In machine/pmap.h
.Sh DESCRIPTION
The architecture-dependent
.Nm
module describes how user process and kernel virtual addresses are mapped
to the physical addresses of main memory,
providing the machine-dependent translation and access tables that are used
directly or indirectly by the memory-management hardware.
The
.Nm
layer can be viewed as a large array of mapping entries, indexed by
virtual address, that yield a physical address and a set of flags.
These flags describe
the page's protection, whether the page has been referenced or modified,
and other characteristics.
.Pp
The
.Nm
interface is consistent across all platforms and hides the way page mappings
are stored.
.Sh INITIALIZATION
.nr nS 1
.Ft void
.Fn pmap_init "void"
.nr nS 0
.Pp
The
.Fn pmap_init
function is called from the machine-independent
.Xr uvm 9
initialization code, when the MMU is enabled.
.Sh PAGE MANAGEMENT
Modified/referenced information is only tracked for pages managed by
.Xr uvm 9
(pages for which a vm_page structure exists).
Only managed mappings of those pages have modified/referenced tracking.
The use of unmanaged mappings should be limited to code which may execute
in interrupt context (such as
.Xr malloc 9 )
or to enter mappings for physical addresses which are not managed by
.Xr uvm 9 .
This allows
.Nm
modules to avoid blocking interrupts when manipulating data structures or
holding locks.
Unmanaged mappings may only be entered into the kernel's virtual address space.
The modified/referenced bits must be tracked on a per-page basis, as they
are not attributes of a mapping, but attributes of a page.
Therefore, even after all mappings for a given page have been removed, the
modified/referenced bits for that page must be preserved.
The only time the modified/referenced bits may be cleared is when
.Xr uvm 9
explicitly calls the
.Fn pmap_clear_modify
and
.Fn pmap_clear_reference
functions.
These functions must also change any internal state necessary to detect
the page being modified or referenced again after the modified/referenced
state is cleared.
.Pp
Mappings entered by
.Fn pmap_enter
are managed; mappings entered by
.Fn pmap_kenter_pa
are not.
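.Pp
The following minimal sketch illustrates the distinction; the helper function,
its arguments, and the header list are illustrative assumptions rather than
part of the interface:
.Bd -literal -offset indent
#include <sys/param.h>
#include <sys/systm.h>
#include <uvm/uvm_extern.h>

/*
 * Sketch only: "upmap", "uva", "pg", "kva" and "dev_pa" are assumed
 * to be supplied by the caller; "dev_pa" is a physical address that
 * is not managed by uvm(9).
 */
void
example_mappings(pmap_t upmap, vaddr_t uva, struct vm_page *pg,
    vaddr_t kva, paddr_t dev_pa)
{
	/* Managed mapping: modified/referenced state of pg is tracked. */
	(void)pmap_enter(upmap, uva, VM_PAGE_TO_PHYS(pg),
	    PROT_READ, PROT_READ);

	/* Unmanaged mapping: kernel address space only, no tracking. */
	pmap_kenter_pa(kva, dev_pa, PROT_READ | PROT_WRITE);
	pmap_update(pmap_kernel());	/* required after pmap_kenter_pa() */
}
.Ed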
.Sh MAPPING ALLOCATION
.nr nS 1
.Ft int
.Fn pmap_enter "pmap_t pmap" "vaddr_t va" "paddr_t pa" "vm_prot_t prot" \
               "int flags"
.Ft void
.Fn pmap_kenter_pa "vaddr_t va" "paddr_t pa" "vm_prot_t prot"
.Ft void
.Fn pmap_remove "pmap_t pmap" "vaddr_t sva" "vaddr_t eva"
.Ft void
.Fn pmap_kremove "vaddr_t va" "vsize_t size"
.nr nS 0
.Pp
The
.Fn pmap_enter
function creates a managed mapping for physical page
.Fa pa
at the specified virtual address
.Fa va
in the target physical map
.Fa pmap
with protection specified by
.Fa prot :
.Bl -tag -width "PROT_WRITE"
.It PROT_READ
The mapping must allow reading.
.It PROT_WRITE
The mapping must allow writing.
.It PROT_EXEC
The page mapped contains instructions that will be executed by the
processor.
.El
.Pp
The
.Fa flags
argument contains protection bits (the same bits used in the
.Fa prot
argument) indicating the type of access that caused the mapping to
be created.
This information may be used to seed modified/referenced
information for the page being mapped, possibly avoiding redundant
faults on platforms that track modified/referenced information in
software.
Other information provided by
.Fa flags :
.Bl -tag -width "PMAP_CANFAIL"
.It PMAP_WIRED
The mapping being created is a wired mapping.
.It PMAP_CANFAIL
The call to
.Fn pmap_enter
is allowed to fail.
If this flag is not set, and the
.Fn pmap_enter
call is unable to create the mapping, perhaps due to insufficient
resources, the
.Nm
module must panic.
.El
.Pp
The access type provided in the
.Fa flags
argument will never exceed the protection specified by
.Fa prot .
.Pp
The
.Fn pmap_enter
function is called by the fault routine to establish a mapping for
the page being faulted in.
If
.Fn pmap_enter
is called to enter a mapping at a virtual address for which a mapping
already exists, the previous mapping must be invalidated.
.Fn pmap_enter
is sometimes called to change the protection for a pre-existing mapping,
or to change the
.Dq wired
attribute for a pre-existing mapping.
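.Pp
A fault-handler style call, using PMAP_CANFAIL so that a resource shortage is
reported to the caller instead of causing a panic, might look like the
following sketch (the helper function and its arguments are assumptions made
for illustration):
.Bd -literal -offset indent
#include <sys/param.h>
#include <sys/systm.h>
#include <uvm/uvm_extern.h>

/*
 * Sketch only: map the faulted page "pg" writable at "va" in "pmap".
 * PROT_WRITE is passed both as the protection and, in the flags, as
 * the access type that caused the fault; PMAP_CANFAIL allows the
 * caller to retry later instead of panicking.
 */
int
example_fault_enter(pmap_t pmap, vaddr_t va, struct vm_page *pg)
{
	return (pmap_enter(pmap, va, VM_PAGE_TO_PHYS(pg),
	    PROT_READ | PROT_WRITE, PROT_WRITE | PMAP_CANFAIL));
}
.Ed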
.Pp
The
.Fn pmap_kenter_pa
function creates an unmanaged mapping of physical address
.Fa pa
at the specified virtual address
.Fa va
with the protection specified by
.Fa prot .
.Pp
The
.Fn pmap_remove
function removes all mappings in the range of virtual addresses
.Fa sva
to
.Fa eva
from
.Fa pmap ,
assuming proper alignment.
.Fn pmap_remove
is called during an unmap
operation to remove low-level machine-dependent mappings.
.Pp
The
.Fn pmap_kremove
function removes an unmanaged mapping at virtual address
.Fa va
of size
.Fa size .
.Pp
A call to
.Fn pmap_update
must be made after
.Fn pmap_kenter_pa
or
.Fn pmap_kremove
to notify the
.Nm
layer that the mappings need to be made correct.
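.Pp
For example, a sketch of mapping a range of device memory into the kernel
address space and later removing it again could look as follows; the kernel
virtual address and the device physical address are assumed to have been
obtained elsewhere:
.Bd -literal -offset indent
#include <sys/param.h>
#include <sys/systm.h>
#include <uvm/uvm_extern.h>

/*
 * Sketch only: enter unmanaged, kernel-only mappings for "size" bytes
 * of device memory at physical address "pa", then tear them down.
 * "kva" is assumed to be a page-aligned kernel virtual address and
 * "size" a multiple of PAGE_SIZE.
 */
void
example_kenter(vaddr_t kva, paddr_t pa, vsize_t size)
{
	vsize_t off;

	for (off = 0; off < size; off += PAGE_SIZE)
		pmap_kenter_pa(kva + off, pa + off, PROT_READ | PROT_WRITE);
	pmap_update(pmap_kernel());

	/* ... use the mapping ... */

	pmap_kremove(kva, size);
	pmap_update(pmap_kernel());
}
.Ed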
.Sh ACCESS PROTECTION
.nr nS 1
.Ft void
.Fn pmap_unwire "pmap_t pmap" "vaddr_t va"
.Ft void
.Fn pmap_protect "pmap_t pmap" "vaddr_t sva" "vaddr_t eva" "vm_prot_t prot"
.Ft void
.Fn pmap_page_protect "struct vm_page *pg" "vm_prot_t prot"
.nr nS 0
.Pp
The
.Fn pmap_unwire
function clears the wired attribute for a map/virtual-address pair.
The mapping must already exist in
.Fa pmap .
.Pp
The
.Fn pmap_protect
function sets the protection on the range of virtual addresses
.Fa sva
to
.Fa eva
in
.Fa pmap .
.Pp
The
.Fn pmap_protect
function is called during a copy-on-write operation to write protect
copy-on-write memory.
.Pp
The
.Fn pmap_page_protect
function sets the permission for all mappings of page
.Fa pg .
The
.Fn pmap_page_protect
function is called before a pageout operation to ensure that all pmap
references to a page are removed.
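.Pp
For illustration, the sketch below first revokes write access to every
existing mapping of a page (as done for copy-on-write) and then removes all
of its mappings (as done before pageout); the helper function is an
assumption, not part of the interface:
.Bd -literal -offset indent
#include <sys/param.h>
#include <sys/systm.h>
#include <uvm/uvm_extern.h>

/* Sketch only: "pg" is a managed page supplied by the caller. */
void
example_page_protect(struct vm_page *pg)
{
	/* Downgrade every mapping of pg to read-only. */
	pmap_page_protect(pg, PROT_READ);

	/* Remove every remaining mapping of pg. */
	pmap_page_protect(pg, PROT_NONE);
}
.Ed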
.Sh PHYSICAL PAGE-USAGE INFORMATION
.nr nS 1
.Ft boolean_t
.Fn pmap_is_modified "struct vm_page *pg"
.Ft boolean_t
.Fn pmap_clear_modify "struct vm_page *pg"
.Ft boolean_t
.Fn pmap_is_referenced "struct vm_page *pg"
.Ft boolean_t
.Fn pmap_clear_reference "struct vm_page *pg"
.nr nS 0
.Pp
The
.Fn pmap_is_modified
and
.Fn pmap_clear_modify
functions read and clear, respectively, the modified bit of the specified
physical page
.Fa pg .
The
.Fn pmap_is_referenced
and
.Fn pmap_clear_reference
functions read and clear, respectively, the referenced bit of the specified
physical page
.Fa pg .
.Pp
The
.Fn pmap_is_referenced
and
.Fn pmap_is_modified
functions are called by the pagedaemon when looking for pages to free.
The
.Fn pmap_clear_reference
and
.Fn pmap_clear_modify
functions are called by the pagedaemon to help identify pages
that are no longer in demand.
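.Pp
A pagedaemon-style check might combine these calls as in the following
sketch; the helper function and its three-way classification are
illustrative assumptions only:
.Bd -literal -offset indent
#include <sys/param.h>
#include <sys/systm.h>
#include <uvm/uvm_extern.h>

/* Sketch only: 0 = recently used, 1 = reclaimable, 2 = needs cleaning. */
int
example_page_state(struct vm_page *pg)
{
	if (pmap_is_referenced(pg)) {
		/* Recently used: clear the bit and start tracking anew. */
		pmap_clear_reference(pg);
		return (0);
	}
	if (pmap_is_modified(pg))
		return (2);	/* dirty: must be cleaned before reuse */
	return (1);		/* clean and unreferenced: reclaimable */
}
.Ed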
.Sh PHYSICAL PAGE INITIALIZATION
.nr nS 1
.Ft void
.Fn pmap_copy_page "struct vm_page *src" "struct vm_page *dst"
.Ft void
.Fn pmap_zero_page "struct vm_page *page"
.nr nS 0
.Pp
The
.Fn pmap_copy_page
function copies the content of the physical page
.Fa src
to physical page
.Fa dst .
.Pp
The
.Fn pmap_zero_page
function fills
.Fa page
with zeroes.
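.Pp
A typical use is zero-filling a newly allocated page or duplicating a page,
for instance during a copy-on-write fault, as in this sketch (allocation of
the pages themselves is assumed to happen elsewhere):
.Bd -literal -offset indent
#include <sys/param.h>
#include <sys/systm.h>
#include <uvm/uvm_extern.h>

/* Sketch only: prepare "dst" as a zero-filled page or a copy of "src". */
void
example_page_init(struct vm_page *dst, struct vm_page *src)
{
	if (src == NULL)
		pmap_zero_page(dst);
	else
		pmap_copy_page(src, dst);
}
.Ed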
.Sh INTERNAL DATA STRUCTURE MANAGEMENT
.nr nS 1
.Ft pmap_t
.Fn pmap_create "void"
.Ft void
.Fn pmap_reference "pmap_t pmap"
.Ft void
.Fn pmap_destroy "pmap_t pmap"
.nr nS 0
.Pp
The
.Fn pmap_create
function creates an instance of the
.Em pmap
structure.
.Pp
The
.Fn pmap_reference
function increments the reference count on
.Fa pmap .
.Pp
The
.Fn pmap_destroy
function decrements the reference count on physical map
.Fa pmap
and retires it from service if the count drops to zero, assuming
it contains no valid mappings.
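.Pp
The typical life cycle of a pmap is sketched below; entering and removing
mappings, as well as error handling, are omitted:
.Bd -literal -offset indent
#include <sys/param.h>
#include <sys/systm.h>
#include <uvm/uvm_extern.h>

/* Sketch only: create a pmap, share it, then drop both references. */
void
example_pmap_lifecycle(void)
{
	pmap_t pmap;

	pmap = pmap_create();	/* initial reference */
	pmap_reference(pmap);	/* a second user of the same pmap */

	/* ... mappings are entered, used and removed here ... */

	pmap_destroy(pmap);	/* drops the second reference */
	pmap_destroy(pmap);	/* count reaches zero: pmap is freed */
}
.Ed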
.Sh OPTIONAL FUNCTIONS
.nr nS 1
.Ft vaddr_t
.Fn pmap_steal_memory "vsize_t size" "vaddr_t *vstartp" "vaddr_t *vendp"
.Ft vaddr_t
.Fn pmap_growkernel "vaddr_t maxkvaddr"
.Ft void
.Fn pmap_update "pmap_t pmap"
.Ft void
.Fn pmap_collect "pmap_t pmap"
.Ft void
.Fn pmap_virtual_space "vaddr_t *vstartp" "vaddr_t *vendp"
.Ft void
.Fn pmap_copy "pmap_t dst_pmap" "pmap_t src_pmap" "vaddr_t dst_addr" \
              "vsize_t len" "vaddr_t src_addr"
.nr nS 0
.Pp
Wired memory allocation before the virtual memory system is bootstrapped
is accomplished by the
.Fn pmap_steal_memory
function.
After that point, the kernel memory allocation routines should be used.
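.Pp
As an illustration only, early bootstrap code might obtain wired memory as
in the sketch below; whether
.Fn pmap_steal_memory
is provided, and by whom it is called, is machine dependent:
.Bd -literal -offset indent
#include <sys/param.h>
#include <sys/systm.h>
#include <uvm/uvm_extern.h>

/*
 * Sketch only: steal "size" bytes of wired memory before uvm(9) is
 * running.  The pmap module may update vstart and vend to reflect
 * the remaining kernel virtual address space.
 */
vaddr_t
example_early_alloc(vsize_t size)
{
	vaddr_t vstart, vend;

	return (pmap_steal_memory(round_page(size), &vstart, &vend));
}
.Ed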
.Pp
The
.Fn pmap_growkernel
function can preallocate kernel page tables to a specified virtual address.
.Pp
The
.Fn pmap_update
function notifies the
.Nm
module to force processing of all delayed actions for all pmaps.
.Pp
The
.Fn pmap_collect
function informs the
.Nm
module that the given
.Em pmap
is not expected to be used for some time, giving the
.Nm
module a chance to reclaim resources used by that pmap.
.Pp
The initial bounds of the kernel virtual address space are returned by
.Fn pmap_virtual_space
in
.Fa vstartp
and
.Fa vendp .
.Pp
The
.Fn pmap_copy
function copies the range of length
.Fa len
starting at
.Fa src_addr
in
.Fa src_pmap
to the range starting at
.Fa dst_addr
in
.Fa dst_pmap .
.Fn pmap_copy
is called during a
.Xr fork 2
operation to give the child process an initial set of low-level
mappings.
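.Pp
Since
.Fn pmap_copy
is optional, it may be implemented as a no-op; where it does something, a
caller duplicating an address range during
.Xr fork 2
might invoke it as in this sketch (the pmaps and the range are assumed to
come from the parent and child address spaces):
.Bd -literal -offset indent
#include <sys/param.h>
#include <sys/systm.h>
#include <uvm/uvm_extern.h>

/*
 * Sketch only: give the child pmap a head start by copying the
 * low-level mappings of one range from the parent pmap.
 */
void
example_fork_copy(pmap_t child, pmap_t parent, vaddr_t va, vsize_t len)
{
	pmap_copy(child, parent, va, len, va);
}
.Ed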
.Sh SEE ALSO
.Xr fork 2 ,
.Xr uvm 9
.Sh HISTORY
The
.Bx 4.4
.Nm
module is based on Mach 3.0.
The introduction of
.Xr uvm 9
left the
.Nm
interface unchanged for the most part.
.Sh BUGS
Ifdefs must be documented.
.Pp
.Fn pmap_update
should be mandatory.