
Searched refs:huge (Results 1 – 25 of 117) sorted by relevance


/linux/Documentation/admin-guide/mm/
hugetlbpage.rst:30 and surplus huge pages in the pool of huge pages of default size.
55 huge page from the pool of huge pages at fault time.
80 pages in the kernel's huge page pool. "Persistent" huge pages will be
93 Once a number of huge pages have been pre-allocated to the kernel huge page
103 Some platforms support multiple huge page sizes. To allocate huge pages
120 specific huge page size. Valid huge page sizes are architecture
176 huge page pool to 20, allocating or freeing huge pages, as required.
209 persistent huge page pool is exhausted. As these surplus huge pages become
226 of the in-use huge pages to surplus huge pages. This will occur even if
260 1GB and 2MB huge pages sizes. A 1GB huge page can be split into 512
[all …]
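The pool-sizing knobs excerpted above can be exercised with a short admin sketch; `/proc/sys/vm/nr_hugepages` and the `HugePages_*` counters in `/proc/meminfo` are the interfaces hugetlbpage.rst describes, but note this assumes root on a Linux system with huge page support and a default 2MB huge page size:

```shell
# Admin sketch (assumes root; default huge page size, typically 2MB).
# Grow the persistent huge page pool to 20 pages, as in the example
# quoted above, then inspect the pool counters.
echo 20 > /proc/sys/vm/nr_hugepages

# Pool state: total, free, reserved and surplus huge pages.
grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo

# Shrink the pool back to zero when done.
echo 0 > /proc/sys/vm/nr_hugepages
```

Writes to `nr_hugepages` may be satisfied only partially if memory is fragmented, so reading the counters back is the reliable check.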
transhuge.rst:11 using huge pages for the backing of virtual memory with huge pages
299 ``huge=``. It can have following values:
305 Do not allocate huge pages;
317 ``huge=never`` will not attempt to break up huge pages at all, just stop more
417 is incremented if kernel fails to split huge
421 is incremented when a huge page is put onto split
429 munmap() on part of huge page. It doesn't split huge page, only
435 the huge zero page, only its allocation.
448 for the huge page.
461 freed a huge page for use.
[all …]
concepts.rst:79 `huge`. Usage of huge pages significantly reduces pressure on TLB,
83 memory with the huge pages. The first one is `HugeTLB filesystem`, or
86 the memory and mapped using huge pages. The hugetlbfs is described at
89 Another, more recent, mechanism that enables use of the huge pages is
92 the system memory should and can be mapped by the huge pages, THP
201 buffer for DMA, or when THP allocates a huge page. Memory `compaction`
/linux/arch/powerpc/include/asm/nohash/32/
pte-8xx.h:124 unsigned long clr, unsigned long set, int huge);
137 int huge = psize > mmu_virtual_psize ? 1 : 0; in __ptep_set_access_flags() local
139 pte_update(vma->vm_mm, address, ptep, clr, set, huge); in __ptep_set_access_flags()
176 static inline int number_of_cells_per_pte(pmd_t *pmd, pte_basic_t val, int huge) in number_of_cells_per_pte() argument
178 if (!huge) in number_of_cells_per_pte()
189 unsigned long clr, unsigned long set, int huge) in pte_update() argument
197 num = number_of_cells_per_pte(pmd, new, huge); in pte_update()
/linux/tools/testing/selftests/mm/
charge_reserved_hugetlb.sh:54 if [[ -e /mnt/huge ]]; then
55 rm -rf /mnt/huge/*
56 umount /mnt/huge || echo error
57 rmdir /mnt/huge
262 if [[ -e /mnt/huge ]]; then
263 rm -rf /mnt/huge/*
264 umount /mnt/huge
265 rmdir /mnt/huge
292 mkdir -p /mnt/huge
293 mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
[all …]
run_vmtests.sh:67 test transparent huge pages
69 test hugetlbfs huge pages
119 for huge in -t -T "-H -m $hugetlb_mb"; do
131 $huge $test_cmd $write $share $num
/linux/Documentation/mm/
hugetlbfs_reserv.rst:14 of huge pages at mmap() time. The idea is that if there were not enough
15 huge pages to cover the mapping, the mmap() would fail. This was first
19 'reserve' huge pages at mmap() time to ensure that huge pages would be
35 huge pages are only available to the task which reserved them.
36 Therefore, the number of huge pages generally available is computed
50 There is one reserve map for each huge page mapping in the system.
75 The PagePrivate page flag is used to indicate that a huge page
76 reservation must be restored when the huge page is freed. More
77 details will be discussed in the "Freeing huge pages" section.
312 huge pages. If they can not be reserved, the mount fails.
[all …]
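The mount-time reservation behaviour quoted above (line 312: reservation failure makes the mount fail) can be observed from userspace; a hedged sketch, assuming root, a non-empty huge page pool, and the hypothetical mount point `/mnt/huge`:

```shell
# Sketch of hugetlbfs reservations (assumes root and at least two free
# 2MB huge pages in the pool). min_size reserves huge pages at mount
# time; if they cannot be reserved, the mount fails, mirroring the
# mmap()-time behaviour described in hugetlbfs_reserv.rst.
mkdir -p /mnt/huge
mount -t hugetlbfs -o min_size=4M none /mnt/huge

# The reserved count rises while the reservation is held.
grep HugePages_Rsvd /proc/meminfo

umount /mnt/huge
rmdir /mnt/huge
```

Reservations only guarantee availability; no huge page is consumed until a fault actually occurs.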
transhuge.rst:13 knowledge fall back to breaking huge pmd mapping into table of ptes and,
41 is complete, so they won't ever notice the fact the page is huge. But
57 Code walking pagetables but unaware about huge pmds can simply call
92 To make pagetable walks huge pmd aware, all you need to do is to call
94 mmap_lock in read (or write) mode to be sure a huge pmd cannot be
100 page table lock will prevent the huge pmd being converted into a
104 before. Otherwise, you can proceed to process the huge pmd and the
107 Refcounts and transparent huge pages
133 requests to split pinned huge pages: it expects page count to be equal to
zsmalloc.rst:157 per zspage. Any object larger than 3264 bytes is considered huge and belongs
159 in huge classes do not share pages).
162 for the huge size class and fewer huge classes overall. This allows for more
165 For zspage chain size of 8, huge class watermark becomes 3632 bytes:::
178 For zspage chain size of 16, huge class watermark becomes 3840 bytes:::
207 pages per zspage number of size classes (clusters) huge size class watermark
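The three watermarks quoted above (3264, 3632 and 3840 bytes for chain sizes 4, 8 and 16) follow a simple pattern: the largest class size at which more than one object still fits per page of the zspage, i.e. N*PAGE_SIZE/(N+1) rounded down to the size-class step. A sketch, assuming PAGE_SIZE=4096 and the 16-byte class step used by zsmalloc:

```shell
# Reconstruct the zsmalloc huge-class watermark for several zspage
# chain sizes (assumes PAGE_SIZE=4096 and a 16-byte size-class step).
PAGE_SIZE=4096
STEP=16
for n in 4 8 16; do
  # N pages hold N+1 objects at most at this size; round down to STEP.
  wm=$(( n * PAGE_SIZE / (n + 1) / STEP * STEP ))
  echo "chain=$n watermark=$wm"
done
# → chain=4 watermark=3264, chain=8 watermark=3632, chain=16 watermark=3840
```

This is why a larger chain size raises the huge-class watermark: more pages per zspage means objects just under a page can still share pages.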
arch_pgtable_helpers.rst:145 | pmd_set_huge | Creates a PMD huge mapping |
147 | pmd_clear_huge | Clears a PMD huge mapping |
201 | pud_set_huge | Creates a PUD huge mapping |
203 | pud_clear_huge | Clears a PUD huge mapping |
/linux/arch/powerpc/include/asm/book3s/64/
hash.h:161 pte_t *ptep, unsigned long pte, int huge);
168 int huge) in hash__pte_update() argument
186 if (!huge) in hash__pte_update()
191 hpte_need_flush(mm, addr, ptep, old, huge); in hash__pte_update()
radix.h:176 int huge) in radix__pte_update() argument
181 if (!huge) in radix__pte_update()
/linux/arch/powerpc/include/asm/nohash/
pgtable.h:7 unsigned long clr, unsigned long set, int huge);
51 unsigned long clr, unsigned long set, int huge) in pte_update() argument
65 if (!huge) in pte_update()
113 int huge = psize > mmu_virtual_psize ? 1 : 0; in __ptep_set_access_flags() local
115 pte_update(vma->vm_mm, address, ptep, 0, set, huge); in __ptep_set_access_flags()
/linux/Documentation/filesystems/
tmpfs.rst:112 configured with CONFIG_TRANSPARENT_HUGEPAGE and with huge supported for
117 huge=never Do not allocate huge pages. This is the default.
118 huge=always Attempt to allocate huge page every time a new page is needed.
119 huge=within_size Only allocate huge page if it will be fully within i_size.
121 huge=advise Only allocate huge page if requested with madvise(2).
126 be used to deny huge pages on all tmpfs mounts in an emergency, or to
127 force huge pages on all tmpfs mounts for testing.
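The four `huge=` values listed above are mount options; a sketch of applying them, assuming root, a kernel built with CONFIG_TRANSPARENT_HUGEPAGE, and the hypothetical mount point `/mnt/thp-tmp`:

```shell
# Sketch (assumes root and CONFIG_TRANSPARENT_HUGEPAGE). Allocate huge
# pages only when the file size covers a whole huge page:
mkdir -p /mnt/thp-tmp
mount -t tmpfs -o huge=within_size tmpfs /mnt/thp-tmp

# The policy can be changed on a live mount via remount:
mount -o remount,huge=never /mnt/thp-tmp
```

Per the note quoted above, the system-wide shmem_enabled knob can override these per-mount settings (deny/force) in an emergency or for testing.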
/linux/arch/loongarch/mm/
init.c:144 int huge = pmd_val(*pmd) & _PAGE_HUGE; in vmemmap_check_pmd()
146 if (huge) in vmemmap_check_pmd()
149 return huge; in vmemmap_check_pmd()
143 int huge = pmd_val(*pmd) & _PAGE_HUGE; vmemmap_check_pmd() local
/linux/Documentation/admin-guide/hw-vuln/
multihit.rst:81 * - KVM: Mitigation: Split huge pages
111 In order to mitigate the vulnerability, KVM initially marks all huge pages
125 The KVM hypervisor mitigation mechanism for marking huge pages as
134 non-executable huge pages in Linux kernel KVM module. All huge
/linux/Documentation/core-api/
pin_user_pages.rst:64 severely by huge pages, because each tail page adds a refcount to the
66 field, refcount overflows were seen in some huge page stress tests.
68 This also means that huge pages and large folios do not suffer
248 acquired since the system was powered on. For huge pages, the head page is
249 pinned once for each page (head page and each tail page) within the huge page.
250 This follows the same sort of behavior that get_user_pages() uses for huge
251 pages: the head page is refcounted once for each tail or head page in the huge
252 page, when get_user_pages() is applied to a huge page.
256 PAGE_SIZE granularity, even if the original pin was applied to a huge page.
/linux/arch/alpha/lib/
ev6-clear_user.S:86 subq $1, 16, $4 # .. .. .. E : If < 16, we can not use the huge loop
87 and $16, 0x3f, $2 # .. .. E .. : Forward work for huge loop
88 subq $2, 0x40, $3 # .. E .. .. : bias counter (huge loop)
/linux/arch/powerpc/mm/book3s64/
hash_tlb.c:41 pte_t *ptep, unsigned long pte, int huge) in hpte_need_flush() argument
61 if (huge) { in hpte_need_flush()
/linux/mm/
memory-failure.c:2534 bool huge = false; in unpoison_memory() local
2588 huge = true; in unpoison_memory()
2604 huge = true; in unpoison_memory()
2622 if (!huge) in unpoison_memory()
2676 bool huge = folio_test_hugetlb(folio); in soft_offline_in_use_page() local
2683 if (!huge && folio_test_large(folio)) { in soft_offline_in_use_page()
2692 if (!huge) in soft_offline_in_use_page()
2719 bool release = !huge; in soft_offline_in_use_page()
2721 if (!page_handle_poison(page, huge, release)) in soft_offline_in_use_page()
2728 pfn, msg_page[huge], ret, &page->flags); in soft_offline_in_use_page()
[all …]
shmem.c:121 int huge; member
598 switch (huge) { in shmem_format_huge()
1645 huge = false; in shmem_alloc_and_add_folio()
1647 if (huge) { in shmem_alloc_and_add_folio()
1682 } else if (huge) { in shmem_alloc_and_add_folio()
4214 sbinfo->huge = ctx->huge; in shmem_reconfigure()
4286 if (sbinfo->huge) in shmem_show_options()
4380 sbinfo->huge = ctx->huge; in shmem_fill_super()
4732 int huge; in shmem_enabled_store() local
4745 huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY) in shmem_enabled_store()
[all …]
/linux/Documentation/arch/riscv/
vm-layout.rst:42 …0000004000000000 | +256 GB | ffffffbfffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…
78 …0000800000000000 | +128 TB | ffff7fffffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…
114 …0100000000000000 | +64 PB | feffffffffffffff | ~16K PB | ... huge, almost 64 bits wide hole of…
/linux/Documentation/features/vm/huge-vmap/
arch-support.txt:2 # Feature name: huge-vmap
/linux/drivers/misc/lkdtm/
bugs.c:304 static volatile unsigned int huge = INT_MAX - 2; variable
311 value = huge; in lkdtm_OVERFLOW_SIGNED()
326 value = huge; in lkdtm_OVERFLOW_UNSIGNED()
/linux/Documentation/admin-guide/blockdev/
zram.rst:133 size of the disk when not in use so a huge zram is wasteful.
321 echo huge > /sys/block/zramX/writeback
346 Additionally, if a user choose to writeback only huge and idle pages
416 algorithm can, for example, be more successful compressing huge pages (those
453 #HUGE pages recompression is activated by `huge` mode
454 echo "type=huge" > /sys/block/zram0/recompress
488 echo "type=huge algo=zstd" > /sys/block/zramX/recompress
518 huge page
527 and the block's state is huge so it is written back to the backing
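The zram commands excerpted above form a sequence; a hedged consolidation, assuming root, an existing `/dev/zram0` with a backing device configured, and a kernel built with CONFIG_ZRAM_WRITEBACK and CONFIG_ZRAM_MULTI_COMP:

```shell
# Sketch (assumes root, /dev/zram0 with backing_dev set, and
# CONFIG_ZRAM_WRITEBACK / CONFIG_ZRAM_MULTI_COMP enabled).

# Write back incompressible ("huge") pages to the backing device:
echo huge > /sys/block/zram0/writeback

# Alternatively, keep them in memory but retry with a secondary
# compression algorithm, optionally naming the algorithm:
echo "type=huge" > /sys/block/zram0/recompress
echo "type=huge algo=zstd" > /sys/block/zram0/recompress
```

Writeback trades memory for backing-device I/O, while recompression trades CPU time; which helps depends on why the pages were stored as huge in the first place.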
