
Searched refs:GPU (Results 201 – 225 of 11286) sorted by relevance


/dports/science/liggghts/LIGGGHTS-PUBLIC-3.8.0-26-g6e873439/lib/gpu/
README
15 links against when using the GPU package.
59 This library, libgpu.a, provides routines for GPU acceleration
61 library requires installing the CUDA GPU driver and CUDA toolkit for
64 be built. This can be used to query the names and properties of GPU
91 Current styles supporting GPU acceleration:
147 LAMMPS user manual for details on running with GPU acceleration.
156 the build is complete. Additionally, the GPU package must be installed and
164 on the head node of a GPU cluster, this library may not be installed,
177 when attempting to run PPPM on a GPU with compute capability 1.0.
180 compute capability>=1.3). If you compile the GPU library for
[all …]
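
The fragments above come from the LIGGGHTS lib/gpu README, which describes building libgpu.a and running with GPU acceleration as documented in the LAMMPS user manual. As a rough illustration only, the sketch below drives such a build through the LAMMPS-style Python wrapper that LIGGGHTS inherits; the module name "lammps" and the "-sf gpu" / "-pk gpu 1" switches are LAMMPS conventions assumed here, not something stated in this README.

    # Hedged sketch, not from the README: run a small LJ system with the GPU
    # package enabled via the LAMMPS-style Python wrapper. Assumes the library
    # was built with the GPU package and libgpu.a linked in.
    from lammps import lammps

    # "-sf gpu" appends the /gpu suffix to supported styles;
    # "-pk gpu 1" asks the GPU package to use one device per node.
    lmp = lammps(cmdargs=["-sf", "gpu", "-pk", "gpu", "1"])
    for cmd in [
        "units lj",
        "atom_style atomic",
        "lattice fcc 0.8442",
        "region box block 0 5 0 5 0 5",
        "create_box 1 box",
        "create_atoms 1 box",
        "mass 1 1.0",
        "pair_style lj/cut 2.5",        # dispatched as lj/cut/gpu when the suffix applies
        "pair_coeff 1 1 1.0 1.0 2.5",
        "run 100",
    ]:
        lmp.command(cmd)
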
/dports/biology/ugene/ugene-40.1/src/plugins/opencl_support/transl/
turkish.ts
19 <translation>The plugin provides support for OpenCL-enabled GPUs.</translation>
23 …iver dynamic library.&lt;p&gt; Install the latest video GPU driver.</source>
24 …dynamic library cannot be loaded.&lt;p&gt; Install the latest video GPU driver…
29 …<translation>An error occurred while retrieving information about the installed OpenCL GPUs…
48 …<source>Registering OpenCL-enabled GPU: %1, global mem: %2 Mb, local …
49 …<translation>Registering OpenCL-enabled GPU: %1, global memory: %2 Mb, loc…
/dports/multimedia/libv4l/linux-5.13-rc2/Documentation/driver-api/thermal/
nouveau_thermal.rst
14 This driver allows you to read the GPU core temperature, drive the GPU fan and
28 In order to protect the GPU from overheating, Nouveau supports 4 configurable
34 The GPU will be downclocked to reduce its power dissipation;
36 The GPU is put on hold to further lower power dissipation;
38 Shut the computer down to protect your GPU.
44 The default value for these thresholds comes from the GPU's vbios. These
/dports/multimedia/v4l-utils/linux-5.13-rc2/Documentation/driver-api/thermal/
nouveau_thermal.rst
14 This driver allows you to read the GPU core temperature, drive the GPU fan and
28 In order to protect the GPU from overheating, Nouveau supports 4 configurable
34 The GPU will be downclocked to reduce its power dissipation;
36 The GPU is put on hold to further lower power dissipation;
38 Shut the computer down to protect your GPU.
44 The default value for these thresholds comes from the GPU's vbios. These
/dports/multimedia/v4l_compat/linux-5.13-rc2/Documentation/driver-api/thermal/
nouveau_thermal.rst
14 This driver allows you to read the GPU core temperature, drive the GPU fan and
28 In order to protect the GPU from overheating, Nouveau supports 4 configurable
34 The GPU will be downclocked to reduce its power dissipation;
36 The GPU is put on hold to further lower power dissipation;
38 Shut the computer down to protect your GPU.
44 The default value for these thresholds comes from the GPU's vbios. These
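
The three hits above quote the same nouveau thermal documentation from different ports. Purely as an illustration (not part of the kernel docs), the sketch below reads the GPU core temperature that the driver exposes through the standard Linux hwmon sysfs interface; the "nouveau" hwmon name and the glob-based discovery are assumptions about a typical setup.

    # Minimal sketch: read nouveau's GPU temperature sensors via hwmon sysfs.
    from pathlib import Path

    def nouveau_temps():
        for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
            if (hwmon / "name").read_text().strip() != "nouveau":
                continue
            for temp in hwmon.glob("temp*_input"):
                # hwmon reports temperatures in millidegrees Celsius
                yield temp.name, int(temp.read_text()) / 1000.0

    for label, celsius in nouveau_temps():
        print(f"{label}: {celsius:.1f} C")
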
/dports/emulators/yuzu/yuzu-0b47f7a46/src/video_core/command_classes/
nvdec.h
13 class GPU; variable
22 explicit Nvdec(GPU& gpu);
35 GPU& gpu;
/dports/math/curv/curv-0.5/ideas/v-rep/
Fast_2D
2 can be rendered much faster on a modern GPU than is done by current software.
12 A fast, practical GPU rasterizer for fonts and vector graphics (Rust)
24 Experimental Metal-based GPU renderer for piet 2D graphics.
27 "2D Graphics on Modern GPU" blog post
35 Glyphy: high quality font rendering using SDFs on the GPU.
45 with GPU rendering in a fragment shader: given [x,y], return pixel colour.
54 vector graphics on the GPU, along with efficient encoding and rendering
69 Ultralight: a lightweight, pure-GPU, HTML UI renderer for native apps
/dports/devel/llvm70/llvm-7.0.1.src/tools/clang/lib/Basic/Targets/
AMDGPU.h
176 GPUInfo GPU;
190 if (GPU.Kind <= GK_R600_LAST)
333 GPU = parseAMDGCNName(Name);
335 GPU = parseR600Name(Name);
337 return GK_NONE != GPU.Kind;
345 if (GPU.HasFP64)
347 if (GPU.Kind >= GK_CEDAR) {
354 if (GPU.Kind >= GK_AMDGCN_FIRST) {
/dports/misc/mxnet/incubator-mxnet-1.9.0/julia/src/
context.jl
18 @enum CONTEXT_TYPE CPU=1 GPU=2 CPU_PINNED=3
63 julia> mx.@context mx.GPU begin
70 julia> @context mx.GPU mx.zeros(3, 2)
90 A shorthand for `@context mx.GPU`.
132 Get a GPU context with a specific id. The K GPUs on a node are typically numbered 0,...,K-1.
135 * `dev_id::Integer = 0` the GPU device id.
137 gpu(dev_id::Integer = 0) = Context(GPU, dev_id)
166 Query CUDA for the free and total bytes of GPU global memory.
194 julia> mx.@context mx.GPU 1 begin # Context changed in the following code block
/dports/misc/py-mxnet/incubator-mxnet-1.9.0/julia/src/
context.jl
18 @enum CONTEXT_TYPE CPU=1 GPU=2 CPU_PINNED=3
63 julia> mx.@context mx.GPU begin
70 julia> @context mx.GPU mx.zeros(3, 2)
90 A shorthand for `@context mx.GPU`.
132 Get a GPU context with a specific id. The K GPUs on a node are typically numbered 0,...,K-1.
135 * `dev_id::Integer = 0` the GPU device id.
137 gpu(dev_id::Integer = 0) = Context(GPU, dev_id)
166 Query CUDA for the free and total bytes of GPU global memory.
194 julia> mx.@context mx.GPU 1 begin # Context changed in the following code block
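
Both hits above show the Julia context API from MXNet's julia/src/context.jl. For orientation, here is a rough Python-side analogue (a sketch based on the general MXNet 1.x Python API, not on context.jl itself): pick a GPU context when one is available and allocate an array on it.

    # Sketch of the analogous MXNet Python usage (assumed 1.x behaviour).
    import mxnet as mx

    ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()
    x = mx.nd.zeros((3, 2), ctx=ctx)   # counterpart of mx.zeros(3, 2) under @context mx.GPU
    print(x.context)                   # e.g. gpu(0) or cpu(0)
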
/dports/misc/vxl/vxl-3.3.2/contrib/brl/bseg/boxm2/doc/book/
chapter_ocl.texi
2 GPU-accelerated octree based voxel modeling library.
41 All data structures and algorithms have been optimized to run on a GPU.
42 The GPU processor has been designed as a wrapper around some necessary OpenCL
48 and outgoing memory on the GPU, as well as ensure that memory allocated does not
49 surpass the GPU's limits.
53 The OpenCL cache currently maintains one block and its data on GPU memory.
54 Whenever a new block is requested, the GPU releases the old block and writes the
62 asynchronously, which will hide the latency between host (CPU) and device (GPU)
103 //initialize the GPU render process
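
The boxm2 excerpt describes an OpenCL cache that keeps a single block resident in GPU memory and evicts it when a different block is requested. The class below is a purely conceptual Python sketch of that single-slot policy, not boxm2 code; the load/write callbacks stand in for the real host-to-device and device-to-host transfers.

    # Conceptual sketch of a one-block GPU cache policy (illustrative only).
    class SingleBlockCache:
        def __init__(self, load_block, write_block):
            self._load = load_block        # host -> device transfer
            self._write = write_block      # device -> host write-back
            self._block_id = None
            self._block = None

        def get(self, block_id):
            if block_id != self._block_id:
                if self._block_id is not None:
                    self._write(self._block_id, self._block)  # flush old block
                self._block = self._load(block_id)            # bring in requested block
                self._block_id = block_id
            return self._block
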
/dports/www/firefox/firefox-99.0/gfx/ipc/
PGPU.ipdl
49 // This protocol allows the UI process to talk to the GPU process. There is one
51 // the GPU process and the GPUChild living on the main thread of the UI process.
72 // Forward GPU process its endpoints to the VR process.
85 // Called to notify the GPU process of who owns a layersId.
89 // Request the current DeviceStatus from the GPU process. This blocks until
94 // the GPU process. This blocks until one is available (i.e., Init has completed).
97 // Have a message be broadcasted to the GPU process by the GPU process
127 // Causes the GPU process to crash. Used for tests and diagnostics.
131 // Sent when the GPU process has initialized devices. This occurs once, after
174 // GPU parent.
/dports/science/jdftx/jdftx-1.6.0/jdftx/doc/compiling/
Customization.dox
72 ## GPU support
74 For GPU support, install the [CUDA SDK](http://developer.nvidia.com/cuda-toolkit)
81 …stMemory=yes</b>: use page-locked memory on the host (CPU) to speed up memory transfers to the GPU.
84 + <b>-D CudaAwareMPI=yes</b>: If your MPI library supports direct transfers from GPU memory,
87 + Prior to GPU runs, consider setting the environment variable JDFTX_MEMPOOL_SIZE
88 to a large fraction of the memory size in MB / GPU. For example, say
89 "export JDFTX_MEMPOOL_SIZE=4096" (i.e 4 GB) for a GPU with 6 GB memory.
93 If you want to run on a GPU, it must be a discrete (not on-board) NVIDIA GPU
96 to match the compute architecture x.y of the oldest GPU you want to run on.
103 Also keep in mind that you need a lot of memory on the GPU to actually fit systems
[all …]
/dports/devel/llvm80/llvm-8.0.1.src/lib/Target/AMDGPU/
AMDGPUSubtarget.cpp
48 StringRef GPU, StringRef FS) { in initializeSubtargetDependencies() argument
51 ParseSubtargetFeatures(GPU, FullFS); in initializeSubtargetDependencies()
68 StringRef GPU, StringRef FS) { in initializeSubtargetDependencies() argument
99 ParseSubtargetFeatures(GPU, FullFS); in initializeSubtargetDependencies()
150 GCNSubtarget::GCNSubtarget(const Triple &TT, StringRef GPU, StringRef FS, in GCNSubtarget() argument
152 AMDGPUGenSubtargetInfo(TT, GPU, FS), in GCNSubtarget()
157 InstrItins(getInstrItineraryForCPU(GPU)), in GCNSubtarget()
219 InstrInfo(initializeSubtargetDependencies(TT, GPU, FS)), in GCNSubtarget()
456 R600GenSubtargetInfo(TT, GPU, FS), in R600Subtarget()
469 TLInfo(TM, initializeSubtargetDependencies(TT, GPU, FS)), in R600Subtarget()
[all …]
/dports/www/chromium-legacy/chromium-88.0.4324.182/third_party/perfetto/protos/perfetto/metrics/android/
gpu_metric.proto
26 // max/min/avg GPU memory used by this process.
32 // GPU metric for processes using GPU.
35 // max/min/avg GPU memory used by the entire system.
/dports/emulators/citra/citra-ac98458e0/src/core/hw/
hw.cpp
33 GPU::Read(var, addr); in Read()
62 GPU::Write(addr, data); in Write()
91 GPU::Init(memory); in Init()
98 GPU::Shutdown(); in Shutdown()
/dports/emulators/citra-qt5/citra-ac98458e0/src/core/hw/
hw.cpp
33 GPU::Read(var, addr); in Read()
62 GPU::Write(addr, data); in Write()
91 GPU::Init(memory); in Init()
98 GPU::Shutdown(); in Shutdown()
/dports/sysutils/plasma5-libksysguard/libksysguard-5.23.5/po/zh_CN/
ksysguard_plugins_process.po
47 msgid "GPU Usage"
48 msgstr "GPU utilization"
52 msgid "GPU Memory"
53 msgstr "GPU memory"
/dports/math/faiss/faiss-1.7.1/
README.md
3 …or Python/numpy. Some of the most useful algorithms are implemented on the GPU. It is developed by…
15 …GPU implementation can accept input from either CPU or GPU memory. On a server with GPUs, the GPU
19 The library is mostly implemented in C++, with optional GPU support provided via CUDA, and an optio…
31 The optional GPU implementation provides what is likely (as of March 2017) the fastest exact and ap…
47 - [Jeff Johnson](https://github.com/wickedfoo) implemented all of the GPU Faiss
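
The README excerpt notes that the GPU implementation accepts input from either CPU or GPU memory. As a short illustration of typical usage via the public Faiss Python API (not text from this README), an exact CPU index can be built and, when a GPU build is present, moved to a device before searching:

    # Typical Faiss usage sketch: exact L2 index, optionally moved to GPU 0.
    import numpy as np
    import faiss

    d = 64                                            # vector dimensionality
    xb = np.random.rand(10000, d).astype("float32")   # database vectors
    xq = np.random.rand(5, d).astype("float32")       # query vectors

    index = faiss.IndexFlatL2(d)                      # exact search on the CPU
    if hasattr(faiss, "StandardGpuResources"):        # only in GPU-enabled builds
        res = faiss.StandardGpuResources()
        index = faiss.index_cpu_to_gpu(res, 0, index)

    index.add(xb)
    distances, ids = index.search(xq, 4)              # 4 nearest neighbours per query
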
/dports/devel/taskflow/taskflow-3.2.0/doxygen/usecases/
dreamplace.dox
11 @section UseCasesDreamPlace DreamPlace: GPU-accelerated Placement Engine
21 To reduce the long runtime, recent work started investigating new CPU-GPU algorithms.
22 We consider a matching-based hybrid CPU-GPU placement refinement algorithm developed
26 @li A GPU-based maximal independent set algorithm to identify cell candidates
31 GPU tasks with nested conditions to decide the convergence.
37 We implemented the hybrid CPU-GPU placement algorithm using %Taskflow,
39 The algorithm is crafted on one GPU and many CPUs.
63 Using 8 CPUs and 1 GPU, %Taskflow is consistently faster than others across
92 We co-run the same program with up to nine processes that compete for 40 CPUs and 1 GPU.
114 …hailany and David Z. Pan, &quot;[DREAMPlace: Deep Learning Toolkit-Enabled GPU Acceleration for Mo…
/dports/net/boinc-client/boinc-client_release-7.8-7.8.6/locale/ja/
BOINC-Client.po
120 "Upgrade to the latest driver to process tasks using your computer's GPU"
121 msgstr "If you want to process tasks using your computer's GPU, upgrade the driver to the latest version."
125 "Upgrade to the latest driver to use all of this project's GPU applications"
126 msgstr "If you want to use all of this project's GPU applications, upgrade the driver to the latest version."
130 "A newer version of BOINC is needed to use your NVIDIA GPU; please upgrade to"
132 msgstr "If you want to use your machine's NVIDIA GPU, a newer version of BOINC is required; please upgrade to the latest version of BOINC."
136 msgid "An %s GPU is required to run tasks for this project"
137 msgstr "An %s GPU is required to run tasks for this project"
/dports/net/boinc-client/boinc-client_release-7.8-7.8.6/locale/fa_IR/
BOINC-Client.po
120 "Upgrade to the latest driver to process tasks using your computer's GPU"
121 msgstr "To use the GPU, please update the corresponding driver to the latest possible version."
125 "Upgrade to the latest driver to use all of this project's GPU applications"
126 msgstr "To use all of the project's apps with the GPU, update the corresponding driver to the latest version."
130 "A newer version of BOINC is needed to use your NVIDIA GPU; please upgrade to"
132 msgstr "To use the NVIDIA GPU, you need a newer version of BOINC. Please update to the new ver…"
136 msgid "An %s GPU is required to run tasks for this project"
137 msgstr "A %s GPU is required to run the tasks of this project."
/dports/devel/py-numba/numba-0.51.2/docs/source/cuda/
overview.rst
5 Numba supports CUDA GPU programming by directly compiling a restricted subset
9 GPU automatically.
18 - *device*: the GPU
20 - *device memory*: onboard memory on a GPU card
21 - *kernels*: a GPU function launched by the host and executed on the device
22 - *device function*: a GPU function executed on the device which can only be
40 Numba supports CUDA-enabled GPUs with compute capability 2.0 or above with an
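
The overview lines above define the host/device/kernel vocabulary used by Numba's CUDA target. A minimal sketch of that model (illustrative, not taken from the Numba documentation): the host copies data to device memory, launches a kernel on the device, and copies the result back.

    # Minimal Numba CUDA sketch: host launches a kernel that runs on the device.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_one(arr):
        i = cuda.grid(1)              # absolute thread index
        if i < arr.size:              # guard against surplus threads
            arr[i] += 1.0

    x = np.zeros(1024, dtype=np.float32)
    d_x = cuda.to_device(x)           # host -> device memory copy
    threads = 128
    blocks = (x.size + threads - 1) // threads
    add_one[blocks, threads](d_x)     # kernel launch from the host
    x = d_x.copy_to_host()            # device -> host copy
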
/dports/science/chrono/chrono-7.0.1/doxygen/documentation/module_fsi/
module_fsi_installation.md
21 - Use GPU-based sparse linear solvers such as BICGSTAB
26 - To **run** applications based on this module an NVIDIA GPU card is required.
29 - Linux, CUDA 10, GCC 7.1.0 (Pascal, Volta, and Turing GPU architectures)
30 - Linux, CUDA 9.0, GCC/6.1.0 (Pascal, Volta, and Turing GPU architectures)
31 - Linux, CUDA 7.5, GCC/4.9.2 (Kepler GPU architecture)
32 - Windows, CUDA 10.0, MS Visual Studio 2015 and 2017 (Pascal GPU architecture)
/dports/www/chromium-legacy/chromium-88.0.4324.182/docs/gpu/
debugging_gpu_related_code.md
1 # Debugging GPU related code
3 Chromium's GPU system is multi-process, which can make debugging it rather
4 difficult. See [GPU Command Buffer] for some of the nitty gritty. These are just
16 If you are trying to track down a bug in a GPU client process (compositing,
19 GPU service process. (From the point of view of a GPU client, it's calling
56 **Note:** If `about:gpu` is telling you that your GPU is disabled and
135 ## GPU Process Code
209 ### Debugging in the GPU Process
215 being processed in the GPU process (put a break point on
219 To actually debug the GPU process:
[all …]
