.. index:: kspace_style ewald
.. index:: kspace_style ewald/dipole
.. index:: kspace_style ewald/dipole/spin
.. index:: kspace_style ewald/disp
.. index:: kspace_style ewald/disp/dipole
.. index:: kspace_style ewald/omp
.. index:: kspace_style pppm
.. index:: kspace_style pppm/kk
.. index:: kspace_style pppm/omp
.. index:: kspace_style pppm/gpu
.. index:: kspace_style pppm/intel
.. index:: kspace_style pppm/cg
.. index:: kspace_style pppm/dielectric
.. index:: kspace_style pppm/dipole
.. index:: kspace_style pppm/dipole/spin
.. index:: kspace_style pppm/disp
.. index:: kspace_style pppm/disp/omp
.. index:: kspace_style pppm/disp/tip4p
.. index:: kspace_style pppm/disp/tip4p/omp
.. index:: kspace_style pppm/disp/intel
.. index:: kspace_style pppm/disp/dielectric
.. index:: kspace_style pppm/cg/omp
.. index:: kspace_style pppm/stagger
.. index:: kspace_style pppm/tip4p
.. index:: kspace_style pppm/tip4p/omp
.. index:: kspace_style msm
.. index:: kspace_style msm/omp
.. index:: kspace_style msm/cg
.. index:: kspace_style msm/cg/omp
.. index:: kspace_style msm/dielectric
.. index:: kspace_style scafacos

kspace_style command
====================

Syntax
""""""

.. code-block:: LAMMPS

   kspace_style style value

* style = *none* or *ewald* or *ewald/dipole* or *ewald/dipole/spin* or *ewald/disp* or *ewald/disp/dipole* or *ewald/omp* or *pppm* or *pppm/cg* or *pppm/dipole* or *pppm/dipole/spin* or *pppm/disp* or *pppm/tip4p* or *pppm/stagger* or *pppm/disp/tip4p* or *pppm/gpu* or *pppm/intel* or *pppm/disp/intel* or *pppm/kk* or *pppm/omp* or *pppm/cg/omp* or *pppm/disp/omp* or *pppm/disp/tip4p/omp* or *pppm/tip4p/omp* or *pppm/dielectric* or *pppm/disp/dielectric* or *msm* or *msm/cg* or *msm/omp* or *msm/cg/omp* or *msm/dielectric* or *scafacos*

  .. parsed-literal::

       *none* value = none
       *ewald* value = accuracy
         accuracy = desired relative error in forces
       *ewald/dipole* value = accuracy
         accuracy = desired relative error in forces
       *ewald/dipole/spin* value = accuracy
         accuracy = desired relative error in forces
       *ewald/disp* value = accuracy
         accuracy = desired relative error in forces
       *ewald/disp/dipole* value = accuracy
         accuracy = desired relative error in forces
       *ewald/omp* value = accuracy
         accuracy = desired relative error in forces
       *pppm* value = accuracy
         accuracy = desired relative error in forces
       *pppm/cg* values = accuracy (smallq)
         accuracy = desired relative error in forces
         smallq = cutoff for charges to be considered (optional) (charge units)
       *pppm/dipole* value = accuracy
         accuracy = desired relative error in forces
       *pppm/dipole/spin* value = accuracy
         accuracy = desired relative error in forces
       *pppm/disp* value = accuracy
         accuracy = desired relative error in forces
       *pppm/tip4p* value = accuracy
         accuracy = desired relative error in forces
       *pppm/disp/tip4p* value = accuracy
         accuracy = desired relative error in forces
       *pppm/gpu* value = accuracy
         accuracy = desired relative error in forces
       *pppm/intel* value = accuracy
         accuracy = desired relative error in forces
       *pppm/disp/intel* value = accuracy
         accuracy = desired relative error in forces
       *pppm/kk* value = accuracy
         accuracy = desired relative error in forces
       *pppm/omp* value = accuracy
         accuracy = desired relative error in forces
       *pppm/cg/omp* values = accuracy (smallq)
         accuracy = desired relative error in forces
         smallq = cutoff for charges to be considered (optional) (charge units)
       *pppm/disp/omp* value = accuracy
         accuracy = desired relative error in forces
       *pppm/tip4p/omp* value = accuracy
         accuracy = desired relative error in forces
       *pppm/disp/tip4p/omp* value = accuracy
         accuracy = desired relative error in forces
       *pppm/stagger* value = accuracy
         accuracy = desired relative error in forces
       *pppm/dielectric* value = accuracy
         accuracy = desired relative error in forces
       *pppm/disp/dielectric* value = accuracy
         accuracy = desired relative error in forces
       *msm* value = accuracy
         accuracy = desired relative error in forces
       *msm/cg* values = accuracy (smallq)
         accuracy = desired relative error in forces
         smallq = cutoff for charges to be considered (optional) (charge units)
       *msm/omp* value = accuracy
         accuracy = desired relative error in forces
       *msm/cg/omp* values = accuracy (smallq)
         accuracy = desired relative error in forces
         smallq = cutoff for charges to be considered (optional) (charge units)
       *msm/dielectric* value = accuracy
         accuracy = desired relative error in forces
       *scafacos* values = method accuracy
         method = fmm or p2nfft or p3m or ewald or direct
         accuracy = desired relative error in forces

Examples
""""""""

.. code-block:: LAMMPS

   kspace_style pppm 1.0e-4
   kspace_style pppm/cg 1.0e-5 1.0e-6
   kspace_style msm 1.0e-4
   kspace_style scafacos fmm 1.0e-4
   kspace_style none

Used in input scripts:

   .. parsed-literal::

      examples/peptide/in.peptide

Description
"""""""""""

Define a long-range solver for LAMMPS to use each timestep to compute
long-range Coulombic interactions or long-range :math:`1/r^6` interactions.
Most of the long-range solvers perform their computation in K-space,
hence the name of this command.

When such a solver is used in conjunction with an appropriate pair
style, the cutoff for Coulombic or :math:`1/r^N` interactions is
effectively infinite.  In the Coulombic case, this means each charge in
the system interacts with charges in an infinite array of periodic
images of the simulation domain.

Note that using a long-range solver requires use of a matching
:doc:`pair style <pair_style>` to perform consistent short-range
pairwise calculations.  This means that the name of the pair style
contains a keyword matching the name of the KSpace style, as in this
table:

+----------------------+-----------------------+
| Pair style           | KSpace style          |
+----------------------+-----------------------+
| coul/long            | ewald or pppm         |
+----------------------+-----------------------+
| coul/msm             | msm                   |
+----------------------+-----------------------+
| lj/long or buck/long | disp (for dispersion) |
+----------------------+-----------------------+
| tip4p/long           | tip4p                 |
+----------------------+-----------------------+
| dipole/long          | dipole                |
+----------------------+-----------------------+

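For example, a matched combination for a system of point charges might
be specified as follows (a minimal sketch; the cutoff and accuracy
values are illustrative):

.. code-block:: LAMMPS

   pair_style   lj/cut/coul/long 10.0
   kspace_style pppm 1.0e-4
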
----------

The *ewald* style performs a standard Ewald summation as described in
any solid-state physics text.

The *ewald/disp* style adds a long-range dispersion sum option for
:math:`1/r^6` potentials and is useful for simulation of interfaces
:ref:`(Veld) <Veld>`.  It also performs standard Coulombic Ewald summations,
but in a more efficient manner than the *ewald* style.  The :math:`1/r^6`
capability means that Lennard-Jones or Buckingham potentials can be
used without a cutoff, i.e. they become full long-range potentials.

The *ewald/disp/dipole* style can also be used with point-dipoles, see
:ref:`(Toukmaji) <Toukmaji>`.

The *ewald/dipole* style adds long-range standard Ewald summations
for dipole-dipole interactions, see :ref:`(Toukmaji) <Toukmaji>`.

The *ewald/dipole/spin* style adds long-range standard Ewald
summations for magnetic dipole-dipole interactions between
magnetic spins.
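
For instance, combining *ewald/disp* with a compatible long-range pair
style gives fully long-range Lennard-Jones and Coulombic interactions
(a minimal sketch; the cutoff and accuracy values are illustrative):

.. code-block:: LAMMPS

   pair_style   lj/long/coul/long long long 10.0
   kspace_style ewald/disp 1.0e-4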

----------

The *pppm* style invokes a particle-particle particle-mesh solver
:ref:`(Hockney) <Hockney>` which maps atom charge to a 3d mesh, uses 3d FFTs
to solve Poisson's equation on the mesh, then interpolates electric
fields on the mesh points back to the atoms.  It is closely related to
the particle-mesh Ewald technique (PME) :ref:`(Darden) <Darden>` used in
AMBER and CHARMM.  The cost of traditional Ewald summation scales as
:math:`N^{\frac{3}{2}}` where :math:`N` is the number of atoms in the
system.  The PPPM solver scales as :math:`N \log{N}` due to the FFTs,
so it is almost always a faster choice :ref:`(Pollock) <Pollock>`.

The *pppm/cg* style is identical to the *pppm* style except that it
has an optimization for systems where most particles are uncharged.
Similarly the *msm/cg* style implements the same optimization for *msm*\ .
The optional *smallq* argument defines the cutoff for the absolute
charge value which determines whether a particle is considered charged
or not.  Its default value is 1.0e-5.
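
For instance, to request an accuracy of 1.0e-5 while tightening the
charge cutoff from its default of 1.0e-5 to 1.0e-6:

.. code-block:: LAMMPS

   kspace_style pppm/cg 1.0e-5 1.0e-6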

The *pppm/dipole* style invokes a particle-particle particle-mesh solver
for dipole-dipole interactions, following the method of :ref:`(Cerda) <Cerda2008>`.

The *pppm/dipole/spin* style invokes a particle-particle particle-mesh solver
for magnetic dipole-dipole interactions between magnetic spins.

The *pppm/tip4p* style is identical to the *pppm* style except that it
adds a charge at the massless fourth site in each TIP4P water molecule.
It should be used with :doc:`pair styles <pair_style>` with
*tip4p/long* in their style name.
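
A typical pairing might look as follows (a minimal sketch; the atom
types, O-M distance, and cutoff must match your TIP4P parameterization
and are illustrative here):

.. code-block:: LAMMPS

   pair_style   lj/cut/tip4p/long 1 2 1 1 0.15 12.0
   kspace_style pppm/tip4p 1.0e-4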

The *pppm/stagger* style performs calculations using two different
meshes, one shifted slightly with respect to the other.  This can
reduce force aliasing errors and increase the accuracy of the method
for a given mesh size.  Or a coarser mesh can be used for the same
target accuracy, which saves CPU time.  However, there is a trade-off
since FFTs on two meshes are now performed which increases the
computation required.  See :ref:`(Cerutti) <Cerutti>`, :ref:`(Neelov) <Neelov>`,
and :ref:`(Hockney) <Hockney>` for details of the method.

For high relative accuracy, using staggered PPPM allows the mesh size
to be reduced by a factor of 2 in each dimension as compared to
regular PPPM (for the same target accuracy).  This can give up to a 4x
speedup in the KSpace time (8x fewer mesh points, 2x more expensive).
However, for low relative accuracy, the staggered PPPM mesh size may
be essentially the same as for regular PPPM, which means the method
will be up to 2x slower in the KSpace time (simply 2x more expensive).
For more details and timings, see the :doc:`Speed tips <Speed_tips>` doc
page.
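
Selecting the staggered variant requires no extra arguments beyond the
usual accuracy (the value shown is illustrative):

.. code-block:: LAMMPS

   kspace_style pppm/stagger 1.0e-4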

.. note::

   Using *pppm/stagger* may not give the same increase in the
   accuracy of energy and pressure as it does in forces, so some caution
   must be used if energy and/or pressure are quantities of interest,
   such as when using a barostat.

----------

The *pppm/disp* and *pppm/disp/tip4p* styles add a mesh-based long-range
dispersion sum option for :math:`1/r^6` potentials
:ref:`(Isele-Holder) <Isele-Holder2012>`, similar to the *ewald/disp*
style.  The :math:`1/r^6` capability means that Lennard-Jones or
Buckingham potentials can be used without a cutoff, i.e. they become
full long-range potentials.

For these styles, you will likely want to adjust the default choice of
parameters via the :doc:`kspace_modify <kspace_modify>` command.  This
can be done either by choosing the Ewald and grid parameters directly,
or by specifying separate accuracies for the real-space and k-space
calculations.  If no such settings are made, the simulation will stop
with an error message.  Further information on the influence of the
parameters and how to choose them is given in
:ref:`(Isele-Holder) <Isele-Holder2012>`,
:ref:`(Isele-Holder2) <Isele-Holder2013>`, and the
:doc:`Howto dispersion <Howto_dispersion>` doc page.
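
As a hedged sketch of the second approach, separate real-space and
k-space force tolerances can be set explicitly (the values are
illustrative; see the :doc:`Howto dispersion <Howto_dispersion>` page
for guidance):

.. code-block:: LAMMPS

   kspace_style  pppm/disp 1.0e-4
   kspace_modify force/disp/real 0.0001
   kspace_modify force/disp/kspace 0.002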

----------

.. note::

   All of the PPPM styles can be used with single-precision FFTs by
   using the compiler switch -DFFT_SINGLE for the FFT_INC setting in your
   low-level Makefile.  This setting also changes some of the PPPM
   operations (e.g. mapping charge to mesh and interpolating electric
   fields to particles) to be performed in single precision.  This option
   can speed-up long-range calculations, particularly in parallel or on
   GPUs.  The use of the -DFFT_SINGLE flag is discussed on the
   :doc:`Build settings <Build_settings>` doc page.  MSM does not
   currently support the -DFFT_SINGLE compiler switch.

----------

The *msm* style invokes a multi-level summation method (MSM) solver,
:ref:`(Hardy) <Hardy2006>` or :ref:`(Hardy2) <Hardy2009>`, which maps atom charge
to a 3d mesh, and uses a multi-level hierarchy of coarser and coarser
meshes on which direct Coulomb solvers are done.  This method does not
use FFTs and scales as :math:`N`.  It may therefore be faster than the
other K-space solvers for relatively large problems when running on
large core counts.  MSM can also be used for non-periodic boundary
conditions and for mixed periodic and non-periodic boundaries.

MSM is most competitive versus Ewald and PPPM when only relatively
low-accuracy forces, about 1.0e-4 relative error or less accurate,
are needed.  Note that use of a larger Coulombic cutoff (e.g. 15
Angstroms instead of 10 Angstroms) provides better MSM accuracy for
both the real-space and grid-computed forces.

Currently, calculation of the full pressure tensor in MSM is expensive.
Using the :doc:`kspace_modify <kspace_modify>` *pressure/scalar yes*
command provides a less expensive way to compute the scalar pressure
(Pxx + Pyy + Pzz)/3.0.  The scalar pressure can be used, for example,
to run an isotropic barostat.  If the full pressure tensor is needed,
then calculating the pressure at every timestep or using a fixed
pressure simulation with MSM will cause the code to run slower.
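
For example, an isotropic barostat using the cheaper scalar pressure
might be set up as follows (a minimal sketch; the accuracy,
temperature, and pressure values are illustrative):

.. code-block:: LAMMPS

   kspace_style  msm 1.0e-4
   kspace_modify pressure/scalar yes
   fix           1 all npt temp 300.0 300.0 100.0 iso 1.0 1.0 1000.0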

----------

The *scafacos* style is a wrapper on the `ScaFaCoS Coulomb solver
library <http://www.scafacos.de>`_ which provides a variety of solver
methods which can be used with LAMMPS.  The paper by
:ref:`(Sutmann) <Sutmann2014>` gives an overview of ScaFaCoS.

ScaFaCoS was developed by a consortium of German research facilities
with a BMBF (German Ministry of Science and Education) funded project
in 2009-2012.  Participants of the consortium were the Universities of
Bonn, Chemnitz, Stuttgart, and Wuppertal as well as the
Forschungszentrum Juelich.

The library is available for download at "http://scafacos.de" or can
be cloned from the git repository
"git://github.com/scafacos/scafacos.git".

In order to use this KSpace style, you must download and build the
ScaFaCoS library, then build LAMMPS with the SCAFACOS package
installed, which links LAMMPS to the ScaFaCoS library.
See details on :ref:`this page <SCAFACOS>`.

.. note::

   Unlike other KSpace solvers in LAMMPS, ScaFaCoS computes all
   Coulombic interactions, both short- and long-range.  Thus you should
   NOT use a Coulombic pair style when using kspace_style scafacos.  This
   also means the total Coulombic energy (short- and long-range) will be
   tallied by the :doc:`thermodynamic output <thermo_style>` command as
   part of the *elong* keyword; the *ecoul* keyword will be zero.

.. note::

   See the current restriction below about use of ScaFaCoS in
   LAMMPS with molecular charged systems or the TIP4P water model.

The specified *method* determines which ScaFaCoS algorithm is used.
These are the ScaFaCoS methods currently available from LAMMPS:

* *fmm* = Fast Multipole Method
* *p2nfft* = FFT-based Coulomb solver
* *ewald* = Ewald summation
* *direct* = direct :math:`O(N^2)` summation
* *p3m* = PPPM

We plan to support additional ScaFaCoS solvers from LAMMPS in the
future.  For an overview of the included solvers, refer to
:ref:`(Sutmann) <Sutmann2013>`.

The specified *accuracy* is similar to the accuracy setting for other
LAMMPS KSpace styles, but is passed to ScaFaCoS, which can interpret
it in different ways for different methods it supports.  Within the
ScaFaCoS library the *accuracy* is treated as a tolerance level
(either absolute or relative) for the chosen quantity, where the
quantity can be either the Coulombic field values, the per-atom
Coulombic energy, or the total Coulombic energy.  To select from these
options, see the :doc:`kspace_modify scafacos accuracy <kspace_modify>`
doc page.

The :doc:`kspace_modify scafacos <kspace_modify>` command also explains
other ScaFaCoS options currently exposed to LAMMPS.
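
For example, to use the FMM solver with the tolerance interpreted as a
total-energy tolerance (a hedged sketch; the accuracy value is
illustrative):

.. code-block:: LAMMPS

   kspace_style  scafacos fmm 1.0e-4
   kspace_modify scafacos tolerance energy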

----------

The specified *accuracy* determines the relative RMS error in per-atom
forces calculated by the long-range solver.  It is set as a
dimensionless number, relative to the force that two unit point
charges (e.g. 2 monovalent ions) exert on each other at a distance of
1 Angstrom.  This reference value was chosen as representative of the
magnitude of electrostatic forces in atomic systems.  Thus an accuracy
value of 1.0e-4 means that the RMS error will be a factor of 10000
smaller than the reference force.

The accuracy setting is used in conjunction with the pairwise cutoff
to determine the number of K-space vectors for style *ewald* or the
grid size for style *pppm* or *msm*\ .

Note that style *pppm* only computes the grid size at the beginning of
a simulation, so if the length or triclinic tilt of the simulation
cell increases dramatically during the course of the simulation, the
accuracy of the simulation may degrade.  The same applies if the
:doc:`kspace_modify slab <kspace_modify>` option is used with
shrink-wrap boundaries in the z-dimension and the box size changes
dramatically in z.  For example, for a triclinic system with all three
tilt factors set to the maximum limit, the PPPM grid should be
increased roughly by a factor of 1.5 in the y direction and 2.0 in the
z direction as compared to the same system using a cubic orthogonal
simulation cell.  One way to handle this issue, if you have a long
simulation where the box size changes dramatically, is to break it
into shorter simulations (multiple :doc:`run <run>` commands).  This
works because the grid size is re-computed at the beginning of each
run.  Another way to ensure the described accuracy requirement is met
is to run a short simulation at the maximum expected tilt or length,
note the required grid size, and then use the
:doc:`kspace_modify <kspace_modify>` *mesh* command to manually set the
PPPM grid size to this value for the long run.  The simulation then
will be "too accurate" for some portion of the run.
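
For instance, to pin the grid determined from such a trial run (the
mesh dimensions are illustrative):

.. code-block:: LAMMPS

   kspace_style  pppm 1.0e-4
   kspace_modify mesh 48 48 64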

RMS force errors in real space for *ewald* and *pppm* are estimated
using equation 18 of :ref:`(Kolafa) <Kolafa>`, which is also referenced as
equation 9 of :ref:`(Petersen) <Petersen>`.  RMS force errors in K-space for
*ewald* are estimated using equation 11 of :ref:`(Petersen) <Petersen>`,
which is similar to equation 32 of :ref:`(Kolafa) <Kolafa>`.  RMS force
errors in K-space for *pppm* are estimated using equation 38 of
:ref:`(Deserno) <Deserno>`.  RMS force errors for *msm* are estimated
using ideas from chapter 3 of :ref:`(Hardy) <Hardy2006>`, with equation 3.197
of particular note.  When using *msm* with non-periodic boundary
conditions, it is expected that the error estimation will be too
pessimistic.  RMS force errors for dipoles when using *ewald/disp*
or *ewald/dipole* are estimated using equations 33 and 46 of
:ref:`(Wang) <Wang>`.  The RMS force errors for *pppm/dipole* are estimated
using the equations in :ref:`(Cerda) <Cerda2008>`.

See the :doc:`kspace_modify <kspace_modify>` command for additional
options of the K-space solvers that can be set, including a *force*
option for setting an absolute RMS error in forces, as opposed to a
relative RMS error.
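
As a sketch, the *force* option replaces the relative accuracy with an
absolute RMS force error (the value is illustrative and in force
units):

.. code-block:: LAMMPS

   kspace_style  pppm 1.0e-4
   kspace_modify force 0.0001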

----------

Styles with a *gpu*, *intel*, *kk*, *omp*, or *opt* suffix are
functionally the same as the corresponding style without the suffix.
They have been optimized to run faster, depending on your available
hardware, as discussed on the :doc:`Speed packages <Speed_packages>` doc
page.  The accelerated styles take the same arguments and should
produce the same results, except for round-off and precision issues.
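
As a sketch, an accelerated variant can be requested either by its
explicit style name or via the :doc:`suffix <suffix>` command:

.. code-block:: LAMMPS

   kspace_style pppm/omp 1.0e-4

   # equivalently:
   suffix       omp
   kspace_style pppm 1.0e-4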

More specifically, the *pppm/gpu* style performs charge assignment and
force interpolation calculations on the GPU.  These processes are
performed either in single or double precision, depending on whether
the -DFFT_SINGLE setting was specified in your low-level Makefile, as
discussed above.  The FFTs themselves are still calculated on the CPU.
If *pppm/gpu* is used with a GPU-enabled pair style, part of the PPPM
calculation can be performed concurrently on the GPU while other
calculations for non-bonded and bonded force calculation are performed
on the CPU.

The *pppm/kk* style performs charge assignment and force interpolation
calculations, along with the FFTs themselves, on the GPU or
(optionally) threaded on the CPU when using OpenMP and FFTW3.

These accelerated styles are part of the GPU, INTEL, KOKKOS,
OPENMP, and OPT packages, respectively.  They are only enabled if
LAMMPS was built with those packages.  See the
:doc:`Build package <Build_package>` page for more info.

See the :doc:`Speed packages <Speed_packages>` page for more
instructions on how to use the accelerated styles effectively.

----------

Restrictions
""""""""""""

Note that the long-range electrostatic solvers in LAMMPS assume
conducting metal (tinfoil) boundary conditions for both charge and
dipole interactions.  Vacuum boundary conditions are not currently
supported.

The *ewald/disp*, *ewald*, *pppm*, and *msm* styles support
non-orthogonal (triclinic symmetry) simulation boxes.  However,
triclinic simulation cells may not yet be supported by all suffix
versions of these styles.

Most of the base kspace styles are part of the KSPACE package.  They are
only enabled if LAMMPS was built with that package.  See the :doc:`Build
package <Build_package>` page for more info.

The *msm/dielectric* and *pppm/dielectric* kspace styles are part of the
DIELECTRIC package.  They are only enabled if LAMMPS was built with
that package **and** the KSPACE package.  See the :doc:`Build package
<Build_package>` page for more info.

For MSM, a simulation must be 3d and one can use any combination of
periodic, non-periodic, or shrink-wrapped boundaries (specified using
the :doc:`boundary <boundary>` command).

For Ewald and PPPM, a simulation must be 3d and periodic in all
dimensions.  The only exception is if the slab option is set with
:doc:`kspace_modify <kspace_modify>`, in which case the xy dimensions
must be periodic and the z dimension must be non-periodic.
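
A hedged sketch of such a slab geometry (the volume factor and accuracy
are illustrative):

.. code-block:: LAMMPS

   boundary      p p f
   kspace_style  pppm 1.0e-4
   kspace_modify slab 3.0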

The scafacos KSpace style will only be enabled if LAMMPS is built with
the SCAFACOS package.  See the :doc:`Build package <Build_package>`
doc page for more info.

The use of ScaFaCoS in LAMMPS does not yet support molecular charged
systems where the short-range Coulombic interactions between atoms in
the same bond/angle/dihedral are weighted by the
:doc:`special_bonds <special_bonds>` command.  Likewise it does not
support the "TIP4P water style" where a fictitious charge site is
introduced in each water molecule.  Finally, the methods *p3m* and
*ewald* do not support computing the virial, so this contribution is
not included.

Related commands
""""""""""""""""

:doc:`kspace_modify <kspace_modify>`,
:doc:`pair_style lj/cut/coul/long <pair_lj_cut_coul>`,
:doc:`pair_style lj/charmm/coul/long <pair_charmm>`,
:doc:`pair_style lj/long/coul/long <pair_lj_long>`,
:doc:`pair_style buck/coul/long <pair_buck>`

Default
"""""""

.. code-block:: LAMMPS

   kspace_style none

----------

.. _Darden:

**(Darden)** Darden, York, Pedersen, J Chem Phys, 98, 10089 (1993).

.. _Deserno:

**(Deserno)** Deserno and Holm, J Chem Phys, 109, 7694 (1998).

.. _Hockney:

**(Hockney)** Hockney and Eastwood, Computer Simulation Using Particles,
Adam Hilger, NY (1989).

.. _Kolafa:

**(Kolafa)** Kolafa and Perram, Molecular Simulation, 9, 351 (1992).

.. _Petersen:

**(Petersen)** Petersen, J Chem Phys, 103, 3668 (1995).

.. _Wang:

**(Wang)** Wang and Holm, J Chem Phys, 115, 6277 (2001).

.. _Pollock:

**(Pollock)** Pollock and Glosli, Comp Phys Comm, 95, 93 (1996).

.. _Cerutti:

**(Cerutti)** Cerutti, Duke, Darden, Lybrand, J Chem Theory Comput, 5,
2322 (2009).

.. _Neelov:

**(Neelov)** Neelov, Holm, J Chem Phys, 132, 234103 (2010).

.. _Veld:

**(Veld)** In 't Veld, Ismail, Grest, J Chem Phys, 127, 144711 (2007).

.. _Toukmaji:

**(Toukmaji)** Toukmaji, Sagui, Board, and Darden, J Chem Phys, 113,
10913 (2000).

.. _Isele-Holder2012:

**(Isele-Holder)** Isele-Holder, Mitchell, Ismail, J Chem Phys, 137,
174107 (2012).

.. _Isele-Holder2013:

**(Isele-Holder2)** Isele-Holder, Mitchell, Hammond, Kohlmeyer, Ismail,
J Chem Theory Comput, 9, 5412 (2013).

.. _Hardy2006:

**(Hardy)** David Hardy thesis: Multilevel Summation for the Fast
Evaluation of Forces for the Simulation of Biomolecules, University of
Illinois at Urbana-Champaign, (2006).

.. _Hardy2009:

**(Hardy2)** Hardy, Stone, Schulten, Parallel Computing, 35, 164-177
(2009).

.. _Sutmann2013:

**(Sutmann)** Sutmann, Arnold, Fahrenberger, et al., Phys Rev E, 88,
063308 (2013).

.. _Cerda2008:

**(Cerda)** Cerda, Ballenegger, Lenz, Holm, J Chem Phys, 129, 234104
(2008).

.. _Sutmann2014:

**(Sutmann)** G. Sutmann, ScaFaCoS - a Scalable library of Fast Coulomb
Solvers for particle Systems, in Bajaj, Zavattieri, Koslowski, Siegmund
(eds.), Proceedings of the Society of Engineering Science 51st Annual
Technical Meeting (2014).