Tuning

The choice of parameters such as the neighborlist cutoff, the smearing, or the lr_wavelength/mesh_spacing has a large influence on the accuracy of the calculation. To help find parameters that meet a given accuracy requirement, this module offers tuning methods for the calculators.

The scheme behind all tuning functions is a gradient-based optimization that tries to find the minimum of an error estimation formula and stops once the estimated error falls below the given accuracy. Because these methods are gradient-based, pay attention to the learning_rate and max_steps parameters: a good choice of these two can considerably improve the speed and quality of the optimization.
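
To illustrate the scheme, here is a minimal sketch of such an optimization loop. It is not the actual implementation: it assumes a hypothetical differentiable error estimate error_estimate and minimizes it with torch.optim.Adam until the target accuracy is reached.

>>> import torch
>>> def minimize_error(error_estimate, params, accuracy, max_steps, learning_rate):
...     # params: list of scalar tensors with requires_grad=True
...     optimizer = torch.optim.Adam(params, lr=learning_rate)
...     for _ in range(max_steps):
...         loss = error_estimate(*params)
...         # stop once the estimated error is below the requested accuracy
...         if loss.item() < accuracy:
...             break
...         optimizer.zero_grad()
...         loss.backward()
...         optimizer.step()
...     return params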

torchpme.utils.tune_ewald(sum_squared_charges: float, cell: Tensor, positions: Tensor, smearing: float | None = None, lr_wavelength: float | None = None, cutoff: float | None = None, exponent: int = 1, accuracy: float = 0.001, max_steps: int = 50000, learning_rate: float = 0.1)[source]

Find the optimal parameters for torchpme.EwaldCalculator.

The error formulas are given online (currently unavailable, to be updated). Note the difference in notation between the parameters in the reference and ours:

\[
\begin{aligned}
\alpha &= \left( \sqrt{2}\,\mathrm{smearing} \right)^{-1}\\
K &= \frac{2 \pi}{\mathrm{lr\_wavelength}}\\
r_c &= \mathrm{cutoff}
\end{aligned}
\]
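
For illustration, these conversions translate directly into code; the numeric values below are placeholders.

>>> import math
>>> smearing, lr_wavelength, cutoff = 1.0, 0.5, 4.4  # placeholder values
>>> alpha = 1.0 / (math.sqrt(2.0) * smearing)
>>> K = 2.0 * math.pi / lr_wavelength
>>> r_c = cutoff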

For the optimization we use the torch.optim.Adam optimizer. By default this function optimizes the smearing, lr_wavelength, and cutoff based on the error formula given online. You can restrict the optimization by passing one or more parameters to the function. For example, in typical ML workflows the cutoff is fixed, and one only wants to optimize the smearing and the lr_wavelength with respect to the minimal error for that fixed cutoff.

Parameters:
  • sum_squared_charges (float) – accumulated squared charges, must be positive

  • cell (Tensor) – single tensor of shape (3, 3), describing the bounding box/unit cell of the system

  • positions (Tensor) – single tensor of shape (len(charges), 3) containing the Cartesian positions of all point charges in the system.

  • smearing (float | None) – if its value is given, it will not be tuned, see torchpme.EwaldCalculator for details

  • lr_wavelength (float | None) – if its value is given, it will not be tuned, see torchpme.EwaldCalculator for details

  • cutoff (float | None) – if its value is given, it will not be tuned, see torchpme.EwaldCalculator for details

  • exponent (int) – exponent \(p\) in \(1/r^p\) potentials

  • accuracy (float) – Recommended value for a balance between accuracy and speed is \(10^{-3}\). For more accurate results, use \(10^{-6}\).

  • max_steps (int) – maximum number of gradient descent steps

  • learning_rate (float) – learning rate for gradient descent

Returns:

Tuple containing a float of the optimal smearing for the CoulombPotential, a dictionary with the parameters for EwaldCalculator, and a float of the optimal cutoff value for the neighborlist computation.

Return type:

tuple[float, dict[str, float], float]

Example

>>> import torch
>>> from torchpme.utils import tune_ewald
>>> positions = torch.tensor(
...     [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]], dtype=torch.float64
... )
>>> charges = torch.tensor([[1.0], [-1.0]], dtype=torch.float64)
>>> cell = torch.eye(3, dtype=torch.float64)
>>> smearing, parameter, cutoff = tune_ewald(
...     torch.sum(charges**2, dim=0), cell, positions, accuracy=1e-1
... )

You can check the values of the parameters

>>> print(smearing)
0.7527865828476816
>>> print(parameter)
{'lr_wavelength': 11.138556788117427}
>>> print(cutoff)
2.207855328192979

You can pass one or more parameters to the function to tune only the others, for example, fixing the cutoff to 0.4

>>> smearing, parameter, cutoff = tune_ewald(
...     torch.sum(charges**2, dim=0), cell, positions, cutoff=0.4, accuracy=1e-1
... )

You can check the values of the parameters now that the cutoff is fixed

>>> print(round(smearing, 4))
0.1402

We can also check the value of the other parameter, the lr_wavelength

>>> print(round(parameter["lr_wavelength"], 3))
0.255

and finally, as requested, the value of the cutoff stays fixed

>>> print(cutoff)
0.4
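
The returned values are meant to be passed on to the calculator. The following is a sketch of this step, assuming the usual torchpme constructors; see torchpme.CoulombPotential and torchpme.EwaldCalculator for the authoritative signatures. The cutoff is used separately, when computing the neighborlist.

>>> import torchpme
>>> potential = torchpme.CoulombPotential(smearing=smearing)
>>> calculator = torchpme.EwaldCalculator(potential=potential, **parameter)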

torchpme.utils.tune_pme(sum_squared_charges: float, cell: Tensor, positions: Tensor, smearing: float | None = None, mesh_spacing: float | None = None, cutoff: float | None = None, interpolation_nodes: int = 4, exponent: int = 1, accuracy: float = 0.001, max_steps: int = 50000, learning_rate: float = 0.1)[source]

Find the optimal parameters for torchpme.PMECalculator.

The error formulas are given elsewhere. Note the difference in notation between the parameters in the reference and ours:

\[\alpha = \left(\sqrt{2}\,\mathrm{smearing} \right)^{-1}\]

For the optimization we use the torch.optim.Adam optimizer. By default this function optimizes the smearing, mesh_spacing, and cutoff based on the error formula given elsewhere. You can restrict the optimization by passing one or more parameters to the function. For example, in typical ML workflows the cutoff is fixed, and one only wants to optimize the smearing and the mesh_spacing with respect to the minimal error for that fixed cutoff.

Parameters:
  • sum_squared_charges (float) – accumulated squared charges, must be positive

  • cell (Tensor) – single tensor of shape (3, 3), describing the bounding box/unit cell of the system

  • positions (Tensor) – single tensor of shape (len(charges), 3) containing the Cartesian positions of all point charges in the system.

  • smearing (float | None) – if its value is given, it will not be tuned, see torchpme.PMECalculator for details

  • mesh_spacing (float | None) – if its value is given, it will not be tuned, see torchpme.PMECalculator for details

  • cutoff (float | None) – if its value is given, it will not be tuned, see torchpme.PMECalculator for details

  • interpolation_nodes (int) – The number \(n\) of nodes used in the interpolation per coordinate axis. The total number of interpolation nodes in 3D will be \(n^3\). In general, for \(n\) nodes, the interpolation will be performed by piecewise polynomials of degree \(n - 1\) (e.g. \(n = 4\) for cubic interpolation). Only the values 3, 4, 5, 6, and 7 are supported.

  • exponent (int) – exponent \(p\) in \(1/r^p\) potentials

  • accuracy (float) – Recommended value for a balance between accuracy and speed is \(10^{-3}\). For more accurate results, use \(10^{-6}\).

  • max_steps (int) – maximum number of gradient descent steps

  • learning_rate (float) – learning rate for gradient descent

Returns:

Tuple containing a float of the optimal smearing for the CoulombPotential, a dictionary with the parameters for PMECalculator, and a float of the optimal cutoff value for the neighborlist computation.

Example

>>> import torch
>>> from torchpme.utils import tune_pme

To allow reproducibility, we set the seed to a fixed value

>>> _ = torch.manual_seed(0)
>>> positions = torch.tensor(
...     [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]], dtype=torch.float64
... )
>>> charges = torch.tensor([[1.0], [-1.0]], dtype=torch.float64)
>>> cell = torch.eye(3, dtype=torch.float64)
>>> smearing, parameter, cutoff = tune_pme(
...     torch.sum(charges**2, dim=0), cell, positions, accuracy=1e-1
... )

You can check the values of the parameters

>>> print(smearing)
0.6768985898318037
>>> print(parameter)
{'mesh_spacing': 0.6305733973385922, 'interpolation_nodes': 4}
>>> print(cutoff)
2.243154348782357

You can pass one or more parameters to the function to tune only the others, for example, fixing the cutoff to 0.6

>>> smearing, parameter, cutoff = tune_pme(
...     torch.sum(charges**2, dim=0), cell, positions, cutoff=0.6, accuracy=1e-1
... )

You can check the values of the parameters now that the cutoff is fixed

>>> print(smearing)
0.22038829671671745
>>> print(parameter)
{'mesh_spacing': 0.5006356677116188, 'interpolation_nodes': 4}
>>> print(cutoff)
0.6
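
As in the Ewald case, the returned values are meant to be passed on to the calculator. This is a sketch under the same assumptions; see torchpme.CoulombPotential and torchpme.PMECalculator for the authoritative signatures.

>>> import torchpme
>>> potential = torchpme.CoulombPotential(smearing=smearing)
>>> calculator = torchpme.PMECalculator(potential=potential, **parameter)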