.. _fine-tuning:

Fine-tuning
===========

.. note::

   Detailed fine-tuning instructions are work in progress. In the meantime,
   refer to the `metatrain fine-tuning tutorial `_, which covers the full
   workflow end-to-end.

UPET models can be fine-tuned using the `metatrain `_ library. We currently
recommend fine-tuning from our **PET-OMat** models, as they are pre-trained on
a very large dataset and come in all sizes (from XS to XL), giving a good
trade-off for most applications.

Head selection
--------------

By default, :py:class:`~upet.calculator.UPETCalculator` uses the energy and
non-conservative forces/stresses heads **provided with the pre-trained
models**. If you fine-tune a model and create a new head for your energy
target, you need to explicitly select the corresponding variant at runtime
(and similarly for non-conservative forces and stresses).

As a running example, suppose you fine-tuned the energy head and named it
``energy/finetune`` in the ``options.yaml`` file passed to ``mtt train``.

ASE interface
~~~~~~~~~~~~~

Load the fine-tuned checkpoint and construct the calculator with the
``variants`` parameter:

.. code-block:: python

    from upet.calculator import UPETCalculator

    # Select the new energy head called "energy/finetune"
    calc = UPETCalculator(
        checkpoint_path="finetuned.ckpt",
        variants={"energy": "finetune"},
    )

The same applies to non-conservative forces and stresses, if you created new
heads for them during fine-tuning.

metatrain interface
~~~~~~~~~~~~~~~~~~~

When evaluating with ``mtt eval``, select the new head in the ``options.yaml``
file:

.. code-block:: yaml

    systems: your-test-dataset.xyz
    targets:
      energy/finetune:
        key: "energy"
        unit: "eV"

LAMMPS interface
~~~~~~~~~~~~~~~~

Select the new head with the ``variant/energy`` parameter in the
``pair_style metatomic`` command:

.. code-block:: none

    read_data silicon.data

    pair_style metatomic model.pt variant/energy finetune
    pair_coeff * * 14
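Tying the interfaces above back to training: the running example assumes a
training ``options.yaml`` that defines the ``energy/finetune`` target. A
minimal sketch of such a file is shown below. This is a hypothetical
illustration, not a complete configuration: the exact schema (section names,
how the pre-trained checkpoint is passed, available training options) should
be checked against the metatrain fine-tuning tutorial, and the dataset file
name is a placeholder.

.. code-block:: yaml

    # Hypothetical sketch -- verify key names against the metatrain docs.
    architecture:
      name: pet
    training_set:
      systems: your-training-dataset.xyz  # placeholder dataset file
      targets:
        # This target name defines the new head selected as
        # "energy/finetune" in the interfaces above.
        energy/finetune:
          key: "energy"
          unit: "eV"

How the pre-trained PET-OMat checkpoint is supplied to ``mtt train`` (e.g. a
fine-tuning section or a command-line option) is covered in the metatrain
fine-tuning tutorial and is not reproduced here.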