SevenNet (Scalable EquiVariance Enabled Neural Network) is a graph neural network (GNN) interatomic potential package that supports parallel molecular dynamics simulations with [`LAMMPS`](https://lammps.org). Its underlying GNN model is based on [`NequIP`](https://github.com/mir-group/nequip).
> [!NOTE]
> We will soon release a CUDA-accelerated version of SevenNet, which will significantly increase the speed of our pre-trained models on [Matbench Discovery](https://matbench-discovery.materialsproject.org/).
## Features
- Pre-trained GNN interatomic potential and fine-tuning interface.
## Pre-trained models
So far, we have released several pre-trained SevenNet models. Each model was trained with different hyperparameters and training sets, resulting in different levels of accuracy and speed. Please read the descriptions below carefully and choose the model that best suits your purpose.
We provide the training set MAEs (energy, force, and stress), the F1 score and RMSD for the WBM dataset, as well as $\kappa_{\mathrm{SRME}}$ from phonondb and CPS (Combined Performance Score). For details on these metrics and performance comparisons with other pre-trained models, please visit [Matbench Discovery](https://matbench-discovery.materialsproject.org/).
These models can be used as interatomic potentials in LAMMPS and can also be loaded through the ASE calculator by calling the `keywords` of each model. Please refer to [ASE calculator](#ase_calculator) for how to load a model through the ASE calculator.
Additionally, `keywords` can be used in other parts of SevenNet, such as `sevenn_inference`, `sevenn_get_model`, and the `checkpoint` option in `input.yaml` for fine-tuning.
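
For instance, a minimal sketch of keyword-based loading through the ASE calculator (the keyword and structure below are arbitrary illustrations, not a prescribed workflow):

```python
from ase.build import bulk
from sevenn.calculator import SevenNetCalculator

# Load a pre-trained model by its keyword (available keywords are listed per model below).
calc = SevenNetCalculator('7net-l3i5')

# Attach the calculator to an ASE Atoms object and evaluate energy and forces.
atoms = bulk('NaCl', 'rocksalt', a=5.64)
atoms.calc = calc
print(atoms.get_potential_energy())  # total energy in eV
print(atoms.get_forces())            # forces in eV/Angstrom
```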
**Acknowledgments**: The models trained on [`MPtrj`](https://figshare.com/articles/dataset/Materials_Project_Trjectory_MPtrj_Dataset/23713842) were supported by the Neural Processing Research Center program of Samsung Advanced Institute of Technology, Samsung Electronics Co., Ltd. The computations for training models were carried out using the Samsung SSC-21 cluster.
---
### **SevenNet-MF-ompa (17Mar2025)**
> Model keywords: `7net-mf-ompa` | `SevenNet-mf-ompa`
**This is our recommended pre-trained model**
This model leverages [multi-fidelity learning](https://pubs.acs.org/doi/10.1021/jacs.4c14455) to train simultaneously on the [MPtrj](https://figshare.com/articles/dataset/Materials_Project_Trjectory_MPtrj_Dataset/23713842), [sAlex](https://huggingface.co/datasets/fairchem/OMAT24), and [OMat24](https://huggingface.co/datasets/fairchem/OMAT24) datasets. As of March 17, 2025, it has achieved state-of-the-art performance on [Matbench Discovery](https://matbench-discovery.materialsproject.org/) in terms of CPS (Combined Performance Score). We have found that this model performs best on most tasks, except for isolated molecule energies, where it performs slightly worse than SevenNet-l3i5.
```python
from sevenn.calculator import SevenNetCalculator
# "mpa" refers to the MPtrj + sAlex modal, used for evaluating Matbench Discovery.
calc = SevenNetCalculator('7net-mf-ompa', modal='mpa')  # Use modal='omat24' for OMat24-trained modal weights.
```
Theoretically, the `mpa` modal should produce PBE52 results, while the `omat24` modal yields PBE54 results.
When using the command-line interface of SevenNet, include the `--modal mpa` or `--modal omat24` option to select the desired modality.
#### **Matbench Discovery**
| CPS | F1 | $\kappa_{\mathrm{SRME}}$ | RMSD |
|:---:|:---:|:---:|:---:|
|**0.883**|**0.901**|0.317|**0.0115**|
[Detailed instructions for multi-fidelity](https://github.com/MDIL-SNU/SevenNet/blob/main/sevenn/pretrained_potentials/SevenNet_MF_0/README.md)
[Link to the full-information checkpoint](https://figshare.com/articles/software/7net_MF_ompa/28590722?file=53029859)
---
### **SevenNet-omat (17Mar2025)**
> Model keywords: `7net-omat` | `SevenNet-omat`
This model was trained solely on the [OMat24](https://huggingface.co/datasets/fairchem/OMAT24) dataset. It achieves state-of-the-art (SOTA) performance in $\kappa_{\mathrm{SRME}}$ on [Matbench Discovery](https://matbench-discovery.materialsproject.org/); however, the F1 score was not available due to a difference in the POTCAR version. Similar to `SevenNet-MF-ompa`, this model outperforms `SevenNet-l3i5` in most tasks, except for isolated molecule energy.
[Link to the full-information checkpoint](https://figshare.com/articles/software/SevenNet_omat/28593938).
#### **Matbench Discovery**
* $\kappa_{\mathrm{SRME}}$: **0.221**
---
### **SevenNet-l3i5 (12Dec2024)**
> Model keywords: `7net-l3i5` | `SevenNet-l3i5`
The model increases the maximum spherical harmonic degree ($l_{\mathrm{max}}$) to 3, compared to `SevenNet-0` with $l_{\mathrm{max}}$ of 2. While **l3i5** offers improved accuracy across various systems compared to `SevenNet-0`, it is approximately four times slower. As of March 17, 2025, this model has achieved state-of-the-art (SOTA) performance among compliant models on CPS, a metric newly introduced in [Matbench Discovery](https://matbench-discovery.materialsproject.org/).
#### **Matbench Discovery**
| CPS | F1 | $\kappa_{\mathrm{SRME}}$ | RMSD |
|:---:|:---:|:---:|:---:|
|0.764|0.76|0.55|0.0182|
---
### **SevenNet-0 (11Jul2024)**
> Model keywords: `7net-0` | `SevenNet-0` | `7net-0_11Jul2024` | `SevenNet-0_11Jul2024`
The model architecture is mainly in line with [GNoME](https://github.com/google-deepmind/materials_discovery), a pre-trained model that utilizes the NequIP architecture.
It consists of five interaction blocks with node features comprising 128 scalars (*l*=0), 64 vectors (*l*=1), and 32 tensors (*l*=2).
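
For readers who think in e3nn-style irreps notation, the node-feature layout above can be sketched as follows. This is only an illustration: the even/odd parity assignments (`0e`, `1o`, `2e`) are our assumption and are not taken from the SevenNet source.

```python
from e3nn import o3

# Node features described above: 128 scalars (l=0), 64 vectors (l=1), and 32 tensors (l=2).
# The parities (0e, 1o, 2e) are assumed here purely for illustration.
hidden_irreps = o3.Irreps("128x0e + 64x1o + 32x2e")
print(hidden_irreps)      # 128x0e+64x1o+32x2e
print(hidden_irreps.dim)  # 128*1 + 64*3 + 32*5 = 480 features per node
```
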
The model was trained with [MPtrj](https://figshare.com/articles/dataset/Materials_Project_Trjectory_MPtrj_Dataset/23713842).
This model is loaded as the default pre-trained model in the ASE calculator.
For more information, click [here](sevenn/pretrained_potentials/SevenNet_0__11Jul2024).
#### **Matbench Discovery**
| F1 | $\kappa_{\mathrm{SRME}}$ |
|:---:|:---:|
|0.67|0.767|
---
In addition to these latest models, you can find our legacy models in [pretrained_potentials](./sevenn/pretrained_potentials).