
# Memory requirements of low-scaling GW and RPA algorithms


## Latest revision as of 12:57, 12 August 2019

The cubic-scaling space-time RPA and GW algorithms require considerably more memory than the corresponding quartic-scaling implementations, because two Green's functions G(r,r',iτ_n) have to be stored in real space. To reduce the memory overhead, VASP on the one hand exploits fast Fourier transformations (FFTs) to avoid storing the matrices on the (larger) real-space grid. The precision of the FFT can be selected with PRECFOCK, where the value *Fast* is usually sufficient.

On the other hand, the code avoids storing redundant information: both the Green's function and the polarizability matrices are distributed over MPI ranks, as are the individual imaginary grid points. The distribution of the imaginary grid points can be set by hand with the NTAUPAR and NOMEGAPAR tags, which split the NOMEGA imaginary grid points into NTAUPAR time groups and NOMEGAPAR frequency groups. Both tags must therefore be divisors of NOMEGA.
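As a sketch of a manual splitting, the INCAR fragment below sets 12 imaginary grid points and splits them into 4 time groups and 3 frequency groups; the values are illustrative only and must be adapted to your system (and any other tags your calculation needs are omitted here):

```
NOMEGA    = 12   ! number of imaginary grid points
NTAUPAR   = 4    ! time groups; must divide NOMEGA
NOMEGAPAR = 3    ! frequency groups; must divide NOMEGA
```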

The default values are usually reasonable choices, provided the tag MAXMEM is set correctly; we strongly recommend setting MAXMEM instead of NTAUPAR. The number of MPI groups that share the same time points is then set internally to an optimum value.

The required storage for a low-scaling RPA or GW calculation depends mostly on NTAUPAR, the number of MPI groups that share the same imaginary time points. A rough estimate of the required number of bytes is given by

NKPTS * (NGX*NGY*NGZ)^2 / ( NCPU / NTAUPAR ) * 16

where NKPTS is the number of irreducible q-points, NCPU is the number of MPI ranks used for the job, and NGX, NGY, NGZ are the numbers of FFT grid points for the supercell, which are written in the OUTCAR file in the line

FFT grid for supercell: NGX = 32; NGY = 32; NGZ = 32
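The estimate above is easy to evaluate before submitting a job. A minimal sketch in Python (the function name and the example numbers are illustrative, not part of VASP):

```python
def estimate_rpa_memory_bytes(nkpts, ngx, ngy, ngz, ncpu, ntaupar):
    """Rough memory estimate for a low-scaling RPA/GW run:
    NKPTS * (NGX*NGY*NGZ)**2 / (NCPU / NTAUPAR) * 16 bytes."""
    grid_points = ngx * ngy * ngz
    return nkpts * grid_points**2 * 16 * ntaupar // ncpu

# Example: 1 irreducible q-point, a 32x32x32 supercell FFT grid,
# 64 MPI ranks, NTAUPAR = 1:
bytes_needed = estimate_rpa_memory_bytes(1, 32, 32, 32, 64, 1)
print(bytes_needed / 2**20, "MB")  # 256.0 MB
```

Doubling NTAUPAR at fixed NCPU doubles this estimate, which is why reducing NTAUPAR is the first lever when a job runs out of memory.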

The smaller NTAUPAR is set, the less memory the job requires to finish successfully. Note that VASP determines the optimum value of NTAUPAR from MAXMEM, the available memory per MPI rank on each node. It is therefore recommended not to set NTAUPAR in the INCAR, but to set MAXMEM instead and allow VASP to find the optimum NTAUPAR.
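Following this recommendation, the INCAR then only fixes the memory budget. An illustrative fragment, where the value is hypothetical and should match the RAM actually available per MPI rank on your nodes:

```
MAXMEM = 4000   ! available memory per MPI rank in MB; VASP chooses NTAUPAR itself
```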

The approximate memory requirement is calculated in advance and printed to screen and OUTCAR as follows:

min. memory requirement per mpi rank 1234 MB, per node 9872 MB