
MPI_BLOCK

Presently VASP breaks up large data blocks in immediate MPI send (MPI_Isend) and receive (MPI_Irecv) calls into smaller ones. We found that large blocks cause a dramatic bandwidth reduction on Linux clusters linked by 100 Mbit or Gbit Ethernet (all kernels, including 2.6.X Linux kernels, and all MPI versions, including LAM 7.1.1). MPI_BLOCK determines the block size. If use_collective is set, MPI_BLOCK is used only for the fast global sum routine (search for M_sumf_d in mpi.F).
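The chunking described above can be sketched as follows. This is a minimal illustration in Python, not VASP's actual Fortran implementation in mpi.F; the function name and the example block size are assumptions chosen for illustration only.

```python
def split_into_blocks(n_total, block_size):
    """Yield (offset, count) pairs covering n_total elements in
    chunks of at most block_size elements each.

    In VASP, each chunk would be handed to a separate MPI_Isend /
    MPI_Irecv call instead of sending the whole buffer at once.
    """
    offset = 0
    while offset < n_total:
        count = min(block_size, n_total - offset)
        yield offset, count
        offset += count

# Hypothetical example: a 25000-element message with MPI_BLOCK = 8000
chunks = list(split_into_blocks(25000, 8000))
print(chunks)  # [(0, 8000), (8000, 8000), (16000, 8000), (24000, 1000)]
```

The trade-off is between per-message latency (many small messages) and the bandwidth degradation large messages cause on commodity Ethernet, so the best MPI_BLOCK value depends on the interconnect.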



N.B. Requests for support are to be addressed to: vasp.materialphysik@univie.ac.at