[DFTB-Plus-User] Problem of parallel computing with openmpi

yokoi-mp.pse yokoi at mp.pse.nagoya-u.ac.jp
Thu May 11 15:35:10 CEST 2017


Dear DFTB+ developers,

I am a new user of DFTB+, and I have run into a problem with parallel 
computing using OpenMPI.

When I perform an MD calculation on 4 cores of one node, the 
computational time decreases effectively compared to a single-core 
calculation. However, when the same MD calculation is run on 8 cores 
across two nodes, the computational time becomes 2-3 times longer than 
with 4 cores on one node.
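For reference, translating these timings into parallel efficiency makes the breakdown explicit. The numbers below are illustrative placeholders only (my actual wall-clock times are not shown here); they just assume the 8-core run is 2.5x slower than the 4-core one, the midpoint of the observed range:

```python
# Illustrative placeholder timings (seconds per MD step), NOT measured values.
t1 = 8.0          # assumed single-core time
t4 = 2.4          # assumed 4-core, single-node time
t8 = 2.5 * t4     # 8-core, two-node time: 2.5x slower than the 4-core run

speedup_4 = t1 / t4        # speedup relative to one core
speedup_8 = t1 / t8
efficiency_8 = speedup_8 / 8   # parallel efficiency on 8 cores

print(round(speedup_4, 2), round(speedup_8, 2), round(efficiency_8, 3))
# → 3.33 1.33 0.167
```

An 8-core efficiency below 0.2 means the two-node run is doing less useful work per core than even the serial run would suggest, i.e. scaling has broken down rather than merely flattened out.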

Since the simulation box contains a relatively large number of atoms 
(496 Si atoms) and only two nodes are involved, I do not think 
communication overhead is the cause. Could you give me advice on 
possible causes of this problem?

My architecture and package versions are
CPU: Intel Core i7
OS: CentOS 6.8
Intel compiler and MKL version: 14.0.2
OpenMPI version: 1.6.5 and 1.8.8
DFTB+: dftb+.mpi-r4473

The input file of the MD simulation is attached below.

Geometry = GenFormat {
     <<< "config.dftb.in"       # This simulation box contains 496 Si atoms.
}
Driver = VelocityVerlet {
   Steps = 100000
   TimeStep [Femtosecond] = 1.0
   Thermostat = Andersen {
     Temperature [Kelvin] = 1000.0
     ReselectProbability = 0.200000000000000
     ReselectIndividually = No
     AdaptFillingTemp = Yes
   }
   OutputPrefix = "geo_end"
}
Hamiltonian = DFTB {
   SCC = No
   MaxAngularMomentum = {
     Si = "p"
   }
   SlaterKosterFiles = {
     Si-Si = "./Si-Si.skf"
   }
   KPointsAndWeights = SupercellFolding {
    1 0 0
    0 4 0
    0 0 6
    0.0 0.0 0.0
   }
}
Options = {
   WriteAutotestTag = Yes
   AtomResolvedEnergies = Yes
}

ParserOptions = {
   ParserVersion = 3
}
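
As a side note on the KPointsAndWeights block above: the folding matrix is diagonal, so the mesh contains 1 x 4 x 6 = 24 k-points before any symmetry reduction. A quick check, assuming the usual convention that the determinant of the integer folding matrix gives the size of the unreduced mesh:

```python
from math import prod

# Diagonal entries of the SupercellFolding matrix from the input above.
folding_diag = [1, 4, 6]

# For a diagonal folding matrix the determinant is just the product of
# the diagonal entries, which is the number of unreduced k-points.
n_kpoints = prod(folding_diag)
print(n_kpoints)  # → 24
```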

Sincerely,
tatsu
