[DFTB-Plus-User] Scalapack error in MD run

Luca Babetto luca.babetto at greenenergystorage.eu
Mon Jul 25 10:13:07 CEST 2022


Dear Bálint,

As always, thank you for your reply. I checked the geometries and everything seems fine; there are no atoms "dangerously" close to one another that could cause the issue you mentioned, so I suspect that is not the cause.

I have run a few tests with the exact same input file, (i) reducing the number of cores from 20 to 6 and (ii) using the shared-memory version of DFTB+. Even though I have only let each test run for a couple of thousand steps, I have not encountered the issue in either case (the "normal" run would consistently crash within a couple of hundred MD steps).

The problem therefore seems to be related to simulations running on many cores. For reference, we are running DFTB+ on a 64-core Threadripper PRO 3995WX workstation under Ubuntu 20.04, launching the simulations via Slurm for job scheduling, with DFTB+ installed directly from the conda repository (both the OpenMPI and the shared-memory versions). Let me know if there are further tests I can run to help diagnose the problem, or whether I should open an issue on GitHub with this information. We have not encountered this problem before in simulations with the mio and "standard" 3ob parameters, or with the xTB Hamiltonian.
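For reference, the submission looks roughly like the following. This is a minimal sketch rather than our actual script: the job name, core counts, environment name ("dftbplus"), and output file are placeholders, and the real run directory contains the dftb_in.hsd input.

```shell
#!/bin/bash
# Hypothetical Slurm batch script sketching the failing setup.
#SBATCH --job-name=dftb-md
#SBATCH --ntasks=20          # the crashing runs used 20 MPI ranks
#SBATCH --cpus-per-task=1

# Activate the conda environment providing the MPI-enabled dftb+ binary
# (environment name is a placeholder)
source activate dftbplus

# Launch DFTB+ through the OpenMPI runtime bundled with the conda package;
# dftb_in.hsd is read from the current working directory
mpirun -np "$SLURM_NTASKS" dftb+ > output.log
```

The shared-memory test replaced the mpirun line with a plain serial invocation of the OpenMP build.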

Kind regards