[DFTB-Plus-User] Running DFTB+ with XTB (tblite)

Bálint Aradi aradi at uni-bremen.de
Tue Feb 6 16:07:31 CET 2024


Dear Francesca,

How much speed-up you can gain with a parallel DFTB/xTB calculation 
depends on the hardware you use and the size of the problem you treat. 
DFTB/xTB typically works with rather small Hamiltonian matrices, so 
there is usually no advantage in spreading those relatively small 
matrices over multiple nodes unless your system is really big. (Your 
system with a few hundred atoms is rather small...)

You should first test the efficiency of the parallelisation on a single 
node, by using different numbers of processes (1, 2, 4, 8, 16, ...) up 
to the number of cores on that node. If the parallel efficiency already 
drops significantly within the node, it does not make sense to use 
further nodes for the calculation.
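
A minimal sketch of such a test (assuming a bash shell and that mpirun 
and dftb+ are on your PATH; adapt the process counts to your node) could 
look like this:

   #!/bin/bash
   # Single-node scaling test: run the same input with an increasing number
   # of MPI processes and record the wall-clock time of each run.
   export OMP_NUM_THREADS=1               # one OpenMP thread per MPI process
   for np in 1 2 4 8 16; do
       { time mpirun -n ${np} dftb+ > output.${np}.log ; } 2> time.${np}.log
   done

Comparing the recorded wall-clock times then shows directly how the 
parallel efficiency develops within the node.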

If it scales well within the node and you want to go beyond one node, 
it is also important to use the MPI libraries provided by your HPC 
system instead of the Conda-provided ones. In that case you (or better, 
your system administrator) should compile the code from source against 
the MPI libraries provided or recommended by the system administrators.
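
As a rough sketch only (the module name, compiler wrappers and CMake 
options below are assumptions and depend on your site; -DWITH_TBLITE 
enables the xTB support via tblite), such a source build could look 
like:

   # load the compiler/MPI stack recommended by your HPC centre (name is site-specific)
   module load foss
   git clone https://github.com/dftbplus/dftbplus.git
   cd dftbplus
   FC=mpifort CC=mpicc cmake -B _build \
       -DWITH_MPI=TRUE -DWITH_TBLITE=TRUE \
       -DCMAKE_INSTALL_PREFIX=$HOME/opt/dftbplus
   cmake --build _build -- -j
   cmake --install _build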

Best regards,

Bálint


On 06.02.24 15:22, Francesca Lønstad Bleken via DFTB-Plus-User wrote:
> Dear Bálint,
> 
> Thank you very much for these pointers. Unfortunately, with these changes the simulation no longer fails when run in parallel, but it runs just as quickly on four nodes as on one node, i.e. it is not actually running in parallel.
> I wonder if one should rather install from source on supercomputers.
> 
> Best regards,
> Francesca
> 
> -----Original Message-----
> From: DFTB-Plus-User <dftb-plus-user-bounces at mailman.zfn.uni-bremen.de> On Behalf Of Bálint Aradi
> Sent: Tuesday, February 6, 2024 11:15
> To: dftb-plus-user at mailman.zfn.uni-bremen.de
> Subject: Re: [DFTB-Plus-User] Running DFTB+ with XTB (tblite)
> 
> Dear Francesca,
> 
> I just tried to run your system on my x86_64 laptop, using
> dftbplus=*=mpi_* (which was resolved to dftbplus=23.1=mpi_mpich in my case). I could do two geometry steps without any issues, then I stopped it.
> 
> Two remarks to your input:
> 
> * You explicitly allow OpenMP threads (UseOmpThreads = Yes). This option is mostly meant for testing, not for production, as it very easily leads to oversubscription of your system. My recommendation would be to delete this option and make sure that the environment variable OMP_NUM_THREADS is set to 1. Then invoke mpirun with the number of cores on your system, e.g.
> 
> OMP_NUM_THREADS=1 mpirun -n 4 dftb+
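> 
> For reference, a minimal sketch of the relevant input fragment after that change (assuming the option sits in a Parallel block of your dftb_in.hsd, where it usually lives):
> 
>    Parallel {}   # "UseOmpThreads = Yes" removed; the MPI defaults are used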
> 
> * As your system is quite big, you might consider using the Gamma point only for the k-sampling (by setting the shift to 0 0 0). This should result in higher execution speed, as all relevant matrices for the diagonalisation would be real instead of complex.
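> 
> As an illustration, a Gamma-only sampling could look like the following sketch (assuming your k-points are defined via SupercellFolding in dftb_in.hsd):
> 
>    KPointsAndWeights = SupercellFolding {
>      1 0 0
>      0 1 0
>      0 0 1
>      0.0 0.0 0.0
>    }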
> 
> I hope this helps.
> 
> Best regards,
> 
> Bálint
> 
> On 01.02.24 15:21, Francesca Lønstad Bleken wrote:
>> Hi,
>>
>> I am testing out the possibility of running DFTB+ with XTB in parallel.
>> Unfortunately I have issues, starting with an immediate segmentation fault and absolutely no output when I try to run in parallel.
>>
>> When running on only one CPU I do not have any issues.
>>
>> I have installed using conda and mamba on a supercomputer with the dftbplus=*=mpi_* option, and I did not get any specific errors or warnings from that.
>>
>> I have pasted the dftb_in.hsd below, and would be grateful if anyone knowledgeable could confirm whether this should at least work in theory.
>>
>> Best regards,
>>
>> Francesca
> 
> --
> Dr. Bálint Aradi
> Bremen Center for Computational Materials Science, University of Bremen
> http://www.bccms.uni-bremen.de/cms/people/b-aradi/
> 
> 

-- 
Dr. Bálint Aradi
Bremen Center for Computational Materials Science, University of Bremen
http://www.bccms.uni-bremen.de/cms/people/b-aradi/

