[DFTB-Plus-User] Running DFTB+ with XTB (tblite)

Francesca Lønstad Bleken francesca.l.bleken at sintef.no
Thu Feb 29 09:57:53 CET 2024


Thank you very much, Bálint.

I have looked around in those folders, but I could not find any test with a system big enough to really check whether good parallelization is achieved.

Of the tests marked LONG_TEST in the test suite, I have a hard time identifying which ones are suited. Most contain very few atoms,
and I understood from our previous correspondence that even a few hundred atoms is not a large enough system.
There is one test with more than 400 atoms (HBI+H2O_periodic_stress), but it is specified without MPI (and indeed fails for me on more than one node).
Another test with 400+ atoms (local_curr) focuses on transport, which our system owners have not included in the installation. Before proceeding, they have asked for a script they can use to test performance.

I would be very grateful if you could point me towards one test that is demanding enough to show whether there is a significant speed-up once a few MD steps have run (preferably with xTB, but not necessarily).
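
For context, a sketch of the kind of input I have in mind is below (a short xTB MD run; the geometry file name, number of steps and thermostat settings are only placeholders that I would adapt to whatever test you can suggest):

# placeholder geometry: a periodic structure with several hundred atoms or more
Geometry = GenFormat {
  <<< "large_supercell.gen"
}

# short MD run, just long enough to get stable timings per step
Driver = VelocityVerlet {
  TimeStep [fs] = 0.5
  Steps = 50
  Thermostat = NoseHoover {
    Temperature [Kelvin] = 300
    CouplingStrength [cm^-1] = 3200
  }
}

# xTB via tblite; Gamma-only k-sampling to keep the matrices real
Hamiltonian = xTB {
  Method = "GFN1-xTB"
  KPointsAndWeights = SupercellFolding {
    1 0 0
    0 1 0
    0 0 1
    0.0 0.0 0.0
  }
}
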
Best regards,

Francesca

-----Original Message-----
From: DFTB-Plus-User <dftb-plus-user-bounces at mailman.zfn.uni-bremen.de> On Behalf Of Bálint Aradi
Sent: Wednesday, February 28, 2024 16:22
To: dftb-plus-user at mailman.zfn.uni-bremen.de
Subject: Re: [DFTB-Plus-User] Running DFTB+ with XTB (tblite)

Dear Francesca,

you can find test calculations of various kinds in the test folder of the repository:

https://github.com/dftbplus/dftbplus/tree/main/test/app/dftb%2B

The reference values are stored in the _autotest.tag files (note: values are in atomic units).
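
If you want to check a single case by hand, you can also copy the input files of one of those tests into a scratch directory and run dftb+ on them directly, comparing the printed energies with the values in the corresponding _autotest.tag file. Roughly like this (the test path below is just a placeholder; for tests other than the xTB ones you additionally need to make the required Slater-Koster files available):

cp -r test/app/dftb+/SOME_CATEGORY/SOME_TEST run_test    # placeholder path, pick a suitable test
cd run_test
OMP_NUM_THREADS=1 mpirun -n 4 dftb+ | tee output.log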

Best regards,

Bálint

On 27.02.24 11:14, Francesca Lønstad Bleken via DFTB-Plus-User wrote:
> Hi,
>
> thank you for your feedback.
> I still have issues, and I also run into memory problems as soon as I
> increase the unit cell significantly. However, to make sure that the issue is not with how I have set up the calculation, I am looking for a set of jobs or test calculations that can be run to check whether the installation was done correctly.
>
> Do you have sample jobs that I could use for this purpose?
>
> Best regards,
> Francesca
>
> -----Original Message-----
> From: DFTB-Plus-User
> <dftb-plus-user-bounces at mailman.zfn.uni-bremen.de> On Behalf Of Bálint
> Aradi
> Sent: Tuesday, February 6, 2024 16:08
> To: dftb-plus-user at mailman.zfn.uni-bremen.de
> Subject: Re: [DFTB-Plus-User] Running DFTB+ with XTB (tblite)
>
> Dear Francesca,
>
> how much speed-up you can gain with a parallel DFTB/xTB calculation depends on the hardware you use and the size of the problem you treat.
> DFTB/xTB typically work with quite small Hamiltonian matrices, so there is
> no advantage in trying to spread those relatively small matrices over
> multiple nodes unless your system is really big. (Your system with a few
> hundred atoms is rather small...)
>
> You should first test the efficiency of the parallelisation on a single node, by using different numbers of processes (1, 2, 4, 8, 16, ...) up to the number of cores on that node. If the parallel efficiency already drops significantly within the node, it does not make sense to use further nodes for the calculation.
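>
> For the timing you do not need anything sophisticated; a simple shell loop over the process counts in the job directory is enough, for example (adjust the process counts to the cores of your node):
>
> # run the same input with an increasing number of MPI processes and record the wall time
> for n in 1 2 4 8 16; do
>   start=$(date +%s)
>   OMP_NUM_THREADS=1 mpirun -n $n dftb+ > output.$n.log
>   echo "$n processes: $(( $(date +%s) - start )) s"
> done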
>
> If it scales within the node and you go for more than one node, it is also important to use the libraries provided by your HPC system instead of the Conda-provided ones. In that case, you (or better, your system administrator) should compile the code using the MPI libraries provided/recommended by the system administrators.
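>
> On a typical cluster that could look roughly like the following (the module names are only examples, and please check the INSTALL instructions of your DFTB+ version for the exact CMake options):
>
> module load cmake openmpi     # or whatever toolchain your site recommends
> git clone https://github.com/dftbplus/dftbplus.git
> cd dftbplus
> cmake -B _build -DWITH_MPI=TRUE -DWITH_TBLITE=TRUE -DCMAKE_INSTALL_PREFIX=$HOME/opt/dftbplus
> cmake --build _build -j 8
> cmake --install _build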
>
> Best regards,
>
> Bálint
>
>
> On 06.02.24 15:22, Francesca Lønstad Bleken via DFTB-Plus-User wrote:
>> Dear Bálint,
>>
>> Thank you very much for these pointers. Unfortunately, with these changes the simulation no longer fails when run in parallel, but it runs just as quickly on four nodes as on one node, i.e. it is not actually running in parallel.
>> I wonder if one should rather install from source on supercomputers.
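>>
>> (A quick way to check whether the installed binary is an MPI build at all is probably to look at the libraries it links against, e.g.
>>
>> ldd "$(which dftb+)" | grep -i mpi
>>
>> which should list an MPI library if the mpi_* build was actually picked up.)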
>>
>> Best regards,
>> Francesca
>>
>> -----Original Message-----
>> From: DFTB-Plus-User
>> <dftb-plus-user-bounces at mailman.zfn.uni-bremen.de> On Behalf Of
>> Bálint Aradi
>> Sent: Tuesday, February 6, 2024 11:15
>> To: dftb-plus-user at mailman.zfn.uni-bremen.de
>> Subject: Re: [DFTB-Plus-User] Running DFTB+ with XTB (tblite)
>>
>> Dear Francesca,
>>
>> I just tried to run your system on my x86_64 laptop, using
>> dftbplus=*=mpi_* (which was resolved to dftbplus=23.1=mpi_mpich in my case). I could do two geometry steps without any issues, then I stopped it.
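>>
>> (For reference, such an MPI-enabled conda environment can be created with something along the lines of
>>
>> mamba create -n dftbplus-mpi -c conda-forge "dftbplus=*=mpi_*"
>>
>> where the environment name is of course arbitrary.)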
>>
>> Two remarks on your input:
>>
>> * You explicitly allow for OMP threads (UseOmpThreads = Yes). This option is mostly meant for testing, not for production, as it very easily leads to oversubscription of your system. My recommendation would be to delete this option and make sure that the environment variable OMP_NUM_THREADS is set to 1. Then invoke mpirun with the number of cores on your system, e.g.
>>
>> OMP_NUM_THREADS=1 mpirun -n 4 dftb+
>>
>> * As your system is quite big, you might consider using the Gamma point only for the k-sampling (by setting the shift to 0 0 0). This should result in higher execution speed, as all relevant matrices for the diagonalization would then be real instead of complex.
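>>
>> For a Gamma-point-only sampling, the corresponding block in dftb_in.hsd would simply be
>>
>> KPointsAndWeights = SupercellFolding {
>>   1 0 0
>>   0 1 0
>>   0 0 1
>>   0.0 0.0 0.0
>> }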
>>
>> I hope this helps.
>>
>> Best regards,
>>
>> Bálint
>>
>> On 01.02.24 15:21, Francesca Lønstad Bleken wrote:
>>> Hi,
>>>
>>> I am testing out the possibility of running DFTB+ with XTB in parallel.
>>> Unfortunately I have issues, starting with an immediate segmentation
>>> fault and no output at all when I try to run in parallel.
>>>
>>> When running on only one CPU I do not have any issues.
>>>
>>> I have installed using conda and mamba on a supercomputer using the
>>> **=mpi_** option and I did not get any specific errors or warnings
>>> from that.
>>>
>>> I have pasted the dftb_in.hsd below, and I would be grateful if anyone
>>> knowledgeable could confirm whether this should at least work in theory.
>>>
>>> Best regards,
>>>
>>> Francesca
>>
>> --
>> Dr. Bálint Aradi
>> Bremen Center for Computational Materials Science, University of Bremen
>> http://www.bccms.uni-bremen.de/cms/people/b-aradi/
>>
>>
>> _______________________________________________
>> DFTB-Plus-User mailing list
>> DFTB-Plus-User at mailman.zfn.uni-bremen.de
>> https://mailman.zfn.uni-bremen.de/cgi-bin/mailman/listinfo/dftb-plus-user
>
> --
> Dr. Bálint Aradi
> Bremen Center for Computational Materials Science, University of Bremen
> http://www.bccms.uni-bremen.de/cms/people/b-aradi/
>
>
> _______________________________________________
> DFTB-Plus-User mailing list
> DFTB-Plus-User at mailman.zfn.uni-bremen.de
> https://mailman.zfn.uni-bremen.de/cgi-bin/mailman/listinfo/dftb-plus-user

--
Dr. Bálint Aradi
Bremen Center for Computational Materials Science, University of Bremen
http://www.bccms.uni-bremen.de/cms/people/b-aradi/





More information about the DFTB-Plus-User mailing list