[DFTB-Plus-User] Parallelization - periodic systems - small unit cell
Bálint Aradi
aradi at uni-bremen.de
Tue Jun 21 14:29:19 CEST 2022
Dear Ariadni Boziki,
The problem is that if your Hamiltonian is small, it does not make
sense to distribute its diagonalization over many processes, as you
won't gain any speedup.
If you have many k-points, on the other hand, you should define
processor groups (see the option "Groups" in the "Parallel" section).
If you have N groups, N different k-points are calculated at the same
time. In your case, you could set the number of groups equal to the
number of k-points, so that each process basically diagonalizes a
separate Hamiltonian. (Please note, however, that the number of groups
must be an integer divisor of both the number of processes you use and
the number of k-points in your calculation.)
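As a concrete sketch (the counts below are assumed for illustration, not taken from your input; with a 6x6x6 mesh the actual number of irreducible k-points depends on symmetry reduction), if the calculation has 8 irreducible k-points, you could run on 8 MPI processes with:

```
Parallel {
  # Number of groups: must be an integer divisor of both the number
  # of MPI processes and the number of k-points (here assumed to be 8)
  Groups = 8
}
```

launched with, e.g., `mpirun -np 8 dftb+`, so that each group diagonalizes the Hamiltonian of one k-point.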
Best regards,
Bálint
On 21.06.22 11:57, Ariadni BOZIKI wrote:
> Dear all,
>
>
> I am running MD simulations using DFTB+ for a small unit cell of 16
> atoms. Ideally, my goal is to use a parallelized version of DFTB+ in
> order to reduce the computational cost as much as possible. So far, I
> have installed the code via conda with MPI. On a supercomputer, when I
> use 1 node and 1 processor, my simulations run fine. However, when I
> increase the number of processors I get errors like the one below,
> depending on the number of processors I request each time:
>
>
> WARNING!
> -> Insufficient atoms for this number of MPI processors
> ERROR!
> -> Processor grid (3 x 4) too big (> 1 x 1)
>
>
> From the error itself I understand that the small size of the unit cell
> does not allow for a larger processor grid. However, is there a way to
> use more processors? I tried to increase the number of groups in the
> Parallel section so that the processes are distributed over k-points,
> but that did not work either. I should mention here that I have a dense
> k-point grid of 6x6x6. I was wondering, for example, whether hybrid
> MPI/OpenMP parallelization would allow more processors to be used and
> in turn reduce the computational cost.
>
>
> I would like to thank you in advance.
>
>
> Sincerely yours,
>
>
> Ariadni Boziki
>
>
> _______________________________________________
> DFTB-Plus-User mailing list
> DFTB-Plus-User at mailman.zfn.uni-bremen.de
> https://mailman.zfn.uni-bremen.de/cgi-bin/mailman/listinfo/dftb-plus-user
--
Dr. Bálint Aradi
Bremen Center for Computational Materials Science, University of Bremen
http://www.bccms.uni-bremen.de/cms/people/b-aradi/