[DFTB-Plus-User] SCC not converged for large system

xyuan at stu.xmu.edu.cn xyuan at stu.xmu.edu.cn
Thu Jun 25 08:38:10 CEST 2020


Hi all,
    Has anybody else encountered SCC convergence problems for a large system (about 2000 atoms)?
After a few normal initial iterations, the total electronic energy becomes positive and stays that way until the maximum number of SCC cycles is reached.
The SCC error also stays at around 10^1 and never approaches the 10^-5 threshold.
   
    My SCC information is as follows:

  iSCC  Total electronic         Diff electronic          SCC error    
    1   -0.22550217E+04    0.00000000E+00    0.98989869E+00
    2   -0.13266903E+04    0.92833144E+03    0.53919336E+01
    3    0.30763755E+05    0.32090445E+05    0.69495697E+01
    4    0.10089331E+06    0.70129554E+05    0.71586879E+01
    5    0.11620144E+06    0.15308132E+05    0.71311977E+01
    6    0.11707412E+06    0.87268319E+03    0.72229597E+01
    7    0.30625064E+05   -0.86449060E+05    0.70358079E+01
    8    0.42191900E+05    0.11566837E+05    0.63066979E+01
    9    0.32899594E+05   -0.92923064E+04    0.75667934E+01
   10    0.29401499E+05   -0.34980947E+04    0.66074300E+01
......
   1997    0.27429467E+05    0.19828862E+04    0.64429006E+01
   1998    0.38316675E+05    0.10887208E+05    0.63588855E+01
   1999    0.25176288E+05   -0.13140387E+05    0.66119929E+01
   2000    0.16600439E+05   -0.85758494E+04    0.66443277E+01

Total Energy:                    16621.6498759141 H       452298.1058 eV
Extrapolated to 0:               16621.6498759141 H       452298.1058 eV
Total Mermin free energy:        16621.6498759141 H       452298.1058 eV
Force related energy:            16621.6498759141 H       452298.1058 eV
WARNING!
-> SCC is NOT converged, maximal SCC iterations exceeded
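
Would the usual first things to try be a finite electronic temperature and a
smaller charge-mixing parameter? A rough sketch of the SCC-related settings I
have in mind (the values are only placeholders, not what I actually used, and
the geometry and Slater-Koster blocks are omitted):

Hamiltonian = DFTB {
  SCC = Yes
  SCCTolerance = 1.0E-5
  MaxSCCIterations = 500
  # smaller mixing parameter than the default 0.2, to damp charge oscillations
  Mixer = Broyden {
    MixingParameter = 0.05
  }
  # finite electronic temperature, often helpful for small-gap systems
  Filling = Fermi {
    Temperature [Kelvin] = 300
  }
}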



Best wishes
xyuan at stu.xmu.edu.cn
 
From: dftb-plus-user-request
Date: 2020-06-22 19:51
To: dftb-plus-user
Subject: DFTB-Plus-User Digest, Vol 70, Issue 15
 
Today's Topics:
 
   1. Re: mpiprocs and ompthreads setting (Ben Hourahine)
   2. Re: How fast is DFTB+ in MD simulation? (Ben Hourahine)
 
 
----------------------------------------------------------------------
 
Message: 1
Date: Mon, 22 Jun 2020 12:38:10 +0100
From: Ben Hourahine <benjamin.hourahine at strath.ac.uk>
To: dftb-plus-user at mailman.zfn.uni-bremen.de
Subject: Re: [DFTB-Plus-User] mpiprocs and ompthreads setting
Message-ID: <0b3c477b-2430-b64a-ee1d-54d3902c1f83 at strath.ac.uk>
Content-Type: text/plain; charset="utf-8"
 
Hello Zhang,
 
Yes, both of these can change the calculation speed (both the wall-clock and
the CPU time used). Have a look at

https://dftbplus-recipes.readthedocs.io/en/stable/parallel/index.html

for some discussion of OpenMP parallelism (most of the theory is relevant for
MPI as well).
 
OMP_NUM_THREADS is the shell variable that sets the number of OpenMP threads,
while ompthreads is an instruction to the queueing system to set that variable
and to take it into account when scheduling the job. There are some more PBS
examples at

https://www2.cisl.ucar.edu/resources/computational-systems/cheyenne/running-jobs/pbs-pro-job-script-examples

including a pure OpenMP parallel calculation. Depending on how you have
compiled DFTB+, it may be parallelised with OpenMP, MPI or both. Generally,
mixing OpenMP and MPI does not give good performance.
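
For example, a job that runs DFTB+ purely OpenMP-parallel on one 24-core node
might look roughly like the following (queue name, resources and the binary
path are placeholders to adapt to your machine):

#!/bin/bash
#PBS -N dftb
#PBS -q normal
#PBS -l select=1:ncpus=24:mpiprocs=1:ompthreads=24:mem=96gb
#PBS -l walltime=24:00:00
#PBS -j oe

cd "$PBS_O_WORKDIR"
# ompthreads should already make PBS export this, but being explicit is harmless
export OMP_NUM_THREADS=24

# the threaded (non-MPI) binary reads dftb_in.hsd from the working directory
/path/to/dftb+ > out

The main points are that ompthreads (and OMP_NUM_THREADS) match ncpus, and
that mpiprocs stays at 1 when the binary is not MPI parallel.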
 
For 3x10^4 atoms you may run into memory limitations, as that will require
roughly 1 TB of memory if you use a dense eigensolver. If your system is
suitable, you might want to explore one of the alternative solvers from ELSI
(these are only available in the MPI-parallel code and need to be enabled at
compile time).
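
With an MPI build that has ELSI support compiled in, the solver is chosen in
the Hamiltonian block; from memory the input looks roughly like this (check
the manual of your release for the exact keyword names and for which solvers
your build provides):

Hamiltonian = DFTB {
  # ... SCC settings, Slater-Koster files, etc. ...
  # ELSI solver instead of the default dense eigensolver; ELPA is the
  # distributed dense solver, OMM / PEXSI / NTPoly avoid full diagonalisation
  Solver = ELPA {}
}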
 
Regards
 
Ben
 
 
On 19/06/2020 13:53, jsxz wrote:
> Hi All,
>
> May I ask how to set mpiprocs and ompthreads in the submission
> script? Do these two parameters affect the calculation speed?
> Is ompthreads the same as OMP_NUM_THREADS?
> My system is about 30000 atoms. I plan to run molecular dynamics
> for about 10 ps.
> My submission script is below; 'select' is the number of nodes, and
> the number of CPUs per node is 24.
>
> #!/bin/bash
> #PBS -N dftb
> #PBS -q normal
> #PBS -P 13101xxx
> #PBS -l select=30:ncpus=24:mpiprocs=12:ompthreads=24:mem=96gb
>
> #PBS -l walltime=24:00:00
> #PBS -j oe
>
>
> module load composerxe/2016.1.150
> module load intelmpi
> cd "$PBS_O_WORKDIR"
> export OMP_NUM_THREADS=12
>
> /home/users/xz/software/DFTB/18.2/bin/dftb+ <  dftb_in.hsd > out
>
> Thanks a lot.
>
> Best Regards,
> Chao Zhang
>
>
>  
>
>
 
 
 
------------------------------
 
Message: 2
Date: Mon, 22 Jun 2020 12:51:41 +0100
From: Ben Hourahine <benjamin.hourahine at strath.ac.uk>
To: dftb-plus-user at mailman.zfn.uni-bremen.de
Subject: Re: [DFTB-Plus-User] How fast is DFTB+ in MD simulation?
Message-ID: <1d651f53-fa44-f312-ae0d-bd4e8f426a5d at strath.ac.uk>
Content-Type: text/plain; charset="windows-1252"
 
Hello Moyassar,
 
 
you are probably using too many cores to get efficient performance on
such a small job (the matrix dimension is only ~2500).
 
 
Are you comparing equivalent tolerances between the CP2K and DFTB+
calculations? There is probably some difference in the self-consistency
requirements between the codes, and you may also be using different time
steps for the MD. Also, what is your criterion for reliable results?
 
 
For standard Born-Oppenheimer MD, the error in the energy is quadratic in the
self-consistency tolerance (for DFTB+, the energy error in Hartree is
approximately the square of the SCC tolerance in electron charges). The error
in the forces is closer to linearly proportional to the tolerance.
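
Written out, with delta q the SCC tolerance (in electron charges) and Delta E,
Delta F the resulting errors, the scaling is roughly

  \Delta E \approx (\delta q)^2 ,     |\Delta F| \propto \delta q

so an SCCTolerance of 1.0E-3, for example, corresponds to an energy error of
about 1.0E-6 Hartree.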
 
 
If you can still get good results (test carefully), you can accelerate MD
calculations in the following ways (see the input sketch after the list):
 
 
1) Increase SCCTolerance (the default is 1.0E-5; depending on what you are
doing, you may be fine with values 1 - 2 orders of magnitude looser).
 
 
2) Improve the accuracy of the evaluated forces, to compensate for the looser
convergence, by using ForceEvaluation = Dynamics (or DynamicsT0).
 
 
3) You didn't mention whether you use a finite electronic temperature (this
can speed up self-consistency) or whether your calculation is periodic (if
it's a water box, a single k-point at 0,0,0 will probably be enough).
 
 
4) Test the number of processors against performance. You are trading
wall-clock time against computational resources, but requesting too many
processes can actually increase the wall-clock time to finish (look up
'spin locks' for an explanation).
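
Putting points 1) to 3) together, the relevant parts of dftb_in.hsd might look
something like the sketch below (the numbers are only illustrative and should
be tested against a tightly converged reference):

Hamiltonian = DFTB {
  SCC = Yes
  # looser than the 1.0E-5 default, point 1)
  SCCTolerance = 1.0E-4
  # better force accuracy at loose SCC convergence, point 2)
  ForceEvaluation = Dynamics
  # finite electronic temperature, point 3)
  Filling = Fermi {
    Temperature [Kelvin] = 300
  }
  # a single k-point at (0,0,0), if the water box is periodic
  KPointsAndWeights = {
    0.0 0.0 0.0 1.0
  }
  # ... Slater-Koster files, MaxAngularMomentum, etc. ...
}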
 
 
Unfortunately, the XL (extended Lagrangian) scheme as currently implemented
does not support thermostats (this might change in the future), but you could
equilibrate with a conventional NVT (thermostatted) run and then restart an
NVE calculation, if that gives you the properties you need.
 
 
Regards
 
 
Ben
 
 
On 19/06/2020 21:09, Moyassar Meshhal wrote:
> Hello DFTB+ community,
>
> I've noticed that DFTB+ is very fast and efficient for geometry
> optimization. In MD, by contrast, it is relatively slow compared to DFTB
> in CP2K.
>
> For a system consisting of about 250 atoms (C, H and O) in a box of
> water (250-300 water molecules), DFTB+ produces about 800 steps per
> day using 16 CPUs, and increasing the number of CPUs to 32 does not
> speed the simulation up much. CP2K, on the other hand, is about 8-10
> times faster, although DFTB+'s results are much more reliable than
> CP2K's.
>
> Does this sound normal? If it is, it would take about 2 months to
> obtain a long MD trajectory for each of my systems!
> Is there any way to make the calculations faster?
>
> According to the "dftbplus-recipes" website, SCC-MD can be sped up
> by using the extended Lagrangian (XL) scheme; however, the manual
> states: "The extended Lagrangian implementation only works for the
> (N,V,E) ensemble so far, so neither thermostats nor barostats are
> allowed."
> So it does not apply in my case, as I run NVT MD.
>
> Best Regards,
> Moyassar
>
>
>
 
 
 
------------------------------

End of DFTB-Plus-User Digest, Vol 70, Issue 15
**********************************************