NESSi
v1.0.2
The NonEquilibrium Systems Simulation Library

After downloading or cloning the NESSi repository nessi
, the example programs are compiled in a similar fashion to the libcntr
library. We have prepared a CMake build environment, and it is most convenient to write a configuration shell script configure.sh
of the following type:
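A minimal sketch of such a script, assuming a plain GNU toolchain; compilers, paths and extra flags are placeholders to adapt to your system (only hdf5, CMAKE_INCLUDE_PATH and CMAKE_LIBRARY_PATH are the options discussed in this section):

```shell
#!/bin/bash
# Example configure.sh for the NESSi example programs (sketch).
# Adjust compilers, paths and flags to your installation.
CC=gcc CXX=g++ \
cmake \
  -DCMAKE_BUILD_TYPE=Release \
  -Dhdf5=ON \
  -DCMAKE_INCLUDE_PATH="$HOME/opt/include" \
  -DCMAKE_LIBRARY_PATH="$HOME/opt/lib;$HOME/nessi/libcntr/lib" \
  -DCMAKE_CXX_FLAGS="-O3" \
  ..
```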
The dependencies for the example programs are the same as for libcntr
. The eigen3
library and, optionally (turned on by hdf5=ON
), the HDF5
library are required. Make sure the corresponding libraries can be found in the library path CMAKE_LIBRARY_PATH
, while the corresponding headers should be placed in CMAKE_INCLUDE_PATH
. Furthermore, the examples depend on the libcntr
library. Therefore, the library path for libcntr
should be provided by CMAKE_LIBRARY_PATH
, while the header cntr/cntr.hpp
should be found in the include path. Create a build directory (for instance, cbuild/
), navigate there and run the configure script:
After successful configuration, compile via
The executables are then found in the exe/
directory.
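Putting the pieces together, the configure-and-build sequence sketched in this section might look as follows (directory and script names as suggested above):

```shell
# Create the build directory, configure, and compile the examples.
mkdir -p cbuild && cd cbuild
sh ../configure.sh   # invokes cmake with the chosen flags
make                 # executables are placed in exe/
```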
The utils/
directory contains useful python driver scripts which simplify the execution of the example programs. To run them, make sure that the python path includes nessi/libcntr/python
and/or nessi/libcntr/python3
. The scripts should be run from nessi/examples
. The following table summarizes the python scripts and the corresponding executables, and briefly explains what each script does.
Summary of the python scripts used to run the example codes in nessi  
test_equilibrium.py  Runs the executable test_equilibrium.x 
test_nonequilibrium.py  Runs test_nonequilibrium.x 
demo_hubbard_chain.py  Runs hubbard_chain_2b.x and hubbard_chain_gw.x 
demo_Holstein_impurity.py  Runs Holstein_impurity_singlebath_Migdal.x and Holstein_impurity_singlebath_uMig.x 
demo_Holstein.py  Runs the corresponding Holstein example program 
demo_Holstein_sc.py  Runs the corresponding self-consistent Holstein example program 
demo_gw.py  Runs the \(GW\) example program 
demo_integration.py  Runs integration.x to demonstrate the accuracy of the Gregory integration implemented in nessi . 
This test program tests the accuracy of the solution of the Dyson equation on the Matsubara axis (see Dyson equation) in the context of a numerically exactly solvable downfolding problem. The scaling of the averaged error can be obtained for different values of SolveOrder
.
We consider a \(2\times 2\) matrix-valued time-independent Hamiltonian
\begin{align} \label{eq:ham2x2} \epsilon = \begin{pmatrix} \epsilon_1 & i \lambda \\ -i\lambda & \epsilon_2 \end{pmatrix} \ . \end{align}
The corresponding numerically exact GF \(G(t,t^\prime)\) (assuming fermions) is computed using the routine cntr::green_from_H
. One can also calculate the (1,1) component of this GF by downfolding. In this procedure, we solve
\begin{align} \label{eq:downfold1} \left(i \partial_t - \epsilon_1 \right) g_1(t,t^\prime) = \delta_\mathcal{C}(t,t^\prime) + \int_{\mathcal{C}}d \bar{t}\, \Sigma(t,\bar t) g_1(\bar t,t^\prime) \end{align}
with the embedding self-energy \(\Sigma(t,t^\prime) = \lambda^2 g_2(t,t^\prime)\). Here, \(g_2(t,t^\prime)\) is the free GF with respect to
\begin{align} \left(i \partial_t - \epsilon_2 \right) g_2(t,t^\prime) = \delta_\mathcal{C}(t,t^\prime) \ . \end{align}
The solution of the downfolding Dyson equation \eqref{eq:downfold1} must be identical to the \((1,1)\) matrix element of \(G\): \(G_{1,1}(t,t^\prime)=g_1(t,t^\prime)\).
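On the Matsubara axis the downfolding identity reduces to algebra in \(z=i\omega_n\), which makes a quick sanity check possible. The following sketch (plain python with illustrative parameters, not libcntr) compares the \((1,1)\) element of \((z-\epsilon)^{-1}\) with the downfolded propagator:

```python
import math

e1, e2, lam, beta = -1.0, 1.0, 0.5, 20.0  # illustrative parameters

def G11_exact(z):
    # (z - eps) = [[z-e1, -i*lam], [i*lam, z-e2]];  det = (z-e1)(z-e2) - lam^2
    det = (z - e1) * (z - e2) - lam ** 2
    return (z - e2) / det

def g1_downfolded(z):
    sigma = lam ** 2 / (z - e2)   # embedding self-energy Sigma(z) = lam^2 g2(z)
    return 1.0 / (z - e1 - sigma)

# compare on the first few fermionic Matsubara frequencies z = i*(2n+1)*pi/beta
freqs = [1j * (2 * n + 1) * math.pi / beta for n in range(20)]
max_dev = max(abs(G11_exact(z) - g1_downfolded(z)) for z in freqs)
```

The agreement is exact up to rounding, since the \((1,1)\) element of the inverse is precisely the Schur complement \(1/(z-\epsilon_1-\lambda^2/(z-\epsilon_2))\).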
The test program for the equilibrium problem solves Eq. \eqref{eq:downfold1} for the Matsubara component only (tstp=-1
), and computes the difference between the exact and approximate solution as
\begin{align} \label{eq:test_eq_err} \mathrm{err.} = \frac{1}{N_\tau} \Delta[G_{1,1},g_1] \ . \end{align}
Here, \(\Delta[A,B]\) denotes the norm difference (see Comparing Green's functions) at tstp=-1
.
In the test program programs/test_equilibrium.cpp
, we have implemented this task in a straightforward way. The code is a direct extension of a generic libcntr
-based program (see Creating a custom program). The parameters of the Hamiltonian are defined as constants. In particular, we fix \(\epsilon_1=-1\), \(\epsilon_2=1\), \(\lambda=0.5\). The chemical potential is set to \(\mu=0\) and the inverse temperature fixed to \(\beta=20\). After reading the input from file (see Creating and passing input files), we define the \(2\times 2\) Hamiltonian \eqref{eq:ham2x2} as an eigen3
complex matrix:
The \(1\times 1\) Hamiltonian representing \(\epsilon_1\) is constructed as
Here, eps_11_func
is a contour function entering the solvers below. Note the first argument in the constructor of CFUNC
: the number of real-time points \(N_t\) is set to \(-1\). In this case, only the Matsubara part is addressed. Its value is fixed to the constant \(1\times 1\) matrix by the last line. With the Hamiltonians defined, we can initialize and construct the free \(2\times 2\) exact GF by
The time step h
is a dummy argument here, as the real-time components are not addressed. From the exact GF, we extract the submatrix \(G_{1,1}\) by
Finally, we define the embedding self-energy by
The last line performs the multiplication of \(\mathcal{T}[\Sigma]_{-1}\) with the scalar \(\lambda^2\). After initializing the approximate GF G_approx
, we can solve the Matsubara Dyson equation and compute the average error:
The relevant source files are the following:
programs/test_equilibrium.cpp  source code of program 
utils/test_equilibrium.py  python driver script 
The program is run by
Here, inp/input_equilibrium.inp
is a libcntr
input file with the format described in Creating and passing input files. It contains the variables
Ntau  Number of points on the Matsubara axis 
SolveOrder  Solution order 
The output is appended to the file out/test_equilibrium.dat
in two columns, containing the error of the Fourier method and of the Newton iteration (see [6] for details). For instance, the input Ntau=100
and SolveOrder=2
yields the output
We provide a useful python driver script, which runs test_equilibrium.x
for a range of values of Ntau
. It uses the python module ReadCNTR
(see Python tools). It can be launched via
The last argument is SolveOrder=1,...,5
. After running the calculation, the logarithm of the error is plotted against \(\log(N_\tau)\), and the scaling of the error is extracted by a linear regression.
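The regression step can be sketched as follows, assuming the error follows a power law \(\mathrm{err}\propto N_\tau^{-p}\); the slope of \(\log(\mathrm{err})\) versus \(\log(N_\tau)\) then recovers \(-p\) (synthetic data here, whereas the actual script reads the errors from the output file):

```python
import math

# synthetic power-law errors err = C * Ntau**(-p)
p = 3
ntaus = [50, 100, 200, 400, 800]
errs = [0.3 * n ** (-p) for n in ntaus]

# ordinary least-squares slope of log(err) vs log(Ntau)
xs = [math.log(n) for n in ntaus]
ys = [math.log(e) for e in errs]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
```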
Fig.: Scaling of the error for SolveOrder=1 (left panel) and SolveOrder=5 (right panel).
This test program tests, similar to the example Test of accuracy of Dyson solver: equilibrium, the accuracy of the solution of the Dyson equation (see Dyson equation) and of the Volterra integral equation (see VIE2) for the numerically exactly solvable downfolding problem. The scaling of the averaged error can be obtained for different values of SolveOrder
.
In this example, we solve the Dyson equation \eqref{eq:downfold1} on the whole KB contour \(\mathcal{C}\). Equivalently, one can also solve the Dyson equation in integral form
\begin{align} \label{eq:downfold1_vie} & g_1(t,t^\prime) + [F\ast g_1](t,t^\prime) = g^{(0)}_1(t,t^\prime) \ , \\ & g_1(t,t^\prime) + [g_1\ast F^\ddagger](t,t^\prime) = g^{(0)}_1(t,t^\prime) \ , \end{align}
where \(F= -g_1^{(0)}\ast \Sigma\) and \(F^\ddagger=-\Sigma\ast g_1^{(0)}\). The free GF \(g_1^{(0)}(t,t^\prime)\) is known analytically and computed by calling the routine cntr::green_from_H
(see Green's function for a constant or time-dependent Hamiltonian).
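For a scalar, frequency-diagonal caricature of the problem, the equivalence of the differential and integral forms can be checked in a few lines (plain python; the kernel ordering is immaterial for scalars):

```python
# At a fixed complex frequency z, Dyson reads g = 1/(1/g0 - Sigma), while the
# integral form g + F*g = g0 with F = -Sigma*g0 gives g = g0/(1 + F).
z = 0.3 + 1.2j                        # stand-in for a frequency point
e1, e2, lam = -1.0, 1.0, 0.5          # illustrative parameters
g0 = 1.0 / (z - e1)                   # free GF g1^(0)
sigma = lam ** 2 / (z - e2)           # embedding self-energy
F = -sigma * g0
g_vie = g0 / (1.0 + F)                # solves g + F*g = g0
g_dyson = 1.0 / (1.0 / g0 - sigma)
dev = abs(g_vie - g_dyson)
```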
The structure of the test program is analogous to the equilibrium case. After reading the parameters from the input file and fixing h=Tmax/Nt
, the embedding self-energy is constructed on the whole contour by
Solving a generic Dyson equation is accomplished by three steps (see Dyson equation): (i) solving the Matsubara Dyson equation, (ii) computing the time steps n=0,...,SolveOrder
by using the starting algorithm (bootstrapping), and (iii) performing the remaining time steps n=SolveOrder+1,...,Nt
. For Eq. \eqref{eq:downfold1}, this task is accomplished by
The deviation of the nonequilibrium Keldysh components from the exact solution is calculated using the Euclidean norm distance (see Comparing Green's functions). Here, we define the average error by
Note that we compare here individual components of the Green's functions (G_exact
and G_approx
) by using the cntr::distance_norm2_XXX routines. One may also use the single routine cntr::distance_norm2.
Solving the integral form \eqref{eq:downfold1_vie} by using vie2
proceeds in three steps analogous to those for dyson (see Nonsingular VIE2). The following source code demonstrates the implementation:
For convenience, we have defined the routine GenKernel
which calculates the convolution kernels \(F\) and \(F^\ddagger\):
The error is computed in the same way as above:
The implementation of this example can be found here:
programs/test_nonequilibrium.cpp  source code of program 
utils/test_nonequilibrium.py  python driver script 
The program is run by
As above, inp/input_nonequilibrium.inp
is an input file formatted as explained in Creating and passing input files, providing the following input variables:
Nt  Number of points on the realtime axis 
Ntau  Number of points on the Matsubara axis 
Tmax  Propagation time, defines the time step interval by h=Tmax/Nt 
SolveOrder  Solution order 
The output is appended to the file out/test_nonequilibrium.dat
. The two columns contain the error of solving the problem as Dyson equation and as Volterra integral equation, respectively. As an example, setting Nt=100
, Ntau=200
, Tmax=5.0
and SolveOrder=1
yields
The python driver script utils/test_nonequilibrium.py
provides a simple interface to run the test program; it performs calculations for a series of values of Nt
. Simply run
where k=1,...,5
stands for the SolveOrder
input.
After running the calculations, the logarithm of the error is plotted against \(\log(N_t)\). The scaling of the error is extracted by a linear regression of the logarithmic errors. The plots produced by utils/test_nonequilibrium.py
are shown in Fig. Nonequilibrium_scaling. As confirmed there, the average error scales as \(\mathcal{O}(h^{k+1})\) for dyson
, while solving the problem using vie2
yields an \(\mathcal{O}(h^{k+2})\) scaling (\(k\) stands for SolveOrder
).
Fig. Nonequilibrium_scaling: Scaling of the error for SolveOrder=1 (left panel) and SolveOrder=5 (right panel). We adopt the same parameters as for the equilibrium case but fix Ntau=800.
In this example we consider the one-dimensional (1D) Hubbard model with the Hamiltonian
\begin{align} \label{eq:hubbmodel1} \hat{H}_0 = -J \sum_{\langle i,j \rangle, \sigma} \hat{c}^\dagger_{i\sigma} \hat{c}_{j \sigma} + \frac{U}{2}\sum_{i,\sigma} \hat{n}_{i\sigma} \hat{n}_{i\bar\sigma} \ , \end{align}
where \(\langle i,j\rangle\) constrains the lattice sites \(i,j\) to nearest neighbors, while \(\bar\sigma=\uparrow,\downarrow\) for \(\sigma=\downarrow,\uparrow\). We consider \(M\) lattice sites with open boundary conditions. Furthermore, we restrict ourselves to the paramagnetic case with an equal number of spin-up ( \(N_\uparrow\)) and spin-down ( \(N_\downarrow\)) particles. The number of particles determines the filling factor \(\bar{n}=N_\uparrow/M\).
In this example, the system is excited with an instantaneous quench of the onsite potential of the first lattice site to \(w_0\):
\begin{align} \label{eq:hubbchain_quench} \hat{H}(t) = \hat{H}_0 + \theta(t) w_0 \sum_\sigma \hat{c}^\dagger_{1\sigma} \hat{c}_{1\sigma} \ . \end{align}
Here, we will treat the dynamics with respect to the Hamiltonian \eqref{eq:hubbchain_quench} within the second-Born (2B) and the \(GW\) approximation.
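As an illustration of the single-particle part, the following sketch builds the \(M\times M\) hopping matrix with open boundaries together with the on-site quench on the first site (plain python, illustrative parameters; the sign convention \(-J\) for the hopping is assumed):

```python
M, J, w0 = 4, 1.0, 5.0   # illustrative chain length, hopping, quench strength

def eps0(quenched):
    """Single-particle Hamiltonian of the open Hubbard chain."""
    h = [[0.0] * M for _ in range(M)]
    for i in range(M - 1):
        h[i][i + 1] = h[i + 1][i] = -J   # nearest-neighbor hopping, open ends
    if quenched:
        h[0][0] += w0                    # instantaneous on-site quench, site 1
    return h

h_pre, h_post = eps0(False), eps0(True)
```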
The source code is organized as follows:
programs/hubbard_chain_2b.cpp  main program for the 2B approximation 
programs/hubbard_chain_gw.cpp  main program for the \(GW\) approximation 
programs/hubbard_chain_selfen_impl.cpp  implementation of selfenergy approximations 
Corresponding declarations of the routines are in the .hpp
files with the same name as the .cpp
files. We will discuss the implementation of the 2B approximation in detail and then remark on how to adopt it for the \(GW\) approximation.
The 2B approximation corresponds to the second-order expansion in terms of the Hubbard repulsion \(U(t)\), which we treat as time-dependent here for generality. Defining the GF with respect to the lattice basis \(i,j\), the 2B approximation is defined by
\begin{align} \label{eq:sigma_hubb_2b} \Sigma_{ij,\sigma}(t,t^\prime) = U(t)U(t^\prime) G_{ij,\sigma}(t,t^\prime) G_{ij,\bar\sigma}(t,t^\prime)G_{ji,\bar\sigma}(t^\prime,t) \ . \end{align}
Due to the paramagnetic constraint, \(G_{ij,\sigma}(t,t^\prime) = G_{ij,\bar\sigma}(t,t^\prime)\); the spin index can be dropped throughout. The 2B self-energy \eqref{eq:sigma_hubb_2b} is implemented in two steps: (i) computing the polarization \(P_{ij}(t,t^\prime)\) by cntr::Bubble1
with subsequent multiplication by (-1), and (ii) computing the self-energy by cntr::Bubble2
. The 2B self-energy is computed by the routine Sigma_2B
(found in programs/hubbard_chain_selfen_impl.cpp
) as follows:
First, the polarization Pol
, which represents \(P_{ij}(t,t^\prime)\), is defined for the given time step. After computing \(P_{ij}(t,t^\prime)\) by the function
the lines
perform the operations \(P_{ij}(t,t^\prime) \rightarrow P_{ij}(t,t^\prime) U(t^\prime)\) and \(P_{ij}(t,t^\prime) \rightarrow U(t)P_{ij}(t,t^\prime) \), respectively. Finally, cntr::Bubble2
computes \(\Sigma_{ij}(t,t^\prime)\).
The total self-energy also includes the Hartree-Fock (HF) contribution, which we incorporate into the mean-field Hamiltonian \(\epsilon^{\mathrm{MF}}_{ij}(t) = \epsilon^{0}_{ij}(t) + U (n_i-\bar{n})\delta_{ij} \) with the occupation (per spin) \(n_i(t)= \mathrm{Im}[G^<_{ii}(t,t)]\). The shift of the chemical potential by \(U \bar{n}\) is a convention which fixes the chemical potential at half filling to \(\mu=0\). In the example program, the mean-field Hamiltonian is represented by the cntr::function
eps_mf
. Updating eps_mf
is accomplished by computing the density matrix using the class routine cntr::herm_matrix::density_matrix
.
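The Hartree update itself is a one-liner once the occupations are known; the sketch below uses plain python lists with made-up occupations (the names here are illustrative, not the libcntr API):

```python
U, nbar = 1.0, 0.5            # interaction and average filling (illustrative)
eps0_diag = [0.0, 0.0]        # bare on-site energies of a dimer
n = [0.7, 0.3]                # instantaneous occupations per spin, n_i = Im G^<_ii

# mean-field diagonal: eps_mf_i = eps0_i + U*(n_i - nbar)
eps_mf = [e + U * (ni - nbar) for e, ni in zip(eps0_diag, n)]
```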
The general procedure to implement a quench of some parameter \(\lambda\) at \(t=0\) is to represent \(\lambda\) by a contour function \(\lambda_n\): \(\lambda_{-1}\) corresponds to the pre-quench value which determines the thermal equilibrium, while \(\lambda_n\) with \(n\ge 0\) governs the time evolution. In the example program, we simplify this procedure by redefining \(\epsilon^{0}_{ij} \rightarrow \epsilon^{0}_{ij} + w_0 \delta_{i,1}\delta_{j,1}\) after the Matsubara Dyson equation has been solved.
The program is structured similarly to the previous examples. After reading variables from file and initializing the variables and classes, the Matsubara Dyson equation is solved in a self-consistent fashion. The example below illustrates this procedure for the 2B approximation.
Updating the mean-field Hamiltonian (hubb::Ham_MF
), the self-energy (hubb::Sigma_2B
) and solving the corresponding Dyson equation (cntr::dyson_mat
) is repeated until self-consistency has been reached, which in practice means that the deviation between the previous and updated GF is smaller than MatsMaxErr
. For other self-energy approximations, the steps described above (updating auxiliary quantities) have to be performed before the self-energy can be updated.
Once the Matsubara Dyson equation has been solved up to the required convergence threshold, the starting algorithm for time steps \(n=0,\dots,k\) can be applied. To reach self-consistency for the first few time steps, we employ the bootstrapping loop:
Finally, after the bootstrapping iteration has converged, the time propagation for time steps n>SolveOrder
is launched. Self-consistency at each time step is accomplished by iterating the update of the mean-field Hamiltonian, the GF and the self-energy over a fixed number of CorrectorSteps
. As an initial guess, we employ a polynomial extrapolation of the GF from time step \(n-1\) to \(n\), as implemented in the routine cntr::extrapolate_timestep
. Thus, the time propagation loop takes the form
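The predictor-corrector logic of the time propagation can be illustrated on a scalar toy problem, with the implicit trapezoidal rule standing in for the Dyson time step (this is a sketch, not the libcntr loop itself):

```python
import math

h, Nt, CorrectorSteps = 0.05, 40, 3
y = [1.0, math.exp(-h)]                  # "bootstrapped" first two time steps
for n in range(2, Nt + 1):
    y_new = 2 * y[-1] - y[-2]            # predictor: polynomial extrapolation
    for _ in range(CorrectorSteps):      # corrector: fixed number of iterations
        # implicit trapezoidal step for dy/dt = -y, solved by fixed point
        y_new = y[-1] + 0.5 * h * (-y[-1] - y_new)
    y.append(y_new)

err = abs(y[-1] - math.exp(-h * Nt))     # deviation from the exact solution
```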
After the GF has been computed for all required time steps, we compute the observables. In particular, the conservation of the total energy provides a good criterion to assess the accuracy of the calculation. The total energy for the Hubbard model \eqref{eq:hubbmodel1} is given in terms of the Galitskii-Migdal formula
\begin{align} E = \frac{1}{2}\mathrm{Tr}\left[\rho(t)\left(\epsilon^{(0)} +\epsilon^{\mathrm{MF}}(t)\right)\right] + \frac{1}{2} \mathrm{Im}\mathrm{Tr} \left[\Sigma\ast G\right]^<(t,t) \ .\label{Eq:Total_ener} \end{align}
The last term, known as the correlation energy, is most conveniently computed by the routine:
Next we consider the \(GW\) approximation. We remark that we formally treat the Hubbard interaction as spin-independent, while the spin summation in the polarization \(P\) (which is forbidden by the Pauli principle) is excluded by a corresponding prefactor.
Within the same setup as above, the \(GW\) approximation is defined by
\begin{align} \Sigma_{ij}(t,t^\prime) = i G_{ij}(t,t^\prime) \delta W_{ij}(t,t^\prime) \ , \end{align}
where \(\delta W_{ij}(t,t^\prime)\) denotes the dynamical part of the screened interaction \(W_{ij}(t,t^\prime) = U \delta_{ij}\delta_\mathcal{C}(t,t^\prime) + \delta W_{ij}(t,t^\prime)\). We compute \(\delta W_{ij}(t,t^\prime)\) from the charge susceptibility \(\chi_{ij}(t,t^\prime)\) by \(\delta W_{ij}(t,t^\prime) = U(t) \chi_{ij}(t,t^\prime) U(t^\prime)\). In turn, the susceptibility obeys the Dyson equation
\begin{align} \label{eq:dyson_chi} \chi = P + P\ast U \ast \chi \ . \end{align}
The strategy to compute the \(GW\) self-energy consists of three steps: (i) computing the polarization by cntr::Bubble1
(as in the second-Born approximation), (ii) solving the Dyson equation \eqref{eq:dyson_chi} for the susceptibility by cntr::vie2
, and (iii) computing the self-energy by cntr::Bubble2
. The implementation of step 1 was already discussed above. For step 2, we distinguish between the equilibrium and time stepping on the one hand, and the starting phase on the other hand. For the former, we have defined the routine
Here, PxU
and UxP
correspond to the kernel \(K_{ij}\) and its hermitian conjugate, respectively. Analogously, the starting routine is implemented as
Finally, the self-energy is computed by
The above implementation of the \(GW\) self-energy can be found in programs/hubbard_chain_selfen_impl.cpp
. The structure of the program (programs/hubbard_chain_gw.cpp
) is analogous to Structure of the example program.
There are two programs, for the 2B and the \(GW\) approximation, respectively: hubbard_chain_2b.x
and hubbard_chain_gw.x
. The driver script demo_hubbard_chain.py
located in the utils/
directory provides a simple interface to these programs. Simply run
After defining the physical and convergence parameters, the script creates the corresponding input file and launches the programs in a loop. The occupation of the first lattice site \(n_1(t)\) and the kinetic and total energy are then read from the output files and plotted. The script demo_hubbard_chain.py
also allows passing reference data as an optional argument, which can be used to compare to exact results. For some parameters, results of the exact treatment are available in the data/
directory.
As an example, we consider the Hubbard dimer ( \(M=2\)) for \(U=1\) at half filling ( \(\mu=0\), \(\bar{n}=1/2\)). The system is excited by a strong on-site quench \(w_0=5\). The results are obtained by running
This strong excitation leads to the phenomenon of artificial damping: although the density \(n_1(t)\) exhibits an oscillatory behavior for all times in an exact treatment, the approximate NEGF treatment, with either self-energy approximation considered here, shows damping until an unphysical steady state is reached.
Fig.: Results for M=2
and U=1
at half filling. We used Nt=400
and Ntau=400
at an inverse temperature of beta=20.0
and a time step of h=0.025
.
In this example and the next, we consider electron-phonon (el-ph) coupled systems. We start with a simple model describing electrons on two sites \(i=0,1\) and a phonon mode coupled to site 0. The Hamiltonian reads
\begin{align} \hat{H}(t)&=J(t)\sum_{\sigma}(\hat{c}_{0\sigma}^{\dagger}\hat{c}_{1\sigma}+\hat{c}_{1\sigma}^{\dagger}\hat{c}_{0\sigma}) + \sum_{i} \epsilon_i \hat{n}_{i} +\omega_0 \hat{a}^{\dagger} \hat{a}+g(t) (\hat{a}^{\dagger}+\hat{a})\hat{n}_0.\label{eq:Holstein_imp} \end{align}
Here, \(\hat{c}^\dagger_{i\sigma}\) is the creation operator of an electron with spin \(\sigma\) at lattice site \(i\), \(\hat{n}_i = \sum_{\sigma}\hat{c}_{i\sigma}^{\dagger}\hat{c}_{i\sigma}\), and \(\hat{a}^\dagger\) creates an Einstein phonon coupled to the \(0\)th site. \(J(t)\) is the hopping parameter of the electrons, \(\epsilon_i\) is the energy level of site \(i\), \(\omega_0\) is the phonon frequency and \(g(t)\) is the el-ph coupling. If we regard the 0th site as an impurity, this model corresponds to a Holstein impurity model with a single bath site. This type of impurity model is solved self-consistently in a dynamical mean-field theory calculation, which we explain in the next example.
In the present example, we consider excitations via a modulation of the hopping parameter or of the el-ph coupling. We treat the dynamics within the self-consistent Migdal approximation or the unrenormalized Migdal approximation.
The source code is organized as follows:
programs/Holstein_impurity_singlebath_uMig.cpp  main program for the unrenormalized Migdal approximation 
programs/Holstein_impurity_singlebath_Migdal.cpp  main program for the self-consistent Migdal approximation 
programs/Holstein_utils_impl.cpp  subroutines for the evaluation of the phonon energy and the phonon displacement 
programs/Holstein_impurity_impl.cpp  implementation of self-energy approximations 
The corresponding declarations of the routines are in the .hpp
files with the same name as the .cpp
files. We will first discuss how to solve the problem, and then explain the Migdal approximations.
We first introduce the GFs for the electrons and phonons as
\begin{align} G_{ij}(t,t') &= -i \Bigl\langle T_{\mathcal{C}}\hat{c}_{i\sigma}(t) \hat{c}^{\dagger}_{j\sigma} (t')\Bigr\rangle, \\ D(t,t') &= -i\langle T_{\mathcal{C}} \Delta\hat{X}(t) \Delta\hat{X}(t') \rangle, \end{align}
where \(\hat{X} = \hat{a}^\dagger + \hat{a}\) and \(\Delta\hat{X}(t) = \hat{X}(t) - \langle \hat{X}(t) \rangle \). Here, we consider the spin-symmetric case.
Since the system is coupled to the phonon only at site 0, one can trace out the contribution from the bath site (site 1) and focus on the GF for site 0, \(G_{00}\). Then the electron and phonon GFs are determined by the Dyson equations
\begin{align} &[i\partial_{t}-\epsilon_0-\Sigma^{\rm MF}(t)]G_{00}(t,t')-[(\Delta + \Sigma^{\rm corr})* G_{00}](t,t')=\delta_{\mathcal C}(t,t')\label{eq:Dyson_imp},\\ &D(t,t^\prime) = D_{0}(t,t^\prime) + [D_{0}*\Pi * D](t,t^\prime) \label{eq:D_dyson}, \end{align}
and the phonon displacement, \(X(t)=\langle \hat{X}(t)\rangle\), which can be calculated as
\begin{align} X(t) = -\frac{2 g(0)}{\omega_0} n_0(0) + \int^t_0 d\bar{t}\, D_0^R(t,\bar{t})[g({\bar t})n_0(\bar{t})-g(0)n_0(0)]. \label{eq:X} \end{align}
Here the mean-field contribution \(\Sigma^{\rm MF}(t)\) corresponds to
\begin{align} \Sigma^{\rm MF}(t) &= g(t)X(t),\label{eq:H_mf_Holstein} \end{align}
and \(\Delta(t,t^\prime)\) is the hybridization function, which can be expressed as
\begin{align} \Delta(t,t') = J(t) G_{0,11}(t,t') J(t') \label{eq:Hyb_imp} \end{align}
with \([i\partial_t - \epsilon_j]G_{0,jj}(t,t')=\delta_\mathcal{C}(t,t^\prime)\). \(\Sigma^{\rm corr}(t,\bar{t})\) is the beyond-mean-field contribution to the self-energy, \(D_{0}(t,t')\equiv -i\langle T_{\mathcal{C}}\, \Delta\hat{X}(t) \Delta\hat{X}(t') \rangle_0\) is the GF for the free phonon system, \(\Pi\) is the phonon self-energy and \(n_0(t)= \langle \hat{n}_0(t) \rangle\).
For a general impurity model, the hybridization function \(\Delta(t,t^\prime)\) takes an arbitrary form; however, the structure of the equations (Eqs. \eqref{eq:Dyson_imp}, \eqref{eq:D_dyson}, \eqref{eq:X}, \eqref{eq:H_mf_Holstein}) stays the same. This type of impurity problem is solved within nonequilibrium DMFT, where the hybridization function is determined self-consistently (see Sec. [Idea of DMFT]).
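Equation \eqref{eq:X} can be checked numerically in a simple limit: with \(n_0(0)=0\) and the product \(g\,n_0\) switched on to a constant value \(c\) at \(t=0^+\), the standard free retarded propagator \(D_0^R(t,\bar t)=-2\sin(\omega_0(t-\bar t))\) yields \(X(t)=-(2c/\omega_0)(1-\cos\omega_0 t)\). A sketch using a plain trapezoidal rule (illustrative, not the Gregory integrator of libcntr):

```python
import math

w0, c = 1.0, 0.4        # phonon frequency; c = g * n0 switched on at t = 0+
h, N = 0.001, 3000      # integration grid for tbar in [0, t]
t = h * N

def D0_ret(s):          # free retarded phonon propagator, s = t - tbar > 0
    return -2.0 * math.sin(w0 * s)

# trapezoidal rule for the memory integral over tbar in [0, t]
vals = [D0_ret(t - i * h) * c for i in range(N + 1)]
X_num = h * (0.5 * vals[0] + sum(vals[1:N]) + 0.5 * vals[N])

X_exact = -(2.0 * c / w0) * (1.0 - math.cos(w0 * t))
dev = abs(X_num - X_exact)
```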
Once \(G_{00}\) and \(\Sigma\) are obtained, the remaining electron GFs can be easily calculated. For example, \( G_{10} = G_{0,11} * J * G_{00} \). The energies can also be evaluated in a post-processing step. The kinetic energy (the expectation value of the first two terms of the Hamiltonian \eqref{eq:Holstein_imp}) is
\begin{align} E_{\rm kin}(t)&=-2i[\Delta \ast G_{00}+G_{00} \ast \Delta]^<(t,t) + \sum_{i} \epsilon_i n_{i}(t) . \end{align}
The interaction energy (the expectation value of the el-ph coupling term of Eq. \eqref{eq:Holstein_imp}) can be expressed as
\begin{align} E_{\rm nX}(t)&=g(t) X(t) n_0(t) - 2i[\Sigma^{\rm corr} * G_{00}]^<(t,t), \end{align}
where the first term is the mean-field contribution and the second term is the correlation energy. The phonon energy (the expectation value of the free phonon term of Eq. \eqref{eq:Holstein_imp}) is
\begin{align}\label{eq:ph_energy} E_{\rm ph}(t)=\frac{\omega_{0}}{4} [iD^<(t,t) + X(t)^2]+\frac{\omega_{0}}{4} [iD_{\rm PP}^<(t,t) + P(t)^2]. \end{align}
Here \( D_{\rm PP}(t,t')=-i\langle T_\mathcal{C} \Delta\hat{P}(t) \Delta\hat{P}(t')\rangle\) with \(\hat{P}=\frac{1}{i}(\hat{a}-\hat{a}^\dagger)\), \( P(t) \equiv \langle\hat{P}(t)\rangle \) and \(\Delta\hat{P}(t) \equiv \hat{P}(t) - \langle \hat{P}(t)\rangle\).
The self-consistent Migdal approximation is the lowest-order ( \(\mathcal{O}(g^2)\)) self-consistent approximation for the self-energies, based on renormalized GFs. Within this approximation, one can treat the dynamics and the renormalization of the phonons induced by the electron-phonon coupling. Even though the phonons can absorb energy from the electrons, the total energy of the system is conserved in this approximation. The impurity self-energies for the electrons and phonons are approximated as
\begin{align} \hat{\Sigma}^{\rm sMig,corr}(t,t') & =ig(t)g(t') D(t,t')G_{00}(t,t'),\label{eq:sMig_el}\\ \Pi^{\rm sMig}(t,t')&=-2 i g(t)g(t') G_{00}(t,t') G_{00}(t',t).\label{eq:sMig_ph} \end{align}
In the sample program, the sMig self-energy is computed by the routine Sigma_Mig
(found in programs/Holstein_impurity_impl.cpp
). We provide two interfaces: one for \(0\leq \) tstp
\( \leq \) SolverOrder
(the bootstrapping part), and one for tstp
\(=-1\) and tstp
\(>\) SolverOrder
(the Matsubara part and the time-stepping part), respectively. Here, we show the latter as an example:
This routine contains three steps: (i) computing the phonon self-energy \(\Pi\) by cntr::Bubble1
, (ii) solving the Dyson equation for the phonon GF \(D\) with step_D
(see below; here MAT_METHOD
specifies which method is used to solve the Matsubara part of the Dyson equation), and (iii) computing the electron self-energy by cntr::Bubble2
.
.For the bootstrapping part, we carry out these steps for tstp
\(\leq\) SolverOrder
at once, using start_D
instead of step_D
.
The routine for solving the Dyson equation for the phonons is implemented using the cntr
routines vie2_**
. The interface for solving the Matsubara Dyson equation and time stepping (step_D
) is given by
This routine is separated into two steps: (i) computing the convolution kernel by cntr::convolution_timestep
, and (ii) solving the VIE with vie2_**
for the corresponding time step. In start_D
, we perform these steps for \(0 \leq\) tstp
\(\leq\) SolverOrder
at the same time, and use vie2_start
.
The free phonon GF, \(D_{0}(t,t')/2\) is computed using the cntr
routine
In the unrenormalized Migdal approximation, the phonons are kept in equilibrium and play the role of a heat bath. Therefore, the total energy is not conserved. The impurity self-energy for the electrons is approximated as
\begin{align} \label{eq:uMig} \hat{\Sigma}^{\rm uMig,corr}(t,t') & =ig(t)g(t')D_{0}(t,t') G_{00}(t,t'). \end{align}
We call this approximation uMig. The implementation is simpler than in the case of sMig and can be found in the routine Sigma_uMig
.
The example programs for the sMig and uMig approximations are provided in programs/Holstein_impurity_singlebath_Migdal.cpp
and programs/Holstein_impurity_singlebath_uMig.cpp
. The main body of the example programs consists of two parts. In the first part, we focus on \(G_{00}\) and evaluate it by solving the equations with the corresponding approximations for the self-energies. Once \(G_{00}\) and the self-energies are determined, all other GFs and the energies are evaluated in the second part.
As in the case of the Hubbard chain, the first part consists of three main steps: (i) solving the Matsubara Dyson equation, (ii) bootstrapping (tstp
\(\leq\) SolverOrder
) and (iii) time propagation for tstp
\(>\) SolverOrder
. Since the generic structure of each part is similar to that of the Hubbard chain, we only show here the part corresponding to the time propagation.
Here the hybridization \(\Delta(t,t')\) is initially fixed via Eq. \eqref{eq:Hyb_imp}. At the beginning of each time step, we extrapolate \(G_{00}\) to the current time step, which works as a predictor. Next, we iterate Eqs. \eqref{eq:Dyson_imp}, \eqref{eq:D_dyson}, \eqref{eq:X}, \eqref{eq:H_mf_Holstein}, \eqref{eq:sMig_el} and \eqref{eq:sMig_ph} (or \eqref{eq:uMig}) to selfconsistently determine the selfenergies, which works as a corrector. During the iteration, we also update the phonon displacement, \(X(t)\), using Eq. \eqref{eq:X}. The update of \(X(t)\) is implemented in the routine get_phonon_displace
using a cntr
routine for the (Gregory) integration, integration::Integrator
.
In the second part, we evaluate the rest of the GFs. For the energies, the cntr
routines correlation_energy_convolution
and convolution_density_matrix
are used. In particular, the calculation of the phonon energy \eqref{eq:ph_energy} is implemented by the routine evaluate_phonon_energy
.
In the example programs, we store GFs in the hdf5
format using routines provided in cntr::herm_matrix
, namely cntr::herm_matrix::write_to_hdf5_slices
and cntr::herm_matrix::write_to_hdf5_tavtrel
. The corresponding part looks as follows:
It is useful to store the number of time steps (Nt, Ntau
), the size of the time step (dt
) and the inverse temperature as parm
in the same hdf5
file. The routine write_to_hdf5_slices
stores the contour function \(F(t,t')\) in the form of \(F^R({\rm tstp},n), F^<(n,{\rm tstp}), F^{\rceil}(n,\tau)\) for \(n\leq {\rm tstp}\) at \({\rm tstp} = m\cdot {\rm OutEvery}\). The routine write_to_hdf5_tavtrel
stores the retarded and lesser parts in the form of \(F(t_{\rm rel};t_{\rm av})\), where \(t_{\rm rel}=tt'\) and \(t_{\rm av}=\frac{t+t'}{2}\), for \(\frac{t_{\rm av}}{dt}\) = \([0,{\rm OutEvery} ,2{\rm OutEvery},...]\) for \(t_{\rm rel}\geq 0\).
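The bookkeeping behind the second storage format is just the rotation between the two-time grid and average/relative times, \(t=t_{\rm av}+t_{\rm rel}/2\) and \(t'=t_{\rm av}-t_{\rm rel}/2\). A sketch (plain python; illustrative, not the actual HDF5 layout):

```python
OutEvery = 2   # illustrative output stride (in units of dt)

def to_avrel(t, tp):
    # (t, t') -> (t_av, t_rel)
    return ((t + tp) / 2.0, t - tp)

def from_avrel(t_av, t_rel):
    # inverse map back onto the two-time grid
    return (t_av + t_rel / 2.0, t_av - t_rel / 2.0)

t_av, t_rel = to_avrel(3.0, 1.0)
t_back, tp_back = from_avrel(t_av, t_rel)
# average times (in units of dt) at which slices are stored
slices = [m * OutEvery for m in range(4)]
```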
There are two programs, for sMig and uMig, respectively: Holstein_impurity_singlebath_Migdal.x
and Holstein_impurity_singlebath_uMig.x
. These programs allow treating excitations via a modulation of the hopping parameter and of the el-ph coupling. We use \(\epsilon_{0,\mathrm{MF}}\equiv \epsilon_{0} + gX(0)\) as an input parameter instead of \(\epsilon_{0}\) ( \(\epsilon_{0}\) is determined in the post-processing). The driver script demo_Holstein_impurity.py
located in the utils/
directory provides a simple interface to these programs. After defining the system parameters, the calculation scheme (sMig or uMig, time step, convergence parameters, etc.) and the excitation parameters, the script creates the corresponding input file and starts the simulation. Simply run
In demo_Holstein_impurity.py
, the shape of the excitation is
\begin{align} g(t) & = g + \Delta g \sin\Bigl(\frac{\pi t}{T_{\rm end}}\Bigl)^2\cos(\Omega t), \label{eq:g_mod_shape}\\ J(t) & = J + \Delta J \sin\Bigl(\frac{\pi t}{T_{\rm end}}\Bigl)^2\cos(\Omega t), \end{align}
and one needs to specify \(T_{\rm end}\), \(\Omega\), \(\Delta g\) and \(\Delta J\). At the end of the simulation, the number of particles per spin for each site ( \(n_{i,\sigma}(t)\)), the phonon displacement ( \(X(t)\)), phonon momentum ( \(P(t)\)) and the energies are plotted. In addition, the spectral functions of the electrons and phonons,
\begin{align} A^R(\omega;t_{\rm av}) &= \frac{1}{\pi} \int dt_{\rm rel} e^{i\omega t_{\rm rel}} F_{\rm gauss}(t_{\rm rel}) G^R (t_{\rm rel};t_{\rm av})\label{eq:spectrum_demo},\\ B^R(\omega;t_{\rm av}) &= \frac{1}{\pi} \int dt_{\rm rel} e^{i\omega t_{\rm rel}} F_{\rm gauss}(t_{\rm rel}) D^R (t_{\rm rel};t_{\rm av})\label{eq:spectrum_demo2}, \end{align}
are evaluated for the average time \(t_{\rm av}=\frac{N_t \cdot h}{2}\). Here \(t_{\rm rel}\) indicates the relative time, and \(F_{\rm gauss}(t_{\rm rel})\) is a Gaussian window function, which can also be specified in demo_Holstein_impurity.py
. Specifically, the evaluation of the spectral function is done in two steps:
In the first line, the python routine read_gf_ret_let_tavtrel
in python3/SpectrumCNTRhdf5.py
reads the relative time and the corresponding retarded and lesser GFs from the .h5
file, in which the Green's functions are stored in the form \( G (t_{\rm rel};t_{\rm av})\). Here \( t_{\rm av} \) is specified with tstp_plot
. In the second line, we apply Eq. \eqref{eq:spectrum_demo} using the subroutine evaluate_spectrum_windowed
in python3/SpectrumCNTRhdf5.py
. Here one can specify the order of the interpolation of the GFs for the Fourier transformation, and Fwindow
is the window function, for which we use a Gaussian in this example.
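To illustrate the windowed Fourier transform of Eq. \eqref{eq:spectrum_demo}, the following plain-python sketch may be helpful. It is not the actual evaluate_spectrum_windowed routine: the grid parameters, the Gaussian window width sigma, and the single-level test GF \(G^R(t_{\rm rel})=-i e^{-i\epsilon_0 t_{\rm rel}}\) are illustrative choices, and we take the convention \(A(\omega)=-\frac{1}{\pi}\mathrm{Im}\,G^R(\omega)\).

```python
import cmath, math

def spectrum_windowed(g_ret, trel_max, dt, omegas, sigma=3.0):
    """A(w) = -(1/pi) Im sum_n dt e^{i w t_n} F_gauss(t_n) G^R(t_n),
    evaluated with the trapezoidal rule on a uniform t_rel grid."""
    n_t = int(round(trel_max / dt))
    spec = []
    for w in omegas:
        acc = 0.0 + 0.0j
        for n in range(n_t + 1):
            t = n * dt
            win = math.exp(-0.5 * (t / sigma) ** 2)   # Gaussian window F_gauss
            weight = 0.5 if n in (0, n_t) else 1.0    # trapezoidal end weights
            acc += weight * dt * cmath.exp(1j * w * t) * win * g_ret(t)
        spec.append(-acc.imag / math.pi)
    return spec

# single level at eps0: the windowed spectrum peaks at w = eps0
eps0 = 0.5
gr = lambda t: -1j * cmath.exp(-1j * eps0 * t)
ws = [0.05 * i - 2.0 for i in range(81)]            # frequency grid [-2, 2]
spec = spectrum_windowed(gr, 40.0, 0.05, ws)
peak_w = ws[max(range(len(ws)), key=lambda i: spec[i])]
```

The Gaussian window broadens the peak but suppresses ringing from the finite time window; this is the same trade-off controlled by the window parameters in the driver script.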
In Fig. holstein_imp (a)–(b), we show the electron spectral function measured at site 0 and the phonon spectrum evaluated within the sMig approximation. For \(J=0\), site 0 is decoupled from site 1. The electron spectrum shows a main peak around \(\omega=0.5=\epsilon_{0,\mathrm{MF}}\) and side peaks around \(\epsilon_{0,\mathrm{MF}}\pm\omega_0\). As the hopping parameter is increased, the electron levels at site 0 and site 1 hybridize and an additional sharp peak at \(\omega<0\) appears in the electron spectrum, see panel (a). The phonon spectrum shown in panel (b) reveals a shift of the peak position from the bare value \(\omega_0=1.0\) and a finite width of the peak (phonon renormalization).
In Fig. holstein_imp (c)–(e), we show the time evolution of the phonon displacement, the density and the total energy after an excitation via hopping modulation. These results were obtained using the sMig approximation. The phonon displacement shows damped oscillations, see panel (c). Panel (d) demonstrates that after the excitation, the number of electrons at site 0 and site 1 changes with time, while the total number is conserved. Within sMig the total energy is also conserved after the excitation, as is shown in panel (e). On the other hand, when uMig is used, the total energy is not conserved after the excitation, see panel (f). In uMig, the phonons can act as a heat bath and dissipate energy, which is clearly evident when the method is used to treat extended systems, see next section.
Here, we consider the lattice version of the previous problem, the so-called Holstein model, whose Hamiltonian is
\begin{align} H(t)&=-J(t)\sum_{\langle i,j\rangle,\sigma}\hat{c}_{i,\sigma}^{\dagger}\hat{c}_{j,\sigma}-\mu\sum_i \hat{n}_i +\omega_0\sum_i \hat{a}^{\dagger}_i \hat{a}_i+g(t)\sum_i (\hat{a}_i^{\dagger}+\hat{a}_i)\hat{n}_i.\label{eq:Holstein} \end{align}
Here \(\mu\) is the chemical potential and the remaining terms are the same as in the previous section. For simplicity, in the following we consider the Bethe lattice (with infinite coordination number). For this lattice, the free problem has a semicircular density of states, \(\rho_0(\epsilon) = \frac{1}{2\pi J^{*2}} \sqrt{4J^{*2}-\epsilon^2}\), with \(J^*\) a properly renormalized hopping amplitude. Here we take \(J^*=1\).­ Assuming spin symmetry, the lattice GFs are
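The quoted semicircular density of states can be checked directly, e.g. its normalization \(\int d\epsilon\,\rho_0(\epsilon)=1\) and band edges at \(\pm 2J^*\). A minimal sketch (the integration grid is an arbitrary choice):

```python
import math

def bethe_dos(eps, j_star=1.0):
    """Semicircular DOS of the Bethe lattice:
    rho0(eps) = sqrt(4 J*^2 - eps^2) / (2 pi J*^2) for |eps| <= 2 J*."""
    if abs(eps) >= 2.0 * j_star:
        return 0.0
    return math.sqrt(4.0 * j_star**2 - eps**2) / (2.0 * math.pi * j_star**2)

# check the normalization by simple numerical integration over the band
n = 40000
de = 4.0 / n
norm = sum(bethe_dos(-2.0 + k * de) for k in range(n + 1)) * de
```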
\begin{align} G_{ij}(t,t') &= -i \langle T_{\mathcal{C}}\hat{c}_{i,\sigma}(t) \hat{c}_{j,\sigma}^{\dagger}(t') \rangle, \\ D_{ij}(t,t') &= -i\langle T_{\mathcal{C}} \Delta\hat{X}_i(t) \Delta\hat{X}_j(t') \rangle. \end{align}
In this example, we treat the dynamics of the Holstein model using the dmft
formalism, where we map the lattice model to an effective impurity model with a self-consistently adjusted free-electron bath. We provide examples for the normal state and the superconducting (SC) state with the self-consistent Migdal (sMig) and the unrenormalized Migdal (uMig) approximations as impurity solvers.
The source code is organized as follows:
programs/Holstein_bethe_uMig.cpp  main program for dmft with the unrenormalized Migdal approximation 
programs/Holstein_bethe_Migdal.cpp  main program for dmft with the self-consistent Migdal approximation 
programs/Holstein_bethe_Nambu_uMig.cpp  main program for dmft with the unrenormalized Migdal approximation for SC 
programs/Holstein_bethe_Nambu_Migdal.cpp  main program for dmft with the self-consistent Migdal approximation for SC 
programs/Holstein_utils_impl.cpp  subroutines for the evaluation of the phonon energy and the phonon displacement 
programs/Holstein_impurity_impl.cpp  implementation of self-energy approximations 
We note that Holstein_utils_impl.cpp
and Holstein_impurity_impl.cpp
are shared with the previous examples.
For the normal state, the corresponding effective impurity model for the Holstein model is the model considered in the previous section with a general hybridization function \(\Delta(t,t')\) (corresponding to an infinite number of bath sites). The hybridization function is self-consistently determined so that the impurity GF and the impurity self-energy ( \(\Sigma_{\rm imp}\)) are identical to the local Green's function ( \(G_{\rm loc}=G_{ii}\)) and the local self-energy of the lattice problem, respectively. In the case of a Bethe lattice, the dmft
lattice self-consistency procedure simplifies and the hybridization function can be determined directly from the GF,
\begin{align} \Delta(t,t')=J^*(t) G_{\rm imp}(t,t') J^*(t').\label{eq:bethe_condition} \end{align}
When the impurity self-energy is given, the impurity GFs are determined by solving Eqs. \eqref{eq:Dyson_imp}, \eqref{eq:D_dyson} and \eqref{eq:X}, identifying \(G_{00}\) with \(G_{\rm imp}\) and \(D\) with \(D_{\rm imp}\). For the impurity self-energy, we can again use the sMig or the uMig approximation. In this case, the main difference between the dmft
simulation of the Holstein model on the Bethe lattice and the Holstein impurity problem discussed in the previous example is whether or not the hybridization function is updated according to Eq. \eqref{eq:bethe_condition}.
After the NEGFs are obtained, we can calculate observables such as the different energy contributions. The expressions for the energies (per site) are the same as in the case of the impurity, except for the kinetic energy, which now becomes
\begin{align} E_{\rm kin}(t)&=-\frac{1}{N}\sum_{\langle i,j\rangle,\sigma}J(t)\langle c_{i,\sigma}^{\dagger}(t)c_{j,\sigma}(t)\rangle =-2i[\Delta \ast G_{\rm loc}]^<(t,t). \end{align}
The corresponding programs are implemented in Holstein_bethe_Migdal.cpp
and Holstein_bethe_uMig.cpp
for the normal state. The structure of the code is almost the same as for the impurity problem; the major difference is the update of the hybridization function. To clarify this, we show the implementation of the time propagation in Holstein_bethe_Migdal.cpp
,
At the beginning of each time step, we extrapolate the local GF and the hybridization function, which serves as a predictor. Next, we iterate the dmft
self-consistency loop (corrector) several times. In this loop, we first solve the impurity problem to update the local self-energy and GF. Then we update the hybridization function via the lattice self-consistency condition, which in the case of the Bethe lattice simplifies to Eq. \eqref{eq:bethe_condition}.
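The structure of this self-consistency loop can be illustrated with a scalar toy version of Eq. \eqref{eq:bethe_condition} in equilibrium, where the fixed point reproduces the known closed form of the Bethe-lattice local GF, \(G(z)=\bigl(z-\sqrt{z^2-4J^{*2}}\bigr)/(2J^{*2})\). This sketch replaces the impurity solver by the free impurity GF (no self-energy) and is not the libcntr implementation:

```python
import cmath

def bethe_scf(z, j_star=1.0, tol=1e-12, max_iter=1000):
    """Fixed-point iteration for the Bethe-lattice local GF at complex
    frequency z: Delta = J*^2 G (Eq. bethe_condition in equilibrium),
    then G = 1/(z - Delta) plays the role of the impurity solution."""
    g = 1.0 / z                      # starting guess: free GF
    for _ in range(max_iter):
        delta = j_star**2 * g        # update hybridization from the GF
        g_new = 1.0 / (z - delta)    # "solve the impurity problem"
        if abs(g_new - g) < tol:
            return g_new
        g = 0.5 * (g + g_new)        # damped update for stability
    return g

z = 2.0j                                          # a Matsubara-like frequency
g_iter = bethe_scf(z)
g_exact = (z - cmath.sqrt(z * z - 4.0)) / 2.0     # closed form for J* = 1
```

In the real code the same predictor-corrector logic is applied time step by time step to contour objects instead of scalars.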
Because of the attractive interaction mediated by the phonons, this model exhibits an s-wave SC state at sufficiently low temperatures. We can treat this situation as well using the Nambu formalism. By defining \(\hat{\Psi}^\dagger_i = [c^\dagger_{i\uparrow}, c_{i\downarrow}]\), the Holstein model is expressed as
\begin{align} H(t)&=\sum_{\langle i,j\rangle}\hat{\Psi}_{i}^{\dagger}\hat{J}(t)\hat{\Psi}_{j}-\mu'\sum_i \hat{\Psi}_{i}^{\dagger}{\boldsymbol\sigma}_z\hat{\Psi}_{i} +\omega_0\sum_i \hat{b}^{\dagger}_i \hat{b}_i+g\sum_i (\hat{b}_i^{\dagger}+\hat{b}_i)\hat{\Psi}_{i}^{\dagger}{\boldsymbol\sigma}_z\hat{\Psi}_{i} +F_{\rm sc}(t) \sum_i \hat{\Psi}_{i}^{\dagger}{\boldsymbol\sigma}_x\hat{\Psi}_{i}.\label{eq:Holstein_Nambu} \end{align}
Here \({\boldsymbol\sigma}_a\) denotes the usual Pauli matrices, and \(\hat{J}(t) = {\rm diag}[-J(t),J(t)]\). In this expression, we introduce a new phonon operator as \(\hat{b}^\dagger = \hat{a}^\dagger + \frac{g}{\omega_0}\) and \(\mu' = \mu +\frac{2g^2}{\omega_0}\). We take \(\phi_{sc}\equiv \langle \hat{c}^\dagger_{i,\uparrow} \hat{c}^\dagger_{i,\downarrow} + h.c. \rangle\) as the superconducting order parameter and choose it real in equilibrium. We also introduce the time-dependent pair field, \(F_{\rm sc}(t) \sum_i (\hat{c}^\dagger_{i,\uparrow} \hat{c}^\dagger_{i,\downarrow} + h.c.) =F_{\rm sc}(t) \sum_i \hat{\Psi}_{i}^{\dagger}{\boldsymbol \sigma}_x\hat{\Psi}_{i}\), which is useful for the numerical evaluation of the dynamical pair susceptibility, \(\chi^R_{\rm pair}(t,t') = -i\theta(t-t') \frac{1}{N} \langle [\hat{B}(t),\hat{B}(t')]\rangle\) with \(\hat{B}(t) = \sum_i \hat{c}^\dagger_{i,\uparrow} \hat{c}^\dagger_{i,\downarrow} + h.c.\). In the Nambu formalism, the lattice GFs are defined as
\begin{align} {\bf G}_{ij}(t,t') &= -i \Bigl\langle T_{\mathcal{C}}\hat{\Psi}_{i}(t) \hat{\Psi}_{j}^{\dagger}(t')\Bigr\rangle, \\ D_{ij}(t,t') &= -i\langle T_{\mathcal{C}} \Delta\hat{X}_{b,i}(t) \Delta\hat{X}_{b,j}(t') \rangle, \end{align}
for electrons and phonons, respectively. Here \(\hat{X}_{b,i} = \hat{b}^\dagger_i + \hat{b}_i\) and \(\Delta \hat{X}_{b,i} = \hat{X}_{b,i}-\langle \hat{X}_{b,i}(t)\rangle\). We can also treat this problem with dmft
, but the corresponding impurity problem now includes a hybridization function between \( \hat{c}^\dagger_{i,\uparrow}\) ( \( \hat{c}_{i,\downarrow}\)) and \( \hat{c}^\dagger_{i,\downarrow}\) ( \( \hat{c}_{i,\uparrow}\)). We can solve the impurity problem with the self-consistent Migdal approximation or with the unrenormalized Migdal approximation as in the normal state, and the corresponding programs are implemented in Holstein_bethe_Nambu_Migdal.cpp
and Holstein_bethe_Nambu_uMig.cpp
.
For the simulation of normal states, we provide two programs for the sMig and uMig approximations, respectively: Holstein_bethe_Migdal.x
and Holstein_bethe_uMig.x
. For the simulation of superconducting states, we additionally provide Holstein_bethe_Nambu_Migdal.x
and Holstein_bethe_Nambu_uMig.x
. In these programs, we use \(\mu_{\mathrm{MF}}\equiv \mu - gX(0)\) (or \(\mu_{\mathrm{MF}}\equiv \mu' - gX_b(0)\) for the Nambu case) as an input parameter instead of \(\mu\) ( \(\mu'\)); \(\mu\) and \(\mu'\) are determined in a post-processing step. For the normal phase, excitations via modulations of the hopping and of the el-ph coupling are implemented, while for the superconducting phase, excitations via a time-dependent pair field (the excitation term is \(F_{\rm sc}(t) \sum_i (\hat{c}^\dagger_{i,\uparrow} \hat{c}^\dagger_{i,\downarrow} + h.c. )\)) and via hopping modulation are implemented. The driver scripts demo_Holstein.py
and demo_Holstein_sc.py
located in the utils/
directory provide a simple interface to these programs. They have essentially the same structure as demo_Holstein_impurity.py
described in the previous section. After specifying the system parameters and excitation conditions, simply run
The form of the excitation provided in these python scripts is the same as in the previous example, see Eq. \eqref{eq:g_mod_shape}.
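For reference, the modulation of Eq. \eqref{eq:g_mod_shape} can be sketched in a few lines of python. The frequency \(\Omega=1.5\) and duration \(T_{\rm end}=12.56\) match the figure caption below, while g0 and dg are illustrative values, not the script defaults:

```python
import math

def modulated_coupling(t, g0=0.5, dg=0.1, omega=1.5, t_end=12.56):
    """El-ph coupling g(t) = g0 + dg * sin(pi t / T_end)^2 * cos(Omega t),
    active only during the pulse window 0 <= t <= T_end."""
    if t < 0.0 or t > t_end:
        return g0                       # unmodulated outside the pulse
    envelope = math.sin(math.pi * t / t_end) ** 2
    return g0 + dg * envelope * math.cos(omega * t)
```

The \(\sin^2\) envelope guarantees that the modulation switches on and off smoothly at \(t=0\) and \(t=T_{\rm end}\).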
In Fig. holstein_dmft (a)–(b), we show the equilibrium spectral functions in the normal phase and the SC phase. These results were obtained using dmft
+ sMig. One can observe the opening of the SC gap below the transition temperature. Within dmft
+ sMig, the phonon spectrum is renormalized, see Fig. holstein_dmft (b). In the normal phase, the peak in the phonon spectrum is shifted relative to the bare phonon frequency and exhibits a finite width. If the electron-phonon coupling is sufficiently strong (in the SC phase), a peculiar peak around the energy scale of the SC gap appears. In Fig. holstein_dmft (c),(d), we show the time evolution of the kinetic energy and the total energy after an excitation via hopping modulation using dmft
+ sMig. The kinetic energy approaches a value above the equilibrium value, and the total energy is conserved. Here, the system is expected to gradually approach an equilibrium state with an elevated temperature, whose total energy matches the total energy conserved during the time evolution. On the other hand, within dmft
+ uMig, the phonons act as a heat bath which absorbs the injected energy, and the total energy is not conserved. As a result, the system relaxes back to the original equilibrium state, as exemplified by the kinetic energy returning to its original value, see Fig. holstein_dmft (e). In Fig. holstein_dmft (f), we show the response of the system to a small pair field, which roughly corresponds to the dynamical pair susceptibility, \(\chi^R_{\rm sc} = -i \theta(t)\langle [\hat{B}(t),\hat{B}(0)]\rangle\) with \(\hat{B}(t)=(\hat{c}^\dagger_{i,\uparrow} \hat{c}^\dagger_{i,\downarrow} + h.c.)\). One can see that above the transition temperature, the signal damps without any oscillations, but the damping becomes slower as we approach the transition temperature. At the transition temperature, the damping time diverges. In the SC phase, oscillations emerge whose frequency and lifetime increase with decreasing temperature. This is nothing but the amplitude (Higgs) mode of the superconductor.
dmft
+ sMig. (b) Corresponding local phonon spectrum. (c),(d) Time evolution of the kinetic energy and the total energy after excitation via hopping modulation with different strengths, calculated within dmft
+ sMig. (e) Time evolution of the kinetic energy after the excitation via hopping modulation within dmft
+ uMig. (f) Response of the superconducting order parameter \(\phi_{\rm sc} = \langle \hat{c}^\dagger_\uparrow \hat{c}^\dagger_\downarrow \rangle\) to the SC field with the maximum field strength \(F_{\rm sc} = 0.01\), obtained within dmft
+ sMig. We consider half-filled systems. For (a),(b),(f) we use \(g=0.72,\;\omega_0=1.0\), and for (c)-(e) we use \(g=0.5,\;\omega_0=0.5\) and \(\beta=10.0\). For (c)-(e) we use a \(\sin^2\) envelope for the hopping modulation with excitation frequency \(\Omega=1.5\) and pulse duration \(T_{\rm end}=12.56\). For (f), we use a pair field pulse of the same shape with \(T_{\rm end}=0.6\) and \(\Omega=0.0\).

As in the discussion of the Hubbard chain, we will consider the Hubbard model, but we will assume here a translationally invariant system with periodic boundary conditions. The translational invariance implies that all observables and propagators are diagonal in momentum space. Hence, all quantities can be labeled by the reciprocal lattice vector \(\vec k\) within the first Brillouin zone (BZ). This reduces the computational and storage complexity from \(\mathcal{O}(N_k^2)\) for the real-space formalism introduced in Section Hubbard chain to \(\mathcal{O}(N_k)\). Moreover, the Dyson equation is diagonal in momentum space and can be efficiently parallelized using a distributed-memory parallelization based on MPI
.
We will consider a 1D chain described by the Hubbard model, see Eq. \eqref{eq:hubbmodel1}. The single-particle part of the Hamiltonian can be diagonalized as
\begin{equation} \hat{H}_0 = \sum_{\vec k,\sigma} \epsilon(\vec k) c_{\vec{k}\sigma}^{\dagger} c_{\vec{k}\sigma}, \end{equation}
where we have introduced the free-electron dispersion \(\epsilon(\vec k)=-2 J \cos(k_x)\). The system is excited by an electromagnetic field, which in translationally invariant systems is conveniently introduced using the Peierls substitution. The latter involves the vector potential \(\vec A(\vec r,t)\) as a phase factor in the hopping
\begin{equation} J\rightarrow J \exp\Bigg[\frac{i q}{\hbar} \int_{\vec R_i}^{\vec R_j} d\vec r \cdot \vec A(\vec r,t)\Bigg]\ , \end{equation}
leading to a timedependent singleparticle dispersion
\begin{equation} \epsilon(\vec k,t)=\epsilon(\vec k - q \vec A(t)/\hbar)\ . \end{equation}
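The shifted dispersion is simple enough to sketch directly. In the snippet below the charge q = -1 and the callable a_field are illustrative assumptions, and the tight-binding sign convention \(\epsilon(k)=-2J\cos k\) is used:

```python
import math

def dispersion(k, t, a_field, j_hop=1.0, q=-1.0, hbar=1.0):
    """Time-dependent dispersion from the Peierls substitution,
    eps(k, t) = -2 J cos(k - q A(t) / hbar), for the 1D chain."""
    return -2.0 * j_hop * math.cos(k - q * a_field(t) / hbar)

# a static vector potential A(t) = A0 rigidly shifts the band in k
a0 = 0.3
eps_shifted = dispersion(0.0, 0.0, lambda t: a0)
eps_static = -2.0 * math.cos(0.3)
```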
In this example, we treat the dynamics within the \(GW\) approximation. The numerical evaluation of the respective selfenergy expressions is implemented in the C++ module gw_selfen_impl.cpp
. In momentum space, the correlation part of the \(GW\) selfenergy can be written as
\begin{equation} \label{Eq:Sigmak} \Sigma_{\vec k}(t,t^\prime)=\frac{i}{N_k} \sum_{\vec q} G_{\vec k-\vec q}(t,t^\prime) \delta W_{\vec q}(t,t^\prime), \end{equation}
where we have introduced the Fourier transform of the propagators, \(X_{\vec q}(t,t^\prime)=(1/N_k)\sum_{i} \exp(i (\vec r_i-\vec r_j)\cdot \vec q)\, X_{ij}(t,t^\prime)\), see also the previous Hubbard chain discussion for the definition of the propagators. In line with the previous notation, we have introduced the dynamical part of the effective interaction \(\delta W_{\vec q}\) via \(W_{\vec q}(t,t^\prime)=U\delta_\mathcal{C}(t,t^\prime)+\delta W_{\vec q}(t,t^\prime)\). The retarded part of the interaction is obtained as the solution of the Dyson-like equation
\begin{equation} \label{Eq:Wk} W_{\vec k}(t,t')=U\delta_\mathcal{C}(t,t^\prime)+ U [ \Pi_{\vec k} \ast W_{\vec k} ](t,t^\prime), \end{equation}
the Fourier transform of the polarization is given by
\begin{equation} \label{Eq:Pol} \Pi_{\vec k}(t,t')= \frac{i}{N_k} \sum_{\vec q} G_{\vec k+\vec q}(t,t') G_{\vec q}(t',t). \end{equation}
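The momentum convolution in Eq. \eqref{Eq:Sigmak} at a fixed pair of time arguments reduces to a cyclic sum over the BZ. A scalar toy sketch (the actual code works on contour objects for all time arguments, not on plain complex numbers):

```python
def sigma_gw_k(g_k, dw_k):
    """Sigma_k = (i/N_k) * sum_q G_{k-q} dW_q at fixed (t, t'),
    with k - q folded back into the BZ by periodic index arithmetic."""
    n_k = len(g_k)
    sig = []
    for k in range(n_k):
        acc = 0.0 + 0.0j
        for q in range(n_k):
            acc += g_k[(k - q) % n_k] * dw_k[q]
        sig.append(1j * acc / n_k)
    return sig

# with a momentum-independent dW, the q-sum decouples:
# Sigma_k = i * dW * mean(G) for every k
g = [complex(k, -1.0) for k in range(4)]
sig = sigma_gw_k(g, [2.0] * 4)
```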
The main advantage of the translational invariance is that the propagators and the corresponding Dyson equations are diagonal in momentum space. This leads to a significant speed-up of the calculations, since the most complex operation, the solution of the VIEs, can be performed in parallel. Moreover, the main bottleneck for reaching longer propagation times is the memory restriction imposed by the hardware. The usage of the MPI
parallelization scheme over momentum points alleviates this problem because memory can be distributed among different nodes.
The Dyson equation is diagonal in momentum space, so it is convenient to introduce a class called kpoint
that includes all propagators and corresponding methods for a given momentum point \(\vec q\). In terms of data, the kpoint
class includes the single- and two-particle propagators for a given momentum point \(\vec q\), namely \(G_{\vec q}\) and \(W_{\vec q}\), as well as the self-energy \(\Sigma_{\vec q}\) and the polarization \(\Pi_{\vec q}\). The routines include wrappers around vie2
solvers for the single-particle and two-particle Dyson equations.
While the Dyson equation is diagonal in momentum space, the evaluation of the self-energy diagrams mixes information between different momentum points and therefore requires communication between the ranks. This communication is performed using the cntr::distributed_timestep_array
as described in MPI parallelization. Note that the simple example code presented in MPI parallelization includes a global object of type cntr::herm_matrix_timestep_view<T>
, which only allocates memory if a given momentum is owned by that rank. In the following implementation we will instead work with a local copy of this object on every rank. This produces only a minimal overhead in memory usage and simplifies the implementation. The strategy to compute the \(GW\) self-energy \(\mathcal{T}[\Sigma_{\vec{k}}]_n\) at time step \(n\) thus consists of two steps: (1) gather the latest time step of the propagators for all momentum points from all MPI
ranks; (2) evaluate \(\Sigma_{\vec{k}}\) only for the momentum points owned by each rank. Step 1 is implemented as
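The original C++ listing is not reproduced on this page. As a schematic stand-in, the following stdlib-python sketch mimics what the Allgather-based communication achieves: after the call, the latest time step is available for every momentum point on every rank (the names and the rank-to-momentum layout are illustrative):

```python
def gather_gk_timestep(local_data, kindex_rank_maps):
    """Serial stand-in for the Allgather-based gather: rank r owns the
    time-step data for the momenta kindex_rank_maps[r]; the result
    (gk_all_timesteps) holds the latest time step for every k."""
    n_k = sum(len(m) for m in kindex_rank_maps)
    gk_all = [None] * n_k
    for rank, kmap in enumerate(kindex_rank_maps):
        for j, k_full in enumerate(kmap):   # local index j -> full-BZ index
            gk_all[k_full] = local_data[rank][j]
    return gk_all

# two ranks, four momenta: rank 0 owns k = 0, 2 and rank 1 owns k = 1, 3
kmaps = [[0, 2], [1, 3]]
data = [["G0", "G2"], ["G1", "G3"]]
gk_all_timesteps = gather_gk_timestep(data, kmaps)
```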
and an analogous routine is used for the bosonic counterpart. We gather the information from all ranks into an object of type cntr::distributed_timestep_array
named gk_all_timesteps
. We have introduced a map kindex_rank
which maps an index of momenta from a subset on a given rank to the full BZ, like \(\mathbf{k}_{\text{rank}}\rightarrow \mathbf{k}\). The variable corrK_rank
is just a collection of kpoints
on a given rank. mpi_bcast_all()
is a wrapper around the MPI
routine Allgather
adjusted to the type cntr::distributed_timestep_array
.
Step 2 is implemented as
As each rank stores only a subset of momentum points \(\mathbf{k}_{\text{rank}}\), we only evaluate the self-energy diagrams \(\Sigma_{\mathbf{k}_{\text{rank}}}\) for this subset of momentum points. All ranks carry information about the latest time step for all momentum points, and the internal sum over momentum \(\vec q\) in Eq. \eqref{Eq:Sigmak} is performed on each rank.
As the generic structure is similar to the above examples, we will focus on the peculiarities connected to the usage of MPI
. First, we need to initialize the MPI
session
and the cntr::distributed_timestep_array
for the electronic and bosonic propagators
To simplify the mapping between a subset of points on a given rank and the full BZ we introduce a map kindex_rank
which at position \(j=0,\ldots,N_{\text{rank}}-1\) contains the index of the corresponding momentum in the full BZ.
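A sketch of such a map, assuming a round-robin distribution of momenta over ranks (the actual distribution used in gw.cpp may differ):

```python
def build_kindex_rank(n_k, n_ranks, rank):
    """Round-robin distribution of the N_k momentum points over MPI ranks;
    entry j of the returned list is the full-BZ index of the j-th
    momentum stored on this rank (the kindex_rank map)."""
    return [k for k in range(n_k) if k % n_ranks == rank]

# every momentum point must be owned by exactly one rank
maps = [build_kindex_rank(8, 3, r) for r in range(3)]
owned = sorted(k for m in maps for k in m)
```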
The program consists of three main parts, namely solving the Matsubara Dyson equation, bootstrapping (tstp
\(\leq\) SolverOrder
) and time propagation for tstp
\(>\) SolverOrder
. The self-consistency iterations include the communication of all fermionic and bosonic propagators between the ranks using the routine gather_gk_timestep
(introduced above) and the determination of the local propagators using a routine get_loc
As on each MPI
rank, the momentum-dependent single-particle density matrix \(\rho(\vec k)\) is known for the whole BZ, the evaluation of the HF contribution is done as in Hubbard chain. The self-energies \(\Sigma_{k_{\text{rank}}}\) for the momentum points \(k_{\text{rank}}=0,\ldots,N_{\text{rank}}-1\) on each rank are obtained by the routine sigma_GW
(introduced above).
Similarly, the solution of the Dyson equation for the fermionic (bosonic) propagators for each momentum point is obtained by step_dyson_with_error
(step_W_with_error
) which is just a wrapper around the Dyson solver that returns an error corresponding to the difference between the propagators at the previous and the current iteration. The momentum-dependent error for the fermionic propagators is stored in err_ele
and at the end we use MPI_Allreduce
to communicate among the ranks.
The bootstrapping and the real-time propagation have a similar structure as the Matsubara solver. The main difference lies in the predictor-corrector scheme, as explained in Hubbard chain. At the beginning of each time step, we extrapolate the momentum-dependent GF \(G_{k}\) and the retarded interaction \(W_{k}\), which works as a predictor
Then we make several iterations at a given time step until convergence, which acts as a corrector.
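The predictor-corrector structure can be illustrated on a scalar toy problem, \(\dot y=-y\), with linear extrapolation as predictor and a fixed-point iteration of the implicit trapezoidal rule as corrector. This mimics only the structure of the time stepping, not the NEGF equations:

```python
import math

def step_predictor_corrector(y_hist, dt, rhs, n_corr=5):
    """One time step: extrapolate from the two previous steps (predictor),
    then iterate the implicit trapezoidal rule to self-consistency
    (corrector), starting from the predicted value."""
    y_new = 2.0 * y_hist[-1] - y_hist[-2]          # predictor
    for _ in range(n_corr):                        # corrector iterations
        y_new = y_hist[-1] + 0.5 * dt * (rhs(y_hist[-1]) + rhs(y_new))
    return y_new

# y' = -y with y(0) = 1: propagate to t = 1 and compare with exp(-1)
dt = 0.01
ys = [1.0, math.exp(-dt)]                          # bootstrap: exact first step
for _ in range(99):
    ys.append(step_predictor_corrector(ys, dt, lambda y: -y))
```

A good predictor reduces the number of corrector iterations needed per time step, which is exactly the role of the extrapolation of \(G_k\) and \(W_k\) in the real code.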
After the NEGFs are obtained, we evaluate the kinetic energy \(E_{\text{kin}}(t)=\frac{1}{2 N_k}\sum_{\vec k}\text{Tr}[ \rho_{\vec k}(t) \epsilon_{\vec k}(t)]\) using the routine diag::KineticEnergy
. The interaction energy is obtained from the GalitskiiMigdal formula
\begin{equation} E_{\text{int}}(t)=\frac{1}{2 N_k} \sum_{\vec k} \left ( \mathrm{Tr}\left[\rho_{\vec k}(t) h^{\mathrm{MF}}_{\vec k}(t)\right] + \mathrm{Im}\mathrm{Tr} \left[\Sigma_{\vec k}\ast G_{\vec k}\right]^<(t,t) \right), \end{equation}
using the routine diag::CorrelationEnergy
. Both operations include an MPI
reduction, since the momentum sum is performed over the whole BZ.
There is one program for the \(GW\) calculation, called gw.x
. The driver script demo_gw.py
located in the utils/
directory provides a simple interface to this program. As in the previous cases, the script creates an input file and launches the program. The user can specify the shape of the electric field pulse; by default, we use a single-frequency pulse with a Gaussian envelope
\begin{equation} \label{Eq:pulse} E(t)=E_0 \exp(-4.6(t-t_0)^2/t_0^2) \sin(\omega (t-t_0)), \end{equation}
where \(t_0=(2\pi/\omega) N_{p}\) is determined by the number of cycles \(N_p\). After the simulation, the time evolution of the kinetic and potential energies is plotted.
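The pulse of Eq. \eqref{Eq:pulse} and the corresponding vector potential, \(A(t)=-\int_0^t E(s)\,ds\) (from \(E=-\partial_t A\), in units with the speed of light and the charge set to one), can be sketched as follows; the defaults E0 = 5, omega = 2 and n_p = 2 are illustrative choices, not the script defaults:

```python
import math

def e_field(t, e0=5.0, omega=2.0, n_p=2):
    """Single-frequency pulse with Gaussian envelope, Eq. (pulse):
    E(t) = E0 exp(-4.6 (t - t0)^2 / t0^2) sin(omega (t - t0)),
    with t0 = (2 pi / omega) * N_p set by the number of cycles N_p."""
    t0 = 2.0 * math.pi * n_p / omega
    return e0 * math.exp(-4.6 * (t - t0) ** 2 / t0 ** 2) * math.sin(omega * (t - t0))

def vector_potential(t, dt=1e-3, **kw):
    """A(t) = -integral_0^t E(s) ds by the trapezoidal rule."""
    n = max(int(t / dt), 1)
    h = t / n
    acc = 0.5 * (e_field(0.0, **kw) + e_field(t, **kw))
    for i in range(1, n):
        acc += e_field(i * h, **kw)
    return -h * acc
```

Since \(E(t)\) is antisymmetric about \(t_0\) under the symmetric envelope, the vector potential returns to (nearly) zero after the pulse, so no net momentum shift remains.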
The relevant programs for the lattice \(GW\) example are
programs/gw.cpp  Main program 
utils/demo_gw.py  Running script 
programs/gw_latt_impl.cpp  Implementation of lattice and singleparticle properties 
programs/gw_selfen_impl.cpp  Evaluation of self-energy contributions including MPI communication 
programs/gw_kpoints_impl.cpp  Solution of the Dyson equation for each momentum point 
Corresponding declarations of the routines are in the .hpp
files with the same name as the .cpp
files.
Here, we will briefly illustrate the dynamics after a photoexcitation. A more detailed analysis can be found in the accompanying paper [6]. We excite the system with a short oscillating electric field pulse, see Eq. \eqref{Eq:pulse}. The amplitude of the excitation \(E_0\) determines the absorbed energy. In Fig. FigGW_ekin, we present the time evolution of the kinetic energy for the two excitation strengths \(E_0=3\) and \(E_0=5\). As energy is injected during the pulse and the system heats up, the kinetic energy increases. The observed behavior is consistent with thermalization at a higher temperature, but the transient evolution is complicated by the energy exchange between the electronic and bosonic subsystems (plasmon emission). For the strongest excitation, there is a clear relaxation dynamics towards the final state, see the inset of Fig. FigGW_ekin, accompanied by strongly damped oscillations.
As discussed in the introduction, the usage of the MPI
parallelization scheme over momentum points reduces the memory requirements and also speeds up the execution of the program. To demonstrate this, we have performed a scaling analysis for a system with a fixed number of momentum points, \(N_k=128\), parallelized over up to 128 processors, see Fig. FigGW_scaling. Moreover, for all tests we have fixed the number of tasks per node to one, since in a real-world scenario we want to distribute the memory as much as possible. We can see that the scaling is almost perfect up to 128 processors, with only a slight deviation from optimal scaling. The main reason for this deviation is the communication overhead, since a substantial amount of data, namely the time steps of the propagators for all momentum points, has to be communicated among all nodes.
MPI
processes for systems with \(N_k=128\) momentum points, where we fixed one task per node. The maximum number of time steps used is \(N_t=2500\). These calculations have been performed on the Popeye cluster at the Flatiron Institute.