NESSi  v1.0.1
The NonEquilibrium Systems Simulation Library
Example programs

Installation

After downloading or cloning the NESSi repository nessi, the example programs are compiled in a similar fashion to the libcntr library. We have prepared a CMake build environment, and it is most convenient to write a configuration shell script configure.sh of the following type:

CC=[C compiler] CXX=[C++ compiler]
cmake \
-DCMAKE_BUILD_TYPE=[Debug|Release] \
-Domp=[ON|OFF] \
-Dhdf5=[ON|OFF] \
-Dmpi=[ON|OFF] \
-DCMAKE_INCLUDE_PATH=[include directory] \
-DCMAKE_LIBRARY_PATH=[library directory] \
-DCMAKE_CXX_FLAGS="[compiling flags]" \
..
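
For instance, a filled-in configure.sh could look as follows; the compiler names and paths are placeholders and have to be adapted to the location of eigen3, HDF5 and libcntr on your system:

# example only: adapt compilers and paths to your installation
CC=gcc CXX=g++
cmake \
-DCMAKE_BUILD_TYPE=Release \
-Domp=ON \
-Dhdf5=ON \
-Dmpi=OFF \
-DCMAKE_INCLUDE_PATH="$HOME/opt/include" \
-DCMAKE_LIBRARY_PATH="$HOME/opt/lib" \
-DCMAKE_CXX_FLAGS="-O3 -march=native" \
..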

The dependencies for the example programs are the same as for libcntr. The eigen3 library and, optionally (turned on by hdf5=ON), the HDF5 library are required. Make sure the corresponding libraries can be found in the library path CMAKE_LIBRARY_PATH, while the corresponding headers should be placed in CMAKE_INCLUDE_PATH. Furthermore, the examples depend on the libcntr library. Therefore, the library path for libcntr should be provided by CMAKE_LIBRARY_PATH, while the header cntr/cntr.hpp should be found in the include path. Create a build directory (for instance, cbuild/), navigate there and run the configure script:

sh ../configure.sh

After successful configuration, compile via

make

The executables are then found in the exe/ directory.

The utils/ directory contains useful python driver scripts which simplify the execution of the example programs. In order to run these scripts, the python path must include nessi/libcntr/python and/or nessi/libcntr/python3. The scripts should be run from nessi/examples.
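
For example, assuming the repository has been cloned to $HOME/nessi, the environment can be prepared and a driver script launched as follows:

export PYTHONPATH=$HOME/nessi/libcntr/python3:$PYTHONPATH
cd $HOME/nessi/examples
python utils/test_equilibrium.py 2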

In the following table, we summarize the python scripts and the corresponding executables and briefly explain what each script does.

Summary of the python scripts used to run the example codes in nessi
test_equilibrium.py

Runs the executable test_equilibrium.x to show how the accuracy of the Matsubara Dyson solvers scales with the specified solution order. A figure showing the scaling against \(N_{\tau}\) is created.

test_nonequilibrium.py

Runs test_nonequilibrium.x to show how the accuracy of the integro-differential (Dyson) and integral (VIE2) formulation scales with the specified solution order. A figure showing the scaling against \(N_{t}\) is created.

demo_hubbard_chain.py

Runs hubbard_chain_**.x to simulate quench dynamics of the Hubbard chain. Here, ** (= 2b, gw, tpp) indicates different many-body approximations, which can be specified in the python script. Figures for the time evolution of the density and the energies are created.

demo_Holstein_impurity.py

Runs Holstein_impurity_singlebath_**.x to simulate the dynamics induced by a modulation of system parameters in a Holstein-type impurity with a single bath site. Here, ** (= Migdal, uMig) indicates different approximate impurity solvers, which can be specified in the python script. The spectra of the electrons and phonons as well as the time evolution of the phonon displacement and the energies are plotted.

demo_Holstein.py

Runs Holstein_bethe_**.x to simulate the dynamics of the Holstein model induced by a modulation of system parameters within DMFT. The rest is the same as for demo_Holstein_impurity.py.

demo_Holstein_sc.py

Runs Holstein_bethe_Nambu_**.x, a generalized version of Holstein_bethe_**.x that treats the s-wave superconducting (SC) state. In addition to figures for the spectra and the evolution of the densities and energies, the evolution of the SC order parameter is plotted.

demo_gw.py

Runs gw.x to simulate a one-dimensional chain of the extended Hubbard model within the GW approximation using MPI parallelization. The script creates figures for the electric field and the change in the kinetic energy.

demo_integration.py Runs integration.x to demonstrate the accuracy of the Gregory integration implemented in nessi.

Test of accuracy of Dyson solver: equilibrium

Synopsis

This program tests the accuracy of the solution of the Dyson equation on the Matsubara axis (see Dyson equation) for a numerically exactly solvable downfolding problem. The scaling of the averaged error can be obtained for different values of SolveOrder.

Details and implementation

We consider a \(2\times 2\) matrix-valued time-independent Hamiltonian

\begin{align} \label{eq:ham2x2} \epsilon = \begin{pmatrix} \epsilon_1 & i \lambda \\ -i\lambda & \epsilon_2 \end{pmatrix} \ . \end{align}

The corresponding numerically exact GF \(G(t,t^\prime)\) (assuming fermions) is computed using the routine cntr::green_from_H. One can also calculate the (1,1) component of this GF by downfolding. In this procedure, we solve

\begin{align} \label{eq:downfold1} \left(i \partial_t - \epsilon_1 \right) g_1(t,t^\prime) = \delta_\mathcal{C}(t,t^\prime) + \int_{\mathcal{C}}d \bar{t}\, \Sigma(t,\bar t) g_1(\bar t,t^\prime) \end{align}

with the embedding self-energy \(\Sigma(t,t^\prime) = |\lambda|^2 g_2(t,t^\prime)\). Here, \(g_2(t,t^\prime)\) is the free GF with respect to

\begin{align} \left(i \partial_t - \epsilon_2 \right) g_2(t,t^\prime) = \delta_\mathcal{C}(t,t^\prime) \ . \end{align}

The solution of the downfolding Dyson equation \eqref{eq:downfold1} must be identical to the \((1,1)\) matrix element of \(G\): \(G_{1,1}(t,t^\prime)=g_1(t,t^\prime)\).

The test program for the equilibrium problem solves Eq. \eqref{eq:downfold1} for the Matsubara component only (tstp=-1), and computes the differences between the exact and approximate solution as

\begin{align} \label{eq:test_eq_err} \mathrm{err.} = \frac{1}{N_\tau} \Delta[G_{1,1},g_1] \ . \end{align}

Here, \(\Delta[A,B]\) denotes the norm difference (see Comparing Green's functions) at tstp=-1.

In the test program programs/test_equilibrium.cpp, we have implemented this task in a straightforward way. The code is a direct extension of a generic libcntr-based program (see Creating a custom program). The parameters of the Hamiltonian are defined as constants. In particular, we fix \(\epsilon_1=-1\), \(\epsilon_2=1\), \(\lambda=0.5\). The chemical potential is set to \(\mu=0\) and the inverse temperature fixed to \(\beta=20\). After reading the input from file (see Creating and passing input files), we define the \(2\times 2\) Hamiltonian \eqref{eq:ham2x2} as an eigen3 complex matrix:

cdmatrix eps_2x2(2,2);
eps_2x2(0,0) = eps1;
eps_2x2(1,1) = eps2;
eps_2x2(0,1) = I*lam;
eps_2x2(1,0) = -I*lam;

The \(1\times 1\) Hamiltonian representing \(\epsilon_1\) is constructed as

CFUNC eps_11_func(-1,1);
eps_11_func.set_constant(eps1*MatrixXcd::Identity(1,1));

Here, eps_11_func is a contour function entering the solvers below. Note the first argument in the constructor of CFUNC: the number of real-time points \(N_t\) is set to -1. In this case, only the Matsubara part is addressed. Its value is fixed to the constant \(1\times 1\) matrix by the last line. With the Hamiltonians defined, we can initialize and construct the free \(2\times 2\) exact GF by

GREEN G2x2(-1,Ntau,2,FERMION);
cntr::green_from_H(G2x2,mu,eps_2x2,beta,h);

The time step h is a dummy argument here, as the real-time components are not addressed. From the exact GF, we extract the submatrix \(G_{1,1}\) by

GREEN G_exact(-1,Ntau,1,FERMION);
G_exact.set_matrixelement(-1,0,0,G2x2);

Finally, we define the embedding self-energy by

GREEN Sigma(-1,Ntau,1,FERMION);
cdmatrix eps_22=eps2*MatrixXcd::Identity(1,1);
cntr::green_from_H(Sigma, mu, eps_22, beta, h);
Sigma.smul(-1,lam*lam);

The last line performs the multiplication of \(\mathcal{T}[\Sigma]_{-1}\) with the scalar \(\lambda^2\). After initializing the approximate GF G_approx, we can solve the Matsubara Dyson equation and compute the average error:

cntr::dyson_mat(G_approx, mu, eps_11_func, Sigma, beta, SolveOrder, CNTR_MAT_FOURIER);
err_fourier = cntr::distance_norm2(-1,G_exact,G_approx) / Ntau;
cntr::dyson_mat(G_approx, mu, eps_11_func, Sigma, beta, SolveOrder, CNTR_MAT_FIXPOINT);
err_fixpoint = cntr::distance_norm2(-1,G_exact,G_approx) / Ntau;

The relevant source files are the following:

programs/test_equilibrium.cpp source code of program
utils/test_equilibrium.py python driver script

Running the program

The program is run by

./exe/test_equilibrium.x inp/input_equilibrium.inp

Here, inp/input_equilibrium.inp is a libcntr input file with the format described in Creating and passing input files. It contains the variables

Ntau Number of points on the Matsubara axis
SolveOrder Solution order
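
A minimal input file could look as follows, assuming the double-underscore key convention described in Creating and passing input files:

__Ntau=100
__SolveOrder=2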

The output is appended to the file out/test_equilibrium.dat in two columns: the error of the Fourier method and of the fix-point iteration (see [6] for details). For instance, the input Ntau=100 and SolveOrder=2 yields the output

$ cat out/test_equilibrium.dat
1.9385e-05 7.0587e-06

We provide a useful python driver script, which runs test_equilibrium.x for a range of values of Ntau. It uses the python module ReadCNTR (see Python tools). It can be launched via

python utils/test_equilibrium.py 2

The last argument specifies SolveOrder and can take the values 1,...,5. After running the calculations, the logarithm of the error is plotted against \(\log(N_\tau)\), and the scaling of the error is extracted by a linear regression.

Equilibrium scaling
test_equilibrium.svg
Average error of solving the Matsubara Dyson equation with the Fourier method (circles) and the fix-point iteration (squares) for SolveOrder=1 (left) and SolveOrder=5 (right panel).


Test of accuracy of Dyson solver: nonequilibrium

Synopsis

Similar to the example Test of accuracy of Dyson solver: equilibrium, this program tests the accuracy of the solution of the Dyson equation (see Dyson equation) and of the Volterra integral equation (see VIE2) for the numerically exactly solvable downfolding problem. The scaling of the averaged error can be obtained for different values of SolveOrder.

Details and Implementation

In this example, we solve the Dyson equation \eqref{eq:downfold1} on the whole KB contour \(\mathcal{C}\). Equivalently, one can also solve the Dyson equation in integral form

\begin{align} \label{eq:downfold1_vie} & g_1(t,t^\prime) + [F\ast g_1](t,t^\prime) = g^{(0)}_1(t,t^\prime) \ , \\ & g_1(t,t^\prime) + [g_1\ast F^\ddagger](t,t^\prime) = g^{(0)}_1(t,t^\prime) \ , \end{align}

where \(F= -\Sigma\ast g_1^{(0)}\) and \(F^\ddagger=-g_1^{(0)}\ast \Sigma\). The free GF \(g_1^{(0)}(t,t^\prime)\) is known analytically and computed by calling the routine cntr::green_from_H (see Green's function for a constant or time-dependent Hamiltonian).

The structure of the test program is analogous to the equilibrium case. After reading the parameters from the input file and fixing h=Tmax/Nt, the embedding self-energy is constructed on the whole contour by

cntr::green_from_H(Sigma, mu, eps_22, beta, h);
for(tstp=-1; tstp<=Nt; tstp++) {
Sigma.smul(tstp,lam*lam);
}

Solving a generic Dyson equation is accomplished by three steps (see Dyson equation):

  1. solve the equilibrium problem by solving the corresponding Matsubara Dyson equation,
  2. compute the NEGFs for time steps n=0,...,SolveOrder by using the starting algorithm (bootstrapping), and
  3. perform the time stepping for n=SolveOrder+1,...,Nt.

For Eq. \eqref{eq:downfold1}, this task is accomplished by

GREEN G_approx(Nt, Ntau, 1, FERMION);
// equilibrium
cntr::dyson_mat(G_approx, mu, eps_11_func, Sigma, beta, SolveOrder);
// start
cntr::dyson_start(G_approx, mu, eps_11_func, Sigma, beta, h, SolveOrder);
// time stepping
for (tstp=SolveOrder+1; tstp<=Nt; tstp++) {
cntr::dyson_timestep(tstp, G_approx, mu, eps_11_func, Sigma, beta, h, SolveOrder);
}

The deviation of the nonequilibrium Keldysh components from the exact solution is calculated using the Euclidean norm distance (see Comparing Green's functions). Here, we define the average error by

err_dyson=0.0;
for(tstp=0; tstp<=Nt; tstp++){
err_dyson += 2.0 * cntr::distance_norm2_les(tstp, G_exact, G_approx) / (Nt*Nt);
err_dyson += 2.0 * cntr::distance_norm2_ret(tstp, G_exact, G_approx) / (Nt*Nt);
err_dyson += cntr::distance_norm2_tv(tstp, G_exact, G_approx) / (Nt*Ntau);
}

Note that we compare here individual components of the Green's functions (G_exact and G_approx) by using cntr::distance_norm2_XXX. One may also use a single call to cntr::distance_norm2.

Solving the integral form \eqref{eq:downfold1_vie} by using vie2 proceeds in three analogous steps, as for Dyson (see Non-singular VIE2). The following source code demonstrates the implementation:

// noninteracting 1x1 Greens function (Sigma=0)
GREEN G0(Nt,Ntau,1,FERMION);
cdmatrix eps_11=eps1*MatrixXcd::Identity(1,1);
cntr::green_from_H(G0, mu, eps_11, beta, h);
GREEN G_approx(Nt,Ntau,1,FERMION);
GREEN F(Nt,Ntau,1,FERMION);
GREEN Fcc(Nt,Ntau,1,FERMION);
// equilibrium
GenKernel(-1, G0, Sigma, F, Fcc, beta, h, SolveOrder);
cntr::vie2_mat(G_approx, F, Fcc, G0, beta, SolveOrder);
// start
for(tstp=0; tstp <= SolveOrder; tstp++){
GenKernel(tstp, G0, Sigma, F, Fcc, beta, h, SolveOrder);
}
cntr::vie2_start(G_approx, F, Fcc, G0, beta, h, SolveOrder);
// time stepping
for (tstp=SolveOrder+1; tstp<=Nt; tstp++) {
GenKernel(tstp, G0, Sigma, F, Fcc, beta, h, SolveOrder);
cntr::vie2_timestep(tstp, G_approx, F, Fcc, G0, beta, h, SolveOrder);
}

For convenience, we have defined the routine GenKernel which calculates the convolution kernels \(F\) and \(F^\ddagger\):

void GenKernel(int tstp, GREEN &G0, GREEN &Sigma, GREEN &F, GREEN &Fcc, const double beta, const double h, const int SolveOrder){
cntr::convolution_timestep(tstp, F, G0, Sigma, beta, h, SolveOrder);
cntr::convolution_timestep(tstp, Fcc, Sigma, G0, beta, h, SolveOrder);
F.smul(tstp,-1);
Fcc.smul(tstp,-1);
}

The error is computed in the same way as above:

err_vie2=0.0;
for(tstp=0; tstp<=Nt; tstp++){
err_vie2 += 2.0 * cntr::distance_norm2_les(tstp, G_exact, G_approx) / (Nt*Nt);
err_vie2 += 2.0 * cntr::distance_norm2_ret(tstp, G_exact, G_approx) / (Nt*Nt);
err_vie2 += cntr::distance_norm2_tv(tstp, G_exact, G_approx) / (Nt*Ntau);
}

The implementation of this example can be found here:

programs/test_nonequilibrium.cpp source code of program
utils/test_nonequilibrium.py python driver script

Running the program

The program is run by

./exe/test_nonequilibrium.x inp/input_nonequilibrium.inp

As above, inp/input_nonequilibrium.inp is an input file formatted as explained in Creating and passing input files, providing the following input variables:

Nt Number of points on the real-time axis
Ntau Number of points on the Matsubara axis
Tmax Propagation time, defines the time step interval by h=Tmax/Nt
SolveOrder Solution order
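
Again assuming the double-underscore key convention, a corresponding input file could look like

__Nt=100
__Ntau=200
__Tmax=5.0
__SolveOrder=1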

The output is appended to the file out/test_nonequilibrium.dat. The two columns contain the error of solving the problem as Dyson equation and as Volterra integral equation, respectively. As an example, setting Nt=100, Ntau=200, Tmax=5.0 and SolveOrder=1 yields

$ cat out/test_nonequilibrium.dat
0.0081178 4.5843e-05

The python driver script utils/test_nonequilibrium.py provides a simple interface to the test program and performs calculations for a series of values of Nt. Simply run

python utils/test_nonequilibrium.py k

where k=1,...,5 stands for the SolveOrder input.

After running the calculations, the logarithm of the error is plotted against \(\log(N_t)\), and the scaling of the error is extracted by a linear regression of the logarithmic errors. The resulting plots produced by utils/test_nonequilibrium.py are shown in Fig. Nonequilibrium_scaling. As confirmed there, the average error scales as \(\mathcal{O}(h^{k+1})\) for dyson, while solving the problem using vie2 yields an \(\mathcal{O}(h^{k+2})\) scaling ( \(k\) stands for SolveOrder).

Nonequilibrium scaling
test_nonequilibrium.svg
Average error for the integro-differential (Dyson) and integral (VIE2) formulation of the downfolding problem for SolveOrder=1 (left) and SolveOrder=5 (right panel). We adopt the same parameters as for the equilibrium case but fix Ntau=800.


Hubbard chain

Synopsis

In this example we consider the one-dimensional (1D) Hubbard model with the Hamiltonian

\begin{align} \label{eq:hubbmodel1} \hat{H}_0 = -J \sum_{\langle i,j \rangle, \sigma} \hat{c}^\dagger_{i\sigma} \hat{c}_{j \sigma} + \frac{U}{2}\sum_{i,\sigma} \hat{n}_{i\sigma} \hat{n}_{i\bar\sigma} \ , \end{align}

where \(\langle i,j\rangle\) constrains the lattice sites \(i,j\) to the nearest neighbors, while \(\bar\sigma=\uparrow,\downarrow\) for \(\sigma=\downarrow,\uparrow\). We consider \(M\) lattice sites with open boundary conditions. Furthermore, we restrict ourselves to the paramagnetic case with an equal number of spin-up ( \(N_\uparrow\)) and spin-down ( \(N_\downarrow\)) particles. The number of particles determines the filling factor \(\bar{n}=N_\uparrow/M\).

In this example, the system is excited with an instantaneous quench of the on-site potential of the first lattice site to \(w_0\):

\begin{align} \label{eq:hubbchain_quench} \hat{H}(t) = \hat{H}_0 + \theta(t) w_0 \sum_\sigma \hat{c}^\dagger_{1\sigma} \hat{c}_{1\sigma} \ . \end{align}

Here, we will treat the dynamics with respect to the Hamiltonian \eqref{eq:hubbchain_quench} within the second-Born (2B) and the \(GW\) approximation.

Details and implementation

The source code is organized as follows:

programs/hubbard_chain_2b.cpp main program for the 2B approximation
programs/hubbard_chain_gw.cpp main program for the \(GW\) approximation
programs/hubbard_chain_selfen_impl.cpp implementation of self-energy approximations

Corresponding declarations of the routines are in the .hpp files with the same name as the .cpp files. We will discuss the implementation of the 2B approximation in detail and then remark on how to adopt it for the \(GW\) approximation.

Second-Born approximation

The 2B approximation corresponds to the second-order expansion in terms of the Hubbard repulsion \(U(t)\), which we treat as time dependent here for generality. Defining the GF with respect to the lattice basis \(i,j\), the 2B self-energy is given by

\begin{align} \label{eq:sigma_hubb_2b} \Sigma_{ij,\sigma}(t,t^\prime) = U(t)U(t^\prime) G_{ij,\sigma}(t,t^\prime) G_{ij,\bar\sigma}(t,t^\prime)G_{ji,\bar\sigma}(t^\prime,t) \ . \end{align}

Due to the paramagnetic constraint, \(G_{ij,\sigma}(t,t^\prime) = G_{ij,\bar\sigma}(t,t^\prime)\); the spin index can be dropped throughout. The 2B self-energy \eqref{eq:sigma_hubb_2b} is implemented in two steps.

  1. The (per-spin) polarization \( P_{ij}(t,t^\prime)= -i G_{ij}(t,t^\prime)G_{ji}(t^\prime,t) \) is computed using the routine cntr::Bubble1 and subsequent multiplication by (-1).
  2. The self-energy is then given by \(\Sigma_{ij}(t,t^\prime) = i U(t) U(t^\prime) G_{ij}(t,t^\prime) P_{ij}(t,t^\prime) \), which corresponds to a bubble diagram computed by the routine cntr::Bubble2.

The 2B self-energy is computed by the routine Sigma_2B (found in programs/hubbard_chain_selfen_impl.cpp) as follows:

void Sigma_2B(int tstp, GREEN &G, CFUNC &U, GREEN &Sigma){
int nsites=G.size1();
int ntau=G.ntau();
GREEN_TSTP Pol(tstp,ntau,nsites,BOSON);
Polarization(tstp, G, Pol);
Pol.right_multiply(U);
Pol.left_multiply(U);
for(int i=0; i<nsites; i++){
for(int j=0; j<nsites; j++){
cntr::Bubble2(tstp,Sigma,i,j,G,i,j,Pol,i,j);
}
}
}

First, the polarization Pol, which represents \(P_{ij}(t,t^\prime)\), is defined for the given time step. After computing \(P_{ij}(t,t^\prime)\) by the function

void Polarization(int tstp, GREEN &G, GREEN_TSTP &Pol){
int nsites=G.size1();
for(int i=0; i<nsites; i++){
for(int j=0; j<nsites; j++){
cntr::Bubble1(tstp,Pol,i,j,G,i,j,G,i,j);
}
}
Pol.smul(-1.0);
}

the lines

Pol.right_multiply(U);
Pol.left_multiply(U);

perform the operation \(P_{ij}(t,t^\prime) \rightarrow P_{ij}(t,t^\prime) U(t^\prime)\) and \(P_{ij}(t,t^\prime) \rightarrow U(t)P_{ij}(t,t^\prime) \), respectively. Finally, cntr::Bubble2 computes \(\Sigma_{ij}(t,t^\prime)\).

Mean-field Hamiltonian and onsite quench

The total self-energy also includes the Hartree-Fock (HF) contribution, which we incorporate into the mean-field Hamiltonian \(\epsilon^{\mathrm{MF}}_{ij}(t) = \epsilon^{0}_{ij}(t) + \delta_{ij} U (n_i(t)-\bar{n}) \) with the occupation (per spin) \(n_i(t)= \mathrm{Im}[G^<_{ii}(t,t)]\). The shift of the chemical potential by \(-U \bar{n}\) is a convention which fixes the chemical potential at half filling to \(\mu=0\). In the example program, the mean-field Hamiltonian is represented by the cntr::function eps_mf. Updating eps_mf is accomplished by computing the density matrix using the class routine cntr::herm_matrix::density_matrix.
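
For illustration, a minimal sketch of such a mean-field update could look as follows. This is not the actual hubb::Ham_MF routine from programs/hubbard_chain_selfen_impl.cpp; the signature, the constant interaction U and the explicit filling argument are simplifying assumptions.

void Ham_MF_sketch(int tstp, GREEN &G, double U, double filling, cdmatrix &eps0, CFUNC &eps_mf){
int nsites=G.size1();
cdmatrix rho(nsites,nsites);
cdmatrix eps=eps0;
// per-spin density matrix n_ij(t) at time step tstp
G.density_matrix(tstp,rho);
// add the Hartree shift U*(n_i - nbar) to the diagonal
for(int i=0; i<nsites; i++) eps(i,i) += U*(rho(i,i).real() - filling);
eps_mf.set_value(tstp,eps);
}

In the actual program, the interaction is instead passed as the contour function Ut, as seen in the calls to hubb::Ham_MF below.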

The general procedure to implement a quench of some parameter \(\lambda\) at \(t=0\) is to represent \(\lambda\) by a contour function \(\lambda_n\): \(\lambda_{-1}\) corresponds to the pre-quench value which determines the thermal equilibrium, while \(\lambda_n\) with \(n\ge 0\) governs the time evolution. In the example program, we simplify this procedure by redefining \(\epsilon^{0}_{ij} \rightarrow \epsilon^{0}_{ij} + w_0 \delta_{i,1}\delta_{j,1}\) after the Matsubara Dyson equation has been solved.
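
In code, this simplified quench amounts to a single update of the single-particle Hamiltonian after the Matsubara Dyson equation has been solved; a sketch (assuming eps0 denotes the cdmatrix holding \(\epsilon^{0}_{ij}\) and w0 the quench amplitude) reads

// on-site quench of the first lattice site, applied for all real-time steps
eps0(0,0) += w0;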

Structure of the example program

The program is structured similarly to the previous examples. After reading variables from file and initializing the variables and classes, the Matsubara Dyson equation is solved in a self-consistent fashion. The example below illustrates this procedure for the 2B approximation.

tstp=-1;
gtemp = GREEN(SolveOrder,Ntau,Nsites,FERMION);
gtemp.set_timestep(tstp,G);
for(int iter=0;iter<=MatsMaxIter;iter++){
// update mean field
hubb::Ham_MF(tstp, G, Ut, eps0, eps_mf);
// update self-energy
hubb::Sigma_2B(tstp, G, Ut, Sigma);
// solve Dyson equation
cntr::dyson_mat(G, MuChem, eps_mf, Sigma, beta, SolveOrder);
// self-consistency check
err = cntr::distance_norm2(tstp,G,gtemp);
if(err<MatsMaxErr){
break;
}
gtemp.set_timestep(tstp,G);
}

Updating the mean-field Hamiltonian (hubb::Ham_MF), updating the self-energy (hubb::Sigma_2B) and solving the corresponding Dyson equation (cntr::dyson_mat) are repeated until self-consistency has been reached, which in practice means that the deviation between the previous and the updated GF is smaller than MatsMaxErr. For other self-energy approximations, additional steps (the update of auxiliary quantities such as the screened interaction) have to be performed before the self-energy can be updated.

Once the Matsubara Dyson equation has been solved up to the required convergence threshold, the starting algorithm for time steps \(n=0,\dots,k\) can be applied. To reach self-consistency for the first few time steps, we employ the bootstrapping loop:

for (int iter = 0; iter <= BootstrapMaxIter; iter++) {
// update mean field
for(tstp=0; tstp<=SolveOrder; tstp++){
hubb::Ham_MF(tstp, G, Ut, eps0, eps_mf);
}
// update self-energy
for(tstp=0; tstp<=SolveOrder; tstp++){
hubb::Sigma_2B(tstp, G, Ut, Sigma);
}
// solve Dyson equation
cntr::dyson_start(G, MuChem, eps_mf, Sigma, beta, h, SolveOrder);
// self-consistency check
err=0.0;
for(tstp=0; tstp<=SolveOrder; tstp++) {
err += cntr::distance_norm2(tstp,G,gtemp);
}
if(err<BootstrapMaxErr && iter>2){
break;
}
for(tstp=0; tstp<=SolveOrder; tstp++) {
gtemp.set_timestep(tstp,G);
}
}

Finally, after the bootstrapping iteration has converged, the time propagation for time steps n>SolveOrder is launched. The self-consistency at each time step is accomplished by iterating the update of the mean-field Hamiltonian, GF and self-energy over a fixed number of CorrectorSteps. As an initial guess, we employ a polynomial extrapolation of the GF from time step \(n-1\) to \(n\), as implemented in the routine cntr::extrapolate_timestep. Thus, the time propagation loop takes the form

for(tstp = SolveOrder+1; tstp <= Nt; tstp++){
// Predictor: extrapolation
cntr::extrapolate_timestep(tstp-1,G,SolveOrder);
// Corrector
for (int iter=0; iter < CorrectorSteps; iter++){
// update mean field
hubb::Ham_MF(tstp, G, Ut, eps0, eps_mf);
// update self-energy
hubb::Sigma_2B(tstp, G, Ut, Sigma);
// solve Dyson equation
cntr::dyson_timestep(tstp,G,MuChem,eps_mf,Sigma,beta,h,SolveOrder);
}
}

After the GF has been computed for all required time steps, we compute the observables. In particular, the conservation of the total energy provides a good criterion to assess the accuracy of the calculation. The total energy for the Hubbard model \eqref{eq:hubbmodel1} is given in terms of the Galitskii-Migdal formula

\begin{align} E = \frac{1}{2}\mathrm{Tr}\left[\rho(t)\left(\epsilon^{(0)} +\epsilon^{\mathrm{MF}}(t)\right)\right] + \frac{1}{2} \mathrm{Im}\mathrm{Tr} \left[\Sigma\ast G\right]^<(t,t) \ .\label{Eq:Total_ener} \end{align}

The last term, known as the correlation energy, is most conveniently computed by the routine:

Ecorr = cntr::correlation_energy(tstp, G, Sigma, beta, h);
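
The single-particle contribution to Eq. \eqref{Eq:Total_ener} can be evaluated from the density matrix. A minimal sketch (variable names and the handling of the spin factor are assumptions, not the program's actual code) could read

cdmatrix rho(Nsites,Nsites), eps_mf_tmp(Nsites,Nsites);
// per-spin density matrix and mean-field Hamiltonian at time step tstp
G.density_matrix(tstp,rho);
eps_mf.get_value(tstp,eps_mf_tmp);
// 1/2 Tr[rho (eps0 + eps_mf)]; the factor 1/2 cancels against the spin factor 2
// when the per-spin density matrix is used
double E1 = ((eps0 + eps_mf_tmp)*rho).trace().real();
double Etot = E1 + Ecorr;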

GW approximation

Next we consider the \(GW\) approximation. We remark that we formally treat the Hubbard interaction as spin-independent; the spin summation in the polarization \(P\), which would be forbidden by the Pauli principle, is excluded by the corresponding prefactor.

Within the same setup as above, the \(GW\) approximation is defined by

\begin{align} \Sigma_{ij}(t,t^\prime) = i G_{ij}(t,t^\prime) \delta W_{ij}(t,t^\prime) \ , \end{align}

where \(\delta W_{ij}(t,t^\prime)\) denotes the dynamical part of the screened interaction \(W_{ij}(t,t^\prime) = U \delta_{ij}\delta_\mathcal{C}(t,t^\prime) + \delta W_{ij}(t,t^\prime)\). We compute \(\delta W_{ij}(t,t^\prime)\) from the charge susceptibility \(\chi_{ij}(t,t^\prime)\) by \(\delta W_{ij}(t,t^\prime) = U(t) \chi_{ij}(t,t^\prime) U(t^\prime)\). In turn, the susceptibility obeys the Dyson equation

\begin{align} \label{eq:dyson_chi} \chi = P + P\ast U \ast \chi \ , \end{align}

The strategy to compute the \(GW\) self-energy consists of three steps:

  1. Computing the polarization \(P_{ij}(t,t^\prime)\) by cntr::Bubble1 (as in Second-Born approximation).
  2. Solving the Dyson equation \eqref{eq:dyson_chi} as VIE. By defining the kernel \(K_{ij}=-P_{ij}(t,t^\prime)U(t^\prime)\) and its hermitian conjugate, Eq. \eqref{eq:dyson_chi} amounts to \([1+K]\ast \chi=P\), which is solved using cntr::vie2.
  3. Computing the self-energy by cntr::Bubble2.

The implementation of step 1 was already discussed above. For step 2, we distinguish between the equilibrium and time stepping on the one hand, and the starting phase on the other hand. For the former, we have defined the routine

void GenChi(int tstp, double h, double beta, GREEN &Pol,
CFUNC &U, GREEN &PxU, GREEN &UxP, GREEN &Chi, int order){
PxU.set_timestep(tstp, Pol);
UxP.set_timestep(tstp, Pol);
PxU.right_multiply(tstp, U);
UxP.left_multiply(tstp, U);
PxU.smul(tstp,-1.0);
UxP.smul(tstp,-1.0);
if(tstp==-1){
cntr::vie2_mat(Chi,PxU,UxP,Pol,beta,CINTEG(order));
} else{
cntr::vie2_timestep(tstp,Chi,PxU,UxP,Pol,CINTEG(order),beta,h);
}
}

Here, PxU and UxP correspond to the kernel \(K_{ij}\) and its hermitian conjugate, respectively. Analogously, the starting routine is implemented as

void GenChi(double h, double beta, GREEN &Pol, CFUNC &U,
GREEN &PxU, GREEN &UxP, GREEN &Chi, int order){
for(int n = 0; n <= order; n++){
PxU.set_timestep(n, Pol);
UxP.set_timestep(n, Pol);
PxU.right_multiply(n, U);
UxP.left_multiply(n, U);
PxU.smul(n,-1.0);
UxP.smul(n,-1.0);
}
cntr::vie2_start(Chi,PxU,UxP,Pol,CINTEG(order),beta,h);
}

Finally, the self-energy is computed by

void Sigma_GW(int tstp, GREEN &G, CFUNC &U, GREEN &Chi, GREEN &Sigma){
int nsites=G.size1();
int ntau=G.ntau();
GREEN_TSTP deltaW(tstp,ntau,nsites,CNTR_BOSON);
Chi.get_timestep(tstp,deltaW);
deltaW.left_multiply(U);
deltaW.right_multiply(U);
for(int i=0; i<nsites; i++){
for(int j=0; j<nsites; j++){
cntr::Bubble2(tstp,Sigma,i,j,G,i,j,deltaW,i,j);
}
}
}

The above implementation of the \(GW\) self-energy can be found in programs/hubbard_chain_selfen_impl.cpp. The structure of the program (programs/hubbard_chain_gw.cpp) is analogous to Structure of the example program.

Running the example programs

There are two programs, for the 2B and the \(GW\) approximation, respectively: hubbard_chain_2b.x and hubbard_chain_gw.x. The driver script demo_hubbard_chain.py located in the utils/ directory provides a simple interface to these programs. Simply run

python utils/demo_hubbard_chain.py

After defining the model and convergence parameters, the script creates the corresponding input files and launches the programs in a loop. The occupation of the first lattice site \(n_1(t)\) and the kinetic and total energy are then read from the output files and plotted. The script demo_hubbard_chain.py also allows one to pass reference data as an optional argument, which can be used for comparison with exact results. For some parameters, results of the exact treatment are available in the data/ directory.

Discussion

As an example, we consider the Hubbard dimer ( \(M=2\)) for \(U=1\) at half filling ( \(\mu=0\), \(\bar{n}=1/2\)). The system is excited by a strong on-site quench \(w_0=5\). The results are obtained by running

python utils/demo_hubbard_chain.py data/hubbardchain_exact_M2_U1_n1_w5.dat

This strong excitation leads to the phenomenon of artificial damping: although the density \(n_1(t)\) exhibits an oscillatory behavior for all times in an exact treatment, the approximate NEGF treatment, with either self-energy approximation considered here, shows damping until an unphysical steady state is reached.

Dynamics of the Hubbard chain
hubbardchain.svg
Here, we consider the Hubbard dimer with M=2 and U=1 at half filling. We used Nt=400 and Ntau=400 at an inverse temperature of beta=20.0 and a time step of h=0.025.


Holstein impurity model coupled to a bath

Synopsis

In this example and the next example, we consider electron-phonon (el-ph) coupled systems. We start with a simple model describing electrons on two sites \(i=0,1\) and a phonon mode coupled to site 0. The Hamiltonian reads

\begin{align} \hat{H}(t)&=-J(t)\sum_{\sigma}(\hat{c}_{0\sigma}^{\dagger}\hat{c}_{1\sigma}+\hat{c}_{1\sigma}^{\dagger}\hat{c}_{0\sigma}) + \sum_{i} \epsilon_i \hat{n}_{i} +\omega_0 \hat{a}^{\dagger} \hat{a}+g(t) (\hat{a}^{\dagger}+\hat{a})\hat{n}_0.\label{eq:Holstein_imp} \end{align}

Here, \(\hat{c}^\dagger_{i\sigma}\) is the creation operator of an electron with spin \(\sigma\) at lattice site \(i\), \(\hat{n}_i = \sum_{\sigma}\hat{c}_{i\sigma}^{\dagger}\hat{c}_{i\sigma}\), and \(\hat{a}^\dagger\) creates an Einstein phonon coupled to the \(0\)th site. \(J(t)\) is the hopping parameter of the electrons, \(\epsilon_i\) is the energy level of site \(i\), \(\omega_0\) is the phonon frequency and \(g(t)\) is the el-ph coupling. If we regard the 0th site as an impurity, this model corresponds to a Holstein impurity model with a single bath site. This type of impurity model is self-consistently solved in a dynamical mean-field theory calculation, which we explain in the next example.

In the present example, we consider excitations via a modulation of the hopping parameter or the el-ph coupling. We treat the dynamics within the self-consistent Migdal approximation or the unrenormalized Migdal approximation.

Details and implementation

The source code is organized as follows:

programs/Holstein_impurity_singlebath_uMig.cpp main program for the unrenormalized Migdal approximation
programs/Holstein_impurity_singlebath_Migdal.cpp main program for the self-consistent Migdal approximation
programs/Holstein_utils_impl.cpp subroutines for the evaluation of the phonon energy and the phonon displacement
programs/Holstein_impurity_impl.cpp implementation of self-energy approximations

The corresponding declarations of the routines are in the .hpp files with the same name as the .cpp files. We will first discuss how to solve the problem, and then explain the Migdal approximations.

We first introduce the GFs for the electrons and phonons as

\begin{align} G_{ij}(t,t') &= -i \Bigl\langle T_{\mathcal{C}}\hat{c}_{i\sigma}(t) \hat{c}^{\dagger}_{j\sigma} (t')\Bigl\rangle, \\ D(t,t') &= -i\langle T_{\mathcal{C}} \Delta\hat{X}(t) \Delta\hat{X}(t') \rangle, \end{align}

where \(\hat{X} = \hat{a}^\dagger + \hat{a}\) and \(\Delta\hat{X}(t) = \hat{X}(t) - \langle \hat{X}(t) \rangle \). Here, we consider the spin symmetric case.

Since the system is coupled to the phonon only at site 0, one can trace out the contribution from the bath site (site 1) and focus on the GF for site 0, \(G_{00}\). Then the electron and phonon GFs are determined by the Dyson equations

\begin{align} &[i\partial_{t}-\epsilon_0-\Sigma^{\rm MF}(t)]G_{00}(t,t')-[(\Delta + \Sigma^{\rm corr})* G_{00}](t,t')=\delta_{\mathcal C}(t,t')\label{eq:Dyson_imp},\\ &D(t,t^\prime) = D_{0}(t,t^\prime) + [D_{0}*\Pi * D](t,t^\prime) \label{eq:D_dyson}, \end{align}

and the phonon displacement, \(X(t)=\langle \hat{X}(t)\rangle\), which can be calculated as

\begin{align} X(t) = -\frac{2 g(0)}{\omega_0} n_0(0) + \int^t_0 d\bar{t} D_0^R(t,\bar{t})[g({\bar t})n_0(\bar{t})-g(0)n_0(0)]. \label{eq:X} \end{align}

Here the mean-field contribution ( \(\Sigma^{\rm MF}(t)\)) corresponds to

\begin{align} \Sigma^{\rm MF}(t) &= g(t)X(t),\label{eq:H_mf_Holstein} \end{align}

and \(\Delta(t,t^\prime)\) is the hybridization function, which can be expressed as

\begin{align} \Delta(t,t') = J(t) G_{0,11}(t,t') J(t') \label{eq:Hyb_imp} \end{align}

with \([i\partial_t -\epsilon_j]G_{0,jj}(t,t')=\delta_\mathcal{C}(t,t^\prime)\). \(\Sigma^{\rm corr}(t,\bar{t})\) is the beyond-mean-field contribution to the self-energy, \(D_{0}(t,t')\equiv-i\langle T_\mathcal{C}\Delta\hat{X}(t) \Delta\hat{X}(t') \rangle_0\) is the GF of the free phonon system, \(\Pi\) is the phonon self-energy and \(n_0(t)= \langle \hat{n}_0(t) \rangle\).

For a general impurity model, the hybridization function \(\Delta(t,t^\prime)\) takes an arbitrary form; however, the structure of the equations (Eqs. \eqref{eq:Dyson_imp}, \eqref{eq:D_dyson}, \eqref{eq:X}, \eqref{eq:H_mf_Holstein}) stays the same. This type of impurity problem is solved within nonequilibrium dmft, where the hybridization function is self-consistently determined (see Idea of DMFT below).

Once \(G_{00}\) and \(\Sigma\) are obtained, the remaining electron GFs can easily be calculated. For example, \( G_{10} = -G_{0,11} * J * G_{00} \). The energies can also be evaluated in a post-processing step. The kinetic energy (the expectation value of the first two terms of the Hamiltonian \eqref{eq:Holstein_imp}) is

\begin{align} E_{\rm kin}(t)&=-2i[\Delta \ast G_{00}+G_{00} \ast \Delta]^<(t,t) + \sum_{i} \epsilon_i n_{i}(t) . \end{align}

The interaction energy (the expectation value of the last term of Eq. \eqref{eq:Holstein_imp}) can be expressed as

\begin{align} E_{\rm nX}(t)&=g(t) X(t) n_0(t) - 2i[\Sigma^{\rm corr} * G_{00}]^<(t,t), \end{align}

where the first term is the mean-field contribution and the second term is the correlation energy. The phonon energy (the expectation value of the third term of Eq. \eqref{eq:Holstein_imp}) is

\begin{align}\label{eq:ph_energy} E_{\rm ph}(t)=\frac{\omega_{0}}{4} [iD^<(t,t) + X(t)^2]+\frac{\omega_{0}}{4} [iD_{\rm PP}^<(t,t) + P(t)^2]. \end{align}

Here \( D_{\rm PP}(t,t')=-i\langle T_\mathcal{C} \Delta\hat{P}(t) \Delta\hat{P}(t')\rangle\) with \(\hat{P}=\frac{1}{i}(\hat{a}-\hat{a}^\dagger)\), \( P(t) \equiv \langle\hat{P}(t)\rangle \) and \(\Delta\hat{P}(t) \equiv \hat{P}(t) - \langle \hat{P}(t)\rangle\).

Self-consistent Migdal approximation as an impurity solver: sMig

The self-consistent Migdal approximation is the lowest-order ( \(\mathcal{O}(g^2)\)) self-consistent approximation for the self-energies, based on renormalized GFs. Within this approximation, one can treat the dynamics and the renormalization of the phonons induced by the electron-phonon coupling. Even though the phonons can absorb energy from the electrons, the total energy of the system is conserved in this approximation. The impurity self-energies for the electrons and phonons are approximated as

\begin{align} \hat{\Sigma}^{\rm sMig,corr}(t,t') & =ig(t)g(t') D(t,t')G_{00}(t,t'),\label{eq:sMig_el}\\ \Pi^{\rm sMig}(t,t')&=-2 i g(t)g(t') G_{00}(t,t') G_{00}(t',t).\label{eq:sMig_ph} \end{align}

In the sample program, the sMig self-energy is computed by the routine Sigma_Mig (found in programs/Holstein_impurity_impl.cpp). We provide two interfaces for \(0\leq \) tstp \( \leq \) SolverOrder (bootstrapping part) and tstp \(=-1\), tstp \(>\) SolverOrder (Matsubara part and the time-stepping part), respectively. Here, we show the latter as an example:

void Sigma_Mig(int tstp, GREEN &G, GREEN &Sigma, GREEN &D0, GREEN &D, GREEN &Pi, GREEN &D0_Pi, GREEN &Pi_D0, CFUNC &g_el_ph, double beta, double h, int SolverOrder, int MAT_METHOD){
int Norb=G.size1();
int Ntau=G.ntau();
//step1: phonon self-energy gGgG
GREEN_TSTP gGg(tstp,Ntau,Norb,FERMION);
G.get_timestep(tstp,gGg);
gGg.right_multiply(g_el_ph);
gGg.left_multiply(g_el_ph);
GREEN_TSTP Pol(tstp,Ntau,1,BOSON);
Pi.set_timestep_zero(tstp);
Bubble1(tstp,Pol,0,0,gGg,0,0,G,0,0);
Pi.incr_timestep(tstp,Pol,-2.0);
//step2: solve phonon Dyson to evaluate D^corr
step_D(tstp, D0, D, Pi, D0_Pi, Pi_D0, beta, h, SolverOrder, MAT_METHOD);
//step3: evaluate electron self-energy
Bubble2(tstp,Sigma,0,0,D,0,0,gGg,0,0);
}

This routine contains three steps.

  1. The phonon self-energy (Eq. \eqref{eq:sMig_ph}) is evaluated using cntr::Bubble1.
  2. The Dyson equation of the phonon Green's function (Eq. \eqref{eq:D_dyson}) is solved using the routine step_D, see below. Here MAT_METHOD specifies which method is used to solve the Dyson equation of the Matsubara part.
  3. The electron self-energy (Eq. \eqref{eq:sMig_el}) is evaluated using cntr::Bubble2.

For the bootstrapping part, we carry out these steps for tstp \(\leq\) SolverOrder at once, using start_D instead of step_D.

The routine for solving the Dyson equation for the phonons is implemented using the cntr routines vie2_**. The interface for solving the Matsubara Dyson equation and time stepping (step_D) is given by

void step_D(int tstp,GREEN &D0, GREEN &D, GREEN &Pi, GREEN &D0_Pi, GREEN &Pi_D0,
double beta, double h, int SolverOrder, int MAT_METHOD){
//step1: set D0_Pi=-D0*Pi
cntr::convolution_timestep(tstp,D0_Pi,D0,Pi,beta,h,SolverOrder);
D0_Pi.smul(tstp,-1.0);
cntr::convolution_timestep(tstp,Pi_D0,Pi,D0,beta,h,SolverOrder);
Pi_D0.smul(tstp,-1.0);
//step2: solve [1-D0*Pi]*D=[1+D0_Pi]*D=D0
if(tstp==-1){
cntr::vie2_mat(D,D0_Pi,Pi_D0,D0,beta,SolverOrder,MAT_METHOD);
}else {
cntr::vie2_timestep(tstp,D,D0_Pi,Pi_D0,D0,beta,h,SolverOrder);
}
}

This routine is separated into two steps.

  1. \(-D_0 * \Pi\), which is the kernel of Eq. \eqref{eq:D_dyson}, and its conjugate for \(-\Pi * D_0\) are prepared by cntr::convolution_timestep.
  2. Eq. \eqref{eq:D_dyson} is solved as a vie with vie2_** for the corresponding time step.

In start_D, we perform these steps for \(0 \leq\) tstp \(\leq\) SolverOrder at the same time, and use vie2_start.
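
As an illustration, a sketch of how such a start_D routine could be structured, following step_D above and the vie2_start interface used in the nonequilibrium test example (this is a sketch, not necessarily the actual implementation shipped with the examples), is

void start_D(GREEN &D0, GREEN &D, GREEN &Pi, GREEN &D0_Pi, GREEN &Pi_D0,
double beta, double h, int SolverOrder){
//step1: set the kernels -D0*Pi and -Pi*D0 on all bootstrapping time steps
for(int n=0; n<=SolverOrder; n++){
cntr::convolution_timestep(n,D0_Pi,D0,Pi,beta,h,SolverOrder);
D0_Pi.smul(n,-1.0);
cntr::convolution_timestep(n,Pi_D0,Pi,D0,beta,h,SolverOrder);
Pi_D0.smul(n,-1.0);
}
//step2: solve [1+D0_Pi]*D=D0 on the bootstrapping time steps
cntr::vie2_start(D,D0_Pi,Pi_D0,D0,beta,h,SolverOrder);
}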

The free phonon GF, \(D_{0}(t,t')/2\) is computed using the cntr routine

cntr::green_single_pole_XX(D0,Phfreq_w0,beta,h);

Unrenormalized Migdal approximation as an impurity solver: uMig

In the unrenormalized Migdal approximation, the phonons are kept in equilibrium and play the role of a heat bath. Therefore, the total energy is not conserved. The impurity self-energy for the electrons is approximated as

\begin{align} \label{eq:uMig} \hat{\Sigma}^{\rm uMig,corr}(t,t') & =ig(t)g(t')D_{0}(t,t') G_{00}(t,t'). \end{align}

We call this approximation uMig. The implementation is simpler than in the case of sMig and can be found in the routine Sigma_uMig.

Structure of the example program

The example programs for the sMig and uMig approximations are provided in programs/Holstein_impurity_singlebath_Migdal.cpp and programs/Holstein_impurity_singlebath_uMig.cpp. The main body of the example programs consists of two parts. In the first part, we focus on \(G_{00}\) and evaluate it by solving the equations with the corresponding approximations for the self-energies. Once \(G_{00}\) and the self-energies are determined, all other GFs and the energies are evaluated in the second part.

As in the case of the Hubbard chain, the first part consists of three main steps: (i) solving the Matsubara Dyson equation, (ii) bootstrapping (tstp \(\leq\) SolverOrder) and (iii) time propagation for tstp \(>\) SolverOrder. Since the generic structure of each part is similar to that of the Hubbard chain, we only show here the part corresponding to the time propagation.

for(tstp = SolverOrder+1; tstp <= Nt; tstp++){
// Predictor: extrapolation
cntr::extrapolate_timestep(tstp-1,G00,SolverOrder);
// Corrector
for (int iter=0; iter < CorrectorSteps; iter++){
cdmatrix rho_M(1,1), Xph_tmp(1,1);
cdmatrix g_elph_tmp(1,1), h00_MF_tmp(1,1);
//update correlated self-energies
Hols::Sigma_Mig(tstp, G00, Sigma, D0, D, Pi, D0_Pi, Pi_D0, g_elph_t, beta, h, SolverOrder);
//update phonon displacement
G00.density_matrix(tstp,rho_M);
rho_M *= 2.0;//spin number=2
n0_tot_t.set_value(tstp,rho_M);
Hols::get_phonon_displace(tstp, Xph_t, n0_tot_t, g_elph_t, D0, Phfreq_w0, SolverOrder, h);
//update mean-field
Xph_t.get_value(tstp,Xph_tmp);
g_elph_t.get_value(tstp,g_elph_tmp);
h00_MF_tmp = h00 + Xph_tmp*g_elph_tmp;
h00_MF_t.set_value(tstp,h00_MF_tmp);
//solve Dyson for G00
Hyb_Sig.set_timestep(tstp,Hyb);
Hyb_Sig.incr_timestep(tstp,Sigma,1.0);
cntr::dyson_timestep(tstp, G00, 0.0, h00_MF_t, Hyb_Sig, beta, h, SolverOrder);
}
}

Here the hybridization \(\Delta(t,t')\) is initially fixed via Eq. \eqref{eq:Hyb_imp}. At the beginning of each time step, we extrapolate \(G_{00}\) to the current time step, which works as a predictor. Next, we iterate Eqs. \eqref{eq:Dyson_imp}, \eqref{eq:D_dyson}, \eqref{eq:X}, \eqref{eq:H_mf_Holstein}, \eqref{eq:sMig_el} and \eqref{eq:sMig_ph} (or \eqref{eq:uMig}) to self-consistently determine the self-energies, which works as a corrector. During the iteration, we also update the phonon displacement, \(X(t)\), using Eq. \eqref{eq:X}. The update of \(X(t)\) is implemented in the routine get_phonon_displace using a cntr routine for the (Gregory) integration, integration::Integrator.

In the second part, we evaluate the rest of the GFs. For the energies, the cntr routines correlation_energy_convolution and convolution_density_matrix are used. In particular, the calculation of the phonon energy \eqref{eq:ph_energy} is implemented by the routine evaluate_phonon_energy.

Storing Green's functions

In the example programs, we store the GFs in the hdf5 format using the routines cntr::herm_matrix::write_to_hdf5_slices and cntr::herm_matrix::write_to_hdf5_tavtrel provided by cntr::herm_matrix. The corresponding part looks as follows:

char fnametmp[1000];
std::sprintf(fnametmp,"%s_green.h5",flout);
hid_t file_id = open_hdf5_file(std::string(fnametmp));
hid_t group_id = create_group(file_id,"parm");
//parameters
store_int_attribute_to_hid(group_id,"Nt",Nt);
store_int_attribute_to_hid(group_id,"Ntau",Ntau);
store_double_attribute_to_hid(group_id,"dt",dt);
store_double_attribute_to_hid(group_id,"beta",beta);
close_group(group_id);
//green's function
//G at each t_n: G^R(t_n,t), G^<(t,t_n), G^tv(t_n,\tau) for t<=t_n
group_id = create_group(file_id, "G00_slice");
G.write_to_hdf5_slices(group_id,OutEvery);
close_group(group_id);
// G(t_rel,t_av) for t_av = [0,OutEvery*dt,2*OutEvery*dt...] for t_rel >= 0
group_id = create_group(file_id, "G00_tavrel");
G.write_to_hdf5_tavtrel(group_id,OutEvery);
close_group(group_id);

It is useful to store the number of real-time steps (Nt), the number of Matsubara points (Ntau), the size of the time step (dt) and the inverse temperature as parm in the same hdf5 file. The routine write_to_hdf5_slices stores the contour function \(F(t,t')\) in the form \(F^R({\rm tstp},n)\), \(F^<(n,{\rm tstp})\), \(F^{\rceil}({\rm tstp},\tau)\) for \(n\leq {\rm tstp}\) at \({\rm tstp} = m\cdot {\rm OutEvery}\). The routine write_to_hdf5_tavtrel stores the retarded and lesser parts in the form \(F(t_{\rm rel};t_{\rm av})\), where \(t_{\rm rel}=t-t'\) and \(t_{\rm av}=\frac{t+t'}{2}\), for \(\frac{t_{\rm av}}{dt} = 0,{\rm OutEvery},2\,{\rm OutEvery},\ldots\) and \(t_{\rm rel}\geq 0\).

Running the example programs

There are two programs, for sMig and uMig, respectively: Holstein_impurity_singlebath_Migdal.x and Holstein_impurity_singlebath_uMig.x. These programs allow one to treat excitations via a modulation of the hopping parameter and of the el-ph coupling. We use \(\epsilon_{0,\mathrm{MF}}\equiv \epsilon_{0} + gX(0)\) as an input parameter instead of \(\epsilon_{0}\) ( \(\epsilon_{0}\) is determined in a post-processing step). The driver script demo_Holstein_impurity.py located in the utils/ directory provides a simple interface to these programs. After defining the system parameters, the calculation scheme (sMig or uMig, time step, convergence parameters, etc.) and the excitation parameters, the script creates the corresponding input file and starts the simulation. Simply run

python utils/demo_Holstein_impurity.py

In demo_Holstein_impurity.py, the shape of the excitation is

\begin{align} g(t) & = g + \Delta g \sin\Bigl(\frac{\pi t}{T_{\rm end}}\Bigr)^2\cos(\Omega t), \label{eq:g_mod_shape}\\ J(t) & = J + \Delta J \sin\Bigl(\frac{\pi t}{T_{\rm end}}\Bigr)^2\cos(\Omega t), \end{align}

and one needs to specify \(T_{\rm end}\), \(\Omega\), \(\Delta g\) and \(\Delta J\). At the end of the simulation, the number of particles per spin for each site ( \(n_{i,\sigma}(t)\)), the phonon displacement ( \(X(t)\)), phonon momentum ( \(P(t)\)) and the energies are plotted. In addition, the spectral functions of the electrons and phonons,

\begin{align} A^R(\omega;t_{\rm av}) &= -\frac{1}{\pi} \int dt_{\rm rel} e^{i\omega t_{\rm rel}} F_{\rm gauss}(t_{\rm rel}) G^R (t_{\rm rel};t_{\rm av})\label{eq:spectrum_demo},\\ B^R(\omega;t_{\rm av}) &= -\frac{1}{\pi} \int dt_{\rm rel} e^{i\omega t_{\rm rel}} F_{\rm gauss}(t_{\rm rel}) D^R (t_{\rm rel};t_{\rm av})\label{eq:spectrum_demo2}, \end{align}

are evaluated for the average time \(t_{\rm av}=\frac{Nt \cdot h}{2}\). Here \(t_{\rm rel}\) indicates the relative time, and \(F_{\rm gauss}(t_{\rm rel})\) is a Gaussian window function, which can also be specified in demo_Holstein_impurity.py. Specifically, the evaluation of the spectral function is done in two steps:

t_rel,G_ret,G_les = read_gf_ret_let_tavtrel(output_file + '_green.h5','Gloc_tavrel',dt,tstp_plot)
A_w = evaluate_spectrum_windowed(G_ret,t_rel,ome,method,Fwindow)

In the first line, the python routine read_gf_ret_let_tavtrel from python3/SpectrumCNTRhdf5.py reads the relative time and the corresponding retarded and lesser GFs from the .h5 file, in which the Green's functions are stored in the form \( G (t_{\rm rel};t_{\rm av})\); here \( t_{\rm av} \) is specified by tstp_plot. In the second line, Eq. \eqref{eq:spectrum_demo} is evaluated by the routine evaluate_spectrum_windowed from python3/SpectrumCNTRhdf5.py. Here one can specify the order of the interpolation of the GFs for the Fourier transformation (the argument method), and Fwindow is the window function, which we take to be a Gaussian in this example.

Discussion

In Fig. holstein_imp (a)–(b), we show the electron spectral function measured at site 0 and the phonon spectrum evaluated within the sMig approximation. For \(J=0\), site 0 is decoupled from site 1. The electron spectrum shows a main peak around \(\omega=0.5=\epsilon_{0,\mathrm{MF}}\) and side peaks around \(\epsilon_{0,\mathrm{MF}}\pm\omega_0\). As the hopping parameter is increased, the electron levels at site 0 and site 1 hybridize and an additional sharp peak at \(\omega<0\) appears in the electron spectrum, see panel (a). The phonon spectrum shown in panel (b) reveals a shift of the peak position from the bare value \(\omega_0=1.0\) and a finite width of the peak (phonon renormalization).

In Fig. holstein_imp (c)–(e), we show the time evolution of the phonon displacement, the density and the total energy after an excitation via hopping modulation. These results were obtained using the sMig approximation. The phonon displacement shows damped oscillations, see panel (c). Panel (d) demonstrates that after the excitation, the number of electrons at site 0 and site 1 changes with time, while the total number is conserved. Within sMig the total energy is also conserved after the excitation, as is shown in panel (e). On the other hand, when uMig is used, the total energy is not conserved after the excitation, see panel (f). In uMig, the phonons can act as a heat bath and dissipate energy, which is clearly evident when the method is used to treat extended systems, see next section.

Simulation results for the Holstein impurity model
holstein_imp_demo.svg
(a)(b) Spectral functions of the electrons at site 0 (a) and phonon spectrum (b) for various \(J\) obtained using sMig. (c)(d)(e) Time evolution of the phonon displacement, the electron density and the total energy after excitation via hopping modulations with different strengths obtained within sMig. (f) Time evolution of the total energy after excitation via hopping modulations with different strength obtained within uMig. For (a-f), we use \(g=0.5,\omega_0=1.0,\beta=2.0,\epsilon_{0,mf}=0.5,\epsilon_1=-0.5\). For (a-b), a window width \(\sigma_{\rm window}=20.0\) is used for \(F_{\rm gauss}(t_{\rm rel})\). For (c-f), we use \(J_0=0.5\) and a \(\sin^2\) envelope for the hopping modulation with excitation frequency \(\Omega=2.0\) and pulse duration \(T_{\rm end}=12.56\).


DMFT for the Holstein model

Synopsis

Here, we consider the lattice version of the previous problem, the so-called Holstein model, whose Hamiltonian is

\begin{align} H(t)&=-J(t)\sum_{\langle i,j\rangle,\sigma}\hat{c}_{i,\sigma}^{\dagger}\hat{c}_{j,\sigma}-\mu\sum_i \hat{n}_i +\omega_0\sum_i \hat{a}^{\dagger}_i \hat{a}_i+g(t)\sum_i (\hat{a}_i^{\dagger}+\hat{a}_i)\hat{n}_i.\label{eq:Holstein} \end{align}

Here \(\mu\) is the chemical potential and the remaining terms are the same as in the previous section. For simplicity, in the following we consider the Bethe lattice (with infinite coordination number). For this lattice, the free problem has a semi-circular density of states, \(\rho_0(\epsilon) = \frac{1}{2\pi J^{*2}} \sqrt{4J^{*2}-\epsilon^2}\), with \(J^*\) a properly renormalized hopping amplitude. Here we take \(J^*=1\). Assuming spin symmetry, the lattice GFs are

\begin{align} G_{ij}(t,t') &= -i \langle T_{\mathcal{C}}\hat{c}_{i,\sigma}(t) \hat{c}_{j,\sigma}^{\dagger}(t') \rangle, \\ D_{ij}(t,t') &= -i\langle T_{\mathcal{C}} \Delta\hat{X}_i(t) \Delta\hat{X}_j(t') \rangle. \end{align}

In this example, we treat the dynamics of the Holstein model using the dmft formalism, where we map the lattice model to an effective impurity model with a self-consistently adjusted free-electron bath. We provide examples for the normal state and the superconducting (SC) state with the self-consistent Migdal (sMig) and the unrenormalized Migdal (uMig) approximations as impurity solvers.

Details and implementation

The source code is organized as follows:

programs/Holstein_bethe_uMig.cpp main program for dmft with the unrenormalized Migdal approximation
programs/Holstein_bethe_Migdal.cpp main program for dmft with the self-consistent Migdal approximation
programs/Holstein_bethe_Nambu_uMig.cpp main program for dmft with the unrenormalized Migdal approximation for SC
programs/Holstein_bethe_Nambu_Migdal.cpp main program for dmft with the self-consistent Migdal approximation for SC
programs/Holstein_utils_impl.cpp subroutines for the evaluation of the phonon energy and the phonon displacement
programs/Holstein_impurity_impl.cpp implementation of self-energy approximations

We note that Holstein_utils_impl.cpp and Holstein_impurity_impl.cpp are shared with the previous examples.

Idea of DMFT

For the normal states, the corresponding effective impurity model for the Holstein model is the model considered in the previous section with a general hybridization function \(\Delta(t,t')\) (corresponding to an infinite number of bath sites). The hybridization function is self-consistently determined so that the impurity GF and the impurity self-energy ( \(\Sigma_{\rm imp}\)) are identical to the local Green's function ( \(G_{\rm loc}=G_{ii}\)) and the local self-energy of the lattice problem, respectively. In the case of a Bethe lattice, the dmft lattice self-consistency procedure simplifies and the hybridization function can be determined directly from the GF,

\begin{align} \Delta(t,t')=J^*(t) G_{\rm imp}(t,t') J^*(t').\label{eq:bethe_condition} \end{align}

When the impurity self-energy is given, the impurity GFs are determined by solving Eqs. \eqref{eq:Dyson_imp}, \eqref{eq:D_dyson} and \eqref{eq:X}, regarding \(G_{00}\) as \(G_{\rm imp}\) and \(D\) as \(D_{\rm imp}\). For the impurity self-energy, we can again use the sMig or the uMig approximation. In this case, the main difference between the dmft simulation of the Holstein model on the Bethe lattice and the Holstein impurity problem discussed in the previous example is whether or not the hybridization function is updated according to Eq. \eqref{eq:bethe_condition}.

After the NEGFs are obtained, we can calculate some observables such as different energy contributions. The expressions for the energies (per site) are the same as in the case of the impurity, except for the kinetic energy, which now becomes

\begin{align} E_{\rm kin}(t)&=\frac{1}{N}\sum_{\langle i,j\rangle,\sigma}-J(t)\langle c_{i,\sigma}^{\dagger}(t)c_{j,\sigma}(t)\rangle =-2i[\Delta \ast G_{\rm loc}]^<(t,t). \end{align}

Structure of the example program

The corresponding programs for the normal state are implemented in Holstein_bethe_Migdal.cpp and Holstein_bethe_uMig.cpp. The structure of the code is almost the same as that for the impurity problem; the major difference is the update of the hybridization function. To clarify this, we show the implementation of the time propagation in Holstein_bethe_Migdal.cpp:

for(tstp = SolverOrder+1; tstp <= Nt; tstp++){
// Predictor: extrapolation
cntr::extrapolate_timestep(tstp-1,G,CINTEG(SolverOrder));
cntr::extrapolate_timestep(tstp-1,Hyb,CINTEG(SolverOrder));
// Corrector
for (int iter=0; iter < CorrectorSteps; iter++){
//=========================
// Solve Impurity problem
// ========================
cdmatrix rho_M(1,1), Xph_tmp(1,1);
cdmatrix g_elph_tmp(1,1),h0_imp_MF_tmp(1,1);
//update self-energy
Hols::Sigma_Mig(tstp, G, Sigma, D0, D, Pi, D0_Pi, Pi_D0, g_elph_t, beta, h, SolverOrder);
//update phonon displacement
G.density_matrix(tstp,rho_M);
rho_M *= 2.0;//spin number=2
n_tot_t.set_value(tstp,rho_M);
Hols::get_phonon_displace(tstp, Xph_t, n_tot_t, g_elph_t, D0, Phfreq_w0, SolverOrder,h);
//update mean-field
Xph_t.get_value(tstp,Xph_tmp);
g_elph_t.get_value(tstp,g_elph_tmp);
h0_imp_MF_tmp = h0_imp + Xph_tmp*g_elph_tmp;
h0_imp_MF_t.set_value(tstp,h0_imp_MF_tmp);
//solve Dyson for impurity
Hyb_Sig.set_timestep(tstp,Hyb);
Hyb_Sig.incr_timestep(tstp,Sigma,1.0);
cntr::dyson_timestep(tstp, G, 0.0, h0_imp_MF_t, Hyb_Sig, beta, h, SolverOrder);
//===================================
// DMFT lattice self-consistency (Bethe lattice)
// ===================================
//Update hybridization
Hyb.set_timestep(tstp,G);
Hyb.right_multiply(tstp,J_hop_t);
Hyb.left_multiply(tstp,J_hop_t);
}
}

At the beginning of each time step, we extrapolate the local GF and the hybridization function, which serves as a predictor. Next, we iterate the dmft self-consistency loop (corrector) several times. In this loop, we first solve the impurity problem to update the local self-energy and GF. Then we update the hybridization function via the lattice self-consistency condition, which in the case of the Bethe lattice simplifies to Eq. \eqref{eq:bethe_condition}.

s-wave SC: Nambu formalism

Because of the attractive interaction mediated by the phonons, this model exhibits an s-wave SC state at low enough temperatures. We can treat this situation as well using the Nambu formalism. By defining \(\hat{\Psi}^\dagger_i = [c^\dagger_{i\uparrow}, c_{i\downarrow}]\), the Holstein model is expressed as

\begin{align} H(t)&=-\sum_{\langle i,j\rangle}\hat{\Psi}_{i}^{\dagger}\hat{J}(t)\hat{\Psi}_{j}-\mu'\sum_i \hat{\Psi}_{i}^{\dagger}{\boldsymbol\sigma}_z\hat{\Psi}_{i} +\omega_0\sum_i \hat{b}^{\dagger}_i \hat{b}_i+g\sum_i (\hat{b}_i^{\dagger}+\hat{b}_i)\hat{\Psi}_{i}^{\dagger}{\boldsymbol\sigma}_z\hat{\Psi}_{i} +F_{\rm sc}(t) \sum_i \hat{\Psi}_{i}^{\dagger}{\boldsymbol\sigma}_x\hat{\Psi}_{i}.\label{eq:Holstein_Nambu} \end{align}

Here \({\boldsymbol\sigma}_a\) denotes the Pauli matrices, and \(\hat{J}(t) = {\rm diag}[J(t),-J(t)]\). In this expression, we have introduced the new phonon operator \(\hat{b}^\dagger = \hat{a}^\dagger + \frac{g}{\omega_0}\) and \(\mu' = \mu +\frac{2g^2}{\omega_0}\). We take \(\phi_{sc}\equiv \langle \hat{c}^\dagger_{i,\uparrow} \hat{c}^\dagger_{i,\downarrow} + h.c. \rangle\) as the superconducting order parameter and choose it to be real in equilibrium. We also introduce the time-dependent pair field, \(F_{\rm sc}(t) \sum_i (\hat{c}^\dagger_{i,\uparrow} \hat{c}^\dagger_{i,\downarrow} + h.c.) =F_{\rm sc}(t) \sum_i \hat{\Psi}_{i}^{\dagger}{\boldsymbol \sigma}_x\hat{\Psi}_{i}\), which is useful to numerically evaluate the dynamical pair susceptibility, \(\chi^R_{\rm pair}(t,t') = -i \frac{1}{N} \langle [\hat{B}(t),\hat{B}(t')]\rangle\) with \(\hat{B} = \sum_i (\hat{c}^\dagger_{i,\uparrow} \hat{c}^\dagger_{i,\downarrow} + h.c.)\). In the Nambu formalism, the lattice GFs are defined as

\begin{align} {\bf G}_{ij}(t,t') &= -i \Bigl\langle T_{\mathcal{C}}\hat{\Psi}_{i}(t) \hat{\Psi}_{j}^{\dagger}(t')\Bigl\rangle, \\ D_{ij}(t,t') &= -i\langle T_{\mathcal{C}} \Delta\hat{X}_{b,i}(t) \Delta\hat{X}_{b,j}(t') \rangle, \end{align}

for electrons and phonons, respectively. Here \(\hat{X}_{b,i} = \hat{b}^\dagger_i + \hat{b}_i\) and \(\Delta \hat{X}_{b,i} = \hat{X}_{b,i}-\langle \hat{X}_{b,i}(t)\rangle\). This problem can also be treated with DMFT, but the corresponding impurity problem now includes a hybridization function between \(\hat{c}^\dagger_{i,\uparrow}\) (\(\hat{c}_{i,\downarrow}\)) and \(\hat{c}^\dagger_{i,\downarrow}\) (\(\hat{c}_{i,\uparrow}\)). The impurity problem can be solved with the self-consistent Migdal approximation or the unrenormalized Migdal approximation, as in the normal state, and the corresponding programs are implemented in Holstein_bethe_Nambu_Migdal.cpp and Holstein_bethe_Nambu_uMig.cpp.
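In the programs, the main practical change with respect to the normal-state codes is that all contour objects carry a 2x2 Nambu (orbital) structure. A minimal sketch of the allocation, assuming the GREEN typedef for cntr::herm_matrix<double> and the FERMION sign convention used throughout the examples (grid sizes are illustrative):

// Sketch: Nambu-space propagators are ordinary contour functions with orbital dimension 2.
// GREEN is assumed to be a typedef for cntr::herm_matrix<double>, as in the other examples.
int Nt = 200, Ntau = 400;              // illustrative grid sizes
GREEN G(Nt, Ntau, 2, FERMION);         // local Nambu GF; off-diagonal entries are the anomalous components
GREEN Sigma(Nt, Ntau, 2, FERMION);     // Nambu self-energy
GREEN Hyb(Nt, Ntau, 2, FERMION);       // hybridization function, now including anomalous terms

The superconducting order parameter \(\phi_{\rm sc}\) can then be read off from the off-diagonal components of the Nambu density matrix of G.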

Running the example programs

For the simulation of normal states, we provide two programs for the sMig and uMig approximations, respectively: Holstein_bethe_Migdal.x and Holstein_bethe_uMig.x. For the simulation of superconducting states, we additionally provide Holstein_bethe_Nambu_Migdal.x and Holstein_bethe_Nambu_uMig.x. In these programs, we use \(\mu_{\mathrm{MF}}\equiv \mu - gX(0)\) (or \(\mu_{\mathrm{MF}}\equiv \mu' - gX_b(0)\) for the Nambu case) as an input parameter instead of \(\mu\) (\(\mu'\)); \(\mu\) and \(\mu'\) are determined in a post-processing step. For the normal phase, excitations via modulations of the hopping and of the el-ph coupling are implemented, while for the superconducting phase, excitations via a time-dependent pair field (the excitation term is \(F_{\rm sc}(t) \sum_i (\hat{c}^\dagger_{i,\uparrow} \hat{c}^\dagger_{i,\downarrow} + h.c. )\)) and via hopping modulation are implemented. The driver scripts demo_Holstein.py and demo_Holstein_sc.py located in the utils/ directory provide a simple interface to these programs. They have essentially the same structure as demo_Holstein_impurity.py described in the previous section. After specifying the system parameters and excitation conditions, simply run

python utils/demo_Holstein.py
python utils/demo_Holstein_sc.py

The form of the excitation provided in these python scripts is the same as in the previous example, see Eq. \eqref{eq:g_mod_shape}.

Discussion

In Fig. holstein_dmft (a)–(b), we show the equilibrium spectral functions in the normal phase and the SC phase. These results were obtained using DMFT + sMig. One can observe the opening of the SC gap below the transition temperature. Within DMFT + sMig, the phonon spectrum is renormalized, see Fig. holstein_dmft (b). In the normal phase, the peak in the phonon spectrum is shifted relative to the bare phonon frequency and exhibits a finite width. If the electron-phonon coupling is sufficiently strong (in the SC phase), a peculiar peak appears around the energy scale of the SC gap. In Fig. holstein_dmft (c),(d), we show the time evolution of the kinetic energy and the total energy after an excitation via hopping modulation using DMFT + sMig. The kinetic energy approaches a value above its equilibrium value, and the total energy is conserved. Here, the system is expected to gradually approach an equilibrium state with an elevated temperature, whose total energy equals the total energy conserved during the time evolution. Within DMFT + uMig, on the other hand, the phonons act as a heat bath which absorbs the injected energy, and the total energy is not conserved. As a result, the system relaxes back to the original equilibrium state, as exemplified by the kinetic energy returning to its original value, see Fig. holstein_dmft (e). In Fig. holstein_dmft (f), we show the response of the system to a small pair field, which roughly corresponds to the dynamical pair susceptibility, \(\chi^R_{\rm sc}(t) = -i \theta(t)\langle [\hat{B}(t),\hat{B}(0)]\rangle\) with \(\hat{B}=\hat{c}^\dagger_{i,\uparrow} \hat{c}^\dagger_{i,\downarrow} + h.c.\). One can see that above the transition temperature, the signal damps without oscillations, but the damping becomes slower as the transition temperature is approached; at the transition temperature, the damping time diverges. In the SC phase, oscillations emerge whose frequency and lifetime increase with decreasing temperature. This is nothing but the amplitude (Higgs) mode of the superconductor.

Simulation results for the Holstein model within the DMFT formalism
holstein_demo.svg
(a) Local spectral functions of the electrons above and below the SC transition temperature obtained using DMFT + sMig. (b) Corresponding local phonon spectrum. (c),(d) Time evolution of the kinetic energy and the total energy after excitation via hopping modulation with different strengths, calculated within DMFT + sMig. (e) Time evolution of the kinetic energy after the excitation via hopping modulation within DMFT + uMig. (f) Response of the superconducting order parameter \(\phi_{\rm sc} = \langle \hat{c}^\dagger_\uparrow \hat{c}^\dagger_\downarrow \rangle\) to the SC field with the maximum field strength \(F_{\rm sc} = 0.01\), obtained within DMFT + sMig. We consider half-filled systems. For (a),(b),(f) we use \(g=0.72\), \(\omega_0=1.0\), and for (c),(d),(e) we use \(g=0.5\), \(\omega_0=0.5\) and \(\beta=10.0\). For (c),(d),(e) we use a \(\sin^2\) envelope for the hopping modulation with excitation frequency \(\Omega=1.5\) and pulse duration \(T_{\rm end}=12.56\). For (f), we use a pair field pulse of the same shape with \(T_{\rm end}=0.6\) and \(\Omega=0.0\).


Lattice GW

As in the discussion of the Hubbard chain, we will consider the Hubbard model, but we will assume here a translationally invariant system with periodic boundary conditions. The translational invariance implies that all observables and propagators are diagonal in momentum space. Hence, all quantities can be labeled by the reciprocal lattice vector \(\vec k\) within the first Brillouin zone (BZ). This reduces the computational and storage complexity from \(\mathcal{O}(N_k^2)\) for the real space formalism introduced in Section Hubbard chain to \(\mathcal{O}(N_k).\) Moreover, the Dyson equation is diagonal in momentum space and can be efficiently parallelized using a distributed-memory parallelization based on MPI.

Synopsis

We will consider a 1D chain described by the Hubbard model, see Eq. \eqref{eq:hubbmodel1}. The single particle part of the Hamiltonian can be diagonalized as

\begin{equation} \hat{H}_0 = \sum_{\vec k,\sigma} \epsilon(\vec k) c_{\vec{k}\sigma}^{\dagger} c_{\vec{k}\sigma}, \end{equation}

where we have introduced the free-electron dispersion \(\epsilon(\vec k)=-2 J \cos(\vec k_x).\) The system is excited by an electromagnetic field, which in translationally invariant systems is conveniently introduced via the Peierls substitution. The latter includes the vector potential \(\vec A(\vec r,t)\) as a phase factor in the hopping

\begin{equation} J\rightarrow J \exp\Bigg[-\frac{i q}{\hbar} \int_{\vec R_i}^{\vec R_j} d\vec r \vec A(\vec r,t)\Bigg]\ , \end{equation}

leading to a time-dependent single-particle dispersion

\begin{equation} \epsilon(\vec k,t)=\epsilon(\vec k- q \vec A(t)/\hbar)\ . \end{equation}
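As a simple illustration, the Peierls-substituted dispersion of the 1D chain can be evaluated as follows (a minimal sketch in units \(q=\hbar=1\); the function name is illustrative and not part of the gw example):

#include <cmath>

// Sketch: time-dependent dispersion eps(k,t) = -2 J cos(k - A(t)) of the 1D chain
// after Peierls substitution, in units q = hbar = 1 (names are illustrative).
double epsilon_kt(double k, double J, double A_t) {
  return -2.0 * J * std::cos(k - A_t);
}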

In this example, we treat the dynamics within the \(GW\) approximation. The numerical evaluation of the respective self-energy expressions is implemented in the C++ module gw_selfen_impl.cpp. In momentum space, the correlation part of the \(GW\) self-energy can be written as

\begin{equation} \label{Eq:Sigmak} \Sigma_{\vec k}(t,t^\prime)=\frac{i}{N_k} \sum_{\vec q} G_{\vec k-\vec q}(t,t^\prime) \delta W_{\vec q}(t,t^\prime), \end{equation}

where we have introduced the Fourier transform of the propagator \(X_{\vec q}(t,t^\prime)=(1/N_k)\sum_{i} \exp(i (\vec r_i-\vec r_j) \vec q) X_{ij}(t,t^\prime),\) see also the previous Hubbard chain discussion for the definition of the propagators. In line with the previous notation, we have introduced the dynamical part of the effective interaction \(\delta W_{\vec q}\) via \(W_{\vec q}(t,t^\prime)=U\delta_\mathcal{C}(t,t^\prime)+\delta W_{\vec q}(t,t^\prime)\). The retarded part of the interaction is obtained as a solution of the Dyson-like equation

\begin{equation} \label{Eq:Wk} W_{\vec k}(t,t')=U\delta_\mathcal{C}(t,t^\prime)+ U [ \Pi_{\vec k} \ast W_{\vec k} ](t,t^\prime), \end{equation}

where the Fourier transform of the polarization is given by

\begin{equation} \label{Eq:Pol} \Pi_{\vec k}(t,t')= \frac{-i}{N_k} \sum_{\vec q} G_{\vec k+\vec q}(t,t') G_{\vec q}(t',t). \end{equation}

MPI parallelization

The main advantage of the translational invariance is that the propagators and the corresponding Dyson equations are diagonal in momentum space. This leads to a significant speed-up of the calculations, since the most complex operation, the solution of the VIEs, can be performed in parallel. Moreover, the main bottleneck for reaching longer propagation times is the memory restriction imposed by the hardware. The MPI parallelization over momentum points alleviates this problem because the memory can be distributed among different nodes.

Since the Dyson equation is diagonal in momentum space, it is convenient to introduce a class kpoint that includes all propagators and the corresponding methods for a given momentum point \(\vec q\). In terms of data, the kpoint class holds the single- and two-particle propagators for a given momentum point \(\vec q\), namely \(G_{\vec q}\) and \(W_{\vec q}\), as well as the self-energy \(\Sigma_{\vec q}\) and the polarization \(\Pi_{\vec q}\). Its routines include wrappers around the libcntr solvers for the single-particle and two-particle Dyson equations.
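A stripped-down sketch of such a kpoint class is given below. The members G_, Sigma_ and P_ appear in the code snippets that follow; the constructor and the remaining names are illustrative, and the actual class in the gw example differs in detail.

// Sketch of a container holding all contour objects for one momentum point q.
// G_, Sigma_ and P_ are the member names used in the snippets below; the rest is illustrative.
class kpoint {
 public:
  kpoint(int Nt, int Ntau, int Norb, double beta, double h)
    : G_(Nt, Ntau, Norb, FERMION), Sigma_(Nt, Ntau, Norb, FERMION),
      P_(Nt, Ntau, Norb, BOSON), W_(Nt, Ntau, Norb, BOSON), beta_(beta), h_(h) {}

  // wrappers around the libcntr timestep solvers for this momentum point go here,
  // e.g. the step_dyson_with_error(...) and step_W_with_error(...) routines used below;
  // mean-field contributions (SHartree_, SFock_) are stored alongside the propagators

  GREEN G_;      // single-particle propagator G_q
  GREEN Sigma_;  // GW self-energy Sigma_q
  GREEN P_;      // polarization Pi_q
  GREEN W_;      // retarded interaction W_q

 private:
  double beta_, h_;
};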

While the Dyson equation is diagonal in momentum space, the evaluation of the self-energy diagrams mixes information between different momentum points and hence requires communication between the ranks. This communication is performed using cntr::distributed_timestep_array as described in MPI parallelization. Note that the simple example code presented in MPI parallelization uses a global object of type cntr::herm_matrix_timestep_view<T>, which only allocates memory if a given momentum point is owned by that rank. In the following implementation we will instead work with a local copy of this object on every rank. This produces only a minimal overhead in memory usage and simplifies the implementation. The strategy to compute the \(GW\) self-energy \(\mathcal{T}[\Sigma_{\vec{k}}]_n\) at time step \(n\) thus consists of two steps:

  1. At time \(t_n\), communicate the latest time slice of the GFs \(\mathcal{T}[G_{\vec k}]_n\) and retarded interactions \(\mathcal{T}[W_{\vec k}]_n\) for all momentum points among all MPI ranks.
  2. Evaluate the self-energy diagram \(\mathcal{T}[\Sigma_{\vec k_{\text{rank}}}]_n\) in Eq. \eqref{Eq:Sigmak} for a subset of momentum points \(\vec k_{\text{rank}}\) present on a given rank using the routine cntr::Bubble2.

Step 1 is implemented as

void gather_gk_timestep(int tstp, int Nk_rank, DIST_TIMESTEP &gk_all_timesteps, std::vector<kpoint> &corrK_rank, std::vector<int> &kindex_rank){
  gk_all_timesteps.reset_tstp(tstp);
  for(int k=0; k<Nk_rank; k++){
    gk_all_timesteps.G()[kindex_rank[k]].get_data(corrK_rank[k].G_);
  }
  gk_all_timesteps.mpi_bcast_all();
}

and an analogous routine is used for the bosonic counterpart. We gather the information from all ranks into an object of type cntr::distributed_timestep_array named gk_all_timesteps. We have introduced a map kindex_rank which maps the index of a momentum point within the subset owned by a given rank to the corresponding index in the full BZ, i.e., \(\mathbf{k}_{\text{rank}}\rightarrow \mathbf{k}\). The variable corrK_rank is simply the collection of kpoints stored on a given rank. mpi_bcast_all() is a wrapper around the MPI routine Allgather, adjusted to the type cntr::distributed_timestep_array.

Step 2 is implemented as

void sigma_GW(int tstp, int kk, GREEN &S, DIST_TIMESTEP &gk_all_timesteps, DIST_TIMESTEP &wk_all_timesteps, lattice_1d_1b &lattice, int Ntau, int Norb){
  assert(tstp==gk_all_timesteps.tstp());
  assert(tstp==wk_all_timesteps.tstp());
  GREEN_TSTP stmp(tstp,Ntau,Norb,FERMION);
  S.set_timestep_zero(tstp);
  for(int q=0; q<lattice.nk_; q++){
    double wk = lattice.kweight_[q];
    int kq = lattice.add_kpoints(kk,1,q,-1);
    stmp.clear();
    for(int i1=0; i1<Norb; i1++){
      for(int i2=0; i2<Norb; i2++){
        cntr::Bubble2(tstp,stmp,i1,i2,gk_all_timesteps.G()[kq],gk_all_timesteps.G()[kq],i1,i2,wk_all_timesteps.G()[q],wk_all_timesteps.G()[q],i1,i2);
      }
    }
    S.incr_timestep(tstp,stmp,wk);
  }
}

As each rank holds only a subset of momentum points \(\mathbf{k}_{\text{rank}}\), we only evaluate the self-energy diagrams \(\Sigma_{\mathbf{k}_{\text{rank}}}\) for this subset. All ranks carry information about the latest timestep for all momentum points, and the internal sum over momentum \(\vec q\) in Eq. \eqref{Eq:Sigmak} is performed on each rank.

Generic structure of the example program

As the generic structure is similar to the above examples, we will focus on the peculiarities connected to the usage of MPI. First, we need to initialize the MPI session

MPI::Init(argc,argv);
ntasks=MPI::COMM_WORLD.Get_size();
tid=MPI::COMM_WORLD.Get_rank();
tid_root=0;

and the cntr::distributed_timestep_array for the electronic and bosonic propagators

DIST_TIMESTEP gk_all_timesteps(Nk,Nt,Ntau,Norb,FERMION,true);
DIST_TIMESTEP wk_all_timesteps(Nk,Nt,Ntau,Norb,BOSON,true);

To simplify the mapping between the subset of points on a given rank and the full BZ, we introduce a map kindex_rank which at position \(j=0,\ldots,N_{\text{rank}}-1\) stores the index of the corresponding momentum point in the full BZ.
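For illustration, a simple round-robin distribution of the momentum points over the MPI ranks could be set up as follows (a sketch; the actual distribution used in gw.cpp may differ):

#include <vector>

// Sketch: distribute the Nk momentum points over ntasks MPI ranks in a round-robin fashion.
// kindex_rank[j] stores the index in the full BZ of the j-th momentum point owned by rank tid.
std::vector<int> make_kindex_rank(int Nk, int ntasks, int tid) {
  std::vector<int> kindex_rank;
  for (int k = 0; k < Nk; k++) {
    if (k % ntasks == tid) kindex_rank.push_back(k);
  }
  return kindex_rank;  // its size defines Nk_rank on this rank
}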

The program consists of three main parts, namely solving the Matsubara Dyson equation, bootstrapping (tstp \(\leq\) SolverOrder) and time propagation for tstp \(>\) SolverOrder. The self-consistency iterations include the communication of all fermionic and bosonic propagators between the ranks using the routine gather_gk_timestep (introduced above) and the determination of the local propagators using the routine get_loc

for(int iter=0; iter<=MatsMaxIter; iter++){
  // update propagators via MPI
  diag::gather_gk_timestep(tstp,Nk_rank,gk_all_timesteps,corrK_rank,kindex_rank);
  diag::gather_wk_timestep(tstp,Nk_rank,wk_all_timesteps,corrK_rank,kindex_rank);
  diag::set_density_k(tstp,Norb,Nk,gk_all_timesteps,lattice,density_k,kindex_rank,rho_loc);
  diag::get_loc(tstp,Ntau,Norb,Nk,lattice,Gloc,gk_all_timesteps);
  diag::get_loc(tstp,Ntau,Norb,Nk,lattice,Wloc,wk_all_timesteps);
}

As the momentum-dependent single-particle density matrix \(\rho(\vec k)\) is known on each MPI rank for the whole BZ, the evaluation of the HF contribution is done as in the Hubbard chain example. The self-energies \(\Sigma_{\vec k_{\text{rank}}}\) for the momentum points \(k_{\text{rank}}=0,\ldots,N_{\text{rank}}-1\) on each rank are obtained by the routine sigma_GW (introduced above).

// update mean field and self-energy
for(int k=0; k<Nk_rank; k++){
  diag::sigma_Hartree(tstp,Norb,corrK_rank[k].SHartree_,lattice,density_k,vertex);
  diag::sigma_Fock(tstp,Norb,kindex_rank[k],corrK_rank[k].SFock_,lattice,density_k,vertex);
  diag::sigma_GW(tstp,kindex_rank[k],corrK_rank[k].Sigma_,gk_all_timesteps,wk_all_timesteps,lattice,Ntau,Norb);
}

Similarly, the solution of the Dyson equation for the fermionic (bosonic) propagators at each momentum point is obtained by step_dyson_with_error (step_W_with_error), a wrapper around the corresponding Dyson solver which returns an error measure given by the difference between the propagators of the previous and the current iteration. The momentum-dependent error for the fermionic propagators is accumulated in err_ele, and at the end we use MPI_Allreduce to sum the errors over all ranks.

// solve Dyson equation
double err_ele=0.0, err_bos=0.0;
for(int k=0; k<Nk_rank; k++){
  err_ele += corrK_rank[k].step_dyson_with_error(tstp,iter,SolverOrder,lattice);
  diag::get_Polarization_Bubble(tstp,Norb,Ntau,kindex_rank[k],corrK_rank[k].P_,gk_all_timesteps,lattice);
  err_bos += corrK_rank[k].step_W_with_error(tstp,iter,tstp,SolverOrder,lattice);
}
MPI::COMM_WORLD.Allreduce(MPI::IN_PLACE,&err_ele,1,MPI::DOUBLE_PRECISION,MPI::SUM);
MPI::COMM_WORLD.Allreduce(MPI::IN_PLACE,&err_bos,1,MPI::DOUBLE_PRECISION,MPI::SUM);

The bootstrapping and the real-time propagation have a structure similar to that of the Matsubara solver. The main difference lies in the predictor-corrector scheme, as explained in Hubbard chain. At the beginning of each time step, we extrapolate the momentum-dependent GF \(G_{\vec k}\) and the retarded interaction \(W_{\vec k}\), which serves as a predictor

// Predictor: extrapolation
diag::extrapolate_timestep_G(tstp-1,Nk_rank,SolverOrder,Nt,corrK_rank);
diag::extrapolate_timestep_W(tstp-1,Nk_rank,SolverOrder,Nt,corrK_rank);

Then we perform several iterations at the given time step until convergence, which act as a corrector.
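Schematically, the corrector at a given time step tstp combines the routines introduced above (a simplified sketch; the mean-field updates and the convergence check are abbreviated):

// Sketch of the corrector loop at time step tstp, assembled from the routines shown above
// (CorrectorSteps denotes the number of corrector iterations; convergence handling abbreviated).
for (int iter = 0; iter < CorrectorSteps; iter++) {
  // communicate the latest timestep of all propagators among the ranks
  diag::gather_gk_timestep(tstp, Nk_rank, gk_all_timesteps, corrK_rank, kindex_rank);
  diag::gather_wk_timestep(tstp, Nk_rank, wk_all_timesteps, corrK_rank, kindex_rank);
  // update the GW self-energy for the momentum points owned by this rank
  // (the Hartree and Fock terms are updated here as well)
  for (int k = 0; k < Nk_rank; k++) {
    diag::sigma_GW(tstp, kindex_rank[k], corrK_rank[k].Sigma_, gk_all_timesteps,
                   wk_all_timesteps, lattice, Ntau, Norb);
  }
  // solve the momentum-diagonal Dyson equations and accumulate the convergence error
  double err_ele = 0.0, err_bos = 0.0;
  for (int k = 0; k < Nk_rank; k++) {
    err_ele += corrK_rank[k].step_dyson_with_error(tstp, iter, SolverOrder, lattice);
    diag::get_Polarization_Bubble(tstp, Norb, Ntau, kindex_rank[k], corrK_rank[k].P_,
                                  gk_all_timesteps, lattice);
    err_bos += corrK_rank[k].step_W_with_error(tstp, iter, tstp, SolverOrder, lattice);
  }
  MPI::COMM_WORLD.Allreduce(MPI::IN_PLACE, &err_ele, 1, MPI::DOUBLE_PRECISION, MPI::SUM);
  MPI::COMM_WORLD.Allreduce(MPI::IN_PLACE, &err_bos, 1, MPI::DOUBLE_PRECISION, MPI::SUM);
  // (one would typically exit the loop once err_ele and err_bos drop below a tolerance)
}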

After the NEGFs are obtained, we evaluate the kinetic energy \(E_{\text{kin}}(t)=\frac{1}{2 N_k}\sum_{\vec k}\text{Tr}[ \rho_{\vec k}(t) \epsilon_{\vec k}(t)]\) using the routine diag::KineticEnergy. The interaction energy is obtained from the Galitskii-Migdal formula

\begin{equation} E_{\text{int}}(t)=\frac{1}{2 N_k} \sum_{\vec k} \left ( \mathrm{Tr}\left[\rho_{\vec k}(t) h^{\mathrm{MF}}_{\vec k}(t)\right] + \mathrm{Im}\mathrm{Tr} \left[\Sigma_{\vec k}\ast G_{\vec k}\right]^<(t,t) \right), \end{equation}

using the routine diag::CorrelationEnergy. Both operations include an MPI reduction, since the momentum sum runs over the whole BZ.
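To illustrate how such a momentum sum is distributed, the kinetic energy can be accumulated over the momentum points owned by each rank and then reduced over MPI (a sketch with illustrative names; diag::KineticEnergy performs these steps internally):

// Sketch: E_kin(t) = 1/(2 N_k) sum_k Tr[rho_k(t) eps_k(t)], accumulated over the momentum
// points owned by this rank and then summed over all ranks. rho_k and eps_k are assumed
// to be available as cdmatrix (Eigen) objects; all names are illustrative.
double ekin = 0.0;
for (int k = 0; k < Nk_rank; k++) {
  cdmatrix rho_k(Norb, Norb), eps_k(Norb, Norb);
  // ... fill rho_k and eps_k for momentum point kindex_rank[k] at time step tstp ...
  ekin += (rho_k * eps_k).trace().real() / (2.0 * Nk);
}
MPI::COMM_WORLD.Allreduce(MPI::IN_PLACE, &ekin, 1, MPI::DOUBLE_PRECISION, MPI::SUM);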

Running the example program

There is one program for the \(GW\) calculation, called gw.x. The driver script demo_gw.py located in the utils/ directory provides a simple interface to this program. As in the previous cases, the script creates an input file and launches the program. The user can specify the shape of the electric field pulse; by default, we use a single-frequency pulse with a Gaussian envelope

\begin{equation} \label{Eq:pulse} E(t)=E_0 \exp(-4.6(t-t_0)^2/t_0^2) \sin(\omega (t-t_0)), \end{equation}

where \(t_0=2\pi N_{p}/\omega\) is determined by the number of cycles \(N_p\). After the simulation, the time evolution of the kinetic energy and the potential energy is plotted.
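For reference, the default pulse of Eq. \eqref{Eq:pulse} can be generated with a few lines (a sketch; in practice the field is constructed by the driver script, and the reading \(t_0 = 2\pi N_p/\omega\) is assumed):

#include <cmath>

// Sketch: single-frequency pulse with Gaussian envelope,
// E(t) = E0 exp(-4.6 (t-t0)^2/t0^2) sin(omega (t-t0)), assuming t0 = 2*pi*Np/omega.
double efield_pulse(double t, double E0, double omega, int Np) {
  const double pi = 3.14159265358979323846;
  const double t0 = 2.0 * pi * Np / omega;
  return E0 * std::exp(-4.6 * (t - t0) * (t - t0) / (t0 * t0)) * std::sin(omega * (t - t0));
}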

The relevant programs for the lattice \(GW\) example are

programs/gw.cpp Main program
utils/demo_gw.py Running script
programs/gw_latt_impl.cpp Implementation of lattice and single-particle properties
programs/gw_selfen_impl.cpp Evaluation of self-energy contributions including MPI communication
programs/gw_kpoints_impl.cpp Solution of the Dyson equation for each momentum point

Corresponding declarations of the routines are in the .hpp files with the same name as the .cpp files.

Discussion

Here, we will briefly illustrate the dynamics after a photo-excitation. A more detailed analysis can be found in the accompanying paper [6]. We excite the system with a short oscillating electric field pulse, see Eq. \eqref{Eq:pulse}. The amplitude \(E_0\) of the excitation determines the absorbed energy. In Fig. FigGW_ekin, we present the time evolution of the kinetic energy for the two excitation strengths \(E_0=3\) and \(E_0=5\). As energy is injected during the pulse and the system heats up, the kinetic energy increases. The observed behavior is consistent with thermalization at a higher temperature, but the transient evolution is complicated by the energy exchange between the electronic and bosonic subsystems (plasmon emission). For the stronger excitation, there is a clear relaxation dynamics towards the final state, accompanied by strongly damped oscillations, see the inset of Fig. FigGW_ekin.

Time evolution of the kinetic energy
ekinPulse.svg
Time evolution of the kinetic energy for the two excitation strengths \(E_0=3.0\), \(5.0\), respectively. The dashed lines show the shape of the electric field pulse scaled down by 100 to fit on the scale. The inset presents a zoom into the relaxation dynamics by subtracting the long-time limit \(E_\text{kin}(t)-E_\text{kin}(t_\text{fin})\). Both simulations have been performed with \(N_k=256\), time step \(h=0.01\) and for inverse temperature \(\beta =20\).

As discussed in the introduction, the MPI parallelization over momentum points alleviates the memory restrictions and also speeds up the execution of the program. To demonstrate this, we have performed a scaling analysis for a system with a fixed number of momentum points, \(N_k=128\), parallelized over up to 128 processors, see Fig. FigGW_scaling. For all tests we have fixed the number of tasks per node to one, since in a real-world scenario we want to distribute the memory as much as possible. The scaling is almost perfect up to 128 processors, with only a slight deviation from the optimal behavior. The main reason for this deviation is the communication overhead, since a substantial amount of data, namely the timesteps of the propagators for all momentum points, has to be communicated among all nodes.

MPI scaling analysis
scaling.svg
Speed-up of the total calculation time as a function of the MPI processes for systems with \(N_k=128\) momentum points, where we fixed one task per node. The maximum number of time steps used is \(N_t=2500\). These calculations have been performed on the Popeye cluster at the Flatiron Institute.

