Baby steps

This guide gives a step-by-step introduction to get you started with the project in FYS4411.

There are a few components that need to be implemented:

  • Wave function
  • Hamiltonian
  • Local energy
  • Random number generation
  • Metropolis algorithm

Implementing these in the order above is likely the best way to get started. It is also very useful to write a few tests to verify your implementation and to make sure future changes don't break working code. We will take you through these components and some tests for them now.

Wave function

The wave function can in principle be any function, but quantum mechanics imposes some restrictions. The trial wave function given in the project text already fulfils these restrictions, so you don't have to worry about them. Our wave function is

$$\psi_{T}( \vec r_1, \vec r_2) =   \exp{\left(-\alpha(r_1+r_2)\right)}   \exp{\left(\frac{r_{12}}{2(1+\beta r_{12})}\right)}$$

where

$$\exp{\left(-\alpha(r_1+r_2)\right)}$$

is a simple exponential part that favours positions close to the nucleus. Later, this part will be constructed from a product of Slater determinants, so we will call it the "Slater part" for now.

The second factor

$$\exp{\left(\frac{r_{12}}{2(1+\beta r_{12})}\right)}$$

is the Jastrow factor, which is used to handle the correlation (or interaction) between the electrons.

The first step in your code could be to implement a class named WaveFunction with a simple function that returns the expression above. You may even split out a separate Jastrow function right away:

double WaveFunction::waveFunction(const mat &r)
{
    double argument = 0;
    for(int i = 0; i < nParticles; i++) {
        double rSingleParticle = 0;
        for(int j = 0; j < nDimensions; j++) {
            rSingleParticle += r(i,j) * r(i,j);
        }
        argument += sqrt(rSingleParticle);
    }
    return exp(-argument * alpha) * jastrowFactor(r);
}

double WaveFunction::jastrowFactor(const mat &r)
{
    // Distance between the two particles
    rowvec r12 = r.row(1) - r.row(0);
    double r12norm = norm(r12, 2);
    return exp(r12norm / (2 * (1 + beta * r12norm)));
}

The Jastrow factor will of course also have to be rewritten for a general number of particles, but we'll just stick with the two-particle case for now.

After implementing this function, you should test that what it returns is correct. The best way to do this is to take a simple analytical example, calculate the value by hand and check the numerical implementation against it. We have of course already done this, so you may simply test that the function returns the value

0.251182

when the positions are

$$\vec r_1 = [0.4,1.2,-0.3], ~ \vec r_2 = [-1.5, 0.3, -0.5]$$

and the parameters are

$$\alpha = 0.6, ~ \beta = 0.9$$
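
A minimal test could look like the sketch below. The header name and the constructor taking alpha and beta are just assumptions made for the example; adapt them to however you have organised your class.

#include <iostream>
#include <cmath>
#include <armadillo>
#include "wavefunction.h"   // hypothetical header for your WaveFunction class

using namespace arma;
using namespace std;

int main()
{
    // Positions from the example above, one row per particle
    mat r(2, 3);
    r(0,0) =  0.4; r(0,1) = 1.2; r(0,2) = -0.3;
    r(1,0) = -1.5; r(1,1) = 0.3; r(1,2) = -0.5;

    // Assumed constructor taking alpha and beta
    WaveFunction waveFunction(0.6, 0.9);
    double value = waveFunction.waveFunction(r);

    cout << "psi_T = " << value << " (expected 0.251182)" << endl;

    // The reference value is rounded to six decimals, hence the loose tolerance
    if(fabs(value - 0.251182) > 1e-5) {
        cout << "Test FAILED" << endl;
        return 1;
    }
    cout << "Test passed" << endl;
    return 0;
}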

Hamiltonian

The Hamiltonian is a description of our system in terms of energy. This will be used to find the energy expectation value of a given state by performing the integral over all of space (or at least, a local but hopefully important part of space). The Hamiltonian of our system is

$$   \hat{H}(\vec r_1, \vec r_2)   =   - \frac{\nabla_1^2}{2}-\frac{\nabla_2^2}{2}   - \frac{Z}{r_1} - \frac{Z}{r_2}   + \frac{1}{r_{12}}$$

and may be further divided into a kinetic part

$$- \frac{\nabla_1^2}{2}-\frac{\nabla_2^2}{2}$$

a potential part (from the nucleus),

$$- \frac{Z}{r_1} - \frac{Z}{r_2}$$

and the interaction part (between the electrons),

$$\frac{1}{r_{12}}$$

You should implement these calculations in a separate class, for instance named "Hamiltonian". The kinetic part requires a calculation of the Laplacian, $\nabla^2$, of the wave function. To implement this you may either compute the second derivative numerically or perform an analytical calculation of $\nabla^2$. In this project you should do both. The numerical implementation is slow, but a useful check because of its generality. The analytical implementation is, on the other hand, much faster, but of course requires you to do the maths correctly.
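
The potential and interaction parts depend only on the particle positions and are straightforward to implement. A rough sketch, assuming a Hamiltonian class with member variables nParticles, nDimensions and charge (the nuclear charge $Z$), could look like this:

double Hamiltonian::potentialEnergy(const mat &r)
{
    double potentialEnergy = 0;

    // Attraction from the nucleus: -Z/r_i for each electron
    for(int i = 0; i < nParticles; i++) {
        double rSingleParticle = 0;
        for(int j = 0; j < nDimensions; j++) {
            rSingleParticle += r(i,j) * r(i,j);
        }
        potentialEnergy -= charge / sqrt(rSingleParticle);
    }

    // Repulsion between each pair of electrons: +1/r_ij
    for(int i = 0; i < nParticles; i++) {
        for(int k = i + 1; k < nParticles; k++) {
            double r12 = 0;
            for(int j = 0; j < nDimensions; j++) {
                r12 += (r(i,j) - r(k,j)) * (r(i,j) - r(k,j));
            }
            potentialEnergy += 1.0 / sqrt(r12);
        }
    }
    return potentialEnergy;
}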

Calculating the second derivative numerically

The second derivative of any function may be computed numerically as

$$ f''(x) \approx \frac{\delta_h^2[f](x)}{h^2} = \frac{f(x+h) - 2 f(x) + f(x-h)}{h^{2}}$$

where $f$ in our case is the wave function, $f = \Psi$. For the Laplacian you will have to do this in all three dimensions for each particle. For an example, see the local energy calculation in the sample code in our Git repository.
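
As an illustration, a purely numerical kinetic energy term, $-\frac{1}{2}\sum_i \nabla_i^2 \psi / \psi$, could be sketched as follows. Here we assume the Hamiltonian holds a pointer to your WaveFunction object, as well as member variables h (the finite-difference step) and h2 = 1/h^2; again, the names are only suggestions.

double Hamiltonian::kineticEnergy(const mat &r)
{
    mat rPlus = r;
    mat rMinus = r;

    double waveFunctionCurrent = waveFunction->waveFunction(r);
    double kineticEnergy = 0;

    // Second derivative in every dimension for every particle
    for(int i = 0; i < nParticles; i++) {
        for(int j = 0; j < nDimensions; j++) {
            rPlus(i,j)  = r(i,j) + h;
            rMinus(i,j) = r(i,j) - h;
            double waveFunctionPlus  = waveFunction->waveFunction(rPlus);
            double waveFunctionMinus = waveFunction->waveFunction(rMinus);
            kineticEnergy -= (waveFunctionPlus - 2 * waveFunctionCurrent + waveFunctionMinus);
            rPlus(i,j)  = r(i,j);
            rMinus(i,j) = r(i,j);
        }
    }

    // -1/2 * (laplacian psi) / psi, with the 1/h^2 factor from the finite difference
    kineticEnergy = 0.5 * h2 * kineticEnergy / waveFunctionCurrent;
    return kineticEnergy;
}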

Local energy

Eventually we want to find the energy expectation value of our wave function,

$$\langle E \rangle = \frac{\int \psi^{\ast}_T({\bf r_1},{\bf r_2})\hat{H}({\bf r_1},{\bf r_2})\psi_T({\bf r_1},{\bf r_2}) d{\bf r_1}d{\bf r_2}} {\int\psi^{\ast}_T({\bf r_1},{\bf r_2})\psi_T({\bf r_1},{\bf r_2}) d{\bf r_1}d{\bf r_2}} $$

which may be sampled using the Metropolis algorithm. However, to do this, we should rewrite the integral into a form that suits the Metropolis algorithm better. This is the topic of chapter 14.4 in the lecture notes, which we urge you to read before continuing. Here we will simply state that we want to compute the expectation value of the local energy,

$$\langle \hat{E}_L(\alpha) \rangle = \int P({\bf R}, \alpha) \hat{E}_L({\bf R}, \alpha) \, d{\bf R}$$

Here $P({\bf R}, \alpha)$ is the probability distribution (again, see chapter 14.4) that the Metropolis algorithm will sample from. Thankfully, it never has to be calculated explicitly in our code, because the Metropolis algorithm only uses ratios of probabilities rather than absolute values.
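
In our case $P({\bf R}, \alpha)$ is simply the normalised square of the trial wave function, so the ratio needed by the Metropolis algorithm reduces to

$$w = \frac{P({\bf R}_p, \alpha)}{P({\bf R}, \alpha)} = \frac{|\psi_T({\bf R}_p)|^2}{|\psi_T({\bf R})|^2}$$

where the normalisation integral cancels.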

The local energy $\hat{E}_L({\bf R}, \alpha)$ must, on the other hand, be calculated at every step. This will in turn use the kinetic and potential energies that you should now have implemented in your Hamiltonian class.

For a full implementation where the local energy is computed in one function together with the potential and kinetic energy, see the example code we have put up in our Git repository. In that example we do not use a separate Hamiltonian class, but we strongly recommend that you implement one in your own code.
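
If you do split things into a Hamiltonian class as recommended, the local energy function itself becomes little more than the sum of the two parts sketched above:

double Hamiltonian::localEnergy(const mat &r)
{
    // Local kinetic energy plus potential (nucleus + electron-electron) energy
    return kineticEnergy(r) + potentialEnergy(r);
}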

Random numbers

To generate random numbers to perform steps in the Metropolis algorithm, you may use Armadillo's random vectors:

rowvec myRandomVector = randu<rowvec>(3);

This will create a row vector with uniformly distributed components between 0 and 1. You will need to scale and shift these so that the numbers lie between, for instance, -1 and 1.
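
For instance, along these lines (the variable names are just suggestions):

double stepLength = 1.0;                             // user-chosen, to be tuned
rowvec uniformNumbers = randu<rowvec>(3);            // components in [0, 1)
rowvec shiftedNumbers = 2.0 * uniformNumbers - 1.0;  // components in [-1, 1)
rowvec trialMove = stepLength * shiftedNumbers;      // proposed displacement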

Metropolis algorithm

The Metropolis algorithm is not easily described in a few lines. For more information about how it works we refer to the lecture notes.

In a general outline, what we need to do is the following (from the lecture notes):

  •  Initialisation: Fix the number of Monte Carlo steps and thermalisation steps. Choose an initial $\bf R$ and variational parameters $\alpha$ and calculate $|\Psi_T({\bf R}, \alpha)|^2$. Define also the value of the step size to be used when moving from one value of $\bf R$ to a new one.
  • Initialise the energy and the variance.
  • Start the Monte Carlo calculation with a loop over a given number of Monte Carlo cycles.
  1. Calculate a trial position ${\bf R}_p = {\bf R} + r \cdot \Delta {\bf R}$ where $r$ is a random variable $r \in [0, 1]$ and $\Delta \bf R$ a user-chosen step length.
  2. Use then the Metropolis algorithm to accept or reject this move by calculating the ratio $w = P({\bf R}_p)/P({\bf R})$. If $w \geq s$, where $s$ is a random number $s \in [0, 1]$, the new position is accepted, else we stay at the same place.
  3.  If the step is accepted, then we set ${\bf R} = {\bf R}_p$.
  4. Update the local energy and the variance.
  • When the Monte Carlo sampling is finished, we calculate the mean energy and the
    standard deviation. Finally, we may print our results to a specified file.

For a sample implementation of the Metropolis algorithm, see the example code we have put up in our Git repository.
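
For orientation, a rough sketch of a single Monte Carlo cycle following the outline above is shown here. All names (rOld, rNew, waveFunctionOld, stepLength, energySum, and the waveFunction and hamiltonian objects) are placeholders for your own classes and variables, and the bookkeeping for the variance is left out.

// One Monte Carlo cycle: try to move each particle in turn
for(int i = 0; i < nParticles; i++) {
    // Suggest a trial position for particle i using random numbers in [-1, 1)
    rowvec displacement = stepLength * (2.0 * randu<rowvec>(nDimensions) - 1.0);
    rNew.row(i) = rOld.row(i) + displacement;

    double waveFunctionNew = waveFunction->waveFunction(rNew);

    // Metropolis test: accept with probability min(1, |psi_new|^2 / |psi_old|^2)
    double w = (waveFunctionNew * waveFunctionNew)
             / (waveFunctionOld * waveFunctionOld);
    double s = as_scalar(randu<vec>(1));

    if(w >= s) {
        // Accept: the trial position becomes the current position
        rOld.row(i) = rNew.row(i);
        waveFunctionOld = waveFunctionNew;
    } else {
        // Reject: reset the trial position to the current one
        rNew.row(i) = rOld.row(i);
    }

    // Sample the local energy at the (possibly updated) position
    energySum += hamiltonian->localEnergy(rOld);
}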
 
